From: Howard Brazee on
On Sat, 15 Sep 2007 01:50:53 -0500, Robert <no(a)e.mail> wrote:

>Most people who use indexes believe it. Why else would they use indexes?

Habit.
From: Howard Brazee on
On Sat, 15 Sep 2007 21:19:28 -0700, Richard <riplin(a)Azonic.co.nz>
wrote:

>Do you really think that generalizing from a _single_ case has any
>value at all. Oh, wait, it supported your prejudice so it must be
>accepted as general.

I have the bumper sticker:
"Everybody generalizes from one example. At least, I do." - Steven
Brust
From: Howard Brazee on
On Mon, 17 Sep 2007 02:14:38 GMT, "William M. Klein"
<wmklein(a)nospam.netcom.com> wrote:

>Again, your experience is not the same as mine. In most (certainly NOT all)
>cases, SEARCH ALL is done on tables of things like "tax codes", "state
>abbreviations", etc. Although it would certainly be "nice" if such code was
>dynamically read in, most that I have seen are "hard-coded", and the length of
>the table is changed when new entries are added (or entries removed).
>
>Again, commonly (not always) when something needs to be "searched" in a file, a
>keyed file (VSAM on IBM mainframes) is used and access is "direct" via the
>"searched upon" information.

With really small sorted tables, I don't care whether a SEARCH ALL would
be optimized into a linear search - I avoid the statement altogether.
From: Charles Hottel on

"Howard Brazee" <howard(a)brazee.net> wrote in message
news:pabte318v4n344saoq74ifeh7u3t1f8v9n(a)4ax.com...
> On Sat, 15 Sep 2007 01:50:53 -0500, Robert <no(a)e.mail> wrote:
>
>>Most people who use indexes believe it. Why else would they use indexes?
>
> Habit.

When was the last time anyone posting here had a performance problem
whose solution was related to subscripting or indexing (excluding changing
to indexes in order to do a binary search)? The few times that I have had
to optimize generally involved changing to a better algorithm. One program
on a 360/30 took over two hours. I changed it to use two 8000-byte
buffers (the largest I could make them due to storage constraints)
and the time dropped to 10 to 15 minutes. I have changed sequential
searches to binary searches, both hand coded using subscripts and using
SEARCH ALL. The hand-coded subscript approach avoided searching any table
entries that had not yet been loaded with data, simply by setting the HI
subscript properly. Although, as Pete and others here have posted, I don't
recall ever having had that problem. Much later on I had to optimize a
couple of programs that were slow due to high-volume random access of VSAM
KSDS files. The solution was to sort the input transactions, which took
away a lot of the randomness and effectively resulted in caching:
subsequent transactions could take advantage of data already read by the
previous transaction. I have not had a performance problem of any type in
the last 20 years.
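The transaction-sorting optimization described above can be sketched in a few
lines (Python rather than COBOL, purely as an illustration; the one-record
cache and the integer keys are hypothetical stand-ins for a KSDS record and
its key):

```python
# Sketch: sorting transactions by key turns random keyed-file access
# into clustered access, so even a one-record cache absorbs repeat reads.
def reads_needed(transactions):
    reads = 0
    cached_key = None          # key of the most recently read record
    for key in transactions:
        if key != cached_key:  # cache miss: go back to the file
            reads += 1
            cached_key = key
    return reads

txns = [3, 1, 3, 2, 1, 3, 2, 1]   # random arrival order
print(reads_needed(txns))          # every access misses: 8 reads
print(reads_needed(sorted(txns)))  # one read per distinct key: 3 reads
```

The same effect holds with a larger buffer pool: sorted input keeps each key's
transactions adjacent, so later ones hit data the earlier ones already read.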

I suggest we move on to another more interesting thread, perhaps one where
two fleas argue over which one owns the dog that they live on.


From: Robert on
On Mon, 17 Sep 2007 11:01:38 -0600, Howard Brazee <howard(a)brazee.net> wrote:

>On Mon, 17 Sep 2007 02:14:38 GMT, "William M. Klein"
><wmklein(a)nospam.netcom.com> wrote:
>
>>Again, your experience is not the same as mine. In most (certainly NOT all)
>>cases, SEARCH ALL is done on tables of things like "tax codes", "state
>>abbreviations", etc. Although it would certainly be "nice" if such code was
>>dynamically read in, most that I have seen are "hard-coded", and the length of
>>the table is changed when new entries are added (or entries removed).
>>
>>Again, commonly (not always) when something needs to be "searched" in a file, a
>>keyed file (VSAM on IBM mainframes) is used and access is "direct" via the
>>"searched upon" information.
>
>With really small sorted tables, I don't care whether a SEARCH ALL would
>be optimized into a linear search - I avoid the statement altogether.

Serial search is faster when the number of entries is fewer than 100, or maybe
50. One would hope the compiler or library would switch automatically, but
they don't.
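The switch one would hope the compiler made can be hand-rolled easily enough.
A minimal sketch (Python rather than COBOL; the cutoff of 50 is the figure
claimed above, not a measured constant - benchmark it on your own hardware):

```python
from bisect import bisect_left

CUTOFF = 50  # hypothetical crossover point; measure before trusting it

def lookup(table, key):
    """Serial search for small sorted tables, binary search otherwise.

    Returns the index of key in table, or -1 if absent.
    """
    if len(table) < CUTOFF:
        for i, entry in enumerate(table):   # serial (SEARCH-style) scan
            if entry == key:
                return i
        return -1
    i = bisect_left(table, key)             # binary (SEARCH ALL-style)
    return i if i < len(table) and table[i] == key else -1

codes = sorted(["CA", "CO", "NY", "TX"])
print(lookup(codes, "NY"))   # 2
print(lookup(codes, "ZZ"))   # -1
```

Below the cutoff the serial scan wins because it has no comparison overhead
and predictable, cache-friendly access; above it the O(log n) probe count of
the binary search dominates.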