From: Robert on
On Mon, 17 Sep 2007 02:14:38 GMT, "William M. Klein" <wmklein(a)nospam.netcom.com> wrote:

>
>"Robert" <no(a)e.mail> wrote in message
>news:j6ore39s4saaeccj4mkfsghkb0s0blk19j(a)4ax.com...
>> On Mon, 17 Sep 2007 01:43:39 GMT, "William M. Klein"
>> <wmklein(a)nospam.netcom.com> wrote:
>>
>>>An ODO is ONLY faster if the number of "filled in" table entries varies.
>>
>> It usually does. It's usually loaded from a database table or file.
>>
>
>Again, your experience is not the same as mine. In most (certainly NOT all)
>cases, SEARCH ALL is done on tables of things like "tax codes", "state
>abbreviations", etc. Although it would certainly be "nice" if such code were
>dynamically read in, most that I have seen are "hard-coded", and the length of
>the table is changed when new entries are added (or entries removed).

That's how we did it in the Old Days. Today, a program change, no matter how trivial, takes
six months of approvals and testing. Hard-coded tables, which were once cheap, have become
very expensive. It's easier to change a Reference Table in the database.

>Again, commonly (not always) when something needs to be "searched" in a file, a
>keyed file (VSAM on IBM mainframes) is used and access is "direct" via the "searched
>upon" information.

Today, that's an index on a database table.

A smart program might cache the results of the last hundred lookups in a Cobol table. It
would look there first. The table should be described with ODO so the search doesn't waste
time looking at filler entries.
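
A minimal sketch of such a cache (the names WS-KEY, WS-VALUE and the 100-entry
limit are made up, the database read is left as a stub, and OCCURS 0 TO n assumes
a compiler that allows a zero minimum):

       WORKING-STORAGE SECTION.
       01  WS-KEY                 PIC X(10).
       01  WS-VALUE               PIC X(30).
       01  CACHE-COUNT            PIC 9(4) COMP VALUE 0.
      * The ODO means a serial SEARCH looks only at the first
      * CACHE-COUNT entries, never at the unused filler slots.
       01  LOOKUP-CACHE.
           05  CACHE-ENTRY        OCCURS 0 TO 100 TIMES
                                  DEPENDING ON CACHE-COUNT
                                  INDEXED BY CACHE-IDX.
               10  CACHE-KEY      PIC X(10).
               10  CACHE-VALUE    PIC X(30).

       PROCEDURE DIVISION.
       FIND-CODE.
           SET CACHE-IDX TO 1
           SEARCH CACHE-ENTRY
               AT END
      *            Not cached: go to the Reference Table in the
      *            database, then remember the answer if room remains.
                   PERFORM READ-REFERENCE-TABLE
                   IF CACHE-COUNT < 100
                       ADD 1 TO CACHE-COUNT
                       MOVE WS-KEY   TO CACHE-KEY (CACHE-COUNT)
                       MOVE WS-VALUE TO CACHE-VALUE (CACHE-COUNT)
                   END-IF
               WHEN CACHE-KEY (CACHE-IDX) = WS-KEY
                   MOVE CACHE-VALUE (CACHE-IDX) TO WS-VALUE
           END-SEARCH.

       READ-REFERENCE-TABLE.
      *    Stub standing in for the real database lookup.
           MOVE SPACES TO WS-VALUE.
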
From: Pete Dashwood on


"Robert" <no(a)e.mail> wrote in message
news:snore3pl4g4jglkiu4hmqn48967jiberjk(a)4ax.com...
> On Mon, 17 Sep 2007 13:52:58 +1200, "Pete Dashwood"
> <dashwood(a)removethis.enternet.co.nz>
> wrote:
>
>>
>>
>>"Robert" <no(a)e.mail> wrote in message
>>news:hvrqe394mgcejqa6bgfvu4ge141tmqtima(a)4ax.com...
>>> On Mon, 17 Sep 2007 01:17:37 +1200, "Pete Dashwood"
>>> <dashwood(a)removethis.enternet.co.nz>
>>> wrote:
>>>
>>>
>>>>(BTW, the reason I use indexes is not for any of the reasons you described;
>>>>I simply like INDEXED BY and SEARCH. Having gone to the trouble of defining
>>>>an index for a table it seems impolite to then use a subscript... :-))
>>>>Couldn't care less whether they're faster or slower; on modern hardware it
>>>>makes very little difference, and even if it did, I'd still do it. Because
>>>>I can. :-))
>>>
>>> I hope you use ODO on 'high speed' SEARCHes. If not, they're taking twice
>>> as long as they should for serial, 10% longer for binary.
>>>
>>I NEVER use ODO for ANYTHING.
>>
>>The only time I would code ODO is when I am accessing something that
>>requires it.
>
> The only good ODO is a dead ODO.
>
>>I've seen the arguments for ODO on searches.
>>
>>As the tables I search are rarely more than 1K in size (I think the largest
>>I can remember doing in COBOL recently was 8K), and I do use SEARCH ALL on
>>non-volatile data, I really don't care if it takes a few microseconds (or
>>even milliseconds) longer. The table is in memory.
>
> If it's in MEMORY, speed doesn't matter. I thought you were SEARCHing
> tables on disk.
>
> (How do you do that in Cobol?)
>
>>To me ODO is just ugly, pointless code that buys you nothing in terms of
>>saved space, so it doesn't deliver what it promises.
>>
>>The last time I used COBOL for an application that was time-critical, where a
>>few milliseconds MIGHT matter (it was a process control app), was nearly 30
>>years ago, and GO TO was employed to get out of routines fast. I wouldn't
>>code normal COBOL apps like that, and I don't use ODO in them either.
>
> It is little known that ODO was invented in Russia. It's a Communist
> conspiracy to screw up Western software.

Yeah, them Commies is sly....

Just as well I never fell for it.

Pete.
--
"I used to write COBOL...now I can do anything."


From: Richard on
On Sep 17, 1:39 pm, Robert <n...(a)e.mail> wrote:
> On Mon, 17 Sep 2007 01:15:35 GMT, "William M. Klein" <wmkl...(a)nospam.netcom.com> wrote:
>
> >Robert,
> > For what compiler? What operating systems? And your evidence is ...?
>
> It's true for every compiler and operating system. Telling Cobol to search 1,000 rows when
> half of them are filled with high values will take log2 1,000 / log2 500 times as long.
> Roughly 10% longer.
>
> No timing test is necessary, although I'm tempted to write one just for fun. Deductive
> logic OR common sense should tell you that.
>
> >Of course, we all KNOW that there is no such thing as a "guaranteed" BINARY
> >search. (SEARCH ALL is not guaranteed to be "binary").
>
> That's true in Standard-land. In Reality-land, every compiler does a binary search.

> I spent years trying to prove that 2 is not the optimal division factor. Based on
> calculus, I really believed it was e - 1, which is approximately 1.7. I almost 'proved'
> it with tests. Years later I saw that 2 really IS the optimal division factor. Oh well, I
> tried.
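
For reference, the quoted "roughly 10%" figure comes from the fact that a binary
search makes about log2 N probes, so

    \frac{\log_2 1000}{\log_2 500} \approx \frac{9.97}{8.97} \approx 1.11

i.e. about 10-11% more probes when the table is declared at 1,000 entries but
only 500 of them hold real data.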

And you spent countless messages trying to 'prove' that Micro Focus
advice is "bad".

At least you admit to that credibility gap. How many people did you
insult in those years because they didn't agree with you? How many
simply stopped replying, so that you thought you had 'won'?


From: Richard on
On Sep 17, 2:11 pm, Robert <n...(a)e.mail> wrote:
> On Mon, 17 Sep 2007 01:43:39 GMT, "William M. Klein" <wmkl...(a)nospam.netcom.com> wrote:
>
> >An ODO is ONLY faster if the number of "filled in" table entries varies.
>
> It usually does. It's usually loaded from a database table or file.
>
> > If the
> >number of entries is stable AND all entries are filled in (which is the most
> >common situation in the SEARCH ALL programs that I have seen), it is not. For serial
> >SEARCHes where the "empty" entries are at the end, I can't see how or why an ODO
> >would ever be faster.
>
> You seem to contradict yourself. When "all entries are filled in", there are no empty
> entries at the end. The table is dimensioned with OCCURS to exactly the right size.

Not to those with a sufficiently high reading comprehension level.
William is talking about two distinct situations: a SEARCH ALL where
the table is filled, and, in a different program, a serial SEARCH with
blank entries at the end.
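
A minimal sketch of those two cases (the data names and sizes are made up):

      * Case 1: stable, fully filled table -- dimensioned with OCCURS
      * to exactly the right size and searched with SEARCH ALL (binary).
       01  STATE-TABLE.
           05  STATE-ENTRY        OCCURS 50 TIMES
                                  ASCENDING KEY IS STATE-CODE
                                  INDEXED BY ST-IDX.
               10  STATE-CODE     PIC XX.
               10  STATE-NAME     PIC X(20).

      * Case 2: fixed-size table with only TAX-COUNT real entries and
      * blanks at the end -- the shape the serial SEARCH (and the whole
      * ODO argument) is about.
       01  TAX-COUNT              PIC 9(4) COMP.
       01  TAX-TABLE.
           05  TAX-ENTRY          OCCURS 1000 TIMES
                                  INDEXED BY TAX-IDX.
               10  TAX-CODE       PIC X(4).
               10  TAX-RATE       PIC 9V999.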


> >P.S. If you search the records of this group, there ARE compilers that have
> >used non-Binary searches for SEARCH ALL. However, I do agree that no one has
> >been able to point out any that are still sold/distributed that do so today.
>
> There are Computer Science students who would claim that hashing is faster. They are
> simply wrong.

Hashing _IS_ faster when the table is a hash-table.

This seems to be another of your 'I believes'.


From: Richard on
On Sep 17, 10:33 am, Robert <n...(a)e.mail> wrote:
> On Sun, 16 Sep 2007 00:26:57 +0000 (UTC), docdw...(a)panix.com () wrote:
> >In article <rqnoe3hgjei5ir0b6ht4kefrmli2mr2...(a)4ax.com>,
> >Robert <n...(a)e.mail> wrote:
> >>On Sat, 15 Sep 2007 11:43:24 -0700, Richard <rip...(a)Azonic.co.nz> wrote:
>
> >>>On Sep 15, 6:50 pm, Robert <n...(a)e.mail> wrote:
> >>>> On Fri, 14 Sep 2007 22:51:45 -0700, Richard <rip...(a)Azonic.co.nz> wrote:
>
> >[snip]
>
> >>>> >Just why is 'index is faster than subscript' a myth, again ?
>
> >>>> 1. Because a timing test showed indexes are slower.
>
> >>>And you have done a timing test on every machine in the universe.
>
> >>If humans were unable to generalize, there wouldn't be any machines.
> >>We'd be living in
> >>shacks and tents.
>
> >What Mr Plinston puts forward, Mr Wagner, may demonstrate why there is a
> >season to things and a time to every purpose. The above might be phrased
> >otherwise and yet still retain some original flavor, eg:
>
> >A: Just why is 'index is faster than subscript' a myth, again ?
>
> >B: Because a timing test showed indexes are slower.
>
> >A: '*A* timing test' (emphasis added) shows that under *a* set of
> >conditions one might not be better than the other; it is possible that
> >under other sets of conditions the other might be better than the one.
>
> I tried to make the tests represent typical usage AND I posted source code. If you think
> the test is unfair, say so or write your own test. Thanks to Richard's complaints, I saw
> the subscript test did NOT represent typical usage, so I'll fix it and rerun.

It is not just that the tests were incompetent (as in they didn't do
anything); your conclusions were generally invalid. For example, you
counted advice as 'busted' when it was, in fact, faster.


> >>The Micro Focus page is generalized advice. Write and tell them
> >>generalization is BAD.
>
> >Leaving aside the Brooklyn Bridge nature of this argument - 'Micro Focus
> >jumps off the Brooklyn Bridge, you will, too?' - one might believe that
> >when Micro Focus (or an appropriate representative thereof) comes
> >a-posting here the responses might be the same.
>
> I have no problem with generalizations, nor do I fault Micro Focus for making them. I
> think a few of the SPECIFIC points are erroneous, because they're based on commonly held
> belief (myth).

You seem to forget that in your initial message there were two
distinct groups: the MF advice and the 'legacy beliefs', which you
failed to show that _anyone_ actually holds.

In the above you completely mix these up in claiming that MF 'points'
are erroneous because they are 'based on myth'.

The index vs subscript is _NOT_ in the MF advice.

The index vs subscript is _NOT_ a myth.

The MF advice is likely to be based on the internal workings of the
compilers and run-times, and has nothing to do with what people
'believe'.

For legacy coders it is likely that indexes _are_ faster than
subscripts. It is not a myth.

In fact my own tests with reasonably modern hardware and a current
compiler show indexes to be consistently the fastest, with subscripts
being marginally to much slower depending on their USAGE, i.e.
COMP-5 vs COMP.
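
For reference, a sketch of the three access styles being compared (the names
and the 1,000-entry size are made up):

       01  WS-RATE                PIC 9(5)V99.
       01  SUB-COMP               PIC 9(4) COMP.
       01  SUB-COMP-5             PIC 9(4) COMP-5.
       01  RATE-TABLE.
           05  RATE-ENTRY         PIC 9(5)V99 OCCURS 1000 TIMES
                                  INDEXED BY RATE-IDX.

      * In the PROCEDURE DIVISION:
      * Index: typically held by the run-time as a ready-made offset,
      * so no subscript arithmetic is needed at reference time.
           SET RATE-IDX TO 25
           MOVE RATE-ENTRY (RATE-IDX) TO WS-RATE
      * COMP-5 (native binary) subscript: usually close to the index.
           MOVE 25 TO SUB-COMP-5
           MOVE RATE-ENTRY (SUB-COMP-5) TO WS-RATE
      * COMP subscript: may involve an extra conversion or truncation
      * step, which is where the "much slower" cases tend to come from.
           MOVE 25 TO SUB-COMP
           MOVE RATE-ENTRY (SUB-COMP) TO WS-RATE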

> If Itanium is unsuccessful, CPUs are close to hitting the wall in terms of speed. We can't
> make them much more complex (required to avoid instruction collisions) because we can't
> make traces much smaller. We're approaching the size of atoms.

No, wrong. Itanium is 180 nm down to 90 nm. Current stuff is down to
40 nm.