From: Charles Hottel on

"Pete Dashwood" <dashwood(a)removethis.enternet.co.nz> wrote in message
news:5lu49rFa5hnvU1(a)mid.individual.net...
>
>
> "Robert" <no(a)e.mail> wrote in message
> news:uq8jf3pd3rq48eqio0hdtqo172nv2c16is(a)4ax.com...
>> On Tue, 25 Sep 2007 22:45:12 +0000 (UTC), docdwarf(a)panix.com () wrote:
>>
>>>In article <regif3d0b34nreavsckap09omqjhptnik8(a)4ax.com>,
>>>Robert <no(a)e.mail> wrote:
>>>>On Tue, 25 Sep 2007 09:25:04 +0000 (UTC), docdwarf(a)panix.com () wrote:
>>>>
>>
>>>>>Now, Mr Wagner... is one to expect another dreary series of repetitions
>>>>>about how mainframers who said that indices were faster than subscripts
>>>>>were, in fact, right about something?
>>>>
>>>>I expected I-told-you-so from the mainframe camp.
>>>
>>>It may be interesting to see if you get one; my point - and pardon the
>>>obscure manner of its making - was that you made a series of repetitions
>>>which a demonstration has disproved and it may be interesting to see if
>>>an
>>>equally lengthy series of repetitions follows... or if it just Goes Away
>>>until you next get an idea about something... and begin another, similar
>>>series of repetitions.
>>
>> We saw that subscript and index run at the same speed on three CPU
>> families -- HP PA
>> (SuperDome), DEC Alpha (Cray) and Richard's undisclosed machine, possibly
>> Intel. I am
>> confident we'd see the same on Intel, PowerPC (pseries, iseries, Mac) and
>> SPARC, based on
>> tests I ran a few years ago. Thus the generalization. I was surprised to
>> see zSeries did
>> not follow the pattern of the others.
>
> Well, Robert, I don't want to shake your confidence, and I deliberately
> refrained from posting these results (I felt you were getting enough
> flak...), but reconsidered when I saw your statement above :-)
>
> Here are the results of "Speed2" from a genuine Intel Celeron Core 2 Duo
> Vaio AR250G notebook with 2 GB of main memory, running under Windows XP
> with SP2 applied, using your code (with the following amendments: all
> asterisks and comments removed, exit perform cycle removed), compiled with
> no options other than the defaults (which includes "Optimize"), with the
> Fujitsu NetCOBOL version 6 compiler, compiled to .EXE:
>
> Null test 1
> Index 3
> Subscript 25
> Subscript comp-5 3
> Index 1 3
> Subscript 1 22
> Subscript 1 comp-5 3
>
> As you can see, indexing is between 7 and 8 times more efficient than
> subscripting, unless you use optimized subscripts, in this environment.
>
> (I was surprised that the figures are 3 times faster than the z/OS
> mainframe figures posted by Charlie...:-)

<snip>
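For readers who don't have Robert's "Speed2" source to hand, the loops
being compared look roughly like this -- a minimal sketch with hypothetical
names and table size, not Robert's actual code:

       01  WS-TABLE.
           05  WS-ENTRY        PIC 9(4) OCCURS 1000 TIMES
                               INDEXED BY NDX.
       01  SUB-DISPLAY         PIC 9(4).
       01  SUB-BINARY          PIC 9(4) COMP-5.
       01  WS-TOTAL            PIC 9(9) COMP-5 VALUE ZERO.

      * Index: the compiler keeps a ready-made byte offset.
           PERFORM VARYING NDX FROM 1 BY 1 UNTIL NDX > 1000
               ADD WS-ENTRY (NDX) TO WS-TOTAL
           END-PERFORM
      * DISPLAY subscript: each reference may need a
      * decimal-to-binary conversion, unless the optimizer
      * removes it.
           PERFORM VARYING SUB-DISPLAY FROM 1 BY 1
                   UNTIL SUB-DISPLAY > 1000
               ADD WS-ENTRY (SUB-DISPLAY) TO WS-TOTAL
           END-PERFORM
      * COMP-5 subscript: already binary, so typically as
      * fast as an index.
           PERFORM VARYING SUB-BINARY FROM 1 BY 1
                   UNTIL SUB-BINARY > 1000
               ADD WS-ENTRY (SUB-BINARY) TO WS-TOTAL
           END-PERFORM

That pattern matches Pete's figures above: the plain DISPLAY subscript is
the slow case, and COMP-5 brings it back level with the index.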

I was in somewhat of a hurry when I converted Robert's program to use my
mainframe timer routine. Looking at the TIMER-OFF paragraph:

TIMER-OFF.
    PERFORM READ-THE-TIME
*   TIME-NOW and TIME-START are in microseconds, so the
*   division by 1,000,000 yields seconds.
    COMPUTE ELAPSED-TIME ROUNDED =
        ((TIME-NOW - TIME-START) - TIMER-OVERHEAD) / 1000000

    IF ELAPSED-TIME NOT GREATER THAN ZERO
        MOVE 'ERROR' TO ELAPSED-TIME-DISPLAY
    ELSE
*       This * 10 is the culprit: it appears to inflate
*       every displayed figure tenfold.
        COMPUTE ELAPSED-TIME-EDITED ROUNDED = ELAPSED-TIME * 10
    END-IF
    DISPLAY TEST-NAME ELAPSED-TIME-DISPLAY

My timing routine computes the time in microseconds since the task started.

The ELAPSED-TIME-EDITED is ELAPSED-TIME multiplied by 10, so I think all of
the times I published may be 10 times higher than the actual elapsed time. I
am somewhat skeptical of measuring actual times on a mainframe without
repeating the tests six to ten times and computing an average, variance
and/or standard deviation. I did not spend much time thinking about whether
to remove that COMPUTE, because to me it was the relative speeds that were
important. Also, I did not analyze Robert's timing method or the rationale
for this COMPUTE, because my past experience with his timing method showed
it was grossly inaccurate on a mainframe (the task gets swapped out and put
into a wait state, but the time-of-day clock keeps on ticking). Sorry, I
guess I was lazy; my only excuse is that I am still far from being 100% of
my old self lately.
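
For what it's worth, the kind of repetition I mean is easy to bolt on. A
minimal sketch (hypothetical names; RUN-ONE-TRIAL is assumed to leave its
elapsed seconds in TRIAL-TIME; the usual one-pass sums for the mean and
variance):

       01  TRIAL-MAX       PIC 9(2)        COMP-5 VALUE 10.
       01  TRIAL-NO        PIC 9(2)        COMP-5.
       01  TRIAL-TIME      PIC 9(7)V9(6)   COMP-3.
       01  SUM-X           PIC 9(9)V9(6)   COMP-3 VALUE ZERO.
       01  SUM-X-SQUARED   PIC 9(12)V9(6)  COMP-3 VALUE ZERO.
       01  MEAN-TIME       PIC 9(7)V9(6)   COMP-3.
       01  VARIANCE-TIME   PIC S9(12)V9(6) COMP-3.
       01  STD-DEVIATION   PIC 9(7)V9(6)   COMP-3.

           PERFORM VARYING TRIAL-NO FROM 1 BY 1
                   UNTIL TRIAL-NO > TRIAL-MAX
               PERFORM RUN-ONE-TRIAL
               ADD TRIAL-TIME TO SUM-X
               COMPUTE SUM-X-SQUARED =
                   SUM-X-SQUARED + TRIAL-TIME ** 2
           END-PERFORM
           COMPUTE MEAN-TIME ROUNDED = SUM-X / TRIAL-MAX
      *    Shortcut formula: E(X squared) minus the mean squared.
           COMPUTE VARIANCE-TIME ROUNDED =
               (SUM-X-SQUARED / TRIAL-MAX) - MEAN-TIME ** 2
      *    Rounding can push the shortcut a hair below zero.
           IF VARIANCE-TIME LESS THAN ZERO
               MOVE ZERO TO VARIANCE-TIME
           END-IF
           COMPUTE STD-DEVIATION ROUNDED = VARIANCE-TIME ** 0.5

With the mean in hand, a large standard deviation relative to the mean is
the tell-tale that the task was being swapped out during some trials.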


From: Charles Hottel on

"Judson McClendon" <judmc(a)sunvaley0.com> wrote in message
news:KlrKi.80086$Lu.64000(a)bignews8.bellsouth.net...
> "Pete Dashwood" <dashwood(a)removethis.enternet.co.nz> wrote:
>>
>> It is things like this that make me wonder why we even bother about
>> performance and have heated discussions about things like indexes and
>> subscripts, when the technology is advancing rapidly enough to simply
>> take care of it.
>
> Consider this. If Microsoft had put performance at a premium, Windows
> would boot in 1 second, you could start any Office application and it
> would be ready for input in the blink of an eye, and your Access test
> would
> have run in a few seconds. How many thousand man-years have been spent
> cumulatively all over the planet waiting on these things? :-)
> --
> Judson McClendon judmc(a)sunvaley0.com (remove zero)
> Sun Valley Systems http://sunvaley.com
> "For God so loved the world that He gave His only begotten Son, that
> whoever believes in Him should not perish but have everlasting life."
>

I always play a game of FreeCell while waiting for everything to start.
Norton Anti-Virus is particularly slow.


From: Pete Dashwood on


<docdwarf(a)panix.com> wrote in message news:fdd9bf$pps$1(a)reader1.panix.com...
> In article <5lu49rFa5hnvU1(a)mid.individual.net>,
> Pete Dashwood <dashwood(a)removethis.enternet.co.nz> wrote:
>
> [snip]
>
>>A few days ago I
>>was running a test on a P4 notebook that had to create a couple of million
>>rows on an ACCESS database.
>
> Why, Mr Dashwood... how interesting! Keep at it, you'll be up to sixty
> million and change in no time!

Yes, that thought occurred to me at the time :-)

As soon as I get a chance I'll try doing it with a Query Expression in C#
instead of SQL... that should be very interesting as it will allow both of
the processors on my machine to run in parallel...

It wrote the millions of rows due to an oversight on my part. <blushes and
shuffles feet>

I wrote a tool that analyses COBOL source code for VSAM/KSDS and ISAM data
sets (the SELECTs and the FD/01s) and generates corresponding tables on a
Relational Database. Because Repeating Groups (OCCURS) must be removed as
part of normalization, my tool creates a separate table when it encounters
an OCCURS in the source code, and links it back to the base table with a
Referential Integrity constraint. (There's much more to it than that, it
handles multi level tables and REDEFINES as well, but you get the general
idea...) Having done this, it then generates Host Variables (DECLGEN) for
the new DB structure and also generates a Load Module in COBOL (using
embedded SQL and the Host Variables) that can read the ISAM file and load
the new database tables.
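
To make that concrete, here is the shape of it on a hypothetical record
(invented names and a deliberately trivial layout; the real generated code
carries much more baggage). The tool would turn CUSTOMER-REC into a base
table CUSTOMER (CUST_ID, CUST_NAME) plus a child table CUSTOMER_PHONE
(CUST_ID, SEQ_NO, PHONE) with a foreign key back to CUSTOMER, and the Load
Module would do roughly:

       01  CUSTOMER-REC.
           05  CUST-ID         PIC 9(6).
           05  CUST-NAME       PIC X(30).
           05  CUST-PHONE      PIC X(12) OCCURS 5 TIMES.

      * Generated host variables for the new DB structure.
       01  HV-CUST-ID          PIC S9(6) COMP-3.
       01  HV-CUST-NAME        PIC X(30).
       01  HV-SEQ-NO           PIC S9(4) COMP-5.
       01  HV-PHONE            PIC X(12).
       01  PX                  PIC S9(4) COMP-5.

      * One base-table row per input record...
           MOVE CUST-ID   TO HV-CUST-ID
           MOVE CUST-NAME TO HV-CUST-NAME
           EXEC SQL
               INSERT INTO CUSTOMER (CUST_ID, CUST_NAME)
               VALUES (:HV-CUST-ID, :HV-CUST-NAME)
           END-EXEC
      * ...then one child-table row per occurrence, keyed back
      * to the base row (CUST_ID carries the RI constraint).
           PERFORM VARYING PX FROM 1 BY 1 UNTIL PX > 5
               MOVE PX              TO HV-SEQ-NO
               MOVE CUST-PHONE (PX) TO HV-PHONE
               EXEC SQL
                   INSERT INTO CUSTOMER_PHONE
                          (CUST_ID, SEQ_NO, PHONE)
                   VALUES (:HV-CUST-ID, :HV-SEQ-NO, :HV-PHONE)
               END-EXEC
           END-PERFORM

One child INSERT per occurrence is exactly what produces the "99 linked
rows per base row" effect described below.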

I had a situation where the source definition had a table embedded in a
record definition, with an OCCURS 99.

At load time the generated Load Module created a base row and then attached
99 linked rows in the related table. I didn't realise the test data I had
been given (an ISAM file) had around 20,000 records on it :-). Of course,
most of the rows were empty and it was very simple to run an SQL query that
dropped them. It isn't so simple to generate a check for empty rows into the
Load Module, because a table may comprise any number of different fields of
different types and lengths. (There are around 200 ISAM files in the
existing system and many of them have multiple OCCURS clauses in them, and
some are multi-level tables.)
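
For illustration, one check that would be simple to generate (whether or
not it is the one actually chosen here; names reused from the hypothetical
fragment above, with INSERT-CHILD-ROW assumed to wrap the child-table
INSERT shown earlier):

      * A group compare treats the whole occurrence as one
      * alphanumeric field, so all-spaces and all-low-values
      * occurrences are cheap to detect. Fields holding binary
      * or packed zeros would need a per-field test instead,
      * which is equally mechanical to emit because the
      * generator already knows every field's PICTURE.
           PERFORM VARYING PX FROM 1 BY 1 UNTIL PX > 5
               IF CUST-PHONE (PX) = SPACES
                  OR CUST-PHONE (PX) = LOW-VALUES
                   CONTINUE
               ELSE
                   MOVE PX              TO HV-SEQ-NO
                   MOVE CUST-PHONE (PX) TO HV-PHONE
                   PERFORM INSERT-CHILD-ROW
               END-IF
           END-PERFORM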

I thought about several approaches and opted for one that is relatively easy
to generate into the Load Module source. I tested it today and it worked
properly with no empty rows loaded. (The total was around 70,000 rows; big
difference :-)

I was pretty pleased, as it is getting close to delivery time and I was able
to load a set of database tables which were generated 100% from COBOL source
(no manual tweaking), using a COBOL program 100% generated from a C#
program, with no manual tweaking. The idea is to be able to analyse the
existing COBOL source and create and load a normalized Relational DB from
it, without manual intervention. It is pretty much there.

Pete.
--
"I used to write COBOL...now I can do anything."


From: Pete Dashwood on


"Judson McClendon" <judmc(a)sunvaley0.com> wrote in message
news:KlrKi.80086$Lu.64000(a)bignews8.bellsouth.net...
> "Pete Dashwood" <dashwood(a)removethis.enternet.co.nz> wrote:
>>
>> It is things like this that make me wonder why we even bother about
>> performance and have heated discussions about things like indexes and
>> subscripts, when the technology is advancing rapidly enough to simply
>> take care of it.
>
> Consider this. If Microsoft had put performance at a premium, Windows
> would boot in 1 second, you could start any Office application and it
> would be ready for input in the blink of an eye, and your Access test
> would
> have run in a few seconds. How many thousand man-years have been spent
> cumulatively all over the planet waiting on these things? :-)

Apparently Bill Gates showed a Windows machine that did a cold boot in 14
seconds. He claims it is not Windows that slows the boot down.

Certainly mine comes out of hibernation (warm start) in about 10 seconds. I
had occasion to remove Norton the other day before re-installing it, and
noted that without it my machine boots in 1 minute 40 seconds. With it, 3
minutes.

My Office applications do start very quickly (Word is immediate, Access is
under 2 seconds, Project is around 5 seconds, Excel is immediate, PowerPoint
1 second, Outlook between 2 and 5 seconds, depending on the profile; I'd be
interested to know how this compares with other users).

The latest ACCESS test does run in a few seconds :-) (70,000 records = 41
seconds, using embedded SQL from COBOL).

Overall, I'm happy with it; if I weren't, I'd change... :-)

Pete.
--
"I used to write COBOL...now I can do anything."


From: Howard Brazee on
On Thu, 27 Sep 2007 01:04:49 +1200, "Pete Dashwood"
<dashwood(a)removethis.enternet.co.nz> wrote:

>Again, to me at least, this just completely confirms that it is not possible
>to make meaningful statements about performance unless you run actual tests.

Another thing it shows is that you shouldn't expect the techniques that
testing showed to be most efficient to stay most efficient as compilers
and platforms change.