From: James J. Gavan on
Pete Dashwood wrote:
> "James J. Gavan" <jgavandeletethis(a)shaw.ca> wrote in message
> news:sNVSh.51648$6m4.42486(a)pd7urf1no...
> <snip>>
>
>>Pete Dashwood's solution and ESQL Assistant. Pete reads your COBOL file
>>records and generates equivalent DB rows into a table.
>
>
> Not quite.
>
> ISAM2RDB reads your COBOL source definitions and generates one or more
> TABLES into a designated DATABASE.

Yes, I can see I worded that badly - I really meant, as you point out,
that you generate the format for a Row in a particular table or
perhaps, as with the reference to OCCURS below, definitions for one or
more Tables. (I wonder if that's why the J4 gang are so keen on an
800-plus-page document - so that you dot all the i's and cross all the
t's :-) )

>
> A single ISAM file definition COULD generate several tables. This is because
> the normalization process removes repeating groups (OCCURS in COBOL) to a
> separate linked table for each group. (One advantage of this is that every
> "table" in your COBOL system now has unlimited rows (no more maintaining
> OCCURS items) and will only take as much space as it actually needs...)
>
> If your ISAM source definition has no OCCURS in it, then you should get a
> single table, with a correctly typed column for each of the fields defined
> in your ISAM definition. (Both groups and elements are defined, to assist
> with data loading and converting existing programs that may reference group
> fields. Unreferenced groups can be removed later.)
>
> Certainly this tool gives you a structure that is useful while you are
> transitioning from ISAM to RDB. However, I would not suggest that this is
> how Relational Databases should be designed and built :-) It is a
> non-relational solution forced into a relational framework and, as such, it
> can never be as good as a Relational solution built from scratch. Sadly few
> of us have the time or funding to restart from scratch when moving to new
> technology, so ISAM2RDB provides a helpful bridge. (It also assists with the
> learning process; for example, you can look at the PICTURE of a given
> element in your COBOL code and see what type of Database definition for the
> corresponding column was generated on the RDB.)
>
>

If he should opt to go with ISAM2RDB to get his Table definitions - and
it involves OCCURS - please make sure you spell out to him how to use
the resulting 'inter-linking' tables. Aeons ago you produced similar for
me - but without an explanation of the 're-vamped' OCCURS, I was stumped
on how to use them!

Just one other thought which you might comment on. When Dave has done
his setup with tables and definitions, for his real-world COBOL files, I
can't think of another solution, other than generating CSV files to
import to the Database ??? (And given any OCCURS - he may have to
generate more than one CSV from a given COBOL file).

Don't want to insult you Dave - but in case the term is new to you,
'CSV' = Comma-Separated Values, where each record is written to a Line
Sequential file in the field/column sequence making up the record to be
imported as a Table row, usually with the file suffix '.txt'.
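
For example, here is a minimal sketch of writing one such CSV line from
a COBOL record. The file, record and field names are purely invented
(not Dave's actual data), and a real job would of course loop through
the whole file:

       IDENTIFICATION DIVISION.
       PROGRAM-ID. CSVDEMO.
       ENVIRONMENT DIVISION.
       INPUT-OUTPUT SECTION.
       FILE-CONTROL.
           SELECT CSV-FILE ASSIGN TO "customer.txt"
               ORGANIZATION IS LINE SEQUENTIAL.
       DATA DIVISION.
       FILE SECTION.
       FD  CSV-FILE.
       01  CSV-LINE                 PIC X(100).
       WORKING-STORAGE SECTION.
      *> A made-up record standing in for one of the real ones.
       01  CUSTOMER-REC.
           05  CUST-NO              PIC 9(06)              VALUE 123456.
           05  CUST-NAME            PIC X(20)              VALUE "BLOGGS".
           05  CUST-BALANCE         PIC 9(03)V9(03) COMP-3 VALUE 2.345.
       01  WS-BALANCE-DISP          PIC 999.999.
       PROCEDURE DIVISION.
       MAIN-PARA.
           OPEN OUTPUT CSV-FILE
      *> Numeric fields go out via an edited picture so the import sees
      *> "002.345" rather than packed decimal.
           MOVE CUST-BALANCE TO WS-BALANCE-DISP
           MOVE SPACES TO CSV-LINE
      *> DELIMITED BY SPACE drops trailing spaces from the name (it
      *> would also stop at the first embedded space, so real names
      *> need more care, e.g. quoting the field).
           STRING CUST-NO          DELIMITED BY SIZE
                  ","              DELIMITED BY SIZE
                  CUST-NAME        DELIMITED BY SPACE
                  ","              DELIMITED BY SIZE
                  WS-BALANCE-DISP  DELIMITED BY SIZE
               INTO CSV-LINE
           WRITE CSV-LINE
           CLOSE CSV-FILE
           STOP RUN.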

No more below.

Jimmy

> ESQL Assistant -
>
>>*YOU* design the DB Table and determine what type of SQL fields/columns
>>(Int, Char etc.), you want. Naturally the latter approach means you have
>>to get au fait with the particular DB package, (how numerics are stored)
>>- so it could take you a week or more to convert, depending upon the
>>number of tables you are going to generate.
>>
>>There might be a nice solution using both Pete's package and ESQL
>>Assistant. Pete will read a whole bunch of COBOL files in mere minutes
>>and generate your DB Table formats. Now using ESQL Assistant, (the major
>>thing being you can generate your SQL statements knowing they will be
>>correct), when you test on a particular table the ESQL package will ask
>>you if you want to generate a copyfile - three parts :-
>>
>>DB Table format
>>DB Table NULL columns
>>COBOL - a typical COBOL record
>>
>>So Pete provides the initial definition of your DB Table and from that,
>>using Pete's DB Table, ESQL generates the copyfile above, which you can
>>use to test your individual queries.
>
>
> Yes, that could be viable. I guess it depends on how many tables are
> involved. If it is just a few, it is probably better to simply use ESQL and
> learn how to define tables. If there are many, it might be useful to use
> ISAM2RDB and pick up some pointers on how to define and normalize tables, in
> passing.
>
>>
>>If/when you get a paperback, or articles on design from the Web,
>>concentrate on the term 'Normalization' so that you have a handle on it.
>>
>
>
> A very important observation, Jimmy.
>
> I have some stuff on this somewhere... I'll see if I can post it to a web
> server so people can access it.
>
> Pete.
>
>
From: Pete Dashwood on

"James J. Gavan" <jgavandeletethis(a)shaw.ca> wrote in message
news:4c%Sh.53345$6m4.32414(a)pd7urf1no...
> Pete Dashwood wrote:
>> "James J. Gavan" <jgavandeletethis(a)shaw.ca> wrote in message
>> news:sNVSh.51648$6m4.42486(a)pd7urf1no...
>> <snip>>
> If he should opt to go with ISAM2RDB to get his Table definitions - and it
> involves OCCURS - please make sure you spell out to him how to use the
> resulting 'inter-linking' tables. Aeons ago you produced similar for me -
> but without an explanation of the 're-vamped' OCCURS, I was stumped on how
> to use them!
>
What you got was a pre-release of the product.

You never asked for help.

Since then a number of customers have successfully used this tool. It comes
with a comprehensive Help file that took almost as long to develop as the
software did :-) There are also various ancillaries and "support"
functions.

The concept of a secondary table linked to a primary one by a Foreign Key is
not exactly rocket science. It's actually pretty fundamental to Relational
Databases.
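
For anyone to whom the idea is new, here is a minimal sketch of the
shape of it. The record layout, table names and column types below are
invented purely for illustration - they are NOT what ISAM2RDB actually
generates - and it assumes a DB2-style ESQL precompiler plus an open
database connection:

       IDENTIFICATION DIVISION.
       PROGRAM-ID. FK-SKETCH.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
           EXEC SQL INCLUDE SQLCA END-EXEC.
      *> A COBOL record with a repeating group...
       01  ORDER-REC.
           05  ORDER-NO             PIC 9(08).
           05  ORDER-DATE           PIC 9(08).
           05  ORDER-LINE OCCURS 10 TIMES.
               10  LINE-PRODUCT     PIC X(10).
               10  LINE-QTY         PIC 9(05).
       PROCEDURE DIVISION.
       MAIN-PARA.
      *> ...normalises into a primary table plus a secondary table (one
      *> row per OCCURS entry), linked back to its parent by a key.
           EXEC SQL
               CREATE TABLE ORDER_HEADER
                  (ORDER_NO     INTEGER NOT NULL PRIMARY KEY,
                   ORDER_DATE   INTEGER)
           END-EXEC
           EXEC SQL
               CREATE TABLE ORDER_LINE
                  (ORDER_NO     INTEGER NOT NULL,
                   LINE_SEQ     INTEGER NOT NULL,
                   LINE_PRODUCT CHAR(10),
                   LINE_QTY     INTEGER,
                   PRIMARY KEY (ORDER_NO, LINE_SEQ),
                   FOREIGN KEY (ORDER_NO) REFERENCES ORDER_HEADER)
           END-EXEC
           STOP RUN.

Every ORDER_LINE row carries the ORDER_NO of its parent ORDER_HEADER
row - that is all the "inter-linking" amounts to.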

Obviously, if anyone had trouble using it I would gladly help, but I am no
longer in the business of converting ISAM files (unless someone pays me a
market rate to do so...:-)) and I'm not actually trying to sell this
software...The tool is there and I'm happy to let people use it, but I'm not
spending hours of time (which I simply don't have, at the moment) explaining
fundamental concepts and providing free support.

> Just one other thought which you might comment on. When Dave has done his
> setup with tables and definitions, for his real-world COBOL files, I can't
> think of another solution, other than generating CSV files to import to
> the Database ??? (And given any OCCURS - he may have to generate more than
> one CSV from a given COBOL file).
>

That would possibly work, but there is no need to do so. An approach I have
used in the past, successfully, is to write a COBOL program that reads the
ISAM file sequentially and simply transfers each record obtained, to the RDB
interface generated by ISAM2RDB. As I mentioned elsewhere, the tool allows
people with no RDB or SQL knowledge to access the RDB using COBOL records.
The RDB maintenance module for the table being loaded will simply INSERT all
the data. There is no need for intermediate processes or export/import; it
is direct from ISAM to RDB.

The fundamental thing here is that a COBOL program can read ISAM and "write"
RDB (within the same program). Once you write such a program you have a
template that enables the loading of all future tables from their equivalent
ISAM files.
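
As a sketch of that template - and only a sketch: the file, the record
layout and, above all, the CALL interface shown here are hypothetical,
since the real interface is whatever ISAM2RDB generates for the table
concerned - it boils down to a simple read/insert loop:

       IDENTIFICATION DIVISION.
       PROGRAM-ID. LOAD-CUSTOMER.
       ENVIRONMENT DIVISION.
       INPUT-OUTPUT SECTION.
       FILE-CONTROL.
           SELECT CUSTOMER-FILE ASSIGN TO "customer.dat"
               ORGANIZATION IS INDEXED
               ACCESS MODE  IS SEQUENTIAL
               RECORD KEY   IS CUST-NO
               FILE STATUS  IS WS-FILE-STATUS.
       DATA DIVISION.
       FILE SECTION.
       FD  CUSTOMER-FILE.
       01  CUSTOMER-REC.
           05  CUST-NO              PIC 9(06).
           05  CUST-NAME            PIC X(30).
           05  CUST-BALANCE         PIC 9(03)V9(03) COMP-3.
       WORKING-STORAGE SECTION.
       01  WS-FILE-STATUS           PIC XX     VALUE "00".
       01  WS-ACTION                PIC X(06).
       PROCEDURE DIVISION.
       MAIN-PARA.
           OPEN INPUT CUSTOMER-FILE
           PERFORM UNTIL WS-FILE-STATUS NOT = "00"
               READ CUSTOMER-FILE
                   AT END CONTINUE
                   NOT AT END
      *> Hand the whole COBOL record to the table's generated
      *> maintenance module and ask it to INSERT a row. The module
      *> name and USING list here are invented for illustration.
                       MOVE "INSERT" TO WS-ACTION
                       CALL "CUSTMNT" USING WS-ACTION CUSTOMER-REC
               END-READ
           END-PERFORM
           CLOSE CUSTOMER-FILE
           STOP RUN.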

Pete.


From: James J. Gavan on
Pete Dashwood wrote:
> "James J. Gavan" <jgavandeletethis(a)shaw.ca> wrote in message
> news:4c%Sh.53345$6m4.32414(a)pd7urf1no...
>
>>Pete Dashwood wrote:
>>
>>>"James J. Gavan" <jgavandeletethis(a)shaw.ca> wrote in message
>>>news:sNVSh.51648$6m4.42486(a)pd7urf1no...
>>><snip>>
>>
>>If he should opt to go with ISAM2RDB to get his Table definitions - and it
>>involves OCCURS - please make sure you spell out to him how to use the
>>resulting 'inter-linking' tables. Aeons ago you produced similar for me -
>>but without an explanation of the 're-vamped' OCCURS, I was stumped on how
>>to use them!
>>
>
> What you got was a pre-release of the product.
>
> You never asked for help.

True. But I didn't ask for the table definitions either, which you
returned to me after using my COBOL record copyfiles as yet another test
set. Was I supposed to know what the Table definitions were? You
subsequently developed the Help File mentioned below, which probably
covered this and other points.

I'm aware you do a good job on Help Files, so Dave shouldn't have a
problem working his way through it. For me it's just academic at this
stage, as I won't be doing any more productive coding.

>
> Since then a number of customers have successfully used this tool. It comes
> with a comprehensive Help file that took almost as long to develop as the
> software did :-) There are also various ancillaries and "support"
> functions.
>
> The concept of a secondary table linked to a primary one by a Foreign Key is
> not exactly rocket science. It's actually pretty fundamental to Relational
> Databases.
>
> Obviously, if anyone had trouble using it I would gladly help, but I am no
> longer in the business of converting ISAM files (unless someone pays me a
> market rate to do so...:-)) and I'm not actually trying to sell this
> software...The tool is there and I'm happy to let people use it, but I'm not
> spending hours of time (which I simply don't have, at the moment) explaining
> fundamental concepts and providing free support.
>
>
>>Just one other thought which you might comment on. When Dave has done his
>>setup with tables and definitions, for his real-world COBOL files, I can't
>>think of another solution, other than generating CSV files to import to
>>the Database ??? (And given any OCCURS - he may have to generate more than
>>one CSV from a given COBOL file).
>>
>
>
> That would possibly work, but there is no need to do so. An approach I have
> used in the past, successfully, is to write a COBOL program that reads the
> ISAM file sequentially and simply transfers each record obtained, to the RDB
> interface generated by ISAM2RDB. As I mentioned elsewhere, the tool allows
> people with no RDB or SQL knowledge to access the RDB using COBOL records.
> The RDB maintenance module for the table being loaded will simply INSERT all
> the data. There is no need for intermediate processes or export/import; it
> is direct from ISAM to RDB.
>
> The fundamental thing here is that a COBOL program can read ISAM and "write"
> RDB (within the same program). Once you write such a program you have a
> template that enables the loading of all future tables from their equivalent
> ISAM files.
>
Again yes to the above. But from my experience, the quick transfer is
not necessarily the best for every situation :-

- Using any new tool (Screens to GUIs, COBOL files to DBs etc., or, as
you have done, COBOL to Java, then moving on to C#) should,
I believe, trigger in your mind what advantages the new tool gives you.
So as part of the design planning you inevitably think of enhanced ways
of doing things - which can lead to a slightly different, or
considerably different, database from the original.

- In my case there was the necessity of converting 700 data sets
(different geographical locations), with some 7 files each, accumulated
over a twenty-year period, to fit a new design format. That meant that
in some stages of the transfer process the 'old' records just didn't
have some fields needed in the "new" DB tables. So: process in phases,
having generated error reports where necessary, stop, do some manual
editing, and then move on to the later phases.

- The best way of achieving this, for me, was Line Sequential using a
fixed record template. Given a value of 2.345 held in pic 9(03)v9(03)
comp-3, the template outputs ",002.345," - the result: all columns are
aligned, making it very easy to scroll swiftly through records and make
editing changes where necessary. (There's a small sketch of the idea
after this list.)

- Even with the above approach it is still a fiddly operation, i.e.
checking the validity of the input data. You could of course take the
'cleaned up' Line Sequential CSV and "write" to the DB as you suggest.
However given the format is a CSV - just do the one import.
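
A minimal sketch of the fixed template idea mentioned above - the field
names are invented, and a real template would of course carry every
column of the row:

       IDENTIFICATION DIVISION.
       PROGRAM-ID. TEMPLATE-SKETCH.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
       01  W-BALANCE                PIC 9(03)V9(03) COMP-3 VALUE 2.345.
      *> Fixed-width template: each field always occupies the same
      *> columns, so the resulting file lines up when you scroll it.
       01  CSV-TEMPLATE.
           05  FILLER               PIC X        VALUE ",".
           05  T-BALANCE            PIC 999.999.
           05  FILLER               PIC X        VALUE ",".
       PROCEDURE DIVISION.
       MAIN-PARA.
           MOVE W-BALANCE TO T-BALANCE
      *> CSV-TEMPLATE now holds ",002.345," ready to be written to the
      *> Line Sequential file.
           DISPLAY CSV-TEMPLATE
           STOP RUN.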

Different strokes for different folks.

Jimmy
From: Pete Dashwood on

"James J. Gavan" <jgavandeletethis(a)shaw.ca> wrote in message
news:g6hTh.61601$aG1.7987(a)pd7urf3no...
> Pete Dashwood wrote:
>> "James J. Gavan" <jgavandeletethis(a)shaw.ca> wrote in message
>> news:4c%Sh.53345$6m4.32414(a)pd7urf1no...
>>
<snip>>
> Again yes to the above. But from my experience, the quick transfer is
> not necessarily the best for every situation :-

It is if you're in a hurry... :-)

>
> - Using any new tool (Screens to GUIs, COBOL files to DBs etc., or, as you
> have done, COBOL to Java, then moving on to C#) should,
> I believe, trigger in your mind what advantages the new tool gives you.
> So as part of the design planning you inevitably think of enhanced ways
> of doing things - which can lead to a slightly different, or
> considerably different, database from the original.

Sure. Growth occurs. And when it does, we see different, and often better,
ways of doing things.

We are then faced with the dilemma of whether we should convert or scrap
what we have and go back to square one, or continue with what we (now...)
know is a sub-optimum solution. Usually a balance is struck between these
two extremes... in this case it is a conversion from ISAM to RDB; probably
the best that can be done under the circumstances.

A "Relational purist" will be unhappy because the RDB is being constrained
by the original ISAM DB design (which may have also "evolved" over time),,
and therefore doesn't realise all the true benefits of RDB; someone who
LOVES ISAM and flat files will be unhappy about why something that works
fine, is being moved to a "new fangled technology" (even if it is nearly 25
years old... some people are suspicious of anything less than two decades
old...)...no-one is ever completely happy.

Given all these conflicts of perspective and interest, it is a miracle there
are any functioning systems in the world at all...:-)

My experience is that people do the best they can; (no-one gets out of bed
in the morning and decides: "I think I'll go into work today and really
screw up everything I do...yeah, that'll make me feel better...")...
sometimes they get it wrong, sometimes they get it right, sometimes they are
constrained by other factors into compromises which are not optimal. Only
very rarely (and it is more likely if you work for yourself, and can
therefore decide what you will or will not go with) does the opportunity
arise to do something from scratch using the best technology. (Even then,
in 5 years you will realise it could have been better... :-))
>
> - In my case there was the necessity of converting 700 data sets
> (different geographical locations), with some 7 files each, accumulated
> over a twenty-year period, to fit a new design format. That meant that in
> some stages of the transfer process the 'old' records just didn't have
> some fields needed in the "new" DB tables. So: process in phases, having
> generated error reports where necessary, stop, do some manual editing,
> and then move on to the later phases.
>
Sounds pretty drastic...

One advantage of RDB, of course, is that it is very easy to "expand the
design" and to tolerate missing data as well.

> - The best way of achieving this, for me, was Line Sequential using a
> fixed record template. Given a value of 2.345 held in pic 9(03)v9(03)
> comp-3, the template outputs ",002.345," - the result: all columns are
> aligned, making it very easy to scroll swiftly through records and make
> editing changes where necessary.

This is one step away from what I suggested. The only difference is that
I'd have the template write the data directly to the RDB and do the
"cleanup" on the RDB, where there are better tools available for
identifying and correcting data anomalies.
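
As a sketch of the sort of set-level cleanup I mean - names invented,
and again assuming ESQL and an open connection - one statement can
find, and another repair, a whole class of anomalies at once:

       IDENTIFICATION DIVISION.
       PROGRAM-ID. CLEANUP-SKETCH.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
           EXEC SQL INCLUDE SQLCA END-EXEC.
       01  WS-BAD-COUNT             PIC S9(09) COMP.
       PROCEDURE DIVISION.
       MAIN-PARA.
      *> Count the rows that arrived without a region code...
           EXEC SQL
               SELECT COUNT(*) INTO :WS-BAD-COUNT
                 FROM CUSTOMER
                WHERE REGION_CODE IS NULL
           END-EXEC
           DISPLAY "Rows needing attention: " WS-BAD-COUNT
      *> ...then repair the whole set with one statement.
           EXEC SQL
               UPDATE CUSTOMER
                  SET REGION_CODE = 'UNKNOWN'
                WHERE REGION_CODE IS NULL
           END-EXEC
           STOP RUN.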

However, ANY solution that works, is a GOOD solution... :-}

(I was embarrassed when, as a student pilot, I made a pretty heavy landing
and apologised to my instructor. He was a Texan who was rarely fazed by
anything, and he just drawled: "Son, any landing you walk away from is a
GOOD landing...")
>
> - Even with the above approach it is still a fiddly operation, i.e.
> checking the validity of the input data. You could of course take the
> 'cleaned up' Line Sequential CSV and "write" to the DB as you suggest.
> However given the format is a CSV - just do the one import.
>
> Different strokes for different folks.

Absolutely.

Pete.


From: Rick Smith on

"Pete Dashwood" <dashwood(a)removethis.enternet.co.nz> wrote in message
news:585r42F2e105nU1(a)mid.individual.net...
[snip]
> We are then faced with the dilemma of whether we should convert or scrap
> what we have and go back to square one, or continue with what we (now...)
> know is a sub-optimum solution. Usually a balance is struck between these
> two extremes... in this case it is a conversion from ISAM to RDB; probably
> the best that can be done under the circumstances.

< http://www.microfocusworld.com/track_page.php?id=5 >
"A partner will present a session that shows how a relational
database can be used with a COBOL application using
standard COBOL I/O statements, WITHOUT any changes
to the code!"

Perhaps the best is no conversion at all! Just upgrade to the
latest technology.