-----Original Message-----
From: Joe Pluta <joepluta@PlutaBrothers.com>
To: midrange-l@midrange.com <midrange-l@midrange.com>
Date: Thursday, September 06, 2001 11:08 PM
Subject: RE: SQL vs V5R1 and service woes


>This in fact is a common
>misconception: if you use SQL, you can change your database more easily.

>"Easier" in this case meant that you could add a field to both files without
>changing the SQL statement.  This is only the case when using "SELECT *",
>and my belief is that "SELECT *" really isn't really a common use, except in
>a subselect.

This is not quite true. If you are using SELECT *, then adding fields to the
file means the program must be able to deal with the new fields, just as with
native I/O: the structure of the target changes and the program must be
recompiled. You actually allude to that in your next statement.
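A quick sketch of why SELECT * is the fragile case while an explicit column
list is not. The file and column names here are made up for illustration:

```sql
-- LEASEMST and its columns are hypothetical examples.
CREATE TABLE LEASEMST (LSENUM INT, LESSEE CHAR(30));
ALTER TABLE LEASEMST ADD COLUMN RESIDVAL DECIMAL(11, 2);

SELECT * FROM LEASEMST;              -- now returns three columns; the program must cope
SELECT LSENUM, LESSEE FROM LEASEMST; -- result set unchanged; no recompile forced
```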

> (The other use is when you subsequently FETCH into a data
>structure, but then you have the same issues as RPG or COBOL.)
>
>The other way SQL is easier is that "you don't have to recompile".  This is
>true if you're explicitly naming fields in your SELECT statements, but then
>my comment on that is "Uh, so what?".  How often do you change your
>database?

In a real environment? Sometimes more often than you would like to think. At
a leasing firm I once worked for, the lease master file grew from 30 or 40
fields to over 300 fields in the course of two years. Business changes and
new offerings drove this. We were using native I/O, so weekend recompiles
were standard fare for the programmers. And, yes, I know that some of those
fields did not really belong in that file, but fixing that would have
required more careful examination, analysis, and design time than the users
were willing to put up with at the time. It was not a small company.

> How hard is it to then recompile the programs that use the
>changed file?  To determine which programs use a given file, you can do a
>DSPPGMREF to an outfile and use SQL to determine the programs to recompile
>(this, by the way, is a really GOOD use of SQL, in my book!).
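For anyone who has not tried the quoted technique, it looks roughly like
this. The outfile field names below follow the DSPPGMREF model file
QADSPPGM (verify them on your release), and LEASEMST is a made-up file name:

```sql
-- First, from a command line:
--   DSPPGMREF PGM(MYLIB/*ALL) OUTPUT(*OUTFILE) OUTFILE(QTEMP/PGMREFS)
-- Then query the outfile for every program that references the file:
SELECT WHPNAM, WHLIB
  FROM QTEMP/PGMREFS
 WHERE WHFNAM = 'LEASEMST'
 ORDER BY WHLIB, WHPNAM
```

(QTEMP/PGMREFS is system naming; with SQL naming use QTEMP.PGMREFS.)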

See my previous statement. Are you kidding? 81,000 objects on the system,
5,000+ of them using this one file. Eight million lines of code online, one
million lines of code active. How hard was it to recompile when only three
new programs were going to use the field? (Rhetorical.)

Adoption of SQL by large mainframe and COBOL-using organizations is
partially predicated on the fact that these numbers are SMALL to them, and
SQL helps them isolate and manage those inevitable changes.

> Personally, I
>prefer the security of format level checking - with SQL, if I forget to
>change one of my SQLRPG programs to update the new fields, I don't even get
>a warning.  I just find bad data somewhere down the road, and I'm not even
>sure why.  It isn't terribly difficult to recompile programs, so SQL isn't
>really that much easier, and it's potentially pretty dangerous.

Actually, the bad data does not happen in an SQL situation. It happens when
level checks are turned off: a native write that omits the new fields, with
no defaults enforced for the data in those fields.

When you alter a table to add a field, it must be null capable initially,
because the field is null in every existing row as it is added. Then you can
alter the field to have a default value and/or to reject nulls (once you
have populated the field).
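In SQL terms, that sequence is roughly the following. The column names are
invented, and the exact ALTER COLUMN syntax varies by release, so treat this
as a sketch:

```sql
ALTER TABLE LEASEMST ADD COLUMN TERMFEE DECIMAL(9, 2);    -- null in all existing rows

UPDATE LEASEMST SET TERMFEE = 0;                          -- populate the field first

ALTER TABLE LEASEMST ALTER COLUMN TERMFEE SET DEFAULT 0;  -- then tighten the rules
ALTER TABLE LEASEMST ALTER COLUMN TERMFEE SET NOT NULL;
```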

If you are using SQL inserts, either the data arrives uncorrupted in every
field or the write does not complete. Any fields missing from the insert
will be either null or their specified defaults.
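For example (hypothetical names again), an insert that names only the
columns it actually supplies:

```sql
-- TERMFEE is omitted from the column list, so the new row gets its
-- specified default (or null, if no default has been defined).
INSERT INTO LEASEMST (LSENUM, LESSEE)
  VALUES (1001, 'ACME CORP')
```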

>You can avoid (some) of these dangers with database triggers.  While not
>capable of handling every business rule, triggers can handle most.  And so a
>combination of SQL and trigger programs can induce a level of database
>integrity and ease of use that has a certain appeal.  In some ways, they can
>even separate the business logic from the physical database by allowing the
>triggers to do "invisible" file updates in response to application actions.

Ah. But the trigger (a non-SQL, program trigger) is a program that DOES
depend on the format of the record as a whole. The record buffers are in
there and are dependent on the added fields.

But since a specific number of triggers, usually fewer than a few thousand
<vbg>, are attached to the file, locating and recompiling them when columns
change or are added is less fretful than the previously described scenario.


===========================================================
R. Bruce Hoffman, Jr.
 -- IBM Certified Specialist - AS/400 Administrator
 -- IBM Certified Specialist - RPG IV Developer

"America is the land that fought for freedom and then
  began passing laws to get rid of it."

     - Alfred E. Neuman





This mailing list archive is Copyright 1997-2024 by midrange.com and David Gibbs as a compilation work. Use of the archive is restricted to research of a business or technical nature. Any other uses are prohibited. Full details are available on our policy page. If you have questions about this, please contact [javascript protected email address].
