It appears to me that encoding the file's buffer length on the "f" spec accomplishes the same thing as using an LF with explicitly defined fields. With the "f" spec approach you end up with fewer "file objects" and associated source members (DDS or DDL) to maintain. Under the LF approach, you end up creating a new LF every time a new field is added to a physical file.
By specifying a buffer length on the "f" spec, you're not necessarily removing the safety net provided by LVLCHK(*YES). You're retaining it for modules that you may not control.
It appears to me that you're completely safe as long as new columns are added only to the end of the table, since the offsets of the existing fields within the buffer don't change. You're making an informed choice about when to recompile, vs. being forced to recompile every time a new field is added.
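To illustrate the contrast, here is a rough sketch in free-form RPG (7.2-and-later syntax shown for readability; the file names and the 512-byte length are hypothetical):

```
       // Externally described file: the compiler embeds the record
       // format's level identifier, which LVLCHK(*YES) verifies at open.
       dcl-f custpf usage(*input);

       // Program-described file with an explicit buffer length: the
       // program sees a flat 512-byte record, so no format-level check
       // is performed, and columns added at the end of the table leave
       // the existing field offsets undisturbed.
       dcl-f custpf2 disk(512) usage(*input);
```

The two declarations are shown together only for comparison; a given program would use one or the other, mapping the flat buffer to fields with a data structure in the program-described case.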
Recompiling modules may not be the only problem. Maybe you're an ISV supporting hundreds of customers. Recompiling may force you to deploy large changes across hundreds of sites, as opposed to deploying, say, one new program that uses a new field.
Good to have a helmet. You may choose to use it while playing ice hockey, but not when you're in the swimming pool.
-Nathan
----- Original Message -----
From: CRPence <CRPbottle@xxxxxxxxx>
To: rpg400-l@xxxxxxxxxxxx
Cc:
Sent: Thursday, November 8, 2012 1:11 PM
Subject: Re: Are level checks still useful?
On 07 Nov 2012 18:17, Nathan Andelin wrote:
I'm not settled on this technique, I just put it out for thought. It
appears that you can add table columns without "agonizing" about
"recompiling the world".
It seems to me that it would be much easier to just implement logical
independence, so that only the programs that need to use the new column
would be changed and recompiled; changed to use the new LF which now
includes the new column added to the PF. All other programs would
remain unchanged, continuing to use their unchanged LF [an LF which was
never updated to include the new column, irrespective of whether the PF
was changed by ALTER TABLE, by CHGPF SRCFILE(named), or even re-created
such that the LF had also been re-created from source]. That works
because all of the LFs would have been defined with explicit fields
instead of merely inheriting the record format from the PFILE
specification, which ensures logical independence from the physical
changes.
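The distinction can be sketched in DDS (names are hypothetical; column alignment approximate):

```
     A* LF with explicit fields: it keeps its own record format and
     A* level identifier, so adding a column to CUSTPF does not
     A* invalidate programs compiled over this LF.
     A          R CUSTR                     PFILE(CUSTPF)
     A            CUSTID
     A            CUSTNAME
     A          K CUSTID

     A* LF that merely names the PFILE with no field list: it shares
     A* the PF's record format, so any format change to the PF
     A* ripples into this LF as well.
     A          R CUSTR                     PFILE(CUSTPF)
     A          K CUSTID
```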
Thus total freedom from the alluded-to "agonizing" is achieved for the
minor cost of some extra logical files to manage, while maintaining any
protection provided by level checking, and, as Charles stated, also
with the great benefit of no changes to the other programs and their
LF(s), which effectively eliminates the need to re-test those other
programs.
FWiW, with regard to the subject, my opinion is that Level Checking is
only as useful as a helmet might be for riding. In either case, no
value is provided unless a situation arises in which the offered
protection is required; anyone who can correctly predict that no such
situation will arise has no need for any assistance from a preventive.
Also FWiW, failing to ensure that a change to a record format effects
an actual change to the Level Identifier [it is just a hash] when there
has been a change [that could cause data issues] might be considered
similar to failing to ensure a helmet is properly fastened; i.e. the
best possible protection may not be afforded even though, by
appearances, the preventive seems enabled.
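Whether a given change actually altered that hash can be checked from CL, e.g. (library and file names hypothetical):

```
DSPFD FILE(MYLIB/CUSTPF) TYPE(*RCDFMT)
```

which displays each record format's level identifier; comparing the value before and after a change shows whether programs compiled over the old format would trip LVLCHK(*YES).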