What Mark said...

However, I will call out that any existing logical files defined with an
implicit format (i.e., just keys, no columns) will need to be changed to an
explicit format, with all of the currently existing PF columns listed.

The same goes for SQL indexes, if they are used by RPG programs for
record-level access. The caveat there is that adding an explicit column
list to an index changes its format, so any RPG programs using it will
have to be recompiled this one time.
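If I recall the Db2 for i syntax correctly, an SQL index can be given an
explicitly named format and column list roughly like this (hypothetical
names; verify the RCDFMT and ADD ALL COLUMNS clauses against the SQL
reference for your release before using them):

```
CREATE INDEX APPLIB.EMPIDX
  ON APPLIB.EMPPF (EMPNO)
  RCDFMT EMPIFMT
  ADD ALL COLUMNS
```

With all columns included in the index's format, subsequent additions of new
columns to the table do not invalidate programs compiled over the index.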

As you move forward, all new logical files (or SQL indexes) should be
created with an explicit format.
You'll also need to ensure that no RPG programs reference the new physical
table directly in an F-spec/D-spec.

Adding a new column to the table means adding a new LF/index that contains
the new column.
This doesn't add overhead, assuming the key is the same as an
existing LF/index.

Charles




On Wed, Nov 24, 2021 at 9:32 AM Mark Waterbury <
mark.s.waterbury@xxxxxxxxxxxxx> wrote:

Richard,

Rather than incurring the risk of missing some objects (during recompiles)
etc., it is far better to use a methodology that dates back to S/38 CPF:

#1. rename the Physical File (you were going to do that anyway)

#2. create a new Logical File with the same name as the original physical
file. Be sure to add all the fields in the existing PF to the LF
definition DDS, as well as replicating any keys, if the PF is keyed.

#3. now ensure that the new LF has the same "Record Format Level ID" as
the original PF ...
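You can compare the level IDs with DSPFD (the names here are hypothetical):

```
DSPFD FILE(APPLIB/EMPPF)    TYPE(*RCDFMT)
DSPFD FILE(APPLIB/EMPLOYEE) TYPE(*RCDFMT)
```

Both should report the same Record Format Level Identifier. If they differ,
programs compiled over the original PF will take level checks when opened
against the new LF.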

Depending on your naming conventions, say the PF was named EMPLOYEE with
some LFs over it, e.g. EMPLOYEE1, EMPLOYEE2, etc. You could create the
new LF with a name such as "EMPLOYEE0" ... you get the idea?

Next, do some local testing to ensure everything still works as expected.

Now, deploy all of that ... it could be as simple as deploying a
"script" or small CL *PGM and then running it one time on each target
system to automate the above steps. Obviously, this needs to run at night
or over a week-end etc. when no users are in the application.
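A minimal sketch of such a CL program, assuming hypothetical APPLIB and
EMPLOYEE names and that the surrogate LF's DDS is already compiled-ready in
QDDSSRC:

```
             PGM
             /* Step 1: rename the original PF out of the way        */
             RNMOBJ     OBJ(APPLIB/EMPLOYEE) OBJTYPE(*FILE) +
                          NEWOBJ(EMPPF)
             /* Step 2: create the surrogate LF under the old name,  */
             /* listing every PF field and replicating any keys      */
             CRTLF      FILE(APPLIB/EMPLOYEE) SRCFILE(APPLIB/QDDSSRC) +
                          SRCMBR(EMPLOYEE0)
             ENDPGM
```

In a real deployment you would add error monitoring (MONMSG) and verify the
format level IDs afterward, as described above.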


And then, on to "PHASE TWO" ...

Now, you are in a position to make changes to the newly renamed "base"
Table or PF ...

If it is DDS-based, you can change the DDS source and use CHGPF to apply
the changes, without having to copy all the data, etc. If it is SQL DDL
based, it is an "ALTER TABLE ...". (Also, be aware that CHGPF may require
someone to reply to an inquiry message warning of "possible data loss,"
depending on what was changed.)
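In command form, the two cases might look like this (hypothetical names;
the DDS source member is assumed to already contain the change):

```
/* DDS-based PF: apply the updated DDS in place */
CHGPF FILE(APPLIB/EMPPF) SRCFILE(APPLIB/QDDSSRC) SRCMBR(EMPPF)

/* SQL DDL table: alter it directly */
ALTER TABLE APPLIB.EMPPF ADD COLUMN HIREDATE DATE
```

In both cases the existing rows are preserved; CHGPF remaps the data into
the new format rather than requiring a copy-out/copy-back.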

Be sure to test this locally on your "test" or Q/A environment before
attempting it "live"...

You will need to ensure that any DDS-based LFs specify all the fields in
the original table before you do the CHGPF, to prevent "level checks,"
etc. See above.

Now, when you have tested everything and are ready to deploy changes, you
can just send over a "script" or CL *PGM to run one time to issue the
"ALTER TABLE" (for DDL based tables) or "CHGPF" (for DDS based PFs).

Then, you need only to make changes to any programs that need to access
any of the new or changed fields. You may want to create some new LFs for
that access to the new fields, so no programs ever depend directly on the
"base" table or PF.

One of the benefits of this approach is that you have to recompile and
re-deploy far fewer objects in total to make incremental changes to the
database design.

This was one of the main reasons for logical files and SQL views in the
first place, to provide a layer of independence from the underlying
physical file / table implementation for the using applications and the
users.

You may also need to investigate if any queries (esp. Query/400) are using
the PF or table.

Please review the above, and let me know if you have any questions, etc.

All the best,

Mark S. Waterbury

On Wednesday, November 24, 2021, 10:56:30 AM EST, Richard Schoen <
richard@xxxxxxxxxxxxxxxxx> wrote:

Right. Thus the complexity.

In my scenario you can assume V7R3 and higher.

There could be: DDL, DDS, add new fields, remove fields, resize fields,
add indexes, add logical files, the list is endless.......

And any one of these things can then cause level checks for RPG, COBOL or
CL programs.

I knew this wouldn't be a simple question, but I'm curious where to start.

And in the end the answer may be: manually rename/move the old table,
manually stage the new table and copy the data back, then recompile all
the associated objects to avoid level checks.

Which also raises the question of whether DSPPGMREF has any alternatives
for identifying potentially affected programs.
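One common approach is to dump DSPPGMREF to an outfile and query it with
SQL. The field names below come from the QADSPPGM model file as I remember
them, so verify them before relying on this:

```
DSPPGMREF PGM(APPLIB/*ALL) OUTPUT(*OUTFILE) OUTFILE(QTEMP/PGMREFS)
```

```
-- Which programs reference the file we are about to change?
SELECT WHPNAM, WHLIB
  FROM QTEMP.PGMREFS
 WHERE WHFNAM = 'EMPLOYEE'
```

Note that classic Query/400 definitions (*QRYDFN) are not programs, so they
may need to be inventoried separately.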

I'm working with an IBM i dev team that is going from single-box
development to having a dev/test machine and a production machine, and I'm
trying to make their deployment options easier.

Regards,
Richard Schoen
Web: http://www.richardschoen.net
Email: richard@xxxxxxxxxxxxxxxxx
--
This is the Midrange Systems Technical Discussion (MIDRANGE-L) mailing list
To post a message email: MIDRANGE-L@xxxxxxxxxxxxxxxxxx
To subscribe, unsubscribe, or change list options,
visit: https://lists.midrange.com/mailman/listinfo/midrange-l
or email: MIDRANGE-L-request@xxxxxxxxxxxxxxxxxx
Before posting, please take a moment to review the archives
at https://archive.midrange.com/midrange-l.

