Hi, James:

Perhaps those files that are performing badly have too many extents? What was specified for the SIZE parameter when the file was originally created, and how large is the file now? (DSPFD should tell you these things.) Also, even though you say you have one PF with 36 LFs and one with 30 LFs, those 36 LFs could all be sharing the same (or only a few) access paths, while the file with 30 LFs might have one unique access path for each LF, and so, this could account for some of the overhead.
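For example (substituting your own library and file names for MYLIB/MYPF, which are just placeholders here):

    DSPFD FILE(MYLIB/MYPF) TYPE(*ATR)   /* file attributes, including SIZE    */
    DSPFD FILE(MYLIB/MYPF) TYPE(*MBR)   /* member detail: records, data space */

The *ATR output shows the SIZE values the file was created with, and the *MBR output shows how large the member has actually grown.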

Also, for the field (company code) you are altering (widening, changing data type, etc.), does that happen to be a "key" field for the PF and/or in one or many of the LFs? If so, that can also mean the system must update the internal representation not only in the Physical File's access path, but also in each access path where that field is used as a key. Do a DSPFD for the PF and each of the LFs and look at the information about access paths and sharing.
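Something along these lines (again, placeholder names) will show the key fields of each access path and whether it is shared:

    DSPFD FILE(MYLIB/MYPF)   TYPE(*ACCPTH)
    DSPFD FILE(MYLIB/MYLF01) TYPE(*ACCPTH)

If the company code field turns up as a key in many of those access paths, that is likely where much of the update overhead is going.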

Also, looking at that output from DSPFD, do you have *IMMED access path maintenance specified? And what is the FRCRATIO? These settings can really slow things down when doing a massive file update like the one you are describing.
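If you can get exclusive use of the files during the conversion window, one thing you might try -- just a sketch, with placeholder names, so test it first -- is to defer access path maintenance and turn off forced writes before the mass update, and restore the original settings afterward:

    CHGLF FILE(MYLIB/MYLF01) MAINT(*REBLD)   /* rebuild the access path at open */
    CHGPF FILE(MYLIB/MYPF) FRCRATIO(*NONE)   /* no forced writes to disk        */

Make a note of the original MAINT and FRCRATIO values from DSPFD first, so you can put them back when you are done.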

See also the output file produced by the DSPDBR command.
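For example, to get the whole list of dependent files into a file you can query (QTEMP/DBRLIST is just an arbitrary name):

    DSPDBR FILE(MYLIB/MYPF) OUTPUT(*OUTFILE) OUTFILE(QTEMP/DBRLIST)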

Also, regarding your comment about using a "similar RPG program" -- have you written a "conversion" program that reads and updates every record in the file (e.g., to convert the data in the field(s) to the new format)? This WILL cause each and every access path to get updated, perhaps needlessly.

You could also look into the ability to use the CHGPF command with the SRCFILE(library/filename) parameter to actually perform the equivalent of SQL ALTER TABLE "under the covers" -- you can read about this in the help text for this command, or in the InfoCenter under CHGPF.
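Roughly like this, assuming your updated DDS is in member MYPF of source file MYLIB/QDDSSRC (placeholder names):

    CHGPF FILE(MYLIB/MYPF) SRCFILE(MYLIB/QDDSSRC) SRCMBR(MYPF)

The system then recreates the file in the new format, copies the data across, and keeps the dependent LFs attached for you.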
You can also make this process much faster by first removing all Logical File members (RMVM library/filename membername), then running the conversion, and then adding the LF members back again (ADDLFM) ...
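Along these lines (placeholder names again):

    RMVM FILE(MYLIB/MYLF01) MBR(MYLF01)
    /* ... run the conversion over the PF here ... */
    ADDLFM FILE(MYLIB/MYLF01) MBR(MYLF01) DTAMBRS(*ALL)

Each ADDLFM then rebuilds its access path once, in a single pass, instead of the system maintaining it record by record during the update.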

Or, you can delete all of the LFs first, then convert the PFs, then add the LFs back again, one by one, from their DDS source (with CRTLF). Of course, this might take a long time if you have millions of records and many access paths to maintain. If you add the LFs in the "correct" order, you can ensure "optimal" access path sharing. Go to www.google.com and enter "os/400 optimal access path sharing" in the search box ... several of the first few links returned should give you some good information.
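In outline (made-up names once more), that sequence would look like:

    DLTF FILE(MYLIB/MYLF01)   /* repeat for each dependent LF */
    /* ... convert the PF ... */
    CRTLF FILE(MYLIB/MYLF01) SRCFILE(MYLIB/QDDSSRC) SRCMBR(MYLF01)

Creating the LFs with the most key fields first generally gives later, simpler LFs the best chance to share an existing access path.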

Without wanting to see any of your "data", perhaps you could zip up and send me just the DDS source members for several of the PFs and all of their LFs, for example, the file with 30 LFs versus the one with 36 LFs that you mentioned, so I can recreate them on my system and do some further analysis?

Here is a really good article (from that google search):
http://www-03.ibm.com/servers/eserver/iseries/db2/pdf/Performance_DDS_SQL.pdf

Hope that helps!

Sincerely,

Mark S. Waterbury

> fu_james@xxxxxxxxxxxxxx wrote:
We are updating a field (company code) in some very large physical files.
These PFs were created with DDS, without journaling.
We use a similar RPG program (very simple, just read and update) on each of
them, and the system environment is exactly the same.
One batch job for each PF. Some jobs can update more than 5 million records
per hour, some can do 1 million
per hour, while some can only do 100 thousand per hour. At first, I thought
the reason might be the number of dependent logical files.
But then I found a PF with 36 dependent LFs is 10 times faster than a PF
with 30 dependent LFs. Any clues for me?
Thanks...

Best Regards.

James Fu

TDL Group Corp. (Tim Hortons)
Oakville, Ontario
Direct: 905-339-5798
fu_james@xxxxxxxxxxxxxx
Web: http://www.timhortons.com

