My first thought is that it isn't the number of LFs so much as the number
of distinct access paths behind those logical files — several LFs can share
a single access path, so a PF with 36 dependent LFs may actually have fewer
access paths to maintain than one with 30. My second thought is that if
anyone else is using the files while you are updating them, you might be
running into record locking issues.
Jim
original message:
We are updating a field (company code) in some very large physical files.
These PFs are created by DDS, without journaling.
We use a similar RPG program (very simple, just read and update) on each of
them, and the system environment is exactly the same. There is one batch job
per PF. Some jobs can update more than 5 million records per hour, some can
do 1 million per hour, while some can only do 100 thousand per hour.
At first I thought the reason might be the number of dependent logical
files, but then I found that a PF with 36 dependent LFs is 10 times faster
than a PF with 30 dependent LFs. Any clues for me?
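For reference, the kind of read-and-update program described above might look
roughly like the following free-format RPG sketch. The file name CUSTPF, record
format CUSTPFR, and field COMPCODE are invented for illustration; the actual
programs presumably differ only in the file and field names.

```
**free
// Hypothetical sketch of the simple read/update batch program.
// CUSTPF / CUSTPFR / COMPCODE are made-up names for illustration.
dcl-f CUSTPF usage(*update);    // *update implies input as well

read CUSTPF;                    // arrival-sequence read of the PF
dow not %eof(CUSTPF);
   COMPCODE = 'NEWCO';          // assign the new company code
   update CUSTPFR;              // rewrite the record just read; every
                                // *IMMED dependent access path is
                                // maintained as part of this update
   read CUSTPF;
enddo;

*inlr = *on;
```

Since the key field being changed may appear in some dependent LFs, each
UPDATE can force index maintenance on every access path that includes the
changed field, which is one reason throughput can vary so widely between
otherwise similar files.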
Thanks...
Best Regards.
James Fu
TDL Group Corp. (Tim Hortons)
Oakville, Ontario
Direct: 905-339-5798
fu_james@xxxxxxxxxxxxxx
Web: http://www.timhortons.com