Conceptually, I had a front-end program where I specified how many instances of the updating process I wanted, then spun off jobs with beginning and ending RRNs. I varied the number of instances on a development box and compared elapsed times to get close to the most effective number of jobs. (I did this in several applications, and in some cases I worked from a keyed input file and broke it up by key ranges.)
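
A rough sketch of that kind of front end, in free-format RPG with embedded SQL, is below. It is only an illustration of the idea: the transaction file TRANSPF, the worker program UPDTRANS, and the submitted job names are made-up placeholders, and carving the RRN range into equal slices is just one way to do the split.

**free
// Sketch only: submit N parallel workers, each owning a range of RRNs.
ctl-opt dftactgrp(*no) actgrp(*new) option(*srcstmt);

dcl-pi *n;
  nbrJobs packed(3:0) const;     // requested number of parallel instances
end-pi;

dcl-pr runCmd extpgm('QCMDEXC');
  cmd    char(3000) const;
  cmdLen packed(15:5) const;
end-pr;

dcl-s totalRcds int(10);
dcl-s slice     int(10);
dcl-s fromRrn   int(10);
dcl-s toRrn     int(10);
dcl-s i         int(10);
dcl-s cmd       varchar(3000);

// How many transactions are there?  RRNs run from 1 up to roughly this
// value; a worker simply skips any RRN it cannot read (deleted records).
exec sql select count(*) into :totalRcds from TRANSPF;

slice = %div(totalRcds : nbrJobs) + 1;

for i = 1 to nbrJobs;
  fromRrn = (i - 1) * slice + 1;
  toRrn   = i * slice;

  // Numeric literals on a submitted CALL are passed as packed(15:5),
  // so UPDTRANS must declare its two parameters accordingly.
  cmd = 'SBMJOB CMD(CALL PGM(UPDTRANS) PARM('
      + %char(fromRrn) + ' ' + %char(toRrn)
      + ')) JOB(UPDSLICE' + %char(i) + ')';
  runCmd(cmd : %len(cmd));
endfor;

*inlr = *on;

Each submitted UPDTRANS would then CHAIN by RRN from its starting value to its ending value, skipping any RRN it cannot read.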

Driving the CPU to 100% isn't bad. I ran the updating jobs at a priority between interactive and regular batch. Interactive did not see much impact, as you would expect. The rest of batch crawled, but this was acceptable to user management.
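
As an aside, one way a worker could land at that in-between priority (again just a sketch, not anything from the original system) is to change its own run priority at startup; typical interactive work runs at RUNPTY 20 and default batch at 50, so a value in the 30s splits the difference. A job description or class set up for the submitted jobs would accomplish the same thing without any code.

**free
// Sketch only: worker places itself between interactive (20) and
// default batch (50) run priority.  35 is just an illustrative value.
ctl-opt dftactgrp(*no) actgrp(*new);

dcl-pr runCmd extpgm('QCMDEXC');
  cmd    char(3000) const;
  cmdLen packed(15:5) const;
end-pr;

dcl-s cmd varchar(100);

cmd = 'CHGJOB RUNPTY(35)';
runCmd(cmd : %len(cmd));

// ... normal RRN-range processing would follow here ...
*inlr = *on;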

One thing to be aware of is that the updating program needs to be able to handle record locks created by the other parallel processes, especially if you are splitting the incoming transactions by RRN and not processing them in any particular key order. Not all existing single-threaded programs handle record locks, since they never expect to encounter them.
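
A minimal sketch of what that lock handling might look like in the updating program follows. SUMMARY, its record format SUMREC, the TOTAL field, and the retry count of 5 are all placeholders, since the actual programs were not posted. The point is the (e) extender on CHAIN: a record held by one of the other parallel jobs comes back as %status 1218 after the file's WAITRCD wait, instead of halting the program.

**free
// Sketch of lock-tolerant update logic for one summary record.
// SUMMARY / SUMREC / TOTAL are placeholder names, not from the thread.
ctl-opt nomain;

dcl-f SUMMARY usage(*update) keyed;

dcl-c RCD_LOCKED 1218;          // %status when another job holds the record

dcl-proc updateSummary export;
  dcl-pi *n ind;
    key    char(10) const;      // placeholder key
    amount packed(11:2) const;
  end-pi;

  dcl-s tries int(10);

  // chain(e) turns a lock timeout into %error/%status instead of an
  // exception; each attempt waits up to the file's WAITRCD value.
  for tries = 1 to 5;
    chain(e) key SUMREC;
    if not %error();
      leave;                    // got the record (or a clean not-found)
    endif;
    if %status(SUMMARY) <> RCD_LOCKED;
      return *off;              // some other I/O problem: let the caller decide
    endif;
  endfor;

  if %error() or not %found(SUMMARY);
    return *off;                // still locked after retries, or no such record
  endif;

  total += amount;              // TOTAL is a field in SUMREC
  update SUMREC;
  return *on;
end-proc;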

Even when using keys on the transaction file, the improvement in elapsed time was dramatic and the overhead of the initial setup was insignificant.

Sam

On 6/7/2010 8:37 AM, Bryce Martin wrote:
So, did you first get the total number of records and then split the RRN range
10 ways? Or did you just allot a certain number of records per instance
and finish when you had covered the last RRN?

Also, why not drive the CPU hard? If there is any part of the system
that can handle a consistently high load, it should be the CPU. Rev that
baby up. Idle cycles are lost cycles... can't ever get them back, so you
might as well use them. It would be interesting to see how a nightly
batched MRP run would work if this approach were taken. Wonder if we could
take it from 2 hours down to 20 minutes....


Thanks
Bryce Martin
Programmer/Analyst I
570-546-4777


"Lennon_s_j@xxxxxxxxxxx"<lennon_s_j@xxxxxxxxxxx>
Sent by: rpg400-l-bounces@xxxxxxxxxxxx
06/04/2010 06:29 PM
Please respond to
RPG programming on the IBM i / System i<rpg400-l@xxxxxxxxxxxx>


To
rpg400-l@xxxxxxxxxxxx
cc

Subject
Re: Speed in Reading


Yes, I've done that too with good results, but in my case I was reading
a very large transaction file and updating many summary files.
Splitting up the transaction file largely by RRN and processing
between 7 and 10 pieces in parallel significantly reduced the elapsed time,
but it did drive the CPU hard.

Sam

