... 69 million transactions in 5 h works out to roughly 4,000 per second (69,000,000 / 18,000 s ≈ 3,833/s); that is neither too bad nor really fast.
My first step would be to look at the resources used: is the job CPU bound or I/O bound? Let it run in unrestricted batch and check the CPU usage; it can easily be calculated from the job start and job completion messages. As a rule of thumb, the rest of the elapsed time can be assumed to be synchronous I/O.
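The rule of thumb above can be sketched in a few lines. This is a minimal illustration, not IBM i code: the timestamps and CPU seconds are hypothetical values you would read off the job start and job completion messages yourself.

```python
from datetime import datetime

def cpu_io_split(start, end, cpu_seconds):
    """Split elapsed job time into a CPU share and an (assumed) I/O share.

    Per the rule of thumb in the post: whatever part of the elapsed
    time is not CPU is treated as synchronous I/O wait.
    """
    elapsed = (end - start).total_seconds()
    cpu_pct = 100.0 * cpu_seconds / elapsed
    io_pct = 100.0 - cpu_pct
    return elapsed, cpu_pct, io_pct

# Hypothetical numbers: a 5-hour batch job that consumed 2.5 h of CPU
start = datetime(2024, 5, 1, 0, 0, 0)
end = datetime(2024, 5, 1, 5, 0, 0)
elapsed, cpu_pct, io_pct = cpu_io_split(start, end, cpu_seconds=9000.0)
print(elapsed, cpu_pct, io_pct)  # 18000.0 50.0 50.0
```

A job showing, say, 90% CPU is worth code tuning; a job that is mostly I/O wait is a better candidate for the parallel-streams approach below.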
My next step would be to look at the free resources while the job is running. This is easily done by watching % CPU usage and % disk busy during the run; WRKSYSSTS is sufficient for this.
If you have enough capacity available, 5 parallel streams will finish the work in 1 h; that is more than you would gain by tweaking the code of your loops. Segmenting the data (easily done with SQL cursors) and synchronizing your parallel jobs would be less work and could become part of a generalized solution.
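Segmenting the data for parallel streams can be as simple as carving the key range into contiguous slices, one per job. The sketch below is a hedged illustration: the key range and column name are hypothetical, and each job would open its own SQL cursor with something like `WHERE key BETWEEN seg_lo AND seg_hi` over its slice.

```python
def segment_ranges(lo, hi, streams):
    """Split the inclusive key range [lo, hi] into `streams` contiguous
    segments of near-equal size, one per parallel batch job.

    Any remainder rows are spread one-per-segment across the first
    few segments, so sizes differ by at most one key.
    """
    size, rest = divmod(hi - lo + 1, streams)
    ranges, start = [], lo
    for i in range(streams):
        end = start + size - 1 + (1 if i < rest else 0)
        ranges.append((start, end))
        start = end + 1
    return ranges

# Hypothetical: 69 million rows keyed 1..69,000,000, split across 5 jobs
for seg_lo, seg_hi in segment_ranges(1, 69_000_000, 5):
    print(seg_lo, seg_hi)
```

In practice the segment boundaries would be written to a control table (or passed as job parameters), and a small synchronizing step would wait for all streams to complete before the next batch phase runs.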
D*B
Re: Suggestion for improving loop performance
This mailing list archive is Copyright 1997-2024 by midrange.com and David Gibbs as a compilation work. Use of the archive is restricted to research of a business or technical nature. Any other uses are prohibited. Full details are available on our policy page.