


On 23-May-2011 07:44, Boris wrote:
I have a program that copies a DB2 file into an IFS stream file. It
may process millions of records. The problem is that the program
consumes high CPU (more than 90%). I removed all the processing and
left only the read from the DB2 file and the write to the IFS file.
The program stack shows the write function almost constantly, so I
assume the write function is to blame. What is the best way to
implement a solution like this?

Please do not post an entire digest, especially when posting a new message, since most of the extra data is unrelated. Just compose a "new message" with the list as addressee; there should be no reason to use "reply to" [a digest] just to get that addressee. If for some reason that is done, please trim everything in the digest not related to the message being composed before sending.

High CPU is not necessarily bad. Maximizing CPU use is generally a good thing when the work being done is efficient. In this scenario, however, the high CPU is most likely the effect of the underlying write processing efficiently performing many more write requests than necessary.

The database read will, by default, be reading "blocks" of data: although the program "reads" just one record at a time, each read is likely satisfied from a block already in memory, not from the database itself. Each "write" request in the program, however, is probably writing only one record [or worse, just one column of one record, or worst of all, just one byte of one record] to the stream file for each database record that is "read". The resolution is to do what the database is already doing, and write in blocks as well. So instead of writing [what is probably currently] tens or hundreds of bytes per write request, the program would do better to write thousands, or even better, millions of bytes per write.

Regards, Chuck


