


Thanks for the quick response.
I'm probably missing some basics here.
Here is a sample code:


char buf[2000];
FILE *out;
int i;
size_t num;

for (i = 0; i < 2000; i++)   /* was i <= 2000, which overran buf by one byte */
buf[i] = 'c';

out = fopen("fileout.txt","w");
/*out = fopen("fileout.txt","wb,type=record,lrecl=2000");*/

for (i = 0; i < 900000; i++)
{
num = fwrite(buf, 1, sizeof(buf), out);
}

fclose(out);

I removed the read function; it was populating buf.
This program uses about 90% CPU.
I also tried:
num = fwrite(buf, sizeof(buf), 1, out);
That reduced the CPU to about 85%. I then increased the size of buf to
10000, assuming it would further reduce the CPU, but it instead jumped to 94%.


How do I make the program write by records/blocks? The commented fopen()
line doesn't work; it returns NULL.

Thanks.
Boris.

-----Original Message-----
From: c400-l-bounces+bbresc512=rogers.com@xxxxxxxxxxxx
[mailto:c400-l-bounces+bbresc512=rogers.com@xxxxxxxxxxxx] On Behalf Of
c400-l-request@xxxxxxxxxxxx
Sent: 23/05/2011 13:00
To: c400-l@xxxxxxxxxxxx
Subject: C400-L Digest, Vol 9, Issue 22



Today's Topics:

1. Re: Write to IFS file - high CPU (CRPence)
2. Re: Write to IFS file - high CPU (Daransky, Peter)


----------------------------------------------------------------------

message: 1
date: Mon, 23 May 2011 09:09:32 -0700
from: CRPence <CRPbottle@xxxxxxxxx>
subject: Re: [C400-L] Write to IFS file - high CPU

On 23-May-2011 07:44 , Boris wrote:
I have a program that copies a DB2 file into an IFS stream file. It
may process millions of records. The problem is that the program
consumes high CPU (more than 90%). I removed all the processing and
only left the read from the DB2 file and write to the IFS file
functions. The program stack shows the write function almost
constantly so I assume the write function is the one to blame. What
is the best way to implement a solution like that?

Please do not post an entire digest, especially when posting a new
message, since most of the extra data is unrelated. Just compose a "new
message" with the list as addressee; there should be no reason to use
"reply to" [a digest] just to get that addressee. If for some reason that
is done, then please trim everything in the digest not related to the
message being composed and sent.

High CPU is not necessarily bad; maximizing CPU is generally a good
thing when efficiency is high. Most likely, though, this scenario is the
effect of the underlying write processing efficiently performing many more
write requests than necessary.

The database read will, by default, read "blocks" of data, while each
"write" request in the program is probably writing only one record [, or
worse just one column of data for one record, or at worst just one byte of
data for one record] to the stream file for each database record that is
"read". Although the program may "read" just one record at a time, that
read is likely served from a block of data, not from the database itself.
The resolution in that case is to do what the database is doing and write
in blocks as well. So instead of writing [what is probably currently] tens
or hundreds of bytes for each write, the program would do better to write
thousands, and even better millions, of bytes for each write.

Regards, Chuck


------------------------------

message: 2
date: Mon, 23 May 2011 18:15:44 +0200
from: "Daransky, Peter" <Peter.Daransky@xxxxxxx>
subject: Re: [C400-L] Write to IFS file - high CPU

If possible, check how the write is implemented; it is better to write a
bigger chunk of data at once than to write byte by byte (buffering).
Second, check whether the file is opened as a TEXT file or binary; in text
mode a data conversion (e.g. EBCDIC -> ASCII) can be performed before each
write.
You can also check which type the file is; IFS files are usually created
as *TYPE2 (a performance enhancement) rather than *TYPE1.
Best of all would be to start a PEX session to see what actually happens
in the job; this may not be a write() problem at all.

Regards
Peter

-----Original Message-----
From: c400-l-bounces+peter.daransky=uc4.com@xxxxxxxxxxxx
[mailto:c400-l-bounces+peter.daransky=uc4.com@xxxxxxxxxxxx] On Behalf Of
Boris
Sent: Montag, 23. Mai 2011 16:44
To: c400-l@xxxxxxxxxxxx
Subject: [C400-L] Write to IFS file - high CPU

I have a program that copies a DB2 file into an IFS stream file.
It may process millions of records.
The problem is that the program consumes high CPU (more than 90%). I removed
all the processing and only left the read from the DB2 file and write to the
IFS file functions. The program stack shows the write function almost
constantly so I assume the write function is the one to blame.
What is the best way to implement a solution like that?

Thanks
Boris.




------------------------------

