No. 117 bytes is the source file record size (the one accessed by record I/O).
The output file "record" size is 142.
As you could probably see, my program (and also Boris' program) was reading a DB2 table and writing a stream file to the IFS.
I do not know for sure, but I assumed that Boris was also reading the DB2 file using record I/O...
-----Original Message-----
From: c400-l-bounces+jevgeni.astanovski=sampopank.ee@xxxxxxxxxxxx [mailto:c400-l-bounces+jevgeni.astanovski=sampopank.ee@xxxxxxxxxxxx] On Behalf Of Daransky, Peter
Sent: 26. mai 2011. a. 12:52
To: C programming iSeries / AS400
Subject: Re: [C400-L] Write to IFS file - high CPU
Hi Jevgeni,
hm, when you say your record size is 117 bytes (I assume this is the buffer size that you write to the IFS file) ...
I would expect the STDC functions (fopen, fwrite) to be much faster because of the internal buffering (I am talking about the write case).
As mentioned before, STDC uses an internal buffer, which means fewer write accesses to DASD in this case (unless you used fflush or another sync method). You can configure the buffering with the setbuf and setvbuf functions.
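Not part of the original mail; a minimal sketch of the setvbuf point above, assuming a hypothetical file path and borrowing the 117-byte record size from the thread:

```c
#include <stdio.h>

/* Sketch: enlarge the stdio buffer so fwrite() batches many small
 * records into one physical write.  The path and buffer size are
 * illustrative, not from the original post. */
int write_records_buffered(const char *path, int nrec)
{
    FILE *fp = fopen(path, "wb");
    if (fp == NULL)
        return -1;

    /* Full buffering with a 64 KB buffer instead of the default;
     * setvbuf must be called before the first I/O on the stream. */
    if (setvbuf(fp, NULL, _IOFBF, 64 * 1024) != 0) {
        fclose(fp);
        return -1;
    }

    char rec[117];                          /* 117-byte record, as in the test */
    for (int i = 0; i < (int)sizeof rec; i++)
        rec[i] = 'A' + i % 26;

    for (int i = 0; i < nrec; i++) {
        if (fwrite(rec, sizeof rec, 1, fp) != 1) {
            fclose(fp);
            return -1;
        }
    }
    return fclose(fp);                      /* flushes the remaining buffer */
}
```

With full buffering, roughly 560 of the 117-byte records fit in one 64 KB buffer, so only every ~560th fwrite() results in a physical write.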
I've only tested write access, not read.
I don't want to say that the STDC APIs are bad in general and POSIX is better ... The main difference between these APIs is that the POSIX APIs are system APIs: you manipulate the system structures directly, with nothing in between. The STDC functions, by contrast, are a user-friendly form of the APIs; they can also manipulate system structures, but they do some things for you automatically. You can reproduce this write behaviour on any other platform as well. I mean that for small buffers, fwrite is better than write.
peter
-----Original Message-----
From: c400-l-bounces+peter.daransky=uc4.com@xxxxxxxxxxxx [mailto:c400-l-bounces+peter.daransky=uc4.com@xxxxxxxxxxxx] On Behalf Of Jevgeni Astanovski
Sent: Donnerstag, 26. Mai 2011 09:06
To: 'C programming iSeries / AS400'
Subject: Re: [C400-L] Write to IFS file - high CPU
Hi, Boris and Peter.
As the question discussed is absolutely practical for me, I decided to test which is better - my current version with fopen/fputs/fclose, or the Unix style with open/write/close.
What did I do?
I simply changed my program from stream handles to file descriptors, that is:
fopen->open
fputs->write
fclose->close
All processing - source records reading and output records generation, code page conversion was left "as was".
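The substitution above can be sketched as follows (not the original program; the file names and record contents are illustrative, and the DB2 reading and code page conversion are omitted):

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* The same output loop written once with stream handles
 * (fopen/fputs/fclose) and once with file descriptors
 * (open/write/close).  Both return the number of bytes written,
 * or -1 on error. */

long write_stdio(const char *path, const char *line, int nrec)
{
    FILE *fp = fopen(path, "w");
    if (fp == NULL)
        return -1;
    for (int i = 0; i < nrec; i++)
        if (fputs(line, fp) == EOF) {       /* buffered in user space */
            fclose(fp);
            return -1;
        }
    return fclose(fp) == 0 ? (long)strlen(line) * nrec : -1;
}

long write_posix(const char *path, const char *line, int nrec)
{
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    size_t len = strlen(line);
    if (fd < 0)
        return -1;
    for (int i = 0; i < nrec; i++)
        if (write(fd, line, len) != (ssize_t)len) {  /* one call per record */
            close(fd);
            return -1;
        }
    return close(fd) == 0 ? (long)len * nrec : -1;
}
```

The descriptor version issues one write() per record, while the stream version collects records in the stdio buffer, which is one plausible explanation for the timing difference below.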
Source file: 12M records, record size 117 bytes. Resulting file size: 1.6 GB.
Both programs were submitted to batch on our test box this morning. Timing was retrieved from the system log (DSPLOG).
I made 2 cycles.
First cycle:
My original program:
Started 9:15:01, ended 9:17:55, 98.270 seconds used
Program version that uses Unix-style APIs:
Started 9:40:52, ended 9:44:04, 128.497 seconds used
Second cycle:
My original program:
Started 9:54:49, ended 9:57:22, 98.822 seconds used
Program version that uses Unix-style APIs:
Started 9:50:58, ended 9:54:10, 129.444 seconds used
CPU averaged 32-33% for both jobs; I could not tell any difference there.
From this I can conclude that using stream handles is noticeably (ca 30%) faster than using file descriptors, though I am not enough of a specialist at that level to say why...
Additionally, of course, the results apply to our particular machine.
And there was no competing workload - I've been experimenting with a program that runs every night in the live environment - so these seconds really matter to us.
Jevgeni.