Hi, Boris
I've recently had the same problem - exporting a 12,000,000-record table to a CSV file in the IFS. It was a daily process and took about 50 minutes - just a plain CPYTOIMPF. All my attempts to tune block sizes achieved nothing. It consumed 50% CPU on our 2-core (I think) box.
Then I wrote a dedicated program to export the table in exactly the same format as CPYTOIMPF did. It now takes 5 minutes, and it also consumes 50% CPU - that is, if I'm not mistaken, the same "close to 100%" you are worrying about.
So to put your mind at ease: try doing the same export with CPYTOIMPF, then switch back and be happy with your program, as you will probably also see the difference.
Here's what my program does:
/*---------------------------------------------------------*/
/* Must be compiled with options */
/* SYSIFCOPT(*IFSIO *IFS64IO) */
/*---------------------------------------------------------*/
#include <stdio.h>
#include <stdlib.h>
#include <recio.h>
#include <string.h>
#include <decimal.h>
#include <ctype.h>
#include <iconv.h>
My first version had no CCSID conversion, but adding it (an additional 12 million calls to iconv) had almost no effect on the timing.
The only difference I see is that you use "fwrite" and I use "puts".
Try switching - maybe it will have some effect.
But once again, if the timing is OK, I don't see why you should care about the CPU load...
HTH,
Jevgeni.
-----Original Message-----
From: c400-l-bounces+jevgeni.astanovski=sampopank.ee@xxxxxxxxxxxx [mailto:c400-l-bounces+jevgeni.astanovski=sampopank.ee@xxxxxxxxxxxx] On Behalf Of Boris
Sent: 24 May 2011 06:21
To: c400-l@xxxxxxxxxxxx
Subject: Re: [C400-L] C400-L Digest, Vol 9, Issue 22
Thanks for the quick response.
I'm probably missing some basics here.
Here is a sample code:
char buf[2000];
size_t num;
int i;
FILE *out;

for (i = 0; i < sizeof(buf); i++)
    buf[i] = 'c';
out = fopen("fileout.txt", "w");
/* out = fopen("fileout.txt", "wb,type=record,lrecl=2000"); */
for (i = 0; i < 900000; i++)
{
    num = fwrite(buf, 1, sizeof(buf), out);
}
fclose(out);
I removed the read function that was populating buf.
This program takes about 90% CPU.
I also tried:
num = fwrite (buf, sizeof(buf), 1, out );
It reduced the CPU to about 85%. Then I increased the size of buf to
10000, assuming it would further reduce the CPU, but it instead jumped to 94%.
How do I make the program write by records/blocks? The commented-out line
doesn't work - fopen returns NULL.
Thanks.
Boris.