But more importantly, is there anything I can do, short of multiple
files, to help the program process the file faster?
You could create a data area holding the number of records to process
per run and the starting record. Say that was 30,000 and 1 initially.
Run the program. When 30,000 records have been processed, update the
starting place in the data area to 30,001 and end the program. Be sure
to set on LR and do all the cleanup.
The next execution picks up at 30,001 and runs through 60,000.
Sounds hokey, I know. But if the program is doing everything you want
and just hits a wall at 35,000 records or so, then stopping before you
hit the wall and letting the OS clean up is the fastest way to fix it.
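In case it helps to see the shape of it, here is a minimal sketch of the
same restart idea in Java. I'm using a plain text file as a stand-in for
the data area (from RPG you would just IN/OUT the *DTAARA itself), and
the chunk size, file name, and class name are only assumptions for
illustration:

    import java.io.*;

    public class CheckpointedConvert {
        private static final int CHUNK = 30000;   // records to process per run
        private static final File CKPT = new File("/tmp/cvtxls.ckpt");

        public static void main(String[] args) throws IOException {
            int start = readCheckpoint();          // 1 on the first run
            int stop  = start + CHUNK - 1;

            for (int rec = start; rec <= stop; rec++) {
                // process record 'rec' here (read the row, write the cells, etc.)
            }

            writeCheckpoint(stop + 1);             // next run resumes here
            // then end the program normally so the JVM releases everything
        }

        private static int readCheckpoint() throws IOException {
            if (!CKPT.exists()) return 1;
            BufferedReader r = new BufferedReader(new FileReader(CKPT));
            try { return Integer.parseInt(r.readLine().trim()); }
            finally { r.close(); }
        }

        private static void writeCheckpoint(int next) throws IOException {
            PrintWriter w = new PrintWriter(new FileWriter(CKPT));
            try { w.println(next); } finally { w.close(); }
        }
    }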
The more elegant ways to fix this will involve Java programming.
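For example, later POI releases added a streaming workbook class,
SXSSFWorkbook, that keeps only a small window of rows in memory and
flushes the rest to temporary files. It writes .xlsx rather than .xls,
and whether it is available depends on the POI version you can install,
so treat this as a sketch rather than a drop-in fix:

    import java.io.FileOutputStream;
    import org.apache.poi.ss.usermodel.*;
    import org.apache.poi.xssf.streaming.SXSSFWorkbook;

    public class StreamingExport {
        public static void main(String[] args) throws Exception {
            // keep only 100 rows in memory; older rows are flushed to temp files
            SXSSFWorkbook wb = new SXSSFWorkbook(100);
            Sheet sheet = wb.createSheet("DATA1");

            for (int r = 0; r < 466000; r++) {         // record count from the post
                Row row = sheet.createRow(r);
                for (int c = 0; c < 113; c++) {         // 113 columns from the post
                    row.createCell(c).setCellValue(r + c);   // dummy numeric data
                }
            }

            FileOutputStream out = new FileOutputStream("/tmp/export.xlsx");
            wb.write(out);
            out.close();
            wb.dispose();   // remove the temporary files SXSSF created
        }
    }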
-----Original Message-----
From: java400-l-bounces@xxxxxxxxxxxx
[mailto:java400-l-bounces@xxxxxxxxxxxx] On Behalf Of Marvin Radding
Sent: Monday, October 20, 2008 11:21 AM
To: java400-l@xxxxxxxxxxxx
Subject: A question about JAVA JVM and large EXCEL spreadsheet creation
I have created a command (CVTXLS) to convert a file into an EXCEL
spreadsheet using the HSSF/POI classes from the Jakarta project. (Thanks
to Scott Klement for his documentation.)
I am using this command to convert a 466,000-record file into a
spreadsheet. It has 113 columns of mostly numeric data. The code is
already able to break every 65k records and start a new tab.
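For context, the sheet-break logic is roughly along these lines
(simplified, with made-up names rather than the actual CVTXLS code):

    import java.io.FileOutputStream;
    import org.apache.poi.hssf.usermodel.*;

    public class Cvt65kSketch {
        private static final int MAX_ROWS = 65536;   // .xls row limit per sheet

        public static void main(String[] args) throws Exception {
            HSSFWorkbook wb = new HSSFWorkbook();
            HSSFSheet sheet = null;
            for (int rec = 0; rec < 466000; rec++) {
                int rowInSheet = rec % MAX_ROWS;
                if (rowInSheet == 0) {               // start a new tab every 65k rows
                    sheet = wb.createSheet("DATA" + (rec / MAX_ROWS + 1));
                }
                HSSFRow row = sheet.createRow(rowInSheet);
                for (int col = 0; col < 113; col++) {
                    row.createCell(col).setCellValue(rec + col);   // dummy numeric data
                }
            }
            FileOutputStream out = new FileOutputStream("/tmp/cvtxls.xls");
            wb.write(out);
            out.close();
        }
    }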
The problem is that after processing the first 35k records quite
quickly, it is now working very slowly, and I was wondering why. Is this
a memory allocation problem? Can I speed things up by saving the file
every now and then?
Can anyone tell me why it suddenly went from 30k records in the first
hour to only a few hundred records in the next hour? But more
importantly, is there anything I can do, short of multiple files, to
help the program process the file faster?
Thanks,
Marvin