Neill:
Duhh... you are right! I suppose I was thinking along the lines of
simply translating the file's fields into spreadsheet cells.
If the tabs don't cross-reference each other, and given that
> The code already is able to break every 65k records and
> start a new tab.
could the "tab object" be deleted after each tab is written?
Nope; as soon as I wrote that, I saw that performance slows down halfway
through a tab, so it wouldn't do any good.
Oh well.
-----Original Message-----
From: java400-l-bounces@xxxxxxxxxxxx
[mailto:java400-l-bounces@xxxxxxxxxxxx] On Behalf Of Neill Harper
Sent: Monday, October 20, 2008 12:50 PM
To: 'Java Programming on and around the iSeries / AS400'
Subject: RE: A question about JAVA JVM and large EXCEL spreadsheet creation
Is this being done via JNI, or with a client/server approach? I ask because,
with JNI, if there is even one object being created inside a loop that is not
being destroyed manually (by either freeLocalRef or an object group), then
you have a memory leak.
Also, for such a big task (i.e. 466,000 records) garbage collection isn't
really going to help you anyway, because every cell object belongs to a row,
every row to a sheet, and every sheet to a workbook. As long as you have a
reference to your workbook object (which you need if you're going to save the
spreadsheet), you have a reference to all of the other objects (cells, rows,
etc.) as well, so GC can't delete them.
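To illustrate the point with a minimal sketch (not your actual code; this
assumes the plain HSSF usermodel and a POI version where createCell takes an
int index): every cell created in the loop below stays reachable from the
workbook, so the garbage collector cannot reclaim any of it while the
workbook is still referenced.

import org.apache.poi.hssf.usermodel.HSSFCell;
import org.apache.poi.hssf.usermodel.HSSFRow;
import org.apache.poi.hssf.usermodel.HSSFSheet;
import org.apache.poi.hssf.usermodel.HSSFWorkbook;
import java.io.FileOutputStream;

public class ReferenceChainDemo {
    public static void main(String[] args) throws Exception {
        HSSFWorkbook wb = new HSSFWorkbook();         // root of the object graph
        HSSFSheet sheet = wb.createSheet("Data");     // referenced by wb
        for (int r = 0; r < 65000; r++) {
            HSSFRow row = sheet.createRow(r);         // referenced by sheet
            for (int c = 0; c < 113; c++) {
                HSSFCell cell = row.createCell(c);    // referenced by row
                cell.setCellValue(r + c);             // stays live via wb -> sheet -> row
            }
        }
        // None of the rows or cells above are eligible for GC here, because
        // they are all still reachable through wb, which we need for write().
        FileOutputStream out = new FileOutputStream("/tmp/demo.xls");
        wb.write(out);
        out.close();
    }
}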
Neill
-----Original Message-----
From: java400-l-bounces@xxxxxxxxxxxx
[mailto:java400-l-bounces@xxxxxxxxxxxx]
On Behalf Of Blalock, Bill
Sent: 20 October 2008 18:09
To: Java Programming on and around the iSeries / AS400
Subject: RE: A question about JAVA JVM and large EXCEL spreadsheet creation
> But more importantly, is there anything I can do short of multiple
> files, to help the program process the file faster?
You could create a data area holding the number of records to process and
the starting place. Say those were 30,000 and 1 initially.
Run the program. When 30,000 records have been processed, update the data
area's starting place to 30,001 and end the program. Be sure to set on LR and
do all the cleanup.
The next execution picks up at 30,001 and runs through 60,000.
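In rough Java form the idea looks like this; the checkpoint file, chunk size,
and processRecords() are invented stand-ins for illustration, and the real
program would read and update the data area instead:

import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.util.Properties;

public class CheckpointDemo {
    static final int CHUNK = 30000;                          // records per run (assumed)
    static final File CKPT = new File("/tmp/cvtxls.ckpt");   // stand-in for the data area

    public static void main(String[] args) throws Exception {
        // Read the starting record number left behind by the previous run.
        Properties p = new Properties();
        if (CKPT.exists()) {
            FileInputStream in = new FileInputStream(CKPT);
            p.load(in);
            in.close();
        }
        int start = Integer.parseInt(p.getProperty("start", "1"));
        int end = start + CHUNK - 1;

        processRecords(start, end);   // hypothetical: convert records start..end

        // Remember where the next execution should pick up, then let the job
        // end so the OS reclaims everything the JVM was holding.
        p.setProperty("start", String.valueOf(end + 1));
        FileOutputStream out = new FileOutputStream(CKPT);
        p.store(out, "CVTXLS checkpoint");
        out.close();
    }

    static void processRecords(int start, int end) {
        // placeholder for the actual file-to-spreadsheet conversion
    }
}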
Sounds hokey, I know. But if the program is doing everything you want and
just hits a wall at 35,000 records or so, then stopping before you hit the
wall and letting the OS clean up is the fastest way to fix it.
The more elegant ways to fix this will involve Java programming.
-----Original Message-----
From: java400-l-bounces@xxxxxxxxxxxx
[mailto:java400-l-bounces@xxxxxxxxxxxx] On Behalf Of Marvin Radding
Sent: Monday, October 20, 2008 11:21 AM
To: java400-l@xxxxxxxxxxxx
Subject: A question about JAVA JVM and large EXCEL spreadsheet creation
I have created a command (CVTXLS) to convert a file into an Excel
spreadsheet using the HSSF/POI classes from the Jakarta project. (Thanks to
Scott Klement for his documentation.)
I am using this command to convert a 466,000-record file into a
spreadsheet. It has 113 columns of mostly numeric data. The code is already
able to break after every 65k records and start a new tab.
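Roughly, the break logic looks like this (a simplified sketch, not the actual
code; sheet names and the cell-filling step are placeholders, and 65,536 is
the .xls per-sheet row limit):

import org.apache.poi.hssf.usermodel.HSSFRow;
import org.apache.poi.hssf.usermodel.HSSFSheet;
import org.apache.poi.hssf.usermodel.HSSFWorkbook;

public class SheetBreakDemo {
    static final int MAX_ROWS = 65536;   // hard limit per sheet in the .xls format

    public static void main(String[] args) {
        HSSFWorkbook wb = new HSSFWorkbook();
        HSSFSheet sheet = wb.createSheet("Data1");
        int sheetNo = 1;
        int rowInSheet = 0;
        for (int rec = 0; rec < 466000; rec++) {      // one pass over the records
            if (rowInSheet == MAX_ROWS) {             // sheet is full: start a new tab
                sheetNo++;
                sheet = wb.createSheet("Data" + sheetNo);
                rowInSheet = 0;
            }
            HSSFRow row = sheet.createRow(rowInSheet++);
            // ... populate the 113 columns for this record here ...
        }
    }
}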
The problem is that after processing the first 35k records quite quickly, it
is now running very slowly, and I am wondering why. Is this a memory
allocation problem? Can I speed things up by saving the file every now and
then?
Can anyone tell me why it suddenly went from 30k records in the first hour to
only a few hundred records in the next hour? But more importantly, is there
anything I can do, short of multiple files, to help the program process the
file faster?
Thanks,
Marvin