Are you using streaming mode?

I have these notes from another contributor ...

Below are the pieces of code that were added to the HSSF_H source member.

******************************************************************
* SXSSF (Streaming Usermodel API) Data Types
******************************************************************
dcl-c SXSSF_Workbook_Class 'org.apache.poi.xssf.streaming.SXSSFWorkbook';

dcl-s SXSSFWorkbook object(*java: SXSSF_Workbook_Class);


*+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
* SXSSF_dispose
*
* Dispose of temporary files backing this workbook on disk.
* Calling this method will render the workbook unusable.
*
* Returns TRUE if all temporary files were deleted successfully
*+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
dcl-pr SXSSF_dispose ind extproc(*java
                       : SXSSF_Workbook_Class
                       : 'dispose');
end-pr;


Added the following prototype after the new_XSSFWorkbook prototype:

D new_SXSSFWorkbook...
D                 PR              O   class(*java
D                                     : SXSSF_Workbook_Class)
D                                     extproc(*java
D                                     : SXSSF_Workbook_Class
D                                     : *constructor)


Then, in the program, the call to new_XSSFWorkbook was changed to
new_SXSSFWorkbook, as shown below.

// Create a new workbook.
wkBook = new_SXSSFWorkbook();

Then after the SS_save I added the call to SXSSF_dispose, as shown below.

// Save the workbook to the IFS.
SS_save( wkBook: wkFilename );
SXSSF_dispose( wkBook );

That was all that needed to change in the program to allow the streaming
save of the Excel file.
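
Putting those pieces together, a minimal sketch of the changed flow (the
declarations here are only for illustration; the field names and sizes
are assumptions):

// Sketch only: the streaming workbook is created, filled as usual,
// saved with SS_save, then its temporary backing files are disposed of.
dcl-s wkBook     object(*java: SXSSF_Workbook_Class);
dcl-s wkFilename varchar(200) inz('/tmp/report.xlsx'); // assumed name

wkBook = new_SXSSFWorkbook();      // streaming instead of XSSF

// ... create sheets, rows and cells exactly as before ...

SS_save( wkBook: wkFilename );     // save the workbook to the IFS
SXSSF_dispose( wkBook );           // delete the temp files backing it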






Don Brown
Director

MSD Information Technology
t: 07 3368 7888
m: 0408 751 644
e: dbrown@xxxxxxxxxx
www.msd.net.au



From: "Jorge Ubeda" <jubeda@xxxxxxxxx>
To: RPG400-L@xxxxxxxxxxxxxxxxxx
Date: 26/08/2021 09:10 PM
Subject: subject: Re: Worksheet failing when reaching a large
number of rows-Using SXSSFWorkbook Hello Jorge, I think POI 3.9 should
work as I believe the streaming was added to 3.8 beta 3. However if it is
failing I would suggest installing the 3.14 version and adjusting your
classpath to test with the later version. We have successfully created
spreadsheets with close to a million records maybe even over the million,
the spreadhseet was fine. But we did then have a problem emailing the
spreadsheet as it was too big. Hope this helps. Thanks Don
Sent by: "RPG400-L" <rpg400-l-bounces@xxxxxxxxxxxxxxxxxx>



Since the last message we have done some work, and things are working a
little better.
As Don Brown suggested, we upgraded POI to version 3.14 and changed some
API calls. Previously we were not closing the XSSF workbook that serves as
the template for our SXSSFWorkbook object. Running our function in a loop,
we found that closing that underlying workbook was required; now we can
execute our functions in a repeated cycle without problems.
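
A hedged sketch of what that construction and cleanup can look like from
RPG; the wrapper names new_SXSSFWorkbook_fromTemplate and Workbook_close,
and the variable templateBook, are illustrative assumptions (they would
map to POI's SXSSFWorkbook(XSSFWorkbook) constructor and Workbook.close()):

// Sketch only: build the streaming workbook over an XSSF template,
// then close the template so repeated cycles do not leak resources.
dcl-pr new_SXSSFWorkbook_fromTemplate object(*java: SXSSF_Workbook_Class)
         extproc(*java: SXSSF_Workbook_Class: *constructor);
  template object(*java: 'org.apache.poi.xssf.usermodel.XSSFWorkbook');
end-pr;

dcl-pr Workbook_close extproc(*java
         : 'org.apache.poi.ss.usermodel.Workbook'
         : 'close');
end-pr;

ExcelSXSSFBook = new_SXSSFWorkbook_fromTemplate(templateBook);
// ... fill rows and write as shown below ...
Workbook_close(templateBook); // instance is passed as the first argument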
But it fails when we reach about 200,000 rows, far short of what Don
reports (a million records).
We found that the error is produced when the Excel file is written, as if
the temporary files had exhausted the JVM's space; in fact, it throws an
out-of-memory error. Our code for this is:
// &(1:) holds the target file name with its full path (see below)
SXSSFStr = new_String(%trimr(&(1:)));

// Open a stream on the IFS file and write the workbook to it
SXSSFile = new_FileOutputStream(SXSSFStr);
SXSSF_write(ExcelSXSSFBook:SXSSFile);
FileOutputStream_close(SXSSFile);

// Free the local JNI references so they do not accumulate
hssf_freeLocalRef(SXSSFile);
hssf_freeLocalRef(SXSSFStr);

and SXSSF_write is defined this way:
*-----------------------------------------------------------
D SXSSF_write     PR                  EXTPROC(*JAVA
D                                     :SXSSF_WORKBOOK_CLASS
D                                     :'write')
D  tgtExcelFile                       like(jOutputStream)

At the end, we first dispose of the temporary files and then close the
original template.
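
A sketch of that teardown order, reusing the illustrative wrapper names
from above:

// Sketch only: dispose of the SXSSF temp files first, then close
// the original template workbook (wrapper names are assumptions).
SXSSF_dispose(ExcelSXSSFBook);
Workbook_close(templateBook);
hssf_freeLocalRef(ExcelSXSSFBook);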

&(1:) is a parameter passed by our API from CA Plex, in this case a string
containing the file name and its full path.
All our programs are designed and generated by the CA Plex generator.

For our Java execution we declare an environment for the starting job: a
CL program declares the required environment variables, basically the
classpath and the Java home. However, while the job runs, a complete list
of PASE variables is declared; I suppose these are defined when the system
starts Java. As far as I can see in IBM's Java manuals, those definitions
do not interfere with our job.
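
For context, a sketch of the kind of CL setup meant here; the paths and
jar names are only placeholders, not our actual values:

/* Placeholder values; POI also needs its dependent jars on the path */
ADDENVVAR ENVVAR(JAVA_HOME) +
          VALUE('/QOpenSys/QIBM/ProdData/JavaVM/jdk80/64bit') +
          REPLACE(*YES)
ADDENVVAR ENVVAR(CLASSPATH) +
          VALUE('/java/poi/poi-3.14.jar:/java/poi/poi-ooxml-3.14.jar') +
          REPLACE(*YES)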

In sum, our definitions seem to work, but we get an out-of-memory error
when a certain number of rows is reached. The error is produced when
calling the write method.

We have some questions:
I have seen some developers (on Stack Overflow) calling the flush() method
to force a write when the row window limit is reached. Does it write to
the Excel file or to a temporary file? If it is the latter, the problem
does not change.
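
In case we test it, a hedged sketch of a prototype for POI's
SXSSFSheet.flushRows(); the wrapper name and the sheet variable are
assumptions, and as I read the POI javadoc this method flushes rows to
the temporary backing file, not to the final .xlsx:

// Sketch only: SXSSFSheet.flushRows() pushes rows that have left the
// in-memory window out to the temporary backing file on disk.
dcl-c SXSSF_Sheet_Class 'org.apache.poi.xssf.streaming.SXSSFSheet';

dcl-pr SXSSF_flushRows extproc(*java: SXSSF_Sheet_Class: 'flushRows');
end-pr;

SXSSF_flushRows(mySheet); // mySheet: an assumed SXSSFSheet reference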
A colleague asked whether we are using 64-bit Java. As far as I can see,
yes, we are; that is what I see when the dump executes.
He also says:


The problem is how to ensure your jobs running ILE RPG IV and using "POI"
(via that JNI interface) will use the 64-bit version. The 64-bit JVM gives
you the possibility to have a much larger "address space" for the Java
Virtual Machine, and thus a much larger heap size. This will ultimately
impact the application for things like a very large spreadsheet, if you
run out of heap space.

which suggests two questions:
1. We are starting a specific version of Java, and we can see that it is
the Java version in use when we look at our job messages and at the dump
data. Could JNI with POI start a different Java?
2. If we are indeed using 64-bit Java 8, how can we improve its memory
management?
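
On question 2, one avenue (a sketch, assuming the JVM is started by RPG's
*JAVA support, as our dumps suggest): as I understand the ILE RPG
documentation, RPG reads the QIBM_RPG_JAVA_PROPERTIES environment variable
when it starts the JVM, so heap options such as -Xmx can be passed there.
It must be set before the first Java call in the job, and each option must
end with the separator character:

/* Sketch only: heap sizes are placeholders, not recommendations */
ADDENVVAR ENVVAR(QIBM_RPG_JAVA_PROPERTIES) +
          VALUE('-Xms256m;-Xmx2048m;') +
          REPLACE(*YES)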

Any help would be welcome
With regards
Jorge Ubeda
