I've found that the fastest way to insert a whole table's worth of rows is to use a bulk insert.

Which sounds like what you're doing already -- except that I'd use CPYTOIMPF to generate the file (not Client Access) and I'd use JDBCR4 to run the BULK INSERT SQL statement.

JDBC and this type of file access aren't really meant as a way to replace entire tables at once. They work great for individual queries or record-level access, but mass updates of all the data? You're much better off using BULK INSERT.
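
For reference, here's a minimal sketch of the kind of BULK INSERT statement I mean. The table name, file path, and delimiters are placeholders; they'd have to match whatever your CPYTOIMPF step actually produces:

    -- minimal sketch; dbo.MyTable and the file path are hypothetical
    BULK INSERT dbo.MyTable
    FROM 'C:\transfer\mytable.csv'
    WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');

Since JDBCR4 just sends SQL statements to the server, running a statement like this is no different from running any other statement.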

If you do decide to use JDBC to insert row by row (though I'm not sure why you would) then I would consider doing something like:

Insert Into MYTABLE (Col1, Col2)
VALUES (Row1Val1, Row1Val2), (Row2Val1, Row2Val2), (Row3Val1, Row3Val2), etc.

By putting multiple rows on a single INSERT, I would expect it to be faster, but I haven't really benchmarked it. A parameterized version is sketched below.
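
With parameter markers, it might look something like this (MYTABLE, the columns, and the three-rows-per-execute batch size are just for illustration):

    -- sketch only; prepare once, re-execute with new values each batch
    Insert Into MYTABLE (Col1, Col2)
    VALUES (?, ?), (?, ?), (?, ?)

You'd prepare that once, then bind six values and execute it for each batch of three rows, so each execution costs one round trip to the server instead of three.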

-SK





On 2/25/2013 10:11 AM, Anderson, Kurt wrote:
At IBM i 7.1, we've been using Scott Klement's JDBCR4 for some time now. It's been great, although we haven't been doing inserts the way the JDBCR4 presentation shows. Instead, we send the SQL statement to SQL Server, which then uses a Linked Server to select the records to insert. But we're finding limitations with that approach when inserting millions of records: every so often we get a "Connection Reset" message. I'm told that the Linked Server can't reliably handle such a large volume of records, so we're looking at other options.
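
Roughly speaking, that statement looks like the sketch below (all of the names are placeholders; MYIBMI stands for whatever the Linked Server definition pointing at the IBM i is called):

    -- sketch only; table, column, and linked-server names are made up
    INSERT INTO dbo.TargetTable (Col1, Col2)
    SELECT Col1, Col2
    FROM OPENQUERY(MYIBMI, 'SELECT Col1, Col2 FROM MYLIB.MYTABLE');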

One option is to instead have the RPG program use JDBCR4 to insert directly into the SQL Server database. From my understanding based on the presentation, this method inserts one record at a time instead of writing a block of records. I ran a test insert and it took about 5 minutes to insert 10,000 records, which is way too slow for our purposes. The test was block-reading the source file, but it was preparing and executing the insert statement for every record read.
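
In other words, the test was preparing and executing the usual single-row statement, something like this sketch (table and column names made up), once per record:

    Insert Into MYTABLE (Col1, Col2)
    VALUES (?, ?)

So every record costs a separate execution and a separate trip to the server.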

Presentation: http://www.scottklement.com/presentations/External%20Databases%20from%20RPG.pdf
See pg 25 for the Prepared Statement Insert

Another suggestion was using Client Access to create a file for SQL Server to load with a bulk insert.
http://blog.stevienova.com/2009/05/20/etl-method-fastest-way-to-get-data-from-db2-to-microsoft-sql-server/

We also tried using CPYTOIMPF and then FTPing the file to a location for SQL Server to load with a bulk insert. The bulk insert itself only took a minute for 2 million records; the CPYTOIMPF and FTP steps took about 10 minutes. That speed was great compared both to the test I mentioned above (inserting directly into the database from RPG) and to the Linked Server statement method we have been using (when it worked).
I was hoping to use JDBCR4 to perform the insert so there would be fewer steps involved in the process. It's not necessary, but I thought I might draw upon others' experience.

Thanks,

Kurt Anderson
Sr. Programmer/Analyst
CustomCall Data Systems, a division of Enghouse Systems Ltd.

