I am assuming this is something that occurs on a regular basis? If you're
talking millions of records, wouldn't the best way be to use the IFS: write
the records directly to a text file, map a network drive, and have bulk copy
or SSIS read the text file directly? If you are talking huge numbers of
records, I don't see how JDBC is ever going to be fast enough.
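For the text-file side of that suggestion, a minimal sketch in Java (the IFS path, field layout, and quoting rules are hypothetical; CPYTOIMPF or RPG could produce the same file):

```java
import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class IfsExportSketch {
    // Join one record's fields into a delimited line; quote any field
    // that contains the delimiter so bcp/SSIS can parse it.
    static String toCsvLine(String... fields) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < fields.length; i++) {
            if (i > 0) sb.append(',');
            String f = fields[i];
            sb.append(f.contains(",") ? "\"" + f + "\"" : f);
        }
        return sb.toString();
    }

    // Stream records to a text file (e.g. an IFS path such as
    // /home/export/cdr.csv -- hypothetical) for bulk copy or SSIS
    // to pick up over the mapped network drive.
    static void export(Path file, List<String[]> records) throws IOException {
        try (BufferedWriter w = Files.newBufferedWriter(file)) {
            for (String[] rec : records) {
                w.write(toCsvLine(rec));
                w.newLine();
            }
        }
    }
}
```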
On Mon, Feb 25, 2013 at 10:17 AM, Dan Kimmel <dkimmel@xxxxxxxxxxxxxxx> wrote:
A couple of things to look at:
1) What JDBC driver are you using to connect to SQL Server? I've had the
best results with the jTDS driver. The Microsoft drivers just don't go as
fast.
2) You don't need to prepare and execute for each record. A JDBC
PreparedStatement object is reusable. I don't know how Scott has structured
his interface, but you should be able to create a PreparedStatement object
once and then call its executeUpdate() method in a loop, after setting
the values with the set...() methods (setString(), setBigDecimal(), and so
on). Reusing a prepared statement is much, much faster than making a new
one each time, and the driver can then optimize and block up requests. Again,
the capabilities of the driver are important. Study Scott's interface
documentation for how you might use these capabilities.
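In plain JDBC, the prepare-once/execute-many pattern looks roughly like the sketch below; adding addBatch()/executeBatch() lets the driver send a whole block of rows per round trip. Table and column names here are hypothetical:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

public class BatchInsertSketch {
    // Build the parameterized INSERT once; it is reused for every row.
    static String buildInsertSql(String table, String... cols) {
        String placeholders = "?" + ", ?".repeat(cols.length - 1);
        return "INSERT INTO " + table + " (" + String.join(", ", cols)
                + ") VALUES (" + placeholders + ")";
    }

    // Prepare once, set values and addBatch() per row, and flush with
    // executeBatch() every batchSize rows (one network round trip per
    // block instead of one per record).
    static void insertRows(Connection conn, List<String[]> rows) throws SQLException {
        String sql = buildInsertSql("CallDetail", "acct", "phone");
        int batchSize = 1000;
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            int n = 0;
            for (String[] row : rows) {
                ps.setString(1, row[0]);
                ps.setString(2, row[1]);
                ps.addBatch();
                if (++n % batchSize == 0) {
                    ps.executeBatch(); // send the accumulated block
                }
            }
            ps.executeBatch();         // flush the remainder
        }
    }
}
```

Whether this is reachable through JDBCR4 depends on which JDBC methods Scott's wrappers expose; the batching calls may need additional prototypes.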
From: midrange-l-bounces@xxxxxxxxxxxx [mailto:
midrange-l-bounces@xxxxxxxxxxxx] On Behalf Of Anderson, Kurt
Sent: Monday, February 25, 2013 10:12 AM
To: 'Midrange Systems Technical Discussion'
Subject: JDBCR4 and Inserts
At IBM i 7.1, we've been using Scott Klement's JDBCR4 for some time now.
It's been great, although we've been doing inserts not as shown in the
JDBCR4 presentation, but by sending the SQL statement to SQL Server and
having it use the Linked Server to select the records to insert. But
we're finding limitations in doing it that way (when inserting millions of
records) - every so often we get a "Connection Reset" message. I'm told
that the Linked Server can't reliably handle such a large volume of
records, so we're looking at other options.
One option is to instead have the RPG program use JDBCR4 to directly
insert into the SQL Server database. From my understanding based on the
presentation, this method inserts one record at a time instead of writing a
block of records. I ran a test insert and it took about 5 minutes to
insert 10,000 records, way too slow for our purposes. This test was
block-reading the file, but was preparing and executing the insert
statement for every record read.
See pg 25 for the Prepared Statement Insert
Another suggestion was using Client Access to create a file for a bulk
insert by SQL Server.
We also tried using CPYTOIMPF and then FTP'ing the file to a location for
SQL Server to load with a bulk insert. The bulk insert took only a minute
for 2 million records; the copy and FTP took about 10 minutes. This speed
was great compared to both the test I mentioned above (inserting directly
into the db from RPG) and the Linked Server statement method we have been
using (when it works).
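For reference, the SQL Server side of that flow can be kicked off from the same JDBC connection by executing a T-SQL BULK INSERT. A sketch, assuming a hypothetical table, file path, and delimiters (the FIELDTERMINATOR would have to match what CPYTOIMPF actually produced):

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class BulkInsertSketch {
    // Compose the T-SQL BULK INSERT. Table name, path, and terminators
    // are placeholders, not our real values.
    static String buildBulkInsert(String table, String path) {
        return "BULK INSERT " + table + " FROM '" + path
                + "' WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\\n')";
    }

    // Run the statement on an open SQL Server connection; the path is
    // resolved on the SQL Server machine, so the FTP'd file must land
    // somewhere that server can read.
    static void runBulkInsert(Connection conn, String table, String path)
            throws SQLException {
        try (Statement st = conn.createStatement()) {
            st.executeUpdate(buildBulkInsert(table, path));
        }
    }
}
```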
I was hoping to use JDBCR4 to perform the insert so there would be fewer
steps involved in the process. It's not necessary, but I thought I might
draw upon others' experience.
CustomCall Data Systems, a division of Enghouse Systems Ltd.
This is the Midrange Systems Technical Discussion (MIDRANGE-L) mailing list
To post a message email: MIDRANGE-L@xxxxxxxxxxxx
To subscribe, unsubscribe, or change list options,
or email: MIDRANGE-L-request@xxxxxxxxxxxx
Before posting, please take a moment to review the archives at
http://archive.midrange.com/midrange-l.