Thank you for your response. I hope that neither you nor anyone else in this thread takes offense at the occasionally baiting nature of some of my questions and comments. I really do hope to provoke thinking and discussion that helps everyone involved.
I wrote another benchmark, this time using Visual FoxPro to insert rows into the same "orders" table through the ODBC driver that comes with IBM i Access 7.1.
This interface required 22 seconds to insert 10,000 rows (about 455 rows per second). How would you account for the rather large difference between this interface and the others we've been discussing?
I think that is a valid question because the majority of updates to most DBMS products from most applications occur through ODBC / JDBC interfaces.
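One common explanation for a gap like this is per-statement overhead: if each INSERT is a separate round trip (and possibly a separate commit), the driver and network costs dominate, whereas a batched/blocked insert amortizes them. The sketch below illustrates that idea only; it uses Python's built-in sqlite3 module as a hypothetical stand-in (not the IBM i Access ODBC driver, and not Visual FoxPro), so the table name and timings are illustrative assumptions, not measurements from this thread.

```python
import sqlite3
import time

# Stand-in database; a real test would go through pyodbc to DB2 for i.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")

rows = [(i, i * 1.5) for i in range(10_000)]

# Row-at-a-time: one statement execution per row, the pattern many
# ODBC applications fall into by default.
start = time.perf_counter()
for r in rows:
    conn.execute("INSERT INTO orders VALUES (?, ?)", r)
conn.commit()
row_at_a_time = time.perf_counter() - start

conn.execute("DELETE FROM orders")
conn.commit()

# Batched: the parameter set is bound once and executed as a block,
# committed as a single unit of work.
start = time.perf_counter()
conn.executemany("INSERT INTO orders VALUES (?, ?)", rows)
conn.commit()
batched = time.perf_counter() - start

count = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(count)  # 10000
print(f"row-at-a-time: {row_at_a_time:.3f}s, batched: {batched:.3f}s")
```

On a real ODBC connection the effect is usually far larger than in-process SQLite shows, because each row-at-a-time execute also pays a network round trip.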
----- Original Message -----
From: Matt Olson <Matt.Olson@xxxxxxxx>
To: Midrange Systems Technical Discussion <midrange-l@xxxxxxxxxxxx>
Sent: Friday, September 6, 2013 10:28 AM
Subject: RE: How to handle Big Data under IBM i?
Another thing: this BulkUploadToSQL utility is part of the .NET Framework (a built-in library), but it's specifically designed for Microsoft SQL Server. For MySQL you use MySqlBulkLoader, and for Oracle you use the OracleBulkCopy class.
As you can see, these fast results are available in .NET, a purely object-oriented environment that is interoperable with many database types, whereas your DDS and RPG example is good for... DB2 on i only.