Joe, first of all, sorry for what is probably an off-topic post on the Java list. I'm not subscribed to MIDRANGE-L, but this was not the first time I had heard that static SQL on the 400 performs significantly worse than native I/O. It was, however, the first time anybody actually published both the source and the results. I decided to re-run your test on my system (720 1-way, V4R3M0, with a lot of DASD and main storage) to see what it would look like in a different environment. I re-ran only the UPDATE test, because I did not know how you set up the blocking factor in the WRITE/INSERT test for RPG. My test was not particularly "clean" (e.g. I ran parts of it concurrently with other jobs, although for most of the time CPU and DASD utilisation figures were quite low).

I made a few modifications to your test. First of all, I turned Commitment Control on for both environments and worked with a logical (keyed access path) file rather than a physical one. I also purged all relevant data from main storage before each test using SETOBJACC *PURGE. To be fair, when running the SQL version of the test I actually FETCHed the values of all file fields into a host structure. I did not place journal receivers into a separate ASP. Every 10 updates were bundled into a committed transaction. My version of the test did not modify any of the table's key fields.

The results were:

Update of 15,500 records (3 sets of tests):
  SQL - 31, 33 and 32 seconds
  RPG - 26, 27 and 27 seconds

Update of 1,013,862 records (2 sets of tests):
  SQL - 18:00 and 20:02
  RPG - 11:11 and 13:01

The last set of tests ran concurrently with a few other jobs - hence the higher execution times. The CPU consumption rate of the job was approx. 10% higher in the SQL case. No significant task activity was spotted. I did not get a chance to run this test on a more powerful iSeries server, but the results were somewhat more in line with my expectations.
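For anyone curious what "every 10 updates bundled into a committed transaction" looks like in code, here is a minimal sketch. It uses Python's sqlite3 purely as a stand-in for DB2 embedded SQL under Commitment Control, and the table/column names (orders, qty, id) are invented for illustration:

```python
import sqlite3

def batched_update(conn, rows, batch_size=10):
    """Apply one UPDATE per row, committing once every batch_size updates
    (mirrors the 'every 10 updates in one committed transaction' setup)."""
    cur = conn.cursor()
    for i, (new_qty, order_id) in enumerate(rows, start=1):
        cur.execute("UPDATE orders SET qty = ? WHERE id = ?", (new_qty, order_id))
        if i % batch_size == 0:
            conn.commit()   # one commit covers the whole batch
    conn.commit()           # commit any trailing partial batch

# Example usage with an in-memory database (stand-in for the real file)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, qty INTEGER)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(i, 0) for i in range(1, 26)])
conn.commit()
batched_update(conn, [(5, i) for i in range(1, 26)])
```

Committing in batches rather than per update cuts journal/commit overhead, which matters at the record counts tested above.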
PS: In my view, running positioned updates over a large file is questionable programming technique - searched updates would seem to be the more natural choice.

PPS: It would be interesting to compare static SQL and native I/O in a typical OLTP environment, with each transaction accessing just a small subset of database records. I can't think of a way to avoid opening and closing cursors each time a new subset of records (e.g. customer data) is needed. SETLL/READE would probably perform better, but how much better is the question.

----------------------------------------------------------------------------
* From: "Joe Pluta" <joepluta@xxxxxxxxxxxxxxxxx>

Look in MIDRANGE-L, search for "sql exec speed", and you should find them. INSERTs take nearly six times as long as WRITEs, while FETCH/UPDATE takes four times as long as READ/UPDAT. Here's an excerpt:

WRITE Test:
  100,000 WRITE operations took 7 seconds
  100,000 INSERT INTO ... VALUES operations took 40 seconds
  0.07 msec/WRITE
  0.40 msec/INSERT

Not two orders of magnitude, but still very poor performance from SQL.

UPDATE Test:
  100,000 READ/UPDAT operations took 10 seconds
  100,000 SQL FETCH NEXT/UPDATE WHERE CURRENT operations took 40 seconds
  0.10 msec/UPDAT
  0.40 msec/UPDATE
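To make the positioned-vs-searched distinction in the PS concrete, here is a rough sketch of the two styles. Again sqlite3 stands in for DB2: SQLite has no UPDATE ... WHERE CURRENT OF, so the positioned style is emulated by fetching each row and updating it by key, which has the same row-at-a-time shape. The customers table and its columns are made up for illustration:

```python
import sqlite3

def positioned_style(conn):
    """Row-at-a-time: fetch each row, then update it individually.
    (Emulates FETCH ... / UPDATE WHERE CURRENT OF cursor processing.)"""
    cur = conn.cursor()
    for cust_id, balance in cur.execute(
            "SELECT id, balance FROM customers").fetchall():
        conn.execute("UPDATE customers SET balance = ? WHERE id = ?",
                     (balance + 10, cust_id))
    conn.commit()

def searched_style(conn):
    """Set-at-a-time: one searched UPDATE lets the engine do the whole pass."""
    conn.execute("UPDATE customers SET balance = balance + 10")
    conn.commit()

# Example usage: both styles credit every balance by 10
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO customers VALUES (?, ?)", [(1, 100), (2, 200)])
conn.commit()
searched_style(conn)    # balances become 110 and 210
positioned_style(conn)  # same net effect, one row at a time: 120 and 220
```

The searched form gives the optimizer the whole operation at once, which is why it is usually the more natural choice for mass updates like the ones benchmarked here.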
This mailing list archive is Copyright 1997-2024 by midrange.com and David Gibbs as a compilation work. Use of the archive is restricted to research of a business or technical nature. Any other uses are prohibited. Full details are available on our policy page. If you have questions about this, please contact [javascript protected email address].