I found roughly similar results with my own test. Despite the IBM hype about
SQL performance, it simply does not perform as fast as native RPG file
access in a like-for-like test. However, I think the problem is not
necessarily in the SQL processor itself, but in the RPG preprocessor.

In my test the file being updated is quite wide (50+ fields). If instead of
a SELECT *, I only extract the few fields I'm interested in, the SQL test
actually runs faster than the RPG version. I think that the preprocessor is
extracting each field one at a time into the host structure, rather than
moving the whole record format, as an RPG programmer would do.
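
For illustration only, here is a rough fixed-form SQLRPGLE sketch of the two
approaches (WIDEFILE, WIDEREC, CUSTNO and BALANCE are invented names, not
taken from either test):

      * Full-record fetch: all 50+ columns are copied into the host structure
     D WIDEREC       E DS                  EXTNAME(WIDEFILE)
     C/EXEC SQL
     C+  DECLARE C1 CURSOR FOR SELECT * FROM WIDEFILE
     C/END-EXEC
     C/EXEC SQL
     C+  OPEN C1
     C/END-EXEC
     C/EXEC SQL
     C+  FETCH NEXT FROM C1 INTO :WIDEREC
     C/END-EXEC

      * Trimmed fetch: only the two columns the program actually uses
      * (CUSTNO and BALANCE are subfields of the same external DS)
     C/EXEC SQL
     C+  DECLARE C2 CURSOR FOR SELECT CUSTNO, BALANCE FROM WIDEFILE
     C/END-EXEC
     C/EXEC SQL
     C+  OPEN C2
     C/END-EXEC
     C/EXEC SQL
     C+  FETCH NEXT FROM C2 INTO :CUSTNO, :BALANCE
     C/END-EXEC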

I remember reading somewhere that IBM are planning a major overhaul of the
RPG/SQL preprocessor for the next release. Hopefully this is the kind of
thing they are looking at.

Chris.

-----Original Message-----
From: Raikov, Lo [mailto:RaikovL@mki.com.au]
Sent: 04 September 2001 07:54
To: 'java400-l@midrange.com'
Subject: SQL vs. RPG (off-topic) was: Communicating between java & rpg?


Joe,

first of all, sorry for what is probably an off-topic post in the Java list. I'm
not subscribed to MIDRANGE-L, but this was not the first time I heard that
static SQL on the 400 is significantly worse in performance terms than
native I/O. However, this was the first time anybody actually published both
the source and the results. I decided to re-run your test on my system (720
1-way - V4R3M0 with a lot of DASD and main storage) and to see what it would
look like in a different environment.

I re-ran only the UPDATE test because I did not know how you set up the
blocking factor in the WRITE/INSERT test for RPG.

My test was not particularly "clean" (e.g. I ran parts of it concurrently
with other jobs, although for most of the time CPU and DASD utilisation
figures were quite low). I made a few modifications to your test: first of
all, I turned Commitment Control on for both environments and worked with a
logical (keyed access path), rather than physical, file. I also purged all
relevant data from the main storage before each test using SETOBJACC *PURGE.
To be fair, when running the SQL version of the test I actually FETCHed the
values of all file fields into a host structure.
I did not place journal receivers into a separate ASP. Every 10 updates were
bundled into a committed transaction. My version of the test did not modify
any of the table key fields.
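
Roughly, the SQL side of such a test loop looks like the sketch below
(TESTLF, TESTREC, AMOUNT and CMTCNT are invented names, not the ones
actually used):

     D TESTREC       E DS                  EXTNAME(TESTLF)
     D CMTCNT          S              5P 0 INZ(0)

     C/EXEC SQL
     C+  DECLARE C1 CURSOR FOR
     C+    SELECT * FROM TESTLF
     C+    FOR UPDATE OF AMOUNT
     C/END-EXEC
     C/EXEC SQL
     C+  OPEN C1
     C/END-EXEC
      * Fetch every row and update it through the cursor,
      * committing after every 10th update
     C                   DOU       SQLCOD <> 0
     C/EXEC SQL
     C+  FETCH NEXT FROM C1 INTO :TESTREC
     C/END-EXEC
     C                   IF        SQLCOD = 0
     C/EXEC SQL
     C+  UPDATE TESTLF
     C+    SET AMOUNT = AMOUNT + 1
     C+    WHERE CURRENT OF C1
     C/END-EXEC
     C                   EVAL      CMTCNT = CMTCNT + 1
     C                   IF        CMTCNT = 10
     C/EXEC SQL
     C+  COMMIT
     C/END-EXEC
     C                   EVAL      CMTCNT = 0
     C                   ENDIF
     C                   ENDIF
     C                   ENDDO
     C/EXEC SQL
     C+  CLOSE C1
     C/END-EXEC
     C/EXEC SQL
     C+  COMMIT
     C/END-EXEC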

The results were:

Update of 15,500 records (3 sets of tests)

SQL - 31, 33 and 32 seconds
RPG - 26, 27 and 27 seconds

Update of 1,013,862 records (2 sets of tests; times are min:sec)

SQL - 18:00 and 20:02
RPG - 11:11 and 13:01

The last set of tests ran concurrently with a few other jobs, hence the higher
execution times. The job's CPU consumption rate was approximately 10% higher in
the case of SQL. No significant task activity was spotted.

I did not have a chance to run this test on a more powerful iSeries server,
but the results were somewhat more in line with my expectations.

Lo

PS In my view, running positioned updates of a large file is a questionable
programming technique; searched updates would seem to be a more natural
choice.
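
That is, one searched statement rather than a fetch/update loop, along the
lines of the sketch below (same invented names as above; note this also
commits the whole file as a single transaction):

     C/EXEC SQL
     C+  UPDATE TESTLF SET AMOUNT = AMOUNT + 1
     C/END-EXEC
     C/EXEC SQL
     C+  COMMIT
     C/END-EXEC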

PPS It would be interesting to compare static SQL and native I/O in a
typical OLTP environment with each transaction accessing just a small subset
of database records. I can't think of how to avoid opening and closing a
cursor each time a new subset of records (e.g. customer data) is needed.
SETLL/READE would probably perform better, but how much better is a
question.
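
For reference, the SETLL/READE pattern in question is the usual keyed
read-equal loop, roughly like this (CUSTLF, CUSTREC and CUSTNO are invented
names):

     FCUSTLF    UF   E           K DISK
      * CUSTREC = record format of CUSTLF; CUSTNO = its key field

     C     CUSTNO        SETLL     CUSTLF
     C     CUSTNO        READE     CUSTREC
     C                   DOW       NOT %EOF(CUSTLF)
      *  ... change non-key fields of the customer record here ...
     C                   UPDATE    CUSTREC
     C     CUSTNO        READE     CUSTREC
     C                   ENDDO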

----------------------------------------------------------------------------
* From: "Joe Pluta" <joepluta@xxxxxxxxxxxxxxxxx>

Look in MIDRANGE-L, search for "sql exec speed", and you should find them.
INSERTs take nearly six times as long as WRITEs, while FETCH/UPDATE takes
four times as long as READ/UPDAT.  Here's an excerpt:

WRITE Test:

 100,000 WRITE operations took 7 seconds
 100,000 INSERT INTO ... VALUES operations took 40 seconds

 0.07 msec/WRITE
 0.40 msec/INSERT

Not two orders of magnitude, but still very poor performance from SQL.

UPDATE Test:

 100,000 READ/UPDAT operations took 10 seconds
 100,000 SQL FETCH NEXT/UPDATE WHERE CURRENT operations took 40 seconds

 0.10 msec/UPDAT
 0.40 msec/UPDATE
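
For anyone who has not dug up the original source, the two operations being
timed in the WRITE test boil down to something like this (invented names, not
Joe's actual code):

      * Native I/O: one record-format write per row
     C                   WRITE     TESTREC

      * Embedded SQL: one INSERT ... VALUES per row
     C/EXEC SQL
     C+  INSERT INTO TESTPF (FLD1, FLD2, FLD3)
     C+    VALUES(:FLD1, :FLD2, :FLD3)
     C/END-EXEC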