Ok, I was out yesterday so I missed most of this discussion, but here's my .02. Some (maybe all) of this was mentioned by others, but I thought I'd put it all into one message since I didn't really see much back from Joe.

Joe, the problem is you're comparing apples to oranges. You've got SQL doing what amounts to random reads while RPG gets to do sequential reads. Of course RPG is going to be better! Even though you are using CHAIN in the RPG, the way it is being used is sequential. To confirm this, replace the CHAIN with READ; your performance won't change much, if at all. If you want to test sequential reads, you should just be opening the SQL cursor once and fetching one row at a time. If you want to test random reads, generate a random key and then do the CHAIN. On the SQL side, as others have mentioned, you don't need to open a cursor at all; simply do a SELECT INTO. Using the cursor the way you are is equivalent to opening and closing the file between each read!

Secondly, how realistic is this test, really? In another message, you say: "This defeats the purpose of the test, which is to compare the performance of a single-record fetch, as would occur dozens of times in any typical OLTP application. Getting an item master record, or getting a price record, or getting an inventory record." One thing to keep in mind is that the situations in an OLTP application where you need to fetch one record at a time are situations where user interaction is required. That being the case, the human wait time involved is many orders of magnitude greater than either the RPG or the SQL I/O time. The rest of the processing involves set-at-a-time work, usually across multiple files. That's where SQL is going to shine.

The interesting thing is that even in situations where you think you need one record at a time, you may really be able to do it as a set. For instance, say you've got a "screen at a time" subfile you need to load with data from your item master file so the user can select an item to change/order/sell/etc.:
    // Traditional I/O
    for 1 to MAXSUBFILERECS
      read ItemMaster
      write SubFileRec
    endfor

    // SQL set at a time
    fetch from ItemMasterCursor into :RtndData for :MAXSUBFILERECS records
    for 1 to MAXSUBFILERECS
      write SubFileRec
    endfor

Two things I try to keep in mind about SQL:

1) It's not a one-for-one replacement for native I/O. You shouldn't just replace the native I/O in a program with SQL.

2) It's for set-at-a-time processing. If you're reading one record from one file, you're probably using it the wrong way.

HTH,
Charles

> -----Original Message-----
> From: Joe Pluta [mailto:joepluta@xxxxxxxxxxxxxxxxx]
> Sent: Thursday, July 22, 2004 9:10 PM
> To: 'Midrange Systems Technical Discussion'
> Subject: RE: SQL vs. traditional I/O?
>
> I don't know who this person is, but I've just run a test for the fun
> of it. I realize this test is blisteringly unscientific, but the
> results indicate that perhaps the hype might not match the reality.
>
> I have a file with 800,000+ records, each with a 10-character key with
> a different unique number (yeah, I know, numeric values in an alpha
> field, but hey, I just whipped it together).
>
> I then took the following code:
>
> c                   for       x = 1 to count
> c                   eval      mykey = %char(x)
> c     mykey         chain     j4pp1r
> c                   endfor
>
> - and -
>
> c                   for       x = 1 to count
> c                   eval      mykey = %char(count)
> c/EXEC SQL declare c cursor for select * from j4pp1 where key1 = :mykey
> c/END-EXEC
> c/EXEC SQL open c
> c/END-EXEC
> c/EXEC SQL fetch from c into :j4pp1
> c/END-EXEC
> c/EXEC SQL close c
> c/END-EXEC
> c                   endfor
>
> I compiled the two, and then showed the results. I had to crank up the
> native I/O to 100,000 iterations to get a measurable difference, while
> the SQL version only required 10,000.
>
> Results:
>
>              Start    End      Count
> Native I/O:  193459   193509   100000
> SQLRPGLE:    194230   194241   10000
>
> To the naked eye, in this very unscientific test, native I/O performed
> roughly 10,000 chains per second, while SQL scored about 900 fetches
> per second.
>
> This is an order of magnitude. This was with no optimization, right
> out of the box. I later optimized both programs, and the native I/O
> went down to seven seconds, while the SQL program, with COMMIT(*NONE)
> and a CHGPGM OPTIMIZE(*YES), dropped to ten seconds (for one tenth the
> records of the native I/O). So it seems optimization is even more
> effective for native I/O.
>
> I may have screwed up something here. Feel free to point out any
> issues. But this is the sort of test I think the new IAAI site is
> going to be very useful for. And those who are interested should
> definitely stop by the website.
>
> http://forums.plutabrothers.com/IAAI
>
> Joe
>
> --
> This is the Midrange Systems Technical Discussion (MIDRANGE-L)
> mailing list
> To post a message email: MIDRANGE-L@xxxxxxxxxxxx
> To subscribe, unsubscribe, or change list options,
> visit: http://lists.midrange.com/mailman/listinfo/midrange-l
> or email: MIDRANGE-L-request@xxxxxxxxxxxx
> Before posting, please take a moment to review the archives
> at http://archive.midrange.com/midrange-l.
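[Editor's note: Charles's three access patterns can be sketched outside RPG. Below is a minimal illustration in Python against an in-memory SQLite table; the table and column names are borrowed from Joe's test for flavor only. SQLite is not DB2 for i, and its prepared-statement cache makes the per-row open/close penalty far smaller than a full SQL cursor open would be on the iSeries, so only the shape of the three patterns carries over, not the timings.]

```python
import sqlite3
import time

ROWS = 10_000

# Build a small keyed table standing in for Joe's J4PP1 file.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE j4pp1 (key1 TEXT PRIMARY KEY, data TEXT)")
conn.executemany("INSERT INTO j4pp1 VALUES (?, ?)",
                 ((str(i), f"data{i}") for i in range(ROWS)))

def open_close_per_row():
    """Joe's SQLRPGLE loop: declare/open/fetch/close for every record."""
    out = []
    for i in range(ROWS):
        cur = conn.cursor()                      # "open c" each iteration
        cur.execute("SELECT data FROM j4pp1 WHERE key1 = ?", (str(i),))
        out.append(cur.fetchone()[0])            # "fetch from c"
        cur.close()                              # "close c"
    return out

def select_into():
    """The SELECT INTO equivalent: one statement reused per key."""
    cur = conn.cursor()
    out = []
    for i in range(ROWS):
        cur.execute("SELECT data FROM j4pp1 WHERE key1 = ?", (str(i),))
        out.append(cur.fetchone()[0])
    return out

def set_at_a_time():
    """Set-at-a-time: open the cursor once, fetch a block of rows per call."""
    cur = conn.cursor()
    cur.execute("SELECT data FROM j4pp1")
    out = []
    while True:
        block = cur.fetchmany(1000)              # N rows per fetch
        if not block:
            break
        out.extend(row[0] for row in block)
    return out

for fn in (open_close_per_row, select_into, set_at_a_time):
    t0 = time.perf_counter()
    fn()
    print(f"{fn.__name__}: {time.perf_counter() - t0:.3f}s")
```

On a real midrange box the comparison is what Joe's numbers already hint at: the declare/open/fetch/close loop pays the cursor open and close on every record, while SELECT INTO pays it once per lookup and the blocked fetch pays it once per screenful.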