<snip>
My benchmarks showed the opposite.  SQL INSERT is 10-100 times slower
than the native WRITE.  You will see some performance improvement using
prepared statements, but native WRITE is still faster.  Please provide
an example program and your benchmarks.
</snip>

This is from IBM: articles about the SQL improvements have appeared
repeatedly in iSeries Network. According to IBM, the difference arises
because with DDS I/O, data is not validated when you write a record but
is validated every time you read one. With SQL I/O it is the reverse:
data is validated on write and not validated on read. If you were
writing all the time and never reading, you would see slower
performance, but usually you write something once and then read it
many, many, many times.

Were your benchmarks run on a current version of the AS/400 operating
system? Most of the changes are in the current OS. Also, were the
tables created with SQL DDL or with DDS? If DDS, they will still use
the existing DDS file structures.

<snip>
"New index system"?  Other than the concept of an EVI, I know of no
changes to the basic index concept.  Again, please provide examples.
</snip>

If you create an index with SQL DDL, the new index uses larger pages
and is much faster. If you then create a DDS logical file over the same
key, the logical will use the new SQL index. Again, this is from
articles in iSeries Network; you might want to pull old copies, read
the SQL articles, or look them up online.
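
As a sketch, the DDL side of that comparison is a one-liner (the index
name is made up; the table and key fields come from the example later
in this note):

  Create Index POITEM_X1
      On POITEM (ITNBR, HOUSE)

A DDS logical file keyed on ITNBR and HOUSE, created afterward, would
then share this index rather than building its own.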

<snip>
This is also simply untrue in my experience.  The record-level access
is the same.  The data must still be read from disk, and in all but the
most extreme examples you will read in all of the data from a record
the vast majority of the time.  Reading directly into a data structure
as you can in recent versions of RPG makes this even faster.  SQL adds
overhead which is only negated when you reduce the number of trips
under the MI layer, as you do in set-based queries.
</snip>

I would agree that the actual data access is always going to be the
largest component, but you have to look at how SQL gets data versus how
record I/O gets data. Record I/O passes through a whole series of
buffers, and every CHAIN or READE clears those buffers again. SQL I/O
occurs at the machine level. The biggest slowdown, as I stated earlier,
is that IBM refuses to rewrite the SQL precompiler and runtime to use
the ILE call model; record I/O has used the ILE call model since RPG IV
was created.

As far as reading data in, how often do we do a single read versus
reading in multiple records? Most of the time we are reading in
multiple records joined, at least logically, to other records. You are
almost always doing the equivalent of a SETLL followed by READ or
READE. Again, think database instead of files.
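
For example (the file and key field come from the statement quoted
below; the host variable is made up), a SETLL on ITNBR followed by a
READE loop over POITEM reads the same set of rows that this single
statement returns:

  Select *
    From POITEM
    Where ITNBR = :ItemNumber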

As far as MI goes, all I/O goes under the MI. If I issue an SQL
statement like the one below, I go under the MI once. Doing the same
work with record I/O would make thousands of trips under the MI, and it
is even worse than that, because I must read all of the POITEM records
whether I want them or not, then CHAIN 2700 times to WHSMST and CHAIN
2700 times to ITMRVA, for only the 61 records that I want to update. In
this example I did one I/O, and since I wrote it I found an example on
this forum that would have let me do the whole thing in one UPDATE
statement (a sketch follows the code below).

c/Exec Sql                                     
 +  Declare ListCursor Cursor For              
 +    Select t01.ORDNO,                        
 +           T01.POISQ,                        
 +           t03.ITCLS as From_Item_Class      
 +      From  POITEM t01                       
 +        Inner Join WHSMST t02                
 +          On t01.HOUSE = t02.WHID            
 +        Inner Join ITMRVA t03                
 +          On t02.STID = t03.STID And         
 +             t01.ITNBR = t03.ITNBR           
 +      Where t01.ITCLS <> t03.ITCLS           
 +  For Read Only                              
c/End-Exec                                     
                                               
c/Exec Sql                                     
 +  Open ListCursor                            
c/End-Exec                                     
                                               
c                   If        SqlState <> cSqlStateOk               
c                   Eval      UEM0002_MsgDta.ReasonCode = 1         
c                   LeaveSr                                         
c                   EndIf                                           
                                                                    
c/Exec Sql                                                          
 +  Fetch Next From ListCursor For 32766 Rows Into :dsItemRecord    
c/End-Exec                     
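
For completeness, the multi-row fetch assumes dsItemRecord is declared
as a data structure array with one subfield per column in the SELECT
list and a DIM matching the FETCH; a sketch, with field sizes that are
only guesses:

D dsItemRecord    DS                  Dim(32766) Qualified
D  OrdNo                         7A
D  PoiSq                         2S 0
D  From_Item_Class...
D                                2A

After the fetch, SQLERRD(3) in the SQLCA holds the number of rows
actually returned.

And here is an untested sketch of the single UPDATE mentioned above,
which would replace the cursor, the fetch, and the update loop
entirely:

Update POITEM t01
  Set ITCLS = (Select t03.ITCLS
                 From WHSMST t02
                 Inner Join ITMRVA t03
                   On t02.STID = t03.STID
                 Where t01.HOUSE = t02.WHID
                   And t01.ITNBR = t03.ITNBR)
  Where Exists (Select 1
                  From WHSMST t02
                  Inner Join ITMRVA t03
                    On t02.STID = t03.STID
                  Where t01.HOUSE = t02.WHID
                    And t01.ITNBR = t03.ITNBR
                    And t01.ITCLS <> t03.ITCLS)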
                                     
<snip>
But to do that in SQL I would need a header/detail join, and if I read
that into a table that would duplicate the header information for every
detail record, making LOTS of extra moves.  At some point, SQL would
still outperform native I/O, but my guess is that the number would be
rather large (and would depend a lot on the number of levels of data).

If I didn't use a JOIN and instead used a separate SQL statement to
read the detail, there would be fewer wasted moves, although opening
and closing the cursor on the detail record for every header record
would quickly negate that benefit.
</snip> 

Remember that when doing SQL with a join, the I/O only occurs once, and
even the moves into the buffer only occur once. From then on, only the
changed fields are moved.

Cursors are not opened and closed with each I/O; they remain open, and
SQL makes every effort to leave them open. Even if you do a close
cursor, the file is not actually closed. The previous SQL example would
leave the table open.
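
For instance, an explicit close of the earlier cursor (a sketch):

c/Exec Sql
 +  Close ListCursor
c/End-Exec

is normally only a "soft" close: DB2 keeps the open data path (ODP)
around so that the next OPEN is cheap, and the CLOSQLCSR precompile
option (*ENDMOD, *ENDACTGRP, and so on) governs when cursors are
actually closed for good.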

<snip>
One of my biggest problems with SQL is the fact that header/detail
JOINs duplicate the header data, a fact which is ignored by most SQL
advocates.
</snip>

As stated above, this is just not true. Why would IBM move the same
data into the buffer again when it hasn't changed, especially when all
of this is happening at the machine level?

Anyway, this could go on forever, and I had better get some work done.
Probably the biggest knock against SQL is that it takes more knowledge,
but since the entire rest of the world is using SQL, people might as
well learn it.

My bottom line is one stated by others previously in SQL vs. record I/O
discussions: SQL has changed and the AS/400 has changed. A lot of the
old assumptions we used to make are just not true anymore.

 

