I did not specify an LF in my SQL statement. I used the physical file. I only
said that there were LFs already existing that SQL could make use of. Which
one it uses is up to IBM.
As far as the number of runs, it was in the thousands, and native RPG still
outperformed SQL by a factor of 10.
I am not saying one is better than the other, just that in this instance
RPG is faster. I know this is not the popular view around here; that's why I
said it. I think everyone just assumes that SQL is faster except for a
single-record chain. I actually retrieved system times and wrote a record to
a log file after each file was finished, showing how many records were
changed and the elapsed times. I did this with both methods, and after tens
of thousands of employees were processed both ways, RPG came out on
top.
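For what it's worth, the timing was captured along these lines (a
simplified sketch, not our actual code; the RUNLOG file and its columns
are made-up names):

     D start           S               Z
     D elapsed         S             10I 0
     D rowsChanged     S             10I 0
     D fileName        S             10A
      /free
        start = %timestamp();
        // ... process all records for this file ...
        elapsed = %diff(%timestamp(): start: *seconds);
        exec sql insert into RUNLOG (filename, rowschg, seconds)
                  values(:fileName, :rowsChanged, :elapsed);
      /end-free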
JMHO
Tom

-----Original Message-----
From: rpg400-l-bounces@xxxxxxxxxxxx [mailto:rpg400-l-bounces@xxxxxxxxxxxx]
On Behalf Of HauserSSS
Sent: Thursday, February 02, 2006 12:14 AM
To: RPG programming on the AS400 / iSeries
Subject: AW: Chaining with a very large file

Hi,

it's wrong to say SQL is slower than native I/O without knowing what really
happens under the covers.
(I don't claim to know it all, but I do know a little more than most other
RPG programmers.)

To say "my logical files are used because I specify them in an SQL
statement" is wrong.
SQL does not care about the access path of the specified logical file in the
first step of execution.
In the first step, the query optimizer only takes the field selection,
select/omit clauses, and joins from the specified logical file and rewrites
the SQL statement against the associated physical files. In the second step,
all access paths (logical files and indexes) are checked by the optimizer
and the optimal access path is determined. This access path may or may not
be the access path specified in the logical file.
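A hypothetical illustration (all names invented; assume EMPLF is a logical
file over physical EMPPF with select/omit STATUS = 'A'):

      /free
        // What the programmer codes:
        exec sql declare c1 cursor for
           select empno, lastname from EMPLF;
        // What the optimizer effectively works with after the rewrite:
        //    select empno, lastname from EMPPF where status = 'A'
        // Only now does it choose an access path, which may or may not
        // be the one behind EMPLF.
      /end-free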

At this time we have two query engines: the CQE (Classic Query Engine) and
the SQE (SQL Query Engine). The SQE is newly designed along OO lines and can
use a lot of features the CQE can't. A query dispatcher determines which
engine will execute the SQL statement. As soon as a logical file is
specified, the query gets rerouted to the CQE and the advantages of the SQE
cannot be used. This rerouting alone may cause up to 10% performance
degradation.
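In other words, coding against the physical file keeps the statement
eligible for the SQE. A hedged sketch, reusing the invented EMPLF/EMPPF
from above:

      /free
        // select ... from EMPLF          --> rerouted to the CQE
        // The same result set, SQE-eligible:
        exec sql declare c2 cursor for
           select empno, lastname
             from EMPPF                    // physical file, not EMPLF
            where status = 'A'
            order by empno;                // optimizer picks the access path
      /end-free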

The first time an SQL statement gets executed, the access plans that are
stored in the program or service program object get validated, all indexes
get checked, and finally the optimal index gets determined. The ODP (open
data path) gets deleted after this first execution. At the second execution
of the same statement in the same activation group, the decision from the
first execution gets validated again and confirmed. After the second
execution the ODP stays open and will be reused every time the statement is
executed (until the cursor gets hard closed). All further executions are
much faster, compared with the first and second execution and even compared
with native I/O.
This means native I/O and SQL can only fairly be compared using execution
times from the third run onward.
There may be situations where an ODP is not reusable, but this must be
analyzed, and in most cases the problem can be solved by rewriting the
SELECT statement. Sometimes a complex statement must be split into separate
statements (or cursors).
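As a sketch of what "the third run" means in practice (EMPPF and the
surrounding logic are invented): the same static statement executed in a
loop within one activation group pays the optimization cost only on the
first two passes:

     D i               S             10I 0
     D employeeCount   S             10I 0
     D oldNo           S              9P 0
     D newNo           S              9P 0
      /free
        for i = 1 to employeeCount;
           // ... set oldNo/newNo for this employee (details omitted) ...
           // Execution 1: full validation, ODP created and deleted.
           // Execution 2: decision confirmed, ODP created and kept open.
           // Execution 3 onward: the open ODP is simply reused.
           exec sql update EMPPF
                      set empno = :newNo
                    where empno = :oldNo;
        endfor;
      /end-free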

Each SQL statement gets its own ODP. That's why the best solution is to
externalize SQL statements into procedures, and if a statement contains no
updates, inserts, or writes, those procedures should run in service programs
with named activation groups. In this way the number of ODPs (and access
plan validations) can be reduced to a minimum.
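A minimal sketch of such an externalized procedure (a NOMAIN module bound
into a service program created with a named activation group, e.g.
CRTSRVPGM ... ACTGRP(SQLSRV); every name here is invented):

     H NOMAIN
     D GetLastName     PR            30A
     D  pEmpNo                        9P 0 const
     P GetLastName     B                   export
     D GetLastName     PI            30A
     D  pEmpNo                        9P 0 const
     D name            S             30A
      /free
        exec sql select lastname into :name
                   from EMPPF
                  where empno = :pEmpNo;
        return name;
      /end-free
     P GetLastName     E

Because the activation group is named, the cursor's ODP survives across
calls from different programs instead of being rebuilt each time.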

If a cursor gets hard closed, the ODP gets deleted, and all validation and
query optimization must be repeated again and again. A hard close of a
cursor is performed at the end of a module or program if the CLOSQLCSR
option on the compile command (or in the SET OPTION statement) is set to
*ENDMOD (ILE) or *ENDPGM (OPM).
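For ILE programs the usual remedy is to scope the cursors to the activation
group instead; a one-line sketch:

      /free
        // Coded once, before any other SQL statement in the source:
        // cursors (and their ODPs) now survive until the activation
        // group ends, not until the module returns.
        exec sql set option closqlcsr = *ENDACTGRP;
      /end-free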

If possible, static SQL should be preferred as soon as the focus is on
performance. With static SQL, the access plans are stored in the program or
service program objects (similar to an SQL package) and can be revalidated.
With dynamic SQL, no access plans are stored in the objects, and the
validation must take place from scratch. In contrast to static SQL, the ODPs
cannot be reused and must be recreated with each EXECUTE IMMEDIATE or
PREPARE statement, even if it is the same statement. If you use dynamic SQL,
a statement should be prepared only once and executed several times. If
different values are necessary, parameter markers should be used.
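A hedged sketch of the prepare-once pattern (statement text, file, and
column names all made up):

     D stmt            S            256A   varying
     D newStatus       S              1A
     D dept            S              3A
      /free
        stmt = 'update EMPPF set status = ? where workdept = ?';
        exec sql prepare s1 from :stmt;        // prepared exactly once
        newStatus = 'A';
        dept = 'A00';
        exec sql execute s1 using :newStatus, :dept;
        newStatus = 'I';
        dept = 'B01';
        exec sql execute s1 using :newStatus, :dept;  // same ODP reused
      /end-free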

The SQE may also help here with access plans stored in the plan cache, which
contains, system-wide, the access plans of all SQL statements executed by
the SQE. When an SQL statement gets executed for the first time in an
activation group, an access plan is first searched for in the plan cache
and, if found, validated.

Each SQL statement should be analyzed with iSeries Navigator's Visual
Explain (or at least with STRDBG). Both will advise indexes that are
necessary. Additionally, database monitors should be used to analyze the
SQL statements.
There are a lot of coding considerations that may influence the performance
and the decisions of the query optimizer.
For example: never use SELECT *; always list the fields you need. By
specifying only the needed fields, more rows can be returned within a block.
(In contrast to native I/O, where the complete record is always read, SQL
returns only the specified columns.)
It also allows index-only access, which means all needed information is
stored in the key fields and can be used immediately; no additional
database access is necessary to retrieve other columns.
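For example (a sketch; the index and all names are invented): with an index
over (WORKDEPT, EMPNO), this query can be satisfied from the index alone,
never touching the table rows:

     D dept            S              3A
      /free
        // Assumed: CREATE INDEX EMPPF_IX1 ON EMPPF (WORKDEPT, EMPNO)
        exec sql declare c4 cursor for
           select workdept, empno       // only key columns referenced
             from EMPPF
            where workdept = :dept
            order by empno;
      /end-free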

To be fair: if a single record must be read, a native I/O CHAIN IS faster
than the equivalent SQL statement.
But as soon as a block of rows must be read, SQL is faster.
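As a sketch of the blocked case (names made up), a multi-row FETCH returns
a whole set of rows in a single call:

     D empDs           DS                  qualified dim(100)
     D  lastname                     30A
     D dept            S              3A
      /free
        exec sql declare c5 cursor for
           select lastname from EMPPF where workdept = :dept;
        exec sql open c5;
        exec sql fetch next from c5 for 100 rows into :empDs;
        // SQLERRD(3) now holds the number of rows actually fetched.
        exec sql close c5;
      /end-free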

There are a lot more considerations about boosting performance with SQL; it
would fill a whole book.
Either way, the times are over when IBM could say, "you don't need a DBA
for our database."

Mit freundlichen Grüßen / Best regards

Birgitta

"Shoot for the moon, even if you miss, you'll land among the stars."
(Les Brown)

-----Original Message-----
From: rpg400-l-bounces@xxxxxxxxxxxx
[mailto:rpg400-l-bounces@xxxxxxxxxxxx] On Behalf Of Tom Huff
Sent: Thursday, February 02, 2006 06:19
To: 'RPG programming on the AS400 / iSeries'
Subject: RE: Chaining with a very large file


By the way, we are using SQL even though it is slower. I couldn't convince
my boss that RPG was faster. I didn't tell him that I wrote the RPG pgms and
tested them. We had already started processing with the SQL version of the
process.
But I ran about 25,000 employees through the RPG version of the process to
test. Just for my own curiosity.
Tom

-----Original Message-----
From: rpg400-l-bounces@xxxxxxxxxxxx [mailto:rpg400-l-bounces@xxxxxxxxxxxx]
On Behalf Of Booth Martin
Sent: Wednesday, February 01, 2006 8:58 PM
To: RPG programming on the AS400 / iSeries
Subject: Re: Chaining with a very large file

Here's where I get booed.

You are changing every record.  Two choices come to mind:

1) Is there any way to get away from using keyed files?  There is no
requirement that processing be in employee-number order.  Removing the K
makes these programs fly.

2) If you must use keys to keep things in synch, then use update/primary
with update/secondary on the other five files, and then use Matching
Records.  This also makes a program fly.
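A rough sketch of option 1 (file name invented; fixed-form columns
approximate): dropping the K from the F-spec switches the file to arrival
sequence, so the cycle just streams through it:

     FEMPMAST   UP   E             DISK
     F* UP = update primary: the cycle reads every record in arrival
     F* sequence (no K). For option 2, declare the other five files as
     F* US (update secondary) with M1 match fields on their I-specs,
     F* and test the MR indicator to know when the records match.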



Tom Huff wrote:
> The last time I tested this, RPG using SETLL & read loop with a test in
> the loop beat SQL by a factor of 10. I am changing all the records in an
> HR system one employee at a time. The employee number is being changed
> from SocSec to a generated number. There are about 600 files and each
> employee has anywhere from 25 to 25000 records. SQL takes an average of
> 8.5 minutes and RPG takes an average of 1 minute.
> The total number of records to change was too large to run over a weekend
> so we broke it down to one employee submitted when the new employee badge
> is made.
> The programming in SQL was shorter but the RPG was MUCH FASTER.
> I wrote a log file of the number of records changed and the elapsed time
> using both methods. We are changing about 250000 employees.
> Sorry to burst SQL's bubble, but it can not hold a candle to RPG.
> Thanks
> Tom
--
-----------------------------------
Booth Martin
http://martinvt.com
-----------------------------------