Our experience with embedded SQL seems to involve the following steps:
1) Design the project with all the business logic in the SQL.
2) Code and test the project.
3) Gradually remove business logic from the SQL and put it back in the program until performance is acceptable.

Trevor Briggs
Analyst/Programmer
Lincare, Inc.
(727) 431-1246
TBriggs2@xxxxxxxxxxx
-----Original Message-----
From: rpg400-l-bounces@xxxxxxxxxxxx [mailto:rpg400-l-bounces@xxxxxxxxxxxx] On Behalf Of Birgitta Hauser
Sent: Saturday, November 03, 2012 6:30 AM
To: 'RPG programming on the IBM i / System i'
Subject: AW: Should I replace all CHAIN, SETLL, and READs to SQL

Replacing RPG with (embedded) SQL 1:1 does not make any sense!

Instead, you should externalize your data access, i.e. write exported
procedures that perform the CHAIN, UPDATE, DELETE, or WRITE (or, at some point
in the future, the appropriate SQL statement) and return the data.
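
For example, a minimal free-format sketch of such an exported procedure (the
file CUSTMAST, its fields and all procedure names are invented for
illustration):

**FREE
ctl-opt nomain;

// Minimal sketch of an externalized data-access procedure.  The file
// CUSTMAST (keyed by CUSTNO, with fields CUSTNO and CUSTNAME), the
// template and the procedure name are all invented for this example.
dcl-f CUSTMAST keyed usage(*input);

dcl-ds customer_t qualified template;
   custNo   packed(7:0);
   custName char(40);
end-ds;

// Exported from a service program; callers never touch the file directly,
// so the CHAIN can later be swapped for an SQL statement without
// changing any caller.
dcl-proc GetCustomer export;
   dcl-pi *n ind;                          // *ON = record found
      pCustNo   packed(7:0) const;
      pCustomer likeds(customer_t);
   end-pi;

   chain (pCustNo) CUSTMAST;
   if %found(CUSTMAST);
      pCustomer.custNo   = CUSTNO;
      pCustomer.custName = CUSTNAME;
      return *on;
   endif;

   clear pCustomer;
   return *off;
end-proc;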

Analyze your programs for places where loops are performed. Instead of reading
a file (for example the order header) and chaining to another file (for
example the address master), you should create an SQL view that does the join
and replace the READ and CHAIN with a single cursor over this view. Using
multiple-row fetches instead of fetching one row after the other may speed up
your program. Also, SELECT only the columns you really need instead of
performing SELECT *; much less database access must be performed and the
optimizer may be able to use an IOA (index-only access).
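
As an illustration (the view, tables and columns are invented for this sketch;
the view itself would be created once, e.g. via RUNSQLSTM):

-- One-time DDL; all names invented:
CREATE VIEW ORDER_ADDR AS
   SELECT oh.ORDER_NO, oh.ORDER_DATE, am.CUST_NO, am.CUST_NAME
     FROM ORDER_HEADER oh
     JOIN ADDRESS_MASTER am
       ON am.CUST_NO = oh.CUST_NO;

// RPG side: one cursor over the view, fetched in blocks of 100 rows,
// selecting only the columns that are actually needed.
dcl-ds row_t qualified template;
   orderNo   packed(9:0);
   orderDate date;
   custNo    packed(7:0);
   custName  char(40);
end-ds;

dcl-ds rows likeds(row_t) dim(100);
dcl-s fetched  int(10);
dcl-s fromDate date inz(d'2012-01-01');

exec sql declare orderCsr cursor for
   select ORDER_NO, ORDER_DATE, CUST_NO, CUST_NAME
     from ORDER_ADDR
    where ORDER_DATE >= :fromDate;

exec sql open orderCsr;

dow sqlcode = 0;
   exec sql fetch next from orderCsr for 100 rows into :rows;
   fetched = sqlerrd(3);            // rows actually returned in this block
   // ... process rows(1) .. rows(fetched) here ...
enddo;

exec sql close orderCsr;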

Do not change the compile option CLOSQLCSR from *ENDACTGRP to *ENDMOD. With
*ENDMOD the open data path (ODP) is deleted as soon as the module ends, so the
next time the same SQL statement is executed within the same job a FULL OPEN
(instead of a PSEUDO OPEN) must be performed.
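
The option can also be pinned in the source itself, so it does not depend on
who compiles the object (a small sketch):

// Pin the cursor-close behaviour at precompile time (SET OPTION must be
// coded before any other SQL statement in the module); this has the same
// effect as specifying CLOSQLCSR(*ENDACTGRP) on the CRTSQLRPGI command.
exec sql set option closqlcsr = *endactgrp;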

For a FULL OPEN an existing access plan, which can be stored either in the
(service) program object (static SQL only) or in the SQL Plan Cache (SQL
statements executed by the SQL Query Engine, SQE), must be validated. If
there is no access plan available, one must be generated from scratch.
An access plan contains all the information needed to execute the SQL
statement, i.e. the access methods (index access, table scan or table probe),
the access paths to be used (SQL indexes, constraints, keyed logical files)
and the temporary objects (such as hash tables and RRN lists) to be generated
and filled with data.
When an access plan is validated or created, the statistics manager is
consulted and all available access paths built over the tables specified in
the SQL statement are evaluated.
After the access plan is built or validated, the open data path (ODP) is
created, i.e. the temporary objects described in the access plan are generated
and filled with data based on the access paths it specifies.
Opening the data path is the most time-consuming part of executing an SQL
statement.

After the first execution of the SQL statement the ODP is deleted. For the
next execution of the same SQL statement the access plan is cross-checked and
the ODP opened again. After the second call (provided the option CLOSQLCSR
*ENDACTGRP was specified and the program is not running in activation group
*NEW), the ODP stays open, i.e. the temporary objects are not deleted. For all
subsequent executions of the same SQL statement within the same activation
group within the same job, only the data within the temporary objects is
updated.
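
Expressed as a sketch (a hypothetical procedure called repeatedly within the
same job and activation group; ORDER_HEADER and all names are invented, and
the comments mark what happens on each call):

// Compiled with CLOSQLCSR(*ENDACTGRP), not running in actgrp *NEW.
dcl-proc GetOrderCount;
   dcl-pi *n int(10);
      pCustNo packed(7:0) const;
   end-pi;

   dcl-s ordCount int(10);

   // 1st execution: FULL OPEN - access plan validated/built, ODP created,
   //                then the ODP is deleted again.
   // 2nd execution: access plan cross-checked, ODP opened again and this
   //                time left open.
   // 3rd execution onward: the open ODP is reused; only the data in the
   //                temporary objects is refreshed.
   exec sql select count(*) into :ordCount
              from ORDER_HEADER
             where CUST_NO = :pCustNo;

   return ordCount;
end-proc;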

Because a FULL OPEN for SQL is much more time-consuming than the
"optimization" for native I/O (simply opening the specified physical/logical
file), native RPG I/O may win for the first two executions. Beginning with the
third execution SQL should be faster (otherwise something is wrong and must be
analyzed!).

Each SQL statement that is executed gets its own ODP, i.e. if you code the
same SQL statement multiple times (in different subroutines, procedures or
programs), a FULL OPEN must be performed for each of these occurrences.

The main goals when replacing native I/O with embedded SQL are:
- Reduce source code (--> move business rules into the database, i.e. create
and use views for joining multiple tables, grouping data and preselecting
data, and perform only a single read instead of multiple reads).
- Externalize the SQL code into procedures.
- Externalized procedures that only read/select data should be located in
service programs with a named activation group (to minimize the number of
FULL OPENs).
- For procedures that read data within a loop you may consider using call-back
processing, i.e. pass a procedure pointer to a procedure to be executed for
each record (see the sketch after this list).
- Externalized procedures for inserting, updating and deleting data should be
grouped in service programs with activation group *CALLER to make sure a
ROLLBACK works as expected (the default for the commitment-control scope on
the STRCMTCTL command is *ACTGRP!).
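
A minimal sketch of such a call-back, with invented view, type and procedure
names (the reader procedure owns the cursor and invokes the consumer's
procedure once per fetched row):

// Call-back sketch: all names are invented for illustration.
dcl-ds order_t qualified template;
   orderNo  packed(9:0);
   custName char(40);
end-ds;

// Procedure pointer the consumer passes in, plus a prototype that lets
// us call whatever procedure it points at.
dcl-s pCallback pointer(*proc);

dcl-pr callBack extproc(pCallback);
   row likeds(order_t) const;
end-pr;

// Reader procedure: loops over the cursor and calls the consumer's
// procedure for every row it fetches.
dcl-proc ForEachOrder export;
   dcl-pi *n;
      pProc pointer(*proc) const;          // procedure to run per row
   end-pi;

   dcl-ds row likeds(order_t);

   pCallback = pProc;

   exec sql declare ordCsr cursor for
      select ORDER_NO, CUST_NAME from ORDER_ADDR;

   exec sql open ordCsr;
   exec sql fetch next from ordCsr into :row;
   dow sqlcode = 0;
      callBack(row);                       // consumer decides what to do
      exec sql fetch next from ordCsr into :row;
   enddo;
   exec sql close ordCsr;
end-proc;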

If you want to modernize, start by analyzing your programs, then write
procedures, test these procedures, remove the corresponding source code from
your programs and call the new procedures instead.

Mit freundlichen Grüßen / Best regards

Birgitta Hauser

"Shoot for the moon, even if you miss, you'll land among the stars." (Les
Brown)
"If you think education is expensive, try ignorance." (Derek Bok)
"What is worse than training your staff and losing them? Not training them
and keeping them!"


-----Original Message-----
From: rpg400-l-bounces@xxxxxxxxxxxx [mailto:rpg400-l-bounces@xxxxxxxxxxxx] On
Behalf Of John Allen
Sent: Friday, November 02, 2012 16:06
To: RPG400-L@xxxxxxxxxxxx
Subject: Should I replace all CHAIN, SETLL, and READs to SQL



I have been thinking about changing all CHAINs, SETLLs and READs in our
software to SQL.



Our programs use these operations for various reasons such
as:

Simple SETLL to check if a value is valid

Reading records for loading subfiles

Chaining by RRN (used in subfile processing; the input file does not have a
unique key, so the RRN is stored in the subfile and then used to access the
original record)

Reading and chaining to several files for processing thousands of records
and doing validations, calculations etc.



My main concerns would be:

How this would affect performance

Does SQL allow accessing records by RRN?



Does anyone have any thoughts on why this is a good or bad idea?



Thanks



John


