Hi,

I am using indexed logical views of both files (PO and Vendor) to gather
the records

Does this mean you are using DDS-described logical files and specifying them in
the SQL statement?
If so, remove them and access the based-on physical files directly. When
DDS-described logical files are specified in an SQL statement, the query is
rerouted to the old Classic Query Engine (CQE), which in about 95 % of cases
performs much worse than the newer SQL Query Engine (SQE).
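For illustration only -- the file and column names below (POHDR, VENDMAST,
POVEND_LF, VENDNO, VENDNAME) are made up, since the actual statement isn't
posted:

    -- instead of selecting over a DDS-described logical file ...
    SELECT DISTINCT l.vendno, v.vendname
      FROM povend_lf l
      JOIN vendmast v ON v.vendno = l.vendno;

    -- ... reference the based-on physical files/tables directly:
    SELECT DISTINCT p.vendno, v.vendname
      FROM pohdr p
      JOIN vendmast v ON v.vendno = p.vendno;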

If the CQE is used, you may get better performance by using Group By instead
of Distinct.
If the vendor will be selected by the user, specify the vendor file as the
first of your files to help the optimizer.
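A sketch of both points combined, again with made-up names:

    -- Group By instead of Distinct, with the vendor file listed first:
    SELECT v.vendno, v.vendname
      FROM vendmast v
      JOIN pohdr p ON p.vendno = v.vendno
     GROUP BY v.vendno, v.vendname;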

Either way, run your SQL statement through iSeries Navigator's Visual Explain
and check which query engine is used. If it is the CQE (you'll see the reason
why the CQE must be used), try to get the query to run through the SQL Query
Engine. (You can also use the steps Mike proposed, but you'll get more
information with Visual Explain.)

but the initial load, understandably, is slow and takes about one minute
to load the first page.
First off, SQL is not native I/O. SQL always returns a block of data (AFAIK
currently 32K), even if only a few rows are needed.

When using SQL, the first call of a query is always much slower, especially
when Order By is used (and often also when Group By or Distinct is used),
because the complete query must be executed to group and order the data.
Specifying a logical file will not help, because the query optimizer only
takes the field selection, join information and select/omit clauses, ignores
the key information, and rewrites the query based on the physical
files/tables. After rewriting, optimization starts, and ALL indexes and keyed
logical files are evaluated.

Also, SQL normally needs a bunch of temporary objects (such as hash tables or
relative record lists), which must be determined during optimization (an
access plan gets created or at least validated) and finally built and filled
with data (the ODP gets opened). These steps must be run through the first
time the query is executed within a job. (BTW, you'll see all the temporary
objects with Visual Explain, which is nothing else than making the access
plan visible.) After the first execution the ODP (i.e. the temporary objects)
is deleted. The next time the same query is executed, the access plan is
validated and the ODP rebuilt. After the second execution the ODP stays open,
provided it is reusable, you are not using the option CLOSQLCSR = *ENDMOD,
and your SQL program is not running in activation group *NEW. For all
subsequent calls only the data in the temporary objects is refreshed.

This may explain why the first execution is so slow.
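If you want to make the close-cursor behaviour explicit, one option (a sketch
only -- on ILE programs *ENDACTGRP is already the compile default) is to set
it in the embedded SQL itself instead of relying on the compile command:

    -- placed before the other SQL statements in the source member:
    EXEC SQL SET OPTION CLOSQLCSR = *ENDACTGRP;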

BTW, when using static SQL the access plan is stored in the program object,
so the next time it is not necessary to build it from scratch.
When using dynamic SQL the access plan is not stored in the program object,
so if the query is executed by the CQE it must be built from scratch.
If the query is executed by the SQE, the access plan is stored in the SQL
Plan Cache and can be validated the next time the query is executed.
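A rough sketch of the difference as it looks in the source (table, column and
host variable names, such as :stmtString, are made up):

    -- static SQL: the statement text is fixed at compile time, so the
    -- access plan can be stored in the program object:
    EXEC SQL DECLARE c1 CURSOR FOR
             SELECT vendno, vendname FROM vendmast ORDER BY vendno;

    -- dynamic SQL: the statement text is built at run time, so there is no
    -- plan in the program object (under SQE it lives in the SQL Plan Cache):
    EXEC SQL PREPARE s1 FROM :stmtString;
    EXEC SQL DECLARE c2 CURSOR FOR s1;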

To give you more information on how your query could be optimized, we'd need
to see your SQL statement.
Sometimes rewriting the query (and creating additional indexes) will help to
speed it up.
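For example (made-up names again), an index over just the vendor number
column of the PO file can let the optimizer resolve the Distinct/Group By
from the index instead of reading the whole table:

    CREATE INDEX pohdr_vendno_ix ON pohdr (vendno);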

Mit freundlichen Grüßen / Best regards

Birgitta Hauser

"Shoot for the moon, even if you miss, you'll land among the stars." (Les
Brown)
"If you think education is expensive, try ignorance." (Derek Bok)
"What is worse than training your staff and losing them? Not training them
and keeping them!"

-----Original Message-----
From: midrange-l-bounces@xxxxxxxxxxxx
[mailto:midrange-l-bounces@xxxxxxxxxxxx] On Behalf Of RWMunday
Sent: Tuesday, 02 June 2009 04:32
To: 'Midrange Systems Technical Discussion'
Subject: Speeding Up A Slow SQL

Greetings from sunny Florida.


I have an RPG /Free program gathering data using embedded SQL. It's
slooooow. I have 13+ million records in a file of purchase orders and I'm
assembling a list of eligible vendors in a subfile window from which to
select to display the respective PO data for that vendor. I'm using a
Select Distinct on the vendor number and joining to the vendor file to get
the vendor name. I only need to allow selection of vendors who actually
have data in this file. Once a user selects a vendor, the individual PO
records are displayed in a subfile. Both are page-at-a-time subfiles. No
need to wait for thousands of records to load in a subfile. I am using
indexed logical views of both files (PO and Vendor) to gather the records
but the initial load, understandably, is slow and takes about one minute to
load the first page. Of course, subsequent pages load instantaneously when
the page key is keyed.

How can I speed up performance of my SQL? Would doing a Select Distinct on
the single PO file, then chaining to get the vendor name as needed speed
things up? I will probably try that in the morning just to see what it
does. But if you have any tips to improve my speed, send them on, please.
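For reference, a rough sketch of the two variants being compared (file and
column names here are placeholders, not the real ones):

    -- current approach: Distinct over the join of the PO and vendor files
    SELECT DISTINCT p.vendno, v.vendname
      FROM pohdr p
      JOIN vendmast v ON v.vendno = p.vendno;

    -- alternative under consideration: Distinct over the PO file alone,
    -- then fetch each vendor name afterwards (e.g. a CHAIN per subfile row)
    SELECT DISTINCT vendno
      FROM pohdr;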

Thanks,



Robert Munday
Munday Software Consultants
Montgomery, AL
On assignment in Jacksonville, FL

