I forget which specific book it is, but there is a book that explains
how the query optimizer works. It's well worth reading up on when you're
doing lots of SQL (particularly with non-trivial record counts). There
are lots of variables involved (index creation order, SMP or not, OS
version, sort sequencing, etc...). Sometimes, you have to do stupid
things to get it to use a particular index (like add field1=field1 in a
where clause or add a field from your where clause to your order by) but
generally, if you create indexes so the first fields are the ones used
in the where clause followed by the fields you need in your order by
clause, it will use the index you expect.
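
To make that rule of thumb concrete, here is a minimal sketch in SQL; the
table, column, and index names are made up for illustration, but the key
lists the WHERE-clause columns first, followed by the ORDER BY column:

  -- Hypothetical query:
  --   SELECT ... FROM ORDERS
  --    WHERE CUSTNO = :cust AND STATUS = :sts
  --    ORDER BY ORDDATE
  CREATE INDEX ORDERS_IX1 ON ORDERS (CUSTNO, STATUS, ORDDATE);

  -- If the optimizer still won't pick it up, the redundant-predicate trick
  -- mentioned above (field1=field1) sometimes nudges it:
  --   ... WHERE CUSTNO = :cust AND STATUS = :sts AND CUSTNO = CUSTNO
  --       ORDER BY ORDDATE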

One exception we've seen here (I don't know if this holds true for
anything beyond V5R1) is that for large files (the one we had problems
with has tens of millions of records) on machines with SMP in
interactive programs, you need to set ALWCPYDTA to *YES or it will not
use the indexes you expect and will instead create temporary indexes. Also,
the query optimizer will generally not use logical files with SELECT/OMIT
criteria in them.
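
On the SELECT/OMIT point, a common workaround (sketched here with made-up
names) is to build a plain index whose leading key column is the one the
SELECT/OMIT logical filters on, and let the WHERE clause do the filtering:

  -- Instead of relying on a DDS logical that keeps only STATUS = 'A':
  CREATE INDEX ORDERS_IX2 ON ORDERS (STATUS, CUSTNO, ORDDATE);

  --   ... WHERE STATUS = 'A' AND CUSTNO = :cust ORDER BY ORDDATE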

The cool thing about OS/400's query optimizer compared to other
platforms is that it's very easy to get diagnostic information. All you
have to do is enter STRDBG UPDPROD(*YES) before you start the program
(you can start a service job against batch jobs to make them do the same
thing, and there's also a setting you can change in the QAQQINI query
options file) and you'll get the diagnostic information in the job log.
Usually, when a temporary access path is created, there will be another
message in the job log giving an access path suggestion, and there will
always be at least one message explaining why it chose a particular index
over another.

Matt

-----Original Message-----
From: rpg400-l-bounces@xxxxxxxxxxxx
[mailto:rpg400-l-bounces@xxxxxxxxxxxx] On Behalf Of DeLong, Eric
Sent: Wednesday, February 01, 2006 3:59 PM
To: 'RPG programming on the AS400 / iSeries'
Subject: RE: Chaining with a very large file

Properly indexed, SQL will perform much like native I/O.  However, things
get complicated with joins and group by expressions, and the SQL optimizer
may decide on its whim to create a temporary access path.  When this
happens on a file with many rows, expect to wait for some time....  Many
factors can influence the optimizer to alter its "access plan", such as
cardinality of the result set or sub select...  SQL maintains statistics
about this stuff, and as part of its optimization, will review the source
table for changes to these items.
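
To illustrate the point about joins and GROUP BY (the tables and columns
below are invented for the example), indexes that cover the join and
grouping columns give the optimizer a chance to avoid building a temporary
access path over a large result set:

  -- Hypothetical query:
  --   SELECT H.CUSTNO, SUM(D.QTY)
  --     FROM ORDHDR H JOIN ORDDTL D ON D.ORDNO = H.ORDNO
  --    WHERE H.CUSTNO = :cust
  --    GROUP BY H.CUSTNO
  CREATE INDEX ORDHDR_IX1 ON ORDHDR (CUSTNO, ORDNO);
  CREATE INDEX ORDDTL_IX1 ON ORDDTL (ORDNO);

  -- Without indexes like these, the optimizer may build a temporary access
  -- path over the joined rows, which on a file with many rows can take a
  -- long time.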

Eric DeLong
Sally Beauty Company
MIS-Project Manager (BSG)
940-297-2863 or ext. 1863



-----Original Message-----
From: rpg400-l-bounces@xxxxxxxxxxxx
[mailto:rpg400-l-bounces@xxxxxxxxxxxx]On Behalf Of Mike Wills
Sent: Wednesday, February 01, 2006 11:53 AM
To: RPG programming on the AS400 / iSeries
Subject: Re: Chaining with a very large file


I was wondering why the iSeries would take that long. Shouldn't a properly
indexed file also be fast with SQL?

On 2/1/06, Art Tostaine, Jr. <atostaine@xxxxxxxxx> wrote:
>
> That's a small file.  It should get the record subsecond.  C'mon you
> serious?
>
> Art
>
> On 2/1/06, Douglas W. Palme <dpalme@xxxxxxxxxxx> wrote:
> >
> > My plan at this point would be to just handle it via straight RPG
> > access, via one of two possible logical files depending on the first
> > value.
> >
> > It would be interesting to know how long it would take to locate the
> > record with the chain.... 1 sec, 10 sec, etc., especially since this
> > will be an interactive process, or at least one portion of it will be.
> >
> > Thanks for the tip. I know SQL is out of the question for a file this
> > size and the machine we have.
> >
> > Douglas
> >
> >
> > On Wed, 1 Feb 2006 09:55:44 -0600, DeLong, Eric wrote
> > > Douglas,
> > >
> > > As long as you don't do things that try to build access paths
> > > dynamically (such as SQL or OPNQRYF) and you stick to keyed access
> > > in RPG, you should be fine.  I would think that breaking this down
> > > into smaller units would only increase the hassles of managing these
> > > file objects, especially if they'll all be joined into a common
> > > access path anyway....
> > >
> > > If you need to access this monster file via SQL, it's important to
> > > ensure that you have the correct indexes already built.  Performance
> > > tuning in SQL is sometimes a bit like sorcery or alchemy...  You
> > > might see what you want today, but tomorrow it won't work the same....
> > >
> > > Eric DeLong
> > > Sally Beauty Company
> > > MIS-Project Manager (BSG)
> > > 940-297-2863 or ext. 1863
> > >
> > > -----Original Message-----
> > > From: rpg400-l-bounces@xxxxxxxxxxxx
> > > [mailto:rpg400-l-bounces@xxxxxxxxxxxx] On Behalf Of Douglas W. Palme
> > > Sent: Wednesday, February 01, 2006 8:08 AM
> > > To: RPG Group
> > > Subject: Chaining with a very large file
> > >
> > > We may be faced with having to allow an interactive program to do a
> > > chain to a very large file; one with approximately 1.1 billion
> > > records and a file size of approximately 9.7gb of disk space.
> > >
> > > Are there any performance issues that I should take into
> > > consideration?  Should I consider breaking the data down into
> > > multiple files?
> > >
> > > The file will have a total of three numeric fields, all packed, 5
> > > digits for the first two and 4 digits for the third.
> > >
> > > Suggestions, thoughts, helpful hints.....would all be appreciated.
> > >
> > > If you bought it, it was hauled by a truck - somewhere, sometime.
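
For reference, here is a rough SQL sketch of the file Douglas describes
(three packed numeric fields, 5 digits for the first two and 4 for the
third; the table and column names are placeholders), together with a
composite index that would support a keyed, CHAIN-style lookup:

  CREATE TABLE BIGFILE (
    FLD1 DECIMAL(5, 0) NOT NULL,
    FLD2 DECIMAL(5, 0) NOT NULL,
    FLD3 DECIMAL(4, 0) NOT NULL
  );

  CREATE INDEX BIGFILE_IX1 ON BIGFILE (FLD1, FLD2, FLD3);

  -- With the index in place, a single-row lookup such as
  --   SELECT FLD3 FROM BIGFILE WHERE FLD1 = :k1 AND FLD2 = :k2
  -- should behave much like a native keyed CHAIN, even at 1.1 billion rows.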

