Hi Rick,

That scenario is a little trickier. QS only looks at individual queries,
not at the entire body of a procedure. It also depends on how unique the
signature of that troublesome SQL is. QS provides a number of knobs to let
you focus your thresholds down to a particular job/user/subsystem. As
well, there's a lot of information that gets passed to the exit program
that would allow it to do post-filtering before deciding to dump the
cached plan, etc. So I can imagine a situation where a particular workload
runs most queries most of the time in less than a second but you want to
log data about a query that departs from this normal behavior. In that
case you could set up a QS threshold for an elapsed time of three or five
seconds or whatever seems appropriate. You can also set thresholds for
I/O, CPU time or temp storage, if you suspect that overuse of those
resources is contributing to the troublesome query. In short, there's
probably some way to use QS to help with your situation, but it may
require some iterative tweaking (and more than the 15 minutes I advertised
in my earlier email.)
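
As a rough sketch of that kind of setup (the JOB_NAMES filter parameter and its format are from memory, so verify the exact parameter names against the ADD_QUERY_THRESHOLD documentation; RICKJOB is a made-up job name):

CALL QSYS2.ADD_QUERY_THRESHOLD(
THRESHOLD_NAME=>'Slow statements in the problem job',
THRESHOLD_TYPE=>'ELAPSED TIME',
THRESHOLD_VALUE=>5,        -- seconds; anything over this invokes the exit program
JOB_NAMES=>'RICKJOB',      -- assumed filter parameter; limits monitoring to this job name
LONG_COMMENT=>'Flag statements that normally finish in well under a second');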

Hope that helps!

Tim Clark
DB2 for IBM i / SQL Optimizer


"MIDRANGE-L" <midrange-l-bounces@xxxxxxxxxxxxxxxxxx> wrote on 07/13/2021
12:23:49 PM:

From: "Rick Rauterkus" <rick.a.rauterkus@xxxxxxxxxx>
To: "Midrange Systems Technical Discussion"
<midrange-l@xxxxxxxxxxxxxxxxxx>,
Date: 07/13/2021 12:24 PM
Subject: [EXTERNAL] Re: Analyzing RUNSQLSTMT jobs
Sent by: "MIDRANGE-L" <midrange-l-bounces@xxxxxxxxxxxxxxxxxx>

Sadly, still stuck at 7.2.

And not sure this would help. It's not a query running for 8 hours; it's a procedure with SQL that, instead of taking a split second to complete its task like normal, is taking several seconds. Called many thousands of times, that adds up to the job taking 8 hours instead of 2 minutes. But maybe it could find something like that?


On Tue, Jul 13, 2021 at 11:38 AM Timothy P Clark <timclark@xxxxxxxxxx> wrote:

I could not have set this up better if I were paying you... And now that you've pushed the soapbox conveniently into place, I'll take just a minute to get up on it and say that scenarios like this are *exactly* the reason that IBM delivered the Query Supervisor in the most recent TRs for IBM i 7.3 and 7.4.

With QS, you can set up a threshold for query elapsed time (e.g. 6 hours) and then have an exit program that dumps any information about the query or the job you may be interested in. For example, I would start by dumping the cached query plan for the query and then comparing it to a fast-running plan. Did the plan change? Was an index advised? You could also dump the job stack to show whether you may be waiting for a resource. Or start Job Watcher for the job, or dump Mimix status, or...

In fact, you can probably set this up in under 15 minutes...

CALL QSYS2.ADD_QUERY_THRESHOLD(
THRESHOLD_NAME=>'6 hour queries',
THRESHOLD_TYPE=>'ELAPSED TIME',
THRESHOLD_VALUE=>6*60*60,
LONG_COMMENT=>'Detects queries that run for more than 6 hours');
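
To review or clean up what you've registered, there is (if I remember the names right; verify against the docs) a QSYS2.QUERY_SUPERVISOR catalog listing the defined thresholds and a REMOVE_QUERY_THRESHOLD procedure to drop one:

-- See the thresholds currently defined
SELECT * FROM QSYS2.QUERY_SUPERVISOR;

-- Drop the threshold when you no longer need it
CALL QSYS2.REMOVE_QUERY_THRESHOLD(THRESHOLD_NAME=>'6 hour queries');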

Then go to
https://www.ibm.com/docs/en/i/7.4?topic=programs-exit-program-dump-plan-cache-information-query
and pick your favorite programming language. Copy and paste the sample program and build instructions.

Next time a query runs for 6 hours, you'll have a plan cache snapshot in the SUPERVISOR library containing only the query/queries that ran long. You can use the SQL Performance Center in ACS to import the snapshot and then Visual Explain the query.

And if you want to automatically kill the query, there is sample code for that, too:

https://www.ibm.com/docs/en/i/7.4?topic=programs-exit-program-end-query

Hope that helps!

Tim Clark
DB2 for IBM i / SQL Optimizer

"MIDRANGE-L" <midrange-l-bounces@xxxxxxxxxxxxxxxxxx> wrote on
07/13/2021
08:56:26 AM:

From: "Rick Rauterkus" <rick.a.rauterkus@xxxxxxxxxx>
To: "Midrange Systems Technical Discussion"
<midrange-l@xxxxxxxxxxxxxxxxxx>,
Date: 07/13/2021 08:57 AM
Subject: Re: [EXTERNAL] Re: Analyzing RUNSQLSTMT jobs
Sent by: "MIDRANGE-L" <midrange-l-bounces@xxxxxxxxxxxxxxxxxx>

We have had something very similar happen 3 or 4 times over the past year. Something that runs every day and takes about 2 minutes to complete (an inventory report to a customer) will suddenly one day take 8 hours to run. The job runs, and makes steady progress, but just really slowly. Looking at the call stack during a slow run, it shows the same spot almost every time: a procedure in a service program that is using embedded SQL to extract some data from a file. This procedure is called for every item. During the last long run, we killed the job and then immediately resubmitted it, and it ran in its normal 2 minutes.

The only theory we could muster is that when the first call to that procedure happened, the access path that the SQL normally used was not available for some reason, so it ended up with a much less efficient one and was stuck with it for the duration of the job. As to why it wouldn't be available, maybe the system was rebuilding the index? Maybe Mimix had it locked for whatever it was doing? We have found nothing to explain this either. We are considering changing the embedded SQL to RPG to see if the issue ever comes up again.


On Tue, Jul 13, 2021 at 7:58 AM Alan Shore via MIDRANGE-L <midrange-l@xxxxxxxxxxxxxxxxxx> wrote:

Hi everyone
Here is my latest quirk/conundrum
I ran the reorgs this past weekend and the SQL query that was causing me heartache ran EXTREMELY quickly on Sunday night, with no problems
However, the run on Monday night was STILL running in batch Tuesday morning – 11 hours later
Now here's the next kicker
I took a copy of the query that is being run, changed it to create a file in a different library, and ran it interactively
It finished in under 2 minutes
I have NO idea what to look at or even where to start
Does anyone have ANY ideas or hunches
As always – ALL answers gratefully accepted

Alan Shore
Solutions Architect
IT Supply Chain Execution


60 Orville Drive
Bohemia, NY 11716
Phone [O] : (631) 200-5019
Phone [C] : (631) 880-8640
E-mail : ASHORE@xxxxxxxxxxxxxxxxxxxx

‘If you're going through hell, keep going.’
Winston Churchill

From: MIDRANGE-L [mailto:midrange-l-bounces@xxxxxxxxxxxxxxxxxx] On Behalf Of Alan Shore via MIDRANGE-L
Sent: Tuesday, July 6, 2021 11:57 AM
To: Midrange Systems Technical Discussion <midrange-l@xxxxxxxxxxxxxxxxxx>
Cc: Alan Shore <ashore@xxxxxxxx>
Subject: RE: [EXTERNAL] Re: Analyzing RUNSQLSTMT jobs

Thanks for your reply Mark

Re-use deleted records is NOT set on the files that I am looking at. I would DEFINITELY need to perform a RE-ORG on 3 files in particular, THEN change them to re-use deleted records.



Here's another question - hopefully you know the answer.

On at least 2 of the 3 files, there are some join logicals over BOTH of these files.

I was thinking of running 3 separate jobs at the same time, doing a re-org on these 3 files.

However - because of the join logicals - will that cause some conflict between the re-orgs?




Alan Shore
Solutions Architect
IT Supply Chain Execution


60 Orville Drive
Bohemia, NY 11716
Phone [O] : (631) 200-5019
Phone [C] : (631) 880-8640
E-mail : ASHORE@xxxxxxxxxxxxxxxxxxxx

‘If you're going through hell, keep going.’
Winston Churchill

From: MIDRANGE-L [mailto:midrange-l-bounces@xxxxxxxxxxxxxxxxxx] On Behalf Of Mark Waterbury
Sent: Tuesday, July 6, 2021 11:52 AM
To: Alan Shore via MIDRANGE-L <midrange-l@xxxxxxxxxxxxxxxxxx>
Subject: Re: [EXTERNAL] Re: Analyzing RUNSQLSTMT jobs

Alan,

A "mass purge" of database records could impact your applications
performance if those physical files are not using REUSEDLT(*YES),
so
that a
RGZPFM is needed to reorganize the PF member(s) to reclaim the
space
used
by the deleted records.

After such a RGZPFM, any and all Logical File access paths over
those
PF
members need to be updated; if the Logical Files (or physical
files
with
keys) were not set to MAINT(*IMMED), this access path rebuild will
occur on
the next OPEN of that file. This could significantly slow down
your
applications, the first time each file is used after such a
"re-org."

HTH,

Mark S. Waterbury

On Tuesday, July 6, 2021, 11:30:44 AM EDT, Alan Shore via MIDRANGE-L <midrange-l@xxxxxxxxxxxxxxxxxx> wrote:

Thanks for your reply Vern
I am wading my way through the document that Jack sent, and it's a LOT of information to soak in
My problem is that a job that's presently running is still running from last night
It's been running for about 5 weeks or so - taking different times to complete - but NOTHING like it is today
However - in my analysis I have discovered that I believe a purge was recently run on a couple of the files
There are quite a number of deleted rows in these files
My question is - would the deleted rows impact SQL queries?
Forgive my ignorance as I haven't had a decent night's sleep in quite a few days and I'm not thinking straight
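
A catalog query along these lines will show the deleted-row counts per file (QSYS2.SYSTABLESTAT and its column names are from memory, so verify them against the documentation; MYLIB is a placeholder library name):

-- List tables in a library with their active and deleted row counts
SELECT TABLE_SCHEMA, TABLE_NAME, NUMBER_ROWS, NUMBER_DELETED_ROWS
  FROM QSYS2.SYSTABLESTAT
 WHERE TABLE_SCHEMA = 'MYLIB'
 ORDER BY NUMBER_DELETED_ROWS DESC;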

Alan Shore
Solutions Architect
IT Supply Chain Execution

60 Orville Drive
Bohemia, NY 11716
Phone [O] : (631) 200-5019
Phone [C] : (631) 880-8640
E-mail : ASHORE@xxxxxxxxxxxxxxxxxxxx

'If you're going through hell, keep going.' --Winston Churchill





