"database would have the capability to make temporary indexes persistent
for a given amount of storage."

That's already been done; take a look at "Maintained Temporary Indexes
(MTI)", available starting at v5r4 I believe...
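On recent releases you can see MTI activity through the system-wide index advisor catalog, since the advisor records when an advised index has existed as an MTI. A sketch (the QSYS2.SYSIXADV column names below should be verified against your release's documentation):

```sql
-- Which tables is the optimizer building Maintained Temporary Indexes over?
-- QSYS2.SYSIXADV is the system-wide index advisor table; the MTI_* columns
-- report MTI creation and use for each piece of advice.
SELECT TABLE_SCHEMA, TABLE_NAME, KEY_COLUMNS_ADVISED,
       TIMES_ADVISED, MTI_CREATED, MTI_USED, LAST_ADVISED
  FROM QSYS2.SYSIXADV
 WHERE MTI_CREATED > 0
 ORDER BY MTI_USED DESC
 FETCH FIRST 20 ROWS ONLY;
```

Rows with high MTI_USED counts are good candidates for permanent indexes, since the optimizer is repeatedly paying to rebuild them as temporaries.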

I'd agree that VE is mostly useful if you have a specific query you're
trying to improve...though I've also had success using it on classes of
queries. But certainly it takes some manual analysis and understanding.

For a BI environment, the "SQL Plan cache" and "Index advisor" are your
best tools. One key to making the most of them is to clear them regularly.
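Clearing the index advisor is just a row delete, since the advisor is an ordinary table; a sketch (plan cache snapshots are managed separately, e.g. through System i Navigator):

```sql
-- Reset the system-wide index advisor so that new advice stands out.
DELETE FROM QSYS2.SYSIXADV;

-- After the workload has run for a while, review what accumulated,
-- most frequently advised first:
SELECT TABLE_SCHEMA, TABLE_NAME, KEY_COLUMNS_ADVISED, TIMES_ADVISED
  FROM QSYS2.SYSIXADV
 ORDER BY TIMES_ADVISED DESC;
```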

Charles

On Thu, May 30, 2013 at 10:18 AM, D*B <dieter.bender@xxxxxxxxxxxx> wrote:

@vernon:

if you have to deal with tables containing hundreds of millions of records
each, in a mixed environment with a BI front end generating SQL statements
on the fly and query users going after the same tables (sometimes with
strange statements like incomplete join conditions or strange ORDER BY
criteria), index advisor files and plan cache snapshots are absolutely
useless, because you can't see where the statements were coming from. In
an environment with thousands of SQL statements Visual Explain is
absolutely useless too: new and altered statements can arise faster than
ten people could examine them - so I call these toys for people with
too much time, very nice for playing around.

Putting the QZDAxxx jobs into debug is no big issue and doesn't harm the
system too much, and having a look at long-running jobs often brings
solutions in very short time. If IBM marketing is saying this is
deprecated (I don't believe so), maybe the marketing of IBM is deprecated.
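The debug approach described above comes down to two CL commands against the server job; a sketch (the job number and user below are placeholders):

```cl
/* Service the remote SQL server job (number/user/name are examples). */
STRSRVJOB  JOB(123456/QUSER/QZDASOINIT)

/* Start debug. UPDPROD(*YES) lets the job keep updating production    */
/* files. With debug active, the query optimizer writes its decisions  */
/* (access paths used, table scans, advised indexes) as CPI43xx        */
/* messages into the joblog.                                           */
STRDBG     UPDPROD(*YES)

/* ...let the workload run, then inspect the joblog... */
DSPJOBLOG  JOB(123456/QUSER/QZDASOINIT)

/* Clean up. */
ENDDBG
ENDSRVJOB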

The tool of first choice (in my experience) is the database monitor, and
the only way to analyze the results is OoopsNerv (it's a nightmare using
the German versions most of my customers provide to me! The German
versions are done by automatic translation, and a new version normally
crashes because the translator destroys the SQL statements...). If there
were something better, I would use it, but there isn't. Much better would
be tools doing some analysis automatically, or a database with the
capability to make temporary indexes persistent for a given amount of
storage.
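For reference, starting a monitor and pulling out the heaviest statements looks roughly like this; the STRDBMON parameters are standard, but the monitor-file column meanings (especially QQI6 as elapsed time in the 1000 record) should be checked against your release's database monitor file layout:

```sql
-- Started from CL:  STRDBMON OUTFILE(MYLIB/DBMON) JOB(*ALL) TYPE(*DETAIL)
-- ...run the workload, then ENDDBMON...

-- Record type 1000 carries one row per SQL statement; QQ1000 holds the
-- statement text. MYLIB/DBMON is a placeholder outfile name.
SELECT QQ1000 AS statement_text,
       QQI6   AS elapsed_time      -- verify this column on your release
  FROM MYLIB.DBMON
 WHERE QQRID = 1000
 ORDER BY QQI6 DESC
 FETCH FIRST 20 ROWS ONLY;
```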

Just a little example: recently we had some queries against a rather
complex view that didn't come back within two days. Some colleagues looked
at the index advisor file; lots of (useless) EVIs were recommended. The
next step was Visual Explain: after playing around for hours they didn't
find a solution. Ten minutes of reading joblogs and ten minutes to replace
two time views (which join a time_id to the last date of the month and the
last date of the week) with denormalized tables of a few hundred records
brought the response time down to a few seconds.
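The fix described above amounts to materializing the time views into a small table joined by time_id. A hypothetical sketch (all names invented; LAST_DAY and DAYOFWEEK are DB2 for i scalar functions):

```sql
-- Replace computed "last date of month/week" views with a small
-- denormalized calendar table.
CREATE TABLE MYLIB.TIME_DIM (
    TIME_ID        INTEGER NOT NULL PRIMARY KEY,
    CAL_DATE       DATE    NOT NULL,
    LAST_OF_MONTH  DATE    NOT NULL,
    LAST_OF_WEEK   DATE    NOT NULL
);

-- Populate once from the existing source of dates. The week-end
-- arithmetic assumes weeks ending on Saturday (DAYOFWEEK: 1 = Sunday);
-- adjust for your calendar.
INSERT INTO MYLIB.TIME_DIM
SELECT TIME_ID,
       CAL_DATE,
       LAST_DAY(CAL_DATE),
       CAL_DATE + (7 - DAYOFWEEK(CAL_DATE)) DAYS
  FROM MYLIB.TIME_SOURCE;
```

Queries then join to TIME_DIM on TIME_ID instead of going through the view, so the optimizer works against a few hundred stored rows rather than re-deriving the dates per row.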

As a conclusion of this:
- First you have to know where the problem is located (which SQL statement
exactly is consuming the time).
- Then you can start to solve the problem (mostly by creating an index or
denormalizing some data).

D*B

