Hi Roger

Is your development an exact mirror? All data duplicated? Workload the same?

The optimizer will change its choices based on the entire environment - I don't know the specifics, so the best I can say is maybe. If there is almost no work being done on the development box, it might not give you the same results as production.

In current versions there is the plan cache - you can use Visual Explain against statements in there, and there might be enough to tell you what the optimizer decided - on each box - then compare.
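
If it helps, a rough sketch of pulling a plan cache snapshot with SQL so you can compare the two boxes - this assumes the QSYS2.DUMP_PLAN_CACHE procedure is available at your release level, and MYLIB/PLANSNAP is just a made-up library/file name:

    -- Dump the current plan cache into a monitor-format file you can query
    -- (or analyze with the graphical tools)
    CALL QSYS2.DUMP_PLAN_CACHE('MYLIB', 'PLANSNAP');

    -- Quick look at what was captured; check the actual column names in the
    -- generated file before relying on any of them
    SELECT * FROM MYLIB.PLANSNAP;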

Good luck - hope I'm not being too pessimistic! What you are doing is the thing to do, I think, with some cautions.

Regards
Vern

On 2/18/2018 5:38 PM, Robert Rogerson wrote:
Thanks for all the replies.

With the help of all the replies and further research on my part I think I've found three areas for improvement.

First, by running a database monitor for QZDASOINIT for a short time I was able to see that some full table scans are being performed.
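
To make that concrete, this is roughly the kind of query that shows it from the monitor outfile - MYLIB/DBMONDTA is a made-up name, and the record ID and column names are from memory of the monitor file layout, so verify them against your own file:

    -- QQRID 3000 rows record arrival-sequence (full table scan) access
    SELECT QQPTLN AS TABLE_LIBRARY,
           QQPTFN AS TABLE_NAME,
           QQIDXA AS INDEX_ADVISED,
           COUNT(*) AS SCAN_COUNT
      FROM MYLIB.DBMONDTA
     WHERE QQRID = 3000
     GROUP BY QQPTLN, QQPTFN, QQIDXA
     ORDER BY SCAN_COUNT DESC;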

Second, by checking the external programs called by the stored procedures, I saw that some were compiled with actgrp(*caller), and this would result in the external program and any programs it calls (assuming they are defined as *caller as well) running in the DAG (default activation group).
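
As an aside, a sketch of checking the activation group attribute in bulk, assuming the QSYS2.PROGRAM_INFO catalog exists at your release level - MYLIB is a placeholder and the column names are from memory, so confirm them before use:

    -- List the activation group attribute of the programs in one library
    SELECT PROGRAM_LIBRARY,
           PROGRAM_NAME,
           ACTIVATION_GROUP
      FROM QSYS2.PROGRAM_INFO
     WHERE PROGRAM_LIBRARY = 'MYLIB'
     ORDER BY PROGRAM_NAME;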

Third, I saw that if the actgrp was not *caller it was *new.

So for the first issue, I thought this could be fixed by creating the indexes suggested by the DB monitor; IIRC, even if an index is created, the optimizer still may not use it.
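
Just to make that fix concrete, the kind of statement I have in mind looks like this - ORDERS and its columns are invented stand-ins for whatever table and columns the monitor actually advises:

    -- Hypothetical index over the selection columns the monitor flagged
    CREATE INDEX MYLIB.ORDERS_CUST_STATUS_IX
        ON MYLIB.ORDERS (CUSTOMER_ID, ORDER_STATUS);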

With that in mind, our setup is that we have a development box that mirrors production. I thought the best way to test whether we would benefit from a new index would be to start a DB monitor on the dev box, create the index, and then, using the values captured by the DB monitor on the production box, call the stored procedure on the dev box and check the dev box's DB monitor results to see if the full table scan was eliminated. If it was eliminated, then create the index on the production box.
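
For the before/after check I'd expect something as simple as counting the scan records in each monitor collection - DEVLIB/DBMONDEV is again a made-up outfile name and the record ID is from memory:

    -- Run once against the "before" monitor data and once against the "after";
    -- the full table scan (QQRID 3000) rows should disappear if the new index is used
    SELECT QQPTFN AS TABLE_NAME,
           COUNT(*) AS FULL_TABLE_SCANS
      FROM DEVLIB.DBMONDEV
     WHERE QQRID = 3000
     GROUP BY QQPTFN;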

Does this sound right?

For the second issue, change the activation group of the external program so that it is not *caller. The new value depends on the answer to the third issue.

For the third issue I'd appreciate clarification that my understanding is correct.

First, a little background. We have a web server on the IBM i but as far as I know (and I'll confirm this for myself) it only serves responses for stored procedures. Both our intranet and internet sites are hosted on the Microsoft platform and access data on the IBM i by calling stored procedures. I will also assume that any programs called by the external program behind the stored procedure are compiled with *caller (again, I will need to confirm this).
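
For anyone not familiar with this kind of setup, the stored procedures are essentially thin wrappers over ILE programs, registered something like the following (the procedure, program and parameter names are invented for illustration):

    -- A stored procedure that simply fronts an external RPG program
    CREATE PROCEDURE MYLIB.GET_CUSTOMER (
        IN  P_CUSTOMER_ID   INTEGER,
        OUT P_CUSTOMER_NAME VARCHAR(50)
    )
    LANGUAGE RPGLE
    PARAMETER STYLE GENERAL
    EXTERNAL NAME 'MYLIB/GETCUST';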

There are many QZDASOINIT jobs for user WEBUSER that are in TIMW status just waiting for a stored procedure to be called.  For now, I only want to think about one job.
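
For what it's worth, the same picture can be had from SQL instead of WRKACTJOB, assuming the QSYS2.ACTIVE_JOB_INFO service exists at your release level; the parameter and column names here are from memory, so double-check them:

    -- Show the QZDASOINIT prestart jobs and how much CPU each one is using
    SELECT JOB_NAME,
           AUTHORIZATION_NAME,
           JOB_STATUS,
           ELAPSED_CPU_PERCENTAGE
      FROM TABLE(QSYS2.ACTIVE_JOB_INFO(JOB_NAME_FILTER => 'QZDASOINIT')) X
     ORDER BY ELAPSED_CPU_PERCENTAGE DESC;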

When the server job starts processing the stored procedure/external program, an activation group is created. This happens whether the program specifies *new or a named activation group. I guess I need clarification that the end of the external program is also the end of that activation group - that the named activation group is destroyed and doesn't persist for the life of the server (QZDASOINIT) job, but rather only for the life of the external program.

So if *new or a named activation group is specified for the external program, each time it processes a stored procedure it will create and then destroy the activation group. My understanding, then, is that if all programs called by the external program specify *caller and the external program doesn't call itself, there wouldn't be any difference (under the previous circumstances) between *new and a named activation group: in both cases an activation group is created when the external program starts and destroyed when the external program ends.

So our existing strategy of the initial program using *new should be acceptable.

Thanks again,

Rob



On 2/18/2018 2:41 AM, D*B wrote:
<Rob>
We have multiple stored procedures called from dotNet applications. Periodically the CPU percentage spikes, and when I do a WRKACTJOB and sort by CPU %, QZDASOINIT jobs are at the top of the list.
These jobs run in the *DFTACTGRP (do they have to, or should they?).
Here is where I want to pick the group's brain.
A program does not specify any keywords on the F-specs and sets on *INLR at the end of the program.  So does this mean that each time the stored procedure runs it must open and close the files? My understanding is that opening and closing files is expensive.
My understanding is that if a file is specified as static in a service program it is opened when the service program is first called and remains open until the activation group of the service program ends.
I use this approach in many of my service programs so the file is opened only once and closed when the activation group ends. But I'm not sure this is the right approach for QZDASOINIT jobs as they run in the *DFTACTGRP, which means the file won't be closed until the job ends, and these jobs remain active for a long time.
Am I looking in the wrong place for potential performance gains? Is there a best practice for handling files in external programs called by stored procedures?
Thanks for your insight and recommendations.
</Rob>

@high cpu percentage: With SQL, high CPU% is caused by building a missing index on the fly, or by full table scans, and has to be solved by database design: adding the needed indexes or denormalization (e.g. pre-aggregation of summarized values). There might be cases where high CPU usage and long-running queries can be tolerated (e.g. in a BI environment).
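
A sketch of the pre-aggregation idea, with invented table and column names:

    -- Pre-aggregated summary table so the heavy GROUP BY runs once, not per request
    CREATE TABLE MYLIB.SALES_SUMMARY AS
        (SELECT CUSTOMER_ID,
                SUM(AMOUNT) AS TOTAL_AMOUNT,
                COUNT(*)    AS ORDER_COUNT
           FROM MYLIB.SALES
          GROUP BY CUSTOMER_ID)
        WITH DATA;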

@activation groups: here we are talking about sub-microseconds, and for remote access you have no way to control the ACTGRP of the system job serving the remote connection. Best practice is to let your user programs run in *CALLER (otherwise you would lose transaction safety!)

@wrong place: for sure!!!

@open close: If your client apps follow best practices, they use a connection pool, and your stored procedures should free all resources (with SQL the database engine would cache it anyway).

@stored procedures: if your stored procedures are returning result sets, it might all be working as designed. Returning result sets from stored procedures and UDTFs was introduced to make IBM and Oracle & Co. happy.
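
For completeness, the result set pattern being referred to looks roughly like this when written as an SQL procedure (names are invented):

    -- SQL procedure returning a result set to the caller via a WITH RETURN cursor
    CREATE PROCEDURE MYLIB.LIST_OPEN_ORDERS ()
        DYNAMIC RESULT SETS 1
        LANGUAGE SQL
    BEGIN
        DECLARE C1 CURSOR WITH RETURN FOR
            SELECT ORDER_ID, CUSTOMER_ID, ORDER_TOTAL
              FROM MYLIB.ORDERS
             WHERE ORDER_STATUS = 'OPEN';
        OPEN C1;
    END;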

D*B




