Clutching at straws here...

Is there enough memory in the subsystem? No excessive paging showing up during the one-minute run?
Does any excessive disk activity show up during that minute? (Commands to watch both are sketched below.)
Any indexes or EVIs on the work file?
Batch or interactive? Batch may perform better based on time slice.
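To watch both while the slow transaction runs, the standard commands are enough. A minimal sketch:

/* Pool faulting rates while the transaction is running */
WRKSYSSTS
/* Disk unit percent-busy over the same window          */
WRKDSKSTS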

And which OS version?

Could the CQE be getting invoked instead of the SQE?
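One way to check, assuming a release where STRDBMON is available: trace the job, run the slow transaction, and see which engine handled each statement. A rough sketch (the QQC16 test is from memory, so verify the monitor record layout on your release):

/* Collect a database monitor trace for this job */
STRDBMON OUTFILE(QTEMP/DBMON) JOB(*)
/* ... run the 300-statement transaction ...     */
ENDDBMON JOB(*)
/* Then query the 1000 records; QQC16 = 'Y' should mean the */
/* statement ran under SQE, 'N' that it fell back to CQE:   */
/*   SELECT QQC16, QQ1000 FROM QTEMP/DBMON WHERE QQRID = 1000 */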

Sam

On 5/29/2013 3:48 PM, Peter Connell wrote:
I have an SQL script that runs considerably faster on SQL Server than on System i.

I have 300 SQL statements which must be run each time a transaction is requested by the client, and the response is slow. It is problematic to convert these 300 to native RPGLE, which I know would perform well.
The client makes a typical inquiry against a consumer database and a report is generated from matching detail entries on several database files. The report was developed long ago in RPGLE.
But now the RPGLE creates a work file of a few hundred values deemed significant, derived from the same data source. The work file is repopulated for each client transaction and its content depends on each new client request, so it differs each time.

Each of the 300 SQL statements (which in fact are supplied by a 3rd party) defines a separate business characteristic that uses SQL to join the work file entries to a predefined table of attributes and return a summary result value.
The net result is that, in addition to the report, a further 300 characteristic values can be returned that relate to the specific client request.
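Each of them is shaped roughly like this (WORKFILE, ATTRTBL, and the column names here are stand-ins for illustration, not the real 3rd-party names):

SELECT COUNT(*)
  FROM WORKFILE W
  JOIN ATTRTBL A
    ON A.KEYVAL = W.KEYVAL
 WHERE A.CHARACTERISTIC = 'SOME_CODE'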
Unfortunately, all 300 statements can take about a minute to complete which is too slow. However, if the same transaction is repeated then the same result can be returned in about 5 seconds.
Diagnostics from running in debug mode show that a repeated request reuses ODPs. Unfortunately, when a new work file is created for the next client transaction, these ODPs get deleted and the transaction is slow again.

I've tried playing around with activation groups, shared opens via OVRDBF, and running a prerequisite OPNDBF, but had no success in reducing the transaction time.
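The shared-open attempt was along these lines (WORKFILE is a stand-in name; the override was issued before any of the SQL ran):

/* Share a single ODP for the work file across the job */
OVRDBF FILE(WORKFILE) SHARE(*YES) OVRSCOPE(*JOB)
/* Pre-open the file so later opens attach to that ODP */
OPNDBF FILE(WORKFILE) OPTION(*ALL)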
I am perplexed by the fact that we have a developer familiar with SQL Server who has demonstrated that he can easily create a script that runs all 300 statements on SQL Server in just a few seconds with no performance problems. Why is that?

Peter

