


Peter:

Could we please see a "sample" of what the SQL Server script is doing? It is really hard to tell exactly what might be going on without seeing at least a sample of the "script" in question. Perhaps you could create a "mock up" -- a "cut-down" version that uses dummy table names and dummy column names -- so you can post it to the list? (You can also post such "source code" at http://code.midrange.com and then just post the generated URL in an e-mail reply to the list.)

Anyway, here is what /I/ think is likely happening:

Way back when, in the early days of the AS/400, many people were trying to migrate their applications from IBM System/36 SSP to the then-new AS/400, and were greatly disappointed by the seeming lack of performance. It turns out that many System/36 SSP applications made heavy use of "work files": they dynamically created a work file in the OCL, then called an RPG II program to copy some data records into it, then invoked the S/36 SORT utility to sort it, and then called another RPG II program to do more processing, such as generating a printed report from the data in the work file. (You get the idea ...)

It turned out that method of doing things was just about the "worst case" possible for an OS/400 application. Creating objects (e.g. a "work file") incurs a lot of overhead in OS/400.

So, the solution was to pre-allocate these "work files" (create them ahead of time), and use separate members for each job that needed to use that work file, if more than one job stream needed to use the same work files at the same time. It is much faster and less resource-intensive to do a CLRPFM or ADDPFM on an already existing physical file than it is to create a new physical file "from scratch" (e.g. from DDS source, or even using CRTDUPOBJ).
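To make that concrete, here is a rough CL sketch of the pre-allocated work-file approach. All object, library, and variable names are invented for illustration, and the error handling is simplified:

```
/* WRKFILE already exists in APPLIB; each job just gets its own    */
/* member (named after the job number) instead of creating a new   */
/* physical file from scratch.                                     */
ADDPFM     FILE(APPLIB/WRKFILE) MBR(&JOBNBR)
MONMSG     MSGID(CPF7306) EXEC(CLRPFM FILE(APPLIB/WRKFILE) +
             MBR(&JOBNBR))   /* member already there: just clear it */
OVRDBF     FILE(WRKFILE) TOFILE(APPLIB/WRKFILE) MBR(&JOBNBR)
CALL       PGM(APPLIB/FILLWRK)   /* RPG program fills the member    */
/* ... rest of the job stream runs against the same member ...      */
RMVM       FILE(APPLIB/WRKFILE) MBR(&JOBNBR)   /* optional cleanup  */
```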

The same is still true today for DB2/400 using SQL DDL on IBM i -- creating a new table is a relatively expensive operation, especially if your script is doing lots of CREATE TABLE, then a bunch of processing, then a DROP TABLE ...
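As a hedged sketch of the difference (table and column names invented):

```
-- Expensive when done once per transaction on IBM i:
CREATE TABLE QTEMP.WORKVALS (CUSTID INT, VAL DEC(11,2));
-- ... populate, run the 300 statements, then ...
DROP TABLE QTEMP.WORKVALS;

-- Usually much cheaper: create APPLIB.WORKVALS once, ahead of time,
-- and just empty and refill it between transactions:
DELETE FROM APPLIB.WORKVALS;     -- or CLRPFM on the underlying file
INSERT INTO APPLIB.WORKVALS (CUSTID, VAL) SELECT ... ;
```

Keeping the table (and its access paths) around also gives the database a stable object against which ODPs can be reused.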

Similarly, it was usually far better to create a logical file over the work file to define the needed sequence (access path), rather than invoking a stand-alone SORT utility as a separate job step.
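For example (illustrative names only), the sequence could come from a keyed logical file, or its SQL-index equivalent, built once over the work file rather than re-sorted each time:

```
/* DDS route: a keyed logical file over the work file */
CRTLF      FILE(APPLIB/WRKFILL1) SRCFILE(APPLIB/QDDSSRC) SRCMBR(WRKFILL1)

-- or the SQL equivalent over a work table:
-- CREATE INDEX APPLIB.WORKVALS_IX1 ON APPLIB.WORKVALS (CUSTID, TRANDATE);
```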

I suspect that with your "SQL server" script scenario, by just trying to "port" it to run on IBM i, you may be suffering much the same fate as those S/36 programmers who were just trying to "port" their S/36 applications straight over to OS/400, without really understanding what is actually going on, "under the covers."

I hope this helps,

Mark S. Waterbury

> On 5/29/2013 3:48 PM, Peter Connell wrote:
I have an SQL script that runs considerably faster on SQL Server than on System i.

I have 300 SQL statements which must be run each time a transaction is requested by the client and the response is slow. It is problematic to convert these 300 to native RPGLE which I know would perform well.
The client makes a usual inquiry against a consumer database and a report is generated from matching detail entries on several database files. The report was developed long ago with RPGLE.
But now the RPGLE creates a work file of a few hundred values deemed as significant which are derived from the same data source. The work file is repopulated for each client transaction and its content depends on each new client request, so it differs each time.

Each of the 300 SQL statements (which in fact are supplied by a 3rd party) defines a separate business characteristic that uses SQL to join the work file entries to a predefined table of attributes and return a summary result value.
The net result is that, in addition to the report, a further 300 characteristic values can be returned that relate to the specific client request.
Unfortunately, all 300 statements can take about a minute to complete which is too slow. However, if the same transaction is repeated then the same result can be returned in about 5 seconds.
Diagnostics from running in debug mode show that a repeated request reuses ODPs. Unfortunately, when a new work file is created for the next client transaction, these ODPs get deleted and the transaction is slow again.

I've tried playing around with activation groups, shared opens via OVRDBF, and running a pre-requisite OPNDBF, but had no success in reducing the transaction time.
I am perplexed by the fact that we have a developer familiar with SQL Server who has demonstrated that he can easily create a script that runs all 300 statements on SQL Server in just a few seconds with no performance problems. Why is that?

Peter

-----Original Message-----
From: midrange-l-bounces@xxxxxxxxxxxx [mailto:midrange-l-bounces@xxxxxxxxxxxx] On Behalf Of Hoteltravelfundotcom
Sent: Thursday, 30 May 2013 12:38 a.m.
To: Midrange Systems Technical Discussion
Subject: Performance question and SQL SERVER

Has anyone ever ported AS/400 data to SQL Server? Why would you do that? Because the company has some extra servers that could be used either for replication or for Windows-based reporting tools.

If so, are there tools or products for this, or is it something one can do on their own?
--
This is the Midrange Systems Technical Discussion (MIDRANGE-L) mailing list.
To post a message, email: MIDRANGE-L@xxxxxxxxxxxx
To subscribe, unsubscribe, or change list options,
visit: http://lists.midrange.com/mailman/listinfo/midrange-l
or email: MIDRANGE-L-request@xxxxxxxxxxxx
Before posting, please take a moment to review the archives at http://archive.midrange.com/midrange-l.


