


Sounds more like the access plan is created each time through.
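
A quick way to confirm that is to run the job under debug and watch the optimizer messages in the job log. A sketch in CL (the message IDs are quoted from memory, so verify them on your release):

```
STRDBG  UPDPROD(*YES)   /* makes the query optimizer log its decisions    */
/* ...run the transaction, then check the job log with DSPJOBLOG for     */
/* CPI4321 "Access path built for file..." (temporary index on the fly)  */
/* and CPI4323 "The query access plan has been rebuilt".                 */
ENDDBG
```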




On Thu, May 30, 2013 at 8:49 AM, Matt Olson <Matt.Olson@xxxxxxxx> wrote:

So I guess it's the age-old question: what kind of hardware are you running on your
DB2 box versus the SQL box?

1. How do the CPUs compare between the two?
2. How do the memory sizes compare between the SQL box and the DB2 box?
3. How many disk arms do you have on your SQL box versus the DB2 box?
For instance, if your SQL box is running all SSD drives and you're running
15K RPM drives on your IBM i, you will have no hope of beating the
SQL box.

However, maybe I'm looking in the wrong places, because you stated that it
runs in less than 5 seconds if you run it a second time, which suggests
indexes are being created on the fly the first time you query it.

-----Original Message-----
From: Peter Connell [mailto:Peter.Connell@xxxxxxxxxx]
Sent: Wednesday, May 29, 2013 3:42 PM
To: Midrange Systems Technical Discussion
Subject: RE: Performance question and SQL SERVER

Matt,

The files were created via SQL. The fixed one has < 2000 records. The
transaction file < 1500.
We have in fact used Index Advisor to create quite a few indexes over both
the transaction and fixed files, but it's still slow.
Each of the 300 SQL statements is different. Many are differentiated by
complex summary clauses.
The work (transaction) file is a necessary part of the design, which in
fact originated on SQL Server. It contains a few hundred attribute values
which must be joined to the fixed file.
Basically, each statement calculates a score by selecting certain
attribute values from the work file and matching them against predefined
values in the fixed file.
Since the work file is populated before the 300 SQL statements commence,
its population is not a factor in the time they take to complete.

Peter

-----Original Message-----
From: midrange-l-bounces@xxxxxxxxxxxx [mailto:
midrange-l-bounces@xxxxxxxxxxxx] On Behalf Of Matt Olson
Sent: Thursday, 30 May 2013 8:03 a.m.
To: Midrange Systems Technical Discussion
Subject: RE: Performance question and SQL SERVER

So you have a 300-statement SQL script that runs fast on SQL Server, but
the same script is slow on DB2 when you run it? Be careful what you
say on these forums; you'll get boycotted for such blasphemy by the "IBM i
is the best server on the planet" folks.

Joking aside, here are a few things I can think of:

1. Are you running the SQL statements against DDS-described files in DB2?
If so, this might be a performance problem: there are likely no SQL
indexes on those files, and DDL-described files can use much larger page
sizes (64KB, if I recall), so your problem might be there.

2. Are the SQL statements really the same? It seems to me you might be
comparing SQL statement execution to an RPGLE program doing the old
CHAIN/SETLL style of record-level access, which is nearly always
going to be slower than doing it via set-based SQL statements.

3. Why are you making a work file? Does the SQL Server equivalent create
work files as well? If you have to delete and re-create them each time,
perhaps that's where the extra time is being consumed.
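
On point 1, the usual fix is to give the optimizer permanent SQL indexes (the Index Advisor can suggest which). A hypothetical example, with library, table, and column names invented for illustration:

```sql
-- Hypothetical names: substitute your actual work file and join columns.
-- A permanent SQL index over the columns used in the join and WHERE
-- clauses saves the optimizer from building a temporary access path
-- on the first run of each statement.
CREATE INDEX MYLIB.WORKFILE_IX1
    ON MYLIB.WORKFILE (ATTR_ID, ATTR_VALUE);
```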

Matt

-----Original Message-----
From: Peter Connell [mailto:Peter.Connell@xxxxxxxxxx]
Sent: Wednesday, May 29, 2013 2:48 PM
To: Midrange Systems Technical Discussion
Subject: RE: Performance question and SQL SERVER

I have an SQL script that runs considerably faster on SQL Server than
on System i.

I have 300 SQL statements which must be run each time a transaction is
requested by the client, and the response is slow. It would be problematic
to convert these 300 to native RPGLE, which I know would perform well.
The client makes a typical inquiry against a consumer database, and a report
is generated from matching detail entries on several database files. The
report was developed long ago in RPGLE.
But now the RPGLE also creates a work file of a few hundred values deemed
significant, derived from the same data source. The work file is
repopulated for each client transaction, and its content depends on each
new client request, so it differs each time.

Each of the 300 SQL statements (which are in fact supplied by a third party)
defines a separate business characteristic: it uses SQL to join the work
file entries to a predefined table of attributes and returns a summary
result value.
The net result is that, in addition to the report, a further 300
characteristic values can be returned that relate to the specific client
request.
Unfortunately, running all 300 statements can take about a minute, which
is too slow. However, if the same transaction is repeated, the same
result can be returned in about 5 seconds.
Diagnostics from running in debug mode show that a repeated request reuses
ODPs (open data paths). Unfortunately, when a new work file is created for
the next client transaction, these ODPs get deleted and the transaction is
slow again.
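
One thing that sometimes helps here (a sketch, assuming the work file's layout is identical for every transaction) is to keep one permanent work file and empty it between transactions rather than dropping and re-creating it, so the full opens behind the ODPs are not discarded:

```sql
-- Sketch with invented names: reuse one permanent work file.
-- DELETE empties the rows but leaves the file object (and any
-- pseudo-closed ODPs over it) in place for the next transaction.
DELETE FROM MYLIB.WORKFILE;

-- Repopulate for the next client request; the hypothetical SOURCE
-- table stands in for however the attribute values are derived.
INSERT INTO MYLIB.WORKFILE (ATTR_ID, ATTR_VALUE)
    SELECT ATTR_ID, ATTR_VALUE
    FROM   MYLIB.SOURCE
    WHERE  CLIENT_ID = ?;
```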

I've tried playing around with activation groups, shared opens via OVRDBF,
and running a prerequisite OPNDBF, but had no success in reducing
the transaction time.
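
For reference, the shared-open experiment described above would typically look something like this in CL (library and file names invented):

```
OVRDBF  FILE(WORKFILE) SHARE(*YES)         /* share one ODP per open     */
OPNDBF  FILE(MYLIB/WORKFILE) OPTION(*ALL)  /* full open, held for reuse  */
/* ...run the 300 SQL statements; later opens reuse the shared ODP...    */
CLOF    OPNID(WORKFILE)                    /* close when done            */
DLTOVR  FILE(WORKFILE)
```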
I am perplexed by the fact that we have a developer familiar with SQL
Server who has demonstrated that he can easily create a script that runs
all 300 statements on SQL Server in just a few seconds with no performance
problems. Why is that?

Peter

-----Original Message-----
From: midrange-l-bounces@xxxxxxxxxxxx [mailto:
midrange-l-bounces@xxxxxxxxxxxx] On Behalf Of Hoteltravelfundotcom
Sent: Thursday, 30 May 2013 12:38 a.m.
To: Midrange Systems Technical Discussion
Subject: Performance question and SQL SERVER

Has anyone ever ported AS/400 data to a SQL Server? Why would you do that?
Because the company has some extra servers that could be used either for
replication or for Windows-based reporting tools.

If so, are there tools or products for this, or is it something one can do
on one's own?
--
This is the Midrange Systems Technical Discussion (MIDRANGE-L) mailing list.
To post a message, email: MIDRANGE-L@xxxxxxxxxxxx
To subscribe, unsubscribe, or change list options,
visit: http://lists.midrange.com/mailman/listinfo/midrange-l
or email: MIDRANGE-L-request@xxxxxxxxxxxx
Before posting, please take a moment to review the archives at http://archive.midrange.com/midrange-l.


#####################################################################################

This correspondence is for the named person's use only. It may contain
confidential or legally privileged information, or both. No confidentiality
or privilege is waived or lost by any mistransmission. If you receive this
correspondence in error, please immediately delete it from your system and
notify the sender. You must not disclose, copy or rely on any part of this
correspondence if you are not the intended recipient.
Any views expressed in this message are those of the individual sender,
except where the sender expressly, and with authority, states them to be
the views of Veda.
If you need assistance, please contact Veda on either Australia
1300-762-207 or New Zealand +64 9 367 6200.









This mailing list archive is Copyright 1997-2024 by midrange.com and David Gibbs as a compilation work. Use of the archive is restricted to research of a business or technical nature. Any other uses are prohibited. Full details are available on our policy page.
