


@Joe,
When the result set is not grouped, the number of rows may vary from a few
hundred to a few thousand (that's why I need pagination); for aggregated
results it is fewer than one hundred rows, and for the total just one
number.
I wouldn't choose a keyed work file or a global temporary table in SQL
because I am not sure response time would be unaffected, even though the
system runs V7R2 with the latest cumulative and PTF groups applied.
Your suggestions are the same ones I am torn between, which means there is
no further trick to it, so I will choose one of those.




On Tue, 28 Jul 2020 at 14:48, Joe Pluta <
joepluta@xxxxxxxxxxxxxxxxx> wrote:

Hi Maria!

If you truly don't want to keep re-executing the same query, you will
have to save the results somehow. How many records are you returning?
Is it small enough to create a work file? Or perhaps a work file of
just the keys for the records, which you could then join back to the
master table(s)? If your connection is stateless, you could use a
dedicated work file keyed by a unique session ID. If it's stateful, you
could probably use a global temporary table in SQL.
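
For illustration only, here is a minimal sketch of the global temporary
table route on Db2 for i. The table, columns, and filter are hypothetical
stand-ins for the real complex query, and the exact clause spelling and
order may need adjusting for your release:

    -- Session-scoped work table; declared once per job/connection.
    DECLARE GLOBAL TEMPORARY TABLE SESSION.RESULT_SET AS
      (SELECT o.ORDER_ID, o.REGION, o.AMOUNT
         FROM MYLIB.ORDERS o)
      WITH NO DATA
      WITH REPLACE
      NOT LOGGED
      ON COMMIT PRESERVE ROWS;

    -- The expensive query runs exactly once to fill the work table
    -- (:status stands for a host variable carrying the request filter).
    INSERT INTO SESSION.RESULT_SET
      SELECT o.ORDER_ID, o.REGION, o.AMOUNT
        FROM MYLIB.ORDERS o
       WHERE o.STATUS = :status;

    -- Every later request reads the cheap local copy.
    SELECT COUNT(*) FROM SESSION.RESULT_SET;
    SELECT REGION, COUNT(*), SUM(AMOUNT)
      FROM SESSION.RESULT_SET
     GROUP BY REGION;

The stateless variant works the same way, except the work file is a
permanent table that carries a session-ID column which every statement
filters on.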


On 7/28/2020 2:16 AM, Maria Lucia Stoppa wrote:
Hi all,
Hope you are well.

A REST API I am working on returns data split into pages: many calls are
necessary to get the complete data set, but this way the user doesn't wait
too long for the first page. Data are retrieved within an RPG ILE service
program by a static SQL statement that applies dynamic filters supplied
with the request.
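
As a purely illustrative sketch of that paging pattern on Db2 for i, with
a hypothetical MYLIB.ORDERS table and host variables computed from the
requested page number and page size:

    WITH filtered AS (
      SELECT o.ORDER_ID, o.REGION, o.AMOUNT,
             ROW_NUMBER() OVER (ORDER BY o.ORDER_ID) AS rn
        FROM MYLIB.ORDERS o
       WHERE o.STATUS = :status           -- dynamic filter from the request
    )
    SELECT ORDER_ID, REGION, AMOUNT
      FROM filtered
     WHERE rn BETWEEN :lowRn AND :highRn  -- page boundaries
     ORDER BY rn;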

Everything works fine, except that the same static SQL statement is run at
least twice: once to get the total number of rows (a simple count(*)) and
again to get the rows themselves, page by page.
Now, on the same data retrieved by this SQL statement, other select
statements must be run to get totals according to different group by
clauses, in order to present the data set distribution to the final user.
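
One way to at least fold the count into the page query, shown only as a
hedged sketch with the same hypothetical names as above, is to reference
the common table expression twice in a single statement (the optimizer may
still evaluate it once per reference):

    WITH filtered AS (
      SELECT o.ORDER_ID, o.REGION, o.AMOUNT,
             ROW_NUMBER() OVER (ORDER BY o.ORDER_ID) AS rn
        FROM MYLIB.ORDERS o
       WHERE o.STATUS = :status
    )
    SELECT f.ORDER_ID, f.REGION, f.AMOUNT, t.TOTAL_ROWS
      FROM filtered f
     CROSS JOIN (SELECT COUNT(*) AS TOTAL_ROWS FROM filtered) t
     WHERE f.rn BETWEEN :lowRn AND :highRn
     ORDER BY f.rn;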

It was hard enough to accept the idea of running the same SQL statement
(which, by the way, is pretty complex) at least twice to get both the
total and the rows, but I can't stand the idea of even more runs to get
the data distribution.

There might be errors in my design of how the procedure should work;
nonetheless, I wonder whether a single common table expression might be
used many times to serve many different select statements, as I would like
to avoid other solutions such as global temporary tables.
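
For what it's worth, a common table expression exists only for the single
statement that defines it, so it cannot be shared across separate calls;
within one statement, though, it can feed several GROUP BY branches. A
hedged sketch with the same hypothetical names (on releases that support
GROUPING SETS the branches could collapse into one GROUP BY, and the
engine may still scan the data once per branch here):

    WITH filtered AS (
      SELECT o.REGION, o.CUSTOMER_ID, o.AMOUNT
        FROM MYLIB.ORDERS o
       WHERE o.STATUS = :status
    )
    SELECT 'TOTAL' AS GRP, CAST(NULL AS CHAR(10)) AS GRP_VALUE,
           COUNT(*) AS ROW_COUNT, SUM(AMOUNT) AS TOTAL_AMOUNT
      FROM filtered
    UNION ALL
    SELECT 'REGION', REGION, COUNT(*), SUM(AMOUNT)
      FROM filtered
     GROUP BY REGION
    UNION ALL
    SELECT 'CUSTOMER', CAST(CUSTOMER_ID AS CHAR(10)), COUNT(*), SUM(AMOUNT)
      FROM filtered
     GROUP BY CUSTOMER_ID;

Anything that has to survive across separate REST calls, though, would
still need to be materialized somewhere, such as a work file or temporary
table.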

Any suggestion is really appreciated.

Many thanks




