


I did a quick test: I created a simple SQL *View*, used it in a DCL-F statement in RPG, and it compiled as a file (a sketch of the test follows the questions below).

So RPG reads it as a file. Questions:

(1)  I assume it can only be read sequentially?

(2) Can it be used "as if" it were an SQL cursor (sort of), opening and closing the file with RPG I/O to load a subfile?
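What I tried looks roughly like this; CUSTVIEW, its columns, and the underlying tables are just stand-ins, so take it as a sketch rather than the real code:

**free
// Hypothetical view, created with something like:
//   CREATE VIEW CUSTVIEW AS
//     SELECT c.custno, c.custname, o.orderdate
//       FROM custmast c JOIN orders o ON o.custno = c.custno
ctl-opt dftactgrp(*no) main(readView);

// By default an SQL view's record-format name matches the view name,
// so RENAME is needed to avoid the format-name/file-name clash.
dcl-f CUSTVIEW usage(*input) rename(CUSTVIEW : custViewRec);

dcl-proc readView;
   // A view has no key, so native I/O reads it in arrival sequence
   // only: READ to end of file.
   read custViewRec;
   dow not %eof(CUSTVIEW);
      // ... move the fields into the subfile record and write it ...
      read custViewRec;
   enddo;
end-proc;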

--Alan



On 4/21/2022 2:31 PM, Alan Cassidy wrote:
Thanks a lot, Charles! Yes, the idea is to load the data from the cursor into a load-all subfile. Or rather, to do it as something callable from more than a dozen different programs, to keep some of the supporting infrastructure in place when moving from one program to another. Each will pull its own subset of the data.

I should practice writing UDTFs and stored procedures again for my own benefit, but I haven't done any UDTFs that I recollect, and the team here might throw fits because only one other guy and I (out of about 20) could work with them. Not a deep pool of SQL knowledge around here. But the performance is killing productivity in the way economists call "opportunity costs" or "hidden costs": the snail's-pace loading costs not only time, but I think it is also what is causing the record locks.

I've only used SQL cursors to read data straight into a subfile, but closing and reopening an ASENSITIVE cursor should catch changes while reusing the access paths and joins built on the previous open. I assume that is correct? Anyway, I'll proceed and incorporate your suggestions. I think using a VIEW for the cursor may be the best idea, as a later step. Right now I'm working out which accounts the current user should see, depending on the setup of assigned accounts and backups to cover absences and so on. I'll see what return on investment this yields. (A boatload of joins with complex logic is involved.)
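Roughly what I have in mind for the callable piece, with every name made up, so just a sketch of the shape:

**free
ctl-opt nomain;

// Row image the caller copies into its load-all subfile.
dcl-ds account_t qualified template;
   acctNo   packed(9 : 0);
   acctName varchar(40);
   status   char(1);
end-ds;

// Fills ROWS for the given user and returns the number of rows fetched.
dcl-proc getAccounts export;
   dcl-pi *n int(10);
      user char(10) const;
      rows likeds(account_t) dim(9999);
   end-pi;

   dcl-s fetched int(10);

   // ASENSITIVE lets the optimizer pick the sensitivity; a fresh OPEN
   // on every call picks up the latest data.
   exec sql declare acctCsr asensitive no scroll cursor for
      select a.acctno, a.acctname, a.status
        from accounts a
        join assignments s on s.acctno = a.acctno
       where s.userid = :user
       order by a.acctno;

   exec sql open acctCsr;

   // One multiple-row FETCH pulls the whole result set.
   exec sql fetch next from acctCsr for 9999 rows into :rows;
   fetched = sqlerrd(3);          // rows actually returned

   exec sql close acctCsr;
   return fetched;
end-proc;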

In fact, I'll finish this particular change and then consider how I'd implement it using a VIEW. I think that's the best approach for this situation anyway. (The data being viewed changes all day.)

Thanks again for your time and clarifications,

Alan



On 4/21/2022 9:54 AM, Charles Wilt wrote:
Generally speaking, both SENSITIVE and INSENSITIVE are detrimental to
performance, as you are forcing the DB down a single path.

ASENSITIVE lets the DB decide which to use.

Having said that, SENSITIVE should perform just fine as long as you have
the right indexes.

However, do you really need SENSITIVE SCROLL?

If you're opening the cursor and reading the entire set into a load-all
subfile, then ASENSITIVE NO SCROLL is sufficient.  Simply close and reopen
the cursor when the user refreshes.
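As a rough sketch, with the cursor, table, and field names only placeholders:

**free
ctl-opt dftactgrp(*no);

dcl-proc loadSubfile;
   dcl-ds row qualified;
      acctno   packed(9 : 0);
      acctname varchar(40);
      status   char(1);
   end-ds;

   // DECLARE is compile-time only; the close/open pair below is what
   // refreshes the data on each call.
   exec sql declare listCsr asensitive no scroll cursor for
      select acctno, acctname, status
        from accounts
       order by acctno;

   // ... write the subfile-clear record and reset the RRN here ...

   exec sql close listCsr;   // -501 (not open) on the first call is harmless
   exec sql open listCsr;    // a fresh open sees the current data

   exec sql fetch next from listCsr into :row;
   dow sqlcode = 0;
      // ... move ROW into the subfile record and WRITE it ...
      exec sql fetch next from listCsr into :row;
   enddo;
end-proc;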

If using a page-at-a-time subfile, then SENSITIVE SCROLL means that when the user
pages backwards, they might not get the same set of records they saw the
first time.  If that's really what you want, fine.

In any UI other than twinax-attached 5250 over a dial-up line, a page-at-a-time
or even an expanding subfile makes little sense IMHO :)

Personally, I'd encapsulate the SQL statement in a stored procedure, view,
and/or UDTF rather than embedding it directly in an RPG program.  Many of
IBM's new services are views generated by UDTFs ... there's a good reason
for that.
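A bare-bones sketch of that pattern, with made-up library, object, and column names:

-- The UDTF owns the complex join logic and takes the "who is asking"
-- parameter:
CREATE OR REPLACE FUNCTION mylib.user_accounts (p_user VARCHAR(18))
   RETURNS TABLE (acctno   INTEGER,
                  acctname VARCHAR(40),
                  status   CHAR(1))
   LANGUAGE SQL
   READS SQL DATA
   RETURN
      SELECT a.acctno, a.acctname, a.status
        FROM mylib.accounts a
        JOIN mylib.assignments s ON s.acctno = a.acctno
       WHERE s.userid = p_user;

-- A view over the UDTF covers the common case (the current user),
-- much like many of the QSYS2 services:
CREATE OR REPLACE VIEW mylib.my_accounts AS
   SELECT * FROM TABLE (mylib.user_accounts(USER)) t;

-- RPG (or anything else) can then open a plain cursor over either one:
--   exec sql declare c1 asensitive cursor for
--      select * from table(mylib.user_accounts(:user)) t;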

Charles

On Thu, Apr 21, 2022 at 6:01 AM Alan Cassidy <cfuture@xxxxxxxxxxx> wrote:

I've searched around but haven't found anything clear enough to confirm my
understanding of this.

I'm working toward a solution to a horrendous performance problem in a set
of programs that show different views of data a team is working on with
customers, data that changes all day. It is important that what they see is
the most current available status of the data.

As a first step I converted the primary file read that populates the subfile
to an SQL cursor, but that's not enough. So now I'm building a cursor with
somewhat complex join logic, including a couple of common table expressions
(WITH clauses) and so on.
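The shape of the cursor, with stand-in names (the real one has more joins and more CTEs), is roughly:

**free
dcl-s user char(10);

exec sql declare workCsr asensitive cursor for
   with active_accts as (
      select acctno, acctname
        from accounts
       where status = 'A'
   ),
   coverage as (
      select acctno, userid
        from assignments
      union
      select acctno, backup_userid
        from backups
   )
   select a.acctno, a.acctname
     from active_accts a
     join coverage c on c.acctno = a.acctno
    where c.userid = :user
    order by a.acctno;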

The same data set is used in various programs that they navigate.

I have changed the first program called from the menu from ACTGRP( *NEW )
to ACTGRP( XYZ ), and I've changed most of the programs called from there
to ACTGRP( *CALLER ). I am grateful for the responses I got on this forum
(and/or the midrange forum) that helped me decide on a named activation
group. And THANK YOU DAVID for creating these lists and managing them!
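In CTL-OPT terms the change amounts to something like this, with 'XYZ' standing in for the real group name:

**free
// In the first program called from the menu: its own named activation group.
ctl-opt dftactgrp(*no) actgrp('XYZ');

// In the programs called from it: run in the caller's group, so ODPs and
// open cursors can stay around between calls.
ctl-opt dftactgrp(*no) actgrp(*caller);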

This is the idea I'm asking to have confirmed:

Now I'm proceeding with the idea of using a service program procedure to
create a SENSITIVE DYNAMIC SCROLL cursor. That way it sees real-time changes
all day. So the first time in, the SQL cursor gets opened and the code reads
it start to finish to populate the subfile. When the user refreshes, FETCH
FIRST repositions the cursor to the first row, FETCH NEXT then reads through
the cursor, and I can repopulate the subfile from that.
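Something along these lines is what I picture; the names are placeholders and it's only a sketch of the shape, not working code:

**free
ctl-opt nomain;

dcl-ds account_t qualified template;
   acctNo   packed(9 : 0);
   acctName varchar(40);
end-ds;

dcl-s opened ind inz(*off);

// Open once per activation group; later refreshes rescroll the same
// open cursor, so the access plan built at open time is reused.
dcl-proc openOnce export;
   // DYNAMIC needs the DB to avoid a temporary result, so this assumes
   // an index supports the ORDER BY.
   exec sql declare wrkCsr sensitive dynamic scroll cursor for
      select acctno, acctname
        from accounts
       order by acctno;

   if not opened;
      exec sql open wrkCsr;
      opened = (sqlcode = 0);
   endif;
end-proc;

// Reposition to the top and read forward to reload the subfile.
dcl-proc reloadSubfile export;
   dcl-ds r likeds(account_t);

   exec sql fetch first from wrkCsr into :r;
   dow sqlcode = 0;
      // ... copy R into the caller's subfile here ...
      exec sql fetch next from wrkCsr into :r;
   enddo;
end-proc;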

That should improve response time, should it not?

Sorry if this is more words than necessary,

---Alan
