LIMIT and OFFSET are really great news. It's the best SQL syntax for
selecting X number of rows that I've dealt with. Honestly though, if you have
an ever-increasing column in a table, can't you just use WHERE that_col >
lastval-returned with FETCH FIRST x ROWS? On Microsoft SQL Server, I use a
WHERE condition with TOP to overcome the lack of OFFSET.
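The "feed the last value back in" idea above is often called keyset pagination. A minimal sketch of it, using SQLite through Python's sqlite3 module (SQLite spells FETCH FIRST n ROWS as LIMIT n); the table and data here are made up for illustration:

```python
import sqlite3

# Hypothetical table with an ever-increasing key column (iitnbr),
# standing in for the "ever increasing column" mentioned above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ioritemmst (iitnbr INTEGER PRIMARY KEY, idescr TEXT)")
conn.executemany(
    "INSERT INTO ioritemmst (iitnbr, idescr) VALUES (?, ?)",
    [(n, f"item-{n:03d}") for n in range(1, 101)],
)

PAGE_SIZE = 10

def next_page(last_key):
    """Keyset pagination: resume after the last key returned, no OFFSET scan."""
    return conn.execute(
        "SELECT iitnbr, idescr FROM ioritemmst "
        "WHERE iitnbr > ? ORDER BY iitnbr LIMIT ?",
        (last_key, PAGE_SIZE),
    ).fetchall()

page1 = next_page(0)
page2 = next_page(page1[-1][0])  # feed the last key returned back in
```

Unlike OFFSET, this stays fast on deep pages because the WHERE clause can seek directly on the key index, but it only works when there is a strictly increasing column to page on.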

On Mon, Oct 12, 2015 at 1:05 PM Charles Wilt <charles.wilt@xxxxxxxxx> wrote:

While it may be useful in a subfile, the real use is in client/server where
you don't have state.

It's a lot easier and quicker for the client side to simply request page 1,
page 2, page 3...instead of sending back 1,000 rows and letting the client
handle the paging.

Way back when (v5r?), you needed something like this to fetch rows 11 to 20

select *
from (select *
from (select iitnbr, idescr
from ioritemmst
order by idescr asc
fetch first 20 rows only
) as tmp1
order by idescr desc
fetch first 10 rows only
) as tmp2
order by idescr asc

and BTW, (originally?) the integers used in FETCH FIRST xxx ROWS couldn't be
variables, so you had to use dynamic SQL. So, not a speed demon, and the
performance gets worse the further down the pages you go. But still better
than trying to send the whole result set.
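The nested-subselect trick above can be reproduced in SQLite (via Python's sqlite3), which spells FETCH FIRST n ROWS ONLY as LIMIT n; table and column names mirror the example, but the data is made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ioritemmst (iitnbr INTEGER, idescr TEXT)")
conn.executemany(
    "INSERT INTO ioritemmst VALUES (?, ?)",
    [(n, f"desc-{n:03d}") for n in range(1, 51)],
)

# Triple-sort trick: take the first 20 ascending, keep the last 10 of
# those by sorting descending, then restore ascending order -> rows 11-20.
rows = conn.execute("""
    SELECT iitnbr, idescr
    FROM (SELECT iitnbr, idescr
          FROM (SELECT iitnbr, idescr
                FROM ioritemmst
                ORDER BY idescr ASC
                LIMIT 20) AS tmp1
          ORDER BY idescr DESC
          LIMIT 10) AS tmp2
    ORDER BY idescr ASC
""").fetchall()
```

Note how the middle sort has to run over the whole first-20 result, which is why this approach degrades as the page number grows.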

Then when ROW_NUMBER() came along, it made things easier...

select iitnbr, idescr
from (select iitnbr, idescr
, row_number() over (order by idescr) as rn
from ioritemmst
) as tmp
where rn between 11 and 20
order by idescr

Now with LIMIT and OFFSET, all you'd need is
select iitnbr, idescr
from ioritemmst
order by idescr
limit 10 offset 10
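A quick check that the ROW_NUMBER() form and the LIMIT/OFFSET form return the same page, again sketched in SQLite via Python's sqlite3 (window functions need SQLite 3.25+, bundled with Python 3.8 and later); the data is made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ioritemmst (iitnbr INTEGER, idescr TEXT)")
conn.executemany(
    "INSERT INTO ioritemmst VALUES (?, ?)",
    [(n, f"desc-{n:03d}") for n in range(1, 51)],
)

# Rows 11-20 via ROW_NUMBER(), as in the older approach.
via_rownum = conn.execute("""
    SELECT iitnbr, idescr
    FROM (SELECT iitnbr, idescr,
                 ROW_NUMBER() OVER (ORDER BY idescr) AS rn
          FROM ioritemmst) AS tmp
    WHERE rn BETWEEN 11 AND 20
    ORDER BY idescr
""").fetchall()

# The same page via LIMIT/OFFSET.
via_limit = conn.execute("""
    SELECT iitnbr, idescr
    FROM ioritemmst
    ORDER BY idescr
    LIMIT 10 OFFSET 10
""").fetchall()
```

Both queries describe the same page; the difference is how much work the engine does to skip the first 10 rows, which is where the "magic under the covers" matters.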

I'd imagine IBM is doing some magic under the covers with open data
paths so that when the next page is requested, it's quicker than the older
methods.

Lastly, other DBs have LIMIT and OFFSET, so if you're porting it's good
that IBM i has them now too.

Charles



On Mon, Oct 12, 2015 at 11:41 AM, <darren@xxxxxxxxx> wrote:

Excited to see this...

http://www.mcpressonline.com/db2/what-s-new-in-db2-for-i-with-tr3.html

No more jumping through hoops to return a page at a time.


How would you use something like this? I assume we're talking about
filling a page of a subfile, and for that I'm currently using a cursor.
Perhaps with my limited imagination, I can only think to use this to
avoid the cursor and track the last row read to feed into that
statement, but wouldn't it be pretty inefficient to tell it to skip the
first 100 rows or however far you've now paged down?

--
This is the Midrange Systems Technical Discussion (MIDRANGE-L) mailing
list
To post a message email: MIDRANGE-L@xxxxxxxxxxxx
To subscribe, unsubscribe, or change list options,
visit: http://lists.midrange.com/mailman/listinfo/midrange-l
or email: MIDRANGE-L-request@xxxxxxxxxxxx
Before posting, please take a moment to review the archives
at http://archive.midrange.com/midrange-l.





This mailing list archive is Copyright 1997-2024 by midrange.com and David Gibbs as a compilation work. Use of the archive is restricted to research of a business or technical nature. Any other uses are prohibited. Full details are available on our policy page. If you have questions about this, please contact [javascript protected email address].
