On Nov 10, 2011, at 1:40 PM, rpg400-l-request@xxxxxxxxxxxx wrote:

Loading the entire file into one or more arrays will incur the overhead
of loading the entire file for each job that needs to search the array,
and this increases the virtual storage used by each and every job that
needs to access this file.

We have been using the array technique for several years now and avoid this problem by storing the data in a User Space, which all users then share.
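Here is a minimal free-form RPG sketch of that shared-User-Space idea, for
readers who have not used it. It is not Jon's actual code: it assumes a
separate load job has already created and filled a user space LOOKUPUS in
library MYLIB (both names invented here) with a count followed by two
parallel arrays kept in ascending key order.

    dcl-pr QUSPTRUS extpgm('QUSPTRUS');    // Retrieve Pointer to User Space API
      usrSpcName char(20) const;           // 10-char space name + 10-char library
      usrSpcPtr  pointer;                  // returned pointer to the space
    end-pr;

    dcl-c MAX 100000;                      // capacity guess - size the space to suit

    dcl-s pCache pointer;
    dcl-ds cache based(pCache) qualified;  // read-only view of the shared space
      count int(10);                       // number of entries actually loaded
      key   char(10) dim(MAX) ascend;      // keys, loaded in ascending order
      data  char(50) dim(MAX);             // parallel payload array
    end-ds;

    dcl-s hit int(10);

    QUSPTRUS('LOOKUPUS  MYLIB     ' : pCache);   // map the space once per job

    hit = %lookup('CUST001' : cache.key : 1 : cache.count);  // binary search (ASCEND)
    if hit > 0;
      dsply cache.data(hit);               // found - use the parallel payload
    endif;

    *inlr = *on;

Because the arrays live in one shared object, each job maps the space with
QUSPTRUS instead of loading its own copy, which is what avoids the per-job
load and storage overhead described above.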


So, before you decide to load all records of this file into an array and
search using a binary search, consider the following alternatives:

#1. Issue SETOBJACC to load the entire *FILE into a dedicated memory
pool, sized based on the size of the file. This will speed up access for
all jobs using this file, without any coding changes to the programs
using the *FILE.

In practice, in an overnight batch scenario, I have not found that this makes much difference. The OS will tend to keep the pages in memory anyway, and SETOBJACC won't reduce the code path length by much.
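For reference, a minimal sketch of issuing that command: it would more
typically sit in the nightly CL driver, but QCMDEXC lets an RPG job do the
same thing. The library, file and shared-pool names below are invented for
the example, and the pool must already exist and be sized to hold the file.

    dcl-pr QCMDEXC extpgm('QCMDEXC');      // run a CL command from RPG
      cmd    char(3000) const options(*varsize);
      cmdLen packed(15:5) const;
    end-pr;

    dcl-s cmd varchar(200);

    // Pull the whole file's data into shared pool 2 (placeholder names)
    cmd = 'SETOBJACC OBJ(MYLIB/MYFILE) OBJTYPE(*FILE) POOL(*SHRPOOL2)';
    QCMDEXC(cmd : %len(cmd));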


#2. Create a "server" job that loads the array and communicates with
requester jobs via data queues. Instead of loading every record into
the array, start out with an empty array, or load just one record. When
a client program sends a "look-up" request to the request data queue,
the server job first looks in its local "cache" (the array). If the
record is not found there, it chains out to the file and adds that
record to the "cache" array, so any subsequent look-ups for that record
will be faster. Either way, it then returns the results via a results
data queue (one per client); typically each client sends the name of
its results data queue as part of the request message. The server job
will need to maintain a counter of how many records are in the array,
for use on the %lookup BIF.

I could go with that approach - but given the number of records the OP mentioned, just allowing RPG to block the file will result in pretty fast I/O.
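To make the flow of that design concrete, here is a minimal sketch of the
server side. It is not anyone's production code: the REQUESTQ and MYLIB
names, the request layout, and the MASTER file with its MSTKEY and MSTDATA
fields are all invented for the example; the data queue's maximum entry
length is assumed to match the 20-byte request layout, and the cache is
left unsorted here, so %lookup does a linear scan rather than a binary
search.

    dcl-f MASTER keyed usage(*input);      // keyed file being cached (invented)

    dcl-pr QRCVDTAQ extpgm('QRCVDTAQ');    // Receive Data Queue entry API
      dtaqName char(10) const;
      dtaqLib  char(10) const;
      dataLen  packed(5:0);                // returned length of the entry
      data     char(32767) options(*varsize);
      waitTime packed(5:0) const;          // negative = wait forever
    end-pr;

    dcl-pr QSNDDTAQ extpgm('QSNDDTAQ');    // Send Data Queue entry API
      dtaqName char(10) const;
      dtaqLib  char(10) const;
      dataLen  packed(5:0) const;
      data     char(32767) const options(*varsize);
    end-pr;

    dcl-c MAX 100000;

    dcl-ds request qualified;              // layout the clients are assumed to send
      replyQ char(10);                     // name of the client's reply data queue
      key    char(10);                     // key to look up
    end-ds;

    dcl-s cacheKey  char(10) dim(MAX);     // cache starts out empty
    dcl-s cacheData char(50) dim(MAX);
    dcl-s count int(10) inz(0);            // the counter used on %lookup
    dcl-s len   packed(5:0);
    dcl-s i     int(10);
    dcl-s reply char(100);

    dow '1';
      QRCVDTAQ('REQUESTQ' : 'MYLIB' : len : request : -1);

      i = 0;
      if count > 0;
        i = %lookup(request.key : cacheKey : 1 : count);  // check the local cache
      endif;
      if i = 0;
        chain (request.key) MASTER;                       // miss: go to the file
        if %found(MASTER);
          count += 1;
          cacheKey(count)  = MSTKEY;                      // add it to the cache so
          cacheData(count) = MSTDATA;                     //  the next look-up is faster
          i = count;
        endif;
      endif;

      if i > 0;
        reply = cacheData(i);
      else;
        reply = '*NOTFOUND';
      endif;

      QSNDDTAQ(request.replyQ : 'MYLIB' : %size(reply) : reply);  // one reply queue per client
    enddo;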

#3. Create a user index, load the data into it, and use the MI builtin
function _FNDINXEN to do the look-up. This should be
faster than database I/O, but you will have to incur the overhead to
initially load the user index. Note that you can do this just once, and
use an insert, update and delete trigger attached to the file to
maintain the *USRIDX whenever the corresponding file is changed. All
users or jobs can then share this one *USRIDX object.

It is a long time since I tested this, but bsearch and %lookup outperformed this approach at that time - not by much. Admittedly, I did not try to maintain the index.
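For anyone who has not seen that comparison in practice, here is a minimal
sketch of calling the C runtime's bsearch() from RPG against a key-sorted
array; the array contents, its size and the search key are invented, and
the same search could equally be done with %lookup on an ASCEND array.

    ctl-opt dftactgrp(*no) bnddir('QC2LE');     // C runtime binding directory

    dcl-pr bsearch pointer extproc('bsearch');  // C library binary search
      key      pointer value;
      base     pointer value;
      numElems uns(10) value;
      elemSize uns(10) value;
      compare  pointer(*proc) value;
    end-pr;

    dcl-pr compareKeys int(10);                 // comparator passed to bsearch
      p1 pointer value;
      p2 pointer value;
    end-pr;

    dcl-s keys      char(10) dim(100000);       // loaded and sorted elsewhere (not shown)
    dcl-s numKeys   uns(10);                    // set by whatever loads the array
    dcl-s searchKey char(10) inz('CUST001');
    dcl-s pHit      pointer;
    dcl-s hitKey    char(10) based(pHit);

    pHit = bsearch(%addr(searchKey) : %addr(keys) : numKeys :
                   %size(keys) : %paddr(compareKeys));
    if pHit <> *null;
      dsply hitKey;                             // found the matching element
    endif;

    *inlr = *on;

    dcl-proc compareKeys;
      dcl-pi *n int(10);
        p1 pointer value;
        p2 pointer value;
      end-pi;
      dcl-s k1 char(10) based(p1);
      dcl-s k2 char(10) based(p2);

      if k1 < k2;
        return -1;
      elseif k1 > k2;
        return 1;
      endif;
      return 0;
    end-proc;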

Jon Paris

www.partner400.com
www.SystemiDeveloper.com




