Thanks Chuck.
When I was doing my performance tests, I was doing a CLRPOOL (of a special pool in a subsystem created only to house file data in memory) instead of SETOBJACC *PURGE. Is my understanding correct that a CLRPOOL will essentially do a *PURGE of all files in that pool?
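For reference, the commands I'm using look roughly like this (library, file, and pool names here are only placeholders for our actual ones):

```
/* Load the file's data and access path into the dedicated pool */
SETOBJACC OBJ(MYLIB/MYFILE) OBJTYPE(*FILE) POOL(*SHRPOOL2) +
          MBRDATA(*BOTH)

/* For the comparison run, remove the data from memory again -- */
/* either per object ...                                        */
SETOBJACC OBJ(MYLIB/MYFILE) OBJTYPE(*FILE) POOL(*PURGE)
/* ... or, as I was doing, for the whole pool at once:          */
CLRPOOL POOL(*SHRPOOL2)
```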
When I ran a test to see if there was a performance difference, the two runs came out virtually identical, even on a dedicated system, which leads me to believe I was doing something wrong. These tests ran about 8 minutes each, within seconds of each other between the SETOBJACC run and the no-SETOBJACC run (using CLRPOOL to remove the files from memory before the comparison test).
I was really hoping for some performance gains with this; my 8-minute test was with 200k records... our full run is 25 million.
-Kurt
-----Original Message-----
From: midrange-l-bounces@xxxxxxxxxxxx [mailto:midrange-l-bounces@xxxxxxxxxxxx] On Behalf Of CRPence
Sent: Monday, May 31, 2010 11:58 AM
To: midrange-l@xxxxxxxxxxxx
Subject: Re: SETOBJACC & seeing it work in a job
On 31-May-2010 09:37, Kurt Anderson wrote:
I'm just starting to use SETOBJACC b/c we have some heavy file
reads that I'd like to move into memory.
I know there's the job log entry saying that the files were
moved into memory, but is there any way to see that the job is
utilizing the memory the file was loaded into? Specifically I
was wondering if the Display Open Files screen when looking at a
job would show no I/O if it was actually using memory - or does
that page not care if the I/O is against disk or memory? Is
there any indicator that the job is using it, other than a
performance comparison?
<<SNIP>>
Single-level storage ensures that "the memory the file was loaded
into" is *the* memory being used; i.e. there is just one copy in
memory, so any reference to the data will be via *that* memory. The
only exception is when the referenced data has already been /paged
out/, in which case it would be faulted back into whatever pool is
allowed. Thus, if the data is properly loaded into a pool where
other storage will not displace it, use of the memory rather than
the disk is assured; i.e. there is little reason to seek out any
indication that the job used the in-memory data, and to be clear,
that holds irrespective of which pool the job runs in when accessing
the data.
If some value is perceived in verifying that the file data was
not paged into memory from disk, concentrate on the database faults,
not the file I/O counts. The I/O counts from WRKJOB OPTION(*OPNF)
are of no use for this purpose, as they are effectively read/write
counts, not paging/fault counts. The Elapsed Auxiliary I/O figures
for a job would be telling as access counts that include non-fault
I/O, but those counts also include non-database I/O requests. IIRC,
TRCJOB output provides some I/O count details.
A performance comparison is a good indication of the effect. Run
a test once after a *PURGE of the file, then again after loading the
file data [and index] into memory, each time tracking CPU time,
clock time, and the change in database fault counts from the
beginning of the request to the end. That will give a good sense of
how well the Set Object Access request achieved the intent of
ensuring the file data is referenced from memory rather than disk,
plus an idea of the benefit, measured in reduced clock time, of
having pre-loaded the data instead of having it faulted or paged
into memory.
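That comparison might be scripted roughly as follows (object and
program names are only placeholders; the CPU time, clock time, and
database fault counts still have to be read from the job and from
WRKSYSSTS or Collection Services before and after each pass):

```
/* Pass 1: cold run -- purge the file data from memory first    */
SETOBJACC OBJ(MYLIB/MYFILE) OBJTYPE(*FILE) POOL(*PURGE)
CALL PGM(MYLIB/MYTEST)    /* note CPU, clock, DB fault counts   */

/* Pass 2: warm run -- pre-load the data and access path        */
SETOBJACC OBJ(MYLIB/MYFILE) OBJTYPE(*FILE) POOL(*SHRPOOL2) +
          MBRDATA(*BOTH)
CALL PGM(MYLIB/MYTEST)    /* take the same measurements again   */
```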
Regards, Chuck