On 31-May-2010 12:42, Kurt Anderson wrote:
When I was doing my performance tests, I was doing a CLRPOOL (of
a special pool in a subsystem created only to house file data in
memory) instead of SETOBJACC *PURGE. Is my understanding correct
that a CLRPOOL will essentially do a *PURGE of all files in that pool?
If any of the data [& access path] segments of the file that
will be the target of the SETOBJACC in a test already reside in the
memory of pool-7, then clearing pool-6 would not help in judging the
efficacy of the SETOBJACC. The SETOBJACC *PURGE of the file data
[and access path] must be performed explicitly, to ensure that the
objects\data are not resident in any memory pool.
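A minimal CL sketch of that distinction, assuming a hypothetical
library MYLIB, file MYFILE, and a subsystem MEMSBS whose pool 2 is the
dedicated file-data pool (all names and pool numbers here are
placeholders, not from the thread):

    /* Baseline run: explicitly force the file's data and access path */
    /* out of every memory pool, not just the dedicated one           */
    SETOBJACC OBJ(MYLIB/MYFILE) OBJTYPE(*FILE) POOL(*PURGE)

    /* In-memory run: clear the dedicated pool, then pre-load the file */
    CLRPOOL   POOL(MEMSBS 2)
    SETOBJACC OBJ(MYLIB/MYFILE) OBJTYPE(*FILE) POOL(MEMSBS 2) MBRDATA(*BOTH)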
When I ran a test to see if there was a performance difference,
the two runs came out virtually identical, on a dedicated system as
well, which was leading me to believe I was doing something wrong.
These tests ran about 8 minutes, within seconds of each other
between SETOBJACC and no SETOBJACC (using CLRPOOL to remove the
files from memory for the comparison test).
As noted above, CLRPOOL may be inappropriate for testing
SETOBJACC. Also...
If the time spent is mostly in processing the data, then there
may be little gain from moving the object into memory in advance of
running the application. That is why learning the clock time, CPU
time, and faulting reductions is valuable: to see whether in-memory
might be a futile strategy due to the /bottleneck/ being somewhere
other than access to the data. If in-memory is of little assistance,
and if the CPU utilization [percentage] is low, then pushing up the
CPU utilization by parallel access might be an approach to speed up
the processing; and with that, in-memory might become more important.
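As a rough illustration of what to look at, one possible approach
(command choices are mine, not from the thread, and the parallel-degree
part assumes the processing is SQL-based):

    /* Watch page faulting by pool while the job runs */
    WRKSYSSTS

    /* Compare the job's elapsed time and CPU time between runs */
    WRKACTJOB

    /* If CPU utilization is low and the work is SQL, allow the optimizer */
    /* to use parallel access (CPU parallelism needs the DB2 SMP feature) */
    CHGQRYA DEGREE(*OPTIMIZE)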
Was really hoping for some performance gains with this; my
8-minute test was with 200k records... our full run is 25 million.
IMO, 8 minutes for 200K rows is probably something other than
just a disk-to-memory issue. I suspect unnecessary data movement
and other inefficiencies in the application are a more likely cause
of the performance impact; although parallel processing could still
improve the overall throughput, the original problem(s) would
persist [until addressed\eliminated].
Regards, Chuck