Elvis Budimlic wrote:
To enhance it, use blocked inserts.
Also, if you're doing 400K inserts, you might think about doing some of it
in parallel (i.e. submit 4 jobs/threads that perform 100K inserts each).
Has anyone achieved good results from concurrent writes to a physical file?
I attempted to block-insert transactions into a single physical file using
a number of identical Java server jobs. The load test consisted of pumping a
batch of 10,000 request-response transactions through and extrapolating to
derive a transaction throughput per hour. Each transaction wrote around 400
records.
I noticed that increasing the number of jobs beyond one made a negligible
difference, i.e. any variation could be accounted for by other jobs running
on the system. I would have expected throughput to increase with the number
of server jobs and then tail off due to resource contention.
I should add that concurrent write support is enabled, since I'm running
V5R4. Also, although the PF was created via DDS, the REUSEDLT (reuse deleted
records) parameter was set to *YES (the default is *NO).
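For what it's worth, the "N jobs doing TOTAL/N inserts each" pattern Elvis
suggests can be sketched roughly as below. This is only an illustrative
skeleton: the insertBatch comment marks where a real blocked insert (e.g.
JDBC PreparedStatement.addBatch/executeBatch against the PF) would go; here
the workers just count records so the partitioning logic is visible. The
class name, job count, and block size are my own placeholders, not anything
from the original setup.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

// Sketch: split 400K inserts across 4 parallel jobs, each writing in blocks.
public class ParallelInsertSketch {
    static final int TOTAL = 400_000;   // total records to insert
    static final int JOBS  = 4;         // parallel server jobs/threads
    static final int BLOCK = 1_000;     // records per blocked insert

    static long runJobs() throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(JOBS);
        AtomicLong inserted = new AtomicLong();
        int perJob = TOTAL / JOBS;      // 100K records per job
        for (int j = 0; j < JOBS; j++) {
            pool.submit(() -> {
                for (int done = 0; done < perJob; done += BLOCK) {
                    int n = Math.min(BLOCK, perJob - done);
                    // A real job would issue a blocked insert of n rows here,
                    // e.g. addBatch() n times then executeBatch(). We count.
                    inserted.addAndGet(n);
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        return inserted.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runJobs()); // prints 400000
    }
}
```

In a real test each worker would hold its own connection so the jobs don't
serialize on a shared statement, which may be one thing to check when extra
jobs show no throughput gain.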