MIDRANGE dot COM Mailing List Archive




Re: SQL optimize for only writes




Elvis Budimlic wrote:


To enhance it, use blocked inserts.
Also, if you're doing 400K inserts, you might think about doing some of them
in parallel (i.e. submit 4 jobs/threads that perform 100K inserts each).
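A blocked insert from a Java job would look roughly like the sketch below. Everything concrete in it is a placeholder rather than anything from this thread: the JDBC URL, the user/password, the TESTLIB.TRANSLOG table and its TXNID/PAYLOAD columns, and the 400-row batch size (picked to match the ~400 records per transaction mentioned further down).

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    // Minimal sketch: buffer rows with addBatch() and send them to the
    // server in one blocked insert via executeBatch().
    public class BlockedInsert {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                    "jdbc:as400://mysystem", "user", "password")) {
                con.setAutoCommit(false);
                String sql = "INSERT INTO TESTLIB.TRANSLOG (TXNID, PAYLOAD) VALUES (?, ?)";
                try (PreparedStatement ps = con.prepareStatement(sql)) {
                    for (int i = 0; i < 400; i++) {   // ~400 records per transaction
                        ps.setInt(1, i);
                        ps.setString(2, "row " + i);
                        ps.addBatch();                // buffer the row client-side
                    }
                    ps.executeBatch();                // send the whole block at once
                }
                con.commit();
            }
        }
    }

Whether executeBatch() actually becomes a true blocked insert on the server also depends on the driver version and connection properties, so treat this as the shape of the approach rather than a guarantee.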

Has anyone achieved good results from concurrent writes to a physical file?

I attempted to block insert transactions into a single physical file using a
number of identical Java server jobs. The load test consisted of pumping a
batch of 10,000 request-response transactions through and extrapolating a
transactions-per-hour throughput figure. Each transaction wrote around 400
records. I noticed that increasing the number of jobs beyond one made a
negligible difference, i.e. the variation could be accounted for by other jobs
running on the system. I would have expected the throughput to increase with
the number of server jobs and then tail off due to resource contention.
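Something like the sketch below gives the shape of that kind of parallel test, with threads standing in for the separate server jobs. Again, the URL, credentials, table, column names and the 1,000-row flush interval are placeholders for illustration, not the actual test harness.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    // Sketch of N concurrent insert workers writing to one physical file,
    // each with its own connection and its own batches.
    public class ParallelInsert {
        static final int THREADS = 4;
        static final int ROWS_PER_THREAD = 100_000;

        public static void main(String[] args) {
            ExecutorService pool = Executors.newFixedThreadPool(THREADS);
            for (int t = 0; t < THREADS; t++) {
                final int offset = t * ROWS_PER_THREAD;
                pool.submit(() -> {
                    try (Connection con = DriverManager.getConnection(
                            "jdbc:as400://mysystem", "user", "password")) {
                        con.setAutoCommit(false);
                        try (PreparedStatement ps = con.prepareStatement(
                                "INSERT INTO TESTLIB.TRANSLOG (TXNID, PAYLOAD) VALUES (?, ?)")) {
                            for (int i = 0; i < ROWS_PER_THREAD; i++) {
                                ps.setInt(1, offset + i);
                                ps.setString(2, "row");
                                ps.addBatch();
                                if ((i + 1) % 1_000 == 0) {
                                    ps.executeBatch();    // flush a block of 1,000 rows
                                    con.commit();
                                }
                            }
                            ps.executeBatch();            // flush any remainder
                            con.commit();
                        }
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                });
            }
            pool.shutdown();    // stop accepting work; queued inserts still run to completion
        }
    }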

I should add that concurrent write support is enabled, since I'm running
V5R4. Also, although the PF was created via DDS, the REUSEDLT (reuse deleted
records) parameter was set to *YES (*NO is the default).





