I/O parallelism helps. Not nearly as much as blocked inserts do, but it
does help.
Of course, the number of disk arms, the amount of write cache and other factors
influence the process.
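For illustration only, here is a minimal sketch of what a blocked insert might look like from a Java job using JDBC batching with the IBM Toolbox for Java (jt400) driver. The system name, credentials, and the table MYLIB.ORDERS are hypothetical placeholders, not anything from this thread:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BlockedInsertDemo {
    public static void main(String[] args) throws Exception {
        // Register the Toolbox driver (older JVMs/drivers need this explicitly).
        Class.forName("com.ibm.as400.access.AS400JDBCDriver");

        // Hypothetical system, user and table names.
        Connection con = DriverManager.getConnection(
                "jdbc:as400://mysystem", "user", "password");
        con.setAutoCommit(false);

        PreparedStatement ps = con.prepareStatement(
                "INSERT INTO MYLIB.ORDERS (ID, PAYLOAD) VALUES (?, ?)");

        final int rows = 100_000;
        final int blockSize = 1_000;      // rows queued per executeBatch() call
        for (int i = 1; i <= rows; i++) {
            ps.setInt(1, i);
            ps.setString(2, "row " + i);
            ps.addBatch();                // queue the row client-side
            if (i % blockSize == 0) {
                ps.executeBatch();        // send the whole block in one trip
                con.commit();
            }
        }
        ps.executeBatch();                // flush any remainder
        con.commit();
        ps.close();
        con.close();
    }
}

The point is simply that each executeBatch() sends a block of rows rather than one row per round trip, which is where most of the gain over single-row inserts comes from.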
Elvis
Celebrating 11-Years of SQL Performance Excellence on IBM i, i5/OS and
OS/400
www.centerfieldtechnology.com
-----Original Message-----
Subject: Re: SQL optimize for only writes
Elvis Budimlic wrote:
To enhance it, use blocked inserts.
Also, if you're doing 400K inserts, you might think about doing some of it
in parallel (i.e. submit 4 jobs/threads that perform 100K inserts each).
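As a rough sketch of that suggestion (not the poster's actual code), the 400K inserts could be split across 4 worker threads, each using its own connection and its own blocked inserts; the system name, credentials, and table MYLIB.ORDERS are again placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.ArrayList;
import java.util.List;

public class ParallelInsertDemo {
    private static final int TOTAL_ROWS = 400_000;
    private static final int THREADS = 4;
    private static final int BLOCK = 1_000;

    public static void main(String[] args) throws Exception {
        List<Thread> workers = new ArrayList<>();
        int perThread = TOTAL_ROWS / THREADS;
        for (int t = 0; t < THREADS; t++) {
            final int start = t * perThread + 1;
            final int end = start + perThread - 1;
            Thread worker = new Thread(() -> insertRange(start, end));
            workers.add(worker);
            worker.start();
        }
        for (Thread w : workers) {
            w.join();                       // wait for all 100K-row slices
        }
    }

    private static void insertRange(int start, int end) {
        // Each thread gets its own connection so the inserts run in parallel jobs.
        try (Connection con = DriverManager.getConnection(
                "jdbc:as400://mysystem", "user", "password")) {
            con.setAutoCommit(false);
            try (PreparedStatement ps = con.prepareStatement(
                    "INSERT INTO MYLIB.ORDERS (ID, PAYLOAD) VALUES (?, ?)")) {
                int queued = 0;
                for (int i = start; i <= end; i++) {
                    ps.setInt(1, i);
                    ps.setString(2, "row " + i);
                    ps.addBatch();
                    if (++queued == BLOCK) {
                        ps.executeBatch();  // one blocked insert per BLOCK rows
                        con.commit();
                        queued = 0;
                    }
                }
                ps.executeBatch();
                con.commit();
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}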
Has anyone achieved good results from concurrent writes to a physical file?
I attempted to block-insert transactions into a single physical file using a
number of identical Java server jobs. The load test consisted of pumping through a
batch of 10,000 request-response transactions and extrapolating to derive a
transaction throughput per hour. Each transaction wrote around 400 records.
I noticed that increasing the number of jobs beyond one made a negligible
difference, i.e. any variation could be accounted for by other jobs also running. I
would have expected the throughput to increase as the number of server jobs
increased and then tail off due to resource contention.
I should add that concurrent write support is enabled, since I'm running
V5R4. Also, although the PF was created via DDS, the REUSEDLT (reuse deleted
records) parameter was set to *YES (*NO is the default).