I think I found my answer in http://publib.boulder.ibm.com/infocenter/iseries/v5r4/topic/rzajq/rzajq.pdf, under "Control database manager blocking":
"The SQL run-time automatically blocks rows with the database manager in
the following cases:
INSERT
If an INSERT statement contains a select-statement, inserted rows are
blocked and not actually inserted
into the target table until the block is full. The SQL run-time
automatically does blocking for blocked
inserts.
Note: If an INSERT with a VALUES clause is specified, the SQL run-time
might not actually close the
internal cursor that is used to perform the inserts until the program
ends. If the same INSERT"
So it sounds like in my case, where I'd be doing inserts using VALUES, blocking wouldn't really be occurring. I'm not sure how to interpret the bit about the internal cursor.
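To make sure I'm reading that right, here's a rough sketch of the two INSERT forms the manual is distinguishing (table, column, and host variable names are just made-up examples, not our actual file):

    -- Contains a select-statement: the run-time blocks the inserted
    -- rows and writes them out when the block fills
    INSERT INTO STDFILE (CALL_ID, DURATION)
        SELECT CALL_ID, DURATION FROM RAWCALLS;

    -- VALUES clause: one row per execution, so no blocking under the
    -- rule above, though the internal cursor may stay open between runs
    INSERT INTO STDFILE (CALL_ID, DURATION)
        VALUES (:callId, :duration);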
However, it does say we can control blocking by using OVRDBF and SEQONLY, which we currently do. So if SQL obeys the override in terms of blocking the writes, then I should be set.
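For reference, the override looks roughly like this (the file name and block size here are placeholders, not our actual values):

    OVRDBF FILE(STDFILE) SEQONLY(*YES 100)

SEQONLY(*YES n) tells the database to transfer records to and from the program in groups of n rather than one at a time.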
Ultimately I could run a test with a huge number of records, so I'll put that on my to-do list.
-Kurt
-----Original Message-----
From: midrange-l-bounces@xxxxxxxxxxxx [mailto:midrange-l-bounces@xxxxxxxxxxxx] On Behalf Of Anderson, Kurt
Sent: Wednesday, April 10, 2013 11:17 AM
To: 'Midrange Systems Technical Discussion'
Subject: Multiple Inserts - does block writing occur?
Continuing down the SQL path...
If I execute 100 individual INSERT statements against a table (no commitment control), does the system write each record out individually, or does blocked inserting occur?
A little background:
We receive files with all sorts of layouts, and we mediate them and output
them to our standard file. The mediation programs have a lot of
similarities, and I have moved most of those similarities into service
programs. However, one thing they all still share is that each program is
writing to the standard file. I'd like to encapsulate that standard file.
In doing so, I'd like to use SQL to write the records, but I can't
sacrifice blocked writing due to the high volume of transactions.
I'm aware of inserting multiple rows in one statement using arrays; however, I don't want to have to manage how many records the system writes out at once. In addition, using arrays will already add an element of complexity, because one incoming record may result in more than one standard record (so doing a blocked insert in that situation would be fine). See the sketch below.
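For what it's worth, the multi-row form I mean looks something like this in embedded SQL (the host structure array and names are hypothetical):

    -- Blocked (multi-row) insert: stdRec is a host data structure
    -- array matching the standard file's layout, and rowCount says
    -- how many of its elements to insert in one statement
    INSERT INTO STDFILE :rowCount ROWS VALUES(:stdRec);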
I'm on IBM i 7.1.
Thanks,
Kurt Anderson
Sr. Programmer/Analyst
CustomCall Data Systems, a division of Enghouse Systems Ltd.