I'm not sure I can apply blocked inserts here without getting overly complex - but I'm not counting it out.

That said, I tried testing to see the difference between blocking and not blocking, and I did not get the results I was expecting. Essentially both runs took the same amount of time. To keep it simple, I changed our native RPG I/O from using blocking to specifying Block(*No). Inserting 1,000,000 records took 21 seconds with blocking both on and off. I suppose I should figure out what I want to do, then run a large test to see whether any variance in run time is acceptable.
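
The change being compared is just the BLOCK keyword on the output file declaration - a minimal free-form sketch (STDFILE is a placeholder for our standard file):

// Sketch only - STDFILE stands in for the real standard file
dcl-f stdfile usage(*output) block(*no);     // force one record per write
// versus the normal declaration, where output records may be blocked:
// dcl-f stdfile usage(*output) block(*yes);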

Dan, thanks for the note on the unique index thing. I have the table defined with a primary key (which presumably creates an underlying index), so the table will be unique (it wasn't before).

-Kurt

-----Original Message-----
From: midrange-l-bounces@xxxxxxxxxxxx [mailto:midrange-l-bounces@xxxxxxxxxxxx] On Behalf Of Charles Wilt
Sent: Thursday, April 11, 2013 8:40 AM
To: Midrange Systems Technical Discussion
Subject: Re: Multiple Inserts - does block writing occur?

If you want to enforce blocking from the SQL statement level, you'd have to do it as a "blocked insert," a.k.a. "insert multiple rows":

INSERT INTO DEPARTMENT (DEPTNO, DEPTNAME, ADMRDEPT)
  VALUES ('B11', 'PURCHASING', 'B01'),
         ('E41', 'DATABASE ADMINISTRATION', 'E01')

INSERT INTO CORPDATA.EMPLOYEE (EMPNO, FIRSTNME, MIDINIT, LASTNAME, WORKDEPT)
  10 ROWS VALUES(:DSTRUCT:ISTRUCT)
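
In RPG, the host structure array for that second form could be declared roughly like this (a sketch only - the subfield layout is assumed to match CORPDATA.EMPLOYEE, and the indicator array is omitted):

// Sketch: host structure array for a blocked insert
dcl-ds dstruct qualified dim(10);
  empno    char(6);
  firstnme varchar(12);
  midinit  char(1);
  lastname varchar(15);
  workdept char(3);
end-ds;

dcl-s numRows int(10);

// fill dstruct(1)..dstruct(numRows), then insert them with one statement:
exec sql
  INSERT INTO CORPDATA.EMPLOYEE
         (EMPNO, FIRSTNME, MIDINIT, LASTNAME, WORKDEPT)
    :numRows ROWS
    VALUES(:dstruct);

The row count can be a literal, as in the example above, or a host variable so only the elements you actually filled are inserted.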

For best performance, if your table is journaled, make sure you are using commitment control and not committing after every record. Ideally, you'd want just under 128K of journal entries before committing.
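
In embedded SQL that might look roughly like this (a sketch - the 100,000-row batch size is just an assumption, sized to stay under the ~128K journal entries mentioned above):

exec sql SET OPTION COMMIT = *CHG;         // run the module under commitment control

dcl-s rowsSinceCommit int(10) inz(0);

// after each insert (single row or blocked):
rowsSinceCommit += numRows;
if rowsSinceCommit >= 100000;              // assumed batch size - tune to your journal
  exec sql COMMIT;
  rowsSinceCommit = 0;
endif;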

Charles






On Wed, Apr 10, 2013 at 1:22 PM, Anderson, Kurt <KAnderson@xxxxxxxxxxxx> wrote:

I think I found my answer in
http://publib.boulder.ibm.com/infocenter/iseries/v5r4/topic/rzajq/rzajq.pdf
under "Control database manager blocking":

"The SQL run-time automatically blocks rows with the database manager
in the following cases:
INSERT
If an INSERT statement contains a select-statement, inserted rows are
blocked and not actually inserted into the target table until the
block is full. The SQL run-time automatically does blocking for
blocked inserts.
Note: If an INSERT with a VALUES clause is specified, the SQL run-time
might not actually close the internal cursor that is used to perform
the inserts until the program ends. If the same INSERT"
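
For contrast with the VALUES case, the automatically blocked case described there is the subselect form, e.g. (a sketch - the table and column names are placeholders):

exec sql
  INSERT INTO STDFILE
    SELECT FIELDA, FIELDB, FIELDC
      FROM WORKFILE;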

So it sounds like in my case, where I'd be doing inserts using a VALUES clause, blocking wouldn't really be occurring. I'm not sure how to interpret the bit about the internal cursor.

However, it does say we can control blocking by using OVRDBF and SEQONLY - which we actually currently do. So if it obeys the override in terms of blocking the writes, then I should be set.
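
For reference, that override is the SEQONLY parameter of OVRDBF; one way to issue it from within the program itself is through QCMDEXC - a sketch (the file name and blocking factor are placeholders):

dcl-pr QCMDEXC extpgm('QCMDEXC');
  cmd    char(300) const;
  cmdLen packed(15: 5) const;
end-pr;

dcl-s cmd varchar(300);

// buffer up to 1000 records per physical write; must run before the file is opened
cmd = 'OVRDBF FILE(STDFILE) SEQONLY(*YES 1000)';
QCMDEXC(cmd: %len(cmd));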

Ultimately I could run a test of a huge number of records, so I figure
I'll get that on my to-do list.

-Kurt


-----Original Message-----
From: midrange-l-bounces@xxxxxxxxxxxx [mailto:
midrange-l-bounces@xxxxxxxxxxxx] On Behalf Of Anderson, Kurt
Sent: Wednesday, April 10, 2013 11:17 AM
To: 'Midrange Systems Technical Discussion'
Subject: Multiple Inserts - does block writing occur?

Continuing down the SQL path...

If I execute 100 individual INSERT statements against a table (no
commitment control), does the system write each record out
individually, or does blocked inserting occur?

A little background:
We receive files with all sorts of layouts, and we mediate them and
output them to our standard file. The mediation programs have a lot
of similarities, and I have moved most of those similarities into
service programs. However one thing they all still share is that each
program is writing to the standard file. I'd like to encapsulate that standard file.
In doing so, I'd like to use SQL to write the records, but I can't
sacrifice blocked writing due to the high volume of transactions.
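
The encapsulation I have in mind is something like a service program export per standard record - a rough sketch (STDFILE and the fields are made-up names):

// Hypothetical wrapper procedure - names are placeholders
dcl-proc WriteStdRec export;
  dcl-pi *n;
    pFieldA char(10) const;
    pFieldB packed(9: 0) const;
  end-pi;

  // one row per call; the question is whether these get blocked underneath
  exec sql
    INSERT INTO STDFILE (FIELDA, FIELDB)
      VALUES(:pFieldA, :pFieldB);
end-proc;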

I'm aware of inserting multiple rows in one statement using arrays; however, I don't want to attempt to control the number of records the system should write out at once. In addition, there will already be an element of complexity using arrays, because one incoming record may result in more than one standard record (so doing a blocked insert for that situation would be fine).

I'm on IBM i 7.1.

Thanks,
Kurt Anderson
Sr. Programmer/Analyst
CustomCall Data Systems, a division of Enghouse Systems Ltd.