Charles
Is this the sentence you refer to?
Start a commit cycle in the CL
program before calling the batch program. __IN THE APPLICATION
PROGRAM(S), CHANGE THE FILE DESCRIPTION TO SPECIFY THAT COMMITMENT
CONTROL IS IN USE.___ Once the program returns to the CL program, end
the commit cycle to force any pending file I/O to complete.
Nowhere does it say to execute the COMMIT opcode, that I can see - only to use the COMMIT keyword on the F-spec (the FILE DESCRIPTION in the quote).
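If so, I take "FILE DESCRIPTION" to mean just this on the F-spec - made-up file name, columns approximate:

     FCUSTFILE  UF A E           K DISK    COMMIT

That is, the COMMIT keyword, not the COMMIT op-code.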
It'd be easy enough to do a test, and, being on vacation, maybe I will!
Regards
Vern
Charles Wilt wrote:
No, read the second-to-last line in the quote I posted from Rick's document.
__IN THE APPLICATION
PROGRAM(S), CHANGE THE FILE DESCRIPTION TO SPECIFY THAT COMMITMENT
CONTROL IS IN USE.___
So you need to add the COMMIT keyword to the F-spec for RPGLE
programs. Thus the writes to the file will use commitment control and
therefore you'll need at least one COMMIT (and maybe ROLLBACK) op-code
in the program.
So the steps:
1) Add STRCMTCTL/ENDCMTCTL to the CL if commitment control isn't already running
2) Modify the F-specs of the updated/added-to file(s) to include the COMMIT keyword
3) Add a COMMIT op-code before the program returns.
This places all writes by the program into one big commit transaction
which means you'd get the most benefit from caching journal writes...
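In RPG terms, steps 2 and 3 might look something like this - file, format, and program names are made up and columns are approximate; the CL side is just STRCMTCTL (e.g. LCKLVL(*CHG)), CALL the batch program, then ENDCMTCTL:

     FCUSTFILE  UF A E           K DISK    COMMIT
      /free
        // ... all the reads/updates/writes against CUSTFILE happen here,
        //     now under commitment control because of the COMMIT keyword ...

        commit;            // one big transaction, closed before we return
        *inlr = *on;
        return;
      /end-free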
One additional consideration: if the program updates 10,000 records,
one commitment transaction is probably fine. Even with 100,000
records, you're probably still all right. But trying to put 1,000,000
records into a single commit transaction is going to hurt.
Ideally, you'd look at the records being written and figure out how
many of them it takes to reach 128KB, then commit after every X records,
where X keeps each transaction just under 128KB.
What you don't want to do is commit after each record, as then you
lose the cached journal writes.
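Something along these lines, say - made-up names, and 500 is just a
placeholder for whatever X works out to for your record sizes:

     FORDHIST   O    E             DISK    COMMIT
     D X_RECS          C                   500
     D recsWritten     S             10i 0 inz(0)
      /free
        // ... inside the main processing loop, after building the record:
        write ORDHISTR;
        recsWritten += 1;

        if recsWritten >= X_RECS;
          commit;                  // close out this batch of journal writes
          recsWritten = 0;
        endif;

        // ... after the last record has been written:
        commit;                    // pick up the final partial batch
        *inlr = *on;
      /end-free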
HTH,
Charles
On Fri, Mar 6, 2009 at 9:58 AM, Vern Hamberg <vhamberg@xxxxxxxxxxx> wrote:
Hi Charles
I'm not so sure we are not agreeing violently. As I remember, from my
former boss, who was on the development team for much of the journaling
stuff, you only have to change your PFs to be journaled, STRCMTCTL
before any operations on those PFs, then ENDCMTCTL when done. That is
the meaning of "uses commitment control" as I remember being told - no
need to actually commit or rollback. That is why this is a sweet,
relatively painless way to get the journal caching.
Rick's article says that the commit boundary is the key time - this is
established by the STR|ENDCMTCTL commands, right? Once a commit boundary
is established, writes to the journal are scheduled asynchronously - it
is not determined by whether you actually have a COMMIT or ROLLBACK opcode.
Now I've been wrong once before - oops, that makes twice!
Charles Wilt wrote:
Hey Vern, you've got your info wrong.
The REDBOOK I mentioned before makes it pretty clear that you must
actually use commitment control to get journal caching.
The Document from Rick Turner also seems pretty clear:
"Though most database and other write operations are asynchronous,
database journal receiver write operations are usually synchronous to
the issuing job. This means the job is forced to wait (in the system’s
disk I/O write functions) for the I/O (write) to complete before it
continues processing. The SLIC Journal functions can do the journal
writes asynchronously if the job uses commitment control.
When commitment control is in effect, the database journal write
functions know that file integrity is required only at a commit
boundary and not at every record update/add/delete operation. Because
of this, the database journal writes are scheduled asynchronously.
When a commit boundary is reached, the database functions ensure that
all pending database file I/O is complete before continuing.
Lab tests show that using commitment control and journaling yields
performance almost equal to not using database journaling. If you use
journaling but not commitment control, a job can be three to four
times slower than when you don’t use journaling at all.
“But this means I have to change my code!” you say. True, but the cost
of the changes are minimal compared to the performance benefit. In the
CL program that calls the batch program, specify the files that use
commitment control and open them. Start a commit cycle in the CL
program before calling the batch program. __IN THE APPLICATION
PROGRAM(S), CHANGE THE FILE DESCRIPTION TO SPECIFY THAT COMMITMENT
CONTROL IS IN USE.___ Once the program returns to the CL program, end
the commit cycle to force any pending file I/O to complete."
(emphasis mine)
Charles
On Thu, Mar 5, 2009 at 4:29 PM, Vern Hamberg <vhamberg@xxxxxxxxxxx> wrote:
Also look at Rick Turner's classic articles on batch performance at
http://www-01.ibm.com/support/docview.wss?uid=nas1e907e76673a614dd86256a290054f546
This includes the tip to turn on commitment control in order to get a
kind of journal caching. You don't have to rollback or commit - just
start commitment control before running a program that uses a journaled
file - journal writes will be asynchronous, much as the LPP does. Tip
has been around for probably at least a decade. ;-)
HTH
Vern
Charles Wilt wrote:
Just to be clear,
Journalling + commitment control + "Soft Commit" = awesome :)
Journalling + commitment control = good
Journalling by itself = can be OK, but the performance hit (by default)
is definitely noticeable on batch processes even on today's
processors - mainly because the processor doesn't matter; disk speed is
the big factor.
I say "by default" because IBM has an LPP that you can pay for to
improve the situation. It allows you to turn on "journal caching"
when you don't use commitment control.
Note that the "Soft Commit" option mentioned is new to v5r4 and brings
the benefits of journal caching to smaller commit transactions.
Standard journalling + commitment control performs best the closer you
can get your transactions to 128KB of data.
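For example (hypothetical numbers): with roughly 256-byte records,
128KB / 256 bytes is about 512 records, so committing somewhere around
every 500 writes would be in the right ballpark - ignoring the
per-entry journal overhead.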
I highly recommend the redbook "Striving for Optimal Journal
Performance on DB2 Universal Database for iSeries"
http://www.redbooks.ibm.com/abstracts/sg246286.html?Open
And the technote "Soft Commit: Worth a Try on IBM i5/OS V5R4"
http://www.redbooks.ibm.com/abstracts/tips0623.html
HTH,
Charles
On Thu, Mar 5, 2009 at 1:18 PM, Peter Connell
<Peter.Connell@xxxxxxxxxxxxxxxxx> wrote:
We journal everything to be sent to a DR box.
Even with everything journalled, the performance is very good with today's processors.
For one file, you won't even notice it.
You can restrict the size the journal receiver will reach before the system detaches it and starts another, and have some job remove the detached receivers.
Peter
This is the RPG programming on the IBM i / System i (RPG400-L) mailing list
To post a message email: RPG400-L@xxxxxxxxxxxx
To subscribe, unsubscribe, or change list options,
visit: http://lists.midrange.com/mailman/listinfo/rpg400-l
or email: RPG400-L-request@xxxxxxxxxxxx
Before posting, please take a moment to review the archives
at http://archive.midrange.com/rpg400-l.