Chuck,

I completely agree. However, from experience I've had files which
contained a copy of data from someplace else, and while not normally
written to "in mass", there were circumstances where that would occur.

The files in question were journalled to support transactions in the
normal course of events.

Having processes turn off journalling and back on is something that
usually bugs the auditors. <grin>

In addition, turning off journalling would have meant that I'd have
to prevent the normal processes that depended on journalling from
running. As it stands, they'll have a longer wait than normal if they
happen to be working on a record in a given batch of the mass update.

Possibly a unique situation and the whole design might have worked
differently, but it was there when I got there. I just ended up
trying to improve the performance without pitching the whole thing.
Cursors with periodic commits were a big part of the solution.
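
Stripped down, and with library, table, and column names made up here
rather than the real ones, the batched update took roughly this shape
as an SQL procedure:

   -- Illustration only; MYLIB.ORDERS and its columns are invented.
   CREATE PROCEDURE MYLIB.MASS_UPD_BATCHED()
   LANGUAGE SQL
   BEGIN
     DECLARE V_ID INTEGER;
     DECLARE V_DONE INTEGER DEFAULT 0;
     DECLARE V_CNT INTEGER DEFAULT 0;
     -- WITH HOLD keeps the cursor open across each periodic COMMIT
     DECLARE C1 CURSOR WITH HOLD FOR
       SELECT ORDER_ID FROM MYLIB.ORDERS
         WHERE STATUS = 'PENDING'
         FOR UPDATE OF STATUS;
     DECLARE CONTINUE HANDLER FOR NOT FOUND SET V_DONE = 1;

     OPEN C1;
     FETCH C1 INTO V_ID;
     WHILE V_DONE = 0 DO
       UPDATE MYLIB.ORDERS SET STATUS = 'DONE'
         WHERE CURRENT OF C1;
       SET V_CNT = V_CNT + 1;
       -- commit every 500 rows so record locks and journal entries
       -- don't pile up for the entire mass update
       IF MOD(V_CNT, 500) = 0 THEN
         COMMIT;
       END IF;
       FETCH C1 INTO V_ID;
     END WHILE;
     CLOSE C1;
     COMMIT;
   END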

In the end, I just want others to be aware of the performance
implications of journaling when it comes to "mass updates".

Charles



On Thu, May 7, 2009 at 5:16 PM, CRPence <CRPbottle@xxxxxxxxx> wrote:
  Of course this is effectively a case of a work-file, written as an
/extract file/, according to the original scenario; difficult to infer
otherwise.  If it were not, then the original data should be just as
usable for almost any operation; i.e. a copy of a subset is
inherently a work-file, because the data could have been reported on
directly from the original file.  And while _I agree_ with a concern
for journaling, I disagree that making a copy of the data would be
done better with "reading with a cursor and committing periodically"
because...

  If the application is just going to COMMIT periodically for the
sake of performance, without regard to actual recovery purposes, then
the better solution is to just avoid logging altogether.  Although
in production processing commitment control is ideal to ensure
atomicity, there is little reason for logging for [what intends to
be] an essentially atomic operation in creating a copy of the data.
In the case of making a copy, either the statement completes or it
does not.  Going without logging & CmtCtl avoids any need for tricks
to speed up the processing by committing changes, as though the
application were designed to handle recovery if the program is
interrupted, when most probably it is not; i.e. if interrupted, the
program will likely require a DROP TABLE and then starting over
[everything committed so far is already permanent], just as if it had
been done without any logging.

  Other databases do not have this option of not logging [for DML],
and thus such tricks are required to avoid reaching their logging
limits.  For DB2 for i, the reason for not logging is typically
performance, for which turning off logging is a better resolution in
many cases.

  If the TABLE is created into a SCHEMA, or a library enabled for
automatic journaling via QDFTJRN, then issue a separate CREATE TABLE
and then issue ENDJRNPF.  Then use a single DML statement to make the
data copy.  Finally, if journaling is of value, which I doubt given
the /extract file/ reference [as being a subset of data], then
STRJRNPF could be performed after the copy.
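
  For illustration only [the library, file, column, and journal names
below are merely placeholders], the sequence might look like:

   -- create the empty target first; in a SCHEMA or a QDFTJRN-enabled
   -- library, this CREATE also starts journaling automatically
   CREATE TABLE EXTLIB.CUST_EXT AS
     (SELECT * FROM PRODLIB.CUSTOMER) WITH NO DATA;

   -- end journaling on the new file before loading it  [CL command]
   ENDJRNPF FILE(EXTLIB/CUST_EXT)

   -- one DML statement performs the entire copy, with no logging
   INSERT INTO EXTLIB.CUST_EXT
     SELECT * FROM PRODLIB.CUSTOMER
      WHERE REGION = 'EAST';

   -- only if journaling the extract is actually of value  [CL command]
   STRJRNPF FILE(EXTLIB/CUST_EXT) JRN(EXTLIB/QSQJRN)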

  Note:  This all assumes CPYF was originally used or considered
acceptable for use, a tool for which there is effectively no
isolation support.  If one were attempting, with their use of SQL, to
introduce isolation to ensure integrity of the copied data, then the
only option is commitment control, which requires logging.

Regards, Chuck

Charles Wilt wrote:

The only issue I have with this is that when the destination table
is journalled, there's a huge performance hit if you write to it
without commitment control (unless you have the v5r4+ HA Journal
Performance LICPGM or, prior to v5r4, the Journal Caching PRPQ).
If you're writing lots of records, trying to do them as a single
commit transaction can also be a killer.

At that point, you're better off reading with a cursor and
committing periodically.

The writes should be done either with RPG op-codes or via SQL
batch inserts.
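
For instance, one form of a batch insert [table and values here are
just made up for the example] is a blocked multi-row INSERT instead
of one row per statement:

   INSERT INTO MYLIB.WORKFILE (ID, AMT)
     VALUES (101, 15.00),
            (102, 22.50),
            (103,  7.25);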

Granted, many times you're copying data like this for work files
which don't need to be journalled.  But if that isn't the case...

On Thu, May 7, 2009 at 3:48 PM, CRPence wrote:
In making a final copy as TABLE, as a subset of data from a
database file, there would be little reason to move any data
anywhere besides directly from the source table to the target
table. No FETCH means no array.  Just choose an SQL statement
that does it all, and avoid any cursor and the additional
coding for array & movement of data.
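
For example, a single statement [names here are only placeholders]
that both defines and fills the copy:

   CREATE TABLE EXTLIB.CUST_EXT AS
     (SELECT * FROM PRODLIB.CUSTOMER
       WHERE REGION = 'EAST')
     WITH DATA;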

Regards, Chuck



