On Wed, 2014-10-08 at 01:57 -0700, Kirk Goins wrote:
I have some really basic questions on using journals to recover from a bad
SQL-based update to a file. The SQL was supposed to update a single field in a
rather small number of records, from what I was told, but it updated several
hundred thousand before it was stopped. So the file was journaled with
before and after images.

If the file is journaled with before/after images then the journal will hold
both; if it is defined to hold after images only, then that is all that is stored.
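
For what it's worth, an easy way to confirm is DSPFD; a sketch along these
lines (library, file and journal names are placeholders, and the
ENDJRNPF/STRJRNPF step only applies if the file turns out to be
after-image only):

    /* Look for "Journal images . . . : *BOTH" in the output     */
    DSPFD FILE(MYLIB/MYFILE) TYPE(*ATR)

    /* Only if after-images only: restart journaling with both   */
    /* images (needs a quiet moment on the file)                 */
    ENDJRNPF FILE(MYLIB/MYFILE)
    STRJRNPF FILE(MYLIB/MYFILE) JRN(MYLIB/MYJRN) IMAGES(*BOTH)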


1st question: I assume that what is journaled is the entire record, not just
what was changed (say, one field), correct?

That is correct, at least as far as my knowledge goes from RPG... the
best way to check would be a simple DSPJRN just in case SQL differs.
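
Something along these lines for the quick check (all names and the
timestamp are made up; UB and UP are the update before/after entry types):

    /* Record-level (journal code R) entries for this file only  */
    DSPJRN JRN(MYLIB/MYJRN) FILE((MYLIB/MYFILE)) JRNCDE((R)) +
           ENTTYP(UB UP) FROMTIME('10/08/14' '010000')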


2nd question: Since the file was in active use and other jobs could have
updated other fields of the modified records after the bad SQL was run, I
don't think we can just use the RMVJRNCHG cmd to back out those SQL changes and
then run APYJRNCHG from the SEQ directly following the bad changes. If
we use APYJRNCHG on a record we rolled back, it will just put the bad
data back, if my assumption in #1 is correct?

If you roll back the changes, it will also roll back other changes done
at the same time as the SQL to both the file in question and all other
files that were journaled.
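
That said, if you do try RMVJRNCHG, my understanding is the FILE
parameter at least scopes the removal to the one file; a sketch, with
placeholder sequence numbers you'd take from DSPJRN (removal runs
backwards, so FROMENTLRG is the later entry):

    /* Backs out ALL changes to this file in the range, from     */
    /* whatever job, but leaves other journaled files alone      */
    RMVJRNCHG JRN(MYLIB/MYJRN) FILE((MYLIB/MYFILE)) +
              FROMENTLRG(1234999) TOENTLRG(1234001)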


Last question for now: what happens if there are triggers on this file if
we were to use the RMVJRNCHG cmd? How about APYJRNCHG?

I can't help on that one.

I would have thought the best bet would be to extract the journaled
entries relating to the file in question... this might require a bit of
programming to get all the entries into a file, then subset to the ones
in question (the file name is also stored in the journal, as is other
info such as user, type of update, etc.)
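
DSPJRN will actually do most of that programming for you with its
outfile support; a sketch with placeholder names (the *TYPE4 format
carries entry type, user, job, sequence number and the record image):

    /* Each outfile row holds JOENTT (entry type), JOUSER,       */
    /* JOSEQN and the full record image in the JOESD field;      */
    /* subset afterwards on user/job to isolate the bad SQL      */
    DSPJRN JRN(MYLIB/MYJRN) FILE((MYLIB/MYFILE)) JRNCDE((R)) +
           ENTTYP(UB UP) OUTPUT(*OUTFILE) OUTFILFMT(*TYPE4) +
           OUTFILE(QTEMP/JRNOUT) ENTDTALEN(*CALC)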

Then once you have verified the data, update the field in question from
the "before" image: a simple chain/update/commit for each record, or
batch the commit to X records. I don't know if it still holds true, but
having a lot of uncommitted records (in my case tens of thousands; don't
ask, logically it was a good idea, in reality a really bad one :-/) has
a major impact and can pull the system down.
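
As a very rough sketch of that repair step (untested; it assumes
QTEMP/JRNOUT has already been subset to just the bad job's before-images,
one per record, and the SUBSTR positions for the key and the damaged
field are made up and must match the file's real layout; RUNSQL needs a
reasonably current release, otherwise the same statement works from
STRSQL):

    /* Put the field back from each record's before-image (UB)   */
    RUNSQL SQL('UPDATE MYLIB/MYFILE F +
                SET BADFLD = (SELECT SUBSTR(J.JOESD, 11, 10) +
                              FROM QTEMP/JRNOUT J +
                              WHERE J.JOENTT = ''UB'' AND +
                                    SUBSTR(J.JOESD, 1, 10) = F.KEYFLD) +
                WHERE F.KEYFLD IN (SELECT SUBSTR(JOESD, 1, 10) +
                                   FROM QTEMP/JRNOUT +
                                   WHERE JOENTT = ''UB'')') +
           COMMIT(*CHG) NAMING(*SYS)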

You will obviously get a whole new set of before/after images in the
journal (best to detach the original receiver and save it before running
the new update). Any triggers/cascades etc. will all be done as normal.
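
Detaching and saving the receiver is just (receiver and device names are
placeholders; WRKJRNA shows the real receiver name once detached):

    /* Attach a fresh receiver so the one holding the history    */
    /* can be saved off                                          */
    CHGJRN JRN(MYLIB/MYJRN) JRNRCV(*GEN)
    SAVOBJ OBJ(MYRCV0001) LIB(MYLIB) OBJTYPE(*JRNRCV) DEV(TAP01)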

My gut feeling would be "everyone off the system" while this update is
run, so that any problems (program errors, logic errors) can be very
quickly rolled back if it doesn't go to plan. If having the field
"incorrect" doesn't impact too much on the live system, then the fix
could be put off till a quiet time.
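
One cheap way to enforce that (names again placeholders) is to take an
exclusive lock first and only run the fix if you get it:

    /* Fails if anything still has the file open                 */
    ALCOBJ OBJ((MYLIB/MYFILE *FILE *EXCL)) WAIT(30)
    /* ... run the repair update here, then release ...          */
    DLCOBJ OBJ((MYLIB/MYFILE *FILE *EXCL))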

The other possibility is to roll the whole system back to before the
problem happened, then re-do the work from that point forward as if
the problem had never happened... it sucks, but sometimes it's the least
problematic option, although you have to keep an eye out for things such
as stock pick lists being duplicated so they don't get re-picked.


Thanks



--
Kirk


