Hi Rob ... about your journaling e-note embedded below:
Not everyone has a terabyte of hard drive available for Journal Receivers
(JRs). We've seen places past 90% DASD consumed because no one actively
manages JR accumulation. Our Stitch-in-Time software can store only the
information the user defines as pertinent for a given file. That translates
to a DASD footprint roughly 92% to 98% smaller than JRs for an equivalent
retention period.
Beyond using less DASD, Stitch-in-Time would also make it DRAMATICALLY
simpler to find the source of rogue database updates. Consider the
current-events LWK example:
If you were attempting to determine which program was making mysterious
changes in rate fields by using JR data, you'd start by getting a list of all
records in LWK that were potentially (but not necessarily) changed. Then
you'd wade through that streamed data dump, looking for data in columns x
through y that changed in consecutive journal records. That's a chore because
there are no visual divisions between fields and because only a single
journal entry is viewable at a time ... so you'd have to jump back and forth
between two screens to spot value changes.
To avoid eye strain, a sophisticated iSeries guy could extract JR data for
LWK into two identical external data files and then purchase a utility to
exclude records with no data change. Then he'd create a query or SQL process
that compared data in columns x through y between the two files to find rogue
data changes. Records ID'd that way would include the rogue program name.
The very sophisticated, less-eye-strain JR approach would consume 130-160
minutes. The same objective could be achieved in 2-4 minutes using
Stitch-in-Time.
Now, if you wanted an email alert about an LWK data accident just seconds
after the bad data hit LWK, you'd use Needle in a Haystack software. The
Needle alert would include the rogue program name plus all the other
information you'd need to fix the error immediately ... before it propagated
into CAP calculations.
We're a Mimix shop, so it has to be journaled. If you start journaling on
this file you can easily see who updated it, from what job, using what
program. Heck, try it on your own test db somewhere.
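For reference, the "try it on a test db" suggestion maps to a handful of standard CL commands; the library, journal, and receiver names below (MYLIB, LWKJRN, LWKRCV0001) are placeholders, not anything from the original note.

```
CRTJRNRCV JRNRCV(MYLIB/LWKRCV0001)                    /* create a receiver */
CRTJRN    JRN(MYLIB/LWKJRN) JRNRCV(MYLIB/LWKRCV0001)  /* attach it         */
STRJRNPF  FILE(MYLIB/LWK) JRN(MYLIB/LWKJRN) IMAGES(*BOTH)
DSPJRN    JRN(MYLIB/LWKJRN) FILE((MYLIB/LWK))         /* user/job/program  */
```

IMAGES(*BOTH) captures before and after images, which is what makes the two-file compare above possible.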
Drawback: there's a whole discussion on journal receiver maintenance.
We've gone nuts with our journal receivers ... they're so useful that
management wants to keep them for 100 days. So we tie up a TB or so with all
the journaling we do. But what the heck, I have 1/2 TB on my laptop.
This mailing list archive is Copyright 1997-2022 by midrange.com and David Gibbs as a compilation work. Use of the archive is restricted to research of a business or technical nature. Any other uses are prohibited. Full details are available on our policy page. If you have questions about this, please contact