This is just a hypothetical situation, but say you worked at a bank where you had 500,000 records in a file with credit card numbers and bank account numbers (all encrypted, of course). You get a report that your information has been discovered in the hands of an identity theft villain. You go looking around, and object auditing shows that someone you fired last month (who worked for ITS and whose job duties required all-object access) had accessed this file a few hours before being fired, with no business reason to be using the file at all. If that is all you know, you must assume that all 500,000 records were accessed and that you need to contact 500,000 customers about the problem. In fact this person only accessed 100 records (was not being greedy). The only way you could know that is by having a read trigger and doing your own logging.
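For reference, a read trigger is attached with the ADDPFTRG command. A minimal sketch, assuming hypothetical library, file, and trigger program names (the trigger program itself would write who/when/which row to your own audit file):

  ADDPFTRG FILE(BANKLIB/CARDS)    /* file holding the card data      */
           TRGTIME(*AFTER)        /* *READ triggers must be *AFTER   */
           TRGEVENT(*READ)        /* fire on every read of a record  */
           PGM(BANKLIB/RDLOGTRG)  /* your own logging trigger program */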
-----Original Message-----
From: midrange-l-bounces@xxxxxxxxxxxx [mailto:midrange-l-bounces@xxxxxxxxxxxx] On Behalf Of CRPence
Sent: Wednesday, August 27, 2008 8:02 AM
To: midrange-l@xxxxxxxxxxxx
Subject: Re: DB2/400 logging of read only access
MichaelQuigley wrote:
> A word of caution, my understanding is that read triggers drastically
> impact performance. With a read trigger things like blocked reads,
> etc. are not available.
Blocking to the program is lost, but database paging is still in effect. Random access via a unique index sees the least impact from adding a read trigger, but even that impact is consistently measurable.
> The recommendation I've heard is to at least double memory and CPU
> resources to handle the triggers.
Only if based on FUD; e.g. by someone selling that hardware. It should be obvious that, for lack of blocking, the program's memory requirements for its input actually decrease. In most cases the only way to reduce the impact would be a faster CPU, not more CPU.
Regardless, it is easy to test the true impact by designing a trigger that does absolutely nothing; if ILE, then with a named activation group. That fake read trigger measures the minimal impact. Any additional overhead [e.g. memory and real time] in the actual read trigger program, beyond that for the fake read trigger program, will of course depend on what the real trigger program does; it should be designed for minimal impact to real time.
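A minimal sketch of such a do-nothing trigger, written here in CL (program name hypothetical; compile with CRTBNDCL and a named ACTGRP to make it ILE):

  /* NOOPTRG: do-nothing *READ trigger, used only to measure  */
  /* the baseline cost of the trigger call itself.            */
  PGM PARM(&TRGBUF &TRGLEN)
    DCL VAR(&TRGBUF) TYPE(*CHAR) LEN(9999) /* trigger buffer; ignored        */
    DCL VAR(&TRGLEN) TYPE(*CHAR) LEN(4)    /* buffer length, BIN(4); ignored */
  ENDPGM

Run the same workload with and without this trigger attached; the difference is the floor for any real read trigger.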
> This may not be so big if the file *REALLY* is not accessed often
And that is best investigated for each situation. That is, the
decision to avoid the read trigger should not be based on some
implication of what the impact *might* be.
> I think the recommendation to do object auditing generally makes more
> sense.
So true, if read auditing [via *ALL object auditing; CHGOBJAUD] is sufficient. If the requirement is to record access to specific rows rather than just to the file and member, then only a read trigger would generically suffice [i.e. for access outside of application control]. For other situations the open/close entries for the journaled file might be a better option, and for some rare applications the /database open/ exit might be of value.
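For reference, read auditing on a specific file is enabled with CHGOBJAUD, assuming hypothetical names and that the QAUDCTL system value allows object auditing:

  CHGOBJAUD OBJ(BANKLIB/CARDS) OBJTYPE(*FILE) OBJAUD(*ALL)

With OBJAUD(*ALL), read access produces ZR (object read) entries in the QAUDJRN audit journal, recorded at the object level rather than per row; that is exactly the file-versus-row distinction above.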
Regards, Chuck