It's probably been 12 years since these cross-reference files were first documented - at that time it was pretty strongly recommended to leave them alone. I suppose there could have been changes - there are more of them now - but I still tend to leave them alone and use the views in QSYS2 (or now SYSIBM). YMMV

Just looked up more info. The V5R2 InfoCenter describes 8 of these cross-reference files. *USE authority is restricted to the security officer, but all users can view the data through the supplied logical files. It does say that the authorities cannot be changed, since the files are always open. So my caution may be overstated, although it does speak of looking at the data via the logicals, not directly.
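As an aside, if SQL is an option, the catalog views give you the same information without touching QADBXREF. A sketch - the FILE_TYPE column and its 'S'/'D' values are from memory, so verify against QSYS2.SYSTABLES on your release:

```sql
-- List source physical files via the SQL catalog view
-- instead of reading QADBXREF directly.
-- (FILE_TYPE = 'S' should mark a source file - verify the
--  column name and values on your release.)
SELECT SYSTEM_TABLE_SCHEMA, SYSTEM_TABLE_NAME
  FROM QSYS2.SYSTABLES
 WHERE FILE_TYPE = 'S'
 ORDER BY SYSTEM_TABLE_SCHEMA, SYSTEM_TABLE_NAME
```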

Info APAR II08311 has considerable info on these files. It first came out with V3R1, which I think is when this SQL catalog stuff first appeared, but it has been updated recently, so it's still current. Good stuff. Find it in one of the technical databases on the support page, or order it with iPTF or ECS.

You are probably right on RCVJRNE - I might be thinking of data queues. There is a mechanism, IIRC, for notifying any job waiting on an entry so it can say whether a delete should be allowed. Since it can be a good idea to remove old receivers to conserve disk space, this inter-process mechanism can be important. Old receivers should be removed eventually anyway.
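For the monitoring itself, the usual shape of the RCVJRNE call is something like this - the exit program name is a placeholder, and the CO/DO entry types (create object / delete object in QAUDJRN) are from memory, so verify them:

```
/* Watch the audit journal for object create/delete entries.  */
/* MYLIB/MYEXIT is a placeholder exit program that gets each  */
/* entry as it arrives; ENTTYP values CO (create object) and  */
/* DO (delete object) are from memory - verify on your system.*/
RCVJRNE    JRN(QSYS/QAUDJRN) EXITPGM(MYLIB/MYEXIT) +
             ENTTYP(CO DO)
```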

I don't believe there is any limit on the number of objects that can be audited. It is not a system setting, after all - it's an object attribute that the system checks, so I see no reason for a limit. Could not find it mentioned.

It might be necessary to use an API to retrieve the file description, to identify the type of file - I don't remember whether the extended attribute is available from the journal entry.
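If the entry doesn't carry it, one way to test from CL without an API is RTVMBRD - a sketch, with placeholder names, and the SRCFILE return parameter is from memory, so check the command help:

```
PGM        PARM(&LIB &FILE)
  DCL        VAR(&LIB)  TYPE(*CHAR) LEN(10)
  DCL        VAR(&FILE) TYPE(*CHAR) LEN(10)
  DCL        VAR(&SRC)  TYPE(*CHAR) LEN(4)
  /* RTVMBRD should return *YES in SRCFILE when the file is  */
  /* a source physical file (parameter name from memory).    */
  RTVMBRD    FILE(&LIB/&FILE) MBR(*FIRST) SRCFILE(&SRC)
  IF         COND(&SRC *EQ '*YES') THEN(DO)
    CHGOBJAUD  OBJ(&LIB/&FILE) OBJTYPE(*FILE) OBJAUD(*CHANGE)
  ENDDO
ENDPGM
```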

I'm interested in what you finally come up with, and how it works for you.

Vern

At 03:11 PM 9/12/2003 -0700, you wrote:
QDBJRN is just another journal, isn't it?  Even if I RCVJRNE on it, it
won't make the entries unavailable to another process, will it?

QAUDJRN won't necessarily capture file creations and deletions (unless
*OBJMGT ? is set on QAUDLVL).  Even if they were being captured, would it
tell me whether the file was a source PF?

QADBXREF: Yours is the first I've ever heard about not reading the
QADBXREF file - not touching it period.  I've known that it was not
recommended to build logicals on that file, but I've never run into
problems querying the file.  Does IBM warn against this?

The 1000 limit was just an example.  Does the system limit the number of
objects that can be audited?  Or are there any performance implications
with having a large number of objects (in this case, only source files -
again, by nature, being very low volume) being audited?

Thanks for giving me a few more things to think about!

GA

--- Vern Hamberg <vhamberg@xxxxxxxxxxxxxxxxxxxxxxxxx> wrote:
> I recommend RCVJRNE on QAUDJRN, not QDBJRN - I don't like messing with
> those cross-reference files. if you have some kind of object replication
> software, watch out for this, as they may need the entry to stay in the
> audit journal.
>
> No problem deleting an object with auditing turned on - all it is is a
> flag that is checked at every access to the object anyway - that's why
> there is minimal impact on the system, as you believe. The check is
> always done - extra overhead comes from writing the journal record, if
> needed.
>
> Don't interrogate QADBXREF - run DSPFD *ALL/*ALL fileatr(*pf)
> output(*outfile) and query the outfile - again, stay away from the
> system database cross-reference - the system is always touching that
> stuff.
>
> BTW, I'm curious about your 1000 limit - I could not find that in the
> Capacities Reference.
>
> HTH
>
> Vern
>
> At 01:01 PM 9/12/2003 -0700, you wrote:
> >Well, I was specifically hoping I wouldn't have to do a CHGOBJAUD on
> >each
> >source file.  Source files get created, get deleted, etc, and I want to
> >track all source files on the system, regardless of who creates them or
> >when they are created.  However, if that is the only way, will the
> >following scenario work?
> >
> >Let's say I have an initial step where I do a CHGOBJAUD on every source
> >file known on the system.  I can interrogate the QADBXREF file to do
> >this,
> >since it tells whether a file is a source or data file.
> >
> >What if, immediately after that first step, I start a RCVJRNE on the
> >QDBJRN journal, which journals the QADBXREF file, and I look for source
> >files being created or deleted.  When one is created, I could
> >immediately
> >issue a CHGOBJAUD on it.  (What happens when a file being audited is
> >deleted?  Anything bad happen?  Should I issue a CHGOBJAUD
> >OBJAUD(*NONE)
> >on it?)
> >
> >And, assuming all of the above is viable, what happens if I have 1000
> >source files that I've started auditing?  I'm asking this question as
> >it
> >pertains to the limit on the number of objects that can be audited.  I
> >don't really see where auditing source member adds, updates, and
> >deletes
> >is going to impact system performance.  But someone may educate me
> >otherwise.
> >
> >Is it possible to easily list all objects that are being audited?
> >
> >BTW, do I want to use OBJAUD(*CHANGE) on the CHGOBJAUD command?
> >
> >I'll see the wisdom of asking this on Friday afternoon.  <g>
> >
> >TIA, GA



This mailing list archive is Copyright 1997-2024 by midrange.com and David Gibbs as a compilation work.