


On 14-Jul-2014 15:26 -0500, Steinmetz, Paul wrote:
<<SNIP>>
I ran the command you listed earlier in this thread, with the only
difference being the library, QRECOVERY, not QSYS.

CHGJRN JRN(QRECOVERY/QDBJRNXRFQ) JRNRCV(*GEN)

Most journals have a new journal receiver created on a more frequent
basis.

I believe the size THRESHOLD() specified in other system-supplied journaling ecosystems [i.e. the attributes specified on the Create Journal Receiver (CRTJRNRCV) command for the initial receiver] probably does not default to the newer\larger value of 1.5GB. Instead, the initial journal receiver is more typically created explicitly with a smaller threshold; although that causes more receivers to be generated over time, it reduces the minimum storage requirement for those types of journaling that consistently and periodically fill the active receiver.
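As an illustration only [the receiver name and threshold below are hypothetical, not what the system actually uses], an initial receiver with an explicitly smaller threshold might be created and attached like this; THRESHOLD() is specified in kilobytes, so 100000 is roughly 100MB:

  CRTJRNRCV JRNRCV(QRECOVERY/QDBJRNXR01) THRESHOLD(100000)
  CHGJRN    JRN(QRECOVERY/QDBJRNXRFQ) JRNRCV(QRECOVERY/QDBJRNXR01)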

If not, should this be part of normal maintenance, possibly
included in QSTRUP?

The feature for which that journal environment exists [as with the other two environments for the QDBSRVXR and QDBSRVXR2 system jobs] intends to /roll over/ the data over time; the size threshold should reflect some [acceptable] amount of storage that will be taken, periodically and repeatedly, by the feature. If new *JRNRCV objects are being generated within a month, or especially within a week, at the current storage threshold, then there is little reason IMO to try to pare them aggressively.

If placed within the processing of the QSTRUPPGM, I would probably submit a job to perform that work, along with any other work best done asynchronously from the startup of the system; there is no reason such work cannot be done concurrent with, or after, the startup. To limit the impact on the requirement for unique naming of the new receivers, I would temper any requests to perform CHGJRN JRNRCV(*GEN) according to whatever /acceptable/ minimum amount of storage your system will allow to be allocated for the feature; rather than blindly issuing the Change Journal, perform the request only when the /size/ of the journal receiver object has reached some threshold [just as the system-managed support does].
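A minimal CL sketch of that conditional approach follows. Everything here is illustrative: the receiver name is hypothetical [in practice the currently attached receiver name would first be retrieved, e.g. via the Retrieve Journal Information (QjoRetrieveJournalInformation) API, not shown], and I am assuming the SIZE return variable of RTVOBJD, which reports the object size in bytes:

  PGM
    DCL VAR(&RCV)  TYPE(*CHAR) LEN(10) VALUE('QDBJRNXR01') /* hypothetical */
    DCL VAR(&SIZE) TYPE(*DEC)  LEN(15 0)
    /* Swap receivers only when the attached receiver nears ~100MB */
    RTVOBJD OBJ(QRECOVERY/&RCV) OBJTYPE(*JRNRCV) SIZE(&SIZE)
    IF COND(&SIZE *GT 100000000) +
       THEN(CHGJRN JRN(QRECOVERY/QDBJRNXRFQ) JRNRCV(*GEN))
  ENDPGM

Such a program could be submitted from the QSTRUPPGM with something like SBMJOB CMD(CALL PGM(MYLIB/JRNMAINT)) [library and program names again hypothetical], so the startup itself is not delayed.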

My preference would be to change\customize the THRESHOLD() to allow the system to continue the receiver management, presuming the *DBXREF feature would not, at some point in the future, effect a re-creation of that journal environment such that the THRESHOLD() gets reset to the larger default; see another reply in this thread where I explain how to establish, minimally, proper ownership [and authority] when customizing the journaling ecosystem for the QDBXREFQ journaling [or other system-established journaling environments]. Another option might be to customize the CLEANUP options to include those system journals along with the ones that feature already manages; I believe there is a capability to add some user-defined actions? Yet another option would be to perform the request in a scheduled job run monthly. In any case, if the receiver is growing large faster than whatever frequency would be used outside of the already system-managed setup, I might just accept the current defaults... what's another couple GB ;-)




This mailing list archive is Copyright 1997-2024 by midrange.com and David Gibbs as a compilation work. Use of the archive is restricted to research of a business or technical nature. Any other uses are prohibited. Full details are available on our policy page. If you have questions about this, please contact [javascript protected email address].
