----- Original Message -----
From: <bill.reger@convergys.com>
> In either case, it is not a particularly difficult task for an Analyst to
> figure out what the appropriate SIZE values should be when
> creating/maintaining a database file.  The important thing is to consider
> the current requirements, reasonable growth expectations, data entry
> and purge criteria.


Should programs have a MaxCpuPerEnterKey( ## ) attribute (or a max number of RPG
instructions per Enter key)?  An analyst could determine how much
CPU a pgm should use and, if it wildly exceeds this amount, the system
would halt the pgm and send a "program is looping" msg to QSYSOPR.
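No such attribute exists on OS/400 today; the name MaxCpuPerEnterKey is invented here. Purely as an illustration, the idea could be sketched as a watchdog that halts a job once its CPU budget is exceeded (Python used for the sketch, with the analyst-chosen budget as a parameter):

```python
import time


class ProgramLoopError(Exception):
    """Raised when a job blows its CPU budget -- the 'program is looping' halt."""


def run_with_cpu_budget(step, max_cpu_seconds):
    """Run step() repeatedly, halting the job if CPU time exceeds the budget.

    Hypothetical analogue of a MaxCpuPerEnterKey(##) attribute: the analyst
    picks max_cpu_seconds for the pgm; when the job wildly exceeds it, the
    'system' halts it (here, raises ProgramLoopError) rather than letting it
    loop forever.  step() returns True when the program is done.
    """
    start = time.process_time()
    while True:
        if step():
            return  # normal end of program
        if time.process_time() - start > max_cpu_seconds:
            # In the proposal this is where QSYSOPR would get the message.
            raise ProgramLoopError("program is looping")
```

A correctly classified program finishes inside its budget and is never disturbed; only a runaway loop trips the halt.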

Or would this be a nuisance that catches some bugs but also halts too many
programs that have been incorrectly classified?

Limiting the number of records in a PF prevents the system from being crashed
by a looping pgm.
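A toy model of that cap (again a sketch, not the real file system: the class and its behavior are invented, with size=None standing in for SIZE(*NOMAX)):

```python
class PhysicalFile:
    """Toy model of a physical file with a SIZE-style record cap.

    size=None plays the role of SIZE(*NOMAX); a number is the cap an analyst
    would set from current requirements, growth expectations, and purge
    criteria.  A looping pgm writing to a capped file fails fast instead of
    filling auxiliary storage.
    """

    def __init__(self, size=None):
        self.size = size
        self.records = []

    def write(self, record):
        if self.size is not None and len(self.records) >= self.size:
            # The real system would signal a member-full condition and give
            # the operator the choice to extend the member or cancel the job.
            raise RuntimeError("member is full")
        self.records.append(record)
```

With a cap set, the looping writer is stopped at the limit; with the *NOMAX analogue it runs until storage is exhausted.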

But, speaking from recent experience where the Domino server crashed because
a pgmr was duplicating a library and caused the system to run out of space,
shouldn't OS/400 be enhanced to take action when aux stg usage approaches
the crash point?  For example, by not allowing a job to add a lot of records
to a file when aux stg is over 98%?
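The proposed guard could look roughly like this (a sketch only: OS/400 does not do this, and the function names and 98% threshold are this post's suggestion, not an API):

```python
import shutil


def aux_storage_pct(path="/"):
    """Percent of a storage pool in use, via shutil.disk_usage."""
    usage = shutil.disk_usage(path)
    return 100.0 * usage.used / usage.total


def guarded_write(write_fn, pct_used, threshold=98.0):
    """Refuse a large write when auxiliary storage is nearly full.

    Sketch of the proposed enhancement: below the threshold the write runs
    normally; at or above it, the job is stopped (here, RuntimeError) instead
    of being allowed to run the system out of aux stg and crash it.
    """
    if pct_used >= threshold:
        raise RuntimeError(
            "auxiliary storage over %s%% used, write refused" % threshold
        )
    return write_fn()
```

The point is that the operating system, not each individual application, enforces the floor under everyone.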

If this type of reasonable protection were in place, would you still not want
the maximum number of records set to *NOMAX?

Steve Richter


