Heba & Rob & Vernon & others

Thanks for your inspiring questions, which got me interested enough in what was
doable to actually look this up in an IBM manual.  Any time I assert one thing
& someone says something to the contrary, I like to check my understanding of
the facts, so as to drum appropriate corrections into my know-how.

Short answer for Heba ... depending upon the complexity of your business
rules & the size of your files ... you might want to have
1. a CL program that reimposes all your rules.
2. a CL program that removes all your rules.
During recovery, you would run the removal, then later the reinstatement
(a rough sketch follows below).  This would give you greater flexibility, but
you would need to keep the two programs current with any subsequent changes
to your overall business rules.  I saw nothing in the Advanced DB/400 manual
that limits imposing referential integrity across different kinds or sets of
400 files, so long as they are 100% externally defined and have fields in
common that can be equated to the same key meaning.
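
Purely as a sketch (not tested), the pair of programs could be little more
than lists of RMVPFCST & ADDPFCST commands.  The library, file & constraint
names below are invented for illustration; the real BPCS names would differ,
and the parameters should be verified against the CL reference.

    /* RMVCST - drop the business rules before the restore begins      */
    PGM
      RMVPFCST FILE(MYLIB/ORDERS)   CST(ORDCUSTR)   TYPE(*REFCST)
      RMVPFCST FILE(MYLIB/ITEMCOST) CST(COSTNOTNEG) TYPE(*CHKCST)
      /* ... one RMVPFCST per rule in the shop ...                     */
    ENDPGM

The reinstatement program would be the mirror image: the same list of rules,
each expressed as an ADDPFCST, run after the restore & journal apply are done.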

Short answer for Rob ... This stuff appears to be a useful tool that might
help me with some troubleshooting.

Short answer for Vernon ... The manual implies that both check constraints &
referential integrity are imposed against anything that accesses the DB,
including SQL.  But like I said at the outset, I am merely a student of this
topic.

Reference: www.redbooks.ibm.com
Redbook SG24-4249 (I have the dead-tree edition), the Advanced DB/400 manual,
which indicates our library also needs to include SC41-5701 & SC41-5612.

  a. We define our BPCS files with the DDS programming language.  Other
companies might define their files some other way, such as through SQL.
  b. We can impose additional business rules on those files using either
command line access or SQL.  These rules live at the relational data base
definition level, so it does not matter whether someone accesses the files
through a BPCS program, DFU, some PC tool, the command line, or SQL on the
fly.  If we have set a business rule that the cost will never be negative, or
whatever, any attempt to make it so will crash whatever program, job, or user
tries to do it.  (A sketch of such a rule follows just below.)
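
For example, a check constraint like the "cost never negative" rule might be
added from the command line roughly like this (sketch only; the library, file
& field names here are invented, not the real BPCS ones):

    ADDPFCST FILE(MYLIB/ITEMCOST) TYPE(*CHKCST) CST(COSTNOTNEG) +
             CHKCST('STDCOST >= 0')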

  c. Some of the types of business rules require journaling and/or
commitment control, which is a major disk space bite & a BPCS modification
topic.  There is also a performance issue.  Imposing business rules through
the relational data base is MUCH more efficient than imposing them through
programs, as we now supposedly do, but if the programs are not also modified,
adding business rules at the relational data base level means another big hit
on performance.  So this basically tells me that companies that want to impose
business rules on a BPCS reality should only seriously contemplate it where
bad data is the most serious of problems & cannot easily be resolved some
other way.

  d. When a business rule is imposed for the first time, the file that is
getting the rule applied gets tied up for a while, depending on the size of
the file (# records) & the complexity of the rule.  In the event of a recovery
from backup & journals, you can deactivate the business rules for a time
period, then reimpose them (see the sketch below).  This way Heba would avoid
having to muck with EDTCPCST (Edit Check Pending Constraints, for conflicts
that are temporary due to the sequence of recovery steps).
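
If I am reading the manual right, that deactivate / reactivate could be done
with CHGPFCST rather than fully removing & re-adding each rule, roughly like
this (names invented; please verify the parameters against the CL reference):

    CHGPFCST FILE(MYLIB/ORDERS) CST(ORDCUSTR) TYPE(*REFCST) STATE(*DISABLED)
    /* ... restore the files & apply the journals ...                  */
    CHGPFCST FILE(MYLIB/ORDERS) CST(ORDCUSTR) TYPE(*REFCST) STATE(*ENABLED)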

  e. The rules can be applied by command line or by SQL.  Ditto removal.
You can check on your rules via WRKPFCST or DSPFD TYPE(*CST).
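
For instance, to see every constraint currently sitting on a file (the
library & file name here are invented):

    WRKPFCST FILE(MYLIB/ORDERS)
    DSPFD    FILE(MYLIB/ORDERS) TYPE(*CST)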

  f. If they are applied by SQL & the file already has some data in there
that is in violation of the rules, then the attempt to apply the rule will
fail.  In other words, you end up with some error messages & the business rule
is not in effect.

  g. If they are applied by a command line command & the file already has
content in violation of the new rules, then the rule is in effect for any new
work on the file, while the old violations remain in place, which could make
it difficult to maintain those messed-up records.

  h. You can propose some business rule & get a list of all the records out
there that are in violation of it (a sketch follows below).  Now that is
something I am more interested in exploring, to see if the output is easier
to read than query/400.  That way we would not be forcing buggy BPCS programs
to crash, but we would be getting regular stories on what they messed up.
This is one possible route for dealing with our current problem with corrupted
BPCS file(s).  I had suspected ELA, but figured out Fri nite that the FMA file
was responsible.  Along the way I also found another BPCS program where the
supplied source has too many errors in it to compile, so obviously it can't
be the same version as what came out of the box.
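
As best I can tell from the manual (I have not tried this yet), the way to
get that list is to add or enable the rule, let the file go into check
pending, then display the offending records with DSPCPCST, something like
this (names invented, continuing the earlier example):

    DSPCPCST FILE(MYLIB/ITEMCOST) CST(COSTNOTNEG) OUTPUT(*PRINT)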

  i. Suppose you had a rule that said we do not want to delete a customer #
that is used somewhere else in the data base.  Depending on exactly how that
rule is defined, it would be possible for someone to delete that customer &,
wham, the business rules delete thousands of records that are using that
customer #.  This sort of connection is nice for people who want to delete all
uses of an item or something, but suppose, oops, we did not mean to do this.
Well, if journaling is going on, there is a record of everything that got
deleted, and there is a way to reverse that action.  (A sketch of the cascade
flavor of such a rule follows below.)
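
The cascade flavor of such a rule would be a referential constraint whose
delete rule is *CASCADE; a *RESTRICT delete rule would instead block the
delete of a customer # that is still in use.  Sketch only, with invented
library, file, field & constraint names:

    ADDPFCST FILE(MYLIB/ORDERS) TYPE(*REFCST) KEY(OCUST) CST(ORDCUSTR) +
             PRNFILE(MYLIB/CUSTMAST) PRNKEY(CCUST) +
             DLTRULE(*CASCADE) UPDRULE(*NOACTION)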

  j. DB2/400 does not prevent some klutz from creating conflicting or
mutually exclusive business rules.

MacWheel99@aol.com (Alister Wm Macintyre) (Al Mac)

