On 10/2/2015 8:29 AM, Nathan Andelin wrote:
> I have a very vague idea that moving business logic and development from
> COBOL into DB2 as much as possible is a good thing.
> The idea is to move data validation and business rules "out of
> applications" and "into the DB", meaning that the logic is run in (or
> behind) triggers. So "rules" and "business logic" cannot be bypassed. That
> ensures DB integrity, and makes the logic more readable and maintainable.
> I'm an RPG programmer, only do web stuff for my own internal
> consumption, but the database topic is intensely interesting to me.
Nathan's description is quite good, and I agree with it right up to the
point where I see 'more readable'. Why? See below.
-snip-
> Applications then just issue DB I/O statements and handle error messages
> generated in or behind triggers.
I routinely work with files where the header has a total of the detail
lines, and this total doesn't match the current, actual sum of the detail
lines. That is just one of many examples of the bad things that happen
when we rely on application programs to keep the database sane. The data
simply can't be trusted without good DB integrity. So I'm very, very much
in favour of moving integrity operations from the application layer into
the database layer!
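To put a number on it, a reconciliation query along these lines is how I
find the broken headers today. The table and column names (ORDHDR, ORDDTL,
ORDER_ID, HEADER_TOTAL, LINE_AMOUNT) are invented for the example:

  SELECT h.ORDER_ID, h.HEADER_TOTAL, SUM(d.LINE_AMOUNT) AS DETAIL_TOTAL
    FROM ORDHDR h
    JOIN ORDDTL d ON d.ORDER_ID = h.ORDER_ID
   GROUP BY h.ORDER_ID, h.HEADER_TOTAL
  HAVING h.HEADER_TOTAL <> SUM(d.LINE_AMOUNT)

The fact that I need to run that sort of query at all is the argument for
putting the integrity in the database.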
Having said that, I don't personally find it easier to read / maintain
an application where the business rules aren't easily visible while I'm
editing code. The source code says 'update salesdetail' but there are 4
triggers which fire, one of which updates the sales header, which fires
triggers, one of which updates the accounts receivable balance, which
fires triggers... What once was a simple case of 'change one column in
one row of one table' - all of which I can see in the source code I am
trying to maintain - has now become an interlocking cascade of updates
which touch many rows in many tables, none of which can be seen from my
simple update statement in my source editor.
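To make that cascade concrete, here is a rough sketch of the kind of
trigger chain I mean, written as SQL triggers with invented table and
column names (real shops might use external RPG triggers instead):

  -- When a detail line changes, roll the difference up to the header
  CREATE TRIGGER SALESDETAIL_ROLLUP
    AFTER UPDATE OF LINE_AMOUNT ON SALESDETAIL
    REFERENCING OLD AS O NEW AS N
    FOR EACH ROW
    UPDATE SALESHEADER
       SET ORDER_TOTAL = ORDER_TOTAL + (N.LINE_AMOUNT - O.LINE_AMOUNT)
     WHERE ORDER_ID = N.ORDER_ID;

  -- When the header total changes, roll the difference into A/R
  CREATE TRIGGER SALESHEADER_ROLLUP
    AFTER UPDATE OF ORDER_TOTAL ON SALESHEADER
    REFERENCING OLD AS O NEW AS N
    FOR EACH ROW
    UPDATE ARBALANCE
       SET BALANCE = BALANCE + (N.ORDER_TOTAL - O.ORDER_TOTAL)
     WHERE CUSTOMER_ID = N.CUSTOMER_ID;

Nothing there is wrong; it's just that none of it shows up when I'm
staring at the UPDATE statement in the application source.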
Going toward triggers, constraints and the like is very much necessary
in my opinion, but the tooling to make this integrated DB / application
program ecosystem easily understandable does not exist. Or, perhaps
better, is not yet known to me.
One example is the side effects of adding a trigger to a file which
currently does not have any. Imagine my example above, where I'm
updating the sales detail, which flows into the sales header and A/R
balance tables. I add a trigger to the A/R balance table, which tries
to update the general ledger, but the G/L doesn't yet have 100%
integrity, so that update falls over. That's all well and good, but now
what does the application do? I have a customer service person who
can't update a quantity sold because the accounting people didn't fix a
G/L number yet? Ouch.
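Sketching what that failure might look like, with invented names and an
invented validity rule on the G/L side: the new trigger on the A/R balance
table refuses to post when the account isn't ready, and the signalled
SQLSTATE bubbles all the way back to the customer service person's
original update.

  CREATE TRIGGER ARBALANCE_TO_GL
    AFTER UPDATE OF BALANCE ON ARBALANCE
    REFERENCING OLD AS O NEW AS N
    FOR EACH ROW
    BEGIN ATOMIC
      -- Hypothetical integrity check: the G/L account must be active
      IF NOT EXISTS (SELECT 1 FROM GLACCOUNT
                      WHERE GL_NUMBER = N.GL_NUMBER
                        AND STATUS = 'A') THEN
        SIGNAL SQLSTATE '75001'
          SET MESSAGE_TEXT = 'A/R posting rejected: G/L account not active';
      END IF;
      UPDATE GLLEDGER
         SET AMOUNT = AMOUNT + (N.BALANCE - O.BALANCE)
       WHERE GL_NUMBER = N.GL_NUMBER;
    END;

The application that issued the original update on the sales detail gets
the error, whether or not it knows anything about the G/L.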
How do I find out all of the places which cascade updates down into say
the A/R balance table? It's easy to find the triggers which fire upon
update of a table, but it's a much harder problem to find all the
triggers, stored procedures, and heaven help me, SQL UDFs which update a
given table.
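The first half of that is at least answerable from the catalog; on DB2 for
i, something along these lines lists the triggers defined over a table
(QSYS2.SYSTRIGGERS is the catalog view; the library and file names are
placeholders):

  SELECT TRIGGER_SCHEMA, TRIGGER_NAME, EVENT_MANIPULATION
    FROM QSYS2.SYSTRIGGERS
   WHERE EVENT_OBJECT_SCHEMA = 'MYLIB'
     AND EVENT_OBJECT_TABLE  = 'ARBALANCE'

That tells me what fires on ARBALANCE. It doesn't tell me which
procedures, functions or other triggers reach into ARBALANCE from
somewhere else.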
This analysis was always needed, but with traditional RLA (record level
access), all of the classic cross reference tools could find the programs
which update a given file. Now that some of that logic has been pushed
down a layer into the database, those cross reference tools can't find
those updates.
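About the closest thing I know of to a cross reference for the SQL layer
is scanning the routine text kept in the catalog, something like the query
below. ROUTINE_DEFINITION is where I believe the body of an SQL routine is
kept, and this is really just a string search; as far as I know it misses
external routines and dynamic SQL entirely:

  SELECT ROUTINE_SCHEMA, ROUTINE_NAME, ROUTINE_TYPE
    FROM QSYS2.SYSROUTINES
   WHERE ROUTINE_DEFINITION LIKE '%ARBALANCE%'

That gets me part of the way, but it is nowhere near as reliable as the
old cross reference over RLA programs.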
Let me repeat: I'm in favour of triggers. I think they're necessary for
keeping a sane database. But I don't find this design easier to
maintain than the clunky RLA design.