Hi Rob,

"Much of this farming could be done at startup time and not for the update
of each row."

Excellent idea! Caching said constraint data makes sense; just remember
that it can become stale if a database constraint change has been applied.
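
For example, a startup routine might pull the constraint definitions for
the tables it cares about once, along these lines. This is a minimal
sketch: the QSYS2 catalog and column names are the ones I remember, so
verify them on your release, and MYLIB/ORDERS are placeholders.

-- Load constraint metadata for one table at startup.
-- To detect a stale cache, you could also watch something like
-- QSYS2.SYSTABLES.LAST_ALTERED_TIMESTAMP, if your release has it.
SELECT c.CONSTRAINT_NAME,
       c.CONSTRAINT_TYPE,
       k.COLUMN_NAME
  FROM QSYS2.SYSCST c
  LEFT JOIN QSYS2.SYSCSTCOL k
         ON k.CONSTRAINT_SCHEMA = c.CONSTRAINT_SCHEMA
        AND k.CONSTRAINT_NAME   = c.CONSTRAINT_NAME
 WHERE c.TABLE_SCHEMA = 'MYLIB'
   AND c.TABLE_NAME   = 'ORDERS';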

Most likely, I'd do something along these lines to yield a generic
solution that works for any table:

- For a row insert, package up the row as JSON, and
- Pass said JSON to a generic SQL UDTF to parse out the table name,
schema name, and column name/value pairs. If you're on a system with
IBM's JSON_TABLE() function, this parse becomes simple :) (see the
sketch after this list)
- The UDTF then compares the parsed JSON information to the system
catalog constraint tables, generically for any table, and
- The UDTF returns one row for every constraint violation it finds to
the caller, and
- Zero rows returned by the UDTF means no constraint violations found
(desired result), and
- With a design like that, you should be able to perform the validation
generically WITHOUT creating separate code for each table.
- Another benefit is that a single UDTF request/call would check ALL
constraints with one request to the database (one request, though not one
I/O). Reducing I/Os improves performance, but so does reducing requests.
- For a row update, I'd do something very similar, except you'd have a
much smaller payload in the JSON document if you're updating only certain
columns. The same UDTF could probably handle updates as well.
- I see no reason why that UDTF couldn't also perform the INSERT or
UPDATE if no violations were found. This means a single request can
validate the data against the constraint definitions, and perform the
database update, generically for any table.
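
To make the parse-and-compare steps concrete, here is a rough sketch of
the two queries the UDTF body would build on. Assumptions up front: the
JSON payload shape and MYLIB/ORDERS are made up, JSON_TABLE() needs a
release that ships it, and actually evaluating each check clause against
the parsed values would take dynamic SQL inside the UDTF, which I've
omitted here.

-- Assumed payload shape:
--   {"schema":"MYLIB","table":"ORDERS",
--    "columns":[{"name":"ORDERNO","value":"123"},
--               {"name":"QTY","value":"-1"}]}
WITH parsed (col_name, col_value) AS (
  -- Step 1: shred the JSON document into column name/value pairs
  SELECT j.col_name, j.col_value
    FROM JSON_TABLE(
           :row_json,               -- host variable holding the payload
           'lax $.columns[*]'
           COLUMNS (col_name  VARCHAR(128)  PATH 'lax $.name',
                    col_value VARCHAR(1000) PATH 'lax $.value')
         ) AS j
)
-- Step 2: find the constraints on the target table that involve any of
-- the parsed columns; each row here is a candidate to evaluate
SELECT DISTINCT c.CONSTRAINT_NAME, c.CONSTRAINT_TYPE
  FROM parsed p
  JOIN QSYS2.SYSCSTCOL k
    ON k.COLUMN_NAME = p.col_name
  JOIN QSYS2.SYSCST c
    ON c.CONSTRAINT_SCHEMA = k.CONSTRAINT_SCHEMA
   AND c.CONSTRAINT_NAME   = k.CONSTRAINT_NAME
 WHERE c.TABLE_SCHEMA = 'MYLIB'     -- in the UDTF these would come from
   AND c.TABLE_NAME   = 'ORDERS';   -- '$.schema' and '$.table'

Once the evaluation step is filled in, the UDTF returning zero rows would
be the "no violations found" result described above.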

With that design in widespread use in a system where data is validated
before transactions are recorded, the IBM system catalog constraint
tables would be getting hammered continuously. As a result, you'd get
some automatic caching of that data simply because it is requested so
frequently.

I have a handful or two of SQL functions that allow constraint checking
generically in SQL for any table, any column, any constraint type. If I
were to hook that functionality up to transaction processing, I'd
definitely find a generic way to wire the two together. JSON should handle
that, XML would too, and I imagine a hundred other ways could be, or
probably already have been, devised.

Mike



------------------------------

message: 4
date: Wed, 17 May 2017 14:53:35 -0400
from: Rob Berendt <rob@xxxxxxxxx>
subject: Re: Some database discussions...

Interesting.
Much of this farming could be done at startup time and not for the update
of each row. The coding might be a lot harder to follow; then again,
maybe not. It might depend on how well you break your code down into
subprocedures:

if PassedEditColA();


Rob Berendt
--
IBM Certified System Administrator - IBM i 6.1
Group Dekko

