All,

Thanks for the thoughtful replies. Switching from DDS to DDL will require
learning new best practices, and I wanted to make sure I wasn't missing any
obvious DDL techniques.

Although with some creative hoop-jumping it is possible to access DDS
multi-member files with SQL, you cannot put RI constraints on them, and RI was
one of the things I was hoping to gain from this modernization effort. So the
multi-member concept will have to go away. A pity, since it provided a
built-in method of segregating data within the larger "file" container.
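
For anyone curious, the hoop-jumping goes through an SQL alias. A minimal
sketch, with made-up library, table, and member names:

    -- DB2 for i can reach a specific DDS member through an alias.
    -- MYLIB, ORDER_HDR, and BATCH01 are hypothetical names.
    CREATE ALIAS MYLIB.ORDER_HDR_B01 FOR MYLIB.ORDER_HDR (BATCH01);

    -- The alias then behaves like an ordinary table:
    SELECT * FROM MYLIB.ORDER_HDR_B01;

It works for reads and writes, but it doesn't get you RI, so the limitation
stands.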

I ran through some scenarios with QTEMP or temp libraries but decided that
duplicating all those extra objects (indexes, constraints, journals, *AUTLs,
etc.) was more than I wanted to maintain. Yes, there are utilities to automate
those steps, but that little voice kept whispering: "more objects" = "more
potential problems".

Appending an artificial "member" column to each table seems a bit of a hack,
but /shrug/ you have to work with the available tools. It seems to be the best
overall solution, even though it requires reworking the record format. I was
kinda hoping to limit changes to CL programs.
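
Something like this is what I have in mind (the column name and default are my
own placeholders, nothing is settled yet):

    -- Add an artificial "member" column; blank will mean production.
    ALTER TABLE ORDER_HEADER
      ADD COLUMN BATCH_ID CHAR(10) NOT NULL DEFAULT '';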

Terry, I agree that separating the "member" (or key or whatever) from the row's
"source reference" is an excellent tool for allowing users and auditors to
backtrack. Thanks.

Joe, the BPCS technique you described (forcing *blank into the new column)
will probably be the best method of "posting" data to production. The OE
tables, with all their historical data, are pretty large (31 million rows in
ORDER_HEADER), and I just need to become comfortable with the idea of that
much activity on a key field. Maybe it isn't a big deal, though. Or maybe I
need to resurrect that old work order where we archive historical data to
another set of tables...
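
If I go that route, posting would boil down to a single statement, sketched
here against the hypothetical BATCH_ID column from above:

    -- "Post" a user's sandbox rows by clearing the batch key.
    -- :batch is the user or workstation ID that owns the sandbox.
    UPDATE ORDER_HEADER
       SET BATCH_ID = ''
     WHERE BATCH_ID = :batch;

An index over BATCH_ID should keep the WHERE clause cheap even at 31 million
rows; it's the key-field churn itself I'm still mulling over.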

At the risk of taking this too far OT, I'd like to add that iNav's "database
navigator map" was an unexpected bonus of adding RI to the tables - it even
showed me a typo I'd made while keying in all those ALTER TABLE commands. I'm
assuming it reads SYSCOLUMNS and the related catalog tables. As nice as it is
to see things visually, the iNav tool is pretty simple. Could anyone point me
in the direction of articles or s/w that provide a more streamlined way to
manipulate and view the SYSxxx tables? I might add that this is a small shop,
a really small shop, and that proposals for 5-figure s/w will get me nothing
but a condescending smile from those who write the checks here ;)
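
In the meantime, the catalogs can be queried directly since they're just
views. A sketch, with a made-up schema name:

    -- List the RI constraints on a schema straight from the catalogs.
    SELECT CONSTRAINT_NAME, TABLE_NAME
      FROM QSYS2.SYSCST
     WHERE TABLE_SCHEMA = 'MYLIB'
       AND CONSTRAINT_TYPE = 'FOREIGN KEY';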

Many thanks, JK


On Mon 08/07/14 07:57, Joe Pluta sent:
JK wrote:
> We have an application that allows users to extract a subset of
> order-entry data (ORDER_HDR, ORDER_DTL, etc.) into uniquely-named
> members. Once cloned, users are able to manipulate their temporary copy
> of the data as necessary, running 'what-if' scenarios, etc. in their own
> private sandbox without affecting the production data. When finished
> with a batch, the user can save their work in the temp member, append it
> to production, jump to another member, etc. Since the members are in the
> same files they all share record formats, indexes, RPGs, queries, *AUTLs
> and journals. Nice and tidy.
As others have mentioned, the idea here is to create a "batch number"
field that corresponds to the member name. If you really want to keep
things clean from a program standpoint, you can then create a view over
that batch number; your program will only see those records.
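
A minimal sketch of such a view, assuming the batch column is called BATCH_ID
and blank means production:

    -- Production programs read the view and never see sandbox rows.
    CREATE VIEW ORDER_HEADER_PROD AS
      SELECT * FROM ORDER_HEADER
       WHERE BATCH_ID = '';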

> If this app is switched to SQL we'll have to abandon the multi-member
> concept and I'm having difficulty visualizing the 'SQL way'. Partitioned
> tables aren't appropriate in this case because the separation really
> isn't based on a range or hash. Creating temporary libraries brings up
> its own set of issues with *LIBL and security.

Yup. Multi-member files, at least in the DB2 sense, are not compatible
with the DDL approach.

> I suppose one could design a system with two sets of tables
> (ORDER_HDR/ORDER_DTL) and (ORDER_HDR_T/ORDER_DTL_T), prefixing each row
> with a key (a.k.a. "member") that each program would filter on. It seems
> like extra overhead, but maybe that's because I'm looking at things from
> a DDS perspective.


You have two basic approaches. One is to have all the records in a
single file, with production records having a batch number of zero (or
blanks). This is essentially what we did in BPCS. Originally we used
members for temporary data which was segregated by either user ID or
workstation ID. When the user posted the batch, we copied it from the
temporary member to production. We changed that to a batch field that
contained either workstation or user ID, and basically did the same
thing, except now moving temporary records to production was as simple
as clearing the member field.

The other option is two files, one for temporary data and one for
production data. You might be tempted to do this to "separate"
production and non-production data, but I'd advise against it.
Personally, I'd create a view over records that didn't have a batch
number and I'd use that in my production programs.

One thing I don't quite understand: you say you append to production.
Do you delete the old records? If not, then appending really is simple:
just clear the batch number field.

Joe
