JK wrote:
We have an application that allows users to extract a subset of order-entry
data (ORDER_HDR, ORDER_DTL, etc.) into uniquely-named members. Once cloned,
users are able to manipulate their temporary copy of the data as necessary,
running 'what-if' scenarios, etc. in their own private sandbox without
affecting the production data. When finished with a batch, the user can save
their work in the temp member, append it to production, jump to another member,
etc. Since the members are in the same files, they all share record formats,
indexes, RPGs, queries, *AUTLs and journals. Nice and tidy.
As others have mentioned, the idea here is to create a "batch number" field that corresponds to the member name. If you really want to keep things clean from a program standpoint, you can then create a view over that batch number; your program will see only the records for that batch.
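
A rough sketch of that idea, assuming a hypothetical BATCH_ID column takes the
place of the member name (the column and view names here are made up):

-- One view per batch; a blank BATCH_ID is reserved for production rows.
CREATE VIEW ORDER_HDR_B001 AS
    SELECT *
      FROM ORDER_HDR
     WHERE BATCH_ID = 'B001';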

If this app is switched to SQL we'll have to abandon the multi-member
concept and I'm having difficulty visualizing the 'SQL way'. Partitioned
tables aren't appropriate in this case because the separation really isn't
based on a range or hash. Creating temporary libraries brings up its own set
of issues with *LIBL and security.
Yup. Multi-member files, at least in the DB2 sense, are not compatible with the DDL approach.

I suppose one could design a system with two sets of tables
(ORDER_HDR/ORDER_DTL) and (ORDER_HDR_T/ORDER_DTL_T), prefixing each row with
a key (a.k.a. "member") that each program would filter on. It seems like
extra overhead, but maybe that's because I'm looking at things from a DDS
perspective.

You have two basic approaches. One is to have all the records in a single file, with production records having a batch number of zero (or blanks). This is essentially what we did in BPCS. Originally we used members for temporary data, segregated by either user ID or workstation ID; when the user posted the batch, we copied it from the temporary member to production. We later changed that to a batch field containing either the workstation or user ID and basically did the same thing, except that now moving temporary records to production was as simple as clearing the batch field.
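
In SQL terms, posting a batch might be nothing more than this (again assuming a
hypothetical BATCH_ID column, blank for production rows, with :userBatch as a
host variable in an embedded SQL program):

-- Move the user's sandbox rows to production by blanking the batch id.
UPDATE ORDER_HDR
   SET BATCH_ID = ''
 WHERE BATCH_ID = :userBatch;

UPDATE ORDER_DTL
   SET BATCH_ID = ''
 WHERE BATCH_ID = :userBatch;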

The other option is two files, one for temporary data and one for production data. You might be tempted to do this to "separate" production and non-production data, but I'd advise against it. Personally, I'd stay with the single file, create a view over the records that don't have a batch number, and use that view in my production programs.
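
For example, the production view might look like this (same hypothetical
BATCH_ID column):

-- Production programs read through this view and never see sandbox rows.
CREATE VIEW ORDER_HDR_PROD AS
    SELECT *
      FROM ORDER_HDR
     WHERE BATCH_ID = '';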

One thing I don't quite understand: you say you append to production. Do you delete the old records? If not, then appending really is simple: just clear the batch number field.

Joe
