I have been totally buried in work this week ... our General Ledger totals for various applications do not agree with the totals of the reports on those applications, and when I get a breather I want to do something about the inefficient way we are tackling this.

In my current reality, we post customers' data as rapidly as we can ... we cannot afford to wait a day, even an hour, before we act on the info, because lead times grow ever shorter.

What you are describing sounds similar to a transaction batch control system I wrote back in the 1970s. We would get little "batches" of customer charges, payments, etc., with control totals. People entered a batch, it went through various error checking and control-total approval, then someone had a list of batches in a category and checked off which were OK to merge; ultimately that data was in the month file, and the little batches were gone. If corrections were needed, people would access a batch by its name, which was written on the control media as the batches were created.

One way I accomplished this was to have a program read through all the batches coded as OK to merge, and basically write out a string of very simple CL-like command "programs", each with a standard set of execution lines saying what to do with that batch. We did it that way because, after seeing the aggregate totals for a huge pile of batches, a decision could still be made to back out of the whole thing ... there might be a batch in there that should not be, something we missed.
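In today's terms, one of those generated snippets might have looked something like the CL sketch below (every library and file name here is made up for illustration):

    PGM
      /* Append this approved batch to the month file */
      CPYF  FROMFILE(ARLIB/BATCH0142) TOFILE(ARLIB/ARMONTH) +
              MBROPT(*ADD) FMTOPT(*NOCHK)
      /* Once merged, the little batch goes away */
      DLTF  FILE(ARLIB/BATCH0142)
    ENDPGM

Generating one of these per approved batch, and only submitting them after someone eyeballs the aggregate totals, keeps the back-out option open right up to the last moment.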

Originally I had set this up for a particular kind of customer, but the company liked the system so much that eventually it was used for all kinds of customers (consumer, corporate, retail, wholesale), as well as purchasing, general ledger, and other applications.

If I were doing this today, I would use members of an externally described file.
Each member would hold one of these data sets; the overall file has the template for that type of content, and the members in aggregate are all the data that needs to be processed in the next step. I have run into issues with how many members can be in the same logical access path, and some programming languages, like SQL, are not as flexible with respect to all the ways the contents of a file can be organized.

The external file definition gives the file layout, and each and every member must adhere to that layout. Members can have varying numbers of records.
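A rough sketch of the setup, assuming a staging file called STAGE in a library ARLIB (both names, and the member names, are placeholders):

    /* One externally described physical file (DDS gives the layout), */
    /* with room for any number of members                            */
    CRTPF   FILE(ARLIB/STAGE) SRCFILE(ARLIB/QDDSSRC) SRCMBR(STAGE) +
              MAXMBRS(*NOMAX)

    /* Each batch or data set gets its own member as it is created */
    ADDPFM  FILE(ARLIB/STAGE) MBR(BATCH0142) TEXT('Charges keyed 07/14')

    /* Programs (or CPYF) address one member at a time; every member */
    /* shares the same layout because the layout belongs to the file */
    CPYF    FROMFILE(ARLIB/STAGE) FROMMBR(BATCH0142) +
              TOFILE(ARLIB/ARMONTH) TOMBR(*FIRST) MBROPT(*ADD)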

A file can have more than one record format (a multi-format logical file, for example), but some programming languages cannot accept that reality.

In fact, this is how our ERP (BPCS) handles rapid acceptance of new customer data. There is a master collection of official data. Each person working on additions to the master collection is in fact working in a separate member for whatever they are doing ... you can have any number of people, each working on their own separate inputs ... then they get to the point where they are done, and there is a command to merge what they did with the master collection, or a command to abandon the effort, or they can go back in and tinker some more.
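Stripped of the BPCS front end, the merge-or-abandon choice comes down to something like this (file and member names invented for the example):

    /* Merge: append the user's work member to the master collection */
    CPYF    FROMFILE(BPCSLIB/WRKFILE) FROMMBR(USER01) +
              TOFILE(BPCSLIB/MASTER) TOMBR(*FIRST) MBROPT(*ADD)
    RMVM    FILE(BPCSLIB/WRKFILE) MBR(USER01)

    /* Abandon: remove the work member and the master is untouched */
    RMVM    FILE(BPCSLIB/WRKFILE) MBR(USER01)

    /* Tinker some more: do nothing; the member just stays put */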

There can be a problem associated with identifying old members you no longer need, and getting rid of them.
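One way to get a handle on that, again with assumed names:

    /* List every member of the staging file into a work file */
    DSPFD   FILE(ARLIB/STAGE) TYPE(*MBRLIST) OUTPUT(*OUTFILE) +
              OUTFILE(QTEMP/MBRLIST)

    /* A small CL program or a query over QTEMP/MBRLIST can pick out  */
    /* members by name or last-change date; each stale one goes with: */
    RMVM    FILE(ARLIB/STAGE) MBR(BATCH0099)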

The 400 can have files with varying record lengths, but I do not have a lot of experience with that reality.

The software that accesses any given file tells the 400 the conditions of access ... such as READ but do not UPDATE. This way you can have a large number of programs simultaneously accessing the data, and you only have to consider lock recovery when more than one program needs to concurrently update the same parts of a file ... the lockout is usually for a particular record, so different programs can be updating different records at the same time. In the software I have worked with, I often needed the lockout to be on a field basis, because different programs need to update different fields, but RPG tends to read an entire record, update a handful of fields, then replace the entire record.
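The record lock itself comes from the database when a program reads a record for update; if you want to state the conditions of access explicitly at the object level, that can be done with an allocation, roughly like this (file name made up):

    /* Ask for shared read so readers never block one another */
    ALCOBJ  OBJ((ARLIB/ARMONTH *FILE *SHRRD)) WAIT(30)
    /* ... read-only processing here ... */
    DLCOBJ  OBJ((ARLIB/ARMONTH *FILE *SHRRD))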

99% of the SQL I have written has been to read records, not update them, so this has not yet become an issue for me.
List,

Here's another unusual process we have on our VSE mainframe that will have
to be duplicated on the iSeries. I am hopeful that the iSeries can handle
this in a much easier manner than does VSE, but I am too new to the
environment to know.

Here's the process on VSE....

During the day, depending on which customers' data has arrived, it is
processed and their data "staged" to a sequential file in a specific VSAM
catalog. Throughout the day, more data files are generated. When it is time
to begin the nightly run, we have to "gather" these files together (merge to
a single dataset) and then purge the original individual files. (the catalog
is backed up before the process begins)

The way this process is currently handled is that a COBOL program reads the VSAM
catalog, looking for datasets whose names match specific templates. This
program generates JCL for a second job where another COBOL program will
process each of the specific files passed in the list that the first program
prepares. There is only one input file defined in this program. Before each
input file open, an assembler routine is called which overrides the "ASSIGN
TO name" of the file hardcoded in the ASSIGN clause of the program with one
that is passed to the program in the list. This way it opens and processes
each file dynamically according to the list for that day.
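
For what it may be worth, a rough iSeries equivalent of that catalog-scan-and-drive pattern could be written in CL along these lines (library and file names are placeholders; &ODOBNM is the object-name field in the DSPOBJD outfile format, and the GATHER file is assumed to already exist with the same layout as the staged files):

    PGM
      DCLF     FILE(QADSPOBJ)  /* model format for the DSPOBJD outfile */

      /* List every *FILE in the staging library matching the template */
      DSPOBJD  OBJ(STAGELIB/CUST*) OBJTYPE(*FILE) +
                 OUTPUT(*OUTFILE) OUTFILE(QTEMP/FILELIST)
      OVRDBF   FILE(QADSPOBJ) TOFILE(QTEMP/FILELIST)

      /* Read the list; append each staged file, then delete it */
READ: RCVF
      MONMSG   MSGID(CPF0864) EXEC(GOTO CMDLBL(DONE))  /* end of list */
      /* (an empty staged file would need its own MONMSG on the CPYF) */
      CPYF     FROMFILE(STAGELIB/&ODOBNM) TOFILE(WORKLIB/GATHER) +
                 MBROPT(*ADD) FMTOPT(*NOCHK)
      DLTF     FILE(STAGELIB/&ODOBNM)
      GOTO     CMDLBL(READ)

DONE: DLTOVR   FILE(QADSPOBJ)
      ENDPGM

If the matching strings are not simple prefixes, DSPOBJD can dump everything in the library and the loop can do its own pattern test before the CPYF.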

If the iSeries provides a way to concatenate files of a similar format with
names that match certain template(s) to a single file, this will be a piece
of cake. The constraints here are:

1) there will be one-to-many of these files for any given run - but all in
the same library
2) the files will be identified by one of several "strings" in their
physical names
3) it is possible there could be other files in the same library that are
irrelevant to this process (so #2 is very important)

How best to accomplish this on the iSeries?
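
One possibility, if the daytime staging step could write into members of a single physical file instead of separate files (a hedged sketch; every name below is made up): each arrival becomes a new member, and a single CPYF gathers them all.

    /* As each customer's data arrives during the day */
    ADDPFM  FILE(STAGELIB/STAGE) MBR(CUST0714A)
    /* ...the staging program then writes its records into that member... */

    /* At nightly-run time, one command concatenates every member */
    CPYF    FROMFILE(STAGELIB/STAGE) FROMMBR(*ALL) +
              TOFILE(WORKLIB/GATHER) TOMBR(*FIRST) MBROPT(*ADD)

    /* Then the staging members can be removed for the next day */
    RMVM    FILE(STAGELIB/STAGE) MBR(CUST0714A)

If the arrivals really must stay as separate files, the DSPOBJD-to-outfile loop sketched above covers the name-template matching in constraint #2.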

--
Regards,

Michael Rosinger
Systems Programmer / DBA
Computer Credit, Inc.
640 West Fourth Street
Winston-Salem, NC  27101
336-761-1524
m rosinger at cciws dot com

