Our volume sounds higher than yours, but there is a lot of stuff you are doing that does not sound relevant to us. Our daily volume of transactions is several thousand interactive, and several thousand more generated via batch jobs. For example, we have several thousand inventory transactions that are created either directly on-line or from other transactions ... labor transactions generate inventory backflush transactions ... engineering changes generate cost change history transactions ... and many of these transactions in turn generate other kinds of transactions thanks to someone taking a simple menu option. Example: inventory transactions to General Ledger Post ... no human action needed except pushing a button, then reviewing an exception report for things like invalid combinations of transactions, which are easily fixed.

Perhaps I have a flawed understanding of what you are trying to accomplish, & perhaps I have a flawed memory of how various things worked or did not work. 15 years ago we converted from MAPICS to BPCS. My memory of MAPICS is that there was "batch input" & interactive input. Interactive input was to update master files in interactive mode. What MAPICS called "batch input" was ... you entered transactions interactively into a "transaction batch file", & then when you were happy with the batch totals, you told the thing to go update the master files & it did its thing in batch mode off the JOBQ.

Something goes haywire & you may have to go back to your last backup or checkpoint or whatever, and this happened far too often, because our reality was a bit fragile ... PCs going down, communication lines going down, etc. in the middle of a MAPICS update. When recovering your reality to a checkpoint, such as restoring the last backup, you could keep or recover the stuff in the "transaction batch files" that had been posted before the crisis, then repost it against the recovered master files. The challenge was whether the sequence of the different kinds of batches made a difference, and reposting interactive transactions with the help of poor quality audit trails.

Now BPCS has something similar for some applications. Let's suppose someone is doing ORD500 customer orders interactively & kaboom, their PC needs to be rebooted; or suppose someone is posting cash payments & kaboom, someone kicks the power plug on the Twinax, or whatever the problem is. The transactional data is not really going directly to the master files; rather, it is going to a work area member that is named after a combination of the user's work station address & the particular application they are doing. This work area literally holds the last stuff the user had keyed in before the kaboom disruption. Normally, by the time the interactive session ends, all that stuff in the work area has gotten to the master files, but after a kaboom we do not have master files with that data ... we just have master files telling the story from before the user started hours of work.

Now in theory, we could copy the CL that runs this, tell it to continue using the same work area, & get the user back to where they left off, & I have done this on occasion. I remember one case where a user session died, and the user came to cry on my shoulder about the many hours of work invested, so I changed the work station data area name to that of another work station, and recompiled a modified version of the CL, so the same user at a different work station could get into the transactional batch exactly where they left off when their hardware died. But BPCS is not really designed for this recovery technique.
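To give a feel for the kind of thing I mean, here is a rough from-memory sketch of what such a recovery CL looks like. This is NOT the actual BPCS source ... the work file name ORDWRK, the 'OW'+station member naming, and station DSP21 are all invented for illustration, and the real ORD500 call takes parameters I am not showing.

   /* Rough sketch only, not the real BPCS CL.                          */
   /* ORDWRK, the 'OW'+station member naming, and DSP21 are invented.   */
   PGM
     DCL        VAR(&WSID)   TYPE(*CHAR) LEN(10)
     DCL        VAR(&WRKMBR) TYPE(*CHAR) LEN(10)

     /* The standard CL picks up the current station & clears its work  */
     /* member before calling the application, roughly:                 */
     /*    RTVJOBA  JOB(&WSID)                                          */
     /*    CLRPFM   FILE(ORDWRK) MBR(&WRKMBR)                           */

     /* The recovery copy hard-codes the station whose session died     */
     /* & skips the clear step, so the keyed data is still sitting there */
     CHGVAR     VAR(&WSID) VALUE('DSP21')
     CHGVAR     VAR(&WRKMBR) VALUE('OW' *CAT &WSID)
     OVRDBF     FILE(ORDWRK) MBR(&WRKMBR)
     CALL       PGM(ORD500)   /* resume where the dead session left off */
     DLTOVR     FILE(ORDWRK)
   ENDPGM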
I have just stumbled over this capability somewhat by accident. End users typically try to bull on through: the job bombs for whatever reason, they reboot their PC or whatever is needed, & then take the same menu option as where they were. The first step of the standard CL is to clear the data areas, so this loses whatever work they had keyed before the disruption, but if it works for them, they get their job done. Sometimes there are some flags associated with steps, & BPCS does provide some reset functionality, so we can force reality to a particular clean checkpoint. I defy you to find where in the BPCS documentation it explains what I am doing here.

We do occasionally have situations where my solution is to tweak some files using DFU or SQL or some other action outside of the ERP menu options. Example: today we did 2 shipments of customer items in which the price had not been updated in the customer order, because it was in the middle of requoting ... the customer had been notified of the new price by the folks doing the requoting, but the data used by shipments had not been updated in the BPCS files. I used DFU to insert the price in the shipment file so that the invoices, receivables, GL, etc. would come out right, but it was too late in the process to fix the sales history & inventory history, which lots of people use for profit analysis.

Example: there are known bugs in BPCS with mathematically improbable results, and I am backed up with so many modifications to do that I will not be fixing many known bugs any time soon - I have given up on SSA GT fixing them. I periodically run a report that lists items with crazy values, such as negative costs, then I go into the master files & fix them.

I feel very uncomfortable with the notion of fixing data outside of the ERP audit trail, so I have added my own audit trail of stuff like this that I do, but it is a bit of a kludge, because my interest was in getting the job done without a lot of programming, since it is not something that happens every day. If we were designing something that stored transactions since the last backup, then re-applied them after a restore, but did not re-apply the corrections that people made when they did not know we were going to do the restore, then the corrections would need to be coded somehow to distinguish them from the original transactions that went through the standard ERP options. You would almost have to have a field in each transaction that says which program wrote this transaction, and a way of coding your programs so you know which ones were legitimate & which were correction kludges (rough sketch after my signature).

MacWheel99@aol.com (Alister Wm Macintyre) (Al Mac)
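P.S. Purely to illustrate that last idea - nothing like this exists in our BPCS - here is a hypothetical SQL sketch. The table & column names (TXNHIST, TXPGM, etc.) and the program codes are invented for the example.

   -- Hypothetical transaction history table, not a real BPCS file.
   -- TXPGM records which program wrote the row: a standard ERP option
   -- such as ORD500, or a correction kludge such as DFU or an SQL fix.
   CREATE TABLE TXNHIST (
     TXID    INTEGER   NOT NULL,
     TXDATE  DATE      NOT NULL,
     TXPGM   CHAR(10)  NOT NULL,
     TXDATA  CHAR(100)
   )

   -- After restoring the last backup, re-apply only the transactions
   -- that came through standard ERP options, skipping the corrections
   -- keyed outside the menus before anyone knew a restore was coming.
   SELECT *
     FROM TXNHIST
    WHERE TXDATE >= '2001-06-15'
      AND TXPGM NOT IN ('DFU', 'SQLFIX')
    ORDER BY TXID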