This is potentially a HUGE task, because of what I consider to be a BPCS design flaw. Basically the design assumes that disk space is infinite, that we will want to keep eagerly adding records, and that there is little or no interest in setting a ceiling on historical records. There may be some problems we have at 405CD that have been fixed in later versions.
Plus, you do not want to act solely on the advice of someone on a different version of BPCS, with a different mix of applications, company needs, etc. There are BPCS programs that heavily use sequence #s of detail records. Let's suppose you have records 1-1,000 in some file and you only need to keep 10 records, so you delete the other 990 ... bad move, because some BPCS programs read each record sequentially by looking for the next # that was counted, not a simple READ next, and if one gets to a break in the #s, it stops, and never looks at the records after the break ... so if there are any records you need to keep, keep ALL of them ... you might say "Well, can't we resequence what we need from 1-10?" ... you could try that and BPCS would also crash, because many other files copy the sequence #s of the files they connect to. You'd have to map out all the places where that is used.
Check out the reorganizations that come with BPCS, which we have talked about before in the archives. There are some you HAVE to run at a certain point in end-fiscal if you are going to run them at all, because otherwise you get major messes, plus you need to have some idea about run time. Our end-month work takes approx 15 hours for me if not doing a special reorg, and if nothing serious goes wrong. I need to schedule some sleep time inside there.
There is a purge of inventory records that are at zero across allocations, ingredient requirements, and on-hand. I run this right after INV900 in the month or two before one that ends in a physical inventory, so as to reduce the number of inventory tags that will need to be printed, and the size of the associated lists. We do NOT run this on a month end that has a physical, because it can mess that up. I would like to run it on other month ends, but it is so time-consuming to run.
You gotta make sure no one is using BPCS when you run certain reorgs. Some of them bypass files that someone happens to be using. Some of them get stuck when there is a conflict for file access. I run SYS120 each weekend. This gets rid of soft-deleted records in a large number of volatile files. You are correct that many files are untouched by this, but almost 100 are taken care of. You found FMA ... well, in my experience GJD is a much bigger sinner when it comes to deleted records.
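If you roll your own reorg for a file SYS120 does not cover, a minimal CL sketch like this avoids both failure modes (silently skipping a busy file, or hanging on it). BPCSF/FMA is a placeholder, and the 60-second wait is an arbitrary choice, so adjust to taste:

   PGM
     ALCOBJ     OBJ((BPCSF/FMA *FILE *EXCL)) WAIT(60)
     MONMSG     MSGID(CPF1002) EXEC(DO)  /* lock not obtained */
       SNDPGMMSG  MSG('FMA in use - reorg skipped this run')
       RETURN
     ENDDO
     RGZPFM     FILE(BPCSF/FMA)  /* physically removes soft-deleted records */
     DLCOBJ     OBJ((BPCSF/FMA *FILE *EXCL))
   ENDPGM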
We assign work station names based on location within the company (an abbreviation of the dept id used as part of the name), and PCs use the initials of the person who normally operates them, with letters or numbers added to make the ids unique. This means that over time, with personnel changes, we get new work station ids and have lots of older ones no longer in use. BPCS tracks a lot of work using objects named from most of the letters of the work station id (the last few chopped off) plus some letters designating the nature of the application. These never get deleted.
If you do a display of BPCS objects to an *OUTFILE, then track which files have more than a handful of members, you may find some files have hundreds of members named after long-gone work stations. This has been the subject of much discussion in the archives. It is not just members where this happens. BPCSDOC SSALOG00 will tell you where to look.
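Something along these lines gets you the raw data (untested sketch; BPCSF and the QGPL outfile names are placeholders):

   DSPOBJD    OBJ(BPCSF/*ALL) OBJTYPE(*FILE) +
                OUTPUT(*OUTFILE) OUTFILE(QGPL/OBJLIST)
   DSPFD      FILE(BPCSF/*ALL) TYPE(*MBRLIST) +
                OUTPUT(*OUTFILE) OUTFILE(QGPL/MBRLIST)

The *MBRLIST outfile gets one row per member, so a Query/400 or SQL count grouped by file name shows which files are carrying hundreds of work-station-named members.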
There are a number of ways you can count deleted records by file, multiply by record length, and get a total of how much disk space is being wasted.
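One way, sketched from memory: dump the DSPFD member detail to an outfile (model file QAFDMBR), which carries a deleted-record count per member, then do the arithmetic in Query/400 or SQL. Verify the field names against QAFDMBR on your release; this is from my notes, not gospel:

   DSPFD      FILE(BPCSF/*ALL) TYPE(*MBR) FILEATR(*PF) +
                OUTPUT(*OUTFILE) OUTFILE(QGPL/DLTCOUNT)
   /* deleted records x record length, summed over the rows  */
   /* of QGPL/DLTCOUNT, approximates what RGZPFM would return */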
If you have not run the BPCS-supplied reorganizations in some time, in all probability you will need to review whether any files ought to be downsized after the reorg, because what can happen is that each file keeps a bunch of growth space reserved to it, and that disk space is not returned to the common pool.
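The knob to review is the file's SIZE attribute. Whether shrinking it actually gives storage back depends on how the file was created (its ALLOCATE setting, among other things), so treat this as a starting point to experiment with; the numbers are purely illustrative:

   CHGPF      FILE(BPCSF/FMA) SIZE(100000 10000 3)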
BPCS has a purge of labor history, which I analysed and concluded had some bugs, so I wrote my own.
WRKOBJ ORD500*

How many copies of this do you have in your library list, and how large are they?
Ok, F10, F9, F4 and change that to *ALL libraries, then look again. As we get new BPCS releases, BMRs, etc., the normal practice is to put the new version in the library list in front of the older version. This means that over time we end up with many copies of many programs, and some of the programs are huge.
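The batch equivalent, if you want a list you can keep (untested sketch; QGPL/ORDCOPIES is a placeholder outfile):

   DSPOBJD    OBJ(*ALL/ORD500*) OBJTYPE(*PGM) +
                OUTPUT(*OUTFILE) OUTFILE(QGPL/ORDCOPIES)

The outfile (model QADSPOBJ) includes library, object, and size fields, so sorting by object name shows how many generations of each program you are carrying.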
I do not have the commands at my fingertips, but I am sure someone on this list will remind us what they are: in OS/400 there is an IBM facility to gather statistics ... you can get a report showing how much disk space is eaten by different libraries, and by objects within libraries. It has been a while since I looked at this.
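If memory serves, the pair is RTVDSKINF and PRTDSKINF; I believe these have been in OS/400 for a while, but check your release. The collection step can run long, so schedule it off-hours:

   RTVDSKINF                    /* collect disk usage statistics */
   PRTDSKINF  RPTTYPE(*LIB)     /* report space by library       */
   PRTDSKINF  RPTTYPE(*OBJ)     /* report space by object        */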
BPCS software eats more disk space than BPCS files, because of how upgrades are handled.
Within just the files, there are some files with lots of logicals where the logicals collectively eat more disk space than the physicals.
WRKOUTQ ... how far back do reports go? Are your people aware that backups do not get what is in the spool? Check the Midrange-L archives for ideas on managing ancient reports and audit trails.
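For taking stock, something like this. CLROUTQ is the blunt instrument: it deletes every report on the queue, so only point it at queues everyone agrees are disposable (OLDRPTS here is a placeholder):

   WRKOUTQ    OUTQ(*ALL)           /* survey queue sizes              */
   WRKSPLF    SELECT(*ALL)         /* review spool files by user      */
   CLROUTQ    OUTQ(QGPL/OLDRPTS)   /* wipe an agreed-disposable queue */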
Ask your users ...
* Who needs shipping detail (ESH and ESL files) from how many years ago?
* Who needs access to notes on orders that no longer exist?
* After we delete an item, do we really want to save the history of transactions on that deleted item, or do we want it to get cleared so that we can then reassign that item # to some other meaning?
Some consensus can then be reached on how far into the past you really need to preserve each kind of record.
I tackled it from a few perspectives when deciding which of the humongous # of files to go after:
* There are files that are huge in terms of disk space or # of records. For example, for every one good customer order record we have perhaps 20 soft-deleted historical records, and the ratio is growing, so this has a negative impact on the productivity of access to the good records. I got a consensus from the folks managing customer orders on how far back they needed this stuff, and wrote a program ... it studies the lines of a certain kind of order ... Is 100% of the lines of this order completely shipped and fulfilled, and was the last activity older than the consensus cut-off? Ok then, let's delete that order.
* Do we have some kind of risk of running out of #s? For example, we use a certain range of high vendor #s to handle the one-time payments, then after year end we delete those that the accounting records no longer need. But that is a soft delete, so there comes a time when we are at risk of running out of those numbers.
I second the notion of considering some archiving product such as Milt's Locksmith.
* How long do you keep records in the ITH inventory history file? A few years? Now, we need that data for a variety of good reasons, but most operations do not need all of it, which has a performance implication.
* When people use INV300 F21, how far back do they typically need to look? A month or so? (How much faster do you think this would be if the extra history wasn't there?)
* How long does INV900 take during end fiscal? (The run time is related to the number of records it has to wade thru.)
With Milt's Locksmith archiving, you can keep what most people need to access most of the time in the regular library list, but when someone does need the older records, use a logical to join what is there now with the older stuff. Thus normal operations, and end-fiscal, go much faster. And when it comes time to purge the older stuff from Milt's archive system, it does not share the BPCS design flaw that I mentioned at the start of this e-mail.
I have been asked to look into ways of removing 'soft-deleted' records from our BPCS 6.1 database. For example, our FMA file has over 2.3 million records, of which 2.2 million are 'soft-deleted'. Before I go down the programming route I would appreciate any advice the forum can give me regarding files that can be successfully cleaned up and which files should not be touched. I am aware of the BPCS-supplied cleanup routines, but am of the understanding that they target specific files or data-sets and do not address soft-deletes across all of the BPCS tables?
Regards, Andrew