Be careful.
This is not for the faint of heart. I will share my experiences - ask me
questions, but temper them against your BPCS configuration, as this will vary
from shop to shop. Use the archives - there are some excellent contributors,
especially Al M. and Genyphr.
If you have the time and the patience - do it. If you have the money - look at
Locksmith (from Unbeaten Path).
The rewards are awesome in terms of performance. Don't forget to re-org your
files after you run your deletes.
Order entry took us 3 years to perfect (OK, but we only ran it once a year).
There are over 50 files that tie together. Miss one and you will be surprised
at the result in day-to-day operations. You'll spend a couple of hours before
the DUH factor hits (because it will be 8 months after you ran the delete,
depending on order volume).
AR and GL were pretty straightforward for us. We deleted RAR records based
upon a payment date over 2 years old. GHH, GLA, and GXR were targeted based
upon the year, and I chunk out a year at a time, once a year (GNH too, if you
use notes?).
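To give a flavor of the SQL involved - and this is a sketch only, since BPCS
field names vary by version - the RAR delete was something like the following.
RPAYDT and LYEAR are hypothetical field names; verify the real ones with
DSPFFD before you run anything, and take a backup first:

    -- Hard-delete A/R records with a payment date over 2 years old.
    -- RPAYDT is a placeholder column (numeric YYYYMMDD date assumed).
    DELETE FROM RAR
      WHERE RPAYDT < 20031004

    -- GHH/GLA/GXR: same idea, chunked one year at a time.
    -- LYEAR is likewise a placeholder for each file's year field.
    DELETE FROM GLA
      WHERE LYEAR = 2002

Follow each delete with a re-org (RGZPFM) to actually reclaim the space.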
ZPD - this was a freebie - it is used for document regeneration. We improved
our billing run 20% by keeping only the last 30 days.
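The ZPD delete itself was a one-liner along these lines (PDCDTE is a guess at
the create-date field - check your own ZPD layout with DSPFFD):

    -- Keep only the last 30 days of document-regeneration records.
    -- PDCDTE is a hypothetical create-date column (numeric YYYYMMDD assumed).
    DELETE FROM ZPD
      WHERE PDCDTE < 20050904

Then RGZPFM FILE(ZPD) on a quiet system to reclaim the deleted-record space.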
From the archives - keyword "purge":
This is a complex topic. Look at the archives and follow the SQL tuning
advice; there is excellent advice from key contributors, especially Al M. and
Genyphr.
Here are some of the easier things you can try that worked for me. Do it on a
quiet system and make sure you have a backup of the files:
1. I would purge ZPD independent of anything else. The only thing WE used it
for after an order completes is reprints. I got some improvement by writing an
SQL statement that hard-deleted closed records over 30 days old (the ZPD
sketch above shows the idea). Then I re-org'd ZPD.
2. Are you re-orging files? I got a huge gain when I re-orged the IPP file
weekly. ELA is another one to look at. Do a DSPFD over ELA and look at the
number of deleted records. If over 15%, do a RGZPFM on a quiet system (see the
CL sequence after this list).
3. Do a PRTSQLINF over ORD550B1 (full command after this list) and look at the
spool file for the estimated query run time. Anything over 1 second deserves
research, e.g.:
SQL4020 Estimated query run time is 2 seconds.
4. If you start deleting any other files: DON'T, or BE CAREFUL. Widows and
orphans are a bad thing in BPCS and will cause problems. I wrote a purge; it
took me a long time to perfect (months of work). But we have over 800,000
orders and 4,000,000 lines; we had to purge.
5. Find duplicate keys; they will cause an index build (a sample query follows
this list).
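For item 2, the check-and-reorg sequence in CL looks like this (BPCSF is a
placeholder for your BPCS files library):

    DSPFD FILE(BPCSF/ELA) TYPE(*MBR)  /* compare deleted vs. current records */
    RGZPFM FILE(BPCSF/ELA)            /* only if deleted > 15%; quiet system */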
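For item 3, the full command is (the library is again a placeholder):

    PRTSQLINF OBJ(BPCSO/ORD550B1) OBJTYPE(*PGM)

Then display the spool file and scan for the SQL402x messages like the SQL4020
example above.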
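For item 5, a generic duplicate-key query. ECH keyed on HORD (order header by
order number) is only an illustration; substitute the file and key columns you
actually suspect:

    -- List key values that occur more than once.
    SELECT HORD, COUNT(*) AS DUPS
      FROM ECH
      GROUP BY HORD
      HAVING COUNT(*) > 1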
So - again - here is my disclaimer. The above items did wonders for our
response time. They are not meant to be an exhaustive list but are a start.
If you really want to short-circuit project time and have a couple of $$$,
look to a third-party product like Locksmith from Unbeaten Path... You owe me,
Milt.
-----Original Message-----
From: bpcs-l-bounces+ssegerstrom=intermatic.com@xxxxxxxxxxxx
[mailto:bpcs-l-bounces+ssegerstrom=intermatic.com@xxxxxxxxxxxx] On Behalf
Of Andrew
Sent: Tuesday, October 04, 2005 2:07 PM
To: bpcs-l@xxxxxxxxxxxx
Subject: [BPCS-L] Removing Soft-Deleted records from BPCS Files
I have been asked to look into ways of removing 'soft-deleted' records from
our BPCS 6.1 database. For example, our FMA file has over 2.3 million records,
of which 2.2 million are 'soft-deleted'.
Before I go down the programming route I would appreciate any advice the
forum can give me regarding files that can be successfully cleaned up and
what files should not be touched.
I am aware of the BPCS-supplied cleanup routines but am of the understanding
that they target specific files or data sets and do not address soft-deleted
records across all of the BPCS tables?
Regards,
Andrew