Phil, I never did this myself, but to reduce the backup time to a minimum, I'd
suggest journaling plus SAVCHGOBJ with OBJJRN(*NO). The problem today is that if
you add a single record to the F0911, you're backing up a tremendous amount of
disk (as you know). I understand the hesitance about SAVCHGOBJ, as there IS a
lot of CPU overhead. However, depending on the relative speed of the CPU vs. the
tape drive, this can pay off. There is a lot of overhead anyway in getting all
the locks...

Also, if you want to minimize the downtime, you'd want as many backup jobs
running simultaneously as you have processors. You can emulate the benefits of
the concurrent saves you get when saving in restricted state.

I never used SWA (save-while-active), for the reasons Al Barsa just posted...

My experience is that saving to *SAVF is SLOWER than going directly to tape...
I don't understand it at all, but that has consistently been my experience.
(However, that was 5 years ago... ***things change***... plus I didn't try
going to a *SAVF in a different ASP...) I assumed there was a bottleneck in the
I/O controllers involved...

Easiest solution: throw money at the problem (the fastest tape drive you can
afford)...

hth
jt

| -----Original Message-----
| From: midrange-l-admin@midrange.com
| [mailto:midrange-l-admin@midrange.com] On Behalf Of prumschlag@phdinc.com
| Sent: Friday, November 30, 2001 10:01 AM
| To: midrange-l@midrange.com
| Subject: Reducing downtime for backups
|
| I am investigating options for reducing the length of time that the system
| is unavailable to users during our nightly backup job. Not long ago we added
| disks and started creating *SAVF files in a separate ASP for the backups,
| copying the *SAVFs to tape later. This reduced downtime to about 1 hour, but
| it continues to grow as we add files and the existing files grow. We are
| doing full SAVLIBs using BRMS and have split all of the program and
| non-essential libraries out into a separate job. We back up 50 GB of data,
| which compresses to about 30 GB in *SAVF format. We are a single AS/400 shop
| running JD Edwards.
|
| At this point, the options I see are:
| 1. Use SAVCHGOBJ. We have done some tests on this and determined it will
|    not save us much. A large percentage of the objects are changed every
|    day, and the system takes too long trying to figure out which objects
|    to save.
| 2. Separate "current" data from "prior year" data and store them in
|    different libraries. We have already done some of this for the largest
|    files. This requires system changes because the users still need access
|    to the prior-year data.
| 3. Use the save-while-active option. I suspect that this is the best
|    solution, and there is little (if any) out-of-pocket cost. The downsides
|    are a) the time and risk associated with investigating and revising our
|    current procedures, and b) making sure that everyone here understands
|    the implications of the new procedure.
|
| I would appreciate any other ideas anyone may have.
|
| I have seen testimonials on this list that the newer tape drives are
| extremely fast (we have a 3570). They couldn't possibly be faster than
| going to *SAVF files, could they?
|
| TIA
| Phil
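
A rough CL sketch of the journaling-plus-SAVCHGOBJ approach jt describes above.
The library, journal, receiver, and device names (BIGLIB, JDEJRN, JDERCV0001,
TAP01) are placeholders, and this is untested; verify the parameters against
your OS/400 release before relying on it:

    /* One-time setup: journal the large, heavily-updated files so    */
    /* record changes accumulate in a receiver instead of forcing a   */
    /* full resave of the whole file every night.                     */
    CRTJRNRCV JRNRCV(BIGLIB/JDERCV0001)
    CRTJRN    JRN(BIGLIB/JDEJRN) JRNRCV(BIGLIB/JDERCV0001)
    STRJRNPF  FILE(BIGLIB/F0911) JRN(BIGLIB/JDEJRN) IMAGES(*BOTH)

    /* Nightly: save only changed objects, skipping the journaled     */
    /* ones (their changes are already in the journal), then save     */
    /* the receivers themselves.                                      */
    SAVCHGOBJ OBJ(*ALL) LIB(BIGLIB) DEV(TAP01) OBJJRN(*NO) +
              REFDATE(*SAVLIB)
    SAVOBJ    OBJ(JDERCV*) LIB(BIGLIB) DEV(TAP01) OBJTYPE(*JRNRCV)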
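
Similarly, a sketch of the parallel-save idea, one batch job per library group.
JDEDATA1, JDEDATA2, BKPASP, and the job names are made up; each job needs its
own target object, since two jobs cannot share a single tape drive:

    /* Submit one save job per library group so they run at once;     */
    /* each job writes to its own save file in the backup ASP.        */
    SBMJOB CMD(SAVLIB LIB(JDEDATA1) DEV(*SAVF) SAVF(BKPASP/SAVF1)) +
           JOB(NIGHTSAV1)
    SBMJOB CMD(SAVLIB LIB(JDEDATA2) DEV(*SAVF) SAVF(BKPASP/SAVF2)) +
           JOB(NIGHTSAV2)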
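
For the save-while-active option Phil mentions, the command takes roughly this
shape (again a sketch with placeholder names):

    /* Save-while-active: *SYNCLIB takes one checkpoint for the whole */
    /* library; once the checkpoint message lands on the message      */
    /* queue, users can be let back on while the save finishes.       */
    SAVLIB LIB(JDEDATA) DEV(TAP01) SAVACT(*SYNCLIB) +
           SAVACTWAIT(120) SAVACTMSGQ(QSYSOPR)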
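
And for the save-file-then-tape step Phil already runs, SAVSAVFDTA is the
command that sweeps the save-file data to tape after the users are back on
(names again placeholders):

    /* Copy the save-file contents to tape outside the downtime window. */
    SAVSAVFDTA SAVF(BKPASP/SAVF1) DEV(TAP01)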