Larry,

Is it correct that RCLSTG is not recommended as much now as it was in the S/38 and early AS/400 days?
Is it also correct that some of the RCLSTG code is executed automatically during an IPL, if and where needed?
The last time I worked on such an issue, IBM support recommended NOT running a full RCLSTG.
What they recommended instead was RCLSTG SELECT(*DBXREF) OMIT(*NONE) ASPDEV(*SYSBAS), following a V6R1-to-V7R1 upgrade issue.

Paul

-----Original Message-----
From: MIDRANGE-L [mailto:midrange-l-bounces@xxxxxxxxxxxx] On Behalf Of Mark S Waterbury
Sent: Monday, October 13, 2014 12:23 PM
To: Midrange Systems Technical Discussion
Subject: Re: Has Reclaim Storage become outdated?

Hi, Larry:

Your insights into such matters are always appreciated. Thanks.

Mark S. Waterbury

On 10/13/2014 11:46 AM, DrFranken wrote:
While I appreciate the CONCEPT here of allowing more parallelization
of the reclaim, I don't support it at all for a production system.
Here's why:

First, in the last decade I can name only two systems that really
benefited from RCLSTG, and one of those had suffered multiple hard
crashes (due to power outages and storms) in a three-day period.
Eventually it needed a reclaim to correct the resulting issues.

Second, in most cases it is A LOT of work to break up the workload by ASP.

Third, it could cost A LOT of money to have enough disk arms available,
preferably in separate RAID sets, to supply adequate performance for
each ASP.

Fourth, many parts of RCLSTG can already be run with the system up.

Fifth, what seems to be the most important part of RCLSTG, the *DBXREF
part, usually takes only a small fraction of the overall time, so I'd
start with that alone.

So the risk/reward trade-off here doesn't work for me.
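Larry's fifth point above, running just the database cross-reference portion, can be expressed as a single CL command. The SELECT and OMIT values below are the ones Paul quoted from IBM support; ASPDEV(*SYSBAS) targets the system and basic user ASPs, and would be swapped for an IASP device name as needed:

```cl
/* Reclaim only the database cross-reference tables, rather than a  */
/* full RCLSTG. This is the subset IBM support recommended to Paul. */
RCLSTG SELECT(*DBXREF) OMIT(*NONE) ASPDEV(*SYSBAS)
```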

Remember, you can EASILY put 8TB in two drawers of disk today, with
lots of open slots and hot spares. 8TB isn't a large system.

- Larry "DrFranken" Bolhuis

www.frankeni.com
www.iDevCloud.com
www.iInTheCloud.com

On 10/13/2014 11:25 AM, Mark S Waterbury wrote:

Hi, Kenneth:

A single large IASP of ~8TB, even if only 50% occupied, will take a
long time for RCLSTG to analyze.

The following approach may help ...

Instead of one large IASP of 8TB, why not have 8 IASPs of 1TB each?
That way, in the event of a system crash, you could attach each
separate IASP to a different LPAR, vary it on, and run the RCLSTG
processes "in parallel" ... I would think this should complete much
faster than running RCLSTG against a single large (8TB) IASP.

Then, once that "parallel RCLSTG" is completed, you can vary off the
IASPs, then attach them and vary them all on to the primary "production"
LPAR once again.
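On each LPAR, the per-IASP step Mark describes might look something like the sketch below. The device name IASP01 is a placeholder; each LPAR would work on a different IASP device:

```cl
/* Make this IASP available on the current LPAR                     */
VRYCFG CFGOBJ(IASP01) CFGTYPE(*DEV) STATUS(*ON)

/* Reclaim storage for just this one IASP                           */
RCLSTG ASPDEV(IASP01)

/* When finished, vary it off so it can be moved back to production */
VRYCFG CFGOBJ(IASP01) CFGTYPE(*DEV) STATUS(*OFF)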

Someone else mentioned "High Availability" -- for example, this
approach would also permit you to vary off one IASP, attach it to
another LPAR and run a full backup of that IASP on that LPAR, then
vary it off and attach it back to the "live" production LPAR once
again. If you group libraries into these IASPs based on which
"applications" use those libraries, this could allow you to keep your
"warehouse" applications up and running while the "financials"
applications are being backed up, for instance.
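As a rough sketch of that backup scenario on the second LPAR (the IASP group name IASP01 and tape device TAP01 are placeholders, not names from the thread):

```cl
/* On the backup LPAR, after the IASP has been attached and varied on: */
/* put this job's library name space into the IASP group ...           */
SETASPGRP ASPGRP(IASP01)

/* ... then save the user libraries that live in that IASP to tape     */
SAVLIB LIB(*ALLUSR) DEV(TAP01) ASPDEV(IASP01)
```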

HTH,

Mark S. Waterbury

--
This is the Midrange Systems Technical Discussion (MIDRANGE-L) mailing list.
To post a message, email: MIDRANGE-L@xxxxxxxxxxxx
To subscribe, unsubscribe, or change list options,
visit: http://lists.midrange.com/mailman/listinfo/midrange-l
or email: MIDRANGE-L-request@xxxxxxxxxxxx
Before posting, please take a moment to review the archives at http://archive.midrange.com/midrange-l.

