Hi Rob,

Exactly! Also, thank you for the tip on how to query the .nlo file sizes.

I agree that is a significant amount of disk space to potentially give up
by increasing the minimum size, but it is fairly insignificant when
measured against 4TB. The only way to know for sure whether changing the
minimum size is the best move, or what the true disk cost will be, is to
run the estimator. I personally would run the DAOS estimator on all files
and look at the output before making any changes. If it runs for days,
what does it matter? You can always change the job priority if it is
using too many resources. Maybe the estimator will show that 128K or 512K
would be your ideal value. Again, the ideal value for a server is a
trade-off between disk space savings, backup/recovery time, and DAOS
catalog resync times. There really is no magic number for that.
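
For a quick preview of where those candidate thresholds fall, a bucketed
variation of Rob's RTVDIRINF query below would break the DAOS directory
down by size range. This is only a sketch; it reuses his table names,
which are generated per RTVDIRINF run and will differ on your system:

-- Size-range breakdown of DAOS objects
-- (table names assumed from Rob's example below)
select size_bucket, count(*) as objects, sum(obj_size) as total_bytes
from (
    select QEZDTASIZE as obj_size,
           case
               when QEZDTASIZE <  131072 then '1: under 128K'
               when QEZDTASIZE <  524288 then '2: 128K to 512K'
               when QEZDTASIZE < 1048576 then '3: 512K to 1MB'
               else '4: 1MB and up'
           end as size_bucket
    from routines.qaezd0025o
    where QEZDIRIDX in (
        select QEZDIRIDX from routines.qaezd0025d
        where QEZDIRNAM1 like '/ARCHIVE1/NOTES/DATA/DAOS%')
) t
group by size_bucket
order by size_bucket;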

Just as an FYI: in the past, when I was doing this, I was updating from
minimum sizes of 0 - 16K to new minimum sizes of 64K to 1MB. In those
cases, the benefits of increasing the minimum size were obvious.

Good luck,
Amy
_________________________________________________________
Amy Hoerle
Kim Greene Consulting, Inc
Senior Consultant
ahoerle@xxxxxxxxxxxxx
507-775-2174 (direct) | 507-367-2888 (corporate)
Office Hours: 8:00 AM - 2:00 PM CST, other times by appointment
http://www.bleedyellow.com/blogs/iLotusDomino
http://www.twitter.com/iLotusDomino



From: rob@xxxxxxxxx
To: Lotus Domino on the iSeries / AS400 <domino400@xxxxxxxxxxxx>
Date: 04/23/2012 10:57 AM
Subject: Re: Best practices: DAOS size recommendations
Sent by: domino400-bounces@xxxxxxxxxxxx



Thank you, Amy. 40 containers times 40,000 .nlo's per container
approaches 2 million (1.6 million, to be exact). And it's slower to back
up lots of little objects than one big object.

I have 803,336 objects in /ARCHIVE1/NOTES/DATA/DAOS. Of these, 100,084
are larger than 1048576 bytes, totalling 395,639,959,997 bytes. The total
size of all objects smaller than 1048576 bytes is 179,593,939,632, or
roughly 180GB. What I don't know is how many of those objects are
referenced by more than one file. So I can't compute whether, say, half
of them being in more than one file would make that (90GB) + (90GB * 2),
or 270GB, an increase of 90GB I'd be chewing up by changing my minimum
size from 64K to 1MB.
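
Spelled out, that worst-case arithmetic (under the assumption that half
of the sub-1MB bytes are referenced by exactly two databases) works out
as:

    90GB referenced once   ->  90GB back in the NSFs
    90GB referenced twice  -> 180GB back in the NSFs (one copy per database)
    total after the change:   270GB, versus 180GB in DAOS today (+90GB)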

So, is it worth 90(?)GB of disk to remove 700,000-plus objects (just on
that server alone)? We back up that server once a quarter, and it takes
6 hours and 38 minutes to back up the IFS (excluding qsys.lib and the
usual others). ARCHIVE1 is clustered to ARCHIVE2. ARCHIVE2 is backed up
once a week, as we only archive compact once a week. It takes an
additional 3 hours and 40 minutes to back up the ARCHIVE2 directory. The
LPAR running ARCHIVE2 is only 55% full of its 4TB.


See RTVDIRINF, PRTDIRINF, and iNav's "Run SQL Scripts":

-- Count and total size of DAOS objects larger than 1MB,
-- using the output files from a prior RTVDIRINF run.
select count(*), sum(QEZDTASIZE)
from routines.qaezd0025o
where QEZDIRIDX in (
    select QEZDIRIDX from routines.qaezd0025d
    where QEZDIRNAM1 like '/ARCHIVE1/NOTES/DATA/DAOS%')
  and QEZDTASIZE > 1048576;
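
The sub-1MB totals above come from the same files with the comparison
flipped. A minimal variation (same table names, which presumably came
from something like RTVDIRINF DIR('/ARCHIVE1') INFLIB(ROUTINES)):

select count(*), sum(QEZDTASIZE)
from routines.qaezd0025o
where QEZDIRIDX in (
    select QEZDIRIDX from routines.qaezd0025d
    where QEZDIRNAM1 like '/ARCHIVE1/NOTES/DATA/DAOS%')
  and QEZDTASIZE < 1048576;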


Rob Berendt
