|
My initial reaction was that this was a ludicrous idea; the IFS is so
dependent on applications. Then I paused to think. Are there standard
IFS files, present predominantly on all iSeries, that should be maintained
to reduce disk usage and/or increase performance?
The very first thing to do when trying to clean up disk space is to make
no assumptions. I can't believe the number of times I wanted to slap one
fellow for spending weeks clearing out obsolete item numbers to save
0.0001% of disk space. What a lousy cost/benefit ratio!
So where is your IFS space eaten up? Here, we wrote a program that drills
down through the IFS (ignoring symbolic links and a few other things).
Every object's name, the directory it is in, and its size are written to a
DB2 file, with a separate member for each date. It is run daily. We can
query that file to see what our biggest IFS objects are. We also run a
report that compares each immediate directory off of the root between two
dates; the size of each directory is summarized from all drilled-down
members of that directory. For example:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
GDISYS DISK ANALYSYS REPORT
DATE ONE DATE TWO
06/14/2004 06/15/2004
/ 734,136 734,136
/.eclipse 16,384 16,384
/botdir 4,394,063 4,394,063
/cgidev 4,535,632 4,535,632
/craigs 277,029 277,029
/davem 9,599 9,599
/dev 98,304 98,304
/dirs 65,598 65,598
/dummy 8,192 8,192
/edi 8,536 8,536
/etc 29,790 29,790
/fixes 45,034,602 45,034,602
/flashmail 8,192 8,192
/gdisys02 4,299,441,528 4,308,872,694 9,431,166 .21
...
The missing headings are Growth and Percent Growth.
As you can see, the only directory that grew between these dates was
gdisys02, and it grew 9.4MB. (Clue: that is a Domino directory that
contains only one mail user - me.)
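The drill-down and growth comparison described above can be sketched in a few lines. This is a hedged illustration only, not Rob's actual program (which writes each day's snapshot to a separate DB2 file member); the function names and the dict-based snapshot format here are hypothetical:

```python
import os

def ifs_snapshot(root):
    """Walk the tree under root, skipping symbolic links, and return
    total bytes per immediate subdirectory of root.
    (Hypothetical sketch; the real program writes rows to a DB2 file.)"""
    totals = {}
    for top in sorted(os.listdir(root)):
        path = os.path.join(root, top)
        if os.path.islink(path):
            continue  # ignore symbolic links, as the described program does
        size = 0
        if os.path.isdir(path):
            for dirpath, dirnames, filenames in os.walk(path):
                # prune symlinked subdirectories so they are never descended
                dirnames[:] = [d for d in dirnames
                               if not os.path.islink(os.path.join(dirpath, d))]
                for f in filenames:
                    fp = os.path.join(dirpath, f)
                    if not os.path.islink(fp):
                        size += os.path.getsize(fp)
        else:
            size = os.path.getsize(path)
        totals[top] = size
    return totals

def growth_report(day1, day2):
    """Compare two snapshots (dict of dir -> bytes) and produce rows of
    (name, size1, size2, growth, percent growth), like the report above."""
    rows = []
    for name in sorted(day1):
        old, new = day1[name], day2.get(name, 0)
        growth = new - old
        pct = round(growth / old * 100, 2) if old else 0.0
        rows.append((name, old, new, growth, pct))
    return rows
```

Using the /craigs figures from the production report below, `growth_report` would show a growth of 8,192 bytes, or 61.71 percent.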
Here's an example from a production system:
*...+....1....+....2....+....3....+....4....+....5....+....6....+....7....+....8
GDIHQ DISK ANALYSYS REPORT
DATE ONE DATE TWO
03/09/2004 03/10/2004
/ 832,787 832,787
/a 8,192 8,192
/craigs 13,274 21,466 8,192 61.71
/dev 139,264 139,264
/ediftpdta 528,145 525,318 2,827- .53-
/edtpdta 8,192 8,192
/etc 8,192 8,192
/fixes 8,192 8,192
/hmup 307,938 278,742 29,196- 9.48-
...
/NOTES01 192,835,359,567 193,461,664,332 626,304,765 .32
...
/QFPNWSSTG 361,827,576,320 361,827,576,320
/QOpenSys 168,108,133,668 168,108,133,668
...
771,386,926,270 772,222,742,427 835,816,157 .10
The biggest bang for the buck here would have been to implement email
restrictions, such as archiving old email, limiting the size of
attachments, and that sort of thing. When I proposed it to management I
was told no way: too many people go into old mail for reference purposes,
and if that major customer can't send us a drawing, that impedes customer
service. (And, yes, we have an ftp site.) So a gig a day of growth is
considered acceptable. There's a reality check for you.
QFPNWSSTG holds the storage spaces for our Integrated Netfinity Servers.
We have about 6 of these.
QOpenSys has a repository for TSM, or Tivoli Storage Manager. But then
maybe I'm assuming where this disk is being used up. :-) Oh well, at
least it's static.
Now, if you want me to, I can query that file for the biggest objects,
discard the Domino stuff, and see whether any of these objects are germane
to the iSeries population as a whole.
Rob Berendt
--
Group Dekko Services, LLC
Dept 01.073
PO Box 2000
Dock 108
6928N 400E
Kendallville, IN 46755
http://www.dekko.com
Mike Berman <mikeba777@xxxxxxxxx>
Sent by: midrange-l-bounces@xxxxxxxxxxxx
06/16/2004 07:20 AM
Please respond to
Midrange Systems Technical Discussion <midrange-l@xxxxxxxxxxxx>
To
Midrange Systems Technical Discussion <midrange-l@xxxxxxxxxxxx>
cc
Fax to
Subject
Cleanup of IFS
This raises a question for me about general cleanup of the IFS. I have no
idea where to find info on this. Are there cleanup procedures/options for
it similar to Cleanup on the AS/400?
John Ross <jross-ml@xxxxxxxxxxxxxxx> wrote:
Learned something new today. EDTF (since V4R4) has an option
9=Recursive Delete, but I assume you want it in a CL or some program,
so instead of writing your own, use DELTREE from IFSTOOL:
http://www-912.ibm.com/s_dir/slkbase.nsf/0/3976fed8ab10134b862568b60071ccd3?OpenDocument
John Ross
David Gibbs wrote:
>> I need some IFS workfiles - some of which may have identical names
>> but different content, so I place them in different directories.
>> When finished an easy way to clean up is a generic DEL with
>> RMVLNK(*YES), but at least in /tmp and /home I get CPFA0AC ".. the
>> file system does not support removing existing links."
>> Is there any existing file system that supports this?
>
>
> This was asked a while ago ... but no answer was ever posted.
>
> Anyone know if it's possible to make the RMVDIR command work with
> RMVLNK(*YES)?
>
> I don't want to depend on having QSHELL available ... nor do I really
> want to have to write a program that will traverse the directory tree,
> removing each individual file.
>
--
This is the Midrange Systems Technical Discussion (MIDRANGE-L) mailing
list
To post a message email: MIDRANGE-L@xxxxxxxxxxxx
To subscribe, unsubscribe, or change list options,
visit: http://lists.midrange.com/mailman/listinfo/midrange-l
or email: MIDRANGE-L-request@xxxxxxxxxxxx
Before posting, please take a moment to review the archives
at http://archive.midrange.com/midrange-l.