It depends on the classic question "What problem are you trying to solve?"
Are you trying to free disk space or get faster run times, or...?  We have
an analyst who spends DAYS on deciding which item numbers can be deleted.
He can never give a dollars and cents answer as to how much disk space he
freed up or if the machine runs notably faster.  I consider that a complete
waste of time.

Disk space:
Assuming you have your IFS and your folders in order, I normally do the
following.  Every 4 weeks we do a DSPOBJD to an outfile.  We summarize
that by library, add a date, and store that into a summary file.  This
allows me to do a comparison on disk growth and see which libraries are
the culprits.  Sometimes QSPL can be the culprit.
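A minimal CL sketch of that monthly snapshot (the GROWTH library and file
names are invented for illustration; ODLBNM and ODOBSZ are the library and
size fields I'd expect in the QADSPOBJ outfile model, but check them on
your release):

```
/* Monthly snapshot of every user object - names are illustrative */
DSPOBJD    OBJ(*ALLUSR/*ALL) OBJTYPE(*ALL) +
             OUTPUT(*OUTFILE) OUTFILE(GROWTH/OBJDETAIL)
/* Then summarize GROWTH/OBJDETAIL by ODLBNM (library), summing     */
/* ODOBSZ (object size), and insert the totals plus today's date    */
/* into your summary file with Query/400 or SQL.                    */
```

Comparing two dated summary rows per library gives you the growth figure
without anyone spending days on guesswork.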
I routinely query the detail file, sorting it by size, descending.  This
tells me where I'll get the biggest bang for the buck.  Normally I'll find
some items named ITHOLD1, ITHOLD2, and the like.  Actually, even with as
many data files as we have, IBM's cross-reference files are normally in
the top 20 largest files.  It probably costs more in your employer's time
for you to type in DLTDTAARA than the dollar value of the disk space that
deleting a workstation data area cleans up.
You could always do a DSPFD *MBRLIST to an outfile.  Querying that by size
and/or date changed might be a useful exercise.
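A hedged sketch of that extract (QAFDMBRL is the outfile model for
TYPE(*MBRLIST); the library names here are invented):

```
/* One row per member, for every file in the library */
DSPFD      FILE(BPCSF/*ALL) TYPE(*MBRLIST) +
             OUTPUT(*OUTFILE) OUTFILE(GROWTH/MBRDETAIL)
```

Sorting the outfile by member size, or by last-change date ascending,
surfaces the dead wood quickly.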
You could also do some query math to multiply the number of deleted records
by the record size to get an estimate of how much space a RGZPFM will give
you.  You're starting to hit the law of diminishing returns.  Yes, changing
your files to reuse deleted records may help, but if you have lots of space
used by deleted records it may be a while until you reclaim this space.  See
also my notes regarding RGZPFM in Performance.
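The query math can be sketched in SQL over that DSPFD outfile.  This is an
assumption-laden example: MLNDTR (deleted-record count) is from the
QAFDMBRL model as I remember it, and the 128-byte record length is a
placeholder you would substitute per file:

```
-- Rough estimate of space a RGZPFM could reclaim, per member.
-- 128 is an example record length; use each file's actual length.
SELECT MLLIB, MLFILE, MLNAME, MLNDTR,
       MLNDTR * 128 AS EST_RECLAIM_BYTES
  FROM GROWTH/MBRDETAIL
 WHERE MLNDTR > 0
 ORDER BY EST_RECLAIM_BYTES DESC
```

Anything near the top of that list is a candidate; anything else is the
diminishing-returns territory described above.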

Performance:
I'll ignore coding tips for now and focus on how cleaning up data may, or
may not, help you.
It would sure be nice if BPCS abandoned multiple-member files and allowed
you to use referential integrity (RI), but the use of multiple-member
files negates RI.  I would dearly love to set up a cascading delete on a
test library, delete 25% of the master file records, run something intense
like an MRP or a cost roll-up, and see what kind of a performance gain you
get - if any.
Blocking, plus using RGZPFM with the KEYFILE() parameter to get your files
into the order that is most critical, performance-wise, can be a huge
boost.  The F-spec keyword BLOCK(*YES) is GREAT and beats the drudgery of
doing this in OVRDBF hands down.  The compiler works out an efficient
blocking factor for you, versus calculating the values for OVRDBF by hand.
But some people still like to do it by hand with OVRDBF (I have other ways
I prefer).
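A hedged CL sketch of the reorganize-in-key-order idea (the file and
logical names are invented; KEYFILE() puts the physical's records into the
named logical's key sequence):

```
/* Reorganize the physical into the sequence of its busiest logical */
RGZPFM     FILE(BPCSF/ITEMMAST) KEYFILE(BPCSF/ITEMLF01 ITEMLF01)
/* Optionally have new writes reuse deleted-record space from now on */
CHGPF      FILE(BPCSF/ITEMMAST) REUSEDLT(*YES)
```

On the RPG side the blocking half is just the F-spec keyword BLOCK(*YES);
the compiler computes the blocking factor you would otherwise have to work
out yourself for OVRDBF's SEQONLY parameter.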

Rob Berendt

==================
"They that can give up essential liberty to obtain a little temporary
safety deserve neither liberty nor safety."
Benjamin Franklin



From: MacWheel99@aol.com
Sent by: midrange-l-admin@midrange.com
To: midrange-l@midrange.com
Subject: Re: Deleting logical files - was - RE: What is really the level check
Date: 11/14/2001 01:04 PM
Please respond to midrange-l

Perhaps you can advise on what is a bad thing & my clean-up priority.

We have some files that have what I considered to be an excess number of
ingredients ... hundreds of members, hundreds of logicals, more than one
format.

We are on BPCS, which does a real poor job of providing for the end of
life of records, so I have been busy identifying stuff we no longer need
& deleting it ... like Gen Led Journals posted years ago, Customer Orders
filled months ago, Notes for entities (customers, orders, items, etc.)
that do not exist any more.  In other cases I merely identify suspects
for human action, such as raw material that we have on hand that is
needed for customer parts that we have not had any orders for in years.

There is also the shameful reality that the SOFTWARE to run our BPCS eats
more disk space than the DATA.

At present my thinking is that members that have not been used in eons, or
are named after work station addresses we do not have any more (with
allowance for SPECIALLY named members that are application sensitive), are
a bad thing, especially when more than 32 of them on a physical file get
in the way of adding yet another logical.

So are lots of members a bad thing?
Are multiple formats a bad thing?

By bad thing, I mean resource hog.

Also, reality check ... when we delete a bunch of records in a file, that
does not inherently save any disk space, because the file has grown to a
certain size.  We also need to review whether it makes sense to downsize
it.

>  Access paths are a good thing!
>
>  There is a myth in the AS/400 community that is no longer true.  That is:
>  Access paths (logical files) on a file are resource intensive, have a
>  negative impact on performance, and should only be used where absolutely
>  necessary.
>
>  This is no longer true in today's environment on the AS/400!
>
>  This myth stems from the S/38 and was the guideline that IBM was giving
>  customers at that time.  The Rochester development lab began to change
>  the algorithms for access path maintenance in Version 2 of OS/400 and
>  finished the task in V3R1.  Access paths (logical files) are no longer
>  the resource/performance hog they used to be on the S/38.

MacWheel99@aol.com (Alister Wm Macintyre) (Al Mac)
_______________________________________________
This is the Midrange Systems Technical Discussion (MIDRANGE-L) mailing list
To post a message email: MIDRANGE-L@midrange.com
To subscribe, unsubscribe, or change list options,
visit: http://lists.midrange.com/cgi-bin/listinfo/midrange-l
or email: MIDRANGE-L-request@midrange.com
Before posting, please take a moment to review the archives
at http://archive.midrange.com/midrange-l.






