


There wasn't a mention of which IFS object this is, but is it tied to an
application, or something else? Maybe a lock was on it.

On Thu, Sep 13, 2012 at 10:37 AM, Mark S Waterbury <
mark.s.waterbury@xxxxxxxxxxxxx> wrote:

Hello, Argia:

_IBM Support_
Open a PMR to report this problem to IBM, since it appears to be
impacting your whole system from a performance standpoint.

_IBM i 7.1_
I found this on page 34 of this document in the IBM i 7.1 InfoCenter:

http://publib.boulder.ibm.com/infocenter.iseries/v7r1m0/topic/ifs/rzaax.pdf

The integrated file system performs auxiliary storage operations to
ensure that files and directories persist across IPLs or crashes of
the system. However, many applications use temporary working files
and directories that do not need to persist across IPLs of the
system. These applications are needlessly slowed down by objects
being forced to permanent storage.

(Note that the above paragraph also applies to all versions and releases
of OS/400 and i5/OS prior to 7.1)

In IBM i 7.1, IBM has addressed this performance issue. On the same
page of the above document, I see this:

*Temporary user defined file systems*
The temporary user-defined file system can increase performance by
reducing auxiliary storage operations.
...(snip)...
Users can create and mount a special type of UDFS that contains only
temporary objects. Temporary objects do not require any extra
auxiliary storage operations because they are automatically deleted
by the system when the system is restarted, or when the file system
is unmounted.

These extra I/O operations to force IFS data to disk may be having an
adverse impact, given your particular I/O configuration.

Consider upgrading to IBM i 7.1 so that you can take advantage of the
new functionality mentioned above.
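
For reference, the general create-and-mount flow for a UDFS looks
something like the sketch below. The names and ASP number are just
examples, and the page quoted above does not show the specific 7.1
option that marks a UDFS as temporary, so check the CRTUDFS help text
on a 7.1 system for that parameter:

    /* Example only: UDFS name, ASP number, and mount directory are */
    /* assumptions, not taken from this thread.                      */
    CRTUDFS UDFS('/dev/QASP01/tmpwork.udfs') TEXT('Temporary work files')
    CRTDIR  DIR('/tmpwork')
    MOUNT   TYPE(*UDFS) MFS('/dev/QASP01/tmpwork.udfs') MNTOVRDIR('/tmpwork')

Once mounted, anything written under /tmpwork lives in that UDFS, and
on 7.1 a temporary UDFS is discarded at IPL or unmount instead of
being forced to permanent auxiliary storage.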

_VIOS and LUNs_
You might get better performance by dividing your total disk storage
into a larger number of smaller LUNs.

_Independent ASPs_
Create a separate Independent ASP (storage pool) and create a user
defined file system (UDFS) within that ASP, so you can isolate these IFS
related performance problems to that one storage pool. That way, it
might not impact the entire system as much, because you are forcing
all this extra I/O activity to use only a few drives / heads / arms
(LUNs).
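
The same CRTUDFS / MOUNT flow shown earlier applies here; only the
device path changes so the file system lives in the independent ASP.
A minimal sketch, assuming an IASP named IFSPOOL already exists and is
varied on (the names are examples only):

    /* Example only: IASP name and mount directory are assumptions. */
    CRTUDFS UDFS('/dev/IFSPOOL/bigfiles.udfs')
    CRTDIR  DIR('/bigfiles')
    MOUNT   TYPE(*UDFS) MFS('/dev/IFSPOOL/bigfiles.udfs') MNTOVRDIR('/bigfiles')

The applications would then create and delete their large files under
/bigfiles, so the heavy delete I/O stays on the drives behind that ASP.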

_Application changes_
Look at your applications to see how the large IFS files are being
used. If they are truly "/temporary/" files, perhaps you could change
those applications to keep that data in memory, using teraspace to store
the data, so that it is never committed to permanent auxiliary
storage. Also, at end of job, that data gets disposed of and "cleaned up"
automatically.
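
If the applications are ILE C (an assumption on my part), one way to
get there is simply to compile so that the default heap lives in
teraspace. A minimal sketch, using made-up program and source names:

    /* Example only: library, program, and source names are made up. */
    /* TERASPACE(*YES *TSIFC) makes malloc() and friends use the     */
    /* teraspace storage interfaces, and STGMDL(*TERASPACE) runs the */
    /* program under the teraspace storage model, so an application  */
    /* that keeps its working data in malloc'd buffers holds it in   */
    /* job temporary storage instead of writing it out as IFS        */
    /* stream files.                                                 */
    CRTBNDC PGM(MYLIB/WRKBUF) SRCFILE(MYLIB/QCSRC) SRCMBR(WRKBUF) +
            TERASPACE(*YES *TSIFC) STGMDL(*TERASPACE)

At job end the teraspace allocations are cleaned up automatically,
which matches the behavior described above.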

I also found this older document that might help somewhat:


http://www-03.ibm.com/systems/resources/systems_i_advantages_perfmgmt_pdf_ifsperf.pdf

I hope this may help.

Mark S. Waterbury

> On 9/13/2012 5:38 AM, Argia Sbolenfi wrote:
We're experiencing a strange disk performance problem with large (> 1
GB) files stored on the root IFS file system.
When we delete one such file, we see that:
1) from WRKDSKSTS, all disks are 95%-100% busy until the delete
operation is completed; and obviously during this period of time the
whole system becomes unresponsive
2) file deletions become slower (in terms of MB/s) as the file size
increases!

Take a look at the table below: for this test, temporary IFS files
were created in /tmp with the QSH command
dd if=/dev/zero of=/tmp/dummy bs=1024 count=1048576 (2097152..), then
deleted with rm /tmp/dummy

  MB     dd (s)   dd (MB/s)   rm (s)   rm (MB/s)
  1024     15.4        66.5      4.8       213.3
  2048     26.4        77.6     16.6       123.4
  4096     64          64.0     99          41.4
  8192    107          76.6    306          26.8

first column: file size in megabytes
second column: time elapsed to create the file in seconds
third column: file creation speed (column 1 / column 2, MB/s)
fourth column: time elapsed to delete the file in seconds
fifth column: file deletion speed (column 1 / column 4, MB/s)

So it took over 5 minutes to delete the 8 GB file (while the whole
system was almost unusable...).
Given that deletion speed seems to decrease quickly with file size,
does this mean that it will take almost forever for, say, a 20 GB
file? It makes no sense.

Some background information (please note that this is a BladeCenter with
1 x VIOS-based POWER blade and virtualized storage):
IBM Power PS700 on a BladeCenter S, OS 6.1.1
SAS RAID controller, 8 x 559 GB 15,000 rpm SAS disks
7 x 95443 MB LUNs (tot. 668 GB) in ASP1, 66% used

So can this be considered "normal"? Is something wrong with our hw/sw
storage configuration? Any suggestions?

tnx



