Open a PMR to report this problem to IBM, since it appears to be
impacting your whole system, from a performance standpoint.
_IBM i 7.1_
I found this on page 34 of this document in the IBM i 7.1 InfoCenter:
The integrated file system performs auxiliary storage operations to
ensure that files and directories persist across IPLs or crashes of
the system. However, many applications use temporary working files
and directories that do not need to persist across IPLs of the
system. These applications are needlessly slowed down by objects
being forced to permanent storage.
(Note that the above paragraph also applies to all versions and releases
of OS/400 and i5/OS prior to 7.1)
In IBM i 7.1, IBM has addressed this performance issue. On the same
page of the above document, I see this:
*Temporary user defined file systems*
The temporary user-defined file system can increase performance by
reducing auxiliary storage operations.
Users can create and mount a special type of UDFS that contains only
temporary objects. Temporary objects do not require any extra
auxiliary storage operations because they are automatically deleted
by the system when the system is restarted, or when the file system
is unmounted.
These extra I/O operations to force IFS data to disk may be having an
adverse impact on your particular I/O configuration.
Consider upgrading to IBM i 7.1 so that you can take advantage of the
new functionality mentioned above.
_VIOS and LUNs_
You might get better performance by dividing your total disk storage
into a larger number of smaller LUNs.
Create a separate independent ASP (storage pool) and create a user
defined file system (UDFS) within that ASP, so you can isolate these IFS
related performance problems to that one storage pool. That way, all of
this extra I/O activity is confined to only a few drives / heads / arms
(LUNs), and it might not impact the entire system as much.
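For example (just a sketch; the ASP device name, UDFS path, and mount
directory below are made up for illustration), the CL commands to create
and mount a UDFS in an independent ASP look like this:

```
CRTUDFS UDFS('/dev/iasp01/bigfiles.udfs')
MOUNT   TYPE(*UDFS) MFS('/dev/iasp01/bigfiles.udfs') MNTOVRDIR('/bigfiles')
```

Applications would then read and write their large files under
/bigfiles instead of the root file system.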
Look at your applications to see how the large IFS files are being
used. If they are truly "/temporary/" files, perhaps you could change
those applications to keep that data in memory, using teraspace to store
the data, so that it is never committed to permanent auxiliary storage.
It also gets disposed of and "cleaned up" automatically at end of job.
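As a rough illustration of the idea (not IBM i specific; actual
teraspace usage would be done in ILE code), streaming the data through a
pipe instead of a temporary file keeps it out of auxiliary storage
entirely:

```shell
# Stream 64 MB through a pipe instead of creating and then deleting
# /tmp/dummy; the data is consumed in memory and never written to disk
dd if=/dev/zero bs=1048576 count=64 2>/dev/null | wc -c   # prints 67108864
```

No file is ever created, so there is nothing to force to disk and
nothing to delete afterward.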
I also found this older document that might help somewhat:
I hope this may help.
Mark S. Waterbury
> On 9/13/2012 5:38 AM, Argia Sbolenfi wrote:
We're experiencing a strange disk performance problem with large (> 1
GB) files stored on the root IFS file system.
When we delete one of such files, we see that:
1) from WRKDSKSTS, all disks are 95%-100% busy until the delete
operation is completed; and obviously during this period of time the
whole system becomes unresponsive
2) file deletions become slower (in terms of MB/s) as the file size increases!
Take a look at the table below. For this test, temporary IFS files
were created in /tmp with the QSH command
dd if=/dev/zero of=/tmp/dummy bs=1024 count=1048576 (count=2097152, and
so on, for the larger sizes), then deleted with rm /tmp/dummy.
  MB    dd s   dd MB/s   rm s   rm MB/s
1024    15.4      66.5    4.8     213.3
2048    26.4      77.6   16.6     123.4
4096    64        64.0   99        41.4
8192   107        76.6  306        26.8
first column: file size in megabytes
second column: time elapsed to create the file in seconds
third column: file creation speed (column 1 / column 2, MB/s)
fourth column: time elapsed to delete the file in seconds
fifth column: file deletion speed (column 1 / column 4, MB/s)
So it took over 5 minutes to delete the 8 GB file (while the whole
system was almost unusable).
Given that deletion speed seems to decrease quickly with file size,
does this mean that it will take almost forever for a, say, 20 GB
file? It makes no sense.
Some background information (please note that this is a BladeCenter
with 1 x VIOS-based POWER blade and virtualized storage):
IBM Power PS700 on a Bladecenter S, OS 6.1.1
SAS Raid controller, 8x559 GB 15,000 rpm SAS disks
7x95443 MB LUNs (tot. 668 GB) in ASP1, 66% used
So can this be considered "normal"? Is something wrong with our
hardware/software storage configuration? Any suggestions?