It's also about disk cache. We upgraded from a Model 520 to a Model 550,
with fewer drives in the new machine. I expressed concern about
performance, but our VAR said that the drive controller cards have about
1.5 GB of cache and that having fewer disks wouldn't be a problem. He was
right!

We FTP between 3 and 5 GB from the primary system to the niterun system,
and back to the primary. On the old system this took about 30 minutes; on
the new system it takes about 20 minutes.
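As a rough sanity check (my arithmetic, not Steve's), moving about 4 GB (the midpoint of his 3-5 GB range) in 20 minutes works out to roughly 3.4 MB/s of sustained FTP throughput:

```python
# Back-of-the-envelope throughput for the nightly FTP round trip.
# Assumes ~4 GB moved (midpoint of the stated 3-5 GB range) in 20 minutes.
gb_moved = 4
minutes = 20

mb_per_sec = gb_moved * 1024 / (minutes * 60)
print(f"{mb_per_sec:.1f} MB/s")  # ~3.4 MB/s on the new box

old_minutes = 30
speedup = old_minutes / minutes
print(f"{speedup:.1f}x faster than the old system")  # 1.5x
```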

I also asked about separating journal receivers into a separate ASP for
performance, but when I looked at the amount of space required by
receivers, it would have only been 1 disk drive, and I couldn't see
setting up a separate ASP for that.

Another real-world example of the disk cache: when I use FNDSTRPDM to
search for a string in our source files, the first time I submit a search
it takes a minute or two to search the source file. If I submit another
search against the same one or two source files while they are still in
the cache, the search is done in one or two seconds.
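The same warm-cache effect is easy to reproduce on any OS with a file-system cache. This sketch (plain Python, nothing IBM i-specific; file size and path are my choices) times a first read against a second read that is usually served from the page cache:

```python
import os
import tempfile
import time

# Write a scratch file large enough that reading it does measurable work.
path = os.path.join(tempfile.gettempdir(), "cache_demo.bin")
with open(path, "wb") as f:
    f.write(os.urandom(16 * 1024 * 1024))  # 16 MB of random bytes

def timed_read(p):
    """Read the whole file, returning (elapsed seconds, bytes read)."""
    start = time.perf_counter()
    with open(p, "rb") as f:
        data = f.read()
    return time.perf_counter() - start, len(data)

first, size = timed_read(path)   # may have to touch the disk
second, _ = timed_read(path)     # usually served from the OS page cache
print(f"first: {first:.4f}s  second: {second:.4f}s  ({size} bytes)")
os.remove(path)
```

On most systems the second read is noticeably faster, mirroring the FNDSTRPDM behavior above.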

Be sure to evaluate the total disk package, and get some experienced help
to evaluate your proposed system.

Steve
--
Steven Morrison
Fidelity Express





lbolhuis@xxxxxxxxxx
Sent by: midrange-l-bounces@xxxxxxxxxxxx
06/01/2009 12:58 PM
Please respond to: Midrange Systems Technical Discussion <midrange-l@xxxxxxxxxxxx>
To: Midrange Systems Technical Discussion <midrange-l@xxxxxxxxxxxx>
Subject: Re: Large systems and full disks

For the record, 1 TB of disk is in no way, shape, or form a large system
today. (Frankie, a 170 has 3/4 TB!) As Lukas said, that can be had in 4
drives today, and even with RAID you'd have 1.35 TB with the largest
drives!
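Larry's 1.35 TB figure is consistent with RAID-5 across four drives, which gives the usable capacity of n-1 drives; the 450 GB drive size below is my assumption about the largest units of that era, not something Larry states:

```python
# RAID-5 usable capacity = (n - 1) * drive_size; one drive's worth of
# space goes to distributed parity.
drives = 4
drive_gb = 450  # assumed largest drive size available at the time

usable_gb = (drives - 1) * drive_gb
print(usable_gb / 1000, "TB usable")  # 1.35 TB
```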

When I originally read the post I thought you said 1,000 TB, which WOULD
be a large system.

As others have said, it's about arms, I/O workload, and what you do with
(on?) your disks. In other words, as always, YMMV! DO THE MATH first
before you resize a system to have fewer, larger disk units!

- Larry


Larry Bolhuis
Vice President, Arbor Solutions, Inc.
1345 Monroe NW Suite 259
Grand Rapids, MI 49505
(616) 451-2500
(616) 451-2571 - Fax
(616) 260-4746 - Cell
IBM Certified Advanced Technical Expert: System i Solutions
IBM Certified Systems Expert: System i Technical Design and Implementation V6R1

If you can read this, thank a teacher... and since it's in English,
thank a soldier.




rob@xxxxxxxxx
Sent by: midrange-l-bounces@xxxxxxxxxxxx
05/29/2009 09:50 AM
Please respond to: Midrange Systems Technical Discussion <midrange-l@xxxxxxxxxxxx>
To: Midrange Systems Technical Discussion <midrange-l@xxxxxxxxxxxx>
Subject: Re: Large systems and full disks

Sue has some good points. We had one file so large that we used to have
to reorg it via tape (if we waited too long).

Keep in mind that you may have stream files (non-QSYS.LIB stuff) to
consider as well. These may be bigger than your QSYS.LIB objects.

Then again, our biggest LPAR is only at 54% with ninety 70 GB disk drives.
%busy (see WRKDSKSTS) on those right now hovers between 2 and 4 percent.
But it's a Friday on a short holiday week - with one silly day of school
left on Monday.

For the short term, you can play with WRKSYSVAL QSTGLOW* and the STRSST
"Work with ASP threshold" option. Actually, I leave that last one at 80%.
I want enough time to go through management to order more disk.
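Rob's housekeeping boils down to a percent-used comparison against the ASP threshold. A minimal sketch of that check (the function name and numbers are mine, not an IBM API):

```python
def asp_alert(used_gb: float, total_gb: float, threshold_pct: float = 80.0) -> bool:
    """Return True when ASP usage has crossed the warning threshold."""
    pct_used = 100.0 * used_gb / total_gb
    return pct_used >= threshold_pct

# Ninety 70 GB arms ~= 6.3 TB raw; at 54% used, an 80% threshold stays quiet.
total = 90 * 70
print(asp_alert(used_gb=0.54 * total, total_gb=total))  # False
```

Keeping the threshold at 80% rather than 90% buys exactly what Rob describes: lead time to order more disk before the ASP fills.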


Rob Berendt
--
Group Dekko Services, LLC
Dept 01.073
Dock 108
6928N 400E
Kendallville, IN 46755
http://www.dekko.com





From: Sue Baker <smbaker@xxxxxxxxxxxx>
To: midrange-l@xxxxxxxxxxxx
Date: 05/28/2009 07:48 PM
Subject: Re: Large systems and full disks
Sent by: midrange-l-bounces@xxxxxxxxxxxx


wrote on Thu, 28 May 2009 18:12:54 GMT:

> On a large system with over 1,000 gig - is the 90% general rule
> still a critical issue? The final 10% is over 100 gig.
> Will SNADS and SMTP have issues at 90%?

If you have the storage limits set appropriately (the ASP limit in
service tools, not in system values), SNADS and SMTP will not have
a problem. However, other things may perform poorly, or you could
end up with an unexpected crash. It really depends on the needs
of various temporary objects and sometimes even permanent
objects. Some of the things I'd question are:

- Is 100 GB (less a bit) enough space to hold copies of your two
largest DB tables?
- Do you have space already set aside for a main storage dump?
- How much data growth happens on a daily basis before you get
around to purging older data? This one can be pretty critical:
if the system has to spend a lot of time looking for free space
on fragmented disk, it can have an impact on performance.
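Sue's first two questions can be turned into a simple headroom check: does the remaining ~10% cover copies of the biggest tables plus a main storage dump? A sketch with illustrative numbers (the function and all figures are mine):

```python
def enough_headroom(free_gb, largest_tables_gb, msd_reserve_gb):
    """True if free space covers copies of the largest tables
    plus a main storage dump reserve."""
    return free_gb >= sum(largest_tables_gb) + msd_reserve_gb

# ~100 GB free on a 1 TB system; two 30 GB tables and a 32 GB dump area.
print(enough_headroom(100, [30, 30], 32))  # True
# The same free space is NOT enough if the tables are 60 GB and 45 GB.
print(enough_headroom(100, [60, 45], 32))  # False
```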

--
Sue
Americas Technical Sales Support - Power Systems i
Rochester, MN




