Yep, the disks all report in correctly. Since it's an '8 set' they show
as 15GB after RAID.
I certainly would love to find a hardware issue because it would be a
straightforward fix. Thing is, it seems only to affect jobs with a lot of
stuff in QTEMP.
I certainly am thinking that loading the final PTF group for the release
could be a good idea.
ODD THOUGHT: The drives that were added had previously been used on a
POWER5 machine with i 7.1 and TR5. Wondering if there's any chance that
the firmware on the disks is TOO NEW for this server?? If that were the
case, wouldn't the disks seem slow? Wouldn't percent busy be high?
- Larry "DrFranken" Bolhuis
On 2/28/2013 12:07 AM, Evan Harris wrote:
What model do the disks show as? Same model as a 17GB 10K on another machine?
Any time I've seen something like this, it has turned out that there
was a hardware fault, but it took a while to actually show.
There sure were a lot of PTFs for disks back in those days, and not
always on cumes, though I guess they would push them all to cume in
the final wrap.
Not claiming to know anything, just thinking aloud.
On Thu, Feb 28, 2013 at 4:21 PM, DrFranken <midrange@xxxxxxxxxxxx> wrote:
Customer with an old 720 4-way, old enough to be running V5R1.
A couple of weeks ago we replaced 8 7200 RPM 8GB drives with 10K RPM 17GB
drives. Also added 768MB of memory, lifting them to 2GB, and replaced the
2741 RAID card with a 4778. Disks were drained/added, no save/restore.
PTFs are VERY close to the latest available for V5R1.
System had run for nearly 2 years without IPL before the upgrade.
Since the 'upgrade' the system is slower. CPU usage is usually in single
digits, paging and faulting numbers are low double digits, disk percent
busy might hit 10 (15 total drives, all 10K and 52% full). WRKSYSACT and
performance tools reports show nothing odd at all. Seizes and locks are
low, disk response times are very good. IOP and IOA processor
utilizations are 10% or less. CFINT shows 0.0 or 0.1% in WRKSYSACT.
Nothing at all in the LIC logs, problem logs, etc., and yes, the cache battery is good.
Any job that puts any amount of stuff in QTEMP just dogs. Some that used
to take 30 seconds now take 10 minutes. Users who run tasks that use QTEMP
see more than two minutes to SIGNOFF.
I did some testing. Creating a file with 1M 1K records (RPG) took 9 1/2
minutes (about 1,750 records per second; pushed CPU to 20%, disks to 25%
busy). Copying said file to another lib: 4 1/2 minutes. Copying to QTEMP:
4 1/2 minutes. Clearing the other lib: 30 seconds. Clearing QTEMP: 30
seconds. Signoff with empty QTEMP: immediate. Signoff with that file in
QTEMP: immediate. This tells me that basic stuff works fine.
Software in use is the old JBA System-21. No source available, but then no
changes were made.
Any ideas? Anyone recognize these symptoms?
This is odd.
- Larry "DrFranken" Bolhuis
This is the Midrange Systems Technical Discussion (MIDRANGE-L) mailing list