
qtemp performance oddity.




Customer with an old 720 4-way, old enough to be running V5R1.

A couple of weeks ago we replaced eight 7,200 RPM 8 GB drives with 10K RPM 17 GB drives. We also added 768 MB of memory, lifting the box to 2 GB, and replaced the 2741 RAID card with a 4778. The disks were drained and added; no save/restore. PTFs are VERY close to the latest available for V5R1.

The system had run for nearly two years without an IPL before the upgrade.

Since the 'upgrade' the system is slower. CPU usage is usually in single digits, paging and faulting numbers are in the low double digits, and disk percent-busy might hit 10 (15 drives total, all 10K and 52% full). WRKSYSACT and Performance Tools reports show nothing odd at all. Seizes and locks are low, and disk response times are very good. IOP and IOA processor utilizations are 10% or less. CFINT shows 0.0 or 0.1% in WRKSYSACT.
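For anyone who wants to re-run the same checks, those numbers came from the standard green-screen commands; a minimal sketch (WRKSYSACT requires Performance Tools, the others are in the base OS):

    WRKSYSACT    /* Top jobs/tasks by CPU; CFINT sits at 0.0-0.1% here */
    WRKSYSSTS    /* Pool-level faulting and paging rates               */
    WRKDSKSTS    /* Per-arm percent busy and percent full              */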

Nothing at all in the LIC logs, problem logs, etc., and yes, the cache battery is good. :-)

Any job that puts any amount of stuff in QTEMP just dogs. Jobs that used to take 30 seconds now take 10 minutes. Users who run tasks that use QTEMP wait more than two minutes at SIGNOFF.

I did some testing. Creating a file with 1M 1K records (RPG) took 9 1/2 minutes (about 1,750 records per second; it pushed CPU to 20% and disks to 25% busy). Copying that file to another library took 4 1/2 minutes; copying it to QTEMP also took 4 1/2 minutes. Clearing the other library took 30 seconds; clearing QTEMP took 30 seconds. Signoff with an empty QTEMP was immediate, and signoff with that file in QTEMP was also immediate. This tells me the basic operations work fine; a sketch of the test sequence follows.
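In CL terms, the test looked roughly like this. PERFTEST (the RPG record writer), TESTLIB, and TESTLIB2 are hypothetical names standing in for what I actually used; the timings in the comments are the observed ones:

    PGM
      CRTPF    FILE(TESTLIB/BIGTEST) RCDLEN(1024) SIZE(*NOMAX)
      CALL     PGM(PERFTEST) PARM('TESTLIB/BIGTEST')  /* 9 1/2 min: 1M 1K recs, ~1750/sec */
      CPYF     FROMFILE(TESTLIB/BIGTEST) TOFILE(TESTLIB2/BIGTEST) +
                 MBROPT(*REPLACE) CRTFILE(*YES)       /* 4 1/2 min */
      CPYF     FROMFILE(TESTLIB/BIGTEST) TOFILE(QTEMP/BIGTEST) +
                 MBROPT(*REPLACE) CRTFILE(*YES)       /* also 4 1/2 min */
      CLRLIB   LIB(TESTLIB2)                          /* 30 sec */
      CLRLIB   LIB(QTEMP)                             /* 30 sec */
      /* SIGNOFF was immediate both with QTEMP empty and with BIGTEST in it */
    ENDPGM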

The software in use is the old JBA System-21. No source is available, but then no changes were made to it.

Any ideas? Anyone recognize these symptoms?

This is odd.





