MIDRANGE dot COM Mailing List Archive



Home » MIDRANGE-L » July 2008

"Manage jobs" uses 1/3 of System ASP (28GB) - how to clean up?




Hi All,

I ran into a rather interesting situation today at a (possible) new
customer. He is running a Model 170 on V4R5 with a CUM PTF from 2002,
so the situation is already very, very bad.

The problem is that his DASD has been slowly filling up, and he has no
idea how to fix it.

I ran RTVDSKINF and then PRTDSKINF - one third of the 77 GB of
available storage, 28 GB, is used by "Internal object - Manage jobs".
I've looked at bigger systems, and none of them comes anywhere near
28 GB of utilisation.
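For reference, those two steps are plain CL commands (RTVDSKINF can take a while to collect on a busy system; the "Internal object - Manage jobs" line shows up on the system report):

```
RTVDSKINF                      /* Collect disk space statistics        */
PRTDSKINF RPTTYPE(*SYS)        /* Print the system summary report      */
```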

I've searched the web for what might cause this value to rise, but
found few hits. The "AS/400 System Operations" guide was especially
"helpful": "Shows the amount of storage used to manage jobs".

I looked at the number of jobs in the system - the value was around
26,000, which is quite a lot. Most of these jobs were just spooled
files in QEZJOBLOG. I ran a CLROUTQ QEZJOBLOG, and the number dropped
to 600.
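In case anyone wants to do the same cleanup, the commands were simply the following (be aware that CLROUTQ deletes every spooled file in the queue, joblogs included; the job count drops because jobs whose only remaining trace was a joblog disappear from the system):

```
WRKOUTQ QEZJOBLOG              /* Inspect the queue contents first     */
CLROUTQ OUTQ(QEZJOBLOG)        /* Delete all spooled files in it       */
```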

When looking at DSPJOBTBL, I saw that there were two job tables, one
of 16 MB and another of 10 or so MB. I figured it couldn't hurt to run
a CHGIPLA CMPJOBTBL(*NORMAL) and then IPL the system. The IPL took a
bit longer than usual, but as expected, the disk utilisation remained
roughly the same.
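For completeness, that sequence was roughly the following (if I understand the attribute correctly, CMPJOBTBL(*NORMAL) makes the next IPL compress the job tables, which is why the IPL runs longer):

```
DSPJOBTBL                              /* Show job table sizes and usage  */
CHGIPLA CMPJOBTBL(*NORMAL)             /* Compress job tables at next IPL */
PWRDWNSYS OPTION(*IMMED) RESTART(*YES) /* IPL the system                  */
```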

As the system is running V4R5, IBM Support is not an option. The
machine is running RCLSTG through the night, but I don't expect that
to solve anything.
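For the record, RCLSTG needs the system in restricted state, so the overnight run looks like this:

```
ENDSBS SBS(*ALL) OPTION(*IMMED)   /* Bring the system to restricted state */
RCLSTG                            /* Reclaim storage - can run for hours  */
```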

I'm all out of ideas - has anyone seen this problem before? I suspect
a problem with the operating system, since it's nearly a decade old by
now.

Regards,

Lukas







This mailing list archive is Copyright 1997-2014 by MIDRANGE dot COM and David Gibbs as a compilation work. Use of the archive is restricted to research of a business or technical nature. Any other uses are prohibited.