Martin, I'm also looking for some automatic means to find these runaway
jobs. I've had 2 or 3 of these excessive queries with up to 20GB of temp
storage. I guess I'll first set the MAXSTG parm in the user profile to
something other than *NOMAX (a sketch of that change follows below the
quoted message). Does anybody know where to set limits for files created
by queries?

Oliver

> -----Original Message-----
> From: Martin Rowe [SMTP:martin@dbg400.net]
> Sent: Friday, 16 November 2001 21:41
> To: midrange-l@midrange.com
> Subject: Monitoring runaway jobs (was Re: Oh where has my disk space gone?)
>
> Hi all
>
> One of the problems I've had in identifying what's eating up disk space
> is where some job is quietly consuming temporary space which isn't that
> easy to track down. Just this week our production machine seemed a little
> slow, but nothing seemed to be using a big amount of CPU. One query job
> occasionally hit around 15%, but that's not that unusual. I did notice
> that overall disk space had gone up over 10% from the previous week (a
> lot on a 300GB box). As there didn't seem to be any other likely culprits
> I took a closer look at the query, and saw that the temporary storage for
> the job was over 26GB! After ending the job, the disk space soon settled
> down to its usual level.
>
> That's happened a few times before, so I've now put together a routine
> to check temporary storage use on all jobs in the system. That works
> fine, but what it doesn't do is tell me about big files in QTEMP - which
> I had *thought* would be part of a job's temporary storage.
>
> I already have a nightly job that does a DSPFD *ALL/*ALL to an outfile
> that I can run SQL over to see files by descending size (sketched below),
> and have a stern word with Query users who leave 1GB files in their wake
> ;-) That takes care of the permanent objects, but obviously a DSPFD isn't
> going to pick up on QTEMP objects, so how would you track down where the
> space was being consumed?
>
> I can do this 'by hand' by looking at each likely job and taking option
> 13 (Display library list, if active) from a WRKJOB or DSPJOB, then option
> 5 against QTEMP to see the file sizes, but I haven't seen a command or
> API to duplicate this.
>
> Ideally I'd like to have a background job fire up once every 15 minutes
> or so and scan the system for potential disk hogs, and alert our Ops
> department (a scheduling skeleton is sketched below). Any ideas or
> suggestions welcome.
>
> Regards, Martin
> --
> martin@dbg400.net jamaro@firstlinux.net http://www.dbg400.net /"\
> DBG/400 - DataBase Generation utilities - AS/400 / iSeries Open \ /
> Source free test environment tools and others (file/spool/misc) X
> [this space for hire] ASCII Ribbon Campaign against HTML mail & news / \
>
> _______________________________________________
> This is the Midrange Systems Technical Discussion (MIDRANGE-L) mailing list
> To post a message email: MIDRANGE-L@midrange.com
> To subscribe, unsubscribe, or change list options,
> visit: http://lists.midrange.com/cgi-bin/listinfo/midrange-l
> or email: MIDRANGE-L-request@midrange.com
> Before posting, please take a moment to review the archives
> at http://archive.midrange.com/midrange-l.
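
For anyone wanting to try the same thing, a minimal sketch of the profile
change Oliver mentions might look like this. QRYUSER and the limit are
placeholders only; MAXSTG is specified in kilobytes, so the value below is
roughly 2GB. Whether MAXSTG actually catches a query's temporary result
spaces is worth testing on your own release.

    /* Cap the auxiliary storage the profile can allocate.            */
    /* QRYUSER and the ~2GB value are examples - adjust to suit.      */
    CHGUSRPRF  USRPRF(QRYUSER) MAXSTG(2000000)

    /* Confirm the new setting                                        */
    DSPUSRPRF  USRPRF(QRYUSER)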
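
The nightly DSPFD/SQL combination Martin describes could be set up roughly
as below. The library and outfile names are made up, and the field names in
the SQL are from memory - check the QAFDMBR model file on your release
before relying on them.

    /* Dump member-level file information for the whole system.       */
    /* DSKRPT/FILESIZES is an example library/file of your choosing.  */
    DSPFD      FILE(*ALL/*ALL) TYPE(*MBR) OUTPUT(*OUTFILE) +
                 OUTFILE(DSKRPT/FILESIZES)

    /* Then from STRSQL, something along the lines of:                */
    /*   SELECT MBLIB, MBFILE, MBNAME, MBNRCD                         */
    /*     FROM DSKRPT/FILESIZES                                      */
    /*     ORDER BY MBNRCD DESC                                       */
    /* Ordering by the member's data space size field rather than     */
    /* record count is closer to what Martin does - check the         */
    /* outfile's field reference for the exact field name.            */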
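
As for the 15-minute background check Martin would like, a bare-bones CL
skeleton could look like the following. CHKDSKHOG is a hypothetical program
standing in for whatever temp-storage/QTEMP checking routine you already
have; the loop simply reruns it every 900 seconds and leaves the alerting
(e.g. a SNDMSG to QSYSOPR or your Ops message queue) to that program.

    PGM                                          /* disk hog monitor  */
    /* Re-check the system every 15 minutes until the job is ended.   */
    LOOP:       CALL       PGM(MYLIB/CHKDSKHOG)  /* your checking pgm */
                DLYJOB     DLY(900)              /* wait 15 minutes   */
                GOTO       CMDLBL(LOOP)
    ENDPGM

Assuming the loop is compiled as MYLIB/DSKMON (another placeholder name),
submit it once with something like SBMJOB CMD(CALL PGM(MYLIB/DSKMON))
JOB(DSKMON) and end the job when the monitoring is no longer wanted.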