Hi,
Well, I got consistent results. Last night, when this was first posted, I
submitted 21 jobs to generate 1000 spooled files each, specifically
DSPJOBLOG to *PRINT.
They took about an hour to run in a multithreaded environment. The SAV
of the *OUTQ objects in QUSRSYS to the *SAVF this morning took about 27
minutes, and the display of the save file showed three *OUTQ objects saved,
all with the same name.
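(For reference, a rough sketch of that kind of test in CL; the program,
job, and save file names below are just placeholders, not necessarily what
was actually run, and GENSPLF stands in for a small CL program that loops
DSPJOBLOG OUTPUT(*PRINT) 1000 times:
SBMJOB CMD(CALL PGM(MYLIB/GENSPLF)) JOB(SPLGEN01)
SAVOBJ OBJ(*ALL) LIB(QUSRSYS) DEV(*SAVF) OBJTYPE(*OUTQ) +
SAVF(MYLIB/OUTQSAVF) SPLFDTA(*ALL)
)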
This tells me a few things:
1. When they designed saving spooled files, they didn't treat them like
multi-membered source files (which had been my suspicion).
2. They know that people are pack rats when it comes to spooled output,
and they allowed the design to accommodate that.
3. I did note from looking at the save file objects that the spooled files
seemed to be grouped by job, not by time of arrival, which is curious, but
if the design works, who cares.
4. For IBM, the name of the game is creating additional workload on the
system, and this certainly does that.
Clean up results:
Clearing the *SAVF was fast, as I expected.
Clearing the *OUTQ took a long time, but that's likely because there are a
lot of job structures to clean up. I left SPLFACN at its default of *KEEP;
otherwise the system would not behave consistently with older releases. I
consider SPLFACN a bad default to change, whether at the SBMJOB level or
the system value level; it just confuses people. Had I submitted the jobs
with SPLFACN(*DETACH), clearing out the output queues likely would have
gone faster, but I'd rather watch paint dry.
RCLSPLSTG should be fast, but it hasn't run yet.
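(The cleanup steps roughly correspond to commands like these; the save
file and output queue names are just placeholders again:
CLRSAVF FILE(MYLIB/OUTQSAVF)
CLROUTQ OUTQ(QUSRSYS/TESTOUTQ)
RCLSPLSTG DAYS(*NONE)
)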
Al
Al Barsa, Jr.
Barsa Consulting Group, LLC
400>390
"i" comes before "p", "x" and "z"
e gads
Our system's had more names than Elizabeth Taylor!
914-251-1234
914-251-9406 fax
http://www.barsaconsulting.com
http://www.taatool.com
http://www.as400connection.com
CRPence <crp@xxxxxxxxxxxx.ibm.com>
Sent by: midrange-l-bounces@xxxxxxxxxxxx
04/20/2007 06:45 PM
Please respond to Midrange Systems Technical Discussion <midrange-l@midrange.com>
To: midrange-l@xxxxxxxxxxxx
Subject: Re: SAVOBJ *OUTQ Results in multiple items in *SAVF
I was informed by a developer for save/restore that the noted outcome
is working as designed (WAD), and indeed the condition is related to the
number of spool files that are _in_ the output queue when the save is
performed. Additionally, a reference to that part of the design is
noted in APAR SE28660, which can be searched for on the IBM Service
website. That APAR provides a recent fix for a problem restoring from
such output queues. The specific text from the APAR relating to the
condition seen:
... large output queues were saved, and some of the large
output queues are not restored. For the purpose of this problem,
a _large output queue_ is defined as one which had its spooled
files divided into multiple groups when it was saved. A large
output queue is listed more than once when the save/restore
media is displayed (DSPTAP, DSPSAVF, or DSPOPT) to OUTPUT(*).
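As a quick illustration, displaying the save file from the original note
below should show that condition, with the same *OUTQ name listed once
per group of spooled files:
DSPSAVF FILE(ARCHIVES/METASENT) OUTPUT(*)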
Regards, Chuck
-- All comments provided "as is" with no warranties of any kind whatsoever.
PaulMmn wrote:
I ran a SAVOBJ command to save an Output Queue to a Save File.
SAVOBJ OBJ(METASENT2) LIB(QGPL) DEV(*SAVF) OBJTYPE(*OUTQ)
SAVF(ARCHIVES/METASENT) SPLFDTA(*ALL)
I ended up with multiple items in the Save File:
+-------------------------------------------------------------------
| Display Saved Objects
|
| Library saved . . . . . . . : QGPL
|
| Type Options, press Enter.
| 5=Display
|
| Opt Object Type Attribute Owner Size (K) Data
| METASENT2 *OUTQ HQPGPEM 217420 YES
| METASENT2 *OUTQ HQPGPEM 290052 YES
+--------------------------------------------------------------------
The part that has me baffled is why I ended up with multiple objects
in the *SAVF!!
Could it be related to the 11,037 spooled files that were in the output
queue?
Inquiring minds want to know!
Paul E Musselman
PaulMmn@xxxxxxxxxxxxxxxxxxxx