OBJ- QAUOOPT     CONTEXT- VHAMBERG
OBJECT TYPE- SPACE     *FILE
NAME- QAUOOPT    TYPE- 19    SUBTYPE- 01

OBJ- NEWTABLE    CONTEXT- VHAMBERG
OBJECT TYPE- SPACE     *FILE
NAME- NEWTABLE   TYPE- 19    SUBTYPE- 01

And if you go further in the SPLF, there are *MEM (member) and *FMT (format) and other kinds of low-level objects. So a table is still a physical file, with the difference in some attributes. Use DSPFD on either, and you'll see the _SQL type_ attribute in a table and not in a PF.
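For anyone who wants to reproduce that comparison, a quick sketch - the library and file names are just the ones from the dump above, so substitute whatever exists on your system:

  DSPFD  FILE(VHAMBERG/QAUOOPT)  TYPE(*ATR)      /* DDS-created PF: no SQL type attribute shown */
  DSPFD  FILE(VHAMBERG/NEWTABLE) TYPE(*ATR)      /* SQL table: SQL type attribute is shown      */
  DMPOBJ OBJ(VHAMBERG/NEWTABLE)  OBJTYPE(*FILE)  /* produces a spooled dump like the one above  */

Both come back as object type *FILE; only the attribute section differs.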
I might add that my understanding of the differences between DDL and DDS is limited. However, one of the benefits might be to identify logicals and indexes that are used infrequently. Those might be candidates for views (sketched below). That would be one step in reducing overhead on a large physical file.
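To make that concrete, a minimal sketch - the names (BIGPF, BIGPF01, BIGPF_ACTIVE, STATUS) are invented here. A view carries no access path of its own, so it adds nothing to the index-maintenance cost of writing to the PF, while queries through it can still use whatever indexes remain:

  DLTF   FILE(MYLIB/BIGPF01)                    /* retire a rarely used keyed LF        */
  RUNSQL SQL('CREATE VIEW MYLIB/BIGPF_ACTIVE AS +
              SELECT * FROM MYLIB/BIGPF +
              WHERE STATUS = ''A''') COMMIT(*NONE)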
On Saturday, April 23, 2022, 10:49:31 AM CDT, Vance Stanley via MIDRANGE-L <midrange-l@xxxxxxxxxxxxxxxxxx> wrote:
We experienced better performance updating a GL history file which had well over a billion records. That was with an older OS; I can't remember which version.
On Saturday, April 23, 2022, 07:41:33 AM CDT, Vern Hamberg via MIDRANGE-L <midrange-l@xxxxxxxxxxxxxxxxxx> wrote:
Now _that_ is a statement that begs for more information! Can you give
more detail on the benefit of going with DDL? If you are thinking of
what happens when updating a PF with LFs, I believe that indexes over
the tables would have the same issue of being updated when changing so
many records.
On 4/23/2022 7:35 AM, Vance Stanley via MIDRANGE-L wrote:
Maybe time to switch to DDL on that table.
On Friday, April 22, 2022, 02:23:00 PM CDT, x y <xy6581@xxxxxxxxx> wrote:
IBM"s advice in the past (IIRC) was to remove the LF members if you're
changing more than ~15% of the records. You can automate this with a
quick-and-dirty CL program: DSPDBR to an outfile, read the file, remove the
members, maintain the PF, read the DSPDBR OUTFILE, add the members back.
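Something along these lines - a sketch only; the library and file names are placeholders, join logicals and error handling are ignored, and the QADSPDBR outfile field names (WHREFI, WHRELI, WHNO) should be double-checked against the model file on your release:

  PGM
    DCLF       FILE(QSYS/QADSPDBR)          /* model outfile format for DSPDBR         */

    DSPDBR     FILE(MYLIB/BIGPF) OUTPUT(*OUTFILE) OUTFILE(QTEMP/DBRLIST)
    OVRDBF     FILE(QADSPDBR) TOFILE(QTEMP/DBRLIST)

  READLOOP:
    RCVF                                    /* one record per dependent file           */
    MONMSG     MSGID(CPF0864) EXEC(GOTO CMDLBL(RELOAD))   /* end of file               */
    /* member is assumed to be named the same as the dependent LF itself               */
    IF         COND(&WHNO *GT 0) THEN(RMVM FILE(&WHRELI/&WHREFI) MBR(&WHREFI))
    GOTO       CMDLBL(READLOOP)

  RELOAD:
    /* CLRPFM / reload the PF here, then a second pass over QTEMP/DBRLIST              */
    /* does ADDLFM FILE(&WHRELI/&WHREFI) MBR(&WHREFI) for each record                  */
  ENDPGM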
On Fri, Apr 22, 2022 at 11:18 AM <smith5646midrange@xxxxxxxxx> wrote:
We have a job that writes about 1.5B records to a PF. When the job starts,
it does a CLRPFM and then starts dumping records into the PF.
We recently added an LF over the PF with no mods to the original program
(the LF will be used later in the stream by something else) and the job went from
approx. 2 hrs to an estimated 11 hrs. We killed it after the normal 2-hour
window with less than 20% of the records written.
While trying to determine why, one thing we noticed is that the job I/O
count was different with and without the LF. We found that without the LF,
the output blocking factor is a pretty decent number of records (I/O count
is WAY less than RRN). However, when we created the LF, the I/O count and
the RRN are almost identical (like blocking is ignored). The LF is not
unique keyed, just a normal keyed logical.
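One place to see those numbers while the load runs is the open-files display for the job (the job qualifiers below are placeholders). Blocking can also be requested explicitly with an override; whether that override is honored while an immediate-maintenance keyed LF exists over the file is exactly the question here:

  WRKJOB  JOB(123456/LOADUSER/LOADJOB) OPTION(*OPNF)   /* open files: I/O count vs. relative record number */
  OVRDBF  FILE(BIGPF) SEQONLY(*YES 32767)              /* request sequential-only, blocked output          */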
I've never noticed this before, but I've never really paid attention to the
I/O count on output files. Is it normal for an LF to eliminate
the blocking factor of the output and make it 1 for 1, or is there something
that we muffed up in the creation of the LF?
I know we can change the LF to *DLY and stuff like that, but we just want to
understand what happened and why, so we know for future reference.
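For reference, the kind of change being alluded to - defer or rebuild the access path around the load instead of maintaining it record by record (the LF name is a placeholder):

  CHGLF  FILE(MYLIB/BIGLF01) MAINT(*REBLD)   /* or MAINT(*DLY), before the load starts            */
  /* ... CLRPFM and the reload run here ...                                                       */
  CHGLF  FILE(MYLIB/BIGLF01) MAINT(*IMMED)   /* access path is then rebuilt instead of maintained */
                                             /* record by record during the load                  */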