I redid some time trials for DTACPR on SAVOBJ. I took a 26 GB data file
and saved it to a save file, timed each run, and checked the size of the
save file on completion.

Object      Size (bytes)      Time (mm:ss)
data file   26,811,564,032
*NO         29,963,624,448    06:17
*MEDIUM      4,843,528,192    15:44
*HIGH        4,844,969,984    28:57
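
For scale, computed from the sizes above: *MEDIUM compresses the save
file roughly 6:1 compared to DTACPR(*NO), or about 5.5:1 against the
source file, while *HIGH produces a save file about 1.4 MB larger and
takes nearly twice as long (28:57 versus 15:44).
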
Why use *HIGH if it makes a bigger save file (freshly created) and takes
significantly longer than *MEDIUM? No, I didn't do anything differently,
like saving access paths on one run and not the other. The data only gets
updated once a week, and today is not the day. Does anyone at IBM check
these numbers? Is it different for other object types, like journal
receivers or management collections, versus files?

SAVOBJ OBJ(IFSLIST) LIB(ROUTINES) DEV(*SAVF) OBJTYPE(*FILE) SAVF(ROB/ROB)
DTACPR(*HIGH)
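
For reference, here is one way a single trial could be scripted in CL.
This is just a sketch: it assumes the save file ROB/ROB already exists
and is cleared between runs, and it uses QDATETIME retrievals as one
convenient way to capture start/end times.

PGM
  DCL VAR(&START) TYPE(*CHAR) LEN(20)
  DCL VAR(&END)   TYPE(*CHAR) LEN(20)

  /* Start each run with an empty save file                         */
  CLRSAVF FILE(ROB/ROB)

  /* Start timestamp (QDATETIME is a 20-character timestamp)        */
  RTVSYSVAL SYSVAL(QDATETIME) RTNVAR(&START)

  /* One trial; swap DTACPR(*NO / *MEDIUM / *HIGH) between runs     */
  SAVOBJ OBJ(IFSLIST) LIB(ROUTINES) DEV(*SAVF) OBJTYPE(*FILE) +
           SAVF(ROB/ROB) DTACPR(*MEDIUM)

  /* End timestamp                                                  */
  RTVSYSVAL SYSVAL(QDATETIME) RTNVAR(&END)

  /* Dump the timestamps so elapsed time can be read off later      */
  SNDPGMMSG MSG('Start:' *BCAT &START *BCAT 'End:' *BCAT &END)

  /* Save file size shows up under DSPOBJD with DETAIL(*FULL)       */
  DSPOBJD OBJ(ROB/ROB) OBJTYPE(*FILE) DETAIL(*FULL)
ENDPGM

The timestamps on the SAVOBJ completion message in the job log would
work just as well for the timing.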

IBM i 7.1
WRKPTFGRP PTFGRP(*ALL) PTFGRPLVL(*INSTALLED)
PTF Group   Level
SF99710     10229
SF99709        27
SF99708         4
SF99707         1
SF99701         6
SF99637         2
SF99627         2
SF99617         5
SF99572         5
SF99369         4
SF99368         5
SF99367         2
SF99366         2
SF99364         3
SF99363         4
SF99362         9
SF99145         1



Rob Berendt
