On 20-May-2016 14:57 -0500, Steinmetz, Paul wrote:

On our R&D system, I had the same weekly processes run; the SM3UPD01
LF object usage indicated the object was NOT used.

A restore reorders the [saved] database file network, first to enable restore according to existing and saved dependencies, and then further to maximize sharing so as to minimize space and maintenance requirements. So while save\restore from production to R&D is probably the best method to mimic production, the effect is nuanced. Steps can be taken to maximize the mimicry, e.g. by having used the same scripted creates on the R&D system as on production, and then performing a restore-over to get the data and access paths.

I suggest using the R&D system to test the effect of the RNMOBJ SM3UPD01 *FILE SM3UPDRNM; if the file has an alternative name, then use the SQL RENAME instead, to give a new\different value for both the short-name and the long-name.
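For example [object names per the original post; the SQL form is only a sketch that assumes the LF also carries a long name, so verify the RENAME syntax for the release in use]:

RNMOBJ OBJ(BRFILES/SM3UPD01) OBJTYPE(*FILE) NEWOBJ(SM3UPDRNM)

or, for an SQL-named object [SM3UPD01_LONG is a hypothetical long name]:

RENAME TABLE BRFILES/SM3UPD01_LONG TO SM3UPD01_TST
  FOR SYSTEM NAME SM3UPDRNM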

Knowing the effect of that rename on the application can better direct the effort of finding the origin while running on the production system.
• If the application fails, then that is likely the program\statement causing the /use/; a /use/ due to a direct open of the LFM should fail if the named file is not found -- while there are ways an application could be coded to function irrespective of a rename, that would be very unlikely.
• If the application does not fail, then the /use/ can be inferred [almost assuredly] to be due to a query access path reference. The database monitor and debug joblog messages can help to locate, or at least narrow, the probable reference; a debug sketch follows this list, and the database monitor is discussed further below.
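To get the optimizer debug messages for a suspect job, something like the following [a sketch; the STRSRVJOB job identification is a placeholder for the actual weekly job]:

STRSRVJOB JOB(nnnnnn/USERNAME/JOBNAME) /* placeholder job id */
STRDBG UPDPROD(*YES)
/* let the suspect work run; the optimizer logs CPI43xx        */
/* messages in that joblog naming the access paths considered  */
/* and chosen, e.g. whether SM3UPD01 implemented some query    */
ENDDBG
ENDSRVJOB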

If for whatever reason that would not be done, then I would consider comparing the /map/ of the database file network\relations betwixt the two systems, and introducing successive tweaks to the R&D setup, with each change progressing the restored version toward matching the production version. After each tweak, make a test run of the weekly application again, to see if the usage of that LF starts on R&D as well. If ever the usage starts to occur for that file on R&D, then whatever was the prior change should be responsible for the origin of that effect; investigate whether the effect is sensible. Note: if the usage is per an access path in a query implementation, making the optimizer behave the same on a different system can be challenging; the more identical the two systems, the easier to make the plans match -- the DBF network could become identical yet the /use/ still not occur, because the access plan on the R&D system never changes to match the one on the production system.
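To capture that relations /map/ for comparison, the DSPDBR output directed to a file on each system could be diffed; a sketch [OD_DBR and the target library are arbitrary names]:

DSPDBR FILE(BRFILES/*ALL) OUTPUT(*OUTFILE) +
  OUTFILE(QGPL/OD_DBR) OUTMBR(*FIRST *REPLACE)

Run the same on both systems, then compare the rows [e.g. with CMPPFM against a transferred copy, or a query over both copies] to find the dependency differences worth tweaking.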


Since object auditing didn't tell me anything on Production LPAR,

Seriously consider adding journaling to the underlying PFs to catch open\close entries. Even if the /use/ is indirect, each program that opens the PF to access the data [irrespective of being via an LF or query] will be logged with a timestamp. I do not recall if the usage data for a member is updated on close or open, but the usage for an Access Path by a query occurs upon each use of an access plan for which that AccPth was used to implement and for which there was a full-open of a query ODP; probably also when [re]optimization builds such a plan, though mostly that plan is then immediately used to open.
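A sketch of that setup [receiver and journal names are arbitrary, and SM3UPDPF is a hypothetical based-on PF for SM3UPD01 -- adjust to the actual file(s)]:

CRTJRNRCV JRNRCV(BRFILES/OPNCLORCV)
CRTJRN JRN(BRFILES/OPNCLOJRN) JRNRCV(BRFILES/OPNCLORCV)
STRJRNPF FILE(BRFILES/SM3UPDPF) JRN(BRFILES/OPNCLOJRN) +
  OMTJRNE(*NONE) /* *NONE ensures open\close entries are logged */

and later, to review the open\close activity [journal code F, entry types OP\CL]:

DSPJRN JRN(BRFILES/OPNCLOJRN) JRNCDE((F)) ENTTYP(OP CL)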

I've created a scheduled job that runs every minute.

DSPOBJD OBJ(BRFILES/SM3UPD01) OBJTYPE(*ALL) DETAIL(*FULL)
OUTPUT(*PRINT)

I can think of a few worse options; very few. Doubtful there would be any other object types by that name, but to prevent unwanted output, OBJTYPE(*FILE) is almost surely desired in this scenario.

And if done that way instead of as just one never-ending job, at least use OUTPUT(*OUTFILE) with OUTMBR(*FIRST *ADD) on those requests, being sure to set the SIZE() on the [pre-created] copy of QADSPOBJ used as the output file (OUTFILE) to which the Display Object Description output is directed [instead of a spooled file]. Even run every second for five days [in one job; definitely not separate jobs], the effect is fewer than .5M records, against which a simple group-by query can summarize to get the second of each change... IMO, much better than an effective binary-search using DSPSPLF over perhaps some 7K spool files, looking for the minute(s) a change occurred.
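A sketch of such a summary query [field names are as I recall from the QADSPOBJ model file -- ODUDAT the last-used date, ODDDAT\ODDTIM the date\time the DSPOBJD ran -- so verify against the actual outfile]:

SELECT ODUDAT, MIN(ODDDAT CONCAT ODDTIM) AS FIRST_SEEN
  FROM QGPL/OD_USE
  GROUP BY ODUDAT
  ORDER BY FIRST_SEEN

Each result row gives a distinct last-used-date value and the first second the collector observed it; the change of interest is the first row showing the new date.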

Perhaps as just one submitted job, a CALL of a program created from the following source [compiled _after_ updating the initial values]:

pgm /* poll DSPOBJD usage data for one file, once per second */
dcl &dsp_libr *char 10 'BRFILES'
dcl &dsp_file *char 10 'SM3UPD01'
dcl &out_libr *char 10 'QGPL' /* reset value before crtclpgm */
dcl &out_file *char 10 'OD_USE' /* reset value before crtclpgm */
/* drop any prior copy of the outfile; ignore not-found */
dltf &out_libr/&out_file
monmsg cpf2105
/* the first request creates the outfile from the QADSPOBJ model */
dspobjd &dsp_libr/&dsp_file *file output(*outfile) +
  outfile(&out_libr/&out_file) outmbr(*first *add)
/* allow ~5 days of once-per-second rows (432K < 500K) */
chgpf &out_libr/&out_file size(500000)
/* loop forever; end the job to stop collecting */
dowhile '1'
  dspobjd &dsp_libr/&dsp_file *file output(*outfile) +
    outfile(&out_libr/&out_file) outmbr(*first *add)
  dlyjob 1
enddo
endpgm
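And to run it as that one submitted job [program name ODUSEWATCH and library are hypothetical]:

SBMJOB CMD(CALL PGM(QGPL/ODUSEWATCH)) JOB(ODUSEWATCH)

Ending the job (ENDJOB) stops the collection; the accumulated outfile rows remain for the summary query.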


Next week, on 5/25, with our next weekly process, I will have a time
stamp to the minute showing when this object is used.
Using DSPLOG, I should be able to match the timestamp to the
processes that run, and so determine which process is
causing the object usage.

That could be far from worthwhile, because processes\jobs are not simple events; they span time. Given I know the minute the change occurred, I do not know that the prior and next minutes are when a job started and\or ended. A contrived /bad/ [though not worst] case scenario: if 25 jobs started at T₀ and 25 jobs started at T₁, both sets continue processing until T₃, and the usage occurred at T₂, then how much help is knowing that those processes were running, in combination with the time of the change? If the processes are assured to run to completion in less than a minute, and esp. if run sequentially instead of concurrently, then there could be some value; that just seems improbable.

Much better than DSPLOG, for which I presume there will be only job-start and job-end info, would be logging in the application, if any can be turned on, to narrow the work\phases to the time found for the change. Or, as already noted, journaled OpnClo entries could be helpful.


Any thoughts from the group?


If the open is believed to be a query open instead of a direct open, e.g. if by the rename test there is no failure for the app, then consider running the database monitor [STRDBMON] for detailed entries, from before the start of the weekly-run until after completion [or until after the /use/ is updated]. An every-second review for the /use/ over the same time period would allow for a much more granular and IMO worthwhile data-set over which to match the change time to the time of the query work; likely enabling the narrowing quite quickly, to a program and query statement.
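A sketch of that collection and review [the monitor file name WEEKMON is arbitrary; the FTRFILE filter and the QQ-prefixed monitor columns should be verified for the release in use]:

STRDBMON OUTFILE(QGPL/WEEKMON) JOB(*ALL) TYPE(*DETAIL) +
  FTRFILE(BRFILES/SM3UPD01)
/* ... the weekly run ... */
ENDDBMON JOB(*ALL)

then locate the statements and jobs near the change time, e.g.:

SELECT QQJOB, QQUSER, QQJNUM, QQSTIM, QQ1000
  FROM QGPL/WEEKMON
  WHERE QQRID = 1000
  ORDER BY QQSTIM

The QQRID=1000 rows carry the SQL statement text (QQ1000) with job identification and a start timestamp (QQSTIM), which can be matched against the second-level /use/ data.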

