

  • Subject: Re: query problems / reading files
  • From: MacWheel99@xxxxxxx
  • Date: Tue, 15 May 2001 11:12:56 EDT

from MacWheel99@aol.com (Alister Wm Macintyre) (Al Mac)

I find this list to be a great source of education & I try out things I see 
here that I had not previously been aware of.

Rob mentioned DSPDTAARA DTAARA(SF99105)
and I recognized SF99* as the numbering scheme of IBM PTFs,
so I tried it on our system, curious about what Rob meant by the word 
"disparity".
I suspected he meant disparity across Maarten's different systems.

I could not find any DTAARA(SF99105).
I could not find any DTAARA(SF99*).
I tried WRKDTAARA for *ALL in *LIBL, then *ALL in *ALL libraries, then had the 
master security officer try it ... nothing, nada, with any such naming.
I tried WRKOBJ SF99* & got no hits even with *ALL libraries via the master 
security officer.
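
For anyone who wants to repeat the hunt, the checks above boil down to 
something like this (my reconstruction, not the exact keystrokes):

  DSPDTAARA  DTAARA(SF99105)                    /* library defaults to *LIBL  */
  WRKDTAARA  DTAARA(*LIBL/SF99*)                /* generic name in *LIBL      */
  WRKDTAARA  DTAARA(*ALL/SF99*)                 /* generic name, all libraries*/
  WRKOBJ     OBJ(*ALL/SF99*) OBJTYPE(*DTAARA)   /* any *DTAARA named SF99*    */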

Along the way I was reminded that when using *ALL, I really really need 
sometimes an *ALL_EXCLUDING_BPCS

BPCS users will appreciate what I mean by that

So my question is whether these PTF data areas only exist at a particular point 
in the upgrade cycle & ought to be cleaned out after each release is stable.

We are running lots of queries on our system - a model 170
Most of them are Query/400 against BPCS
I am overdue to upgrade V4R3 to V4R5

The only problems we now have are with a query that joins ITH & CMF.
That is to say, problems with run time. We also have problems with people who 
create queries, myself included, who sometimes forget the design limitations 
of Query/400.

ITH is the inventory history file, holding a year's worth of transactions, & we 
add several thousand of them a DAY.
CMF is the cost master file & although the record size is small, it is our 
largest file, with well over a million records.
As I understand query/400, it grabs all records in all files involved, then 
it selects what is needed, as opposed to RPG & SQL which grab only the 
records & fields needed.

Thus there is a practical ceiling on possible combinations of joined input, 
that doubtless is related to our available disk work space & hardware.

We have been trying to remind the users of THIS query that you need to change 
the selection criteria in WRKQRY, then send the sucker to JOBQ, then wait until 
it is finished running before changing the selection criteria to send another 
version to JOBQ, because that way each iteration will complete in minutes & 
not lock up the system.

Meanwhile we have been trying to remind query users in general that you need 
to change selection criteria at run time in RUNQRY & run interactively, because 
we do not want to be altering default selection criteria that affect other 
people, except for queries like the ITH/CMF one, where they HAVE to go via JOBQ 
due to the sheer size of the records joined (both reminders are sketched in 
commands below).

Sometimes the user of the query heeds the second reminder instead of the first.
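
In command terms, the two reminders amount to roughly this (the query & library 
names are made up for illustration):

  /* The big ITH/CMF join: change selection in WRKQRY, then submit to batch  */
  SBMJOB     CMD(RUNQRY QRY(QRYLIB/ITHCMF)) JOB(ITHCMF) JOBQ(QBATCH)

  /* Everything else: run interactively with run-time record selection, so   */
  /* the stored default selection is left alone for other people             */
  RUNQRY     QRY(QRYLIB/SOMEQRY) RCDSLT(*YES)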

I wonder if the volume of records going into Maarten queries is comparable on 
each system, relative to disk work space.

I wonder if queries have *JOBD settings worth reviewing. I mean, where does it 
control the size of the blocks to use?
I know there are some disparities between the BPCS & Query *JOBDs that sometimes 
bite us.
I wonder if Maarten uses software that messes with said *JOBD.
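
For anyone who wants to poke at the same questions, a couple of starting points 
(the job description & file names are just examples, and OVRDBF is only one of 
the places blocking can be set explicitly; it may have nothing to do with 
Maarten's case):

  DSPJOBD    JOBD(QGPL/QDFTJOBD)                /* review the *JOBD the query */
                                                /* jobs run under             */
  OVRDBF     FILE(ITH) SEQONLY(*YES 15000)      /* ask for sequential-only    */
                                                /* blocks of 15000 records    */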

From:   rob@dekko.com

I assume that the 820 and the other machines are all running V4R5, the same
CUM, and, if you run:
DSPDTAARA DTAARA(SF99105)
there is no disparity, correct?

What application are you querying against?  Basically, if it is BPCS or one
of our other applications I could try it for you.

Rob Berendt

==================
Remember the Cole!


                    "Maarten Vries, de"   <m_devries74@hotmai     

We are running lots and lots of queries on our systems.
Recently we have upgraded to V4R5M0 and now we experience very, very long
runtimes of the queries.

You can see that when the query runs, it is reading 300 records at a time
on a model 730 with 2GB main storage.
When I run the query on another system it is reading by blocks of 15000.
The other system is a base model 620.

This problem is happening on lots of systems, but I tested the query on a
model 820 and did not have any problems.

Another branch of ours also runs on a model 620 and there the problem is
occurring. The PTF level is the same and IBM cannot find the solution.
Does this ring a bell to anyone?

Maarten

MacWheel99@aol.com (Alister Wm Macintyre) (Al Mac)


+---
| This is the Midrange System Mailing List!
| To submit a new message, send your mail to MIDRANGE-L@midrange.com.
| To subscribe to this list send email to MIDRANGE-L-SUB@midrange.com.
| To unsubscribe from this list send email to MIDRANGE-L-UNSUB@midrange.com.
| Questions should be directed to the list owner/operator: david@midrange.com
+---
