Hello,
Several points follow:
Re: the below, please do not forget to check out IBM's Info APAR II11801 for
BPCS and AS/400 at OS/400 releases from V4R2 and up. There are some very
significant SQL performance PTFs that should be obtained where OS/400 is
doing table scans instead of building temporary access paths or even choosing
existing logicals for particular SQL constructions used in several BPCS
programs. Some PTFs are or were in TEST status, meaning you must contact the
IBM Helpline to receive them if you can't order them via SNDPTFORD.
And BPCS at version 6.0 uses SQL statements instead of standard RPG I/O
(which would get records via logical files) for one reason only -- purely so
that the code can be ported easily to Unix. Unix has no concept of logicals
hardcoded into programs to select certain data from a 'table', and the very
same ADK action diagrams are used to generate BPCS on both AS/400 and Unix.
The problem is that, often, the data models were not reassessed to ensure
that proper logicals existed to fit the anticipated needs of the AS/400 SQL
Optimizer when code changes occurred. Or, worse yet, someone forgot to code
in a WHERE clause that selects only Active records, so the optimizer won't
use the logical file indexed by Active records and instead builds one over
the whole physical file as the job executes (in the WRKACTJOB screen, if you
see lots of "IDX-xxx", where 'xxx' is a file name, an index is being built
on the fly by OS/400). All this is revealed in amazing (horrifying?) detail
inside a DBMON trace, which remains the single easiest way to sort it all out.
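The optimizer behavior described above is easy to demonstrate on any SQL engine. Here is a minimal sketch in Python using SQLite (not DB2/400 -- the table, column, and index names are invented for the demo): a partial index, loosely analogous to a logical file that selects only Active records, is chosen by the optimizer only when the query's WHERE clause matches the index's selection; drop the selection and the query falls back to a full table scan, much like OS/400 building a temporary access path on the fly.

```python
# Sketch only: SQLite stands in for DB2/400; all names here are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, status TEXT, amount REAL)")
# Partial index -- loosely analogous to a logical file selecting Active records.
conn.execute("CREATE INDEX ix_active ON orders (id) WHERE status = 'A'")

def plan(sql):
    """Return the optimizer's access plan for a query as one string."""
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

# WHERE clause matches the index's selection: the optimizer uses ix_active.
with_filter = plan("SELECT id FROM orders WHERE status = 'A' AND id = 5")

# No status filter: ix_active cannot be used, so the whole table is scanned --
# the SQLite analogue of OS/400 building a temporary access path on the fly.
without_filter = plan("SELECT id FROM orders WHERE id = 5")

print("with filter:   ", with_filter)
print("without filter:", without_filter)
```

The same principle is what a DBMON trace exposes on the AS/400: which access path the optimizer picked for each statement, and why.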
Bottom line: if you have a performance problem with 1 or 2 (or 10) particular
programs, and everything else on your system that ever uses SQL is
humming/screaming along quite nicely -- call the SSA Helpline Technical teams
for bug reporting. Many performance BMRs on 6.0.04 are complete and actually
do deliver some much-needed new logical files to speed performance, but
the squeaky wheel gets the grease. That is, if you report it as a bug, are
willing to provide details and/or DBMON evidence that a problem exists in
the SQL, the code, or the delivered logicals, and pursue the BMR fix with
your SSA rep, the problem will eventually be resolved in the code or in a
database change. If no one reports it and we just exchange e-mail ideas for
new logicals on this mailing list, SSA R&D is not going to know it's broken.
As regards the CFINT* jobs, you may actually be getting slightly paranoid
with that statement! Search on the as400service.rochester.ibm.com page on
"CFINT*" and see what it pulls up, such as the following APAR/PTF list:
Document Description:
o (1000) not on CUM.
o (9166) on CUM - Was available as of 06/18/99.
o (9117) on CUM - Was available as of 04/29/99.
APAR PTF CUM Description
MA20579 MF22651 1000 TCP/IP - slow response time
MA20123 MF21988 9166 JAVA performance degradation when optimized
MA20035 MF21861 9117 WRKSYSACT causes CFINT CPU % to be inflated
MA19638 MF22602 1000 QZDASOINIT PJ job looping and will not end
SA68969 SF99104 1000 Database Group PTF (used to order all Database PTFs via
SNDPTFORD - PTFs to be delivered on media only)
And check out this Info APAR: basically, the CFINT jobs are part of the
mechanism that, on various hardware models, controls how much CPU is
dedicated to interactive vs. batch work. There is no big 'secret' here. If
you purchase a 'server model', batch processing is going to take precedence
over interactive. If you size your system for BPCS Full Client Server, and
later in the implementation project decide to switch to Mixed Mode, or
suddenly switch to 'green screen' COM instead of GUI COM, expect to take a
performance hit if you opted to buy a server model and forgot to consider
the workload switch when you decided to implement the software in a new way.
So be aware of this when choosing systems and deciding which kind of BPCS
version 6 to implement! The newer hardware models (the 700 series) are
supposed to be more forgiving this way -- but there seems to be paranoia and
misunderstanding being spread on this mailing list about IBM trying to
'pull the wool over your eyes' on performance. It sounds more like a
misunderstanding of how the hardware is supposed to work, if you read this
APAR:
APAR#: II09200
Component: INFOAS400 - AS/400 Information
Release(s): R360
Abstract
IMPACT OF INTERACTIVE WORK ON SERVER MODEL PERFORMANCE
CFINT1 CFINT2 CFINT3 CFINT4 TASKS
Error Description
IMPACT OF INTERACTIVE WORK ON SERVER MODEL PERFORMANCE
______________________________________________________
The performance of the server models is optimized for client/server and
batch environments at the expense of interactive environments. This means
that as interactive work is added to the server models, the overall system
performance decreases. In environments where only client/server or batch
work is present, the effective performance of a server model is represented
by the non-interactive RPR. However, for mixed environments, the effective
performance of the server model is represented by the range of the
non-interactive and interactive RPRs, depending on the amount of interactive
work present. If the interactive CPU utilization is at most 2% to 10%, the
overall performance is represented by the non-interactive RPR. This means
that the performance of both interactive and non-interactive work is
represented by the non-interactive RPR.
However, if the CPU utilization of interactive work is more than about 2% to
10%, the overall performance decreases for both interactive and
non-interactive work until the performance of both is represented by the
interactive RPR. Therefore, maximum price/performance of the server models
is achieved when interactive work is kept to a minimum.
On some performance reports, additional tasks are listed as compared to
V3R1. The CFINTn task, where n is the processor number (1, 2, 3, or 4),
processes all of the interrupts on the machine.
o On the traditional models (models 400, 500, 510 and 530), the time
  spent in CFINTn may be only a few percent.
o On the server models (which are optimized for client/server work), the
  time spent in CFINTn increases significantly as you add more
  non-optimized work to the system.
  For example, when the CPU utilization for interactive work (i.e.
  non-optimized work) on a 50S is 2%, the CPU utilization for CFINTn is
  only 4%. As the CPU utilization for interactive work increases to 21%,
  the CPU utilization for CFINTn increases to 40%. Finally, as the CPU
  utilization for interactive work increases to 24%, the CPU utilization
  for CFINTn increases to 64%. This sizeable increase in CFINTn
  utilization is a direct result of adding too much non-optimized work
  (in this case, interactive work) to the system.
  Note that all of the server models follow the same trend, but they
  differ significantly from the example. For detailed predictions, use
  BEST/1.
In order to get the best price/performance from a server model, minimize the
amount of non-optimized work on the system. A rule of thumb is to keep the
non-optimized work on the machine to less than 2% of the total CPU
utilization.
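It helps to do the arithmetic on the 50S data points quoted above to see why IBM's rule of thumb is so aggressive. The Python sketch below is nothing more than that arithmetic on the published figures; it is not a predictive model (for real predictions, the APAR itself says to use BEST/1).

```python
# Measured (interactive CPU %, CFINTn CPU %) pairs for a 50S, from APAR II09200.
points = [(2, 4), (21, 40), (24, 64)]

for interactive, cfint in points:
    ratio = cfint / interactive  # CFINT CPU burned per point of interactive CPU
    total = interactive + cfint  # combined cost of the "non-optimized" work
    print(f"interactive {interactive:2d}% -> CFINTn {cfint:2d}% "
          f"(ratio {ratio:.1f}x, combined {total}% of CPU)")
```

Going from 2% to 24% interactive work takes the combined interactive-plus-CFINT load from 6% to 88% of the CPU, and the overhead ratio itself climbs from about 2.0x to 2.7x. The penalty is nonlinear, which is why the stated rule of thumb is to keep non-optimized work under roughly 2%.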
Thanks
In a message dated 7/20/99 7:43:18 PM Central Daylight Time,
fkolmann@revlon.com.au writes:
>
> Dwight Slessman wrote:
>
> > Is the same job consistently slow or does it run quickly sometimes and
> slow
> > other times? We have found many areas of BPCS that use very poor
> > programming techniques in general and SQL in particular.
>
> Dwight, we have a donk that is too big for us. The job times are
> consistent. By this I mean it is not the load on the AS400 affecting the
> job. It is the job itself. Other jobs running at the same time as the
> jobs in question, ECM, are flying along.
>
> > The As/Set case tool (which we have been using since 1992) does not
> > generate the world's most efficient code to begin with. Add to that
> > some "creative" techniques used by SSA and you get some real dogs.
>
> We are also using AS/Set. (Is this really a CASE tool? I thought CASE
> tools made programming easier rather than an arcane art.) It's the SQL
> that SSA use in place of setting up data models, or rather the difficulty
> of modifying data models and having all programs that use the model
> updated, that may be part of the problem.
>
> > We had a sizing problem with our AS/400 and would experience
> > intermittent performance problems. If your performance comes and goes,
> > you could also be experiencing a "governing" effect supplied by IBM.
> > When your job appears to "go to sleep", use WRKSYSACT and check if
> > there are any CFINTxx jobs running and how much CPU they are pulling.
> > If you see multiple CFINTxx jobs running and they are pulling a
> > significant percentage of CPU, you are being slowed down by IBM
>
> Ahh, my paranoia is at an end, no headshrinker required (for now anyway).
> Thanks for the tip.
>
> > upgrade. By the way, we are on BPCS 6.0.04 mixed mode with an AS/400
> > 2178.
>
> Thanks DWIGHT, we are also 6.04 MM and have a 640-2237 on V4R2M0.
>
>
+---
| This is the BPCS Users Mailing List!
| To submit a new message, send your mail to BPCS-L@midrange.com.
| To subscribe to this list send email to BPCS-L-SUB@midrange.com.
| To unsubscribe from this list send email to BPCS-L-UNSUB@midrange.com.
| Questions should be directed to the list owner: dasmussen@aol.com
+---