• Subject: Re: Performance solutions - more than one kind exist
  • From: MacWheel99@xxxxxxx
  • Date: Fri, 9 Jul 1999 02:37:28 EDT

Al Mac also visits a popular soap box.

>  Mac are you saying that this is just IBMs way of getting us to upgrade. ?:)

Well, I thought it was an interesting piece of news, considering the volume of 
folks on BPCS_L who seem to have more than their fair share of performance 
problems associated with BPCS upgrades - problems that supposedly can be 
helped with a faster processor, more disk space, a platform upgrade, 
auto-tuning software, etc.  In some cases people suggest the need for 
additional logicals, which I presume need to be incorporated when the 
relevant programs are recompiled.

We have also encountered the accumulation of deleted records that do not go 
away unless you know the right reorgs to run.  In an earlier thread I cited 
files that had around 100 good records and tens of thousands of deleted 
records, which doubtless contributed to wasted disk space and inefficient 
use of logicals.

I am a bit annoyed that when I use GO CMDRGZ to reorganize the files in QSYS 
that were added by V4R3 and have huge volumes of deleted records, they are 
almost immediately back in the same boat.  But this pales in comparison to 
the BPCS files that I now need to reorganize at least weekly - check out the 
GLD work files & JIT backflushing, for example.

You may have noticed that BPCS has a vast abundance of files - are they all 
used by all installations?  I was recently seeking the best logical to use in 
a new modification, and from the descriptions it looked to me that IIML12 & 
IIML15 were duplications, so I checked the source code & found that you can't 
go by the SSA description.  I have a suspicion that in the absence of a cross 
index of what is really out there, whenever SSA needs a logical for some 
application, it adds another that might duplicate what already exists.  It is 
also very easy to copy source code from a similar need & forget to rewrite 
the text to say what the new version is.

Thus a more promising approach than throwing more hardware at the problem is 
what Unbeaten Path calls "BPCS Lite", and doubtless other consultants have 
similar services.  Check that out on their web site, under Products & 
Services - Enhancements.

http://www.unbeatenpathintl.com/services.html

Basically they take inventory of a client's BPCS files, identify the useless 
baggage that does not need to be there, show us what needs to be regularly 
purged & how to do it, and suggest modifications to help the hogs accelerate.

I also understand that regular AS/400 consultants can do similar tuning with 
the standard IBM performance tools - we capture the data, they analyse it, 
for a price.

Now if they could only do it over the ECS line so that we could get fast 
turn-around on their guidance.

> This is only possible way this can be the case is for SQL to be reading the
> database so inefficently that <snip>

We recently had occasion to dig into the CST900 code because shop orders 
coded for completion were not going away.  It turned out that we had done 
this piece of sabotage to ourselves, but sure enough, here is a program that 
apparently goes through a great abundance of records, instead of using a 
logical to filter out the facilities that are not relevant to the current run.
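The difference can be sketched in Python rather than RPG, with invented field names (these are not actual BPCS record layouts):

```python
from collections import defaultdict

# Hypothetical shop-order records spread across four facilities;
# the field names are invented for illustration.
records = [{"facility": fac, "order": n}
           for fac in ("A", "B", "C", "D")
           for n in range(5000)]

def scan_all(records, facility):
    """Full scan: touches every record, relevant or not."""
    reads, hits = 0, []
    for rec in records:
        reads += 1
        if rec["facility"] == facility:
            hits.append(rec)
    return hits, reads

# A facility-keyed index, playing the role of a logical file.
by_facility = defaultdict(list)
for rec in records:
    by_facility[rec["facility"]].append(rec)

def scan_keyed(index, facility):
    """Keyed access: touches only the current facility's records."""
    hits = list(index[facility])
    return hits, len(hits)

hits_a, reads_a = scan_all(records, "B")
hits_b, reads_b = scan_keyed(by_facility, "B")
assert hits_a == hits_b          # same answer ...
print(reads_a, reads_b)          # ... for 20000 vs 5000 record reads
```

Same result either way; the keyed path simply never reads the three facilities that do not matter to the current run.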

I think the dropping cost of hardware has invited some sloppiness into 
coding.  Something does not need to be coded efficiently if the customer can 
just throw extra hardware at the environment so there is more than adequate 
memory.  This is a PC mentality that has invaded the midrange.  When the cost 
of hardware was astronomical, compared to today's reality, programmers had to 
write tight code to maximize value of that hardware, but now that imperative 
has given way to a desire for fancy features to compete with the other ERP 
providers.  Get it out speedily to market & don't worry about bugs - we can 
always fix them later when the customers do us the service of finding them 
for us.

As/Set streamlines the task of programmers writing creative code, but is it 
any good at writing code that works efficiently?

One of the first manufacturing modifications I had to do, several years ago, 
was on GMD's multi-location inventory tracking for MAPICS/36.  When a human 
entered an inventory transaction, they literally had to wait several minutes 
before the screen returned for the next transaction.  When I finished 
rewriting the code, which took me several months, they had subsecond response 
time on the same hardware.

The problem was that data could be accessed with location or item as the 
primary key, and the addressing scheme did not support alternate indexes 
(the S/36 version of OS/400 logicals).  We had several generic factory 
locations for work-in-progress, and the code was reading every record in WIP 
until it found the matching item - half our items on average, for every 
access.  Logically, any given item was in only a small handful of locations, 
so reading by item first, then looking for a matching location, reduced the 
number of disk reads per transaction from thousands to only 2-3.
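That access-path change can be sketched as a toy model in Python (the record layout and names are invented, not the actual MAPICS/36 file):

```python
from collections import defaultdict

# Toy inventory balances: every item sits in the generic WIP location,
# and even-numbered items also sit in STOCK (layout is invented).
items = [f"ITEM{n:04d}" for n in range(1000)]
balances = [("WIP", it) for it in items] + [("STOCK", it) for it in items[::2]]

def find_location_first(balances, loc, item):
    """Old way: read the location's records sequentially until the item matches."""
    reads = 0
    for rec_loc, rec_item in balances:
        if rec_loc != loc:
            continue
        reads += 1                      # one disk read per record examined
        if rec_item == item:
            return reads
    return reads

# An item-keyed index, standing in for the alternate index the
# S/36 addressing scheme could not provide.
by_item = defaultdict(list)
for loc, it in balances:
    by_item[it].append(loc)

def find_item_first(index, loc, item):
    """New way: read the item's 2-3 location records, then match the location."""
    reads = 0
    for rec_loc in index[item]:
        reads += 1
        if rec_loc == loc:
            return reads
    return reads

print(find_location_first(balances, "WIP", "ITEM0999"))  # 1000 reads
print(find_item_first(by_item, "WIP", "ITEM0999"))       # 1 read
```

The item-first path is bounded by the handful of locations any one item occupies, which is why the read count collapses regardless of how crowded the WIP location gets.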

There are all kinds of ways to screw up a design.

Al Macintyre
+---
| This is the BPCS Users Mailing List!
| To submit a new message, send your mail to BPCS-L@midrange.com.
| To subscribe to this list send email to BPCS-L-SUB@midrange.com.
| To unsubscribe from this list send email to BPCS-L-UNSUB@midrange.com.
| Questions should be directed to the list owner: dasmussen@aol.com
+---

