-----Original Message-----
From:   Hans Boldt [SMTP:hboldt@vnet.IBM.COM]
Sent:   Tuesday, September 30, 1997 6:53 PM
To:     midrange-l@midrange.com
Subject:        RE: "MI" of ILE programs

> Wiesmayr Rudolf <Rudolf.Wiesmayr@mag.linz.at> wrote:
> >It seems that the ILE people don't want us to see the code
> >(p or MI or whatever) generated by the ILE compilers.
> >But what could be the reason for this?

> I think the main reason is that it's just not very useful to see the
> intermediate language.  First, our intermediate language (W-Code,
> which corresponds roughly to MI in OPM) is not easy to read.  Secondly,
> the optimizer in the back-end is fairly robust, even at low optimization
> levels.  So the W-Code may not fairly represent the final machine code.
> The optimizations performed by the ILE back-end are rather good.  Using
> C, you should see good improvements just by compiling with the highest
> optimizations.  (These days, optimizers can often produce better code
> than an experienced assembler programmer.)

> Also:  What algorithms are you using?
I was NOT thinking of sorting. The areas of concern were heavily used string
functions and complex ADT implementations...
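(As an aside, a minimal sketch of the kind of heavily used string-function
hotspot I mean - this example is mine, not from any product code: calling
strlen() in a loop condition rescans the string on every iteration, turning
a linear loop quadratic.)

```c
#include <string.h>
#include <ctype.h>

/* Hypothetical example of a string-function hotspot: the loop
 * condition calls strlen() on every pass, so the loop is O(n^2). */
void upcase_slow(char *s)
{
    for (size_t i = 0; i < strlen(s); i++)      /* rescans s each pass */
        s[i] = (char)toupper((unsigned char)s[i]);
}

/* Hoisting the length out of the loop makes it O(n). */
void upcase_fast(char *s)
{
    size_t len = strlen(s);                     /* measured once */
    for (size_t i = 0; i < len; i++)
        s[i] = (char)toupper((unsigned char)s[i]);
}
```

(An optimizer may or may not hoist the strlen() call itself; that is exactly
the sort of question only measurement can settle.)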

> Another consideration is time:  Is it worth spending a week to get a 5%
> improvement in program performance?
Of course not. Improvement rates must be higher. And they are. My last
tuning session finished with a net reduction of 60% in CPU consumption.
There are so many things to improve when you see what your customers
really do with your products - and how often they use this or that
function. You cannot always anticipate this in the initial version of
a library module...

> Ultimately, the only proper way to see what works is with performance
> testing.
That's clear. And I DO measure before and after each optimization step.
The focus has always been on the leading CPU consumers and/or call
counts, based on measurement, not guesswork.
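(The measure-before-and-after discipline can be sketched in portable C with
the standard clock() timer - my own illustration, not a substitute for a real
profiler, which also gives call counts and per-routine CPU breakdowns.
work() here is a made-up stand-in for a bottleneck routine.)

```c
#include <time.h>

/* Hypothetical workload standing in for a real bottleneck routine. */
long work(long n)
{
    long sum = 0;
    for (long i = 0; i < n; i++)
        sum += i % 7;
    return sum;
}

/* Time one call with the portable clock() timer and return the CPU
 * seconds consumed; the routine's result goes into *out.  Measure,
 * change the code, then measure again under the same conditions. */
double cpu_seconds(long (*fn)(long), long n, long *out)
{
    clock_t t0 = clock();
    *out = fn(n);
    clock_t t1 = clock();
    return (double)(t1 - t0) / CLOCKS_PER_SEC;
}
```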

> Even if you make a change that you *know* must give better
> performance, test it to be sure.
Correct. I too have gotten surprising negative results, caused by incomplete
impact analysis of the change. OTOH, I have also gotten improvements I did
not expect to be of this magnitude.

> The bottom line is simply that these days there are better ways of
> addressing performance concerns than looking at the compiler generated
> code.
OK. But I want at least the chance to examine my bottleneck routines.
There are cases where a quick look at the generated code would make it clear
whether there is room for a better algorithm. If I saw a single instruction
(and I have heard of very powerful instructions that do a lot of work deep
down in the machine) instead of a long sequence, I would not try to
reformulate the HLL code and would move on to the next opportunity...

Regards,
-----------------------------------------------------------------
Dipl.-Ing. Rudolf Wiesmayr   Tel:     +43 - 0732 / 7070 - 1720
Magistrat Linz, ADV/AE       Fax:     +43 - 0732 / 7070 - 1555
Gruberstrasse 40-42          mailto://Rudolf.Wiesmayr@mag.linz.at
A-4041 Linz                  IBMMAIL: at3vs7vs@ibmmail.com
------ http://www.linz.at <<<--- Digital City Linz, Austria -----
- City of Linz: awarded by 'Speyerer Qualitaetswettbewerb 1996' -
