I just ran a small "CPU speed benchmark" program written in C, compiled it with CRTBNDC and ran it, then compiled the same source with CRTBNDCPP and ran that version. Both versions reported exactly the same elapsed time. Here is this "benchmark" program:

You can try this program on your system, using both ILE C and ILE C++, to see if you get the same results.

Questions / suggestions:

1. What version of OS/400 or IBM i is this running on?
2. What Target Release (TGTRLS) was specified on the compile and bind?
3. Show the commands used to compile and bind, e.g. CRTCMOD or CRTCPPMOD
and CRTPGM, with any parameters used.
4. Explain how you ran the "benchmark" -- how did you measure the
elapsed times that you reported?
5. Create a "cut down" version of a program that demonstrates the
problem, and post it at -- then reply to
the list with the generated link. (You will also need this for
reporting the problem to IBM.)


Mark S. Waterbury

> On 2/16/2015 9:56 AM, Jevgeni Astanovski wrote:
Hi all.
Does anyone have any idea why a C++ program is significantly slower
than a C program?

Let me explain the situation.
I have a number of APIs that are called via RPC, access some tables,
make some calculations, and return some structures to the caller.
They have all been written in ILE C.
Recently I started to experiment with porting them to ILE C++; there
were a number of reasons why that would be nice.
After I rewrote the first one, I started to measure performance. This
is an issue for me, as some of these APIs are called hundreds of
thousands of times per day.
I found that the C++ version is approximately two times slower than the C version.
I shared the code with colleagues - no one found anything suspicious.
However, one guy came up with a "brilliant" idea - he advised me to
recompile my C program with the C++ compiler and compare the performance.
Guess what? The C program compiled with the C++ compiler was the same
two times slower than the C program compiled with the C compiler....

And the only modifications I made to compile the C code with the C++ compiler were:
1. Substituted the "#pragma mapinc" table structure definitions with
those generated by GENCSRC;
2. Changed char and unsigned char in a couple of places, as C++ is stricter;
3. Made a substitution for "NNNND" constants (like 100D, for example),
as C++ does not support that notation.

What I saw is that the slowdown appears to be proportional.
I tested the API in 3 cases (the number of tests is over 100): a simple
request, a medium request, and a complicated request. The timings are:
Simple C: 1 ms; Simple C++: 2 ms
Medium C: 3 ms; Medium C++: 7 ms
Long C: 25 ms; Long C++: 60 ms

The difference is in the number of records read from the table: a
simple request retrieves fewer than 10 records; a medium one, more
than 10; and a long one, more than 100. The program makes a lot of
calculations with packed decimal.

Any idea why it is like that, and is there anything that can be done about it?

Thanks in advance,



This mailing list archive is Copyright 1997-2019 by David Gibbs as a compilation work. Use of the archive is restricted to research of a business or technical nature. Any other uses are prohibited. Full details are available on our policy page.