I just noticed near the bottom of your post, where you said:
Program makes a lot of calculations with packed decimal.
There is a big difference in the way ILE C implements support for "packed
decimal" as a "native" data type -- versus the way it is "supported" in ILE
C++ -- in ILE C, the compiler directly generates in-line code (in WCode /
NMI) to work on packed decimal data. In the ILE C++ compiler, support for
"packed decimal" is through a C++ library called "BCD" -- there are a lot
of macros and such defined in the QSYSINC/H include member named BCD ... if
you take a peek at that member, I think you will see that this uses
"operator overloading" in C++ to trick the compiler into calling the
desired library functions when operations are performed on "packed decimal"
data. This difference could easily account for the big differences you are seeing.
And then, you said:
... is there anything that can be done about it?
In C++, you could look into converting the packed decimal data into 64-bit
integers, and then performing any calculations using those long integers.
This would be much faster than using packed decimal arithmetic; you can
then convert back to packed decimal format at the end, if needed. Or, just
stick with ILE C/400 for those programs that require doing a lot of packed
decimal arithmetic.
Hope that helps,
Mark S. Waterbury
On 2/16/2015 9:56 AM, Jevgeni Astanovski wrote:
Does anyone have any idea why a C++ program is significantly slower than C?
Let me explain the situation.
I have a number of APIs that are called via RPC, access some tables,
make some calculations, and return some structures to the caller.
They have all been written in ILE C.
Recently I started to experiment with porting them to ILE C++. There
were a number of reasons why it would be nice.
After I rewrote the first one, I started to measure performance. This
is an issue for me as some of them are called hundreds of thousands of
times per day.
I found that it is approximately 2 times slower than the C version.
I shared the code with colleagues - no one found anything suspicious. However,
one guy came up with a "brilliant" idea - he advised me to try to
recompile my C program with CPP compiler and see the performance.
Guess what? C program compiled with CPP compiler was the same 2 times
slower than C program compiled with C compiler....
And the only modifications that I made for compiling the C code with the CPP
compiler were:
1. Substituted "#pragma mapinc" table structure definitions with those
generated by GENCSRC;
2. Changed char and unsigned char in a couple of places, as C++ is stricter
about type conversions;
3. Made a substitution for "NNNND" constants (like 100D, for example),
as C++ does not support that.
What I saw is that time increase looks like it has a proportional nature.
I tested the API in 3 cases (number of tests is over 100): simple request,
medium request, and complicated request. The timing is:
Simple C: 1ms; Simple C++: 2ms
Medium C: 3ms; Medium C++: 7ms
Long C: 25ms; Long C++: 60ms.
The difference is in the number of records read from the table and the
number of calculations. That is, a simple one retrieves less than 10
records; medium - more than 10; and long - more than 100. The program makes
a lot of calculations with packed decimal.
Any idea why it is like that, and is there anything that can be done about it?
Thanks in advance,
This is the Bare Metal Programming IBM i (AS/400 and iSeries) (C400-L) mailing list.