I haven't tried to optimize the performance of sockets, but I wonder whether increasing the size of the send and receive buffers with the setsockopt() API might be a way to improve performance.
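As a rough sketch of what I mean (in C, using the standard socket API; the 256 KB size is only an illustrative guess, not a tuned value):

#include <sys/types.h>
#include <sys/socket.h>
#include <stdio.h>

/* Try to enlarge a socket's send and receive buffers.
   The 256 KB figure is only an example; the right size would
   have to be found by measuring your own workload.           */
static int set_socket_buffers(int sock)
{
    int size = 256 * 1024;

    if (setsockopt(sock, SOL_SOCKET, SO_SNDBUF, (char *)&size, sizeof(size)) < 0) {
        perror("setsockopt(SO_SNDBUF)");
        return -1;
    }
    if (setsockopt(sock, SOL_SOCKET, SO_RCVBUF, (char *)&size, sizeof(size)) < 0) {
        perror("setsockopt(SO_RCVBUF)");
        return -1;
    }
    return 0;
}

Whether larger buffers actually help would depend on the workload, so it would have to be measured.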
It would surprise me if the socket APIs on IBM i performed poorly compared to those on PCs. Stream-file APIs tend to perform better on IBM i.
In regard to C performing better than RPG, I have found that to be true
when evaluating expressions in IF statements and the like. However, many
RPG built-in functions and operation codes are simply wrappers around
system APIs written in C, and those perform the same whether they are
called from RPG or from C.
It behooved me to do performance testing against YAJL (which is written in
C) when I was writing a JSON parser in RPG. Since parsers rely heavily on
conditional expressions, which perform better in C, I had to find other
design improvements in order to end up with a parser that performed better
overall. For example, a JSON parser has a number of parts that affect
overall performance:
1. Scanning a stream (one byte at a time) and reacting to the "tokens" found
(begin object, end object, begin array, end array, begin name, end name,
begin value, end value, etc.) -- a sketch of this appears after the list.
2. Allocating memory for new nodes.
3. Reading nodes in sequence, or looking up node values by key.
4. Mapping node values to program variables.
5. Freeing memory when requested.
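To give a feel for item 1, here is a rough sketch in C of the kind of
byte-at-a-time scan involved. This is not YAJL's code or mine, just an
illustration of why that step is dominated by conditional logic:

#include <stdio.h>
#include <string.h>

/* Illustrative only: walk a JSON buffer one byte at a time and
   report the structural tokens found.  A real parser would also
   handle strings, numbers, literals, escapes, and nesting state. */
static void scan_tokens(const char *json, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        switch (json[i]) {
            case '{': printf("begin object\n"); break;
            case '}': printf("end object\n");   break;
            case '[': printf("begin array\n");  break;
            case ']': printf("end array\n");    break;
            case ':': printf("name/value separator\n"); break;
            case ',': printf("value separator\n");      break;
            default:  break;   /* whitespace, string/number data, etc. */
        }
    }
}

int main(void)
{
    const char *sample = "{\"name\": [1, 2, 3]}";
    scan_tokens(sample, strlen(sample));
    return 0;
}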
YAJL performed better at item 1, but I was able to come up with a design
that performed much better than YAJL at items 2 through 5.
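I won't reproduce my actual design here, but to illustrate the sort of
thing that helps with items 2 and 5: carving nodes out of larger blocks,
and freeing the whole tree by releasing those blocks, avoids a separate
allocation and free for every node. The sketch below is a generic C
example of that technique, not the code in my parser or in YAJL:

#include <stdlib.h>

/* Generic pool allocator sketch: nodes are carved out of large
   blocks, and the whole pool is released at once instead of
   freeing each node individually.                              */
#define POOL_BLOCK_NODES 1024

typedef struct node {
    struct node *next;        /* example field: sibling pointer */
    const char  *name;        /* example field: member name     */
} node_t;

typedef struct pool_block {
    struct pool_block *next;
    size_t             used;
    node_t             nodes[POOL_BLOCK_NODES];
} pool_block_t;

typedef struct {
    pool_block_t *blocks;     /* initialize the pool as { NULL } */
} pool_t;

static node_t *pool_new_node(pool_t *p)
{
    /* Allocate a fresh block only when the current one is full. */
    if (p->blocks == NULL || p->blocks->used == POOL_BLOCK_NODES) {
        pool_block_t *b = calloc(1, sizeof(*b));
        if (b == NULL) return NULL;
        b->next = p->blocks;
        p->blocks = b;
    }
    return &p->blocks->nodes[p->blocks->used++];
}

static void pool_free_all(pool_t *p)
{
    /* One free() per block, regardless of how many nodes exist. */
    pool_block_t *b = p->blocks;
    while (b != NULL) {
        pool_block_t *next = b->next;
        free(b);
        b = next;
    }
    p->blocks = NULL;
}

With many nodes per block, the per-node cost of steps 2 and 5 largely
disappears, which is one way to make up for slower conditional logic
elsewhere.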