Hi Rick,

We have an application that monitors a socket for application
information in an XML format.  At varying places in the transmission
we receive a string of '@'.

I've seen this before, but it's ALWAYS an error in the RPG coding. Usually a programmer who is learning sockets for the first time, and didn't do a very good job of debugging their code.

To illustrate what can cause this... consider the following code:

D InputData       s            100a
   .
   .
  len = recv(mySock: %addr(InputData): %size(InputData): 0);
  QDCXLATE( %size(InputData): InputData: 'QTCPEBC');


How many bytes will the recv() API get? It might receive 100 bytes -- if the network speed is fast enough to deliver 100 bytes in one call. But that certainly cannot be counted on. The recv() API does not know that the RPG programmer's world revolves around fixed-length fields in fixed-length records.

It thinks it's reading a network stream. It can return as few as 1 byte per call, or as many as the size I specified (100 in this example) and every time I call it, it can be a different number.

Look at the QDCXLATE API that I call after recv(). Note that I'm translating the WHOLE RPG variable, NOT just the part that I received. See the mistake?

In the preceding example, if I received 80 bytes, then the 80 bytes would look correct, but there'd be 20 @ symbols after those 80 bytes, to fill up the rest of the field! Those @ symbols didn't come from the socket, they came from the blanks that were in the last 20 bytes of my field!

You see, in EBCDIC, x'40' = BLANK. However, I've told QDCXLATE that my field isn't EBCDIC, it's ASCII!! And in ASCII, x'40' is the @ symbol. So when it translates the blank characters, it THINKS they're @ symbols, because that's what they'd be if the field was an ASCII field. When it translates it to EBCDIC, it therefore converts the x'40' characters into the EBCDIC code for @, which is x'7C'.
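To make the effect concrete, here's a minimal C sketch of that mistake. The `ascii_to_ebcdic` table below is a hypothetical three-entry stand-in for QTCPEBC (NOT the real table), and `translate` stands in for the QDCXLATE call; the names are mine, for illustration only:

```c
#include <stddef.h>

/* Hedged stand-in for QTCPEBC: only the code points this
 * illustration needs are mapped; the real table covers all 256. */
static unsigned char ascii_to_ebcdic(unsigned char c)
{
    switch (c) {
    case 0x40: return 0x7C;   /* ASCII '@' -> EBCDIC '@' */
    case 0x41: return 0xC1;   /* ASCII 'A' -> EBCDIC 'A' */
    default:   return 0x6F;   /* anything else -> EBCDIC '?' */
    }
}

/* The mistake: translating every byte of the field, padding and
 * all. An RPG field's trailing EBCDIC blanks are x'40'; run through
 * an ASCII-to-EBCDIC table, each one comes out as x'7C' -- the
 * EBCDIC '@'. Passing the received length instead of the field
 * size leaves the padding alone. */
void translate(unsigned char *field, size_t nbytes)
{
    for (size_t i = 0; i < nbytes; i++)
        field[i] = ascii_to_ebcdic(field[i]);
}
```

Call `translate(field, len)` with the length recv() actually returned, not `translate(field, sizeof(field))`, and the phantom @ symbols disappear.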

So the socket never received the @ symbol. It was my fault for translating the extra blanks at the end of my field, even though they were never received from the socket!

The issue is that we sometimes don't receive the entire XML string.
A co-worker is responsible for this process and tells me that there
is a 32k buffer restriction.  He thinks the '@' string sometimes
pushes the data beyond the buffer size and we lose it.

You didn't tell us which network protocol you're using with your socket. However, I can't think of any network protocol where the preceding paragraph makes sense.

With TCP, if the buffer fills up, the sender gets stuck blocking on the send() API until you receive part of the buffer and make space for more data to come in.

With UDP, data could be lost, but you'd never get anywhere close to 32k!! You'd start losing data around the 500 byte mark, or maybe 1000, or maybe 1500, depending on the MTU of your underlying IP stacks.

And Unix-Domain sockets work very much like TCP... the sender blocks...

The only thing I can think of is that you may be using non-blocking sockets, and only part of the data is sent because of buffer space. If this is the case, your programs are written incorrectly. Don't just assume that there will always be enough buffer space!! Non-blocking sockets MUST take care to re-send anything that wasn't sent the first time. It has nothing to do with a 32k buffer, either. Depending on the link speed, there may be previous data in the buffer, so the available space may be much smaller than the total size of the buffer.
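The resend discipline looks something like this C sketch (the name `send_all` is mine, not an API). Here the retry just loops; a production non-blocking program would wait on poll() or select() before retrying instead of spinning:

```c
#include <errno.h>
#include <stddef.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Keep calling send() until every byte is out, instead of assuming
 * one call moves the whole buffer. On a non-blocking socket, send()
 * can also fail with EWOULDBLOCK/EAGAIN when the buffer is full; the
 * unsent tail must be retried, never dropped. */
ssize_t send_all(int sock, const char *buf, size_t len)
{
    size_t sent = 0;
    while (sent < len) {
        ssize_t rc = send(sock, buf + sent, len - sent, 0);
        if (rc < 0) {
            if (errno == EWOULDBLOCK || errno == EAGAIN)
                continue;          /* buffer full: retry (poll() in real code) */
            return -1;             /* genuine error */
        }
        sent += (size_t)rc;        /* partial send: advance past what went out */
    }
    return (ssize_t)sent;
}
```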

When the programs are properly written, there's no practical limit on the amount of data that can be received over a socket. Just make sure you understand that data is received as a STREAM, not a fixed-length record. If the sender calls send() with 500 bytes, there's no reason to expect that the receiver can call recv() and get 500 bytes in one call. He may get 123 bytes on the first call, then 209 on the 2nd call, then 168 on the 3rd call... they add up to 500, and all 500 will be received with nothing lost, but it's not reasonable to assume that they'll all appear in one call to recv()!! That's just not how TCP works.
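The receive-side loop is the mirror image. A C sketch (the name `recv_exact` is mine, not an API) of collecting a known byte count across however many recv() calls it takes:

```c
#include <sys/socket.h>
#include <sys/types.h>

/* TCP delivers a byte STREAM: keep calling recv() until the
 * expected count has arrived. Each call may return any piece of
 * the stream, from 1 byte up to the amount requested. */
size_t recv_exact(int sock, char *buf, size_t want)
{
    size_t got = 0;
    while (got < want) {
        ssize_t rc = recv(sock, buf + got, want - got, 0);
        if (rc <= 0)              /* error, or peer closed early */
            break;
        got += (size_t)rc;        /* append this piece, ask for the rest */
    }
    return got;
}
```

Whether the sender's 500 bytes arrive as 123 + 209 + 168 or in any other split, `recv_exact(sock, buf, 500)` collects all 500 before returning.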

Though, perhaps you're not using TCP...  you didn't say.

Does this sound right?  Has anyone else encountered this type of
behavior using sockets in the iSeries?

Absolutely.  However, it's always been a programmer error.



This mailing list archive is Copyright 1997-2024 by midrange.com and David Gibbs as a compilation work. Use of the archive is restricted to research of a business or technical nature. Any other uses are prohibited. Full details are available on our policy page. If you have questions about this, please contact [javascript protected email address].
