Hi Mark,

By "negative response code" I assume you mean that the recv() API would return -1, right? (Normally I hear the term "negative response code" in conjunction with IBM technologies like APPC or 5250, not with TCP/IP.)

It's true that sometimes recv() will return -1 if an error occurs. However, you can't rely on this always being the case -- especially over the Internet.

When the Java side times out and closes the connection, a TCP/IP packet containing the FIN flag (or, if the close is abortive, the RST flag) is sent from the Java program to your RPG program, and that packet is what lets your program detect the close: recv() returns 0 after a FIN (orderly close) or -1 after an RST. But... what if that packet never arrives? What if a firewall blocks it? Or a router gets unplugged, and therefore the packet can't be sent? Or your Internet connection goes down? Or someone cuts the network cable?

The point is -- these error indications are handy when they arrive, but you can never rely on them. In fact, even when the close packet does arrive, the first send() after it will often still succeed, because send() only queues the data in the local TCP buffers; the error usually surfaces on a later call. Your recv() API in your RPG program has no 100% safe way to tell the difference between "nothing was sent yet" and "the connection is dead".
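
To make those return values concrete, here's a minimal sketch in C (the same sockets API your RPG program calls through prototypes) of the three outcomes recv() can report -- the buffer size and messages are just placeholders:

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Interpret one recv() call on a connected TCP socket.
   Returns 1 if data arrived, 0 if the connection should be
   treated as finished. */
int read_once(int sock)
{
    char buf[512];
    ssize_t n = recv(sock, buf, sizeof(buf), 0);

    if (n > 0)
        return 1;   /* n bytes of data arrived -- the normal case */

    if (n == 0)
        printf("peer closed the connection (FIN)\n");
    else
        /* n == -1: a hard error, e.g. ECONNRESET after an RST */
        printf("recv() failed: %s\n", strerror(errno));

    return 0;
}

And notice: if the close packet is lost, none of these cases fire -- a blocking recv() just waits forever.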

For that reason, you should ALWAYS implement timeouts on your sockets. Based on your business rules, determine how long it's acceptable to wait -- double that -- and if nothing is received in that time, assume that the connection is dead and start over.
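
If it helps, here's a minimal sketch in C of that pattern, using select() to put an upper bound on the wait (the names recv_with_timeout and timed_out are mine, not an API; the article linked below shows how to do the same thing in ILE RPG):

#include <stddef.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <sys/types.h>

/* Wait up to 'seconds' for data before calling recv().
   Returns bytes read, 0 on orderly close, or -1 on error;
   sets *timed_out and returns 0 when the deadline expires. */
ssize_t recv_with_timeout(int sock, void *buf, size_t len,
                          int seconds, int *timed_out)
{
    fd_set readable;
    struct timeval tv;

    FD_ZERO(&readable);
    FD_SET(sock, &readable);
    tv.tv_sec  = seconds;
    tv.tv_usec = 0;
    *timed_out = 0;

    /* select() blocks until the socket is readable or tv expires */
    int rc = select(sock + 1, &readable, NULL, NULL, &tv);
    if (rc < 0)
        return -1;        /* select() itself failed */
    if (rc == 0) {
        *timed_out = 1;   /* nothing arrived in time */
        return 0;
    }
    return recv(sock, buf, len, 0);
}

If timed_out comes back set, don't keep waiting on that socket -- close it and start over, exactly as described above.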

Here's a link to an article I wrote that describes how to do timeouts on sockets in ILE RPG:
http://systeminetwork.com/article/timeouts-sockets-0

mgarton@xxxxxxxxxxxxxxx wrote:
I have a socket client written in Java that sends a message to a socket
server running on the iSeries written in RPG. The socket server is the
spawning socket server based on Scott Klement's code. I am having a
problem where the client times out but the server doesn't realize it. The
client sends a message and times out after 40 seconds if it doesn't receive
a response. The server receives a message, gets data, and sends a
response. Usually it only takes a second or two. On the send of the
response from the server, I check the response code. If the response code
is negative, I assume the client didn't get the response and back out the
change. The Java developer assures me that once the timeout is reached,
the socket connection is ended by the client. I would expect to get a
negative response code on the send from the server, but that is not always
the case. This week we had some system issues causing the server to send
responses two or three minutes later. I would have expected the pipe to
have been broken in that amount of time.

Am I wrong to expect a negative response code on the send from the server?
Is there anything that I can tweak so the server recognizes the pipe is
broken by the client so that it will get a negative response code?

Thanks,
Mark Garton


