Scott,

Thanks for the response. In my case, setting a time-out on the socket
would help and make the application more robust, but I don't think it
fixes the problem entirely. For example, say the server program takes 45
seconds to process and then sends its response, but the client timed out
at 40 seconds and closed the socket. The server program doesn't know
that, so it sends the response thinking all is well. The client then
usually tries again, causing duplicate processing. From your previous
email, I can't rely on the return code from send() to recognize that the
socket was closed by the client. The only thing I know to do is check
whether it has been less than 40 seconds since I received the message
from the client before I send a response.
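
In rough terms, the guard I have in mind looks like this (a C sketch;
the names and the 40-second constant are just illustrative):

    /* Sketch of the elapsed-time guard described above: note the time
       when the request arrives, and only send the response (rather
       than backing out the work) if the client's 40-second window
       hasn't passed yet. */
    #include <time.h>

    #define CLIENT_TIMEOUT_SECS 40    /* illustrative value */

    time_t request_arrived;           /* set right after recv() succeeds */

    int safe_to_reply(void)
    {
        /* only reply if the client should still be waiting */
        return (time(NULL) - request_arrived) < CLIENT_TIMEOUT_SECS;
    }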

Mark Garton



message: 2
date: Tue, 10 Jun 2008 12:29:08 -0500
from: Scott Klement <rpg400-l@xxxxxxxxxxxxxxxx>
subject: Re: Socket Time-out Issues

Hi Mark,

By "negative response code" I assume you mean that the recv() API would
return -1, right? (Normally I hear the term "negative response code" in
conjunction with IBM technologies like APPC or 5250, not with TCP/IP.)

It's true that sometimes recv() will return -1 if an error occurs.
However, you can't rely on this always being the case -- especially over
the Internet.

When the Java side times out and closes the connection, a TCP/IP packet
containing the FIN flag (or, for an abortive close, the RST flag) is
sent from the Java program to your RPG program, and that is what lets
the recv() API in your RPG program detect the closed connection. But...
what if that packet never arrives? What if a firewall blocks it? Or a
router gets unplugged, and therefore the packet can't be sent? Or your
Internet connection goes down? Or someone cuts the network cable?
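
As a side note, the two kinds of close show up differently on the
receiving end. A rough C sketch (the RPG prototypes map to these same
APIs):

    /* Sketch: the three outcomes of recv() on a blocking socket.
       An orderly close (FIN) shows up as a return of 0; an abortive
       close (RST) shows up as -1 with errno set, often ECONNRESET.
       If the close packets never arrive, recv() simply blocks forever. */
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    int read_reply(int sock, char *buf, size_t len)
    {
        ssize_t rc = recv(sock, buf, len, 0);
        if (rc > 0)
            return 0;    /* got data */
        if (rc == 0)
            printf("peer closed the connection (FIN)\n");
        else
            printf("recv() failed: %s\n", strerror(errno));
        return -1;       /* treat both as a dead connection */
    }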

The point is -- these error indications are handy, but you can never
rely on them. The recv() API in your RPG program has no 100% safe way
to tell the difference between "nothing was sent yet" and "the
connection is dead".

For that reason, you should ALWAYS implement timeouts on your sockets.
Based on your business rules, determine how long it's acceptable to wait
-- double that -- and if nothing is received in that time, assume that
the connection is dead and start over.

Here's a link to an article I wrote that describes how to do timeouts on
sockets in ILE RPG:
http://systeminetwork.com/article/timeouts-sockets-0
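
To give a feel for the technique before you read the article: one
common pattern is to call select() with a timeout before recv(). A
minimal sketch, in C rather than RPG (the ILE RPG version calls these
same APIs through prototypes):

    /* Sketch: wait up to 'seconds' for data to arrive before calling
       recv(). Returns -1 on timeout or error, otherwise whatever
       recv() returns. */
    #include <sys/select.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    ssize_t recv_with_timeout(int sock, void *buf, size_t len, int seconds)
    {
        fd_set readset;
        struct timeval tv;

        FD_ZERO(&readset);
        FD_SET(sock, &readset);
        tv.tv_sec  = seconds;
        tv.tv_usec = 0;

        /* select() returns 0 if nothing becomes readable in time */
        int rc = select(sock + 1, &readset, NULL, NULL, &tv);
        if (rc <= 0)
            return -1;   /* timed out (rc == 0) or select() failed */

        return recv(sock, buf, len, 0);
    }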

mgarton@xxxxxxxxxxxxxxx wrote:
I have a socket client written in Java that sends a message to a socket
server running on the iSeries written in RPG. The socket server is the
spawning socket server based on Scott Klement's code. I am having a
problem where the client times out but the server doesn't realize it.
The client sends a message and times out after 40 seconds if it doesn't
receive a response. The server receives a message, gets data, and sends
a response. Usually it only takes a second or two. On the send of the
response from the server, I check the response code. If the response
code is negative, I assume the client didn't get the response and back
out the change. The Java developer assures me that once the timeout is
reached, the socket connection is ended by the client. I would expect
to get a negative response code on the send from the server, but that
is not always the case. This week we had some system issues causing the
server to send responses two or three minutes later. I would have
expected the pipe to have been broken in that amount of time.

Am I wrong to expect a negative response code on the send from the
server? Is there anything I can tweak so the server recognizes the pipe
was broken by the client and gets a negative response code?

Thanks,
Mark Garton
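
As background on why the send() return code can't be trusted here:
send() only reports that the data was queued into the local TCP
buffers, not that the client received it. A small C sketch of the check
being described (illustrative names):

    /* Sketch: a successful send() proves little. send() returns the
       number of bytes queued into the local TCP send buffer; it does
       not wait for the peer. After the client closes, the first
       send() often still succeeds, and only a later one fails
       (typically with EPIPE or ECONNRESET once the RST comes back). */
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    int send_response(int sock, const char *resp, size_t len)
    {
        ssize_t rc = send(sock, resp, len, 0);
        if (rc < 0) {
            /* the "negative response code" case: back out the change */
            printf("send() failed: %s\n", strerror(errno));
            return -1;
        }
        /* rc >= 0 only means the data was buffered locally --
           the client may already be gone */
        return 0;
    }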




