Hi Victor,

The idea with e-mail is this: if you abort the e-mail send (disconnect before the session is complete), the e-mail is considered "aborted" and not sent.

So you need to send the whole message, and successfully receive the response that confirms that the server has sent it.

I would not use a sleep() to give it time to send the response, however. I would just call recv() until you've received the whole thing. That way, it takes only the time it needs to take to send the response...
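Here's a minimal sketch of that idea in C (the same socket APIs the RPG program is calling). The function name, buffer size, and details are my own, not anyone's actual code; it relies on the fact that the final line of an SMTP reply starts with a three-digit code followed by a space, while intermediate lines of a multiline reply use a hyphen ("250-..."):

```c
#include <ctype.h>
#include <string.h>
#include <sys/socket.h>

/* Loop on recv() until the reply's final line ("xyz<space>...") has
 * arrived in full.  Returns total bytes read, or -1 on error/overflow.
 * Sketch only -- a real client would also want a timeout. */
static int read_smtp_reply(int fd, char *buf, size_t buflen)
{
    size_t total = 0;
    while (total < buflen - 1) {
        ssize_t n = recv(fd, buf + total, buflen - 1 - total, 0);
        if (n <= 0)
            return -1;                  /* error or peer closed */
        total += (size_t)n;
        buf[total] = '\0';

        /* Find the start of the last complete (CRLF-terminated) line */
        char *line = buf;
        char *end  = strstr(buf, "\r\n");
        while (end != NULL) {
            char *next = strstr(end + 2, "\r\n");
            if (next == NULL)
                break;                  /* line at `line` is the last one */
            line = end + 2;
            end  = next;
        }
        if (end != NULL &&
            isdigit((unsigned char)line[0]) &&
            isdigit((unsigned char)line[1]) &&
            isdigit((unsigned char)line[2]) &&
            line[3] == ' ')
            return (int)total;          /* final reply line seen */
    }
    return -1;                          /* reply too large for buffer */
}
```

The loop returns as soon as the complete reply is there, whether that takes 5 milliseconds or 5 seconds, which is exactly why it beats a fixed sleep().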

Make sense?

On 10/16/2012 8:16 AM, Victor Hunt wrote:
Hi Scott,

I share your feeling on the sleep() call. The program is fairly
straightforward. It opens a socket to our SMTP server, loads the various
SMTP commands into a string (one at a time), and runs a subroutine that does
the send() and recv() calls separated by a sleep() call.

I've been playing with the subroutine a bit. The data from the recv() call,
if any is even returned, is not used by the program. I commented out both
the sleep() and recv() calls. Nothing else was changed. The program no
longer sends an email. After uncommenting the sleep() call, the email
is sent again.

This was late in the day and I didn't have time to do any debugging, which I
will do this morning to see if the send() call is failing for some reason.
Do the send() and recv() calls need to be used together? Can I do a send()
and never a recv()? At least as far as SMTP goes?



On Mon, Oct 15, 2012 at 4:57 PM, Scott Klement wrote:

Hi Victor,

It's hard to see how a sleep() call would help in sending an e-mail? I
could see using delays like this if you're using QtmmSendMail(), since
that program hands the file off to a background job which may take a
moment to handle the file.

But if you're coding an SMTP client (i.e. coding your own SMTP routine
with the socket API) this should be a non-issue? Unless something
strange is going on?


On 10/15/2012 1:45 PM, Victor Hunt wrote:
I've run across an RPG program that uses socket APIs to send an email.
The socket is set up and opened, and the program uses a subroutine to do all
send/receive functions. Just after the send, the subroutine runs the sleep()
API with a parameter of 1 second. It appears the original intent was to
give the email server and/or network time to produce/deliver a response
before the receive API runs. Does anyone think this built-in delay is
really necessary? Depending on what is being sent, this program can run
a very long time with these roughly 1-second delays. Also, the program
doesn't do anything with the received data collected after the send API.
Is it really necessary to do a receive after a send? It seems I can tighten
things up a bit.
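For context, SMTP is a lock-step protocol: the server answers every command, and a client is expected to read each reply (and check its code) before sending the next command. A typical session looks roughly like this (S: server, C: client; the host names and addresses are made up):

```
S: 220 mail.example.com ESMTP
C: HELO client.example.com
S: 250 mail.example.com
C: MAIL FROM:<sender@example.com>
S: 250 OK
C: RCPT TO:<recipient@example.com>
S: 250 OK
C: DATA
S: 354 End data with <CRLF>.<CRLF>
C: (message body)
C: .
S: 250 OK, queued
C: QUIT
S: 221 Bye
```

So the receives aren't optional; the final "250" after the terminating "." is what tells you the server actually accepted the message.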



This is the Midrange Systems Technical Discussion (MIDRANGE-L) mailing list
To post a message email: MIDRANGE-L@xxxxxxxxxxxx
To subscribe, unsubscribe, or change list options,
visit: http://lists.midrange.com/mailman/listinfo/midrange-l
or email: MIDRANGE-L-request@xxxxxxxxxxxx
Before posting, please take a moment to review the archives
at http://archive.midrange.com/midrange-l.
