Hi Scott, and anybody else who might be following this,

This is like some nightmare.

The script is supposed to send two files every time it runs. I changed the
error number returned for each expect string. I consistently get a failure
waiting for the last sftp> in order to send the quit. The remote site says they
aren't getting the files. I have observed a number of jobs getting fired off
and started wondering if the timeout value may be a factor, as the files are
rather large. So, I have increased the timeout several times. Currently, it
is set to 900. I see a lot of spawned jobs, and all are waiting.

What I don't see is anything that explains why the remote site is getting
nothing.

Is there a way to put the output of the put commands into a log? All I see at
this point is the commands as sent by expect. The last command that is sent is
a put, not a quit. Looking at the submitted jobs, it looks like both file
transfers are running concurrently, and something just times out. I have no
idea what an appropriate timeout value should be, since I have no idea just how
big the files are.
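
One way to capture this might be Expect's log_file command, which records
everything the spawned sftp reads and writes, including the output of each
put. A minimal sketch of the top of the generated script (the log path here
is just an example):

#!/usr/local/bin/expect -f
# Append the whole session -- prompts, put progress, error messages --
# to a log file for later review. Use any writable location.
log_file -a /tmp/sftp_session.log
set timeout 900
# set timeout -1 would disable the timeout entirely, which avoids
# guessing a value when the file sizes are unknown.

Calling log_file again with no argument turns the capture back off.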

Do you have any suggestions?

John McKee

Quoting John McKee <jmmckee@xxxxxxxxxxxxxx>:

Hi Scott,

Your example makes it a LOT clearer. I like the fact that each step can
issue a useful exit code.

Thanks Scott!

John McKee

Quoting Scott Klement <midrange-l@xxxxxxxxxxxxxxxx>:

hi John,

You'll want to change your Expect script so that it returns a non-zero
exit status when something fails. The way you have it written, each
line either succeeds or times out -- but you don't tell it what to do
when it times out.

So I'd code it more like this:

#!/usr/local/bin/expect -f
set timeout 20
spawn sftp -v ${USER}@ftp.xxx.com
expect {
    default {exit 2}
    "Connecting to gateway.klements.com..."
}
expect {
    default {exit 2}
    "continue connecting (yes/no)?" {send "yes\n"; exp_continue}
    "assword:"
}
send "${PASSWORD}\n"
expect {
    default {exit 2}
    "sftp>"
}
send "cd expecttest\n"
expect {
    default {exit 2}
    "sftp>"
}
send "put ${PUTFILE}\n"
expect {
    default {exit 2}
    "not found" {exit 3}
    "sftp>"
}
send "put ${PUTFILE2}\n"
expect {
    default {exit 2}
    "not found" {exit 3}
    "sftp>"
}
send "quit\n"
exit 0




The word "default" is a special value in Expect. When either the
spawned utility ends prematurely, or a timeout occurs, the Expect tool
will run whatever you've coded under "default". In the above example,I
had it do an "exit 2", which means the Expect script will terminate with
exit status 2.

So if it sends a password, and that password doesn't work... the next
expect { } group will time-out after 20 seconds, and when that timeout
occurs, it'll do an "exit 2" and cause that exit status of 2 to be
returned to your shell script.
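
If it helps to tell a timeout apart from sftp dying unexpectedly, Expect
also accepts the special patterns "timeout" and "eof" individually instead
of lumping both under "default". A sketch, with arbitrary exit codes:

# "timeout" fires when nothing matches within the timeout period;
# "eof" fires when the spawned sftp ends before a pattern matches.
expect {
    timeout {exit 2}
    eof {exit 4}
    "sftp>"
}

That way the caller can distinguish "hung waiting for a prompt" from
"sftp died".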

The next thing... the shell script needs to return that exit status
back to the CL program. So you can't simply end the shell script
normally; you need the shell script to get the error and then return it
to the CL.


build_script | /usr/local/bin/expect -f -
exit $?

The "$?" returns the exit status of the last run command. So when you
do "exit $?" it'll take the exit status of the last run command and
return it to the CL program.
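
If you also want a record on the shell side, here's a sketch that captures
the status before passing it along (the log path is just an example):

build_script | /usr/local/bin/expect -f -
rc=$?
# Note the result somewhere visible, then hand the same status to the CL.
echo "expect ended with exit status $rc" >> /tmp/sftp_run.log
exit $rc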

Hope that helps.


John McKee wrote:
Hi Scott,

I had suspected that the return value was from the expect command. I have
located a web page of Tcl commands. Not sure how to get the return value back
to a CL program.

I had thought sftp was running asynchronously as it was a spawned process. Not
the same as a forked process, but still a different process.

One thought I had was for the status of the sftp command to be placed in an
environment variable to be tested by the CL program. Odd, but I don't see a CL
command to retrieve the value of an environment variable for testing. What I
did years ago in scripts was put a command within an exit(). I thought that
might work here, but I don't see how to do that with the spawn function being
involved. And I would still need to get the value back to the calling CL. Is
this possible?

John McKee

Quoting Scott Klement <midrange-l@xxxxxxxxxxxxxxxx>:

Hi John,

Not sure why you think it's running asynchronously...

You aren't running sftp -- not directly, at any rate -- you're running
expect. Your code is checking whether expect ended normally, NOT
whether sftp ended normally. So if sftp fails, you have nothing in place
to detect it.

You need to code expect to look for the failed sftp, and then notify the
calling CL of the failure.
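
One way to do that is Expect's wait command, which reports the spawned
process's own exit status once it has ended. A sketch of the tail end of a
script, assuming sftp exits normally after the quit:

send "quit\n"
expect eof
# wait returns a list: {pid spawn_id os_error_flag status}.
# Element 3 is sftp's own exit status; pass it back to the caller.
set result [wait]
exit [lindex $result 3]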



John McKee wrote:
Some issues have arisen. I don't know if there is a way to detect them.

A coworker sends huge files, 500M+, every day. Yesterday, the expect script did
not even make it past the password prompt. No file transfer happened. The
expect script ran to the bottom. There was code to retrieve the status of the
last command. But it was zero, so no error.

Today, same script, different huge data files. Script runs to completion. No
idea if the data files arrived on the remote site. Below is my expect script,
slightly edited to remove the actual destination:

#!/bin/sh

build_script() {

cat <<End-of-message
#!/usr/local/bin/expect -f
set timeout 20
spawn sftp -v ${USER}@ftp.xxx.com
expect "Connecting to ftp.xxx.com...\r\n"
expect "password:"
send "${PASSWORD}\n"
expect "sftp>"
send "put ${PUTFILE} \n"
expect "sftp>"
expect "sftp>"
send "put ${PUTFILE2} \n"
expect "sftp>"
send "quit\n"
exit
End-of-message

}

build_script | /usr/local/bin/expect -f -



Is there something there that causes the actual file transfer to occur
asynchronously, thus not allowing the status to be known to the original
caller?

This is the CL that calls the sFTP script:

CALL QP2SHELL PARM('/usr/bin/sh' '-b' +
'/usr/local/bin/xxx.scr')

/* Check for sFTP error */
CALL PGM(QUSRJOBI) PARM(&RCVVAR +
&RCVVARLEN +
'JOBI0600' +
'*' +
' ' )
IF (%BIN(&RCVVAR 109 4) *NE 0) DO

With the inclusion of the '-b', I am guessing that is why the sFTP is running
asynchronously. My guess. It looks like the only thing being checked is
whether some part of QP2SHELL has completed, but not the entire script. Is
there some way to pause the CL until the underlying QP2SHELL has completed?

John McKee

--
This is the Midrange Systems Technical Discussion (MIDRANGE-L) mailing list
To post a message email: MIDRANGE-L@xxxxxxxxxxxx
To subscribe, unsubscribe, or change list options,
visit: http://lists.midrange.com/mailman/listinfo/midrange-l
or email: MIDRANGE-L-request@xxxxxxxxxxxx
Before posting, please take a moment to review the archives
at http://archive.midrange.com/midrange-l.





