How do you determine how large the socket's "pipe" is? I am using select() with a 90-second timeout before a send() to determine whether the socket is available to be written to. Currently, the server program seems to be flooding the socket and timing out.

The application is a download of customer pricing and sends a lot of data (23804 records * 60 bytes). If I watch the batch program with WRKACTJOB, it is in RUN status, then SELW, then it times out. The client side does not have enough time to read all the data from the socket before the server side stops after the timeout.

The odd thing is that the process used to work fine on this machine. It crashed hard about a week ago, and now my sockets process no longer works. If I point the client to another machine, it works just fine: watching the batch program in WRKACTJOB, it varies between RUN and SELW, and the client is able to read all the data from the socket and complete normally.

This leads me to believe there is a buffer (?) definition that controls the amount of data that can be placed on a socket. Any ideas???

Donnie Jobe