


I'm not sure if anyone remembers this thread I started in December, or
cares.  I've got an answer though.

We're running thousands upon thousands of FTP PUT and GET operations on our
server every week.  Ending FTP takes a very long time no matter how we do it
(ENDTCPSVR *FTP or ENDSBS *ALL *IMMED...).

FTP is supported by a number of QTFTPxxxxx jobs in QSYSWRK.  FTP requests
are balanced across these server jobs.  It seems that every time an FTP
session starts (at least for PUTs) it will create a file named QTMFTPDDS in
QTEMP, and add a member with 1 record.  At the end of the FTP session it
will get rid of the file.  Unfortunately FTP uses a CLRLIB QTEMP to get rid
of the file.

We've always secured the major library commands (CRTLIB, DLTLIB, CLRLIB).
Our Unix server for FTP connects using a user profile which does not have
access to the CLRLIB command.  FTP sessions are unable to clear the library,
so nothing gets cleaned up.  Each new FTP session for a QTFTPxxxxx job adds
another 1 record member to the job's QTEMP/QTMFTPDDS file.  We IPL the
system once a week.  Therefore each QTFTPxxxxx FTP support job has a full
week to accumulate thousands of members in its QTEMP/QTMFTPDDS.  When we try
to bring down the system or bring down FTP each one of these QTFTPxxxxx FTP
support jobs has to delete thousands of member objects as it ends.

All we have to do is grant our FTP user access to the CLRLIB command.
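In CL terms that fix is a single authority grant, something like the sketch
below (FTPUSER is a placeholder for whatever profile your FTP clients
actually connect with):

```
GRTOBJAUT OBJ(QSYS/CLRLIB) OBJTYPE(*CMD) +
          USER(FTPUSER) AUT(*USE)
```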

It bugs me though.  It seems kind of strange that the FTP processing program
doesn't keep track of its dependent objects by name.  CLRLIB QTEMP seems
like a brute force way to clean up.  To my mind, the program should do a
DLTF QTEMP/QTMFTPDDS, and perform similar explicit deletes on anything else
it might create on the way.  I don't like the idea that the OS/400 FTP
implementation *requires* that an FTP user have access to the CLRLIB command.
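The cleanup I'd expect the server to do internally is just a sketch on my
part, but it would look something like this -- delete only the object it
created, and monitor for the not-found condition so a clean session doesn't
fail:

```
DLTF FILE(QTEMP/QTMFTPDDS)
MONMSG MSGID(CPF2105)  /* object not found - nothing to clean up */
```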

The PMR hasn't been closed yet.  What are the odds that they'll listen to
me?

-Jim

James P. Damato
Manager - Technical Administration
Dollar General Corporation
<mailto:jdamato@xxxxxxxxxxxxxxxxx>



------------------------------------------

Subject: RE: Ending TCP/IP, FTP jobs in QSYSWRK 
From: Jim Damato 
Date: Tue, 2 Dec 2003 16:14:20 -0600 


Nuts.  The QTFTPxxxxx jobs are not pre-start jobs or autostart jobs.  The
job QSTRTCP is started by a QSYSARB job as TCP/IP is started.  The QSTRTCP
job submits the first QTFTPxxxxx job.  The first QTFTPxxxxx job seems to
submit more QTFTPxxxxx jobs as needed, then these children submit more
children as needed.
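The generic job name makes the chain easy to watch as it grows.  A quick
way to see them (assuming all the server jobs really do start with QTFTP):

```
WRKACTJOB SBS(QSYSWRK) JOB(QTFTP*)
```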

I'm trying to RTFM to find if there are any deeper configuration controls
than CHGFTPA.  I'm also trying to figure out if the guy who wrote the Perl
scripts to process FTP from our Unix hub is doing anything screwy (thousands
of times a day).  Many of these jobs are not actively processing FTP
requests as the system is brought down, so they shouldn't be doing any
database work.  I was seeing some activity against a Q* file in QTEMP which
made me think the jobs were dumping their job logs, even though we've set
the jobs so that they don't generate joblog spooled files.

We might try changing the jobs' logging from LOG(4 00 *NOLIST) to something
with less internal logging.
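For an already-active server job that would be something along these lines
(the job qualifier shown is hypothetical):

```
CHGJOB JOB(123456/QTCP/QTFTP00001) +
       LOG(0 99 *NOLIST)  /* message level 0 - log almost nothing */
```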

And yes, we're going to call IBM so that they can tell us that the system is
working as designed.

-----Original Message-----
From: Andy Nolen-Parkhouse [mailto:aparkhouse@xxxxxxxxxxx]
Sent: Tuesday, December 02, 2003 9:35 AM
To: 'Midrange Systems Technical Discussion'
Subject: RE: Ending TCP/IP, FTP jobs in QSYSWRK


Jim,

I don't have a system to check on right now, but my recollection is that
these QTFTPxxxxx jobs are pre-start jobs.  If these jobs are being used many
thousands of times and then taking forever to clean themselves up, you may
want to look at the pre-start job entry in the subsystem.  One of the
parameters is MAXUSE, which will determine how many times a job can process
requests before it is ended.  On your system, what is the value for this
parameter?  If it is *NOMAX, perhaps you would benefit by setting this to a
hundred or so.  Valid values are between one and one thousand, or *NOMAX.
You might end up incurring some additional overhead during normal
operations, but it might lessen the amount of cleanup required for a
shutdown.  It sounds as though this is a significant source of irritation
for you.

You would use the WRKSBSD (Work with Subsystem Description) command to view
the pre-start job entries and CHGPJE (Change Pre-Start Job Entry) to change
the values which control the jobs.  Because this is an IBM-supplied
subsystem, you might need to re-apply changes following PTFs or upgrades.
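If they do turn out to be pre-start jobs, the change would look something
like the sketch below.  The program name QTFTPSVR is a placeholder -- check
the pre-start job entries shown by WRKSBSD for the actual program:

```
CHGPJE SBSD(QSYS/QSYSWRK) PGM(QSYS/QTFTPSVR) +
       MAXUSE(100)  /* recycle each job after 100 uses instead of *NOMAX */
```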

Regards,
Andy Nolen-Parkhouse

> On Behalf Of Jim Damato
> Subject: RE: Ending TCP/IP, FTP jobs in QSYSWRK
>
> grrr...
>
> We changed the jobs' logging levels and found that we're generating
> thousands of short QPJOBLOG files for each _JOB_.  Each joblog identifies
> an FTP PUT or GET session.  Apparently each of these QTFTPxxxxx jobs
> supports thousands of consecutive FTP sessions.  I wonder whether there's
> some aspect of job cleanup that has grown out of control.





This mailing list archive is Copyright 1997-2024 by midrange.com and David Gibbs as a compilation work. Use of the archive is restricted to research of a business or technical nature. Any other uses are prohibited. Full details are available on our policy page. If you have questions about this, please contact [javascript protected email address].
