The library list is the same for both jobs, as they were submitted one
after the other.

Anyway, this has now become an academic question: the original jobs that
were causing the issue have completed, so there is no longer any need to
delete the rows via SQL.

I would still like to know what the issue could have been, in case anyone
has come across something similar before.

Thanks

Vinay

On Sun, Jan 4, 2015 at 7:25 PM, Kirk Goins <kirkgoins@xxxxxxxxx> wrote:

What does the library list look like for the job?

On Sat, Jan 3, 2015 at 2:45 PM, Vinay Gavankar <vinaygav@xxxxxxxxx> wrote:

Hi,

We have a physical file (say pffile) with over 280 million records and no
key on the physical. It has two logical files: one keyed on Ref Number
REFNO and Seq Number SEQNO (lf1file), and another keyed on REFNO and
SEQNO descending (lf2file).
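
As rough SQL-index equivalents (a reconstruction for illustration only;
the actual objects are DDS keyed logical files, and the index names here
are invented):

create index lib1/lf1ix on lib1/pffile (refno, seqno)
create index lib1/lf2ix on lib1/pffile (refno, seqno desc)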

The file is in use by numerous jobs (using logicals).

We want to delete records from the file for particular Reference numbers.

We tried to do it by creating an SQL script and running it in a submitted
job using RUNSQLSTM.
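
For reference, the submission was roughly of this shape (DELLIB, QSQLSRC
and DELREFNO are placeholder names, not our actual objects):

SBMJOB CMD(RUNSQLSTM SRCFILE(DELLIB/QSQLSRC) SRCMBR(DELREFNO) +
           COMMIT(*NONE) NAMING(*SYS)) JOB(DELREFNO)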

The SQL statement is:

delete from lib1/pffile where refno = nnnnnnnn and SEQNO > 0 and SEQNO < 99999

where nnnnnnnn is the REFNO we are trying to delete. REFNO is packed
decimal (15,0) and SEQNO is packed decimal (5,0). We want to leave the
SEQNO = 0 and SEQNO = 99999 rows in the file after the SQL completes.
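
In SQL terms, the relevant columns are roughly as follows (again a
reconstruction for illustration; the real file has more columns):

create table lib1/pffile (
  refno dec(15, 0) not null,
  seqno dec(5, 0) not null
  -- remaining columns omitted
)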

Even while the SQL is running, one of the jobs is trying to write records
to the file for the above REFNO: it gets the highest SEQNO for the REFNO,
adds 1 (which makes SEQNO 0) and tries to write. An exception occurs,
which is trapped, and the program continues.
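
In SQL terms, the read that job performs is roughly equivalent to this
(nnnnnnnn as above; the job itself presumably uses record-level access
through one of the logicals):

select max(seqno)
  from lib1/pffile
 where refno = nnnnnnnn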

Killing the job which is doing this is not an option (at least not a
preferred one).

We want to run this for multiple REFNOs.

For one value of REFNO, the SQL completed in a few seconds, even before
we could get to see how the submitted job was doing. So the assumption is
that it used the access path of the logical and deleted the required rows
(99,998 rows).

But for other values of REFNO, the SQL job starts searching the full
table, ignoring the access path. We can see the open files for the job
going through the physical file, starting with the highest RRN available
and working backwards (at least that is how it appears from the RRNs of
the open files).
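
One way we could check which access method was chosen is to run the
statement with debug active; the optimizer then logs its decisions in the
job log (e.g. CPI4328 when a keyed access path is used, CPI4329 when
arrival sequence, i.e. a full scan, is used):

STRDBG UPDPROD(*YES)
RUNSQLSTM SRCFILE(DELLIB/QSQLSRC) SRCMBR(DELREFNO) COMMIT(*NONE)
ENDDBG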


We tried different things, like using "delete from lib1/lf1file....", but
the same thing happens.

Any ideas on why this could be happening, and how to force the access
path to be used?

Vinay
--
Kirk