On 14-Oct-2016 10:52 -0500, dlclark wrote:
First note that our environment is such that we have upwards of 600
different data libraries with the same data file/table/view names in
them where each data library represents a different "company." Thus,
we can run the same program using different job description-based
initial library lists and achieve processing for a single "company."
We can also run a CL which loops through a master company list and
dynamically changes the library list before calling an RPG program to
run for that particular company. This prevents having to have 600
jobs to do what a single job can also do.
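The driving loop described above might be sketched in CL roughly as follows. This is only a sketch of the technique, not the poster's actual code: the COMPMAST file, the &CMPLIB field, and the ITEMEXCP program are hypothetical names.

```
PGM
DCLF       FILE(COMPMAST)             /* hypothetical master company list */
NEXTCMP:   RCVF                       /* read next company record         */
MONMSG     MSGID(CPF0864) EXEC(GOTO CMDLBL(DONE))
CHGLIBL    LIBL(&CMPLIB QGPL)         /* swap in this company's library   */
CALL       PGM(ITEMEXCP)              /* hypothetical RPG exception pgm   */
GOTO       CMDLBL(NEXTCMP)
DONE:      ENDPGM
```

Each iteration replaces the user portion of the library list, so the same unqualified file names resolve to a different company's data library on each CALL.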

[…]

That said... I have a new process I've created for my current
project that loops through an entire data view, processing
every inventory item (looking for item exceptions). It can
be run either for a single "company" or across all
companies; as described, the CL treats each company as a
"separate" run (meaning, the library list is manipulated between
companies and the associated RPG program is called only once for each
company).

I suppose the description "loops through an entire data view" means that a FETCH from a query of an [item-exception] VIEW is processed? Probably of no consequence, but...


When I run this process for a single company it always works
successfully. I have now been trying to run it across all companies
and I am getting sporadic failures (in different companies on each
attempt): duplicate-key errors on the item exception table that I
maintain as part of this process. However, these duplicate keys are
not logically possible, because at the beginning of processing each
inventory item I do a mass delete of any previous exceptions for
that one item. Hence, it seems that SQL is sometimes reusing an open
data path from a previous company when doing the delete, so the
exceptions for the current company are not getting deleted -- which
then causes a duplicate key when trying to reinsert the same
exception for that same item.
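The suspected failure mode can be sketched with embedded SQL (the ITEMEXCP table and the column names are hypothetical, not from the original post): an unqualified name is resolved through the library list when the statement's ODP is first opened, but if the ODP is kept open across calls -- e.g. under CLOSQLCSR(*ENDACTGRP) -- a later call after the CL has changed the library list can reuse an ODP that still points at the prior company's file.

```
// Mass delete of prior exceptions for one item. The unqualified
// name is resolved via *LIBL at first open; the ODP may then be
// reused on later calls even after the library list changes:
exec sql DELETE FROM ITEMEXCP
          WHERE ITEMNO = :itemNo;
```

Statically qualifying the name (e.g. COMPLIB.ITEMEXCP) would defeat the multi-company design, so the qualification the poster mentions would have to be supplied dynamically, for example via a prepared statement.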

The duplicate-key exception logged by the database [i.e., to be clear, separate from the effectively equivalent SQL /message/ SQLCODE issued in response to the database messaging] states which file was involved. Just review that prior message; its message data would conclusively reveal that the wrong file was referenced, implying almost surely that an existing but incorrect query Open Data Path (ODP) was accessed.
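That evidence could also be captured programmatically rather than by scanning the job log by hand, using GET DIAGNOSTICS after the failing insert. A sketch under the same hypothetical names (ITEMEXCP, :excpRec, :msgText are illustrative):

```
exec sql INSERT INTO ITEMEXCP VALUES(:excpRec);
if sqlcode = -803;                        // duplicate key (SQL0803)
   exec sql GET DIAGNOSTICS CONDITION 1
             :msgText = MESSAGE_TEXT;     // identifies failing table
   // log :msgText, and also check the preceding CPF5009 in the job
   // log; its message data names the file, library, and member
   // actually referenced -- i.e., which company's file was hit.
endif;
```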


Suggestions on how to prove this is what is happening? ...or what
else I should be doing? ...or whether I should try to work around
this problem by forcing library (schema) names into all my SQL
statements or by forcing an end of the activation group between
"companies"?


Issuing Reclaim Activation Group (RCLACTGRP) between companies is probably the simplest sure-fire way to circumvent the issue, if its origin is as suspected.
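In the driving CL this amounts to reclaiming the named activation group after each company's call. A sketch, assuming the RPG program was compiled into a named group such as ACTGRP(COMPANY) -- a hypothetical name; note RCLACTGRP cannot reclaim the default activation group:

```
CHGLIBL    LIBL(&CMPLIB QGPL)       /* next company's data library   */
CALL       PGM(ITEMEXCP)            /* runs in actgrp COMPANY        */
RCLACTGRP  ACTGRP(COMPANY)          /* ends the group, closing SQL   */
                                    /* cursors and any stale ODPs    */
```

Ending the activation group forces full close of the SQL cursors, so the next CALL builds a fresh ODP against the newly current library list.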


