On 27-Jul-2016 17:21 -0500, Evan Harris wrote:
On 27-Jul-2016 17:09 -0500, dlclark wrote:
It is "simple" enough to use a single RPG program and embedded
SQL to access multiple databases (one local and the rest remote)
for the purpose of collating such information. <<SNIP>>

I considered that and haven't ruled it out, but it does get me into
doing a lot of maintenance on the i and using RPG (which I was
trying to avoid). Deploying compiled programs is a bit more effort
than being able to run a central script or command against multiple
systems. I'd really like to be able to maintain the SQL on a central
machine and just run it against each i and avoid the change control
aspects of doing that.

I suppose I should have stated that one of my goals is a least-touch
approach on the remote/target systems. <<SNIP>>

There need not be any _deployed_ compiled programs; just ensure everything is performed dynamically from a script that is agnostic about the system on which it runs. I used to do essentially the following. The script did identify the /central/ system by name, and that name was globally replaced within the script with the name of the system from which the request to submit the script was initiated; that submitted-from name then served as the system to which the output from the script's activity would be sent upon completion:

Do for each System-N
  If lclScriptModified() or tgtScriptDownLevel(System-N) then
    tgtScriptRefresh(System-N)  /* push the current script to target */
  EndIf
  lclSetLastRun(System-N)  /* Allow seeing only output newer than this */
  tgtScriptSubmit(System-N)  /* run the refreshed script on target */
EndDo
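
As a hedged illustration only: the helper names above are pseudocode, not real commands, and in the CL sketch below the target SYSB, the QGPL/SCRIPTS source file, its COLLECT member, and the LASTRUN data area are all invented. One iteration of that loop might have looked roughly like:

             PGM
             DCL        VAR(&TS) TYPE(*CHAR) LEN(20)
 /* tgtScriptRefresh: push the current script to the target via DDM  */
             CRTDDMF    FILE(QTEMP/SYSBSCR) RMTFILE(QGPL/SCRIPTS) +
                          RMTLOCNAME(SYSB)
             CPYF       FROMFILE(QGPL/SCRIPTS) TOFILE(QTEMP/SYSBSCR) +
                          FROMMBR(COLLECT) TOMBR(COLLECT) +
                          MBROPT(*REPLACE)
 /* lclSetLastRun: note the submit time in a 20-char local data area */
             RTVSYSVAL  SYSVAL(QDATETIME) RTNVAR(&TS)
             CHGDTAARA  DTAARA(QGPL/LASTRUN) VALUE(&TS)
 /* tgtScriptSubmit: run the refreshed script on the target system   */
             SBMRMTCMD  CMD('SBMJOB CMD(RUNSQLSTM +
                          SRCFILE(QGPL/SCRIPTS) SRCMBR(COLLECT) +
                          COMMIT(*NONE)) JOB(COLLECT)') +
                          DDMFILE(QTEMP/SYSBSCR)
             ENDPGM

A real version would, of course, loop over a list of target names rather than hard-coding SYSB.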

The compiled SQL program [with the SQL packages for the application servers] almost never changed, because it merely contained the logic for running the script on each system. Admittedly, that was always just the one script, but there is no reason the idea cannot be applied more generally to accept any ad hoc script. Just verify that the script does what is expected on the central system, and then fire off that script across all of the [other] systems. Rather than having that program await completions to retrieve the output generated on the targets, a request to display the output from the last run would reveal whether the last run had not yet delivered the completed results.
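
A minimal sketch of that display-the-last-run idea, assuming the collected rows land in a hypothetical QGPL/RESULTS file (a real version would compare arrivals per system against the recorded last-run time):

             PGM
             DCL        VAR(&NBRRCD) TYPE(*DEC) LEN(10 0)
 /* How many result rows have the targets delivered so far?          */
             RTVMBRD    FILE(QGPL/RESULTS) NBRCURRCD(&NBRRCD)
             IF         COND(&NBRRCD *EQ 0) THEN( +
                          SNDPGMMSG MSG('No results delivered yet.'))
             ELSE       CMD(RUNQRY QRY(*NONE) QRYFILE((QGPL/RESULTS)))
             ENDPGM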

FWiW: My earlier iteration of the same work being sent to each system had instead used Submit Network Job (SBMNETJOB) with a CL stream job as the script. With that, the script was the newest copy each time, being copied/sent directly; similarly, the script was redacted at the submitted-from system if necessary, with that current system name identifying the location to which the output generated by the script at the target system would be sent. The output was copied using Copy File (CPYF) into DDM files created earlier in the CL stream, naming the Remote Location Name (RMTLOCNAME) as the submitted-from system, per that prior [optional] redaction. I do recall that I would sometimes include inline //DATA members, mostly as source for DDS, CL, or REXX programs, to be used in compiles on the target systems when some work was not easily achieved purely with interpreted CL.
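
For flavor, a sketch of such a CL stream; every name here is invented, and CENTRAL stands in for the submitted-from system name that would have been redacted into the source before submission:

 /* From the central system, per target, something like:             */
 /* SBMNETJOB FILE(QGPL/QCLSRC) MBR(COLLECT) TOUSRID((COLLECT SYSB)) */
 //BCHJOB   JOB(COLLECT)
 /* DDM file pointing back home, per the redacted-in system name     */
             CRTDDMF    FILE(QTEMP/BACKHOME) RMTFILE(QGPL/RESULTS) +
                          RMTLOCNAME(CENTRAL)
 /* ... work that gathers this system's data into QTEMP/RESULTS ...  */
 /* Ship the generated output back to the submitted-from system      */
             CPYF       FROMFILE(QTEMP/RESULTS) TOFILE(QTEMP/BACKHOME) +
                          MBROPT(*ADD)
 //ENDBCHJOB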

