Joel,
One more try:
If I recall, pgmA is designed to handle journal change recs/rows
that look like: j---j+r---r
where j---j is the journal info for a particular changed CUST rec/row
and r---r is the CUST info (I don't know if you process the after image or before & after images).

So, my thought was for your one-time CUST load:
create an SQL join to produce output like j---j+r---r.
To do this you need to join a suitable j---j row with each
CUST row, which would supply the r---r info.

Devilish details:
I did not get much from your post about pgmA's requirements for j---j,
but my swag is that it could be as simple as a single row in a work table.
I think the r---r info is a CUST row, so I suggest you create a custom
outfile for the DSPJRN cmd which includes all fields/columns for
both the j---j and the r---r parts of the output.
If you are lucky, you can find code on your system for both.
Once you have your outfile, load it with your join of the
custom journal info file/table and your CUST file/table.
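In rough SQL terms, the load might look something like this untested
sketch, where JRNOUT (the custom DSPJRN-style outfile) and JRNWORK
(a one-row work table holding the dummy j---j values) are just
placeholder names and the CUST columns come from your earlier post:

INSERT INTO JRNOUT
       (JOENTL, JOSEQN, JOCODE, JOENTT, JOJOB, JOUSER, JONBR, JOPGM, JOOBJ,
        CUSTNAME, CUSTADDR, CUSTCITY)
SELECT J.JOENTL, J.JOSEQN, J.JOCODE, J.JOENTT, J.JOJOB, J.JOUSER,
       J.JONBR, J.JOPGM, J.JOOBJ,
       C.CUSTNAME, C.CUSTADDR, C.CUSTCITY
  FROM JRNWORK J, CUST C;  -- no join predicate on purpose: the single
                           -- j---j row is paired with every CUST row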
I can provide sample code for a DSPJRN outfile we have used
if you contact me off-line.
Best of luck.


-----Original Message-----
From: midrange-l-bounces@xxxxxxxxxxxx [mailto:midrange-l-bounces@xxxxxxxxxxxx] On Behalf Of Stone, Joel
Sent: Tuesday, October 02, 2012 8:33 AM
To: 'Midrange Systems Technical Discussion'
Subject: RE: make cust file look like a journal record - format

To clarify:

I am using DSPJRN and pgmA to ETL our CUST file's daily CHANGES over to a .NET platform.

I need to load the CUST file - ONCE in the life of the system - for an initial load.

To do this ONE initial load I would prefer to

- use the SAME pgmA for this one-time initial load
- NOT have a new RPG or COBOL prep program hanging around that will never be used again
- NOT load 20 million records into the prod journal file and waste all that space and archiving space (by updating each record)
- use OPNQRYF or SQL to take my live CUST file records and load them into the JOESD field of the DSPJRN *OUTFILE
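(If the CUST records were all character fields, it looks like plain SQL
could concatenate them straight into JOESD of the DSPJRN *OUTFILE -
something like the untested sketch below, where JRNOUT stands in for the
outfile and 'R'/'UP' are the journal code and entry type I am assuming
pgmA expects for an update image. It is the packed and zoned fields that
break this, since SQL works with their character form rather than their
internal bytes.)

-- Untested sketch; only works because CUSTNAME/CUSTADDR/CUSTCITY are all
-- character fields totaling 80 bytes. Columns not listed take their defaults.
INSERT INTO JRNOUT (JOENTL, JOSEQN, JOCODE, JOENTT, JOESD)
SELECT 0, 0, 'R', 'UP',
       C.CUSTNAME CONCAT C.CUSTADDR CONCAT C.CUSTCITY
  FROM CUST C;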

It sounds like what I am trying to do is not possible without an HLL. I like Mark W's idea of creating a journal in QTEMP and copying all CUST records into a QTEMP/CUST file. Then I can use pgmA to do my initial load, and QTEMP will disappear with all the journal records.

This CUST situation is ditto'd for about 20 other files, for a total of 20 million records or so. Each file has a different record length and format.






-----Original Message-----
From: midrange-l-bounces@xxxxxxxxxxxx [mailto:midrange-l-bounces@xxxxxxxxxxxx] On Behalf Of Jeff Young
Sent: Monday, October 01, 2012 3:36 PM
To: Midrange Systems Technical Discussion
Subject: Re: make cust file look like a journal record - format

Joel,
If you use the DSPJRN command, then it would not take any more disk space to have a pgm load the file in the same format.

On Mon, Oct 1, 2012 at 4:18 PM, Stone, Joel <Joel.Stone@xxxxxxxxxx> wrote:

"You haven't stated whether your pgmA is using the API or the RCVJRNE
command. "

Neither one of these. I am using the DSPJRN command with OUTPUT(*OUTFILE).

Thanks for the ideas.


-----Original Message-----
From: midrange-l-bounces@xxxxxxxxxxxx [mailto:
midrange-l-bounces@xxxxxxxxxxxx] On Behalf Of Monnier, Gary
Sent: Monday, October 01, 2012 1:54 PM
To: Midrange Systems Technical Discussion
Subject: RE: make cust file look like a journal record - format

It depends upon how often you need to transfer the entire files. If
time is of the essence, maybe not. You could put together the
read/update program in under an hour while maintaining an audit trail
of what was transferred.

You haven't stated whether your pgmA is using the API or the RCVJRNE
command. If pgmA is using the API, you don't really have any choice
but to generate journal entries. If pgmA is using the RCVJRNE command,
you can spoof it, but I don't know what your program or ETL process
will do with duplicate sequence numbers.

But here's a way you could use the RCVJRNE command's exit program to
do what you want. You will still have to write a new program though.

1. Create an F-Spec for each table being transferred.
2. Define external data structures for the tables in question.
3. Define the prototype for pgmA to match the existing pgmA interface.
4. Define a data structure to match the journal data portion of pgmA's
   interface.
   a. Journal header stuff.
   b. Journal entry specific data (this will hold your table's
      record contents).
5. Read each table, but rather than update each record, call pgmA.
   a. Set the mock journal header information. You know what pgmA is
      looking for, so I don't have any suggestions.
   b. Set JOESD to contain the corresponding external data
      structure's contents (JOESD = CUSTDS, for example).
   c. Call pgmA.

If ZC entry types are part of your system's auditing setup, you're set.
If not, think about how you will explain the transfer to your auditors.

Is this what you were looking for?

Gary

-----Original Message-----
From: midrange-l-bounces@xxxxxxxxxxxx [mailto:
midrange-l-bounces@xxxxxxxxxxxx] On Behalf Of Stone, Joel
Sent: Monday, October 01, 2012 11:26 AM
To: 'Midrange Systems Technical Discussion'
Subject: RE: make cust file look like a journal record - format

Yes, but -

Many of these files have millions of records in them - that is a lot
of wasted disk space, I would think - even if only for the several days
we keep journals around.



-----Original Message-----
From: midrange-l-bounces@xxxxxxxxxxxx [mailto:
midrange-l-bounces@xxxxxxxxxxxx] On Behalf Of Monnier, Gary
Sent: Monday, October 01, 2012 1:12 PM
To: Midrange Systems Technical Discussion
Subject: RE: make cust file look like a journal record - format

Joel,

All you have to do is write a program that reads then updates the file
in question. An RPG program that just reads and updates the file
might be faster than an SQL script.

read(e) CUST;
dow not %eof(CUST);
  update(e) CUSTR;  // CUSTR = CUST's record format; rewriting it unchanged forces a journal entry
  read(e) CUST;
enddo;

You don't have to change anything, but by updating each record you
force an entry into the journal from which, you state, your program
reads the changes. No change to your ETL is required.

Gary


> On 10/1/2012 11:19 AM, Stone, Joel wrote:
> I have pgmA which reads cust journal changes and sends them off for an ETL app.
>
> I would also like to be able to send the entire cust file using the same pgmA with no changes.
>
> How can I make the cust record format look just like a journal record (the format that DSPJRN creates)?
>
> For example:
>
> Cust file format:
>
> CustName 30
> CustAddr 30
> CustCity 20
> -----
> Total 80
>
>
>
> I want this to look like a DSPJRN record so I can use the same pgmA to load up all CUST records for an initial load.
>
> Desired format:
>
>                               Field    Field
> Text                          Name     Type   Length
> Length of entry               JOENTL   S      5      ---|
> Sequence number               JOSEQN   S      10        |
> Journal Code                  JOCODE   A      1         |
> Entry Type                    JOENTT   A      2         |
> Date of entry: Job date f     JODATE   A      6         |
> Time of entry: hour/minut     JOTIME   S      6         |-- journal header fields
> Name of Job                   JOJOB    A      10        |
> Name of User                  JOUSER   A      10        |
> Number of Job                 JONBR    S      6         |
> Name of Program               JOPGM    A      10        |
> Name of Object                JOOBJ    A      10     ---|
> .
> .
> .
> Entry Specific Data - Var     JOESD    A      80     <--- I want my CUST data here!!
>
>
>
>
> Most of the journal fields can be empty - but I want my CUST data to be in JOESD. Can SQL or OPNQRYF do this?
>
> I think I need to take fields JOENTL thru JOOBJ and initialize those. Then take all CUST fields CUST.* and string them together (alpha, packed, zoned) into the one JOESD field.
>
> Is this possible without using an HLL?
>
> Thanks!
>
>
>

--
Jeff Young
Sr. Programmer Analyst