It could be, yes...

But honestly, if you want to use JDBC to put the data into PostgreSQL...

I'd probably go with a 100% Java application, running on the i, reading from the DTAQ and writing to the table.
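
A minimal sketch of that approach, using the IBM Toolbox for Java (jt400) for the data queue and the standard PostgreSQL JDBC driver for the inserts. The system, queue, credentials, entry layout (10-char table name followed by the row image), and target table are all placeholders, and error handling/reconnect logic is omitted:

import com.ibm.as400.access.AS400;
import com.ibm.as400.access.DataQueue;
import com.ibm.as400.access.DataQueueEntry;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class DtaqToPostgres {
    public static void main(String[] args) throws Exception {
        // Placeholder system and queue names - substitute your own.
        AS400 system = new AS400("MYIBMI", "USER", "PASSWORD");
        DataQueue dq = new DataQueue(system, "/QSYS.LIB/MYLIB.LIB/REPLQ.DTAQ");

        // One long-lived connection to the target - no per-change connect cost.
        try (Connection pg = DriverManager.getConnection(
                 "jdbc:postgresql://mydwh.example.com:5432/dwh", "dwhuser", "dwhpass");
             PreparedStatement ps = pg.prepareStatement(
                 "INSERT INTO staging_rows (source_table, row_image) VALUES (?, ?)")) {
            while (true) {
                DataQueueEntry entry = dq.read(-1);  // -1 = block until an entry arrives
                String data = entry.getString();
                ps.setString(1, data.substring(0, 10).trim());  // assumed: table name first
                ps.setString(2, data.substring(10));            // assumed: row image follows
                ps.executeUpdate();
            }
        }
    }
}

Submitted as a never-ending batch job, both connections stay open, which is the whole point versus connecting on every trigger fire.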

Charles


On Thu, Oct 24, 2019 at 9:21 AM Jay Vaughn <jeffersonvaughn@xxxxxxxxx>
wrote:

now this is some great info, and it relates back to Alan Campin's suggested use of data queues also.

I have to clarify exactly what the triggers would be writing out to the
data queues... of course it would be the before/after image rows, right?
And of course the table name. Am I missing something?

For the running job to maintain the connection, read the data queue(s), and insert to the target... could SK's JDBC API be used in that program? It could, right?

Jay

On Thu, Oct 24, 2019 at 11:06 AM Charles Wilt <charles.wilt@xxxxxxxxx>
wrote:

You really don't want to be talking to AWS from a trigger...

You're looking at a 400ms response for HTTP and probably 800ms for HTTPS...

That's assuming you have to connect, send, and disconnect for every request, which you would for anything but changes being made by a batch job.

Additionally, with HTTPS, if you happen to have lots of jobs connecting at once, you can run into machine gate waits in the DCM (Digital Certificate Manager).

The way I have done this is to have Db2 for i triggers write out to a data queue, then have a background job send the data to AWS.

The background job can keep the connection open and just keep sending as
long as there's another entry on the queue.

In my case, we can have multiple worker jobs pulling off the queue, but in your case, you probably want just a single job to keep the transactions ordered.

You might want multiple queues for sets of tables, or even one queue per table if there are enough changes.
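
The trigger program itself would be an ILE program sending via the QSNDDTAQ API, but the entry it writes is just bytes. Purely to illustrate one possible (assumed) entry layout, here is the equivalent send in Toolbox Java:

import com.ibm.as400.access.DataQueue;

public class SendChange {
    // Assumed layout: table(10) + op(1: I/U/D) + fixed-length before/after images.
    public static void send(DataQueue dq, String table, char op,
                            String before, String after) throws Exception {
        dq.write(String.format("%-10s%c%s%s", table, op, before, after));
        // A FIFO *DTAQ preserves arrival order, so a single consumer
        // replays the changes in the order they occurred.
    }
}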


Charles




On Thu, Oct 24, 2019 at 8:14 AM Jay Vaughn <jeffersonvaughn@xxxxxxxxx>
wrote:

Vern - way too much fun.

So... I'm pretty well versed in what can be developed on the i; I just need to educate myself more on "connection" options and the best overall platform-to-platform connection possibilities.

Like...

The trigger I developed to bail out the symDS could be a pretty viable option in this case: instead of pushing the before/after row to a local symDS table on the i, why couldn't I just push that row directly to the AWS cloud database?
Of course, I have to keep in mind the ability to "re-sync" the two databases at any time, and that would just be another mass cross-platform insert.
I'm probably way oversimplifying this... but isn't it possible just to

- put triggers on the IBM i tables for insert/update/delete events and have the trigger pgm connect to the remote database (in this case AWS PostgreSQL)
- when a resync is needed, simply run a process to mass-distribute every row to the target database? (A sketch of that bulk copy follows this list.)
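
A sketch of that resync path: read every row from the Db2 for i table over the Toolbox JDBC driver and batch-insert into PostgreSQL. Table, column, and connection details are placeholders:

import java.sql.*;

public class Resync {
    public static void main(String[] args) throws Exception {
        try (Connection src = DriverManager.getConnection(
                 "jdbc:as400://MYIBMI", "USER", "PASSWORD");          // jt400 JDBC driver
             Connection tgt = DriverManager.getConnection(
                 "jdbc:postgresql://mydwh.example.com:5432/dwh", "dwhuser", "dwhpass")) {

            tgt.setAutoCommit(false);
            try (Statement s = src.createStatement();
                 ResultSet rs = s.executeQuery("SELECT ID, NAME, AMT FROM MYLIB.MYTABLE");
                 PreparedStatement ins = tgt.prepareStatement(
                     "INSERT INTO mytable (id, name, amt) VALUES (?, ?, ?)")) {

                int n = 0;
                while (rs.next()) {
                    ins.setInt(1, rs.getInt(1));
                    ins.setString(2, rs.getString(2));
                    ins.setBigDecimal(3, rs.getBigDecimal(3));
                    ins.addBatch();
                    if (++n % 1000 == 0) { ins.executeBatch(); tgt.commit(); }
                }
                ins.executeBatch();  // flush the final partial batch
                tgt.commit();
            }
        }
    }
}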

tia

Jay

On Thu, Oct 24, 2019 at 9:32 AM Vernon Hamberg <vhamberg@xxxxxxxxxxxxxxx>
wrote:

Hi Jay

Too much fun, eh?

Products such as SEQUEL have the ability to export data by various
means, including FTP. They can also communicate with non-IBM i RDBMS
directly using JDBC.

You might look at generating things like XML or JSON - IBM i SQL has several functions for that, and they can be executed from embedded SQL in RPG. There are also tools like CGIDEV2 and Scott Klement's JSON tooling, if you want to roll your own. I can even imagine writing an RPG Open Access handler that could be used to "WRITE" to the ETL using native RPG opcodes.

HTH
Vern

On 10/24/2019 7:15 AM, Jay Vaughn wrote:
So our non-IBM i folks decided to create a small 29-table DWH for a client. They know little about our tables.
They used SymmetricDS to pull from the i and populate a staging database. An ETL tool then moves that to an AWS cloud PostgreSQL data warehouse.

The symDS tool auto-generates SQL triggers that are slapped on our i tables. A couple of tables were so large in row size (many columns) that the auto-generated SQL triggers were so inefficient that I had to replace them with an i Db2 trigger that literally hands off the before/after image to a batch process, which carries out the insert of those images into a symDS table that is also on the i. Apparently the symDS tool pulls changes from that symDS table on the i.
Then, because we have initialized blanks in some of our zoned fields, which SQL doesn't play well with, I also had to put our own trigger on that one... and there may be more eventually.
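
For what it's worth, the blank-zoned problem is that a field initialized to EBCDIC blanks (0x40 bytes) is not valid zoned-decimal data, so SQL chokes on it. A consumer working from the raw row image can catch that case; a sketch (the offsets and the NULL-vs-zero choice are assumptions):

import com.ibm.as400.access.AS400ZonedDecimal;
import java.math.BigDecimal;

public class ZonedFix {
    // Convert a zoned field from a raw EBCDIC row image,
    // mapping an all-blank field to NULL instead of failing.
    static BigDecimal zonedOrNull(byte[] row, int off, int len, int scale) {
        for (int i = off; i < off + len; i++) {
            if (row[i] != 0x40) {                          // 0x40 = EBCDIC blank
                AS400ZonedDecimal z = new AS400ZonedDecimal(len, scale);
                return (BigDecimal) z.toObject(row, off);  // valid zoned -> BigDecimal
            }
        }
        return null;  // all blanks: substitute NULL (or ZERO, per business rule)
    }
}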

I'm at the point where I'm convinced this is not the best game plan for the DWH.

Has anyone got any suggestions for a better IBM i to AWS PostgreSQL replication approach that would eliminate that symDS staging piece? Preferably something that can connect directly to the AWS cloud from the i... even if custom code needs to be written?

tia

Jay
