Dumb idea #1:

How about two file encapsulation programs for each file? One that does all
read-only access and another that does all update/output access?
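Roughly, with made-up names (the read-only one gets an input-only open, the
update one owns every write):

      * TRANSR service program - all read-only access
     FTRANSDTL  IF   E           K DISK

      * TRANSU service program - all update/add/output access
     FTRANSDTL  UF A E           K DISK

Anything that just consumes the file binds to the read-only one, and that
open is also eligible for record blocking again, which speaks to your #2
below.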


On Tue, Apr 3, 2012 at 5:35 PM, Kurt Anderson
<kurt.anderson@xxxxxxxxxxxxxx>wrote:

Being a big fan of file encapsulation (essentially centralizing the business
logic relating to a file), I've created a fair number of file-encapsulated
service programs. They use native I/O. Since the service program is a
one-stop shop, the file is defined as Update/Add.
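To give an idea of the shape, a stripped-down module looks something like
this (ITEMMAST, GetItem and the key field are placeholders, not our actual
code):

     H NOMAIN

     FITEMMAST  UF A E           K DISK

     D GetItem         PR              N
     D  inItemNo                     10A   Const

     P GetItem         B                   Export
     D GetItem         PI              N
     D  inItemNo                     10A   Const
      /free
        // Random read by key - the native I/O "chain"
        chain inItemNo ITEMMAST;
        return %found(ITEMMAST);
      /end-free
     P GetItem         E

Every caller goes through GetItem (and the matching write/update
procedures), so the business rules for the file live in one place. That
one-stop shop, with its update-capable open, causes a couple of issues: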

1. The file needs to be copied into a test library.
We don't have separate test and production environments here; in fact, most
testing references a client's production library. This has actually worked
fine, without consequence. We always run STRDBG UPDPROD(*NO) for testing,
which means any file that is encapsulated has to be copied into a test
library; otherwise the service program won't open the file. This is a minor
pain (though it has occasionally caused issues in testing because a file
already existed in a test library and wasn't up to date).

2. Files opened for update don't get record blocking on reads.
One of our main files that I really wanted to encapsulate couldn't be,
because it's a transaction file with millions of records. No blocking on
read loops hurts. A lot.
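What I'd want for those read loops is an input-only open the compiler can
block; through the service program's update-capable open, every read is a
separate database I/O. A rough sketch of the blocked version (TRANSDTL and
the key field are made up):

      * Input-only open - eligible for record blocking, and BLOCK(*YES)
      * keeps it blocked even though SETLL is used against the file
     FTRANSDTL  IF   E           K DISK    BLOCK(*YES)

     D custNo          S              7P 0

      /free
        // Read all of one customer's transactions; with the blocked
        // open these READEs come out of a buffer instead of costing
        // one database I/O per record
        setll custNo TRANSDTL;
        reade custNo TRANSDTL;
        dow not %eof(TRANSDTL);
          // ... process the transaction record ...
          reade custNo TRANSDTL;
        enddo;
        *inlr = *on;
      /end-free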

3. Using SQL bloats the job log in test mode and slows single-record access.
IBM has really been pushing SQL for data access, so I thought this might be
a good occasion to follow the path they've laid out, and doing so addresses
issues #1 and #2 above. I've modified one of our file-encapsulated service
programs to use SQL, and I think it works pretty slick. One not-so-slick
aspect, though, is the "chain": closing any existing cursor, preparing the
statement, declaring the cursor, and then fetching from it is presumably
going to be a lot slower than a simple chain. I'm willing to live with that,
although I'm getting some beef about over-complicating things. In addition,
using SQL and running the program with STRDBG UPDPROD(*NO) balloons the job
log. Maybe I shouldn't care about the size of the job log in test, but in
one of my tests over 100,000 records (~80k chains), the job log wrapped
twice before I killed the job. In this particular case I may be able to load
the file into an array or something, but I know that won't always be an
option. (I realize this is only an issue in testing, but I can also foresee
concerns about it slowing testing down, since it writes to the job log so
much.)
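To show what I mean by the SQL "chain", the single-record get ends up
roughly like this (TRANSDTL, TRANSID and the other names are placeholders;
the real code builds the statement string dynamically, which is why it goes
through PREPARE):

     H NOMAIN

     D dsTrans       E DS                  ExtName(TRANSDTL)

     D GetTrans        PR              N
     D  inTransId                    11P 0 Const

     P GetTrans        B                   Export
     D GetTrans        PI              N
     D  inTransId                    11P 0 Const
     D sqlStmt         S            512A   Varying
      /free
        // Declarative - only has to precede other references to c1
        exec sql declare c1 cursor for s1;

        // Close the cursor in case a previous call left it open
        exec sql close c1;

        // Build and prepare the statement, then open and fetch -
        // all of this stands in for one CHAIN
        sqlStmt = 'select * from TRANSDTL where TRANSID = ?';
        exec sql prepare s1 from :sqlStmt;
        exec sql open c1 using :inTransId;
        exec sql fetch next from c1 into :dsTrans;

        return (sqlCod = 0);
      /end-free
     P GetTrans        E

For a pure single-record get, a plain SELECT ... INTO would avoid most of
that; the cursor really only earns its keep when the same procedure also has
to serve the setll/read style of access.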

This had me wondering how other shops handle file encapsulation. My last job
had a completely separate test environment; that's not likely to happen here
anytime soon.

We also don't have change management software. Having files encapsulated has
made some of our file changes quite a bit easier.

Here is a sample of the code. If you have the time, I'd appreciate
comments.
http://code.midrange.com/f5aa843519.html

Thanks,
Kurt Anderson
Sr. Programmer/Analyst
CustomCall Data Systems