Boy, Scott, you really know how to make a guy get defensive. I think it's because you're making an assumption with the whole item master scenario. I'm not at all suggesting that we shouldn't modularize logic such as item_getInventory(). I'll try to take a step back, analyze why we do what we do, and explain it.

First off, we don't encapsulate all of our files. Only the ones that have business logic directly associated with the file.
For example:
We have a file whose records have effective dates associated with them. Some of our clients also apply grace periods to those effective dates, which we need to interpret and apply on the fly if a record wasn't found without the grace period. Without this service program, this file would be used in over a hundred programs, and every one of them would have to go through all the gyrations of looking for the effective record. Instead, it's all handled in the service program, which I also call a business unit. The service program is set up so all the caller has to do is code the statement "line = getEffectiveLine( date : otherinfo );"
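For illustration, the caller's side and the hidden grace-period logic might look something like this sketch. Everything except getEffectiveLine is an invented name, and the actual grace-period rules are surely more involved than the simple retry shown here:

```rpgle
// Caller: one statement, no knowledge of grace-period rules.
line = getEffectiveLine( effDate : otherInfo );

// Inside the service program (sketch): try the exact effective date
// first, then retry with the client's grace period applied.
dcl-proc getEffectiveLine export;
  dcl-pi *n likeds(line_t);
    effDate   date     const;
    otherInfo char(10) const;
  end-pi;

  chain ( effDate : otherInfo ) LINEFILE;   // LINEFILE is an invented name
  if not %found( LINEFILE );
    // retry, extending the date by the client's grace period
    chain ( effDate + %days( clientGraceDays ) : otherInfo ) LINEFILE;
  endif;

  return lineRec;
end-proc;
```

The point is that the retry logic exists in exactly one place, no matter how many of those hundred-plus programs need the lookup.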

But I did take these business-unit service programs that revolve around files a step further: I designed them to fully encapsulate the file, so that every caller accesses the file through one location. This was a carry-over from my last job, where we used it as a means for other systems to get the data they needed. We weren't a SQL shop, so that was probably part of the reason for using this method.

I think a big part of trying to keep the file access in one module is that at my current job we don't have a CMS, and our only cross-reference utility is homemade; it's mostly OK, but not 100%. So when it comes time to implement a file change, having that file access in one location has helped quite a bit. I do realize we could achieve this same benefit by moving further toward SQL, which we are slowly doing.

Maybe I should try cutting out all of the gets and sets and the procedures that mimic RPG operations (SetLLFile(), ReadFile(), ChainFile()). I went looking through our file-encapsulated service programs, and every one has at least one procedure that performs all of the necessary actions to satisfy a common requirement (versus forcing the caller to issue each separate procedure call: setll, reade, get, get, get, etc.). However, I know there are instances where programs do use those individual calls, usually for the most basic of needs.
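To make the contrast concrete, here's a sketch of the two styles (all procedure and field names are invented):

```rpgle
// Style 1: wrappers that mimic native ops; the caller drives the I/O.
SetLLFile( custNo );
dow ReadFile( custNo : rec );
  total += rec.amount;
enddo;

// Style 2: one composite procedure that answers the business
// question directly; the setll/reade loop lives inside it.
total = getCustomerTotal( custNo );
```

Style 1 leaks the access pattern to every caller; Style 2 keeps it behind the business unit, which is what most of our service programs already offer for the common requirements.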

I have to admit, I really do like the idea of centralizing I/O access; however, I do concede that there is added complexity in requiring all file access to occur through one location.


-----Original Message-----
From: rpg400-l-bounces@xxxxxxxxxxxx [mailto:rpg400-l-bounces@xxxxxxxxxxxx] On Behalf Of DeLong, Eric
Sent: Wednesday, April 04, 2012 10:56 AM
To: RPG programming on the IBM i / System i
Subject: RE: File Encapsulation Quandary


Can you tell me why you think file encapsulation is worth the effort? I spent many years thinking about this, and found that I don't like the bottom-up mindset it imposes on application development.

I find my development patterns take a top-down approach: defining business "objects" or entities and the methods (business transactions) that process those entities, then invoking whatever file I/O approach best serves the needs of the transaction.

At issue, to me, is that file encapsulation limits you to a handful of methods for retrieving your data and limits your flexibility as a developer. Say, for example, that I only need a subset of fields from a particular file. Unless I specifically define a proc to return just the fields I want, the standard mode of operation is to simply return all the fields in the file.
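Eric's point in a sketch (all names invented): the encapsulating module's standard getter hands back the whole record, so a narrower result only exists if someone writes a purpose-built proc for it:

```rpgle
// Standard full-record getter: every caller receives every field,
// whether it needs them or not.
dcl-pr Cust_getRecord likeds(custRec_t);
  custNo char(10) const;
end-pr;

// Subset access exists only if it's explicitly defined:
dcl-pr Cust_getCreditLimit packed(11 : 2);
  custNo char(10) const;
end-pr;
```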

Instead of designing on top of the database, design the functions that support the meta-data contained within the database. For a CUST file, you might have a data function to retrieve the billing or shipping address, another for YTD summary details, another for A/R credit terms, and so forth...
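Sketching that CUST example as exported prototypes (the procedure and data-structure names are illustrative, following Eric's list), each proc maps to a business question rather than to a record format:

```rpgle
dcl-pr Cust_getBillingAddress likeds(address_t);
  custNo char(10) const;
end-pr;

dcl-pr Cust_getShippingAddress likeds(address_t);
  custNo char(10) const;
end-pr;

dcl-pr Cust_getYtdSummary likeds(ytdSummary_t);
  custNo char(10) const;
end-pr;

dcl-pr Cust_getCreditTerms likeds(creditTerms_t);
  custNo char(10) const;
end-pr;
```

Each of these is free to use native I/O, SQL, or even multiple files underneath; the interface is defined by the entity, not by the file.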

-Eric DeLong

-----Original Message-----
From: rpg400-l-bounces@xxxxxxxxxxxx [mailto:rpg400-l-bounces@xxxxxxxxxxxx] On Behalf Of Kurt Anderson
Sent: Tuesday, April 03, 2012 4:35 PM
To: RPG programming on the IBM i / System i (rpg400-l@xxxxxxxxxxxx)
Subject: File Encapsulation Quandary

Being a big fan of file encapsulation (essentially centralizing business logic relating to a file), I've created a fair number of file-encapsulated service programs. They use native I/O. Since the service program is a one-stop shop, the file is defined as update/add. This causes a couple of issues:

1. The file needs to be copied into a test library.
In our environment we don't have separate test and production environments; in fact, most testing references a client's production library. This has actually worked fine without consequence, because we always run STRDBG UPDPROD(*NO) for testing. What that means, though, is that any encapsulated file needs to be copied into a test library; otherwise the service program won't open the file (it opens the file for update, and UPDPROD(*NO) blocks update opens against production files). This is a minor pain (and it has on occasion caused issues in testing, because a stale copy of the file already existed in a test library and hadn't been updated).

2. Files opened for update don't get blocked reads.
One of our main files, one I really wanted to encapsulate, couldn't be, because it's a transaction file with millions of records. Losing record blocking on read loops hurts. A lot.
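One workaround I've seen for this (not something from this thread; the file and record-format names below are made up) is to declare the file twice in the service program: an input-only copy that gets record blocking for read loops, and an update/add copy that's opened only when a write is actually needed:

```rpgle
// Input-only open: eligible for record blocking on read loops.
dcl-f TRNINP keyed usropn
      extfile('TRANSFILE') rename(TRNREC : TRNINPREC);

// Update/add open: used only by the procedures that write.
dcl-f TRANSFILE keyed usropn
      usage(*update : *output);
```

The cost is managing two opens and keeping straight which procedures use which, so it's a trade-off rather than a free fix.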

3. Using SQL bloats the job log in test mode and slows single-record access.
IBM has really been pushing SQL for data access, so I thought this might be a good occasion to follow the path they've laid out. Doing so addresses issues #1 and #2 above. I've modified one of our file-encapsulated service programs to use SQL, and I think it works pretty slick. One not-so-slick aspect, though, is the "chain": presumably closing an existing cursor, preparing a statement, declaring the cursor, and then fetching from it is going to be a lot slower than a simple chain. I'm willing to live with that, although I'm getting some beef about over-complicating it. In addition, using SQL and running the program with STRDBG UPDPROD(*NO) balloons the job log. Maybe I shouldn't care about the size of the job log in test, but in one of my tests of 100,000 records (~80k chains), the job log wrapped twice and then I killed the job. In this situation I may be able to load the file into an array or something, but I know that won't always be the case. (I do realize this is only an issue in testing, but I can also foresee concerns about it slowing down testing, since it's writing to the job log so much.)
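For what it's worth, when the lookup is always by full key, a single-row SELECT INTO can replace the whole close/prepare/declare/fetch sequence; the cursor machinery is only needed for result sets or statements built at run time. Table and column names here are made up:

```rpgle
// SQL equivalent of a chain by full key: no cursor required.
exec sql
  select * into :rec
    from TRANSFILE
   where TRNKEY = :key
   fetch first 1 row only;

if sqlcode = 100;
  // row not found: the SQL counterpart of %found(...) = *off
endif;
```

This won't eliminate the job-log chatter under STRDBG UPDPROD(*NO), but it keeps the single-record path closer to the simplicity of the chain it replaces.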

This had me wondering how other shops handle file encapsulation. My last job had a completely separate test environment, but that's not likely to happen here anytime soon.

We also don't have change management software. Having files encapsulated has made some of our file changes quite a bit easier.

Here is a sample of the code. If you have the time, I'd appreciate comments.

Kurt Anderson
Sr. Programmer/Analyst
CustomCall Data Systems
This is the RPG programming on the IBM i / System i (RPG400-L) mailing list.
To post a message, email: RPG400-L@xxxxxxxxxxxx
To subscribe, unsubscribe, or change list options, email: RPG400-L-request@xxxxxxxxxxxx
Before posting, please take a moment to review the archives.
