This is a brutally difficult subject to tackle. The OO people have testing frameworks like JUnit which allow them to test individual functions. If you're not familiar with JUnit, here's an example. You write a function that calculates taxes. For input, it takes a part number and quantity purchased. For output, it returns the tax amount. You write tests that look like this:

assert(CalcTax(11111, 0) == 0.00); // didn't buy any?!
assert(CalcTax(11111, 1) == 0.10); // one unit
assert(CalcTax(11111, 5) == 0.50); // five units
assert(CalcTax(11111, 10) == 1.00); // ten units
assert(CalcTax(11111, -1) == -0.10); // a return!

Any time you make a change to CalcTax, you run this little test suite and every test should pass - assert will throw a message if a test fails. This sort of thing works pretty well for a simple function like this one, something that can be done with a calculator, but there's an untested condition lurking here. What happens if the product number passed in is not in the product file? Notionally, as we discover this, we add a test for it along with code to handle it. Maybe:

assert(CalcTax(11110, 5) == exception); // invalid product throws an exception message

But this sliver starts to illustrate the problem we in the midrange world have with automated testing. It's almost entirely database dependent. Look at the simple CalcTax above. It must take the product number, CHAIN out to a product master file and get the product cost. The simplistic assert above assumes the tax will come out to be .10 per unit but that can only be true if the cost in the database doesn't change. Plus, this function is too stupid to work in the real world - there are different tax jurisdictions, so we need to know the tax on item 11111 in maybe 52 counties. And we need the item cost and tax rates to remain frozen for all time if we want the test suite (as written) to pass.
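To make that database dependence concrete, here is roughly what a CalcTax like this has to do, expressed as SQL rather than a CHAIN. Every name here (prod_product, prod_taxrate, the columns) is invented for illustration:

```sql
-- Hypothetical sketch of what CalcTax(product, qty) does internally.
-- The expected value in the assert is only stable if cost and
-- tax_rate never change in these tables.
select p.cost * t.tax_rate * :qty as tax_amount
  from prod_product p
  join prod_taxrate t
    on t.county = :county        -- the real world adds 52 jurisdictions
 where p.product = :product;
```

The moment someone maintains the cost or the rate, the hard-coded 0.10-per-unit expectation in the test suite goes stale.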

Yuck.

So now what do we do? Our critical path takes us through thousands of items multiplied by 52 counties and the whole lot is in flux. We have a choice of either mocking up a test that doesn't hit the database or mocking up a known database that we can test against.

I have adopted the latter. Sort of-kind of but not really. What I do is create a test database out of production data. It's a subset that uses SQL extensively to extract related rows, like this:

insert into test_product
  (select * from prod_product
    where product in (select product from test_history
                       where custno in (select custno from test_customer)))

Then I use another set of SQL statements to modify selected rows to meet the test requirements. Thankfully, these can all be scripted, which makes this less painful than it sounds.
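The "modify selected rows" pass is just more scripted SQL. A hypothetical example, with invented table names and values, that pins down what the asserts depend on:

```sql
-- Freeze the inputs the test suite assumes (names are made up).
update test_product
   set cost = 1.00
 where product = 11111;

update test_taxrate
   set tax_rate = 0.10
 where county = 'HOME';
```

With the cost and the rate frozen in the test library, the 0.10-per-unit expectation holds no matter what production does.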

Once the test database is established for the current set of mods, I make a copy. Once that benchmark copy is established, I run a CL program that copies from the benchmark to the test library, runs my test suite (which almost invariably runs programs that update this or that file), and then runs a file comparison program (again, SQL based) that only prints out columns that have changed between the benchmark and the test library. I scan that by eye to make sure the database changes I intended were made, and to ensure that changes I did NOT intend are not present.
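If you don't have a comparison tool handy, plain SQL gets you most of the way. A sketch, with benchlib and testlib as placeholder library names:

```sql
-- Rows present in the test library but not in the benchmark,
-- i.e. rows the test run changed or inserted.
select * from testlib.test_product
except
select * from benchlib.test_product;
```

EXCEPT reports whole rows; a column-level comparison program goes one step further and prints only the columns that differ.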

When doing this sort of comparison, it's important to know what the common key is that links the files between the benchmark and test library. For the product master file, it's the product number. The other thing to remember is that any 'change date/time stamps' will produce a lot of false positives in terms of changed data elements.
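Putting both points together, a key-based comparison that skips the time stamp columns might look like this (all names invented):

```sql
-- Join benchmark to test on the common key and compare only the
-- business columns; chg_timestamp is deliberately left out.
select b.product,
       b.cost as bench_cost,
       t.cost as test_cost
  from benchlib.test_product b
  join testlib.test_product t
    on t.product = b.product     -- the common key
 where b.cost <> t.cost;
```

Anything this returns is a real data change; the time stamps never enter into it.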

The advantage is the automation. The disadvantage is the time to set up the environment and the time to run a pass of tests. I don't put this out as the be-all and end-all of testing, but rather to spur some ideas on how to overcome some of the issues that a heavy database system brings to the testing table.

For simple RPG function testing, have a look at RPGUnit: http://rpgunit.sourceforge.net/. Yes, it's old, but there's still some good work in there for binding your service program to their testing framework. I like it and use it as much as possible.

I hope this helps someone.
--buck

On 7/31/2012 8:51 AM, Dave wrote:
Hi all,

I'm interested in how people are going about their tests!

I'm creating a test library libA for pgmA.

PgmA will do a load of stuff then call other programs that will write
to files, return, then pgmA will write just one or two files, all of
which will be in libA. All the called programs have their own test
libraries.

Should one set up the library to only control the contents of the
files written by PGMA, or the whole chain of files? It seems to me
that the latter would make the library difficult to maintain if one
of the called programs gets modified.


This mailing list archive is Copyright 1997-2024 by midrange.com and David Gibbs as a compilation work. Use of the archive is restricted to research of a business or technical nature. Any other uses are prohibited. Full details are available on our policy page.