Buck,

If we're putting out a new release of our product, we test differently than for a new OS/400 release, for example concentrating more on range testing of input field values, edits, etc., for new or changed online screens, commands, and the like: more user-interface testing, plus whatever internal changes have been made (the second sketch after this thread shows one way such range tests can be automated). For batch engines, we basically take some known inputs, run them through, and make sure we get the known outputs (the first sketch after this thread automates that comparison).

For a new OS/400 release, we concentrate more on system interfaces such as communications. This defies test tools, as there are infinite combinations of unexpected situations that can occur: hardware that can be used, network configs, who-knows-what on the other end, and so on. We attempt to test a fairly comprehensive subset of everything we are aware the customers are doing and that we have the facilities to test. The other things customers do with the product we obviously cannot test. It's kind of like a word processor: you can't test everything someone could possibly type, but you try to make at least one pass through every feature you offer.

A thorough review of the Memo to Users is also done by senior development and support people, each looking out for what might affect their particular areas of expertise, and this is also considered in the test plan. Also, as you mentioned, don't forget to test the things you've been burned by before.

Our System Verification department uses various automated testing tools with varying degrees of effectiveness, but I have no personal knowledge of these tools, as they are not currently used for the products I work on. Our newest stuff will use a common code base and will benefit from both automated and human testing.

-Marty

--__--__--

From: Buck Calabro <Buck.Calabro@commsoft.net>
To: midrange-l@midrange.com
Subject: RE: Test environments (was: upgrade)
Date: Tue, 27 Aug 2002 12:14:26 -0400
Reply-To: midrange-l@midrange.com

Hi Marty!

>We find OS issues testing a limited subset on
>a small system or LPAR.

It is well worth the effort. We are putting together a regression test plan, but with 5000 displays the number of possible inputs is astronomical, so some intelligence needs to be applied: what do we test in order to be six-sigma confident that nothing will break?

For sure, we run a suite of programs that use ODBC (the third sketch after this thread shows a minimal connectivity check of that kind), and we run through a complete set of service orders, to exercise the most complex parts of our application. We also run some AFP bill print jobs, because AFP support has been questionable in the past.

Can you share some details of the effort you go through when testing a new release?

--buck
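A minimal sketch of the "known inputs, known outputs" batch testing Marty describes, written in Python for illustration. The engine command, directory layout, and file names here are assumptions for the sketch, not anything from the original thread:

    # Golden-file harness: run a batch engine against stored known inputs
    # and diff the actual output against the stored known-good output.
    import subprocess
    import sys
    from pathlib import Path

    CASES_DIR = Path("testcases")   # one subdirectory per known input set (assumed layout)
    ENGINE_CMD = "./batch_engine"   # hypothetical batch engine under test

    def run_case(case: Path) -> bool:
        """Run one case and compare actual output to the golden copy."""
        input_file = case / "input.dat"
        golden = (case / "expected.out").read_bytes()
        result = subprocess.run([ENGINE_CMD, str(input_file)],
                                capture_output=True)
        if result.returncode == 0 and result.stdout == golden:
            print(f"PASS {case.name}")
            return True
        print(f"FAIL {case.name}: output differs from the known-good copy")
        return False

    if __name__ == "__main__":
        results = [run_case(c) for c in sorted(CASES_DIR.iterdir())
                   if c.is_dir()]
        sys.exit(0 if all(results) else 1)

Capturing stdout keeps the harness engine-agnostic: any byte-level difference from the stored golden copy, or a nonzero exit code, is reported as a failure.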
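A minimal sketch of the range testing of input-field edits that Marty mentions, again in Python. The quantity_edit function and its 1-999 range are hypothetical stand-ins for a real screen edit; the point is checking values just inside and just outside each boundary:

    import unittest

    def quantity_edit(value: int) -> bool:
        """Hypothetical screen edit: quantity must be between 1 and 999."""
        return 1 <= value <= 999

    class QuantityEditTests(unittest.TestCase):
        def test_values_just_inside_the_range_pass(self):
            for value in (1, 2, 998, 999):
                self.assertTrue(quantity_edit(value), value)

        def test_values_just_outside_the_range_fail(self):
            for value in (0, -1, 1000):
                self.assertFalse(quantity_edit(value), value)

    if __name__ == "__main__":
        unittest.main()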
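And a minimal ODBC smoke test of the sort Buck's suite might include, assuming the pyodbc package is installed. The DSN, credentials, and table name are hypothetical placeholders:

    import pyodbc

    def odbc_smoke_test() -> None:
        # Connect through a hypothetical DSN and run one known query;
        # fail loudly if either the connection or the query breaks.
        conn = pyodbc.connect("DSN=TESTAS400;UID=TESTER;PWD=secret")
        try:
            cursor = conn.cursor()
            cursor.execute("SELECT COUNT(*) FROM SVCORDERS")  # hypothetical table
            (count,) = cursor.fetchone()
            assert count > 0, "service order table unexpectedly empty"
            print(f"ODBC smoke test passed: {count} rows visible")
        finally:
            conn.close()

    if __name__ == "__main__":
        odbc_smoke_test()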