Hello, David:

It is normally not necessary to recompile every program that accesses or updates a table with embedded SQL; this is one of the benefits of SQL over "native" I/O. The only programs that need to be recompiled are those where someone has added a new column to the table and certain programs must now use that new column, or where a column's definition has changed (type, length, decimals, etc.). In the latter case, any program that uses the column may need changes, for example to the declaration of the corresponding host variable(s), to ensure the data will still "fit" correctly into the host variables without any data loss due to conversion, truncation, and so on.

RECORD I/O
With traditional "native" record I/O, each program contains a "map" (data structure) of the record layout of the file as it existed at compile time. At run time, the data is copied from the actual I/O buffer into this data structure to "map" the individual fields in the record. CPF, OS/400 and i5/OS add the "Record Format Level IDentifier" to detect changes to the record layout, so that programs that have not been recompiled since the file was changed will not just run and potentially corrupt the data in the file due to an incorrect "mapping" or layout. Hence the usual technique for finding the programs that need to be recompiled: compare DSPPGMREF and DSPFD *OUTFILE output.
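As a minimal sketch of that technique (the library and outfile names here are hypothetical), dump the program references and the current record formats to outfiles:

    DSPPGMREF PGM(MYLIB/*ALL) OUTPUT(*OUTFILE) OUTFILE(QTEMP/PGMREFS)
    DSPFD     FILE(MYLIB/*ALL) TYPE(*RCDFMT) OUTPUT(*OUTFILE) +
              OUTFILE(QTEMP/FMTLVLS)

You can then query the outfiles to list every program that references the changed file, and recompile just those.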

SQL
With SQL, programs do not have a "map" of an entire "record", because in SQL there is no concept of a "record", only "rows", and a program must always specify exactly which columns it wants in the rows of the result set, by naming the columns, e.g. on a SELECT statement (or a FETCH, etc.). In the code generated by the SQL precompiler, SQL essentially proceeds column by column, "mapping" each requested column into the corresponding host variable. If the type, length, and decimals match exactly, the data is simply copied in. If the type, length, or decimals of the host variable differ from the definition of the column in the table or view, then a runtime conversion takes place. The database throws an exception (error) only if the conversion would cause data loss, and that depends on the actual value in that particular column of that particular row, so it is determined on a row-by-row, column-by-column basis.
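To make that concrete, here is a minimal sketch in modern free-form SQLRPGLE (the table, column, and host variable names are all hypothetical):

    **free
    // Host variables; their definitions should stay compatible
    // with the corresponding columns in the table.
    dcl-s custName char(10);
    dcl-s custBal packed(9 : 2);

    // Only the columns named here are mapped, one by one, into
    // the host variables; any needed conversion happens at run time.
    exec sql select NAME, BALANCE
               into :custName, :custBal
               from MYLIB.CUSTOMER
              where ID = 42;

Note that the statement names its columns explicitly; nothing in the program depends on the full record layout of the file.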

Let's look at a simple example. Suppose we have a column, call it NAME, defined as character length 10, and a host variable NAME that is also alpha 10. Now we change the SQL table with ALTER TABLE so that the NAME column becomes character 12, but we do not change our program, so the host variable is still alpha 10. For simple character fields, the usual conversion is to trim any trailing blanks, so a value such as "CHARACTERS" still fits into the host variable and no error is reported. But if you happen to fetch a row whose value in that column is longer than 10 characters, for example "EXPERIMENTAL", that value cannot "fit" into the 10-character host variable without truncation resulting in data loss (it would become "EXPERIMENT"), and so a runtime conversion error is signalled when your program attempts to fetch that row.

SUMMARY
With "native" I/O, checking is at the record level (via the level-checks mechanism) and this check is performed at file open time ("level check").
With SQL I/O, all checking is done on a column-by-column basis, as each row is fetched or updated.

RECOMMENDATIONS
For programs that use embedded SQL, do not try to rely on the same old techniques (DSPPGMREF, etc.) to detect when a program needs to be recompiled. Instead, as part of problem analysis and scope determination, developers should perform a field-level "impact analysis": whenever the definition of a particular column in an SQL table or view changes, search for all programs that use this column, then examine the corresponding host variable definitions to assess whether each program needs to be changed to accommodate the database change. For those that do need changes, you would then just check them out with your "CMS" and check them back in again, using the usual procedures.

You can use FNDSTRPDM as a "poor man's impact analysis tool" to search for all occurrences of the affected column name in all of your program source. Or you can use any of the "impact analysis" tools on the market, such as Hawkeye Pathfinder, Help Systems Abstract+, Databorough X-Analysis, etc., to assist with this task.
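For instance, something like this (library and source file names hypothetical) prints a list of the source members containing the column name:

    FNDSTRPDM STRING('NAME') FILE(MYLIB/QRPGLESRC) MBR(*ALL) +
              OPTION(*NONE) PRTMBRLIST(*YES)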

Does that help?

All the best,

Mark S. Waterbury

> David FOXWELL wrote:
Hi,

Starting a new thread, the last on handling views was short-lived. Vern, thanks, I've saved your input for when we get started on that issue.

Our "CMS" will automatically recompile all modules and programs if a file changes and that file is referenced in a program. By using DSPPGMREF. Of course, in an SQLRPG, this doesn't work. Would anyone like to share how they are dealing with this problem?

Maybe this should be on the RPG list?

Thanks.
