On 19-Jun-2014 12:46 -0500, James H. H. Lampert wrote:
I just found something new.

It's NOT the user (I was able to successfully duplicate the problem
myself).

As Barbara had already noted, per the amount of data: "the WRITE operations aren't going out to database until the block is full."

It's the number of records written to the source member, and it
always fails after successfully writing the 44th record.

Actually, more accurately, the effective buffer size is what determines when the error transpires. The actual number of rows that can be written before the error is variable; the count varies according to the size of the blocking buffer, the record length, any other attributes of the fields of the record format that influence buffer requirements, and anything else that influences the buffer size made available for the Open Data Path (ODP).

Increasing the buffer size or decreasing the record length would increase the number of records that could be written before the error CPF5152 is diagnosed on the WRITE. Conversely, increasing the record length or decreasing the buffer size decreases the number of records written before the error. With 92 bytes per record [the default record length of a source physical file] and a buffer [for BLOCK(*YES)] of ~4K, only 44 records will fit in the block; per the following expression:

( 4094 / 92 ) = ~44.5
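
Working that expression the other direction [a sketch, assuming that ~4094-byte block and the 92-byte default source record length] shows why the 44th WRITE is the one diagnosed:

    ( 44 * 92 ) = 4048 <= 4094   [44 records fit in the block]
    ( 45 * 92 ) = 4140 >  4094   [a 45th record would not fit]

The buffer is thus deemed full with the 44th WRITE, and that is the WRITE on which the blocked data is pushed to the Database.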

And a Google search turns up other cases where writing to a source
member produces a CPF5152 after 44 successful writes.

The mismatch per CPF5152 is diagnosed when the /database put/ request is manifest; that occurs with the 44th WRITE in the given example. Although the 44th RPG WRITE terminates with an error, all 44 WRITE requests were able to be blocked into the Data Management buffer *prior* to the DB PUT being invoked. So the Database is what diagnosed the conflict with the /put/ request [as manifest by the DM due to the buffer being full on that 44th WRITE], yet neither Data Management nor the RPG run-time clears\resets the buffer or notifies that those records were lost. Instead [of, for example, a msg CPF4572 to inform that "&6 records lost when member &4 closed"], the implicit CLOSE flushes the buffer to the physical member, without regard to the format name that was utilized on the blocked WRITE requests. AFaIK, part of [implicit or explicit] CLOSE processing is an implicit Force End Of Data (FEOD); i.e. I believe a trace of the request would show the DM invoking QDBFEOD, at least with BLOCK(*YES) in effect.
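
For anyone wanting to recreate the effect, a minimal free-form RPG sketch follows; the library, file, and member names [SRCLIB, QRPGLESRC, QCLSRC, REPRO] are hypothetical, and the sketch assumes compilation against SRCLIB/QRPGLESRC while EXTFILE redirects the run-time open to SRCLIB/QCLSRC, a source file whose record format name [QCLSRC] differs from the compile-time format name [QRPGLESRC]:

    **free
    // Compiled over SRCLIB/QRPGLESRC [format name QRPGLESRC]; the open is
    // redirected to SRCLIB/QCLSRC [format name QCLSRC].  Both being source
    // files created with the default LVLCHK(*NO), the OPEN succeeds despite
    // the differing format names.  RENAME is needed only because RPG does
    // not allow a record format to share the name of its file.
    dcl-f qrpglesrc usage(*output) extfile('SRCLIB/QCLSRC')
          extmbr('REPRO') block(*yes) rename(qrpglesrc : srcrec);

    dcl-s i int(10);

    for i = 1 to 60;
      srcseq = i;                   // zoned(6:2) sequence number
      srcdat = 0;                   // zoned(6:0) change date
      srcdta = 'line ' + %char(i);  // char(80) source data
      write srcrec;                 // blocked into the DM buffer ...
    endfor;                         // ... until the 44th WRITE fills the
                                    // block, the DB /put/ occurs, and the
                                    // CPF5152 surfaces
    *inlr = *on;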

Unfortunately, none of the references I've found seem to show any
kind of resolution.

The resolution is to ensure the Record Format name of the member's file matches the declarative when using an External description for the declared file, or to use a Program Described file [instead of the External description].
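
A hedged sketch of the Program Described alternative [same hypothetical names as the earlier sketch]; with no External description, no record format name accompanies the request, so there is nothing for the Database to find mismatched:

    **free
    // Program-described output file; DISK(92) matches the default source
    // record length, and the data structure below mirrors the usual
    // SRCSEQ/SRCDAT/SRCDTA layout.
    dcl-f target usage(*output) disk(92)
          extfile('SRCLIB/QCLSRC') extmbr('REPRO');

    dcl-ds rec len(92);
      seq zoned(6 : 2) pos(1);
      dat zoned(6 : 0) pos(7);
      dta char(80) pos(13);
    end-ds;

    dcl-s i int(10);

    for i = 1 to 60;
      seq = i;
      dat = 0;
      dta = 'line ' + %char(i);
      write target rec;   // no format name to compare; no CPF5152
    endfor;
    *inlr = *on;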

As can be inferred [at least partly] from the current implementation [defect or no], the error can be avoided by a Force End Of Data (FEOD), be that explicit or implicit; by a test, I verified that an explicit FEOD can serve to avoid the error by flushing the buffer before the buffer fills, just as occurs for the implicit FEOD as part of the close processing. Of course, because the FEOD has to be done before the buffer has filled, such a[n inchoate] /circumvention/ is somewhat unusable generally; at least without full control of [an always known] number of rows that can fit in the blocked buffer, and the actual number of records mostly would be an assumption that may not always\consistently hold true. The number of records for the BLOCK(*YES) effectively can be /negotiated/ [effectively overridden by the Database or Data Management] such that the original value specified by the RPG control-block could be reduced; thus the actual value could be somewhat unpredictable, based on the attributes of the file and\or other attributes of the run-time environment.
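
By way of illustration [again just a sketch, against the earlier hypothetical declaration], the circumvention amounts to flushing on a count known to be below the negotiated block size; the threshold of 40 here is exactly the fragile assumption described above:

    for i = 1 to 60;
      srcseq = i;
      srcdta = 'line ' + %char(i);
      write srcrec;          // blocked as before
      if %rem(i : 40) = 0;
        feod qrpglesrc;      // flush the DM buffer before it can fill;
      endif;                 // the flush bypasses the format-name compare,
    endfor;                  // so the CPF5152 is never diagnosed
    *inlr = *on;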

I tried Barbara Morris's suggestion to add BLOCK(*NO) to the F-spec,
and it does indeed fail on the first write, with no records making it
into the file.

The actual Database /put/ will validate the record format name irrespective of blocking; as already discussed, the blocked WRITE does not manifest as a Database /put/ until the DM buffer is filled. For BLOCK(*NO), every WRITE is manifest as a DB /put/ by the DM.
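
In terms of the earlier sketch, that unblocked behavior is just the same declaration with blocking disabled:

    dcl-f qrpglesrc usage(*output) extfile('SRCLIB/QCLSRC')
          extmbr('REPRO') block(*no) rename(qrpglesrc : srcrec);
    // every WRITE is now manifest as its own DB /put/,
    // so the very first WRITE is diagnosed with CPF5152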

Yet with blocking enabled, it does 44 successful writes, all of which
make it into the file, and IT THROWS THE CPF5152 ON THE 44TH WRITE,
EVEN THOUGH THAT RECORD DOES MAKE IT INTO THE FILE!

I am unsure why the one row in the buffer for BLOCK(*NO) would not appear in the member, similarly to the multiple-row processing [for which just one row in the buffer would effect the Database /put/ by the DM]; perhaps some difference arising as a side effect of the DM or the RPG run-time. Seems that in either case, the QDBPUT and QDBPUTM inform of the error identically, as Escape msg F/QDBSIGEX, so the inconsistent effect seems not to be the fault of the DB.

That inconsistency between blocked vs non-blocked might be the basis for an argument that one or the other is a defect. Seems to me that the implicit FEOD allowing the row(s) to appear in the physical member irrespective of the record format name [explicit or implied for the FEOD] is the problematic behavior. Why, for example, should a buffer that is full with just one row [e.g. rcdlen(4080) with a 4K buffer] have the record appear in the physical member after the CPF5152 when using BLOCK(*YES), whereas the same record written with BLOCK(*NO) gets the exact same error yet never appears in the member?

I suspect one part of the confusing effect is that although an RPG FEOD opcode specifies a Format Name, neither the DM nor the DB pays any attention; i.e. the DM may not pass anything to the DB, other than the ODP, such that there is nothing for the DB to compare betwixt. When the ODP is created by the OPEN, the Format name for the member is copied into the ODP, and that would be compared to the Format name passed on the request; if unspecified for the FEOD request, conspicuously there is nothing to compare against. Another issue may be that for BLOCK(*NO), there is no implicit invocation of FEOD for lack of a /blocked/ buffer; i.e. the one row of an unblocked buffer would not be flushed into the physical dataspace, as contrasted with what occurs with a blocked buffer, even if that blocked buffer is _full_ with just one row such that the first WRITE gets the CPF5152.

Even if accepted as a defect, I doubt a fix would be forthcoming other than with a new release, and thus with an MTU [Memo To Users] comment. That is because there could be many programs depending on the effect of that [accepted-as] defect; a PTF that would likely negatively impact user code is generally not issued, or is delivered in a manner whereby the function must be explicitly activated, because customers would not be happy no matter how much emphasis is made about their having coded a dependence on a defect. And even if such a fix came with a new release and an MTU, I would think they would need to provide an Environment Variable [or Data Area] implementation as a means to revert to the old behavior, for either those programs that went unnoticed or for those who simply did not or could not properly prepare for the change by fixing their programs [to no longer depend on the effects of that defect].

The only limiting factor for the impacts of such a scenario, if deemed a defect [and a fix is forthcoming], is the likelihood that the affected programs would be those utilizing SRC-PF. That is because Source Files are by default created with LVLCHK(*NO), whereas other database files default to create with LVLCHK(*YES). And while any application that had been revised to accommodate file changes via CHGPF LVLCHK(*NO) would seem possibly affected similarly, those programs utilizing non-source Externally Described files would be much less likely also to have changed the Record Format name [R-spec] in the DDS beyond whatever field changes\additions were made that gave rise to their choice to avoid level checking. That is, a non-source PF would likely either encounter a level check error on open, or would function without the error CPF5152 because the RcdFmt name remained unchanged.

It just keeps getting weirder.

Introducing Level Checking to the situation will cause the failure to move to the Open processing; i.e. a msg CPF4131 would report the Record Format Level Identifier mismatch between the compiled\stored value and the run-time reference obtained from the file [format; actually, as obtained from the value stored in the database file member, which was copied from the *FILE when the member was created; e.g. by ADDPFM]. That is what I subtly alluded to [without explicitly describing\discussing the effect of the revision to add CHGPF LVLCHK(*YES) against the PF-SRC] in a prior reply:
<http://archive.midrange.com/rpg400-l/201406/msg00121.html>

