Hi Cary,

Are you doing anything to ensure that you won't have two programs opening/writing to the file at the same time? For example, are you using O_SHARE_NONE on the open() API to guarantee exclusive use?

The reason I ask is that I suspect you might have a "phantom refresh" problem, where one process opens the file while another already has it open. If the first process writes its data and then the second process writes its data, the second process might accidentally wipe out the first one's work.

Remember, stream files (I assume these are stream files) are not divided into records. If the first process says "open file, position to byte 0, write 10 bytes" and the second process says "open file, position to byte 0, write 10 bytes" the second process will overwrite what the first process has written there.

If they are running at the same time, you could get a phantom refresh. I.e., process one opens the file and gets the last byte position that was in use in the file (let's say position 0). Now, thanks to multitasking, the second process also opens the file and tries to position to the end of the file, and ALSO gets byte position 0, because the first process hasn't written any data yet. Then the first process writes its data and closes the file. Then the second process, which is still positioned at byte zero, writes its data and wipes out the first one's work. That's a phantom refresh.
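
To make that race concrete, here is a minimal ILE C sketch of the open / position-to-end / write pattern described above. The function name, path handling, and flags are illustrative assumptions, not Cary's actual code:

/* Both processes run this same code. Neither asks for exclusive use,
 * so both can open the file and seek to end-of-file before either
 * has written anything -- and both end up at the same byte offset. */
#include <fcntl.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

int append_data(const char *path, const char *data)
{
    int fd = open(path, O_WRONLY | O_CREAT, S_IRUSR | S_IWUSR);
    if (fd < 0)
        return -1;

    /* Both processes can reach this point before either writes,
     * so both get the same "end of file" position (e.g. 0). */
    lseek(fd, 0, SEEK_END);

    /* The second writer overwrites whatever the first one put here. */
    if (write(fd, data, strlen(data)) < 0) {
        close(fd);
        return -1;
    }
    return close(fd);
}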

When you specify O_SHARE_NONE it means you won't share your data with any other processes... only you can have it open. In that case, there's no chance of a phantom refresh, because the second process can't open the file while the first process already has it open.
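
For comparison, here is a hedged sketch of the same open done with O_SHARE_NONE (an IBM i-specific oflag). The retry loop, the try count, and the assumption that a sharing-mode conflict comes back as EBUSY are mine; check the open() API documentation for the exact errno on your release:

#include <errno.h>
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

int open_exclusive(const char *path)
{
    int fd;
    int tries;

    for (tries = 0; tries < 10; tries++) {
        fd = open(path, O_WRONLY | O_CREAT | O_SHARE_NONE,
                  S_IRUSR | S_IWUSR);
        if (fd >= 0)
            return fd;       /* we are the only job with the file open */
        if (errno != EBUSY)
            return -1;       /* some other failure; give up */
        sleep(1);            /* another job has it open; wait and retry */
    }
    return -1;
}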

Something to think about, anyway.




On 2/16/2012 1:18 PM, Kellems, Cary (EXTERNAL) wrote:
I have a process that monitors files in the IFS. (It reads the file every second or so.)

Once the process senses that the client has completed its write to the IFS file, it will process the data.

It will then write data to another IFS file with a response.

The problem I am encountering is with my write process, which does the following:


* open() the IFS file
* Then write() to the IFS file
* Then close() the IFS file

I keep a log of these transactions, and the log demonstrates that the process is working. Yet when I look at the IFS file (via WRKLNK) there is no data in the file. I have captured the return codes for the open() and the write() and they are good (result code for open() = 0 and bytes sent for write() = 19). And to complicate the situation, this is happening intermittently.
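
For reference, a minimal sketch of the open/write/close sequence described here, with the return codes captured the same way. The path, flags, and message text are placeholder assumptions; logging the return value of close() as well may help narrow down where the data goes missing:

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

int write_response(const char *path, const char *response)
{
    int fd;
    ssize_t sent;

    fd = open(path, O_WRONLY | O_CREAT, S_IRUSR | S_IWUSR);
    if (fd < 0) {
        printf("open() failed, errno=%d\n", errno);
        return -1;
    }

    sent = write(fd, response, strlen(response));
    if (sent < 0)
        printf("write() failed, errno=%d\n", errno);
    else
        printf("write() sent %d bytes\n", (int)sent);  /* e.g. 19 */

    /* Worth logging too: a nonzero return here is another clue. */
    if (close(fd) < 0) {
        printf("close() failed, errno=%d\n", errno);
        return -1;
    }
    return (sent < 0) ? -1 : 0;
}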

Has anyone experienced similar problems and can suggest a direction I can investigate?


Thanks,
Cary T. Kellems
(310) 727-4490
cary.kellems@xxxxxxxxxxxxxx<mailto:cary.kellems@xxxxxxxxxxxxxx>


