In message <34215B04.4E16@usaor.net>, From Tim Truax <truax@usaor.net>, the
following was written:

> 2 users access the same account number at the exact same time in the
> same file and same interactive job and are presented with a full
> screen where 20 comment lines can be typed in.
> User 1 - gets this screen with available sequence numbers 1 thru 20.
> User 2 - gets this screen with same available sequence numbers!!!
> User 1 & User 2 - commence to typing in the comments 1 thru 20.
> User 1 hits enter first and adds the records 1 thru 20, Now
> User 2 hits enter and they update and write over User 1's previously
> entered comments.

Tim,

It sounds as though you have 20 records, each with a 78-byte alpha field for
the comment, probably with a customer number and sequence number as the key.
If only 20 records are allowed, you have a real problem. If more are
permitted, you could check the highest sequence number when you read the
data, then again before updating. If it's different, reread and display the
data from the database with the current user's comments appended to the end
(resequenced). He can then edit or simply hit enter again.

Have you considered using a variable length field which you programmatically
break into rows? You could allow it to be up to 100 rows or so in length,
but it will only use the storage space required by the data. It's pretty
easy to code that way. You just initialize a subfile to the maximum number
of rows and update the data into it, one segment at a time. This allows the
whole shebang to be in one record, which is a little easier to control.

The easiest way I've found to control the conflict situation is to store the
time when the record is updated. When you read the record, you do not lock
it, but you do save the time. If no record currently exists, the time value
is 00:00:00.000. Before you update, you read the record again, this time
locking it, and compare the time with the original value. If it's different,
you give the user the bad news and make him key his comments again using the
updated data. This has the advantage of not leaving a lock flag in place if
a job dies in mid edit. You don't care what the time value is. Your only
concern is that it cannot be different from what it was when the data was
originally read.

Of course you could just lock the record on the first read and trap the
error, but that tends to allow users to lock records and then wander away,
making everyone else wait while they have a smoke and a cup of java. The
time value method encourages them to get in and out quickly, freeing up the
database for others.

It's worked well for me, but your mileage may vary...

Pete
--
- Pete Hall
  peteh@earth.inwave.com
  http://www.inwave.com/~peteh/
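A minimal sketch of the time-stamp check Pete describes, written in Python
with SQLite purely for illustration (the original context is RPG and DB2 on
the AS/400; the COMMENTS table, its column names, and the key layout below
are assumptions, not Tim's actual file). The idea is the same: read without
locking, remember the stored update time, and refuse the write if that time
has changed by the time you are ready to update.

    import sqlite3
    from datetime import datetime

    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE COMMENTS (CUSTNO INTEGER PRIMARY KEY,"
        " CMTTEXT TEXT, UPDTIME TEXT)"
    )

    def read_comments(custno):
        # Read without locking, but remember the stored update time.
        # A record that doesn't exist yet counts as time 00:00:00.000,
        # exactly as described above.
        row = conn.execute(
            "SELECT CMTTEXT, UPDTIME FROM COMMENTS WHERE CUSTNO = ?",
            (custno,),
        ).fetchone()
        if row is None:
            return "", "00:00:00.000"
        return row[0], row[1]

    def update_comments(custno, new_text, time_seen):
        # Just before writing, look at the update time again (in RPG this
        # re-read would be done with a lock; a single SQLite connection is
        # enough for the illustration). If it differs from the value saved
        # at read time, someone else got there first: refuse, and let the
        # caller re-key against the fresh data.
        _, current_time = read_comments(custno)
        if current_time != time_seen:
            return False
        new_time = datetime.now().strftime("%H:%M:%S.%f")[:-3]
        conn.execute(
            "INSERT INTO COMMENTS (CUSTNO, CMTTEXT, UPDTIME)"
            " VALUES (?, ?, ?)"
            " ON CONFLICT(CUSTNO) DO UPDATE SET"
            " CMTTEXT = excluded.CMTTEXT, UPDTIME = excluded.UPDTIME",
            (custno, new_text, new_time),
        )
        conn.commit()
        return True

    # Both users read the same customer's comments; user 1 updates first.
    text1, t1 = read_comments(1001)
    text2, t2 = read_comments(1001)
    assert update_comments(1001, "user 1's comments", t1)      # goes through
    assert not update_comments(1001, "user 2's comments", t2)  # conflict caught

As in the post, the second writer is told the record changed and must re-key
against the freshly read data; nothing is ever left locked if a job dies
between the read and the update.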