Even 4 or 5 logicals will hurt - every deleted record means processing each of those indexes. The Database Programming manual says that the entry for a deleted record is removed from the index. If this is done in a simple manner, it may mean just changing a couple of pointers; if the entry is compressed out, there's more work involved. Either way, it happens record by record. You're way ahead by turning off or delaying access path maintenance.
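
One rough way to do that (library and file names here are just placeholders for your own) - switch the logicals to rebuild-on-open before the purge, then put them back afterwards:

  CHGLF FILE(MYLIB/MYLF01) MAINT(*REBLD)  /* stop per-record maintenance    */
  CHGLF FILE(MYLIB/MYLF02) MAINT(*REBLD)
  /*  ... run the purge here ...                                            */
  CHGLF FILE(MYLIB/MYLF01) MAINT(*IMMED)  /* rebuild, resume maintenance    */
  CHGLF FILE(MYLIB/MYLF02) MAINT(*IMMED)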

The other points, esp. about copying the records you want to keep and clearing the file, are right on, IMO, esp. when the deleted records will be most of the file. You can CPYF on a range, and an existing logical will help here, too. You deleted roughly 72% of the records; copying the other 28% would mean, at worst, about 40% of the I/O, and could be better if blocking can be used - deletes are not blocked.
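
Something along these lines (the QTEMP work file, the ORDDAT field, and the cutoff value are all made up for the example):

  CPYF FROMFILE(MYLIB/MYPF) TOFILE(QTEMP/KEEP) MBROPT(*REPLACE) +
       CRTFILE(*YES) INCREL((*IF ORDDAT *GE 20030101))
  CLRPFM FILE(MYLIB/MYPF)
  CPYF FROMFILE(QTEMP/KEEP) TOFILE(MYLIB/MYPF) MBROPT(*ADD)

If the range matches the key of one of your existing logicals, copying through that logical with FROMKEY/TOKEY instead of INCREL avoids reading the whole file.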

In my experience, SQL is the worst tool for deleting records, esp. a lot of them, relatively speaking. It's awfully easy to program but often performs very badly. The usual trade-off is: Convenient for the programmer, bad for the user (or performance).
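
For the record, this is the kind of thing I mean - a one-liner to write (names made up again), but underneath it is still a row-at-a-time delete carrying all the index, journal and trigger overhead:

  DELETE FROM MYLIB/MYPF
  WHERE  ORDDAT < 20030101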

As far as using the index goes, the optimizer may still select a table scan if it determines that lots of records will be returned. Default estimates are that greater-than and less-than each return 33% of the records. This is refined in various ways, and I don't know the effect of the AND. A between (which this is equivalent to) assumes 25% of records returned, but key-range estimates may broaden this, as they should in this case. If it finds the number returned to be high, it'll probably go with a table scan, since that will take less time - index key positioning requires more I/O. I/O is the killer in all this.
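
In other words, the two predicate forms below are treated as equivalent; with no better information the defaults above apply, but a key-range estimate over the index can replace them with something much closer to the real fraction of the file in the range (field name and dates are made up):

  WHERE ORDDAT >= 20020101 AND ORDDAT <= 20021231   -- two inequalities
  WHERE ORDDAT BETWEEN 20020101 AND 20021231        -- the equivalent BETWEEN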

Regards

Vern

At 11:31 AM 2/10/03 +0100, you wrote:


Hi,

>- Journaling (number of pending deleted records and multiple writes).
>- Triggers (number of calls and processing in trigger programs).
>- Number of logical files (index updates).
>- Referential constraints (multiple actions processed).

Yeah, I guess the journal will have had a big impact. We have an ASP2 for
journalling, but it only has 2 6718 drives (mirrored).

4 or 5 logicals shouldn't be too bad... No triggers or constraints,
anyhow.

I'll keep this in mind,

Oliver





