Thanks to all who responded so far... More info: The file has no indexes. (I just added one by company to test some things out.) The job will delete a high percentage of the records (from 50% to 100%, depending on the number of companies currently being processed.) I can only delete the records of the company being processed, so that eliminates DROPping and reCREATEing the file.

Will a non-unique index help with a delete percentage this high? Is there a way to "turn off" the overhead for this file, because I really use it as a work file (but I must keep it until I run the process again)?

... a tricky thing, this SQL stuff...

William

> date: Thu, 14 Apr 2005 15:44:30 -0500
> from: "Elvis Budimlic" <ebudimlic@xxxxxxxxxxxxxxxxxxxxxxxxx>
> subject: RE: SQL Delete Performance - Why so Horrible?
>
> I forgot to ask, how many indexes already exist over this physical file? If
> there are many, they need to be maintained during the delete.
>
> -----Original Message-----
> Subject: SQL Delete Performance - Why so Horrible?
>
> Hi All,
>
> Why is the SQL DELETE performance so terrible?
>
> I have a file with 1,500,000 records. When I do:
>
> DELETE FROM filename WHERE company = pCompany
>
> in a stored procedure running in batch, it takes nearly 30 minutes! Of course,
> an SQL DROP or a CL "CLRPFM" works within seconds, but I really must check the
> company before deleting.
>
> Would indexing by company help?
>
> Thoughts? Ideas? Thanks.
>
> William
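A minimal sketch of the indexed approach under discussion, keeping the placeholder names from the original message ("filename", "company", "pCompany"); whether the index actually pays off when 50% to 100% of the rows are deleted is the open question in this thread:

    CREATE INDEX filename_ix
        ON filename (company);        -- non-unique index over the selection column

    DELETE FROM filename
        WHERE company = pCompany;     -- removes only the company being processed

The index lets the database locate the target rows without scanning all 1,500,000 records to evaluate the WHERE clause, but it also has to be maintained for every row deleted, which is exactly the overhead Elvis's reply asks about.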