This is theory--I have not actually tested it. If you have an index that needs to be rebuilt, I would expect that ALTER TABLE will do it in a fashion similar to dropping the index and recreating it, so you may gain nothing by dropping it yourself. However, I do not know what it will do with a LF, which is somewhat more than an index. Maybe it will do it a record at a time.
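
If you want to compare the two approaches, a rough sketch (SQL indexes only; library, table, and index names are all hypothetical) would be:

  -- Variant 1: leave the index in place and let ALTER TABLE maintain it
  ALTER TABLE MYLIB.MYFILE ALTER COLUMN CUST_NO SET DATA TYPE DECIMAL(11, 0);

  -- Variant 2: drop the index yourself, alter, then recreate it
  DROP INDEX MYLIB.MYFILE_IX1;
  ALTER TABLE MYLIB.MYFILE ALTER COLUMN CUST_NO SET DATA TYPE DECIMAL(11, 0);
  CREATE INDEX MYLIB.MYFILE_IX1 ON MYLIB.MYFILE (CUST_NO);

Note this only covers SQL indexes; a DDS logical file would have to be recreated from its DDS source with CRTLF, which is one more reason dropping things yourself may gain you nothing.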

In a former life we had a huge file, with lots of logicals, to which the developer added a field. His manager wanted it adjacent to a related field in the middle of the file. A CHGPF command with the modified DDS was used to make the change. It ran all night and then had to be killed. I was asked to figure out how we could actually change the file. The solution was to add the field at the end of the file, and it was done in a couple of hours. Now this was several releases back on older hardware and things may have improved since then, but I think much of the time was taken up moving all the fields after the inserted field. Adding it to the end apparently isn't nearly so time-consuming.

ALTER TABLE is similar to a CHGPF with new DDS and they may share technology under the covers, but I don't think you can insert a new field in the middle of the record with ALTER TABLE. However, if you change the length of a field in the middle of the record you have the same effect--all fields after the changed field have to be moved around in the record.
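
In SQL terms, the cheap case and the expensive case look roughly like this (names are hypothetical):

  -- Cheap: ADD COLUMN always puts the new field at the end of the record
  ALTER TABLE MYLIB.MYFILE ADD COLUMN NEW_FIELD CHAR(10) NOT NULL DEFAULT ' ';

  -- Expensive: lengthening a field in the middle means every field after it
  -- has to be shifted in every record
  ALTER TABLE MYLIB.MYFILE ALTER COLUMN CUST_NAME SET DATA TYPE CHAR(50);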

I don't think 5 seconds to do an alter table is unusual. It does have to update the system crossref and if the machine is busy that could be significant, though you said you ruled out system load. Differing numbers of indexes might account for the 5 vs 22 seconds though, because of extra effort to maintain the crossref.

I'd be inclined to try testing with production-sized files rather than empty files, running these timings (a rough setup is sketched after the list):

1) table with no LF and no indexes.
2) table with no LF but with indexes.
3) table with LFs and indexes.
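
Roughly, the first two setups could be scripted like this (names are hypothetical; the LF cases need CRTLF from DDS source and aren't shown):

  -- 1) Copy of the production table, no indexes or LFs
  CREATE TABLE MYLIB.TEST1 AS (SELECT * FROM MYLIB.PRODFILE) WITH DATA;

  -- 2) Same copy plus one or more SQL indexes
  CREATE TABLE MYLIB.TEST2 AS (SELECT * FROM MYLIB.PRODFILE) WITH DATA;
  CREATE INDEX MYLIB.TEST2_IX1 ON MYLIB.TEST2 (CUST_NO);

  -- Run the identical change against each copy and compare elapsed times
  ALTER TABLE MYLIB.TEST1 ADD COLUMN NEW_FIELD CHAR(10) NOT NULL DEFAULT ' ';
  ALTER TABLE MYLIB.TEST2 ADD COLUMN NEW_FIELD CHAR(10) NOT NULL DEFAULT ' ';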

Timing tests can be problematic, because often the second test runs faster simply because so much of the data has already been paged into memory. There is a technique for flushing the memory cache, but right now it eludes me.

(You haven't said how often you expect to do the alter table, nor how long each alter table may run. If not very often, and you have a wide enough window, consider that alter table, or CHGPF, does give you a remarkably simple way to make the change and have all LFs and indexes handled automatically.)

Sam

On 11/16/2010 12:24 PM, Kruse, Kat wrote:
As I mentioned in another response, we did rule out locks and system load.

The issue of indices is intriguing as these files do have a lot of logicals/views. Do you know of a way to improve this timing? Would it be faster to delete the logicals/views, do the ALTER TABLE, then rebuild them?

-----Original Message-----
From: midrange-l-bounces@xxxxxxxxxxxx [mailto:midrange-l-bounces@xxxxxxxxxxxx] On Behalf Of Lennon_s_j@xxxxxxxxxxx
Sent: Monday, November 15, 2010 4:23 PM
To: midrange-l@xxxxxxxxxxxx
Subject: Re: ALTER TABLE performance issues

Try getting an exclusive lock on the object first, like this:
ALCOBJ OBJ((MYLIB/MFILE *FILE *EXCL)) CONFLICT(*RQSRLS)

The intent of the CONFLICT parm is to free up any system locks.

Then see how long the change takes.

And I suspect that if you alter a field which is part of an index then the
index will have to be rebuilt and that will add time. Presumably for
your test the table in both libraries had all the same indexes, views
and constraints.

Sam

On 11/15/2010 5:44 PM, Kruse, Kat wrote:
Our developers have finally started using ALTER TABLE instead of the rename/copy method of changing files. One of the things we noticed is that there are times when there are significant differences in the performance of the change.

We've already suggested to them that instead of multiple passes with only one change at a time, they should be doing all the changes in a single pass. We've also noted where the differences can be accounted for by files with many/few columns and by files with many/few rows.
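
A single-pass change would be one ALTER TABLE carrying all of the changes at once (column names here are hypothetical), so the table is rebuilt once rather than once per change:

  ALTER TABLE MYLIB.MYFILE
    ADD COLUMN NEW_FLAG CHAR(1) NOT NULL DEFAULT ' '
    ADD COLUMN NEW_QTY DECIMAL(9, 2) NOT NULL DEFAULT 0
    ALTER COLUMN CUST_NAME SET DATA TYPE CHAR(50);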

After taking the above into consideration we're still finding that ALTER TABLE gives very inconsistent results. For example, for a given file which had 0 records, in library A it took 5 seconds to finish while in library B (also 0 records) it took 22 seconds to finish.

FWIW, we're at V6R1 on Power7.

So, has anyone got suggestions on what we can look at or do to improve our performance?

Thanks,
Kat Kruse
Qualcomm, Inc.
Systems Support Engineer

