For any INSERT the database must atomically locate where to place the row, either by addressing the end of the dataspace (using information stored in the dataspace about where that location is) or by finding a slot via the reuse information stored in the reuse segment. In both cases the row must be added in a manner that handles any concurrency. Performance with reuse suffers when the data segment into which the row is placed must first be brought into memory, which is less likely when data is always added at the end. On the other hand, because with reuse enabled the data segment receiving the row is variable/random [i.e. not sequential], reuse can allow more concurrency [concurrent write support] than when all rows must contend for placement at the end. Thus a single-threaded batch insert, with data coming in arrival sequence, will likely do better with pre-allocated storage or very large storage extents defined.
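
As a rough CL sketch of that last point (the file, library, and SIZE values below are placeholders, not from this thread):

    /* Hypothetical setup for a single-threaded batch load:      */
    /* keep inserts in arrival sequence at the end of the file   */
    /* and pre-allocate a large initial extent.                  */
    CHGPF      FILE(MYLIB/BATCHPF) REUSEDLT(*NO) +
                 SIZE(10000000 1000000 100) ALLOCATE(*YES)

ALLOCATE(*YES) asks the system to reserve the initial SIZE extent when a member is added, so a long sequential load is less likely to pause for storage extensions.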

It is CHGPF with SRCFILE(specified) that definitely impacts the SQL-ness of the specified file; the TABLE designation is lost, because the file is then recreated from the specified DDS as though by CRTPF. Going the other way, a DDS PF with columns modified by SQL ALTER TABLE may pick up some SQL-ness for certain column attributes. A request of CHGPF SRCFILE(*NONE) just changes the attributes common to physical files; some attributes are protected for a SQL TABLE, for example MAXMBRS() must remain one.
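
To make the distinction concrete, a hedged CL illustration (object and source names are placeholders):

    /* Recreating the file from DDS: the SQL TABLE designation   */
    /* is lost, as described above.                              */
    CHGPF      FILE(MYLIB/MYTABLE) SRCFILE(MYLIB/QDDSSRC) +
                 SRCMBR(MYTABLE)

    /* Changing only attributes common to physical files: the    */
    /* TABLE designation is kept, and protected attributes such  */
    /* as MAXMBRS(1) cannot be changed.                          */
    CHGPF      FILE(MYLIB/MYTABLE) SRCFILE(*NONE) REUSEDLT(*NO)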

Regards, Chuck

Kurt Anderson wrote:
Jorge - I think when you do a CHGPF on a SQL-created file it is
no longer treated as SQL created. I'm not 100% sure on that, but I
thought I came across that somewhere.

Rob - good questions. I was asking because I wanted to know what
my options are. There is concern in the shop about reusing
deleted records. While I'm not sure of their reasons, there is a
performance consideration: reusing deleted records incurs a
"nominal" performance hit, but nominal adds up when batch
processing millions of records. (In this case it's doubtful that
there'd be millions of deleted records - although I don't know
whether the nominal hit comes from checking the Deleted Record
Map before performing the write or from the reuse of the deleted
record itself.)
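
For reference, one way to check how a file is currently set up (the file and library names below are placeholders):

    /* TYPE(*ATR) shows the "Reuse deleted records" attribute;   */
    /* TYPE(*MBR) shows each member's current deleted-record     */
    /* count.                                                    */
    DSPFD      FILE(MYLIB/BATCHPF) TYPE(*ATR)
    DSPFD      FILE(MYLIB/BATCHPF) TYPE(*MBR)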

<<SNIP>>

Rob wrote:

Why would you not want it to? Are you relying upon arrival
sequence? Can you bypass that by adding a "sequence" column to
the table? If you're concerned about sequence rollover, log a
date along with the sequence. That should work unless you plan
on rolling over your sequence in the same day.
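
A minimal CL sketch of that suggestion, assuming a release that has the RUNSQL command; the table and column names are made up for illustration:

    /* Add an identity column for ordering plus a date column    */
    /* to guard against sequence rollover, per Rob's suggestion. */
    RUNSQL     SQL('ALTER TABLE MYLIB.BATCHPF +
                 ADD COLUMN SEQNBR BIGINT GENERATED ALWAYS AS IDENTITY +
                 ADD COLUMN LOADDT DATE NOT NULL DEFAULT CURRENT DATE') +
                 COMMIT(*NONE)

The identity column preserves arrival order for queries, and the date column covers the rollover case mentioned above.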


