You've got a lot of options available...

    1) Stream files aren't a bad idea -- have you tested the
         performance?  Assuming you don't use QDLS, I wouldn't expect
         their performance to be a problem.
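For what it's worth, a stream file in the IFS root file system can be driven with ordinary POSIX calls, so you can write each 32K chunk at its natural offset and later read any byte range directly without walking records. A minimal sketch (plain C; the path name is just an illustration):

```c
#include <fcntl.h>
#include <unistd.h>
#include <string.h>

enum { CHUNK = 32768 };

/* Write chunk number n (0-based) at its natural byte offset. */
static void put_chunk(int fd, long n, const char *buf)
{
    pwrite(fd, buf, CHUNK, (off_t)n * CHUNK);
}

/* Fetch len bytes starting at any logical offset -- no seeking
   through earlier data required. */
static ssize_t get_bytes(int fd, long offset, char *out, size_t len)
{
    return pread(fd, out, len, (off_t)offset);
}
```

The point is that the file itself is the addressing scheme: the logical offset you already have is the file offset, with no 16MB ceiling.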

    2) Multiple user spaces would almost certainly be the fastest
         solution, if speed is your main criterion.
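Since a single user space tops out at 16MB, spanning a buffer across several of them just means mapping your global offset to a (space number, offset-within-space) pair. Here's a sketch of only that arithmetic -- the QUSCRTUS/QUSPTRUS calls to create the spaces and get their pointers are left out, and the usable size per space is an assumption you'd adjust to what the API actually gives you:

```c
/* Assumed usable bytes per user space; tune to the real figure
   your QUSCRTUS call yields. */
enum { SPACE_SIZE = 16 * 1024 * 1024 };

typedef struct {
    int  space;   /* which user space (0-based)  */
    long local;   /* byte offset within that space */
} SpaceAddr;

/* Map a global buffer offset to a space number and local offset. */
static SpaceAddr map_offset(long global_offset)
{
    SpaceAddr a;
    a.space = (int)(global_offset / SPACE_SIZE);
    a.local = global_offset % SPACE_SIZE;
    return a;
}
```

Once you've got a pointer to each space, indexing into `space` by `local` is just pointer arithmetic, which is why this is the fastest option.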

    3) The other good option would be to use a non-keyed PF with records
         that consist of one 32K field.  It'd be easy enough to divide
         your offset by 32K to get the RRN, and use the remainder as the
         offset into the record.  A PF has no size limit that I know of.
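The divide-and-remainder step in option 3 can be sketched in a few lines of C (the struct and function names here are just illustrative). The one wrinkle worth remembering is that RRNs are 1-based:

```c
enum { REC = 32768 };  /* one 32K field per record */

typedef struct {
    long rrn;   /* relative record number, 1-based */
    long pos;   /* byte position within the record  */
} RecAddr;

/* Map a global byte offset into (RRN, position-in-record). */
static RecAddr locate(long offset)
{
    RecAddr r;
    r.rrn = offset / REC + 1;  /* RRNs start at 1, not 0 */
    r.pos = offset % REC;
    return r;
}
```

So byte 0 of the buffer lives at position 0 of record 1, and byte 98,311 (3 * 32768 + 7) lives at position 7 of record 4.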

I don't think data queues are a good fit for this, and I don't really
understand why they've even come up.  I can't think of any advantage
they have over PFs at all -- unless, of course, you've got another job
that's processing the records as they arrive?  But then it doesn't make
much sense to store them temporarily at all; you can process them off of
a socket just as easily as off a data queue.

Likewise, a user index doesn't make much sense.  You don't want to access
the data by index, so why do it that way?

That's my opinion, for what it's worth.

Are you Tex?

On Mon, 3 Nov 2003, John Brandt Sr. wrote:
>
> I have written a utility to retrieve records from an SQL server.
> I used User Space to store the data returned from the server.
> The problem is that the user space is restricted to 16MB.
> In order to break this limitation, I need to move to another storage method.
> I am putting data directly off a socket in 32K chunks and accessing it by
> address.
> Any method besides the IFS would cause me to have to calculate the record I
> want on every I/O, although a Data Queue would work because I can use the 32K
> chunks and just divide my offset by 32767 to get the record that I want. (I
> will still have some overlap, but I can deal with that)
>
