This is an interesting point, Steve.

> From: Steve Richter
>
> I don't think it is an openness issue. DRDA, I am guessing, is part
> of a server-centric architecture. Since the release of ADO.NET (5
> years ago) and probably SQL Server 2000, Microsoft has gone with the
> detachable dataset approach to database serving. They say such a
> database setup is more scalable and is more compatible with server
> farms.

Not sure yet about the server farms issue... I need to think about that. But from a standpoint of pure server load, you're saying that the overhead of storing the result set on the server and sending it over to the caller on a request-by-request basis is higher than that of creating the entire dataset and shipping it as a detached set. On its face, you seem to be correct about that.

Of course, that's really only useful when you have bounded sets; that is, sets of data where the requester already knows the beginning and end. That's a bit different from the concept of a scrollable cursor, or even an updateable cursor. Detached commitment control isn't going to work very well. But for that segment of the application suite that is read-only on small but complex datasets, I can understand the detached set concept. However, since you still need both scrollable and updatable cursors for any sort of real development, this argument against server-side cursors (and DRDA) seems less compelling.

> For example, the server-side SQL cursor is still supported by MS, but
> its use is discouraged and it is not a part of the .NET framework. You
> have to roll your own. The theory being that the server farm can
> serve out a subset of the database as a dataset to the client very
> efficiently. The client then orders and joins the datasets as it
> wishes, using the client's resources. The server-side cursor, with
> all its built-in functionality, puts a heavy load on the server. So
> it does not scale well.

This is even more interesting.
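The two fetch styles being compared can be sketched in a few lines. This is a minimal illustration using Python's sqlite3 module as a stand-in for the real stacks under discussion (ADO.NET datasets, DRDA cursors); the table and column names are made up for the example:

```python
# Contrast "cursor" fetching (server holds result-set state, client pulls
# rows on demand) with "detached set" fetching (ship the whole bounded
# result at once, then work locally). sqlite3 here is only a stand-in.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.executemany("INSERT INTO orders (total) VALUES (?)",
                 [(t,) for t in (10.0, 25.5, 7.25, 99.0)])

# Cursor style: each fetch call pulls the next row, so state lives with
# the connection for as long as the client keeps reading.
cur = conn.execute("SELECT id, total FROM orders ORDER BY id")
first = cur.fetchone()  # one row at a time, on demand

# Detached-set style: materialize the whole bounded result, then release
# the connection; all further work uses only the local copy.
rows = conn.execute("SELECT id, total FROM orders ORDER BY id").fetchall()
conn.close()

grand_total = sum(total for _id, total in rows)  # operates on the detached copy
```

The trade-off the quoted text describes falls out directly: the detached copy costs one burst of transfer and local memory, while the cursor costs held server state per open client.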
You're saying that the client is better equipped to join and order the data. I'm not sure of all the database architecture involved here, but it seems like you're dividing the workload of the SQL engine: selecting data on the one hand, and sorting it on the other. And that doesn't make sense at first glance, especially on a join, because your selected data on joined tables would be limited by the results of the initial select.

Of course, I've never believed in the concept of distributed databases anyway, except in extreme circumstances. Just think of it... if the three tables of a join are on three different machines, then the result of the first join must be sent to the second machine, which in turn must send both datasets to the third machine, which then sends all three datasets to the client, which only THEN begins ordering them. Obviously, modifiers like HAVING and "FIRST N ROWS" wouldn't be able to reduce traffic and processing if the ordering were all client-side.

To me it seems like what is happening here is that as you add servers to a database, the processing requirements grow exponentially, and to offset that, Microsoft needs to use the client's CPU cycles as part of the database access mechanism. That's about as non-scalable as I can imagine, and a neat way to lock clients into the servers. Wow... you can't be a client on our server unless you also do a bunch of the work! Hee hee! That's like saying you can eat at our restaurant, but you have to wash the dishes, too.

Joe
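The traffic argument above can be made concrete with a rough sketch, again using sqlite3 as a stand-in and invented table names: a join filtered on the server ships only the qualifying rows, while a client-side join must pull both full tables before any filtering happens.

```python
# Compare rows shipped to the client when the engine does the join and
# filter, versus pulling whole tables and joining locally.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, region TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, cust_id INTEGER, total REAL);
""")
conn.executemany("INSERT INTO customers VALUES (?, ?)",
                 [(1, "east"), (2, "west"), (3, "east")])
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, 1, 10.0), (2, 2, 20.0), (3, 3, 30.0), (4, 1, 40.0)])

# Server-side: the engine applies the join and the WHERE clause, so only
# the matching rows travel to the client.
server_rows = conn.execute("""
    SELECT o.id, o.total FROM orders o
    JOIN customers c ON c.id = o.cust_id
    WHERE c.region = 'east'
""").fetchall()

# Client-side: every row of both tables crosses the wire first, then the
# client builds the join itself.
custs = conn.execute("SELECT id, region FROM customers").fetchall()
ords = conn.execute("SELECT id, cust_id, total FROM orders").fetchall()
east = {cid for cid, region in custs if region == "east"}
client_rows = [(oid, total) for oid, cid, total in ords if cid in east]

rows_shipped_server = len(server_rows)        # only the qualifying rows
rows_shipped_client = len(custs) + len(ords)  # both tables, whole
```

Both paths produce the same answer, but the client-side path moves every row of both tables regardless of how selective the filter is, which is the scaling complaint in the message above.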
This mailing list archive is Copyright 1997-2025 by midrange.com and David Gibbs as a compilation work. Use of the archive is restricted to research of a business or technical nature. Any other uses are prohibited. Full details are available on our policy page. If you have questions about this, please contact [javascript protected email address].