> -----Original Message-----
> From: James Rich [mailto:james@xxxxxxxxxxx]
> Sent: Wednesday, November 24, 2004 4:50 PM
> To: Midrange Systems Technical Discussion
> Subject: RE: Laymans explaination for single level store?
> 
> 
> On Wed, 24 Nov 2004 CWilt@xxxxxxxxxxxx wrote:
> 
> > "Single-level store is not about a large address space; 
> it's about sharing."
> >
> > "It may seem obvious to some that every server designed to support a
> > multi-user, multi-application environment should use an 
> addressing model
> > designed for sharing.  As I stated earlier, this is a 
> foreign concept to
> > hardware and operating system designers."
> 
> The above implies that only single level store (SLS) systems 
> can share 
> memory between applications and users.  That is not true.


As pointed out in the reply to Waldon, there is a difference between
explicitly allowing sharing between processes, with the application's help,
and implicitly sharing everything at the OS level.
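If it helps to see what "with the application's help" means in practice,
here's a rough POSIX/Linux sketch (the name /payroll_demo is made up, error
handling is left out, and older glibc needs -lrt to link):

/* A minimal sketch of explicit, application-assisted sharing: nothing is
 * shared unless both processes ask for it by name.  (POSIX shared memory;
 * the name below is invented for the example.) */
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = shm_open("/payroll_demo", O_CREAT | O_RDWR, 0600);
    ftruncate(fd, 4096);                       /* size the shared region    */
    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, 0);         /* map it into *this* VM     */
    strcpy(p, "visible to any process that maps the same name");
    /* A second, cooperating process has to repeat shm_open()/mmap() itself. */
    munmap(p, 4096);
    close(fd);
    shm_unlink("/payroll_demo");
    return 0;
}

Contrast that with SLS, where everything already lives in the one address
space and security, not mapping, decides who gets to touch it.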


> 
> Further, it may not be that such a system "should" use an 
> addressing model 
> designed for sharing (whatever that means).  Consider a system with 
> several LPARs or system partitions.  Certainly memory should 
> not be shared 
> across unless you specifically want to.  There must be a wall of 
> separation.  Or to stick within on logical system, consider a 
> system that 
> implements military style security protections such as 
> SELinux.  Part of 
> the design mandate is that memory not be shared.

Granted, there may be some multi-user systems in which sharing should not
occur.  Obviously, those customers wouldn't want an i5 with its single level
store.

LPAR provides hardware consolidation.  From the application's point of view,
it's no different from having multiple physical machines.  Thus, LPAR is not
relevant to the discussion of whether memory within a partition should be
shared across processes.

> 
> So for these two reasons I think we can conclude that the 
> premise that 
> "every server designed to support a multi-user, multi-application 
> environment should use an addressing model designed for 
> sharing (which we 
> suppose is SLS)" is false.
> 

Given your military example, sure, "every" is false.  But let's amend it to
say "most single-customer businesses".  So we're not talking about a server
to be used in an ASP-type environment where multiple customers are running.
In an ASP environment, the protection provided by a two-level store may make
sense.  However, this doesn't mean SLS couldn't do ASP; it just means that
the SLS must implement a good security model that prevents sharing when
necessary.

> > "Virtual memory evolved to support time-sharing by giving 
> each user a
> > separate address space.  One users memory space was 
> isolated from another
> > users, thereby providing a degree of protection between them."
> >
> > "Worse yet, the designers of these time-sharing systems 
> decided to keep the
> > file system outside virtual memory. They created two places 
> to store data
> > and programs: the virtual memory and the file system.  With 
> this design, the
> > data and programs can be used or changed only when they are 
> in virtual
> > memory.  This means anything in the file system must be 
> moved into virtual
> > memory before it can be used or changed."
> 
> The second paragraph tries to paint the "degree of 
> protection" described 
> in the first paragraph as a bad thing.  But then it goes on without 
> addressing why that protection is bad.  Instead it talks about some 
> possible flaws in implementation of memory systems but leaves 
> untouched 
> the design purpose of the protection between users.  This is a flawed 
> argument.

No, it's not.  From a business standpoint, most computer systems are designed
to help share information within the organization.  That being the case, why
prevent processes within a server from sharing information?  Sure, payroll
data needs to be secured, but if multiple authorized processes need to work
on the same data, why should they be prevented from sharing the data between
them?


> 
> > "A simple, familiar example for this process is using a 
> word processor on a
> > PC.  We first open the file, which contains the document we 
> want to use, and
> > we watch the hard disk light blink as the document is read 
> into memory.
> > Actually, it is first being read into our virtual memory 
> and then part of
> > the document is read into memory.  When we originally 
> configured our PC
> > operating system, we told it how much disk space should be 
> reserved for
> > virtual memory.  In the PC world, this is called the 
> application swap file.
> > As we scroll through the document, we notice the hard drive 
> light again
> > blinking.  As needed, new parts of the document are being 
> read into memory
> > from this reserved space.
> 
> Parts of this are wrong and parts we'll come back to later.
> 
> The sentence, "Actually, it is first being read into our 
> virtual memory 
> and then part of the document is read into memory" is wrong 
> (or at least 
> it is on the only non-SLS system I am familiar with: linux).  
> When the 
> disk is read it is always copied into RAM first.  If and only 
> if there is 
> not enough space in RAM to hold the data are some parts of 
> RAM paged out 
> to disk.  Those parts are chosen carefully in order minimize 
> the chances 
> that any process will want to access them anytime in the near 
> future.  The 
> idea that a file on disk is first copied to swap and then to RAM is 
> completely false.

No, it's not.  Note that the original post was a layman's explanation.  The
example was designed to get the basic ideas across, not to provide exact
technical details.

You're confusing virtual memory and RAM.

Here's the idea: an application can't access data unless it has been loaded
into the application's VM address space.  At the same time, the CPU can't
access data unless it is in physical RAM.  Data is loaded from the file
system on disk into VM, and VM is either in physical RAM or in the swap file
on disk.

Now, with enough processes and/or a large enough piece of data, when the data
is loaded into VM, parts of it will end up swapped back out to disk.  Sure,
there are all kinds of algorithms used to make sure that this swapping is
done in an intelligent manner.  But when it's swapped out to disk, it's
written to a reserved portion of the disk (or a completely separate disk),
outside of the file system where the object permanently lives.
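If you want to see the VM-versus-RAM distinction for yourself, here's a rough
Linux-only sketch using mmap() and mincore() (the file name is a placeholder,
error handling is omitted, and the exact output depends on what's already in
the page cache):

/* mmap() puts the file into this process's virtual address space, but
 * mincore() shows that a page only has to be resident in physical RAM
 * once something actually touches it. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    long page = sysconf(_SC_PAGESIZE);
    size_t len = 4 * (size_t)page;             /* look at the first 4 pages */
    int fd = open("somefile.txt", O_RDONLY);   /* placeholder file name     */
    unsigned char *data = mmap(NULL, len, PROT_READ, MAP_PRIVATE, fd, 0);
    unsigned char vec[4];

    mincore(data, len, vec);                   /* residency before touching */
    printf("page 0 in RAM before read: %d\n", vec[0] & 1);

    volatile unsigned char c = data[0];        /* fault the first page in   */
    (void)c;

    mincore(data, len, vec);                   /* residency after touching  */
    printf("page 0 in RAM after read:  %d\n", vec[0] & 1);

    munmap(data, len);
    close(fd);
    return 0;
}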

> 
> >     The act of opening the file has created a second copy of our
> > document.  The original copy of the document still exists 
> unchanged on our
> > hard drive.  The second copy is in the disk space reserved 
> for virtual
> > memory. ... There are actually three copies of some parts 
> of the document,
> > if we count the copy in memory."
> 
> This last sentence is also wrong.  The whole point of virtual 
> memory is to 
> have a place to put things when you have run out of space in 
> RAM.  Thus 
> things are written to swap *when they no longer fit in RAM*, i.e. the 
> stuff in swap is not in RAM any longer.  If it were, they why 
> swap it out?

No, the whole point of virtual memory is so that application programmers
don't have to deal with the limited amount of physical RAM.  As far as the
application is concerned, every PC has a full 2GB address space, no matter
how much physical RAM is actually installed.
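As a rough illustration of that point (just a sketch; on a box with less than
2GB of RAM plus swap this will page heavily or may even get killed by the OS):

/* The program simply asks for memory and touches it; the OS decides what
 * stays in physical RAM and what gets pushed out to the swap file. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    size_t chunk = 256UL * 1024 * 1024;        /* 256 MB per block          */
    for (int i = 0; i < 8; i++) {              /* 2 GB of "memory" in total */
        char *p = malloc(chunk);
        if (p == NULL) {
            puts("out of address space");
            break;
        }
        memset(p, i, chunk);                   /* touch every page          */
        printf("using %d MB so far\n", (i + 1) * 256);
    }
    return 0;
}

The application never knows (or cares) how much of that 2GB is in RAM at any
instant.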

> 
> > [Single-Level Virtual Memory]
> > "The programmer in the virtual memory implementation just 
> described sees and
> > managers two levels of storage; the file system and virtual 
> memory are
> > separate.  This two-level store also creates overhead in the system.
> > Opening a file causes a disk write to the swap area and 
> closing a file
> > requires a disk write back to the permanent location.  An 
> alternative
> > approach is to have only one copy of the file.
> 
> Again, opening a file does not cause a disk write to the swap 
> area unless 
> the memory required by a process exceeds the amount available 
> in RAM.  And 
> then it is highly unlikely that the process reading the file 
> will have any 
> of its parts swapped, since that is the process that the 
> system is making 
> room for!

Again, you're expecting exact technical details from a simple layman's example.

> 
> Further, simple reasoning leads to the conclusion that even a 
> SLS system 
> must have two copies of disk data read into memory:  that 
> which is on the 
> disk and that which is in memory.  If it were not so, then 
> that would mean 
> then when something is read into RAM, it would no longer 
> exist on disk! 
> Removing data from disk to put it into RAM is surely not what 
> a SLS system 
> does.  Thus there are two copies.

Ah, but that is the great thing about SLS.  There is theoretically only one
copy!

I say theoretically, since the object is of course not erased off the disk.
But the object doesn't really exist on the disk in any event.  The object
only exists in the VM address space shared across all processes.  The disk
is simply the mechanism used to give the objects permanence. 

With SLS, I don't say "read this object from the disk."  I say "read the
object at such-and-such address in VM."  Now, the VM pages that contain
the object are either in physical memory or they have been swapped to disk.
If they've been swapped to disk, they are simply swapped back in.

Is this any clearer?  
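If a rough analogy on a conventional system helps, here are the two
programming models side by side (Linux/POSIX sketch, made-up file name, no
error handling); it's only an analogy, since with SLS there is no file system
separate from the VM:

/* Two ways to get at the same data: name the file and copy bytes out of it,
 * or map it once and then just use an address. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("record.dat", O_RDONLY);     /* made-up file name         */

    /* Two-level model: "read this object from the disk" into a buffer */
    char buf[64];
    pread(fd, buf, sizeof buf, 0);

    /* Closer to the SLS way of thinking: "read the object at this address" */
    char *obj = mmap(NULL, 4096, PROT_READ, MAP_PRIVATE, fd, 0);
    char first = obj[0];                       /* a page fault brings it in */

    printf("%c %c\n", buf[0], first);

    munmap(obj, 4096);
    close(fd);
    return 0;
}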

> 
> >     Not having two separate copies means we don't need to 
> reserve disk
> > space for a swap file.  With this approach, the entire file 
> system becomes
> > part of virtual memory."
> >
> > "The open and close operations no longer must physically 
> copy the entire
> > file from its permanent location on the disk.  Only the 
> portion (or record)
> > you're reading or working on is copied to a memory buffer.  We often
> > describe this by saying files are always used "in place," 
> thereby improving
> > overall system performance."
> 
> Now we return to the word processor example from above.  The above 
> paragraph says, "The open and close operations no longer must 
> physically 
> copy the entire file from its permanent location on the disk. 
>  Only the 
> portion (or record) you're reading or working on is copied to 
> a memory 
> buffer."  The implication is that non-SLS system do copy the 
> entire file 
> into RAM.  But the word processor example above said, "As we scroll 
> through the document, we notice the hard drive light again 
> blinking.  As 
> needed, new parts of the document are being read into memory 
> from this 
> reserved space."  We already showed that the document is not 
> read from 
> swap.  Thus this reading of new parts of the file are disk 
> reads from the 
> location of where the file is stored on disk, just like we 
> expect.  This 
> is in direct conflict with the premise of the above paragraph 
> the non-SLS 
> systems read the entire file into RAM at once instead of 
> piece by piece as 
> needed.  The quoted article contradicts itself.

Again, layman's example...

The idea is that with SLS, you don't read from the file system into VM.
The object is already in VM.

Originally, on the System/38, there was no way to control which disks objects
ended up on.  This was a side effect of SLS.  Again, disks are simply a way
to make permanent an object which exists only in the VM address space of the
system.
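As a very rough analogy on a conventional OS (this is just Linux mmap(), not
real single-level store; the file name is invented and the file must already
exist and be at least a page long):

/* Update a "record" through its virtual address and let the OS decide when
 * the page goes back to disk; the disk is only where the bytes persist. */
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("orders.dat", O_RDWR);       /* made-up file name         */
    char *rec = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    rec[42] = 'X';               /* change the data "in place" at an address */
    msync(rec, 4096, MS_SYNC);   /* ask that the change be made permanent    */

    munmap(rec, 4096);
    close(fd);
    return 0;
}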

> 
> > "As with a two-level virtual memory, the memory is still 
> used as a buffer.
> > Processors can operate directly only on data in memory, not 
> on the disk.
> > The difference with only one level is that memory is now a 
> cache for all the
> > disk storage, rather than for only a reserved area on the 
> disk.  Also, when
> > one user makes a change to a file, the change is instantly 
> available to any
> > other user who shares the file."
> 
> Memory [RAM] can only be a cache for all the disk storage if 
> the amount of 
> RAM equals the amount of disk, regardless of whether a system 
> uses SLS or 
> not.  Any time the RAM is less than the amount of disk then 
> the RAM cannot 
> possibly cache all the disk - it simply won't fit.

Nonsense.  Simple example: a caching disk controller certainly caches all the
disk attached to it, even though the amount of RAM on the controller is
orders of magnitude less than the total disk space attached.

Still, the idea here is that instead of VM pages being swapped between
physical RAM and a reserved section of the disk, VM pages are swapped
between physical RAM and all of the disk.

> 
> I know the article quoted is an IBM document and I'm no 
> uber-hacker.  But 
> the article simply doesn't stand on its own two feet.
> 
> James Rich
> 

I'm not an uber-hacker either.  But I would consider Dr. Soltis one, and all
the quotes in my original post are from his book.  With a BS in Computer
Science, I believe I understand the iSeries and other systems well enough
that I personally don't have any problems with his layman's explanations.

Now, I may not know enough to explain them well enough for everybody to
understand.  But I'm trying, and if I've failed, please feel free to read the
Fortress Rochester book yourself.

Charles 
