  • Subject: Re: stack vs. storage-to-storage
  • From: Blair Wyman <wyman@xxxxxxxxxxxx>
  • Date: Wed, 24 Nov 1999 12:41:14 -0600 (CST)

Excerpts from mi400: 23-Nov-99 stack vs. storage-to-storage Gene Gaunt@ibm.net (433) 

> Why is the stack virtual machine better than the storage-to-storage
> virtual machine?  Is it faster? 

The key advantage to a stack-based virtual machine is in code
optimization.  Consider this simple sequence of HLL code: 

x = (a * a) + (b * b); 
y =  x + (c * c); 

In the storage-to-storage model of Original MI the code expansion might
look like: 
     MULT T1, A,  A   <--updates storage 
     MULT T2, B,  B   <--updates storage 
     ADDN X,  T1, T2  <--updates storage 
     MULT T1, C,  C   <--updates storage 
     ADDN Y,  X,  T1  <--updates storage 

The 'storage-to-storage' Original MI architecture requires that T1 and
T2 -- the two temporary variables -- be updated by the MULT
instructions.  This means storing the intermediate results "all the way
home" to their locations in the automatic or static storage frame. 

Now consider the stack-based machine expansion, in pseudo-code: 
     LOD  A           <-- push a
     DUP              <-- duplicate top of stack
     MULT             <-- a * a
     LOD  B           <-- push b
     DUP
     MULT             <-- b * b
     ADD              <-- (a * a) + (b * b)
     STR  X           <-- updates storage 
     LOD  X           <-- push x back
     LOD  C           <-- push c
     DUP
     MULT             <-- c * c
     ADD              <-- x + (c * c)
     STR  Y           <-- updates storage 

While there are more 'pseudo' instructions, note that the only places
storage is updated are the stores to the actual result locations, X and Y.   
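
If it helps to make the pseudo-code concrete, here is a minimal C sketch of a
stack interpreter running that exact sequence.  The opcode names and variable
layout are my own invention, not the real MI translator; the point is that the
vars[] array -- standing in for "storage" -- is written only by STR: 

     #include <stdio.h>

     enum op  { LOD, DUP, MULT, ADD, STR };
     enum var { A, B, C, X, Y };

     struct insn { enum op op; int var; };

     int main(void)
     {
         /* "Storage": a=2, b=3, c=4; x and y start at 0. */
         double vars[5] = { 2.0, 3.0, 4.0, 0.0, 0.0 };

         /* The stack-machine expansion from above. */
         struct insn prog[] = {
             {LOD,A}, {DUP,0}, {MULT,0},
             {LOD,B}, {DUP,0}, {MULT,0},
             {ADD,0}, {STR,X},
             {LOD,X}, {LOD,C}, {DUP,0}, {MULT,0},
             {ADD,0}, {STR,Y},
         };

         double stack[16];
         int sp = 0;                       /* next free stack slot */

         for (unsigned i = 0; i < sizeof prog / sizeof prog[0]; i++) {
             switch (prog[i].op) {
             case LOD:  stack[sp++] = vars[prog[i].var];   break;
             case DUP:  stack[sp] = stack[sp-1]; sp++;     break;
             case MULT: stack[sp-2] *= stack[sp-1]; sp--;  break;
             case ADD:  stack[sp-2] += stack[sp-1]; sp--;  break;
             case STR:  vars[prog[i].var] = stack[--sp];   break;  /* only write to "storage" */
             }
         }

         printf("x = %g, y = %g\n", vars[X], vars[Y]);   /* 13 and 29 */
         return 0;
     }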

All processors wait at the same speed...  One of the things that
processors have to wait for is the time it takes to get items to and
from "storage."  The "distance" to storage, compared to the speed of the
processor, has been a critical source of performance problems for years.
 Partial solutions such as 'data caching' -- effectively keeping some of
the storage logically 'closer' to the processor -- have been very
successful, but do not solve the fundamental problem.  Further, they
suffer from problems of their own, such as keeping the cache consistent
when there are multiple processors. 

Therefore, avoiding trips to "storage" -- especially distant storage --
is a key facilitator for optimization.   

Now, the 'stack' in the virtual machine can actually map to processor
registers -- the absolute "closest" of the close storage -- and so we
can get this simple sequence to happen a WHOLE lot faster. 

Furthermore, if it can be determined that 'x' is never used again in the
program, it need never even *have* a final storage location, instead
living its entire life in a single register. 
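
In C terms -- again just a sketch, assuming an optimizing compiler and that
nothing else ever reads 'x' -- that is the difference between 'x' being an
ordinary local the compiler is free to keep in a register and the forced
stores shown earlier: 

     /* With 'x' as a plain local and no later uses, an optimizing
      * compiler may keep it in a register for its whole lifetime;
      * only the final result ever needs to reach storage. */
     double y_in_registers(double a, double b, double c)
     {
         double x = (a * a) + (b * b);   /* may never touch memory */
         return x + (c * c);
     }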

Anyway, that's probably more than you wanted, but it's a peek at just
one of the architectural advantages of a stack-based machine -- the
'locality' of the logical stack storage and the avoidance of long trips
to storage. 

HTH. 

-blair 

