> From: srichter
> >From: "Nathan M. Andelin" <nandelin@relational-data.com>
> >RPG Way:
> >
> >phone = getPhone(customer)
> >
> >OOP Way:
> >
> >phone = customer.getPhone()
> >
> Agreed. And C++ is C with a built in "this" pointer to the data
> structure the methods are associated with.
> Both examples use many times the cpu of:
> Eval phone = CustPhone
> where Cust is the rcdfmt and CustPhone is a fld in that rcdfmt.

Actually, this is not necessarily the case.  If you understand that in the
OOP environment the "assignment" is actually a single move of a pointer,
whereas the Eval is an actual memory-to-memory copy of an array of bytes, I
think you'll find that the OOP may be at least competitive, if not faster,
even though there's an implicit CALL.  The actual machine instructions might
map to something like this (I am obviously making up the syntax, but if
you've done any assembly language you'll recognize the basic concepts):

-- Mainline
PUSH BASEPTR                            Save "this" pointer
MOVE %ADDR(CUSTOMER), BASEPTR           Set "this" pointer
CALL .GETPHONE, +4                      Call method, saving space for return value

-- Method .GETPHONE
MOVE @STACKPTR-4, BASEPTR               Get "this" pointer
MOVE BASEPTR+CUSTPHONE, @STACKPTR       Store address of "CUSTPHONE" in stack
RETRN                                   Return to caller

-- Mainline
POP  R0                                 Pop return value into register
POP  BASEPTR                            Pop "this" pointer
MOVE R0, BASEPTR+PHONE                  Move register to variable "PHONE"

Many of these instructions are likely single-cycle instructions.  Let's
assume half are single cycle, half are two-cycle, and estimate 14 cycles
total.  At the same time, depending on the length of the operand, your
simple eval could take just as long.  Now, Leif can probably tell us better
whether the RISC architecture supports a repeat move; some processors do,
some don't.  If there is no block copy, then your Eval will look like this:

    MOVE %ADDR(CUSTPHONE), R0           Set address of source
    MOVE %ADDR(PHONE), R1               Set address of target
    MOVE 15, R2                         Set length
LP: MOVE @R0, @R1                       Move one byte
    INCR R0                             Increment source pointer
    INCR R1                             Increment target pointer
    DECR R2                             Decrement length
    JNZ  LP                             And loop until done

This is the worst case scenario, assuming only one-byte memory moves, no
auto-increment of registers and so on.  While a syntax this primitive is
unlikely to be found on any CPU since the 8085, I'm using it to prove a
point.  Even if each operation is one cycle, you're still looking at over 75
cycles.  Depending on the various syntaxes available, this can be
significantly reduced, especially if there is a block copy, which I believe
there is in MI.  If there is, then it's a single instruction, something like
this (again, made-up syntax):

    MOVE %ADDR(CUSTPHONE), %ADDR(PHONE), 15     Block copy, 15 bytes

But more than likely that is not a single-cycle instruction.  I'm not an MI
expert, but I'm reasonably sure that repetitive operations require multiple
CPU cycles.  I suspect that this operation will still require somewhere
between 4 and 15 cycles, simply because there are memory-to-memory moves.
The reality is that each memory-to-memory move requires a fetch and a store,
and this requires CPU cycles.

As the operands get longer, the time increases, and so the comment that the
OOP statement takes many times longer than the non-OOP statement is not
necessarily correct.  As with anything, it takes some investigation and some
common sense to come up with a real answer.

Joe Pluta

