From: Robin Holmes on
No matter how hard I try to explain this, no one gets it.

Let's say you have an object RootObject, and on that object is a function
that, when called, will in turn call many other virtual functions and
use many fields of allocated objects.

class RootObject {
  object A,B,C,D,E,F;
  function Root() {
    A.someFunction();
    B.someFunction();
    C.someFunction();
  }
}

All of the calls to the virtual functions on A, B and C can be inlined
when JIT-compiling Root, because the actual instance of RootObject is
passed to the JIT.

The JIT in Java and .NET works with types. The ROD passes in live
object instances, and it provides a mechanism to deal with changes to
those objects should other code, say, set a new value of a field.
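
Roughly what I mean, sketched in plain Java (the concrete classes
ConcreteA/B/C and the guard are invented for illustration; this is not my
actual implementation):

interface Op { void someFunction(); }

class ConcreteA implements Op { public void someFunction() { /* work */ } }
class ConcreteB implements Op { public void someFunction() { /* work */ } }
class ConcreteC implements Op { public void someFunction() { /* work */ } }

class RootObject {
  Op A, B, C;

  RootObject(Op a, Op b, Op c) { A = a; B = b; C = c; }

  // As in the example above: three virtual dispatches.
  void root() {
    A.someFunction();
    B.someFunction();
    C.someFunction();
  }

  // Roughly what an instance-specializing compiler could emit for *this*
  // RootObject, given the concrete objects its fields hold right now.
  // The guard is the "mechanism to deal with changes": if other code has
  // reassigned a field, fall back to the general dispatching version.
  void rootSpecialized(ConcreteA a, ConcreteB b, ConcreteC c) {
    if (A != a || B != b || C != c) { root(); return; }
    a.someFunction();   // direct call, trivially inlinable
    b.someFunction();
    c.someFunction();
  }
}

The point is that once the compiler holds the live instances, the receiver
of every call is a known object, not merely a known type.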
From: johnzabroski on
On Feb 11, 11:15 am, Robin Holmes <rangsy...(a)gmail.com> wrote:
> No matter how hard I try to explain this, no one gets it.
>
> Let's say you have an object RootObject, and on that object is a function
> that, when called, will in turn call many other virtual functions and
> use many fields of allocated objects.
>
> class RootObject {
>   object A,B,C,D,E,F;
>   function Root() {
>     A.someFunction();
>     B.someFunction();
>     C.someFunction();
>   }
>
> }
>
> All of the calls to the virtual functions on A, B and C can be inlined
> when JIT-compiling Root, because the actual instance of RootObject is
> passed to the JIT.
>
> The JIT in Java and .NET works with types. The ROD passes in live
> object instances, and it provides a mechanism to deal with changes to
> those objects should other code, say, set a new value of a field.

I think I understand, and that is a cool performance hack.

Have you seen E-Bunny? IIRC, it has a method call acceleration
technique similar to what I believe you are describing. See:
http://www.amazon.com/Dynamic-Compiler-Embedded-Virtual-Machine/dp/3639095065/

It also reminds me of the optimization techniques done in Synthesis OS
back in the '80s. Except instead of a kernel with its own assembler
to dynamically build machine code, you are thinking of objects as a
sort of way to fatten the kernelspace. See:
http://z-bo.tumblr.com/post/366024997/the-synthesis-kernel and
http://z-bo.tumblr.com/post/366019361/threads-and-input-output-in-the-synthesis-kernel
From: S Perryman on
Robin Holmes wrote:

> No matter how hard I try to explain this, no one gets it.

1. Physician, heal thyself ... ??

Produce a better explanation/example.


2. You have been given some interesting performance bounds.

Ada, with the "X" / "X'Class" constructs, lets you statically prevent or
allow runtime dispatch of any virtual op that X may support. Where dispatch
is prevented, there is no dispatch overhead.

'Tis also possible, by runtime implementation choice for an object, to
guarantee that runtime dispatch incurs only the additional cost of one
memory indirection.

None of the above requires runtime activity (analysis, translation, etc.).
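
A rough analogue in Java (not Ada: the names Shape and Circle are invented,
and Java does not give you Ada's declaration-site control, but the binding
distinction is the same):

interface Shape { double area(); }

final class Circle implements Shape {
  final double r;
  Circle(double r) { this.r = r; }
  public double area() { return Math.PI * r * r; }
}

class Binding {
  // Analogue of "X": the concrete, final type is known statically, so the
  // call can be bound (and inlined) with no dispatch at all.
  static double areaOfCircle(Circle c) { return c.area(); }

  // Analogue of "X'Class": only the interface is known, so the call is a
  // virtual dispatch through the receiver's method table.
  static double areaOfAny(Shape s) { return s.area(); }
}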


So, if we have an op OP on a type T1, and an inheritance chain
T1 <|-- T2 <|-- T3

And some code :

doSomething(T1 instance) { instance.OP(); }


Explain your scheme for a system where the following occurs :

doSomething(new T1);
doSomething(new T2);
doSomething(new T3);
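
In Java terms (names invented, just to make the call site concrete), the
challenge looks like this:

class T1 { void op() { System.out.println("T1.op"); } }
class T2 extends T1 { @Override void op() { System.out.println("T2.op"); } }
class T3 extends T2 { @Override void op() { System.out.println("T3.op"); } }

class Challenge {
  // One call site that sees three different receiver types over its lifetime.
  static void doSomething(T1 instance) { instance.op(); }

  public static void main(String[] args) {
    doSomething(new T1());
    doSomething(new T2());
    doSomething(new T3());
  }
}

A scheme that specialises on one live instance has to say what happens at a
call site like this, which is megamorphic.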


Regards,
Steven Perryman