From: Del Cecchi on

"Bernd Paysan" <bernd.paysan(a)gmx.de> wrote in message
news:nrbcr6-rs6.ln1(a)vimes.paysan.nom...
> Del Cecchi wrote:
>> You could use SOI, no bulk. :-)
>
> There still is a bulk; there is just no substrate, so the bulk is
> left floating. The diodes I mentioned are still there, supplying the
> bulk when forward biased (this is the well-known SOI effect of
> variable gate thresholds, caused by the bulk charging and
> discharging below the diode's threshold, unless you add a real bulk
> contact as on stock silicon wafers).

You could go fully depleted, although IBM didn't, last I heard.
>
>> I don't get the point of the AC. Light bulbs and space heaters are
>> AC powered and still dissipate power. What did I miss?
>
> I can't tell you. Andy apparently doesn't care much about the
> physics behind integrated circuits; his knowledge stops at the gate
> level. This is completely OK for digital design, but I wonder why he
> makes that sort of suggestion ;-).
>
> One interesting property of quantum mechanics is that for
> irreversible logic, there's a minimum amount of energy necessary to
> make it happen. Reversible logic does not have this drawback.
> Therefore, people investigate reversible logic, even though the
> actual components needed to get that benefit are not in sight (not
> even carbon nanotube switches have these properties, even though
> they are much closer to the physical limits for irreversible logic).
> Many people also forget that quantum mechanics does not properly
> take changes in the system into account, which means that your
> reversible logic only works at the predicted low power when the
> inputs are no longer changing - and that is just the uninteresting
> case (the coherent one - changes in the system lead to decoherence,
> and thereby to classical physics).
>
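
For reference, that minimum is the Landauer bound: irreversibly
erasing one bit dissipates at least

    E_{\min} = k_B T \ln 2 \approx 2.9\times10^{-21}\,\mathrm{J}
             \approx 0.018\,\mathrm{eV} \qquad (T = 300\,\mathrm{K})

Reversible logic escapes this bound because, in principle, it never
erases information.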

I am so disappointed. I thought QM explained everything. :-)
The theory of circuits that require unobtainium to build is certainly
widespread.
> --
> Bernd Paysan
> "If you want it done right, you have to do it yourself"
> http://www.jwdt.com/~paysan/


From: Robert Myers on
On Oct 25, 6:05 pm, Bernd Paysan <bernd.pay...(a)gmx.de> wrote:
> Robert Myers wrote:
> > Coherence theory is a *huge* improvement over talking about things
> > like "the collapse of the wavefunction," but I don't find the idea of
> > drawing a box and saying things are quantum mechanical within it and
> > "classical" outside it to be particularly helpful.
>
> It obviously is a very simplified view.  Yes, there is no obvious
> and "hard" boundary, e.g. between coherent and incoherent light.
> It's just an illustration: the "classical physics" we are used to,
> as it describes human-scale effects, deals with incoherent light and
> matter.  QM deals with coherent light and matter.  It's not so easy
> to say "just put incoherent light into the QM equations, and off you
> go with your classical physics".  QM tends to turn incoherent parts
> coherent, even when applied to larger scales (from Bose-Einstein
> condensates to lasers); classical physics does the reverse.

That's why tropical storms and hurricanes are quantum mechanical?
They are chaotic in the small and highly organized in the large.
Collective modes arise in many different circumstances in physics.

The key step in most analyses has nothing to do with quantum
mechanics. You write down the governing equations, multiply by the
complex conjugate of a field variable, and take an appropriate
ensemble average. That's how it goes through in turbulence and
statistical optics, and, since the parabolic wave equation is exactly
Schrödinger's equation, the difference between doing it for a laser
beam and for non-relativistic quantum mechanics is just a matter of
which natural-language labels you attach to the field variables and
potentials.
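
To spell that identity out (free propagation, no potential; a sketch
in LaTeX notation):

    2ik\,\frac{\partial A}{\partial z} + \nabla_\perp^2 A = 0
    \qquad\text{vs.}\qquad
    i\hbar\,\frac{\partial \psi}{\partial t} = -\frac{\hbar^2}{2m}\nabla^2 \psi

Identify z with t and 1/k with \hbar/m, and the two are the same
equation; only the labels on A and \psi differ.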

There are subtleties of statistical physics that are completely beyond
me that have to do with things like ergodicity and ensemble averages,
but the formal manipulations themselves should not be beyond a first-
year graduate student.

It may be that none of this has to do with the practical likelihood of
building a quantum computer or usable reversible logic. John
Bardeen's idea for the first transistor wasn't the basis of a reliable
device, and Bardeen and his followers thought that the difficulty was
fundamental, but William Shockley showed otherwise. My major concern
is to be careful about absolute declarations based on a limited
understanding of quantum mechanics. At this point, I don't know
anyone whose understanding is not limited.

Coherence theory is one powerful tool for making careful statements
where once there were vague and sweeping generalizations.

Robert.

From: Stephen Sprunk on
Robert Myers wrote:
> I don't actually know of any circumstances, though, where any run-
> time software approach yields significant benefits, other than for
> languages like Java that *can't* run without run-time support. If I
> knew of such an instance, that would be a good place to start thinking
> about a theory of binary translators, but I don't know of any such
> instances.

IIRC, HP's Dynamo, a run-time PA-RISC to PA-RISC binary translator, was
able to achieve a significant performance gain over just running the
binary natively. They also ported it to x86 under the name DynamoRIO,
though I can't recall the performance results.
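
The core trick, as I understand it, is to interpret until a
backward-branch target crosses a "hot" threshold, then compile that
trace into a fragment cache and run native code from there on. A toy
sketch in C (the bytecode, threshold, and bookkeeping are my own
invention for illustration, not HP's):

/* Minimal sketch of Dynamo-style hot-trace detection.  Interpret a
 * toy bytecode; when a backward-branch target has executed
 * HOT_THRESHOLD times, promote it (a real system would emit
 * optimized native code here). */
#include <stdio.h>

enum { OP_INC, OP_DEC_JNZ, OP_HALT };   /* toy bytecode */

#define HOT_THRESHOLD 50

int main(void)
{
    /* Toy program: r1 = 0; for (r0 = 1000; r0 != 0; r0--) r1++; */
    int prog[] = { OP_INC, OP_DEC_JNZ, 0, OP_HALT };
    int pc = 0, r0 = 1000, r1 = 0;
    int heat[4] = { 0 };        /* per-target execution counts */
    int compiled[4] = { 0 };    /* 1 = fragment exists for target */

    while (prog[pc] != OP_HALT) {
        switch (prog[pc]) {
        case OP_INC:
            r1++; pc++;
            break;
        case OP_DEC_JNZ: {
            int target = prog[pc + 1];
            if (--r0 != 0) {
                if (target < pc && !compiled[target]
                        && ++heat[target] == HOT_THRESHOLD) {
                    compiled[target] = 1;
                    printf("trace at %d is hot: compile fragment\n",
                           target);
                }
                pc = target;    /* a fragment would execute here */
            } else {
                pc += 2;        /* skip opcode and branch operand */
            }
            break;
        }
        }
    }
    printf("r1 = %d\n", r1);
    return 0;
}

The surprising part of the Dynamo result is that the optimizations on
the hot fragments more than paid back the interpretation overhead.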

DEC's binary translator (FX!32) made the Alpha, for a few months at
least, the fastest "x86" machine in existence.

MS's .NET stuff seems to run pretty fast, especially when you consider
all the safety checks it always performs (which are usually left out
of native C/C++ binaries), the garbage collection, etc. Some tests
show that it's actually _faster_ than native code, once you subtract
the start-up time hit.

Apple seems to be committed to their LLVM stuff, which is already
reaping huge graphics performance gains. Rosetta was pretty darn fast,
enough to be useful and transparent, though not quite as fast as native
code.

Run-time translation seems to have a lot of success stories; the
miserable failures of Transmeta and Intel may just be well-publicized
anomalies.

S

--
Stephen Sprunk "God does not play dice." --Albert Einstein
CCIE #3723 "God is an inveterate gambler, and He throws the
K5SSS dice at every possible opportunity." --Stephen Hawking
From: Terje Mathisen on
nmm1(a)cam.ac.uk wrote:
> In article<8v2dneGMDK_yJ3nXnZ2dnUVZ8oOdnZ2d(a)lyse.net>,
> Terje Mathisen<Terje.Mathisen(a)tmsw.no> wrote:
>>
>> An in-order cpu requires the asm programmer and/or compiler writer to
>> figure out statically how long each of those chains will be, and then
>> unroll the code sufficiently to handle it. This btw. requires a _lot_
>> more architectural registers, and is quite brittle when faced with a new
>> cpu generation with slightly different latency numbers.
>
> Well, yes, but it needs saying that that's not a REAL in-order CPU!
> One of those finishes each instruction before starting the next.
> Those are absolutely trivial to implement and program, and pretty
> trivial to tune. They just tend to be a bit slow :-)
>
> However, if one were to take the highly-threaded approach, as the
> Tera MTA did, that's probably the right way to do it. At least to
> start with.
>
>
> Regards,
> Nick Maclaren.
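
To make the brittleness concrete, here is the classic toy example (my
own code, not from any particular compiler): a reduction unrolled
into several independent accumulators so the chains can cover the FP
add latency. The right number of accumulators is roughly latency
times issue width, and that is exactly the number which changes
between cpu generations.

/* Summing an array: one accumulator serializes on FP-add latency;
 * four independent chains let an in-order cpu keep four adds in
 * flight.  "Four" is tuned to one particular latency -- the
 * brittleness referred to above.  (The two versions may also round
 * differently, since FP addition is not associative.) */
#include <stdio.h>

double sum_naive(const double *a, int n)
{
    double s = 0.0;
    for (int i = 0; i < n; i++)
        s += a[i];                      /* serial dependency chain */
    return s;
}

double sum_unrolled(const double *a, int n)
{
    double s0 = 0.0, s1 = 0.0, s2 = 0.0, s3 = 0.0;
    int i;
    for (i = 0; i + 4 <= n; i += 4) {   /* four independent chains */
        s0 += a[i];
        s1 += a[i + 1];
        s2 += a[i + 2];
        s3 += a[i + 3];
    }
    for (; i < n; i++)                  /* cleanup when n % 4 != 0 */
        s0 += a[i];
    return (s0 + s1) + (s2 + s3);
}

int main(void)
{
    double a[1000];
    for (int i = 0; i < 1000; i++)
        a[i] = 1.0 / (i + 1);
    printf("%.17g\n%.17g\n", sum_naive(a, 1000), sum_unrolled(a, 1000));
    return 0;
}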


--
- <Terje.Mathisen at tmsw.no>
"almost all programming can be viewed as an exercise in caching"
From: Robert Myers on
On Oct 26, 12:28 am, Stephen Sprunk <step...(a)sprunk.org> wrote:

> IIRC, HP's Dynamo, a run-time PA-RISC to PA-RISC binary translator, was
> able to achieve a significant performance gain over just running the
> binary natively.  They also ported it to x86 under the name DynamoRIO,
> though I can't recall the performance results.
>
> DEC's binary translator (FX!32) made the Alpha, for a few months at
> least, the fastest "x86" machine in existence.
>
> MS's .NET stuff seems to run pretty fast, especially when you consider
> all the safety checks it always performs (which are usually left out
> of native C/C++ binaries), the garbage collection, etc.  Some tests
> show that it's actually _faster_ than native code, once you subtract
> the start-up time hit.
>
> Apple seems to be committed to their LLVM stuff, which is already
> reaping huge graphics performance gains.  Rosetta was pretty darn fast,
> enough to be useful and transparent, though not quite as fast as native
> code.
>
> Run-time translation seems to have a lot of success stories; the
> miserable failures of Transmeta and Intel may just be well-publicized
> anomalies.

That's a helpful summary. I'd forgotten about FX!32! I didn't know
about Dynamo. I did know that DynamoRIO yielded results that were not
especially promising (or maybe just not good enough to salvage
anything that Intel needed salvaged at the time), and I actually
played with it. I hadn't connected the dots on LLVM.

I got Linus-ized while discussing the subject here. That and the well-
publicized failures you mention probably created the negative summary
in my mind.

The idea still seems promising. Run-time optimization always seemed
like a promising way to use more threads, whether on a separate core
or not. I always assumed that a helper thread using available
pipeline slots was a target of hyperthreading for Intel.

Robert.