From: Tom Knight on
"Andy \"Krazy\" Glew" <ag-news(a)patten-glew.net> writes:
> As for the dynamic power, similar considerations: the total amount of
> charge switched per cycle must stay below the square root curve. More
> precisely, the total amount of charge*frequency*voltage must stay
> below the square root curve.

Leakage is exponential in voltage around threshold, but the exponent
is highly temperature dependent. Operating at cryogenic temperatures
lowers the leakage almost to vanishing with current voltages.
Remaining problems center on threshold control, which then becomes
dominant, but we can again be sloppy and work with full-swing CMOS
technology at very low Vdd. Superconducting interconnect raises the
velocity of chip crossings from 1-3% of the speed of light to 60% of
the speed of light, and makes good inductors possible. Resonant power
recovery (at least of the clocks, and probably of much of the logic
with reversible techniques) becomes easy. We will do this, but
probably not in your phone.
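
To put rough numbers on that temperature dependence, here is a
back-of-the-envelope sketch of the standard subthreshold-leakage and
alpha*C*V^2*f dynamic-power models. The device constants (I0, Vth, n,
C, alpha) are illustrative placeholders, not measured values for any
real process:

import math

K_B = 1.380649e-23     # Boltzmann constant, J/K
Q_E = 1.602176634e-19  # electron charge, C

def thermal_voltage(temp_k):
    """kT/q in volts: ~26 mV at 300 K, ~6.7 mV at 77 K."""
    return K_B * temp_k / Q_E

def off_state_leakage(temp_k, vth=0.3, i0=1e-6, n=1.5):
    """Subthreshold leakage at Vgs = 0: I0 * exp(-Vth / (n*kT/q))."""
    return i0 * math.exp(-vth / (n * thermal_voltage(temp_k)))

def dynamic_power(c_switched, vdd, freq, alpha=0.1):
    """Classic switching power: alpha * C * Vdd^2 * f."""
    return alpha * c_switched * vdd**2 * freq

for temp in (300.0, 77.0, 4.2):
    print(f"T={temp:5.1f} K  I_off ~ {off_state_leakage(temp):.2e} A")

# Same switched capacitance and frequency, lower Vdd: dynamic power
# falls with the square of the swing.
print(dynamic_power(1e-9, 1.0, 2e9), dynamic_power(1e-9, 0.4, 2e9))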


From: jacko on
On 21 Oct, 08:08, Robert Myers <rbmyers...(a)gmail.com> wrote:
> On Oct 21, 2:21 am, "Andy \"Krazy\" Glew" <ag-n...(a)patten-glew.net>
>
>
>
> > All of the modern OOO machines are dynamic dataflow machines in their
> > hearts. Albeit micro-dataflow: they take a sequential stream of
> > instructions, convert it into dataflow by register renaming and what
> > amounts to memory dependency prediction and verification (even if, in
> > the oldest machine, the prediction was "always depends on earlier stores
> > whose address is unknown"; now, of course, better predictors are available).
>
> > I look forward to slowly, incrementally, increasing the scope of the
> > dataflow in OOO machines.
> > * Probably the next step is to make the window bigger, by
> > multilevel techniques.
> > * After that, get multiple sequencers from the same single threaded
> > program feeding in.
> > * After that, or at the same time, reduce the stupid recomputation
> > of the dataflow graph that we are constantly redoing.
>
> > My vision is of static dataflow nodes being instantiated several times
> > as dynamic dataflow.
>
> I think I saw things headed the same way, until the ugly issue of
> power/performance became paramount. Now, there are no more
> transistors to throw at anything. We're not out of ideas or
> transistors; we're out of watts.
>
> Robert.
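
Aside on the quoted point about register renaming turning a sequential
stream into micro-dataflow: a minimal sketch of just the renaming step,
with a made-up three-operand toy ISA and no handling of memory
dependences at all.

from itertools import count

def rename(program):
    """program: list of (dest, src1, src2) architectural-register
    tuples.  Returns (tag, depends_on) pairs: a micro-dataflow graph."""
    fresh = count()        # physical tag allocator
    latest = {}            # arch register -> tag of its latest producer
    graph = []
    for dest, *srcs in program:
        deps = [latest[s] for s in srcs if s in latest]  # true (RAW) deps
        tag = next(fresh)
        latest[dest] = tag  # WAR/WAW hazards vanish: dest gets a fresh tag
        graph.append((tag, deps))
    return graph

# r1=r2+r3; r4=r1*r5; r1=r6-r7; r8=r1+r4
prog = [("r1", "r2", "r3"), ("r4", "r1", "r5"),
        ("r1", "r6", "r7"), ("r8", "r1", "r4")]
for tag, deps in rename(prog):
    print(f"uop {tag} waits on {deps}")
# uop 2 does not wait on uop 0: the anti- and output-dependences on r1
# are renamed away, so uops 0 and 2 can issue in parallel once their
# inputs arrive.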

This is where the disco-fet will help a little, since the gate
electrons are stored within a parallel channel and do not have to be
pulled in and out. The lower Miller capacitance reduces the dynamic
CMOS currents and increases the switching speed, reducing the crowbar
effect during the switchover. The RC stripline charging is still an
issue. Maybe the routing should be active inverter chains?
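
For what it's worth, the generic Miller-effect arithmetic behind that
claim (nothing disco-fet specific; the capacitance, gain, and drive
values are placeholders, not data for any real device):

def effective_input_cap(c_gs, c_gd, voltage_gain):
    """Miller approximation: C_in = C_gs + C_gd * (1 + |Av|)."""
    return c_gs + c_gd * (1.0 + abs(voltage_gain))

def switching_figures(c_in, r_drive, vdd, freq):
    delay = 0.69 * r_drive * c_in  # RC step-response delay estimate
    i_dyn = c_in * vdd * freq      # average current to move Q = C*V per cycle
    return delay, i_dyn

for label, c_gd in (("conventional gate", 2e-15),
                    ("reduced-Miller gate", 0.2e-15)):
    c_in = effective_input_cap(c_gs=5e-15, c_gd=c_gd, voltage_gain=10)
    delay, i_dyn = switching_figures(c_in, r_drive=5e3, vdd=1.0, freq=2e9)
    print(f"{label}: C_in={c_in*1e15:.1f} fF, "
          f"delay~{delay*1e12:.0f} ps, I_dyn~{i_dyn*1e6:.0f} uA")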

cheers jacko
From: Stephen Fuld on
Andy "Krazy" Glew wrote:
> Andrew Reilly wrote:
>> On Mon, 19 Oct 2009 20:40:39 -0700, Andy \"Krazy\" Glew wrote:
>>
>>> Andrew Reilly wrote:
>>>> Isn't it the case, though, that for most of that "popular software"
>>>> speed is a non-issue?
>>>
>>> I've been manipulating large Excel spreadsheets.
>>
>> well, there's your problem ;-)
>>
>> I've never got the hang of spreadsheets, and never found a problem
>> that didn't look like more of a job for awk or matlab, or even a real
>> program of some sort. I guess that there must be some (or at least
>> users who think differently than I): it's certainly popular.
>>
>>> Minutes-long recalcs.
> ...
>>> I'm reasonably sure it's computation, and not disk.
> ...
>>> Algorithms trump hardware, nearly every time.
>
>
> Possibly interesting proto-thought:
>
> I'm using Excel because the team I'm working with uses Excel. And
> because there are some useful features, usually user-interfacey, that
> are hard to get access to via other means. But mainly because of
> putative "ease of use".
>
> Observation: there are situations where the "Excel way" leads to
> sub-optimal algorithms. O(N^2) instead of O(N), in two examples I have
> found.
>
> To avoid such problems one must leave Excel and resort to VBA or some
> other programming language. I'm willing to do that, but others may not be.
>
> I wonder if other "ease of use" facilities similarly lead to situations
> of suboptimal algorithms.
>
> Because of the sub-optimal algorithms that Excel encourages, people are
> encouraged NOT to increase problem size, since O(N^2) bites them more
> quickly
>
> So we get a subculture of Excel users, forced to deal with smaller
> problem sizes because of scale-out issues. Versus a subculture of
> programmers, that can deal with larger problems, but which doesn't have
> the "ease of use" of Excel for smaller problems.
>
> Who has the competitive advantage?
>
> Probably the hybrid subculture that has a small number of hackers
> provide outside-of-Excel scalability to the Excel users.
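
One illustrative way the "Excel way" goes quadratic (a guess at the
kind of thing meant above, not necessarily either of the two examples
Andy found): a running total written as =SUM($A$1:An) in every row
re-adds the whole prefix, while carrying the previous row's total
forward is linear. In Python terms:

values = list(range(10_000))

# Fill-down =SUM($A$1:An): O(N^2) additions overall.
quadratic = [sum(values[:n + 1]) for n in range(len(values))]

# Fill-down =B(n-1)+An: O(N) additions overall.
linear, total = [], 0
for v in values:
    total += v
    linear.append(total)

assert quadratic == linear  # same answers, very different cost as N grows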

Does the way your spreadsheet works force serial calculations? That
is, are almost all the cells that are to be recalculated dependent
upon the previous one, thus forcing a serial chain of calculations? Or
are there "multiple chains of dependent cells" that are only serial
due to the way Excel itself is programmed? If the latter, one could
enhance Open Office to use multiple threads for the recalcs, which
would take advantage of multiple cores for something useful.
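
A minimal sketch of that idea (toy cell graph and scheduling invented
here; this is not how Excel or OpenOffice actually recalculate): walk
the dependency graph in topological waves and hand every cell whose
inputs are ready to a thread pool, so independent chains proceed
concurrently.

from concurrent.futures import ThreadPoolExecutor

# cell -> (cells it depends on, function of their values)
sheet = {
    "A1": ([], lambda: 2),
    "A2": (["A1"], lambda a1: a1 * 10),
    "A3": (["A2"], lambda a2: a2 + 1),   # chain 1: A1 -> A2 -> A3
    "B1": ([], lambda: 7),
    "B2": (["B1"], lambda b1: b1 ** 2),  # chain 2: B1 -> B2 (independent)
    "C1": (["A3", "B2"], lambda a3, b2: a3 + b2),
}

def recalc(sheet, workers=4):
    remaining, values = dict(sheet), {}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        while remaining:
            ready = [c for c, (deps, _) in remaining.items()
                     if all(d in values for d in deps)]
            futures = {c: pool.submit(remaining[c][1],
                                      *(values[d] for d in remaining[c][0]))
                       for c in ready}   # independent chains run side by side
            for c, fut in futures.items():
                values[c] = fut.result()
                del remaining[c]
    return values

print(recalc(sheet))
# (In CPython the GIL limits true parallelism for pure-Python lambdas;
# the point here is only the scheduling structure.)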


--
- Stephen Fuld
(e-mail address disguised to prevent spam)
From: Thomas Womack on
In article <4ADF0FD4.6070104(a)patten-glew.net>,
Andy \"Krazy\" Glew <ag-news(a)patten-glew.net> wrote:

>Who has the competitive advantage?
>
>Probably the hybrid subculture that has a small number of hackers
>provide outside-of-Excel scalability to the Excel users.

An exceptionally smart Haskellite of my acquaintance is extremely
well-paid by Credit Suisse to write clever VBA (or, indeed, clever
Haskell that writes VBA) that talks both to large compute farms and to
Excel, for precisely this reason.

Obviously if I'm taking an average of a billion simulation runs, I
won't do it in Excel; but if I'm recording how the averages behave
when I change between six conditions, Excel feels like the thing to
use.

(actually it's gnumeric most of the time, but Excel has a much better
'attempt to minimise cell X by changing cells Y, Z and T' interface;
gnumeric's is a linear-programming tool and Excel's seems to be
basically the simplex method)

gnuplot is ludicrously bad at fitting non-linear models (even
something as simple as a*x**c) to data, to the point that I wonder
whether it's actually buggy - taking enough logs to make the fitting
linear sometimes helps.
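
The log trick spelled out (numpy standing in for gnuplot's fit, and
the data below is synthetic): y = a*x**c is linear in log-log space,
log y = log a + c*log x, so an ordinary least-squares line recovers c
and log a, and also makes a decent starting guess for a real nonlinear
fit.

import numpy as np

rng = np.random.default_rng(0)
a_true, c_true = 3.0, 1.7
x = np.linspace(1.0, 50.0, 200)
y = a_true * x**c_true * rng.lognormal(sigma=0.05, size=x.size)  # noisy power law

c_fit, log_a_fit = np.polyfit(np.log(x), np.log(y), deg=1)
print(f"a ~ {np.exp(log_a_fit):.3f}, c ~ {c_fit:.3f}")  # close to 3.0, 1.7

# Caveat: least squares on log y weights *relative* error, so for
# strongly heteroscedastic data treat this as the starting point for a
# genuine nonlinear fit rather than the final answer.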

Tom

From: Paul Wallich on
Stephen Fuld wrote:
> Does the way your spreadsheet works force serial calculations? That
> is, are almost all the cells that are to be recalculated dependent
> upon the previous one, thus forcing a serial chain of calculations? Or
> are there "multiple chains of dependent cells" that are only serial
> due to the way Excel itself is programmed? If the latter, one could
> enhance Open Office to use multiple threads for the recalcs, which
> would take advantage of multiple cores for something useful.

I would bet that a huge chunk of the time isn't in doing the actual
calculations but in verifying that the calculations can be done.
Spreadsheets are pretty much the ultimate in mutably-typed interactive
code, and there's very little to prevent a recalculation from requiring
a near-universal reparse.