From: glen herrmannsfeldt on
Clay <clay(a)claysturner.com> wrote:
(snip)

> Many if not most processors use dynamic refresh, so you can't stop the
> clock lest they forget! The 6502 was one of the last static processors
> where you could actually single cycle the chip.

For S/360, the WAIT state didn't stop the clock; the CPU just
stopped fetching and executing instructions. For microprogrammed
machines, it is likely that the microprogram still runs.

The usual case would be waiting for an I/O interrupt that supplies
needed data.

In the Windows Task Manager, you will see the idle process
accumulating CPU time while doing nothing. I believe that VMware
attempts to detect such loops and reduce the emulated CPU time
wasted on them. For S/360, emulators just look at the WAIT bit
in the PSW.
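
Roughly, the emulator's CPU loop can test that bit and block on the
host instead of spinning. A minimal sketch in C (the wait bit is
bit 14 of the 64-bit PSW, numbering from the left as IBM does; the
helper names here are made up, not taken from any real emulator):

  #include <stdint.h>
  #include <stdbool.h>

  /* Wait-state bit: bit 14 of the PSW, bits numbered 0..63 from the left. */
  #define PSW_WAIT  (1ULL << (63 - 14))

  extern uint64_t psw;                       /* current guest PSW */
  extern bool io_interrupt_pending(void);    /* hypothetical helpers */
  extern void block_until_io_interrupt(void);
  extern void take_io_interrupt(void);       /* loads a new PSW */
  extern void fetch_and_execute_one(void);

  void cpu_loop(void)
  {
      for (;;) {
          if (psw & PSW_WAIT) {
              /* Guest is waiting: don't burn host CPU spinning. */
              if (!io_interrupt_pending())
                  block_until_io_interrupt();
              take_io_interrupt();           /* new PSW normally clears the wait bit */
              continue;
          }
          fetch_and_execute_one();
      }
  }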

-- glen
From: Tim Wescott on
glen herrmannsfeldt wrote:
> Jerry Avins <jya(a)ieee.org> wrote:
> (snip)
>
>> There are other ways to save. Last I heard, more power is expended on
>> idle computers in the US than is used in all of Belgium. Quick recovery
>> from stand-by and the ability to respond to external wake-up signals
>> would save a lot of energy.
>
> An interesting feature of the IBM S/360 (and successors) is
> that when there is nothing to do they enter a WAIT state
> and stop executing instructions. That may or may not reduce
> power consumption. (Probably not in the case of ECL logic.)
>
> As I understand it, one reason for that ability was so that
> leased machines could be charged based on the CPU time used.
> It is also very convenient for emulation (virtual machine or
> software) as there is no need to waste host time executing
> an idle loop.
>
> Most processors now do not have this ability. Even so, it
> should be possible to power down, for example, the floating
> point unit when no floating point is being done.
>
> -- glen

Look again. Many processors intended for use in embedded systems _do_
have this ability, and some give you quite a bit of flexibility in how
much of the chip you can shut down when it goes idle.

But I don't know if the Intel chips do this.
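
On a small ARM part, for instance, the idle loop usually comes down to
a WFI ("wait for interrupt") instruction, with the deeper modes set up
through vendor-specific clock and power registers. A rough sketch in C
(assuming a GCC-style compiler targeting a Cortex-M core; the deeper
modes aren't shown):

  /* Stop the core clock until the next interrupt arrives. */
  static inline void cpu_idle(void)
  {
      __asm__ volatile ("wfi");
  }

  int main(void)
  {
      for (;;) {
          /* ...handle whatever work the last interrupt queued up... */
          cpu_idle();    /* core sleeps here until an interrupt fires */
      }
  }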

--
Tim Wescott
Control system and signal processing consulting
www.wescottdesign.com
From: Tim Wescott on
Tim Wescott wrote:
> glen herrmannsfeldt wrote:
>> Jerry Avins <jya(a)ieee.org> wrote:
>> (snip)
>>
>>> There are other ways to save. Last I heard, more power is expended on
>>> idle computers in the US than is used in all of Belgium. Quick
>>> recovery from stand-by and the ability to respond to external wake-up
>>> signals would save a lot of energy.
>>
>> An interesting feature of the IBM S/360 (and successors) is
>> that when there is nothing to do they enter a WAIT state
>> and stop executing instructions. That may or may not reduce
>> power consumption. (Probably not in the case of ECL logic.)
>>
>> As I understand it, one reason for that ability was so that
>> leased machines could be charged based on the CPU time used.
>> It is also very convenient for emulation (virtual machine or
>> software) as there is no need to waste host time executing
>> an idle loop.
>> Most processors now do not have this ability. Even so, it
>> should be possible to power down, for example, the floating
>> point unit when no floating point is being done.
>>
>> -- glen
>
> Look again. Many processors intended for use in embedded systems _do_
> have this ability, and some give you quite a bit of flexibility in how
> much of the chip you can shut down when it goes idle.
>
> But I don't know if the Intel chips do this.
>
Note, too, that this would take care of the processor, but you'd have to
do something about the rest of the machine -- drives, video monitors (if
it has one), RAM, etc., would all need to go into an appropriate
low-power mode.

This would require _lots_ of detail engineering, and you can figure that
if it's remotely easy it's already being done in laptops, but there's
probably room for improvement even so.

--
Tim Wescott
Control system and signal processing consulting
www.wescottdesign.com
From: Eric Jacobsen on
On 2/24/2010 9:00 AM, Jerry Avins wrote:
> Eric Jacobsen wrote:
>> On 2/24/2010 7:31 AM, Jerry Avins wrote:
>>> http://www.berkeley.edu/news/media/releases/2010/02/23_nsf_award.shtml
>>>
>>> Jerry
>>
>> Sounds like a jobs bill to me, especially if they're focusing on the
>> transistors. I think they're a couple decades behind the curve to
>> expect the academic community to make a breakthrough around switching
>> logic design...just IMHO.
>
> There are other ways to save. Last I heard, more power is expended on
> idle computers in the US than is used in all of Belgium. Quick recovery
> from stand-by and the ability to respond to external wake-up signals
> would save a lot of energy.
>
> Jerry

I agree completely, but that article says:

"To reduce the energy requirement of electronics, researchers will focus
on the basic logic switch, the decision-maker in computer chips. The
logic switch function is primarily performed by transistors, which
demand about 1 volt to function well. There are more than 1 billion
transistors in multi-core microprocessor systems.

"The transistors in the microprocessor are what draw the most power in a
computer," said Yablonovitch. "When you feel the heat from under a
laptop, blame it on the transistors.""

So they're focused on transistor switch technology, where I'm sure they
have nearly zero visibility into the current state of the art (because
it's almost certainly proprietary competitive information), and which is
already the subject of far more than $27M in research by the companies
that DO make the stuff. It's not like reducing power consumption is a
new problem.

So something seems very amiss with this particular grant in my view.




--
Eric Jacobsen
Minister of Algorithms
Abineau Communications
http://www.abineau.com
From: Eric Jacobsen on
On 2/24/2010 11:45 AM, Tim Wescott wrote:
> Tim Wescott wrote:
>> glen herrmannsfeldt wrote:
>>> Jerry Avins <jya(a)ieee.org> wrote:
>>> (snip)
>>>
>>>> There are other ways to save. Last I heard, more power is expended
>>>> on idle computers in the US than is used in all of Belgium. Quick
>>>> recovery from stand-by and the ability to respond to external
>>>> wake-up signals would save a lot of energy.
>>>
>>> An interesting feature of the IBM S/360 (and successors) is
>>> that when there is nothing to do they enter a WAIT state
>>> and stop executing instructions. That may or may not reduce
>>> power consumption. (Probably not in the case of ECL logic.)
>>>
>>> As I understand it, one reason for that ability was so that
>>> leased machines could be charged based on the CPU time used.
>>> It is also very convenient for emulation (virtual machine or
>>> software) as there is no need to waste host time executing
>>> an idle loop. Most processors now do not have this ability. Even so, it
>>> should be possible to power down, for example, the floating
>>> point unit when no floating point is being done.
>>>
>>> -- glen
>>
>> Look again. Many processors intended for use in embedded systems _do_
>> have this ability, and some give you quite a bit of flexibility in how
>> much of the chip you can shut down when it goes idle.
>>
>> But I don't know if the Intel chips do this.
>>
> Note, too, that this would take care of the processor, but you'd have to
> do something about the rest of the machine -- drives, video monitors (if
> it has one), RAM, etc., would all need to go into an appropriate
> low-power mode.
>
> This would require _lots_ of detail engineering, and you can figure that
> if it's remotely easy it's already being done in laptops, but there's
> probably room for improvement even so.
>

Selective circuit clocking has been around for a long, long time in
highly integrated systems. Portions of the system that aren't being
used can selectively have their clock turned off (or greatly slowed
down if they lack static clock capability). In my experience this has
been standard practice for a lot of silicon development. Even in 802.x
implementations the idea was to turn off unused features (e.g.,
whichever FEC decoder is not being used) in order to keep the die
temperature down and save power, especially in laptops.
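
From the software side this usually shows up as a bank of clock-enable
bits, one per block. A sketch of the idea in C (the register address
and bit position are invented for illustration; a real part documents
them in its reference manual):

  #include <stdint.h>

  /* Hypothetical clock-gating register and bit assignment. */
  #define CLK_GATE_REG     (*(volatile uint32_t *)0x40021000u)
  #define CLK_FEC_DECODER  (1u << 5)

  void fec_decoder_clock(int enable)
  {
      if (enable)
          CLK_GATE_REG |=  CLK_FEC_DECODER;  /* ungate: block clocks and runs */
      else
          CLK_GATE_REG &= ~CLK_FEC_DECODER;  /* gate off: no toggling, so almost
                                                no dynamic power in that block */
  }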

There has been a lot of discussion in the last ten years or so (for
integrated silicon as well as things like FPGAs) about how clock
distribution accounts for a pretty substantial amount of power
consumption (which makes sense when you think about it). So being able
to selectively turn off or slow down the clock to circuits that may not
be needed all the time makes a lot of sense.

Laptop designers tend to be pretty careful about power consumption, and
almost anything designed to go into a laptop is likely to use a lot of
tricks (like selective clocking and other sleep modes) to help reduce
current draw.



--
Eric Jacobsen
Minister of Algorithms
Abineau Communications
http://www.abineau.com