From: Neo

rohit.nadig(a)gmail.com wrote:
> > Hey! Give us software guys a break will ya?
>
> No kidding. It's the same pie that feeds both of our respective
> communities (Hardware and Software). We (Hardware Designers and
> Manufacturers) want a bigger share of the pie. More complex hardware
> means more expensive chips.
>
> Sometime in the early 2000s (I am guessing 2002), Microsoft had
> bigger revenues and profits than Intel. Higher margins are
> understandable, but in the business world, anybody that is much bigger
> than you is a threat. You want everybody in your ecosystem to be
> smaller than you, but profitable and growing. So if you make an iPod,
> you don't want a product that docks the iPod into a car to be more
> profitable than the iPod itself.
>
> It's a subtle message that has pervaded many a corporation's strategies.
> Sun's mantra "The network is the computer" has long been a strategy
> to push their agenda (selling many expensive servers).
>
> I am guessing Intel is worrying about Google? At Google's pace, they
> will stream everything on the internet for FREE (funded by their ad
> revenues)! You may not need a fast computer anymore, just a broadband
> connection and a Gmail account. What will that do to Intel's growth if
> the future of computing is 3 or 4 big corporations with huge server
> rooms?
>
> At this point, the only way you can grow a microprocessor business is
> by adding functionality, simply because we have run out of "big" ideas
> to improve ST performance. Improvements in ST performance are
> incremental. Let's face it, over 60% of the money that people spend on
> semiconductors probably pays for a microprocessor of sorts, and hence my
> claim that the only way you can create value in new hardware designs is
> by adding functionality.
>
> > Seriously though, for a moment... There are actually "some algorithms" that
> > can be "implemented in a highly efficient manner, directly in *software* ".
>
> I am sure there are many algorithms that work great in software. But I
> am going to pitch the same thing to you. One of the most popular
> software applications is a web browser. Shouldn't you guys be focusing
> on the XML/XSLT/DOM standards more, and less on the video codecs (and
> leave that implementation to us hardware guys)?
>
> > IMHO, the hardware guys can move on to better things " --IF-- " the software
> > world actually invents something that performs so well on your existing
> > hardware that it basically renders a direct hardware-based implementation
> > meaningless...
> >
> > For instance, a distributed message passing algorithm (e.g., 100%
> > compatible with the Cell BE) that exhibits simply excellent scalability,
> > throughput and overall performance characteristics can be implemented, in
> > software, right now.
> >
> >
> > So, if a software implementation of an algorithm 'X' can exhibit virtually
> > zero-overhead... Why should the hardware guys worry about engraving the
> > algorithm 'X' in silicon? They can move on to better things... No?
>
> I agree that MPI would be a good feature to implement in hardware, but
> don't they have those Myrinet switches that kinda do the same thing
> (implement really fast crossbar network switching in hardware)?

That's all fine, but ultimately we will all be working only on
software. Even the hardware will be generated by a software automation
tool that will make much of hardware design unnecessary. Then we might
have something like a standard component description language which
will be read by a tool to deliver the hardware and/or software, and
another software tool which integrates them to give a complete
electronic product design, which will go into a fab and again be read
by software to generate the patterns....

From: Rob Warnock
<rohit.nadig(a)gmail.com> wrote:
+---------------
| It seems to me that many of the features that would benefit the average
| user of a computer could be better implemented in hardware than in
| software. Currently, most of these features (multimedia,
| networking/communication) are implemented as software applications.
+---------------

Since no one else has said this yet...

Google for "Ivan Sutherland reincarnation" (without the quotes)
and the top hit will likely be this:

http://www.cap-lore.com/Hardware/Wheel.html

Also see:

http://www.catb.org/~esr/jargon/html/W/wheel-of-reincarnation.html
http://foldoc.org/?reincarnation,+cycle+of

When you have fully absorbed the implications of Myer & Sutherland's 1968
observations on the "Wheel of Reincarnation", and if you *still* have
a (hopefully significantly-modified) question, come back and try again.


-Rob

-----
Rob Warnock <rpw3(a)rpw3.org>
627 26th Avenue <URL:http://rpw3.org/>
San Mateo, CA 94403 (650)572-2607

From: rohit.nadig
This is already happening, more or less. Most chips these days are
synthesized, placed, and routed using software for a good portion of
their die. These algorithms will never be as good as a human being, but
in most cases they are good enough. They are good enough mostly because
the sub-optimalities of the algorithms are masked by improvements in
process technology. Of course, the CAD guys have done their fair share
of improvements, but how often do you look at a jog in your layout and
say, "That's a stupid router"?

But then again, with the advent of wireless chips, the need for
old-fashioned analog and RF circuit designers is increasing.

Plus, if you want to design any kind of a serious chip, you are going
to need a few good cache and register file designers, because you will
probably have both design styles in your chip. I think we are a ways
away from being able to auto-generate the layout for a bit cell in a cache.


> That's all fine, but ultimately we will all be working only on
> software. Even the hardware will be generated by a software automation
> tool that will make much of hardware design unnecessary. Then we might
> have something like a standard component description language which
> will be read by a tool to deliver the hardware and/or software, and
> another software tool which integrates them to give a complete
> electronic product design, which will go into a fab and again be read
> by software to generate the patterns....

From: Del Cecchi
rohit.nadig(a)gmail.com wrote:
> This is already happening, more or less. Most chips these days are
> synthesized, placed, and routed using software for a good portion of
> their die. These algorithms will never be as good as a human being, but
> in most cases they are good enough. They are good enough mostly because
> the sub-optimalities of the algorithms are masked by improvements in
> process technology. Of course, the CAD guys have done their fair share
> of improvements, but how often do you look at a jog in your layout and
> say, "That's a stupid router"?

Not very often. And the synthesis and place/route algorithms do well
because they can deal with much larger problems and use optimizations
that a human would never be able to apply, because of data volume if
nothing else.

>
> But then again, with the advent of wireless chips, the need for
> old-fashioned analog and RF circuit designers is increasing.

Cool. I'm old and old fashioned and analog.
>
> Plus, if you want to design any kind of a serious chip, you are going
> to need a few good cache and register file designers, because you will
> probably have both design styles in your chip. I think we are a ways
> away from being able to auto-generate the layout for a bit cell in a cache.
>
But you auto-generate the array from the parts.
>
>
>>That's all fine, but ultimately we will all be working only on
>>software. Even the hardware will be generated by a software automation
>>tool that will make much of hardware design unnecessary. Then we might
>>have something like a standard component description language which
>>will be read by a tool to deliver the hardware and/or software, and
>>another software tool which integrates them to give a complete
>>electronic product design, which will go into a fab and again be read
>>by software to generate the patterns....
>
>
In some sense it is all software. It isn't like we are soldering wires
or cutting rubylith anymore.

--
Del Cecchi
"This post is my own and doesn�t necessarily represent IBM�s positions,
strategies or opinions.�
From: Chris Thomasson
<rohit.nadig(a)gmail.com> wrote in message
news:1166520654.843895.194590(a)j72g2000cwa.googlegroups.com...
>> Hey! Give us software guys a break will ya?
>
> No kidding. It's the same pie that feeds both of our respective
> communities (Hardware and Software). We (Hardware Designers and
> Manufacturers) want a bigger share of the pie. More complex hardware
> means more expensive chips.
[...]
> Sometime in the early 2000s (I am guessing 2002), Microsoft had
> bigger revenues and profits than Intel. Higher margins are
> understandable, but in the business world, anybody that is much bigger
> than you is a threat.
[...]
> You want everybody in your ecosystem to be
> smaller than you, but profitable and growing.

Got to properly maintain "your" food-chain!

> So if you make an iPod,
> you don't want a product that docks the iPod into a car to be more
> profitable than the iPod itself.

;^)


[...]





> I am guessing Intel is worrying about Google? At Google's pace, they
> will stream everything on the internet for FREE (funded by their ad
> revenues)! You may not need a fast computer anymore, just a broadband
> connection and a Gmail account. What will that do to Intel's growth if
> the future of computing is 3 or 4 big corporations with huge server
> rooms?

Well, IMHO, I think that a fairly exciting class of distributed applications
could be realized *if* every "home system" were based on a
"super-computer-on-a-chip"... I am thinking of high-end 3D-game virtual
worlds that make extensive use of networks made up of boatloads of
interconnected "super-computers". That's a lot of computing power. IMO, when
you have lots of power at every node in the network, well, things could
rapidly get interesting...

Although, this kind of stuff is probably only going to be useful to various
"hard-core gamer types"... Oh well!

;^(...




[...]

>> Seriously though, for a moment... There are actually "some algorithms"
>> that
>> can be "implemented in a highly efficient manner, directly in *software*
>> ".
>
> I am sure there are many algorithms that work great in software. But I
> am going to pitch the same thing to you. One of the most popular
> software applications is a web browser. Shouldn't you guys be focusing
> on the XML/XSLT/DOM standards more, and less on the video codecs (and
> leave that implementation to us hardware guys)?

I think the en/decoding of complex video codecs would be right at home
in hardware. I agree with you here.




>> IMHO, the hardware guys can move on to better things " --IF-- " the
>> software
>> world actually invents something that performs so well on your existing
>> hardware that it basically renders a direct hardware-based implementation
>> meaningless...
>>
>> For instance, a distributed message passing algorithm (e.g., 100%
>> compatible with the Cell BE) that exhibits simply excellent
>> scalability,
>> throughput and overall performance characteristics can be implemented, in
>> software, right now.
>>
[...]
>
> I agree that MPI would be a good feature to implement in hardware,

Well, FWIW, I can do "virtually zero-overhead" MPI in software, on existing
hardware, right now. I can give you the details of my exact algorithm and
implementation if you are interested... Therefore, IMHO, the hardware
doesn't really need to implement it, because:


>> if a software implementation of an algorithm 'X' can exhibit virtually
>> zero-overhead... Why should the hardware guys worry about engraving the
>> algorithm 'X' in silicon? They can move on to better things... No?

Why burn an algorithm in the hardware when software can implement it with
virtually zero-overhead?
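
To make that concrete, here is a minimal sketch of the kind of thing I
mean (this is NOT my exact algorithm; the names and the fixed size are
made up for illustration): a bounded single-producer/single-consumer
FIFO in which every push/pop is a couple of plain loads and stores plus
one release/acquire pair. No locks, no atomic read-modify-write
instructions, no kernel calls:

/* Sketch of a bounded SPSC message queue. Illustrative only. */
#include <stdatomic.h>
#include <stdio.h>

#define QSIZE 1024  /* capacity; power of two */

struct spsc_queue {
    void *slot[QSIZE];
    _Atomic unsigned head;   /* advanced only by the consumer */
    _Atomic unsigned tail;   /* advanced only by the producer */
};

static int spsc_push(struct spsc_queue *q, void *msg)
{
    unsigned t = atomic_load_explicit(&q->tail, memory_order_relaxed);
    unsigned h = atomic_load_explicit(&q->head, memory_order_acquire);
    if (t - h == QSIZE) return 0;            /* queue full */
    q->slot[t % QSIZE] = msg;
    /* release: publish the slot before advancing the tail */
    atomic_store_explicit(&q->tail, t + 1, memory_order_release);
    return 1;
}

static void *spsc_pop(struct spsc_queue *q)
{
    unsigned h = atomic_load_explicit(&q->head, memory_order_relaxed);
    unsigned t = atomic_load_explicit(&q->tail, memory_order_acquire);
    if (h == t) return NULL;                 /* queue empty */
    void *msg = q->slot[h % QSIZE];
    atomic_store_explicit(&q->head, h + 1, memory_order_release);
    return msg;
}

int main(void)
{
    static struct spsc_queue q;              /* zero-initialized */
    int payload = 42;
    spsc_push(&q, &payload);
    printf("%d\n", *(int *)spsc_pop(&q));
    return 0;
}

Wire one of these up per pair of communicating nodes and you have a
message-passing layer whose per-message cost is basically a cache-line
transfer; that is the overhead the hardware would be competing with.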




P.S...

To clarify my position a bit... The only time I think a hardware
implementation of an algorithm 'X' is practical is when a software
implementation 'simply cannot' be accomplished without introducing some
moderate "extra" overheads...