From: "Andy "Krazy" Glew" on
nik Simpson wrote:
> On 3/2/2010 1:17 AM, Terje Mathisen wrote:
>> nik Simpson wrote:
>>> On 3/1/2010 12:12 PM, Terje Mathisen wrote:
>>>> Even with a very non-bleeding edge gpu, said gpu is far larger than any
>>>> of those x86 cores which many people here claim to be too complicated.
>>>>
>>>> Terje
>>>>
>>> Isn't the GPU core still on a 45nm process, vs 32nm for the CPU and
>>> cache?
>>>
>> That would _really_ amaze me, if they employed two different processes
>> on the same die!
>>
>> Wouldn't that have implications for a lot of other stuff as well, like
>> required voltage levels?
>>
>> Have you seen any kind of documentation for this?
>>
>> Terje
>>
> That's certainly the case for the Clarksdale/Westmere parts with
> integrated graphics...
>
> http://www.hardocp.com/article/2010/01/03/intel_westmere_32nm_clarkdale_core_i5661_review/

Package, yes.

Die, no.
From: Tim McCaffrey on
In article
<667e9f6b-6170-4d0e-8a68-06cdfc897608(a)g11g2000yqe.googlegroups.com>,
rbmyersusa(a)gmail.com says...
>
>On Mar 2, 7:18 pm, timcaff...(a)aol.com (Tim McCaffrey) wrote:
>> In article
>> <906e8749-bc47-4d2d-9316-3a0d20a7c...(a)b7g2000yqd.googlegroups.com>,
>> rbmyers...(a)gmail.com says...
>>
>
>> >I know, you thought of it all in 1935.
>>
>> Actually, 1972(ish), and it was CDC/Seymore Cray (well, I assume it was
>> him). It was even called the same thing: ECS.
>>
>> ECS was actually available before that (CDC 6000 series), but only the
>> main CPU could talk to it, and I/O was done to main memory. With the
>> 7600 the PPs could also write to the ECS (slowly), and the CPU could
>> read from it. There were Fortran extensions to place *big* arrays in
>> ECS and the compiler took care of paging in and out the part you were
>> working on. Michigan State used it to swap out processes (it was much
>> faster than disk).
>>
>> The ECS used a 600 bit interface to the CPU, and had an extremely long
>> access time (10us IIRC), so bandwidth was about the same as main
>> memory, but latency sucked.
>>
>> ECS was also how multiple CPUs talked to each other; it had 4 ports for
>> 4 different systems, so they could coordinate disk/file/peripheral
>> sharing.
>>
>> MSU used the ECS to connect a 6400 and a 6500 together, and later the
>> 6500 with a Cyber 170/750. The slower machine was used primarily for
>> systems development work.
>>
>I don't know if it's related, but CDC's "large core memory" (as
>opposed to "small core memory") was my very unwelcome introduction to
>computer hardware.
>
>From that experience, I acquired several permanent prejudices:
>
>1. For scientific/engineering applications, "programmers" should
>either be limited to sorting and labeling output, or (preferably) they
>should be shipped to the antarctic, where they could be sent, one at a
>time, to check the temperature gauge a quarter mile from the main
>camp.
>
>2. No sane computational physicist should imagine that even a thorough
>knowledge of FORTRAN was adequate preparation for getting things done.
>
>3. Computer architects are generally completely out of touch with
>reality.
>
>Do anything you like, but please never show respect for Seymour Cray,
>including misspelling his name.
>
>Robert.

I certainly didn't intend to misspell his name.

Hey, I'll tell you what: you don't tell me who I should show respect for,
and I won't bother to point out that you ended up recreating a design
that you say you despise, OK?

And I'm also sorry that wherever you were, it didn't have competent
programmers (or, apparently, physicists).

MSU physicists designed two different cyclotrons using the CDC machines.

- Tim
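
The compiler-managed ECS paging Tim describes has a straightforward shape:
a big array lives in large, slow memory, and a small fast-memory window
holds the part you are currently working on. A minimal sketch in C follows,
with the window size, the names, and the malloc-backed stand-in for ECS all
being illustrative assumptions rather than CDC's actual mechanism:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_WORDS 512                  /* words per fast-memory window  */
#define ECS_WORDS  (1024 * PAGE_WORDS)  /* the *big* array in "ECS"      */

static double *ecs;                  /* stand-in for slow, large memory  */
static double  window[PAGE_WORDS];   /* stand-in for fast core memory    */
static long    resident = -1;        /* which ECS page the window holds  */
static int     dirty;                /* window modified since page-in?   */

/* Return a fast-memory pointer to element i, paging as needed. */
static double *touch(long i)
{
    long page = i / PAGE_WORDS;
    if (page != resident) {
        if (resident >= 0 && dirty)  /* write back the old page first */
            memcpy(ecs + resident * PAGE_WORDS, window, sizeof window);
        memcpy(window, ecs + page * PAGE_WORDS, sizeof window);
        resident = page;
        dirty = 0;
    }
    return &window[i % PAGE_WORDS];
}

int main(void)
{
    ecs = calloc(ECS_WORDS, sizeof *ecs);
    if (!ecs) return 1;

    /* Sequential access pays the page-in cost once per PAGE_WORDS
       elements, which is the right trade when, as with ECS, bandwidth
       is decent but per-access latency (~10us) is terrible. */
    for (long i = 0; i < ECS_WORDS; i++) {
        *touch(i) = (double)i;
        dirty = 1;
    }
    printf("last word: %g\n", *touch(ECS_WORDS - 1));
    free(ecs);
    return 0;
}

The trade Tim mentions falls out of the structure: streaming through a
page amortizes the long access time, while random single-word access
would pay the full latency every time.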

From: Terje Mathisen <terje.mathisen at tmsw.no> on
Robert Myers wrote:
> From that experience, I acquired several permanent prejudices:
>
> 1. For scientific/engineering applications, "programmers" should
> either be limited to sorting and labeling output, or (preferably) they
> should be shipped to the antarctic, where they could be sent, one at a
> time, to check the temperature gauge a quarter mile from the main
> camp.
>
> 2. No sane computational physicist should imagine that even a thorough
> knowledge of FORTRAN was adequate preparation for getting things done.
>
> 3. Computer architects are generally completely out of touch with
> reality.

Hmmm... let's see...

1. I'm a programmer, but otoh I do like xc skiing and I would love to be
able to spend a season in Antarctica.

2. I learned programming on a Fortran II compiler, '27H' Hollerith text
constants and all. I've done CFC (computational fluid chemistry)
optimization, doubling the simulation speed.

3. Yes, my employer tends to put me in the 'Architect' role on the staff
diagrams.

So Robert, do I satisfy your prejudices?
:-)

Terje

--
- <Terje.Mathisen at tmsw.no>
"almost all programming can be viewed as an exercise in caching"
From: Terje Mathisen <terje.mathisen at tmsw.no> on
nik Simpson wrote:
> On 3/2/2010 1:17 AM, Terje Mathisen wrote:
>> That would _really_ amaze me, if they employed two different processes
>> on the same die!
>>
>> Wouldn't that have implications for a lot of other stuff as well, like
>> required voltage levels?
>>
>> Have you seen any kind of documentation for this?
>>
>> Terje
>>
> That's certainly the case for the Clarksdale/Westmere parts with
> integrated graphics...
>
> http://www.hardocp.com/article/2010/01/03/intel_westmere_32nm_clarkdale_core_i5661_review/

That link clearly shows Clarkdale to be two dies bonded onto a single
carrier...

Terje

--
- <Terje.Mathisen at tmsw.no>
"almost all programming can be viewed as an exercise in caching"
From: Terje Mathisen <terje.mathisen at tmsw.no> on
Robert Myers wrote:
> Latency that is exposed on the critical path is forever. It isn't
> very often that you *have* to leave latency exposed on the critical
> path.

Huh?

In my book "critical path" is nothing but latency.

> Once again, it is a matter of design choices.

Please choose to remove all latency limits then!

Terje

--
- <Terje.Mathisen at tmsw.no>
"almost all programming can be viewed as an exercise in caching"