From: Rick Jones on
Terje Mathisen <"terje.mathisen at tmsw.no"> wrote:
> Andy Glew wrote:
> > On 7/27/2010 6:16 AM, Niels Jørgen Kruse wrote:
> >> 256 byte line sizes at all levels.
> >
> > 256 *BYTE*?

> Yes, that one rather screamed at me as well.

Itanium's 128 byte cache line seems positively puny then :)

rick jones

--
denial, anger, bargaining, depression, acceptance, rebirth...
where do you want to be today?
these opinions are mine, all mine; HP might not want them anyway... :)
feel free to post, OR email to rick.jones2 in hp.com but NOT BOTH...
From: Jason Riedy on
And Andy Glew writes:
> Jason, can you explain why GUPS is so update heavy?

The best answer I've received is that it's the best model for the target
application anyone could construct without giving away details.[1]
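
For reference, the core of the published RandomAccess (GUPS) kernel is
roughly the following (reconstructed from memory, so treat the names
and the table size as mine):

#include <stdint.h>

#define TABLE_BITS 28                    /* 2^28 words, illustrative */
#define TABLE_SIZE (1ULL << TABLE_BITS)
#define POLY       0x0000000000000007ULL /* the benchmark's LFSR polynomial */

static void gups_update(uint64_t *table, uint64_t nupdate)
{
    uint64_t ran = 1;
    for (uint64_t i = 0; i < nupdate; i++) {
        /* LFSR step: next value in the pseudo-random stream. */
        ran = (ran << 1) ^ ((int64_t)ran < 0 ? POLY : 0);
        /* The update itself: a read-modify-write of one word at an
           effectively random address, i.e. a new cache line nearly
           every iteration. */
        table[ran & (TABLE_SIZE - 1)] ^= ran;
    }
}

Every iteration XORs into a random table slot, so the benchmark is
update heavy by construction.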

> Sure, a workload of random updates seems important. But similarly a
> workload of random reads also seems important.

I'll try to remember to come back and answer more clearly after a
funding agency makes its announcement, but there will be a higher-level
benchmark within a dev. program that focuses on graphs involving random
reads and updates. (Unless someone comes up with an amazing method for
some graph algorithms that doesn't require essentially random reads and
writes.)

However, that wanders away from the original subject line.

Random read/write performance very much *is* a need that people with money
want to address. No one's quite sure of the interplay between how much
application- and algorithm-level parallelism is available and how much
architectural performance can actually be delivered against it. Linear
algebra (dense *and* sparse) accustomed people to much cleaner-cut
trade-offs.

Jason

Footnotes:
[1] Note that I neither have nor want any security clearances, so
everything related to certain groups is hearsay at best. Take with an
appropriate boulder of salt.
From: Bernd Paysan on
Andy Glew wrote:
> 256 *BYTE*?
>
> 2048 bits?
>
> Line sizes 4X the typical 64B line size of x86?
>
> These aren't cache lines. They are disk blocks.

What do you expect? Bandwidth and latency improve at different rates. My rule
of thumb is that transfer time = access time, and that should hold for disks
as well as for memories. The fact that disk sectors are still 512 bytes is
just legacy; by that rule they should have grown to about a megabyte by now.
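
Running the numbers makes the rule concrete (the figures below are
illustrative, not measurements of any particular part):

#include <stdio.h>

/* Rule of thumb: pick the block size so that transfer time equals
   access time, i.e. block = bandwidth * latency. */
static double block_bytes(double bytes_per_s, double latency_s)
{
    return bytes_per_s * latency_s;
}

int main(void)
{
    printf("DRAM: ~%.0f B\n",  block_bytes(10e9, 50e-9));        /* ~500 B */
    printf("disk: ~%.1f MB\n", block_bytes(125e6, 8e-3) / 1e6);  /* ~1 MB  */
    return 0;
}

With ~50 ns latency and ~10 GB/s of bandwidth a DRAM "line" comes out at a
few hundred bytes; with ~8 ms access time and ~125 MB/s sustained transfer a
disk "block" comes out at about a megabyte.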

--
Bernd Paysan
"If you want it done right, you have to do it yourself!"
http://www.jwdt.com/~paysan/
From: Alex McDonald on
On Aug 3, 4:52 pm, Bernd Paysan <bernd.pay...(a)gmx.de> wrote:
> Andy Glew wrote:
> > 256 *BYTE*?
>
> > 2048 bits?
>
> > Line sizes 4X the typical 64B line size of x86?
>
> > These aren't cache lines.  They are disk blocks.
>
> What do you expect? Bandwidth and latency improve at different rates. My
> rule of thumb is that transfer time = access time, and that should hold for
> disks as well as for memories. The fact that disk sectors are still 512
> bytes is just legacy; by that rule they should have grown to about a
> megabyte by now.

The next disk sector size is 4KB. Increasing it beyond that brings
significant engineering issues (not to mention software problems).
Getting anywhere near 1MB would be impossible given that many disks
can't support that kind of track density; most modern disks have at
most a few thousand sectors per track on the inner tracks, well short
of 1MB. Mechanical problems abound too, as tracks don't run in perfect
circles. Reading shorter sectors, and being able to adjust the heads
and correct for read errors in smaller chunks, is a big advantage that
one huge sector would make impossible. In short, there are lots of
reasons why 4KB is such a big advance.
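
To put rough numbers on the track-density point (illustrative figures;
real drives vary by zone):

#include <stdio.h>

int main(void)
{
    unsigned sectors_per_track = 1500;  /* "a few thousand" at most */
    unsigned sector_bytes      = 512;

    /* ~750 KB per inner track: a single 1MB sector wouldn't even
       fit, let alone leave room for servo and ECC overhead between
       finer-grained sectors. */
    printf("inner track capacity: ~%u KB\n",
           sectors_per_track * sector_bytes / 1024);
    return 0;
}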

http://www.anandtech.com/show/2888 covers some of the 4KB technology.
From: Terje Mathisen "terje.mathisen at tmsw.no" on
Bernd Paysan wrote:
> Andy Glew wrote:
>> 256 *BYTE*?
>>
>> 2048 bits?
>>
>> Line sizes 4X the typical 64B line size of x86?
>>
>> These aren't cache lines. They are disk blocks.
>
> What do you expect? Bandwidth and latency improve at different rates. My rule of
> thumb is that transfer time = access time, and that should hold for disks as

That's identical to my own rule of thumb; I've been using it for many,
many years. Among other things it means that the minimum sensible
transfer block size has increased by an additional order of magnitude
or so over that time. :-)

> well as for memories. The fact that disk sectors are still 512 bytes is
> just legacy; by that rule they should have grown to about a megabyte by now.
>

Disk sectors are for addressability and backwards compatibility. The
latter is important enough that some modern disks which use larger
physical sectors internally still have to emulate the old 512-byte
sector size, with dire consequences for disk partitions that aren't
properly aligned. :-(
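
A minimal check for that failure mode (assuming the common Advanced
Format case of a 4KB physical sector behind 512-byte emulation):

#include <stdint.h>
#include <stdio.h>

#define LOGICAL_SECTOR  512u    /* size the drive reports for compatibility */
#define PHYSICAL_SECTOR 4096u   /* common Advanced Format native sector */

/* A partition whose byte offset isn't a multiple of the physical
   sector turns every natively aligned write into a read-modify-write
   inside the drive. */
static int partition_is_aligned(uint64_t start_lba)
{
    return (start_lba * (uint64_t)LOGICAL_SECTOR) % PHYSICAL_SECTOR == 0;
}

int main(void)
{
    /* LBA 63 is the legacy DOS partition offset; 2048 is the modern
       1 MiB default. */
    printf("LBA 63:   %s\n", partition_is_aligned(63)   ? "aligned" : "misaligned");
    printf("LBA 2048: %s\n", partition_is_aligned(2048) ? "aligned" : "misaligned");
    return 0;
}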

Terje

--
- <Terje.Mathisen at tmsw.no>
"almost all programming can be viewed as an exercise in caching"