From: BDH on
> Grrk. That is true for matrix multiplication, but FFTs and sorting are
> memory access problems. IBM and others would be VERY interested in
> something that could do that ten times faster for an economical amount
> of hardware. Most special purpose systems that have done that for those
> tasks have handled only a few special cases.

Well, you would need to have your RAM and your CPU on the same chip,
and I wasn't worrying about getting data on and off the chip. I guess
you can run problems that are a long way from being IO bound, or
maybe...shine some light on the top?

> Matrix
> multiply speedup is a known, solved problem.

Abortions for some, miniature American flags for others, fast computers
for all!

Er, you're right though, I was looking at non-matrix problems.

From: Nick Maclaren on

In article <1163260515.356992.221720(a)h54g2000cwb.googlegroups.com>,
"BDH" <bhauth(a)gmail.com> writes:
|>
|> > Grrk. That is true for matrix multiplication, but FFTs and sorting are
|> > memory access problems. IBM and others would be VERY interested in
|> > something that could do that ten times faster for an economical amount
|> > of hardware. Most special purpose systems that have done that for those
|> > tasks have handled only a few special cases.
|>
|> Well, you would need to have your RAM and your CPU on the same chip,
|> and I wasn't worrying about getting data on and off the chip. I guess
|> you can run problems that are a long way from being IO bound, or
|> maybe...shine some light on the top?

I suggest that you look up how cache RAM is constructed, and why it
is transferred from L2 to L1 in lines, not whatever unit the program
needs. And then work out how you would make your system work for a
typical realistic large sort or FFT, where the data come to (say)
16 GB today.
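
To put rough numbers on that (a back-of-envelope sketch; the 64-byte
line and the 8-byte element are assumptions, not any particular
machine):

    #include <stdio.h>

    int main(void)
    {
        /* Count 64-byte cache lines moved by one sequential pass
           versus one large-stride pass over the same 16 GB. */
        unsigned long long line = 64;          /* bytes/line (assumed) */
        unsigned long long data = 16ULL << 30; /* 16 GB working set */
        unsigned long long elem = 8;           /* one double per touch */

        /* Sequential: every byte of each fetched line gets used. */
        unsigned long long seq = data / line;

        /* Stride of a line or more: each 8-byte touch drags in a
           full line. */
        unsigned long long strided = data / elem;

        printf("sequential: %llu line fetches\n", seq);
        printf("strided:    %llu line fetches (%llux the traffic)\n",
               strided, strided / seq);
        return 0;
    }

A strided pass pays that factor of 8 every time, which is exactly
where a 16 GB sort or FFT hurts.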

When you have worked out how to do what you claim, patent it, and
you will become one of the richest men in the world. Not to mention
one of the most respected in the area of computer design.


Regards,
Nick Maclaren.
From: BDH on
> I suggest that you look up how cache RAM is constructed, and why it
> is transferred from L2 to L1 in lines, not whatever unit the program
> needs.

Everyone knows "why" - but...well...how would you respond if I
suggested you look up why multiplication is n^2?
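
That's the point of the analogy: "everyone knew" multiplication took
n^2 digit operations until Karatsuba traded one of the four half-size
multiplies for a few additions. A toy sketch in C on 32-bit words,
just to show the trick (real bignum code recurses):

    #include <stdint.h>
    #include <stdio.h>

    /* Karatsuba on 32-bit operands split into 16-bit halves: three
       half-size multiplies instead of four. Recursing gives
       T(n) = 3 T(n/2) + O(n), i.e. O(n^log2(3)) ~ n^1.585. */
    static uint64_t karatsuba32(uint32_t x, uint32_t y)
    {
        uint32_t xh = x >> 16, xl = x & 0xFFFF;
        uint32_t yh = y >> 16, yl = y & 0xFFFF;

        uint64_t hi  = (uint64_t)xh * yh;         /* 1st multiply */
        uint64_t lo  = (uint64_t)xl * yl;         /* 2nd multiply */
        uint64_t mid = (uint64_t)(xh + xl) * (yh + yl)
                     - hi - lo;                   /* 3rd does two jobs */

        return (hi << 32) + (mid << 16) + lo;
    }

    int main(void)
    {
        uint32_t a = 123456789u, b = 987654321u;
        printf("%llu vs %llu\n",
               (unsigned long long)karatsuba32(a, b),
               (unsigned long long)a * b);
        return 0;
    }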

> And then work out how you would make your system work for a
> typical realistic large sort or FFT, where the data come to (say)
> 16 GB today.

Yikes, I've never seen an FFT that big. What's it for?

On the one hand, you have a good point: I can't think of anything that
could handle that really efficiently. On the other hand, if you're
breaking it down, your super-FFT chips can still handle the pieces
faster, and to whatever extent you have to do things on disk, you're
totally screwed in any case.
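
To be concrete about "breaking it down": the usual four-step split
treats the N points as an N1 x N2 array - transform the columns,
multiply by twiddle factors, transform the rows, transpose. A sketch
in C, with a naive dft() standing in for the hypothetical super-FFT
chip:

    #include <complex.h>
    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Stand-in for the fast on-chip transform: a naive O(m^2) DFT
       on a contiguous m-point buffer. */
    static void dft(complex double *v, int m)
    {
        complex double *t = malloc(m * sizeof *t);
        for (int k = 0; k < m; k++) {
            t[k] = 0;
            for (int n = 0; n < m; n++)
                t[k] += v[n] * cexp(-2 * M_PI * I * n * k / m);
        }
        for (int k = 0; k < m; k++)
            v[k] = t[k];
        free(t);
    }

    /* Four-step FFT of x[0..N1*N2-1], natural order in and out.
       Index split: n = n2 + N2*n1 in, k = k1 + N1*k2 out. */
    static void fft4step(complex double *x, int N1, int N2)
    {
        int N = N1 * N2;
        complex double *col = malloc(N1 * sizeof *col);
        complex double *y = malloc(N * sizeof *y);

        for (int n2 = 0; n2 < N2; n2++) {
            /* Step 1: N1-point transform of one "column". */
            for (int n1 = 0; n1 < N1; n1++)
                col[n1] = x[n2 + N2 * n1];
            dft(col, N1);
            /* Step 2: twiddle by exp(-2 pi i n2 k1 / N). */
            for (int k1 = 0; k1 < N1; k1++)
                y[n2 + N2 * k1] =
                    col[k1] * cexp(-2 * M_PI * I * n2 * k1 / N);
        }
        for (int k1 = 0; k1 < N1; k1++) {
            /* Step 3: N2-point transform of each "row"... */
            dft(y + N2 * k1, N2);
            /* Step 4: ...then transpose into index k1 + N1*k2. */
            for (int k2 = 0; k2 < N2; k2++)
                x[k1 + N1 * k2] = y[N2 * k1 + k2];
        }
        free(col);
        free(y);
    }

    int main(void)
    {
        enum { N1 = 4, N2 = 8, N = N1 * N2 };
        complex double x[N], ref[N];
        for (int n = 0; n < N; n++)
            ref[n] = x[n] = cos(0.3 * n) + 0.1 * n * I;

        fft4step(x, N1, N2);
        dft(ref, N);            /* direct transform as a check */

        double err = 0;
        for (int k = 0; k < N; k++)
            if (cabs(x[k] - ref[k]) > err)
                err = cabs(x[k] - ref[k]);
        printf("max error vs direct DFT: %g\n", err);
        return 0;
    }

Each dft() call is a piece that fits on the chip; the only whole-data
step left is the transpose, which is a memory problem, not an
arithmetic one.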

Sorts, well, sure, you may have to sort 16 gigs, and you obviously will
have to read and write the whole thing several times. With ye olde
super-sort chip you can bring that down to about log(data size / sort
capacity) passes. Which is slow, but hey, blame hard drives.
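
Made-up numbers to go with that: 16 GB of data, a hypothetical chip
that sorts 256 MB at a whack, merging k runs per pass:

    #include <stdio.h>

    int main(void)
    {
        /* Assumed figures, for illustration only. */
        long long data = 16LL << 30;   /* 16 GB to sort */
        long long chip = 256LL << 20;  /* on-chip sort capacity */
        long long runs = data / chip;  /* initial sorted runs: 64 */

        for (int k = 2; k <= 64; k *= 4) {
            /* One pass to form the runs, then merge k at a time;
               every pass reads and writes the full 16 GB. */
            int passes = 1;
            for (long long r = runs; r > 1; r = (r + k - 1) / k)
                passes++;
            printf("%2d-way merge: %d full read/write passes\n",
                   k, passes);
        }
        return 0;
    }

Two-way merging gives 7 passes; merge wide and the disk bandwidth,
not the sort chip, sets the floor at 3.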

> When you have worked out how to do what you claim, patent it, and
> you will become one of the richest men in the world. Not to mention
> one of the most respected in the area of computer design.

Huh, well, if I do have something new I guess I could write a paper.
But after having at one point or another reinvented suffix array
indexing, compressed suffix array indexing, multidimensional scaling,
LZW, and PPM, and never ending up with anything both useful and new,
well, I'm skeptical.

From: Eugene Miya on
In article <1163122270.138144.37330(a)h48g2000cwc.googlegroups.com>,
<lynn(a)garlic.com> wrote:
>Eugene Miya wrote:
>> This implies Convex was in UniTree's camp. Convex had their very
>
>it wasn't a "camp" thing ... there was a proposal from several of the NSF
>funded supercomputing centers (CNSF, NCSA, PSC, SDSC) for NSF funding

Sure it was a camp.
These guys were Cray sites who went along with the DOE's CTSS OS, with
UniTree as an afterthought. So when Unicos came along and CTSS proved
not portable enough, CTSS was left basically dead.

>for evaluation and selection of a common mass storage archive solution
>.... which strongly leaned towards Unitree on the Convex platform. Just
>another one of those things that was happening in the transition from
>strictly proprietary software to a more open environment.
>
>We got pulled into the situation to push unitree on rs6000 as an
>alternative solution.

There were 4 major MSS systems at the time, including NAStore.

Most of these systems had only ftp; NFS was an afterthought....

From: Eugene Miya on
In article <m3lal21p8lmnce4tgnno4brvgkv4difm8q(a)4ax.com>,
Brian Inglis <Brian.Inglis(a)SystematicSW.ab.ca> wrote:
>On Fri, 10 Nov 2006 22:37:57 +0800 in comp.arch, prep(a)prep.synonet.com
>wrote:
>>eugene(a)cse.ucsc.edu (Eugene Miya) writes:
>>> close if not the first. CDC had a line of pretty drives resold as
>>> the RP06 and other models.
>>
>>No, the RP06 was a Memorex 677(?) with a massbus `DCL' wart on the
>>side. It was good enough that another company moved heaven and earth
>>to drive Memorex to the wall.

Oh damn, that's right: 200 MB unformatted and about 176 MB formatted.
As the newer generation says: My bad.


>>CDC spun off their disk biz early by going into a joint venture with
>>NCR called Magnetic Peripherals. CDC disks meets NCR printers. That
