From: Anne & Lynn Wheeler

eugene(a)cse.ucsc.edu (Eugene Miya) writes:
> Sure it was a camp.
> These guys were Cray sites who went along with the DOE's CTSS OS and
> UniTree as an afterthought. So when Unicos came along and CTSS was not
> portable enough it left CTSS basically dead.

I'm sorry, i misunderstood your original comment to be referring to
convex being part of some "camp" vis-a-vis their own proprietary
solution ... and i thought i was replying that it was my impression
that it wasn't a "camp" thing for convex ... it was something that
some customers may have wanted to use with convex ... and convex was
responding to something their customers wanted.

i didn't mean to imply that there weren't a number of solutions, with
customers choosing particular solutions ... and then the customers
having preferences for one solution or another ("camp" if you will)
... aka
http://www.garlic.com/~lynn/2006u.html#20 Why so little parallelism?

where:

Eugene Miya wrote:
> This implies Convex was in UniTree's camp. Convex had their very
> storage manager CSM which was not a bad system but never caught on.
> Too bad it never got along past 2.0.

i was trying to distinguish between customers possibly wanting
something on a convex platform ... vis-a-vis convex taking a position
on what their customers should want. i wouldn't view convex
cooperating with their customers on a particular solution as
necessarily meaning that it was a "camp"/membership thing for convex
(and didn't mean to imply anything at all about whether or not it was
a "camp" thing for the customers).

for other drift from
http://www.garlic.com/~lynn/2006u.html#19 Why so little parallelism?

.....

From: wheeler
Date: Thu Apr 16 11:11:41 1992
Subject: Re: Archiving on large systems

Unitree(/lincs) is basically one of four systems that all evolved
around the same time, the other three being CFS (LLNL), Mesa (NCAR),
and NAStore .... reference:

Newsgroups: comp.unix.large
Date: 15 Apr 92 19:18:07 GMT

As was mentioned in an earlier posting, let me add a couple of
words on NAStore.

NAStore is a system to provide a Unix based, network connected file
system with the appearance of unlimited storage capacity. The system
was designed and developed at NASA Ames Research Center for the
Numerical Aerodynamic Simulation program. The goal was to provide
seemingly unlimited file space via transparent archival (or migration)
of files to removable media (3480 tapes) in both robotic and manual
handlers. Supported file sizes exceed the 2 gigabyte limit on most
systems. Archived data is restored when accessed by the user, with
each byte being available as soon as it is restored rather than having
to wait for the whole file, as is the case with other archival systems.
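
The streaming restore can be sketched in a few lines; this is
hypothetical illustration code (not NAStore's actual implementation),
assuming the restore process appends bytes to the file strictly in
order:

  import os
  import time

  def read_while_restoring(path, final_size, chunk=1 << 20):
      # Consume a file while an archiver is still restoring it,
      # instead of blocking until the whole restore completes.
      # Assumes the restore appends bytes strictly in order.
      consumed = 0
      with open(path, "rb") as f:
          while consumed < final_size:
              available = os.path.getsize(path) - consumed
              if available <= 0:
                  time.sleep(0.1)   # restore hasn't caught up yet
                  continue
              data = f.read(min(available, chunk))
              consumed += len(data)
              yield data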

The NAStore system has been used here for 3 years and is under ongoing
development. It is based upon Amdahl's UTS and runs on an Amdahl
5880. We have 200 gigabytes of on-line disk and 6 terabytes of
robotic tape.

If you care for more information, let me suggest the following reading:
- 1989 Winter Usenix Conference Proceedings, see the article on RASH
- the last IEEE Mass Storage Symposium proceedings

If you are still interested, contact Dave Tweten, e-mail tweten(a)nas.nasa.gov

.... snip ...

past posts mentioning one or more of the four

http://www.garlic.com/~lynn/2001.html#21 Disk caching and file systems. Disk history...people forget
http://www.garlic.com/~lynn/2001.html#22 Disk caching and file systems. Disk history...people forget
http://www.garlic.com/~lynn/2001f.html#66 commodity storage servers
http://www.garlic.com/~lynn/2002.html#10 index searching
http://www.garlic.com/~lynn/2002e.html#46 What goes into a 3090?
http://www.garlic.com/~lynn/2002g.html#61 GE 625/635 Reference + Smart Hardware
http://www.garlic.com/~lynn/2003b.html#29 360/370 disk drives
http://www.garlic.com/~lynn/2003b.html#31 360/370 disk drives
http://www.garlic.com/~lynn/2003h.html#6 IBM says AMD dead in 5yrs ... -- Microsoft Monopoly vs. IBM
http://www.garlic.com/~lynn/2003i.html#53 A Dark Day
http://www.garlic.com/~lynn/2004d.html#75 DASD Architecture of the future
http://www.garlic.com/~lynn/2004g.html#26 network history
http://www.garlic.com/~lynn/2004p.html#29 FW: Is FICON good enough, or is it the only choice we get?
http://www.garlic.com/~lynn/2005e.html#12 Device and channel
http://www.garlic.com/~lynn/2005e.html#15 Device and channel
http://www.garlic.com/~lynn/2005e.html#16 Device and channel
http://www.garlic.com/~lynn/2005e.html#19 Device and channel
http://www.garlic.com/~lynn/2006n.html#29 CRAM, DataCell, and 3850
http://www.garlic.com/~lynn/2006t.html#37 Are there more stupid people in IT than there used to be?


From: BDH
> I wonder how the storage war is going to shape up?
> Will tape really die as Jim Gray predicts or will tape drives be
> relegated to data retrieval devices for one last read off tape?

Tape is a way to increase the ratio of memory to read/write speed. When
that ratio is already very high, why bother? You bother if you're
recording something continuously on the small chance that you will want
to rewind to some known point and see what happens, so surveillance and
backup and that's about it. And that's already happened. The
uncertainty in my mind is holographic and MEMS storage.

From: Nick Maclaren

In article <1163268488.777281.134760(a)k70g2000cwa.googlegroups.com>,
"BDH" <bhauth(a)gmail.com> writes:
|> > I suggest that you look up how cache RAM is constructed, and why it
|> > is transferred from L2 to L1 in lines, not whatever unit the program
|> > needs.
|>
|> Everyone knows "why" - but...well...how would you respond if I
|> suggested you look up why multiplication is n^2?

I would respond that it isn't.
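
Schoolbook long multiplication of two n-digit numbers is indeed
Theta(n^2), but sub-quadratic methods have been known since Karatsuba
(1960), and FFT-based methods such as Schonhage-Strassen are faster
still. A minimal Python sketch of the Karatsuba recursion, purely for
illustration:

  def karatsuba(x, y):
      # Multiply non-negative integers in O(n^log2(3)) ~ O(n^1.585)
      # digit operations, versus O(n^2) for the schoolbook method.
      if x < 16 or y < 16:
          return x * y
      n = max(x.bit_length(), y.bit_length()) // 2
      xh, xl = x >> n, x & ((1 << n) - 1)
      yh, yl = y >> n, y & ((1 << n) - 1)
      a = karatsuba(xh, yh)
      b = karatsuba(xl, yl)
      c = karatsuba(xh + xl, yh + yl) - a - b   # 3 multiplies, not 4
      return (a << (2 * n)) + (c << n) + b

  assert karatsuba(12345, 6789) == 12345 * 6789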

|> > And then work out how you would make your system work for a
|> > typical realistic large sort or FFT, where the data come to (say)
|> > 16 GB today.
|>
|> Yikes, I've never seen an FFT that big. What's it for?

Quantum theoretical calculations as used in surface chemistry (which
are often 1024x1024x1024), high-resolution image processing (which are
often 32768x32768) and so on. Both big-money calculations.
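
Both of those examples are 2^30 sample points, and at 16 bytes per
double-precision complex value each comes to exactly 16 GiB, which
lines up with the 16 GB figure quoted above:

  # 16 bytes per double-precision complex sample:
  for shape in [(1024, 1024, 1024), (32768, 32768)]:
      points = 1
      for d in shape:
          points *= d
      print(shape, points * 16 // 2**30, "GiB")   # both print 16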

|> On the one hand, you have a good point, I can't think of anything that
|> could handle that real efficiently. On the other hand, if you're
|> breaking it down, your super-FFT chips can still handle the pieces
|> faster, and to whatever extent you have to do things on-disk, you're
|> totally screwed in any case.

16 GB - on disk? This is 2006. Get more RAM.

If you look at FFTs in detail (especially parallel ones), you discover
that handling the pieces fast isn't a lot of help, as they are usually
dominated by the data transfers.
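
To make that concrete, here is a sketch of Bailey's four-step FFT, the
usual starting point for parallel and out-of-core FFTs (my
illustration, using numpy): the row and column FFTs are cheap local
work, while the transpose steps touch every element and turn into an
all-to-all exchange on a parallel machine.

  import numpy as np

  def four_step_fft(x, n1, n2):
      # FFT of length n1*n2 via n1 FFTs of length n2, a twiddle
      # multiply, and n2 FFTs of length n1. The reshape/transpose
      # steps move all of the data; that movement is what dominates.
      m = x.reshape(n2, n1).T                    # scatter (transpose in)
      a = np.fft.fft(m, axis=1)                  # n1 FFTs of length n2
      a = a * np.exp(-2j * np.pi *
                     np.outer(np.arange(n1), np.arange(n2)) / (n1 * n2))
      c = np.fft.fft(a, axis=0)                  # n2 FFTs of length n1
      return c.flatten()                         # gather (transpose out)

  x = np.random.rand(4096) + 1j * np.random.rand(4096)
  assert np.allclose(four_step_fft(x, 64, 64), np.fft.fft(x))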

|> Sorts, well, sure you can have to sort 16 gigs, and you obviously will
|> have to read and write the whole thing several times. With ye old
|> super-sort chip you can bring that down to log(data size / sort
|> capacity) times. Which is slow, but hey, blame hard drives.

You are still thinking teenage hobbyist sizes. If sorting performance
is important, nobody will go to disk until much larger sizes than 16 GB.
RAM is cheap, and systems that can handle 64 GB are not expensive.

|> Huh, well, if I do have something new I guess I could write a paper.
|> After reinventing at some point, for example, suffix array indexing,
|> compressed suffix array indexing, multidimensional scaling, LZW, and
|> PPM, and not ending up with anything useful and new, well, I'm
|> skeptical.

Been there - done that - and that was 30 years back :-)


Regards,
Nick Maclaren.
From: BDH
> |> Everyone knows "why" - but...well...how would you respond if I
> |> suggested you look up why multiplication is n^2?
>
> I would respond that it isn't.

Sure. But the point is that from one perspective, it's necessary, but
from the perspective I prefer, it's irrelevant.

> |> Yikes, I've never seen an FFT that big. What's it for?
>
> Quantum theoretical calculations as used in surface chemistry (which
> are often 1024x1024x1024), high-resolution image processing (which are
> often 32768x32768) and so on. Both big-money calculations.

That is a fair point, in that such places are the best targets for
something new and shiny and expensive.

But still, these are splittable problems, and fast 2 gig solvers make
these faster.

> 16 GB - on disk? This is 2006. Get more RAM.

But it doesn't fit on the chip with the processing.

> |> Sorts, well, sure you can have to sort 16 gigs, and you obviously will
> |> have to read and write the whole thing several times. With ye old
> |> super-sort chip you can bring that down to log(data size / sort
> |> capacity) times. Which is slow, but hey, blame hard drives.
>
> You are still thinking teenage hobbyist sizes. If sorting performance
> is important, nobody will go to disk until much larger sizes than 16 GB.
> RAM is cheap, and systems that can handle 64 GB are not expensive.

To clarify something above: you can reduce the number of reads and
writes by a factor of over 16, while making on-chip things far faster,
and without losing generality.
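
Reading that as a k-way external merge sort (my framing, with
illustrative numbers): form sorted runs the size of the sort capacity
in one pass, then merge k runs at a time, so the data is read and
written about 1 + ceil(log_k(N/M)) times.

  import math

  def external_sort_passes(n_bytes, run_bytes, fan_in):
      # One pass to form sorted runs of run_bytes each, then
      # ceil(log_fan_in(#runs)) merge passes over all of the data.
      runs = math.ceil(n_bytes / run_bytes)
      return 1 + max(0, math.ceil(math.log(runs, fan_in)))

  # e.g. 16 GB of data, a 2 GB "sort capacity", an 8-way merge:
  print(external_sort_passes(16 << 30, 2 << 30, 8))   # -> 2 passes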

> |> Huh, well, if I do have something new I guess I could write a paper.
> |> After reinventing at some point...and not ending up with anything useful and new, well, I'm
> |> skeptical.
>
> Been there - done that - and that was 30 years back :-)

Not sure what you mean.

From: Nick Maclaren

In article <1163333469.455928.72240(a)e3g2000cwe.googlegroups.com>,
"BDH" <bhauth(a)gmail.com> writes:
|>
|> > |> Yikes, I've never seen an FFT that big. What's it for?
|>
|> But still, these are splittable problems, and fast 2 gig solvers make
|> these faster.
|>
|> > You are still thinking teenage hobbyist sizes. If sorting performance
|> > is important, nobody will go to disk until much larger sizes than 16 GB.
|> > RAM is cheap, and systems that can handle 64 GB are not expensive.
|>
|> To clarify something above: you can reduce the number of reads and
|> writes by a factor of over 16, while making on-chip things far
|> faster, and without losing generality.

This is getting boring. If you can provide evidence for the above claims,
please do so. Until then, I don't believe that you can, and few other
experienced people will, either. These are problems that have been
extensively worked over, there is a lot of money available for a much
better solution, and nobody has been able to provide one in 30+ years.


Regards,
Nick Maclaren.