From: Carlie Coats on
Noob wrote:
> [ NB: X-posted to comp.arch and comp.unix.programmer ]
>
> Within 1-2 years, "mainstream" desktop PCs will probably come
> equipped with a "small" (32-128 GB) solid-state drive (SSD) for
> the operating system and applications, and (possibly) an additional,
> larger (500+ GB) hard-disk drive (HDD) for a user's media (mostly
> compressed audio and video).
[snip...]
> Given the typical Unix directory structure:
> http://en.wikipedia.org/wiki/Unix_directory_structure
> which directories should go to the SSD and which to the HDD?
>
> bin and sbin => SSD
> usr => SSD probably
> home => HDD
> etc ?? => not modified often ?? SSD perhaps
> var ??

I vote for "small" 100-250 GB SSD directly on the motherboard
(or, worst-case PCI bus) and HDDs for large data.

SSDs excel (relative to HDDs) at random I/O -- especially
small random reads -- and current models beat HDDs at
random writes as well. It's worth noting that Linus
Torvalds has remarked that the SSD in his new machine gives
him a larger performance boost than the Nehalem processor
does -- software builds involve _lots_ of small, random scratch I/O.

Note that all but very large writes are really read / garbage-
collect / merge-and-modify / write operations on SSD erase blocks
of 64 KB or larger. Depending upon the sophistication of the
controller, the cache size, and the state of the drive, this can
introduce substantial overhead at times.

I move for the following:

/bin, /usr, /etc, /home: SSD

Big data (multimedia, or the really big stuff I generate in
environmental modeling -- 80 GB per model run): HDD (or HDD RAID),
mounted somewhere under /home, or else mounted separately
as /data.

Note also that you want the SSD filesystems mounted with
"noatime", to avoid the extra tiny write transactions from
updating access times on every read.
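
A minimal /etc/fstab sketch of that layout, for illustration only --
the device names, filesystem type, and partitioning are placeholder
assumptions, not a recommendation for any particular hardware:

    # SSD: OS, applications, and home directories -- mounted noatime
    /dev/sda1   /       ext4   defaults,noatime   0 1
    /dev/sda2   /home   ext4   defaults,noatime   0 2
    # HDD (or HDD RAID): bulk data
    /dev/sdb1   /data   ext4   defaults           0 2

Here /bin, /usr, and /etc simply live on the SSD root filesystem,
and the big stuff goes under /data on the spinning disks.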

FWIW -- Carlie Coats
From: Morten Reistad on
In article <hmoo7o$460$1(a)speranza.aioe.org>, Noob <root(a)127.0.0.1> wrote:
>[ NB: X-posted to comp.arch and comp.unix.programmer ]
>
>Within 1-2 years, "mainstream" desktop PCs will probably come
>equipped with a "small" (32-128 GB) solid-state drive (SSD) for
>the operating system and applications, and (possibly) an additional,
>larger (500+ GB) hard-disk drive (HDD) for a user's media (mostly
>compressed audio and video).
>
>In the SSD+HDD scenario, I was wondering whether it would be
>"better" (for some metric) to have the OS swap to the SSD or to
>the HDD?

No, forget it. We should rather consider whether to turn off
swap altogether. The most useful purpose of a swap partition I can
see is to hold a crash dump.
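
For what it's worth, on Linux turning swap off is a one-liner; a
sketch (comment the swap line out of /etc/fstab as well if you want
it to survive a reboot):

    # disable all active swap areas immediately
    swapoff -a
    # verify: should list nothing (or only a dedicated crash-dump device)
    swapon -s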

>It might even make sense to be able to define a swap "hierarchy" as
>in e.g. 1 GB on the SSD as level 1 and 4 GB on the HDD as level 2?
>
>Supposing the apps reside on the SSD, and we define a large swap
>partition on the HDD, does it even make sense to page executable
>code to the HDD, instead of just discarding it, and loading it
>again from the SSD when needed?

Nah, for desktop systems swap is just a backing store for
code and data the programmers have included, but which you don't need.
A well-tended system should have a write-to-read ratio for swap
of something like 10:1 or better. (Yep, 90+% of swapped stuff never
gets read again.)

If you actually need to swap significant amounts of data, system
performance is already dead. Zap. There is no point in optimising a
dead system. Just put it out of its misery.
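
A quick way to check which side of that line a box is on -- a sketch
using standard Linux tools; the exact column layout varies between
vmstat versions and other Unixes:

    # sample memory and swap activity every 5 seconds
    vmstat 5
    # watch the "si" (swap-in) and "so" (swap-out) columns:
    # occasional "so" with near-zero "si" is the healthy 10:1 pattern;
    # sustained non-zero "si" means the machine is thrashing.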

>However, I'm not sure SSDs are the perfect match for a swap partition,
>as the typical memory page is only 4 KB, whereas (AFAIU) SSDs are
>optimized to store data in larger chunks (128 KB ??).

And a swap partition is mostly written to, rarely read. SSDs are not
designed for that kind of write-heavy load.

>Given the typical Unix directory structure:
>http://en.wikipedia.org/wiki/Unix_directory_structure
>which directories should go to the SSD and which to the HDD?
>
>bin and sbin => SSD
>usr => SSD probably
>home => HDD
>etc ?? => not modified often ?? SSD perhaps
>var ??
>
>In short, will widely-available SSDs require OS designers to make
>large changes, or is the current infrastructure generic enough?

This is old news. I have been deploying such systems for 7 years now.

/, /etc and /usr on SSD. /var on HDD, or even on a memory overlay
or on an NFS file system; /tmp on whatever fast media you have, RAM
if you can get away with it; and the rest on HDD, or on the system SAN/NAS.
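
For the "/tmp in RAM" part, a tmpfs mount is the usual trick on Linux;
a minimal /etc/fstab sketch (the 2 GB size cap is just an illustrative
choice, not a recommendation):

    # put /tmp in RAM, capped at 2 GB, world-writable with the sticky bit
    tmpfs   /tmp   tmpfs   size=2g,mode=1777   0 0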

Rather than optimising swap, we should consider turning swap off
altogether, and re-think swapping as something that applies at the
L2-cache <-> memory boundary instead.

-- mrr
From: Scott Lurndal on
Morten Reistad <first(a)last.name> writes:
>In article <hmoo7o$460$1(a)speranza.aioe.org>, Noob <root(a)127.0.0.1> wrote:

>>In short, will widely-available SSDs require OS designers to make
>>large changes, or is the current infrastructure generic enough?
>
>This is old news. I have been deploying such systems for 7 years now.

SSDs have been available for over 25 years in the mainframe world, some
using battery-backed DRAM, some using much more expensive SRAM.

scott
From: Ersek, Laszlo on
In article <34Tjn.11929$v5.2975(a)news.usenetserver.com>,
scott(a)slp53.sl.home (Scott Lurndal) writes:

> My current test system has 112 processors, 1TB memory

Is this you?

http://www.3leafsystems.com/management-team.html

If so, can you point me to something public but not really
marketing-oriented about your SMP products? (No commercial interest,
sorry, just genuine curiosity.)

.... Okay, other than that (here comes the real agenda :)), would you
consider running something like this?

http://lacos.hu/lbzip2-scaling/scaling.html

(Sorry if I violated multiple newsgroup taboos with this post.)

Thanks!
lacos
From: Stephen Fuld on
On 3/4/2010 3:43 PM, Scott Lurndal wrote:
> Morten Reistad<first(a)last.name> writes:
>> In article<hmoo7o$460$1(a)speranza.aioe.org>, Noob<root(a)127.0.0.1> wrote:
>
>>> In short, will widely-available SSDs require OS designers to make
>>> large changes, or is the current infrastructure generic enough?
>>
>> This is old news. I have been deploying such systems for 7 years now.
>
> SSD's have been available for over 25 years in the mainframe world, some
> using battery backed DRAM, some using much more expensive SRAM.

Yup! But closer to 35 years, and the technology choices also included
CCD memories, bubble memory, and even core!


--
- Stephen Fuld
(e-mail address disguised to prevent spam)