From: Johnny B Good on 26 Aug 2008 10:01
The message <VA.000014ca.5759f437(a)nospam.aaisp.org>
from Daniel James <wastebasket(a)nospam.aaisp.org> contains these words:
> In article news:<313030303737303648B2985D91(a)plugzetnet.co.uk>, Johnny B
> Good wrote:
> > Either 30 or 60 seconds. Probably the latter (at least for ms windows).
> > Since spin up can take a (seemingly) interminable 5 to 7 seconds,
> > that's not going to be the problem.
> That sounds about right ... you may have noticed elsethread that I found
> a good deal on the Netgear (Infrant) ReadyNAS Duo and went for that, so
> I've now got first-hand experience.
> I've got drive spin-down configured for an hour and it hardly ever annoys
> me by being slow.
My homebuilt FreeNAS file server has 3 drives in an old Gateway 2000
desktop box with an asrock P4 MoBo and I elected to forego drive
spindown. Whilst there was a definite power saving, I decided I'd rather
spend the extra tenner a year and not subject the drives and my sanity
to the extra stress. I also preferred to see relatively steady
temperatures (currently 33 deg C for the WD5000AAKB drive, and 29 and 30
for the two Samsung HD103UJs for a room temperature of 22 deg C).
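For anyone wanting to check those temperatures themselves, smartmontools can read them. A minimal sketch, assuming the standard 10-column `smartctl -A` attribute layout and that attribute 194 (Temperature_Celsius) is the one to watch (true of most drives, but not guaranteed):

```shell
#!/bin/sh
# Parse the drive temperature out of smartctl's attribute table
# (smartmontools).  Assumes the standard "-A" layout where the raw
# value is the tenth field and the attribute name is the second.
smart_temp() {
    # expects `smartctl -A /dev/sdX` output on stdin
    awk '$2 == "Temperature_Celsius" { print $10 }'
}

# Typical use (needs root):
#   smartctl -A /dev/hda | smart_temp
```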
> Interesting: It appears to spin the drives up one at a time (good thing
> there are only two) -- presumably to avoid the power drain of two drives
> spinning up at once.
Yes, that seems standard with pretty well all the *nix OSen when drive
spindown power saving is set. Of course, the bios might not be so
thoughtful when powering up a bunch of drives, a consideration when the
original 145 watt Gateway's PSU has been retained on efficiency and
noise grounds (it's lucky for me that the Asrock MoBo allows the CPU VRM
to be powered from the 5 volt rail in the absence of the 4 pin 12v
connector).
At one time I had a total of 4 drives in that box and the peak power
demand on startup would go right up to around the 175 watt mark before
settling back to the 75 watt mark and ultimately (with a VIA chipset
socket A MoBo, underclocked and undervolted Athlon XP2500+
configuration) about 72 watts. The consumption with the P4 setup is now
down to about 62 watts (one less drive but an extra 600GB capacity).
> I am occasionally surprised by the drives spinning up when I wouldn't
> expect them to ... but it seems to be just Windows Explorer (I've not yet
> had the box in use with only linux clients connected) poking the network
> to make sure its display is still valid.
I see that effect on the Medion external USB2/E-Sata 500GB Seagate
drive whenever I switch my external drive farm on. The 5 minute timeout
can't be reconfigured and I've googled around (Medion's website is no
help). The spindown is an obvious ploy to avoid overheating, but even
so, they could have set it for a more practical 10 or 15 minutes.
None of the other USB2 drives (enclosures and drives bought separately)
have that annoyance and, although they do run a bit warm, they're
certainly cooler than that Medion when it has not been given the chance
to spin down for a half hour or so.
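For drives that do honour a host-set standby timer (unlike the Medion, apparently), hdparm can set it, though -S takes an encoded value rather than minutes. A small helper, assuming the encoding documented in hdparm(8): values 1-240 mean N*5 seconds (up to 20 minutes) and 241-251 mean (N-240)*30 minutes:

```shell
#!/bin/sh
# Convert a timeout in whole minutes to an hdparm -S value.
# Per hdparm(8): 1-240 encode N*5 seconds, 241-251 encode
# (N-240)*30 minutes.  Rounds down to 30-minute granularity
# above 20 minutes.
mins_to_S() {
    mins=$1
    if [ "$mins" -le 20 ]; then
        echo $(( mins * 12 ))          # minutes -> 5-second units
    else
        echo $(( 240 + mins / 30 ))    # minutes -> 30-minute units
    fi
}

# e.g. a 15-minute timeout (needs root and a drive that honours it):
#   hdparm -S "$(mins_to_S 15)" /dev/sdb
```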
TBH, I've entertained the idea of setting up another (part time) server
box to sling those four external drives into and get away from the
perils of USB flakiness. As it is, they're all formatted with Ext2
partitions to give me a "Get Out Of Jail" card whenever USB flakiness
strikes and renders the FS invisible to the windows client (I have Ext2
support installed in my win2k box).
I'll be looking out for a suitable low powered, but well endowed, MoBo
to replace the current P4 setup (it needs to have IDE and 4 SATA plus
built in Gbps ethernet) so I'll probably relegate the existing MoBo to
another suitable case and run it part time with those external drives.
Please remove the "ohggcyht" before replying.
The address has been munged to reject Spam-bots.
From: Theo Markettos on 26 Aug 2008 18:21
Nigel Wade <nmw(a)ion.le.ac.uk> wrote:
> Be grateful. The NAS I bought, a Thecus N2100, would spin up both drives
> at the same time. Unfortunately its 12V/5A PSU wasn't sufficient to spin
> up both drives at the same time (despite the drives being on the
> officially supported list) when the system was attempting to boot... The
> ensuing spin-up drives, PSU overload, spin-up drives, PSU overload cycle
> didn't do either the PSU or the drives any good.
Heh. I once had a pair of 5.25" full height SCSI drives - one was 35W
constant and the other 25W (330MB and 1GB, I forget which way around. The
1GB one was black to help heat radiation). I could run either from my Risc
PC's 100W PSU, as well as the motherboard, CD-ROM, expansion cards and
floppy drive - there was a little delay while the RPC booted before the
drive started to spinup. But it really didn't like trying to spin both
drives up and had the effect you describe.
(450W desktop supplies, pah!)
I've had a play with noflushd, which does seem to be spinning my
IDE-connected backup drive down. But it doesn't stay down for long, although
AFAIK nothing is accessing it. I have a couple of ext3 FSs on it - would
that be the reason? How can I find out which processes are trying to access
a filesystem? lsof doesn't show anything, but then I'm only running it as a
normal user.
Alternatively, I wonder if automountd would be a workaround?
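On the "which processes are accessing it" question: `fuser -vm <mountpoint>` and `lsof <mountpoint>` are the usual answers, but both only see your own processes unless run as root, which may be why lsof showed nothing. A rough, Linux-specific stand-in that walks /proc by hand (the demonstration uses a temp file rather than a real mount point):

```shell
#!/bin/sh
# Poor man's fuser: print the PIDs of processes holding an open file
# descriptor under a given path, by reading /proc/<pid>/fd symlinks.
# Linux-specific; as a non-root user it only sees your own processes.
fs_users() {
    for fd in /proc/[0-9]*/fd/*; do
        target=$(readlink "$fd" 2>/dev/null) || continue
        case $target in
            "$1"*) echo "$fd" | cut -d/ -f3 ;;
        esac
    done | sort -un
}

# Demonstration: hold a temp file open and confirm the holder shows up.
f=$(mktemp)
tail -f "$f" & pid=$!
sleep 1
fs_users "$f"    # should print the tail process's PID
kill "$pid"
wait "$pid" 2>/dev/null
rm -f "$f"
```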
From: Nix on 27 Aug 2008 00:56
On 26 Aug 2008, Johnny B. Good uttered the following:
> temperatures (currently 33 deg C for the WD5000AAKB drive, and 29 and 30
> for the two Samsung HD103UJs for a room temperature of 22 deg C).
As an aside, did you see the large-scale Google study a few years ago
which suggested that drives fail *less* often if run at 40C? They
speculate that drive manufacturers' simulation of aging for MTBF
analysis (by running a large number of drives hot with heavy I/O for
some time) is actually leading them to optimize (unintentionally?) for
drives that run hot.
Anecdotal evidence (is it anecdotal when it's yourself?): of fifteen
hard drives I've owned, I've actively cooled four. Three of those four
suffered mechanical failure, the only drives to fail that way.
I don't actively cool hard drives anymore. (It's a noisy powersucker to
do so in any case.)
`Not even vi uses vi key bindings for its command line.' --- PdS
From: Gordon Henderson on 27 Aug 2008 02:53
In article <35D*8bsls(a)news.chiark.greenend.org.uk>,
Theo Markettos <theom+news(a)chiark.greenend.org.uk> wrote:
>I've had a play with noflushd, which does seem to be spinning my
>IDE-connected backup drive down. But it doesn't go down for long although
>AFAIK nothing is accessing it. I have a couple of ext3 FSs on it - would
>that be the reason? How can I find out which processes are trying to access
>a filesystem? lsof doesn't show anything, but then I'm only running it as a
Noflushd and ext3 (or any journalling FS) don't play nice together. You
can try to extend the ext3 flush time with a mount option.
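A minimal sketch of that, assuming ext3's standard commit= option (the journal flush interval in seconds, default 5); the device, mount point, and 600-second value here are just examples:

```shell
# Lengthen ext3's journal commit interval so the journal doesn't wake
# the drive every few seconds (needs root; paths are hypothetical).
mount -o remount,commit=600 /mnt/backup

# or permanently, in /etc/fstab:
#   /dev/hdb1  /mnt/backup  ext3  defaults,commit=600  0  2
```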
From: Daniel James on 27 Aug 2008 05:46
In article news:<87bpzf3vk9.fsf(a)hades.wkstn.nix>, Nix wrote:
> As an aside, did you see the large-scale Google study a few years ago
> which suggested that drives fail *less* often if run at 40C?
I saw the study, and recall that there was some counter-intuitive
revelation about failure rates not being related to high temperatures ...
but as there wasn't any correlation between those figures and drive
makes/models it seemed pretty meaningless.
All in all it was a pretty bad study (or, at least, what was actually
published wasn't good) as there was insufficient detail for any
meaningful conclusion to be drawn, but enough noise to excite a lot of
comment.
> They speculate that drive manufacturers' simulation of aging for MTBF
> analysis (by running a large number of drives hot with heavy I/O for
> some time) is actually leading them to optimize (unintentionally?) for
> drives that run hot.
It may be that drives that naturally run hot are designed to run hot, and
that cooling them doesn't help ... that doesn't mean that a badly-cooled
drive will be reliable because it's running hot, if it's not a model that
naturally does. You can't tell.
> Anecdotal evidence (is it anecdotal when it's yourself?): of fifteen
> hard drives I've owned, I've actively cooled four. Three of those four
> suffered mechanical failure, the only drives to fail that way.
Of ... 20-25 recent-ish hard drives I've owned the only ones to fail were
not actively cooled -- but they were IBM Ultrastars (SCSI near relatives
of the infamous Deathstar) so I won't draw conclusions. Nowadays I fit
all my drives in fan-cooled receptacles that allow easy drive swapping --
some of these systems get rebooted from different drives with different
OSes on a daily basis -- and no drive so fitted has yet failed. Of
course, the duty cycle of those drives hasn't been 100% as they're only
in use part of the time so that proves nothing.
> I don't actively cool hard drives anymore. (It's a noisy powersucker to
> do so in any case.)
The Icy-Box trayless SATA enclosure in this box makes no perceptible noise
... the whole box is essentially silent (certainly much more so than the
ReadyNAS). Yes, the fan must draw a few hundred mW ...