From: alexd on
When copying files to an SD card with rsync, I noticed some big swings in
throughput:

kmc17-bizet-20-farandole.mp3
4698112 100% 1.10MB/s 0:00:04 (xfer#120, to-check=19/140)
kmc21-mozart-08-andante_voor_fluit_en_orkest.mp3
10481664 100% 655.94kB/s 0:00:15 (xfer#121, to-check=18/140)
kmc27-smetana-07-furiant.mp3
2895872 100% 39.45MB/s 0:00:00 (xfer#122, to-check=17/140)
kmc28-chopin-04-polonaise_in_cis_klein.mp3
8450048 100% 846.38kB/s 0:00:09 (xfer#123, to-check=16/140)
kmc34-mozart-07-allegro_assai.mp3
10825728 100% 1.40MB/s 0:00:07 (xfer#124, to-check=15/140)
kmc41-c.p.e._bach-05-allegro.mp3
4005888 100% 316.48kB/s 0:00:12 (xfer#125, to-check=14/140)
kmc45-bruckner-02-symfonie_nr.4-andante.mp3
16375808 100% 684.53kB/s 0:00:23 (xfer#126, to-check=13/140)

What would account for this? There was little else going on on the system at
the time, and I'm certain that rsync was the only thing using the SD card.

I had expected it would be quick at first, then some ratio would be hit in
the kernel or a buffer would have filled somewhere, and the apparent speed
would drop to match that of the card, but the throughput bounced around the
whole time.

--
<http://ale.cx/> (AIM:troffasky) (UnSoEsNpEaTm(a)ale.cx)
00:36:34 up 8 days, 3:38, 5 users, load average: 0.52, 0.89, 0.89
DIMENSION-CONTROLLING FORT DOH HAS NOW BEEN DEMOLISHED,
AND TIME STARTED FLOWING REVERSELY

From: Nix on
On 29 Jan 2010, alexd stated:

> When copying files to an SD card with rsync, I noticed some big swings in
> throughput:

Yep, this is normal.

> What would account for this?

Caching, and slow devices.

> I had expected it would be quick at first, then some ratio would be hit in
> the kernel or a buffer would have filled somewhere, and the apparent speed
> would drop to match that of the card, but the throughput bounced around the
> whole time.

This is because the SD card's write rate is very slow, much slower than
the device the data's being read from. rsync is dumping data to the
drive as fast as it can, and the kernel accepts it all until memory is
largely full of dirty pages. Then the kernel throttles writes until the
number of dirty pages falls far enough again.
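
(The thresholds involved are the vm.dirty_* sysctls. Assuming a reasonably
stock kernel, something like this shows them:

  $ sysctl vm.dirty_background_ratio vm.dirty_ratio
  # dirty_background_ratio: % of RAM dirty before background flushing starts
  # dirty_ratio: % of RAM dirty before writing processes get throttled

Lowering vm.dirty_ratio makes the throttling kick in sooner, which evens out
the apparent transfer rate at the cost of caching less.)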

Before 2.6.32, this was much worse: flusher threads were not per-device,
so a slow device could easily cause all the flusher threads to block
writing data to it, whereupon writes to much *faster* devices, e.g.
disks, would freeze as well. Often even *reads* would stall and the
whole system would appear to hang up because everything was blocked
either on swap reads or writes or on normal reads, until enough was
written to the slow device for things to unjam again. (For all I know
this is still true, but all my non-embedded devices now have >10Gb RAM
so I don't see these symptoms very often.)
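
(A quick way to see which scheme a given kernel is using: on 2.6.32 each
backing device gets its own writeback thread, named after its major:minor
numbers, whereas older kernels shared a pdflush pool. Something like

  $ ps ax | grep -E '\[(flush|pdflush)'
  # 2.6.32+: one [flush-<major>:<minor>] thread per backing device
  # earlier: a handful of [pdflush] threads shared by all devices

shows which you've got.)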
From: alexd on
Meanwhile, at the uk.comp.os.linux Job Justification Hearings, Nix chose the
tried and tested strategy of:

> This is because the SD card's write rate is very slow, much slower than
> the device the data's being read from. rsync is dumping data to the
> drive as fast as it can, and the kernel accepts it all until memory is
> largely full of dirty pages. Then the kernel throttles writes until the
> number of dirty pages falls far enough again.

Right. I assumed this would hit a 'steady state' of some sort until it
finished writing, but it appears that it doesn't. Gkrellm shows that writing
continues long after rsync has finished, and the graph is just as choppy as
the stats from rsync would suggest.
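
(A crude way to watch that tail-off without gkrellm is to poll the Dirty and
Writeback counters in /proc/meminfo, something like:

  $ watch -n1 'grep -E "^(Dirty|Writeback):" /proc/meminfo'
  # Dirty: data sitting in the page cache, not yet queued to the device
  # Writeback: data currently being written out

Dirty only falls back towards zero once the card has actually caught up.)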

I tried again with the card mounted synchronously [-o sync], and the stats
from rsync now make more sense: they vary from 0.8 to 1.1 MB/s, which is much
more sensible in my opinion [where "sensible" == "better fits with my
preconceptions"]. This is of no great import anyway; as the man page says,
--progress is only there to give bored users something to look at.
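
(For what it's worth, the test was roughly the following; the mount point and
paths are just stand-ins for mine:

  $ mount -o remount,sync /mnt/sdcard
  $ rsync -av --progress ~/music/ /mnt/sdcard/music/

With the filesystem mounted sync, each write has to hit the card before rsync
moves on, so --progress reflects the card's real speed rather than how fast
the page cache can swallow data.)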

> Before 2.6.32, this was much worse: flusher threads were not per-device,
> so a slow device could easily cause all the flusher threads to block
> writing data to it, whereupon writes to much *faster* devices, e.g.
> disks, would freeze as well. Often even *reads* would stall and the
> whole system would appear to hang up because everything was blocked
> either on swap reads or writes or on normal reads

I noticed with 2.6.32 clients that NFS mounts come back to life much more
quickly [ie after 0 seconds rather than ~5 minutes] after suspend to disk
than they did with 2.6.31. Would this apparent change in behaviour be due to
the above, or is it just something else that got upgraded in Debian testing
at the same time?

--
<http://ale.cx/> (AIM:troffasky) (UnSoEsNpEaTm(a)ale.cx)
23:22:18 up 11 days, 2:24, 5 users, load average: 1.79, 0.71, 0.43
DIMENSION-CONTROLLING FORT DOH HAS NOW BEEN DEMOLISHED,
AND TIME STARTED FLOWING REVERSELY

From: Nix on
On 31 Jan 2010, alexd said:

>> Before 2.6.32, this was much worse: flusher threads were not per-device,
>> so a slow device could easily cause all the flusher threads to block
>> writing data to it, whereupon writes to much *faster* devices, e.g.
>> disks, would freeze as well. Often even *reads* would stall and the
>> whole system would appear to hang up because everything was blocked
>> either on swap reads or writes or on normal reads
>
> I noticed with 2.6.32 clients that NFS mounts come back to life much more
> quickly [ie after 0 seconds rather than ~5 minutes] after suspend to disk
> than they did with 2.6.31. Would this apparent change in behaviour be due to
> the above, or is it just something else that got upgraded in Debian testing
> at the same time?

It's something else. I've always had rapidly-returning NFS mounts in
2.6.31 after suspend to disk. (Possibly an NFS change, more likely a
suspend/resume change: that's a rapidly-evolving area).

(Note that in 2.6.30 and 2.6.31 there was an e1000e bug that could lead
to throttling of *all* traffic on more-than-lightly-loaded gigabit
links... but that would have led to the NFS mounts going *away* and not
coming back.)