From: Heinz Diehl on
On 26.07.2010, Christoph Hellwig wrote:

> Just curious, what numbers do you see when simply using the deadline
> I/O scheduler? That's what we recommend for use with XFS anyway.
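
Switching the scheduler per device goes via sysfs (or the elevator= boot
parameter); a minimal sketch, with sdb standing in for the actual test disk:

  echo deadline > /sys/block/sdb/queue/scheduler
  cat /sys/block/sdb/queue/scheduler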

Some fs_mark testing first:

Deadline, 1 thread:

# ./fs_mark -S 1 -D 10000 -N 100000 -d /home/htd/fsmark/test -s 65536 -t 1 -w 4096 -F

FSUse% Count Size Files/sec App Overhead
26 1000 65536 227.7 39998
26 2000 65536 229.2 39309
26 3000 65536 236.4 40232
26 4000 65536 231.1 39294
26 5000 65536 233.4 39728
26 6000 65536 234.2 39719
26 7000 65536 227.9 39463
26 8000 65536 239.0 39477
26 9000 65536 233.1 39563
26 10000 65536 233.1 39878
26 11000 65536 233.2 39560

Deadline, 4 threads:

# ./fs_mark -S 1 -D 10000 -N 100000 -d /home/htd/fsmark/test -s 65536 -t 4 -w 4096 -F

FSUse% Count Size Files/sec App Overhead
26 4000 65536 465.6 148470
26 8000 65536 398.6 152827
26 12000 65536 472.7 147235
26 16000 65536 477.0 149344
27 20000 65536 489.7 148055
27 24000 65536 444.3 152806
27 28000 65536 515.5 144821
27 32000 65536 501.0 146561
27 36000 65536 456.8 150124
27 40000 65536 427.8 148830
27 44000 65536 489.6 149843
27 48000 65536 467.8 147501


CFQ, 1 thread:

# ./fs_mark -S 1 -D 10000 -N 100000 -d /home/htd/fsmark/test -s 65536 -t 1 -w 4096 -F

FSUse% Count Size Files/sec App Overhead
27 1000 65536 439.3 30158
27 2000 65536 457.7 30274
27 3000 65536 432.0 30572
27 4000 65536 413.9 29641
27 5000 65536 410.4 30289
27 6000 65536 458.5 29861
27 7000 65536 441.1 30268
27 8000 65536 459.3 28900
27 9000 65536 420.1 30439
27 10000 65536 426.1 30628
27 11000 65536 479.7 30058

CFQ, 4 threads:

# ./fs_mark -S 1 -D 10000 -N 100000 -d /home/htd/fsmark/test -s 65536 -t 4 -w 4096 -F

FSUse% Count Size Files/sec App Overhead
27 4000 65536 540.7 149177
27 8000 65536 469.6 147957
27 12000 65536 507.6 149185
27 16000 65536 460.0 145953
28 20000 65536 534.3 151936
28 24000 65536 542.1 147083
28 28000 65536 516.0 149363
28 32000 65536 534.3 148655
28 36000 65536 511.1 146989
28 40000 65536 499.9 147884
28 44000 65536 514.3 147846
28 48000 65536 467.1 148099
28 52000 65536 454.7 149052


Here are the fsync-tester results, with

"while :; do time sh -c 'dd if=/dev/zero of=bigfile bs=8M count=256; sync; rm bigfile'; done"

running in the background on the root fs and fsync-tester running on /home.
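
fsync-tester itself is essentially a small program that repeatedly writes a
buffer to a file and times only the fsync() that follows; a rough shell
stand-in (which also counts the cheap buffered write in the reported time)
would be:

  while :; do
      time dd if=/dev/zero of=testfile bs=1M count=1 conv=fsync 2>/dev/null
  done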

Deadline:

liesel:~/test # ./fsync-tester
fsync time: 7.7866
fsync time: 9.5638
fsync time: 5.8163
fsync time: 5.5412
fsync time: 5.2630
fsync time: 8.6688
fsync time: 3.9947
fsync time: 5.4753
fsync time: 14.7666
fsync time: 4.0060
fsync time: 3.9231
fsync time: 4.0635
fsync time: 1.6129
^C

CFQ:

liesel:~/test # ./fsync-tester
fsync time: 0.2457
fsync time: 0.3045
fsync time: 0.1980
fsync time: 0.2011
fsync time: 0.1941
fsync time: 0.2580
fsync time: 0.2041
fsync time: 0.2671
fsync time: 0.0320
fsync time: 0.2372
^C

The same setup here, running both the "bigfile torture test" and
fsync-tester on /home:

Deadline:

htd@liesel:~/fs> ./fsync-tester
fsync time: 11.0455
fsync time: 18.3555
fsync time: 6.8022
fsync time: 14.2020
fsync time: 9.4786
fsync time: 10.3002
fsync time: 7.2607
fsync time: 8.2169
fsync time: 3.7805
fsync time: 7.0325
fsync time: 12.0827
^C


CFQ:

htd@liesel:~/fs> ./fsync-tester
fsync time: 13.1126
fsync time: 4.9432
fsync time: 4.7833
fsync time: 0.2117
fsync time: 0.0167
fsync time: 14.6472
fsync time: 10.7527
fsync time: 4.3230
fsync time: 0.0151
fsync time: 15.1668
fsync time: 10.7662
fsync time: 0.1670
fsync time: 0.0156
^C

All partitions are XFS formatted using

mkfs.xfs -f -l lazy-count=1,version=2 -i attr=2 -d agcount=4

and mounted with these options:

(rw,noatime,logbsize=256k,logbufs=2,nobarrier)
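
Written out in full, that would be roughly the following, with /dev/sdb1 and
/home only as placeholders for the actual device and mountpoint:

  mkfs.xfs -f -l lazy-count=1,version=2 -i attr=2 -d agcount=4 /dev/sdb1
  mount -o rw,noatime,logbsize=256k,logbufs=2,nobarrier /dev/sdb1 /home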

Kernel is 2.6.35-rc6.


Thanks, Heinz.

From: Vivek Goyal on
On Sat, Jul 24, 2010 at 10:06:13AM +0200, Heinz Diehl wrote:
> On 23.07.2010, Vivek Goyal wrote:
>
> > Anyway, for fs_mark problem, can you give following patch a try.
> > https://patchwork.kernel.org/patch/113061/
>
> Ported it to 2.6.35-rc6, and these are my results using the same fs_mark
> call as before:
>
> slice_idle = 0
>
> FSUse% Count Size Files/sec App Overhead
> 28 1000 65536 241.6 39574
> 28 2000 65536 231.1 39939
> 28 3000 65536 230.4 39722
> 28 4000 65536 243.2 39646
> 28 5000 65536 227.0 39892
> 28 6000 65536 224.1 39555
> 28 7000 65536 228.2 39761
> 28 8000 65536 235.3 39766
> 28 9000 65536 237.3 40518
> 28 10000 65536 225.7 39861
> 28 11000 65536 227.2 39441
>
>
> slice_idle = 8
>
> FSUse% Count Size Files/sec App Overhead
> 28 1000 65536 502.2 30545
> 28 2000 65536 407.6 29406
> 28 3000 65536 381.8 30152
> 28 4000 65536 438.1 30038
> 28 5000 65536 447.5 30477
> 28 6000 65536 422.0 29610
> 28 7000 65536 383.1 30327
> 28 8000 65536 415.3 30102
> 28 9000 65536 397.6 31013
> 28 10000 65536 401.4 29201
> 28 11000 65536 408.8 29720
> 28 12000 65536 391.2 29157
>
> Huh... there's quite a difference! It's definitely the slice_idle setting
> that affects the results here.
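>
> (slice_idle is CFQ's idling knob under /sys/block/<device>/queue/iosched/
> and can be changed at runtime, e.g. with sdb as a placeholder device:
>
>   echo 0 > /sys/block/sdb/queue/iosched/slice_idle
>
> and back to the CFQ default of 8 the same way.)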


> Besides, this patch gives noticeably worse desktop interactivity on my system.

Heinz,

I also ran the Linus torture test and fsync-tester on an ext3 filesystem on
my SATA disk, and with Corrado's fsync patch applied I in fact see better
results.

2.6.35-rc6 kernel
=================
fsync time: 1.2109
fsync time: 2.7531
fsync time: 1.3770
fsync time: 2.0839
fsync time: 1.4243
fsync time: 1.3211
fsync time: 1.1672
fsync time: 2.8345
fsync time: 1.4798
fsync time: 0.0170
fsync time: 0.0199
fsync time: 0.0204
fsync time: 0.2794
fsync time: 1.3525
fsync time: 2.2679
fsync time: 1.4629
fsync time: 1.5234
fsync time: 1.5693
fsync time: 1.7263
fsync time: 3.5739
fsync time: 1.4114
fsync time: 1.5517
fsync time: 1.5675
fsync time: 1.3818
fsync time: 1.8127
fsync time: 1.6394

2.6.35-rc6-fsync
================
fsync time: 3.8638
fsync time: 0.1209
fsync time: 2.3390
fsync time: 3.1501
fsync time: 0.1348
fsync time: 0.0879
fsync time: 1.0642
fsync time: 0.2153
fsync time: 0.1166
fsync time: 0.2744
fsync time: 0.1227
fsync time: 0.2072
fsync time: 0.0666
fsync time: 0.1818
fsync time: 0.2170
fsync time: 0.1814
fsync time: 0.0501
fsync time: 0.0198
fsync time: 0.1950
fsync time: 0.2099
fsync time: 0.0877
fsync time: 0.8291
fsync time: 0.0821
fsync time: 0.0777
fsync time: 0.0258
fsync time: 0.0574
fsync time: 0.1152
fsync time: 1.1466
fsync time: 0.2349
fsync time: 0.9589
fsync time: 1.1013
fsync time: 0.1681
fsync time: 0.0902
fsync time: 0.2052
fsync time: 0.0673

I also did "time firefox &" testing to see how long firefox takes to
launch while the Linus torture test is running: without the patch it took
around 20 seconds, and with the patch around 17 seconds.

So to me the above test results suggest that this patch does not worsen
performance; in fact it helps, at least on ext3.

Not sure why you are seeing different results with XFS.

Thanks
Vivek
From: Christoph Hellwig on
On Wed, Jul 28, 2010 at 04:22:12PM -0400, Vivek Goyal wrote:
> I also did "time firefox &" testing to see how long firefox takes to
> launch while the Linus torture test is running: without the patch it took
> around 20 seconds, and with the patch around 17 seconds.
>
> So to me the above test results suggest that this patch does not worsen
> performance; in fact it helps, at least on ext3.
>
> Not sure why you are seeing different results with XFS.

So why didn't you test it with XFS to verify his results? We all know
that different filesystems have different I/O patterns, and we have a
history of really nasty regressions in one filesystem caused by well-meaning
changes to the I/O scheduler.

ext3 is in fact a particularly bad test case, as it not only doesn't have
I/O barriers enabled by default, but also has particularly bad I/O patterns
compared to modern filesystems.
