From: Thomas Gleixner on
On Sun, 11 Apr 2010, Avi Kivity wrote:

> On 04/09/2010 05:56 PM, Ben Gamari wrote:
> > On Mon, 29 Mar 2010 00:08:58 +0200, Andi Kleen<andi(a)firstfloor.org> wrote:
> >
> > > Ben Gamari<bgamari.foss(a)gmail.com> writes:
> > > ext4/XFS/JFS/btrfs should be better in this regard
> > >
> > >
> > I am using btrfs, so yes, I was expecting things to be better.
> > Unfortunately,
> > the improvement seems to be non-existent under high IO/fsync load.
> >
> >
>
> btrfs is known to perform poorly under fsync.

XFS does not do much better. Just moved my VM images back to ext for
that reason.

Thanks,

tglx
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo(a)vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
From: Andi Kleen on
> XFS does not do much better. Just moved my VM images back to ext for
> that reason.

Did you move from XFS to ext3? ext3 defaults to barriers off, XFS on,
which can make a big difference depending on the disk. You can
disable them on XFS too of course, with the known drawbacks.

XFS also typically needs some tuning to get reasonable log sizes.

My point was merely (before people chime in with counter examples)
that XFS/btrfs/jfs don't suffer from the "need to sync all transactions for
every fsync" issue. There can (and will) still be other issues.
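The "sync all transactions for every fsync" effect is easy to observe with a small probe run on each filesystem while background writes are in flight. A minimal sketch (the helper name `fsync_latency` and the 4 KiB block size are my own choices, not from the thread):

```python
import os
import tempfile
import time

def fsync_latency(directory=None, size=4096):
    """Write one block to a temp file, fsync it, and return elapsed ms.

    On filesystems that must flush unrelated pending transactions on
    every fsync (e.g. ext3 data=ordered), this number balloons under
    concurrent write load; on others it stays closer to device latency.
    """
    fd, tmp = tempfile.mkstemp(dir=directory)
    try:
        t0 = time.monotonic()
        os.write(fd, b"x" * size)
        os.fsync(fd)
        t1 = time.monotonic()
    finally:
        os.close(fd)
        os.unlink(tmp)
    return (t1 - t0) * 1000.0
```

Running it once per second on the filesystem under test, with and without a heavy streaming writer in the background, makes the difference between the journaling strategies visible.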

-Andi

--
ak(a)linux.intel.com -- Speaking for myself only.
From: Thomas Gleixner on
On Sun, 11 Apr 2010, Andi Kleen wrote:

> > XFS does not do much better. Just moved my VM images back to ext for
> > that reason.
>
> Did you move from XFS to ext3? ext3 defaults to barriers off, XFS on,
> which can make a big difference depending on the disk. You can
> disable them on XFS too of course, with the known drawbacks.
>
> XFS also typically needs some tuning to get reasonable log sizes.
>
> My point was merely (before people chime in with counter examples)
> that XFS/btrfs/jfs don't suffer from the "need to sync all transactions for
> every fsync" issue. There can (and will) still be other issues.

Yes, I moved them back from XFS to ext3 simply because moving them
from ext3 to XFS turned out to be a completely unusable disaster.

I know that I can tweak knobs on XFS (or any other file system), but I
would not have expected that it sucks that much for KVM with the
default settings which are perfectly fine for the other use cases
which made us move to XFS.

Thanks,

tglx


From: Hans-Peter Jansen on
On Sunday 11 April 2010, 23:54:34 Thomas Gleixner wrote:
> On Sun, 11 Apr 2010, Andi Kleen wrote:
> > > XFS does not do much better. Just moved my VM images back to ext for
> > > that reason.
> >
> > Did you move from XFS to ext3? ext3 defaults to barriers off, XFS on,
> > which can make a big difference depending on the disk. You can
> > disable them on XFS too of course, with the known drawbacks.
> >
> > XFS also typically needs some tuning to get reasonable log sizes.
> >
> > My point was merely (before people chime in with counter examples)
> > that XFS/btrfs/jfs don't suffer from the "need to sync all transactions
> > for every fsync" issue. There can (and will) still be other issues.
>
> Yes, I moved them back from XFS to ext3 simply because moving them
> from ext3 to XFS turned out to be a completely unusable disaster.
>
> I know that I can tweak knobs on XFS (or any other file system), but I
> would not have expected that it sucks that much for KVM with the
> default settings which are perfectly fine for the other use cases
> which made us move to XFS.

Thomas, what Andi was merely pointing out is that xfs has a notably
different default: barriers are on, which hurts with fsync().

In order to make a fair comparison of the two, you may want to mount xfs
with nobarrier, or ext3 with the barrier option set, and _then_ check which
one sucks less.
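For reference, the relevant mount knobs for such a like-for-like comparison would look roughly like this (device and mount point names are placeholders):

```
# XFS with write barriers disabled -- matches ext3's historical default,
# but risks data loss on power failure with volatile disk write caches
mount -o nobarrier /dev/sdb1 /mnt/xfs

# ext3 with barriers explicitly enabled -- matches XFS's default safety level
mount -o barrier=1 /dev/sdb2 /mnt/ext3
```

Either pairing removes the barrier difference from the comparison, so the remaining gap reflects the filesystems' fsync behaviour rather than their defaults.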

I guess that outcome will be interesting for quite a bunch of people in the
audience (including me¹).

Pete

¹) while in the transition of getting rid of even suckier technology junk like
VMware-Server - but digging out a current², but _stable_ kernel release
seems harder than ever nowadays.
²) with operational VT-d support for kvm
From: Dave Chinner on
On Sun, Apr 11, 2010 at 08:16:09PM +0200, Thomas Gleixner wrote:
> On Sun, 11 Apr 2010, Avi Kivity wrote:
> > On 04/09/2010 05:56 PM, Ben Gamari wrote:
> > > On Mon, 29 Mar 2010 00:08:58 +0200, Andi Kleen<andi(a)firstfloor.org> wrote:
> > > > Ben Gamari<bgamari.foss(a)gmail.com> writes:
> > > > ext4/XFS/JFS/btrfs should be better in this regard
> > > >
> > > I am using btrfs, so yes, I was expecting things to be better.
> > > Unfortunately,
> > > the improvement seems to be non-existent under high IO/fsync load.
> >
> > btrfs is known to perform poorly under fsync.
>
> XFS does not do much better. Just moved my VM images back to ext for
> that reason.

Numbers? Workload description? Mount options? I hate it when all I
hear is "XFS sucked, so I went back to extN" reports without any
more detail - it's hard to improve anything without specifics
of the problem.

Also worth remembering is that XFS defaults to slow-but-safe
options, but ext3 defaults to fast-and-I-don't-give-a-damn-about-
data-safety, so there's a world of difference between the
filesystem defaults....

And FWIW, I run all my VMs on XFS using default mkfs and mount options,
and I can't say that I've noticed any performance problems at all
despite hammering the IO subsystems all the time. The only thing
I've ever done is occasionally run xfs_fsr across permanent qcow2
VM images to defrag them as they grow slowly over time...
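For anyone wanting to do the same, the commands involved are roughly as follows (the image path is an example):

```
# report the extent layout / fragmentation of one image file
xfs_bmap -v /vm/images/guest.qcow2

# defragment that single file in place, verbosely
xfs_fsr -v /vm/images/guest.qcow2
```

xfs_fsr operates on mounted filesystems and individual files, so the guest does not need to be copied off the volume, though defragmenting an image while its VM is running is best avoided.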

Cheers,

Dave.
--
Dave Chinner
david(a)fromorbit.com