From: Alan Cox on
> might have 4GB RAM and 1GB swap. I don't think you would expect
> Desktop users to understand or tweak overcommit_ratio, but I also
> don't think having the distro simply change the default from 50 (to
> 100 or something else) would cover all the cases well.

Sounds like the distribution should be tuning the value according to the
available memory.
>
> Would it make more sense to have the overcommit formula be calculated as:
>
> max commit = min(swap, ram) * overcommit_ratio + max(swap, ram) ?
>
> When swap>=ram, the formula works exactly the same as it does now, but
> when ram>>swap, you are guaranteed to always be able to use your full RAM
> (even when swap=0).

Which is wrong - some of your RAM ends up eaten by the kernel, by
pagetables and buffers etc. 50% is probably very conservative but the
point of VM overcommit is exactly that - and you end up deploying swap
as a precaution against disaster rather than because you need it.

From: Dave Wright on
Thanks for your reply, Alan.

>
> Sounds like the distribution should be tuning the value according to the [available memory?]

Yes, that's one approach, and I'll probably recommend it to some
folks. However, that would probably only be set at install time, and
there's no guarantee that the memory/swap amounts won't change later
(e.g. someone adds more RAM and then finds they can't "use" it because
the overcommit ratio is too low).
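(The install-time computation itself is trivial. Here's a rough sketch,
in plain C, of picking a ratio that keeps CommitLimit at least as large
as RAM under the current formula; the function name and the 50..100
clamp are purely illustrative, not anything the kernel provides.)

#include <stdio.h>

/*
 * Illustrative only: choose vm.overcommit_ratio so that, with the
 * current formula CommitLimit = swap + ram * ratio / 100, the limit
 * is at least as large as physical RAM.
 */
static long pick_ratio(long ram_mb, long swap_mb)
{
	long ratio;

	if (swap_mb >= ram_mb)
		return 50;	/* the current default already covers RAM */

	ratio = 100 * (ram_mb - swap_mb) / ram_mb;
	if (ratio < 50)
		ratio = 50;
	if (ratio > 100)
		ratio = 100;
	return ratio;
}

int main(void)
{
	printf("4096MB RAM, 1024MB swap -> ratio %ld\n", pick_ratio(4096, 1024));
	printf("2048MB RAM, 4096MB swap -> ratio %ld\n", pick_ratio(2048, 4096));
	return 0;
}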

>> max commit = min(swap, ram) * overcommit_ratio + max(swap, ram) ?
>>
>
> Which is wrong - some of your RAM ends up eaten by the kernel, by
> pagetables and buffers etc. 50% is probably very conservative but the
> point of VM overcommit is exactly that - and you end up deploying swap
> as a precaution against disaster rather than because you need it.
>

I actually think the current formula does the reverse - rather than
treating swap as an overrun area, it includes the full amount of swap
in the max commit and then adds only a percentage of main memory. I'm
not sure what the original motivation for that was - perhaps preventing
a page-file-backed mmap from exhausting physical memory as well?
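To make that concrete, here's a minimal standalone sketch (plain C,
units in MB, names illustrative) of the current calculation - roughly
CommitLimit = swap + ram * overcommit_ratio / 100; the real kernel
works in pages and also subtracts hugetlb pages:

#include <stdio.h>

/* Rough model of the overcommit_memory=2 limit, in MB. */
static long commit_limit(long ram_mb, long swap_mb, long ratio)
{
	return swap_mb + ram_mb * ratio / 100;
}

int main(void)
{
	/* The desktop case from the start of the thread: 4GB RAM, 1GB swap. */
	printf("4GB RAM, 1GB swap, ratio 50: %ld MB\n",
	       commit_limit(4096, 1024, 50));	/* 3072, i.e. less than RAM */
	/* The swap >= RAM case the default seems to assume. */
	printf("4GB RAM, 8GB swap, ratio 50: %ld MB\n",
	       commit_limit(4096, 8192, 50));	/* 10240 */
	return 0;
}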

Setting overcommit_ratio to 100 in the absence of swap probably isn't
a good idea; however, the default of 50 when there is less swap than
RAM is a problem.

I'm sure there will be resistance to any suggestion about changing the
calculation, since it works fine as long as you know about it and set
it properly for your situation, but I do think a more sensible default
can be found.
My first suggestion was above. Other possible options (compared in the
rough sketch after this list) include:
1. Just changing the default % from 50 to 90

2. max commit = (ram + swap) * overcommit_ratio
[with a default ratio of 90% or more]

3. max commit = ram + swap + overcommit_bytes
[overcommit_bytes is a fixed number of bytes, rather than a
percentage, and can be negative to increase safety or positive to
allow aggressive overcommit]
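
Here's a rough side-by-side (plain C sketch, MB units, illustrative
defaults only) of the current formula, my min/max suggestion, and
options 1-3 on the 4GB RAM / 1GB swap case:

#include <stdio.h>

static long min_l(long a, long b) { return a < b ? a : b; }
static long max_l(long a, long b) { return a > b ? a : b; }

int main(void)
{
	long ram = 4096, swap = 1024;	/* MB */

	/* today: swap + ram * 50% */
	printf("current (50%%):     %ld\n", swap + ram * 50 / 100);
	/* min/max suggestion: min(swap, ram) * 50% + max(swap, ram) */
	printf("min/max (50%%):     %ld\n",
	       min_l(swap, ram) * 50 / 100 + max_l(swap, ram));
	/* option 1: keep the formula, bump the default ratio to 90 */
	printf("option 1 (90%%):    %ld\n", swap + ram * 90 / 100);
	/* option 2: (ram + swap) * 90% */
	printf("option 2 (90%%):    %ld\n", (ram + swap) * 90 / 100);
	/* option 3: ram + swap + overcommit_bytes (say -256MB for safety) */
	printf("option 3 (-256MB): %ld\n", ram + swap - 256);
	return 0;
}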

Any of these options would increase the VM space (and thus usable RAM)
in scenarios where you have more RAM than swap. In scenarios where you
have more swap than RAM, they would allow more to be committed than the
current formula does, but you're well into swap at that point already,
so it's unlikely to hurt performance at all. Any of them could still be
manually tweaked to get a specific result, but the starting value would
make sense in a wider range of conditions.


-Dave Wright