From: Srinu on
Hi,

I would like to share one of my practical experiences with
multiprogramming here.

Yesterday I wrote a multiprogram. Modifications to shared resources
were placed in critical sections protected by P(mutex) and V(mutex),
and that critical-section code was put in a common library. The
library is used by concurrent applications (of my own).

I had three applications that use the common code from the library
and do their work independently.

my library
---------
work_on_shared_resource
{
P(mutex)
get_shared_resource
work_with_it
V(mutex)
}

my applications
-----------
application1
{
*[
work_on_shared_resource
do_something_else_non_critical
]
}


application2
{
*[
work_on_shared_resource
do_something_else_non_critical
]
}


application3
{
*[
work_on_shared_resource
]
}

*[...] denotes a loop.
------------
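
For concreteness, here is roughly what this looks like in C with
pthreads (a sketch only; the three applications are modeled as
threads of one process for brevity, and all names are illustrative):

pthreads sketch
---------
#include <pthread.h>
#include <unistd.h>

static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
static long shared_counter = 0;           /* the shared resource */

static void work_on_shared_resource(void)
{
    pthread_mutex_lock(&mutex);           /* P(mutex) */
    shared_counter++;                     /* get_shared_resource, work_with_it */
    pthread_mutex_unlock(&mutex);         /* V(mutex) */
}

static void *busy_app(void *arg)          /* like application1 and application2 */
{
    for (;;) {
        work_on_shared_resource();
        usleep(1000);                     /* do_something_else_non_critical */
    }
    return NULL;
}

static void *greedy_app(void *arg)        /* like application3: no non-critical work */
{
    for (;;)
        work_on_shared_resource();
    return NULL;
}

int main(void)
{
    pthread_t t1, t2, t3;
    pthread_create(&t1, NULL, busy_app, NULL);
    pthread_create(&t2, NULL, busy_app, NULL);
    pthread_create(&t3, NULL, greedy_app, NULL);
    pthread_join(t1, NULL);               /* loops never end; join blocks forever */
    return 0;
}
---------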


I ran the applications on Linux. For years I had held the belief
that the OS schedules all processes running under it with complete
fairness; in other words, that it gives every process an equal slice
of resource usage.

When the first two applications were put to work, they ran perfectly
well without deadlock. But once the third application started
running, it got the resource almost every time: since it does nothing
in its non-critical region, it reacquires the shared resource while
the other tasks are still busy elsewhere. The other two applications
were found almost completely halted. When the third application was
forcefully terminated, the other two resumed their work as before.

I think this is a case of starvation: the first two applications had
to starve.

Now how can we ensure fairness?

Now I have started to believe that the OS scheduler is innocent and
blind. Whoever wins the race gets the largest share of CPU and
resources.

Shall we attempt to ensure fairness among resource users in the
critical-section code in the library?

Or shall we leave it up to the applications to ensure fairness by
being liberal, not greedy?

To my knowledge, adding fairness code to the common library would be
an overwhelming task. On the other hand, relying on the applications
will also never ensure 100% fairness. An application that does very
little work after using the shared resource will win the race,
whereas an application that does heavy processing after its work with
the shared resource will always starve.
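
For illustration, a "liberal" version of the third application might
look like this (again a sketch, reusing work_on_shared_resource from
above; note that sched_yield() is only a hint to the scheduler and
guarantees nothing about who acquires the mutex next):

#include <sched.h>

static void *polite_app(void *arg)        /* application3, being liberal */
{
    for (;;) {
        work_on_shared_resource();
        sched_yield();                    /* step aside so a waiter may run */
    }
    return NULL;
}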

What is the best practice in this case? Where do we ensure fairness,
and how?

Sincerely,
Srinivas Nayak
From: Tim Little on
On 2010-05-13, Srinu <sinu.nayak2001(a)gmail.com> wrote:
> I ran the applications on Linux. For years I had held the belief
> that the OS schedules all processes running under it with complete
> fairness.

In terms of CPU time, it does. However, the OS typically doesn't
arbitrate mutex acquisition. Mutexes are built on top of kernel
primitives by a userland library (e.g. pthreads).


> I think this is a case of starvation: the first two applications
> had to starve.

That looks correct.


> Now I have started to believe that the OS scheduler is innocent
> and blind. Whoever wins the race gets the largest share of CPU and
> resources.

Actually the OS scheduler has almost nothing to do with it. It's the
implementation of the mutex that determines who gets what. The OS
can't usefully allocate CPU time to App1 and App2 if they're blocked
on a mutex held by App3.


> Shall we attempt to ensure fairness among resource users in the
> critical-section code in the library?

If you need a guarantee of fairness you could use a FIFO mutex queue.
Then when App3 tries to reacquire the lock, it has to get in line
behind any other threads waiting on the lock. This can cause other
performance problems, such as excessive context switching if the "unit
of work" is too short, but would allow all contenders to progress at
least a little.
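
For example, a FIFO mutex can be built on top of pthreads as a
"ticket lock" (a sketch only; the broadcast wake-up below is exactly
the kind of context-switching cost mentioned above):

ticket lock sketch
---------
#include <pthread.h>

typedef struct {
    pthread_mutex_t m;
    pthread_cond_t  c;
    unsigned long   next_ticket;   /* next ticket to hand out */
    unsigned long   now_serving;   /* ticket admitted to the resource */
} fifo_mutex_t;

#define FIFO_MUTEX_INITIALIZER \
    { PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, 0, 0 }

static void fifo_lock(fifo_mutex_t *f)
{
    pthread_mutex_lock(&f->m);
    unsigned long my_ticket = f->next_ticket++;  /* join the back of the line */
    while (my_ticket != f->now_serving)
        pthread_cond_wait(&f->c, &f->m);
    pthread_mutex_unlock(&f->m);
}

static void fifo_unlock(fifo_mutex_t *f)
{
    pthread_mutex_lock(&f->m);
    f->now_serving++;                            /* admit the next ticket */
    pthread_cond_broadcast(&f->c);               /* wake waiters to re-check */
    pthread_mutex_unlock(&f->m);
}
---------

Replacing P(mutex)/V(mutex) in the library with fifo_lock/fifo_unlock
would force App3 to the back of the queue each time it releases.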


- Tim