Re. Java's scheduling of objects waiting on a monitor
In his posting on the synchronising/waiting behaviour of Java threads,
Oyvind wrote:
> However, I have some comments. Per Brinch Hansen writes:
>
> All processes waiting for one condition or another on variable `v' enter
> the same event queue. When another process (here called the producer)
> changes `v' by a statement S2 inside a critical region, it is possible
> that one or more of the conditions expected by processes in the event
> queue will be satisfied. So, after completion of a critical region,
> all processes in the event queue, Qe, are transferred to the main queue,
> Qv, to enable them to reenter their critical regions and inspect the
> shared variable v again.
>
> [Structured Multiprogramming, Comm. of ACM, July 1972, p.576]
>
> Does this describe Java's waiting on a monitor: one queue for the object's
> monitor, and one ready-queue of threads to run? I imagine that the situation
> Brinch Hansen describes is notifyAll, since all are moved off Qe (please
> correct me here). Where lies the Java problem? Is it in the monitor queue
> administration (queue, stack, combination?) or is it when the monitor queue
> is transferred to the ready-queue (linear or swap or combination)?
Qv is the queue for the object's monitor (the object, in this case, being
the shared variable `v'). Qe is the queue of threads that have previously
acquired the monitor, been unable to proceed (because some necessary
`condition' does not yet hold for them) and have been forced to wait for
the `event' (the establishing of that `condition' by some other thread).
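In Java terms, here is a minimal sketch of that correspondence (the class
and method names are mine, not anybody's API): Qv is the set of threads
blocked trying to enter the object's `synchronized' methods and Qe is the
object's wait set -- the threads that have called `wait' inside the monitor.

  class V {                      // the shared variable `v' as a Java object

    private int v = 0;

    // Callers blocked trying to enter either synchronized method form Qv.

    public synchronized void awaitNonZero() throws InterruptedException {
      while (v == 0) {
        wait();                  // give up the monitor and join Qe
      }                          // when notified, rejoin Qv and re-test
    }

    public synchronized void set(int n) {  // S2: the producer's critical region
      v = n;
      notifyAll();               // transfer everyone in Qe back towards Qv
    }
  }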
But there is a key point left undefined in the quotation from Brinch Hansen.
When he says: "all processes in the event queue, Qe, are transferred to
the main queue" ... how are they attached to that main queue? I don't
know whether he clarifies that later?
After a `notifyAll', Sun's JVMs currently attach them to the *back* of
that main queue -- which is the logical thing to do when adding things to
a queue. But it can lead to infinite starvation and is plainly `unfair'
(since those threads have already been through that main queue once!).
A fairer policy would be for the queue of notified threads, Qe, to barge
into the *front* of the main queue, Qv. Even that doesn't guarantee against
infinite starvation though ... unless we abandon `notifyAll' and only make
single `notify' calls, each time the producer (in Brinch Hansen's
terminology) sets up the condition for which a thread is waiting. Or
unless we fix things, as we do in the implementation of JavaPP channels,
so that there can never be more than *one* thread in the waiting queue, Qe.
> Hoare, in his seminal paper states that:
>
> "If more than one program is waiting on a condition, we postulate that
> the signal operation will reactivate the longest waiting program.
> This gives a simple neutral queuing discipline which ensures that every
> program will eventually get its turn".
>
> [Monitors: An Operating System Structuring Concept, Comm. of ACM,
> 549-557, 1974]
>
> Isn't this postulate a requirement?
Yes! And Java monitors do *not* fulfil it. Most of Hoare's examples in that
paper do not work if that requirement is not honoured. Java monitors are
significantly different from Hoare monitors in this (and several other)
respects.
When working with raw Java monitors, we cannot look to the algorithmic
guidelines/tricks/theory that were developed for Hoare's monitors. New
ones have to be found. The literature (e.g. Arnold and Gosling's "The Java
Programming Language" and Doug Lea's "Concurrent Programming in Java")
comes up with the following:
1. Never guard a `wait' with a simple `if' on the wait-condition. Just
because you are notified by someone who set up the condition for you,
doesn't mean it's still there when you wake up! Always `wait' within
a while-loop that tests the wait-condition -- it must be re-tested
whenever you wake up in case it's gone wrong again.
2. Don't bother with calling `notify' -- we might as well always call
   `notifyAll'. Given rule 1 above, this is always perfectly safe (both
   rules are illustrated in the sketch below).
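Here is a minimal sketch of both rules together -- a one-slot buffer, with
names of my own choosing, not taken from either book. Every `wait' sits
inside a while-loop that re-tests its condition and every change of state
is advertised with `notifyAll':

  class Slot {

    private Object value = null;

    public synchronized void put(Object o) throws InterruptedException {
      while (value != null) {    // rule 1: `while', never `if' ...
        wait();                  // ... the slot may have been refilled
      }                          // by someone else before we wake up
      value = o;
      notifyAll();               // rule 2: always `notifyAll'
    }

    public synchronized Object get() throws InterruptedException {
      while (value == null) {    // rule 1 again
        wait();
      }
      Object o = value;
      value = null;
      notifyAll();               // wrongly woken threads re-test and re-wait
      return o;
    }
  }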
The literature is right, given Java's monitor semantics. Rule 1 *has*
to be followed and rule 2 is a consequence. However, while these rules
guarantee that a monitor call will never proceed with an incorrect state
in the monitor, they don't actually guarantee that any individual monitor
call will *ever* proceed (i.e. get out of the rule 1 while-loop), no matter
how often it is notified ... as the "wot, no chickens?" example shows.
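For reference, a stripped-down sketch of the trap (a hypothetical canteen,
in the spirit of that example): a hungry thread can be notified, rejoin the
*back* of the main queue and find that a later arrival has already eaten
the chicken -- every time, forever.

  class Canteen {

    private int chickens = 0;

    public synchronized void getChicken() throws InterruptedException {
      while (chickens == 0) {
        wait();                  // notified, but we rejoin the *back* of Qv ...
      }                          // ... behind later arrivals that never waited
      chickens--;                // a later arrival may keep beating us to here
    }

    public synchronized void putChickens(int n) {
      chickens += n;
      notifyAll();
    }
  }

The code obeys rules 1 and 2, so the monitor state is always correct; it is
an individual caller of getChicken that may starve.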
JavaPP channels are implemented with a different design pattern from the
above. The `wait' *is* guarded by a simple if-test and we only ever
`notify'. This is perfectly safe, since we can guarantee that there will
only ever be *one* waiting thread. So, there is *no* danger of that
livelocking starvation striking JavaPP channels. But there remains a danger
of passive starvation if the JVM *stacks* those initial synchronisations --
i.e. we don't have a main queue, Qv, but a main stack, Sv! So, don't use
JVMs that use stacks if you are working in a real-time environment --
unless you have lots of idle time guaranteed.
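For illustration, here is a sketch of that pattern (my own code, not the
actual JavaPP source): a one-to-one channel connecting a single writer
thread to a single reader thread, so that at most one of the two can ever
be in the wait set and every `notify' hits exactly the thread intended.

  class One2OneChannel {

    private Object hold;
    private boolean empty = true;

    public synchronized void write(Object o) throws InterruptedException {
      hold = o;
      empty = false;
      notify();                  // wake the reader, if it was waiting
      wait();                    // always wait for the reader to take the data
    }

    public synchronized Object read() throws InterruptedException {
      if (empty) {
        wait();                  // simple if-guard: only the writer can notify us
      }
      Object o = hold;
      empty = true;
      notify();                  // release the writer
      return o;
    }
  }

The unconditional `wait' in write gives the synchronised (rendezvous)
communication of CSP/occam channels. With one reader and one writer, the
channel itself enforces the one-waiter invariant; what remains is only the
queue-versus-stack worry above, i.e. how the JVM orders threads blocked on
entry to the synchronized methods.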
Using JavaPP channels with a queueing (or `weakly' queueing) JVM, we are
safe from both passive and livelocking starvation. The user never has
to worry about any design rules for monitors. The user *does* have to
worry about deadlock though -- but that's another story and one for which
we have a decent answer (thanks to 20 years' work on CSP and occam).
Peter Welch.