Re: Occam vs. monitor
Oyvind,
> Does this hold water:
>
> Occam's communication language primitives cannot,
> like monitors, deadlock by erroneous usage. In this
No! Erroneous channel usage certainly causes deadlock. That's the only
way deadlock occurs in occam systems. Example:
  PAR
    SEQ
      c.0 ! x.0
      c.1 ! x.1
      ...  etc.
    SEQ
      c.1 ? y.0     -- listen in the
      c.0 ? y.1     -- wrong order
      ...  etc.
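For the java-threads readers: the same wrong-order error is just as easy to
write with Java monitors. This is an illustrative sketch of my own (the lock
objects, timings and class name are invented, not from any library). The
sleeps force each thread to take its first lock before trying the second, so
the run deadlocks deterministically and the main thread detects it with a
timed join:

```java
// Two threads acquire the monitors c0 and c1 in opposite orders --
// the Java analogue of listening on the channels in the wrong order.
public class WrongOrder {

    static final Object c0 = new Object();
    static final Object c1 = new Object();

    // Lock `first`, pause so the other thread can take its own first
    // monitor, then try `second` -- which the other thread now holds.
    static Thread locker(Object first, Object second) {
        Thread t = new Thread(() -> {
            synchronized (first) {
                try { Thread.sleep(200); } catch (InterruptedException e) { }
                synchronized (second) {
                    // never reached: both threads block on their second lock
                }
            }
        });
        t.setDaemon(true);          // let the JVM exit despite the deadlock
        t.start();
        return t;
    }

    // Returns true if both threads are still blocked after a generous wait.
    static boolean demo() {
        Thread a = locker(c0, c1);
        Thread b = locker(c1, c0);
        try {
            a.join(1000);
            b.join(1000);
        } catch (InterruptedException e) {
            return false;
        }
        return a.isAlive() && b.isAlive();   // deadlocked?
    }

    public static void main(String[] args) {
        System.out.println(demo() ? "deadlocked" : "completed");
    }
}
```

Run it and, after a couple of seconds, it reports "deadlocked" -- the same
design error, but with no SEQ/PAR structure on the page to make the wrong
ordering visible.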
Where occam scores over monitors is in the clarity with which the design
errors that cause deadlock can be faced and overcome. occam
even provides a deadlock primitive (STOP). "Know your enemy" is the key
and occam lets you see him. There are 20 years of CSP theorems and, now,
design methods and tools that provide automatic guarantees against deadlock
in occam (or JavaPP) designs.
For monitors, there is tight coupling between the various monitor methods,
which prevents us from using compositional reasoning in their analysis.
With mutually referencing monitors, there is a real mess. I know! Take
a look at the JavaPP implementation of ALTing:
http://www.hensa.ac.uk/parallel/groups/wotug/java/discussion/4.html
We have mutual references between the Alternative and Channel monitors.
The reason why the Channel.disable method is *not* synchronized is that
it avoids a race hazard that might otherwise cause deadlock. But it's
*really* subtle. Even now, I'm not totally sure of its water-tightness.
I'd really like someone to prove that these monitors are deadlock free,
but where's the theory for doing this?
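To make the lock cycle concrete for anyone who hasn't read the JavaPP code:
here is a toy sketch of two mutually referencing monitors. The Alt and Chan
classes and their methods are invented stand-ins for illustration, *not* the
real Alternative and Channel. Each synchronized method calls a synchronized
method of the other object, so a thread entering via Alt and a thread
entering via Chan can each hold one monitor while waiting for the other:

```java
// Two mutually referencing monitors: Alt.poll calls Chan.status while
// holding Alt's lock, and Chan.write calls Alt.wake while holding
// Chan's lock. Run concurrently, the lock acquisitions form a cycle.
public class MutualMonitors {

    static class Alt {
        Chan chan;
        synchronized void poll() {
            pause();                   // let the writer get into Chan first
            chan.status();             // needs Chan's lock -- may be held
        }
        synchronized void wake() { }   // called by Chan holding Chan's lock
    }

    static class Chan {
        Alt alt;
        synchronized void write() {
            pause();                   // let the reader get into Alt first
            alt.wake();                // needs Alt's lock -- may be held
        }
        synchronized void status() { } // called by Alt holding Alt's lock
    }

    static void pause() {
        try { Thread.sleep(200); } catch (InterruptedException e) { }
    }

    static Thread start(Runnable r) {
        Thread t = new Thread(r);
        t.setDaemon(true);             // let the JVM exit despite the deadlock
        t.start();
        return t;
    }

    // Returns true if the reader and writer end up mutually blocked.
    static boolean demo() {
        Alt alt = new Alt();
        Chan chan = new Chan();
        alt.chan = chan;
        chan.alt = alt;
        Thread reader = start(alt::poll);
        Thread writer = start(chan::write);
        try {
            reader.join(1000);
            writer.join(1000);
        } catch (InterruptedException e) {
            return false;
        }
        return reader.isAlive() && writer.isAlive();   // deadlocked?
    }

    public static void main(String[] args) {
        System.out.println(demo() ? "deadlocked" : "completed");
    }
}
```

Making either cross-call happen *outside* its own monitor breaks the cycle --
that is exactly why Channel.disable is not synchronized. But then you have to
argue that the unsynchronized access is safe, which is precisely the subtlety
I worry about above.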
Cheers,
Peter.
cc: occam-com, java-threads (because someone out there may know ...)