
Re: Transputer/occam example of sliding choice?



Hi all,

I forgot to mention - the previous message was put out for critique. All you old hands at CSP, please correct me if I am on the wrong track.

The "straight sweep" that I conjecture is equivalent to the required lead-up to able-to-be-hidden events is, roughly, full-speed occupation of the computer, continually firing instructions and/or communicating bytes, because there is always something more to do and response is immediate in that state. It ends and blocks when there is nothing more to do (for a while). The not-able-to-be-hidden event MAY correspond to "nothing more to do for a while", and may show up sooner. Thus, from the popcorn-popper point of view, it may be pop . . . pop . . . pop or popopopopop. The quality design, which is the purpose of this whole exercise, can deal with both cases equivalently, up to the limit of overflow.

Another question for Bill: In the example of sliding choice which you gave, does c!v get orphaned if d?y wins the ALT? Does it have to be satisfied by a c?x in the d?y branch of the ALT, or does it just go away after a while, or does something else clean it up, or is that an implementation detail?

Larry

On Jan 14, 2019, at 8:33 PM, Larry Dickson <tjoccam@xxxxxxxxxxx> wrote:

Thank you, Bill! I think I am onto it: the problem is what is hiding. References are Roscoe, The Theory and Practice of Concurrency, and Martin, The Design and Construction of Deadlock-Free Concurrent Systems. The key is the failures rule for hiding, which (assuming divergence freedom) is failures(P\X) = {(s\X, Y) | (s, Y U X) in failures(P)} (Roscoe p 199) or, for a single hidden event, failures(P\x) = {(s |\ (Sigma - {x}), X) | (s, X U {x}) in failures(P)} (Martin p 18). Repeated applications of Martin's rule give Roscoe's rule. In either case, multiple different traces of P collapse to a single trace of P\X by "collapsing" states that are connected by members of X, and refusals only count if the corresponding refusals in P contain every member of X. It is clear that this is needed to make the algebra work, and we just hope it corresponds to something in real programs - a point not clearly addressed in the references.

The notion that they are the refusals of a stable state doesn't seem quite right. (The Roscoe p 94 description of case P2 does not correspond to the picture on p 93, by the way, because hiding should have eliminated c from the refusal sets.) Applying the above rule to ((c->a->STOP)[](b->STOP))\c starts with a maximal refusal of {a} on the top node before hiding, so the rule looks only at the node after the c, and you get the sliding choice. However, applying the above rule to ((c->a->STOP)|~|(b->STOP))\c, before the hiding the top node has maximal refusals {a,b} and {a,c}; the latter does make a contribution, as does the node after the c, so you get maximal refusals {a} and {b} after the hiding.
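As a sanity check, the rule can be applied mechanically to finite failures sets. The sketch below is my own (not from either reference): it keeps only refusal sets that contain all of X, leaves subset-closure of refusals implicit, and lists only maximal refusals for the two processes above; the names hide, ext and int_ are my inventions.

```python
# Sketch of Roscoe's hiding rule on finite failures sets:
#   failures(P\X) = { (s\X, Y) | (s, Y U X) in failures(P) }
# Only pairs whose refusal contains every member of X contribute;
# subset-closure of refusals is left implicit.

def hide(failures, X):
    X = frozenset(X)
    return {
        (tuple(e for e in s if e not in X), Y)  # restrict trace to Sigma - X
        for (s, Y) in failures
        if X <= Y   # refusals only count if every member of X is refused
    }

f = frozenset

# (c -> a -> STOP) [] (b -> STOP), maximal refusals only (Sigma = {a, b, c}).
ext = {
    ((), f('a')),          # top node offers {b, c}, so it cannot refuse c
    (('c',), f('bc')),     # after c, only a is on offer
    (('b',), f('abc')),
    (('c', 'a'), f('abc')),
}
print(hide(ext, 'c'))
# The top node contributes nothing (c is not in its refusal {a}); only the
# node after the c survives, giving refusal {b, c} on the empty trace but
# never a refusal of a: the sliding choice.

# (c -> a -> STOP) |~| (b -> STOP): the top node now has maximal refusals
# {a, b} and {a, c}, and {a, c} does contain c, so it contributes as well.
int_ = {
    ((), f('ab')), ((), f('ac')),
    (('c',), f('bc')),
    (('b',), f('abc')),
    (('c', 'a'), f('abc')),
}
print(hide(int_, 'c'))
# Maximal refusals on the empty trace are now {a, c} and {b, c}, i.e. {a}
# and {b} modulo the hidden c.
```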

The solution to the conundrum seems to be that a process with hiding is not the same as the process without hiding: an event that is to be hidden must satisfy both of the conditions (i) it is completely under the control of the process and needs nothing from the environment, and (ii) it is promptly on offer where written at a state. Neither condition is required if the event is not to be hidden. In particular, (ii) is replaced with the possibility that the event may be immediately on offer OR may only be offered after a long time. Thus, the {b} that may be refused in the sliding choice above is really a {b,c}, where it counts as a refusal if c wins the race, while the {a} that must be accepted is really an {a,c} without the b, so that it has to accept the c, and the a may arrive later and, if early, stays on offer until c is executed. The second case is easier to understand, since the original top may insist on b even though c was on offer, and then get hung on an offer of {a} even if it really means {a,c}; or it may take c and get hung on an offer of {b} after that.

I think this links up nicely with my Ground Truth stuff (Fringe, CPA2017), which basically defines "promptly" as a Straight Sweep: a continuous stream of executed instructions, without blocking within the process (but allowing overlapping of multiple sub-processes and communication), leads to the event in question. See Active Sweep, Progress, and Straight Sweep in the first Fringe, slides 9 and 10. It refers to Transputers, both soft- and hard-linked, but the same thing will work for any reasonable processor. Fast, busy execution without divergences will get to the event in microseconds or less, while human-scale events from the environment take a big fraction of a second or more. So the distinction is practically clear, if not theoretically.

Larry Dickson

On Dec 22, 2018, at 2:00 PM, Bill Roscoe <awroscoe@xxxxxxxxx> wrote:

The sliding choice operator can be thought of in two ways. First, as an abstraction of a time-out: P [> Q means that the guards in P are available before the timeout, and that there is an additional guard that fires once some time limit has been reached, Q being the state then reached. I have been known to call [> an untimed time-out.
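This time-out reading can be sketched in Python rather than occam (my own construction; the single-channel model, the queue.Queue standing in for a channel, and the function name are all my inventions, not anything from CSP tooling):

```python
# Hypothetical sketch of P [> Q read as a time-out: P's guard is on offer
# until the deadline, after which an internal guard fires and we reach Q.
import queue

def sliding_choice(p_chan, q_state, deadline):
    try:
        # P's guard fires if an event arrives before the deadline.
        return ('P', p_chan.get(timeout=deadline))
    except queue.Empty:
        # Otherwise the extra guard fires and we slide to Q.
        return ('Q', q_state)

ch = queue.Queue()
print(sliding_choice(ch, 'after-timeout', 0.01))  # -> ('Q', 'after-timeout')
ch.put('event')
print(sliding_choice(ch, 'after-timeout', 0.01))  # -> ('P', 'event')
```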

Alternatively think of
PAR
  ALT
    g1          -- where g1 is c?x
      P1
    g2          -- where g2 is d?y
      P2
  SEQ
    R
    c!v

And R is a process that uses up some time but, for simplicity, doesn't communicate and doesn't change the state.

The input g2 is offered immediately but is withdrawn when the internal and hidden communication along c occurs, so this behaves like
g2 P2 [> P1[v/x]

Here [> is simply being used to explain what happens in a not completely straightforward parallel/hiding composition.
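A rough thread-and-queue simulation of this composition (my own sketch, not Bill's; a busy-wait polling loop stands in for occam's blocking ALT, and the environment never sends on d, so the internal c always wins here):

```python
# Hypothetical simulation of the PAR above: one thread runs SEQ(R, c!v)
# while the main thread runs an ALT over c and d. Since nothing ever
# arrives on d, once R finishes the internal c communication wins and
# the offer of g2 is effectively withdrawn.
import queue
import threading
import time

c, d = queue.Queue(), queue.Queue()

def seq_branch(v):
    time.sleep(0.05)    # R: uses up some time, no communication
    c.put(v)            # c!v

def alt_branch():
    # Crude polling ALT: the first channel with a message wins.
    while True:
        try:
            return ('P1', c.get_nowait())   # g1: c?x -> P1
        except queue.Empty:
            pass
        try:
            return ('P2', d.get_nowait())   # g2: d?y -> P2
        except queue.Empty:
            pass
        time.sleep(0.001)

t = threading.Thread(target=seq_branch, args=(42,))
t.start()
result = alt_branch()
t.join()
print(result)   # -> ('P1', 42), i.e. the hidden c fires and we reach P1[v/x]
```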


On Sat, 22 Dec 2018 at 18:59, Larry Dickson <tjoccam@xxxxxxxxxxx> wrote:
All,

You guys really cleaned up my problem with occam 2.5 counted array protocol, thank you! So here is another.

Can anyone come up with an example, in occam running on Transputers or near-Transputers, that models the "sliding choice" that is so often described in CSP literature (the triangle pointing right)? It's OK to add a random function, as one might to get nondeterminism (|~|). It's got to have the same traces and failures.

Thanks,

Larry Dickson