Noticed this discussion - here are some thoughts.
The CSP concurrency model is closely related to delay insensitive logic. In delay insensitive logic,
it is normal to talk about things starting, doing something and completing (the CSP/occam wording
talks about processes starting, doing something and terminating).
A two-party synchronisation, as takes place in a channel communication, involves a two-party
completion. In delay insensitive logic, it would be implemented using a special 'gate': the Muller
C-element. Similarly, the join at the end of a parallel involves a multi-party completion.
The need for synchronised communication in both CSP/occam and delay insensitive logic
arises because of alternation (arbitration in delay insensitive logic, normally performed by
another special 'gate': the asynchronous arbiter). If there is an alternation between two channels
which become ready in sequence, synchronising the communications ensures that the 'correct'
choice is made (a communication delay on the first channel to be ready cannot affect the
choice made).
The above is why variants of CSP/occam have been adopted for delay insensitive hardware design.
There seems to be widespread confusion about 'blocking'. The problem is that a common
technique in shared memory concurrency involves 'blocking' on access to shared state
and various techniques have been devised to avoid or reduce the need for this. Although
these work for some algorithms, there is no general technique to eliminate 'blocking',
and many algorithms rely on it - if a process depends on (for example) the sum of the results
of 1000 others (a reducing operation) it will have to 'block' until they have all completed.
All of these 'non-blocking' techniques are more difficult to understand and verify than the
blocking equivalents; also they use indivisible instructions (such as compare and swap)
which have to access the shared (deep and high latency) parts of the memory hierarchy;
also the hardware has to 'block' in order to implement the indivisible instructions. So these
techniques are probably not as efficient as they may seem.
Are the notions of completion, two-party completion and many-party completion helpful?
It seems to me that the idea (originated now over 35 years ago) that neither processing
delays nor communication delays affect the outputs of a system (as seen from a
programming or hardware design language) is as important as ever. Hardware designers
know this as 'avoiding race conditions' and mostly solve it using synchronous design
techniques. But now, we have to use asynchronous techniques - in software for the cloud
and large scale systems - and (I suspect) increasingly in hardware as we try to overcome
energy problems. The CSP/occam model provides a way to do this.
There is, of course, an issue of how to ensure that timing (and energy) are built into
the model ... but that's another story ...
On 27 Sep 2014, at 18:31, Larry Dickson wrote:
It seems to me that “blocking” applies to an alt as well as a one-to-one communication (or to a timer). It’s not something that CSP people need to be embarrassed about. On the contrary. Any model that does NOT account for blocking is a false model. Think on the instruction level. Each sequential logic primitive (like an add or multiply) is a wrapper around some combinatorial logic that needs time to settle. The system clock gives it that time — so every instruction blocks! And if it did not block, it would give incorrect results.
Before an instruction completes, the conditions necessary for its completion have to hold. If this is a read, write, alt, or timeout, the required blocking can be “macroscopic,” and efficient chips yield control of the instruction stream to other tasks at that point.
Blocking is probably used because a read or write will do exactly that in a straightforward one-to-one communication. When used of an alt, though, "blocking" might not aptly describe the state, as only communications that are ready (i.e., will _not_ block) will be considered.
I dislike "yield" because it gives the idea that you can back off from a blocking read or write call, which you cannot.
On Sep 26, 2014, at 2:19, "Roger Shepherd" <rog@xxxxxxxx> wrote:
I don’t know the answer to your question.
I can’t say that I like “block” - but its usage is certainly old and is common for multitasking systems where the ability to create an extra task/thread/whatever to do communication is considered advantageous - hence “non-blocking communication”.
I don’t like “yield” either, as this has a meaning along the lines of "to give place or precedence (usually followed by to): to yield to another; Will the senator from New York yield?” (from dictionary.com) and it is not necessarily the case that there is anything else to yield to.
I prefer “wait”.
On the subject of language, I think the term “synchronous” is plain wrong when used to describe (occam) channel communication. The processes are “synchronised” by the communication; the communication is “asynchronous” - there is no clock which causes the communication to happen.
When was the phrase “blocking on a channel” introduced, and by whom? Hoare does not use it in his 1985 book. Roscoe almost never uses it; indeed, I would say he does not use it at all in this context in the PDF version of his book. If this group does not know this, no one will.
I am suggesting using “to yield on a channel” rather than “to block on a channel”. I have a blog note about it, and there is a discussion thread on golang-nuts. I include the figure here (and the intro text from golang-nuts):
Readers of golang-nuts know that “blocking is fine in Go”. But often, when I talk with other programmers, I hear that “blocking is evil”. I suggest that we go over to saying that “yielding on a channel is fine”. I’ll explain:
The literal meaning of blocking is about something we don’t want. It means I want to go somewhere but am stopped or delayed, so I arrive too late. Or a door is blocked, in which case we must unblock it, hopefully without a bulldozer. Since the people who use this everyday sense outnumber the people who understand CSP-based channel semantics, we have an explanation problem.
With an explanation problem follows a mission problem.
Tell a basically single-threaded C++ programmer that blocking is good and you ask for much: for his attention to try to understand something rather new, even if he is used to Linux select. Often the code that does this select also handles other, rather independent matters, and it is in his spec that these other matters also need to complete. When you block on one matter, it is easy to see blocking as something evil, because he or she is right in their own mind.
So which “blocking” do you mean?
1. “Blocking on a channel”, or on some shared resource controlled by a non-blocking algorithm. I believe these may be in the same group; cf. the Wikipedia page on non-blocking algorithms.
2. “Blocking” away other required functionality.
3. “Blocking” as in deadlock, where the system cannot proceed because there is a cycle in the dependency tree.
We already have good words for 2 (blocking, as is) and 3 (deadlock). But we reuse “blocking” for 1, which is not optimal. As said, I suggest that 1 = yielding. This is an implicit yield that the application doesn’t have control of, not the explicit yield that some operating systems supply in their API. The channel semantics as implemented in the scheduler does it for us.
What do you think of this? If we start to write “to yield on a channel” or “yielding on a channel” it could slowly creep into our language. And the C/C++ (and even Scala or Erlang) people would give us an ear. Especially if we agree with them that blocking is evil.
(I also dare to take comments on the idea: here, there or there.)
Med vennlig hilsen / Best regards
Senior utviklingsingeniør, siv.ing. / Senior Development Engineer, M.Sc.
Autronica Fire and Security AS
Research and Development
UTC Building and Industrial Systems
Phone: +47 735 824 68
www.teigfam.net/oyvind/home/ (Also work-related)