Thank you, Bernhard. I have been preoccupied with other tasks and have just now taken a look at your references. In Reference  I found the following:
On this level Tasks exchange standardized Packets using an intermediate entity we call Port. Two tasks rendezvous by one task sending a ‘put’ request and the other task sending a ‘get’ request to the Port. The Port behaves similar to the JCSP Any2AnyChannel . . . . Finally, the architecture was kept simple and modular by developing kernel and drivers as Tasks.
A first surprise was that each model gave no errors when verified by the TLC model checker. This is actually due to the iterative nature of the model development process and partly its strength. . . . This forced us to avoid shared data structures.
An example was the use of both waiting lists and Port buffers, which is one of the main concepts of OpenComRTOS. A waiting list is associated with just one waiting action but one overlooks the fact that it also provides buffering behaviour. Hence, one waiting list is sufficient resulting in a smaller and cleaner architecture.
Ports act like channels in the tradition of Hoare’s CSP, but they allow multiple waiters and asynchronous communication.
A first level of safety is provided by the formal modelling approach. Each service is intensely modelled and verified with most “corner cases” detected during design time prior to writing code. . . . Even when they are asynchronously used, the services become synchronous when resources become depleted.
Of great interest was the use of Tasks to describe the kernel and drivers (which is like Transputer high-priority programming, making drivers unnecessary) and the use of the TLC model checker. The implication of "asynchronous communication" was unclear (Section 1.4), but perhaps the note from Section 1.3 means it is equivalent (or nearly equivalent) to my x-to-y channel description. The idea of "resources become depleted" at the end seems to indicate dynamic resource allocation most of the time; am I right that dynamic resources (e.g. a malloc) are part of this design? My design was based on static resources per task (hence only quasi-dynamic).
Were Port-Hubs used to develop block devices or TCP support?
Hi Larry and occam-com readers,
What you describe here sounds very much like the Port-Hub (it embodies
the basic Hub concept) in OpenComRTOS / VirtuosoNext, with all Tasks
(CSP would call them Processes) running at the same priority. We
developed OpenComRTOS / VirtuosoNext using formal modelling, and we all
have a CSP / JCSP / occam background, which shows in the design of the
RTOS. Our implementation scales from single-node systems to distributed
heterogeneous systems consisting of multiple (up to 2^16) nodes; see 
for an example. More information about the features of the current
version can be found at .
 Formal Development of a Network-Centric RTOS: Software Engineering
for Reliable Embedded Systems; Eric Verhulst, Raymond T. Boute, José
Miguel Sampaio Faria, Bernhard H.C. Sputh, Vitaliy Mezhuyev; Springer
Science & Business Media, 23 August 2011
On 22.11.18 01:03, Larry Dickson wrote:
Hi Peter and occam-com folks,
Before I wander off and reinvent stuff on my own, I'd like feedback on my (probably trivial?) notion of a queue-based x-to-y channel (where x and y can each, independently, be one or many). Peter said all four of those kinds of channels were mutexes, and I thought queues, but I did not have my thoughts organized at CPA2018. Here is stuff I passed to Peter a month ago, but got no reply, probably due to his legal stuff.
A channel starts out empty and maintains a queue count, which starts at zero. A number of sender and receiver processes may communicate through the channel (the number may be unbounded or not). A process arrives when its code reaches the point of communicating through the channel. When a sender arrives, the queue count is increased by one; when a receiver arrives, it is decreased by one. Each arrival therefore changes the absolute value of the queue count by exactly one. If the absolute value decreases, a communication takes place between the arriving process and the lead process of the queue, which is waiting in the opposite direction. If the absolute value increases, the arriving process is put at the end of the queue and blocks. Note that only one queue is needed; the sign of the queue count tells whether it is currently a sender queue or a receiver queue.
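The queue-count discipline above can be sketched in Python, using a lock plus a per-process event to stand in for blocking and a single deque for the one queue; the class and method names here are mine, purely illustrative, and not drawn from occam or any RTOS discussed:

```python
import threading
from collections import deque

class XYChannel:
    """x-to-y rendezvous channel driven by one signed queue count.

    count > 0: the queue holds blocked senders.
    count < 0: the queue holds blocked receivers.
    """
    def __init__(self):
        self._lock = threading.Lock()
        self._count = 0          # signed queue count, starts at zero
        self._queue = deque()    # (event, cell) per blocked process

    def send(self, value):
        with self._lock:
            if self._count < 0:          # receivers waiting: |count| decreases
                self._count += 1
                event, cell = self._queue.popleft()
                cell.append(value)       # hand value to the lead receiver
                event.set()
                return
            self._count += 1             # |count| increases: join queue, block
            event, cell = threading.Event(), [value]
            self._queue.append((event, cell))
        event.wait()

    def receive(self):
        with self._lock:
            if self._count > 0:          # senders waiting: |count| decreases
                self._count -= 1
                event, cell = self._queue.popleft()
                event.set()              # release the lead sender
                return cell[0]
            self._count -= 1             # |count| increases: join queue, block
            event, cell = threading.Event(), []
            self._queue.append((event, cell))
        event.wait()
        return cell[0]
```

Because every arrival goes through the one lock, the invariant "sign of count = direction of the queued processes" holds, so the single queue never mixes senders with receivers.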
If there is only one sender, the queue count cannot rise above +1; if there is only one receiver, it cannot fall below -1. This means the setup supports one-to-one, many-to-one, one-to-many, and many-to-many channels. Bounds on the queue count can also be enforced that are smaller than the number of senders or receivers, in which case an arrival may fail (like a NAK in a TCP connect attempt).
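The bounded-arrival rule reduces to a small admission check: an arrival that would shrink the absolute count always succeeds (a communication happens), while one that would grow it past the bound is rejected. A hypothetical helper (my own naming, not from any of the systems discussed):

```python
def try_arrive(count, bound, direction):
    """Admission check for a bounded queue-count channel.

    direction is +1 for an arriving sender, -1 for an arriving receiver.
    Returns the new queue count, or None for a failed arrival (a NAK).
    """
    new = count + direction
    if abs(new) > abs(count) and abs(new) > bound:
        return None          # would enqueue beyond the bound: reject
    return new
```

For example, with bound 1 a second sender arriving on top of a queued one (count +1) is refused, while a receiver arriving at the same moment succeeds and completes a communication.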
Implementation may be centrally static, quasi-dynamic, or both. Centrally static means there is room in the channel itself for maintaining the queue, so the queue count limit is finite. (A one-to-one channel is an example of this.) Either the number of senders and the number of receivers is constrained to be at most the queue count limit, or the case of a failed arrival must be handled. Quasi-dynamic means the queue is maintained using space in each queued process (like a timer queue in Transputer occam). Though each process can remain static, there need be no limit on the number of processes in the queue (again like a timer queue), and no failed arrival need be handled. A combination of the two is possible, where a static queue is maintained up to a certain length and then the quasi-dynamic approach takes over. That requires a circular in-channel queue; it also avoids failed arrivals, but makes dequeueing more complicated, and may be called for when the queue count is almost always bounded but on rare occasions may grow longer.
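The quasi-dynamic variant can be sketched by threading the waiting list through the waiters themselves, so the channel holds only a count and head/tail links and never needs central queue storage. In a static language each Waiter node would live on the blocked caller's own stack, playing the role of the per-process workspace words behind a Transputer timer queue; all names here are my own illustrative choices:

```python
import threading

class Waiter:
    """One intrusive queue node, allocated per blocked call."""
    __slots__ = ("event", "cell", "next")
    def __init__(self, cell):
        self.event = threading.Event()
        self.cell = cell         # [value] for a sender, [] for a receiver
        self.next = None

class QuasiDynamicChannel:
    """Channel whose queue lives inside the waiters, like a timer queue."""
    def __init__(self):
        self._lock = threading.Lock()
        self._count = 0          # >0: senders queued, <0: receivers queued
        self._head = None
        self._tail = None

    def _enqueue(self, w):
        if self._tail is None:
            self._head = w
        else:
            self._tail.next = w
        self._tail = w

    def _dequeue(self):
        w = self._head
        self._head = w.next
        if self._head is None:
            self._tail = None
        return w

    def send(self, value):
        with self._lock:
            if self._count < 0:          # a receiver leads the queue
                self._count += 1
                w = self._dequeue()
                w.cell.append(value)
                w.event.set()
                return
            self._count += 1
            w = Waiter([value])          # node belongs to this blocked call
            self._enqueue(w)
        w.event.wait()

    def receive(self):
        with self._lock:
            if self._count > 0:          # a sender leads the queue
                self._count -= 1
                w = self._dequeue()
                w.event.set()
                return w.cell[0]
            self._count -= 1
            w = Waiter([])
            self._enqueue(w)
        w.event.wait()
        return w.cell[0]
```

Note that the channel's own footprint stays constant no matter how many processes queue, which is exactly why no failed arrival needs handling in this variant.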
In one-to-one and many-to-one cases, a standard input ALT is possible. I believe it will also work in the other cases, since ready-to-communicate is deterministic and the ALT would just amount to a loop within this commitment.
(Added since then) I thought about it, and to do an input ALT in the one-to-many or many-to-many cases one would probably want two queues. This is because, between the matchup of an input with an output and the actual communication between them, there might be a long time lapse due to ALTs which lose. The same could happen because of a long communication. Stuff that comes in from other processes while this is under way would go to a second "inner" queue, thus allowing the process queue to progress.
Why are Assembly programmers always soaking wet? They work below C-level.
"The cheapest, fastest, and most reliable components are those that
aren’t there." Gordon Bell