Tony Gore wrote:
| The other thing that is mostly missing in languages (someone is bound to know of ones I don’t) is the effect on power consumption.
While writing a blog note (here) I got in contact with Ami Marowka. He is very interested in these things, and in how the cognitive limits of our heads matter when doing parallel programming.
 - Energy Consumption Modeling for Hybrid Computing, Ami Marowka, Department of Computer Science, Bar-Ilan University, Israel
Much of the discussion seems to be on implementation details.
What is less clear is what the parallel programming model is. Are we still assuming a CSP model?
For me, the great beauty of Occam was that it was a simple model; I came from engineering rather than computer science, and so I found it approximated to the wires and boxes I was familiar with in electronics. Glitches, race conditions etc all had programming equivalents.
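The wires-and-boxes view can be sketched in Go, which the thread itself points to as a CSP-descended language. This is an illustrative sketch, not occam: each goroutine plays the role of a hardware "box" and each unbuffered channel the role of a "wire" with rendezvous semantics, much like an occam channel.

```go
package main

import "fmt"

// doubler is a "box": it reads values off its input wire and puts
// doubled values on its output wire, like a combinational block.
func doubler(in <-chan int, out chan<- int) {
	for x := range in {
		out <- 2 * x
	}
	close(out)
}

func main() {
	wireA := make(chan int) // unbuffered: a rendezvous, like an occam channel
	wireB := make(chan int)
	go doubler(wireA, wireB)
	go func() {
		for i := 1; i <= 3; i++ {
			wireA <- i
		}
		close(wireA)
	}()
	for v := range wireB {
		fmt.Println(v) // prints 2, 4, 6
	}
}
```

Compose more boxes by joining output wires to input wires; glitches and race conditions then show up exactly where unsynchronised wires would glitch in hardware.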
If we look at how hardware design has evolved, the level of abstraction has moved up. This allows many implementations.
“Systems of systems” is an evolving area where we have autonomous systems that interact and collaborate. How will one approach programming systems of this complexity?
It seems to me that sub-systems will have to report their capabilities on interactions. For instance, a city-wide traffic flow system would need to be able to interrogate each vehicle to find out whether it was fitted with anti-collision radar, and perhaps on a multi-lane highway allocate those vehicles to a “high-density traffic lane”, because it can reduce the space between them as it does not have to allow for driver delays.
There will be aspects of such systems – probably only at subsystem level – that will require verification to a very high degree; some will require guaranteed responsiveness. Break them down far enough and you will always find subsystems that are resource constrained.
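The capability-interrogation idea above could look something like the following Go sketch. All names here (`Capability`, `Vehicle`, `AntiCollisionRadar`) are hypothetical, invented for illustration; no real traffic standard is being described.

```go
package main

import "fmt"

// Capability names a feature a sub-system can report. Illustrative only.
type Capability string

const AntiCollisionRadar Capability = "anti-collision-radar"

// Vehicle is the interrogation interface a city-wide system might
// require each sub-system to implement.
type Vehicle interface {
	Capabilities() []Capability
}

type car struct{ caps []Capability }

func (c car) Capabilities() []Capability { return c.caps }

// hasCapability is what the traffic system would use to decide, e.g.,
// whether a vehicle qualifies for a high-density lane.
func hasCapability(v Vehicle, want Capability) bool {
	for _, c := range v.Capabilities() {
		if c == want {
			return true
		}
	}
	return false
}

func main() {
	v := car{caps: []Capability{AntiCollisionRadar}}
	fmt.Println(hasCapability(v, AntiCollisionRadar)) // true: eligible for the dense lane
	fmt.Println(hasCapability(car{}, AntiCollisionRadar)) // false
}
```

The point is the shape of the interaction, not the types: sub-systems advertise what they can guarantee, and the enclosing system plans around that.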
Are we trying to design a “one size fits all” language? Or are we trying to design a basic language that can work as a modelling and systems language, down to a level of abstraction, and then have subsets/supersets to deal with differing requirements e.g. a richer and more complex version for (say) desktop programming, and various flavours for real time and resource constrained embedded?
A single language that tries to be all things to all people ends up becoming too complex; CHILL was an example, and I don’t hear much about Ada these days.
The other thing that is mostly missing in languages (someone is bound to know of ones I don’t) is the effect on power consumption.
I am only speculating, but it is my guess that the reason applications will have to be rewritten for Windows 8 RT is that without some thought, existing programs may well not be power optimised. The average user would not recognise that, and so Windows RT tablets would get a bad rap for battery life.
Again, it comes down to implementation and capabilities: if you have a module that requires fast reactions, you may use/design hardware to do it, or have software that runs more actively, e.g. polling more frequently.
I raise power, because with the Internet of Things, wireless sensor networks and so much low level embedded computing, programming efficiently for power is going to become important, and it could be one of those things that can help a new language get traction.
So it seems to me that instead of starting from the bottom up, we should be starting with what problems we want the language to solve and where the language sits in relation to existing languages.
Aspen Enterprises Limited email tony@xxxxxxxxxxxx
tel +44-1278-761000 FAX +44-1278-760006 GSM +44-7768-598570 URL: www.aspen.uk.com
Registered in England and Wales no. 3055963 Reg.Office Aspen House, Burton Row, Brent Knoll, Somerset TA9 4BW. UK
| 7. RC_Fail is possible with non-blocking semantics. (no matching request was waiting => return immediately, to be avoided as risk of busy polling)
Or use an XCHAN, and you will not poll (busy or not) even in this situation. See below.
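The RC_Fail situation in the quoted point 7 is easy to show in Go, where a `select` with a `default` arm is a non-blocking send. This sketch is mine, not from the XCHAN paper; it shows exactly the return-immediately behaviour whose naive retry becomes busy polling.

```go
package main

import "fmt"

// trySend attempts a non-blocking send. false is the RC_Fail case:
// no matching request was waiting, so we return immediately rather
// than block. Retrying this in a tight loop is busy polling.
func trySend(ch chan string, msg string) bool {
	select {
	case ch <- msg:
		return true
	default:
		return false // RC_Fail: no receiver ready, no buffer space
	}
}

func main() {
	unbuf := make(chan string) // no receiver is waiting
	fmt.Println(trySend(unbuf, "hello")) // false

	buf := make(chan string, 1) // buffer space stands in for a waiting request
	fmt.Println(trySend(buf, "hello")) // true
}
```

An XCHAN's contribution, as I read the thread, is to replace the retry loop with a blocking "ready" signal, so the sender sleeps instead of polling.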
| Only giving priorities to messages makes little sense as they don't use a lot of the processing resources, whereas Tasks/Processes do.
I would regard message priority as controlling what a process should do when, so the above argument would be invalid.
| Essentially, priority is an aspect of the message, not the sender
Yes... but the sender sends the high-priority message!
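One way to make "priority is an aspect of the message, not the sender" concrete: the receiver, not the sender, decides which stream to service first. A minimal Go sketch, assuming two message streams of my own naming (`hi`, `lo`):

```go
package main

import "fmt"

// recvPriority prefers a pending high-priority message; only when
// none is pending does it take whichever stream is ready. Priority
// lives in the message stream and the receiver's policy, not in any
// property of the sending process.
func recvPriority(hi, lo <-chan string) string {
	select {
	case m := <-hi:
		return m
	default:
		// no high-priority message pending; fall through
	}
	select {
	case m := <-hi:
		return m
	case m := <-lo:
		return m
	}
}

func main() {
	hi := make(chan string, 1)
	lo := make(chan string, 1)
	lo <- "routine report"
	hi <- "urgent message"
	fmt.Println(recvPriority(hi, lo)) // the high-priority message wins
}
```

The spy can claim his message changes the war, but it is the receiver's select policy that grants the claim.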
But, if the man in the gabardine suit was a spy and his message could change the war…
..who would let that spy (just any grey spy like him) give priority to his message?
Who does he think he is?
| I am afraid I share your wishful thinking, even if I regret so.
I have introduced something I call “architectural leak” in the CPA-2012 paper (referenced below):
“Architectural leak” from link to application level could be seen as application code that is added to compensate for missing features at link level. Chained processes and overflow buffers are needed when buffered channels are not supplied. Busy polling is needed if the link level does not deliver appropriate flow control. The channel-ready channel connected to a buffered channel, described in this paper as x-channel, would decrease architectural leakage.
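The channel-ready-channel idea can be roughly sketched in Go. This is my reading of it, not code from the paper: a buffered data channel is paired with a "ready" channel, so a sender whose non-blocking send fails waits with a blocking receive instead of busy polling. Names, and the single-sender assumption, are mine.

```go
package main

import "fmt"

// XChan pairs a buffered data channel with a channel-ready channel.
type XChan struct {
	data  chan int
	ready chan struct{}
}

func NewXChan(capacity int) *XChan {
	return &XChan{
		data:  make(chan int, capacity),
		ready: make(chan struct{}, 1), // holds at most one pending signal
	}
}

// TrySend never blocks; false is the "send failed, buffer full" case.
func (x *XChan) TrySend(v int) bool {
	select {
	case x.data <- v:
		return true
	default:
		return false
	}
}

// Recv takes a value and signals that space now exists.
func (x *XChan) Recv() int {
	v := <-x.data
	select {
	case x.ready <- struct{}{}:
	default: // a signal is already pending; don't block
	}
	return v
}

// Ready blocks, without any polling, until the receiver has freed space.
func (x *XChan) Ready() { <-x.ready }

func main() {
	x := NewXChan(1)
	fmt.Println(x.TrySend(1)) // true
	fmt.Println(x.TrySend(2)) // false: full, but no busy loop follows
	fmt.Println(x.Recv())     // 1; space is freed and ready is signalled
	x.Ready()                 // returns at once, no polling
	fmt.Println(x.TrySend(2)) // true
}
```

Note the leak this sketch still has with multiple senders: Ready() wakes one sender but does not reserve it a slot, so the retried TrySend can fail again. A real XCHAN, as I understand the paper, commits the slot to the signalled sender.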
So, we’re all striving for architectural balance.
So there should be balance between “I have it all already in my API” and “I want to make it all in my language”.
XCHANs help that balance!
| What is essential?
New channel types? (like XCHAN?)
protocol-sessions? (as suggested by.. Peter?)
a king to decide (advised by a committee)?
a million++ hits on YouTube tutorials, like the “Go concurrency patterns” talk referenced below?
After numviewers += 1, studying Go in depth to see what to learn from it. They have learnt from “Occam”!
..think ahead about what I would do if the king dropped XCHAN and I had joined the committee for the Good XCHAN Cause..
(=some level of consensus needed..)
 – CPA-2012: “XCHANs: Notes on a New Channel Type”
 – “Go concurrency patterns”, Rob Pike at Google I/O 2012: http://www.youtube.com/watch?v=f6kdp27TYZs&sns=em
(Sorry for the XCHAN mantra, but this thread is that room..)
I would like to see an Occam-like language agreed, defined,
implemented and promoted in an open process.
I'm not interested in discussions about how to represent
priority. There were several very good reasons why this was
relegated to the 'configuration' section of the original language
specification. In the meantime, nothing has changed.
The occam-pi language is an over-extended version of occam
with no formal specification. Some of the novel features have no
efficient implementation on message-passing distributed-memory systems.
So my suggestion is that we start from occam2, and look at what
we need to add from occam3 and occam-pi. What is essential?
I've been working on language issues for quite a while now -
mainly looking at how we can really get value out of thousands of cores.
Not sure how best to do this but I'd like to see it happen. I'd be
happy to host a meeting.
On 4 Oct 2012, at 20:38, Rick Beton wrote:
I started the original discussion following Peter's 'Occam Obviously' presentation, but sadly the language discussion petered out, lapsing into a fascinating but long-rehearsed discussion on priority.
My original hope was to seek an answer to this question: if the answer is Occam (obviously or otherwise), what will it take to make Occam generally usable? In its present form it is not so.
Then there's the question of aspiration versus practicalities. The first suggestion I made was for packages to be added to Occam-pi, and I put it first deliberately. Not a new suggestion, this; in fact Occam3 had 'modules' way back in 19xx (choose your own xx). I don't really care about the details of the implementation; I'm much more concerned that Occam-pi/-tau should belong to a busy community, inspired by (a) clarity of thinking and (b) a need to make things happen.
If this is wishful thinking, then alas Occam is not obviously going ever to be more than a teaching tool.
So, what next?