
RE: Occam-Tau - the natural successor to Occam-Pi - or is there one already?



 

 

From: Eric Verhulst (ALTREONIC) [mailto:eric.verhulst@xxxxxxxxxxxxx]
Sent: Tuesday, October 02, 2012 6:19 PM
To: 'David May'; 'Ian East'
Cc: 'Larry Dickson'; 'Ruth Ivimey-Cook'; 'Occam Family'
Subject: RE: Occam-Tau - the natural successor to Occam-Pi - or is there one already?
Importance: Low

 

Hi All,

 

While this is an interesting discussion and good points have been made (known to the old transputer fans, but being rediscovered by the current generation), I don't agree with all the conclusions, and that applies to other posts in this thread as well. Just my 2c. There is material here for a full-blown conference.

 

A language should not try to cover all aspects. Separation of concerns applies.

 

For example, communication as such (= moving bytes) should not be in the language. Interaction (= synchronisation + data exchange) should.

 

The same applies to most other aspects that have to do with the construction side of things.

 

The reason is that there are different layers of abstraction. Programming languages have evolved from an almost transparent layer of abstraction that initially hid only the pure assembler and the CPU executing it (e.g. C, or even occam), to domain-specific languages using tens of layers of interpretation and virtualisation (e.g. Python frameworks and VMs). The latter have overheads on the order of 10**3. Actually, the best programming "languages" are now modelling tools (and that's not necessarily UML and its kin).

 

So, for practical reasons, a first clear distinction to make is between embedded needs and IT-like/desktop needs. The first group still has the requirement of efficiency and predictability (in time), because there are constraints of power, resources (memory), safety and cost. The second group seems to be driven a lot more by reuse and productivity. Often inefficiency (and unnecessary complexity) is fully masked by the user interface (the eye samples at 25 Hz).

 

So to answer some of the requests:

 

- Concurrency: taking Hoare's CSP as the basis, it results in something more abstract that we call "Interacting Entities". Even if a lot more abstract, it still reflects the architectural view (which is too often how developers and/or HW people think). Does the programming language need to be able to express this? Not really. Defining the concurrency and the interactions can be left to the modelling domain. There is of course an overlapping area: implementation decisions will at some point show through in the "program". The point is that in the programming part, one should not be concerned too much with how things are done in the hardware. This can be left to an optimised and verified system layer (an OS if you like, with HW support for better performance) for the plumbing aspects, and to the compiler for the binary code generation. Routing can be pre-calculated or dynamic.

 

- Scheduling: similarly, we have scheduling theories like RMA that work. The "logical behaviour" of the program is not, or at least should not be, affected by scheduling, as most scheduling should be a side-effect of the interactions. RMA means that tasks/processes get an assigned priority. In a parallel system, everything must follow: any activity must be prioritised, e.g. when waiting (ALTs in occam), priority must be observed. Why? Because, system-wide, anything that has a higher priority and is executable must be scheduled first; otherwise the system-wide real-time response will degrade even if the logical behaviour is not affected. Tasks can delay the execution of a high-priority task on another processing node when there is communication in between; hence the communication must be prioritised as well. The alternative is that the programmer plays scheduler, which only works for simple systems (data in, processing, data out). This is the static approach also taken in safety-critical systems, for the simple reason that it is easier to analyse. Is it safer? Not necessarily. Real systems are never fully synchronous and have jitter. Hence, a self-synchronising system with priority-based scheduling (and hence enough concurrency) is a lot more resilient. The other extreme (EDF) doesn't work either. While it has the benefit of allowing a higher CPU load, it has the serious issue of failing catastrophically, and it has no distributed capability, not to speak of the fact that it really needs cycle counters per task/process in the hardware. Hence RMA with priority-based scheduling (incl. support for priority inheritance) is still the best option.
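
For illustration, a minimal sketch in C of rate-monotonic priority assignment (the task descriptor and all names are mine, purely illustrative): the shorter a task's period, the higher its priority; the scheduler then always runs the highest-priority ready task.

    #include <stdlib.h>

    /* Hypothetical task descriptor; the fields are illustrative only. */
    typedef struct {
        const char *name;
        unsigned    period_us;  /* activation period */
        unsigned    priority;   /* RMA: shorter period => higher priority */
    } task_t;

    static int by_period(const void *a, const void *b)
    {
        const task_t *ta = a, *tb = b;
        return (ta->period_us > tb->period_us) - (ta->period_us < tb->period_us);
    }

    /* Rate-monotonic assignment: sort by period, then number the
       priorities so that the shortest period ends up highest. */
    void rma_assign(task_t *tasks, unsigned n)
    {
        qsort(tasks, n, sizeof *tasks, by_period);
        for (unsigned i = 0; i < n; i++)
            tasks[i].priority = n - i;   /* n = highest, 1 = lowest */
    }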

 

- Communication: as already mentioned above, interaction between tasks/processes does follow the logical behaviour of the application, but there is no reason why it should strictly follow the hardware-level communication. Communication (prioritised) must be a system-level service, not something the programmer has to program. Practically speaking, it also means the use of packet switching, as the transmission time of a packet determines the minimum latency added by the communication layer.
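
To see why fixed-size packets bound the added latency: once a packet is in flight it is not preempted, so a high-priority message waits at most one packet transmission time per hop behind lower-priority traffic. A back-of-the-envelope sketch (all numbers invented for illustration):

    #include <stdio.h>

    int main(void)
    {
        /* All numbers invented for illustration. */
        double link_bps    = 10e6;    /* 10 Mbit/s link        */
        double packet_bits = 64 * 8;  /* fixed 64-byte packets */
        int    hops        = 4;

        /* A high-priority packet waits at most one packet
           transmission time per hop behind lower-priority traffic. */
        double worst_wait_s = hops * (packet_bits / link_bps);
        printf("worst-case added latency: %.1f us\n", worst_wait_s * 1e6);
        return 0;
    }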

 

Is this strange? The Internet works like this. Applications call services remotely and transparently, independently of where they are in the network.

 

So, what should a programming language really provide? Mostly how to program the state machine and manipulate local data. Maybe also a convenient way to activate behaviour (e.g. using functions, methods, ...). It might support some local concurrency, but it does not need to support explicit inter-processor communication (that is a system-level issue). A well-thought-out programming language should be clean as well as rich enough to allow using it as a specification language. It reflects intended behaviour (semantics), not the implementation (that is pure syntax). Functional languages (e.g. Haskell) probably come closest, but their runtime support makes them difficult to use for hard real-time embedded systems. In the embedded world we must also be able to access a single bit in hardware (not always, but when needed). My ideal view (for embedded) is a cleaned-up version of the over-specified Ada, a bit like Modula-2. Pointers? They are indeed tricky but sometimes necessary. Good programmers know how to avoid them and use them cleanly when needed. Somebody has to program the system level. The application programmer can be shielded by code generators or verified libraries.
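
On the single-bit access point: in C this is the usual volatile-pointer-and-mask idiom; the register address and bit position below are made up for the sketch.

    #include <stdint.h>

    /* Hypothetical memory-mapped GPIO register; the address and
       bit position are made up for this sketch. */
    #define GPIO_OUT  (*(volatile uint32_t *)0x40020014u)
    #define LED_BIT   (1u << 5)

    static inline void led_on(void)  { GPIO_OUT |=  LED_BIT; }
    static inline void led_off(void) { GPIO_OUT &= ~LED_BIT; }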

 

Occam is neat and clean, but it lacks typical features that system-level programming requires. That's why we moved to C in 1989, after having developed a fault-tolerant transputer system in occam. We also found a way to have more than 2 priorities and full preemption on the transputer, because this was needed for real embedded programming.

 

What else is needed? The interactions. Pure occam has channels that directly connect processes. Hence the plumbing code is visible (read from a channel and decode the message). Such code is difficult to change and difficult to scale across processor boundaries. The solution is to decouple the channels from the processes; in other words, make the interaction explicit and hence decouple the processes. We call these "hubs". A hub still synchronises like a channel (using the hub's guard), but the "action" has more semantic variety than a simple channel transfer (ranging from events to blackboards). As the communication and routing are OS-level support, tasks and hubs can be transparently placed in a network (or on many-/multicore) without changing any source code: hardware topology and application topology are fully decoupled. Even processor types and communication media can be freely mixed (after some driver development).
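
To make the hub idea concrete, a minimal interface sketch in C. None of these names are the real product API; they are invented, just to show that a task names a hub rather than a peer task, so the mapping of tasks to processors can change without touching the source:

    #include <stdint.h>

    /* Hypothetical hub API: a hub is named by an ID, not by a peer
       task, so tasks are decoupled from each other and from the
       hardware topology.  All names here are invented. */
    typedef uint16_t hub_id_t;

    int hub_put(hub_id_t hub, const void *data, uint32_t size); /* blocks on the hub's guard */
    int hub_get(hub_id_t hub, void *data, uint32_t size);

    #define SENSOR_HUB ((hub_id_t)7)   /* placement resolved by the system layer */

    void producer_task(void)
    {
        int32_t sample = 42;
        for (;;)
            hub_put(SENSOR_HUB, &sample, sizeof sample);
    }

    void consumer_task(void)
    {
        int32_t sample;
        for (;;)
            hub_get(SENSOR_HUB, &sample, sizeof sample);
    }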

 

Some will say that this is not the definition of a language, as the services reside in libraries. Indeed, the core language is still ANSI C, but the approach is actually language-independent. A prototype is running in Python, and nothing prevents us from using Ada, C++ or whatever. The benefit of not putting everything in the language is that the language itself remains light and the system (language + concurrency support) remains scalable as well as flexible. There is no need to redefine the language to introduce a new type of synchronisation mechanism or to improve the semantics.

 

Does it work? We have running systems, some heterogeneous with 5 types of processors (DSP, 16-bit, 32-bit), in a total of 12000 processors.

One can start by using visual programming/modelling on a PC and then redistribute the program across the PC and some attached ARMs in 15 minutes, almost without writing any code. Code size? Typically 5 KB/node. Formally developed and verified using TLA/TLC (a cousin of CSP).

 

What are we looking for? Not a new programming language, but a language that allows behaviour (functional requirements) to be specified very precisely yet independently of any implementation/programming language.

Formal models as well as implementations are then generated by translation. Is it possible? Why not? Take the example of a loop construct. Specify the pre-conditions, datatypes and structures; specify the block to be repeated; specify the repeat conditions; specify the loop termination conditions; specify the post-conditions. This says nothing about what the implementation looks like, but it is generic and can be translated into most existing languages. Now encapsulate each loop construct in a concurrency "shell" and you have the notion of a task/process. Add interactions (like exchanging state-space data) and you can start parallelising. If the interaction comes from the (OS) framework, there is no need to change the specification when going parallel.
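
As a toy instance, such a loop specification could translate into C roughly as follows, with the pre- and post-conditions becoming assertions (my choice of target; the specification itself stays language-neutral):

    #include <assert.h>

    /* Sum of a[0..n-1].  The contract mirrors the specification:
       pre-condition, repeat/termination condition, post-condition. */
    long sum(const long *a, int n)
    {
        assert(a != 0 && n >= 0);   /* pre-condition */
        long s = 0;
        int  i = 0;
        while (i < n) {             /* repeat while i < n; terminates as i only increases */
            s += a[i];
            i++;
        }
        assert(i == n);             /* post-condition */
        return s;
    }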

 

As education was mentioned here as well: this approach does away with the brainwashing induced by learning a specific programming language (remember the OUG joke that maintaining Fortran programs for 10 years gave rise to permanent brain damage?).

Engineers should not program, but specify, model and verify.

 

BTW, this came in last night:

 

http://worrydream.com/LearnableProgramming/

from

http://www.technologyreview.com/view/429438/dear-everyone-teaching-programming-youre-doing-it/

 

 

Best regards,

 

Eric Verhulst

 

PS.

 

The PRI ALT is just a language concept; as far as I know it was never implemented. The consequence is that it is really an ALT, which means: don't assume anything about the order in which the ALT triggers. If you do, you are likely to introduce side-effects in the code. Therefore the opposite is better: always implement in order of priority, but still assume that the "select" is priority-independent. That keeps the behaviour consistent even if the timings (in a network) vary, because in practice one should not assume anything about the order of things on other nodes. Logical (functional) behaviour and timing behaviour should be independent.
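
A small C sketch of that discipline (the guard and handler functions are hypothetical): the guards are tested in priority order, but each handler is written so the overall behaviour does not depend on which ready guard happened to fire first.

    /* Hypothetical guard/handler functions for two event sources.
       Test in priority order, but write each handler so that the
       behaviour is correct whichever ready guard is taken first. */
    extern int  urgent_ready(void);
    extern int  normal_ready(void);
    extern void handle_urgent(void);
    extern void handle_normal(void);

    void alt_loop(void)
    {
        for (;;) {
            if (urgent_ready())        /* checked first: priority */
                handle_urgent();
            else if (normal_ready())
                handle_normal();
            /* else: block on an event in a real system, don't busy-spin */
        }
    }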

 

 

----------------------  FROM : -----------------------
  Eric.Verhulst@xxxxxxxxxxxxx

  Skype me at: ericverhulstskype
  Mob. +32 477 608339

  Office. +32 16 202059

  http://www.altreonic.com
-----------------------------------------------------------
"From Deep Space to Deep Sea,

   Trustworthy Forever"


From: Mailing List Robot [mailto:sympa@xxxxxxxxxx] On Behalf Of David May
Sent: Sunday, September 30, 2012 12:43 PM
To: Ian East
Cc: Larry Dickson; Ruth Ivimey-Cook; Occam Family
Subject: Re: Occam-Tau - the natural successor to Occam-Pi - or is there one already?

 

 

Dear all, 

 

I've just noticed this email trail. It reminded me to post a keynote presentation 

I gave recently - it's here:

 

 

It was given at a "Multicore Challenge" conference in Bristol last Monday. 

 

Unfortunately they lost the recording so you have to guess what I said!

 

The main thing I've realised over the last year or two is that if you want to

write efficient programs for huge numbers of cores, you have to think about

the pattern(s) of communication. And of course, the language(s) have to be

able to express them clearly; the compilers have to be able to analyse

and optimise them; the architectures have to be designed to support them. 

 

David 

 

 

On 28 Sep 2012, at 18:08, Ian East wrote:

 

Larry

 

Funding for research may very well exacerbate the problem, producing a hideous plethora of languages, whereas funding for action, within a commercial environment, could I think be justified, perhaps even fought for and won, despite the degree of commitment required – the cost of the current situation, both in product integrity and in productivity, is enormous and demonstrable, I think.  The problem then is persuasion.  Although we have a body of work showing the potential, dating back decades, the CPA paradigm still does not have a comprehensive text.

 

I think it's easier to get 'em while they're young.  The new opportunity is to target a textbook at GCSE Computing students, in support of the requirement to complete a significant software project.  The Plumbing book by the Transterpreter folk could be built up to that, I think, though it would need professional production.  (If you're listening, guys, consider it an offer.)  That leaves a more comprehensive volume needed for the pro and academic markets.

 

I do plan to write one, but cannot do it for years yet.

 

Ian

 

Ian East

Open Channel Publishing Ltd.

(Reg. in England, Company Number 6818450)