Great idea. I love the way this list has come to life. Brainstorming below...

On Oct 4, 2012, at 3:51 PM, David May <dave@xxxxxxxxxxxxxxxxxxxxx> wrote:
Yes, I agree with you more than with Peter on this. Things are moving toward serial comms and distributed resources (look at PCI --> PCI Express). Memory itself has some serious limitations (e.g. a "parallel" read of a read-only memory really has to happen sequentially). I like the idea of a bunch of people snapping pictures of a 2D barcode at once, with the broadcaster knowing they've all grabbed it so it can be changed. That by itself would eliminate much of the need for shared memory.
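To make the barcode idea concrete, here is a rough sketch of the handshake I have in mind. It's in Go only because the new occam doesn't exist yet, and every name in it (Frame, viewer, screens, the counts) is invented for illustration: the broadcaster shows one frame to every viewer, waits until every viewer has acknowledged capturing it, and only then puts up the next one.

    package main

    import "fmt"

    type Frame int // stands in for one 2D barcode image

    // viewer "snaps" each frame it is shown, then acknowledges the capture.
    func viewer(id int, frames <-chan Frame, ack chan<- int) {
        for f := range frames {
            fmt.Printf("viewer %d captured frame %d\n", id, f)
            ack <- id
        }
    }

    func main() {
        const nViewers = 3
        ack := make(chan int)
        var screens []chan Frame
        for i := 0; i < nViewers; i++ {
            c := make(chan Frame)
            screens = append(screens, c)
            go viewer(i, c, ack)
        }
        for f := Frame(0); f < 2; f++ {
            for _, c := range screens { // broadcast the current frame to every viewer
                c <- f
            }
            for i := 0; i < nViewers; i++ { // collect every ACK before changing the picture
                <-ack
            }
            fmt.Printf("everyone has frame %d; safe to show the next one\n", f)
        }
        for _, c := range screens { // shut the viewers down cleanly
            close(c)
        }
    }

Presumably the new occam version would have the same shape: a replicated PAR of viewers and a broadcaster that gathers the acknowledgements, with no shared store anywhere.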
(1) Usability on all levels! The original occam development fenced itself into the Transputer side of the B008. The new occam needs to support an OS; it needs to support scripting; it needs to run on heterogeneous systems and get them to communicate ACCORDING TO ITS MODEL; it needs to support drivers and ISRs; and it needs ways of communicating with non-occam systems and processes through things like sockets and USB connections, again according to its model. It therefore needs to pay attention to startup and shutdown, which are always the most difficult part of any communicating system.

(2) Dynamic resource appropriation and use. This is the most elementary operation of an OS (type a program name at the bash prompt and run it), yet old occam hid its head from it. It can be accomplished with a SINGLE "wild" heritage (like occam-pi) that spawns multiple, parallel "tame" processes (like strict occam) and then takes back their resources when they are done; there's a first sketch of this below, after point (4). Everything is easy, and the resource model stays inviolate, if you do this. Especially with virtual machines, you get almost all the resource flexibility of a standard, dynamic, spaghetti OS with none of the drawbacks. The OS is then just another program in our language. So are all the drivers. We rule the whole world, just like C.

(3) Strict adherence to the software/hardware equivalence. This allows whatever extensions are consistent with our requirement of true, black-box modularity. One example is the 2D barcode snapshot broadcasting (ACK through closing series switches) mentioned above. Another is "sneakernet" using data containers with return addresses (a limited use of mobile channel ends, consistent with a well-scoped many-to-one channel).

(4) Consistent with (3) in "the other direction", we need component formal verification, so that we can take an occam software module (simple example: a FIFO), confirm that a particular hardware implementation is completely consistent in behavior with that module, and then legally map our program onto hardware that uses the hardware implementation in place of compiled software; the second sketch below shows the FIFO as a software module. Then the sky's the limit in special-purpose efficiency.

The great temptation seems to be to violate (3) and go off after any feature anyone ever advertised as valuable. We need to resist that rigorously, because the real value is more and more applications of (4).
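Here's the first sketch, for point (2), again in Go purely for illustration (the workspace pool, sizes, and process names are all invented): a single "wild" parent that owns a fixed pool of workspaces outright, lends one to each "tame" process it spawns, and takes it back the instant that process terminates.

    package main

    import (
        "fmt"
        "sync"
    )

    const workspaceSize = 1024 // every tame process gets exactly this much

    // tame does its work entirely inside the workspace it was lent,
    // then hands it back by calling done.
    func tame(id int, ws []byte, done func()) {
        defer done()
        ws[0] = byte(id) // stand-in for real work confined to ws
        fmt.Printf("tame process %d ran in its own %d-byte workspace\n", id, len(ws))
    }

    func main() {
        // The wild parent owns a fixed pool of workspaces outright.
        pool := make(chan []byte, 2)
        pool <- make([]byte, workspaceSize)
        pool <- make([]byte, workspaceSize)

        var wg sync.WaitGroup
        for id := 0; id < 5; id++ {
            ws := <-pool // appropriate a workspace (blocks until one is free)
            wg.Add(1)
            go tame(id, ws, func() {
                pool <- ws // reclamation: the resource goes straight back to the parent
                wg.Done()
            })
        }
        wg.Wait() // every workspace is back in the pool at this point
    }

When the last tame process finishes, the parent holds every byte it started with; nothing has leaked out of its accounting, which is the whole point.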
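And here's the second sketch, the FIFO example from (4), in the same spirit (Go again, invented names, not a claim about how the real tool would look): the software module is just a chain of one-place buffer processes. If a piece of hardware can be shown to have exactly this channel behavior, the compiler would be entitled to drop it in where this network sits.

    package main

    import "fmt"

    // cell is a one-place buffer process: read a value, pass it on, forever.
    func cell(in <-chan int, out chan<- int) {
        for v := range in {
            out <- v
        }
        close(out) // propagate shutdown down the chain
    }

    // fifo wires n cells in series, giving an n-place FIFO from in to out.
    func fifo(n int, in <-chan int, out chan<- int) {
        link := in
        for i := 0; i < n-1; i++ {
            next := make(chan int)
            go cell(link, next)
            link = next
        }
        go cell(link, out)
    }

    func main() {
        in, out := make(chan int), make(chan int)
        fifo(4, in, out) // the software module: a 4-place FIFO

        go func() { // producer
            for i := 1; i <= 6; i++ {
                in <- i
            }
            close(in)
        }()
        for v := range out { // consumer
            fmt.Println("delivered:", v)
        }
    }

The verification job in (4) is then to show that a candidate hardware FIFO is behaviorally equivalent to this little network, so that swapping one for the other changes nothing the rest of the program can observe.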
Sign me up! I'll find some funding somewhere ;-) My belief is that programming the thousand-core chip will prove much easier than people expect, if we properly design our approach along these lines. Therefore we also need (Eric, are you listening?) some examples of realistic but hard problems to design toward with our new features.

Larry