Hi David,

I accept that enough processors and interconnect will often obviate prioritisation, but I'm not yet convinced that they always will. As with other things, there will always be a way to express what you want with the model (language) that you have, but it may not be as simple and transparent as with another (a sketch of this follows below).

There is precedent on my side as well. We humans make use of priority all the time in the way we organise. Maybe it's true that we'd do without it if there were enough of us and we could communicate sufficiently well. But communication seems to break down when our systems get large, as we know. Could it be that when you rely too heavily on communication, it becomes exponentially harder to do (or to program)?

My experience with large class frameworks (like Cocoa) is not encouraging. While the classes may (or may not) be modular, the communication between them is certainly not. It can be very hard to achieve something very simple, like dropping a box of paper clips and trying to pick up just a few.

If a few well-contained shared variables can replace a whole mess of channels, perhaps there is a place yet for them (see the second sketch below). I will admit I'm hard-pressed to come up with a convincing example; such things sometimes need to wait until they present themselves, and I'm no longer in a situation where I meet many real problems in system design.

There remains at least the formal and philosophical question of how to build priority into a programming language. It's been fun to think about.

cheers
Ian

PS I really wish I had the time and excuse to play with an XMOS board…

On 1 Oct 2012, at 18:41, David May wrote:
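A rough sketch of the first point, that priority can be expressed in a model which lacks it, only less simply and less transparently. Go's select has no prioritised form (occam's PRI ALT does), so preferring one channel over another takes a two-stage poll-then-block pattern. The channel names and messages below are invented for illustration, and the pattern is only approximate: Go's select picks at random among ready cases, so when both channels become ready at once the blocking stage may still take the low one.

// Sketch: emulating a prioritised choice (occam's PRI ALT) in Go,
// which has no such construct. All names are invented for illustration.
package main

import "fmt"

func serve(high, low <-chan string, done <-chan struct{}) {
	for {
		// First poll the urgent channel without blocking.
		select {
		case m := <-high:
			fmt.Println("urgent:", m)
			continue
		default:
		}
		// Nothing urgent pending: block on all sources. The flaw that
		// motivates a real PRI ALT: if high and low become ready
		// together, this select may still pick low.
		select {
		case m := <-high:
			fmt.Println("urgent:", m)
		case m := <-low:
			fmt.Println("routine:", m)
		case <-done:
			return
		}
	}
}

func main() {
	high := make(chan string)
	low := make(chan string)
	done := make(chan struct{})
	go func() {
		low <- "log entry"
		high <- "alarm"
		low <- "log entry"
		close(done)
	}()
	serve(high, low, done)
}

The occam equivalent is a single PRI ALT with the alternatives simply listed in priority order, which is rather the point.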
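And a sketch of the shared-variable point: a counter guarded by a mutex, standing in for what would otherwise need a fan-in channel plus a dedicated owner process to serialise updates and answer reads. Again all names are invented; this is one small reading of "well-contained", not a claim that it generalises to a convincing real case.

// Sketch: a well-contained shared variable in Go, replacing a fan-in
// channel and an owner goroutine. Names invented for illustration.
package main

import (
	"fmt"
	"sync"
)

// Tally is the shared variable; the mutex is its containment.
type Tally struct {
	mu sync.Mutex
	n  int
}

func (t *Tally) Add(delta int) {
	t.mu.Lock()
	defer t.mu.Unlock()
	t.n += delta
}

func (t *Tally) Value() int {
	t.mu.Lock()
	defer t.mu.Unlock()
	return t.n
}

func main() {
	var t Tally
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			// One line of sharing, versus a channel send, a select
			// loop in an owner process, and a reply channel for reads.
			t.Add(1)
		}()
	}
	wg.Wait()
	fmt.Println("total:", t.Value())
}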
Ian East
Open Channel Publishing Ltd. (Reg. in England, Company Number 6818450)