Re: Quote from CACM paper: cost of parallelism
Larry Dickson wrote:
> > I would agree with Eric; the Cell is an interesting design but far from
> > easy to program for. I do seem to recall that someone at Kent did some
> > work with the transterpreter on the Cell. Maybe my memory is faulty.
> Sorry to be a little slow on this, but...
> I think the key task for our side is what the previous poster talked
> about - the Cell processor (and similar). We all know that it should
> be + EASY + to program the Cell, because it's just a slightly
> disguised PC and B008 with 8 transputers ;-) But none of us that I
> know of, including myself, has actually done anything about this...
> And the key is what Rick says, we start from the wrong place. Namely,
> the mountain of massive OS constructs and their insistence on hiding
> the "bare metal". The Transputer was a big technical success because
> it was driven from DOS, a totally minimalistic non-OS that allowed you
> to go around it and whack away, in standard code, at things like DMA
> addresses. Now we have to tiptoe around the whole attic full of
> exploding OS and driver constructs, never doing a real design (like a
> classic car), and the effort involved is not only triple or more, but
And this is where I get disappointed. While I can see where Larry is
coming from (I too appreciated the ability to program on bare metal on
the transputer), it is no longer practical with the sorts of systems
being designed and the nature of the solutions being demanded because
ultimately, you end up having to reinvent the wheel. The OS constructs,
in the main, are aimed at (a) supporting legacy apps and (b) providing a
kit of parts to make applications easier to write. While I can agree
that you can eliminate that in some embedded environments, you can't do
so on mainstream desktop OSs, which are the ones feeling the pinch right
now.
From my (these days) very "industrial" view, the place to start is
simply where we are - namely, often large apps of mainly C code written
by various authors over many years, with tight deadlines and demanding
management. Pretending that we have the luxury of redesigning either
OSs or the raw silicon is a fairy tale, not because we can't see why it
might be useful or interesting, but because (in the case of the
million-odd line codebase I'm thinking of) it would probably take over 5
years for a competent team of people. We don't have that time to spend,
let alone the money to pay people. Moreover, in many cases we are also
constrained tightly by our own customers, who say things like "we'd love
your code, but it must run on weird processor X using niche compiler Y".
We have to be able to write the code using very portable constructs. No
gcc-isms here, I'm afraid.
I was hoping, in posting the original email, that people might be able
to say "well, if you do things *this* way..." but it seems not.
At present the general computing world is being dragged by the scruff of
its neck to face parallel programming head on. I believe it won't be
long before even the current luxury of true shared memory will be left
behind and we'll all be in NUMA land. For CSP to be part of that new
world depends on proponents being able to provide usable solutions to
the problems real people face.
A year or so ago, I was hopeful that the CSP community would be ready to
take up this challenge, but I'm becoming less hopeful now.