March IEEE Computer Diatribe on Java
Dear Ruth and all,
>>why are we getting into this mess in 1997 when everything to do with
>>things like race hazards on double-buffer FIFOs was *completely*
>>sorted out 20 years ago (CSP etc.).
>Seriously, we, for better reasons or worse, seem to have been unable to get
>the basic message about occam through. The world seems to think of events
>as things Windows throws around ...
>>there is a wealth of literature and industrial practice on doing this
>>properly, but it's being forgotten ...
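The "race hazards on double-buffer FIFOs" mentioned above are exactly what CSP-style channels rule out by construction. A minimal sketch in Go (whose channels descend from CSP): a buffered channel of capacity 2 plays the role of the double buffer, and the channel handshake guarantees producer and consumer never touch the same slot at once - no locks, no hazard analysis.

```go
package main

import "fmt"

func main() {
	// A buffered channel of capacity 2 acts as a double-buffer
	// FIFO: the producer may run at most two items ahead of the
	// consumer, and the channel discipline makes a race on a
	// shared slot impossible by construction.
	fifo := make(chan int, 2)

	// Producer process: fills the FIFO with ten items.
	go func() {
		for i := 0; i < 10; i++ {
			fifo <- i
		}
		close(fifo)
	}()

	// Consumer process: drains the FIFO in order.
	sum := 0
	for v := range fifo {
		sum += v
	}
	fmt.Println(sum) // 45
}
```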
Knowledge is being lost at a frightening rate, because people are becoming
prisoners of their tools, not masters of them. Here's a notice I
circulated after the PDPTA'96 conference (this issue is obviously just one
example of the larger problem):
*************** WARNING - to PDPTA'96 participants ***************
I noticed several presentations which used assumptions equivalent to
Time >= Comms + Compute
when estimating costs of parallel computing. This in spite of the
fact that a few years ago, in the heyday of the Transputer,
Time > Max(Max(Comms),Compute)
was well known to be true - and MUCH more efficient! Use of the
first assumption - and I noticed several OTHER presentations which
appeared to assume it in design - results in bad design and
overspecification! DON'T GIVE UP GROUND GAINED!!! See my paper's
"burden bandwidth" versus "timing bandwidth". Larry Dickson
********************** KNOWLEDGE BEING LOST **********************
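The gap between the two cost models in the notice is easy to make concrete. A small arithmetic sketch (all figures invented for illustration): when the links run concurrently with each other and with the computation, total time is bounded by the slowest single activity, not by the sum of all of them.

```go
package main

import "fmt"

func maxf(a, b float64) float64 {
	if a > b {
		return a
	}
	return b
}

func main() {
	// Illustrative per-step costs in arbitrary time units
	// (invented numbers): three links plus one computation.
	comms := []float64{3.0, 2.5, 1.0} // per-link transfer time
	compute := 4.0                    // computation time per step

	// Naive model: Time >= Comms + Compute (everything serialized).
	serial := compute
	for _, c := range comms {
		serial += c
	}

	// Overlapped model: Time > Max(Max(Comms), Compute) -- links
	// run concurrently with each other and with the computation,
	// so only the slowest single activity matters.
	slowest := 0.0
	for _, c := range comms {
		slowest = maxf(slowest, c)
	}
	overlapped := maxf(slowest, compute)

	fmt.Println(serial, overlapped) // 10.5 4
}
```

With these numbers the serialized estimate is more than double the overlapped one, which is the overspecification the notice warns against.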
>We DO dare mention occam, as the only language I'd even consider writing
>seriously parallel programs in; It is surely worth the mention.
I think the source of the problem is one or two wrong turns the software
community made a generation ago. Since then we've just been getting worse:
(A) STACK - C is a stack-based language (pace J. Navas below). Its
successor C++ builds on stacks and trees. Java is a stack-based language,
entangled in C++-style classes and objects (Computer Design, 3/97). Classes
and object-oriented programming are tools searching for a use.
WE NEED STACKLESS LANGUAGE(S)!!! Then most of the complexity and bloat
problems vanish like the morning mist. Load-time component programming
becomes possible. Real occam is stackless, except atomic calls like
(B) SEQUENTIAL COMMS - See my notice above. Embedded programmers use
many-level prioritizing and other sledgehammers to solve a bandwidth problem
that isn't there. Real occam cleanly interleaves all comms using interrupt
scheduling, which is much EASIER to understand than the glorified "printf"
approach once code reaches any complexity.
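occam's ALT lets one process service whichever of its links is ready, with the scheduler doing the interleaving. Go's select gives a similar flavour; a sketch (channel names invented) of one process fairly servicing two independent links with no priority machinery at all:

```go
package main

import "fmt"

func main() {
	// Two independent "links" feeding one process. As with
	// occam's ALT, whichever channel is ready is serviced;
	// there is no fixed polling order and no priority levels.
	sensor := make(chan int)
	control := make(chan string)

	go func() {
		for i := 1; i <= 3; i++ {
			sensor <- i * 10
		}
		close(sensor)
	}()
	go func() {
		control <- "start"
		close(control)
	}()

	total, msgs := 0, 0
	for sensor != nil || control != nil {
		select {
		case v, ok := <-sensor:
			if !ok {
				sensor = nil // link closed; stop selecting on it
				continue
			}
			total += v
		case m, ok := <-control:
			if !ok {
				control = nil
				continue
			}
			_ = m
			msgs++
		}
	}
	fmt.Println(total, msgs) // 60 1
}
```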
(C) SYSTEM COMPLEXITY - occam did not deal with system questions, and we
have gotten trapped on the merry-go-round of many-layered system complexity
and insulation from knowledge. I believe I have solved this problem (see my
PDPTA'96 article) with my tame child/wild child approach. But it's a claim
that needs independent checking.
May I ask a favor of the formal-methods authorities on this list: If you
can find the time, could you critique my paper and see if it is on solid
ground? If you need a copy or more info, contact me (tjoccam@xxxxxxxxxxxxx).
Following is an exchange I had with a J. Navas - a typical example of
things that "everyone knows" that, in many cases, just aren't true. Looked at
the right way, it's an opportunity for us occam folks.
>>Quotes from my letter re J. Navas:
>Quotes from J. Navas' reply:
>With all due respect, my experience is that all microkernels inevitably
>result in bloat when they are scaled up to handle real world tasks.
I've used mine in several commercial projects, including even automotive
radar, and kernel+code never exceeded 200 kilobytes. After I devised the
system extension, PARTS (independent programs that are joined together)
became even smaller, a few tens of kilobytes.
>One of the many problems is that
>performance and efficiency go down as the number of layers increases.
My system eliminates layers. See my paper. Every program can hook straight
into (its part of) the hardware. Efficiency is the maximum available from
interrupt and DMA, and the "burden bandwidth" concept, explained in the
paper, permits very large numbers of slow control channels without any
data-bandwidth penalty.
>> I just saw a big article (EE Times, January 27, p 24) hailing a release
>>that "lets designers use Windows software in real-time systems." Suppose
>>you visit your auto parts store and find something that "lets motorists use
>>Chrysler Corporation electricity in GM light bulbs." Would you expect them
>>to hail it as a major breakthrough?
>>Any teenager should be able to tear apart and rebuild software as
>>easily as a car!
>That would be nice, but just isn't practical.
Even my most complex example is completely defined by a few dozen short
(typically two to four line) batch-like text files displaying command line
parameters that completely state the data connections. The result is a
simple description that can be displayed in a one-page schematic diagram in
which substitutions can be made at will following rigorous rules. A teenager
(my son Tom) HAS torn apart and rebuilt this stuff, and the analogy to
physical parts in a car is real (and enforced).
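To make the idea concrete, a wiring file of the kind described might look something like the following (entirely hypothetical: the component names, parameter syntax, and file name are invented for illustration, and the actual notation in my system may differ). Each line names one component and states its data connections as command-line parameters:

```
REM wire.bat - hypothetical component wiring (names invented)
filter  in=adc0      out=fifo1
logger  in=fifo1.tap out=disk0
display in=fifo1     out=screen0
```

The point is that the whole connection topology is visible in a few short lines, so a part can be swapped by editing one line, just as a physical part is swapped in a car.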
>> "In the operating system one time" vs "in applications multiple times" are
>>not the only options for shared code. Has Mr Navas never heard of TSRs and
>You bet -- TSR's in particular are an architectural nightmare.
The notion of "tame child" and "wild child," explained in my paper,
encompasses TSRs, shelling out, daemons and GUIs in a logical and
demonstrably workable fashion. Live code examples (a homemade network) show
that it works.
>There are lots of small OSes. My personal favorites are OS/9, QNIX, and
>GEOS. Unfortunately, they are only suitable for special purposes.
Mine has worked on seven commercial projects, some of them major. These
projects were not chosen to fit the kernel. Many were grinding to a halt
until the kernel came to their aid.
>> Windows doesn't deliver what users want to buy, it forces users to churn
>>(see F. Homan's letter).
>It delivers what users want to buy: Excel, Word for Windows, etc.
By Mr Navas' own examples it has stalled progress, since those capabilities
have existed for decades with or without Windows! Software has grown
tremendously in size, but very little in function. Of the niches not served
well by the current bloated architecture, many if not most will be well
served by a lean distributed architecture that conquers complexity problems
with ease - and this I have demonstrated.