
Re: concurrency research "hot" again?



Eric Verhulst wrote:
OS in kilobytes? No, I am not kidding.

Some examples:

- Occam. You might not consider it an OS, but it provides the essential
runtime functions, and the people from Kent can give you more concrete
figures. I gather that the Transterpreter code (and that is more a VM) fits
in about 10 KB.

- QNX. I still have a floppy somewhere that boots and brings up a GUI with a
browser that connects to the internet. In case you have forgotten, such a
floppy holds no more than 1.44 MB. QNX is also a COTS OS with a
message-passing based architecture. Not as pure as CSP, but a lot cleaner
than monolithic monsters like Linux and Windows. For Vista one now needs
1.5 GB of RAM, enough to put you off Windows forever. QNX has been bought by
Harman/Becker, and if you happen to have a GPS/multimedia unit of that brand
in your car, it is likely to run QNX.

- Our own OpenComRTOS uses a CSP-like model (sketched briefly below) and can
be made to fit in 1 KB (single processor) or 2 KB (multiprocessor with full
topology transparency). That is the L0 layer, with static linking as in
occam. The L1 layer adds the more traditional RTOS services and is still
being completed, but it too will fit in a few KB. The reason for this small
size is that we used formal modeling. The previous RTOS we developed,
Virtuoso (the first pre-emptive RTOS on the transputer, in 1991), was about
10 times bigger, but still no more than roughly 50 KB.
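
For anyone who has never seen the CSP style, here is a rough sketch of the
idea, written in Go only because its channels descend directly from the
CSP/occam model. It is an illustration, not OpenComRTOS code or its API; all
the names in it are made up for the example.

// Two tasks that share no memory and interact only by synchronising
// sends and receives over a typed channel.
package main

import "fmt"

// producer sends a few samples over an unbuffered channel, then closes it.
func producer(out chan<- int) {
    for i := 0; i < 5; i++ {
        out <- i // blocks until the consumer is ready: a rendezvous, like occam's '!'
    }
    close(out)
}

// consumer receives until the channel is closed, then signals completion.
func consumer(in <-chan int, done chan<- struct{}) {
    for v := range in {
        fmt.Println("received", v)
    }
    done <- struct{}{}
}

func main() {
    c := make(chan int)       // unbuffered: sender and receiver synchronise
    done := make(chan struct{})
    go producer(c)
    go consumer(c, done)
    <-done // wait for the consumer; no shared state anywhere
}

The unbuffered channel is the essential point: a send does not complete until
the matching receive is ready, which is exactly the rendezvous that keeps the
model small and easy to reason about formally.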

I am not a professor, although I do some research. The OSes above are used
in industry. OpenComRTOS is deployed on smart sensors with an internal
16-bit micro, 32 KB of flash and 2 KB of RAM (for data). It also runs on top
of Windows, and ports are underway to e.g. the Cell processor, FPGAs
(MicroBlaze) and others.

I am more astonished that most "professors" in computer science (not all,
mind you) keep on teaching these monolithic, so-called OO models requiring
megabytes. CSP is unfortunately more or less forgotten. The good news is
that, thanks to the upcoming multicores, it is being rediscovered out of
necessity. But I find it shocking to see how much elementary knowledge from
the transputer days seems to have been lost. Old geeks like this group are
keeping it alive, ...
And my first love, CP/M... We upgraded from an 8080 machine to a Z80: the system had a 2k driver and initialisation package (ours) and loaded 8k of code containing the disk system, console interpreter and task manager. For real-time stuff we had a 2k "thread manager". The system could run from an 8" floppy disk... We used them to automate rather complex experiments, and ISTR that the most complex code built for it was a 4-machine/player networked game system in the style of the (later) Doom, using glass teletypes...

The definition of what the operating system was, was largely another people issue - the operating system divided the hardware/infrastructure people from the experiment designers, who rather liked using Fortran 77 on this machine. Over the years, I've observed that people issues are the dominant force defining what an operating system is - generally a system's complete set of attributes is driven by actual requirements, and the operating system is used both to reuse common code and to separate "trusted people domains".

Stephen