
Re: "No aliasing = no garbage collection"

At 18:26 22/09/00 -0700, Lawrence Dickson wrote:
>    I think you have some interesting points, but are taking some things
> for granted - a problem everywhere now that programming is such a
I was worried by the proposition, which seemed to be accepted, that more memory was cheap, and I was trying to disprove it... I am quite aware that there are many applications where it is perfectly reasonable to say 'we don't care how much memory the app takes -- the machine will have enough'. I guess most desktop apps fall into this category these days.

> 1. What memory you have must be on-chip with the processor, which
> mostly limits it to small quantities of SRAM and slightly larger
> quantities of ROM or EPROM.

> I once had great fun making a decimation TRAM (for radar use) out of 4
> 16-bit Transputers with only their tiny allotment of on-chip memory.
> All in occam and assembly - it required tight coding of course.

Strangely enough, I once had a similar system to design -- although mine was an error corrector for a tape data stream. T2s were underrated, IMO.

> (and that is perhaps understating the multiple). Consequently, adding a
> $10 DRAM to a design is not an option -- it would add $100 to the
> selling price.
> But that applies to CPUs too!

I know, but people weren't talking about throwing in two processors, just 2 memory chips.

> Microchip Technologies PIC processors
> that zip along in assembly are much cheaper than ones that use C,

Well actually there is a PIC 'C' compiler, sold as part of the £150 SDK...

> which in turn are cheaper than Java native chips using object oriented
> language... occam natives should be somewhere between assembly and C

I hope so... occam should be at least as good as C, although I think we have yet to prove that in the sequential case.

> 3. Items like mobile phones typically have huge memory requirements --
> 3-5MB ROMS is not unusual, and RAM is also needed in large quantity.
> And yet if you look inside these devices you find only a couple of
> chips. Doubling the physical space required for memory is not possible
> -- it doesn't fit.

> A clean small language ought to nicely reduce that ROM, and I'm not

The ROM is huge because the phones have to have protocol stacks for several different, complex and incompatible comms systems. There is a lot of almost-duplication, but I would think that the programming time constraints are such that it is not economic to merge them more tightly.

> sure you couldn't halve your RAM needs in many cases... not all, of
> course... things like phone lists do need to be pretty dynamic. BUT
> SEE PETER'S NOTE, BELOW. Real usable information is usually pretty
> "flat" and probably could qualify for the alias control he describes!

It is probably the case that once you have fitted your code into a 2^n MB ROM, it is inefficient to reduce the size further, even if it is not too difficult. The next time around, however, you may have to because you need some other whizzo feature. Commercial software is *so* much fun!

I'm not sure I understand 'flat' here, but I think I largely agree.

> I believe there should be a garbage collection scheme added to occam.
> It is a great language for many things, but it is let down in areas
> which people care about intensely.
> I think that 90% of the utility could be achieved by a linked list
> management PROGRAM without damaging the roots of the language. I heard

I don't understand what you mean by "PROGRAM" here. Lists are certainly a basic feature, and they are hard to implement without resorting to fixed-size arrays of objects, array indexing and explicitly coded manipulation. Last time I got into that, I was forever worrying whether I had enough 'X' objects or enough 'Y' objects, and wishing that the two pools could share their storage.

I am not really sure how best to solve this. Putting a dynamic allocator into the language, so you could say, for example:

PROC foo (VAL INT sz)
  [sz]INT array:  -- not legal occam today: sz is not a compile-time constant
  ...  use array
  -- array freed here by compiler
:

might go some way, but I suspect you would also need global scoping, which would mess things up again.

> that the Apollo program got to the moon on 38K of memory. I think real

Well, the 1802 was a rather smaller chip than we use these days! It was significantly less powerful than the 6502.

> time and embedded coding needs a foundational rework to bring it back
> in that direction. (A DOS-like OS can give you total control of the
> resources and then get out of your way... and occam program modules
> can communicate as state machines correctly with just one layer of
> dynamic resource allocation.)

I suspect there are a large number of people out there who have moved into embedded systems from a desktop background and who misuse dynamic allocation. However, I think there are good reasons why its use need not conflict with the predictability or timeliness requirements of typical embedded systems.

Peter wrote:
> >    A somewhat weaker predicate might be "if we have aliasing
> >    under control, GC may also be under control, in a predictive
> >    way." Is this true? Objects would still be taken from the
> >    heap, the heap would still be fragmented and a need to
> >    defragment it would arise? Or could objects be taken from
> >    the stack (possible in RT-Java) in that case?

My naive answer would be that there is no way you can avoid GC in a dynamic allocation system simply by knowing about aliasing. One reason is that, in a system using dynamic allocation, knowing enough about aliasing seems to require a run-time alias checker, which seems even worse than GC.

> Tom discussed the various and very different ways in which we use
> references in OO systems.  These differences are alarming for what
> seems to be the same mechanism - he showed examples where that
> mechanism causes great harm (e.g. by breaking data encapsulation and
> making component based design unsafe).

What little I have heard on the various proposed RT-OO systems leaves me cringing.

> One major class of the use of references can be put under occam-like
> alias control - not all, of course (e.g. for graph data structures like
> doubly linked lists).

... of course linked lists, and lists-of-lists, are almost *the* dynamic structure we need.

> Those that can be controlled are guaranteed to only have one reference.
> When that reference is lost (e.g. overwritten) the space for that
> object can be automatically de-allocated.  Using Brinch-Hansen's

I have never liked systems which rely on noticing the overwriting of some base pointer. I'd much rather the object were explicitly freed, or freed at some well-defined point (e.g. an alloca() allocation being released when its function returns).

Using the GC schemes invented for research and teaching languages like Lisp and Scheme seems wrong to me for an embedded/real-time language like occam.

> algorithms for parallel recursion workspace allocation - as modified by
> David Wood and presented in his paper at CPA 2000 - alias controlled
> single reference objects can be allocated and de-allocated in *unit*
> time with an average space wastage of only 25% ... which looks like a

Did I misunderstand David's talk then? 25% seems much lower than the numbers I was thinking of in the talk.

> big win for real-time systems ...  the new rt-java draft specification
> for a real-time Java API just ducks the whole issue (they have unit
> time allocation and *no* de-allocation - if I follow it right - which


> But alias control can't be done with Java or C# or ... a better
> language is needed that takes these issues seriously.

... although modern C compilers are taking the issue seriously: they do most of what is possible, and language changes have been proposed to make more possible. I doubt C can get as good as occam already is, but it is getting pretty good.