I've now read both Per Brinch Hansen's new SIGPLAN paper (thanks, Peter)
and his earlier paper on efficient memory allocation. The new paper has a
simple thesis: that Java has made daft design decisions by trying to offer
safety in the sequential language design but doing nothing about secure
parallelism.  Indeed, Java actively misleads the user by pretending to
offer monitor security but actually enforcing none of the constraints. 
This paper is all well and good, and it is good to see the view exposed in
a public forum. 

I thought, however, that Peter had a much stronger result: that the
specification of the language allows (and in some cases mandates) a
run-time implementation which prevents reliable use of monitors even if
the programmer carefully avoids "erroneously" shared variables. Is that in
print, in as public a forum as SIGPLAN? If not, why not? 

The memory management story is also simple.  You calculate at compile time
the size of an activation record for each different recursively used
process.  The compiled code then manages a separate list of exact-size
buffers for each recursively used process. In-line code picks a buffer
from the list on process activation (or sbrk()'s a new one if the list is
empty) and returns the buffer to the list on process termination.  No
buffers are ever returned from these exact-fit lists to general free
store. The buffer lists just grow out of contiguous free store.

The total memory required is bounded, and is optimal if there is only one
recursive process---the typical teaching scenario. It all gets returned at
program termination.

Does this look useful for real codes?

Denis A Nicole                      WWW:   http://www.hpcc.ecs.soton.ac.uk/~dan 
High Performance Computing Centre   Email: dan@xxxxxxxxxxxxxxx                  
University of Southampton           Phone: +44 1703 592703                      
UK.                                 Fax:   +44 1703 593903