
Re: OCCAM, JOYCE, SUPERPASCAL AND JAVA



Adrian Lawrence writes:
> Denis A Nicole writes:
> 
> > The memory management story is also simple.  You calculate at compile time
> > the size of an activation record for each different recursively used
> > process.  The compiled code then manages a separate list of exact-size
> > buffers for each recursively used process. In-line code picks a buffer
> > from the list on process activation (or sbrk()'s a new one if the list is
> > empty) and returns the buffer to the list on process termination.  No
> > buffers are ever returned back from these exact-fit lists. The buffer
> > lists just grow out of contiguous free store.
> > 
> > The total memory required is bounded, and is optimal if there is only one
> > recursive process---the typical teaching scenario. It all gets returned at
> > program termination.
> > 
> > Does this look useful for real codes?
> 
> Prof Brinch Hansen sent me this paper and a couple of others. I was waiting 
> to see what other people thought. I too was surprised by this strategy: he
> does acknowledge it as a limitation at the end of the paper. 
> 
> I am probably being dim, but as far as I can see, the algorithm has to
> update shared variables and so invoke some synchronisation mechanisms.
> 
> For those that haven't seen the paper, here is part of an occamish outline 
> that I prepared earlier :-)  
> 
> The memory used by any particular process is never 
> released for reuse by other processes. So a single use of a deeply recursive
> process early in a program run might permanently consume most of the memory.
> He quotes exactly three algorithms: Quicksort, N-body pipeline and Laplace
> matrix. None of them seems to me to be very representative of the sort of 
> massively parallel and perhaps deeply recursive things that we might want.
> 
> Except that the maximum depth of recursion is determined at run time, the
> strategy is not unlike (Michael's?) textual replication trick for recursion.
>  
> -----------------------------------------------------------------------------
> The idea seems to be
>
> PROC release(VAL INT index, address)
>   SEQ
>     memory[address] := pool[index]  -- link the old list head into the freed buffer
>     pool[index]     := address      -- the freed buffer becomes the new head
> :
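
For completeness, the allocation half would presumably be something like the
sketch below. This is my reconstruction in the same loose style as the outline
above, not Brinch Hansen's code: the names free.top, size[] and empty are my
own, free.top stands in for the sbrk() break, and memory[] and pool[] are
assumed to be declared at an outer level as before.

VAL INT empty IS (-1):     -- marks an exhausted buffer list
PROC allocate(VAL INT index, INT address)
  IF
    pool[index] = empty
      SEQ      -- list empty: carve a fresh exact-size buffer from free store
        address  := free.top
        free.top := free.top + size[index]
    TRUE
      SEQ      -- otherwise pop the head of this process's buffer list
        address     := pool[index]
        pool[index] := memory[pool[index]]
: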

Until a recent discussion, I was so concerned with the shared-variable updates
in allocation, and with the unstated assumptions, that I had overlooked the
real advantage here: deallocation. At some cost.
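
In occam the natural way to pay that cost would be to serialise every pool
update through a single allocator process, letting channel communication do
the locking. A very rough sketch of mine (nothing of the sort appears in the
paper), reusing allocate and release from above, and ignoring the separate
question of how many client processes would share the request channels:

PROTOCOL REL IS INT; INT:      -- index; address being handed back
PROC allocator(CHAN OF INT alloc.req, alloc.rep, CHAN OF REL rel)
  WHILE TRUE
    ALT
      INT index:
      alloc.req ? index        -- a new activation asks for a buffer
        INT address:
        SEQ
          allocate(index, address)
          alloc.rep ! address
      INT index, address:
      rel ? index; address     -- a terminating activation hands its buffer back
        release(index, address)
: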

Adrian
-- 
A E Lawrence, MA., DPhil.  	adrian.lawrence@xxxxxxxxxxxxxx
MicroProcessor Unit, 13, Banbury Road, Oxford. OX2 6NN. UK.                
Voice: (+44)-1865-273274,  Fax: (+44)-1865-273275