I think I understand the explanation in Stack Overflow. Essentially this says that if a one-buffered channel is used, the sending process won't block even if the receiver is dead, and the process and channel will be garbage collected. Presumably in the implementation, a process's memory is reclaimed only when it terminates, whereas dangling channels are garbage collected.
I deduce it is not an error to have a program which outputs N items into a channel but inputs only N-1. (I know how you get there: how you end up with buffered channels, and then end up with the nasty consequences. I think the solution is to change the programming model so the cases that cause problems can be better expressed - occam and Go are too low level.) I think the buffered channel "fix" only works if the channel eventually becomes ready - if it doesn't, the process cannot complete.
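For concreteness, here is roughly the Go idiom I understand the Stack Overflow answer to be describing - a sketch only, with names and timings of my own invention, not taken from the original example. The send goes into a one-deep buffer, so even when the select takes the timeout branch the sending goroutine still completes and can be collected along with the channel:

package main

import (
    "fmt"
    "time"
)

func main() {
    // Buffer of one: the send always completes, even if the result
    // is never read, so the goroutine can terminate and both it and
    // the channel become garbage.
    c := make(chan string, 1)

    go func() {
        time.Sleep(2 * time.Second) // stand-in for slow work
        c <- "someThing"            // does not block, thanks to the buffer
    }()

    select {
    case message := <-c:
        fmt.Println("HandleMessage:", message)
    case <-time.After(1 * time.Second):
        fmt.Println("HandleLackOfMessage: timed out")
    }
}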
I can't be bothered to go through the whole of the background to this, and my knowledge of Go is only superficial. However, I did design and implement the occam implementation of InputOrFail etc., so I think I have some insight into this. However, I don't understand the motivation of this example, i.e. what the programmer is really trying to do. (I suspect that some problems are caused by having these mono-pole processes which fork but do not necessarily join.)

In occam we write an input with timeout as:

ALT
  InputChannel ? message
    HandleMessage(message)
  TIME ? AFTER timeout
    HandleLackOfMessage()

Of course, this doesn't really match the Go example, which is a bit like

CHAN c OF Thing :
PAR
  SEQ
    TIME ? t
    TIME ? AFTER t + delay
    c ! someThing
  ALT
    c ? message
      HandleMessage(message)
    TIME ? AFTER timeout
      HandleLackOfMessage()

which makes clear the deadlock in the case the ALT chooses the timeout path.
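For comparison, here is my sketch of the unbuffered Go version of that PAR (again with invented names): once the timeout branch wins, the sender is blocked on its send for ever, because no receiver will ever turn up. As far as I know, Go's runtime only reports a deadlock when every goroutine is blocked, so a single stuck sender like this simply leaks:

package main

import (
    "fmt"
    "time"
)

func main() {
    c := make(chan string) // unbuffered: a send must meet a receive

    go func() {
        time.Sleep(2 * time.Second)
        c <- "someThing" // blocks for ever if the timeout branch below wins
    }()

    select {
    case message := <-c:
        fmt.Println("HandleMessage:", message)
    case <-time.After(1 * time.Second):
        fmt.Println("HandleLackOfMessage: timed out; the sender is now stuck")
    }
    // main can still return; only the sending goroutine is left blocked,
    // which (as far as I know) the runtime does not detect or report.
}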
So, we might take the Go example as an example of problems with process termination rather than a problem with channels.
But, as I said, I don't really understand the semantics of Go. Is the system supposed to detect and suppress deadlocked processes?