
Re: Go - deadlocked processes causing memory leaks

The simplest response is, upon timeout, to attach a sink to the response channel so that the result can still be consumed and cleaned up. If you ask a process (whether transient or not) to do something and then communicate the result back to you, it is reasonable to accept the responsibility for ensuring that it's actually done.

In the short term you'd have two goroutines blocked waiting for each other, but as long as Get() is guaranteed to return you should be okay.

On Sun, 29 Mar 2015 at 16:19 Rick Beton <rick.beton@xxxxxxxxx> wrote:
"Because the channel is non-blocking, ..."
should read
"Because the channel is unbuffered, ..."

(edited version follows below)

On 29 March 2015 at 15:15, Rick Beton <rick.beton@xxxxxxxxx> wrote:
Hi all,

Is this a good community to ask a concurrency question specific to Go?

There is an interesting post on StackOverflow asking why memory leaks can occur when a service process doesn't complete due to timeout.

func Read(url string, timeout time.Duration) (res *Response) {
    ch := make(chan *Response)
    go func() {
        time.Sleep(time.Millisecond * 300)
        ch <- Get(url)
    }()
    select {
    case res = <-ch:
    case <-time.After(timeout):
        res = &Response{"Gateway timeout\n", 504}
    }
    return
}

Because the channel is unbuffered, the secondary goroutine blocks trying to send its response, and meanwhile occupies some memory. When the response is consumed, everything is cleaned up correctly. But when a timeout occurs instead, the secondary goroutine is left blocked forever, and the memory it holds is never recovered.

There is a simple proposed solution: use a buffered channel. The sending goroutine is never blocked and terminates immediately. The receiving goroutine either consumes the buffered message or skips it. Either way the memory is garbage-collected and no leak occurs.

I have a question: are there other practical solutions to this puzzle that allow unbuffered channels to operate without the deadlock described?


PS As an aside, it has been my observation that actor systems (in the Erlang and Akka style) can't suffer from this kind of deadlock because they don't allow blocking reads or writes. But do they therefore suffer from subtle memory leaks instead? I think it quite likely that they do.