Concurrency in Go

A quick guide to goroutines & channels

Emre Tanrıverdi
7 min read · Jan 17, 2021

Go is widely used for backend programming, and its community grows larger every day. Most people choose Go because of its easy-to-use concurrency features.

This story is a step-by-step guide to implementing concurrency in Go. It assumes you already know the basics of concurrency from other programming languages.

Why Go?

Lightweight: Creating a goroutine requires only about 2 KB of stack space, and that stack grows and shrinks as needed. For comparison, OS threads typically start with a 1 MB stack. A server that processes incoming requests can easily create a goroutine per request.

Cheaper than OS threads: Goroutines are created, scheduled, and destroyed by the Go runtime, which multiplexes them onto a small pool of OS threads, so spawning one doesn't require asking the OS for new resources.

Built-in communication between concurrent parts:
It's easy for the concurrent parts to work together using channels. With this approach, goroutines can be used in as many places as possible without breaking the program's sequential logic.

Sequential Code

Imagine that our program has a sequential code block like this:

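A minimal sketch of such a block (the User type and the helper signatures are assumptions; their dummy bodies follow in the next snippet):

```go
package main

// User is an assumed model used throughout these examples.
type User struct {
	ID   string
	Name string
}

// upsertToPostgres saves the user's id and name to Postgres.
// upsertToCouchbase saves the whole user document to Couchbase.
// Both are left as empty stubs here and dummy-implemented below.
func upsertToPostgres(user User)  {}
func upsertToCouchbase(user User) {}

func main() {
	user := User{ID: "42", Name: "emre"}

	upsertToPostgres(user)  // step 1: upsert userId and userName to Postgres
	upsertToCouchbase(user) // step 2: runs only after step 1 has finished
}
```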

Let’s dummy-implement the methods to make them runnable and let’s say:

upserting the userId to Postgres takes 20ms
upserting the userName to Postgres takes 10ms
upserting the user to Couchbase takes 5ms.

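A sketch of those dummy implementations, using time.Sleep to stand in for the real database calls, with a rough timing check in main:

```go
package main

import (
	"fmt"
	"time"
)

type User struct {
	ID   string
	Name string
}

func upsertToPostgres(user User) {
	time.Sleep(20 * time.Millisecond) // pretend: upsert userId
	time.Sleep(10 * time.Millisecond) // pretend: upsert userName
	fmt.Println("upserted to Postgres:", user.ID, user.Name)
}

func upsertToCouchbase(user User) {
	time.Sleep(5 * time.Millisecond) // pretend: upsert the user document
	fmt.Println("upserted to Couchbase:", user.ID)
}

func main() {
	start := time.Now()
	user := User{ID: "42", Name: "emre"}

	upsertToPostgres(user)
	upsertToCouchbase(user)

	fmt.Println("took:", time.Since(start)) // roughly 20 + 10 + 5 = 35ms
}
```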

This was the expected result: everything ran in order.

Goroutines

Let’s say we want to achieve concurrency by making upsertToPostgres work in a goroutine.

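A sketch of the same program with the go keyword added in front of the Postgres call (helper bodies shortened but equivalent to the previous snippet):

```go
package main

import (
	"fmt"
	"time"
)

type User struct {
	ID   string
	Name string
}

func upsertToPostgres(user User) {
	time.Sleep(30 * time.Millisecond) // pretend: upsert userId and userName
	fmt.Println("upserted to Postgres:", user.ID)
}

func upsertToCouchbase(user User) {
	time.Sleep(5 * time.Millisecond)
	fmt.Println("upserted to Couchbase:", user.ID)
}

func main() {
	user := User{ID: "42", Name: "emre"}

	go upsertToPostgres(user) // now runs in its own goroutine
	upsertToCouchbase(user)
	// main returns here, possibly before the Postgres goroutine has finished
}
```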

We just put the go keyword and that’s it.

Now we have two goroutines:

  1. The one that upserts to Postgres.
  2. The main goroutine.

And when we run it…

Wait, what? What happened here?

WaitGroups

Remember we had two goroutines.

In Go, when the main goroutine finishes, the program terminates, regardless of what the other goroutines are doing. So here, the program exited before the userName could be upserted to Postgres.

We should tell our program to wait for all the goroutines to finish before terminating.

The way to accomplish this is by using WaitGroups.

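A sketch with a sync.WaitGroup added; passing the WaitGroup into the goroutine as a pointer is just one way to wire it up (an anonymous function, shown in the next section, works too):

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

type User struct {
	ID   string
	Name string
}

func upsertToPostgres(user User, wg *sync.WaitGroup) {
	time.Sleep(30 * time.Millisecond) // pretend: upsert userId and userName
	fmt.Println("upserted to Postgres:", user.ID)
	wg.Done() // this goroutine is finished: decrement the counter
}

func upsertToCouchbase(user User) {
	time.Sleep(5 * time.Millisecond)
	fmt.Println("upserted to Couchbase:", user.ID)
}

func main() {
	var wg sync.WaitGroup
	user := User{ID: "42", Name: "emre"}

	wg.Add(1) // we have 1 goroutine to wait for
	go upsertToPostgres(user, &wg)

	upsertToCouchbase(user)

	wg.Wait() // block until the counter reaches 0
}
```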

wg.Add(1) tells the WaitGroup that there is 1 goroutine to wait for.
wg.Done() decrements that counter by 1.

wg.Wait() blocks until the counter reaches 0.

And when we run it:

Working as expected! It's concurrent, and it no longer terminates prematurely.

Anonymous Functions

Similar to other programming languages, we can create anonymous functions in Go (and run them in a goroutine).

Let's say we have a huge list of users and again we want to upsert them to Postgres, each in its own goroutine, so they don't wait for each other.
We can implement it easily like this:

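A sketch of that loop; the user list itself is made up for illustration:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

type User struct {
	ID   string
	Name string
}

func upsertToPostgres(user User) {
	time.Sleep(30 * time.Millisecond) // pretend: upsert userId and userName
	fmt.Println("upserted to Postgres:", user.ID)
}

func main() {
	users := []User{
		{ID: "1", Name: "a"},
		{ID: "2", Name: "b"},
		{ID: "3", Name: "c"},
	}

	var wg sync.WaitGroup
	wg.Add(len(users)) // one goroutine per user, added all at once

	for _, user := range users {
		go func(u User) { // anonymous function launched as a goroutine
			upsertToPostgres(u)
			wg.Done()
		}(user) // pass the loop variable in so each goroutine gets its own copy
	}

	wg.Wait() // wait for all of them
}
```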

This anonymous function is launched once for every user, so we know exactly how many goroutines there will be. That's why we simply call wg.Add(len(users)) once instead of calling wg.Add(1) on every iteration of the loop.

Defer Keyword

Alternatively, we can use the defer keyword before wg.Done(), such as:

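The same loop, sketched with defer moving wg.Done() to the top of the anonymous function:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

type User struct {
	ID   string
	Name string
}

func upsertToPostgres(user User) {
	time.Sleep(30 * time.Millisecond)
	fmt.Println("upserted to Postgres:", user.ID)
}

func main() {
	users := []User{{ID: "1", Name: "a"}, {ID: "2", Name: "b"}, {ID: "3", Name: "c"}}

	var wg sync.WaitGroup
	wg.Add(len(users))

	for _, user := range users {
		go func(u User) {
			defer wg.Done() // runs when this function returns, wherever that happens
			upsertToPostgres(u)
		}(user)
	}

	wg.Wait()
}
```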

Now wg.Done() runs when the goroutine's function returns, wherever that return happens. With this approach, we don't need to remember to put wg.Done() at the very end of the goroutine; we can declare it right at the top. The behaviour is the same as before, so which style to use is mostly a matter of preference (defer is also safer when the function has multiple return paths or can panic).

Concurrent Writes

Let's say this time we won't just make an external call and return nothing.
We also need to collect the results.

Imagine a scenario where we have a list of contentIds and need to call an external service that returns the details for each contentId.

It would be wise to run these calls in goroutines and collect the results concurrently, so the elements don't wait for each other to finish.

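A sketch of that scenario; getContentDetail is a made-up stand-in for the external service call:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// getContentDetail pretends to call an external service for a contentId.
func getContentDetail(contentID string) string {
	time.Sleep(10 * time.Millisecond)
	return "detail of " + contentID
}

func main() {
	contentIDs := []string{"c1", "c2", "c3", "c4", "c5"}
	details := make(map[string]string)

	var wg sync.WaitGroup
	wg.Add(len(contentIDs))

	for _, id := range contentIDs {
		go func(contentID string) {
			defer wg.Done()
			// BUG: several goroutines write to the same map at the same time.
			details[contentID] = getContentDetail(contentID)
		}(id)
	}

	wg.Wait()
	fmt.Println(details)
}
```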

And when we run it…

We got an error.

In Go, writing to a map from multiple goroutines at the same time is not allowed; the runtime detects it and stops the program with a fatal error.
Concurrent writes to a slice won't crash the program, but they are still a data race and must be synchronized as well (for example with a mutex), unless each goroutine writes only to its own index.

In the code block above, multiple goroutines manipulate the same map concurrently: each one makes an external call and then adds its result to the map.

We can fix this problem by serializing only the map writes while letting the rest of the code block keep running concurrently.
Writing to a map is a very fast operation compared to making an API call, so the lock is held only briefly.

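A sketch of the fix, adding a sync.Mutex around the map write:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

func getContentDetail(contentID string) string {
	time.Sleep(10 * time.Millisecond) // pretend: external service call
	return "detail of " + contentID
}

func main() {
	contentIDs := []string{"c1", "c2", "c3", "c4", "c5"}
	details := make(map[string]string)

	var wg sync.WaitGroup
	var mu sync.Mutex
	wg.Add(len(contentIDs))

	for _, id := range contentIDs {
		go func(contentID string) {
			defer wg.Done()

			detail := getContentDetail(contentID) // the slow call stays concurrent

			mu.Lock() // only the quick map write is serialized
			details[contentID] = detail
			mu.Unlock()
		}(id)
	}

	wg.Wait()
	fmt.Println(details)
}
```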

So we surround the map write with a mutex's Lock and Unlock calls.

And when we run it:

It works!

Channels

If we want two goroutines to work concurrently but notify each other when an event occurs, we use channels.

A channel can transport values of only one data type.

Let's say we want a consumer/worker/job to listen for messages from a Kafka topic all the time, and to upsert the message object to Couchbase whenever it receives one.

In such a scenario, two goroutines need to communicate.

Illustration by Trevor Forrey

We can implement it easily like this:

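A sketch of the idea; listenToKafka only simulates a consumer (a real one would read from a Kafka client), and upsertToCouchbase just prints:

```go
package main

import (
	"fmt"
	"time"
)

// Message stands in for whatever a Kafka consumer would hand us.
type Message struct {
	Key   string
	Value string
}

// listenToKafka simulates a consumer loop and sends each message on the channel.
func listenToKafka(messages chan<- Message) {
	for i := 1; i <= 3; i++ {
		time.Sleep(100 * time.Millisecond)
		messages <- Message{Key: fmt.Sprint(i), Value: "payload"}
	}
}

func upsertToCouchbase(m Message) {
	fmt.Println("upserted message to Couchbase:", m.Key)
}

func main() {
	messages := make(chan Message)

	go listenToKafka(messages) // goroutine #1: listens and sends

	for i := 0; i < 3; i++ { // goroutine #2 (main): receives and upserts
		upsertToCouchbase(<-messages)
	}
}
```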

It works!

Do not forget that sends and receives on a channel are blocking operations.

Once a goroutine sends data on a channel, the sending goroutine blocks until another goroutine receives the data sent on the channel.

Similarly, a receiving goroutine blocks while waiting for a value on a channel that nothing has been sent to yet.

Deadlock

But what if our channel isn't supposed to work forever?
This time, let's read some values from Couchbase and send them to a Kafka topic.

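A sketch of that flow; readFromCouchbase and the Kafka "send" are simulated, and readAndSend ranges over the channel:

```go
package main

import "fmt"

// readFromCouchbase pretends to read a few documents and sends them on the channel.
func readFromCouchbase(docs chan<- string) {
	for i := 1; i <= 3; i++ {
		docs <- fmt.Sprintf("document-%d", i)
	}
	// note: the channel is never closed
}

// readAndSend pretends to publish each received document to a Kafka topic.
func readAndSend(docs <-chan string) {
	for doc := range docs { // keeps receiving until the channel is closed
		fmt.Println("sent to Kafka topic:", doc)
	}
}

func main() {
	docs := make(chan string)

	go readFromCouchbase(docs)
	readAndSend(docs) // blocks forever once the sender is done:
	// fatal error: all goroutines are asleep - deadlock!
}
```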

This should work, right? Let’s run it.

We got a deadlock.

A deadlock is a state in which a goroutine is blocked with no possibility of ever being unblocked. The Go runtime includes a deadlock detector that stops the program with a fatal error instead of letting it hang silently.

The channel we created must be closed when its work is done, so that the range loop in readAndSend() can exit and the program can finish.

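The same sketch, closing the channel once the sender has nothing left to send:

```go
package main

import "fmt"

func readFromCouchbase(docs chan<- string) {
	for i := 1; i <= 3; i++ {
		docs <- fmt.Sprintf("document-%d", i) // pretend these come from Couchbase
	}
	close(docs) // done sending: lets the range loop in readAndSend exit
}

func readAndSend(docs <-chan string) {
	for doc := range docs {
		fmt.Println("sent to Kafka topic:", doc) // pretend Kafka producer call
	}
}

func main() {
	docs := make(chan string)

	go readFromCouchbase(docs)
	readAndSend(docs) // returns once the channel is closed and drained
}
```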

And when we run:

It works!

Channel Capacity

For the sake of simplicity, let’s just create something small.

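Something as small as this, for example:

```go
package main

import "fmt"

func main() {
	c := make(chan string) // unbuffered channel

	c <- "emre" // blocks: nothing is receiving yet

	fmt.Println(<-c) // never reached
}
```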

And let’s run this.

We got a deadlock again.

Since channel sends are blocking, c <- "emre" waits for another goroutine to receive the value. Nothing ever will (the receive comes later in the same goroutine), so a deadlock occurs.

We can create a buffered channel, which tells the program: don't block sends until the channel's buffer is full.

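The same sketch with a buffered channel of capacity 1:

```go
package main

import "fmt"

func main() {
	c := make(chan string, 1) // buffered: sends don't block until the buffer is full

	c <- "emre" // fits in the buffer, so the send doesn't block

	fmt.Println(<-c) // prints "emre"
}
```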

Select with Channels

This time, let’s say I need two jobs to invalidate and restart caches in my program.

One piece of data is more crucial, so its cache needs to be invalidated every 5 minutes.

The other one, on the other hand, can be refreshed every 30 minutes.

(For the sake of simplicity, I’ll demonstrate it with seconds in code.)

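A sketch of that setup, with two goroutines sending on c1 and c2 and main receiving from them one after the other:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	c1 := make(chan string) // crucial cache: should be invalidated every 5 seconds
	c2 := make(chan string) // regular cache: every 30 seconds is fine

	go func() {
		for {
			time.Sleep(5 * time.Second)
			c1 <- "invalidate crucial cache"
		}
	}()

	go func() {
		for {
			time.Sleep(30 * time.Second)
			c2 <- "invalidate regular cache"
		}
	}()

	for { // runs until the process is stopped
		fmt.Println(<-c1) // waits for c1
		fmt.Println(<-c2) // then waits up to 30s for c2, holding up the next c1 message
	}
}
```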

Normally, we’d expect this to work, but when we run it:

Since channel receives are blocking, main waits on one channel at a time: while it is blocked on c1 (or on c2 afterwards), messages on the other channel simply pile up and wait.

Go gives us a simple and elegant construct for exactly this situation: the select statement.

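The same sketch, with the receiving loop rewritten around select:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	c1 := make(chan string)
	c2 := make(chan string)

	go func() {
		for {
			time.Sleep(5 * time.Second)
			c1 <- "invalidate crucial cache"
		}
	}()

	go func() {
		for {
			time.Sleep(30 * time.Second)
			c2 <- "invalidate regular cache"
		}
	}()

	for { // runs until the process is stopped
		select { // handles whichever channel delivers first
		case msg := <-c1:
			fmt.Println(msg)
		case msg := <-c2:
			fmt.Println(msg)
		}
	}
}
```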

Voila!

I wanted to give a simple guide to implementing goroutines and channels in Go, without diving too deep into the OS- and architecture-level details of concurrency.

All feedback is welcome, and I hope this was helpful. :)

Thank you for reading! ❤️
