Go Channels and Communication: Complete Guide

In the previous post on goroutines, we learned how to create lightweight concurrent tasks. Now we'll master channels - Go's elegant way of communicating between goroutines.
"Don't communicate by sharing memory; share memory by communicating." - Go proverb
What Are Channels?
A channel is a typed conduit through which you can send and receive values between goroutines. Channels enable safe communication without explicit locks or condition variables.
Key Characteristics:
- Type-safe: Channels carry values of a specific type
- Thread-safe: Multiple goroutines can safely send/receive
- Blocking: Send/receive operations block until ready
- First-class: Channels are values that can be passed around
Your First Channel
package main
import "fmt"
func main() {
// Create a channel of integers
ch := make(chan int)
// Send value in goroutine
go func() {
ch <- 42 // Send 42 to channel
}()
// Receive value
value := <-ch // Receive from channel
fmt.Println("Received:", value)
}
Output:
Received: 42
Channel Basics
Creating Channels
// Make a channel
ch := make(chan int) // Unbuffered channel
buffered := make(chan string, 10) // Buffered channel (capacity 10)
// Channel types
var readOnly <-chan int // Can only receive
var writeOnly chan<- int // Can only send
var readWrite chan int // Can send and receive
Sending and Receiving
ch <- value // Send value to channel (blocks until received)
value := <-ch // Receive value from channel (blocks until sent)
value, ok := <-ch // Receive with status (ok is false if channel closed)
Closing Channels
close(ch) // Close channel (no more sends allowed)
// Check if closed
value, ok := <-ch
if !ok {
fmt.Println("Channel closed")
}
Important Rules:
- ❌ Cannot send on a closed channel (panics)
- ✅ Can receive from a closed channel (returns zero value)
- ❌ Cannot close a nil channel (panics)
- ❌ Cannot close an already-closed channel (panics)
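These rules can be seen in a short runnable sketch — a closed buffered channel drains its queued values first, then yields the zero value with ok == false:

```go
package main

import "fmt"

func main() {
	ch := make(chan int, 2)
	ch <- 1
	ch <- 2
	close(ch) // No more sends allowed

	v, ok := <-ch
	fmt.Println(v, ok) // 1 true (buffered values drain first)
	v, ok = <-ch
	fmt.Println(v, ok) // 2 true
	v, ok = <-ch
	fmt.Println(v, ok) // 0 false (channel closed and empty)
}
```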
Unbuffered vs Buffered Channels
Unbuffered Channels (Synchronous)
ch := make(chan int) // No buffer
go func() {
ch <- 42 // Blocks until someone receives
}()
value := <-ch // Blocks until someone sends
fmt.Println(value)
Characteristics:
- Send blocks until receive happens
- Receive blocks until send happens
- Synchronization point between goroutines
Use when: You need guaranteed synchronization
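A common idiom built on this guarantee is the done channel: an unbuffered chan struct{} used purely as a completion signal, never to carry data. This is a minimal sketch of the idiom:

```go
package main

import "fmt"

func main() {
	done := make(chan struct{}) // Carries no data, only a signal

	go func() {
		fmt.Println("working...")
		close(done) // Closing broadcasts to every receiver
	}()

	<-done // Blocks until the goroutine signals completion
	fmt.Println("finished")
}
```

Using chan struct{} makes the intent explicit: struct{} occupies zero bytes, so the channel can only ever signal, not transfer values.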
Buffered Channels (Asynchronous)
ch := make(chan int, 3) // Buffer size 3
// Send doesn't block until buffer is full
ch <- 1
ch <- 2
ch <- 3
// ch <- 4 // Would block here
// Receive doesn't block until buffer is empty
fmt.Println(<-ch) // 1
fmt.Println(<-ch) // 2
fmt.Println(<-ch) // 3
Characteristics:
- Send blocks only when buffer is full
- Receive blocks only when buffer is empty
- Decouples sender and receiver
Use when: You want to allow some asynchrony
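You can inspect a buffered channel with the built-in len (values currently queued) and cap (buffer size). These are useful for logging and debugging, but don't base synchronization decisions on them — the counts can change between the check and your next operation:

```go
package main

import "fmt"

func main() {
	ch := make(chan int, 3)
	ch <- 1
	ch <- 2
	fmt.Println(len(ch), cap(ch)) // 2 3 (two queued, capacity three)
	<-ch
	fmt.Println(len(ch), cap(ch)) // 1 3 (one value drained)
}
```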
Complete Example: Producer-Consumer
package main
import (
"fmt"
"time"
)
func producer(ch chan<- int) {
for i := 1; i <= 5; i++ {
fmt.Printf("Producing %d\n", i)
ch <- i
time.Sleep(500 * time.Millisecond)
}
close(ch) // Signal completion
}
func consumer(ch <-chan int) {
for value := range ch { // Iterate until closed
fmt.Printf("Consuming %d\n", value)
time.Sleep(1 * time.Second)
}
}
func main() {
ch := make(chan int, 2) // Buffer of 2
go producer(ch)
consumer(ch) // Blocks until channel closed
fmt.Println("Done")
}
Output:
Producing 1
Producing 2
Consuming 1
Producing 3
Producing 4
Consuming 2
Producing 5
Consuming 3
Consuming 4
Consuming 5
Done
The Select Statement
select lets you wait on multiple channel operations, like a switch for channels.
Basic Select
package main
import (
"fmt"
"time"
)
func main() {
ch1 := make(chan string)
ch2 := make(chan string)
go func() {
time.Sleep(1 * time.Second)
ch1 <- "from ch1"
}()
go func() {
time.Sleep(2 * time.Second)
ch2 <- "from ch2"
}()
for i := 0; i < 2; i++ {
select {
case msg1 := <-ch1:
fmt.Println("Received:", msg1)
case msg2 := <-ch2:
fmt.Println("Received:", msg2)
}
}
}
Output:
Received: from ch1
Received: from ch2
Select with Default (Non-Blocking)
select {
case msg := <-ch:
fmt.Println("Received:", msg)
default:
fmt.Println("No message available")
}
Select with Timeout
package main
import (
"fmt"
"time"
)
func main() {
ch := make(chan string)
go func() {
time.Sleep(2 * time.Second)
ch <- "result"
}()
select {
case res := <-ch:
fmt.Println("Got:", res)
case <-time.After(1 * time.Second):
fmt.Println("Timeout!")
}
}
Output:
Timeout!
Channel Patterns
Pattern 1: Fan-Out (Distribute Work)
Multiple workers process from a single source:
package main
import (
"fmt"
"sync"
"time"
)
func worker(id int, jobs <-chan int, wg *sync.WaitGroup) {
defer wg.Done()
for job := range jobs {
fmt.Printf("Worker %d processing job %d\n", id, job)
time.Sleep(500 * time.Millisecond)
}
}
func main() {
jobs := make(chan int, 10)
var wg sync.WaitGroup
// Start 3 workers
for i := 1; i <= 3; i++ {
wg.Add(1)
go worker(i, jobs, &wg)
}
// Send jobs
for j := 1; j <= 9; j++ {
jobs <- j
}
close(jobs)
wg.Wait()
fmt.Println("All jobs completed")
}
Output:
Worker 1 processing job 1
Worker 2 processing job 2
Worker 3 processing job 3
Worker 1 processing job 4
Worker 2 processing job 5
...
All jobs completed
Pattern 2: Fan-In (Collect Results)
Merge multiple channels into one:
package main
import (
"fmt"
"sync"
)
func producer(id int, ch chan<- string) {
defer close(ch) // Sender closes its own channel when done
for i := 0; i < 3; i++ {
ch <- fmt.Sprintf("Producer %d: item %d", id, i)
}
}
func fanIn(channels ...<-chan string) <-chan string {
out := make(chan string)
var wg sync.WaitGroup
for _, ch := range channels {
wg.Add(1)
go func(c <-chan string) {
defer wg.Done()
for msg := range c {
out <- msg
}
}(ch)
}
go func() {
wg.Wait()
close(out)
}()
return out
}
func main() {
ch1 := make(chan string)
ch2 := make(chan string)
ch3 := make(chan string)
go producer(1, ch1)
go producer(2, ch2)
go producer(3, ch3)
merged := fanIn(ch1, ch2, ch3)
for msg := range merged {
fmt.Println(msg)
}
}
Pattern 3: Pipeline
Chain processing stages:
package main
import "fmt"
// Stage 1: Generate numbers
func generator(nums ...int) <-chan int {
out := make(chan int)
go func() {
for _, n := range nums {
out <- n
}
close(out)
}()
return out
}
// Stage 2: Square numbers
func square(in <-chan int) <-chan int {
out := make(chan int)
go func() {
for n := range in {
out <- n * n
}
close(out)
}()
return out
}
// Stage 3: Filter even numbers
func filterEven(in <-chan int) <-chan int {
out := make(chan int)
go func() {
for n := range in {
if n%2 == 0 {
out <- n
}
}
close(out)
}()
return out
}
func main() {
// Build pipeline
numbers := generator(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
squared := square(numbers)
evens := filterEven(squared)
// Consume
for n := range evens {
fmt.Println(n)
}
}
Output:
4
16
36
64
100
Pattern 4: Worker Pool
Fixed number of workers processing jobs:
package main
import (
"fmt"
"time"
)
type Job struct {
ID int
Data string
}
type Result struct {
Job Job
Output string
}
func worker(id int, jobs <-chan Job, results chan<- Result) {
for job := range jobs {
fmt.Printf("Worker %d processing job %d\n", id, job.ID)
time.Sleep(500 * time.Millisecond)
results <- Result{
Job: job,
Output: fmt.Sprintf("Processed %s", job.Data),
}
}
}
func main() {
const numWorkers = 3
const numJobs = 10
jobs := make(chan Job, numJobs)
results := make(chan Result, numJobs)
// Start workers
for w := 1; w <= numWorkers; w++ {
go worker(w, jobs, results)
}
// Send jobs
for j := 1; j <= numJobs; j++ {
jobs <- Job{ID: j, Data: fmt.Sprintf("task-%d", j)}
}
close(jobs)
// Collect results
for r := 1; r <= numJobs; r++ {
result := <-results
fmt.Printf("Job %d result: %s\n", result.Job.ID, result.Output)
}
}
Context for Cancellation
Use context.Context to cancel goroutines gracefully:
package main
import (
"context"
"fmt"
"time"
)
func worker(ctx context.Context, id int, results chan<- int) {
for i := 0; ; i++ {
select {
case <-ctx.Done():
fmt.Printf("Worker %d: Cancelled\n", id)
return
case results <- i:
fmt.Printf("Worker %d: Sent %d\n", id, i)
time.Sleep(500 * time.Millisecond)
}
}
}
func main() {
ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
defer cancel()
results := make(chan int)
go worker(ctx, 1, results)
for {
select {
case result := <-results:
fmt.Println("Received:", result)
case <-ctx.Done():
fmt.Println("Main: Context cancelled")
return
}
}
}
Channels vs Mutexes: When to Use Each
Use Channels When:
✅ Passing ownership of data
✅ Distributing work to multiple workers
✅ Communicating results between goroutines
✅ Coordinating goroutine lifecycles
// GOOD: Passing ownership
func processOrders(orders <-chan Order) {
for order := range orders {
// Process order
}
}
Use Mutexes When:
✅ Protecting shared state (caches, counters)
✅ Short critical sections
✅ Simple data access without coordination
// GOOD: Protecting shared cache
type Cache struct {
mu sync.RWMutex
data map[string]string
}
func (c *Cache) Get(key string) string {
c.mu.RLock()
defer c.mu.RUnlock()
return c.data[key]
}
Common Channel Pitfalls
Pitfall 1: Sending on a Closed Channel
// ❌ BAD: Panics!
ch := make(chan int)
close(ch)
ch <- 42 // panic: send on closed channel
Pitfall 2: Closing a Channel Multiple Times
// ❌ BAD: Panics!
ch := make(chan int)
close(ch)
close(ch) // panic: close of closed channel
Pitfall 3: Deadlock (No Receiver)
// ❌ BAD: Deadlock!
ch := make(chan int)
ch <- 42 // Blocks forever (no receiver)
Pitfall 4: Goroutine Leak
// ❌ BAD: Goroutine leaks if channel never receives
func leak() <-chan int {
ch := make(chan int)
go func() {
ch <- computeExpensiveValue() // Blocks forever if no receiver
}()
return ch
}
Fix with timeout:
// ✅ GOOD: Use select with timeout
func noLeak() <-chan int {
ch := make(chan int)
go func() {
select {
case ch <- computeExpensiveValue():
case <-time.After(5 * time.Second):
return // Prevent leak
}
}()
return ch
}
An even simpler fix is a buffered channel: with make(chan int, 1), the send succeeds even if the caller never receives, so the goroutine can always exit.
Best Practices
1. Close Channels from the Sender
// ✅ GOOD
func producer(ch chan<- int) {
defer close(ch) // Sender closes
for i := 0; i < 10; i++ {
ch <- i
}
}
2. Use Directional Channels
// ✅ GOOD: Type safety
func send(ch chan<- int) { // Can only send
ch <- 42
}
func receive(ch <-chan int) { // Can only receive
value := <-ch
}
3. Range Over Channels
// ✅ GOOD: Automatically stops when closed
for value := range ch {
fmt.Println(value)
}
4. Check if Channel is Closed
// ✅ GOOD
value, ok := <-ch
if !ok {
fmt.Println("Channel closed")
return
}
5. Use Buffered Channels for Known Capacity
// ✅ GOOD: Prevents blocking when capacity is known
results := make(chan Result, numJobs)
Real-World Example: Web Scraper with Channels
package main
import (
"fmt"
"io"
"net/http"
"time"
)
type Result struct {
URL string
Status int
Size int
Err error
}
func fetch(url string, results chan<- Result) {
start := time.Now()
resp, err := http.Get(url)
if err != nil {
results <- Result{URL: url, Err: err}
return
}
defer resp.Body.Close()
body, err := io.ReadAll(resp.Body)
if err != nil {
results <- Result{URL: url, Status: resp.StatusCode, Err: err}
return
}
results <- Result{
URL: url,
Status: resp.StatusCode,
Size: len(body),
}
fmt.Printf("Fetched %s in %v\n", url, time.Since(start))
}
func main() {
urls := []string{
"https://golang.org",
"https://github.com",
"https://stackoverflow.com",
"https://reddit.com",
"https://news.ycombinator.com",
}
results := make(chan Result, len(urls))
// Launch goroutines
for _, url := range urls {
go fetch(url, results)
}
// Collect results
for i := 0; i < len(urls); i++ {
result := <-results
if result.Err != nil {
fmt.Printf("❌ %s: %v\n", result.URL, result.Err)
} else {
fmt.Printf("✅ %s: %d (%d bytes)\n",
result.URL, result.Status, result.Size)
}
}
}
Performance Considerations
Channel Overhead
Channels have overhead compared to direct memory access:
// Slower: Channel communication
func withChannel() {
ch := make(chan int)
go func() {
for i := 0; i < 1000000; i++ {
ch <- i
}
close(ch)
}()
for range ch {}
}
// Faster: Shared memory with mutex
func withMutex() {
var mu sync.Mutex
var counter int
var wg sync.WaitGroup
wg.Add(1)
go func() {
defer wg.Done()
for i := 0; i < 1000000; i++ {
mu.Lock()
counter++
mu.Unlock()
}
}()
wg.Wait() // Wait for the goroutine so the work actually completes
}
Rule of Thumb: Use channels for coordination, mutexes for performance-critical shared state.
Summary and Key Takeaways
✅ Channels enable safe communication between goroutines
✅ Unbuffered channels provide synchronization (blocking send/receive)
✅ Buffered channels allow asynchronous communication
✅ Select statement multiplexes multiple channel operations
✅ Close channels from the sender, not the receiver
✅ Use range to iterate until channel is closed
✅ Context enables graceful cancellation
✅ Channels for coordination, mutexes for shared state
✅ Common patterns: Fan-out, fan-in, pipeline, worker pool
✅ Avoid: Sending on closed channels, closing multiple times, deadlocks
What's Next?
You've mastered Go's concurrency primitives! Next steps:
Continue Go Series:
- GO-10: Building Web Services in Go
- GO-11: Testing in Go
Deep Dives:
- Context Package (cancellation, deadlines, values)
- Advanced Concurrency Patterns (sync.Pool, atomic, advanced patterns)
- Go Performance Optimization
Practice Exercises
Exercise 1: Rate Limiter
Implement a rate limiter using channels:
func rateLimiter(requests <-chan int, rate time.Duration) <-chan int {
// TODO: Limit requests to one per `rate` duration
}
Exercise 2: Pipeline with Error Handling
Build a pipeline that:
- Generates numbers 1-100
- Filters primes
- Squares them
- Handles errors at each stage
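As a hint (one possible shape, not the only design), each stage can pass a small result struct through its channel so errors travel alongside the data; the `item` type and `double` stage here are illustrative stand-ins for your own stages:

```go
package main

import (
	"errors"
	"fmt"
)

// item carries either a value or an error through the pipeline.
type item struct {
	n   int
	err error
}

// double is an example stage: it forwards errors untouched and
// produces its own error for invalid input.
func double(in <-chan item) <-chan item {
	out := make(chan item)
	go func() {
		defer close(out)
		for it := range in {
			if it.err != nil {
				out <- it // Pass upstream errors through
				continue
			}
			if it.n < 0 {
				out <- item{err: errors.New("negative input")}
				continue
			}
			out <- item{n: it.n * 2}
		}
	}()
	return out
}

func main() {
	in := make(chan item)
	go func() {
		defer close(in)
		for _, n := range []int{1, -2, 3} {
			in <- item{n: n}
		}
	}()
	for it := range double(in) {
		if it.err != nil {
			fmt.Println("error:", it.err)
		} else {
			fmt.Println(it.n)
		}
	}
}
```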
Exercise 3: Concurrent Map-Reduce
Implement map-reduce pattern:
- Map: Process items concurrently
- Reduce: Aggregate results
Related Posts
Go Learning Roadmap:
- Go Learning Roadmap - Complete series overview
- Phase 1: Go Fundamentals - Variables, types, control flow, functions
- Go Goroutines and Concurrency - Goroutine basics and WaitGroups
Additional Resources
Books:
- "Concurrency in Go" by Katherine Cox-Buday (Chapter 3: Channels)
Have questions about channels or concurrency? Let me know in the comments!
Happy concurrent programming! 🚀