Go Phase 3: Concurrency, HTTP & Advanced Patterns

Welcome to Phase 3 of the Go Learning Roadmap — the final phase! In Phase 1 you learned Go syntax, and in Phase 2 you mastered structs, interfaces, error handling, and packages. Now you'll learn what makes Go famous: concurrency.
Go's concurrency model is its crown jewel — goroutines and channels let you write concurrent code that's genuinely readable. Beyond concurrency, this phase covers the context package, testing, HTTP, and building a small REST API.
What You'll Learn
✅ Launch goroutines and understand the Go scheduler
✅ Communicate between goroutines with channels (buffered + unbuffered)
✅ Multiplex channels using select
✅ Coordinate goroutines with sync.WaitGroup, sync.Mutex, sync.Once
✅ Use context for cancellation and timeouts
✅ Write tests with go test — table-driven, subtests, benchmarks
✅ Build HTTP servers and clients with net/http
✅ Encode/decode JSON
✅ Build a simple REST API from scratch
Part 1: Goroutines
The Go Scheduler
A goroutine is a lightweight thread managed by the Go runtime — not an OS thread. You can run thousands of goroutines on a handful of OS threads. The Go scheduler (a work-stealing M:N scheduler) multiplexes goroutines onto available CPUs.
// Launch a goroutine with the `go` keyword
package main
import (
"fmt"
"time"
)
func sayHello(name string) {
fmt.Printf("Hello, %s!\n", name)
}
func main() {
go sayHello("Alice") // runs concurrently
go sayHello("Bob") // runs concurrently
go sayHello("Carol") // runs concurrently
// Without this sleep, main() might exit before goroutines run
time.Sleep(10 * time.Millisecond)
}

The go keyword is all you need. But time.Sleep is a hack — in real code, you coordinate with channels or sync.WaitGroup.
sync.WaitGroup — Wait for All Goroutines
WaitGroup lets the main goroutine wait for a collection of goroutines to finish:
package main
import (
"fmt"
"sync"
)
func processItem(id int, wg *sync.WaitGroup) {
defer wg.Done() // signal done when this function returns
fmt.Printf("Processing item %d\n", id)
}
func main() {
var wg sync.WaitGroup
for i := 1; i <= 5; i++ {
wg.Add(1) // register one goroutine
go processItem(i, &wg) // launch goroutine
}
wg.Wait() // block until all goroutines call Done()
fmt.Println("All items processed")
}

Rules:
- Call wg.Add(1) before launching the goroutine (not inside it)
- Use defer wg.Done() as the first line in the goroutine function
- Pass *sync.WaitGroup (a pointer), never copy it
Part 2: Channels
Channels are Go's mechanism for goroutines to communicate safely. The Go philosophy: "Do not communicate by sharing memory; share memory by communicating."
Unbuffered Channels
An unbuffered channel blocks the sender until the receiver is ready (and vice versa):
package main
import "fmt"
func sum(nums []int, ch chan int) {
total := 0
for _, n := range nums {
total += n
}
ch <- total // send to channel
}
func main() {
nums := []int{1, 2, 3, 4, 5, 6, 7, 8}
ch := make(chan int)
// Two goroutines each sum half the slice
go sum(nums[:4], ch)
go sum(nums[4:], ch)
// Receive two values (blocks until each arrives)
a, b := <-ch, <-ch
fmt.Println("Sum:", a+b) // 36
}

Buffered Channels
A buffered channel has capacity — sends only block when the buffer is full:
// Buffered channel with capacity 3
ch := make(chan string, 3)
ch <- "first" // doesn't block
ch <- "second" // doesn't block
ch <- "third" // doesn't block
// ch <- "fourth" // would block — buffer full
fmt.Println(<-ch) // "first"
fmt.Println(<-ch) // "second"
fmt.Println(<-ch) // "third"

Use buffered channels when you know the number of items upfront, or when you want to decouple producer speed from consumer speed.
Ranging Over Channels
You can range over a channel to receive values until it's closed:
package main
import "fmt"
func producer(ch chan<- int) {
// chan<- means send-only
for i := 0; i < 5; i++ {
ch <- i
}
close(ch) // signal: no more values
}
func main() {
ch := make(chan int)
go producer(ch)
for v := range ch { // receives until channel is closed
fmt.Println(v) // 0, 1, 2, 3, 4
}
}

Important: Only the sender should close a channel. Closing a channel twice panics. Sending to a closed channel panics.
Directional Channels
Channels can be typed as send-only (chan<-) or receive-only (<-chan):
// send-only parameter — can only send to this channel
func producer(out chan<- string) {
out <- "hello"
out <- "world"
close(out)
}
// receive-only parameter — can only receive from this channel
func consumer(in <-chan string) {
for msg := range in {
fmt.Println(msg)
}
}
func main() {
ch := make(chan string, 2)
go producer(ch) // bidirectional ch implicitly converts to chan<-
consumer(ch) // bidirectional ch implicitly converts to <-chan
}

Directional channels are a form of documentation — the compiler enforces that functions respect their role.
Part 3: The Select Statement
select lets a goroutine wait on multiple channel operations simultaneously — like a switch for channels:
package main
import (
"fmt"
"time"
)
func main() {
ch1 := make(chan string)
ch2 := make(chan string)
go func() {
time.Sleep(1 * time.Second)
ch1 <- "one"
}()
go func() {
time.Sleep(2 * time.Second)
ch2 <- "two"
}()
// Wait for whichever channel sends first
for i := 0; i < 2; i++ {
select {
case msg := <-ch1:
fmt.Println("Received from ch1:", msg)
case msg := <-ch2:
fmt.Println("Received from ch2:", msg)
}
}
}

Non-Blocking Operations with Default
select {
case msg := <-ch:
fmt.Println("Got:", msg)
default:
fmt.Println("No message available — moving on")
}

Timeout Pattern
select {
case result := <-ch:
fmt.Println("Got result:", result)
case <-time.After(2 * time.Second):
fmt.Println("Timeout — giving up")
}

Done Channel Pattern (Stop Signal)
func worker(done <-chan struct{}) {
for {
select {
case <-done:
fmt.Println("Worker stopping")
return
default:
// do work...
time.Sleep(100 * time.Millisecond)
}
}
}
func main() {
done := make(chan struct{})
go worker(done)
time.Sleep(500 * time.Millisecond)
close(done) // signal all workers to stop
}

Part 4: sync Package
sync.Mutex — Mutual Exclusion
When goroutines share mutable state (not via channels), use a Mutex to prevent data races:
package main
import (
"fmt"
"sync"
)
type SafeCounter struct {
mu sync.Mutex
value int
}
func (c *SafeCounter) Increment() {
c.mu.Lock()
defer c.mu.Unlock()
c.value++
}
func (c *SafeCounter) Value() int {
c.mu.Lock()
defer c.mu.Unlock()
return c.value
}
func main() {
counter := &SafeCounter{}
var wg sync.WaitGroup
for i := 0; i < 1000; i++ {
wg.Add(1)
go func() {
defer wg.Done()
counter.Increment()
}()
}
wg.Wait()
fmt.Println("Final count:", counter.Value()) // always 1000
}

Run with go run -race to detect data races during development.
sync.RWMutex — Read-Write Lock
When you have many readers and few writers, RWMutex improves performance:
type Cache struct {
mu sync.RWMutex
store map[string]string
}
func (c *Cache) Get(key string) (string, bool) {
c.mu.RLock() // multiple concurrent reads allowed
defer c.mu.RUnlock()
v, ok := c.store[key]
return v, ok
}
func (c *Cache) Set(key, value string) {
c.mu.Lock() // exclusive write lock
defer c.mu.Unlock()
c.store[key] = value
}

sync.Once — Run Exactly Once
sync.Once guarantees a function runs only once, even if called from multiple goroutines — perfect for lazy initialization:
package main
import (
"fmt"
"sync"
)
type Database struct {
connection string
}
var (
db *Database
once sync.Once
)
func getDB() *Database {
once.Do(func() {
fmt.Println("Connecting to database...") // runs exactly once
db = &Database{connection: "postgres://localhost/mydb"}
})
return db
}
func main() {
var wg sync.WaitGroup
for i := 0; i < 5; i++ {
wg.Add(1)
go func() {
defer wg.Done()
d := getDB()
fmt.Println("Got db:", d.connection)
}()
}
wg.Wait()
// "Connecting to database..." printed only once
}

Part 5: Context Package
The context package is how Go propagates cancellation, deadlines, and request-scoped values through a call tree. It's essential for HTTP handlers, database calls, and anything with timeouts.
context.WithTimeout
package main
import (
"context"
"fmt"
"time"
)
func fetchData(ctx context.Context) (string, error) {
// Simulate slow operation
select {
case <-time.After(3 * time.Second): // takes 3s
return "data", nil
case <-ctx.Done(): // cancelled or timed out
return "", ctx.Err()
}
}
func main() {
// Give the operation 1 second to complete
ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
defer cancel() // always call cancel to free resources
result, err := fetchData(ctx)
if err != nil {
fmt.Println("Error:", err) // context deadline exceeded
return
}
fmt.Println("Result:", result)
}

context.WithCancel
func main() {
ctx, cancel := context.WithCancel(context.Background())
// Launch workers that respect context
for i := 0; i < 3; i++ {
go func(id int) {
for {
select {
case <-ctx.Done():
fmt.Printf("Worker %d stopping: %v\n", id, ctx.Err())
return
default:
fmt.Printf("Worker %d working...\n", id)
time.Sleep(200 * time.Millisecond)
}
}
}(i)
}
time.Sleep(500 * time.Millisecond)
cancel() // cancel all workers
time.Sleep(100 * time.Millisecond) // let goroutines finish
}

context.WithValue
Pass request-scoped data (user ID, trace ID, auth token) without cluttering function signatures:
type contextKey string
const userIDKey contextKey = "userID"
func withUserID(ctx context.Context, userID string) context.Context {
return context.WithValue(ctx, userIDKey, userID)
}
func getUserID(ctx context.Context) (string, bool) {
userID, ok := ctx.Value(userIDKey).(string)
return userID, ok
}
// Usage in HTTP middleware
func authMiddleware(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
userID := r.Header.Get("X-User-ID")
ctx := withUserID(r.Context(), userID)
next.ServeHTTP(w, r.WithContext(ctx))
})
}

Rules for context:
- Pass ctx as the first parameter to every function that does I/O
- Never store a context in a struct — pass it explicitly
- Always defer cancel() after WithTimeout or WithCancel
- Use context.Background() at the top level (main, tests)
- Use context.TODO() as a placeholder when you're not sure yet
Part 6: Testing with go test
Go has a first-class testing package in the standard library — no third-party framework required.
Basic Tests
// calculator.go
package calculator

import "fmt"
func Add(a, b int) int { return a + b }
func Subtract(a, b int) int { return a - b }
func Multiply(a, b int) int { return a * b }
func Divide(a, b float64) (float64, error) {
if b == 0 {
return 0, fmt.Errorf("division by zero")
}
return a / b, nil
}

// calculator_test.go
package calculator
import (
"testing"
)
func TestAdd(t *testing.T) {
result := Add(2, 3)
if result != 5 {
t.Errorf("Add(2, 3) = %d; want 5", result)
}
}
func TestDivide(t *testing.T) {
_, err := Divide(10, 0)
if err == nil {
t.Error("expected error for division by zero, got nil")
}
}

Run with: go test ./...
Table-Driven Tests
The idiomatic Go testing pattern — test many cases with one function:
func TestAdd_TableDriven(t *testing.T) {
tests := []struct {
name string
a, b int
expected int
}{
{"positive numbers", 2, 3, 5},
{"negative numbers", -2, -3, -5},
{"mixed", -2, 3, 1},
{"zeros", 0, 0, 0},
{"large numbers", 1000000, 2000000, 3000000},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
result := Add(tc.a, tc.b)
if result != tc.expected {
t.Errorf("Add(%d, %d) = %d; want %d", tc.a, tc.b, result, tc.expected)
}
})
}
}

Run a specific subtest: go test -run TestAdd_TableDriven/negative_numbers
Benchmarks
func BenchmarkAdd(b *testing.B) {
for i := 0; i < b.N; i++ {
Add(100, 200)
}
}
func BenchmarkMultiply(b *testing.B) {
for i := 0; i < b.N; i++ {
Multiply(100, 200)
}
}

Run with: go test -bench=. -benchmem
BenchmarkAdd-8 1000000000 0.23 ns/op 0 B/op 0 allocs/op
BenchmarkMultiply-8 1000000000 0.23 ns/op 0 B/op 0 allocs/op

Testing with Interfaces (Mocking)
Go makes mocking easy via interfaces — no reflection-based mocking framework needed:
// email.go
type EmailSender interface {
Send(to, subject, body string) error
}
type UserService struct {
email EmailSender
}
func (s *UserService) Register(name, emailAddr string) error {
// ... create user ...
return s.email.Send(emailAddr, "Welcome!", "Thanks for registering, "+name)
}

// email_test.go
type MockEmailSender struct {
SentMessages []struct{ To, Subject, Body string }
ShouldFail bool
}
func (m *MockEmailSender) Send(to, subject, body string) error {
if m.ShouldFail {
return fmt.Errorf("email service unavailable")
}
m.SentMessages = append(m.SentMessages, struct{ To, Subject, Body string }{to, subject, body})
return nil
}
func TestUserService_Register(t *testing.T) {
mock := &MockEmailSender{}
svc := &UserService{email: mock}
err := svc.Register("Alice", "alice@example.com")
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
if len(mock.SentMessages) != 1 {
t.Errorf("expected 1 email sent, got %d", len(mock.SentMessages))
}
if mock.SentMessages[0].To != "alice@example.com" {
t.Errorf("wrong recipient: %s", mock.SentMessages[0].To)
}
}

Part 7: HTTP with net/http
Go's standard library net/http is production-ready — many companies run Go HTTP servers using it alone (no Express.js equivalent needed).
Simple HTTP Server
package main
import (
"fmt"
"net/http"
)
func helloHandler(w http.ResponseWriter, r *http.Request) {
fmt.Fprintf(w, "Hello, %s!", r.URL.Query().Get("name"))
}
func main() {
http.HandleFunc("/hello", helloHandler)
fmt.Println("Server starting on :8080")
if err := http.ListenAndServe(":8080", nil); err != nil {
panic(err)
}
}

JSON Encoding and Decoding
import (
"encoding/json"
"net/http"
)
type User struct {
ID int `json:"id"`
Name string `json:"name"`
Email string `json:"email,omitempty"` // omit if empty
}
// Encode struct to JSON response
func writeJSON(w http.ResponseWriter, status int, data any) {
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(status)
json.NewEncoder(w).Encode(data)
}
// Decode JSON request body
func readJSON(r *http.Request, dst any) error {
return json.NewDecoder(r.Body).Decode(dst)
}

Part 8: Building a REST API
Let's put it all together — a simple REST API for managing users using only the standard library:
package main
import (
"encoding/json"
"fmt"
"log"
"net/http"
"strconv"
"strings"
"sync"
)
// Domain types
type User struct {
ID int `json:"id"`
Name string `json:"name"`
Email string `json:"email"`
}
// In-memory store (thread-safe)
type UserStore struct {
mu sync.RWMutex
users map[int]User
nextID int
}
func NewUserStore() *UserStore {
return &UserStore{users: make(map[int]User), nextID: 1}
}
func (s *UserStore) GetAll() []User {
s.mu.RLock()
defer s.mu.RUnlock()
result := make([]User, 0, len(s.users))
for _, u := range s.users {
result = append(result, u)
}
return result
}
func (s *UserStore) GetByID(id int) (User, bool) {
s.mu.RLock()
defer s.mu.RUnlock()
u, ok := s.users[id]
return u, ok
}
func (s *UserStore) Create(name, email string) User {
s.mu.Lock()
defer s.mu.Unlock()
u := User{ID: s.nextID, Name: name, Email: email}
s.users[s.nextID] = u
s.nextID++
return u
}
func (s *UserStore) Delete(id int) bool {
s.mu.Lock()
defer s.mu.Unlock()
if _, ok := s.users[id]; !ok {
return false
}
delete(s.users, id)
return true
}
// HTTP helpers
func writeJSON(w http.ResponseWriter, status int, data any) {
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(status)
json.NewEncoder(w).Encode(data)
}
func writeError(w http.ResponseWriter, status int, message string) {
writeJSON(w, status, map[string]string{"error": message})
}
// Handler
type UserHandler struct {
store *UserStore
}
func (h *UserHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
// Route: /users or /users/{id}
path := strings.TrimPrefix(r.URL.Path, "/users")
path = strings.Trim(path, "/")
if path == "" {
// /users
switch r.Method {
case http.MethodGet:
writeJSON(w, http.StatusOK, h.store.GetAll())
case http.MethodPost:
var body struct {
Name string `json:"name"`
Email string `json:"email"`
}
if err := json.NewDecoder(r.Body).Decode(&body); err != nil {
writeError(w, http.StatusBadRequest, "invalid JSON")
return
}
if body.Name == "" || body.Email == "" {
writeError(w, http.StatusBadRequest, "name and email required")
return
}
user := h.store.Create(body.Name, body.Email)
writeJSON(w, http.StatusCreated, user)
default:
writeError(w, http.StatusMethodNotAllowed, "method not allowed")
}
return
}
// /users/{id}
id, err := strconv.Atoi(path)
if err != nil {
writeError(w, http.StatusBadRequest, "invalid user ID")
return
}
switch r.Method {
case http.MethodGet:
user, ok := h.store.GetByID(id)
if !ok {
writeError(w, http.StatusNotFound, "user not found")
return
}
writeJSON(w, http.StatusOK, user)
case http.MethodDelete:
if !h.store.Delete(id) {
writeError(w, http.StatusNotFound, "user not found")
return
}
w.WriteHeader(http.StatusNoContent)
default:
writeError(w, http.StatusMethodNotAllowed, "method not allowed")
}
}
func main() {
store := NewUserStore()
handler := &UserHandler{store: store}
mux := http.NewServeMux()
mux.Handle("/users", handler)
mux.Handle("/users/", handler)
server := &http.Server{
Addr: ":8080",
Handler: mux,
}
fmt.Println("🚀 Server running on http://localhost:8080")
log.Fatal(server.ListenAndServe())
}

Test it:
# Create users
curl -X POST http://localhost:8080/users \
-H "Content-Type: application/json" \
-d '{"name": "Alice", "email": "alice@example.com"}'
curl -X POST http://localhost:8080/users \
-H "Content-Type: application/json" \
-d '{"name": "Bob", "email": "bob@example.com"}'
# List all
curl http://localhost:8080/users
# Get one
curl http://localhost:8080/users/1
# Delete
curl -X DELETE http://localhost:8080/users/1

Part 9: Concurrency Patterns
Worker Pool
A common pattern: limit concurrency to N workers processing a queue of jobs:
package main
import (
"fmt"
"sync"
)
type Job struct {
ID int
Data string
}
type Result struct {
JobID int
Output string
}
func worker(id int, jobs <-chan Job, results chan<- Result, wg *sync.WaitGroup) {
defer wg.Done()
for job := range jobs {
// Process the job
output := fmt.Sprintf("Worker %d processed job %d: %s", id, job.ID, job.Data)
results <- Result{JobID: job.ID, Output: output}
}
}
func main() {
const numWorkers = 3
const numJobs = 10
jobs := make(chan Job, numJobs)
results := make(chan Result, numJobs)
var wg sync.WaitGroup
// Launch worker pool
for w := 1; w <= numWorkers; w++ {
wg.Add(1)
go worker(w, jobs, results, &wg)
}
// Send jobs
for j := 1; j <= numJobs; j++ {
jobs <- Job{ID: j, Data: fmt.Sprintf("data-%d", j)}
}
close(jobs) // no more jobs
// Close results when all workers are done
go func() {
wg.Wait()
close(results)
}()
// Collect results
for r := range results {
fmt.Println(r.Output)
}
}

Fan-Out / Fan-In
// Fan-out: distribute work to multiple goroutines
// Fan-in: merge results back into one channel
func fanOut(input <-chan int, numWorkers int) []<-chan int {
outputs := make([]<-chan int, numWorkers)
for i := 0; i < numWorkers; i++ {
out := make(chan int)
outputs[i] = out
go func(out chan<- int) {
for v := range input {
out <- v * v // square each number
}
close(out)
}(out)
}
return outputs
}
func fanIn(channels ...<-chan int) <-chan int {
merged := make(chan int)
var wg sync.WaitGroup
for _, ch := range channels {
wg.Add(1)
go func(c <-chan int) {
defer wg.Done()
for v := range c {
merged <- v
}
}(ch)
}
go func() {
wg.Wait()
close(merged)
}()
return merged
}

Pipeline
Chain stages together, each processing and forwarding values:
func generate(nums ...int) <-chan int {
out := make(chan int)
go func() {
for _, n := range nums {
out <- n
}
close(out)
}()
return out
}
func square(in <-chan int) <-chan int {
out := make(chan int)
go func() {
for n := range in {
out <- n * n
}
close(out)
}()
return out
}
func filter(in <-chan int, pred func(int) bool) <-chan int {
out := make(chan int)
go func() {
for n := range in {
if pred(n) {
out <- n
}
}
close(out)
}()
return out
}
func main() {
// Pipeline: generate → square → filter even
nums := generate(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
squared := square(nums)
evens := filter(squared, func(n int) bool { return n%2 == 0 })
for n := range evens {
fmt.Println(n) // 4, 16, 36, 64, 100
}
}

Common Pitfalls
1. Goroutine Leak — Forgetting to Stop Workers
// ❌ Goroutine runs forever if nobody reads from ch
func leakyFunc() {
ch := make(chan int)
go func() {
for {
ch <- expensiveComputation() // blocks forever if caller returns
}
}()
// caller returns without reading ch — goroutine leaks
}
// ✅ Use a done/ctx channel to signal stop
func safeFunc(ctx context.Context) <-chan int {
ch := make(chan int)
go func() {
defer close(ch)
for {
select {
case <-ctx.Done():
return
case ch <- expensiveComputation():
}
}
}()
return ch
}

2. Closing a Channel Twice (Panic)
// ❌ Closing an already-closed channel panics
close(ch)
close(ch) // panic: close of closed channel
// ✅ Use sync.Once to close exactly once
var once sync.Once
safeClose := func() {
once.Do(func() { close(ch) })
}

3. Capturing Loop Variable in Goroutine
// ❌ Before Go 1.22, all goroutines shared one loop variable (last value)
for i := 0; i < 3; i++ {
go func() {
fmt.Println(i) // might print 3 3 3
}()
}
// ✅ Pass as argument — a copy per goroutine (Go 1.22+ gives each iteration a fresh variable, but the explicit copy still documents intent)
for i := 0; i < 3; i++ {
go func(i int) {
fmt.Println(i) // prints 0 1 2 (any order)
}(i)
}

4. Not Checking Errors from Goroutines
// ❌ Errors silently lost
go func() {
if err := doWork(); err != nil {
// nobody listens!
}
}()
// ✅ Send errors on an error channel
errCh := make(chan error, 1)
go func() {
if err := doWork(); err != nil {
errCh <- err
}
close(errCh)
}()
if err := <-errCh; err != nil {
log.Fatal(err)
}

5. Ignoring the Race Detector
Always run go test -race or go run -race during development — Go's built-in race detector catches data races at runtime.
Summary and Key Takeaways
Goroutines & Channels:
✅ go func() launches a goroutine — lightweight, cheap to create
✅ Channels are typed conduits — the preferred way for goroutines to share data
✅ Unbuffered channels synchronize; buffered channels decouple producer/consumer
✅ select waits on multiple channels — enables timeouts, cancellation, non-blocking ops
Sync Primitives:
✅ sync.WaitGroup — wait for a group of goroutines to finish
✅ sync.Mutex / sync.RWMutex — protect shared state from concurrent access
✅ sync.Once — run initialization exactly once, safely
Context:
✅ Pass ctx context.Context as the first parameter to any function that does I/O
✅ Use WithTimeout for deadlines, WithCancel for manual cancellation
✅ Always defer cancel() — leaking context causes resource leaks
Testing:
✅ Table-driven tests are idiomatic Go — maintain many cases cleanly
✅ Use interfaces for mockable dependencies — no reflection framework needed
✅ go test -race catches race conditions early
HTTP & REST:
✅ net/http is production-ready — no framework required for most use cases
✅ encoding/json with struct tags for clean JSON APIs
✅ Use http.ServeMux for routing, custom http.Handler for structured code
What's Next?
You've completed the Go Learning Roadmap! Here's where to go deeper:
- Deep Dive: Concurrency with Goroutines & Channels — patterns, pitfalls, and advanced concurrency
- Deep Dive: Error Handling Patterns — wrapping, sentinel errors, custom error types
- Deep Dive: Testing in Go — testify, mocks, integration tests, benchmarks
- Deep Dive: Context, HTTP & Building APIs — middleware, routing libraries, full API design