Golang Concurrency #6 - Worker Pool Pattern

The Problem
You have many tasks (jobs) and want multiple workers processing them simultaneously instead of one by one.
The Solution
One shared job queue (channel)
Multiple workers (goroutines) all grabbing jobs from the same queue
Whoever is free takes the next job
Complete Example
package main

import (
	"fmt"
	"time"
)

func main() {
	// Create the shared job queue and results collection
	jobs := make(chan int, 100)    // Job queue (buffered)
	results := make(chan int, 100) // Results collection (buffered)

	// Start 3 workers -> they all share the same job queue
	for workerID := 1; workerID <= 3; workerID++ {
		go worker(workerID, jobs, results)
	}

	// Send 9 jobs into the shared queue
	for jobNumber := 1; jobNumber <= 9; jobNumber++ {
		jobs <- jobNumber
		fmt.Printf("Sent job %d to queue\n", jobNumber)
	}
	close(jobs) // Signal "no more jobs coming"

	// Collect all 9 results
	for i := 1; i <= 9; i++ {
		result := <-results
		fmt.Printf("Got result: %d\n", result)
	}
}

func worker(id int, jobs <-chan int, results chan<- int) {
	fmt.Printf("Worker %d started\n", id)
	// Keep taking jobs until the channel is closed and drained
	for job := range jobs {
		fmt.Printf("Worker %d grabbed job %d\n", id, job)
		// Simulate work (higher job numbers take longer)
		time.Sleep(time.Duration(job) * 500 * time.Millisecond)
		// Send result back
		result := job * job
		results <- result
		fmt.Printf("Worker %d finished job %d → result %d\n", id, job, result)
	}
	fmt.Printf("Worker %d done (no more jobs)\n", id)
}
What Happens
Setup: 3 workers start, all waiting for jobs from the same jobs channel
Jobs sent: jobs 1–9 go into the shared queue
Competition: Workers compete -> whoever is free grabs the next job
Parallel work: Multiple jobs happen simultaneously
Results: Each worker sends its result to the shared results channel
Example Output
Worker 1 started
Worker 2 started
Worker 3 started
Sent job 1 to queue
Worker 1 grabbed job 1
Sent job 2 to queue
Worker 2 grabbed job 2
Sent job 3 to queue
Worker 3 grabbed job 3
Worker 1 finished job 1 → result 1
Worker 1 grabbed job 4
Worker 2 finished job 2 → result 4
Worker 2 grabbed job 5
...
Key Insights
Why it works
All workers read from the same channel. When worker1 takes job 3, it's gone -> worker2 can't also take job 3.
Load balancing
Fast workers automatically get more jobs. Slow workers get fewer. No manual assignment needed.
Scalability
Want more throughput? Start more workers. Want less resource usage? Start fewer workers.
Real-world uses
Web server handling requests
Image processing pipeline
Database queries
File processing
The pattern: Many workers → One shared job queue → Automatic load distribution