
Mistakes Golang (book notes)


Introduction

Some of my notes from reading the book “100 Go Mistakes and How to Avoid Them”.

Mistakes 2 - Code and Project organization

init()

The Problem: init() functions execute in dependency order, not import order.

// main.go imports pkgA, pkgA imports pkgB
// Output: "pkgB init" → "pkgA init" → "main init"

Key Traps:

  1. Always executes - can't skip in tests

  2. Multiple init functions run in declaration order but scatter logic

  3. Dependency chains create unpredictable timing

Fix: Use explicit initialization instead of relying on init() magic

Bottom Line: Dependencies run first, depth-first. Avoid init() when possible - explicit is better than implicit.
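A minimal sketch of the explicit alternative (the `Config`/`NewConfig` names here are illustrative, not from the book): instead of hidden setup in `init()`, expose a constructor the caller invokes deliberately, which can also return an error rather than panicking during package initialization.

```go
package main

import (
	"errors"
	"fmt"
)

type Config struct {
	Addr string
}

// NewConfig replaces hidden init() work with an explicit constructor:
// callers decide when it runs, tests can skip or stub it, and failures
// surface as errors instead of package-load panics.
func NewConfig(addr string) (*Config, error) {
	if addr == "" {
		return nil, errors.New("addr must not be empty")
	}
	return &Config{Addr: addr}, nil
}

func main() {
	cfg, err := NewConfig("localhost:8080")
	if err != nil {
		fmt.Println("setup failed:", err)
		return
	}
	fmt.Println(cfg.Addr) // localhost:8080
}
```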

Overusing getters and setters

What are Getters/Setters in Go?

Methods that access/modify struct fields, borrowed from OOP languages:

type Person struct {
    name string
    age  int
}

// Getter
func (p *Person) Name() string {
    return p.name
}

// Setter
func (p *Person) SetName(name string) {
    p.name = name
}

The Problem: Go developers coming from Java/C# automatically create getters/setters for everything

Why it's wrong in Go:

  • Go favors direct field access when no logic is needed

  • Exported fields (capitalized) are the idiomatic way

  • Only add getters/setters when you need validation or side effects

// Idiomatic Go - just export the field
type Person struct {
    Name string  // Direct access: person.Name = "John"
    Age  int
}

// Only use methods when you need logic
func (p *Person) SetAge(age int) error {
    if age < 0 {
        return errors.New("age cannot be negative")
    }
    p.Age = age
    return nil
}

Bottom Line: Export fields directly. Only create getters/setters when you need validation, computation, or side effects.

Interface pollution

What is Interface Pollution? Creating interfaces too early or making them too broad, violating Go's "accept interfaces, return structs" principle.

The Problem: Defining interfaces before you need them or making them overly complex

// BAD - Interface pollution
type UserService interface {
    CreateUser(user User) error
    GetUser(id int) (User, error)
    UpdateUser(user User) error
    DeleteUser(id int) error
    ValidateUser(user User) error
    HashPassword(password string) string
    SendEmail(email string) error
}

// Only one implementation exists
type userService struct{}

Why it's wrong:

  • Premature abstraction - no actual need for multiple implementations

  • Fat interfaces violate Interface Segregation Principle

  • Consumer-side interfaces are more flexible than producer-side

Go's Way:

// GOOD - Let consumers define what they need
type UserCreator interface {
    CreateUser(user User) error
}

type UserGetter interface {
    GetUser(id int) (User, error)
}

// Implementation stays concrete
type UserService struct{}

func (s *UserService) CreateUser(user User) error { /* */ }
func (s *UserService) GetUser(id int) (User, error) { /* */ }

Bottom Line: Don't create interfaces until you have 2+ implementations. Keep interfaces small and define them where they're used, not where they're implemented.

Returning interfaces + any

The Problem: Functions that return interface types instead of concrete types

// BAD - Returning interface
func NewUserService() UserService {
    return &userService{}
}

// GOOD - Return concrete type
func NewUserService() *UserService {
    return &UserService{}
}

Why it's wrong:

  • Limits caller flexibility - they can't access methods not in the interface

  • Breaks "accept interfaces, return structs" rule

  • Makes testing harder - can't easily inspect concrete behavior

  • Future-proofing issues - adding a method to the returned interface later breaks every existing implementation

Exception: Only return interfaces when you genuinely need to hide implementation details (rare)


The Problem: Using any (alias for interface{}) instead of proper types

// BAD - Lost type safety
func ProcessData(data any) any {
    // Need type assertions everywhere
    if str, ok := data.(string); ok {
        return strings.ToUpper(str)
    }
    return nil
}

// GOOD - Use generics or specific types
func ProcessData[T any](data T) T {
    // Type safe
    return data
}

Why any is dangerous:

  • No compile-time type checking

  • Runtime panics from bad type assertions

  • Loss of IDE support (autocomplete, refactoring)

  • Performance overhead from wrapping values in interfaces (allocation and indirection)

Bottom Line: Return concrete types, accept interfaces. Use any only when you truly need to work with unknown types at runtime.

Confused about when to use generics

What are Go Generics? Type parameters that let you write code that works with multiple types:

// Without generics - need separate functions
func MaxInt(a, b int) int {
    if a > b { return a }
    return b
}

func MaxFloat(a, b float64) float64 {
    if a > b { return a }
    return b
}

// With generics - one function for all ordered types
// (note: `comparable` only permits == and !=; ordering needs cmp.Ordered, Go 1.21+)
func Max[T cmp.Ordered](a, b T) T {
    if a > b { return a }
    return b
}

Common Mistakes:

1. Using generics for simple cases

// BAD - Overengineering
func Add[T int | float64](a, b T) T {
    return a + b
}

// GOOD - Just use the specific type you need
func Add(a, b int) int {
    return a + b
}

2. Generic interfaces when concrete types work

// BAD - Unnecessary complexity
type Container[T any] interface {
    Get() T
    Set(T)
}

// GOOD - Be specific about what you actually need
type UserContainer struct {
    user User
}

3. Premature generalization

// BAD - Making everything generic "just in case"
type Repository[T any] interface {
    Save(T) error
    Load(ID) (T, error)
}

// GOOD - Start concrete, generalize when you have 2+ similar implementations
type UserRepository struct{}
func (r UserRepository) Save(user User) error { /* */ }

When TO use generics:

  • Data structures (slices, maps, stacks)

  • Algorithms that work on multiple types

  • When you have 2+ nearly identical implementations

When NOT to use generics:

  • Single use case - just use concrete types

  • Business logic - usually too specific to generalize

  • "Future-proofing" - YAGNI (You Aren't Gonna Need It)

Bottom Line: Use generics when you're duplicating code across types, not when you think you might need flexibility someday.

Project structure + utility packages

The Problem: Poor package organization that creates circular dependencies, unclear boundaries, or "god packages"

Common Bad Patterns:

1. Organizing by layer (like MVC)

Problem: Everything depends on everything, creates import cycles

2. Generic names

Problem: These become dumping grounds with no clear purpose

Go's Way - Organize by Domain/Feature:

Key Principles:

  • Package per domain - user, order, payment

  • Keep related code together

  • Avoid circular imports - dependencies should form a DAG (directed acyclic graph)
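As a sketch, the two layouts might look like this (package names are illustrative):

```
# Layer-based (tends toward import cycles and "everything depends on everything")
/handlers
/services
/repositories
/models

# Domain-based (each package owns its types, logic, and storage)
/user
/order
/payment
```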


The Problem: Creating generic "util" packages that become grab-bags of unrelated functions

// BAD - utils package becomes a mess
package utils

func StringToInt(s string) int { /* */ }
func ValidateEmail(email string) bool { /* */ }
func HashPassword(pwd string) string { /* */ }
func FormatCurrency(amount float64) string { /* */ }

Why it's wrong:

  • No cohesion - unrelated functions grouped together

  • Import pollution - importing utils brings in everything

  • Testing nightmare - hard to test unrelated functions together

  • Naming conflicts - utils.Format() could mean anything

Better Approach:

// Put functions where they belong
package user
func HashPassword(pwd string) string { /* */ }

package validation
func Email(email string) bool { /* */ }

package currency
func Format(amount float64) string { /* */ }

Exception: Small, focused utility packages are OK:

package httputil  // HTTP-specific utilities
package timeutil  // Time-specific utilities

Bottom Line: Organize by domain, not by type. Avoid generic util packages - put functions where they logically belong.

Interfaces in Golang

What makes Go interfaces special: Go interfaces are implicit - you don't declare that a type implements an interface; it does so automatically if it has the right methods (structural typing, often described as compile-time duck typing).

type Writer interface {
    Write([]byte) (int, error)
}

// This automatically implements Writer
type FileWriter struct{}
func (f FileWriter) Write(data []byte) (int, error) { /* */ }

// So does this
type NetworkWriter struct{}
func (n NetworkWriter) Write(data []byte) (int, error) { /* */ }

Producer vs Consumer Pattern:

Producer-side (BAD):

// Package A defines interface with implementation
type DatabaseService interface {
    Save(data Data) error
    Load(id string) (Data, error)
}

type MySQLService struct{}
func (m MySQLService) Save(data Data) error { /* */ }
func (m MySQLService) Load(id string) (Data, error) { /* */ }

Consumer-side (GOOD):

// Package B (consumer) defines only what it needs
type DataSaver interface {
    Save(data Data) error
}

func ProcessData(saver DataSaver, data Data) error {
    // Only needs Save method
    return saver.Save(data)
}

// Any type with Save() can be passed in

Key Go Interface Principles:

  1. "Accept interfaces, return structs" - functions take interfaces as parameters, return concrete types

  2. Small interfaces - prefer many small interfaces over few large ones

  3. Interface segregation - clients shouldn't depend on methods they don't use

  4. Define at point of use - interfaces belong where they're consumed, not produced

Why this matters:

  • Testability - easy to mock with small interfaces

  • Flexibility - consumers get exactly what they need

  • Decoupling - implementations don't dictate interface shape
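The testability point can be seen with a hand-rolled fake, reusing the `DataSaver` shape from above (the `fakeSaver` type and the error check in `ProcessData` are my illustration, not from the book):

```go
package main

import (
	"errors"
	"fmt"
)

type Data struct{ ID string }

// Consumer-side interface: one method, defined where it is used.
type DataSaver interface {
	Save(data Data) error
}

// fakeSaver is a trivial test double - easy to write precisely because
// the interface is small.
type fakeSaver struct{ saved []Data }

func (f *fakeSaver) Save(d Data) error {
	f.saved = append(f.saved, d)
	return nil
}

func ProcessData(saver DataSaver, data Data) error {
	if data.ID == "" {
		return errors.New("missing id")
	}
	return saver.Save(data)
}

func main() {
	f := &fakeSaver{}
	if err := ProcessData(f, Data{ID: "42"}); err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(len(f.saved)) // 1
}
```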

Mistakes 3 - Data Types

Integer Overflow Detection

What is Integer Overflow? When arithmetic operations exceed the type's maximum value and "wrap around":

var x int8 = 127  // Max value for int8
x = x + 1         // Becomes -128 (wraps to minimum)
fmt.Println(x)    // Output: -128

The Problem: Go doesn't detect overflow at runtime - it silently wraps around (only constant expressions that overflow are rejected at compile time)

// Dangerous - can overflow without warning
func Add(a, b int32) int32 {
    return a + b  // What if a=MaxInt32 and b=1?
}

Detection Methods:

func SafeAdd(a, b int32) (int32, error) {
    if a > 0 && b > math.MaxInt32-a {
        return 0, errors.New("overflow")
    }
    if a < 0 && b < math.MinInt32-a {
        return 0, errors.New("underflow")
    }
    return a + b, nil
}

Understanding Floating Points

The Problem: Floating point arithmetic isn't exact

// Note: the literal expression 0.1 + 0.2 is folded exactly as an untyped
// constant in Go, so variables are needed to show the runtime behavior
a, b := 0.1, 0.2
fmt.Println(a + b)        // Output: 0.30000000000000004
fmt.Println(a + b == 0.3) // Output: false

Why: Binary representation can't exactly represent some decimals

Solutions:

// Use epsilon for comparison
func FloatEqual(a, b, epsilon float64) bool {
    return math.Abs(a-b) < epsilon
}

// Or use decimal library for financial calculations

Slice Length vs Capacity

What are they?

  • Length: Number of elements currently in slice

  • Capacity: Maximum elements slice can hold without reallocation

s := make([]int, 3, 5)  // length=3, capacity=5
fmt.Println(len(s))     // 3
fmt.Println(cap(s))     // 5

s = append(s, 1, 2)     // Still fits in capacity
fmt.Println(len(s))     // 5
fmt.Println(cap(s))     // 5

s = append(s, 3)        // Exceeds capacity - reallocates
fmt.Println(cap(s))     // 10 (the current implementation doubles small slices; exact growth is unspecified)

When capacity is exceeded, append allocates a new, larger underlying array, and the returned slice points to it instead of the original


Nil vs Empty Slices

What's the difference?

var nilSlice []int           // nil slice
emptySlice := []int{}        // empty slice
emptySlice2 := make([]int, 0) // also empty slice

fmt.Println(nilSlice == nil)    // true
fmt.Println(emptySlice == nil)  // false
fmt.Println(len(nilSlice))      // 0
fmt.Println(len(emptySlice))    // 0

Practical difference: Usually none - both have length 0 and work with append()


Checking if Slice is Empty

Wrong Way:

if slice == nil {  // Only catches nil, not empty
    // Miss empty slices like []int{}
}

Right Way:

if len(slice) == 0 {  // Catches both nil and empty
    // Handle empty case
}

Not Making Slice Copies Properly

The Problem: Slices share underlying arrays

original := []int{1, 2, 3}
shallow := original           // Same underlying array!
shallow[0] = 999
fmt.Println(original)         // [999, 2, 3] - modified!

Proper Copy:

original := []int{1, 2, 3}
deep := make([]int, len(original))
copy(deep, original)          // Actually copies elements
deep[0] = 999
fmt.Println(original)         // [1, 2, 3] - unchanged

Bottom Line: Slices are references. Use len() for emptiness checks, understand capacity vs length, and use copy() for true copies.

Mistakes 4 - Control Structures

Ignoring Elements Are Copied in Range Loops

The Problem: Range loops copy elements, so modifying the loop variable doesn't change the original

accounts := []Account{
    {Name: "John", Balance: 100},
    {Name: "Jane", Balance: 200},
}

// BAD - Modifying copy, not original
for _, account := range accounts {
    account.Balance *= 2  // Modifies copy, not original slice element!
}
fmt.Println(accounts)     // Still [100, 200] - unchanged!

Fix - Use index to modify original:

for i := range accounts {
    accounts[i].Balance *= 2  // Modifies original
}
// Or use pointer slice if appropriate

Ignoring How Arguments Are Evaluated in Range Loops

The Problem: Range expression is evaluated only once at start of loop

numbers := []int{1, 2, 3}
for i, v := range numbers {
    numbers = append(numbers, v*10)  // Grows slice during iteration
    fmt.Printf("i=%d, v=%d\n", i, v)
}
// Only iterates over original 3 elements, not the new ones!
// Output: i=0,v=1  i=1,v=2  i=2,v=3

Why: The range expression is evaluated once - range works on a copy of the slice header (pointer/length/capacity), so later growth isn't seen

Another example:

func getSlice() []int {
    fmt.Println("getSlice called")
    return []int{1, 2, 3}
}

for _, v := range getSlice() {  // getSlice() called only once
    fmt.Println(v)
}

Ignoring Impact of Using Pointers in Range Loops

The Problem: Taking address of loop variable gives same memory location every iteration

var pointers []*int
numbers := []int{1, 2, 3}

for _, v := range numbers {
    pointers = append(pointers, &v)  // BAD - all point to same variable!
}

for _, p := range pointers {
    fmt.Println(*p)  // Prints: 3, 3, 3 (all point to last value!)
}

Why: Before Go 1.22, v was a single variable reused across iterations - same memory address every time. (Since Go 1.22, the loop variable is per-iteration, so this example actually prints 1, 2, 3.)

Fix:

var pointers []*int
numbers := []int{1, 2, 3}

for i := range numbers {
    pointers = append(pointers, &numbers[i])  // Point to slice elements
}
// Or create a local copy (only needed before Go 1.22):
for _, v := range numbers {
    v := v  // Create new variable per iteration
    pointers = append(pointers, &v)
}

Making Wrong Assumptions During Map Iterations

The Problem: Map iteration order is random and can change between runs

m := map[string]int{"a": 1, "b": 2, "c": 3}

for k, v := range m {
    fmt.Printf("%s=%d ", k, v)
}
// Output might be: a=1 c=3 b=2
// Next run might be: b=2 a=1 c=3

Also: Modifying a map during iteration is risky - entries added during iteration may or may not be visited

for k := range m {
    if k == "a" {
        m["d"] = 4  // The new entry may or may not be produced by this loop (unspecified)
    }
}

Fix: If you need order, collect keys first:

keys := make([]string, 0, len(m))
for k := range m {
    keys = append(keys, k)
}
sort.Strings(keys)  // Or sort however you need
for _, k := range keys {
    fmt.Printf("%s=%d ", k, m[k])
}

Ignoring How Break Statements Work

The Problem: break only breaks the innermost loop, not outer loops

for i := 0; i < 3; i++ {
    for j := 0; j < 3; j++ {
        if i == 1 && j == 1 {
            break  // Only breaks inner loop, continues outer loop
        }
        fmt.Printf("(%d,%d) ", i, j)
    }
}
// Still prints (2,0) (2,1) (2,2) - outer loop continues

Fix with labels:

outer:
for i := 0; i < 3; i++ {
    for j := 0; j < 3; j++ {
        if i == 1 && j == 1 {
            break outer  // Breaks outer loop
        }
        fmt.Printf("(%d,%d) ", i, j)
    }
}

Using Defer Inside a Loop

The Problem: defer executes when function returns, not when loop iteration ends

func processFiles(files []string) error {
    for _, filename := range files {
        file, err := os.Open(filename)
        if err != nil {
            return err
        }
        defer file.Close()  // BAD - all files stay open until function ends!

        // Process file...
    }
    // All file.Close() calls happen here - might run out of file handles!
    return nil
}

Fix - Use closure or manual cleanup:

func processFiles(files []string) error {
    for _, filename := range files {
        err := func() error {  // Closure
            file, err := os.Open(filename)
            if err != nil {
                return err
            }
            defer file.Close()  // Closes at end of this function

            // Process file...
            return nil
        }()
        if err != nil {
            return err
        }
    }
    return nil
}

Bottom Line: Range loops copy values, evaluate expressions once, reuse loop variables, map iteration is random, break needs labels for outer loops, and defer waits for function end not loop end.

Mistakes 5 - Strings

Not Understanding the Concept of a Rune

What is a Rune? A rune is Go's representation of a single Unicode code point (it's an alias for int32).

s := "Hello, 世界"
fmt.Println(len(s))        // 13 bytes (not 8 characters!)
fmt.Println(len([]rune(s))) // 8 runes (actual character count)

The Problem: String length returns bytes, not characters

// BAD - Assumes 1 byte = 1 character
name := "José"
if len(name) > 4 {  // len(name) = 5 bytes, but only 4 characters!
    // Wrong assumption
}

// GOOD - Count actual characters
if len([]rune(name)) > 4 {
    // Correct character count
}

Why this matters:

  • é is 2 bytes in UTF-8

  • 世 is 3 bytes in UTF-8

  • ASCII characters are 1 byte


Inaccurate String Iteration

The Problem: Using byte index vs rune iteration gives different results

s := "café"

// BAD - Iterates by bytes
for i := 0; i < len(s); i++ {
    fmt.Printf("%c ", s[i])  // c a f à © (broken character!)
}

// GOOD - Iterates by runes (characters)
for _, r := range s {
    fmt.Printf("%c ", r)     // c a f é (correct!)
}

// Also good - with position (byte index)
for i, r := range s {
    fmt.Printf("Position %d: %c\n", i, r)
    // Position 0: c
    // Position 1: a
    // Position 2: f
    // Position 3: é  (é spans bytes 3-4, so a following character would start at index 5)
}

Key Point: range on strings iterates by runes, not bytes


Misusing Trim Functions

The Problem: Confusion between different trim functions

s := "  hello world  "

// Different behaviors:
fmt.Println(strings.Trim(s, " "))      // "hello world" - removes from both ends
fmt.Println(strings.TrimSpace(s))      // "hello world" - removes whitespace from both ends
fmt.Println(strings.TrimLeft(s, " "))  // "hello world  " - removes from left only
fmt.Println(strings.TrimPrefix(s, " ")) // " hello world  " - removes prefix once only

// Common mistake:
url := "https://example.com/s/"
// BAD - Trim removes ANY of the characters 's' and '/' from both ends
clean := strings.Trim(url, "s/")       // "https://example.com" - also ate "/s"!

// GOOD - removes the exact suffix once
clean = strings.TrimSuffix(url, "/")   // "https://example.com/s" - correct!

Key Difference:

  • Trim() removes any characters in the set from both ends

  • TrimPrefix()/TrimSuffix() removes exact string once


Under-Optimized String Concatenation

The Problem: Using + for multiple concatenations is inefficient

// BAD - Creates new string each iteration
var result string
for i := 0; i < 1000; i++ {
    result += fmt.Sprintf("item%d,", i)  // O(n²) performance
}

Better approaches:

// GOOD - Use strings.Builder for multiple concatenations
var builder strings.Builder
builder.Grow(1000 * 10)  // Pre-allocate if you know size
for i := 0; i < 1000; i++ {
    builder.WriteString(fmt.Sprintf("item%d,", i))
}
result := builder.String()

// Or use strings.Join for slices
items := make([]string, 1000)
for i := 0; i < 1000; i++ {
    items[i] = fmt.Sprintf("item%d", i)
}
result = strings.Join(items, ",")

Performance difference: + is O(n²), strings.Builder is O(n)


Useless String Conversions

The Problem: Converting between string and []byte unnecessarily

// BAD - Unnecessary conversions
func processData(data string) string {
    bytes := []byte(data)      // Conversion 1 (copies the data)
    // ... process bytes ...
    return string(bytes)       // Conversion 2 (copies again)
}

// GOOD - Work with appropriate type from start
func processData(data []byte) []byte {
    // Work directly with bytes
    return data
}

Common wasteful pattern:

// BAD
jsonStr := `{"name":"John"}`
var user User
json.Unmarshal([]byte(jsonStr), &user)  // Unnecessary string→[]byte conversion

// GOOD - Use []byte from start if possible
jsonData := []byte(`{"name":"John"}`)
json.Unmarshal(jsonData, &user)

Substrings and Memory Leaks

The Problem: Substrings share underlying memory with original string

func getFirstWord() string {
    hugeString := strings.Repeat("word ", 1_000_000)  // 5MB string
    return hugeString[:4]  // Returns "word" but keeps 5MB in memory!
}

word := getFirstWord()
// The entire 5MB string stays in memory because substring references it

Why: Go strings are immutable, so substring just points to part of original

Fix - Copy when keeping small part of large string:

func getFirstWord() string {
    hugeString := strings.Repeat("word ", 1_000_000)
    firstWord := hugeString[:4]
    return string([]byte(firstWord))  // Force copy, original can be GC'd
}

// Or use strings.Clone() (Go 1.18+)
return strings.Clone(hugeString[:4])

Bottom Line: Runes ≠ bytes, use range for character iteration, TrimPrefix/Suffix for exact matches, strings.Builder for concatenation, avoid unnecessary conversions, and copy substrings when original is large.

Mistakes 6 - Functions and methods

Not Knowing Which Type Receiver to Use

The Problem: Confusion between value receivers (t Type) vs pointer receivers (t *Type)

Value Receiver - Creates a Copy:

type Counter struct {
    count int
}

// Value receiver - modifies copy, not original
func (c Counter) Increment() {
    c.count++  // Modifies copy, original unchanged
}

func main() {
    c := Counter{count: 5}
    c.Increment()
    fmt.Println(c.count)  // Still 5 - unchanged!
}

Pointer Receiver - Modifies Original:

// Pointer receiver - modifies original
func (c *Counter) Increment() {
    c.count++  // Modifies original
}

func main() {
    c := Counter{count: 5}
    c.Increment()
    fmt.Println(c.count)  // 6 - changed!
}

When to Use Each:

  • Pointer receiver: When you need to modify the receiver OR when receiver is large (avoid copying)

  • Value receiver: When you don't modify receiver AND receiver is small (simple types, small structs)

Consistency Rule: If any method uses pointer receiver, use pointer receivers for all methods on that type
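Receiver choice also affects interface satisfaction - a point worth knowing alongside the consistency rule (the `Incrementer` interface below is my illustration): a method with a pointer receiver is only in the method set of `*T`, so a plain `T` value does not satisfy the interface.

```go
package main

import "fmt"

type Incrementer interface{ Increment() }

type Counter struct{ count int }

// Pointer receiver: only *Counter has Increment in its method set.
func (c *Counter) Increment() { c.count++ }

func main() {
	var i Incrementer

	// i = Counter{}  // compile error: Increment has a pointer receiver,
	//                // so Counter (the value type) doesn't satisfy Incrementer
	i = &Counter{}
	i.Increment()
	fmt.Println(i.(*Counter).count) // 1
}
```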


Never Using Named Result Parameters

What are Named Result Parameters? Pre-declaring return variable names in function signature:

// Regular return
func divide(a, b int) (int, error) {
    if b == 0 {
        return 0, errors.New("division by zero")
    }
    return a / b, nil
}

// Named return parameters
func divide(a, b int) (result int, err error) {
    if b == 0 {
        err = errors.New("division by zero")
        return  // Bare return uses named variables
    }
    result = a / b
    return  // Returns result and err
}

Benefits:

  • Documentation - shows what the function returns

  • Cleaner error handling - don't need to declare err variable

  • Bare returns - just return uses named variables

Good use case:

func processFile(filename string) (data []byte, err error) {
    file, err := os.Open(filename)
    if err != nil {
        return  // Returns nil data, error
    }
    defer file.Close()

    data, err = io.ReadAll(file)
    return  // Returns data, err
}

Unintended Side Effects with Named Result Parameters

The Problem: Named parameters can be modified unexpectedly, especially with defer

func increment() (result int) {
    defer func() {
        result++  // Modifies return value!
    }()
    return 5
}

fmt.Println(increment())  // Prints 6, not 5!

Another trap - shadowing:

func confusing() (result int) {
    result = 5
    if true {
        result := 10  // New variable shadows named return!
        fmt.Println(result)  // 10
    }
    return  // Returns 5, not 10
}

Be careful with defer modifying named returns:

func openFile() (file *os.File, err error) {
    defer func() {
        if err != nil {
            log.Printf("Failed to open file: %v", err)  // OK - just logging
        }
    }()

    file, err = os.Open("data.txt")
    return
}

Returning a Nil Receiver

The Problem: Methods called on nil receivers can panic or behave unexpectedly

type SafeMap struct {
    m map[string]int
}

func (sm *SafeMap) Get(key string) int {
    return sm.m[key]  // PANIC if sm is nil!
}

func NewSafeMap() *SafeMap {
    return nil  // BAD - returning nil pointer
}

func main() {
    sm := NewSafeMap()
    fmt.Println(sm.Get("key"))  // PANIC!
}

Fix - Handle nil receivers or don't return nil:

func (sm *SafeMap) Get(key string) int {
    if sm == nil || sm.m == nil {
        return 0  // Safe handling
    }
    return sm.m[key]
}

// Better - don't return nil
func NewSafeMap() *SafeMap {
    return &SafeMap{
        m: make(map[string]int),
    }
}

Using Filename as Function Input

The Problem: Functions that take filenames are harder to test and less flexible than those that take io.Reader/Writer

// BAD - Hard to test, inflexible
func processFile(filename string) error {
    data, err := os.ReadFile(filename)
    if err != nil {
        return err
    }
    // Process data...
    return nil
}

Better - Accept interfaces:

// GOOD - Easy to test, flexible
func processData(r io.Reader) error {
    data, err := io.ReadAll(r)
    if err != nil {
        return err
    }
    // Process data...
    return nil
}

// Can be used with files, strings, network, etc.
processData(file)                    // *os.File
processData(strings.NewReader(data)) // string data
processData(httpResponse.Body)       // HTTP response

Testing becomes easy:

func TestProcessData(t *testing.T) {
    input := strings.NewReader("test data")
    err := processData(input)
    // Much easier than creating temp files
}

Ignoring How Defer Arguments and Receivers Are Evaluated

The Problem: Defer evaluates arguments immediately, but executes function later

func example() {
    i := 1
    defer fmt.Println(i)  // Captures value 1 immediately

    i = 2
    defer fmt.Println(i)  // Captures value 2 immediately

    i = 3
    fmt.Println("Current:", i)
}
// Output: Current: 3, then 2, then 1 (defer runs in reverse order)

Receiver evaluation:

type MyStruct struct {
    value int
}

func (m MyStruct) print() {
    fmt.Println(m.value)
}

func example() {
    m := MyStruct{value: 1}
    defer m.print()  // Captures m with value=1 immediately

    m.value = 2
    m.print()  // Prints 2
}
// Output: 2, then 1 (deferred call uses captured value)

Use closure to capture later values:

func example() {
    i := 1
    defer func() {
        fmt.Println(i)  // Captures variable, not value
    }()

    i = 2  // This change will be seen by defer
}
// Output: 2

Bottom Line: Use pointer receivers for mutations/large types, named returns for documentation, watch for defer side effects, don't return nil receivers, prefer io interfaces over filenames, and remember defer captures arguments immediately.

Mistakes 7 - Error management

Panicking

The Problem: Using panic() when you should return an error

// BAD - Panics crash the entire program
func divide(a, b int) int {
    if b == 0 {
        panic("division by zero")  // Crashes program!
    }
    return a / b
}

When to panic vs return error:

Panic for:

  • Programming errors (bugs in your code)

  • Truly unrecoverable situations

  • Initialization failures

// OK to panic - programming error
func mustParseConfig() Config {
    config, err := parseConfig()
    if err != nil {
        panic("invalid config: " + err.Error())  // Should never happen in production
    }
    return config
}

Return error for:

  • Expected failures (file not found, network errors)

  • User input validation

  • External service failures

// GOOD - Return error for expected failures
func divide(a, b int) (int, error) {
    if b == 0 {
        return 0, errors.New("division by zero")
    }
    return a / b, nil
}

Ignoring When to Wrap an Error

The Problem: Not adding context when passing errors up the call stack

// BAD - Loses context
func processFile(filename string) error {
    data, err := os.ReadFile(filename)
    if err != nil {
        return err  // What operation failed? Which file?
    }

    err = validateData(data)
    if err != nil {
        return err  // No context about what was being validated
    }

    return nil
}

Good error wrapping:

// GOOD - Adds context at each level
func processFile(filename string) error {
    data, err := os.ReadFile(filename)
    if err != nil {
        return fmt.Errorf("failed to read file %s: %w", filename, err)
    }

    err = validateData(data)
    if err != nil {
        return fmt.Errorf("validation failed for file %s: %w", filename, err)
    }

    return nil
}

When to wrap:

  • Add context about what operation failed

  • Add relevant parameters (filename, user ID, etc.)

  • Don't wrap if you're handling the error (logging and continuing)


Checking an Error Type Inaccurately

The Problem: Using wrong methods to check error types

// BAD - String comparison is fragile
if err.Error() == "connection refused" {
    // Breaks if error message changes
}

// BAD - Type assertion on wrapped errors
if _, ok := err.(*net.OpError); ok {
    // Fails if error is wrapped
}

Correct type checking:

// GOOD - Use errors.As() for type checking
var netErr *net.OpError
if errors.As(err, &netErr) {
    // Works even if error is wrapped
    fmt.Printf("Network operation failed: %v", netErr.Op)
}

// GOOD - Use errors.Is() for specific error values
if errors.Is(err, os.ErrNotExist) {
    // File doesn't exist
}

// GOOD - Custom error types
type ValidationError struct {
    Field string
    Value string
}

func (e ValidationError) Error() string {
    return fmt.Sprintf("invalid %s: %s", e.Field, e.Value)
}

// Check for custom type
var validationErr ValidationError
if errors.As(err, &validationErr) {
    fmt.Printf("Validation failed on field: %s", validationErr.Field)
}

Checking an Error Value Inaccurately

The Problem: Direct comparison of wrapped errors fails

// BAD - Direct comparison fails with wrapped errors
if err == io.EOF {
    // Won't work if error is wrapped
}

// BAD - Using strings
if strings.Contains(err.Error(), "EOF") {
    // Fragile and language-dependent
}

Correct value checking:

// GOOD - Use errors.Is() for sentinel errors
if errors.Is(err, io.EOF) {
    // Works even if wrapped: fmt.Errorf("read failed: %w", io.EOF)
}

if errors.Is(err, os.ErrNotExist) {
    // File doesn't exist
}

if errors.Is(err, context.Canceled) {
    // Context was canceled
}

Handling an Error Twice

The Problem: Logging an error and then returning it, causing duplicate handling

// BAD - Error gets logged multiple times
func processData() error {
    err := fetchData()
    if err != nil {
        log.Printf("Failed to fetch data: %v", err)  // Log here
        return fmt.Errorf("fetch failed: %w", err)   // AND wrap and return
    }
    return nil
}

func main() {
    err := processData()
    if err != nil {
        log.Printf("Process failed: %v", err)  // Logs again!
    }
}

Pick one strategy per error:

// GOOD - Either handle (log) or return, not both
func processData() error {
    err := fetchData()
    if err != nil {
        return fmt.Errorf("fetch failed: %w", err)  // Return to caller
    }
    return nil
}

// OR handle locally if you can continue
func processDataSafely() error {
    err := fetchData()
    if err != nil {
        log.Printf("Failed to fetch data, using cache: %v", err)  // Handle here
        return useCache()  // Continue with fallback
    }
    return nil
}

Not Handling an Error

The Problem: Ignoring errors with _ or not checking them

// BAD - Ignoring errors
data, _ := os.ReadFile("config.json")  // File might not exist!

file, err := os.Create("output.txt")
// BAD - err never checked; if Create failed, file is nil
file.Write(data)  // Write's error is ignored too

Always handle errors:

// GOOD - Check every error
data, err := os.ReadFile("config.json")
if err != nil {
    return fmt.Errorf("failed to read config: %w", err)
}

file, err := os.Create("output.txt")
if err != nil {
    return fmt.Errorf("failed to create output file: %w", err)
}
defer file.Close()

_, err = file.Write(data)
if err != nil {
    return fmt.Errorf("failed to write data: %w", err)
}

Not Handling Defer Errors

The Problem: Ignoring errors from deferred functions

// BAD - Ignoring Close() error
func writeData(filename string, data []byte) error {
    file, err := os.Create(filename)
    if err != nil {
        return err
    }
    defer file.Close()  // Ignoring potential error!

    _, err = file.Write(data)
    return err
}

Handle defer errors:

// GOOD - Check defer errors
func writeData(filename string, data []byte) (err error) {
    file, err := os.Create(filename)
    if err != nil {
        return err
    }

    defer func() {
        if closeErr := file.Close(); closeErr != nil && err == nil {
            err = fmt.Errorf("failed to close file: %w", closeErr)
        }
    }()

    _, err = file.Write(data)
    return err
}

// Or use helper function
func closeAndCapture(file *os.File, err *error) {
    if closeErr := file.Close(); closeErr != nil && *err == nil {
        *err = closeErr
    }
}

func writeData(filename string, data []byte) (err error) {
    file, err := os.Create(filename)
    if err != nil {
        return err
    }
    defer closeAndCapture(file, &err)

    _, err = file.Write(data)
    return err
}

Bottom Line: Return errors instead of panicking, wrap errors with context, use errors.Is()/errors.As() for checking, handle errors once, always check errors, and don't ignore defer errors.

Mistakes 8 - Concurrency foundations

Concurrency Intro - The What and Why

What is Concurrency? Concurrency is about dealing with multiple things at once. It's about structure - organizing your program to handle multiple tasks, even if they don't run simultaneously.

What is Parallelism? Parallelism is about doing multiple things at once. It's about execution - actually running multiple tasks simultaneously on multiple CPU cores.

Think of it like:

  • Concurrency: A juggler managing multiple balls (one person, multiple tasks)

  • Parallelism: Multiple jugglers each handling balls (multiple people, multiple tasks)

Go's Concurrency Tools:

  • Goroutines: Lightweight threads (like having multiple workers)

  • Channels: Pipes for goroutines to communicate safely

  • Mutexes: Locks to protect shared data

  • Context: Way to cancel/timeout operations across goroutines
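
A minimal sketch tying the first two tools together: a goroutine does the work and hands its result back over a channel (`runWork` is just an illustrative name):

```go
package main

import "fmt"

// runWork launches a goroutine and collects its result over a channel
func runWork() string {
	done := make(chan string)

	// Goroutine: runs concurrently with the caller
	go func() {
		// Channel: the safe way to hand the result back
		done <- "work finished"
	}()

	return <-done // blocks until the goroutine sends
}

func main() {
	fmt.Println(runWork()) // work finished
}
```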


Mixing Up Concurrency and Parallelism

The Problem: Thinking concurrency automatically means parallel execution

// This is concurrent (multiple goroutines) but might not be parallel
func main() {
    for i := 0; i < 1000; i++ {
        go func(i int) {
            fmt.Printf("Goroutine %d\n", i)
        }(i)
    }
    time.Sleep(time.Second)
}

Key Points:

  • Concurrency: 1000 goroutines can run on 1 CPU core (time-slicing)

  • Parallelism: 1000 goroutines can run on 8 CPU cores (truly simultaneous)

  • Go's runtime decides how to map goroutines to OS threads and CPU cores

Example:

// Concurrent but not parallel (1 CPU core)
runtime.GOMAXPROCS(1)
// vs
// Concurrent AND parallel (8 CPU cores)
runtime.GOMAXPROCS(8)

Bottom Line: Concurrency is about structure, parallelism is about execution. You can have one without the other.


Thinking Concurrency is Always Faster

The Problem: Adding goroutines thinking it automatically improves performance

// BAD - Unnecessary concurrency overhead
func processNumbers(numbers []int) []int {
    results := make([]int, len(numbers))
    var wg sync.WaitGroup

    for i, num := range numbers {
        wg.Add(1)
        go func(i, num int) {  // Goroutine for each number
            defer wg.Done()
            results[i] = num * 2  // Simple operation
        }(i, num)
    }
    wg.Wait()
    return results
}

Why this is slower:

  • Goroutine overhead: Creating/destroying goroutines costs time

  • Context switching: CPU time spent switching between goroutines

  • Memory overhead: Each goroutine uses ~2KB of stack
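
A quick way to feel this overhead is to time both versions side by side; a minimal sketch (function names are illustrative):

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// doubleSequential: plain loop, no goroutines
func doubleSequential(nums []int) []int {
	out := make([]int, len(nums))
	for i, n := range nums {
		out[i] = n * 2
	}
	return out
}

// doubleConcurrent: one goroutine per element - pure overhead for cheap work
func doubleConcurrent(nums []int) []int {
	out := make([]int, len(nums))
	var wg sync.WaitGroup
	for i, n := range nums {
		wg.Add(1)
		go func(i, n int) {
			defer wg.Done()
			out[i] = n * 2
		}(i, n)
	}
	wg.Wait()
	return out
}

func main() {
	nums := make([]int, 100_000)
	for i := range nums {
		nums[i] = i
	}

	start := time.Now()
	doubleSequential(nums)
	fmt.Println("sequential:", time.Since(start))

	start = time.Now()
	doubleConcurrent(nums)
	fmt.Println("goroutine-per-element:", time.Since(start)) // typically much slower
}
```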

When concurrency helps:

  • I/O bound tasks: Network requests, file operations

  • CPU intensive tasks: Complex calculations that can be split

  • Independent work: Tasks that don't depend on each other

// GOOD - Concurrent I/O operations
func fetchURLs(urls []string) []string {
    results := make([]string, len(urls))
    var wg sync.WaitGroup

    for i, url := range urls {
        wg.Add(1)
        go func(i int, url string) {
            defer wg.Done()
            resp, err := http.Get(url)  // I/O bound - benefits from concurrency
            if err != nil {
                return
            }
            defer resp.Body.Close()
            body, err := io.ReadAll(resp.Body)
            if err == nil {
                results[i] = string(body)  // Each goroutine writes its own index - no race
            }
        }(i, url)
    }
    wg.Wait()
    return results
}

Being Puzzled When to Use Channels or Mutexes

What are Channels? Channels are pipes that let goroutines send data to each other safely:

ch := make(chan int)

// Goroutine 1 sends data
go func() {
    ch <- 42  // Send 42 into channel
}()

// Goroutine 2 receives data
value := <-ch  // Receive from channel
fmt.Println(value)  // 42

What are Mutexes? Mutexes are locks that protect shared data from being accessed by multiple goroutines simultaneously:

var counter int
var mu sync.Mutex

// Goroutine 1
go func() {
    mu.Lock()
    counter++  // Only one goroutine can modify at a time
    mu.Unlock()
}()

// Goroutine 2
go func() {
    mu.Lock()
    counter++  // Waits for goroutine 1 to unlock
    mu.Unlock()
}()

When to use what:

Use Channels when:

  • Passing data between goroutines

  • Coordinating work (worker pools)

  • Signaling (done/cancel notifications)

// Channel example - passing work
jobs := make(chan int, 100)
results := make(chan int, 100)

// Worker
go func() {
    for job := range jobs {
        results <- job * 2  // Process and send result
    }
}()

Use Mutexes when:

  • Protecting shared state (counters, maps)

  • Simple critical sections

  • Performance critical (mutexes are faster than channels)

// Mutex example - protecting shared map
var cache = make(map[string]string)
var cacheMu sync.RWMutex

func getFromCache(key string) string {
    cacheMu.RLock()         // Read lock
    defer cacheMu.RUnlock()
    return cache[key]
}

func setCache(key, value string) {
    cacheMu.Lock()          // Write lock
    defer cacheMu.Unlock()
    cache[key] = value
}

Go's Philosophy: "Don't communicate by sharing memory; share memory by communicating" (prefer channels when possible)


Not Understanding Race Problems

What is a Race Condition? When multiple goroutines access shared data simultaneously and at least one modifies it, causing unpredictable results:

// RACE CONDITION - Dangerous!
var counter int

func increment() {
    for i := 0; i < 1000; i++ {
        counter++  // NOT atomic! Read -> Add -> Write
    }
}

func main() {
    go increment()  // Goroutine 1
    go increment()  // Goroutine 2
    time.Sleep(time.Second)  // Crude wait - prefer sync.WaitGroup
    fmt.Println(counter)     // Unpredictable - lost updates mean it's usually less than 2000
}

Why it's dangerous:

  • counter++ is three steps (read, add, write), and goroutines can interleave between them

  • Interleaved updates get lost, so the final value changes from run to run

  • The code often looks correct and passes tests, then fails unpredictably under load

Detecting races:

go run -race main.go  # Go's race detector (also works with go test -race and go build -race)

Fixing races:

// Fix 1: Mutex
var counter int
var mu sync.Mutex

func increment() {
    for i := 0; i < 1000; i++ {
        mu.Lock()
        counter++
        mu.Unlock()
    }
}

// Fix 2: Atomic operations
var counter int64

func increment() {
    for i := 0; i < 1000; i++ {
        atomic.AddInt64(&counter, 1)
    }
}

// Fix 3: Channel communication
func increment(ch chan int) {
    for i := 0; i < 1000; i++ {
        ch <- 1  // Send increment signal
    }
}
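
Fix 3 also needs a receiving side that owns the counter; a runnable sketch of the full pattern (`countViaChannel` is an illustrative name):

```go
package main

import (
	"fmt"
	"sync"
)

// countViaChannel: senders signal increments, a single receiver owns the counter
func countViaChannel(senders, perSender int) int {
	ch := make(chan int)
	var wg sync.WaitGroup

	for g := 0; g < senders; g++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := 0; i < perSender; i++ {
				ch <- 1 // send increment signal
			}
		}()
	}

	// Close the channel once all senders finish
	go func() {
		wg.Wait()
		close(ch)
	}()

	// Only this goroutine touches counter - no race possible
	counter := 0
	for delta := range ch {
		counter += delta
	}
	return counter
}

func main() {
	fmt.Println(countViaChannel(2, 1000)) // 2000
}
```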

Not Understanding the Concurrency Impacts of a Workload Type

The Problem: Using same concurrency approach for different workload types

CPU-bound workloads:

  • Limited by CPU cores

  • More goroutines than cores = wasted context switching

  • Optimal: Number of goroutines ≈ Number of CPU cores

// GOOD for CPU-bound work
func processDataCPUBound(data [][]int) {
    numWorkers := runtime.NumCPU()  // Match CPU cores
    jobs := make(chan []int, len(data))
    var wg sync.WaitGroup

    // Start exactly NumCPU workers
    for i := 0; i < numWorkers; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for chunk := range jobs {
                process(chunk)  // process is your CPU-heavy function
            }
        }()
    }

    // Send work, then signal workers to stop
    for _, chunk := range data {
        jobs <- chunk
    }
    close(jobs)
    wg.Wait()
}

I/O-bound workloads:

  • Limited by I/O operations (network, disk)

  • Can benefit from many goroutines (while some wait for I/O, others work)

  • Optimal: Much higher number of goroutines

// GOOD for I/O-bound work
func fetchURLsIOBound(urls []string) {
    numWorkers := 100  // Much higher than CPU count
    jobs := make(chan string, len(urls))
    var wg sync.WaitGroup

    for i := 0; i < numWorkers; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for url := range jobs {
                resp, err := http.Get(url)  // I/O operation
                if err == nil {
                    resp.Body.Close()  // Always release the connection
                }
            }
        }()
    }

    for _, url := range urls {
        jobs <- url
    }
    close(jobs)
    wg.Wait()
}

Misunderstanding Go Contexts

What is Context? Context carries deadlines, cancellation signals, and request-scoped values across API boundaries:

// Basic context usage
ctx := context.Background()
ctx, cancel := context.WithTimeout(ctx, 5*time.Second)
defer cancel()

result, err := doWork(ctx)

The Problem: Not propagating context or ignoring cancellation

// BAD - Ignoring context
func fetchData(ctx context.Context, url string) ([]byte, error) {
    // Ignores ctx - won't respect timeouts/cancellation
    resp, err := http.Get(url)
    // ...
}

// BAD - Not checking for cancellation
func longRunningTask(ctx context.Context) error {
    for i := 0; i < 1000000; i++ {
        // Should check: if ctx.Err() != nil { return ctx.Err() }
        doExpensiveWork()  // Keeps running even if cancelled
    }
    return nil
}

Proper context usage:

// GOOD - Respecting context
func fetchData(ctx context.Context, url string) ([]byte, error) {
    req, err := http.NewRequestWithContext(ctx, "GET", url, nil)
    if err != nil {
        return nil, err
    }

    resp, err := http.DefaultClient.Do(req)  // Respects context timeout
    // ...
}

// GOOD - Checking cancellation
func longRunningTask(ctx context.Context) error {
    for i := 0; i < 1000000; i++ {
        select {
        case <-ctx.Done():
            return ctx.Err()  // Cancelled or timed out
        default:
            doExpensiveWork()
        }
    }
    return nil
}

Context types:

  • context.Background(): Root context

  • context.WithTimeout(): Auto-cancels after duration

  • context.WithCancel(): Manual cancellation

  • context.WithValue(): Carries request-scoped data

Bottom Line: Concurrency ≠ parallelism, concurrency isn't always faster, use channels for communication and mutexes for protection, watch for race conditions, match concurrency to workload type, and always respect context cancellation.