C++ Stack vs Heap Memory: A Deep Dive

Every variable in C++ lives somewhere in memory. Understanding where — and why — is what separates developers who write fast, correct code from those who chase mysterious crashes and memory leaks.
Most languages hide memory management behind a garbage collector. C++ doesn't. It gives you direct control over where objects are created, how long they live, and when they're destroyed. That control is C++'s superpower, but it demands understanding.
This guide covers the two primary memory regions — stack and heap — how they work, how they differ, and how to choose between them.
What You'll Learn
✅ How the stack and heap are organized in process memory
✅ Object lifetime and storage duration (automatic, dynamic, static)
✅ Stack allocation: how it works, why it's fast, and its limits
✅ Heap allocation: when you need it and its costs
✅ Stack overflow, heap fragmentation, and other pitfalls
✅ Decision framework: stack vs heap for every situation
✅ How modern C++ (smart pointers, containers) simplifies the choice
The Memory Layout of a C++ Program
When your C++ program runs, the operating system gives it a block of virtual memory. This memory is divided into several segments.
Key points:
- The stack grows downward (from high addresses to low)
- The heap grows upward (from low addresses to high)
- They grow toward each other, with free space in between
- The text segment contains your compiled code (read-only)
- Data/BSS segments hold global and static variables
#include <iostream>
int globalVar = 42; // Data segment (initialized)
int uninitializedGlobal; // BSS segment (zero-initialized)
int main() {
int stackVar = 10; // Stack
static int staticLocal = 20; // Data segment (initialized)
int* heapVar = new int(30); // Pointer on stack, value on heap
std::cout << "Stack: " << &stackVar << '\n';
std::cout << "Heap: " << heapVar << '\n';
std::cout << "Global: " << &globalVar << '\n';
std::cout << "Static: " << &staticLocal << '\n';
delete heapVar;
return 0;
}
Run this and you'll see the addresses reflect the layout: stack addresses are high, heap addresses are lower, and globals are even lower.
The Stack: Fast, Automatic, Limited
The stack is a LIFO (Last In, First Out) region of memory managed through the CPU's stack pointer register. Every time you call a function, a new stack frame is pushed. When the function returns, its frame is popped.
How Stack Allocation Works
void bar(int y) {
int local_b = y * 2; // pushed onto stack
// ... do work ...
} // local_b popped (destroyed)
void foo(int x) {
int local_a = x + 1; // pushed onto stack
bar(local_a); // new frame pushed for bar()
// bar's frame already popped when we get here
} // local_a popped (destroyed)
int main() {
int n = 10; // pushed onto stack
foo(n); // new frame pushed for foo()
return 0; // n popped (destroyed)
}
Why Stack Allocation is Fast
Stack allocation is essentially free — it's just a pointer adjustment:
- Allocation: Decrement the stack pointer by the size of the variable → done
- Deallocation: Increment the stack pointer when the function returns → done
- No bookkeeping: No free lists, no fragmentation tracking, no metadata headers
- Cache-friendly: Stack memory is contiguous and heavily cached by the CPU
Compare this to heap allocation, which involves searching for a suitable free block, updating metadata, and potentially calling into the OS kernel.
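To make that bookkeeping concrete, here is a toy first-fit allocator. This is a simplified sketch of the general shape, not how any production malloc works (real allocators use size classes, bins, and per-thread caches, and store metadata in headers next to the data):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Toy first-fit heap: a fixed buffer plus a list of block records.
class ToyHeap {
    struct Block { std::size_t offset, size; bool free; };
    std::vector<std::byte> buffer;
    std::vector<Block> blocks;
public:
    explicit ToyHeap(std::size_t bytes) : buffer(bytes) {
        blocks.push_back({0, bytes, true}); // one big free block initially
    }
    void* allocate(std::size_t size) {
        for (std::size_t i = 0; i < blocks.size(); ++i) {
            if (blocks[i].free && blocks[i].size >= size) { // first fit
                if (blocks[i].size > size)  // split off the remainder
                    blocks.push_back({blocks[i].offset + size,
                                      blocks[i].size - size, true});
                blocks[i].size = size;
                blocks[i].free = false;
                return buffer.data() + blocks[i].offset;
            }
        }
        return nullptr; // no contiguous block large enough
    }
    void deallocate(void* p) {
        std::size_t off = static_cast<std::size_t>(
            static_cast<std::byte*>(p) - buffer.data());
        for (auto& b : blocks)
            if (b.offset == off) { b.free = true; return; }
        // a real allocator would also coalesce adjacent free blocks here
    }
};
```

Even in this toy, allocation is a linear search plus metadata updates, while a stack allocation is a single pointer adjustment.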
#include <chrono>
#include <iostream>
void stackAllocation() {
for (int i = 0; i < 1'000'000; i++) {
volatile int arr[100]; // Stack: just moves the pointer
arr[0] = i; // volatile store keeps the loop from being optimized away
}
}
void heapAllocation() {
for (int i = 0; i < 1'000'000; i++) {
int* arr = new int[100]; // Heap: find block, update metadata
arr[0] = i;
delete[] arr; // Heap: mark as free, coalesce
}
}
int main() {
auto start = std::chrono::high_resolution_clock::now();
stackAllocation();
auto mid = std::chrono::high_resolution_clock::now();
heapAllocation();
auto end = std::chrono::high_resolution_clock::now();
auto stackTime = std::chrono::duration_cast<std::chrono::microseconds>(mid - start);
auto heapTime = std::chrono::duration_cast<std::chrono::microseconds>(end - mid);
std::cout << "Stack: " << stackTime.count() << " μs\n";
std::cout << "Heap: " << heapTime.count() << " μs\n";
std::cout << "Ratio: " << (double)heapTime.count() / stackTime.count() << "x slower\n";
return 0;
}
Typical result: heap allocation is 10–100x slower than stack allocation.
Stack Limitations
The stack is fast because it's simple. But that simplicity comes with constraints:
1. Fixed size (typically 1–8 MB)
void stackoverflow() {
int hugeArray[1'000'000]; // 4 MB on stack — might crash!
hugeArray[0] = 42;
}
2. Size must be known at compile time (in standard C++)
void process(int n) {
// int arr[n]; // Variable-Length Arrays — NOT standard C++!
std::vector<int> arr(n); // Use the heap via std::vector instead
}
3. Objects die when the function returns
int* createValue() {
int local = 42;
return &local; // DANGLING POINTER! local is destroyed on return
}
Stack Overflow
When you use more stack space than available, you get a stack overflow — typically a segfault (segmentation fault):
// Classic cause: unbounded recursion
void infinite(int n) {
int buffer[1000]; // Each call consumes ~4 KB
std::cout << n << '\n';
infinite(n + 1); // Never stops → stack overflow
}
// Fix: always have a base case
void countdown(int n) {
if (n <= 1) return; // Base case stops recursion
countdown(n - 1);
}
Common causes of stack overflow:
- Unbounded recursion (missing base case)
- Deep recursion (even with a base case, very deep call chains)
- Large local variables (huge arrays on the stack)
How to check/set stack size:
- Linux: ulimit -s (shows KB, default usually 8192 = 8 MB)
- macOS: ulimit -s (default usually 8192 = 8 MB)
- Windows: default is 1 MB, configurable via the /STACK linker flag
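On POSIX systems you can also query the limit programmatically with getrlimit. This is Linux/macOS-specific (there is no standard C++ API for it, and on Windows the stack reserve is fixed at link time):

```cpp
#include <cassert>
#include <cstddef>
#include <sys/resource.h> // POSIX getrlimit; not available on Windows

// Returns the current (soft) stack size limit in bytes, or 0 on failure.
std::size_t stackLimitBytes() {
    struct rlimit rl {};
    if (getrlimit(RLIMIT_STACK, &rl) != 0) return 0;
    return static_cast<std::size_t>(rl.rlim_cur);
}
```

Knowing this number tells you how large a local array you can afford before risking an overflow.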
The Heap: Flexible, Manual, Costly
The heap (also called the free store in C++ terms) is a large, unstructured pool of memory. You request memory from it explicitly, and you're responsible for returning it.
How Heap Allocation Works
#include <iostream>
int main() {
// 1. Allocate: OS/allocator finds a free block
int* p = new int(42);
// 2. Use: memory stays alive as long as you want
std::cout << *p << '\n';
// 3. Deallocate: you must explicitly free it
delete p;
p = nullptr; // Good practice: avoid dangling pointer
return 0;
}
Behind the scenes, new does several things:
1. Calls the memory allocator (e.g., malloc, which gets memory from the OS via mmap or sbrk)
2. The allocator searches its free list for a block of the right size
3. If no suitable block exists, it requests more memory from the OS
4. It writes metadata headers (block size, flags) before your data
5. It returns a pointer to the usable area
Why You Need the Heap
The heap solves the three problems the stack can't:
1. Objects that outlive their creating function
#include <memory>
#include <string>
std::unique_ptr<std::string> createGreeting(const std::string& name) {
// This string lives on the heap — survives the function return
return std::make_unique<std::string>("Hello, " + name + "!");
}
int main() {
auto greeting = createGreeting("World");
std::cout << *greeting << '\n'; // "Hello, World!"
// greeting automatically deleted when main() ends
return 0;
}
2. Large data that would overflow the stack
#include <vector>
void processLargeDataset() {
// 100 million ints = ~400 MB — way too large for the stack
// std::vector allocates on the heap internally
std::vector<int> data(100'000'000);
for (std::size_t i = 0; i < data.size(); i++) {
data[i] = i;
}
// vector's destructor frees the heap memory
}
3. Data whose size is only known at runtime
#include <iostream>
#include <memory>
int main() {
int n;
std::cout << "How many elements? ";
std::cin >> n;
// Size determined by user input — must use the heap
auto arr = std::make_unique<int[]>(n);
for (int i = 0; i < n; i++) {
arr[i] = i * i;
}
return 0;
}
The Costs of Heap Allocation
Nothing is free. Heap flexibility comes at a cost:
| Cost | Why |
|---|---|
| Slower allocation | Must search free lists, update metadata |
| Slower deallocation | Must coalesce free blocks, update lists |
| Fragmentation | Repeated alloc/dealloc creates gaps (see below) |
| Cache misses | Heap objects are scattered in memory |
| Thread contention | Global heap needs locking in multithreaded programs |
| Memory leaks | Forget to delete → memory never returned |
Heap Fragmentation
Fragmentation happens when the heap has enough total free memory but not enough contiguous free memory:
[used][free 32][used][free 16][used][free 32]
Total free: 80 bytes. But you can't allocate a 64-byte block because the free space is fragmented into 32 + 16 + 32 byte chunks.
#include <iostream>
#include <vector>
// Demonstrating fragmentation
int main() {
std::vector<int*> ptrs;
// Allocate many small blocks
for (int i = 0; i < 1000; i++) {
ptrs.push_back(new int[10]);
}
// Free every other block → creates holes
for (int i = 0; i < 1000; i += 2) {
delete[] ptrs[i];
ptrs[i] = nullptr;
}
// Now try to allocate a large block
// Even though ~50% of memory is free, it may be fragmented
int* large = new int[5000]; // May need to request new memory from OS
// Cleanup
for (int i = 1; i < 1000; i += 2) {
delete[] ptrs[i];
}
delete[] large;
return 0;
}
Storage Duration: The Full Picture
C++ defines four storage durations that determine how long an object lives:
1. Automatic Storage Duration (Stack)
Objects created inside a function or block. Destroyed when the scope exits.
void example() {
int x = 10; // Created here
{
int y = 20; // Created here
std::string s = "hi"; // Created here
} // y and s destroyed here
// y and s don't exist here
} // x destroyed here
2. Dynamic Storage Duration (Heap)
Objects created with new (or malloc). Live until you explicitly delete (or free) them.
void example() {
int* p = new int(42); // Created on heap
// ...
delete p; // Destroyed here — YOU decide when
}
// If you forget delete → memory leak
3. Static Storage Duration
Objects that live for the entire program lifetime. Includes global variables, static locals, and static class members.
int globalCounter = 0; // Static: lives until program exits
void count() {
static int calls = 0; // Static local: initialized once, persists across calls
calls++;
globalCounter++;
std::cout << "Call #" << calls << ", global: " << globalCounter << '\n';
}
int main() {
count(); // Call #1, global: 1
count(); // Call #2, global: 2
count(); // Call #3, global: 3
return 0;
}
4. Thread Storage Duration
Objects that live for the lifetime of a thread. Created with thread_local.
#include <iostream>
#include <thread>
thread_local int threadCounter = 0; // Each thread gets its own copy
void work(const std::string& name) {
for (int i = 0; i < 3; i++) {
threadCounter++;
std::cout << name << ": " << threadCounter << '\n';
}
}
int main() {
std::thread t1(work, "Thread A");
std::thread t2(work, "Thread B");
t1.join();
t2.join();
// Each thread counted 1, 2, 3 independently
return 0;
}
Storage Duration Summary
| Duration | Keyword/Location | Where | Lifetime | Example |
|---|---|---|---|---|
| Automatic | Local variables | Stack | Scope exit | int x = 10; |
| Dynamic | new / make_unique | Heap | Until delete | new int(42) |
| Static | static, globals | Data segment | Program lifetime | static int n; |
| Thread | thread_local | Thread-local | Thread lifetime | thread_local int n; |
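The four durations can sit side by side in one small function. A sketch (the literal values 1–4 are arbitrary, chosen so the returned string encodes each variable):

```cpp
#include <cassert>
#include <string>

int globalValue = 1;            // static: data segment, program lifetime
thread_local int perThread = 2; // thread: one copy per thread

std::string durations() {
    static int callCount = 0;   // static local: persists across calls
    int local = 3;              // automatic: on the stack, destroyed at '}'
    int* dynamic = new int(4);  // dynamic: on the heap, lives until delete
    ++callCount;
    std::string s = std::to_string(globalValue) + std::to_string(perThread) +
                    std::to_string(local) + std::to_string(*dynamic) +
                    std::to_string(callCount);
    delete dynamic;             // we decide when the heap object dies
    return s;
}
```

Calling it twice shows that only the static counter carries state between calls; the automatic and dynamic objects are recreated each time.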
Stack vs Heap: The Decision Framework
Here's a practical decision tree for choosing where to allocate:
When to Use the Stack
// ✅ Small, fixed-size types
int count = 0;
double ratio = 3.14;
char grade = 'A';
// ✅ Small structs and objects
struct Point { double x, y; };
Point origin{0.0, 0.0};
// ✅ std::array for fixed-size collections
std::array<int, 10> scores{};
// ✅ Iterators and loop variables
for (auto it = vec.begin(); it != vec.end(); ++it) { /* ... */ }
// ✅ RAII lock guards
{
std::lock_guard<std::mutex> lock(mtx);
// ... critical section ...
} // lock released here automatically
When to Use the Heap
// ✅ Collections with runtime-determined size
std::vector<int> data(n); // heap internally
std::string name = getUserInput(); // heap internally
// ✅ Large objects
auto matrix = std::make_unique<double[]>(1'000'000);
// ✅ Objects that outlive their scope
auto config = std::make_unique<Config>();
return config; // ownership transferred to caller
// ✅ Polymorphic objects
std::unique_ptr<Shape> shape = std::make_unique<Circle>(5.0);
// ✅ Shared resources
auto cache = std::make_shared<Cache>();
// Multiple components hold shared_ptr to same cache
The Modern C++ Answer
In practice, modern C++ makes the choice simpler. You rarely call new/delete directly:
| Scenario | Use This | Stack or Heap? |
|---|---|---|
| Local primitives and small structs | Direct declaration | Stack |
| Dynamic array | std::vector | Heap (managed) |
| Dynamic string | std::string | Heap (managed) |
| Unique ownership | std::unique_ptr | Heap (managed) |
| Shared ownership | std::shared_ptr | Heap (managed) |
| Fixed-size collection | std::array | Stack |
| Optional value | std::optional | Stack |
The standard library containers (vector, string, map, etc.) handle heap allocation internally and clean up automatically via destructors. You get the flexibility of the heap with the safety of the stack.
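A quick way to see this split in action: a std::vector object is a small fixed-size handle, and its elements live in a separate heap buffer. The check below is illustrative only; it relies on the practical observation that 4000 bytes of elements cannot fit inside the roughly 24-byte vector object itself:

```cpp
#include <cassert>
#include <vector>

// The vector object (pointer + size + capacity) lives wherever you declare
// it; the 1000 ints it owns live in a separate heap allocation.
bool elementsLiveOutsideTheHandle() {
    std::vector<int> v(1000, 7);
    const char* handle = reinterpret_cast<const char*>(&v);
    const char* elems = reinterpret_cast<const char*>(v.data());
    // data() must point outside the handle object (to the heap buffer)
    return elems < handle || elems >= handle + sizeof(v);
}
```

Declare the vector as a local variable and you get stack-scoped lifetime for a heap-sized buffer, which is exactly the point of the table above.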
RAII: Bridging Stack and Heap
RAII (Resource Acquisition Is Initialization) is the pattern that makes C++ memory management practical. The idea: tie the lifetime of a heap resource to a stack object's scope.
#include <fstream>
#include <memory>
#include <mutex>
void demonstrateRAII() {
// 1. File handle: opened in constructor, closed in destructor
{
std::ofstream file("output.txt");
file << "Hello, RAII!\n";
} // file automatically closed here
// 2. Smart pointer: memory freed when pointer goes out of scope
{
auto data = std::make_unique<int[]>(1000);
data[0] = 42;
} // memory automatically freed here
// 3. Lock guard: mutex unlocked when guard is destroyed
std::mutex mtx;
{
std::lock_guard<std::mutex> lock(mtx);
// ... critical section ...
} // mutex automatically unlocked here
}
The pattern:
- Stack object (the RAII wrapper) manages a heap resource
- Constructor acquires the resource
- Destructor releases the resource
- Scope exit guarantees cleanup — even if an exception is thrown
Rule of thumb: Every heap allocation should be owned by a stack object. If you follow this rule, you'll never have a memory leak.
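For resources the standard library doesn't already wrap, you can write the RAII wrapper yourself. A minimal sketch for a C FILE* (in real code prefer std::ofstream, or std::unique_ptr with a custom deleter; the /tmp path in the usage below assumes a POSIX system):

```cpp
#include <cassert>
#include <cstdio>

// Minimal hand-rolled RAII wrapper: constructor acquires, destructor releases.
class File {
    std::FILE* f;
public:
    File(const char* path, const char* mode) : f(std::fopen(path, mode)) {}
    ~File() { if (f) std::fclose(f); } // runs even during exception unwinding
    File(const File&) = delete;            // owning type: no copies
    File& operator=(const File&) = delete;
    bool ok() const { return f != nullptr; }
    std::FILE* get() const { return f; }
};
```

The deleted copy operations matter: an owning wrapper that could be copied would close the same handle twice.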
For a deeper dive into smart pointers and ownership patterns, see the C++ Pointers Complete Guide.
Practical Examples
Example 1: Building a Simple Stack-Based Calculator
Everything on the stack — fast and simple:
#include <iostream>
#include <array>
struct Calculator {
// All members on the stack
std::array<double, 100> memory{};
int top = 0;
void push(double value) {
if (top < 100) {
memory[top++] = value;
}
}
double pop() {
if (top > 0) {
return memory[--top];
}
return 0.0;
}
double add() {
double b = pop(), a = pop();
double result = a + b;
push(result);
return result;
}
double multiply() {
double b = pop(), a = pop();
double result = a * b;
push(result);
return result;
}
};
int main() {
Calculator calc; // Entire calculator lives on the stack
calc.push(3.0);
calc.push(4.0);
std::cout << "3 + 4 = " << calc.add() << '\n'; // 7
calc.push(5.0);
std::cout << "7 * 5 = " << calc.multiply() << '\n'; // 35
return 0;
}
Example 2: Heap-Based Dynamic Graph
When data structures grow dynamically, the heap is necessary:
#include <iostream>
#include <memory>
#include <string>
#include <unordered_map>
#include <vector>
class Graph {
struct Node {
std::string name;
std::vector<Node*> neighbors;
explicit Node(std::string n) : name(std::move(n)) {}
};
// Nodes owned by the graph, stored on the heap
std::unordered_map<std::string, std::unique_ptr<Node>> nodes;
public:
void addNode(const std::string& name) {
nodes[name] = std::make_unique<Node>(name);
}
void addEdge(const std::string& from, const std::string& to) {
if (nodes.count(from) && nodes.count(to)) {
nodes[from]->neighbors.push_back(nodes[to].get());
}
}
void printNeighbors(const std::string& name) const {
if (auto it = nodes.find(name); it != nodes.end()) {
std::cout << name << " -> ";
for (const auto* neighbor : it->second->neighbors) {
std::cout << neighbor->name << " ";
}
std::cout << '\n';
}
}
};
int main() {
Graph g; // Graph object on stack, nodes on heap
g.addNode("A");
g.addNode("B");
g.addNode("C");
g.addEdge("A", "B");
g.addEdge("A", "C");
g.addEdge("B", "C");
g.printNeighbors("A"); // A -> B C
g.printNeighbors("B"); // B -> C
return 0;
// All nodes automatically freed when Graph destructor runs
}
Example 3: Measuring Stack vs Heap Performance
A realistic benchmark comparing allocation strategies:
#include <array>
#include <chrono>
#include <iostream>
#include <memory>
#include <string>
#include <vector>
struct Particle {
double x, y, z;
double vx, vy, vz;
double mass;
void update(double dt) {
x += vx * dt;
y += vy * dt;
z += vz * dt;
}
};
// Strategy 1: Stack allocation with std::array
void simulateStack(int steps) {
std::array<Particle, 1000> particles{};
for (auto& p : particles) {
p = {0.0, 0.0, 0.0, 1.0, 0.5, 0.2, 1.0};
}
for (int i = 0; i < steps; i++) {
for (auto& p : particles) {
p.update(0.01);
}
}
}
// Strategy 2: Heap allocation with std::vector
void simulateHeap(int steps) {
std::vector<Particle> particles(1000);
for (auto& p : particles) {
p = {0.0, 0.0, 0.0, 1.0, 0.5, 0.2, 1.0};
}
for (int i = 0; i < steps; i++) {
for (auto& p : particles) {
p.update(0.01);
}
}
}
// Strategy 3: Heap with individual allocations (worst case)
void simulateHeapScattered(int steps) {
std::vector<std::unique_ptr<Particle>> particles;
for (int i = 0; i < 1000; i++) {
particles.push_back(std::make_unique<Particle>(
Particle{0.0, 0.0, 0.0, 1.0, 0.5, 0.2, 1.0}
));
}
for (int i = 0; i < steps; i++) {
for (auto& p : particles) {
p->update(0.01);
}
}
}
int main() {
const int steps = 10'000;
auto measure = [](const std::string& name, auto func) {
auto start = std::chrono::high_resolution_clock::now();
func();
auto end = std::chrono::high_resolution_clock::now();
auto us = std::chrono::duration_cast<std::chrono::microseconds>(end - start);
std::cout << name << ": " << us.count() << " μs\n";
};
measure("Stack (array) ", [&]{ simulateStack(steps); });
measure("Heap (vector) ", [&]{ simulateHeap(steps); });
measure("Heap (scattered ptrs)", [&]{ simulateHeapScattered(steps); });
return 0;
}
Expected results:
- Stack (array): Fastest — data is contiguous, cache-friendly
- Heap (vector): Slightly slower — one heap allocation, still contiguous
- Heap (scattered): Slowest — each particle allocated separately, cache-unfriendly
The takeaway: it's not just stack vs heap — data layout matters. Contiguous memory (whether stack or heap) beats scattered heap allocations.
Common Pitfalls and How to Avoid Them
1. Returning a Pointer to a Local Variable
// ❌ WRONG: dangling pointer
int* bad() {
int x = 42;
return &x; // x is destroyed when bad() returns
}
// ✅ CORRECT: return by value (move semantics make this efficient)
int good() {
int x = 42;
return x; // Copied/moved — safe
}
// ✅ CORRECT: return a smart pointer to heap-allocated object
std::unique_ptr<int> alsoGood() {
return std::make_unique<int>(42); // Heap object, ownership transferred
}
2. Stack Overflow from Large Local Arrays
// ❌ DANGEROUS: 40 MB on the stack
void bad() {
double matrix[10'000'000];
}
// ✅ SAFE: use std::vector (heap-backed)
void good() {
std::vector<double> matrix(10'000'000);
}
3. Memory Leaks from Raw new
// ❌ LEAK: exception before delete
void bad() {
int* data = new int[1000];
riskyOperation(); // If this throws, delete[] is never reached
delete[] data;
}
// ✅ SAFE: smart pointer handles cleanup even on exception
void good() {
auto data = std::make_unique<int[]>(1000);
riskyOperation(); // Even if this throws, memory is freed
}
4. Fragmentation from Frequent Small Allocations
// ❌ SLOW: thousands of tiny heap allocations
std::vector<std::unique_ptr<int>> bad;
for (int i = 0; i < 10'000; i++) {
bad.push_back(std::make_unique<int>(i));
}
// ✅ BETTER: single contiguous allocation
std::vector<int> good(10'000);
for (int i = 0; i < 10'000; i++) {
good[i] = i;
}
Advanced: Custom Allocators and Memory Pools
For performance-critical applications, you can customize how heap memory is allocated:
Arena/Pool Allocator Concept
An arena allocator pre-allocates a large block and hands out chunks from it — fast allocation with zero fragmentation:
#include <cstddef>
#include <iostream>
#include <new> // std::bad_alloc
#include <vector>
class Arena {
std::vector<std::byte> buffer;
std::size_t offset = 0;
public:
explicit Arena(std::size_t size) : buffer(size) {}
void* allocate(std::size_t size, std::size_t alignment = alignof(std::max_align_t)) {
// Align the offset
std::size_t aligned = (offset + alignment - 1) & ~(alignment - 1);
if (aligned + size > buffer.size()) {
throw std::bad_alloc();
}
void* ptr = buffer.data() + aligned;
offset = aligned + size;
return ptr;
}
void reset() { offset = 0; } // "Free" everything at once
std::size_t used() const { return offset; }
std::size_t capacity() const { return buffer.size(); }
};
int main() {
Arena arena(1024); // 1 KB arena
// Allocate from the arena — extremely fast (just a pointer bump)
int* a = static_cast<int*>(arena.allocate(sizeof(int)));
int* b = static_cast<int*>(arena.allocate(sizeof(int)));
double* c = static_cast<double*>(arena.allocate(sizeof(double)));
*a = 10;
*b = 20;
*c = 3.14;
std::cout << *a << ", " << *b << ", " << *c << '\n'; // 10, 20, 3.14
std::cout << "Used: " << arena.used() << " / " << arena.capacity() << " bytes\n";
arena.reset(); // All allocations freed at once — O(1)
std::cout << "After reset: " << arena.used() << " bytes used\n";
return 0;
}
C++17 std::pmr (Polymorphic Memory Resources)
The standard library provides a framework for custom allocators:
#include <array>
#include <cstddef> // std::byte
#include <iostream>
#include <memory_resource>
#include <vector>
int main() {
// Stack-based buffer with monotonic allocator
std::array<std::byte, 4096> buffer;
std::pmr::monotonic_buffer_resource pool(buffer.data(), buffer.size());
// Vector that allocates from the pool instead of the global heap
std::pmr::vector<int> vec(&pool);
for (int i = 0; i < 100; i++) {
vec.push_back(i);
}
std::cout << "Vector size: " << vec.size() << '\n';
std::cout << "Allocated from stack-backed pool — zero heap allocations!\n";
return 0;
}
When to use custom allocators? Only when profiling shows that memory allocation is a bottleneck. For most applications, the default allocator is fine. Custom allocators shine in game engines, embedded systems, and high-frequency trading.
Quick Reference: Stack vs Heap
| Feature | Stack | Heap |
|---|---|---|
| Speed | Very fast (pointer adjustment) | Slower (allocator search + metadata) |
| Size | Limited (1–8 MB typical) | Limited by available RAM |
| Lifetime | Automatic (scope-based) | Manual (you control) |
| Thread safety | Each thread has its own | Shared — needs synchronization |
| Fragmentation | Never | Can fragment over time |
| Cache performance | Excellent (contiguous) | Variable (can be scattered) |
| Allocation | Automatic by compiler | Explicit (new, make_unique, containers) |
| Deallocation | Automatic on scope exit | Explicit (delete) or via RAII |
| Use case | Small, short-lived values | Large, long-lived, or dynamic-sized data |
Conclusion
Memory management in C++ isn't as scary as it sounds. The mental model is simple:
- Stack = fast, automatic, limited. Use it for small, local, fixed-size data.
- Heap = flexible, manual, costly. Use it for large, dynamic, or long-lived data.
- RAII = the bridge. Smart pointers and containers give you heap flexibility with stack safety.
- Modern C++ = you rarely call new/delete. Use std::vector, std::string, std::unique_ptr, and let the standard library handle memory for you.
The golden rule: prefer the stack. Use the heap when you must. Always use RAII to manage heap resources.
Practice Problems
Problem 1: Identify the storage duration
What is the storage duration of each variable?
int a = 10;
void foo() {
int b = 20;
static int c = 30;
int* d = new int(40);
thread_local int e = 50;
}
Answer:
- a — Static (global variable, lives for entire program)
- b — Automatic (local variable, dies when foo() returns)
- c — Static (static local, initialized once, lives for entire program)
- d (the pointer) — Automatic (the pointer itself is on the stack)
- *d (the pointed-to value) — Dynamic (on the heap, lives until delete)
- e — Thread (each thread gets its own copy)
Problem 2: Fix the memory bugs
Find and fix all memory-related bugs:
int* createArray(int size) {
int arr[size]; // Bug 1
for (int i = 0; i < size; i++) {
arr[i] = i * i;
}
return arr; // Bug 2
}
void process() {
int* data = new int[100];
if (data[0] > 10) {
return; // Bug 3
}
delete data; // Bug 4
}
Answer:
// Fixed version
std::vector<int> createArray(int size) {
std::vector<int> arr(size); // Fix 1: Use vector for runtime-sized array
for (int i = 0; i < size; i++) {
arr[i] = i * i;
}
return arr; // Fix 2: Return by value (no dangling pointer)
}
void process() {
auto data = std::make_unique<int[]>(100); // Fix 3 & 4: Smart pointer
if (data[0] > 10) {
return; // Smart pointer cleans up automatically
}
// No manual delete needed
}
Problem 3: Optimize allocation strategy
This code creates 10,000 particles. How would you optimize its memory usage?
struct Particle {
double x, y, z;
double vx, vy, vz;
};
void simulate() {
std::vector<std::unique_ptr<Particle>> particles;
for (int i = 0; i < 10'000; i++) {
particles.push_back(std::make_unique<Particle>());
}
// ... simulation loop ...
}
Answer:
void simulate() {
// Store particles contiguously — one allocation instead of 10,000
std::vector<Particle> particles(10'000);
// ... simulation loop ...
}
Why: vector<unique_ptr<Particle>> makes 10,000 separate heap allocations, scattered across memory. vector<Particle> makes one allocation of contiguous memory, which is dramatically better for cache performance.
Related Posts
Series: Modern C++ Learning Roadmap
Previous: C++ Pointers: The Complete Guide
Related: Phase 1: C++ Fundamentals