Memory Representation

One of the defining characteristics of an array is how it is stored in computer memory.

Key Concept: Arrays are stored in contiguous memory locations. This means the elements are placed right next to each other in memory, like houses on a street.

Visualizing Memory

Imagine memory as a long strip of slots. If each integer takes up 4 bytes, an array of 5 integers will occupy 20 consecutive bytes.

Each cell below shows the element's address, stored value, and size. Notice that every address is exactly 4 bytes past the previous one:

Memory Layout: Arr[5]

Address   Value   Element   Size
0x3E8     10      Arr[0]    4 bytes
0x3EC     20      Arr[1]    4 bytes
0x3F0     30      Arr[2]    4 bytes
0x3F4     40      Arr[3]    4 bytes
0x3F8     50      Arr[4]    4 bytes

The Address Formula

Because of this contiguous storage, the computer can calculate the address of any element instantly:

Address of Arr[i] = Base Address + (i * Size of Element)

For example, with a base address of 1000 (0x3E8 in the diagram above), the address of Arr[3] is:

1000 + (3 * 4) = 1000 + 12 = 1012

This property allows O(1) random access, meaning accessing any element takes the same amount of time regardless of the array size. It doesn’t matter if the array has 5 elements or 5 million — accessing arr[0] is just as fast as accessing arr[4999999].
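You can see this address arithmetic directly in C++, where arr[i] and *(arr + i) are the very same access. Here is a minimal sketch that prints each element alongside its address, so you can watch the 4-byte stride from the formula appear in real addresses:

#include <cstdio>

int main() {
    int arr[5] = {10, 20, 30, 40, 50};

    // arr[i] compiles down to *(arr + i): base address plus i * sizeof(int).
    for (int i = 0; i < 5; i++) {
        printf("arr[%d] = %d at %p\n", i, *(arr + i), (void*)(arr + i));
    }

    // The gap between consecutive addresses is the element size.
    printf("sizeof(int) = %zu bytes\n", sizeof(int));
    return 0;
}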

Why Does This Matter?

The Speed Advantage

  • O(1) Access: The CPU calculates the address with a single multiplication and addition. No searching required.
  • Cache Friendly: Due to spatial locality, accessing one element often loads neighboring elements into the CPU cache. This means iterating through an array is extremely fast — the next element is likely already in the fast cache.
  • Predictable Performance: No matter which element you access, the time is constant.

The Trade-offs

  • Fixed Size: For static arrays, the size must be known upfront. Resizing is expensive because we must find a new, larger block of contiguous memory and copy every element over.
  • Costly Insertions/Deletions: Inserting in the middle means shifting all subsequent elements to make room, and deleting means shifting them back (see the sketch after this list).
  • Memory Waste: If you allocate an array of size 1000 but only use 10 elements, the remaining 990 slots sit unused.
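To make the insertion cost concrete, here is a small sketch of inserting into the middle of a fixed-capacity buffer. The helper name insertAt is ours for illustration, not a standard function; it assumes the buffer has at least one free slot:

#include <cstdio>

// Hypothetical helper: insert `value` at `index`, shifting the tail right.
// Assumes count < capacity, i.e. there is room for one more element.
void insertAt(int arr[], int& count, int index, int value) {
    for (int i = count; i > index; i--) {
        arr[i] = arr[i - 1];  // shift each later element one slot right: O(n)
    }
    arr[index] = value;
    count++;
}

int main() {
    int arr[6] = {10, 20, 30, 40, 50};  // capacity 6, only 5 slots used
    int count = 5;

    insertAt(arr, count, 2, 25);  // insert 25 before the old arr[2]

    for (int i = 0; i < count; i++) printf("%d ", arr[i]);  // 10 20 25 30 40 50
    printf("\n");
    return 0;
}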
🚫 Think about it: What happens if the memory right after the array's last element (address 1020 onward, in our example) is already occupied by another variable? We can't simply extend the array in place! This is why arrays in languages like C++ and Java have a fixed size.
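Because extending in place isn't possible, "growing" an array really means relocating it. Here is a minimal sketch of the grow-and-copy approach, roughly what std::vector does internally when it runs out of capacity:

#include <cstdio>
#include <cstring>

int main() {
    int* arr = new int[5]{10, 20, 30, 40, 50};

    // Can't extend in place: allocate a larger block somewhere else...
    int* bigger = new int[10];
    std::memcpy(bigger, arr, 5 * sizeof(int));  // ...and copy every element: O(n)

    delete[] arr;   // free the old block
    arr = bigger;   // the array now lives at a new address

    arr[5] = 60;    // finally, room to grow
    printf("arr[5] = %d\n", arr[5]);

    delete[] arr;
    return 0;
}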

Stack vs Heap Allocation

Where your array lives in memory matters:

Property      Stack                              Heap
Declaration   int arr[5];                        int* arr = new int[5];
Speed         Faster (automatic allocation)      Slower (manual/runtime allocation)
Size          Must be known at compile time      Can be determined at runtime
Lifetime      Destroyed when scope ends          Lives until explicitly freed
Risk          Stack overflow for large arrays    Memory leaks if not freed
// Stack allocation -- fast, automatic cleanup
void stackExample() {
    int arr[100];  // Lives on the stack
}  // Automatically freed here
 
// Heap allocation -- flexible, manual cleanup
void heapExample() {
    int* arr = new int[100];  // Lives on the heap
    // ... use arr ...
    delete[] arr;  // Must free manually!
}
⚠️ In Python and JavaScript, you don’t deal with stack vs heap directly — the runtime handles it. But understanding this helps you reason about performance in systems languages like C++ and Rust.

Different Data Types, Different Sizes

The element size in the address formula varies by data type:

Data Type   Size (typical)   Example Array of 5           Total Memory
char        1 byte           {'a','b','c','d','e'}        5 bytes
int         4 bytes          {10, 20, 30, 40, 50}         20 bytes
double      8 bytes          {1.1, 2.2, 3.3, 4.4, 5.5}    40 bytes
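These sizes are typical but not guaranteed by the C++ standard (only char is fixed at exactly 1 byte), so it's worth checking them on your own platform. A quick sketch using sizeof:

#include <cstdio>

int main() {
    int    ints[5]    = {10, 20, 30, 40, 50};
    double doubles[5] = {1.1, 2.2, 3.3, 4.4, 5.5};

    printf("sizeof(char)   = %zu byte\n",  sizeof(char));    // always 1
    printf("sizeof(int)    = %zu bytes\n", sizeof(int));     // typically 4
    printf("sizeof(double) = %zu bytes\n", sizeof(double));  // typically 8

    // sizeof applied to an array gives the total footprint:
    // element count * element size.
    printf("sizeof(ints)    = %zu bytes\n", sizeof(ints));     // typically 20
    printf("sizeof(doubles) = %zu bytes\n", sizeof(doubles));  // typically 40
    return 0;
}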

Here’s how a double array looks in memory (8 bytes per element):

Memory Layout: doubles[5]

Address   Value   Element      Size
0x7D0     1.1     doubles[0]   8 bytes
0x7D8     2.2     doubles[1]   8 bytes
0x7E0     3.3     doubles[2]   8 bytes
0x7E8     4.4     doubles[3]   8 bytes
0x7F0     5.5     doubles[4]   8 bytes
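The same address formula applies, just with an 8-byte stride. For example, doubles[3] lives at 0x7D0 + (3 * 8) = 0x7D0 + 24 = 0x7E8, or in decimal: 2000 + 24 = 2024.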

Quick Check

  • If an integer array starts at address 500 and each int is 4 bytes, what is the address of arr[7]?
  • Why are arrays cache-friendly?