Static vs Dynamic Arrays
Understanding how arrays are stored in memory and how they resize.
Arrays store data in contiguous memory
An array keeps its elements in one continuous block of memory. If the base address is B and each element uses s bytes, then element i lives at B + i * s. That direct address calculation is why index access is O(1): the runtime jumps straight to the slot instead of walking through earlier elements. Use the demo below to switch between static and dynamic modes, append values, and reset the example back to its starting state.
Each cell is one array slot. The index is shown inside the cell, and the address increases by 4 bytes per slot because this demo uses 4-byte integers.
The formula under the tape updates for the focused cell, showing exactly how the address is computed from the base pointer and index.
Switch to Static to see one fixed block that cannot grow after allocation.
Switch to Dynamic to see spare capacity, then press Append until the array must resize into a larger contiguous block.
Use Reset any time to return the current mode to its starting example.
Switch between static and dynamic arrays to compare a fixed contiguous block with one that keeps spare capacity for future appends.
Static arrays cannot grow after allocation.
Address Formula
Address = BasePointer + (index × element_size)
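The formula can be checked with plain arithmetic. This is a minimal sketch, not a real allocation: the base address and 4-byte element size are assumptions matching the demo above.

```python
# Sketch of the address formula for a 4-byte integer array.
# base_pointer is a hypothetical starting address, chosen for illustration.
ELEMENT_SIZE = 4          # bytes per int, as in the demo
base_pointer = 0x1000     # assumed base address

def slot_address(index: int) -> int:
    """Address = BasePointer + (index * element_size)."""
    return base_pointer + index * ELEMENT_SIZE

print(hex(slot_address(0)))  # 0x1000
print(hex(slot_address(3)))  # 0x100c
```

Because the address comes from one multiplication and one addition, the cost is the same for index 3 as for index 3,000,000, which is exactly why index access is O(1).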
Static arrays have fixed size. Dynamic arrays grow with capacity.
A static array reserves exactly one block and cannot grow automatically. A dynamic array still uses contiguous memory, but it also tracks extra capacity so appends can be cheap until the block fills up. The simulator makes this difference visible immediately when you switch modes.
One allocation, fixed length, and no automatic resizing.
Very predictable memory layout.
Common in low-level systems code and fixed-size buffers.
Tracks both current size and allocated capacity.
Appends are fast until capacity is exhausted.
When full, the array reallocates to a larger block and copies elements.
| Aspect | Static Array | Dynamic Array |
|---|---|---|
| Size | Fixed when the array is created. | Changes as elements are added or removed. |
| Memory allocation | One contiguous block sized exactly for the array. | One contiguous block plus extra capacity for future growth. |
| Growth | Cannot grow automatically. | Allocates a larger block when capacity is full. |
| Index access | O(1) direct address calculation. | O(1) direct address calculation. |
| Append at end | O(1) only if spare space was manually reserved. | Amortized O(1); occasional O(n) resize. |
| Typical examples | C arrays, fixed-size buffers. | Python list, Java ArrayList, C++ vector. |
What happens when a dynamic array runs out of capacity?
In dynamic mode, the array stays contiguous, so it cannot simply spill into random free cells. When it fills up, it moves to a new, larger block.
1. The dynamic array checks whether size === capacity before writing the new value.
2. If the array is full, it requests a larger contiguous block of memory, often about 2x the old capacity.
3. The runtime copies each existing element from the old block into the new block.
4. The new value is written into the next open slot in the larger block.
5. The old memory block can now be released, and the array points to the new base address.
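The steps above can be sketched in a few lines. This is an illustrative model, not any specific runtime's implementation; the class name, starting capacity, and doubling factor are all assumptions.

```python
# Minimal sketch of a dynamic array with size/capacity tracking.
class DynamicArray:
    def __init__(self, capacity: int = 4):
        self.size = 0
        self.capacity = capacity
        self.block = [None] * capacity        # stands in for a contiguous block

    def append(self, value):
        if self.size == self.capacity:        # step 1: is the block full?
            self._grow()                      # steps 2-3: allocate and copy
        self.block[self.size] = value         # step 4: write into next open slot
        self.size += 1

    def _grow(self):
        self.capacity *= 2                    # step 2: ~2x larger block
        new_block = [None] * self.capacity
        for i in range(self.size):            # step 3: copy each element over
            new_block[i] = self.block[i]
        self.block = new_block                # step 5: point at the new block

arr = DynamicArray(capacity=2)
for v in [10, 20, 30]:
    arr.append(v)         # the third append triggers a resize
print(arr.size, arr.capacity)   # 3 4
```

Note that only the append that triggers `_grow` pays the O(n) copy; the appends before and after it each touch a single slot, which is where the amortized O(1) figure comes from.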
End operations are cheap. Front operations are not.
Use the interactive controls below to compare operations. Appending at the end usually touches one slot. Inserting or removing at the front forces many values to shift.
The row starts with sample values, and each cell represents one array slot in order.
If only the last cell changes, the operation stays local. If the whole row shifts left or right, many elements had to move.
Press Push and Pop to see changes happen at the end of the row.
Press Shift and Unshift to see every remaining value move, which is why these front operations cost O(n).
Push and pop touch the end of the array. Shift and unshift touch the front, so every existing value has to move.
Update Array
Enter a value, then compare end operations against front operations.
Search
Linear search checks one array element at a time from left to right.
Push and pop only touch the last filled slot. Shift and unshift move every remaining value, so they cost O(n).
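The asymmetry can be made concrete by counting how many elements move. The helper names below are hypothetical; the point is only that removing from the end moves nothing, while removing from the front moves everything that remains.

```python
# Sketch: count element moves for an end operation vs a front operation.
def pop_moves(arr: list) -> int:
    arr.pop()            # remove the last element: no other slot changes
    return 0

def shift_moves(arr: list) -> int:
    moves = len(arr) - 1  # every element after index 0 slides left one slot
    arr.pop(0)
    return moves

end_demo = [1, 2, 3, 4, 5]
front_demo = [1, 2, 3, 4, 5]
print(pop_moves(end_demo))     # 0
print(shift_moves(front_demo)) # 4
```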
| Operation | Complexity | Why |
|---|---|---|
| Push / Append | Amortized O(1) | Usually writes into the next free slot. A resize makes one append cost O(n). |
| Pop | O(1) | Removes the last element without shifting the rest of the array. |
| Shift / Unshift | O(n) | Shift moves every element after index 0 one position; unshift moves every existing element. |
| Index access | O(1) | The address is computed directly with base + index * element size. |
| Search | O(n) | An unsorted array may require checking each element until a match is found. |
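The O(n) search row corresponds to a plain linear scan. A minimal sketch:

```python
def linear_search(values: list, target) -> int:
    """Check one element at a time from left to right; O(n) worst case."""
    for i, v in enumerate(values):
        if v == target:
            return i      # index of the first match
    return -1             # not found after checking every element

data = [7, 3, 9, 3]
print(linear_search(data, 9))   # 2
print(linear_search(data, 5))   # -1
```

On an unsorted array this is the best an array can do, since nothing about element order narrows the search.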
The same ideas appear in real programming languages
The syntax changes, but the memory model stays recognizable.
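In CPython, for example, the over-allocation is directly observable: the byte size reported by `sys.getsizeof` jumps in steps rather than on every append, because the list reserves spare capacity ahead of time. The exact byte counts vary by Python version and platform, so the sketch below prints the jumps instead of asserting specific numbers.

```python
import sys

# Watch a CPython list over-allocate: its byte size grows in steps,
# not on every append, because spare capacity is reserved in advance.
lst = []
previous = sys.getsizeof(lst)
for i in range(17):
    lst.append(i)
    current = sys.getsizeof(lst)
    if current != previous:
        print(f"append #{i + 1}: {previous} -> {current} bytes")
        previous = current
```

Java's ArrayList and C++'s vector expose the same idea more explicitly, with methods like `ensureCapacity` and `reserve` that let you pre-size the block and skip intermediate resizes.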
The short version
- Arrays are fast at indexing because elements sit in contiguous memory.
- Static arrays keep a fixed size; dynamic arrays track both size and capacity.
- A single resize is expensive because it copies every element, but appends stay amortized O(1).
- Operations at the front of an array are costly because many elements must shift.
- Real language collections such as Python lists, Java ArrayList, and C++ vector all rely on the same resizing idea.