LRU Cache
Difficulty: Medium Source: NeetCode
Problem
Design a data structure that follows the constraints of a Least Recently Used (LRU) cache. Implement the `LRUCache` class:

- `LRUCache(capacity)` — Initialize the LRU cache with positive size `capacity`.
- `get(key)` — Return the value of the `key` if the key exists, otherwise return `-1`.
- `put(key, value)` — Update the value of the `key` if the key exists. Otherwise, add the key-value pair to the cache. If the number of keys exceeds the `capacity` from this operation, evict the least recently used key.

Both `get` and `put` must each run in O(1) average time complexity.

Example 1:
Input:
["LRUCache","put","put","get","put","get","put","get","get","get"]
[[2],[1,1],[2,2],[1],[3,3],[2],[4,4],[1],[3],[4]]
Output:
[null,null,null,1,null,-1,null,-1,3,4]
Constraints:
- 1 <= capacity <= 3000
- 0 <= key <= 10^4
- 0 <= value <= 10^5
- At most 2 * 10^5 calls will be made to get and put
Prerequisites
Before attempting this problem, you should be comfortable with:
- Doubly Linked List — insertion and deletion at O(1) given a node reference
- Hash Map — O(1) key lookup that returns a direct node reference
- Dummy/Sentinel Nodes — using head and tail sentinels to avoid edge cases in list manipulation
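As a quick refresher on the first and third prerequisites, the sketch below (illustrative only, not part of the solution) shows why a doubly linked node can be unlinked in O(1) when sentinels guarantee every real node has both neighbors:

```python
class Node:
    def __init__(self, val=0):
        self.val = val
        self.prev = None
        self.next = None


def unlink(node):
    # O(1): rewire the two neighbors; no traversal from the head needed
    node.prev.next = node.next
    node.next.prev = node.prev


# Sentinel head/tail mean every real node always has a prev and a next,
# so unlink() never has to null-check.
head, tail = Node(), Node()
head.next, tail.prev = tail, head

n = Node(42)
n.prev, n.next = head, tail
head.next = n
tail.prev = n

unlink(n)
print(head.next is tail)  # True
```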
1. Brute Force — OrderedDict (Python Cheat)
Intuition
Python’s collections.OrderedDict maintains insertion order and supports move_to_end(). We can use it directly: on each get or put, move the accessed key to the end (most recently used). The front of the dict is always the LRU. On capacity overflow, pop the first item.
This is great for interviews if the interviewer allows it, but you should also know the manual implementation below.
Solution
```python
from collections import OrderedDict


class LRUCache:
    def __init__(self, capacity: int):
        self.cap = capacity
        self.cache = OrderedDict()

    def get(self, key: int) -> int:
        if key not in self.cache:
            return -1
        self.cache.move_to_end(key)  # mark as most recently used
        return self.cache[key]

    def put(self, key: int, value: int) -> None:
        if key in self.cache:
            self.cache.move_to_end(key)
        self.cache[key] = value
        if len(self.cache) > self.cap:
            self.cache.popitem(last=False)  # remove LRU (first item)


# Test cases
cache = LRUCache(2)
cache.put(1, 1)
cache.put(2, 2)
print(cache.get(1))  # 1
cache.put(3, 3)      # evicts key 2 (LRU)
print(cache.get(2))  # -1 (evicted)
cache.put(4, 4)      # evicts key 1 (LRU)
print(cache.get(1))  # -1 (evicted)
print(cache.get(3))  # 3
print(cache.get(4))  # 4
```
Complexity
- Time: O(1) — OrderedDict operations (`move_to_end`, `popitem`) are O(1)
- Space: O(capacity)
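Since Python 3.7, plain `dict` also preserves insertion order, so the same trick works without the import: popping a key and reinserting it moves it to the end, and `next(iter(...))` gives the oldest key. A minimal sketch (the class name `LRUCacheDict` is mine, for illustration):

```python
class LRUCacheDict:
    """Same OrderedDict idea using a plain dict (insertion-ordered since 3.7)."""

    def __init__(self, capacity: int):
        self.cap = capacity
        self.cache = {}

    def get(self, key: int) -> int:
        if key not in self.cache:
            return -1
        # pop + reinsert moves the key to the end (most recently used)
        self.cache[key] = self.cache.pop(key)
        return self.cache[key]

    def put(self, key: int, value: int) -> None:
        self.cache.pop(key, None)   # drop old position if present
        self.cache[key] = value     # reinsert at the end (MRU)
        if len(self.cache) > self.cap:
            # the first key in iteration order is the oldest (LRU)
            self.cache.pop(next(iter(self.cache)))
```

`move_to_end` is arguably clearer in an interview, but this version shows the mechanism has nothing OrderedDict-specific.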
2. Doubly Linked List + Hash Map (Manual, Optimal)
Intuition
This is the textbook solution. We need two things:
- O(1) access by key → hash map: `key → node`
- O(1) LRU eviction + recency update → doubly linked list in access order
The doubly linked list maintains nodes in order from least recently used (near head sentinel) to most recently used (near tail sentinel). Using two sentinel nodes (dummy_head and dummy_tail) means we never have to check for null neighbors during insertion/deletion.
dummy_head <-> [LRU node] <-> ... <-> [MRU node] <-> dummy_tail
Every time we get or put a key:
- Remove the node from its current position in the list.
- Insert it right before `dummy_tail` (marking it as MRU).
When capacity is exceeded:
- The node right after `dummy_head` is the LRU — remove it from the list and delete it from the map.
Algorithm
- `get(key)`: If key not in map, return -1. Otherwise, remove the node from the list, insert it before the tail (MRU), and return its value.
- `put(key, value)`: If the key exists, remove its node. Create a new node, insert it before the tail, and add it to the map. If over capacity, remove the node after the head (LRU) from both the list and the map.
Solution
```python
class Node:
    """Doubly linked list node."""

    def __init__(self, key=0, val=0):
        self.key = key
        self.val = val
        self.prev = None
        self.next = None


class LRUCache:
    def __init__(self, capacity: int):
        self.cap = capacity
        self.cache = {}  # key -> Node
        # Sentinel nodes: left = LRU end, right = MRU end
        self.left = Node()   # dummy head
        self.right = Node()  # dummy tail
        self.left.next = self.right
        self.right.prev = self.left

    def _remove(self, node: Node):
        """Remove a node from the doubly linked list."""
        prev, nxt = node.prev, node.next
        prev.next = nxt
        nxt.prev = prev

    def _insert_mru(self, node: Node):
        """Insert a node right before the dummy tail (mark as MRU)."""
        prev = self.right.prev
        prev.next = node
        node.prev = prev
        node.next = self.right
        self.right.prev = node

    def get(self, key: int) -> int:
        if key not in self.cache:
            return -1
        node = self.cache[key]
        self._remove(node)
        self._insert_mru(node)
        return node.val

    def put(self, key: int, value: int) -> None:
        if key in self.cache:
            self._remove(self.cache[key])
        node = Node(key, value)
        self.cache[key] = node
        self._insert_mru(node)
        if len(self.cache) > self.cap:
            # Evict LRU: the node right after dummy head
            lru = self.left.next
            self._remove(lru)
            del self.cache[lru.key]


# Test cases
cache = LRUCache(2)
cache.put(1, 1)
cache.put(2, 2)
print(cache.get(1))  # 1 (now 1 is MRU, 2 is LRU)
cache.put(3, 3)      # evicts key 2
print(cache.get(2))  # -1 (evicted)
cache.put(4, 4)      # evicts key 1
print(cache.get(1))  # -1 (evicted)
print(cache.get(3))  # 3
print(cache.get(4))  # 4
print("---")

# Updating existing key
cache2 = LRUCache(2)
cache2.put(1, 1)
cache2.put(2, 2)
cache2.put(1, 10)  # update key 1, makes it MRU
cache2.put(3, 3)   # evicts key 2 (LRU), not key 1
print(cache2.get(1))  # 10
print(cache2.get(2))  # -1 (evicted)
print(cache2.get(3))  # 3
print("---")

# Capacity 1
cache3 = LRUCache(1)
cache3.put(1, 1)
cache3.put(2, 2)  # evicts 1
print(cache3.get(1))  # -1
print(cache3.get(2))  # 2
```
Complexity
- Time: O(1) — hash map lookup + linked list insert/remove are all O(1)
- Space: O(capacity) — at most `capacity` nodes in the list and map
Common Pitfalls
Using a singly linked list. To remove a node in O(1), you need access to its predecessor. A doubly linked list gives you node.prev directly. With a singly linked list, you’d need to traverse from the head to find the predecessor — that’s O(n).
Not storing the key in the node. When evicting the LRU node from the list, you need to remove it from the hash map too. Since you only have the node at that point (not the key), the node must store its own key.
Handling the put update case. When put is called with an existing key, you must first remove the old node from the list before inserting the updated one. Otherwise you’ll have a stale node floating in the list and the cache size will grow incorrectly.
Forgetting del self.cache[lru.key] after removing the LRU from the linked list. The map still holds a reference to the evicted node — without deletion, your cache will grow past capacity.