DSA: Greedy Algorithms Patterns
Solution to Last Week's Challenge
Last week's problem was "DSA: Trie vs HashMap Tradeoffs."
The optimal solution applies that tradeoff analysis with careful state management.
Complexity Analysis:
- Time Complexity: O(n)
- Space Complexity: O(1)
In the landscape of technical interviews, few patterns are as polarizing as Greedy Algorithms. To a junior developer, "Greedy" sounds like a shortcut or a hack; to a senior engineer, it represents a highly optimized class of algorithms that trade exhaustive searching for immediate, localized decision-making. In real-world systems, greedy approaches power everything from network packet routing to data compression (Huffman Coding) and resource scheduling in cloud environments.
The challenge with Greedy algorithms isn't the implementation—which is often deceptively simple—but the proof of correctness. Unlike Dynamic Programming, where we explore all subproblems to ensure a global optimum, Greedy makes a single "best" choice at each step and never looks back. In an interview setting, your task is not just to write the code, but to demonstrate why making a locally optimal choice will inevitably lead to the globally optimal solution.
Theory and Core Concepts
The Greedy pattern relies on two fundamental properties. First is the Greedy Choice Property: a global optimum can be reached by choosing the local optimum. Second is Optimal Substructure: the optimal solution to the problem contains within it the optimal solutions to its subproblems. If a problem lacks these, you likely need Dynamic Programming.
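To see why these properties matter, here is a minimal sketch of a problem where the Greedy Choice Property fails: coin change with the (illustrative) denominations {1, 3, 4}. Greedy takes the largest coin first and loses to a simple DP.

```python
def greedy_coin_count(coins, amount):
    # Greedy: repeatedly take the largest coin that still fits.
    count = 0
    for coin in sorted(coins, reverse=True):
        count += amount // coin
        amount %= coin
    return count

def dp_coin_count(coins, amount):
    # DP: dp[a] = minimum coins needed to make amount a.
    INF = float("inf")
    dp = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for coin in coins:
            if coin <= a and dp[a - coin] + 1 < dp[a]:
                dp[a] = dp[a - coin] + 1
    return dp[amount]

print(greedy_coin_count([1, 3, 4], 6))  # 3 coins (4 + 1 + 1)
print(dp_coin_count([1, 3, 4], 6))      # 2 coins (3 + 3)
```

For amount 6, greedy grabs the 4 and is forced into 4 + 1 + 1, while the true optimum is 3 + 3. Whenever you cannot argue that the local choice is safe, reach for DP instead.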
Most Greedy problems involve an initial sorting step or the use of a Priority Queue. This is because we need to process elements in a specific order (by deadline, by weight, by value, etc.) to ensure our "immediate" choice is actually the most beneficial.
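When the greedy criterion is "smallest available element," a min-heap avoids re-sorting after every choice. A minimal sketch using the classic rope-connection problem (chosen here purely as an illustration): merging two ropes costs their combined length, and always merging the two shortest minimizes the total cost.

```python
import heapq

def min_cost_to_connect(ropes):
    # Greedy with a min-heap: always merge the two shortest ropes.
    # Short ropes should be merged early, since merged lengths get
    # re-added to later costs. Each pop/push is O(log N).
    heap = list(ropes)
    heapq.heapify(heap)
    total = 0
    while len(heap) > 1:
        a = heapq.heappop(heap)  # shortest
        b = heapq.heappop(heap)  # second shortest
        total += a + b
        heapq.heappush(heap, a + b)
    return total

print(min_cost_to_connect([4, 3, 2, 6]))  # 29
```

Here the heap plays the same role sorting does in interval problems: it guarantees the "immediate" choice really is the best one available at that moment.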
Pattern Visualization
Solving a Greedy problem follows a consistent logical flow. The most critical step is determining the "Greedy Criterion"—the rule by which you will make your choice.
Implementation: Activity Selection / Interval Scheduling
Consider the classic problem: given a set of activities with start and end times, what is the maximum number of activities one person can perform?
Brute Force (Recursive/Backtracking)
A brute force approach would try every possible combination of non-overlapping activities. This results in an exponential time complexity.
```python
# Brute Force: exponential time (roughly O(2^n) branching)
def max_activities_recursive(start_times, end_times, last_end=-1):
    # For each activity that starts at or after the end of the last
    # chosen one, try including it as the next activity, then take
    # the maximum over all such choices.
    best = 0
    for start, end in zip(start_times, end_times):
        if start >= last_end:
            best = max(best, 1 + max_activities_recursive(start_times, end_times, end))
    return best
```

Optimized Greedy Approach
The greedy choice here is to always pick the activity that finishes earliest. This leaves the maximum amount of time for subsequent activities.
```python
def max_activities_greedy(start_times, end_times):
    # Pair and sort by end times: O(N log N)
    activities = sorted(zip(start_times, end_times), key=lambda x: x[1])
    count = 0
    last_end_time = -1
    for start, end in activities:
        # If the start time is >= the end time of the last activity
        if start >= last_end_time:
            count += 1
            last_end_time = end  # Greedy choice: update state
    return count

# Time Complexity: O(N log N) due to sorting
# Space Complexity: O(N) to store the zipped list
```

Complexity Analysis Table
| Approach | Time Complexity | Space Complexity | Best Use Case |
|---|---|---|---|
| Brute Force | $O(2^N)$ | $O(N)$ | Small $N$ (< 20), finding all combinations. |
| Dynamic Programming | $O(N^2)$ | $O(N)$ | When subproblems overlap but greedy choice fails. |
| Greedy (Sorting) | $O(N \log N)$ | $O(N)$ | Interval problems, scheduling, minimize/maximize. |
| Greedy (Heap) | $O(N \log K)$ | $O(K)$ | Streaming data where we only need the top $K$ choices. |
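The heap row above corresponds to keeping only the $K$ best candidates seen so far. A minimal sketch of that pattern, assuming we want the K largest values from a stream:

```python
import heapq

def top_k_from_stream(stream, k):
    # Maintain a size-k min-heap of the k largest values seen so far.
    # The heap root is the weakest of our current top k, so any new
    # value that beats it replaces it. Each operation is O(log K),
    # giving O(N log K) time and O(K) space overall.
    heap = []
    for value in stream:
        if len(heap) < k:
            heapq.heappush(heap, value)
        elif value > heap[0]:
            heapq.heapreplace(heap, value)
    return sorted(heap, reverse=True)

print(top_k_from_stream([5, 1, 9, 3, 7, 6], 3))  # [9, 7, 6]
```

This is the greedy boundary idea in disguise: the heap root is the threshold a new element must beat to matter.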
Common Patterns and Variations
1. The "Jump Game" Pattern
In this variation, you maintain a "farthest reachable" index. You don't need to sort; you just update your greedy boundary as you iterate.
```python
def can_jump(nums):
    max_reach = 0
    for i, jump in enumerate(nums):
        if i > max_reach:
            return False
        max_reach = max(max_reach, i + jump)
    return True
```

2. The "Gas Station" Pattern
This uses a "running balance" to find a starting point. If the total gas is less than total cost, no solution exists. Otherwise, the first point that completes the circuit greedily is the answer.
```python
def can_complete_circuit(gas, cost):
    if sum(gas) < sum(cost):
        return -1
    total, start = 0, 0
    for i in range(len(gas)):
        total += gas[i] - cost[i]
        if total < 0:
            total = 0
            start = i + 1
    return start
```

Practice Problems Strategy
When preparing, weigh each problem's difficulty against how frequently it appears in FAANG interviews, and prioritize accordingly.
This Week's Interview Challenge
Problem: Minimum Number of Arrows to Burst Balloons
There are some spherical balloons taped onto a flat wall that represents the XY-plane. The balloons are represented as a 2D integer array points where points[i] = [x_start, x_end] denotes a balloon whose horizontal diameter stretches between x_start and x_end. You do not know the exact y-coordinates.
Arrows can be shot up vertically (in the positive y-direction) from different points along the x-axis. A balloon with [x_start, x_end] is burst by an arrow shot at x if x_start <= x <= x_end. There is no limit to the number of arrows that can be shot. A single arrow can burst multiple balloons if they overlap.
Given the array points, return the minimum number of arrows that must be shot to burst all balloons.
Example 1:
- Input: points = [[10,16],[2,8],[1,6],[7,12]]
- Output: 2
- Explanation: Shoot an arrow at x = 6 (bursts [2,8] and [1,6]) and another at x = 11 (bursts [10,16] and [7,12]).

Example 2:
- Input: points = [[1,2],[3,4],[5,6],[7,8]]
- Output: 4
Constraints:
- 1 <= points.length <= 10^5
- points[i].length == 2
- -2^31 <= x_start < x_end <= 2^31 - 1
Hints:
- Think about the "Activity Selection" logic. If we sort the balloons, which coordinate (start or end) is more important for making a greedy choice?
- If you shoot an arrow at the end of the first balloon (after sorting), how many subsequent balloons can that same arrow hit?
- Consider the case where one balloon is entirely contained within another.
Template Code:
```python
class Solution:
    def findMinArrowShots(self, points: list[list[int]]) -> int:
        if not points:
            return 0
        # Your greedy logic here
        pass
```

Submit your solution and check back next week for the detailed answer!

Conclusion
Greedy algorithms are a testament to the power of efficiency. While they require a rigorous mental check to ensure that local choices don't lead to a dead end, their performance ($O(N \log N)$ or $O(N)$) makes them indispensable for high-scale systems. In interviews, always start by identifying if the problem has optimal substructure, sort your data, and clearly communicate your "Greedy Criterion" to the interviewer before writing a single line of code.