## Tuesday, April 6, 2010

### Why Floyd-Warshall is one of my favorite algorithms

Among classes of algorithms, dynamic programming is my favorite. The way it happened was almost a bait-and-switch: I was quickly drawn to it by its name, followed by a brief period of rejection ("Such a cool name and this is it?") and misunderstanding ("It's just caching!"), before appreciation and, finally, admiration set in.

If I had to pick a favorite among the many dynamic programming algorithms, it would be Floyd-Warshall's all-pairs shortest path algorithm.
```
procedure FloydWarshall()
    for k := 1 to n
        for i := 1 to n
            for j := 1 to n
                path[i][j] = min(path[i][j], path[i][k] + path[k][j]);
```
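Translated into runnable Python, the pseudocode above might look like this. It's a minimal sketch; the function name, the edge-list input format, and the use of `float("inf")` for "no edge" are my own choices, not part of the original.

```python
INF = float("inf")

def floyd_warshall(n, edges):
    """All-pairs shortest paths on vertices 0..n-1.

    edges: iterable of directed (u, v, weight) tuples.
    Returns an n x n distance matrix.
    """
    # Start with 0 on the diagonal, direct edge weights elsewhere,
    # and INF where no edge exists.
    dist = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v, w in edges:
        dist[u][v] = min(dist[u][v], w)

    # The triply-nested loop: after iteration k, dist[i][j] holds the
    # shortest path from i to j using only vertices 0..k as intermediates.
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist
```

For example, on a 4-vertex graph where the direct edge 0→3 costs 10 but the detour 0→1→2→3 costs 5+3+1 = 9, the algorithm finds the cheaper detour.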
Here's why it's cool: when you first learn about the shortest-path problem in your graph theory course, you probably start with Dijkstra's algorithm, which solves single-source shortest path. It's quite complicated at first, but then you get over it, and you fully understand it.

Then the teacher says "Now we want to solve the same problem but for ALL sources". You think to yourself, "Oh god, this is going to be a much harder problem! It's going to be at least N times more complicated than Dijkstra's algorithm!!!".

Then the teacher gives you Floyd-Warshall. And your mind explodes. Then you start to tear up at how beautifully simple the algorithm is. It's just a triply-nested loop. Its only data structure is a simple two-dimensional array.

The most eye-opening part for me is the following realization: say you have a solution for problem A. Then you have a bigger "superproblem" B which contains problem A. The solution to problem B may in fact be simpler than the solution to problem A.