Here's a neat new result on approximately solving graph laplacians in near-linear time. One way of understanding what it means to "solve a graph laplacian", if you've never heard of that, is that it's like taking a big collection of resistors connected up every-which-way, sticking some current sources into it, and producing an analysis of all the currents and voltages everywhere. Or if you like to look at pretty pictures, it's kind of spiritually related to the problem solved in jim mccann's gradient-paint project, in that you're specifying what happens locally and solving for what happens globally, in such a way that the local-global relationship is some kind of relaxationy energy minimization. In fact many of the best previous approaches to solving graph laplacians apparently tried to do (a graph-ish combinatorial version of) the same kind of multi-scale preconditioning that (I believe) jim is doing on graphics hardware, but the interesting thing about the above-linked paper is that it doesn't --- its approach is to repeatedly randomly sample cycles in a putative current field and fix them up so they actually satisfy Kirchhoff's Voltage Law.
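To make the cycle-fixing idea concrete, here's a toy sketch in python of my own devising (not the paper's actual algorithm or data structures --- all names here are made up). The invariant is: you keep a current assignment that satisfies Kirchhoff's Current Law at every node, pick a cycle, measure the net voltage drop around it (which KVL says should be zero), and cancel the violation by circulating a correction current around the cycle. Circulating current doesn't disturb KCL, so each fixup only moves you closer to a legal electrical flow.

```python
# Toy sketch of KVL cycle-fixing (hypothetical code, not from the paper).
# Edges have a resistance; `flow[e]` is a current on edge e in its chosen
# orientation. A cycle is a list of (edge, sign) pairs, sign = +1/-1
# depending on whether the cycle traverses the edge forwards or backwards.

def kvl_violation(cycle, flow, resistance):
    # Net signed voltage drop r_e * f_e around the oriented cycle.
    return sum(sign * resistance[e] * flow[e] for e, sign in cycle)

def fix_cycle(cycle, flow, resistance):
    # Circulating an extra current `delta` around the cycle changes the
    # drop by delta * (sum of resistances), so pick delta to zero it out.
    total_r = sum(resistance[e] for e, _ in cycle)
    delta = -kvl_violation(cycle, flow, resistance) / total_r
    for e, sign in cycle:
        flow[e] += sign * delta

# Example: a triangle of unit resistors carrying a KCL-respecting flow
# (1 amp a->b->c, nothing on c->a) that violates KVL.
resistance = {"ab": 1.0, "bc": 1.0, "ca": 1.0}
flow = {"ab": 1.0, "bc": 1.0, "ca": 0.0}
cycle = [("ab", +1), ("bc", +1), ("ca", +1)]
fix_cycle(cycle, flow, resistance)
print(kvl_violation(cycle, flow, resistance))  # now zero
```

The real algorithm's cleverness is in *which* cycles it samples and how it does each fixup in sublinear time; this sketch only shows the single-cycle repair step.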
A technical device used by the paper (and many of its predecessors) to find "good" cycles to sample is a new concept to me: that of "low-stretch" spanning trees. From the MIT news article:
This animation shows two different "spanning trees" for a simple graph, a grid like those used in much scientific computing. The speedups promised by a new MIT algorithm require "low-stretch" spanning trees (green), in which the paths between neighboring nodes don't become excessively long (red).
Given any spanning tree, you get a basis of the vector space of linear combinations of oriented edges of the graph (called the "cycle space" of the graph) by considering the edges not in the spanning tree: each such edge (a,b) gives you a cycle in combination with the unique path P from b back to a in the spanning tree. The special thing about a low-stretch spanning tree is that these cycles aren't very big very often. It's not just a basis of the cycle space, it's somehow a nice, small basis.
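This fundamental-cycle construction is easy to demonstrate. Here's a small illustrative sketch (again my own code, not the paper's): for each non-tree edge (a,b), walk the unique tree path from a to b and close it up with the edge itself. The length of that tree path is exactly the "stretch" of the edge, which the picture above is showing in green vs. red.

```python
from collections import defaultdict, deque

def tree_path(tree_adj, a, b):
    # BFS from a to b; since the graph is a tree, the path found is unique.
    parent = {a: None}
    queue = deque([a])
    while queue:
        u = queue.popleft()
        if u == b:
            break
        for v in tree_adj[u]:
            if v not in parent:
                parent[v] = u
                queue.append(v)
    path = [b]
    while parent[path[-1]] is not None:
        path.append(parent[path[-1]])
    return path[::-1]

def fundamental_cycles(edges, tree_edges):
    # One basis cycle per non-tree edge: tree path a..b, closed by (a,b).
    tree_adj = defaultdict(list)
    for a, b in tree_edges:
        tree_adj[a].append(b)
        tree_adj[b].append(a)
    tree_set = {frozenset(e) for e in tree_edges}
    return {(a, b): tree_path(tree_adj, a, b) + [a]
            for a, b in edges if frozenset((a, b)) not in tree_set}

# 4-cycle a-b-c-d-a with the path a-b, b-c, c-d as spanning tree:
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]
tree = [("a", "b"), ("b", "c"), ("c", "d")]
print(fundamental_cycles(edges, tree))
# the one non-tree edge ("d","a") yields the cycle d-a-b-c-d
```

In this tiny example the single fundamental cycle has stretch 3 (the tree path a-b-c-d standing in for the direct edge d-a); a low-stretch tree is one where, over all non-tree edges, such detours stay short on average.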
Anyway, there are lots of other ideas in this paper; I still haven't finished reading it, but it's really neat.