Well, for a bit of history:
You can imagine just considering a uniform random graph on some large number of vertices N and a number of edges δN, for a parameter δ. That is, out of the set of all graphs with N vertices and δN edges, pick one uniformly at random. For this you could ask: what do I need to crank δ up to before a giant component appears? I don't happen to know the answer offhand, but this is the simplest model I could think of, and I think the answer is known.
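For concreteness, here's a minimal Python sketch of that uniform model (the function name and the rejection-style edge sampling are my own choices, not from any source):

```python
import random

def uniform_graph(n, delta, seed=None):
    """A graph on n vertices with floor(delta * n) distinct edges,
    chosen uniformly at random (the classic G(n, m) random graph)."""
    rng = random.Random(seed)
    m = int(delta * n)  # assumes m is well below n choose 2
    edges = set()
    while len(edges) < m:
        u, v = rng.sample(range(n), 2)     # two distinct vertices
        edges.add((min(u, v), max(u, v)))  # canonical order, so duplicates collapse
    return list(edges)
```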
But this paper says: hey, what effect might the graph growing over time have on this threshold? Their model: at every time step, add a vertex, and then with probability δ add an edge between two existing vertices chosen uniformly at random. This gives you the same expected number of edges as before, but it's skewed towards "older" vertices having more connections. What's the threshold here? If δ > 1/8, there is a giant component.
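In code, my reading of that growth process looks like this (a sketch; the name and the detail of sampling two distinct existing vertices are my assumptions about the paper's model):

```python
import random

def grown_graph(n, delta, seed=None):
    """Grow a graph for n steps: each step adds a vertex, then with
    probability delta joins two distinct already-present vertices
    chosen uniformly at random."""
    rng = random.Random(seed)
    edges = []
    for t in range(n):
        # vertex t is born here; vertices 0..t now exist
        if t >= 1 and rng.random() < delta:
            u, v = rng.sample(range(t + 1), 2)
            edges.append((u, v))
    return edges
```

An older vertex is eligible for more of these coin flips, which is exactly the skew towards old vertices mentioned above.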
To make a fair comparison to an "ungrown" graph, they don't use the uniform model above, but one where the degree distribution of the vertices is forced to match the grown graph's. For this model, δ needs to be > 1/4 for a giant component to form.
So "grown" graphs more easily form a giant component.
So we tried to come up with a model that interpolated between these two, to see what happens to that threshold. It was pretty much Dan who figured out how to do this. Another way of phrasing the model is like this:
Generate a "grown" graph as described in the paper cited above. Now take a σ fraction of the edges, chop them in half, and randomly reattach them. If σ=0, obviously we've ended up with the model in the paper. If σ=1 then it turns out we get the "fair ungrown" model. For σ in between, we get something in between.
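Here's how I'd write that chop-and-reattach step, a sketch under the assumption that "reattach" means re-pairing all the chopped half-edges uniformly at random, which leaves every vertex's degree unchanged:

```python
import random

def rewire(edges, sigma, seed=None):
    """Chop a sigma fraction of the edges into half-edges and re-pair
    those half-edges uniformly at random. Degrees are preserved."""
    rng = random.Random(seed)
    edges = list(edges)
    rng.shuffle(edges)                        # chop a *random* sigma fraction
    k = int(sigma * len(edges))
    chopped, kept = edges[:k], edges[k:]
    stubs = [v for e in chopped for v in e]   # two half-edges per chopped edge
    rng.shuffle(stubs)
    repaired = [(stubs[i], stubs[i + 1]) for i in range(0, len(stubs), 2)]
    return kept + repaired
```

At σ = 1 this is just a uniform pairing of all the stubs, which is why it reproduces the degree-matched "fair ungrown" model.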