### (no subject)

Last night: delicious pizza at Iggies.

This morning: listening to Swedes talk about European parliamentary politics.

---

Guy Blelloch talking about parallel algorithms.

It hit me that Guy's list of example algorithms that parallelize well has a huge overlap with the standard stable of algorithms I've been thinking about that have low sensitivity. This isn't a coincidence! Both constraints demand independence of individuals' influence on the final answer, whether because of the cost of communicating that influence (parallelism) or because dependencies would magnify that influence in violation of prohibitions on quantitative influence per se (sensitivity).
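(A toy illustration of the overlap — my example, not Guy's: a clipped sum. Each record contributes independently and boundedly, and that one property is doing double duty: it's what lets the reduction split across workers, and it's what bounds how much any one individual can move the answer.)

```python
from concurrent.futures import ThreadPoolExecutor

def clip(x, bound=1.0):
    return max(-bound, min(bound, x))

def clipped_sum(records, workers=4):
    # Each record contributes independently (after clipping), so:
    #  - parallelism: the reduction is an associative fold over chunks
    #  - sensitivity: changing one record moves the sum by at most 2*bound
    chunks = [records[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as ex:
        partials = ex.map(lambda c: sum(clip(x) for x in c), chunks)
    return sum(partials)
```

Introduce a data dependency between records (say, each record's weight depends on its neighbor) and you lose both properties at once.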

Oh, here comes the parallelism-concurrency distinction. I've certainly heard this before, but I think it's explained really well here.

Hah, quicksort! Source of a recent twitter/#icfp oops, where someone in the audience thought I didn't know that quicksort was quadratic in the worst case, because I rattled it off under the heading of "n ln n algorithms our type system can't judge to be 1-sensitive".

A formal cost model for parallel computations, you say? Maybe we should sic some smart grad student on that...

Real cost of a parallel computation is O(w/p + d log p), for work w, depth d, processors p. The w/p term is the regime where "work dominates" and divides nicely across processors. But if d is big you can't avoid paying for it, and the log p factor is load-balancing cost.

Ooh, the fact that the depth of quicksort is O(n) is ever so slightly subtle. It's only O(lg n) stages, but the ith stage's partition (also the merge) costs O(n/2^i) depth, and the geometric series sums to O(n). Ok, so he repairs it by using trees instead of lists.

I don't quite see how the replacement works, but he went from O(n) depth to O(lg^2 n), which is huge.
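The arithmetic at least checks out. A back-of-envelope sketch (my reconstruction of the standard expected-case argument, assuming balanced splits — not Guy's slides): with lists, stage i's partition runs sequentially over sublists of length ~n/2^i, and the series sums to O(n); with balanced trees, partition at each of the O(lg n) stages takes O(lg n) depth, giving O(lg² n) total.

```python
import math

def list_depth(n):
    # Stage i partitions sublists of length ~n/2^i sequentially;
    # summing the series n + n/2 + n/4 + ... gives O(n).
    total, size = 0, n
    while size > 1:
        total += size
        size //= 2
    return total

def tree_depth(n):
    # With balanced trees, partition/flatten at each of the
    # O(lg n) stages costs O(lg n) depth: O(lg^2 n) overall.
    lg = math.ceil(math.log2(n))
    return lg * lg
```

For n = 2^20 that's roughly two million versus 400, which is the gap the O(n) → O(lg² n) claim describes.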

Oh and now graph connectivity shows up too! Here we get **limited** interaction leading to both parallelizability and low sensitivity.

---

Audience question from SPJ about randomness. Basically: can you tame the randomness in the --- value-deterministic, but performance-nondeterministic --- graph connectivity algorithm? I feel like Guy and SPJ are sort of talking past each other... wait, no, Guy does understand the question correctly and just doesn't know the answer, because it's a hard question. I guess I agree. Still, I want just as much as SPJ to have it answered! For if you go back to the concurrency/parallelism distinction, it's kind of the whole point: a parallel execution of a program technically *does* have interleavings if you look at it at a low enough level, but they all *don't matter* for the value output of the function. Can this be lifted to more algorithmic-smelling ideas?