That is, if I have, say, Turing machines and I have ML programs, then that gives me two ways to measure how "intrinsically complicated" a bitstring is: one, the size of the smallest Turing machine that outputs it, and two, the size of the smallest ML program that outputs it. Perhaps there are idiosyncrasies of Turing machines and ML programs that make some bitstrings easier to describe in one or the other, but as a last resort I can always simulate one with the other: so if some particular family of bitstrings has short descriptions as Turing machines, then it also has short descriptions in ML (up to the constant overhead of writing a Turing machine simulator in ML), and vice-versa.
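This is just the invariance theorem of Kolmogorov complexity. Writing $K_M(x)$ for the size of the smallest $M$-program that outputs $x$ (notation mine), the simulator argument gives

$$K_{\mathrm{ML}}(x) \;\le\; K_{\mathrm{TM}}(x) + c_{\mathrm{TM}\to\mathrm{ML}},$$

where $c_{\mathrm{TM}\to\mathrm{ML}}$ is the size of a Turing machine simulator written in ML, independent of $x$; and symmetrically with the roles of the two models swapped.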
Clearly, just as we can measure the complicatedness (or randomness) of a bitstring relative to a computation model, we can also measure the complicatedness of one computation model relative to another, by asking for the size of the smallest program in the one model that simulates the other.
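To pin down a notation for this (symbols mine): for models $A$ and $B$, put

$$d(A \to B) \;=\; \min\{\, |p| : p \text{ is an } A\text{-program that simulates } B \,\},$$

noting that $d$ is asymmetric in general: $d(A \to B)$ and $d(B \to A)$ need not be close.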
This makes me imagine the whole graph of all Turing-complete computation models (for definiteness, I guess, take them as partial functions N → N) with directed edges between each pair, each edge labelled by this measure.
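As a toy sketch of that graph (all model names and edge weights below are invented for illustration), chaining simulators shows the edge labels obey a directed triangle inequality up to composition overhead, so shortest paths give upper bounds on direct simulation cost:

```python
# Hypothetical graph of computation models, with invented edge labels:
# cost[a][b] = size (in bytes, say) of the smallest a-program that
# simulates model b.  Only the structural point matters: composing a
# simulator for B (written in A) with a simulator for C (written in B)
# yields an A-program simulating C, so chained routes bound direct ones.

MODELS = ["TM", "ML", "lambda"]

# Invented direct-simulation sizes for illustration.
cost = {
    "TM":     {"TM": 0,   "ML": 2000, "lambda": 400},
    "ML":     {"TM": 300, "ML": 0,    "lambda": 150},
    "lambda": {"TM": 500, "ML": 700,  "lambda": 0},
}

def closure(cost):
    """Floyd-Warshall: tightest upper bound on simulation cost
    obtainable by chaining simulators through intermediate models."""
    bound = {a: dict(cost[a]) for a in cost}
    for k in MODELS:
        for a in MODELS:
            for b in MODELS:
                via_k = bound[a][k] + bound[k][b]
                if via_k < bound[a][b]:
                    bound[a][b] = via_k
    return bound

bound = closure(cost)
# With these invented numbers, simulating ML inside a TM directly
# costs 2000, but going TM -> lambda -> ML costs 400 + 700 = 1100,
# so the chained route gives the better upper bound.
```

The quasi-metric structure (asymmetric, triangle inequality up to a constant for the cost of composing simulators) is about all that comes for free; whether the graph has genuine landmarks is exactly the question.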
Question: are there any real "landmarks" in this graph that we can navigate by? Are there any globally "simple" or otherwise canonical models of computation? Or is there some sort of Friedberg-Muchnik-esque theorem here that says that anything that can happen, will?
One can probably say, for example, that if a computation model M computes bitstrings that are random from the point of view of a model N, then M itself is probably necessarily random from the point of view of N, but I'm curious what can be said without such relativization.