Notes from a Medium-Sized Island


(no subject) [Sep. 28th, 2016|09:35 pm]

There's still a good deal of Stardew Valley going on in the apartment; 1.1 update beta came out, so K is on it.

It makes me think really hard about what 'realism' means in simulationy sorts of games, and I'd be interested if anyone wants to chime in on how any of what I'm going to say connects with what the game-crit literature has already said. I assume it's said lots of things on the topic --- I'm largely naive of it and just talkin' out of my butt here.

The main comparisons I tend to keep thinking about are to minecraft, which I've played plenty, and dwarf fortress, of which I know a little second-hand. One thing about minecraft that is charmingly wonky is that if you want villagers to breed, obviously what you need to do is build doors with dirt on top of them --- because the game engine tries to achieve homeostasis between the population of a village and the size of the village in terms of number of buildings. And clearly a 'building' is a thing with a door and a ceiling, so the primal urges of villagers get riled up when the doors-with-minimum-viable-ceiling-lumps-to-villager ratio gets high enough.
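For concreteness, a toy version of that rule as I understand it (the function name and constant are mine, not decompiled game code; I believe the actual cap was somewhere around 0.35 villagers per valid door, but treat that number as illustrative):

```python
# Toy sketch of the villager-breeding homeostasis rule (illustrative
# constant, not the game's actual code): villagers keep breeding while
# the population is below some multiplier of the valid-door count.
def can_breed(population, valid_doors, villagers_per_door=0.35):
    """True while population is under the housing-derived cap."""
    return population < villagers_per_door * valid_doors

assert can_breed(3, 10)        # 10 dirt-topped doors support up to 3.5 villagers
assert not can_breed(4, 10)    # cap reached: no more breeding
```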

This is not... "realistic". This is not how actual humans who live in villages --- which at some level of abstraction appears to be how we're meant to read these ambling rectangular cuboids --- behave. But it's a mechanism which interacts with a bunch of other game mechanisms that add up to a "realistically complex" or at least sufficiently complex, fun game.

Let's consider dwarf fortress then; the famous story I know about it is that the density of saguaro wood, unlike other woods, couldn't be easily found online by the creators, so one of the fans of the game actually went and got some and measured it, and bam, ~430 kg/m^3.

So this is... "realistic" in a way that stardew valley is not, generally.

And yet stardew valley farming, and fishing, and shopping, and most of the mechanisms that don't traffic in explicitly fantasy elements do feel realistic in as much as I don't feel like I have to laugh at their absurdity when trying to explain them to someone, as I do pretty often with minecraft.

And somehow this basic level of plausibility smooths over even the existence of things that have no real-world counterpart at all: ancient seeds grow sensibly into ancient fruit, and void eggs turn, obviously, into void mayonnaise when you put them into the mayonnaise machine. As of the 1.1 update, at least; thank god they fixed that obvious oversight.

(no subject) [Sep. 27th, 2016|05:11 pm]

One of K's very old friends (from, like, middle school, I think?) is leaving new york for the midwest, so we went to a little good-bye dinner for her out in Flushing at a chinese place, "Joe's Shanghai". Great pork dumplings, didn't care for the sesame chicken.

(no subject) [Sep. 25th, 2016|07:50 pm]

K got back tonight, pretty late in subjective time (3-4am or so, coming from europe) but all in one piece. None of her houseplants, which were entrusted to my care in the meantime, seemed to have entirely died, but the Dieffenbachia has seen better days.

(no subject) [Sep. 24th, 2016|10:41 am]

Consider the chord motion in Lights's "Cactus In The Valley" that happens around 49s in:
v link goes here                    
| F  G C C | F  G C C | F  G Am D7  | F  G Csus C
| IV V I I | IV V I I | IV V vi II7 | IV V Isus I
  wipe...    show...    if my...      tell...

There's something about that sudden dropping by a fifth after a three-chord walk-up that I really like. It feels like... reconsidering, or like someone turning around to face the camera somehow on a cinematic beat.

Reminds me of a similar thing (except with the IV-V-vi walk-up replaced by a vi-V-I) that happens in Beethoven's Piano Concerto 5 in Eb op 73 where it does
v link goes here
| G#m F# B E  | B/F# F# G#m G#m | G#m F# B E  | B/F# F# B Em ...
| vi  V  I IV | I/V  V  vi  vi  | vi  V  I IV | I/V  V  I iv ...

and also in a slightly more rhythmically obscured form in Homestuck's "Showtime", and a second time if you count the relative minor as the same:
v link goes here
| Cm Bb Eb Eb | Ab Eb Bb G   | Cm Bb Eb Eb | Fm G   Cm Cm ...
| vi V  I  I  | IV I  V  III | vi V  I  I  | ii III vi vi ...


And while I'm in the chord-analyzey mood, I never before really sat down and thought about what's the mechanism going on in, like, uplifting trance tunes, e.g. Ultimate's "If We Were", that makes them sound like they're going somewhere but not ever quite getting there? The pattern seems to be
v link goes here
| B  D#m C# C# | ...
| IV vi  V  V  | ...

So no wonder! It spends fully half the song on the dominant, and you NEVER HEAR THE TONIC EVER. That's so weird! But somehow makes complete sense. Because how else could you analyze this? Consider it in B and have I-iii-II-II? Consider it in C# mixolydian and get bVII-ii-I-I? D#m and VI-i-VII-VII??? No way. The Occam's razor solution is IV-vi-V-V and it sounds right besides, that C# pulling dominant-seventhily towards something... but you always start over on the IV.
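The degree-counting here is mechanical enough to sketch in code --- semitone distance from a candidate tonic, mapped to a quality-blind (all-uppercase) roman numeral. Names are throwaway, not any music library's API:

```python
# Map chord roots to scale degrees relative to a candidate tonic.
PC = {'C': 0, 'C#': 1, 'D': 2, 'D#': 3, 'E': 4, 'F': 5,
      'F#': 6, 'G': 7, 'G#': 8, 'A': 9, 'A#': 10, 'B': 11}
DEGREE = {0: 'I', 2: 'II', 4: 'III', 5: 'IV', 7: 'V', 9: 'VI', 11: 'VII'}

def analyze(roots, tonic):
    """Quality-blind roman numerals; '?' for non-diatonic roots."""
    return [DEGREE.get((PC[r] - PC[tonic]) % 12, '?') for r in roots]

# Hearing the loop in F# puts the roots on IV, vi, V with no tonic in sight:
assert analyze(['B', 'D#', 'C#'], 'F#') == ['IV', 'VI', 'V']
# whereas reading it in B parks the loop right on the tonic:
assert analyze(['B', 'D#', 'C#'], 'B') == ['I', 'III', 'II']
```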

(no subject) [Sep. 23rd, 2016|07:55 pm]

Sometimes the way physicists/engineers do math really drives me crazy.

If you try to search, as I did, thinking I saw a proof once, for why the continuous fourier transform and its inverse

F[g](\phi) = {1\over\sqrt{2\pi}} \int_{-\infty}^{\infty} g(t) e^{-i t \phi} dt
F^{-1}[G](t) = {1\over\sqrt{2\pi}} \int_{-\infty}^{\infty} G(\phi) e^{i t \phi} d\phi

need those normalization factors 1\over\sqrt{2\pi}, you get a wide range of non-answers that are mostly about the choice of shunting that 2pi around asymmetrically to one side or the other, or the fact that you can make it vanish by stuffing it into the e^{-2 \pi i t \phi}... without ever explaining why it's there in the first place, or showing why changing the wave you're integrating against affects it the way it does.

So I had to sit down and actually crunch out some good ol' leibniz sums starting from the discrete fourier transform

DF[f] = \lambda s. {1\over\sqrt{2M}} \sum_{n = -M}^{M-1} f(n)e^{-\pi ins/M}
DF^{-1}[F] = \lambda n . {1\over\sqrt{2M}}\sum_{s = -M}^{M-1} F(s)e^{\pi ins /M}

for which you absolutely can check that the normalization constants are correct, since you get 2M copies of 1 as the frequencies exactly cancel when you do the computation of DF o DF^{-1} or DF^{-1} o DF.
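(A quick numerical version of that check, with throwaway numpy code --- `DF` and `DFinv` just transcribe the sums above for M = 8, and composing them in either order gives back the input, which pins down the 1/sqrt(2M):)

```python
# Transcribe DF and DF^{-1} and check they invert each other,
# confirming the 1/sqrt(2M) normalization.
import numpy as np

M = 8
rng = np.random.default_rng(1)
f = rng.normal(size=2 * M)        # samples f(n) for n = -M .. M-1
ns = np.arange(-M, M)

def DF(f):
    return np.array([(f * np.exp(-np.pi * 1j * ns * s / M)).sum()
                     for s in ns]) / np.sqrt(2 * M)

def DFinv(F):
    return np.array([(F * np.exp(np.pi * 1j * ns * n / M)).sum()
                     for n in ns]) / np.sqrt(2 * M)

assert np.allclose(DFinv(DF(f)), f)
assert np.allclose(DF(DFinv(f)), f)
```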

Following that, cleverly choosing A = \sqrt{M / 2k} and d = 1 / \sqrt{2kM} means that you can take an integral like

\int_{-A}^{A} g(t) e^{-2 \pi k i t \phi} dt

and Leibniz it up into

\sum_{n = -M}^{M-1} d g(dn) e^{-2\pi k i d n \phi}

and with some algebra you can see that

F[g](\phi) = \sqrt{k} \int_{-A}^{A} g(t) e^{-2 \pi k i t \phi} dt
F^{-1}[G](t) = \sqrt{k} \int_{-A}^{A} G(\phi) e^{2 \pi k i t \phi} d\phi

are approximated by

\lambda \phi. DF[\lambda n . g(dn)](\phi/d)
\lambda t . DF^{-1}[\lambda s . G(ds)](t/d)

and these are easily seen to be inverses of each other.

This isn't a proof of course, I just wave my hands and say "pick k to match whatever convention you like, let M be really big, then d is small and A is big, and therefore it's an integral from -infinity to infinity" but at least it's an argument, not just "well you need a 2pi there because the book says so".
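Still, the hand-wave is numerically convincing. With k = 1/(2pi), the sqrt(k)-scaled integral is the unitary convention from the top of the post, under which the Gaussian e^{-t^2/2} is its own transform; sampling on the d-spaced grid from the Leibniz-sum step reproduces that (throwaway check, my variable names):

```python
# Spot check of the limiting argument: Riemann-sum the k = 1/(2*pi)
# transform of a Gaussian on the d-spaced grid spanning [-A, A) and
# compare against its known transform, which is the same Gaussian.
import numpy as np

M = 2000
k = 1 / (2 * np.pi)
d = 1 / np.sqrt(2 * k * M)            # grid spacing from the construction
t = d * np.arange(-M, M)              # runs over [-A, A), A = sqrt(M/(2k))
g = np.exp(-t ** 2 / 2)

def F_approx(phi):
    return np.sqrt(k) * d * (g * np.exp(-2 * np.pi * k * 1j * t * phi)).sum()

for phi in (0.0, 0.5, 1.3):
    assert abs(F_approx(phi) - np.exp(-phi ** 2 / 2)) < 1e-6
```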

(no subject) [Sep. 22nd, 2016|08:07 pm]

Got aggressively traintastrophe'd today trying to get home from work. NQR were fucked up according to announcement, F didn't seem to be coming, so took a D up to 42nd and transferred to the 7. Getting off the 7 at Queensboro Plaza was the first time I wasn't sure if I'd be able to even make it off the train with the intense onrushing crowds of people trying to get in. Ended up walking about 30m home from there.

(no subject) [Sep. 21st, 2016|08:04 pm]

Went to a nice little evening gathering at brooklyn bridge park organized by akiva. Met some of his coworkers and friends, present and past. Nice folks, ate some cookies, watched a sunset, talked about things.

(no subject) [Sep. 19th, 2016|06:37 pm]

Actually figured out the proof now, see https://twitter.com/jcreed/status/778001357160808448/photo/1 for impenetrable details.

The bit I like most about it is how it actually uses the fact that the group of permutations acting on itself always yields an isomorphism. This may sound like a stupidly elementary fact about group theory, but it's just not a thing that I've had to use very often in manipulating expressions with pis and sigmas.

It comes up because I'm summing twice over all permutations of an m-element set, and then considering a product over all elements in that set, so like

sum_{rho in S_m} sum_{pi in S_m} prod_{j in 1..m} f(j, pi j, rho j)

There's one kind of reindexing that I am somewhat used to, which is replacing j with an expression like rho^-1 k to get

sum_{rho in S_m} sum_{pi in S_m} prod_{rho^-1 k in 1..m} f(rho^-1 k, pi rho^-1 k, k)

and using the fact that rho is an isomorphism acting on the set 1..m to drop the rho^-1 from the product:

sum_{rho in S_m} sum_{pi in S_m} prod_{k in 1..m} f(rho^-1 k, pi rho^-1 k, k)

The extra neat thing is exactly that you can do a substitution 'one level up' and say pi = zeta o rho

sum_{rho in S_m} sum_{(zeta o rho) in S_m} prod_{k in 1..m} f(rho^-1 k, zeta k, k)

and reason that since rho acting on S_m is an isomorphism, you might as well say

sum_{rho in S_m} sum_{zeta in S_m} prod_{k in 1..m} f(rho^-1 k, zeta k, k)
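Both ends of that chain are easy to check by brute force for small m (arbitrary asymmetric f, m = 3, dicts standing in for permutations; names are throwaway):

```python
# Brute-force check that the original pi-sum and the final zeta-form
# of the double sum over S_m agree.
import itertools
import math

m = 3
idx = list(range(1, m + 1))
perms = [dict(zip(idx, p)) for p in itertools.permutations(idx)]

def inv(p):                      # inverse permutation
    return {v: k for k, v in p.items()}

def f(a, b, c):                  # arbitrary asymmetric test function
    return a + 2 * b * b + 3 * c * a

# sum_{rho} sum_{pi} prod_j f(j, pi j, rho j)
lhs = sum(math.prod(f(j, pi[j], rho[j]) for j in idx)
          for rho in perms for pi in perms)

# sum_{rho} sum_{zeta} prod_k f(rho^-1 k, zeta k, k)
rhs = sum(math.prod(f(inv(rho)[k], zeta[k], k) for k in idx)
          for rho in perms for zeta in perms)

assert lhs == rhs
```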

(no subject) [Sep. 18th, 2016|06:41 pm]

I think I finally see the actual shape of how determinantal point processes relate to fermions, even though I can't 100% prove it yet. This is apparently where they come from historically, but it's super hard to suss out what the connection is from the recent literature, since they all say "oh yeah fermions, see the original paper" and I can't find the original paper online, and all the middling-old papers are huge wharrgarbls of hilbert spaces and infinite integrals and fuck all that.

I'm a computer scientist! So let's consider the finite-dimensional case first. Let's try n=4, even. Suppose you have a particle that can be in one of 4 states, say, |a>, |b>, |c>, and |d>. Pick any ol' 4x4 hermitian matrix you like for the hamiltonian. This says how much the particle likes being in different states, in the sense of how much energy it has in them. (At thermal equilibrium, more energetic things are less likely; things tend to drop down to lower-energy states.) For example I could say |a> is 5 Joules, |b> is 17, |c> is -3, |d> is 0. But this is quantum physics, so I could also pick some other basis: I could say (|a>+|b>)/√2 is 50 Joules, (|a>-|b>)/√2 is 1/2, (|c>+3i|d>)/√10 is 3.9, (3i|c>+|d>)/√10 is 100.

But no matter what I say, it's hermitian, so it has 4 nice orthonormal real-eigenvalued eigenvectors, and the eigenvectors are the states that have a definite energy, and the eigenvalues are the corresponding energies. Call the eigenvectors U_1, ... U_4. Their coordinates in terms of the basis vectors |a>, |b>, |c>, and |d> I'm going to write like U_{1a}, U_{2a}, etc., so that U_n = U_{na}|a> + U_{nb}|b> + U_{nc}|c> + U_{nd}|d>.

I'm going to now consider a larger state space, but still finite, of dimension 2^4, namely the corresponding fermionic fock space. You can have anywhere between 0 and 4 particles, and any of |a>, |b>, |c>, and |d> may be present or absent, independently. There's 16 basis states, some examples of which are |>, |a>, |ac>, |abc>, |bcd>, |abcd>. The only trick is that the permutation of the labels actually matters (this is the "fermion" part), and swapping any two letters yields the same state only with the sign flipped. So |ab> = -|ba>, |ac> = -|ca>, etc.

I'm going to define the dynamics --- i.e. define a Hamiltonian --- on the big Fock space in terms of the one I already picked for the single-particle case. What are the eigenvectors of the big Hamiltonian? Tensor products --- either 0-ary or 1-ary or binary or 3-ary or whatever you want --- of eigenvectors of the one-particle Hamiltonian, subject to the order-flipping proviso. And the energies of these 'composite' eigenvectors are just the sums of the energies that went into them.

For an example, imagine the 1-particle hamiltonian with
eigenvector (|a>+|b>)/√2, eigenvalue 50
eigenvector (|a>-|b>)/√2, eigenvalue 1
eigenvector |c>, eigenvalue 9
eigenvector |d>, eigenvalue 0
Then a few examples of eigenvectors in Fock space are:
If we take (|a>+|b>)/√2 tensor |c>, we get an eigenvector (|ac>+|bc>)/√2 with eigenvalue 50+9=59
If we take (|a>+|b>)/√2 tensor (|a>-|b>)/√2, we get an eigenvector (|a>+|b>)(|a>-|b>)/2 = (|aa>+|ba>-|ab>-|bb>)/2 = (|ba>+|ba>)/2 = |ba> with eigenvalue 50+1=51
If we take (|a>-|b>)/√2 tensor |c> tensor |d> we get (|acd>-|bcd>)/√2 with eigenvalue 1+9+0=10

Now we have 16 orthogonal energy eigenstates; we let the system come to thermal equilibrium and sample it in the |a>,|b>,|c>,|d> basis. What does this mean? We pick an energy eigenstate at random with probability inversely proportional to the exponential of its energy. Then we observe some subset of |a>, |b>, |c>, |d> according to whatever the Born rule says about that eigenstate.

I think the punchline is: this whole game ends up being a determinantal point process with an L-ensemble kernel which has the same eigenvectors as the Hamiltonian, but the nth eigenvalue is not the energy E_n, but rather exp(-E_n) (or, like, exp(-E_n/T) if you want to actually think about the Boltzmann distribution at temperature T).
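This punchline is brute-forceable in the finite case, which is part of why I believe it: enumerate Fock eigenstates and Born-rule weights on one side, evaluate the claimed L-ensemble on the other (throwaway numpy, n = 3 for speed; Cauchy--Binet is secretly why the two sides agree exactly):

```python
# Check: thermal sampling of free fermions in the computational basis
# equals the L-ensemble DPP with L = exp(-H).
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 3
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (A + A.conj().T) / 2          # a random Hermitian Hamiltonian
E, U = np.linalg.eigh(H)          # energies E[i], eigenvectors U[:, i]
Z = np.prod(1 + np.exp(-E))       # partition function over all mode subsets

def born_prob(S):
    """P(observe basis subset S): pick eigenmode subset T with weight
    exp(-sum of its energies); the amplitude of S in the Slater
    determinant built from modes T is det U[S, T]."""
    total = 0.0
    for T in itertools.combinations(range(n), len(S)):
        amp = np.linalg.det(U[np.ix_(list(S), list(T))])
        total += np.exp(-E[list(T)].sum()) * abs(amp) ** 2
    return total / Z

# The claimed DPP: same eigenvectors as H, eigenvalues exp(-E_n).
L = U @ np.diag(np.exp(-E)) @ U.conj().T
def dpp_prob(S):
    LS = L[np.ix_(list(S), list(S))]
    return np.linalg.det(LS).real / np.linalg.det(np.eye(n) + L).real

for r in range(n + 1):
    for S in itertools.combinations(range(n), r):
        assert abs(born_prob(S) - dpp_prob(S)) < 1e-9
```

(The agreement is exact, not asymptotic: expanding det(L_S) by Cauchy--Binet over eigenmode subsets T reproduces the Born sum term by term, and det(I+L) = prod(1+exp(-E_n)) is exactly the partition function.)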

(no subject) [Sep. 17th, 2016|06:09 pm]

Did some overdue tidying up around the apartment; I don't know if I'm intrinsically that messy of a person, but I'm definitely messier when nobody's watching than when someone is.
