Jason (jcreed) wrote,

In hindsight I was certainly too hard on Minsky and his lecture. It was kind of hard to stop once I got rolling with my little snide bullet points there.

There's something I found genuinely unsatisfying, however, about his treatment of Occam's razor, about what it means to formulate a theory of anything, and about how he responded to the audience's comments. It's one thing to counterattack your opponents' foundations when they attack yours in a one-on-one discussion (so maybe gustavolacerda is right that he should spend more time in those than in lectures :) but I think he could easily have given his questioners the benefit of the doubt and assumed they meant to use his favorite terminology, rather than answering questions completely unrelated to the ones intended by quibbling over the difference between falsification and verification, or between consciousness and intelligence, or whatever else it is his theory is a theory of.

I personally have a lot of skepticism of theories --- most of all theories of the sort of things that go on in the human brain, or how we ought to model those things --- that have any small number of neat layers, be it three or six or twenty. I don't like the argument that, well, we need to "engineer" our theory to tolerances, so that if we find more data later we still have "room" to put the extra stuff in.

I'll go out on a limb here and say: a fragile theory is great. A theory that is highly robust to various possible future experimental results coming out in various ways is crap. Call it falsifiability or verifiability or whatever you want, but having a clear idea of when a theory is demolished is really valuable. I run an experiment, and it has a good chance of forcing me to think hard about what my new theory is. A theory that by all rights should fail at the slightest experimental deviation, and yet doesn't --- because, despite lots of experiments being run, all of them agree with it so well --- is a fantastic theory.

Start with cognitive models that are extraordinarily impoverished, and add levels or layers or modules or whatever the fuck you want to call them, when you are convinced by repeatable evidence that they're really there. Anyone can write down their 7-layer burrito model of what it means to be conscious or intelligent. Anyone can think up 16 personality types, 4 binary personality traits, or 23 things that a person wants out of a MOTAS, and there's no obvious way, short of subjective bickering, of settling it when two different people come up with different lists of 16, 4, or 23 items.

If there is a good reason why this is a bad plan --- why somehow scientific theories need to be designed for future growth as if they were fucking software systems, rather than things that are indeed very expedient for us to be constantly formulating and throwing away at exactly the moment they are refuted --- I would love to hear it.