Notes from a Medium-Sized Island
Jason


[Sep. 27th, 2012|11:22 am]

aleffert's post over here about Bret Victor's most recent essay reminded me that I wanted to say a thing about a claim in it that I think is importantly wrong.

He says it right near the beginning:
People understand what they can see. If a programmer cannot see what a program is doing, she can't understand it.


Really? Can't? Despite the really interesting ideas and demos and fantastic, commendable urging of more visualization and immediation and reactivity and all that good stuff, this is just blatantly false. It's the rhetorical exaggeration of a noble sentiment to the point of distasteful falsehood.

Here are some things I do believe:

  • If a programmer can "see" what a program is doing, through some reasonable visualization, they will be better able to understand their program.
  • If an audio filter programmer can hear what their program is doing, they will be better able to understand their program.
  • If a programmer can say some things about what they expect a program to do, and have a machine check those expectations against reality, they will be better able to understand their program. So much the better if the machine can heuristically guess plausible complete specifications from partial ones. These go by names like "testing" and "typechecking", and "randomized test generation" and "type inference" (see the sketch after this list).
  • If a distributed system programmer can conveniently get useful summary statistics by querying an enormous database of logs generated by their system, they will be better able to understand their program.
  • If a programmer has awesome machine-navigable hypertext with the documentation and the IDE popups and the tab completion, freunlaven, then they will be better able to understand their program.
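
To make the testing/typechecking bullet above concrete, here is a minimal sketch of "state an expectation, let a machine check it against reality" in QuickCheck-flavored Haskell. The function under test and the properties are invented for illustration; they are not from Victor's essay or from any particular tool mentioned here.

    -- Minimal sketch: stated expectations ("properties") that a machine
    -- checks against randomly generated inputs.
    import Test.QuickCheck (quickCheck)
    import Data.List (sort)

    -- Expectation: sorting twice is the same as sorting once.
    prop_sortIdempotent :: [Int] -> Bool
    prop_sortIdempotent xs = sort (sort xs) == sort xs

    -- Expectation: sorting neither adds nor drops elements.
    prop_sortPreservesLength :: [Int] -> Bool
    prop_sortPreservesLength xs = length (sort xs) == length xs

    main :: IO ()
    main = do
      quickCheck prop_sortIdempotent      -- 100 random lists by default
      quickCheck prop_sortPreservesLength

The signatures double as the "typechecking" half of that bullet: small, machine-checked partial specifications sitting next to the randomized tests.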

By all means, keep researching cool ways of making programs better understood and thereby making programming better, but stop pretending that one method for doing that is criminally neglected, that it is ABSOLUTELY ESSENTIAL and THE ONE THING THAT PROGRAMMERS REALLY NEED, and that all the other ways are stupid poop that will never help anyone ever.

Feel free to consider this aimed at any of the above things-I-believe, and feel free to point out approaches that I have negligently forgotten to think of.

Comments:
From: simrob
2012-09-27 04:31 pm (UTC)
wild applause

(Deleted comment)
From: jcreed
2012-09-27 06:14 pm (UTC)
Yeah.

It would take effort to come up with visualizations for arbitrary things, but I still bet that it is worth trying. Maybe there is a "nice basis" that systematically covers a wide range of visualizey/modelly things in the same way that HM covers a surprisingly nice amount of type phenomena given its simplicity.
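
To make the HM comparison concrete: Hindley-Milner inference recovers the most general polymorphic type of an unannotated definition from a very small core of rules. A minimal Haskell illustration (mine, not from the thread):

    -- Hindley-Milner-style inference: no annotations needed, and the
    -- most general (polymorphic) types come back, up to renaming of
    -- type variables.
    compose f g x = f (g x)
    -- inferred: compose :: (b -> c) -> (a -> b) -> a -> c

    twice f = f . f
    -- inferred: twice :: (a -> a) -> a -> a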
From: roseandsigil
2012-09-28 01:07 am (UTC)
I imagine most of these problems spatially, but I don't believe this is universal.
From: aleffert
2012-09-28 02:27 am (UTC)
I also tend to think about them spatially, but only in a very intuitive way. I experience the cases of a datatype spatially too, but I couldn't really explain what that means.
From: toorsdenote
2012-09-28 07:46 pm (UTC)
"People understand what they can see. If a programmer cannot see what a program is doing, she can't understand it."

If a person can see something, they can understand it.

Therefore, if a person can't see something, they can't understand it.

Um, oops.
From: jcreed
2012-09-28 08:19 pm (UTC)
Yeah, not only would that be a fallacy, I don't even believe the premise. Show me a complete diagram of an x86 chip with bits flying around it as computation proceeds, and I daresay I would still need a lot more to fully understand it. It would still be pretty awesome though.
(Deleted comment)
From: aleffert
2012-09-30 12:00 am (UTC)
Since you're going to take my response to that as a typical "reactionary" one, I'd like to say that I basically agree with what you said as your eventual response. We can obviously make our tools better and we should.

I sort of allude to this in one of the comments, but I think what really bothers me about his presentation is that there's no work to getting the visualization that you want. In the examples I mention of things I was doing, there's a whole lot of context that specifies what visualization could be useful at that moment (where visualization is shorthand for information provided by my tools, not necessarily a graphic).

We can and should make our tools easier to hack and provide us with more information, but that doesn't mean that for any given task, they can already show me what I want to know.
This is the same as the classic flaw of concept videos and Hollywood interfaces. And furthermore, even if our tools were easier to hack, it may not be worth my time to do so in particular cases. I similarly find that writing unit tests is often not a good use of my time, just because I work on graphics-heavy things with tight schedules and changing requirements.

I suspect Mr. Victor doesn't really think our tools can magically do that either. After all, he wrote the code to generate those examples. But when, as you astutely point out, he conflates constructed learning cases with a generalization to all programming, I feel like I need to point out the man behind the curtain.
(Deleted comment)
From: aleffert
2012-09-30 05:44 am (UTC)
At my old job we actually solved a lot of those sorts of UI regression issues by having QA people who would do things like verify that every bug claimed as fixed was actually fixed, and then verify it again a little while before launch. Which sort of speaks to the specialization you're talking about.