David Lewis and AI

Consider the following:
A railroad installs automatic signals: semaphores and the machinery to make their position depend on the occupancy of the track ahead. Instead of a communicator who does observable actions according to a contingency plan, there is the original agent who acts to install the machinery and there is the machinery which subsequently operates according to a contingency plan...

Or the trains which stop and go in response to the semaphores could be automated. On a railroad with automated trains and manual semaphores, every agent who operates a semaphore is involved in a two-person coordination problem with the agent who chose, once and for all, a contingency plan to be built into the trains. (On a railroad with both automated trains and automated semaphores, there is only the single coordination problem between the agent who chooses a contingency plan to be built into the trains and the agent who chooses a contingency plan to be built into the semaphores.) -- David Lewis, Convention, pp. 128-129

I think the analysis of agency here is exactly correct: if the semaphores are implementing a negligently designed plan and there is a terrible train wreck, we arrest the designer of the system, not the semaphores.

Simply because we make an even more complicated piece of machinery to carry out our plans for, say, winning the world computer chess championship, why would this analysis change at all? It is the designers of Junior who are pitting their wits against the designers of Shredder. They are playing a two-person game -- the fact that it is a game of pure conflict surely doesn't change the analysis of where the agency lies! -- in which each hopes to best the other by building a better contingency plan for playing chess into a machine. No one seems inclined to believe that railroad semaphores know they are directing trains: why do some people, once the machine gets a bit more complicated, feel inclined to say things like "Shredder understands chess better than any human"?
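
To make the notion concrete (this sketch is my own illustration, not Lewis's and not any engine's actual code): a "contingency plan built into the machinery" is just a fixed mapping from observed situations to actions, chosen once and for all by the designing agent. The machinery that runs it decides nothing; it looks the answer up.

    from enum import Enum

    class Track(Enum):
        CLEAR = "clear"
        OCCUPIED = "occupied"

    class Signal(Enum):
        PROCEED = "proceed"
        STOP = "stop"

    # The contingency plan: settled by the designing agent before the
    # machinery ever runs. The semaphore itself makes no choices.
    CONTINGENCY_PLAN = {
        Track.CLEAR: Signal.PROCEED,
        Track.OCCUPIED: Signal.STOP,
    }

    def semaphore(observed: Track) -> Signal:
        """The 'machinery': it merely carries out the designer's plan."""
        return CONTINGENCY_PLAN[observed]

    if __name__ == "__main__":
        # track clear -> signal proceed; track occupied -> signal stop
        for state in Track:
            print(f"track {state.value}: signal {semaphore(state).value}")

A chess engine's evaluation and search are vastly more elaborate than a two-entry table, but on the analysis above they play the same role: a contingency plan the designers settled on in advance.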

Comments

  1. I'm not arguing against these examples, but the analysis breaks down eventually. It would most certainly be a stretch to write, "This match was not between Kasparov and Karpov but between their coaches. Clearly Kasparov's trainers did a better job of programming him to play chess." (Or maybe the coaches' coaches did a better job of programming them to teach someone to play chess... ad infinitum.)

    So if, in fact, it's possible to create an artificial intelligence, there will come a point where we should switch to talking about its motivations, its mistakes, and its successes, and talk about its programmers no more than we talk about a child's parents when judging his character (which does reflect on them, of course, but that's not primarily how we consider people).

    Maybe that point has been reached once the programmer/designer/trainer says, "I have no idea why it did that."

    Replies
    1. "It would most certainly be a stretch to write that "this match was not between Kasparov and Karpov, but their coaches. Clearly Kasparov's trainers did a better job of programming him to play chess."

      Well, yes, the coaches did not *construct* Kasparov and Karpov. They taught them. (Programmers are quite literally constructing a machine with their code.) Kasparov and Karpov decided on their own to play chess: the coaches did not create them as automatons that cannot help but play chess when the right button is pushed.

      You seem to be missing the point about agency.

      And I am not denying that some AI might some day decide on its own to play chess! I just see no evidence that any has done so as of yet.

  2. So the actual agent is not necessarily the builder or the programmer, but the person who decided to operate the lines using semaphores, or to pit one chess computer against the other.

    Replies
    1. If I work for you building a road, it might be fair to say I am exercising less agency than you, but I am still an agent: I can, after all, quit. But my bulldozer is not an agent, and I see no reason to call Deep Blue one either: I don't think Deep Blue has the option to play or not.

