Friday, March 21, 2008

would real A.I. be perceived as real?

A question entered my mind as I played a little football today. As has been pointed out before, an accurate simulation of human intelligence would be capable of mistakes. But do you think gamers can ever be expected to perceive the mistakes of their NPC companions and teammates as acceptable... or even realistic?

Frankly, I doubt it.

When an NFL wide receiver doesn't catch a pass, when a World Cup goalie loses sight of a corner kick, when a basketball player misses a free throw... fans, in my experience, are generally not very forgiving. Most fans don't think "Well, even a professional's going to mess up sometimes... ". No, they think more along the lines of "C'mon, you idiot! Even I could have caught that!". Right?

If most people don't keep in mind that even the best trained and most experienced veterans will mess up and have off days, then how could we possibly expect gamers to accept mistakes from NPCs... even if those mistakes are intentionally included as realistic variables?
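The "mistakes intentionally included as realistic variables" idea is simple to sketch in code: rather than letting an NPC always succeed, the game rolls against a success probability derived from the NPC's ability and the situation. This is a minimal illustration, not anyone's actual game code; the names `skill` and `pressure` are hypothetical stand-ins for whatever attributes a real sports game would track.

```python
import random

def npc_attempts_catch(skill, pressure, rng=random):
    """Decide whether an NPC receiver catches a pass.

    skill:    0.0-1.0, the NPC's base ability
    pressure: 0.0-1.0, situational difficulty (coverage, weather, fatigue)

    Both parameters are hypothetical -- just one way to inject
    deliberate, human-like error into an otherwise perfect agent.
    """
    catch_chance = skill * (1.0 - 0.5 * pressure)
    return rng.random() < catch_chance
```

Even a high-skill NPC under this scheme will occasionally drop an easy pass, which is exactly the kind of "realistic variable" a player is likely to read as the game cheating rather than as simulated fallibility.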

Every long-time gamer has had an experience that makes us say "the game cheated". Sometimes, we're right. Games do cheat. But it's often the same sort of response as yelling "Catch the freakin' ball!" while watching our favorite quarterback shake his head in disbelief (even the players have trouble accepting mistakes). We habitually choose to believe the simplest answer, rather than the truth.

If we're so willing to doubt our flesh-and-blood comrades, then NPCs don't stand a chance in hell.


  1. I would have to ask the other side of the question. If you were to create a perfect AI, one that models human decision making in all areas, would it be flawless? If it were flawless, would it be perceived as human?

  2. It just occurred to me that this ties in closely with the inability of movie-goers to accept the less-than-perfect judgments of film characters. Few viewers consider the characters' emotional distractions, limited scope of concentration, or simple logical imperfection... or they consider it, but for some reason reject it as observers of the plot.

    In the context of a football video game, a flawless AI would not be perceived as human. To assume that any prolonged perfection in a game opponent's performance somehow involves cheating is a very common rationalization (when not an astute realization). Humans also cheat, and perhaps a blind match (the player doesn't know whether it's a single-player or multiplayer game) could be mistaken for a human cheater, but the impression of unfairness would make the AI fail its purpose (entertainment) anyway.

    A conversational AI program might have more success at being mistaken for human, though sooner or later it would almost certainly hit the uncanny valley and alert the other party to some fundamental failure in the conversation. The scope of possibility in a human conversation is simply huge. Even with learning, dynamics, true and false memories, and a great compendium of knowledge, the AI would eventually disrupt the conversation. An adult human mind has grown through so much and can imagine so quickly and wildly.

    However, human personalities vary so greatly that we tend to rationalize inconsistencies in conversation as innocent disabilities, distractions, personality flaws ("flaws" meaning imperfectly agreeable, rather than imperfectly human), or unknown information ("Wait a minute! Am I talking to a kid?").

    And the typical user has less at stake in a conversation than in a game. An inability to understand or anticipate a game opponent's behavior is interpreted as one's own failure to rise to a challenge. A user of a chat program, by contrast, typically does not assume any direct challenge, and considers an inability to connect as an innocent failure of both sides (or of the other's alone), or as no failure at all (an inevitable incongruence of two different personalities).

    I'm making this up as I go along, so hopefully there's some value in there somewhere. :)
