Wednesday, May 21, 2008

AI: beyond logic

Human beings are not logical. Or, rather, we're not only logical. Every one of us makes choices that depart from what we consciously perceive to be the best, most rational options. A person who exhibits no emotion is called robotic, because emotion is also an essential aspect of humanity. Some might even go further and say that the will is neither logic nor emotion, but something else.

My own view is that emotions are based in a sort of logic. Happiness is a response to communion with the things we love: togetherness. Sadness is a response to being apart from them. Anger is a response to injustice. And other emotions are generally some mixture of basic emotions like those, filtered through our personal perceptions of what is truth, what is love, and so on.

Regardless of what one theorizes emotions to be, we are incapable of completely predicting human behavior. We can't even fully predict the behavior of many lesser animals.

Viewers of horror films often remark that particular characters are unbelievable because those characters act irrationally. I don't think the audience truly expects characters free of impediments to logic, such as fear and anger (or even varying degrees of intelligence). Instead, the audience is responding to the characters being too neatly fashioned... too predictable. Of course, she's just going to hide and cry until she's found and slaughtered! That's what they always do, right? Audiences generally expect characters to have half a brain, but the actions of deep characters always contain surprises. A truly deep character is not completely knowable, because no human being is completely knowable.

So where is all of this leading? When depth is a goal, AI should include self-contained variables. A character's actions should not be determined solely by a "personality" type plus environmental circumstances. Within that personality, there should be a range of possible responses.

The same person placed in the same basic circumstance will often respond differently. In reality, this doesn't happen willy-nilly; it's in response to a change in internal circumstances (distraction, drowsiness, recurring thoughts due to individual values, new ideas conflicting with conditioning, etc.). But trying to mimic the endless intricacies of a human personality is a lost cause. So a simulation meant to give the illusion of true depth should contain internal dice... doing for personality what the Euphoria engine does for physics, albeit accomplishing that illusion in a very different way.
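
To make that concrete, here's a minimal sketch in Python. Everything in it is hypothetical and illustrative; "bravery" and "distraction" are just stand-ins for whatever stable traits and internal circumstances a real game would model:

    import random

    class Agent:
        """An NPC with a fixed personality but drifting internal state."""

        def __init__(self, bravery):
            self.bravery = bravery    # stable "personality" trait, 0..1
            self.distraction = 0.0    # hidden internal circumstance, 0..1

        def tick(self):
            # The internal dice: internal circumstances drift on their
            # own, invisibly to the player.
            self.distraction = min(1.0, max(0.0,
                self.distraction + random.uniform(-0.1, 0.1)))

        def react_to_threat(self):
            # Effective courage is personality modulated by internal
            # state, so the same agent in the same situation won't
            # always act alike.
            effective = self.bravery * (1.0 - self.distraction)
            roll = random.random()
            if roll < effective:
                return "fight"
            if roll < effective + 0.3:
                return "hide"
            return "flee"

    guard = Agent(bravery=0.7)
    for _ in range(5):
        guard.tick()
        print(guard.react_to_threat())

The player never sees the distraction number; they just see a guard who sometimes stands his ground and sometimes doesn't.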

Minor variables can have big consequences, as I'm sure AI programmers are well aware. If an enemy hesitates in combat for just a second, all sorts of things could happen in that second. The player might gain an advantage. The enemy's companions could arrive and start shooting. The wind or waves could change, pushing a vehicle slightly off its intended course. That moment's hesitation doesn't have to be exactly scripted by the programmers or animators, because the player's imagination will fill in the blanks. Perhaps it was fear. Perhaps it was a sudden realization. Perhaps it was his back going out of alignment (speaking of simulation... the problems of getting old!).
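
A hesitation like that can be dirt cheap to implement. Here's a hedged sketch, again with invented numbers (the probability and duration would need tuning in a real game):

    import random

    def action_delay(base_delay, composure):
        """Seconds before an enemy commits to its next action.

        composure is 0..1; a rattled enemy hesitates more often and
        for longer. The reason for the pause is never stated, so the
        player's imagination fills in the blanks.
        """
        if random.random() > composure:
            return base_delay + random.uniform(0.2, 1.0)  # a moment's pause
        return base_delay

    # An enemy with low composure will sometimes give the player a window.
    for _ in range(5):
        print(round(action_delay(0.1, composure=0.4), 2))

Tuning composure per enemy type would keep the pauses from feeling uniform across a whole encounter.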

One thing game developers could learn from old movies (the '40s and '50s) is the value of innuendo and incomplete information. Scenes are most powerful when not everything is spelled out for the audience. That includes combat scenes.

Anyway, I think the future of storytelling in games lies largely in the development of better AIs. This is one way to get there.
