You might enjoy watching the NOVA special on the DARPA Grand Challenge, a desert race for robots. I was particularly impressed by Stanley, a robot built by Stanford's AI Lab under Sebastian Thrun.
Stanley fuses laser data with video from a camera. The laser scans the terrain ahead to detect flat, drivable ground, but only over a short range. The AI then compares that laser data to the video image, guessing that the landscape beyond the laser's range will roughly resemble the landscape within it. Specifically, it focuses on color: if the flattest terrain tends to share a general color (the color of the road), then wherever that color appears elsewhere in the video image is likely to be flat, usable road as well. The laser data remains the core data, but the camera lets the AI predict what lies ahead.
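To make the idea concrete, here's a toy sketch of that fusion step — my own illustration, not Stanley's actual code. Pixels the laser has confirmed as flat terrain train a simple color model (just a mean color and a distance threshold, both hypothetical simplifications), which then labels the rest of the image:

```python
def mean_color(pixels):
    """Average RGB of a list of (r, g, b) tuples."""
    n = len(pixels)
    return tuple(sum(p[i] for p in pixels) / n for i in range(3))

def color_distance(a, b):
    """Euclidean distance between two RGB colors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def classify_drivable(image, laser_flat_coords, threshold=40.0):
    """Label every pixel whose color is near the 'road color' as drivable.

    image: dict mapping (row, col) -> (r, g, b)
    laser_flat_coords: pixel coords the laser scan confirmed as flat terrain
    """
    # Learn the road's color from the laser-confirmed region...
    road_color = mean_color([image[c] for c in laser_flat_coords])
    # ...then extrapolate: similar-colored pixels beyond laser range
    # are probably road too.
    return {c for c, rgb in image.items()
            if color_distance(rgb, road_color) <= threshold}
```

So gray pixels far up the image get flagged as probable road because nearby gray pixels were laser-verified as flat — the camera extends the laser's reach, with limited certainty.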
Needless to say, going to the link above and watching the show will clarify my description.
Making art part of AI
It occurred to me that similar systems could be used to improve game AI... only game designers have the advantage of being able to design environments to fit. Rather than design pathfinding programs to work in any game setting, why not work cooperatively with artists and level designers so that environments can directly inform the AI?
Stanley had to actively infer where the road was by comparing two types of data, and with limited certainty. In a game, by contrast, certain colors and textures can be made exclusive to particular objects, acting as perfectly accurate qualifiers for the AI.
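The difference can be stated in one line of code: where Stanley must guess meaning from noisy sensor data, a game AI can read it from a lookup table the artists authored. A minimal sketch (all the texture names and tags here are hypothetical):

```python
# Designer-authored mapping: because each texture is exclusive to one
# kind of surface, the lookup is a perfectly accurate classifier.
SURFACE_TAGS = {
    "tex_road_asphalt": "walkable",
    "tex_swamp_mud": "slow",
    "tex_lava_flow": "lethal",
}

def surface_meaning(texture_id):
    """Read a surface's meaning directly from its texture id."""
    return SURFACE_TAGS.get(texture_id, "unknown")
```

No statistics, no thresholds — the "sensing" is exact because the environment was built to be sensed.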
Script triggers and invisible rails for object movement are common in games, of course. But what I'm proposing (and perhaps this method is already used -- I'm not a programmer) is data built into environment/level design that is informative rather than determinative.
Such cues might take the form of marker objects, hues, line or object patterns, textures, or combinations of these. They could be used for anything from pathfinding to personality simulation (e.g., different AI agents could prefer different terrain or different amounts of cover).
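Here's one sketch of how informative-not-determinative cues could drive both pathfinding and personality — everything here (the tags, the preference multipliers) is my own hypothetical illustration. Tiles carry designer-authored tags, and each agent weights those tags differently, so the same level produces different behavior per personality:

```python
import heapq

def find_path(grid, start, goal, preferences):
    """Dijkstra search over a tag-labeled grid.

    grid: dict (x, y) -> terrain tag (e.g. "road", "open", "cover")
    preferences: tag -> step-cost multiplier for this agent;
                 missing tags default to 1.0, higher means "avoid".
    """
    frontier = [(0.0, start, [start])]  # (cost so far, node, path taken)
    seen = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in grid and nxt not in seen:
                # The tile's tag informs the cost; the agent's
                # personality decides how much that tag matters.
                step = preferences.get(grid[nxt], 1.0)
                heapq.heappush(frontier, (cost + step, nxt, path + [nxt]))
    return None
```

With this, a bold agent (empty preferences) cuts straight across an "open" tile, while a cautious one (say, `{"open": 10.0, "cover": 0.5}`) detours through cover to reach the same goal — the level data informs the AI without dictating a single route.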