By itself, that technology's awesome. I expect I'll be able to take excellent 3-D tours of half the major tourist attractions in the world 15 years from now. In addition, this seems to be a major step toward easier modeling of real objects.
If the human eye can perceive depth in 2-D representations like photographs and paintings, then someone should be able to write software that does the same, turning two-dimensional representations into three-dimensional ones. We already have a crude hint of this (apply the "sharpen" filter to a photograph half a dozen times and notice how it takes on a 3-D effect), though I'm not suggesting such software would be easy to build.
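Just to make the "sharpen" trick concrete, here's a minimal sketch of what that filter actually does: repeated convolution with a standard 3x3 sharpen kernel, which exaggerates edges in a way the eye can read as depth cues. This is only an illustration of the filter itself (the kernel and function names are my own), not the depth-detection software imagined above.

```python
import numpy as np

# A standard 3x3 sharpen kernel: boosts each pixel and subtracts its
# four neighbors, which exaggerates edges.
SHARPEN = np.array([[ 0, -1,  0],
                    [-1,  5, -1],
                    [ 0, -1,  0]], dtype=float)

def convolve2d(img, kernel):
    """Naive same-size 2-D convolution with zero padding."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img.astype(float), ((ph, ph), (pw, pw)))
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def sharpen(img, passes=6):
    """Apply the sharpen kernel repeatedly ("half a dozen times"),
    clipping to the valid 0-255 range after each pass."""
    out = img.astype(float)
    for _ in range(passes):
        out = np.clip(convolve2d(out, SHARPEN), 0.0, 255.0)
    return out

# A toy "photograph": flat dark region next to a flat bright region.
img = np.zeros((8, 8))
img[:, 4:] = 100.0
result = sharpen(img, passes=1)
```

After one pass, pixels on the bright side of the edge get pushed brighter and pixels on the dark side get pushed darker, which is the exaggerated-relief look the post describes.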
If that can be accomplished to an acceptable degree of accuracy, think of how these two technologies could be combined! If software could accurately detect depth and distance in 2-D representations, and if those representations (now 3-D) could be linked into a combined model (similar to the products of Photosynth), then an entire real-world environment could be represented in a 3-D computer model by little more than taking a lot of photographs and processing them through the software.
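One of the simplest real versions of "detecting depth and distance" from photographs is stereo triangulation: if the same point is identified in two photos taken a known distance apart, its depth follows from how far the point shifts between the images. A minimal sketch, assuming an idealized rectified camera pair (all names and numbers here are illustrative, not part of Photosynth):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth of a point seen by two rectified cameras.

    focal_px     -- focal length of the cameras, in pixels
    baseline_m   -- distance between the two camera centers, in meters
    disparity_px -- horizontal shift of the point between the two
                    photos, in pixels

    Uses the classic pinhole relation Z = f * B / d: the smaller the
    shift, the farther away the point.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Example: a 700 px focal length, cameras 10 cm apart, point shifts
# 35 px between the photos -> the point is 2 meters away.
z = depth_from_disparity(focal_px=700, baseline_m=0.1, disparity_px=35)
```

Doing this for many points across many overlapping photos is essentially the structure-from-motion idea behind tools like Photosynth, just scaled up.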
The model would be almost entirely realistic (i.e., past the Uncanny Valley). No more trying to simulate the endless and unpredictable variety of textures, all of the minor flaws and eccentricities that give real objects believable character. The model of a house would have chipped paint in all the right places, settled supports in all the right places, windows with smudges, bricks with holes, and so on.
Of course, I realize this technology won't arrive without significant hurdles. Even once it's accomplished, game developers would have to figure out how to assign physics to art they didn't build piece by piece. They'd have to go in after the fact and divide the art into sections for gameplay purposes. And that's just one hurdle.
Still, Photosynth seems to be an exciting development for game developers in particular. =)