*Urban-informatics prototypes. Hmmm.*
"...for infrastructure engineering, augmentation has to be displayed with much more accuracy. If you want to get information about a fire hydrant, you want to make sure the information you get relates to that hydrant, and not to the drain next to it. The accuracy with which we can measure the position and orientation of an augmentation device is a major problem in augmented reality research, and it prevents the development of serious AR applications. Researchers in the AR world spend a lot of energy trying to solve that problem, but until it is solved, we need to find (temporary) workarounds.
"Our team has been working on the problem for a while. What we thought was: instead of trying to track position and orientation, why not make the problem simpler? Position is by far the hardest part to measure accurately, so let's skip that – let's assume the user does not move, but simply rotates in place. (I know, that is quite a constraint, but let's go with it for now.) If we know the user's location and we know that he stays at that location, all we have to measure is his orientation. And that is much easier to measure accurately. Now, if you stand at a specific position and all you can do is turn around, what is your perception of the world? It turns out the world becomes a 2D image, in the form of a 360-degree panorama. So why not augment panoramas instead? After all, there are plenty of panoramas around... (think of Google Streetview). (Or Bing Read/Write World.)
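The core of the "stand still and rotate" idea is that a viewing direction collapses to a 2D position in the panorama image. As a minimal sketch, assuming an equirectangular 360° panorama (the common Streetview-style format) and a hypothetical convention where yaw 0° is the panorama's left edge and pitch runs from -90° (down) to +90° (up), the mapping from measured orientation to pixel coordinates is just a linear rescaling:

```python
def direction_to_pixel(yaw_deg, pitch_deg, pano_w, pano_h):
    """Map a viewing direction to equirectangular panorama pixel coordinates.

    yaw_deg:   heading in degrees; 0 corresponds to the panorama's left edge.
    pitch_deg: elevation in degrees, from -90 (straight down) to +90 (straight up).
    pano_w, pano_h: panorama dimensions in pixels (typically pano_w == 2 * pano_h).
    """
    u = (yaw_deg % 360.0) / 360.0        # horizontal fraction across the panorama
    v = (90.0 - pitch_deg) / 180.0       # vertical fraction (top of image = +90 pitch)
    return u * pano_w, v * pano_h


# Looking straight ahead at heading 180 on a 3600x1800 panorama
# lands in the middle of the image.
x, y = direction_to_pixel(180.0, 0.0, 3600, 1800)
```

This is why only orientation needs to be tracked: with the viewer pinned to the panorama's capture point, the device's compass and tilt sensors alone determine which part of the image is in view.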
"So we developed a technique for registering and anchoring a 3D model to a panorama with high accuracy, and used it to build two prototypes: one that runs on the desktop, the other on a tablet. The desktop version can be used to view infrastructure in its real-world context from a remote location (e.g. your office). Check out the video below...."
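The post doesn't detail the registration technique, but the geometric heart of anchoring a 3D model to a panorama can be sketched: once the panorama's capture position is known, any 3D world point projects to a unique panorama pixel. The sketch below assumes an equirectangular panorama, a z-up world frame, and that the panorama's yaw origin is aligned with the +x axis; the function name and conventions are ours, not from the prototypes described above:

```python
import math

def project_point(viewer, point, pano_w, pano_h):
    """Project a 3D world point into an equirectangular panorama captured at
    `viewer`. Assumes z is up and yaw 0 aligns with the +x axis (a
    hypothetical convention; real systems must calibrate this offset)."""
    dx, dy, dz = (p - v for p, v in zip(point, viewer))
    yaw = math.degrees(math.atan2(dy, dx)) % 360.0   # heading to the point
    horiz = math.hypot(dx, dy)                        # ground-plane distance
    pitch = math.degrees(math.atan2(dz, horiz))       # elevation to the point
    u = yaw / 360.0 * pano_w
    v = (90.0 - pitch) / 180.0 * pano_h
    return u, v


# A hydrant 5 m east of the capture point, at ground level,
# projects onto the panorama's horizon line.
u, v = project_point((0.0, 0.0, 0.0), (5.0, 0.0, 0.0), 3600, 1800)
```

Registering a whole 3D model then amounts to projecting each of its vertices this way and overlaying the result on the panorama image, which is why accuracy hinges on knowing the capture position and the yaw/pitch calibration of the panorama.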