
SIGGRAPH paper


Future Work
  • Hull interaction in the Space Manager
  • Create systems with unknown relationships between the fiducials and the world coordinate system.

System fixes and small improvements
  • the combiner should decide which path to use in the AugmentVisitor, instead of in the ErrorVisitor.
  • the computation should use the shortest path from the camera, instead of the path from the root.
  • track down the memory leak.
  • improve computation of the Inner Hull.
  • Have the trackers report uncertainty and add it to the system.
  • allow osgAR::Group to be nested.
  • correct camera distortion
  • rethink the augmentations -> SpaceManager/HUD relationships and make sure they are handled correctly.


  1. Multiple tracking areas – set up two different tracking areas, one high accuracy and one low accuracy. Move the tracked camera between the areas and show how the application uses LOE to adjust the augmentation. We may also run an application that does not adapt, to show the difference. This would be run in an indoor environment. Possibilities: artificially degrade the IS600 tracker, or use the 1200 with different beacon densities in different areas.
  2. Tracker failure – the application would lose tracker input for a while. Our application would change the augmentations used; a common application would stop working.
  3. Explanation of the method – use the errorBounder to explain how the registration error estimate is computed and used to change the augmentations. This could be applied with, say, the 1200 to show its accuracy. (Eric has requested this.)
  4. Outdoor example – have a tracked vehicle or a mobile user heading towards B34 and Simon's office. Starts with big objects far away (a building) and transitions down to small details (windows and doors). Also illustrates coupling with the information filter.

Tasks for Enylton:
  1. Set up infrastructure to record video.
  2. Call-out lines on the LabelPlacer.
  3. Combiner using the tool-box.
  4. Change configuration files so that ART reports in meters.
  5. Create a HULL class that stores the computed convex hull.
  6. Create a generic class to update the camera position.
  7. Use 'render to texture' to distort the background image.
  8. Create methods to deal with the interaction of the 2D bounding boxes.
  9. Compute the inner hull.
  10. Have one ErrorBounder with multiple transformations as parents.
  11. Implement the TransformCombiner.
  12. Create a ConvexHull class (2D and 3D). Using CGAL.
  13. Add labels using the bounding box computation.
    1. use callback to position the labels.
  14. Create examples of the LabelPlacer.
  15. Allow augmentation nodes to be nested. (using a stack on Optimizer Visitor)
  16. Only compute the estimate for nodes that will be necessary, for efficiency reasons (use flags/damage)
  17. Allow the user to add more than one model to be used for computation.
  18. Add the tracker base to the scene graph (by specifying a node).
  19. Check with Simon if it is possible to integrate the error estimate of the head tracker by inverting the covariance matrix from the root to the tracker and setting it to be the 'initial' MatrixEstimate on the Augment Visitor.
  20. Allow the user to specify the metric and the aggregator to be used by Assessment.
  21. Attach a video camera to the tracker and calibrate it (hardware).
  22. Fix bug that causes augmentation to be 1 frame behind.
  23. Describe how LOE works.
  24. Implement switch node to automatically choose between different augmentations (LOE).
  25. Separate drawing code from computation code in the ErrorVisitor.
  26. From a Drawable object, extract its vertices and calculate the 2D projection of those vertices.
  27. Develop debug code which turns a set of 2D vertices into 2D line objects which are drawn on the screen.
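Task 8's interaction of 2D bounding boxes reduces to axis-aligned overlap tests and unions; a minimal sketch using a hypothetical Box2D type (the actual osgAR class may differ):

```cpp
#include <algorithm>
#include <cassert>

// Hypothetical 2D axis-aligned bounding box; osgAR's real class may differ.
struct Box2D {
    double xmin, ymin, xmax, ymax;

    // Two boxes interact when their intervals overlap on both axes.
    bool intersects(const Box2D& o) const {
        return xmin <= o.xmax && o.xmin <= xmax &&
               ymin <= o.ymax && o.ymin <= ymax;
    }

    // Smallest box enclosing both (useful when merging label regions).
    Box2D unionWith(const Box2D& o) const {
        return { std::min(xmin, o.xmin), std::min(ymin, o.ymin),
                 std::max(xmax, o.xmax), std::max(ymax, o.ymax) };
    }
};
```

Screen-space label placement would query these tests pairwise to detect overlapping augmentations.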
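Task 12's ConvexHull class would wrap CGAL (e.g. CGAL::convex_hull_2 for the 2D case); as a standalone sketch of the underlying idea, here is Andrew's monotone chain for 2D points:

```cpp
#include <algorithm>
#include <vector>
#include <cassert>

// Standalone 2D convex hull (Andrew's monotone chain). The task calls for
// CGAL; this sketch only illustrates what the wrapped computation does.
struct Pt { double x, y; };

// Cross product of (a - o) and (b - o); positive means a left turn.
static double cross(const Pt& o, const Pt& a, const Pt& b) {
    return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
}

// Returns the hull in counterclockwise order, collinear points dropped.
std::vector<Pt> convexHull(std::vector<Pt> p) {
    std::sort(p.begin(), p.end(), [](const Pt& a, const Pt& b) {
        return a.x < b.x || (a.x == b.x && a.y < b.y);
    });
    if (p.size() < 3) return p;
    std::vector<Pt> h(2 * p.size());
    size_t k = 0;
    for (size_t i = 0; i < p.size(); ++i) {               // lower hull
        while (k >= 2 && cross(h[k-2], h[k-1], p[i]) <= 0) --k;
        h[k++] = p[i];
    }
    for (size_t i = p.size() - 1, t = k + 1; i-- > 0; ) { // upper hull
        while (k >= t && cross(h[k-2], h[k-1], p[i]) <= 0) --k;
        h[k++] = p[i];
    }
    h.resize(k - 1);  // last point repeats the first; drop it
    return h;
}
```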
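Task 26 (extracting a Drawable's vertices and computing their 2D projection) amounts to a perspective divide; a minimal sketch assuming a simple pinhole camera at the origin looking down -z with focal length f (the real code would go through OSG's full model-view-projection matrices):

```cpp
#include <vector>
#include <cassert>
#include <cmath>

// Hypothetical minimal vector types; OSG would use osg::Vec3 / osg::Vec2.
struct Vec3 { double x, y, z; };
struct Vec2 { double x, y; };

// Project 3D vertices onto the image plane of a pinhole camera at the
// origin looking down -z. Vertices at or behind the camera are skipped;
// real code would clip against the view frustum instead.
std::vector<Vec2> projectVertices(const std::vector<Vec3>& verts, double f) {
    std::vector<Vec2> out;
    out.reserve(verts.size());
    for (const Vec3& v : verts) {
        if (v.z >= 0.0) continue;
        out.push_back({ f * v.x / -v.z, f * v.y / -v.z });
    }
    return out;
}
```

The debug code of task 27 would then connect consecutive projected vertices into 2D line objects drawn on the screen.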

Tasks for Simon:
  1. Check that the code for errors in the FOV, etc. works properly. Tests at VR2004 suggested that making the projection matrix imperfect seemed to cause the variance to collapse to 0.
  2. Check that the osgVideo interface can be used with cameras at NRL.
  3. Work out how to drive the viewpoint using the nfl data.
  4. Develop usable interface for setting variances from high level parameters (FOV, quaternion error, etc.)
  5. Post questions about CullVisitor node oddities to the OSG mailing list
  6. Post questions about RELATIVE_TO_ABSOLUTE oddities to the OSG mailing list
  7. Write code to add errors to trackers. (Convert errors expressed in Euler angles and positions to quaternion errors). DONE
  8. Apply the nonlinear transformation code to the projected vertices. DONE
  9. Develop debugging display which uses 2D vertices. DONE
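The conversion in task 7 boils down to the standard Euler-to-quaternion formula; a minimal sketch assuming roll-pitch-yaw (XYZ) angle order and a plain array in place of osg::Quat:

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Convert an orientation error given as roll/pitch/yaw (radians, XYZ
// roll-pitch-yaw convention) into a unit quaternion (w, x, y, z).
// The osgAR code would presumably build an osg::Quat instead.
std::array<double, 4> eulerToQuat(double roll, double pitch, double yaw) {
    double cr = std::cos(roll / 2),  sr = std::sin(roll / 2);
    double cp = std::cos(pitch / 2), sp = std::sin(pitch / 2);
    double cy = std::cos(yaw / 2),   sy = std::sin(yaw / 2);
    return {
        cr * cp * cy + sr * sp * sy,   // w
        sr * cp * cy - cr * sp * sy,   // x
        cr * sp * cy + sr * cp * sy,   // y
        cr * cp * sy - sr * sp * cy    // z
    };
}
```

For small errors this is close to (1, roll/2, pitch/2, yaw/2), which is the linearization typically used when propagating covariance.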
Possible steps which need to be looked at:
  1. Work out way of tagging objects to show which ones can be potentially confusing to one another. "Confusers". (Semantic tagging using some kind of type field?)
  2. Work out some way to determine if there are interferences between different objects. (2D space management stuff from Blaine?)
  3. Work out set of behaviours for doing different representations / smooth animations.

  • Old Misc Pages last edited on 15 May 2008 at 2:02 pm by