• Publications

Seo, Byung-Kuk; Wuest, Harald

A Direct Method for Robust Model-Based 3D Object Tracking from a Monocular RGB Image

2016

Hua, Gang (Ed.) et al.: Computer Vision - ECCV 2016 Workshops. Proceedings Part I. Springer, 2016. (Lecture Notes in Computer Science (LNCS) 9913), pp. 551-562

European Conference on Computer Vision (ECCV) <14, 2016, Amsterdam, The Netherlands>

This paper proposes a novel method for robust 3D object tracking from a monocular RGB image when an object model is available. The proposed method is based on direct image alignment between consecutive frames over a 3D target object. Unlike conventional direct methods that rely on image intensity alone, we explicitly model intensity variations using the surface normal of the object under the Lambertian assumption. Based on the image intensity predicted by this model, we also employ a constrained objective function, which significantly alleviates degradation of the tracking performance. In experiments, we evaluate our method on datasets consisting of test sequences under challenging conditions, and demonstrate its benefits compared to other methods.
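The normal-based intensity model described in this abstract can be illustrated with a small sketch. Under the Lambertian assumption, a surface point's intensity scales with max(0, n · l); dividing intensities by this predicted shading before differencing removes brightness changes caused purely by the surface's changing orientation to the light. This is a minimal illustration assuming a single known distant light direction — the function names and the exact residual form are ours, not the paper's.

```python
import numpy as np

def shading(normals, light_dir):
    """Lambertian shading term max(0, n . l) for each surface normal."""
    l = np.asarray(light_dir, dtype=float)
    l = l / np.linalg.norm(l)
    return np.clip(np.asarray(normals, dtype=float) @ l, 0.0, None)

def photometric_residuals(i_prev, i_curr, n_prev, n_curr, light_dir, eps=1e-3):
    """Residuals for direct alignment with a normal-based intensity model.

    Instead of plain brightness constancy (i_curr - i_prev), each observed
    intensity is divided by its predicted shading, so a point whose
    brightness changed only because its orientation to the light changed
    produces a near-zero residual. Illustrative sketch only, not the
    authors' exact formulation.
    """
    s_prev = np.maximum(shading(n_prev, light_dir), eps)
    s_curr = np.maximum(shading(n_curr, light_dir), eps)
    return np.asarray(i_curr, dtype=float) / s_curr - np.asarray(i_prev, dtype=float) / s_prev
```

For example, a point with albedo 0.8 facing the light (shading 1.0) that rotates to 60° (shading 0.5) dims to intensity 0.4; plain brightness constancy reports a large residual, while the normal-compensated residual is essentially zero.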


Sheldrick, Peter; Wuest, Harald (Supervisor); Kuijper, Arjan (Supervisor)

CAD-Model Tracking using RGB-D Cameras

2016

Darmstadt, TU, Master Thesis, 2016

This thesis deals with the determination of the six degrees of freedom of an RGB-D camera relative to a known CAD model. Relying on extracted image features alone reduces the achievable tracking precision. This thesis therefore presents methods that use the whole input frame of a depth camera, so-called "dense" methods. Methods such as ICP, as used in KinectFusion, and depth-image warping, as used in DVO-SLAM, are compared for the task of CAD-model tracking. Rendering is used for tracking, and both GPU implementations based on OpenGL and CPU ray casting are used to track real depth data.


Wuest, Harald; Engelke, Timo; Schmitt, Florian; Keil, Jens

From CAD to 3D Tracking - Enhancing & Scaling Model-Based Tracking for Industrial Appliances

2016

Mayol-Cuevas, Walterio (Ed.) et al.: 2016 IEEE International Symposium on Mixed and Augmented Reality : ISMAR 2016. Los Alamitos, Calif.: IEEE Computer Society, 2016, 2 p.

IEEE International Symposium on Mixed and Augmented Reality (ISMAR) <15, 2016, Merida, Mexico>

For Augmented Reality to succeed in industrial applications, industries demand not only robust and reliable tracking techniques, but also a scalable, performant pipeline that is easy to integrate into the existing data and content environment and that enables vendors to create tracking solutions on their own. In our demo, we present recent advances in our model-tracking pipeline and tracking technology, which is easy to use and easy to integrate, remains robust under difficult environmental conditions, and delivers the high accuracy required in the industrial domain. We showcase our results in an AR manual scenario.


Seo, Byung-Kuk; Wuest, Harald

Robust 3D Object Tracking Using an Elaborate Motion Model

2016

Mayol-Cuevas, Walterio (Ed.) et al.: 2016 IEEE International Symposium on Mixed and Augmented Reality : ISMAR 2016. Los Alamitos, Calif.: IEEE Computer Society, 2016, pp. 70-71

IEEE International Symposium on Mixed and Augmented Reality (ISMAR) <15, 2016, Merida, Mexico>

This paper proposes a new method for robust 3D object tracking from a single RGB image when an object model is available. The proposed method is based on image alignment between consecutive frames over a 3D target object. Unlike conventional methods that rely on image intensity alone for the alignment, we model intensity variations using the surface normal of the object. From this model, we also define a new constraint for the pose estimation, leading to significant improvement in tracking robustness. In experiments, we demonstrate the benefits of our method by evaluating it under challenging tracking conditions.


Wientapper, Folker; Engelke, Timo; Keil, Jens; Wuest, Harald; Mensik, Johanna

User Friendly Calibration and Tracking for Optical Stereo See-Through Augmented Reality

2014

Julier, Simon (Ed.) et al.: IEEE International Symposium on Mixed and Augmented Reality - Science & Technology 2014 : ISMAR 2014. Piscataway, NJ: IEEE Service Center, 2014, pp. 385-386

IEEE International Symposium on Mixed and Augmented Reality (ISMAR) <13, 2014, Munich, Germany>

Optical see-through head-mounted displays (OST-HMDs) have been a focus of development since the first days of Augmented Reality (AR), and nowadays the first affordable prototypes are reaching the market. Beyond common technical problems, such as achieving a proper field of view, reducing weight, and miniaturizing these systems, a crucial aspect for AR is the calibration of such a device with respect to the individual user, so that augmentations are properly aligned. Our demonstrator shows a practical solution to this problem, together with a fully featured example application for a typical maintenance use case, based on a generalized framework for application creation. We describe the technical background and procedure of the calibration, the tracking approach that exploits the sensors of the device, user-experience factors, and the implementation procedure in general. We present our demonstrator on an Epson Moverio BT-200 OST-HMD.


Ventura, Jonathan; Wagner, Daniel; Kurz, Daniel; Wuest, Harald; Benhimane, Selim

Workshop on Tracking Methods & Applications

2014

Julier, Simon (Ed.) et al.: IEEE International Symposium on Mixed and Augmented Reality - Science & Technology 2014 : ISMAR 2014. Piscataway, NJ: IEEE Service Center, 2014, 2 p.

IEEE International Symposium on Mixed and Augmented Reality (ISMAR) <13, 2014, Munich, Germany>

The focus of this workshop is on all issues related to tracking for mixed and augmented reality applications. Unlike the tracking sessions of the main conference, this workshop does not require pure novelty of the proposed methods; it rather encourages presentations that concentrate on complete systems and integrated approaches engineered to run in real-world scenarios. The research fields covered include self-localization using computer vision or other sensing modalities (such as depth cameras, GPS, inertial, etc.) and tracking systems issues (such as system design, calibration, estimation, fusion, etc.). This year's focus is also expanded to research on object detection and semantic scene understanding with relevance to augmented reality. Implementations on mobile devices and under real-time constraints are also part of the workshop focus. These are issues of core importance for practical augmented reality systems.


Wientapper, Folker; Wuest, Harald; Rojtberg, Pavel; Fellner, Dieter W.

A Camera-Based Calibration for Automotive Augmented Reality Head-Up-Displays

2013

IEEE Computer Society Visualization and Graphics Technical Committee (VGTC): 12th IEEE International Symposium on Mixed and Augmented Reality 2013 : ISMAR 2013. Los Alamitos, Calif.: IEEE Computer Society, 2013, pp. 189-197

IEEE International Symposium on Mixed and Augmented Reality (ISMAR) <12, 2013, Adelaide, SA, Australia>

Using Head-up-Displays (HUD) for Augmented Reality requires an accurate internal model of the image generation process, so that 3D content can be visualized perspectively correct from the viewpoint of the user. We present a generic and cost-effective camera-based calibration for an automotive HUD which uses the windshield as a combiner. Our proposed calibration model encompasses the view-independent spatial geometry, i.e. the exact location, orientation and scaling of the virtual plane, and a view-dependent image warping transformation for correcting the distortions caused by the optics and the irregularly curved windshield. View-dependency is achieved by extending the classical polynomial distortion model for cameras and projectors to a generic five-variate mapping with the head position of the viewer as additional input. The calibration involves capturing an image sequence from varying viewpoints while displaying a known target pattern on the HUD. The accurate registration of the camera path is retrieved with state-of-the-art vision-based tracking. As all necessary data is acquired directly from the images, no external tracking equipment needs to be installed. After calibration, the HUD can be used together with a head-tracker to form a head-coupled display which ensures a perspectively correct rendering of any 3D object in vehicle coordinates from a large range of possible viewpoints. We evaluate the accuracy of our model quantitatively and qualitatively.
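The view-dependent warp described in this abstract — a classical polynomial distortion model extended to five input variables (screen position plus viewer head position) — amounts to a multivariate polynomial fitted by least squares to calibration samples. The sketch below shows that structure; the function names, the choice of degree 2, and the synthetic data are our illustrative assumptions, not the paper's parameterization.

```python
import numpy as np
from itertools import combinations_with_replacement

def poly_features(X, degree=2):
    """All monomials up to `degree` in the input variables. For the HUD
    warp, the five variables are screen position (u, v) plus the viewer's
    head position (hx, hy, hz)."""
    X = np.atleast_2d(np.asarray(X, dtype=float))
    cols = [np.ones(len(X))]
    for d in range(1, degree + 1):
        for idx in combinations_with_replacement(range(X.shape[1]), d):
            cols.append(np.prod(X[:, list(idx)], axis=1))
    return np.column_stack(cols)

def fit_warp(samples, targets, degree=2):
    """Least-squares fit of the warp coefficients from calibration data:
    five-variate inputs, observed 2D pattern positions as targets."""
    A = poly_features(samples, degree)
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(targets, dtype=float), rcond=None)
    return coeffs

def apply_warp(coeffs, X, degree=2):
    """Evaluate the fitted view-dependent warp at new (u, v, hx, hy, hz)."""
    return poly_features(X, degree) @ coeffs
```

Given enough calibration samples covering the head-position range, a warp that is genuinely polynomial in these five variables is recovered exactly, which is what makes the camera-based capture of viewpoint-varying pattern images sufficient for calibration.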


Bockholt, Ulrich; Wuest, Harald; Wientapper, Folker; Engelke, Timo; Webel, Sabine

Augmented Reality Assistenzsysteme für Wartung und Service in Industrie, Bau und Gebäudemanagement

2013

Schenk, Michael (Ed.): 16. IFF-Wissenschaftstage 2013. Tagungsband : Digital Engineering zum Planen, Testen und Betreiben technischer Systeme. Stuttgart: Fraunhofer Verlag, 2013, pp. 195-200

IFF-Wissenschaftstage <16, 2013, Magdeburg, Germany>

Decisive for the viability of Augmented Reality is that the use of AR in an industrial context pays off in particular once economies of scale set in. In the area of assembly, repair and fault-diagnosis work, this is the case when, on the one hand, one and the same AR application can be reused many times and, on the other hand, the tracking model for it only needs to be created once. With frequent use, the effort for the one-time setup of the AR application then amortizes. Furthermore, it benefits the acceptance of AR applications if no special knowledge of tracking or of operating the AR application has to be assumed on the part of the end user. The AR application should support end users in reaching their personal work goals without demanding too much additional prior knowledge of its handling. Their attention should not be focused on how they have to behave so that the tracking works. Rather, AR must enable them to reach the required information faster, more easily and more intuitively.


Bockholt, Ulrich; Wientapper, Folker; Wuest, Harald; Fellner, Dieter W.

Augmented-Reality-basierte Interaktion mit Smartphone-Systemen zur Unterstützung von Servicetechnikern

2013

at - Automatisierungstechnik, Vol.61 (2013), 11, pp. 793-799

Smartphone systems require new interaction paradigms that evaluate the integrated sensors (GPS, inertial, compass), but that in particular build on the smartphone camera, with which the environment is recorded. In this context, Augmented Reality methods offer great potential, especially for industrial applications in maintenance and repair work.


Engelke, Timo; Becker, Mario; Wuest, Harald; Keil, Jens; Kuijper, Arjan

MobileAR Browser - A Generic Architecture for Rapid AR-multi-level Development

2013

Expert Systems with Applications, Vol.40 (2013), 7, pp. 2704-2714

We present our novel generic approach for interfacing web components on mobile devices in order to rapidly develop Augmented Reality (AR) applications using HTML5, JavaScript, X3D and a vision engine. A general concept is presented, exposing a generalized abstraction of the components that are to be integrated in order to allow the creation of AR-capable interfaces on widely available mobile devices. Requirements are given, yielding a set of abstractions, components, and helpful interfaces that allow rapid prototyping, research at the application level, as well as commercial applications. A selection of various applications (including commercial ones) built with the developed framework is given, demonstrating the generality of the architecture of our MobileAR Browser. This concept makes the framework accessible to a large number of developers. The system is designed to work with different standards and allows for a separation of concerns between tracking algorithms, rendered content, interaction, and GUI design. This can potentially help groups of developers and researchers with different competences create their applications in parallel, while the declarative content remains exchangeable.


Bockholt, Ulrich; Webel, Sabine; Engelke, Timo; Olbrich, Manuel; Wuest, Harald

Skill Capturing and Augmented Reality for Training

2013

Bergamasco, Massimo (Ed.) et al.: Skill Training in Multimodal Virtual Environments. Boca Raton: Taylor & Francis, CRC Press, 2013, pp. 81-90

AR-based training applications must clearly differ from AR-based guiding applications, as they must really train the user and not only guide the user through the task. AR is a good technology for training, as instructions or location-dependent information can be directly linked and attached to physical objects. Because objects to be maintained usually contain a large number of similar components, the provision of location-dependent information is vitally important for the training. Furthermore, in AR-based training, sessions can be combined with teleconsultation technologies, and the availability of a trainer on-site is not mandatory.


Olbrich, Manuel; Wuest, Harald; Rieß, Patrick; Bockholt, Ulrich

Augmented Reality Pipe Layout Planning in the Shipbuilding Industry

2011

IEEE Computer Society Visualization and Graphics Technical Committee (VGTC): 10th IEEE International Symposium on Mixed and Augmented Reality : ISMAR 2011. The Institute of Electrical and Electronics Engineers (IEEE), 2011, pp. 269-270

IEEE International Symposium on Mixed and Augmented Reality (ISMAR) <10, 2011, Basel, Switzerland>

As large ships are never mass-produced, the design and production processes often overlap in time. Many shipbuilding companies struggle with discrepancies between the construction data and the ship as actually built. The assembly department often has to modify CAD data for a successful installation. We present an augmented reality system in which a user can visualize the construction data of pipes and modify it in the case of misalignment, collisions, or other conflicts. The modified pipe geometry can be stored and further used as input for CNC pipe-bending machines. To guarantee an exactly orthogonal passage of the pipes through aligning bolt holes, we integrated an optical measurement tool into the pipe-alignment process.


Wientapper, Folker; Wuest, Harald; Kuijper, Arjan

Composing the Feature Map Retrieval Process for Robust and Ready-to-Use Monocular Tracking

2011

Computers & Graphics, Vol.35 (2011), 4, pp. 778-788

This paper focuses on the preparative process of natural feature map retrieval for a mobile camera-based tracking system. We cover the most important aspects of a general purpose tracking system including the acquisition of the scene's geometry, tracking initialization and fast and accurate frame-by-frame tracking. To this end, several state-of-the-art techniques - each targeted at one particular subproblem - are fused together, whereby their interplay and complementary benefits form the core of the system and the thread of our discussion. The choice of the individual sub-algorithms in our system reflects the scarcity of computational resources on mobile devices. In order to allow a more accurate, more robust and faster tracking during run-time, we therefore transfer the computational load into the preparative customization step wherever possible. From the viewpoint of the user, the preparative stage is kept very simple. It only involves recording the scene from various viewpoints and defining a transformation into a target coordinate frame via manual definition of only a few 3D to 3D point correspondences. Technically, the image sequence is used to (1) capture the scene's geometry by a SLAM-Method and subsequent refinement via constrained Bundle Adjustment, (2) to train a Randomized-Trees classifier for wide-baseline tracking initialization, and (3) to analyze the view-point dependent visibility of each feature. During run-time, robustness and performance of the frame-to-frame tracking are further increased by fusing inertial measurements within a combined pose estimation.


Bockholt, Ulrich; Webel, Sabine; Staack, Ingo; Riedel, Michael; Rieß, Patrick; Olbrich, Manuel; Wuest, Harald

Kooperative Mixed Reality für Konstruktion und Fertigung im Schiffsbau

2011

Schenk, Michael (Ed.): 14. IFF-Wissenschaftstage 2011. Tagungsband : Digitales Engineering und virtuelle Techniken zum Planen, Testen und Betreiben technischer Systeme. Stuttgart: Fraunhofer Verlag, 2011, pp. 185-190

IFF-Wissenschaftstage <14, 2011, Magdeburg, Germany>

Distributed Mixed Reality systems, i.e. VR/AR installations that are located at different sites and networked via the Internet, have so far only been used in academic test scenarios; they have not made the leap into industrial use. In the "VR-Meeting" scenario of the BMBF project AVILUS, a collaborative VR system was developed on the basis of the Virtual and Augmented Reality system "instantreality" (www.instantreality.de), with which development teams at distributed sites work together in a virtual design session. Methods for secure data distribution were implemented, and methods for efficiently logging distributed immersive design sessions were investigated.


Wientapper, Folker; Wuest, Harald; Kuijper, Arjan

Reconstruction and Accurate Alignment of Feature Maps for Augmented Reality

2011

IEEE Computer Society: 3DIMPVT 2011 : International Conference on 3D Imaging, Modeling, Processing, Visualization and Transmission. Los Alamitos, Calif.: IEEE Computer Society Conference Publishing Services (CPS), 2011, pp. 140-147

International Conference on 3D Imaging, Modeling, Processing, Visualization and Transmission (3DIMPVT) <1, 2011, Hangzhou, China>

This paper focuses on the preparative process of retrieving accurate feature maps for a camera-based tracking system. With this system it is possible to create ready-to-use Augmented Reality applications with a very easy setup workflow, which in practice only involves three steps: filming the object or environment from various viewpoints, defining a transformation between the reconstructed map and the target coordinate frame based on a small number of 3D-3D correspondences and, finally, initiating a feature learning and Bundle Adjustment step. Technically, the solution comprises several sub-algorithms. Given the image sequence provided by the user, a feature map is initially reconstructed and incrementally extended using a Simultaneous-Localization-and-Mapping (SLAM) approach. For the automatic initialization of the SLAM module, a method for detecting the amount of translation is proposed. Since the initially reconstructed map is defined in an arbitrary coordinate system, we present a method for optimally aligning the feature map to the target coordinate frame of the augmentation models based on 3D-3D correspondences defined by the user. As an initial estimate we solve for a rigid transformation with scaling, known as Absolute Orientation. For refinement of the alignment we present a modification of the well-known Bundle Adjustment, where we include these 3D-3D correspondences as constraints. Compared to ordinary Bundle Adjustment we show that this leads to significantly more accurate reconstructions, since map deformations due to systematic errors such as small camera calibration errors or outliers are well compensated. This again results in a better alignment of the augmentations during run-time of the application, even in large-scale environments.
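The Absolute Orientation step named in this abstract — a rigid transformation with scaling estimated from 3D-3D correspondences — has a well-known closed-form solution. The sketch below follows Umeyama's SVD-based variant as a generic illustration; it is not necessarily the exact solver used by the authors.

```python
import numpy as np

def absolute_orientation(src, dst):
    """Closed-form similarity transform (scale s, rotation R, translation t)
    minimizing sum_i ||dst_i - (s * R @ src_i + t)||^2 over 3D-3D point
    correspondences. Generic textbook sketch (Umeyama's variant of the
    Absolute Orientation problem)."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    X, Y = src - mu_s, dst - mu_d                 # centered point sets
    cov = Y.T @ X / len(src)                      # cross-covariance matrix
    U, S, Vt = np.linalg.svd(cov)
    D = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:  # guard against reflections
        D[2, 2] = -1.0
    R = U @ D @ Vt
    var_src = X.var(axis=0).sum()                 # mean squared deviation of src
    s = (S * np.diag(D)).sum() / var_src
    t = mu_d - s * R @ mu_s
    return s, R, t
```

Because the reconstructed SLAM map lives in an arbitrarily scaled coordinate system, the scale factor s is essential here; a rigid-only (Procrustes) solution would not align the map to metric target coordinates.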


Keil, Jens; Zöllner, Michael; Becker, Mario; Wientapper, Folker; Engelke, Timo; Wuest, Harald

The House of Olbrich - An Augmented Reality Tour through Architectural History

2011

IEEE Computer Society Visualization and Graphics Technical Committee (VGTC): 10th IEEE International Symposium on Mixed and Augmented Reality - Arts, Media, and Humanities : ISMAR-AMH 2011. The Institute of Electrical and Electronics Engineers (IEEE), 2011, pp. 15-18

IEEE International Symposium on Mixed and Augmented Reality (ISMAR) <10, 2011, Basel, Switzerland>

With "House of Olbrich" we present an iPhone Augmented Reality (AR) app that visualizes the compelling history of Darmstadt's unique Jugendstil (Art Nouveau) quarter with video see-through Augmented Reality. We propose methods for enabling high-performance computer vision algorithms to deploy sophisticated AR visuals on current-generation smartphones by outsourcing resource-intensive tasks to the cloud. This allows us to apply 3D feature recognition even in complex outdoor tracking situations, where lighting conditions change and tracked objects are often occluded. By taking a snapshot of the building, the user learns about the architect, design, and history of the building. Historical media, like old photographs and blueprints, are superimposed on the building's front, depicting the eventful history of the famous House of Olbrich, which was destroyed during World War II and has been only rudimentarily restored. Augmented Reality technology allows tourists to jump back in time visually using their smartphones: mixing realities enriches the user's experience and directs their attention to the impressive historical architecture of the Art Nouveau. In addition, we ease interaction by superimposing content on snapshots, so tourists may view and read information in a relaxed position without needing to hold up their mobiles all the time.


Kahn, Svenja; Wuest, Harald; Stricker, Didier; Fellner, Dieter W.

3D Discrepancy Check via Augmented Reality

2010

Höllerer, Tobias (Ed.) et al.: 9th IEEE International Symposium on Mixed and Augmented Reality 2010 : ISMAR. Science & Technology Proceedings. Los Alamitos, Calif.: IEEE Computer Society, 2010, pp. 241-242

IEEE International Symposium on Mixed and Augmented Reality (ISMAR) <9, 2010, Seoul, South Korea>

For many tasks like markerless model-based camera tracking it is essential that the 3D model of a scene accurately represents the real geometry of the scene. It is therefore very important to detect deviations between a 3D model and a scene. We present an innovative approach which is based on the insight that camera tracking can not only be used for Augmented Reality visualization but also to solve the correspondence problem between 3D measurements of a real scene and their corresponding positions in the 3D model. We combine a time-of-flight camera (which acquires depth images in real time) with a custom 2D camera (used for the camera tracking) and developed an analysis-by-synthesis approach to detect deviations between a scene and a 3D model of the scene.
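The analysis-by-synthesis idea above reduces, per frame, to comparing the measured depth image with a depth image of the 3D model rendered from the tracked camera pose. A minimal sketch of that per-pixel comparison, with an illustrative threshold of our own choosing (the paper does not state one here):

```python
import numpy as np

def discrepancy_mask(measured_depth, rendered_depth, threshold=0.05):
    """Per-pixel discrepancy check between a measured depth image (e.g.
    from a time-of-flight camera) and a depth image of the 3D model
    rendered from the tracked camera pose. Pixels whose depths differ by
    more than `threshold` (metres) are flagged as model/scene deviations;
    pixels without valid depth in either image (value <= 0) are ignored.
    The threshold value is an assumption for illustration."""
    m = np.asarray(measured_depth, dtype=float)
    r = np.asarray(rendered_depth, dtype=float)
    valid = (m > 0) & (r > 0)
    return valid & (np.abs(m - r) > threshold)
```

Camera tracking solves the correspondence problem implicitly: because both images share the same viewpoint, corresponding measurements simply share pixel coordinates.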


Kahn, Svenja; Wuest, Harald; Fellner, Dieter W.

Time-of-Flight Based Scene Reconstruction with a Mesh Processing Tool for Model Based Camera Tracking

2010

Institute for Systems and Technologies of Information, Control and Communication (INSTICC): VISIGRAPP 2010. Proceedings : International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications. INSTICC Press, 2010, pp. 302-309

International Conference on Computer Vision Theory and Applications (VISAPP) <5, 2010, Angers, France>

The most challenging algorithmic task for markerless Augmented Reality applications is the robust estimation of the camera pose. With a given 3D model of a scene, the camera pose can be estimated via model-based camera tracking without the need to manipulate the scene with fiducial markers. Up to now, the bottleneck of model-based camera tracking has been the availability of such a 3D model. Recently, time-of-flight cameras were developed which acquire depth images in real time. With a sensor fusion approach combining the color data of a 2D color camera and the 3D measurements of a time-of-flight camera, we acquire a textured 3D model of a scene. We propose a semi-manual reconstruction step in which the alignment of several submeshes with a mesh processing tool is supervised by the user to ensure a correct alignment. The evaluation of our approach shows its applicability for reconstructing a 3D model which is suitable for model-based camera tracking, even for objects which are difficult to measure reliably with a time-of-flight camera due to their demanding surface characteristics.


Engelke, Timo; Webel, Sabine; Bockholt, Ulrich; Wuest, Harald; Gavish, Nirit; Tecchia, Franco; Preusche, Carsten

Towards Automatic Generation of Multimodal AR-Training Applications and Workflow Descriptions

2010

Avizzano, Carlo Alberto (Ed.) et al.: IEEE RO-MAN 2010 : 19th IEEE International Symposium on Robot and Human Interactive Communication. Proceedings [online]. New York: The Institute of Electrical and Electronics Engineers (IEEE), 2010, pp. 434-439

IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) <19, 2010, Viareggio, Italy>

Augmented Reality (AR) is a technology which has become very popular in recent years. In this context, the idea of using AR for training applications has also become very important. AR offers large potential for training, but only if the training is well focused on the skills that have to be trained and if the training protocol is well designed. On the other hand, the generation of the training content to be transferred via AR is a comprehensive problem that is addressed in this paper. Thus, this paper describes the whole chain of implementations and the general aspects involved in creating AR training applications, including examples of the multimodal devices used. This chain starts with capturing expert actions to be held in a "digital representation of skill". The digital representation of skill is transferred to the training protocol that specifies the storyboard of the AR training session. The paper includes two different implementations of AR training systems and describes the general idea of informational abstraction from low-level data up to interaction, and from design to application.


Riedel, Michael; Staack, Ingo; Rieß, Patrick; Wuest, Harald; Bockholt, Ulrich

Virtual und Augmented Reality Technologien zur Unterstützung von Konstruktion und Fertigung im U-Bootsbau

2010

Schenk, Michael (Ed.): 13. IFF-Wissenschaftstage 2010. Tagungsband : Digitales Engineering und virtuelle Techniken zum Planen, Testen und Betreiben technischer Systeme. Stuttgart: Fraunhofer Verlag, 2010, pp. 140-145

IFF-Wissenschaftstage <13, 2010, Magdeburg, Germany>

In order to secure global competitiveness in the long term, meet increased technical requirements, and guarantee shortened development cycles at simultaneously increased complexity, the efficient use of CAx and virtual technologies in the value-creation process is an absolute prerequisite for HDW. Due to small production runs and high individuality in submarine construction, development and production are closely interlocked. This interlocking requires a permanent comparison between digitally designed and actually manufactured components. To support this requirement, technologies are being developed within the BMBF-funded project AVILUS that, on the one hand, support VR-based cooperation across distributed sites ("VRMeeting") and, on the other hand, use Augmented Reality technologies to support a comparison between the digital world (development) and the real world (production).


Becker, Mario; Wuest, Harald; Wientapper, Folker; Engelke, Timo

A Prototyping Architecture for Augmented Reality

2009

Latoschik, Marc Erich (Ed.) et al.: 2nd Workshop on Software Engineering and Architectures for Realtime Interactive Systems (SEARIS@VR2009) : IEEE Virtual Reality 2009 Workshop. Aachen: Shaker, 2009, pp. 51-54

Software Engineering and Architectures for Realtime Interactive Systems (SEARIS) <2, 2009, Lafayette, LA, USA>

In this paper we introduce an architecture for the rapid development and assessment of advanced 3D visual tracking algorithms. We claim that no universal tracking approach exists that fulfills the requirements of all possible application scenarios at the same time. On the contrary, very specific and working solutions can be developed for given situations and uses. Therefore, software for visual tracking must be designed as a highly flexible system that can be quickly re-configured in order to enable the development of optimized solutions in terms of robustness, accuracy, frame rate, and delay. To this purpose we designed an architecture that offers many functionalities which can be combined to build a new processing chain. The overall system offers numerous advantages, such as interactive programming, run-time access to data and parameters, and easy interfacing with other libraries or applications.


Zöllner, Michael; Keil, Jens; Wuest, Harald; Pletinckx, Daniël

An Augmented Reality Presentation System for Remote Cultural Heritage Sites

2009

Debattista, Kurt (Ed.) et al.: VAST 2009. VAST-STAR, Short and Project Papers Proceedings. Msida: University of Malta, 2009, pp. 112-116

International Symposium on Virtual Reality, Archaeology and Cultural Heritage (VAST) <10, 2009, St. Julians, Malta>

Museums often lack the possibility to present archaeological or cultural heritage sites in a realistic and interesting way. We therefore propose a new way of showing augmented reality applications of cultural heritage sites at remote places such as museums. In the exhibition space, large wall-filling photographs of the real site are superimposed with interactive contextual annotations such as 3D reconstructions, images, and movies. We use two different hardware setups for visualization: standard UMPCs and a custom-made revolving display. The setup has been installed and tested at SIGGRAPH 2008, the Allard Pierson Museum in Amsterdam, and CeBIT 2009. Museum visitors could experience the Forum Romanum and Saticum in an informative and intuitive way by pointing the video see-through devices at different areas of the photographs. The result is a more realistic and entertaining way of presenting cultural heritage sites in museums. Furthermore, our solution is less expensive than comparable installations in terms of content and hardware.


Zöllner, Michael; Keil, Jens; Drevensek, Timm; Wuest, Harald

Cultural Heritage Layers: Integrating Historic Media in Augmented Reality

2009

Sablatnig, Robert (Ed.) et al.: 15th International Conference on Virtual Systems and Multimedia. Proceedings : VSMM 2009. Los Alamitos, Calif.: IEEE Computer Society, 2009, pp. 193-196

International Conference on Virtual Systems and MultiMedia (VSMM) <15, 2009, Vienna, Austria>

In this paper we present Cultural Heritage Layers, an approach that enables the visualization of historic media like drawings, paintings and photographs of buildings and historic scenes seamlessly superimposed on reality via video see-through using X3D. This enables simple, inexpensive and sustainable Augmented Reality applications in the cultural heritage and architectural area based on industry standards. The main idea is to use existing historic media from archives and superimpose them seamlessly on reality at the right spot. These locative layers context-sensitively tell the location's history and create the impression of a virtual time journey. The registration of the virtual objects in the video images is provided by a robust 6DOF tracking framework based on two technologies that work in tandem: an initialization step based on Randomized Trees and a frame-to-frame tracking phase based on KLT. The entire application runs in real time on current Ultra Mobile PCs and MIDs.


Jung, Yvonne; Keil, Jens; Wuest, Harald; Engelke, Timo; Rieß, Patrick; Behr, Johannes

Knowledge at Your Fingertips: Multi-touch Interaction for GIS and Architectural Design Review Applications

2009

Institute for Systems and Technologies of Information, Control and Communication (INSTICC): VISIGRAPP 2009. Proceedings : International Joint Conference on Computer Vision and Computer Graphics Theory and Applications [CD-ROM]. INSTICC Press, 2009, GRAPP, pp. 387-392

International Conference on Computer Graphics Theory and Applications (GRAPP) <4, 2009, Lisboa, Portugal>

This paper introduces novel techniques for interacting with and controlling 3D content using multi-touch interaction principles for navigation and virtual camera control. Based on applications from GIS and the architectural design review process, the implementation and usage of these interaction techniques are illustrated. A comprehensive hardware and software setup is used, which not only includes tracking but also an X3D-based layer to simplify application development. It therefore allows designers and other non-programmers to develop multi-touch applications very efficiently, while letting them focus on user interaction and content.


Wientapper, Folker; Ahrens, Katrin; Wuest, Harald; Bockholt, Ulrich

Linear-Projection-Based Classification of Human Postures in Time-of-Flight Data

2009

IEEE Systems, Man and Cybernetics Society: IEEE International Conference on Systems, Man and Cybernetics : SMC 2009. New York: IEEE Press, 2009, pp. 565-570

IEEE International Conference on Systems, Man and Cybernetics (SMC) <2009, San Antonio, TX, USA>

This paper presents a simple yet effective approach for the classification of human postures by using a time-of-flight camera. We investigate and adopt linear projection techniques such as Locality Preserving Projections (LPP), Linear Discriminant Analysis (LDA) and Principal Component Analysis (PCA), which are more widespread in face recognition and other pattern recognition tasks. We analyze the relations between LPP and LDA and show experimentally that using LPP in a supervised manner yields very similar results to LDA, implying that LPP may be regarded as a generalization of LDA. Features for offline training and online classification are created by applying common image processing techniques such as background subtraction and blob detection to the time-of-flight data.
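All three projection techniques compared in the paper share the same core operation: learning a linear map W and projecting feature vectors onto it. As a hedged illustration (not the paper's implementation; the function name and data layout are our own), the PCA variant can be sketched in a few lines of numpy:

```python
import numpy as np

def pca_projection(X, k):
    """Project the rows of X onto the top-k principal components.

    X: (n_samples, n_features) data matrix; k: target dimensionality.
    Returns the projected data and the projection matrix W.
    """
    Xc = X - X.mean(axis=0)              # center the data
    cov = Xc.T @ Xc / (len(X) - 1)       # sample covariance matrix
    vals, vecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    W = vecs[:, ::-1][:, :k]             # top-k eigenvectors (descending)
    return Xc @ W, W
```

LDA and supervised LPP differ only in how W is derived (from class-scatter or graph-Laplacian matrices), not in how it is applied to the features.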


Wuest, Harald; Wientapper, Folker; Stricker, Didier

Acquisition of High Quality Planar Patch Features

2008

Bebis, George (Ed.) et al.: Advances in Visual Computing. Proceedings Part I : ISVC 2008. Berlin, Heidelberg, New York: Springer, 2008. (Lecture Notes in Computer Science (LNCS) 5358), pp. 530-539

International Symposium on Visual Computing (ISVC) <4, 2008, Las Vegas, NV, USA>

Camera-based tracking systems which reconstruct a feature map with structure-from-motion or SLAM techniques depend highly on the ability to track a single feature at different scales, under different lighting conditions and over a wide range of viewing angles. The acquisition of high-quality features is therefore indispensable for continuously tracking a feature over the maximum possible range of valid appearances. We present a tracking system where not only the position of a feature but also its surface normal is reconstructed and used for precise prediction and for the recovery of lost features. The appearance of a reference patch is also estimated sequentially and refined during tracking, which leads to a more stable feature tracking step. Such reconstructed reference templates can be used for tracking a camera pose over a great variety of viewing positions. This feature reconstruction process is combined with a feature management system, where a statistical analysis of the ability to track a feature is performed, and only the most stable features for a given camera viewing position are used in the 2D feature tracking step. This approach results in a map of high-quality features, where real-time capability is preserved by tracking only the most necessary 2D feature points.


Jung, Yvonne; Keil, Jens; Behr, Johannes; Webel, Sabine; Zöllner, Michael; Engelke, Timo; Wuest, Harald; Becker, Mario

Adapting X3D for Multi-touch Environments

2008

Spencer, Stephen N. (Ed.): Proceedings WEB3D 2008 : 13th International Symposium on 3D Web Technology. New York: ACM Press, 2008, pp. 27-30

International Conference on 3D Web Technology (WEB3D) <13, 2008, Los Angeles, CA, USA>

Multi-touch interaction on tabletop displays is a very active field of today's HCI research. However, most publications still focus on tracking techniques or develop a gesture configuration for a specific application setup. Very few explore generic high-level interfaces for multi-touch applications. In this paper we present a comprehensive hardware and software setup, which includes an X3D-based layer to simplify the application development process. We present a robust FTIR-based optical tracking system, examine to what extent the current sensor and navigation abstractions in the X3D standard are useful, and finally present extensions to the standard which enable designers and other non-programmers to develop multi-touch applications very efficiently.


Wuest, Harald; Fellner, Dieter W. (Betreuer); Stricker, Didier (Betreuer)

Efficient Line and Patch Feature Characterization and Management for Real-time Camera Tracking

2008

Darmstadt, TU, Diss., 2008

One of the key problems of augmented reality is tracking the camera position and viewing direction in real time. Current vision-based systems mostly rely on the detection and tracking of fiducial markers. Some markerless approaches exist, which are based on 3D line models or calibrated reference images. These methods require a large amount of manual preprocessing, which is not acceptable for the efficient development and design of industrial AR applications. The problem of this preprocessing overhead is addressed by the development of vision-based tracking algorithms which require minimal effort in the preparation of reference data. A novel method for the automatic view-dependent generation of line models in real time is presented. The tracking system only needs a polygonal model of a reference object, which is often available from the industrial construction process. Analysis-by-synthesis techniques are used with the support of graphics hardware to create a connection between the virtual and the real model. Point-based methods which rely on optical-flow-based template tracking are developed for camera pose estimation in partially known scenarios. With the support of robust reconstruction algorithms, a real-time tracking system for augmented reality applications is developed which is able to run with only very limited prior knowledge about the scene. Robustness and real-time capability are improved with a statistical approach to feature management based on machine learning techniques.


Zöllner, Michael; Pagani, Alain; Pastarmov, Yulian; Wuest, Harald; Stricker, Didier

Reality Filtering: A Visual Time Machine in Augmented Reality

2008

Ashley, Michael (Ed.) et al.: VAST 2008. Proceedings. Aire-la-Ville: Eurographics Association, 2008, pp. 71-77

International Symposium on Virtual Reality, Archaeology and Cultural Heritage (VAST) <9, 2008, Braga, Portugal>

We present Reality Filtering, an application that makes it possible to visualize original content like drawings or paintings of buildings and frescos seamlessly superimposed on reality by using filtered augmented reality. This enables simple and inexpensive applications in the cultural heritage and architecture area. The main idea is that the video stream showing the reality is filtered on the fly to acquire the same presentation style as the virtual objects. This allows for a better integration of original historic content and creates the impression of a virtual time journey. The registration of the virtual objects in the video images is provided by a robust 6DOF tracking framework based on two technologies that work in tandem: an initialization step based on Randomized Trees and a frame-to-frame tracking phase based on KLT. For the initialization, we present the novel concept of temporally distributed computational load (TDCL), which is able to automatically detect and register multiple objects while maintaining a constant video frame rate of 20 frames/sec. For mid- to long-range augmentation a pure 2-dimensional tracking with 3DOF is applicable and leads to a significant performance gain. The entire application runs in real time on Ultra Mobile PCs.


Meffert, Carsten; Wuest, Harald (Betreuer)

Rekursive 3D Rekonstruktion von aktiven Konturen

2008

Koblenz/Landau, Univ., Diplomarbeit, 2008

This thesis addresses the reconstruction of monocular image sequences by means of edges. The error tolerance is to be improved considerably by using snakes to track the edges. It was examined whether and to what extent this approach leads to an improvement or optimization. In the resulting plugin for the VisionLib of Fraunhofer IGD, the snakes are initialized directly on the edge features. By means of gradient values, tension and curvature, the snakes stay attached to the features throughout the sequence. Outliers are detected and guided back to the corresponding feature without a new snake having to be created. The results show that the energy term of the snakes alone already leads to a significant improvement in robustness, since errors once made can be corrected again in further iterations. The precision, meanwhile, is on par with other methods, with a deviation of about 1-2 degrees. Snakes thus offer an efficient way to optimize the tracking of edges; their potential in this area should be exploited further.


Wuest, Harald; Wientapper, Folker; Stricker, Didier

Adaptable Model-Based Tracking Using Analysis-by-Synthesis Techniques

2007

Kropatsch, Walter G. (Ed.) et al.: Computer Analysis of Images and Patterns. Proceedings : CAIP 2007. Berlin, Heidelberg, New York: Springer, 2007. (Lecture Notes in Computer Science (LNCS) 4673), pp. 20-27

International Conference on Computer Analysis of Images and Patterns (CAIP) <12, 2007, Vienna, Austria>

In this paper we present a novel analysis-by-synthesis approach for real-time camera tracking in industrial scenarios. The camera pose estimation is based on the tracking of line features which are generated dynamically in every frame by rendering a polygonal model and extracting contours from the rendered scene. Different methods of line model generation are investigated. Depending on the scenario and the given 3D model, either the image gradient of the frame buffer or discontinuities of the z-buffer and the normal map are used for the generation of a 2D edge map. The 3D control points on a contour are calculated by using the depth value stored in the z-buffer. By aligning the generated features with edges in the current image, the extrinsic parameters of the camera are estimated. The camera pose used for rendering is predicted by a line-based frame-to-frame tracking which takes advantage of the generated edge features. The method is validated and evaluated with the help of ground-truth data as well as real image sequences.
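The z-buffer discontinuity test mentioned in the abstract reduces, in essence, to thresholding the gradient of the rendered depth image. A minimal numpy sketch, as our own simplification (the paper additionally considers the normal map and the frame-buffer gradient):

```python
import numpy as np

def depth_edge_map(zbuf, threshold=0.05):
    """Mark pixels whose depth differs strongly from their neighbours.

    zbuf: (H, W) float depth image (e.g. read back from the z-buffer).
    Returns a boolean (H, W) map of depth discontinuities, i.e. the
    candidate contour pixels for the 2D edge map.
    """
    gy, gx = np.gradient(zbuf)          # per-axis depth differences
    return np.hypot(gx, gy) > threshold
```

In the full pipeline such a map would be thinned and sampled into control points, each of which carries the 3D position recovered from its depth value.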


Becker, Mario; Bleser, Gabriele; Pagani, Alain; Stricker, Didier; Wuest, Harald

An Architecture for Prototyping and Application Development of Visual Tracking Systems

2007

Institute of Electrical and Electronics Engineers (IEEE): Proceedings of 3DTV-CON 2007 [CD-ROM] : Capture, Transmission and Display of 3D Video, 4 p.

International Conference on 3DTV (3DTV-CON) <1, 2007, Kos, Greece>

In this paper we introduce a novel architecture for the rapid development and assessment of advanced 3D visual tracking systems. Indeed, we notice that no universal tracking approach exists so far that fulfils the requirements of all possible application scenarios at the same time. On the contrary, very specific and well-performing solutions can be developed for given situations and uses. Therefore, software for visual tracking must be designed as a highly flexible system that can be quickly re-configured in order to enable the development of solutions optimised in terms of accuracy, robustness, frame rate and delay. To this purpose we designed an architecture that offers many functionalities which can be combined to build a new processing chain. The overall system offers numerous advantages, such as interactive programming and real-time access to the data and parameters at runtime.


Koch, Reinhard; Evers-Senne, Jan-Friso; Schiller, Ingo; Wuest, Harald; Stricker, Didier

Architecture and Tracking Algorithms for a Distributed Mobile Industrial AR System

2007

International Association for Pattern Recognition (IAPR): ICVS 2007 - Vision Systems in the Real World [Online] : Adaptation, Learning, Evaluation [online]. [cited 14 June 2007] Available from: http://www.icvs2007.org/programme.php, 2007, 10 p.

International Conference on Computer Vision Systems (ICVS) <5, 2007, Bielefeld, Germany>

In Augmented Reality applications, a 3D object is registered with a camera and visual augmentations of the object are rendered into the user's field of view with a head-mounted display. For correct rendering, the 3D pose of the user's view w.r.t. the 3D object must be registered and tracked in real time, which is a computationally intensive task. This contribution describes a distributed system that allows tracking the 3D camera pose and rendering images on a lightweight mobile front-end user interface system. The front-end system is connected by WLAN to a back-end server that takes over the computational burden of real-time tracking. We describe the system architecture and the tracking algorithms of our system.


Wuest, Harald; Pagani, Alain; Stricker, Didier

Feature Management for Efficient Camera Tracking

2007

Yagi, Yasushi (Ed.) et al.: Computer Vision - ACCV 2007. Berlin; Heidelberg; New York: Springer, 2007. (Lecture Notes in Computer Science (LNCS) 4843), LNCS 4843, pp. 769-778

Asian Conference on Computer Vision (ACCV) <8, 2007, Tokyo, Japan>

In dynamic scenes with occluding objects, many features need to be tracked for robust real-time camera pose estimation. An open problem is that tracking too many features has a negative effect on the real-time capability of a tracking approach. This paper proposes a method for feature management which performs a statistical analysis of the ability to track a feature and then uses only those features which are very likely to be tracked from the current camera position. Thereby a large set of features at different scales is created, where every feature holds a probability distribution of camera positions from which the feature can be tracked successfully. As only the feature points with the highest probability are used in the tracking step, the method can handle a large number of features at different scales without losing real-time performance. Both the statistical analysis and the reconstruction of the features' 3D coordinates are performed online during tracking, and no preprocessing step is needed.


Webel, Sabine; Becker, Mario; Stricker, Didier; Wuest, Harald

Identifying Differences Between CAD and Physical Mock-ups Using AR

2007

IEEE Computer Society: ISMAR 2007 : Proceedings of the Sixth IEEE and ACM International Symposium on Mixed and Augmented Reality. Los Alamitos, Calif.: IEEE Computer Society, 2007, pp. 281-282

IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR) <6, 2007, Nara, Japan>

Over the last ten years, product development in the automotive industry has been changing radically. Most physical mock-ups have vanished and are now replaced by digital ones. But they are still needed for final evaluations or for issues which cannot be adequately simulated. During their production, deviations from the CAD model may occur. Since the digital and the real mock-up must match for further product development, transferring the differences between the physical and the digital mock-up to the CAD format is a crucial issue. In this paper an Augmented Reality (AR) based tool chain is presented which allows matching the CAD data with real mock-ups and documents the differences between them. Essential functions like measurement and online construction are provided, allowing end-users to create information in AR space and feed it back into the CAD model.


Wuest, Harald; Stricker, Didier

Tracking of Industrial Objects by Using CAD Models

2007

Journal of Virtual Reality and Broadcasting, Vol.4 (2007), 1, 9 p.

In this paper we present a model-based approach for real-time camera pose estimation in industrial scenarios. The line model which is used for tracking is generated by rendering a polygonal model and extracting contours out of the rendered scene. By un-projecting a point on the contour with the depth value stored in the z-buffer, the 3D coordinates of the contour can be calculated. For establishing 2D/3D correspondences the 3D control points on the contour are projected into the image and a perpendicular search for gradient maxima for every point on the contour is performed. Multiple hypotheses of 2D image points corresponding to a 3D control point make the pose estimation robust against ambiguous edges in the image.
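The un-projection step described above is standard pinhole geometry: the pixel is multiplied by the inverse intrinsic matrix and scaled by its depth. A minimal sketch for illustration only (the conversion of the raw nonlinear z-buffer value into metric depth is omitted here):

```python
import numpy as np

def unproject(u, v, depth, K):
    """Back-project pixel (u, v) with metric depth into camera coordinates.

    K is the 3x3 intrinsic matrix; depth is the metric z value recovered
    from the z-buffer. Returns the 3D point in the camera frame.
    """
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # viewing ray at z == 1
    return depth * ray                               # scale ray to the surface
```

Applied to every sampled contour pixel, this yields the 3D control points that are then matched against gradient maxima in the live image.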


Bleser, Gabriele; Wuest, Harald; Stricker, Didier

Online Camera Pose Estimation in Partially Known and Dynamic Scenes

2006

Institute of Electrical and Electronics Engineers (IEEE): ISMAR 2006 : Proceedings of the Fifth IEEE and ACM International Symposium on Mixed and Augmented Reality. Los Alamitos, Calif.: IEEE Computer Society, 2006, pp. 56-65

IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR) <5, 2006, Santa Barbara, CA, USA>

One of the key requirements of augmented reality systems is robust real-time camera pose estimation. In this paper we present a robust approach which depends neither on offline pre-processing steps nor on prior knowledge of the entire target scene. The connection between the real and the virtual world is made by a given CAD model of one object in the scene. However, the model is only needed for initialization. A line model is created from the object rendered from a given camera pose and registered onto the image gradient to find the initial pose. In the tracking phase, the camera is no longer restricted to the modeled part of the scene. The scene structure is recovered automatically during tracking. Point features are detected in the images and tracked from frame to frame using a brightness-invariant template matching algorithm. Several template patches are extracted from different levels of an image pyramid and are used to make the 2D feature tracking capable of handling large changes in scale. Occlusion is already detected at the 2D feature tracking level. The features' 3D locations are roughly initialized by linear triangulation and then refined recursively over time using techniques of the Extended Kalman Filter framework. A quality manager handles the influence of a feature on the estimation of the camera pose. As structure and pose recovery are always performed under uncertainty, statistical methods for estimating and propagating uncertainty have been consistently incorporated into both processes. Finally, validation results on synthetic as well as on real video sequences are presented.
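The paper's abstract does not spell out its brightness-invariant matching score, but zero-mean normalized cross-correlation is the classic choice with exactly this invariance; as a hedged sketch (our own illustration, not the authors' implementation):

```python
import numpy as np

def ncc(patch, template):
    """Zero-mean normalized cross-correlation of two equal-size patches.

    The score is invariant to affine brightness changes (gain and offset).
    Returns a value in [-1, 1]; 1 means a perfect match up to brightness.
    """
    a = patch - patch.mean()        # remove offset
    b = template - template.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)  # normalize out gain
    return float((a * b).sum() / denom) if denom else 0.0
```

Scoring a template against candidate patches with such a measure lets the 2D tracker survive the illumination changes mentioned in the abstract.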


Moll, Karl Markus; Wuest, Harald (Betreuer)

Rekonstruktion von 3D-Linienmodellen aus Bildsequenzen

2006

Darmstadt, TU, Diplomarbeit, 2006

This diploma thesis addresses the problem of reconstructing 3D line models from video data. 3D reconstruction is an interesting research field that has received considerable attention since the 1970s. The goal of this thesis was to build a system for the fully automatic generation of line models, which can later be used for markerless tracking. Automatic generation has the advantage that the same edge-detection methods are used as later during tracking. Furthermore, a method for incrementally updating the existing reconstruction from further video data was developed and evaluated. The results showed that the problem is solvable in general, but also that there is still much room for improvement regarding speed and stability. Several approaches to 3D reconstruction exist; the most important ones are factorization approaches -- usually followed by bundle adjustment -- and methods based on the fundamental matrix (two-view, point features) or the trifocal tensor (three-view, line features). In this thesis, the trifocal-tensor approach is chosen for several reasons. The algorithm was implemented with the help of the VisionLib library, which is developed in department A4 of Fraunhofer IGD. The implementation was evaluated on synthetic and real data. The results were promising, and good reconstructions were achieved from real video data. However, it also became apparent that problems remain regarding stability and speed.


Wuest, Harald; Stricker, Didier

Robustes Kamera-Tracking für industrielle Anwendungen im Bereich der Erweiterten Realität

2006

Hochschule für Technik Stuttgart: 1. Internationales Symposium "Geometrisches Modellieren, Visualisieren und Bildverarbeitung". Proceedings. Stuttgart, 2006, pp. 105-112

Internationales Symposium "Geometrisches Modellieren, Visualisieren und Bildverarbeitung" <1, 2005, Stuttgart, Germany>

This paper describes several image-based tracking methods that can be used for industrial Augmented Reality (AR) applications. Each of the presented algorithms has strengths and weaknesses, and therefore none of these methods is suitable for all possible scenarios. Rather, the different methods complement each other, so that the strengths of each algorithm can be exploited, making robust and real-time-capable tracking possible. The following stages of tracking are described: initialization to determine the initial camera position, frame-to-frame tracking, and re-initialization after tracking has failed.


Wuest, Harald; Stricker, Didier

Tracking of Industrial Objects by Using CAD Models

2006

Müller, Stefan (Ed.) et al.: Virtuelle und Erweiterte Realität : 3. Workshop der GI-Fachgruppe VR/AR. Aachen: Shaker, 2006. (Berichte aus der Informatik), pp. 155-164

Workshop der GI-Fachgruppe VR/AR: Virtuelle und Erweiterte Realität <3, 2006, Aachen, Germany>

In this paper we present a model-based approach for real-time camera pose estimation in industrial scenarios. The line model which is used for tracking is generated by rendering a polygonal model and extracting contours out of the rendered scene. By un-projecting a point on the contour with the depth value stored in the z-buffer, the 3D coordinates of the contour can be calculated. For establishing 2D/3D correspondences the 3D control points on the contour are projected into the image and a perpendicular search for gradient maxima for every point on the contour is performed. Multiple hypotheses of 2D image points corresponding to a 3D control point make the pose estimation robust against ambiguous edges in the image.


Wuest, Harald; Vial, Florent; Stricker, Didier

Adaptive Line Tracking with Multiple Hypotheses for Augmented Reality

2005

Institute of Electrical and Electronics Engineers (IEEE): ISMAR 2005 : Proceedings of the Fourth IEEE and ACM International Symposium on Mixed and Augmented Reality. Los Alamitos, Calif.: IEEE Computer Society, 2005, pp. 62-69

IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR) <4, 2005, Vienna, Austria>

We present a real-time model-based line tracking approach with adaptive learning of image edge features that can handle partial occlusion and illumination changes. A CAD (VRML) model of the object to track is needed. First, the visible edges of the model with respect to the camera pose estimate are determined by a visibility test performed on standard graphics hardware. For every sample point of every projected visible 3D model line, a search for gradient maxima in the image is then carried out in a direction perpendicular to that line. Multiple hypotheses of these maxima are considered as putative matches. The camera pose is updated by minimizing the distances between the projection of all sample points of the visible 3D model lines and the most likely matches found in the image. The state of every edge's visual properties is updated after each successful camera pose estimation. We evaluated the algorithm and showed the improvements compared to other tracking approaches.
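The perpendicular search with multiple hypotheses can be illustrated by a 1D sketch: sample intensities along the line normal, then keep the strongest local gradient-magnitude maxima as candidate matches (the function name and thresholds below are our own assumptions, not the paper's):

```python
import numpy as np

def gradient_hypotheses(profile, max_hyp=3, min_mag=5.0):
    """Find candidate edge positions along a 1D intensity profile.

    profile: intensities sampled perpendicular to a projected model line.
    Returns the indices of the strongest local gradient-magnitude maxima,
    up to max_hyp hypotheses, as putative matches for the sample point.
    """
    g = np.abs(np.gradient(profile.astype(float)))
    # local maxima: at least as large as the left neighbour, larger than
    # the right neighbour, and above the magnitude threshold
    peaks = [i for i in range(1, len(g) - 1)
             if g[i] >= g[i - 1] and g[i] > g[i + 1] and g[i] >= min_mag]
    peaks.sort(key=lambda i: -g[i])      # strongest hypotheses first
    return peaks[:max_hyp]
```

Keeping several hypotheses per sample point instead of only the strongest one is what makes the pose estimation robust against ambiguous edges.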


Becker, Mario; Bleser, Gabriele; Pagani, Alain; Pastarmov, Yulian; Stricker, Didier; Vial, Florent; Weidenhausen, Jens; Wohlleber, Cedric; Wuest, Harald

Visual Tracking for Augmented Reality: No Universal Solution but Many Powerful Building Blocks

2005

Kuhlen, Torsten (Ed.) et al.: Virtuelle und Erweiterte Realität : 2. Workshop der GI-Fachgruppe VR/AR. Aachen: Shaker, 2005. (Berichte aus der Informatik), pp. 107-118

Workshop der GI-Fachgruppe VR/AR: Virtuelle und Erweiterte Realität <2, 2005, Aachen>

In this paper, we present an overview of several visual tracking methods for industrial augmented reality applications. We show that no universal algorithm can deal with the large number of possible scenes, and that the different methods have to be seen as complementary approaches that all have their strengths and weaknesses. The main difficulty then consists in combining existing building blocks in the right manner so that the overall system enables stable tracking. This paper addresses each phase of the tracking, i.e. initialization, tracking and re-initialization, and proposes a first choice of appropriate algorithms. Finally, a global system is designed, tested and evaluated with the help of video sequences of different real environments.