= Research Activities =
  
**[[public:reva:vortexresearch_en#Image Processing]]**
  
**[[public:reva:vortexresearch_en#Video Processing]]**
  
**[[public:reva:vortexresearch_en#3D Modelling]]**
  
**[[public:reva:vortexresearch_en#Augmented Reality]]**
  
**[[public:reva:vortexresearch_en#Interactions]]**
  
  
  
=== Urban scene understanding ===
{{public:reva:imajbox.jpg?100|}}
  
{{public:reva:sylvie-1.jpg?400|}} {{public:reva:sylvie-2.jpg?400|}} {{public:reva:sylvie-3.jpg?400|}}
  
This work has been done in collaboration with {{:vortex:logoimajing.jpg?80|http://www.imajing.fr/}}
We introduced the use of point-based 3D models as a shape prior for real-time 3D tracking with a monocular camera. The joint use of point-based 3D models and the GPU allows us to adapt and simplify an existing tracking algorithm originally designed for triangular meshes. Point-based models are of particular interest in this context because they are the direct output of most laser scanners. We show that state-of-the-art techniques developed for point-based rendering can be used to compute, in real time, the intermediate values required for visual tracking. In particular, apparent-motion predictors at each pixel are computed in parallel, and novel views of the tracked object are generated online to help wide-baseline matching. Both computations derive from the same general surface splatting technique, which we implement, along with other low-level vision tasks, on the GPU, leading to a real-time tracking algorithm.
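
For illustration, here is a minimal NumPy sketch of the per-point apparent-motion predictors: the classical interaction matrix relating a 6-DoF camera velocity to the velocity of the corresponding image points. The real system evaluates these per pixel on the GPU through surface splatting; the function name and the vectorised form below are our own.

<code python>
import numpy as np

def motion_predictors(P):
    """Apparent-motion predictors (interaction matrices) for 3D points
    given in camera coordinates as an (N, 3) array. Each 2x6 matrix maps
    a camera twist (vx, vy, vz, wx, wy, wz) to the velocity of the
    corresponding normalized image point; all points are handled at
    once, mirroring the per-pixel parallel computation described above."""
    X, Y, Z = P[:, 0], P[:, 1], P[:, 2]
    x, y = X / Z, Y / Z                      # normalized image coordinates
    zeros = np.zeros_like(Z)
    L = np.empty((len(P), 2, 6))
    L[:, 0] = np.stack([-1 / Z, zeros, x / Z, x * y, -(1 + x**2), y], axis=1)
    L[:, 1] = np.stack([zeros, -1 / Z, y / Z, 1 + y**2, -x * y, -x], axis=1)
    return L

# Example: predicted image motion of two points for a small camera twist.
points = np.array([[0.1, 0.2, 2.0], [-0.3, 0.0, 4.0]])
twist = np.array([0.0, 0.0, 0.1, 0.0, 0.01, 0.0])   # forward motion + pan
velocities = motion_predictors(points) @ twist       # (N, 2) image velocities
</code>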
  
{{htvid>http://ubee.enseeiht.fr/dokuwiki/lib/exe/fetch.php?media=public:reva:leopard_occlusions.ogv|http://ubee.enseeiht.fr/dokuwiki/lib/exe/fetch.php?media=public:reva:leopard_occlusions.mp4|320x240}}
  
{{htvid>http://ubee.enseeiht.fr/dokuwiki/lib/exe/fetch.php?media=public:reva:leopard_AR.ogv|http://ubee.enseeiht.fr/dokuwiki/lib/exe/fetch.php?media=public:reva:leopard_AR.mp4|320x240}}
  
Related Publications:
Our method is based on a skeletonisation algorithm that generates a possible skeleton from a foliage segmentation. A 3D generative model is then built, based on a parametric model of branching systems that takes botanical knowledge into account. This method extends previous work by constraining the resulting skeleton to follow the hierarchical organisation of natural branching structures. A first instance of a 3D model is generated, and a reprojection of this model is compared with the original image. We then show that selecting the model from multiple proposals for the main branching structure of the plant and for the foliage improves the quality of the generated 3D model. By varying the parameter values of the generative model, we produce a series of candidate models; a criterion based on comparing the reprojection of the 3D virtual plant with the original image selects the best one.
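
A minimal sketch of the final selection step, assuming a hypothetical ''render'' callback that reprojects a candidate model to a binary silhouette; intersection-over-union is one plausible instance of the reprojection-comparison criterion, not necessarily the one used in the publications below.

<code python>
import numpy as np

def select_best_model(candidates, render, reference_mask):
    """Return the candidate plant model whose reprojection best matches
    the original image. ``render`` is a hypothetical callback mapping a
    model to a binary silhouette; intersection-over-union (IoU) stands
    in for the reprojection-comparison criterion."""
    def iou(a, b):
        union = np.logical_or(a, b).sum()
        return np.logical_and(a, b).sum() / union if union else 0.0
    scores = [iou(render(m), reference_mask) for m in candidates]
    return candidates[int(np.argmax(scores))]
</code>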
  
{{ public:reva:plant1.png?800 |}}
  
{{ public:reva:plant2.png?800 |}}
  
Relevant publications:
We introduced a new unified Structure-from-Motion (SfM) paradigm in which images of circular point-pairs can be combined with images of natural points. An imaged circular point-pair encodes the 2D Euclidean structure of a world plane and can easily be derived from the image of a planar shape, especially one including circles. A classical SfM method generally runs in two steps: first, a projective factorization of all matched image points (into projective cameras and points), and second, a camera self-calibration that upgrades the reconstructed world from projective to Euclidean. This work shows how to introduce images of circular points into these two SfM steps; its key contribution is to provide the theoretical foundations for combining “classical” linear self-calibration constraints with additional ones derived from such images.
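
To make the self-calibration step concrete, here is a hedged NumPy sketch of the linear constraints contributed by imaged circular points: each such point a + ib lies on the image of the absolute conic (IAC) w, so the real and imaginary parts of (a+ib)^T w (a+ib) = 0 yield two linear equations in the entries of w. The function names are ours.

<code python>
import numpy as np

def quad_row(p, q):
    """Row expressing p^T w q for a symmetric 3x3 conic w, with w
    parametrised as (w11, w12, w13, w22, w23, w33)."""
    return np.array([p[0]*q[0],
                     p[0]*q[1] + p[1]*q[0],
                     p[0]*q[2] + p[2]*q[0],
                     p[1]*q[1],
                     p[1]*q[2] + p[2]*q[1],
                     p[2]*q[2]])

def iac_from_circular_points(circular_points):
    """Estimate the IAC from imaged circular points given as complex
    homogeneous 3-vectors a + i*b (from at least three independent world
    planes). Classical self-calibration constraints could be stacked as
    additional rows of the same linear system."""
    rows = []
    for cp in circular_points:
        a, b = cp.real, cp.imag
        rows.append(quad_row(a, a) - quad_row(b, b))  # real part
        rows.append(quad_row(a, b))                   # imaginary part (w symmetric)
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    w = Vt[-1]                                        # null vector = conic params
    W = np.array([[w[0], w[1], w[2]],
                  [w[1], w[3], w[4]],
                  [w[2], w[4], w[5]]])
    return W  # intrinsics K then follow from W = K^-T K^-1 (Cholesky of inv(W))
</code>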
  
{{public:reva:sfmcircle.png?400|}} {{public:reva:sfmcircle3d.png?200|}}
  
Related publications:
challenging research problem called 3D-reconstruction. Among the different techniques available, photometric stereo produces highly accurate results when the lighting conditions have been identified. When these conditions are unknown, the problem becomes the so-called uncalibrated photometric stereo problem, which is ill-posed. We showed how total variation (TV) can be used to reduce the ambiguities of uncalibrated photometric stereo, and studied two methods for estimating the parameters of the //generalized bas-relief ambiguity//.
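
For context, here is a minimal NumPy sketch of the //calibrated// Lambertian case, where the light directions are known and the normals follow from per-pixel least squares. The work above addresses the uncalibrated case, in which such a solution is only determined up to the generalized bas-relief transformation.

<code python>
import numpy as np

def photometric_stereo(I, L):
    """Calibrated photometric stereo with known, distant lights.
    I: (m, npix) stacked image intensities, L: (m, 3) light directions.
    Solves I = L @ (albedo * normal) per pixel in the least-squares
    sense (Lambertian model, m >= 3 non-coplanar lights)."""
    M, *_ = np.linalg.lstsq(L, I, rcond=None)      # (3, npix) scaled normals
    albedo = np.linalg.norm(M, axis=0)             # per-pixel albedo
    normals = M / np.clip(albedo, 1e-12, None)     # unit surface normals
    return albedo, normals
</code>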
  
{{public:reva:dsc_0058.jpg?200|}}
{{public:parcoursmm:3in:wiki_stereophotometrie.png?900|}}
  
This research work is the subject of a technology transfer project supported by {{public:reva:toulouse-tech-transfer.jpg?80|http://www.toulouse-tech-transfer.com/}}
  
Related publications:
We further propose two advanced methods: //keyview-aware//, which trades off mesh quality and camera speed appropriately depending on how close the current view is to the keyviews, and //adaptive-zoom//, which improves visual quality by moving the virtual camera away from the original path. A user study reveals that our keyview-aware method is preferred over the basic methods. Moreover, the adaptive-zoom scheme compares favorably to the keyview-aware method, showing that path adaptation is a viable approach to handling bandwidth variation.
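
As a purely illustrative sketch (the parameter names and the blending rule are ours, not the paper's exact formulation), the keyview-aware trade-off can be thought of as slowing the virtual camera near a keyview whenever the mesh data needed for that view has not fully arrived.

<code python>
def keyview_aware_speed(t, keyview_times, quality, base_speed=1.0,
                        min_speed=0.2, influence=2.0):
    """Hypothetical keyview-aware speed rule: travel at base speed far
    from keyviews, and near a keyview slow down in proportion to how
    incomplete the received mesh is (quality in [0, 1])."""
    d = min(abs(t - k) for k in keyview_times)   # time to the nearest keyview
    w = max(0.0, 1.0 - d / influence)            # 1 at a keyview, 0 far away
    target = min_speed + (base_speed - min_speed) * quality
    return (1.0 - w) * base_speed + w * target
</code>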
  
{{ public:reva:3dstreaming.png?800 |}}
  
Related publications:
  
  
{{public:reva:toura.png?100|}}
  
Web site of the project: [[http://www.apsat.eu/en/applications-en/toura-tourisme-en.html|Project TOURA]]
  
  
{{public:reva:sicasse.png?800|}}
----
== Interactions ==
  
 Robust marker detection for Augmented Ads Robust marker detection for Augmented Ads
{{public:reva:ubleam-1.png?300|}}
  
Shared patent with {{public:reva:ubleam_logo_horizontal_mantra_200web.png?80|}}
  
Marker detection and camera pose estimation for Augmented Applications
{{htvid>http://ubee.enseeiht.fr/dokuwiki/lib/exe/fetch.php?media=public:reva:cones.ogv|http://ubee.enseeiht.fr/dokuwiki/lib/exe/fetch.php?media=public:reva:cones.mp4}}
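
As a generic sketch of the detection-then-pose pipeline (not the patented detector itself), once the four corners of a square marker have been detected, the camera pose can be recovered with OpenCV's ''solvePnP''.

<code python>
import numpy as np
import cv2

def marker_pose(corners_2d, marker_size, K, dist=None):
    """Camera pose from the four detected corners of a square planar
    marker. corners_2d: (4, 2) image points ordered like the model
    corners below; marker_size: side length in metres; K: (3, 3)
    camera intrinsics; dist: optional distortion coefficients."""
    s = marker_size / 2.0
    object_pts = np.array([[-s,  s, 0], [ s,  s, 0],
                           [ s, -s, 0], [-s, -s, 0]], dtype=np.float32)
    ok, rvec, tvec = cv2.solvePnP(object_pts, corners_2d.astype(np.float32),
                                  K, dist, flags=cv2.SOLVEPNP_IPPE_SQUARE)
    if not ok:
        raise RuntimeError("pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)   # rotation matrix from axis-angle vector
    return R, tvec
</code>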
  
Available code: