Archives for Technology - Page 34

22 Nov

Multi-Scale Kernels Using Random Walks

We introduce novel multi-scale kernels using the random walk framework and derive corresponding embeddings and pairwise distances. The fractional moments of the rate of continuous-time random walk (equivalently, diffusion rate) are used to discover higher-order kernels (or similarities) between pairs of points. The formulated kernels are isometry, scale and tessellation invariant, can be made globally or locally shape aware, and are insensitive to partial objects and noise based on the moment and influence parameters. In addition, the corresponding kernel distances and embeddings are convergent and efficiently computable. We introduce dual Green's mean signatures based on the kernels and discuss the applicability of the multi-scale distance and embedding. Collectively, we present a unified view of popular embeddings and distance metrics while recovering intuitive probabilistic interpretations on discrete surface meshes.
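The building block behind such random-walk kernels can be illustrated in miniature (a toy example, not the paper's method; the 4-node path graph and all names are invented): the heat kernel exp(-tL) of a graph Laplacian gives a family of kernels whose scale parameter t interpolates between local and global similarity, with a kernel-induced distance derived from it.

```python
import numpy as np

# Adjacency of a hypothetical 4-node path graph: 0-1-2-3
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A          # combinatorial graph Laplacian

def heat_kernel(L, t):
    """K_t = exp(-t L) via the Laplacian eigendecomposition."""
    lam, phi = np.linalg.eigh(L)         # L is symmetric
    return phi @ np.diag(np.exp(-t * lam)) @ phi.T

def kernel_distance(K, i, j):
    """Kernel-induced squared distance: d^2(i, j) = K_ii + K_jj - 2 K_ij."""
    return K[i, i] + K[j, j] - 2.0 * K[i, j]

# Small t: only near neighbours are similar; large t: global structure
# dominates and all pairwise kernel distances shrink towards zero.
for t in (0.1, 1.0, 10.0):
    K = heat_kernel(L, t)
    print(t, round(kernel_distance(K, 0, 3), 4))
```

Varying t (and, in the paper's setting, taking fractional moments of the diffusion rate) is what produces a multi-scale family rather than a single kernel.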
20 Nov

Stackless Multi-BVH Traversal for CPU, MIC and GPU Ray Tracing

Stackless traversal algorithms for ray tracing acceleration structures require significantly less storage per ray than ordinary stack-based ones. This advantage is important for massively parallel rendering methods, where there are many rays in flight. On SIMD architectures, a commonly used acceleration structure is the multi bounding volume hierarchy (MBVH), which has multiple bounding boxes per node for improved parallelism. It scales to branching factors higher than two, for which, however, only stack-based traversal methods have been proposed so far. In this paper, we introduce a novel stackless traversal algorithm for MBVHs with up to four-way branching. Our approach replaces the stack with a small bitmask, supports dynamic ordered traversal, and has a low computation overhead. We also present efficient implementation techniques for recent CPU, MIC (Intel Xeon Phi) and GPU (NVIDIA Kepler) architectures.
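The bitmask idea can be sketched in miniature (a toy 4-ary tree traversal, not the paper's ray-tracing algorithm; the tree and all names are invented): each tree level contributes up to four bits recording which siblings are still pending, and backtracking follows parent links, so no stack of node pointers is ever kept.

```python
class Node:
    def __init__(self, *children):
        self.children = list(children)        # up to 4 children per node
        self.parent = None
        for c in self.children:
            c.parent = self

def traverse(root, visit):
    """Depth-first traversal of an (up to) 4-ary tree using a bitmask
    'trail' instead of a node stack: 4 bits per level mark pending siblings."""
    node, depth, mask = root, 0, 0
    while True:
        visit(node)
        if node.children:                     # descend to first child,
            for i in range(1, len(node.children)):
                mask |= 1 << (4 * depth + i)  # marking siblings as pending
            node, depth = node.children[0], depth + 1
            continue
        while depth > 0:                      # leaf: backtrack via parents
            node, depth = node.parent, depth - 1
            level = (mask >> (4 * depth)) & 0xF
            if level:                         # a sibling is still pending here
                i = (level & -level).bit_length() - 1
                mask &= ~(1 << (4 * depth + i))
                node, depth = node.children[i], depth + 1
                break
        else:
            return                            # trail empty: traversal done

# Preorder over a small 4-ary tree, without any stack of node pointers.
leaves = [Node() for _ in range(5)]
tree = Node(Node(leaves[0], leaves[1], leaves[2]), Node(leaves[3], leaves[4]))
visited = []
traverse(tree, visited.append)
print(len(visited))   # 8 nodes visited
```

A fixed-order sketch like this omits the dynamic per-ray ordering the paper supports, but it shows why the per-ray state shrinks to a few bytes.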
16 Nov

Subdivision Surfaces with Creases and Truncated Multiple Knot Lines

We deal with subdivision schemes based on arbitrary-degree B-splines. We focus on extraordinary knots which exhibit various levels of complexity in terms of both the valency and the multiplicity of knot lines emanating from such knots. The purpose of truncated multiple knot lines is to model creases which fair out. Our construction supports any degree and any knot line multiplicity and provides a modelling framework familiar to users accustomed to B-spline and NURBS systems.
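The curve analogue of a crease from knot multiplicity can be sketched with the standard uniform cubic B-spline subdivision masks (a minimal 1D illustration, not the paper's surface construction; points are stored as complex numbers and the example square is invented): tagging a control point as a full-multiplicity knot switches its vertex mask to interpolation, producing a sharp corner instead of a rounded one.

```python
def subdivide(points, crease=()):
    """One uniform cubic B-spline subdivision step on a closed control polygon."""
    n = len(points)
    out = []
    for i in range(n):
        a, b, c = points[i - 1], points[i], points[(i + 1) % n]
        if i in crease:
            out.append(b)                      # full multiplicity: interpolate
        else:
            out.append((a + 6 * b + c) / 8.0)  # smooth vertex mask
        out.append((b + c) / 2.0)              # edge midpoint mask
    return out

# A unit square; index 0 is tagged as a crease (full knot multiplicity).
square = [complex(0, 0), complex(1, 0), complex(1, 1), complex(0, 1)]
smooth = subdivide(square)
sharp = subdivide(square, crease={0})
print(smooth[0], sharp[0])   # rounded corner vs. preserved corner
```

Truncating the multiple knot line, as the paper does, would gradually blend the interpolating mask back into the smooth one so the crease fairs out.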
14 Nov

Scalable Realistic Rendering with Many-Light Methods

Recent years have seen increasing attention and significant progress in many-light rendering, a class of methods for efficient computation of global illumination. The many-light formulation offers a unified mathematical framework for the problem, reducing the full light transport simulation to the calculation of the direct illumination from many virtual light sources. These methods are unrivalled in their scalability: they are able to produce plausible images in a fraction of a second but also converge to the full solution over time. In this state-of-the-art report, we give an easy-to-follow, introductory tutorial of the many-light theory; provide a comprehensive, unified survey of the topic with a comparison of the main algorithms; discuss limitations regarding materials and light transport phenomena; and present a vision to motivate and guide future research. We cover the fundamental concepts as well as improvements, extensions and applications of many-light rendering.
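The core of the many-light formulation fits in a few lines (a hypothetical sketch with invented names and scene data, not any specific algorithm from the report): shading reduces to summing clamped, cosine-weighted contributions from a set of virtual point lights (VPLs).

```python
import math

def shade(point, normal, vpls, clamp=0.01):
    """Direct illumination at `point` summed over all VPLs (visibility omitted)."""
    total = 0.0
    for (lx, ly, lz), intensity in vpls:
        dx, dy, dz = lx - point[0], ly - point[1], lz - point[2]
        d2 = dx * dx + dy * dy + dz * dz
        d = math.sqrt(d2)
        cos_theta = max(0.0, (normal[0] * dx + normal[1] * dy + normal[2] * dz) / d)
        # Clamping the geometry term bounds the bright spikes VPL methods
        # suffer from near the virtual lights, at the cost of some energy.
        total += intensity * cos_theta / max(d2, clamp)
    return total

# A hypothetical shade point on an upward-facing surface, lit by two VPLs.
vpls = [((0.0, 2.0, 0.0), 5.0), ((1.0, 1.0, 1.0), 2.0)]
radiance = shade((0.0, 0.0, 0.0), (0.0, 1.0, 0.0), vpls)
```

Scalability comes from the structure of this sum: more VPLs refine the solution, and light-clustering methods exploit the fact that distant VPLs can be approximated in groups.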
14 Nov

Visibility Silhouettes for Semi-Analytic Spherical Integration

At each shade point, the spherical visibility function encodes occlusion from surrounding geometry, in all directions. Computing this function is difficult, and point-sampling approaches, such as ray tracing or hardware shadow mapping, are traditionally used to approximate it efficiently. We propose a semi-analytic solution to the problem where the spherical silhouette of the visibility is computed using a search over a 4D dual mesh of the scene. Once computed, we are able to semi-analytically integrate visibility-masked spherical functions along the visibility silhouette, instead of over the entire hemisphere. In this way, we avoid the artefacts that arise from using point-sampling strategies to integrate visibility, a function with unbounded frequency content. We demonstrate our approach on several applications, including direct illumination from realistic lighting and computation of pre-computed radiance transfer data. Additionally, we present a new frequency-space method for exactly computing all-frequency shadows on diffuse surfaces. Our results match ground truth computed using importance-sampled stratified Monte Carlo ray tracing, with comparable performance on scenes with low-to-moderate geometric complexity.
18 Oct

Controlled Metamorphosis Between Skeleton-Driven Animated Polyhedral Meshes of Arbitrary Topologies

Enabling animators to smoothly transform between animated meshes of differing topologies is a long-standing problem in geometric modelling and computer animation. In this paper, we propose a new hybrid approach built upon the advantages of scalar field-based models (often called implicit surfaces), which can easily change their topology by changing their defining scalar field. Given two meshes, animated by their rigging skeletons, we associate each mesh with its own approximating implicit surface. This implicit surface moves synchronously with the mesh. The shape-metamorphosis process is performed in several steps: first, we collapse the two meshes to their corresponding approximating implicit surfaces, then we transform between the two implicit surfaces, and finally we perform an inverse transition from the resulting metamorphosed implicit surface to the target mesh. The examples presented in this paper demonstrating the results of the proposed technique were implemented using an in-house plug-in for Maya™.
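Why implicit surfaces handle topology changes "for free" can be shown with a toy scalar-field morph (a minimal sketch with invented shapes, not the paper's pipeline): linearly blending two defining fields yields intermediate surfaces even when the genus differs between source and target.

```python
def sphere(x, y, z, r=1.0):
    return r * r - (x * x + y * y + z * z)     # > 0 inside, 0 on the surface

def two_spheres(x, y, z, r=0.6, c=1.0):
    # Union of two small spheres: the max of the two field values.
    return max(sphere(x - c, y, z, r), sphere(x + c, y, z, r))

def morph(f1, f2, t):
    """Implicit surface at time t: the zero set of (1 - t) * f1 + t * f2."""
    return lambda x, y, z: (1.0 - t) * f1(x, y, z) + t * f2(x, y, z)

# At t = 0 the origin is inside the big sphere; at t = 1 it lies between
# the two small spheres, i.e. outside -- the shape splits in two along the
# way, with no remeshing or correspondence needed.
start, end = morph(sphere, two_spheres, 0.0), morph(sphere, two_spheres, 1.0)
print(start(0, 0, 0) > 0, end(0, 0, 0) > 0)
```

A mesh-to-mesh morph with this machinery needs the two extra steps the abstract describes: collapsing each mesh to its approximating field, and an inverse transition back to the target mesh.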
17 Oct

Visualization of the Centre of Projection Geometrical Locus in a Single Image

Single view reconstruction (SVR) is an important approach for 3D shape recovery, since many non-existing buildings and scenes are captured in a single image. Historical photographs are often the most precise source for virtual reconstruction of damaged cultural heritage. In semi-automated techniques, which are mainly used in practical situations, the user is the one who recognizes and selects the constraints to be used. Hence, the veridicality and the accuracy of the final model partially rely on human decisions. We noticed that users, especially non-expert users such as cultural heritage professionals, usually do not fully understand the SVR process, which is why they have trouble making decisions while modelling. That often fundamentally affects the quality of the final 3D models. Considering the importance of human performance in SVR approaches, in this paper we offer a solution that can be used to reduce the amount of user errors. Specifically, we address the problem of locating the centre of projection (CP). We introduce a tool set for 3D visualization of the CP's geometrical loci that provides the user with a clear idea of how the CP's location is determined. Thanks to this type of visualization, the user becomes aware of the following: (1) which constraints are relevant for locating the CP, (2) whether the image is suitable for SVR, (3) whether more constraints are required to locate the CP, (4) which constraints should be used for the best match and (5) whether additional constraints will create a useful redundancy. In order to test our approach and the assumptions it relies on, we compared the amount of user-made errors in the standard approaches with that in the approach where the additional visualization is provided.
10 Oct

A Survey of Volumetric Illumination Techniques for Interactive Volume Rendering

Interactive volume rendering in its standard formulation has become an increasingly important tool in many application domains. In recent years, several advanced volumetric illumination techniques to be used in interactive scenarios have been proposed. These techniques claim to have perceptual benefits as well as being capable of producing more realistic volume-rendered images. Naturally, they cover a wide spectrum of illumination effects, including varying shading and scattering effects. In this survey, we review and classify the existing techniques for advanced volumetric illumination. The classification is conducted based on their technical realization, their performance behaviour as well as their perceptual capabilities. Based on the limitations revealed in this review, we define future challenges in the area of interactive advanced volumetric illumination.
10 Oct

Appearance Stylization of Manhattan World Buildings

We propose a method that generates stylized building models from examples. Our method requires only minimal user input to capture the appearance of a Manhattan world (MW) building, and can automatically retarget the captured ‘look and feel’ to new models. The key contribution is a novel representation, namely the ‘style sheet’, that is captured independently from a building's structure. It summarizes characteristic shape and texture patterns on the building. In the retargeting stage, a style sheet is used to decorate new buildings of potentially different structures. Consistent face groups are proposed to capture complex texture patterns from the example model and to preserve the patterns in the retargeted models. We demonstrate how to learn such style sheets from different MW buildings and show the results of using them to generate novel models.
17 Sep

Mobility-Trees for Indoor Scenes Manipulation

In this work, we introduce the ‘mobility-tree’ construct for high-level functional representation of complex 3D indoor scenes. In recent years, digital indoor scenes have become increasingly popular, consisting of detailed geometry and complex functionalities. These scenes often consist of objects that reoccur in various poses and interrelate with each other. In this work we analyse the reoccurrence of objects in the scene and automatically detect their functional mobilities. ‘Mobility’ analysis denotes the motion capabilities (i.e. degrees of freedom) of an object and its sub-parts, which typically relate to their indoor functionalities. We compute an object's mobility by analysing its spatial arrangement, repetitions and relations with other objects, and store it in a ‘mobility-tree’. Repetitive motions in the scene are grouped into ‘mobility-groups’, for which we develop a set of sophisticated controllers facilitating semantic high-level editing operations. We show applications of our mobility analysis to interactive scene manipulation and reorganization, and present results for a variety of indoor scenes.
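A mobility-tree node and a mobility-group edit might look roughly like this (a hypothetical sketch; all names, fields and the cabinet example are invented, not the paper's data structures): each node stores a part and the degree of freedom relating it to its parent, and an edit applied to one part propagates to every repeated part with the same motion.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class MobilityNode:
    part: str                              # scene object or sub-part
    dof: str = "fixed"                     # e.g. "fixed", "rotate", "slide"
    limit: Tuple[float, float] = (0.0, 0.0)  # motion range along/about the axis
    children: List["MobilityNode"] = field(default_factory=list)

def apply_motion(node, part, amount):
    """Clamp `amount` to the part's motion range and apply it to every
    matching sub-part, so repeated parts (a mobility group) move together."""
    moved = []
    def walk(n):
        if n.part == part and n.dof != "fixed":
            lo, hi = n.limit
            moved.append((n.part, max(lo, min(hi, amount))))
        for c in n.children:
            walk(c)
    walk(node)
    return moved

# A hypothetical cabinet: two identical drawers and a hinged door.
cabinet = MobilityNode("cabinet", children=[
    MobilityNode("drawer", dof="slide", limit=(0.0, 0.4)),
    MobilityNode("drawer", dof="slide", limit=(0.0, 0.4)),
    MobilityNode("door", dof="rotate", limit=(0.0, 1.6)),
])
print(apply_motion(cabinet, "drawer", 0.9))   # both drawers move, clamped to 0.4
```

Grouping the two drawers under one edit is the 1-line version of what the paper's mobility-group controllers do for whole scenes.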