Archives for Technology

08 Jan

Predicting Visual Perception of Material Structure in Virtual Environments

One of the most accurate yet still practical representations of material appearance is the Bidirectional Texture Function (BTF). The BTF can be viewed as an extension of the Bidirectional Reflectance Distribution Function (BRDF) that adds spatial information, capturing local visual effects such as shadowing, interreflection and subsurface scattering. However, the shift from BRDF to BTF brings not only a huge leap in the realism of material reproduction, but also high memory and computational costs stemming from the storage and processing of massive BTF data. In this work, we argue that each opaque material, regardless of its surface structure, can be safely substituted by a BRDF without introducing a significant perceptual error when viewed from an appropriate distance. Therefore, we ran a set of psychophysical studies over 25 materials to determine so-called critical viewing distances, i.e. the minimal distances at which the material's spatial structure (texture) can no longer be visually discerned. Our analysis determined typical distances for several material categories often used in interior design applications. Furthermore, we propose a combination of computational features that can predict such distances without the need for a psychophysical study. We show that our work can significantly reduce rendering costs in applications that process complex virtual scenes.
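The notion of a critical viewing distance can be illustrated with a back-of-the-envelope acuity calculation (a simplified stand-in for intuition only, not the paper's feature-based predictor): a surface feature stops being resolvable roughly when it subtends less than about one arcminute of visual angle.

```python
import math

def critical_distance(feature_size_m, acuity_arcmin=1.0):
    """Distance at which a surface feature of the given size subtends
    the visual acuity limit (~1 arcminute for normal vision).  Beyond
    this distance the texture detail is no longer resolvable, so a
    BRDF could stand in for the full BTF."""
    theta = math.radians(acuity_arcmin / 60.0)   # arcminutes -> radians
    return feature_size_m / (2.0 * math.tan(theta / 2.0))

# e.g. a 1 mm weave pattern becomes unresolvable at roughly 3.4 m
d = critical_distance(0.001)
```

Real critical distances also depend on contrast, illumination and material category, which is why the paper derives them psychophysically rather than from acuity alone.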
08 Jan

Accurate and Efficient Computation of Laplacian Spectral Distances and Kernels

This paper introduces Laplacian spectral distances, a family of functions that resemble the usual distance map but exhibit properties (e.g. smoothness, locality, invariance to shape transformations) that make them useful for processing and analysing geometric data. Spectral distances are easily defined through a filtering of the Laplacian eigenpairs and reduce to the heat diffusion, wave, biharmonic and commute-time distances for specific filters. In particular, the smoothness of the spectral distances and the encoding of local and global shape properties depend on the convergence of the filtered eigenvalues to zero. Instead of applying a truncated spectral approximation or prolongation operators, we propose to compute Laplacian distances and kernels through the solution of sparse linear systems. Our approach is free of user-defined parameters, avoids the evaluation of the Laplacian spectrum and guarantees a higher approximation accuracy than previous work.
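As a toy example of one such spectral distance, the commute-time distance of a small graph can be evaluated from the Laplacian pseudoinverse (a dense stand-in for illustration only; the paper's contribution is precisely to avoid the spectrum and pseudoinverse by solving sparse linear systems):

```python
import numpy as np

# Path graph 0-1-2-3: adjacency matrix and graph Laplacian L = D - A.
A = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

Lp = np.linalg.pinv(L)     # Moore-Penrose pseudoinverse of L
vol = A.sum()              # graph volume = 2 * number of edges

def commute_time(i, j):
    """Commute-time distance: vol * (L+_ii + L+_jj - 2 L+_ij)."""
    return vol * (Lp[i, i] + Lp[j, j] - 2.0 * Lp[i, j])
```

On a path graph the commute time equals 2·|E| times the hop count, which makes the toy result easy to check by hand.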
08 Jan

Towards Globally Optimal Normal Orientations for Large Point Clouds

Various processing algorithms on point set surfaces rely on consistently oriented normals (e.g. Poisson surface reconstruction). While several approaches exist for the calculation of normal directions, in most cases their orientation has to be determined in a subsequent step. This paper generalizes propagation-based approaches by reformulating the task as a graph-based energy minimization problem. By applying global solvers, we can achieve more consistent orientations than simple greedy optimizations. Furthermore, we present a streaming-based out-of-core framework for orienting large point clouds. This framework orients patches locally and generates a globally consistent patch orientation on a reduced neighbour graph, which achieves similar quality to orienting the full graph.
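As a point of reference, the greedy baseline that such methods generalize can be sketched as a spanning-tree propagation of flips (a Hoppe-style heuristic; the details here are illustrative and assume a connected neighbourhood graph, not the paper's global solver):

```python
import numpy as np

def orient_normals_greedy(points, normals, k=3):
    """Greedy propagation baseline: traverse a spanning tree of the
    kNN graph, flipping each normal to agree with its already-oriented
    neighbour.  Edge weights 1 - |n_i . n_j| favour nearly parallel
    normals, so ambiguous edges are deferred.  Modifies `normals`
    in place and returns it."""
    n = len(points)
    d = np.linalg.norm(points[:, None] - points[None, :], axis=2)
    visited = {0}
    while len(visited) < n:
        best = None
        for i in visited:
            for j in np.argsort(d[i])[1:k + 1]:   # k nearest neighbours
                j = int(j)
                if j in visited:
                    continue
                w = 1.0 - abs(np.dot(normals[i], normals[j]))
                if best is None or w < best[0]:
                    best = (w, i, j)
        _, i, j = best
        if np.dot(normals[i], normals[j]) < 0:
            normals[j] = -normals[j]              # flip to agree with parent
        visited.add(j)
    return normals
```

A single inconsistent flip early in such a greedy pass propagates to the whole subtree, which is exactly the failure mode a global graph-energy formulation avoids.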
05 Jan

Discovering Structured Variations Via Template Matching

Understanding patterns of variation from raw measurement data remains a central goal of shape analysis. Such an understanding reveals which elements are repeated, or how elements can be derived as structured variations from a common base element. We investigate this problem in the context of 3D acquisitions of buildings. Utilizing a set of template models, we discover geometric similarities across a set of building elements. Each template is equipped with a deformation model that defines variations of a base geometry. Central to our algorithm is a simultaneous template matching and deformation analysis that detects patterns across building elements by extracting similarities in the deformation modes of their matching templates. We demonstrate that such an analysis can successfully detect structured variations even for noisy and incomplete data.
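The core fitting step, matching a template with a linear deformation model to observed geometry, can be sketched as a least-squares problem (a simplified version with known correspondences and a single invented deformation mode; the paper's simultaneous matching and analysis is considerably more involved):

```python
import numpy as np

def fit_template(base, modes, observed):
    """Fit deformation coefficients c so that
    base + sum_k c[k] * modes[k] best matches the observed points.
    `base` and `observed` are (n, dim) arrays with known
    correspondences; `modes` is a list of (n, dim) deformation modes.
    Returns the coefficients and the fitting residual, which serves
    as a match-quality score."""
    A = np.stack([m.ravel() for m in modes], axis=1)   # (n*dim, K)
    b = (observed - base).ravel()
    c, *_ = np.linalg.lstsq(A, b, rcond=None)
    residual = np.linalg.norm(A @ c - b)
    return c, residual
```

Comparing the fitted coefficient vectors across many building elements is what exposes the structured variations the abstract describes.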
23 Dec

Synthesizing Ornamental Typefaces

We present a method for creating ornamental typeface images. Ornamental typefaces are composite artworks made by assembling images that carry semantics similar to the words they form. These appealing word-art works often attract more attention and convey more meaningful information than general typefaces. However, traditional ornamental typefaces are usually created by skilled artists through tedious manual processes, especially when searching for appropriate materials and assembling them. Hence, we aim to provide an easy way for novices to create ornamental typefaces. The key challenge is to combine users' design intentions with image semantics and shape information to obtain readable and appealing results. To address this problem, we first provide a scribble-based interface for users to segment the input typeface into strokes according to their design concepts. To ensure the consistency of image semantics and stroke shape, we then define a semantic-shape similarity metric to select a set of suitable images. Finally, to beautify the typeface structure, an optional optimization strategy is investigated. Experimental results and user studies show that the proposed algorithm effectively generates attractive and readable ornamental typefaces.
22 Dec

Sparse GPU Voxelization of Yarn-Level Cloth

Most popular methods in cloth rendering rely on volumetric data in order to model complex optical phenomena such as sub-surface scattering. These approaches are able to produce very realistic illumination results, but their volumetric representations are costly to compute and render, forfeiting any interactive feedback. In this paper, we introduce a method based on the Graphics Processing Unit (GPU) for voxelization and visualization, suitable for both interactive and offline rendering. Recent features in the OpenGL model, such as the ability to dynamically address arbitrary buffers and allocate bindless textures, are combined in our pipeline to interactively voxelize millions of polygons into a set of large three-dimensional (3D) textures (>10^9 elements), generating a volume with sub-voxel accuracy, which is suitable even for high-density woven cloth such as linen.
21 Dec

Memory-Efficient Interactive Online Reconstruction From Depth Image Streams

We describe how the pipeline for 3D online reconstruction using commodity depth and image scanning hardware can be made scalable for large spatial extents and high scanning resolutions. Our modified pipeline requires less than 10% of the memory that is required by previous approaches at similar speed and resolution. To achieve this, we avoid storing a 3D distance field and weight map during online scene reconstruction. Instead, surface samples are binned into a high-resolution binary voxel grid. This grid is used in combination with caching and deferred processing of depth images to reconstruct the scene geometry. For pose estimation, GPU ray-casting is performed on the binary voxel grid. A one-to-one comparison to level-set ray-casting in a distance volume indicates slightly lower pose accuracy. To enable unlimited spatial extents and store acquired samples at the appropriate level of detail, we combine a hash map with a hierarchical tree representation.
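The central data-structure idea, replacing a distance-and-weight volume with a sparse binary occupancy grid addressed through a hash map, can be sketched as follows (a simplification; the paper additionally combines the hash map with a hierarchical tree for level of detail):

```python
def bin_samples(points, voxel_size):
    """Bin surface samples into a sparse binary voxel grid stored as a
    hash set of integer voxel coordinates.  The single occupied/empty
    bit per voxel replaces the per-voxel signed distance and weight of
    a classic TSDF volume, which is where the memory saving comes from."""
    occupied = set()
    for x, y, z in points:
        key = (int(x // voxel_size),
               int(y // voxel_size),
               int(z // voxel_size))
        occupied.add(key)
    return occupied

def is_occupied(occupied, p, voxel_size):
    """Constant-time occupancy lookup, as used during ray-casting."""
    return tuple(int(c // voxel_size) for c in p) in occupied
```

Because empty space stores nothing at all, memory grows with the observed surface area rather than with the bounding volume.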
18 Nov

A Virtual Director Using Hidden Markov Models

Automatically computing a cinematographically consistent sequence of shots over a set of actions occurring in a 3D world is a complex task which requires not only the computation of appropriate shots (viewpoints) and appropriate transitions between shots (cuts), but also the ability to encode and reproduce elements of cinematographic style. Models proposed in the literature, generally based on finite state machines or idiom-based representations, provide limited functionality for building sequences of shots. These approaches are not designed to easily learn elements of cinematographic style, nor do they allow significant variations in style over the same sequence of actions. In this paper, we propose a model for automated cinematography that can compute significant variations in terms of cinematographic style, with the ability to control the duration of shots and the possibility to add specific constraints to the desired sequence. The model is parametrized in a way that facilitates the application of learning techniques. By using a Hidden Markov Model representation of the editing process, we demonstrate the possibility of easily reproducing elements of style extracted from real movies. Results comparing our model with state-of-the-art first-order Markovian representations illustrate these features, and the robustness of the learning technique is demonstrated through cross-validation.
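At the heart of such an HMM formulation is decoding the most likely hidden shot sequence given the observed scene actions. A minimal Viterbi sketch (with invented toy parameters, not the paper's learned model, where hidden states would be shot types and observations action labels):

```python
import numpy as np

def viterbi(obs, init, trans, emit):
    """Most likely hidden-state sequence for an observation sequence,
    via the standard Viterbi recursion in log space.
    init: (S,) initial probabilities; trans: (S, S) transition matrix;
    emit: (S, O) emission matrix; obs: list of observation indices."""
    T, S = len(obs), len(init)
    logp = np.log(init) + np.log(emit[:, obs[0]])
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        # scores[i, j]: best log-prob of being in j at t coming from i
        scores = logp[:, None] + np.log(trans) + np.log(emit[:, obs[t]])[None, :]
        back[t] = scores.argmax(axis=0)
        logp = scores.max(axis=0)
    path = [int(logp.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

Learning the transition and emission parameters from annotated real movies is what lets the same decoder reproduce different editing styles.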
18 Nov

Performance Comparison of Bounding Volume Hierarchies and Kd-Trees for GPU Ray Tracing

We present a performance comparison of bounding volume hierarchies and kd-trees for ray tracing on many-core architectures (GPUs). The comparison focuses on rendering times and traversal characteristics on the GPU using data structures that were optimized for very high ray tracing performance. To achieve low rendering times, we extensively examine the constants used in the termination criteria of the two data structures. We show that for a contemporary GPU architecture (NVIDIA Kepler), bounding volume hierarchies have higher ray tracing performance than kd-trees for simple and moderately complex scenes. On the other hand, kd-trees have higher performance for complex scenes, in particular those with high depth complexity. Finally, we analyse the causes of the performance discrepancies using the profiling characteristics of the ray tracing kernels.
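For reference, the inner loop of BVH traversal that such comparisons profile is the classic ray-AABB slab test, sketched here (standard technique, not code from the paper; `inv_dir` is assumed to have no zero-direction components):

```python
def ray_aabb(origin, inv_dir, lo, hi):
    """Slab test: intersect a ray with an axis-aligned bounding box.
    Returns the entry distance along the ray, or None on a miss.
    inv_dir is the componentwise reciprocal of the ray direction,
    precomputed once per ray as GPU traversal kernels typically do."""
    tmin, tmax = 0.0, float("inf")
    for o, inv, a, b in zip(origin, inv_dir, lo, hi):
        t0 = (a - o) * inv
        t1 = (b - o) * inv
        if t0 > t1:
            t0, t1 = t1, t0           # order the slab intersections
        tmin = max(tmin, t0)
        tmax = min(tmax, t1)
        if tmin > tmax:
            return None               # slabs do not overlap: miss
    return tmin
```

kd-tree traversal instead clips the ray against one splitting plane per node, which is the structural difference behind the divergent performance profiles the paper analyses.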
18 Nov

Digital Fabrication Techniques for Cultural Heritage: A Survey

Digital fabrication devices exploit basic technologies in order to create tangible reproductions of 3D digital models. Although current 3D printing pipelines still suffer from several restrictions, accuracy in reproduction has reached an excellent level. The manufacturing industry has been the main domain of 3D printing applications over the last decade. Digital fabrication techniques have also been demonstrated to be effective in many other contexts, including the consumer domain. Cultural Heritage is one of the new application contexts and is an ideal domain to test the flexibility and quality of this new technology. This survey overviews the various fabrication technologies, discussing their strengths, limitations and costs. Various successful uses of 3D printing in Cultural Heritage are analysed, which should also be useful for other application contexts. We review works that have attempted to extend fabrication technologies in order to deal with the specific issues of digital fabrication in Cultural Heritage. Finally, we also propose areas for future research.