Sparse GPU Voxelization of Yarn-Level Cloth

Abstract

The most popular methods in cloth rendering rely on volumetric data to model complex optical phenomena such as sub-surface scattering. These approaches produce very realistic illumination results, but their volumetric representations are costly to compute and render, forfeiting any interactive feedback. In this paper, we introduce a Graphics Processing Unit (GPU)-based method for voxelization and visualization, suitable for both interactive and offline rendering. Recent OpenGL features, such as the ability to dynamically address arbitrary buffers and to allocate bindless textures, are combined in our pipeline to interactively voxelize millions of polygons into a set of large three-dimensional (3D) textures (>10⁹ elements), generating a volume with sub-voxel accuracy that is suitable even for high-density woven cloth such as linen.
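To illustrate the bindless-texture mechanism the abstract refers to, the following C sketch allocates one 3D-texture tile of a large volume and retrieves 64-bit bindless handles for sampling and for image writes during voxelization. The tile size, texel format, and function name are illustrative assumptions, not the paper's actual configuration; only the OpenGL/ARB_bindless_texture calls themselves are standard.

    #include <GL/glew.h>

    /* Allocate one tile of a sparse voxel volume and return a bindless
     * image handle that shaders can write to during voxelization.
     * (Hypothetical helper; tile dimensions are an assumption, e.g. several
     * 1024^3 R8 tiles together exceeding 10^9 elements.) */
    GLuint64 allocate_volume_tile(GLsizei w, GLsizei h, GLsizei d)
    {
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_3D, tex);

        /* Immutable storage for the tile: one mip level, 8-bit density. */
        glTexStorage3D(GL_TEXTURE_3D, 1, GL_R8, w, h, d);

        /* Bindless image handle: a 64-bit value that can be stored in an
         * arbitrary buffer and dereferenced in shaders, so the voxelizer
         * is not limited by the number of bound image units. */
        GLuint64 handle = glGetImageHandleARB(tex, 0, GL_TRUE, 0, GL_R8);
        glMakeImageHandleResidentARB(handle, GL_READ_WRITE);
        return handle;
    }

The returned handles would typically be packed into a shader storage buffer so a fragment or compute shader can select the tile covering each voxelized primitive, which is the "dynamic addressing of arbitrary buffers" the abstract mentions.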
