Many applications, such as computer-aided design and game rendering, need to reproduce realistic material appearance under complex lighting environments and varying viewing conditions. The visual authenticity of a three-dimensional object or scene depends heavily on the simulation of textures, in which Bidirectional Texture Function (BTF) data plays an essential role. Research on BTFs has focused on data acquisition, compression, and modeling. In this paper, we propose a deep convolutional generative adversarial network (DCGAN) to learn the appearance of the BTF and predict new BTF data under novel conditions. We use the illumination direction, viewing direction, and material type as conditional constraints to train the network. The proposed method was tested on a public BTF dataset and was shown to reduce data storage cost while producing satisfactory synthetic results.
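To make the conditioning scheme concrete, the following is a minimal sketch of a conditional DCGAN generator for BTF texture patches, written in PyTorch. The architecture, layer sizes, condition encoding (a 3-vector light direction, a 3-vector view direction, and an assumed 8-way material one-hot), and the names `BTFGenerator` and `COND_DIM` are all illustrative assumptions, not the authors' exact network.

```python
import torch
import torch.nn as nn

# Assumed condition layout: light dir (3) + view dir (3) + material one-hot (8).
COND_DIM = 3 + 3 + 8

class BTFGenerator(nn.Module):
    """Hypothetical conditional DCGAN generator producing 64x64 RGB BTF patches."""

    def __init__(self, noise_dim=100, cond_dim=COND_DIM):
        super().__init__()
        self.net = nn.Sequential(
            # Project noise+condition from a 1x1 map up to 4x4, then double
            # the spatial resolution at each transposed convolution to 64x64.
            nn.ConvTranspose2d(noise_dim + cond_dim, 256, 4, 1, 0, bias=False),
            nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1, bias=False),
            nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1, bias=False),
            nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 32, 4, 2, 1, bias=False),
            nn.BatchNorm2d(32), nn.ReLU(True),
            nn.ConvTranspose2d(32, 3, 4, 2, 1, bias=False),
            nn.Tanh(),  # RGB patch scaled to [-1, 1]
        )

    def forward(self, noise, cond):
        # Concatenate noise and condition vectors, reshape to a 1x1 spatial map.
        x = torch.cat([noise, cond], dim=1).unsqueeze(-1).unsqueeze(-1)
        return self.net(x)

gen = BTFGenerator()
z = torch.randn(2, 100)                # latent noise
c = torch.randn(2, COND_DIM)           # light/view directions + material code
patch = gen(z, c)
print(tuple(patch.shape))              # (2, 3, 64, 64)
```

At inference time, holding the noise fixed while sweeping the light and view direction components of the condition vector would yield synthesized texture appearance under novel illumination and viewing conditions, which is the prediction task the paper describes.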
Number of pages: 7
Journal: Procedia Computer Science
Publication status: Published - 6 Feb 2019
Event: International Conference on Identification, Information and Knowledge in the Internet of Things - Beijing, China
Duration: 19 Oct 2018 → 21 Oct 2018