Project Details

Description

Funded by the XRNetwork+ scheme (EPSRC) administered by the University of York, this project aims to develop an efficient method for capturing photoreal assets to enhance AI-driven visual effects in film, approached in three distinct ways:

1. Photogrammetry / Volumetric capture: Utilising state-of-the-art techniques, multiple images of real-world objects or environments will be captured from various angles and processed to generate preliminary 3D models that preserve fine detail.

2. Synthetic data: Scanned assets will be used to automatically generate training data for machine learning models, for example by rendering each asset from many viewpoints (see the sketch after this list).

3. Real-time integration with Film Production Pipeline: The developed assets and AI-enhanced techniques will be integrated into the virtual production pipeline, enabling filmmakers to utilise the photoreal assets and AI models efficiently and to create effects that were previously unattainable at this cost and turnaround time.
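To make the multi-view capture and synthetic data steps in items 1 and 2 more concrete, the sketch below plans an evenly spread set of camera viewpoints around a target object using a Fibonacci (golden-angle) sphere layout. This is a minimal illustrative sketch, not the project's actual pipeline: the function names (fibonacci_sphere_viewpoints, look_at), the radius value, and the number of views are assumptions introduced here for demonstration only.

```python
import numpy as np

def fibonacci_sphere_viewpoints(n_views: int, radius: float) -> np.ndarray:
    """Return n_views camera positions spread roughly evenly on a sphere.

    The golden-angle spiral avoids clustering at the poles, so a capture
    rig (or a renderer producing synthetic training images) sees the
    object from all sides.
    """
    indices = np.arange(n_views)
    golden_angle = np.pi * (3.0 - np.sqrt(5.0))
    z = 1.0 - 2.0 * (indices + 0.5) / n_views      # even spacing in height
    r_xy = np.sqrt(1.0 - z**2)                     # ring radius at each height
    theta = golden_angle * indices                 # rotation about the vertical axis
    points = np.stack([r_xy * np.cos(theta), r_xy * np.sin(theta), z], axis=1)
    return radius * points

def look_at(camera_pos: np.ndarray, target: np.ndarray,
            up=np.array([0.0, 0.0, 1.0])) -> np.ndarray:
    """Build a camera-to-world rotation whose -z axis points at the target."""
    forward = target - camera_pos
    forward /= np.linalg.norm(forward)
    right = np.cross(forward, up)
    right /= np.linalg.norm(right)
    true_up = np.cross(right, forward)
    return np.stack([right, true_up, -forward], axis=1)

if __name__ == "__main__":
    centre = np.zeros(3)
    for pos in fibonacci_sphere_viewpoints(n_views=36, radius=2.5):
        R = look_at(pos, centre)
        # In practice each (pos, R) pair would drive a physical camera during
        # capture, or a rendering of the scanned asset when generating
        # synthetic training images.
        print(pos.round(2), R[:, 2].round(2))
```

In a capture or synthetic-data context, each pose would be paired with an image (real or rendered) and any available labels; the pose-planning step itself is deliberately library-free here so it can slot in front of whichever photogrammetry or rendering toolchain the project adopts.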

In terms of outcomes, this project will first provide a framework for capturing photoreal CG assets efficiently, reducing the cost associated with asset creation. Second, it will facilitate the development of AI models that enhance the realism of those assets, pushing the boundaries of visual effects in film. Finally, the integration of these assets and techniques into the film production pipeline will empower filmmakers to create immersive, visually captivating experiences for audiences at reduced cost.

Status: Finished
Effective start/end date: 6/10/23 – 31/03/24

Funding

  • University of York: £42,934.54
