In this paper, we develop a personalized video relighting algorithm that produces high-quality, temporally consistent relit videos under any pose, expression, and lighting condition in real time. Existing relighting algorithms typically rely either on publicly available synthetic data, which yields poor relighting results, or on light stage data, which is difficult to obtain. We show that by simply capturing video of a user watching YouTube videos on a monitor, we can train a personalized algorithm capable of high-quality relighting under any condition.
Our key contribution is a novel neural relighting architecture that effectively separates the intrinsic appearance features (the geometry and reflectance of the face) from the source lighting, then combines them with the target lighting to generate a relit image. This architecture enables smoothing of the intrinsic appearance features, leading to temporally stable video relighting. Both qualitative and quantitative evaluations show that our architecture improves portrait image relighting quality and temporal consistency over state-of-the-art approaches on both the casually captured `Light Stage at Your Desk' (LSYD) data and the light-stage-captured `One Light At a Time' (OLAT) data.
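The separation described above can be sketched as a simple pipeline: an encoder splits each frame into intrinsic appearance features and an estimate of the source lighting, the source lighting is discarded, the intrinsic features are smoothed over time, and a decoder combines them with the target lighting. The sketch below is a minimal NumPy illustration of that data flow only; the `encode` and `decode` functions are hypothetical stand-ins (random projections, not the paper's trained networks), and the exponential moving average is one plausible smoothing scheme, not necessarily the one used in the paper.

```python
import numpy as np

def encode(frame, rng):
    # Hypothetical encoder stand-in: in the real system this would be a
    # trained network producing intrinsic appearance features (geometry and
    # reflectance) and a source-lighting estimate from the input frame.
    feat = rng.standard_normal(128)       # intrinsic appearance features
    src_light = rng.standard_normal(16)   # estimated source lighting
    return feat, src_light

def decode(feat, target_light):
    # Hypothetical decoder stand-in: combines intrinsic features with the
    # target lighting; here it returns a small feature vector, not an image.
    return np.tanh(feat[:16] + target_light)

def relight_video(frames, target_light, alpha=0.8, seed=0):
    """Relight a frame sequence, smoothing intrinsic features over time
    (illustrated here with an exponential moving average) so the output
    video is temporally stable."""
    rng = np.random.default_rng(seed)
    smoothed = None
    out = []
    for frame in frames:
        feat, _src_light = encode(frame, rng)  # source lighting is discarded
        smoothed = feat if smoothed is None else alpha * smoothed + (1 - alpha) * feat
        out.append(decode(smoothed, target_light))
    return out

relit = relight_video(frames=[None] * 5, target_light=np.zeros(16))
print(len(relit))  # prints 5
```

The key design point the sketch mirrors is that only the intrinsic features are smoothed across frames, while the target lighting is applied per frame, so temporal stabilization does not interfere with the desired lighting change.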
A qualitative comparison with existing relighting techniques (Sengupta et al. and Sun et al.) on unseen test data from LSYD.
A qualitative comparison with existing relighting techniques (Sengupta et al. and Sun et al.) on unseen test data from OLAT.
We can also relight portrait images captured in the wild without using any source lighting as input.
We relight a portrait video with varying source lighting, using a constant ring light as the target lighting for every frame.
@misc{choi2024personalizedvideorelightingathome,
title={Personalized Video Relighting With an At-Home Light Stage},
author={Jun Myeong Choi and Max Christman and Roni Sengupta},
year={2024},
eprint={2311.08843},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2311.08843},
}