EgoRenderer: Rendering Human Avatars from Egocentric Camera Images

A recent paper proposes to render full-body avatars with realistic appearance and motion of a person wearing an egocentric fisheye camera, viewed from arbitrary external camera viewpoints. It enables new applications in sports performance analysis, healthcare, and virtual reality.

Masks meant to foil facial recognition systems, at the International Spy Museum. Image credit: Derek Bruff via Flickr, CC BY-NC 2.0.

The method employs a lightweight and compact sensor and is fully mobile, enabling actors to roam freely. It decomposes the rendering pipeline into texture synthesis, pose construction, and neural image translation to enable highly realistic appearance and pose transfer to an arbitrary external view. A large synthetic dataset and a network tailored to the camera setup are created to infer the dense correspondences between the input images and an underlying parametric body model.
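To make the dense-correspondence idea concrete: once a network predicts, for each body pixel, UV coordinates on the parametric body model, a texture map can be extracted by scattering pixel colors into the UV atlas. The following is a minimal numpy sketch of that texture-extraction step under assumed inputs (a per-pixel UV map, a body mask, and a square texture atlas); it is an illustration, not the paper's actual implementation, and the function name is hypothetical.

```python
import numpy as np

def extract_texture(image, uv_map, mask, tex_size=64):
    """Scatter observed pixel colors into a UV texture atlas.

    image:  (H, W, 3) float array of input colors.
    uv_map: (H, W, 2) per-pixel UV coordinates in [0, 1), as a
            dense-correspondence network might predict (hypothetical input).
    mask:   (H, W) bool array marking body pixels.
    """
    texture = np.zeros((tex_size, tex_size, 3))
    counts = np.zeros((tex_size, tex_size, 1))
    ys, xs = np.nonzero(mask)
    # Quantize UV coordinates to texel indices.
    uv = np.clip((uv_map[ys, xs] * tex_size).astype(int), 0, tex_size - 1)
    # Accumulate colors; several image pixels may land on the same texel.
    for (u, v), color in zip(uv, image[ys, xs]):
        texture[v, u] += color
        counts[v, u] += 1
    # Average accumulated colors; texels never observed stay black.
    return np.where(counts > 0, texture / np.maximum(counts, 1), 0.0)
```

In practice such a mapping would be learned end to end and handle occlusion and view-dependent appearance; this sketch only shows the geometric bookkeeping.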

Qualitative and quantitative evaluations show that the proposed system generalizes better to novel viewpoints and poses than baseline methods.

We present EgoRenderer, a system for rendering full-body neural avatars of a person captured by a wearable, egocentric fisheye camera that is mounted on a cap or a VR headset. Our system renders photorealistic novel views of the actor and her motion from arbitrary virtual camera locations. Rendering full-body avatars from such egocentric images comes with unique challenges due to the top-down view and large distortions. We tackle these challenges by decomposing the rendering process into several steps, including texture synthesis, pose construction, and neural image translation. For texture synthesis, we propose Ego-DPNet, a neural network that infers dense correspondences between the input fisheye images and an underlying parametric body model, in order to extract textures from egocentric inputs. In addition, to encode dynamic appearances, our approach also learns an implicit texture stack that captures detailed appearance variation across poses and viewpoints. For accurate pose generation, we first estimate body pose from the egocentric view using a parametric model. We then synthesize an external free-viewpoint pose image by projecting the parametric model to the user-specified target viewpoint. We next combine the target pose image and the textures into a joint feature image, which is transformed into the output color image using a neural image translation network. Experimental evaluations show that EgoRenderer is capable of generating realistic free-viewpoint avatars of a person wearing an egocentric camera. Comparisons to several baselines demonstrate the advantages of our approach.
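The pose-construction step above (projecting the parametric body model to a user-specified target viewpoint to form an external pose image) boils down to standard pinhole projection. The following numpy sketch projects 3D body-model joints into a target camera and rasterizes them into a simple keypoint image; the function names, camera parameters, and the dot-rasterization are illustrative assumptions, not the paper's rendering code.

```python
import numpy as np

def project_joints(joints, R, t, focal=500.0, center=(256.0, 256.0)):
    """Pinhole projection of 3D joints (N, 3) into a target view.

    R (3, 3) and t (3,) map world to camera coordinates; the camera
    looks down its +z axis. Returns (N, 2) pixel coordinates.
    """
    cam = joints @ R.T + t  # world -> camera coordinates
    x = focal * cam[:, 0] / cam[:, 2] + center[0]
    y = focal * cam[:, 1] / cam[:, 2] + center[1]
    return np.stack([x, y], axis=1)

def rasterize_pose(points2d, size=512, radius=3):
    """Draw each projected keypoint as a small square into a pose image."""
    img = np.zeros((size, size))
    for x, y in points2d:
        xi, yi = int(round(x)), int(round(y))
        if 0 <= xi < size and 0 <= yi < size:
            img[max(0, yi - radius):yi + radius + 1,
                max(0, xi - radius):xi + radius + 1] = 1.0
    return img
```

Because the target viewpoint enters only through R and t, the same estimated body pose can be re-rendered from any external camera, which is what enables the free-viewpoint avatars described above.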

Research paper: Hu, T., Sarkar, K., Liu, L., Zwicker, M., and Theobalt, C., "EgoRenderer: Rendering Human Avatars from Egocentric Camera Images", 2021. Link to the paper: arxiv.org/abs/2111.12685

Link to the project website: