This sample uses the Intel® RealSense™ SDK to scan and map a user’s face onto an existing 3D character model. The code is written in C++ and uses DirectX*. The sample requires Intel® RealSense™ SDK R5 or later.
Scanning
The face scanning module is significantly improved in the R5 SDK. Improvements include:
Improved color data
Improved scan consistency, achieved by providing hints that guide the user’s face to an ideal starting position
Face landmark data denoting the positions of key facial features
These improvements make integration into games and other 3D applications easier by producing more consistent results and requiring less manual cleanup.
The scanning implementation in this sample guides the user’s head to a correct position by using the hints provided from the Intel® RealSense™ SDK. Once the positioning requirements are met, the sample enables the start scan button.
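The sketch below illustrates that gating logic in plain C++. The ScanHint values and StartScanEnabled function are hypothetical stand-ins for this article; the SDK reports positioning hints through its own interface.

```cpp
#include <cstdio>

// Hypothetical stand-ins for the per-frame positioning hints; the SDK
// exposes these through its own interface, and the names below are
// illustrative only.
enum class ScanHint {
    FaceTooFar,
    FaceTooClose,
    FaceOffCenter,
    FaceInPosition   // the head meets the positioning requirements
};

// Enable the "start scan" button only once the head is positioned
// correctly, mirroring the gating behavior described above.
bool StartScanEnabled(ScanHint latestHint)
{
    return latestHint == ScanHint::FaceInPosition;
}

int main()
{
    ScanHint hint = ScanHint::FaceOffCenter; // e.g., polled each frame
    std::printf("Start scan enabled: %s\n", StartScanEnabled(hint) ? "yes" : "no");
}
```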
Because this sample focuses on the face mapping process, the GUI for directing the user during the scan is minimal. The interface for an end-user application should better direct the user to the correct starting position and provide instructions once the scan begins.
The output of the scan is an .OBJ model file and an associated texture, both of which are consumed in the face mapping phase of the sample.
Figure 1: The face scanning module provides a preview image that helps the user maximize scan coverage.
Figure 2: Resulting scanned mesh. The image on the far right shows landmark data. Note that the scan is only the face and is not the entire head. The color data is captured from the first frame of the scan and is projected onto the face mesh; this approach yields high color quality but results in texture stretching on the sides of the head.
Face Mapping
The second part of the sample consumes the user’s scanned face color and geometry data and blends it onto an existing head model. The challenge is to create a complete head from the scanned face. This technique displaces the geometry of an existing head model as opposed to stitching the scanned face mesh onto the head model. The shader performs vertex displacement and color blending between the head and face meshes. This blending can be performed every time the head model is rendered, or a single time by caching the results. This sample supports both approaches.
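A minimal sketch of how the two paths could be structured is shown below; FaceMapper and BakedHead are illustrative names, not types from the sample.

```cpp
#include <memory>

// Placeholder for the cached result: a displaced head mesh plus a single
// diffuse texture with all blending applied.
struct BakedHead { /* mesh and texture handles would live here */ };

// Illustrative sketch (these are not the sample's actual types) of the
// two rendering paths described above.
class FaceMapper {
public:
    // Path 1: evaluate the displacement and color blending in the shader
    // every time the head is rendered.
    void RenderDynamic() { /* bind face maps, draw head with blend shader */ }

    // Path 2: run the blend once, cache the result, and render the cached
    // mesh and texture on subsequent frames.
    void RenderCached()
    {
        if (!m_baked)
            m_baked = std::make_unique<BakedHead>(); // run the blend once
        /* draw m_baked's mesh with its composited texture */
    }

private:
    std::unique_ptr<BakedHead> m_baked;
};
```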
The high-level process of this mapping technique includes:
Render the scanned face mesh using an orthographic projection matrix to create a displacement map and a color map.
Create a matrix to project positions on the head model onto the generated displacement and color maps. This projection matrix accounts for scaling and translations determined by face landmark data (see the sketch after this list).
Render the head model using the projection matrix to map vertex positions to texture coordinates on the displacement and color maps.
Sample the generated maps to deform the vertices and color the pixels. The blending between the color map and the original head texture is controlled by an artist-created control map.
(Optional) Use the same displacement and blending methodologies to create a displaced mesh and single diffuse color texture that incorporates all blending effects.
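As a rough illustration of the projection matrix described above, the following DirectXMath sketch builds a head-position-to-texture-coordinate transform. The landmark scale and offset parameters, and the specific near/far values, are assumptions for this example rather than the sample's actual code.

```cpp
#include <DirectXMath.h>
using namespace DirectX;

// Builds a matrix that takes head-model-space positions to [0,1] texture
// coordinates on the displacement and color maps. The landmark-derived
// scale and offset are assumed inputs here; the sample derives them from
// the face landmark data.
XMMATRIX BuildFaceMapProjection(XMFLOAT3 landmarkOffset, float landmarkScale,
                                float mapWidth, float mapHeight)
{
    // Scale and translate head space so it lines up with the scanned face.
    XMMATRIX align =
        XMMatrixScaling(landmarkScale, landmarkScale, landmarkScale) *
        XMMatrixTranslation(landmarkOffset.x, landmarkOffset.y, landmarkOffset.z);

    // The same orthographic projection used when rendering the face maps.
    XMMATRIX ortho = XMMatrixOrthographicLH(mapWidth, mapHeight, 0.01f, 10.0f);

    // Remap clip-space x,y in [-1,1] to texture-space u,v in [0,1]
    // (v is flipped because texture v increases downward).
    XMMATRIX clipToUV = XMMatrixSet(0.5f,  0.0f, 0.0f, 0.0f,
                                    0.0f, -0.5f, 0.0f, 0.0f,
                                    0.0f,  0.0f, 1.0f, 0.0f,
                                    0.5f,  0.5f, 0.0f, 1.0f);

    // DirectXMath uses row vectors, so transforms apply left to right.
    return align * ortho * clipToUV;
}
```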
Art Assets
The following art assets are used in this sample:
Head model. The base model that the scanned face is applied to. The model benefits from higher resolution in the facial area, where the vertices are displaced.
Feature map. A texture mapped to the head model’s UVs that modulates the brightness of the head.
Detail map. A repeating texture that adds fine-grained detail on top of the feature map.
Color transfer map. Controls blending between two base skin tones, allowing different tones to be applied at different locations on the head. For example, the cheeks and ears can have a slightly different color than the rest of the face. (A per-pixel sketch of this blending follows the list.)
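As a rough per-pixel illustration of how these assets could combine, consider the following sketch. The function and parameter names are hypothetical, and the exact order of operations in the sample’s shader may differ.

```cpp
struct Color { float r, g, b; };

// Linear interpolation between two colors.
static Color Lerp(const Color& a, const Color& b, float t)
{
    return { a.r + (b.r - a.r) * t,
             a.g + (b.g - a.g) * t,
             a.b + (b.b - a.b) * t };
}

// Hypothetical per-pixel combination of the assets: the color transfer
// map selects between two base skin tones, the feature and detail maps
// modulate brightness, and the artist-created control map blends toward
// the scanned face color. Inputs and ordering are illustrative, not the
// sample's actual shader.
Color ShadeHeadPixel(Color skinTone0, Color skinTone1, float transfer,   // 0..1
                     float featureBrightness, float detailBrightness,
                     Color scannedFaceColor, float controlWeight)        // 0..1
{
    Color skin = Lerp(skinTone0, skinTone1, transfer);
    float brightness = featureBrightness * detailBrightness;
    Color head = { skin.r * brightness, skin.g * brightness, skin.b * brightness };
    return Lerp(head, scannedFaceColor, controlWeight);
}
```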