Input to the model
In this track, participants build their models from 3D input data. The inputs to the model are:
- A 3D garment in a static T-pose, provided as a mesh that defines the garment surface,
- SMPL body shape parameters,
- A sequence of SMPL pose parameters.
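The inputs above can be sketched as arrays. This is a minimal illustration only: all sizes are hypothetical, and the 10-dimensional shape and 72-dimensional pose vectors assume the standard SMPL parameterization (10 shape coefficients; 24 joints × 3 axis-angle values).

```python
import numpy as np

# Illustrative input shapes (hypothetical sizes, standard SMPL dimensions assumed).
num_frames = 300                                          # hypothetical sequence length
garment_vertices = np.zeros((4000, 3), dtype=np.float32)  # static T-pose garment mesh vertices
garment_faces = np.zeros((7800, 3), dtype=np.int64)       # triangulated faces (vertex indices)
betas = np.zeros(10, dtype=np.float32)                    # SMPL body shape parameters
poses = np.zeros((num_frames, 72), dtype=np.float32)      # sequence of SMPL pose parameters

print(garment_vertices.shape, betas.shape, poses.shape)
```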
The training set also contains other metadata, such as fabric type and garment tightness. Participants may use these during training as well; however, note that this information is not available in the test phase. Warning: zRot (check here for the definition) is not necessary in this track.
The goal of this track is to build a generative model that reconstructs the 3D garment conditioned on the pose. Participants are free to condition their models on a single pose or a sequence of poses.
Participants are also free to apply any preprocessing they want to the data. For instance, 3D garments can be represented as a mesh, a point cloud, or volumetric data.
The output of the model is a 3D reconstructed garment per frame. Note that ground-truth garments are relative to the SMPL root joint. As mentioned earlier, participants are free to predict 3D garments as a mesh, a point cloud, or volumetric data. However, evaluation is done on the mesh format (3D vertices + faces). Therefore, if predictions are point clouds or volumetric data, participants must perform an additional post-processing step to convert them to the required format.
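As one possible post-processing route for volumetric predictions, a marching-cubes step can convert a volume into the required vertices + triangulated faces. The sketch below uses scikit-image on a synthetic signed distance field standing in for a real model output; the grid size and the sphere are purely illustrative assumptions.

```python
import numpy as np
from skimage.measure import marching_cubes  # scikit-image; one possible meshing tool

# Hypothetical volumetric prediction: a signed distance field of a sphere
# on a 32^3 grid (a stand-in for an actual model output).
grid = np.linspace(-1.0, 1.0, 32)
x, y, z = np.meshgrid(grid, grid, grid, indexing="ij")
sdf = np.sqrt(x**2 + y**2 + z**2) - 0.5  # negative inside the sphere

# Extract the zero level set as a triangulated mesh (vertices + faces),
# which is the format used for evaluation.
vertices, faces, normals, values = marching_cubes(sdf, level=0.0)
print(vertices.shape, faces.shape)  # faces hold (N, 3) triangle vertex indices
```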
Providing a submission
A submission is a zip file with the following structure:
- <sequence 1>
    - <garment 1>.bin <======== a file to store face/topology data, i.e. vertex indices of each triangulated face
    - <garment 1>.pc16 <======== a file to store vertex locations for the whole sequence
    - <garment 2>.bin
    - <garment 2>.pc16
- <sequence 2>
where "sequence i" and "garment j" must have the same names as in the validation/test data. As can be seen, the garment topology and number of vertices are fixed across the whole sequence. Participants must use the functions provided in the starting kit to write the "bin" and "pc16" files to ensure bug-free submissions. Note that mesh data must be triangulated.
Important: Participants can use the whole sequence in the val/test set to predict garments, but they must save only every 10th frame in their submissions. For instance, if <sequence 1> has 300 frames, the following frames must be written to the "pc16" file: [10, 20, 30, ..., 300].
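The every-10th-frame rule can be implemented with a simple slice. This is a minimal sketch, assuming predictions are stored as a (num_frames, num_vertices, 3) array and that frame numbering starts at 1 (so frame 10 lives at index 9); both the array layout and the sizes below are hypothetical.

```python
import numpy as np

# Hypothetical predictions for a 300-frame sequence (sizes are placeholders).
num_frames, num_vertices = 300, 5000
predictions = np.zeros((num_frames, num_vertices, 3), dtype=np.float32)

# Keep frames 10, 20, ..., 300 -> zero-based indices 9, 19, ..., 299.
kept = predictions[9::10]
print(kept.shape)  # 30 frames to be written to the "pc16" file
```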
Evaluation is based on the surface-to-surface metric defined here. Note that evaluation is outfit-wise, i.e. garments are merged into an outfit before evaluation. To avoid heavy evaluation processing (which can block compute workers for hours), we limit the number of vertices per outfit to a maximum of 20K. The evaluation code penalizes outfits with more than 20K vertices: they are skipped and assigned a high error.
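It is worth checking the outfit vertex budget before submitting. The sketch below counts vertices across a merged outfit; the garment names and sizes are placeholders, not actual challenge data, and only the 20K limit comes from the rules above.

```python
import numpy as np

# Hypothetical per-garment vertex arrays for one outfit (placeholder data).
garments = {
    "Tshirt": np.zeros((9000, 3), dtype=np.float32),
    "Trousers": np.zeros((8000, 3), dtype=np.float32),
}

MAX_OUTFIT_VERTICES = 20_000  # limit stated by the organizers

# Outfit-wise count: garments are merged before evaluation.
total = sum(v.shape[0] for v in garments.values())
assert total <= MAX_OUTFIT_VERTICES, (
    f"Outfit has {total} vertices; it would be skipped and assigned a high error."
)
print(total)  # 17000, within the limit
```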