Telehealth has the potential to offset the high demand for care during public health emergencies, such as the ongoing COVID-19 pandemic, and in rural locations where the scarcity of health services and qualified treatment providers makes care difficult, if not impossible, to obtain. Beyond communication, the use of the existing sensor infrastructure in modern smart devices for medical tests is compelling. Remote photoplethysmography (rPPG) - the problem of non-invasively estimating blood volume variations in microvascular tissue from video - is well suited to these situations.
Over the past few years, a number of research groups have made rapid advances in remote PPG methods for estimating heart rate from digital video, obtaining impressive results. How these methods compare in naturalistic conditions, where spontaneous movements, facial expressions, or illumination changes are present, is relatively unknown, as most previous benchmarking efforts have focused on posed situations. No commonly accepted evaluation protocol exists for estimating vital signs during spontaneous behavior with which to compare them.
To enable comparisons among alternative methods, we present the 1st Vision for Vitals Workshop & Challenge (V4V 2021). This topic is germane to both the computer vision and multimedia communities. For computer vision, it is an exciting approach to longstanding limitations of vital signs estimation. For multimedia, remote vital signs estimation would enable more powerful applications.
The main track is intended to bring together computer vision researchers whose work is related to vision-based vital signs estimation. We are soliciting original contributions that address a wide range of theoretical and application issues of remote vital signs estimation, including but not limited to:
The V4V Challenge evaluates remote PPG methods for vital signs estimation on a new, large corpus of high-resolution face videos annotated with vital signs from contact sensors. The goal of the challenge is to reconstruct the vital signs of the subjects from the video sources. Participants will receive an annotated training set and a test set without annotations.
There are two sub-tracks in the challenge: (1) heart rate (HR) estimation and (2) respiration rate (RR) estimation. All participants are required to submit predictions to the HR sub-challenge. Although participation in the RR sub-challenge is optional, we encourage all participants to take part in both. To learn more, please head over to this page on Codalab.
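For either sub-track, a common baseline for turning a recovered pulse (or respiration) signal into a rate is to pick the dominant frequency within a physiologically plausible band. The sketch below is a hypothetical illustration of that frequency-domain approach, assuming a 1-D trace and a known sampling rate; it is not the challenge's evaluation code.

```python
import numpy as np

def estimate_rate_bpm(signal, fs, min_bpm=40.0, max_bpm=180.0):
    """Estimate a rate (in beats/breaths per minute) from a 1-D trace.

    Takes the dominant frequency of the power spectrum within
    [min_bpm, max_bpm]. For respiration rate, a band such as
    8-30 bpm would be used instead of the HR defaults shown here.
    """
    sig = np.asarray(signal) - np.mean(signal)
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)
    power = np.abs(np.fft.rfft(sig)) ** 2
    # Restrict peak picking to the plausible frequency band.
    band = (freqs >= min_bpm / 60.0) & (freqs <= max_bpm / 60.0)
    peak_hz = freqs[band][np.argmax(power[band])]
    return peak_hz * 60.0

# Synthetic 72-bpm (1.2 Hz) pulse sampled at 30 fps for 10 s:
rng = np.random.default_rng(0)
fs = 30.0
t = np.arange(0, 10, 1.0 / fs)
trace = np.sin(2 * np.pi * 1.2 * t) + 0.1 * rng.standard_normal(len(t))
print(estimate_rate_bpm(trace, fs))  # ≈ 72
```

The band restriction matters in practice: motion and illumination artifacts often dominate the raw spectrum outside the physiological range.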
The datasets may be used for the V4V Challenge of ICCV 2021 only. The recipient of the datasets must be a full-time faculty member, researcher, or employee of an organization (not a student) and must agree to the terms and conditions listed on Codalab.
If you are interested in downloading the V4V dataset, please download and sign the EULA and email a scanned copy to lijun(at)cs(dot)binghamton(dot)edu, zli191(at)binghamton(dot)edu, and laszlojeni(at)cmu(dot)edu.
Please visit the Codalab page where the competition is hosted. Additionally, the evaluation code can be downloaded here for local use by participants. The requirements file for setting up the local environment can be downloaded from the same repository. Please report any bugs in the evaluation code via the repository's issue tracker.
Please note that, along with your submission to the Codalab competition page, you are required to submit a short paper describing your method in order to be included in the leaderboard. For paper submission to the workshop, visit our CMT page. When submitting your paper to CMT, please also email arevanur(at)andrew(dot)cmu(dot)edu with your Codalab username/team name and your workshop paper title, to make it easier to link your paper to your Codalab submissions.
Challenge paper submissions must be written in English and submitted in PDF format. Please refer to the ICCV submission guidelines for instructions regarding formatting, templates, and policies. Submissions will be reviewed by the program committee, and selected papers will be published in the ICCV Workshop proceedings.