Efficient monocular point-of-gaze estimation on multiple screens and 3D face tracking for driver behaviour analysis

Authors: Jon Goenetxea Imaz, Luis Unzueta Irurtia, Juan Diego Ortega, Unai Elordi Hidalgo, Oihana Otaegui Madurga

Date: 15.10.2018


Abstract

In this work, we present an efficient monocular method to estimate the point of gaze (PoG) and the face in the 3D space of multi-screen driving simulator users, for driver behaviour analysis. It consists of a hybrid procedure that combines appearance- and model-based computer vision techniques to extract 3D geometric representations of the user's face and gaze directions. These are placed in the same virtual 3D space as the monocular camera and the screens. In this context, the intersection of the overall 3D gaze vector with the plane containing each screen is calculated with an efficient line-plane intersection procedure. Finally, a point-in-polygon strategy is applied to determine whether any of the calculated PoGs lies within a screen; if not, the PoG on the plane of the closest screen is provided. Experiments show that the obtained PoG accuracy is reasonable for automotive applications, even in the uncalibrated case, compared to other state-of-the-art approaches that require user calibration. A further advantage is that the method can be integrated into devices with low computational capabilities, such as smartphones, with sufficient robustness for driver behaviour analysis.
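
To illustrate the geometric core described above (line-plane intersection followed by a point-in-polygon check), the sketch below shows one possible implementation for a rectangular screen. It is a minimal sketch, not the authors' actual code: the function name, the numpy-based formulation and the screen-corner ordering are assumptions made for illustration.

import numpy as np

def gaze_screen_intersection(eye_pos, gaze_dir, screen_corners):
    """Intersect a 3D gaze ray with the plane of a rectangular screen.

    eye_pos        : (3,) gaze-ray origin in the shared virtual 3D space
    gaze_dir       : (3,) gaze direction (need not be normalised)
    screen_corners : (4, 3) corners ordered top-left, top-right,
                     bottom-right, bottom-left (assumed convention)

    Returns (point, inside): the intersection with the screen plane
    (None if the ray is parallel to it) and whether it falls within
    the screen rectangle.
    """
    tl, tr, br, bl = screen_corners
    u = tr - tl                       # horizontal screen axis
    v = bl - tl                       # vertical screen axis
    n = np.cross(u, v)                # plane normal

    denom = np.dot(n, gaze_dir)
    if abs(denom) < 1e-9:             # gaze ray parallel to the screen plane
        return None, False

    # Line-plane intersection: find t so that eye_pos + t * gaze_dir lies on the plane
    t = np.dot(n, tl - eye_pos) / denom
    point = eye_pos + t * gaze_dir

    # Point-in-polygon test for a rectangle: project onto the screen axes
    d = point - tl
    s = np.dot(d, u) / np.dot(u, u)   # normalised horizontal coordinate
    r = np.dot(d, v) / np.dot(v, v)   # normalised vertical coordinate
    inside = (t > 0) and (0.0 <= s <= 1.0) and (0.0 <= r <= 1.0)
    return point, inside

In a multi-screen setup, this test would be run for every screen; if no intersection lies inside a screen rectangle, the intersection on the plane of the closest screen can be reported instead, as described in the abstract.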

BibTeX

@article{goenetxea2018efficient,
  title = {Efficient monocular point-of-gaze estimation on multiple screens and 3D face tracking for driver behaviour analysis},
  pages = {118-125},
  keywords = {face tracking, point of gaze estimation, driver behaviour analysis},
  abstract = {In this work, we present an efficient monocular method to estimate the point of gaze (PoG) and the face in the 3D space of multi-screen driving simulator users, for driver behaviour analysis. It consists of a hybrid procedure that combines appearance- and model-based computer vision techniques to extract 3D geometric representations of the user's face and gaze directions. These are placed in the same virtual 3D space as the monocular camera and the screens. In this context, the intersection of the overall 3D gaze vector with the plane containing each screen is calculated with an efficient line-plane intersection procedure. Finally, a point-in-polygon strategy is applied to determine whether any of the calculated PoGs lies within a screen; if not, the PoG on the plane of the closest screen is provided. Experiments show that the obtained PoG accuracy is reasonable for automotive applications, even in the uncalibrated case, compared to other state-of-the-art approaches that require user calibration. A further advantage is that the method can be integrated into devices with low computational capabilities, such as smartphones, with sufficient robustness for driver behaviour analysis.},
  date = {2018-10-15},
}