# Perceiving Humans: from Monocular 3D Localization to Social Distancing
> Perceiving humans in the context of Intelligent Transportation Systems (ITS)
> often relies on multiple cameras or expensive LiDAR sensors.
> In this work, we present a new cost-effective vision-based method that perceives humans' locations in 3D
> and their body orientation from a single image.
> We address the challenges related to the ill-posed monocular 3D tasks by proposing a deep learning method
> that predicts confidence intervals in contrast to point estimates. Our neural network architecture estimates
> humans' 3D body locations and their orientations with a measure of uncertainty.
> Our vision-based system (i) is privacy-safe, (ii) works with any fixed or moving cameras,
> and (iii) does not rely on ground-plane estimation.
> We demonstrate the performance of our method with respect to three applications:
> locating humans in 3D, detecting social interactions,
> and verifying compliance with the safety measures introduced in response to the COVID-19 outbreak.
> Indeed, we show that we can rethink the concept of “social distancing” as a form of social interaction,
> in contrast to a simple location-based rule. We publicly share the source code in support of an open science mission.
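The abstract's shift from point estimates to confidence intervals can be made concrete with a small sketch. The snippet below is an illustrative example, not the released implementation: it assumes the network regresses a distance together with a Laplace spread parameter and is trained with the corresponding negative log-likelihood, so the predicted spread directly yields a confidence interval at inference time. The function name and the choice of a Laplace likelihood are assumptions for illustration.

```python
import torch

def laplace_nll(pred_dist, pred_log_b, true_dist):
    """Negative log-likelihood of a Laplace distribution over distance.

    pred_dist  : predicted distance to the person (metres)
    pred_log_b : predicted log of the Laplace spread parameter b
    true_dist  : ground-truth distance (metres)
    """
    b = torch.exp(pred_log_b)  # spread must stay positive
    return (torch.abs(true_dist - pred_dist) / b + pred_log_b).mean()

# At inference time the predicted spread b defines a confidence interval:
# for a Laplace distribution, +/- b*ln(10) around the prediction covers ~90%.
```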
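The final claim, treating “social distancing” as a form of social interaction rather than a pure distance rule, can also be sketched. With a 3D location and a body orientation available for each person, an interaction test can combine proximity with whether the two people are roughly facing each other. The function below is a hypothetical illustration: the thresholds, the ground-plane coordinate convention (x right, z forward), and the function name are assumptions, not part of the published method.

```python
import numpy as np

def is_social_interaction(xyz_a, xyz_b, yaw_a, yaw_b,
                          max_dist=2.0, max_angle=np.radians(45)):
    """Flag two people as interacting (rather than merely close).

    xyz_a, xyz_b : 3D locations (metres) estimated from a single image
    yaw_a, yaw_b : body orientations (radians) in the same ground frame
    """
    # 1) Proximity: Euclidean distance on the ground plane.
    dx, dz = xyz_b[0] - xyz_a[0], xyz_b[2] - xyz_a[2]
    if np.hypot(dx, dz) > max_dist:
        return False

    # 2) Mutual orientation: both roughly facing each other.
    angle_ab = np.arctan2(dz, dx)    # direction from A towards B
    angle_ba = np.arctan2(-dz, -dx)  # direction from B towards A
    facing_a = abs(np.angle(np.exp(1j * (yaw_a - angle_ab)))) < max_angle
    facing_b = abs(np.angle(np.exp(1j * (yaw_b - angle_ba)))) < max_angle
    return facing_a and facing_b
```

A plain location-based rule would stop at the distance check; the orientation term is what turns the test into an interaction criterion.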