diff --git a/.github/workflows/tests.yml b/.github/workflows/tests.yml
index 8dd34fc..80adf7b 100644
--- a/.github/workflows/tests.yml
+++ b/.github/workflows/tests.yml
@@ -5,7 +5,22 @@
name: Tests
-on: [push, pull_request]
+on:
+ push:
+ paths:
+ - 'monoloco/**'
+ - 'test/**'
+ - 'docs/00*.png'
+ - 'docs/frame0032.jpg'
+ - '.github/workflows/tests.yml'
+
+ pull_request:
+ paths:
+ - 'monoloco/**'
+ - 'test/**'
+ - 'docs/00*.png'
+ - 'docs/frame0032.jpg'
+ - '.github/workflows/tests.yml'
jobs:
build:
diff --git a/README.md b/README.md
index 7f936fb..cf44dd2 100644
--- a/README.md
+++ b/README.md
@@ -2,12 +2,14 @@
Continuously tested on Linux, MacOS and Windows: [](https://github.com/vita-epfl/monoloco/actions?query=workflow%3ATests)
+
-
-
+
+
This library is based on three research projects for monocular/stereo 3D human localization (detection), body orientation, and social distancing. Check the __video teaser__ of the library on [__YouTube__](https://www.youtube.com/watch?v=O5zhzi8mwJ4).
+
---
> __MonStereo: When Monocular and Stereo Meet at the Tail of 3D Human Localization__
@@ -34,7 +36,12 @@ __[Article](https://arxiv.org/abs/2009.00984)__ &nbs
__[Article](https://arxiv.org/abs/1906.06059)__ __[Citation](#Citation)__ __[Video](https://www.youtube.com/watch?v=ii0fqerQrec)__
-
+
+## Library Overview
+A visual illustration of the library components:
+
+
+
## License
All projects are built upon [Openpifpaf](https://github.com/vita-epfl/openpifpaf) for the 2D keypoints and share the AGPL Licence.
@@ -52,6 +59,7 @@ For quick installation, do not clone this repository, make sure there is no fold
```
pip3 install monoloco
+pip3 install matplotlib
```
For development of the source code itself, you need to clone this repository and then:
@@ -102,27 +110,6 @@ When processing KITTI images, the network uses the provided intrinsic matrix of
In all the other cases, we use the parameters of nuScenes cameras, with "1/1.8'' CMOS sensors of size 7.2 x 5.4 mm.
The default focal length is 5.7mm and this parameter can be modified using the argument `--focal`.
-## Webcam
-
-You can use the webcam as input with the `--webcam` argument. By default, `--z_max` is set to 10 and `--long-edge` to 144 when using the webcam. If multiple webcams are plugged in, you can choose between them with `--camera`; for instance, to use the second camera, add `--camera 1`.
-A few examples are shown below, obtained with the following commands:
-
-For the first and last visualization:
-```
-python -m monoloco.run predict \
---webcam \
---activities raise_hand
-```
-For the second one:
-```
-python -m monoloco.run predict \
---webcam \
---activities raise_hand social_distance
-```
-
-
-
-With `social_distance` in `--activities`, only the keypoints are shown, with no image, allowing total anonymity.
## A) 3D Localization
@@ -138,7 +125,7 @@ If you provide a ground-truth json file to compare the predictions of the networ
For an example image, run the following command:
```sh
-python -m monoloco.run predict docs/002282.png \
+python3 -m monoloco.run predict docs/002282.png \
--path_gt names-kitti-200615-1022.json \
-o