diff --git a/README.md b/README.md
index 1ce7351..5f011a0 100644
--- a/README.md
+++ b/README.md
@@ -99,6 +99,20 @@ When processing KITTI images, the network uses the provided intrinsic matrix of
In all the other cases, we use the parameters of nuScenes cameras, with 1/1.8" CMOS sensors of size 7.2 x 5.4 mm.
The default focal length is 5.7 mm and this parameter can be modified using the argument `--focal`.
+**Webcam**
+You can use the webcam as input with the `--webcam` argument. By default, `--z_max` is set to 10 when using the webcam.
+For example, the following command:
+```
+python -m monoloco.run predict \
+--webcam \
+--output_types multi
+```
+yields the following result:
+
+
+
+With `social_distance` in `--activities`, only the keypoints will be shown, allowing total anonymity.
+
## A) 3D Localization
**Ground-truth comparison**
@@ -162,7 +176,7 @@ python3 -m monoloco.run predict --glob docs/005523*.png \ --output_types multi \

## B) Social Distancing (and Talking activity)
-To visualize social distancing compliance, simply add the argument `--social-distance` to the predict command. This visualization is not supported with a stereo camera.
+To visualize social distancing compliance, simply add the argument `social_distance` to `--activities`. This visualization is not supported with a stereo camera.
Threshold distance and radii (for F-formations) can be set using `--threshold-dist` and `--radii`, respectively.
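+For example, both parameters can be overridden as follows (the values below are illustrative, not the defaults):
+```
+python -m monoloco.run predict docs/frame0032.jpg \
+--activities social_distance \
+--threshold-dist 2 --radii 0.3 0.5 1 \
+--output_types front bird
+```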
For more info, run:
@@ -176,12 +190,27 @@ An example from the Collective Activity Dataset is provided below.
To visualize social distancing, run the command below:
```
python -m monoloco.run predict docs/frame0032.jpg \
---social_distance --output_types front bird
+--activities social_distance --output_types front bird
```
+## C) Raise hand detection
+To detect a raised hand, add `raise_hand` to `--activities`.
-## C) Orientation and Bounding Box dimensions
+For more info, run:
+`python -m monoloco.run predict --help`
+
+**Examples**
+
+To visualize raised hands with the webcam, run the command below:
+```
+python -m monoloco.run predict \
+--webcam \
+--activities raise_hand --output_types front bird
+```
+![webcam_raise_hand]()
+
+
+## D) Orientation and Bounding Box dimensions
The network also estimates orientation and box dimensions. Results are saved in a JSON file when using
`--output_types json`. At the moment, the only visualization that includes orientation is the social distancing one.
@@ -374,4 +403,4 @@ When using this library in your research, we will be happy if you cite us!
month = {October},
year = {2019}
}
-```
\ No newline at end of file
+```