diff --git a/README.md b/README.md
index 13636d6..d1b59c4 100644
--- a/README.md
+++ b/README.md
@@ -72,29 +72,52 @@ To check all the options:
or check the file `monoloco/run.py`
# Predictions
-# TODO from here
For a quick setup, download the pifpaf and MonoLoco++ / MonStereo models from
[here](https://drive.google.com/drive/folders/1jZToVMBEZQMdLB5BAIq2CdCLP5kzNo9t?usp=sharing) and save them into `data/models`.
-## Monocular 3D Localization
+## A) 3D Localization
The predict script receives an image (or an entire folder using glob expressions),
-calls PifPaf for 2d human pose detection over the image
-and runs Monoloco++ for 3d location of the detected poses.
+calls PifPaf for 2D human pose detection over the image
+and runs MonoLoco++ or MonStereo for 3D localization, social distancing and/or orientation estimation.
+
+**Which Network**
The argument `--net` selects whether to save pifpaf, MonoLoco++ or MonStereo outputs.
-You can check all commands for Pifpaf at [openpifpaf](https://github.com/vita-epfl/openpifpaf).
-Output options include json files and/or visualization of the predictions on the image in *frontal mode*,
-*birds-eye-view mode* or *combined mode* and can be specified with `--output_types`
+- select `--net monstereo` if you have stereo images
+- select `--net monoloco_pp` if you have monocular (single) images
+- select `--net pifpaf` if you are interested in 2D keypoint outputs
-Ground-truth KITTI files for comparing results can be downloaded from
-[here](https://drive.google.com/drive/folders/1jZToVMBEZQMdLB5BAIq2CdCLP5kzNo9t?usp=sharing)
-(file called *names-kitti*) and should be saved into `data/arrays`
-Ground-truth files can also be generated, more info in the preprocessing section.
+**Which Visualization**
+- select `--output_types multi` if you want to visualize both the frontal view and the bird's-eye view in the same picture
+- select `--output_types bird front` if you want separate pictures for the two views (or just one of them)
+- select `--output_types json` if you'd like the output json file
+
+These options can be combined.
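+For instance, to save both the combined picture and the json file in one run, the flag might look like this (a sketch, assuming the values can be listed together just as with `bird front`):
+```
+--output_types multi json
+```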
+
+**Focal Length and Camera Parameters**
+Absolute distances are affected by the camera intrinsic parameters.
+When processing KITTI images, the network uses the provided intrinsic matrix of the dataset.
+In all the other cases, we use the parameters of nuScenes cameras, with 1/1.8" CMOS sensors of size 7.2 x 5.4 mm.
+The default focal length is 5.7mm and this parameter can be modified using the argument `--focal`.
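+For instance, a sketch of overriding the focal length on the monocular example below (the value `6.0` is only a placeholder, assumed to be in millimetres like the 5.7mm default; the model path and output folder are reused from the other examples):
+```
+python -m monoloco.run predict \
+docs/002282.png \
+--net monoloco_pp \
+--output_types multi \
+--model data/models/monoloco_pp-201203-1424.pkl \
+--focal 6.0 -o data/output
+```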
+
+**Ground-truth comparison**
+If you provide a ground-truth json file to compare the predictions of the network,
+the script will match every detection using the Intersection over Union (IoU) metric.
+The ground-truth file can be generated using the subparser `prep` and passed to the predict command with the argument `--path_gt`.
+As this step requires running the pose detector over all the training images and saving the annotations,
+we provide the resulting json file for the category *pedestrians* on
+[Google Drive](https://drive.google.com/file/d/1e-wXTO460ip_Je2NdXojxrOrJ-Oirlgh/view?usp=sharing);
+download it and save it into `data/arrays`.
+
+If a ground-truth json file is not available, the argument `--show_all` makes it possible to
+show all the predictions for the image.
+
+**Monocular examples**
For an example image, run the following command:
```
-python -m monstereo.run predict \
+python -m monoloco.run predict \
docs/002282.png \
--net monoloco_pp \
--output_types multi \
@@ -105,78 +128,79 @@ docs/002282.png \
--n_dropout <50 to include epistemic uncertainty, 0 otherwise>
```
-
+
To show all the instances estimated by MonoLoco, add the argument `--show_all` to the above command.
-
+
It is also possible to run [openpifpaf](https://github.com/vita-epfl/openpifpaf) directly
by specifying the network with the argument `--net pifpaf`. All the other pifpaf arguments are also supported
and can be checked with `python -m monstereo.run predict --help`.
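+A minimal sketch of a pifpaf-only run (it assumes the defaults are enough; any output-related pifpaf arguments can be appended as mentioned above):
+```
+python -m monoloco.run predict \
+docs/002282.png \
+--net pifpaf
+```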
-
+
-### Focal Length and Camera Parameters
-Absolute distances are affected by the camera intrinsic parameters.
-When processing KITTI images, the network uses the provided intrinsic matrix of the dataset.
-In all the other cases, we use the parameters of nuScenes cameras, with "1/1.8'' CMOS sensors of size 7.2 x 5.4 mm.
-The default focal length is 5.7mm and this parameter can be modified using the argument `--focal`.
+**Stereo Examples**
+To run MonStereo on stereo images, make sure the stereo pairs have the following name structure:
+- Left image: \<name>.\<extension>
+- Right image: \<name>**_r**.\<extension>
-## Social Distancing
-To visualize social distancing compliance, simply add the argument `--social-distance` to the predict command.
+(The exact suffix does not matter as long as the images are ordered, e.g. `000840.png` and `000840_r.png`.)
+You can load one or more image pairs using glob expressions. For example:
+
+```
+python3 -m monoloco.run predict \
+--glob docs/000840*.png --output_types multi \
+--model data/models/ms-200710-1511.pkl \
+--path_gt data/arrays/names-kitti-200615-1022.json \
+-o data/output --scale 2
+```
+
+Another example with a different image pair:
+```
+python3 -m monoloco.run predict \
+--glob docs/005523*.png \
+--output_types multi \
+--model data/models/ms-200710-1511.pkl \
+--path_gt data/arrays/names-kitti-200615-1022.json \
+-o data/output --scale 2
+```
+
+
+
+## B) Social Distancing (and Talking activity)
+To visualize social distancing compliance, simply add the argument `--social-distance` to the predict command. This visualization is only supported with `--net monoloco_pp` at the moment.
+Threshold distance and radii (for F-formations) can be set using `--threshold-dist` and `--radii`, respectively.
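+For instance, appended to the social-distancing command shown below, the flags might look like this (the numeric values are placeholders, not recommended settings, and `--radii` is assumed to accept a list of values):
+```
+--threshold-dist 2.5 --radii 0.3 0.5 1.0
+```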
+
+For more info, run:
+`python -m monoloco.run predict --help`
+
+**Examples**
An example from the Collective Activity Dataset is provided below.
-
+
To visualize social distancing, run the command below:
```
-python -m monstereo.run predict \
+python -m monoloco.run predict \
docs/frame0038.jpg \
--net monoloco_pp \
--social_distance \
--output_types front bird --show_all \
--model data/models/monoloco_pp-201203-1424.pkl -o