diff --git a/README.md b/README.md
index 17512b3..047fdb6 100644
--- a/README.md
+++ b/README.md
@@ -72,26 +72,27 @@ To check all the options:
or check the file `monoloco/run.py`
# Predictions
-For a quick setup download a pifpaf and MonoLoco++ / MonStereo models from
-[here](https://drive.google.com/drive/folders/1jZToVMBEZQMdLB5BAIq2CdCLP5kzNo9t?usp=sharing) and save them into `data/models`.
-## A) 3D Localization
-The predict script receives an image (or an entire folder using glob expressions),
+The software receives an image (or an entire folder using glob expressions),
calls PifPaf for 2D human pose detection over the image
and runs MonoLoco++ or MonStereo for 3D localization, social distancing and/or orientation.
-**Which Network**
-The command `--net` defines if saving pifpaf outputs, MonoLoco++ outputs or MonStereo ones.
+**Which Modality**
+The argument `--mode` defines which network to run.
-- select `--net monstereo` if you have stereo images
-- select `--net monoloco_pp` if you have monocular (single) images
-- select `--net pifpaf` if you are interested in 2D keypoint outputs
+- select `--mode mono` (default) to predict 3D localization on monocular image(s)
+- select `--mode stereo` for stereo images
+- select `--mode keypoints` if you are just interested in the 2D keypoints from OpenPifPaf
+
+Models are downloaded automatically. To use a specific model, pass it with the argument `--model`. Additional models can be downloaded from [here](https://drive.google.com/drive/folders/1jZToVMBEZQMdLB5BAIq2CdCLP5kzNo9t?usp=sharing).
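+
+For example, to run the default monocular network with an explicitly chosen model (the filename below is the MonoLoco++ checkpoint previously distributed on Google Drive, shown for illustration):
+```
+python -m monoloco.run predict docs/002282.png --mode mono --model data/models/monoloco_pp-201203-1424.pkl
+```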
**Which Visualization**
- select `--output_types multi` if you want to visualize both the frontal view and the bird's eye view in the same picture
- select `--output_types bird front` if you want separate pictures for the two views, or just one of them
- select `--output_types json` if you'd like the output json file
+If you select `--mode keypoints`, use the standard OpenPifPaf arguments.
Those options can be combined
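+
+For example, since the options can be combined, here is a sketch that saves both the combined frontal/bird view and the json file for the same image:
+```
+python -m monoloco.run predict docs/002282.png --output_types multi json
+```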
**Focal Length and Camera Parameters**
@@ -100,31 +101,24 @@ When processing KITTI images, the network uses the provided intrinsic matrix of
In all the other cases, we use the parameters of nuScenes cameras, with 1/1.8" CMOS sensors of size 7.2 x 5.4 mm.
The default focal length is 5.7 mm and this parameter can be modified using the argument `--focal`.
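+
+For example, to override the default focal length for a camera with an 8 mm lens (the value here is only illustrative):
+```
+python -m monoloco.run predict docs/002282.png --focal 8
+```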
+## A) 3D Localization
+
**Ground-truth comparison**
If you provide a ground-truth json file to compare the predictions of the network with,
the script will match every detection using the Intersection over Union metric.
- The ground truth file can be generated using the subparser `prep` and called with the command `--path_gt`.
-As this step requires running the pose detector over all the training images and save the annotations, we
-provide the resulting json file for the category *pedestrians* from
-[Google Drive](https://drive.google.com/file/d/1e-wXTO460ip_Je2NdXojxrOrJ-Oirlgh/view?usp=sharing)
-and save it into `data/arrays`.
-
-If a ground-truth json file is not available, with the command `--show_all`, is possible to
-show all the prediction for the image
+ The ground-truth file can be generated using the subparser `prep`, or downloaded directly from [Google Drive](https://drive.google.com/file/d/1e-wXTO460ip_Je2NdXojxrOrJ-Oirlgh/view?usp=sharing),
+ and passed with the argument `--path_gt`.
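+
+For example, to inspect the available preprocessing options before generating the ground-truth file (a minimal sketch relying only on the standard argparse help; the subparser's exact flags are not listed here):
+```
+python -m monoloco.run prep --help
+```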
+
**Monocular examples**
For an example image, run the following command:
```
-python -m monoloco.run predict \
-docs/002282.png \
---net monoloco_pp \
---output_types multi \
---model data/models/monoloco_pp-201203-1424.pkl \
---path_gt data/arrays/names-kitti-200615-1022.json \
+python -m monoloco.run predict docs/002282.png \
+--path_gt data/arrays/names-kitti-200615-1022.json \
-o