diff --git a/README.md b/README.md
index c93c032..9a23681 100644
--- a/README.md
+++ b/README.md
@@ -3,60 +3,322 @@
-This repository contains the code for two research projects:
-
-1. **Perceiving Humans: from Monocular 3D Localization to Social Distancing (MonoLoco++)**
- [README](https://github.com/vita-epfl/monstereo/blob/master/docs/MonoLoco%2B%2B.md) & [Article](https://arxiv.org/abs/2009.00984)
-
- 
-
- 
-
-
-2. **MonStereo: When Monocular and Stereo Meet at the Tail of 3D Human Localization**
-[README](https://github.com/vita-epfl/monstereo/blob/master/docs/MonStereo.md) & [Article](https://arxiv.org/abs/2008.10913)
+This library is based on three research projects:
+
+> __MonStereo: When Monocular and Stereo Meet at the Tail of 3D Human Localization__
+> _[L. Bertoni](https://scholar.google.com/citations?user=f-4YHeMAAAAJ&hl=en), [S. Kreiss](https://www.svenkreiss.com),
[T. Mordan](https://people.epfl.ch/taylor.mordan/?lang=en), [A. Alahi](https://scholar.google.com/citations?user=UIhXQ64AAAAJ&hl=en)_, ICRA 2021 --> [Article](https://arxiv.org/abs/2008.10913), [Video](#Todo)
- 
+
-Both projects has been built upon the CVPR'19 project [Openpifpaf](https://github.com/vita-epfl/openpifpaf)
-for 2D pose estimation and the ICCV'19 project [MonoLoco](https://github.com/vita-epfl/monoloco) for monocular 3D localization.
-All projects share the AGPL Licence.
+---
+
+
+
+> __Perceiving Humans: from Monocular 3D Localization to Social Distancing__
+> _[L. Bertoni](https://scholar.google.com/citations?user=f-4YHeMAAAAJ&hl=en), [S. Kreiss](https://www.svenkreiss.com),
+[A. Alahi](https://scholar.google.com/citations?user=UIhXQ64AAAAJ&hl=en)_, T-ITS 2021 --> [Article](https://arxiv.org/abs/2009.00984), [Video](https://www.youtube.com/watch?v=r32UxHFAJ2M)
+
+
+
+---
+
+
+> __MonoLoco: Monocular 3D Pedestrian Localization and Uncertainty Estimation__
+> _[L. Bertoni](https://scholar.google.com/citations?user=f-4YHeMAAAAJ&hl=en), [S. Kreiss](https://www.svenkreiss.com), [A. Alahi](https://scholar.google.com/citations?user=UIhXQ64AAAAJ&hl=en)_, ICCV 2019 --> [Article](https://arxiv.org/abs/1906.06059), [Video](https://www.youtube.com/watch?v=ii0fqerQrec)
+
+
+
+All projects are built upon [Openpifpaf](https://github.com/vita-epfl/openpifpaf) for 2D keypoint detection and share the AGPL licence.
-# Setup
-Installation steps are the same for both projects.
-
-### Install
-The installation has been tested on OSX and Linux operating systems, with Python 3.6 or Python 3.7.
-Packages have been installed with pip and virtual environments.
-For quick installation, do not clone this repository,
-and make sure there is no folder named monstereo in your current directory.
+# Quick setup
A GPU is not required, yet highly recommended for real-time performance.
-MonoLoco++ and MonStereo can be installed as a single package, by:
+The installation has been tested on OSX and Linux operating systems, with Python 3.6, 3.7, and 3.8.
+Packages have been installed with pip inside virtual environments.
+
+For quick installation, do not clone this repository, make sure there is no folder named `monoloco` in your current directory, and run:
+
```
-pip3 install monstereo
+pip3 install monoloco
```
-For development of the monstereo source code itself, you need to clone this repository and then:
+For development of the source code itself, clone this repository and then run:
```
pip3 install wheel
-cd monstereo
+cd monoloco
python3 setup.py sdist bdist_wheel
pip3 install -e .
```
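+
+To check that either installation succeeded, you can print the top-level help message (the same entry point listed under Interfaces below):
+```
+python3 -m monoloco.run --help
+```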
### Interfaces
All the commands are run through the main module `monoloco/run.py` using subparsers.
-To check all the commands for the parser and the subparsers (including openpifpaf ones) run:
+To check all the options:
+
+* `python3 -m monoloco.run --help`
+* `python3 -m monoloco.run predict --help`
+* `python3 -m monoloco.run train --help`
+* `python3 -m monoloco.run eval --help`
+* `python3 -m monoloco.run prep --help`
+
+or check the file `monoloco/run.py`.
+
+# Predictions
+For a quick setup, download the PifPaf and MonoLoco++ / MonStereo models from
+[here](https://drive.google.com/drive/folders/1jZToVMBEZQMdLB5BAIq2CdCLP5kzNo9t?usp=sharing) and save them into `data/models`.
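+
+For example, assuming the model files landed in `~/Downloads` (an illustrative path), you can move one into place with:
+```
+mkdir -p data/models
+mv ~/Downloads/monoloco_pp-201203-1424.pkl data/models/
+```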
+
+## Monocular 3D Localization
+The predict script receives an image (or an entire folder, using glob expressions),
+calls PifPaf for 2D human pose detection over the image,
+and runs MonoLoco++ for 3D localization of the detected poses.
+The `--net` argument selects whether to save PifPaf, MonoLoco++, or MonStereo outputs.
+You can check all the PifPaf options at [openpifpaf](https://github.com/vita-epfl/openpifpaf).
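+
+For instance, the glob support mentioned above lets you run on a whole folder (a sketch; depending on your shell, the pattern may be expanded before it reaches the script):
+```
+python3 -m monoloco.run predict docs/*.png \
+--net monoloco_pp \
+--model data/models/monoloco_pp-201203-1424.pkl
+```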
+
+Output options include JSON files and/or a visualization of the predictions on the image in *frontal mode*,
+*birds-eye-view mode*, or *combined mode*, and can be specified with `--output_types`.
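+
+As a sketch, to save both the combined figure and a JSON file (`multi` is the value used in the example below; the `json` value is an assumption based on the options listed above):
+```
+python3 -m monoloco.run predict docs/002282.png \
+--net monoloco_pp \
+--model data/models/monoloco_pp-201203-1424.pkl \
+--output_types multi json
+```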
+
+Ground-truth KITTI files for comparing results can be downloaded from
+[here](https://drive.google.com/drive/folders/1jZToVMBEZQMdLB5BAIq2CdCLP5kzNo9t?usp=sharing)
+(file called *names-kitti*) and should be saved into `data/arrays`.
+Ground-truth files can also be generated; see the preprocessing section for more information.
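+
+Assuming the ground-truth file was also downloaded to `~/Downloads` (an illustrative path), it can be placed with:
+```
+mkdir -p data/arrays
+mv ~/Downloads/names-kitti-200615-1022.json data/arrays/
+```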
+
+For an example image, run the following command:
+
+```
+python3 -m monoloco.run predict \
+docs/002282.png \
+--net monoloco_pp \
+--output_types multi \
+--model data/models/monoloco_pp-201203-1424.pkl \
+--path_gt data/arrays/names-kitti-200615-1022.json \
+-o