update structure and image
commit 4e4160267d (parent be5abce6d5)
README.md — 56 changed lines
This repository contains the code for two research projects:
1. **MonStereo: When Monocular and Stereo Meet at the Tail of 3D Human Localization**

   [README](https://github.com/vita-epfl/monstereo/tree/master/docs/MonStereo.md) & [Article](https://arxiv.org/abs/2008.10913)
2. **Perceiving Humans: from Monocular 3D Localization to Social Distancing (MonoLoco++)**

   [README](https://github.com/vita-epfl/monstereo/tree/master/docs/MonoLoco_pp.md) & [Article](https://arxiv.org/abs/2009.00984)
Both projects have been built upon the CVPR'19 project [Openpifpaf](https://github.com/vita-epfl/openpifpaf) for 2D pose estimation and the ICCV'19 project [MonoLoco](https://github.com/vita-epfl/monoloco) for monocular 3D localization. All projects share the AGPL License.
# Setup

Installation steps are the same for both projects.

### Install

The installation has been tested on OSX and Linux operating systems, with Python 3.6 or Python 3.7. Packages have been installed with pip inside virtual environments. For a quick installation, do not clone this repository and make sure there is no folder named monstereo in your current directory. A GPU is not required but is highly recommended for real-time performance. MonStereo can be installed as a package with:

```
pip3 install monstereo
```
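A quick sanity check after installing (the commands below only confirm that the package and its command-line entry point are reachable):

```
python3 -c "import monstereo"
python3 -m monstereo.run --help
```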
For development of the monstereo source code itself, clone this repository and then:

```
pip3 install setuptools wheel  # build tools needed for sdist/bdist_wheel
cd monstereo
python3 setup.py sdist bdist_wheel
pip3 install -e .
```
### Interfaces

All the commands are run through the main file `run.py` using subparsers. To check all the commands for the parser and the subparsers (including the openpifpaf ones), run:

* `python3 -m monstereo.run --help`
* `python3 -m monstereo.run predict --help`
* `python3 -m monstereo.run train --help`
* `python3 -m monstereo.run eval --help`
* `python3 -m monstereo.run prep --help`

or check the file `monstereo/run.py`.
Further instructions for prediction, preprocessing, training and evaluation can be found here:

* [MonStereo README](https://github.com/vita-epfl/monstereo/tree/master/docs/MonStereo.md)
* [MonoLoco++ README](https://github.com/vita-epfl/monstereo/tree/master/docs/MonoLoco_pp.md)
*(New binary file added: docs/000840_multi.png — 3.4 MiB, not shown.)*
```
month = {August},
year = {2020}
}
```
# Prediction

The predict script receives an image (or an entire folder using glob expressions), calls PifPaf for 2D human pose detection over the image, and runs MonStereo for 3D localization of the detected poses.
Output options include JSON files and/or visualization of the predictions on the image in *frontal mode*, *birds-eye-view mode* or *multi mode*, and can be specified with `--output_types`.
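For concreteness, a hypothetical invocation is sketched below; the positional image argument and the exact values accepted by `--output_types` are assumptions here, so consult `python3 -m monstereo.run predict --help` for the actual signature:

```
python3 -m monstereo.run predict path/to/image.png --output_types multi
```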
### Pre-trained Models

* Download Monstereo pre-trained model from
Alternatively, you can download a Pifpaf pre-trained model from [openpifpaf](https://github.com/vita-epfl/openpifpaf). If you'd like to use an updated version, we suggest re-training the MonStereo model as well.

* The model for the experiments is provided in *data/models/ms-200710-1511.pkl*
### Ground truth matching

* In case you provide a ground-truth json file to compare the predictions of MonStereo,
but first require running a pose detector over all the training images and collecting the annotations. The code supports this option (by running the predict script with `--mode pifpaf`).
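A minimal sketch of that annotation pass (the positional glob argument is an assumption; check the predict subparser's `--help` for the actual interface):

```
python3 -m monstereo.run predict --mode pifpaf path/to/training/images/*.png
```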
### Data structure

    Data
    ├── arrays
    ├── models
    ├── kitti
    ├── logs
    ├── output

Run the following to create the folders:

```
mkdir data
cd data
mkdir arrays models kitti logs output
```
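Equivalently, in a bash shell with brace expansion, the same layout can be created with a single command:

```
mkdir -p data/{arrays,models,kitti,logs,output}
```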
### Datasets

Download KITTI ground truth files and camera calibration matrices for training