3QFP: Efficient neural implicit surface reconstruction using Tri-Quadtrees and Fourier feature Positional encoding [ICRA24]
Overview of our method.
The code is based on the implementation of SHINE-Mapping.
uv is a fast Python package installer and resolver.
- Install uv (if not already installed):

```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
```

- Install the dependencies (this creates a virtual environment with Python 3.9):

```bash
uv pip install -e . --index-strategy unsafe-best-match
```

This will automatically install:
- PyTorch 1.12.1+cu116 (CUDA 11.6, compatible with CUDA 11.6/11.7/11.8)
- torchvision 0.13.1+cu116
- torchaudio 0.12.1+cu116
- kaolin 0.12.0 (pre-built for PyTorch 1.12.1)
- torch-scatter 2.1.0+cu116
- tinycudann (built from source)
- All other dependencies
Note: `--index-strategy unsafe-best-match` is required to use both the PyTorch and Kaolin custom indexes.
- Verify the installation:

```bash
.venv/bin/python -c "import torch; print(f'PyTorch: {torch.__version__}'); print(f'CUDA: {torch.version.cuda}'); print(f'CUDA available: {torch.cuda.is_available()}')"
.venv/bin/python -c "import tinycudann; print('tinycudann: OK')"
.venv/bin/python -c "import kaolin as kal; print(f'kaolin: {kal.__version__}')"
```

Note: The project uses PyTorch 1.12.1+cu116 with pre-built kaolin 0.12.0 for compatibility. CUDA 11.6 binaries also work on systems with CUDA 11.7/11.8.
- Create a conda environment:

```bash
conda create --name 3qfp python=3.9
conda activate 3qfp
```

- Install the torch-related packages (CUDA 11.6):

```bash
pip install torch==1.12.1+cu116 torchvision==0.13.1+cu116 torchaudio==0.12.1+cu116 --extra-index-url https://download.pytorch.org/whl/cu116
```

- Install kaolin and torch-scatter:

```bash
pip install kaolin==0.12.0 -f https://nvidia-kaolin.s3.us-east-2.amazonaws.com/torch-1.12.1_cu116.html
pip install torch-scatter==2.1.0 -f https://data.pyg.org/whl/torch-1.12.1+cu116.html
```

- Install the other dependencies:

```bash
pip install numpy tqdm wandb open3d scikit-image natsort pyquaternion pyyaml scipy
pip install "werkzeug>=2.0,<3.0" "flask>=2.0,<3.0"
pip install --no-build-isolation git+https://github.com/NVlabs/tiny-cuda-nn.git@master#subdirectory=bindings/torch
```

We also suggest using the download scripts from SHINE-Mapping.
MaiCity dataset:

```bash
sh ./scripts/download_maicity.sh
```

KITTI dataset:

```bash
sh ./scripts/download_kitti_example.sh
```

Newer College dataset:

```bash
sh ./scripts/download_ncd_example.sh
```
In the configuration (.yaml) files, you can specify the dataset paths:

- `pc_path`: the folder containing one point cloud (.bin, .ply, or .pcd format) per frame.
- `pose_path`: the pose file (.txt) containing the transformation matrix of each frame.
- `calib_path`: the calibration file (.txt) containing the static transformation between the sensor and body frames (optional; an identity matrix is used if set to '').
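Putting these keys together, the dataset section of a config file might look like the sketch below. The key names come from the description above; the paths are placeholders for illustration only, so adjust them to your local layout:

```yaml
# Dataset paths (illustrative values only)
pc_path: ./data/maicity/sequences/01/velodyne    # folder of per-frame point clouds (.bin/.ply/.pcd)
pose_path: ./data/maicity/sequences/01/poses.txt # per-frame transformation matrices
calib_path: ./data/maicity/sequences/01/calib.txt # sensor-to-body calibration ('' for identity)
```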
```bash
python run.py ./config/maicity/maicity_batch.yaml
```

Or with uv:

```bash
.venv/bin/python run.py ./config/radar.yaml
```
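For reference, here is a minimal sketch of reading the pose file referenced by `pose_path`. It assumes a KITTI-style layout (12 floats per line, a row-major 3x4 matrix extended to a homogeneous 4x4 matrix); this layout is an assumption, so verify it against your dataset before relying on it:

```python
import numpy as np

def load_poses(pose_path: str) -> list:
    """Load per-frame poses from a pose .txt file.

    Assumes a KITTI-style format: each non-empty line holds 12 floats,
    a 3x4 transformation matrix in row-major order, which is extended
    to a homogeneous 4x4 matrix. Check your dataset's actual layout.
    """
    poses = []
    with open(pose_path) as f:
        for line in f:
            vals = [float(v) for v in line.split()]
            if len(vals) != 12:
                continue  # skip empty or malformed lines
            T = np.eye(4)
            T[:3, :4] = np.array(vals).reshape(3, 4)
            poses.append(T)
    return poses
```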