Weak Cube R-CNN: Weakly Supervised 3D Detection using only 2D Bounding Boxes

Technical University of Denmark · Pioneer Centre for AI

23rd Scandinavian Conference on Image Analysis, SCIA 2025
Figure 1 Weak Cube R-CNN. In contrast to standard 3D object detectors that require 3D ground truths, our proposed method is trained using only 2D bounding boxes but can predict 3D cubes at test time. Weak Cube R-CNN significantly reduces annotation time, since 3D ground truths take 11× longer to annotate than 2D boxes. More importantly, it does not require access to LiDAR or multi-camera setups.

Abstract

Monocular 3D object detection is an essential task in computer vision, with applications in robotics and virtual reality. However, 3D object detectors are typically trained in a fully supervised way, relying extensively on 3D labeled data, which is labor-intensive and costly to annotate. This work focuses on weakly supervised 3D detection to reduce data needs, using a monocular method that relies on a single camera rather than expensive LiDAR sensors or multi-camera setups. We propose a general model, Weak Cube R-CNN, which predicts objects in 3D at inference time but requires only 2D box annotations for training, exploiting the relationship between 3D cubes and their 2D projections. Our method uses pre-trained frozen 2D foundation models to estimate depth and orientation on the training set and treats these estimates as pseudo-ground truths during training. We design loss functions that avoid 3D labels by incorporating the external models' outputs directly into the loss, implicitly transferring knowledge from these large 2D foundation models without access to 3D bounding box annotations. Experimental results on the SUN RGB-D dataset show improved accuracy compared to an annotation-time-equalized Cube R-CNN baseline. While not precise enough for centimetre-level measurements, this method provides a strong foundation for further research.

Model overview

Figure 2 Overview of Weak Cube R-CNN. The model extracts features from an image and predicts objects in 2D and their cubes in 3D. We split the cube into its individual attributes and optimise each attribute with respect to pseudo-ground-truth information. During training, instead of the single 3D ground truth available in the fully supervised setting, we must combine many different sources of information provided by frozen models to emulate the same annotation.
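The core weak-supervision signal, the relationship between a 3D cube and its 2D projection, can be sketched as follows: project the cube's corners through pinhole intrinsics and compare the enclosing 2D box against the annotated 2D box. This is a minimal numpy sketch, not the paper's implementation; the function names and the choice of an L1 box loss are illustrative assumptions.

```python
import numpy as np

def cube_corners(center, dims, R):
    """Return the 8 corners of a 3D cube given center (3,), dims (w, h, l),
    and rotation matrix R (3, 3), in camera coordinates."""
    w, h, l = dims
    # Signed half-extent offsets for the 8 corners of an axis-aligned unit cube.
    offsets = np.array([[sx * w / 2, sy * h / 2, sz * l / 2]
                        for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)])
    return (R @ offsets.T).T + center

def project(points, K):
    """Pinhole projection of Nx3 camera-space points with intrinsics K (3, 3)."""
    uvw = (K @ points.T).T
    return uvw[:, :2] / uvw[:, 2:3]

def projection_box_loss(center, dims, R, K, gt_box):
    """L1 loss between the axis-aligned 2D box enclosing the projected cube
    and the ground-truth 2D box, gt_box = (x1, y1, x2, y2)."""
    uv = project(cube_corners(center, dims, R), K)
    pred_box = np.array([uv[:, 0].min(), uv[:, 1].min(),
                         uv[:, 0].max(), uv[:, 1].max()])
    return np.abs(pred_box - np.asarray(gt_box)).mean()
```

Because the loss depends only on the annotated 2D box, gradients can flow back into all cube attributes (center, dimensions, rotation) without any 3D label.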

Pipeline Overview

Combining the GroundingDINO and Segment Anything methods (middle image) effectively isolates the ground. A simple RANSAC plane-estimation algorithm can then be run on the filtered point cloud, which turns out to be quite robust.
Figure 3 Ground estimation pipeline showing the point cloud obtained through the depth map. The second step selects the region in the depth map corresponding to the ground in the color image. The depth map is interpreted as a point cloud, on which plane RANSAC yields a normal vector to the ground.
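The last two steps of the pipeline can be sketched in a few lines: lift the (ground-masked) depth map into a point cloud, then fit a plane with RANSAC to recover the ground normal. This is a minimal sketch under assumed pinhole intrinsics; the iteration count and inlier threshold are illustrative, not the values used in the paper.

```python
import numpy as np

def backproject(depth, K):
    """Lift an HxW depth map into an (H*W)x3 camera-space point cloud
    using pinhole intrinsics K (3, 3)."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    z = depth.ravel()
    x = (u.ravel() - K[0, 2]) * z / K[0, 0]
    y = (v.ravel() - K[1, 2]) * z / K[1, 1]
    return np.column_stack([x, y, z])

def fit_plane_ransac(points, n_iters=200, threshold=0.02, rng=None):
    """RANSAC plane fit on an Nx3 point cloud.
    Returns (unit normal n, offset d) for the plane n·x + d = 0."""
    rng = np.random.default_rng(rng)
    best_inliers, best_model = -1, None
    for _ in range(n_iters):
        # Hypothesize a plane from 3 random points.
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-8:  # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ sample[0]
        # Count points within the distance threshold of the plane.
        inliers = (np.abs(points @ normal + d) < threshold).sum()
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (normal, d)
    return best_model
```

The robustness noted above comes from the inlier count: even with a sizeable fraction of non-ground points left in the cloud, the ground plane still collects the largest consensus set.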

Prediction Examples

Just For Fun

What happens when you run the model on completely out-of-domain data? You get some very fun results. Just look at how it estimates the dimensions to be several metres, while the hand clearly shows the true scale is only a few centimetres!
Figure 4 Prediction on out-of-domain data.

Paper

BibTeX

@misc{hansen2025weakcubercnnweakly,
  title={Weak Cube R-CNN: Weakly Supervised 3D Detection using only 2D Bounding Boxes},
  author={Andreas Lau Hansen and Lukas Wanzeck and Dim P. Papadopoulos},
  year={2025},
  eprint={2504.13297},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2504.13297},
}