Does this need depth data capture as well? The “casual captures” phrasing makes it seem like it only needs images, but apparently they are using depth data too.
Also, can it run on Apple silicon?
Nope, it only needs depth for ground truth.
It's designed to run on top of a SLAM system that outputs a sparse point cloud.
On page 4, top right, you can see how the point cloud is fed into the object generator: https://cdn.jsdelivr.net/gh/facebookresearch/ShapeR@main/res...
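For intuition, a toy sketch of how a dense SLAM map gets thinned into the kind of sparse conditioning points the figure shows (not from the repo; the voxel size is an arbitrary choice):

    import numpy as np

    def sparsify(points, voxel=0.05):
        """Keep one point per voxel; a cheap stand-in for a sparse SLAM map."""
        keys = np.floor(points / voxel).astype(np.int64)
        _, idx = np.unique(keys, axis=0, return_index=True)
        return points[np.sort(idx)]

    dense = np.random.rand(100_000, 3)      # stand-in for a dense reconstruction
    sparse = sparsify(dense, voxel=0.05)    # sparse conditioning points
    print(dense.shape, "->", sparse.shape)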
I think it does use depth data, judging from the parameters in the docs: python infer_shape.py --input_pkl <sample.pkl> (the depth/points are possibly obtainable with software like MapAnything). I believe it's CUDA only.
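Guessing at the input format (the key names below are my assumptions, not the repo's actual .pkl schema), usage might look roughly like:

    import pickle, subprocess
    import numpy as np

    # Hypothetical bundle; check the repo's docs for the real schema.
    sample = {
        "images": [np.zeros((480, 640, 3), np.uint8)],  # casual RGB captures
        "points": np.random.rand(2048, 3),  # sparse metric points (SLAM or MapAnything)
    }
    with open("sample.pkl", "wb") as f:
        pickle.dump(sample, f)

    # Matches the documented invocation; needs a CUDA GPU per the thread above.
    subprocess.run(["python", "infer_shape.py", "--input_pkl", "sample.pkl"], check=True)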
Yeah they confirm that at the bottom of the linked page
> Furthermore, by leveraging tools like MapAnything to generate metric points, ShapeR can even produce metric 3D shapes from monocular images without retraining.
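In case it helps: the "metric points from monocular images" step boils down to predicting a metric depth map and unprojecting it through the camera intrinsics. A generic sketch of that unprojection, with synthetic depth standing in for a MapAnything-style prediction:

    import numpy as np

    def unproject(depth, fx, fy, cx, cy):
        """Back-project a metric depth map (H, W) into an (N, 3) point cloud."""
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        x = (u - cx) * depth / fx
        y = (v - cy) * depth / fy
        pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
        return pts[pts[:, 2] > 0]  # drop invalid zero-depth pixels

    depth = np.full((480, 640), 2.0)  # pretend the model predicted 2 m everywhere
    points = unproject(depth, fx=600.0, fy=600.0, cx=320.0, cy=240.0)
    print(points.shape)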