AdaBins
Official implementation of AdaBins: Depth Estimation using Adaptive Bins
Download links
Colab demo
Inference
Move the downloaded weights to a directory of your choice (we will use "./pretrained/" here). You can then use the pretrained models like so:
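A minimal sketch of loading a checkpoint and running the model directly, assuming the repository's UnetAdaptiveBins model class and model_io checkpoint helper; the checkpoint filename, bin count, and depth range below are assumptions, so adjust them to the weights you downloaded:

import torch
from models import UnetAdaptiveBins
import model_io

# build the network (256 bins and a 1e-3 to 10 m range are assumed NYU settings)
model = UnetAdaptiveBins.build(n_bins=256, min_val=1e-3, max_val=10)

# load the downloaded weights (filename is an assumption; use the file you placed in ./pretrained/)
model, _, _ = model_io.load_checkpoint("./pretrained/AdaBins_nyu.pt", model)
model.eval()

# run on a dummy batched rgb tensor of shape (N, 3, H, W)
# (real inputs should be preprocessed/normalized as done in infer.py)
example_rgb_batch = torch.rand(1, 3, 480, 640)
with torch.no_grad():
    bin_edges, predicted_depth = model(example_rgb_batch)

# the forward pass returns bin-edges; midpoints give bin-centers if you need them
# (assumes bin_edges has shape (N, n_bins + 1))
bin_centers = 0.5 * (bin_edges[:, :-1] + bin_edges[:, 1:])

For the KITTI model, build with the corresponding depth range (the paper evaluates up to 80 m) and load the KITTI checkpoint instead.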
Note that the model returns bin-edges (instead of bin-centers).

Recommended way: the InferenceHelper class in infer.py provides an easy interface for inference and handles various types of inputs (with any preprocessing required). It uses Test-Time Augmentation (horizontal flips) and also calculates bin-centers for you:

from PIL import Image
from infer import InferenceHelper

infer_helper = InferenceHelper(dataset='nyu')

# predict depth of a batched rgb tensor
example_rgb_batch = ...
bin_centers, predicted_depth = infer_helper.predict(example_rgb_batch)

# predict depth of a single pillow image
img = Image.open("test_imgs/classroom__rgb_00283.jpg")  # any rgb pillow image
bin_centers, predicted_depth = infer_helper.predict_pil(img)

# predict depths of images stored in a directory and store the predictions in 16-bit format in a given separate dir
infer_helper.predict_dir("/path/to/input/dir/containing_only_images/", "path/to/output/dir/")

TODO: