Feature Splatting for Better Novel View Synthesis with Low Overlap

Paper | Code

Novel view synthesis along a trajectory on scene 03f7a0e617 from the ScanNet++ dataset. Left: 3D Gaussian Splatting; right: FeatSplat32.

3D Gaussian Splatting has emerged as a very promising scene representation, achieving state-of-the-art quality in novel view synthesis significantly faster than competing alternatives. However, its use of spherical harmonics to represent scene colors limits the expressivity of 3D Gaussians and, as a consequence, the capability of the representation to generalize as we move away from the training views.
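
As background, vanilla 3DGS stores per-Gaussian spherical-harmonic (SH) coefficients and evaluates them at the viewing direction to obtain a view-dependent color. The minimal degree-1 sketch below follows the constants and sign conventions of the public 3DGS reference code; the function name and array layout are illustrative, not taken from the paper.

```python
import numpy as np

# First two SH bands, constants as in the 3DGS reference implementation.
SH_C0 = 0.28209479177387814
SH_C1 = 0.4886025119029199

def sh_to_rgb(sh_coeffs: np.ndarray, view_dir: np.ndarray) -> np.ndarray:
    """Evaluate a Gaussian's color from degree-1 SH coefficients.

    sh_coeffs: (4, 3) array, one RGB coefficient per SH basis function.
    view_dir:  (3,) unit vector from the camera towards the Gaussian.
    """
    x, y, z = view_dir
    color = SH_C0 * sh_coeffs[0]
    color += SH_C1 * (-y * sh_coeffs[1] + z * sh_coeffs[2] - x * sh_coeffs[3])
    return np.maximum(color + 0.5, 0.0)  # 0.5 offset and clamp, as in 3DGS
```

A degree-L expansion needs (L + 1)^2 coefficients per color channel, so practical degrees stay low, which caps how quickly a Gaussian's color can vary with viewpoint; this is the expressivity limit FeatSplat targets.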

In this paper, we propose to encode the color information of 3D Gaussians into per-Gaussian feature vectors, which we denote as Feature Splatting (FeatSplat). To synthesize a novel view, Gaussians are first “splatted” onto the image plane, then the corresponding feature vectors are alpha-blended, and finally the blended vector is decoded by a small MLP to render the RGB pixel values. To further inform the model, we concatenate a camera embedding to the blended feature vector, conditioning the decoding also on viewpoint information. Our experiments show that this novel model for encoding the radiance considerably improves novel view synthesis for low-overlap views that are distant from the training views. Finally, we also show the capacity and convenience of our feature vector representation, demonstrating its capability not only to generate RGB values for novel views, but also to modify the scene light after optimization and to learn per-pixel semantic labels.
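
The following PyTorch sketch makes the rendering pipeline above concrete. It is not the authors' implementation: FeatureDecoder, alpha_blend, the 32/16 feature and embedding dimensions, and the hidden width are all illustrative assumptions, and in the real system the splatting and blending run inside a CUDA rasterizer rather than per ray in Python.

```python
import torch
import torch.nn as nn

FEAT_DIM, EMBED_DIM = 32, 16  # hypothetical; "FeatSplat32" suggests 32-D features

def alpha_blend(features: torch.Tensor, alphas: torch.Tensor) -> torch.Tensor:
    """Front-to-back alpha blending of per-Gaussian features for one pixel.

    features: (N, FEAT_DIM) features of the N depth-sorted Gaussians on the ray.
    alphas:   (N,) opacities after the 2D Gaussian falloff.
    """
    # Transmittance T_i = prod_{j<i} (1 - alpha_j), computed cumulatively.
    trans = torch.cumprod(torch.cat([alphas.new_ones(1), 1 - alphas[:-1]]), dim=0)
    weights = alphas * trans
    return (weights[:, None] * features).sum(dim=0)  # (FEAT_DIM,)

class FeatureDecoder(nn.Module):
    """Small MLP mapping a blended feature plus a camera embedding to RGB."""

    def __init__(self, feat_dim=FEAT_DIM, embed_dim=EMBED_DIM, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + embed_dim, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, 3),
            nn.Sigmoid(),  # RGB in [0, 1]
        )

    def forward(self, feat_map: torch.Tensor, cam_embed: torch.Tensor) -> torch.Tensor:
        # feat_map:  (H, W, feat_dim) alpha-blended per-pixel features
        # cam_embed: (embed_dim,) embedding of the rendered viewpoint
        h, w, _ = feat_map.shape
        cam = cam_embed.expand(h, w, -1)  # broadcast the embedding to every pixel
        return self.mlp(torch.cat([feat_map, cam], dim=-1))  # (H, W, 3)
```

The per-pixel semantic labels mentioned above could plausibly be read from the same blended features, e.g. by a second head producing class logits instead of RGB, though that wiring is again an assumption of this sketch.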

Inference-time modification of scene light.
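
The abstract does not spell out the editing mechanism, but since the MLP decodes colors conditioned on an embedding, one plausible illustration (an assumption here, not the paper's recipe) is to interpolate between two embeddings at inference time, continuing the hypothetical FeatureDecoder sketch above:

```python
# Purely illustrative: blend two conditioning embeddings at inference time.
decoder = FeatureDecoder()                 # hypothetical module defined above
feat_map = torch.rand(120, 160, FEAT_DIM)  # stand-in blended feature map
embed_a = torch.randn(EMBED_DIM)           # e.g. an embedding of a dim view
embed_b = torch.randn(EMBED_DIM)           # e.g. an embedding of a bright view
with torch.no_grad():
    for t in (0.0, 0.5, 1.0):
        rgb = decoder(feat_map, (1 - t) * embed_a + t * embed_b)
        print(f"t={t}: mean intensity {rgb.mean().item():.3f}")
```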

BibTeX

```bibtex
@article{martins2024feature,
      title={Feature Splatting for Better Novel View Synthesis with Low Overlap},
      author={T. Berriel Martins and Javier Civera},
      year={2024},
      eprint={2405.15518},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```