In large-scale scene reconstruction using 3D Gaussian splatting, it is common to partition the scene into multiple smaller regions and reconstruct them individually. However, existing division methods are occlusion-agnostic, so each region may contain areas with severe occlusions. As a result, the cameras within those regions are less correlated and contribute less, on average, to the overall reconstruction. In this paper, we propose an occlusion-aware scene division strategy that clusters training cameras based on their positions and co-visibilities to obtain multiple regions. Cameras in such regions exhibit stronger correlations and a higher average contribution, facilitating high-quality scene reconstruction. We further propose a region-based rendering technique that accelerates large-scene rendering by culling Gaussians invisible to the region where the viewpoint is located, significantly speeding up rendering without compromising quality. Extensive experiments on multiple large scenes show that our method achieves superior reconstruction results with faster rendering speeds than existing state-of-the-art approaches.
Overview of OccluGaussian. Top left: To reconstruct a large scene, we divide it into multiple regions using our occlusion-aware scene division strategy. (a) We first build an attributed view graph from the posed cameras, where nodes represent cameras with positional features and edges represent visibility correlations between them. (b) A graph clustering algorithm is applied to the view graph to cluster the cameras into multiple regions, and (c) we further refine the clusters to balance their sizes. (d) The region boundaries are then computed from the clustered cameras. Each region is reconstructed individually and finally merged into a complete model. Bottom left: Each region is reconstructed using three sets of training cameras: base cameras located inside the region, extended cameras providing adequate visual content of the region, and border cameras used to constrain Gaussian primitives near the boundaries. Right: We introduce a region-based rendering technique that culls 3D Gaussians occluded from the region where the rendering viewpoint is located. Furthermore, we subdivide the scene into smaller sub-regions, each with fewer essential 3D Gaussians. This reduces redundant computation and further boosts our rendering speed.
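To make steps (a) and (b) concrete, the sketch below builds a toy co-visibility graph over cameras and splits it with a simple spectral bisection. This is a minimal stand-in, not the paper's actual clustering algorithm: the per-camera point sets, the affinity measure (count of co-observed 3D points), and the two-way split are all illustrative assumptions.

```python
import numpy as np

def covisibility_affinity(cam_points):
    """Affinity W[i, j] = number of 3D points co-observed by cameras i and j.
    `cam_points` is a list of sets of point IDs seen by each camera (assumed input)."""
    n = len(cam_points)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            W[i, j] = W[j, i] = len(cam_points[i] & cam_points[j])
    return W

def spectral_bipartition(W):
    """Split cameras into two regions along the weakest co-visibility cut,
    using the sign of the Fiedler vector (eigenvector of the second-smallest
    eigenvalue of the graph Laplacian)."""
    L = np.diag(W.sum(axis=1)) - W
    _, vecs = np.linalg.eigh(L)  # eigenvalues in ascending order
    return vecs[:, 1] >= 0       # boolean region labels

# Two camera clusters joined by a single weakly shared point (ID 50):
cams = [{0, 1, 2}, {1, 2, 3}, {2, 3, 4, 50},
        {50, 51, 52}, {51, 52, 53}, {52, 53, 54}]
labels = spectral_bipartition(covisibility_affinity(cams))
```

Applying such a split recursively (or using a k-way graph clustering) would yield multiple regions; the method additionally refines cluster sizes and derives region boundaries from the clustered camera positions.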
Here we display side-by-side videos comparing our method to state-of-the-art baselines across different scenes. [NOTE: DOGS results are obtained by replacing the scene division strategy proposed in their paper with ours while keeping the other hyperparameters for 3DGS optimization consistent.] Select a baseline method below:
If the videos play out of sync or fail to load, please refresh the page.
Here, we present side-by-side videos comprehensively comparing our method with various clustering methods across diverse scenes. Please choose a clustering method from the options below:
We select a view from the Alameda scene of the Zip-NeRF dataset for rendering, and display the real-time frames per second (FPS) above the video to demonstrate that our region-based rendering incurs no loss in visual quality while accelerating rendering. Here, 'RBR' denotes the vanilla region-based rendering approach, and 'RSD' denotes the region subdivision approach within our region-based rendering framework.
Our region-based culling strategy significantly boosts the rendering speed without causing any noticeable decline in visual quality. Furthermore, the proposed region subdivision technique can further expedite the rendering process.
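The culling idea can be sketched as a precomputation: for each region, record which Gaussians are visible from that region's cameras, then at render time keep only the Gaussians in the viewpoint's region's set. The sketch below is a minimal NumPy illustration under the assumption that per-camera visibility sets are already available; the paper's actual occlusion test and data layout may differ.

```python
import numpy as np

def build_region_masks(visible_ids, num_gaussians, regions):
    """For each region, union the Gaussian IDs visible from its training cameras.
    `visible_ids[c]` is the set of Gaussian IDs visible from camera c (assumed given);
    `regions` maps a region name to its list of camera IDs."""
    masks = {}
    for region, cam_ids in regions.items():
        mask = np.zeros(num_gaussians, dtype=bool)
        for c in cam_ids:
            mask[list(visible_ids[c])] = True
        masks[region] = mask
    return masks

def cull_for_region(gaussians, masks, region):
    """Keep only the Gaussians relevant to the viewpoint's region before rasterization."""
    return gaussians[masks[region]]

# Toy example: 6 Gaussians, two regions with disjoint visibility.
gaussians = np.arange(6)  # stand-in for per-Gaussian parameters
visible = {0: {0, 1, 2}, 1: {1, 2}, 2: {4, 5}}
regions = {"A": [0, 1], "B": [2]}
masks = build_region_masks(visible, len(gaussians), regions)
kept = cull_for_region(gaussians, masks, "A")
```

Subdividing a region into smaller sub-regions, as described above, simply means building masks at a finer granularity, so each mask keeps fewer Gaussians and less redundant work reaches the rasterizer.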
@article{liu2025occlugaussian,
title = {OccluGaussian: Occlusion-Aware Gaussian Splatting for Large Scene Reconstruction and Rendering},
author = {Liu, Shiyong and Tang, Xiao and Li, Zhihao and He, Yingfan and Ye, Chongjie and Liu, Jianzhuang and Huang, Binxiao and Zhou, Shunbo and Wu, Xiaofei},
journal = {arXiv preprint},
year = {2025}
}