CASE STUDY
Evaluation of open-source libraries and commercial software utilised for dense point cloud generation: a case study of cultural heritage objects
1. Faculty of Environmental Engineering, Geodesy and Renewable Energy, Kielce University of Technology, 7 Tysiąclecia Państwa Polskiego Avenue, 25-314 Kielce, Poland
2. Faculty of Geodesy and Cartography, Warsaw University of Technology, 1 Politechniki Square, 00-661 Warsaw, Poland
3. Institute of Civil Engineering, Warsaw University of Life Sciences, Nowoursynowska 166, 02-787 Warsaw, Poland
A - Research concept and design; B - Collection and/or assembly of data; C - Data analysis and interpretation; D - Writing the article; E - Critical revision of the article; F - Final approval of article
Submission date: 2025-11-15
Final revision date: 2026-02-05
Acceptance date: 2026-03-20
Publication date: 2026-04-15
Corresponding author
Anna Michałek
Faculty of Environmental Engineering, Geodesy and Renewable Energy, Kielce University of Technology, 7 Tysiąclecia Państwa Polskiego Avenue, 25-314 Kielce, Poland
Reports on Geodesy and Geoinformatics 2026;121:1-18
ABSTRACT
The development of image-based approaches to dense 3D reconstruction has found wide application in the architectural documentation of cultural heritage objects and sites. It is therefore crucial to understand the potential and the limitations of the Multi-View Stereo (MVS) algorithms used in 3D shape reconstruction. This article evaluates the quality and accuracy of dense point cloud generation for cultural heritage objects using open-source (MicMac, OpenMVS, RealityScan) and commercial (Agisoft Metashape and Pix4D) solutions, with images acquired by an Unmanned Aerial Vehicle (UAV) in parallel and skew configurations at two historic wooden structures. To this end, a workflow for data evaluation was proposed, incorporating eigenvalue-based quality factors, such as planarity, roughness, and normal vector variance, together with a cloud-to-TLS (Terrestrial Laser Scanning) cloud deviation analysis. The results demonstrate that image-acquisition geometry strongly influences reconstruction accuracy: parallel configurations consistently produced higher completeness, narrower deviation distributions, and better geometric fidelity than skew configurations. Among the commercial solutions, Agisoft Metashape and Pix4D achieved the best overall performance. The open-source algorithms yielded variable results: OpenMVS-SGM achieved competitive accuracy under optimal conditions, while the OpenMVS patch-based approach offered a balance between density and roughness.
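The eigenvalue-based quality factors mentioned in the abstract (planarity, roughness proxies, etc.) are conventionally derived from the eigenvalues of the 3x3 covariance matrix of a local point neighbourhood, as in the point cloud literature the article draws on (e.g. Weinmann et al.). The snippet below is a minimal illustrative sketch, not the authors' implementation: the function name and the choice of surface variation as a roughness proxy are assumptions.

```python
import numpy as np

def eigen_features(neighborhood: np.ndarray) -> dict:
    """Eigenvalue-based quality factors for a local neighborhood of shape (k, 3).

    Uses the common definitions: eigenvalues l1 >= l2 >= l3 of the
    covariance matrix of the neighborhood's 3D coordinates.
    (Illustrative sketch; not the paper's actual code.)
    """
    centered = neighborhood - neighborhood.mean(axis=0)
    cov = centered.T @ centered / len(neighborhood)
    # eigvalsh returns ascending eigenvalues for a symmetric matrix; reverse them
    l1, l2, l3 = np.sort(np.linalg.eigvalsh(cov))[::-1]
    return {
        "linearity": (l1 - l2) / l1,
        "planarity": (l2 - l3) / l1,
        "sphericity": l3 / l1,
        # surface variation, often used as a local roughness proxy
        "surface_variation": l3 / (l1 + l2 + l3),
    }

# Example: points sampled on the plane z = 0 should score high planarity
rng = np.random.default_rng(0)
plane_pts = np.column_stack([
    rng.uniform(-1.0, 1.0, 200),
    rng.uniform(-1.0, 1.0, 200),
    np.zeros(200),
])
features = eigen_features(plane_pts)
```

For a near-planar neighbourhood, planarity approaches 1 while sphericity and surface variation approach 0, which is why these factors discriminate well between flat facade elements and noisy or edge regions in a dense cloud.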