AI-based estimation of forest plant community composition from UAV imagery / Nepi, L.; Quattrini, G.; Pesaresi, S.; Mancini, A.; Pierdicca, R. - In: ECOLOGICAL INFORMATICS. - ISSN 1574-9541. - 90:(2025). [10.1016/j.ecoinf.2025.103199]
AI-based estimation of forest plant community composition from UAV imagery
Nepi L.; Quattrini G.; Pesaresi S.; Mancini A.; Pierdicca R.
2025-01-01
Abstract
The spatial distribution and abundance of plant species are of critical importance for the identification of plant communities, the assessment of biodiversity, and the fulfilment of environmental policy requirements, such as those outlined in the Habitats Directive 92/43/EEC. Recent advances in high-resolution drone imaging provide new opportunities for the identification of plant species, offering significant advantages over traditional expert-based methods, which, while accurate, are often time-consuming. This study uses deep learning models, namely Vision Transformers (ViT-B16 and ViT-H14) and Convolutional Neural Networks (VGG19 and ResNet101), to quantify the abundance of tree species from RGB images captured by drones in multiple areas of central Italy. The images were segmented into 256 × 256-pixel tiles to enable efficient computational analysis. Following a rigorous training and evaluation process, the ViT-H14 model was identified as the most effective approach, achieving an accuracy of over 0.93. The model's efficacy was substantiated through a comparison with manual analyses conducted by botanical experts, using the Mantel test. This analysis revealed a strong correlation (r = 0.87), confirming the model's capacity to interpret forest images with a high degree of accuracy. These findings demonstrate the potential of deep learning models, particularly ViT-B16 and ViT-H14, for efficient and scalable ecological monitoring and biodiversity assessments.
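The tiling, classification, and validation steps summarised in the abstract can be illustrated in code. The sketches below are not the authors' implementation; they assume Python with Pillow, PyTorch/torchvision, and NumPy, and every path, class count, and helper function name is a hypothetical placeholder.

```python
# Minimal sketch of the pipeline described in the abstract: cut drone RGB
# images into 256 x 256 tiles and attach a species-classification head to a
# pretrained ViT-B/16. Paths and NUM_CLASSES are assumptions, not paper values.
from pathlib import Path

import torch
from PIL import Image
from torchvision import models

TILE = 256          # tile edge in pixels, as stated in the abstract
NUM_CLASSES = 5     # hypothetical number of tree species classes


def tile_image(src_path: Path, out_dir: Path) -> int:
    """Split one RGB image into non-overlapping 256 x 256 tiles."""
    out_dir.mkdir(parents=True, exist_ok=True)
    img = Image.open(src_path).convert("RGB")
    w, h = img.size
    n = 0
    for top in range(0, h - TILE + 1, TILE):
        for left in range(0, w - TILE + 1, TILE):
            img.crop((left, top, left + TILE, top + TILE)).save(
                out_dir / f"{src_path.stem}_{top}_{left}.png"
            )
            n += 1
    return n


# torchvision also provides vit_h_14, vgg19 and resnet101, the other
# architectures compared in the study.
weights = models.ViT_B_16_Weights.IMAGENET1K_V1
model = models.vit_b_16(weights=weights)
model.heads.head = torch.nn.Linear(model.heads.head.in_features, NUM_CLASSES)
preprocess = weights.transforms()  # resizes tiles to the model's 224 px input
```

The comparison with expert surveys relies on a Mantel test, i.e. a Pearson correlation between two distance matrices whose significance is assessed by permuting one of them. Below is a minimal NumPy version of the general statistic, again only a sketch rather than the paper's exact procedure.

```python
import numpy as np


def mantel(d1: np.ndarray, d2: np.ndarray, permutations: int = 999, seed: int = 0):
    """Mantel statistic: correlation of the upper triangles of two symmetric
    distance matrices, with a one-sided permutation p-value."""
    rng = np.random.default_rng(seed)
    iu = np.triu_indices_from(d1, k=1)
    x, y = d1[iu], d2[iu]
    r_obs = np.corrcoef(x, y)[0, 1]
    n = d1.shape[0]
    hits = 0
    for _ in range(permutations):
        perm = rng.permutation(n)
        if np.corrcoef(d1[np.ix_(perm, perm)][iu], y)[0, 1] >= r_obs:
            hits += 1
    return r_obs, (hits + 1) / (permutations + 1)
```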
| File | Access | Type | License | Size | Format |
|---|---|---|---|---|---|
| 1-s2.0-S1574954125002080-main-2.pdf | Open access | Publisher's version (published version with the publisher's layout) | Creative Commons | 3.35 MB | Adobe PDF |


