
The pipelines of deep learning-based plant image processing

Published online by Cambridge University Press: 25 July 2025

Kaiyue Hong
Affiliation:
Co-Innovation Center for Sustainable Forestry in Southern China, College of Life Sciences, Nanjing Forestry University, Nanjing, China
Yun Zhou*
Affiliation:
Department of Botany and Plant Pathology, Center for Plant Biology, Purdue University, West Lafayette, IN, USA
Han Han*
Affiliation:
Co-Innovation Center for Sustainable Forestry in Southern China, College of Life Sciences, Nanjing Forestry University, Nanjing, China
Corresponding authors: Han Han; Email: hhan@njfu.edu.cn; Yun Zhou; Email: zhouyun@purdue.edu

Abstract

Recent advancements in data science and artificial intelligence have significantly transformed plant sciences, particularly through the integration of image recognition and deep learning technologies. These innovations have profoundly impacted various aspects of plant research, including species identification, disease detection, cellular signaling analysis, and growth monitoring. This review summarizes the latest computational tools and methodologies used in these areas. We emphasize the importance of data acquisition and preprocessing, discussing techniques such as high-resolution imaging and unmanned aerial vehicle (UAV) photography, along with image preprocessing methods like cropping and scaling. Additionally, we review feature extraction techniques like colour histograms and texture analysis, which are essential for plant identification and health assessment. Finally, we discuss emerging trends, challenges, and future directions, offering insights into the applications of these technologies in advancing plant science research and practical implementations.

Information

Type
Review
Creative Commons
Creative Commons License - CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press in association with John Innes Centre

1. Introduction

In the digital age, large-scale plant image datasets are essential for advancing plant science, yet their efficient processing remains challenging; artificial intelligence (AI) and deep learning (DL) offer transformative solutions by enabling machines to simulate human intelligence in tasks like image recognition and decision-making (Williamson et al., Reference Williamson, Brettschneider, Caccamo, Davey, Goble and Kersey2023). In plant research, machine learning (ML), a subset of AI, enables automatic plant image analysis, allowing computers to learn and improve without explicit programming by identifying patterns in data to predict outcomes and make decisions through supervised, unsupervised, and reinforcement learning approaches (Silva et al., Reference Silva, Teixeira, Silva, Brommonschenkel and Fontes2019). DL, a specialized branch of ML, uses multi-layer neural networks to process complex plant image data, automatically extract features, and perform tasks like classification and prediction, driving significant advancements in plant image analysis, especially in plant growth monitoring and disease detection (Saleem et al., Reference Saleem, Potgieter and Arif2019). Together, AI, ML, and DL propel innovation in plant science: ML enables learning from data, while DL leverages deep neural networks for advanced image analysis.

Processing and analyzing high-resolution plant images pose challenges for image processing algorithms due to plant diversity in colour, shape, and size, with additional complications from complex backgrounds and dense leaf structures affecting segmentation and feature extraction (Sachar & Kumar, Reference Sachar and Kumar2021). To tackle these challenges, tailored methods in preprocessing, feature extraction, and data augmentation have been developed, showing strong effectiveness in plant image processing (Barbedo, Reference Barbedo2016). For example, data augmentation methods like random rotation and flipping improve model adaptability to plant diversity by helping it learn more robust features (Cap et al., Reference Cap, Uga, Kagiwada and Iyatomi2020). Furthermore, targeted approaches like colour normalization and background suppression improve feature recognition accuracy, reduce external interference, and highlight plants’ distinct visual characteristics, optimizing workflows and enhancing plant image analysis accuracy and efficiency (Petrellis, Reference Petrellis2019).

Challenges like data acquisition and the lack of high-quality annotated data hinder the widespread adoption of DL technologies in plant science. Image recognition, the first and crucial step in plant image processing, has been significantly advanced by the rapid development of DL, particularly Convolutional Neural Networks (CNNs) (Cai et al., Reference Cai, Xu, Chen, Yang, Weng, Huang and Lou2023). CNNs are deep feedforward neural networks built on convolution operations and are among the most representative DL algorithms (Kuo et al., Reference Kuo, Zhang, Li, Duan and Chen2019). The performance of CNNs in plant species recognition has been thoroughly evaluated on several large public wood image datasets, consistently demonstrating high accuracy. For example, CNN models achieved 97.3% accuracy on the Brazilian wood image database (Universidade Federal do Paraná, UFPR) and 96.4% on the Xylarium Digital Database (XDD), clearly outperforming traditional feature engineering methods; these results indicate that CNNs are both effective and generalizable for wood image recognition tasks (Hwang & Sugiyama, Reference Hwang and Sugiyama2021).

Large language models (LLMs), such as ChatGPT, are advanced DL models that, when combined with domain-specific tools like the Agronomic Nucleotide Transformer (AgroNT) – a novel DNA-focused LLM – have demonstrated great potential in plant genetics and stress response studies (Mendoza-Revilla et al., Reference Mendoza-Revilla, Trop, Gonzalez, Roller, Dalla-Torre and de Almeida2024). By analyzing the genomes of 48 crop species and processing over 10 million cassava mutations, the LLM-based tools offer valuable insights into plant development, interactions, and traits, advancing gene expression profiling and opening new research possibilities (Agathokleous et al., Reference Agathokleous, Rillig, Peñuelas and Yu2024). Notably, LLMs have revealed new insights by uncovering non-obvious regulatory patterns in promoter regions, predicting the functional impacts of non-coding variants, and suggesting novel gene-stress associations that were previously unrecognized using traditional bioinformatics approaches (Mendoza-Revilla et al., Reference Mendoza-Revilla, Trop, Gonzalez, Roller, Dalla-Torre and de Almeida2024). For example, AgroNT has been shown to predict transcription factor binding affinities across diverse plant species with unprecedented accuracy, enabling the discovery of conserved stress-responsive elements in divergent genomes (Wang et al., Reference Wang, Yuan, Yan and Liu2025). Although still in its early stages, the application of language models in plant biology holds great potential to transform the field, despite currently lagging behind advancements in other domains.

This review evaluates key technologies in plant image processing, such as data acquisition, preprocessing, feature extraction, and model training, examining their effectiveness, limitations, and potential to advance plant science research. It also compares various methodologies and ML models, highlighting their advantages, limitations, and challenges, providing a detailed framework to help researchers make informed decisions in plant image processing studies and applications.

2. Data acquisition and preprocessing

2.1. Data acquisition and plant feature extraction

Data acquisition and preprocessing are vital for ML in image processing, with high-resolution imaging, unmanned aerial vehicle (UAV) photography, and 3D scanning providing the detailed morphological data on which DL models are built (Shahi et al., Reference Shahi, Xu, Neupane and Guo2022) (Table 1). While high-resolution devices offer superior quality, regular cameras and smartphones provide greater accessibility and scalability, enabling large-scale data collection and enhancing dataset diversity and model robustness.

Table 1 Data acquisition techniques and their applications in plant sciences

Feature extraction in plant image analysis integrates morphology, physiology, genetics, and ecology, starting with colour features (e.g., histograms, coherence vectors) and followed by morphological features (e.g., area, perimeter, shape descriptors) to identify plant traits (Mahajan et al., Reference Mahajan, Raina, Gao and Kant Pandit2021). Texture features, capturing local variations in images, are crucial for species differentiation and disease detection, revealing surface structures like roughness and contrast (Mohan & Peeples, Reference Mohan and Peeples2024). CNNs have proven effective in managing complex plant images, enhancing the classification and detection of diseases through robust feature extraction methods (Ahmad et al., Reference Ahmad, Ashiq, Badshah, Khan and Hussain2022). Additionally, structural features, such as leaf morphology and spatial arrangements, are extracted using techniques like edge detection and shape description (Shoaib et al., Reference Shoaib, Shah, Ei-Sappagh, Ali, Alenezi, Hussain and Ali2023). Lastly, physiological features, including leaf count, size, and vein structure, provide valuable data on plant health and growth dynamics (Bühler et al., Reference Bühler, Rishmawi, Pflugfelder, Huber, Scharr, Hülskamp and Jahnke2015). These features can be obtained manually or automatically through image processing, with recent studies favoring automated methods like segmentation and morphological analysis for high-throughput, objective phenotyping. A key advantage of CNNs is their ability to learn hierarchical features from raw images, eliminating manual feature engineering and enhancing model adaptability and performance in diverse plant phenotyping tasks.
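
To make these hand-crafted descriptors concrete, the short Python sketch below computes a per-channel colour histogram and two grey-level co-occurrence matrix (GLCM) texture properties with scikit-image; the file name is a placeholder, and the feature choices are illustrative rather than drawn from any cited study.

```python
import numpy as np
from skimage import io, color
from skimage.feature import graycomatrix, graycoprops

img = io.imread("leaf.png")  # placeholder path to an RGB leaf image

# Colour feature: one 32-bin histogram per RGB channel.
colour_hist = [np.histogram(img[..., c], bins=32, range=(0, 255))[0]
               for c in range(3)]

# Texture features: the GLCM summarizes how grey levels co-occur at a given
# offset; contrast and homogeneity describe surface roughness and smoothness.
grey = (color.rgb2gray(img) * 255).astype(np.uint8)
glcm = graycomatrix(grey, distances=[1], angles=[0], levels=256,
                    symmetric=True, normed=True)
contrast = graycoprops(glcm, "contrast")[0, 0]
homogeneity = graycoprops(glcm, "homogeneity")[0, 0]
```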

2.2. Preprocessing techniques

Data preprocessing in plant image analysis includes key steps like cropping, resizing, enhancing, augmenting, and annotating to optimize images for ML models. Cropping and resizing standardize dimensions, enhancing computational efficiency and reducing model complexity (Maraveas, Reference Maraveas2024). Image enhancement techniques like contrast adjustment, denoising, and sharpening improve detail visibility and accuracy (Abebe et al., Reference Abebe, Kim, Kim, Kim and Baek2023), while data augmentation modifies original images to generate new samples, with strategies like rotation and flipping diversifying the dataset to prevent overfitting and improve model generalization (Syarovy et al., Reference Syarovy, Pradiko, Farrasati, Rasyid, Mardiana and Pane2024). Despite their benefits, preprocessing steps can cause information loss, requiring a balance between simplifying the model and preserving critical information. While most preprocessing is not labor-intensive, annotating and labeling training data remain highly labor-intensive, often becoming a bottleneck that hinders project progress. Accurate annotation, often referred to as ‘ground truth’, is essential for supervised learning, as it provides a reliable benchmark for model training and evaluation. In both research and practical applications – such as image recognition, natural language processing, and predictive analytics – the quality of labelled data directly influences model accuracy and reliability (Zhou et al., Reference Zhou, Siegel, Zarecor, Lee, Campbell and Andorf2018a, Reference Zhou, Yan, Han, Li, Geng, Liu and Meyerowitz2018b).
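
As a minimal sketch of such a pipeline (using torchvision; the parameter values are illustrative, not taken from the cited studies), the transforms below chain resizing with random rotation, flipping, and mild contrast jitter:

```python
import torchvision.transforms as T

# A typical augmentation pipeline: random rotations and flips diversify the
# training set, while resizing standardizes input dimensions for the model.
train_transforms = T.Compose([
    T.Resize((224, 224)),                         # standardize input size
    T.RandomRotation(degrees=30),                 # random rotation
    T.RandomHorizontalFlip(p=0.5),                # random flipping
    T.ColorJitter(brightness=0.2, contrast=0.2),  # mild contrast adjustment
    T.ToTensor(),                                 # convert to a model-ready tensor
])
```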

To achieve optimal results, recommended dataset sizes vary by task complexity. For binary classification, 1,000 to 2,000 images per class are typically sufficient (Singh et al., Reference Singh, Jain, Jain, Kayal, Kumawat and Batra2020). Multi-class classification requires 500 to 1,000 images per class, with higher requirements as the number of classes increases (Mühlenstädt & Frtunikj, Reference Mühlenstädt and Frtunikj2024). More complex tasks, such as object detection, demand larger datasets, often up to 5,000 images per object (Cai et al., Reference Cai, Zhang, Zhu, Zhang, Li and Xue2022). DL models like CNNs generally need 10,000 to 50,000 images, with larger models requiring 100,000+ images (Greydanus & Kobak, Reference Greydanus and Kobak2020). Data augmentation can multiply dataset size by 2–5 times (Shorten & Khoshgoftaar, Reference Shorten and Khoshgoftaar2019). Additionally, transfer learning, an ML technique that reuses models pre-trained on large datasets, is effective for smaller datasets, requiring as few as 100 to 200 images per class for successful training (Zhu et al., Reference Zhu, Braun, Chiang and Romagnoli2021).

2.3. Commonly used public datasets

The PlantVillage dataset is a widely used public resource for DL-based plant disease diagnosis research (Mohameth et al., Reference Mohameth, Bingcai and Sada2020). It serves as a valuable tool in agricultural and plant disease research, offering a comprehensive collection of labelled images essential for developing and testing ML models for plant health monitoring (Pandey et al., Reference Pandey, Tripathi, Singh, Gaur and Gupta2024). Its accessibility, diversity, and standardized format make it a benchmark for algorithm development in precision agriculture, contributing to early disease detection and yield management, and addressing global challenges like food security and sustainable farming (Majji & Kumaravelan, Reference Majji and Kumaravelan2021). Promoting the use of such datasets can enhance collaboration among researchers, standardize methodologies, and support scalable solutions across various agricultural environments (Ahmad et al., Reference Ahmad, Abdullah, Moon and Han2021).

Similar to PlantVillage, other plant image datasets include the PlantDoc dataset, which contains images from various plant species for plant disease diagnosis (Singh et al., Reference Singh, Jain, Jain, Kayal, Kumawat and Batra2020). The crop disease dataset features images of diseases in multiple crops, making it suitable for training DL models, especially for crop disease classification (Yuan et al., Reference Yuan, Chen, Ren, Wang and Li2022). The tomato leaf disease dataset focuses on disease images specific to tomato leaves, supporting research in tomato disease recognition and detection (Ahmad et al., Reference Ahmad, Hamid, Yousaf, Shah and Ahmad2020). These datasets are widely used in agriculture, particularly for plant disease detection, crop growth studies, and plant health management, driving the ongoing development of intelligent agricultural technologies.

3. Model development and training

3.1. The selection of ML model

Image classification, used to categorize input images into predefined groups, is commonly applied in plant identification and disease diagnosis. CNNs, with their strong hierarchical feature extraction abilities, excel in these tasks. Models like AlexNet and ResNet are frequently used to classify plant species and developmental stages (Zhu et al., Reference Zhu, Li, Li, Wu and Yue2018; Malagol et al., Reference Malagol, Rao, Werner, Töpfer and Hausmann2025). ResNet, by incorporating residual learning and skip connections, addresses gradient vanishing and degradation in deep networks. Its enhanced model has been applied in high-throughput quantification of grape leaf trichomes, supporting phenotypic analysis and disease resistance studies. CNN-based models generally achieve over 90% accuracy on public datasets, validating their ‘High’ performance in comparative evaluations (Yu et al., Reference Yu, Luo, Zhou, Zhang, Wu and Ren2021; Yao et al., Reference Yao, Tran, Garg and Sawyer2024).
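
The residual learning idea can be sketched in a few lines of PyTorch; this is a generic skip-connection block for illustration, not the exact architecture used in the cited studies.

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    """A minimal residual block: the skip connection adds the input back to
    the convolutional output, easing gradient flow in deep networks."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # skip connection: output plus original input
```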

Simpler models like K-nearest neighbors (K-NN) and support vector machines (SVMs) are ideal for smaller datasets with less complex features. Though computationally efficient and easy to implement, they are more sensitive to noise and tend to perform less effectively on complex image data (Ghosh et al., Reference Ghosh, Singh, Jhanjhi, Masud and Aljahdali2022). K-NN, for instance, classifies samples based on proximity in feature space and can be enhanced using surrogate loss training (Picek et al., Reference Picek, Šulc, Patel and Matas2022). SVMs utilize kernel functions like the Radial Basis Function (RBF) to handle non-linear data and prevent overfitting (Sharma et al., Reference Sharma, Kumar, Gour, Saini, Upadhyay and Kumar2024). Both are typically rated as ‘Medium’ in performance due to their limitations in handling large-scale, high-dimensional data (Azlah et al., Reference Azlah, Chua, Rahmad, Abdullah and Wan Alwi2019).
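
A minimal scikit-learn sketch illustrates this workflow on synthetic feature vectors (stand-ins for hand-crafted features such as colour histograms); all parameter values are illustrative defaults.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for extracted plant features (3 'species', 32 features).
X, y = make_classification(n_samples=500, n_features=32, n_informative=10,
                           n_classes=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

knn = KNeighborsClassifier(n_neighbors=5)  # classifies by feature-space proximity
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf"))  # RBF kernel for non-linear data

knn.fit(X_train, y_train)
svm.fit(X_train, y_train)
print("K-NN accuracy:", knn.score(X_test, y_test))
print("SVM accuracy: ", svm.score(X_test, y_test))
```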

For object detection tasks, which require both classification and localization, models like Faster R-CNN (FRCNN) and You Only Look Once version 5 (YOLOv5) offer high spatial accuracy. FRCNN uses a region proposal network (RPN) and shared convolutional layers to efficiently predict object categories and bounding boxes (Deepika & Arthi, Reference Deepika and Arthi2022). YOLOv5 enables real-time detection and has been applied to UAV-based monitoring for early detection of pine wilt disease (Yu et al., Reference Yu, Luo, Zhou, Zhang, Wu and Ren2021). In dense slash pine forests, improved FRCNN achieved 95.26% accuracy and an R² of 0.95 in crown detection, showcasing the utility of deep learning for woody plant monitoring (Cai et al., Reference Cai, Xu, Chen, Yang, Weng, Huang and Lou2023).
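
For illustration, a pretrained YOLOv5 model can be loaded through the public Ultralytics Torch Hub entry point, as sketched below; the weights are generic COCO-trained ones, so a task such as crown or disease detection would require fine-tuning on labelled imagery, and the image path is a placeholder.

```python
import torch

# Load the small YOLOv5 variant from the Ultralytics hub (downloads on first run).
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

results = model("uav_canopy.jpg")  # placeholder UAV image
results.print()                    # summary of detections
boxes = results.xyxy[0]            # per-detection (x1, y1, x2, y2, confidence, class)
```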

Other advanced models include deep belief networks (DBNs), which use stacked restricted Boltzmann machines (RBMs) for unsupervised hierarchical learning and are fine-tuned via backpropagation (Lu et al., Reference Lu, Du, Liu, Zhang and Hao2022). Recurrent neural networks (RNNs), particularly long short-term memory (LSTM) networks, are effective for modeling temporal dependencies, such as plant growth simulation using time-lapse imagery (Xing et al., Reference Xing, Wang, Sun, Huang and Lin2023; Liu et al., Reference Liu, Chen, Wu, Han, Bao and Zhang2024a, Reference Liu, Guo, Zheng, Yao, Li and Li2024b). Graph neural networks (GNNs) are increasingly used for modeling complex relationships in plant stress response and gene regulation, although they require significant training effort and parameter tuning (Chang et al., Reference Chang, Jin, Rao and Zhang2024).
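
A minimal PyTorch sketch shows how an LSTM models temporal dependencies in a plant growth series; the data are synthetic and the trait dimensions are illustrative.

```python
import torch
import torch.nn as nn

class GrowthLSTM(nn.Module):
    """Predict the next value of a trait from a sequence of daily measurements."""
    def __init__(self, n_features=4, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):             # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # regress from the final hidden state

model = GrowthLSTM()
seq = torch.randn(8, 14, 4)  # 8 plants, 14 days, 4 traits (synthetic)
pred = model(seq)            # shape (8, 1): next-day trait prediction
```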

In scenarios with limited labelled data or domain shifts, transfer learning is particularly valuable. By leveraging pre-trained models, it enables medium to high performance in plant classification and disease recognition tasks (Wu et al., Reference Wu, Jiang and Cao2022). Advanced architectures like GoogLeNet, with its multi-scale inception module, further enhance classification accuracy. For instance, a GoogLeNet model achieved F-scores of 0.9988 and 0.9982 in classifying broadleaf and coniferous tree species, respectively, after 100 training epochs (Minowa et al., Reference Minowa, Kubota and Nakatsukasa2022).
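
A typical transfer-learning recipe, sketched here with torchvision, freezes the ImageNet-pretrained backbone and retrains only a new classification head; the class count of 23 mirrors the wild grape example above but is otherwise illustrative.

```python
import torch.nn as nn
from torchvision import models

# Start from ImageNet weights and keep the learned feature extractor fixed.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False  # freeze the pretrained backbone

# Replace the final layer; only this head is trained on the plant dataset.
model.fc = nn.Linear(model.fc.in_features, 23)
```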

In summary, each model class exhibits distinct strengths. CNNs specialize in image classification; SVMs and K-NN are optimal for simpler datasets; FRCNN and YOLOv5 excel in object detection; DBNs and RNNs support hierarchical and temporal modeling; GNNs tackle high-dimensional interactions; and transfer learning enhances adaptability across domains. The qualitative performance ratings (High/Medium) presented in Table 2 synthesize evaluation metrics like accuracy, precision, recall, and F1-score across representative studies, enabling researchers to select appropriate models based on task complexity and dataset characteristics.

Table 2 A comparison of common ML models in plant recognition and classification

3.2. The integration of language models

In recent years, with the widespread application of LLMs such as BERT and the GPT series in natural language processing and cross-modal learning, plant science research has also begun exploring the integration of LLMs into areas such as gene function prediction, literature-based knowledge mining, and bioinformatic inference. For instance, LLMs can automatically extract potential functional annotation information from a large body of plant gene literature, aiding in the construction of plant gene regulatory networks. Moreover, due to their powerful contextual understanding capabilities, LLMs demonstrate enhanced accuracy and generalizability in predicting gene expression patterns across species (Zhang et al., Reference Zhang, Ibrahim, Khaskheli, Raza, Zhou and Shamsi2024). The application of LLMs in plant biology is beginning to transform the field by driving advancements in chemical mapping, genetic research, and disease diagnostics (Eftekhari et al., Reference Eftekhari, Serra, Schoeffmann and Portelli2024). For instance, by analyzing data from over 2,500 publications, researchers have revealed the phylogenetic distribution of plant compounds and enabled the creation of systematic chemical maps with improved automation and accuracy (Busta et al., Reference Busta, Hall, Johnson, Schaut, Hanson, Gupta and Maeda2024). LLMs and protein language models (PLMs) also enhance the analysis of nucleic acid and protein sequences, advancing genetic improvements and supporting sustainable agricultural systems (Liu et al., Reference Liu, Chen, Wu, Han, Bao and Zhang2024a, Reference Liu, Guo, Zheng, Yao, Li and Li2024b). In disease diagnostics, models like contrastive language image pre-training (CLIP) utilize high-quality images and textual annotations to improve classification accuracy for plant diseases, achieving significant precision gains on datasets such as PlantVillage and FieldPlant (Eftekhari et al., Reference Eftekhari, Serra, Schoeffmann and Portelli2024).

Similarly, a CNN-based system combining InceptionV3 with GPT-3.5 Turbo achieved 99.85% training accuracy and 88.75% validation accuracy in detecting tomato diseases, providing practical treatment recommendations (Madaki et al., Reference Madaki, Muhammad-Bello and Kusharki2024). The feature fusion contrastive language-image pre-training (FF-CLIP) model further enhances this approach by integrating visual and textual data to identify complex disease textures, achieving a 33.38% improvement in Top-1 accuracy for unseen data in zero-shot plant disease identification (Liaw et al., Reference Liaw, Chai, Lee, Bonnet and Joly2025). These advancements highlight the transformative potential of language models in advancing plant biology research and driving sustainable agricultural innovation.

In summary, two notable new insights brought by LLMs in plant science include: (1) the ability to identify and summarize ‘potential regulatory information within non-coding sequences’, which is often overlooked by traditional models; and (2) the promotion of holistic modeling of plant trait complexity through multimodal integration – such as combining sequence, image, and textual data – offering new avenues for complex trait prediction and breeding design.

3.3. Model training and evaluation methods

Effective ML model training relies on three key components: data partitioning, loss functions, and optimization strategies. Properly splitting the data (commonly 70:15:15 for training, validation, and testing) ensures good generalization (Figure 1). The training set fits model parameters, the validation set guides hyperparameter tuning, and the test set evaluates final performance, enhancing model robustness (Ghazi et al., Reference Ghazi, Yanikoglu and Aptoula2017). For instance, Mohanty et al. used the PlantVillage dataset (54,306 images), applied an 80:10:10 split and cross-entropy loss with SGD optimization, achieving over 99% accuracy and demonstrating the efficiency of deep learning in plant disease identification (Mohanty et al., Reference Mohanty, Hughes and Salathé2016). Similarly, Ferentinos used a publicly available plant image dataset with 87,848 leaf images, splitting it into 80% for training and 20% for testing, achieving 99.53% accuracy on the test set. This study highlights the effectiveness of data partitioning and CNNs in plant species classification and disease detection (Ferentinos, Reference Ferentinos2018).
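
A 70:15:15 partition can be produced with two successive splits, as in this scikit-learn sketch; X and y are placeholders for image features and labels.

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.random.rand(1000, 64)       # placeholder feature matrix
y = np.random.randint(0, 5, 1000)  # placeholder labels for 5 classes

# Hold out 30%, then split that portion evenly into validation and test sets.
X_train, X_tmp, y_train, y_tmp = train_test_split(
    X, y, test_size=0.30, random_state=42, stratify=y)
X_val, X_test, y_val, y_test = train_test_split(
    X_tmp, y_tmp, test_size=0.50, random_state=42, stratify=y_tmp)
```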

Figure 1. CNN and data training process flowchart. A. DL-based image processing flowchart; B. Data training process flowchart.

The loss function quantifies the difference between predicted and true values, minimized during training. For regression tasks, common loss functions include mean squared error (MSE), root mean squared error (RMSE), and mean absolute error (MAE), with MSE penalizing larger errors, RMSE providing interpretable results, and MAE being more robust to outliers (Picek et al., Reference Picek, Šulc, Patel and Matas2022). For classification tasks, binary classification problems often use binary cross-entropy to measure the accuracy of predictions involving ‘yes/no’ or ‘true/false’ decisions (Bai et al., Reference Bai, Gu, Liu, Yang, Cai, Wang and Yao2023). Custom loss functions can also be defined to suit specific project needs. For instance, Gillespie et al. developed a deep learning model called ‘Deepbiosphere’ and designed a sampling bias-aware binary cross-entropy loss function, which significantly improved the model’s performance in monitoring changes in rare plant species (Gillespie et al., Reference Gillespie, Ruffley and Exposito-Alonso2024).
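
The sketch below shows these standard losses in PyTorch, plus a simple per-sample weighting scheme; the weights are illustrative only and are not the ‘Deepbiosphere’ formulation.

```python
import torch
import torch.nn as nn

# Regression losses on synthetic predictions and targets.
pred, target = torch.randn(16), torch.randn(16)
mse = nn.MSELoss()(pred, target)  # penalizes large errors strongly
rmse = torch.sqrt(mse)            # same units as the target, easier to interpret
mae = nn.L1Loss()(pred, target)   # more robust to outliers

# Binary classification: cross-entropy on raw logits.
logits = torch.randn(16)
labels = torch.randint(0, 2, (16,)).float()
bce = nn.BCEWithLogitsLoss()(logits, labels)

# Custom weighting, e.g., to counter sampling bias: up-weight rare positives.
weights = 1.0 + 4.0 * labels      # weight 5 for positives, 1 for negatives
weighted_bce = nn.BCEWithLogitsLoss(weight=weights)(logits, labels)
```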

Optimization algorithms are equally critical in DL. Common optimizers include stochastic gradient descent (SGD), adaptive moment estimation (ADAM) (Saleem et al., Reference Saleem, Potgieter and Arif2020), and RMSprop (Kanna et al., Reference Kanna, Kumar, Kumar, Changela, Woźniak, Shafi and Ijaz2023). SGD updates parameters using randomly selected samples per iteration, ADAM combines momentum with adaptive learning rates, and RMSprop adjusts the learning rate of each parameter using moving averages, effectively reducing gradient oscillation (Mokhtari et al., Reference Mokhtari, Ahmadnia, Nahavandi and Rasoulzadeh2023). For example, Sun et al. employed the SGD optimizer with momentum techniques in plant disease recognition tasks, which improved the convergence speed and stability of the model, thereby enhancing its accuracy (Sun et al., Reference Sun, Liu, Zhou and Hu2021). In a similar vein, Kavitha et al. trained six ImageNet-pretrained CNN models on an RMP dataset for rural medicinal plant classification and reported that MobileNet, optimized with SGD, achieved the best classification performance, highlighting its effectiveness in medicinal plant recognition (Kavitha et al., Reference Kavitha, Kumar, Naresh, Kalmani, Bamane and Pareek2023). In another study, Labhsetwar et al. compared different optimizers for plant disease classification and found that the Adam optimizer achieved the highest validation accuracy of 98%, demonstrating its strong performance in this context (Labhsetwar et al., Reference Labhsetwar, Haridas, Panmand, Deshpande, Kolte and Pati2021). Complementing these findings, Praharsha et al. evaluated multiple optimizers in CNNs and found that RMSprop, with a learning rate of 0.001 and L2 regularization of 0.0001, achieved the highest validation accuracy of 89.09%, outperforming Adam and SGD, and proving especially effective for plant pest classification tasks (Praharsha et al., Reference Praharsha, Poulose and Badgujar2024).
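
The optimizers above are configured through torch.optim as follows; the tiny model and batch are placeholders, and the RMSprop settings echo the learning rate and L2 regularization reported by Praharsha et al.

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)             # placeholder model
criterion = nn.CrossEntropyLoss()
inputs = torch.randn(4, 10)          # placeholder batch
targets = torch.randint(0, 2, (4,))

sgd = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
adam = torch.optim.Adam(model.parameters(), lr=1e-3)
rmsprop = torch.optim.RMSprop(model.parameters(), lr=0.001, weight_decay=1e-4)

optimizer = sgd                      # choose one optimizer per training run
optimizer.zero_grad()
loss = criterion(model(inputs), targets)
loss.backward()                      # backpropagate gradients
optimizer.step()                     # one parameter update
```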

Evaluation metrics like accuracy, recall, F1 score, and area under the curve (AUC) are essential for assessing model performance, with accuracy being effective for balanced datasets but potentially misleading for imbalanced data (Naidu et al., Reference Naidu, Zuva and Sibanda2023). Recall emphasizes the model’s ability to identify all relevant positive cases, essential for tasks like plant species identification, while the F1 score, the harmonic mean of precision and recall, offers a balanced evaluation, particularly for imbalanced datasets (Fourure et al., Reference Fourure, Javaid, Posocco and Tihon2021). AUC evaluates model classification ability across different thresholds and is particularly useful in imbalanced classification tasks (Vakili et al., Reference Vakili, Ghamsari and Rezaei2020). For example, Tariku et al. developed an automated plant species classification system using UAV-captured Red, Green, Blue (RGB) images and transfer learning, achieving 0.99 accuracy, precision, recall, and 0.995 F1 score, highlighting the effectiveness of these metrics in real-world tasks and the importance of recall and F1 score for handling diverse and imbalanced datasets (Tariku et al., Reference Tariku, Ghiglieno, Gilioli, Gentilin, Armiraglio and Serina2023). In another study, Sa et al. introduced WeedMap, a large-scale semantic weed mapping framework using UAV-captured multispectral imagery and deep neural networks, achieving AUCs of 0.839, 0.863, and 0.782 for background, crop, and weed classes, respectively, highlighting the role of AUC in evaluating model performance across multiple categories in real-world agricultural applications (Sa et al., Reference Sa, Popović, Khanna, Chen, Lottes, Liebisch and Siegwart2018).
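
These metrics are available directly in scikit-learn, as the toy example below shows; the labels and scores are synthetic.

```python
from sklearn.metrics import accuracy_score, f1_score, recall_score, roc_auc_score

y_true = [0, 0, 1, 1, 1, 0, 1, 0]                   # ground-truth labels
y_pred = [0, 1, 1, 1, 0, 0, 1, 0]                   # predicted labels
y_score = [0.2, 0.6, 0.9, 0.8, 0.4, 0.1, 0.7, 0.3]  # predicted class-1 probabilities

print("accuracy:", accuracy_score(y_true, y_pred))  # can mislead on imbalanced data
print("recall:  ", recall_score(y_true, y_pred))    # coverage of positive cases
print("F1:      ", f1_score(y_true, y_pred))        # precision-recall balance
print("AUC:     ", roc_auc_score(y_true, y_score))  # threshold-independent ranking
```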

Evaluation should be conducted after training and before deployment. Re-evaluation is also necessary when there are changes in data distribution or environmental conditions (Reich and Barai, Reference Reich and Barai1999). For example, when encountering new plant species or ecological conditions, retraining or fine-tuning may be needed to maintain strong performance on novel inputs (Soares et al., Reference Soares, Ferreira and Lopes2017).

4. The applications of ML in plant research

4.1. Biotic and abiotic stress management

DL models are instrumental in analysing plant leaf images to detect diseases and pests, a vital component of plant protection (Shoaib et al., Reference Shoaib, Shah, Ei-Sappagh, Ali, Alenezi, Hussain and Ali2023). For instance, one study developed a back propagation neural network (BPNN) – a multilayer feedforward model trained via backpropagation and optimized through one-way ANOVA – that effectively identified rice diseases like blast and blight, highlighting its strength in pattern recognition and classification (Chaudhari & Malathi, Reference Chaudhari and Malathi2023). Additionally, hyperspectral remote sensing combined with ML enables rapid detection of plant viruses like Solanum tuberosum virus Y, enhancing early disease identification (Polder et al., Reference Polder, Blok, De Villiers, Van der Wolf and Kamp2019). The YOLOv5s algorithm processes RGB drone images for real-time pine wilt disease detection, suitable for large-scale monitoring (Du et al., Reference Du, Wu, Wen, Zheng, Lin and Wu2024). Generative adversarial networks (GANs), consisting of a generator and a discriminator that improve through adversarial learning, have been applied in plant science for tasks such as data augmentation, plant disease detection, and growth simulation (Gandhi et al., Reference Gandhi, Nimbalkar, Yelamanchili and Ponkshe2018).

ML and DL advance abiotic stress management through sensors and drones for early detection, precise predictions, and improved plant resilience (Patil et al., Reference Patil, Kolhar and Jagtap2024). These technologies also optimize plant stress responses, with further advancements expected in agricultural applications (Sharma et al., Reference Sharma, Kumar, Gour, Saini, Upadhyay and Kumar2024). Hyperspectral imaging aids early disease detection related to abiotic stresses (Lowe et al., Reference Lowe, Harrison and French2017). The ‘ASmiR’ framework predicts plant miRNAs under abiotic stresses, supporting stress-resistant crop breeding (Pradhan et al., Reference Pradhan, Meher, Naha, Rao, Kumar, Pal and Gupta2023). GNN, a DL model tailored for graph-structured data, learns node representations via relationships and adjacency, and has been effectively used to predict miRNA associations with abiotic stresses by capturing complex structural patterns (Chang et al., Reference Chang, Jin, Rao and Zhang2024). Interested readers are encouraged to refer to the cited review, which highlights the role of bioinformatics and AI in managing abiotic stresses for food security, aiding stress gene analysis, and improving crop resilience to drought and salinity (Chang et al., Reference Chang, Jin, Rao and Zhang2024).

4.2. Plant species identification and classification

DL aids large-scale growth monitoring, identifying growth patterns, assessing plant health, and predicting yields. An intelligent greenhouse management system uses ML and mobile networks for automated phenotypic monitoring (Rahman et al., Reference Rahman, Shah, Riaz, Kifayat, Moqurrab and Yoo2024). Shapley Additive Explanations (SHAP), which quantifies each feature’s contribution to model predictions, is commonly used to interpret and evaluate the performance of yield prediction models (Sun et al., Reference Sun, Di, Sun, Shen and Lai2019). Lightweight SegNet (LW-SegNet) is a CNN architecture tailored for image segmentation, designed to reduce network parameters and computational demands while ensuring efficient and accurate results, particularly in resource-limited environments. Lightweight networks like LW-SegNet and Lightweight U-Net (LW-Unet) enable efficient segmentation of rice varieties in plant research (Zhang et al., Reference Zhang, Liu and Tie2023a, Reference Zhang, Zhou, Kang, Hu, Heuvelink and Marcelis2023b, Reference Zhang, Sun, Zhang, Yang and Wang2023c). A hybrid model combining RF regression and radiative transfer simulation estimates wheat leaf area index (LAI) using UAV multispectral imaging (Sahoo et al., Reference Sahoo, Gakhar, Rejith, Verrelst, Ranjan, Kondraju and Kumar2023). Self-supervised learning (SSL) trains models using patterns in unlabelled data without manual labeling, accelerating the training process; while it speeds up plant breeding with unlabelled datasets, supervised pre-training still generally outperforms SSL, particularly in tasks like leaf counting (Ogidi et al., Reference Ogidi, Eramian and Stavness2023).

CNNs enable fast and accurate plant species identification. In sustainable agriculture, a CNN-based DL model was developed to classify weeds, optimizing herbicide use for eco-friendly control (Corceiro et al., Reference Corceiro, Alibabaei, Assunção, Gaspar and Pereira2023). CNNs (VGG-16, GoogleNet, ResNet-50, ResNet-101) were employed to identify 23 wild grape species, showcasing the effectiveness of DL in leaf recognition and crop variety identification (Pan et al., Reference Pan, Liu, Su, Ju, Fan and Zhang2024).

4.3. Plant growth simulation

ML explores complex molecular and cellular mechanisms in plant growth and development, such as stem cell homeostasis in Arabidopsis shoot apical meristems (SAMs) (Hohm et al., Reference Hohm, Zitzler and Simon2010), leaf development (Richardson et al., Reference Richardson, Cheng, Johnston, Kennaway, Conlon and Rebocho2021), and sepal giant cell development (Roeder, Reference Roeder2021), as well as the simulation of weed growth in crop fields over decades (Zhang et al., Reference Zhang, Liu and Tie2023a, Reference Zhang, Zhou, Kang, Hu, Heuvelink and Marcelis2023b, Reference Zhang, Sun, Zhang, Yang and Wang2023c).

Various algorithms have been applied to study plant development. For example, image processing and ML using SVM and RF were used to analyze Cannabis sativa callus morphology, with SVM showing higher accuracy, while genetic algorithms optimized PGR concentrations to validate the model (Hesami and Jones, Reference Hesami and Jones2021). A microfluidic chip was created to simulate pollen tube growth, and the ‘Physical microenvironment Assay (SPA)’ method was established to study mechanical signal transmission during pollen tube penetration of pistil tissues (Zhou et al., Reference Zhou, Han, Dai, Liu, Gao, Guo and Zhu2023).

Many groups are utilizing the latest computational technologies and algorithms to develop tools that enhance the efficiency and precision of plant biology research (Muller & Martre, Reference Muller and Martre2019). The virtual plant tissue (VPTissue) software simulates plant developmental processes, facilitating the integration of functional modules and cross-model coupling to efficiently simulate cellular-level plant growth (De Vos et al., Reference De Vos, Dzhurakhalov, Stijven, Klosiewicz, Beemster and Broeckhove2017). ADAM-Plant software uses stochastic techniques to simulate breeding plans for self- and cross-pollinated crops, tracking genetic changes across scenarios and supporting diverse population structures, genomic models, and selection strategies for optimized breeding design (Liu et al., Reference Liu, Tessema, Jensen, Cericola, Andersen and Sørensen2019). The L-Py framework, a Python-based L-system simulation tool, simplifies plant architecture simulation and analysis, with dynamic features that enhance programming flexibility, making plant growth model development more convenient (Boudon et al., Reference Boudon, Pradal, Cokelaer, Prusinkiewicz and Godin2012). Many groups are also developing advanced computational tools to accurately simulate plant morphological changes at various stages of growth (Boudon et al., Reference Boudon, Chopard, Ali, Gilles, Hamant and Boudaoud2015). A 3D maize canopy model was created using a t-distribution for the initial model, treating the maize whorl – leaves, stem segments, and buds – as an agent to precisely simulate the canopy’s spatial dynamics and structure (Wu et al., Reference Wu, Wen, Gu, Huang, Wang and Lu2024).

Significant advancements from 2012 to 2023 have enhanced our understanding of plant biology. In 2012, researchers used live imaging combined with computational analysis to monitor cellular and tissue dynamics in A. thaliana (Cunha et al., Reference Cunha, Tarr, Roeder, Altinok, Mjolsness and Meyerowitz2012). The introduction of the Cellzilla platform in 2013 enabled simulation of plant tissue growth at the cellular level (Shapiro et al., Reference Shapiro, Meyerowitz and Mjolsness2013). A pivotal study in 2014 focused on the WOX5-IAA17 feedback loop, which is essential for maintaining the auxin gradient in A. thaliana (Tian et al., Reference Tian, Wabnik, Niu, Li, Yu and Pollmann2014). By 2016, research explored plant signaling pathways and mechanical models to analyze sepal growth and morphology (Hervieux et al., Reference Hervieux, Dumond, Sapala, Routier-Kierzkowska, Kierzkowski and Roeder2016). In 2018, studies delineated the expression pattern of the CLV3 gene in SAMs (Zhou et al., Reference Zhou, Siegel, Zarecor, Lee, Campbell and Andorf2018a, Reference Zhou, Yan, Han, Li, Geng, Liu and Meyerowitz2018b), followed by 2019 research on leaf development and chloroplast ultrastructure (Kierzkowski et al., Reference Kierzkowski, Runions, Vuolo, Strauss, Lymbouridou and Routier-Kierzkowska2019), and the TCX2 gene’s role in maintaining stem cell identity (Clark et al., Reference Clark, Buckner, Fisher, Nelson, Nguyen and Simmons2019). Research in 2020 focused on epidermis-specific transcription factors affecting stem cell niches (Han et al., Reference Han, Yan, Li, Zhu, Feng, Liu and Zhou2020), and 2021 introduced new modeling techniques for root tip growth and stem cell division (Marconi et al., Reference Marconi, Gallemi, Benkova and Wabnik2021). In 2022, 3D bioprinting was used to study cellular dynamics in both A. thaliana and Glycine max (Van den Broeck et al., Reference Van den Broeck, Schwartz, Krishnamoorthy, Tahir, Spurney and Madison2022). The latest studies in 2023 provided new insights into weed evolution and applied advanced DL techniques for plant cell analysis (Feng et al., Reference Feng, Yu, Fang, Jiang, Yang, Chen and Hu2023). These milestones demonstrate the integration of computational tools and empirical datasets in plant science, enabling innovative methods and applications that propel the field forward.

4.4. Plant cell segmentation

Accurate cell segmentation is crucial for understanding plant cell morphology, developmental processes, and tissue organization. Recent advancements in DL and computer vision have led to the development of various specialized tools for segmenting plant cell structures from complex microscopy data. This section provides an overview of key tools, highlighting their core methodologies, applications, and advantages in plant research.

PlantSeg is a neural network-based tool designed for high-resolution plant cell segmentation. It starts with image preprocessing, including scaling and normalization, followed by U-Net-based boundary prediction to identify cell boundaries (Wei et al., Reference Wei, Chen, Yu, Chapman, Melloy and Huang2024). The boundary map is transformed into a region adjacency graph (RAG), where nodes represent image regions and edges represent boundary predictions. Graph segmentation algorithms, such as Multicut or GASP, partition the graph into individual cells, and post-processing ensures the segmentations align with the original resolution and corrects over-segmentation (Wolny et al., Reference Wolny, Cerrone, Vijayan, Tofanelli, Barro and Louveaux2020). By streamlining these processes, PlantSeg supports high-throughput analysis of plant cell dynamics, particularly for confocal and light sheet microscopy data (Vijayan et al., Reference Vijayan, Mody, Yu, Wolny, Cerrone, Strauss and Kreshuk2024). This tool not only improves segmentation efficiency but also handles large-scale datasets, providing robust support for long-term monitoring of plant cell behavior.

Complementing PlantSeg, the Soybean-MVS dataset leverages multi-view stereo (MVS) technology to provide a 3D imaging resource, capturing the full growth cycle of soybeans and enabling precise 3D segmentation of plant organs (Sun et al., Reference Sun, Zhang, Sun, Li, Yu and Miao2023). This dataset plays a significant role in plant growth and developmental research, offering fine-grained data support for dynamic analysis of long-term growth processes.

Other tools focus on generalizability and adaptability across various plant species and imaging modalities. Cellpose utilizes a convolutional neural network (CNN), which performs well in segmenting different cell types and shapes, especially in dynamic plant structures and large-scale image analysis, improving scalability (Stringer et al., Reference Stringer, Wang, Michaelos and Pachitariu2021). This feature enables Cellpose to maintain high accuracy and robustness under diverse experimental conditions when processing plant cells.
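
As an illustration, the classic Cellpose Python API segments an image with the generalist ‘cyto’ model in a few lines; this sketch assumes the v1/v2-style interface (newer releases may differ), and the file path and channel settings are placeholders.

```python
from cellpose import models
from skimage import io

img = io.imread("confocal_cells.tif")       # placeholder microscopy image
model = models.Cellpose(model_type="cyto")  # generalist cytoplasm model

# diameter=None lets Cellpose estimate cell size; channels=[0, 0] = greyscale.
masks, flows, styles, diams = model.eval(img, diameter=None, channels=[0, 0])
print("cells found:", masks.max())          # labelled mask, one integer per cell
```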

DeepCell uses CNNs for plant cell segmentation and supports cell tracking and morphology analysis, offering robust tools for phenotype research. It excels in handling complex cellular dynamics and large-scale datasets, making it ideal for long-term monitoring of plant phenotypes (Greenwald et al., Reference Greenwald, Miller, Moen, Kong, Kagel and Dougherty2022).

Ilastik provides an interactive ML-based segmentation approach, combining flexibility and accuracy. Its user-guided training enables adaptation to diverse plant datasets and experimental conditions, making it valuable for cross-species and multi-modal plant data analysis (Robinson & Vink, Reference Robinson and Vink2024).

Finally, MGX (MorphoGraphX) specializes in 3D morphological analysis of plant tissues by processing 3D microscopy data to visualize and quantify cell shapes, sizes, and spatial patterns. It supports studies on cell interactions and tissue growth, offering precise tools for plant tissue development research (Kerstens et al., Reference Kerstens, Strauss, Smith and Willemsen2020).

In conclusion, despite differences in algorithms and interfaces, these tools collectively advance plant microscopy by minimizing manual segmentation, enhancing reproducibility, and enabling high-throughput and multidimensional analysis. Their complementary strengths offer researchers diverse options, from 2D segmentation to full 3D tissue modeling, tailored to specific experimental needs, and significantly improve efficiency and precision in plant cell and tissue analysis.

5. Summary

ML and image recognition technologies show great promise in plant science, yet several challenges must be addressed for their effective application (Xiong et al., Reference Xiong, Yu, Liu, Shu, Wang and Liu2021). Figure 2 illustrates the keyword network analysis of DL in plant research publications, while Table 3 outlines the challenges and future trends in ML and image recognition technologies within plant science.

Figure 2. A keyword network analysis of DL in plants. A keyword analysis of plant AI technologies reveals clear technological connections. Blue lines indicate image processing technologies, and green lines represent plant phenotyping and growth analysis. At its core, ‘DL’ links to ‘ML’, ‘image processing’, and ‘computer vision’, and connects to technologies such as ‘remote sensing’ and ‘precision agriculture’. The relationships between terms like ‘plant growth’, ‘diseases’, ‘phenomics’, and ‘smart agriculture’ indicate the growing integration of AI and ML in improving plant practices.

Table 3 Challenges and future trends of ML and image recognition technologies in plant science

In summary, we examined the usage of DL-based image recognition in plant science, covering plant feature extraction, classification, disease detection, and growth analysis. We highlight the importance of data acquisition and preprocessing methods like high-resolution imaging, drone photography, and 3D scanning, as well as techniques for improving data quality. Various feature extraction methods – such as colour histograms, shape descriptors, and texture features – are reviewed for plant identification. The development of ML models, especially CNNs, is also discussed, alongside current challenges and future prospects. Despite progress, challenges remain. Future research should aim to apply methods across diverse plant systems, refine data acquisition, and enhance algorithm efficiency. Advancements will likely improve model generalization and interpretability, with interdisciplinary collaboration in plant biology, mathematics, and computer science being crucial to addressing upcoming challenges.

Open peer review

To view the open peer review materials for this article, please visit http://doi.org/10.1017/qpb.2025.10018.

Acknowledgements

We acknowledge Manjun Shang for critically reading our manuscript. We apologize to colleagues whose work was not included or described in this review due to limited space.

Competing interest

None.

Data availability statement

No data or code are involved in this manuscript.

Author contributions

KYH, HH and YZ conceived the study. KYH wrote the manuscript. HH and YZ revised the manuscript.

Funding statement

This work was supported by National Natural Science Foundation of China (No. 32202496 to H. H.), Nanjing Forestry University (start-up funding to H.H.), and Jiangsu Sheng Tepin Jiaoshou Program (to H.H.).

Footnotes

Associate Editor: Dr. Ross Sozzani

References

Abebe, A. M., Kim, Y., Kim, J., Kim, S. L., & Baek, J. (2023). Image-based high-throughput phenotyping in horticultural crops. Plants, 12(10), 2061. https://doi.org/10.3390/plants12102061
Adke, S., Li, C., Rasheed, K. M., & Maier, F. W. (2022). Supervised and weakly supervised deep learning for segmentation and counting of cotton bolls using proximal imagery. Sensors, 22(10), 3688. https://doi.org/10.3390/s22103688
Agathokleous, E., Rillig, M. C., Peñuelas, J., & Yu, Z. (2024). One hundred important questions facing plant science derived using a large language model. Trends in Plant Science, 29(2), 210–218. https://doi.org/10.1016/j.tplants.2023.06.008
Ahmad, I., Hamid, M., Yousaf, S., Shah, S. T., & Ahmad, M. O. (2020). Optimizing pretrained convolutional neural networks for tomato leaf disease detection. Complexity, 2020(1), 8812019. https://doi.org/10.1155/2020/8812019
Ahmad, M., Abdullah, M., Moon, H., & Han, D. (2021). Plant disease detection in imbalanced datasets using efficient convolutional neural networks with stepwise transfer learning. IEEE Access, 9, 140565–140580. https://doi.org/10.1109/ACCESS.2021.3119655
Ahmad, M. U., Ashiq, S., Badshah, G., Khan, A. H., & Hussain, M. (2022). Feature extraction of plant leaf using deep learning. Complexity, 2022(1), 6976112. https://doi.org/10.1155/2022/6976112
Anh, P. T. Q., Thuyet, D. Q., & Kobayashi, Y. (2022). Image classification of root-trimmed garlic using multi-label and multi-class classification with deep convolutional neural network. Postharvest Biology and Technology, 190, 111956. https://doi.org/10.1016/j.postharvbio.2022.111956
Azlah, M. A. F., Chua, L. S., Rahmad, F. R., Abdullah, F. I., & Wan Alwi, S. R. (2019). Review on techniques for plant leaf classification and recognition. Computers, 8(4), 77. https://doi.org/10.3390/computers8040077
Bai, X., Gu, S., Liu, P., Yang, A., Cai, Z., Wang, J., & Yao, J. (2023). RPNet: Rice plant counting after tillering stage based on plant attention and multiple supervision network. The Crop Journal, 11(5), 1586–1594. https://doi.org/10.1016/j.cj.2023.04.005
Barbedo, J. G. A. (2016). A review on the main challenges in automatic plant disease identification based on visible range images. Biosystems Engineering, 144, 52–60. https://doi.org/10.1016/j.biosystemseng.2016.01.017
Boudon, F., Pradal, C., Cokelaer, T., Prusinkiewicz, P., & Godin, C. (2012). L-Py: An L-system simulation framework for modeling plant architecture development based on a dynamic language. Frontiers in Plant Science, 3, 76. https://doi.org/10.3389/fpls.2012.00076
Boudon, F., Chopard, J., Ali, O., Gilles, B., Hamant, O., Boudaoud, A., et al. (2015). A computational framework for 3D mechanical modeling of plant morphogenesis with cellular resolution. PLoS Computational Biology, 11(1), e1003950. https://doi.org/10.1371/journal.pcbi.1003950
Bühler, J., Rishmawi, L., Pflugfelder, D., Huber, G., Scharr, H., Hülskamp, M., … Jahnke, S. (2015). Phenovein – A tool for leaf vein segmentation and analysis. Plant Physiology, 169(4), 2359–2370. https://doi.org/10.1104/pp.15.00974
Busta, L., Hall, D., Johnson, B., Schaut, M., Hanson, C. M., Gupta, A., … Maeda, A. (2024). Mapping of specialized metabolite terms onto a plant phylogeny using text mining and large language models. The Plant Journal, 120(1), 406–419. https://doi.org/10.1111/tpj.16906
Cai, L., Zhang, Z., Zhu, Y., Zhang, L., Li, M., & Xue, X. (2022). BigDetection: A large-scale benchmark for improved object detector pre-training. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. https://doi.org/10.48550/arXiv.2203.13249
Cai, C., Xu, H., Chen, S., Yang, L., Weng, Y., Huang, S., … Lou, X. (2023). Tree recognition and crown width extraction based on novel faster-RCNN in a dense loblolly pine environment. Forests, 14(5), 863. https://doi.org/10.3390/f14050863
Cap, Q. H., Uga, H., Kagiwada, S., & Iyatomi, H. (2020). LeafGAN: An effective data augmentation method for practical plant disease diagnosis. IEEE Transactions on Automation Science and Engineering, 19(2), 1258–1267. https://doi.org/10.1109/TASE.2020.3041499
Chang, L., Jin, X., Rao, Y., & Zhang, X. (2024). Predicting abiotic stress-responsive miRNA in plants based on multi-source features fusion and graph neural network. Plant Methods, 20(1), 33. https://doi.org/10.1186/s13007-024-01158-7
Chaudhari, D. J., & Malathi, K. (2023). Detection and prediction of rice leaf disease using a hybrid CNN-SVM model. Optical Memory and Neural Networks, 32(1), 39–57. https://doi.org/10.3103/S1060992X2301006X
Chen, Y., Huang, Y., Zhang, Z., Wang, Z., Liu, B., Liu, C., … Wan, F. (2023). Plant image recognition with deep learning: A review. Computers and Electronics in Agriculture, 212, 108072. https://doi.org/10.1016/j.compag.2023.108072
Chulif, S., Lee, S. H., Chang, Y. L., & Chai, K. C. (2023). A machine learning approach for cross-domain plant identification using herbarium specimens. Neural Computing and Applications, 35(8), 5963–5985. https://doi.org/10.1007/s00521-022-07951-6
Clark, N. M., Buckner, E., Fisher, A. P., Nelson, E. C., Nguyen, T. T., Simmons, A. R., et al. (2019). Stem-cell-ubiquitous genes spatiotemporally coordinate division through regulation of stem-cell-specific gene networks. Nature Communications, 10(1), 5574. https://doi.org/10.1038/s41467-019-13132-2
Cong, S., & Zhou, Y. (2023). A review of convolutional neural network architectures and their optimizations. Artificial Intelligence Review, 56(3), 1905–1969. https://doi.org/10.1007/s10462-022-10213-5
Corceiro, A., Alibabaei, K., Assunção, E., Gaspar, P. D., & Pereira, N. (2023). Methods for detecting and classifying weeds, diseases and fruits using AI to improve the sustainability of agricultural crops: A review. Processes, 11(4), 1263. https://doi.org/10.3390/pr11041263
Cunha, A., Tarr, P. T., Roeder, A. H., Altinok, A., Mjolsness, E., & Meyerowitz, E. M. (2012). Computational analysis of live cell images of the Arabidopsis thaliana plant. In Methods in Cell Biology (Vol. 110, pp. 285–323). Elsevier. https://doi.org/10.1016/B978-0-12-388403-9.00012-6
De Vos, D., Dzhurakhalov, A., Stijven, S., Klosiewicz, P., Beemster, G. T., & Broeckhove, J. (2017). Virtual plant tissue: Building blocks for next-generation plant growth simulation. Frontiers in Plant Science, 8, 686. https://doi.org/10.3389/fpls.2017.00686
Deepika, P., & Arthi, B. (2022). Prediction of plant pest detection using improved mask FRCNN in cloud environment. Measurement: Sensors, 24, 100549. https://doi.org/10.1016/j.measen.2022.100549
Du, Z., Wu, S., Wen, Q., Zheng, X., Lin, S., & Wu, D. (2024). Pine wilt disease detection algorithm based on improved YOLOv5. Frontiers in Plant Science, 15, 1302361. https://doi.org/10.3389/fpls.2024.1302361
Duncan, K. E., Czymmek, K. J., Jiang, N., Thies, A. C., & Topp, C. N. (2022). X-ray microscopy enables multiscale high-resolution 3D imaging of plant cells, tissues, and organs. Plant Physiology, 188(2), 831–845. https://doi.org/10.1093/plphys/kiab405
Eftekhari, P., Serra, G., Schoeffmann, K., & Portelli, D. B. (2024). Using natural language processing to enhance visual models for plant leaf diseases classification. https://netlibrary.aau.at/obvuklhs/download/pdf/10268703
Feng, X., Yu, Z., Fang, H., Jiang, H., Yang, G., Chen, L., … Hu, G. (2023). Plantorganelle Hunter is an effective deep-learning-based method for plant organelle phenotyping in electron microscopy. Nature Plants, 9(10), 1760–1775. https://doi.org/10.1038/s41477-023-01527-5
Ferentinos, K. P. (2018). Deep learning models for plant disease detection and diagnosis. Computers and Electronics in Agriculture, 145, 311–318. https://doi.org/10.1016/j.compag.2018.01.009
Forero, M. G., Murcia, H. F., Mendez, D., & Betancourt-Lozano, J. (2022). LiDAR platform for acquisition of 3D plant phenotyping database. Plants, 11(17), 2199. https://doi.org/10.3390/plants11172199
Fourure, D., Javaid, M. U., Posocco, N., & Tihon, S. (2021). Anomaly detection: How to artificially increase your F1-score with a biased evaluation protocol. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases. https://doi.org/10.1007/978-3-030-86514-6_1
Gandhi, R., Nimbalkar, S., Yelamanchili, N., & Ponkshe, S. (2018). Plant disease detection using CNNs and GANs as an augmentative approach. In 2018 IEEE International Conference on Innovative Research and Development (ICIRD). https://doi.org/10.1109/ICIRD.2018.8376321
Ghazi, M. M., Yanikoglu, B., & Aptoula, E. (2017). Plant identification using deep neural networks via optimization of transfer learning parameters. Neurocomputing, 235, 228–235. https://doi.org/10.1016/j.neucom.2017.01.018
Ghosh, S., Singh, A., Jhanjhi, N., Masud, M., & Aljahdali, S. (2022). SVM and KNN based CNN architectures for plant classification. Computers, Materials & Continua, 71(3). https://doi.org/10.32604/cmc.2022.023414
Gillespie, L. E., Ruffley, M., & Exposito-Alonso, M. (2024). Deep learning models map rapid plant species changes from citizen science and remote sensing data. Proceedings of the National Academy of Sciences, 121(37), e2318296121. https://doi.org/10.1073/pnas.2318296121
Greenwald, N. F., Miller, G., Moen, E., Kong, A., Kagel, A., Dougherty, T., et al. (2022). Whole-cell segmentation of tissue images with human-level performance using large-scale data annotation and deep learning. Nature Biotechnology, 40(4), 555–565. https://doi.org/10.1038/s41587-021-01094-0
Greydanus, S. J., & Kobak, D. (2020). Scaling down deep learning with MNIST-1D. In Proceedings of the Forty-first International Conference on Machine Learning. https://doi.org/10.48550/arXiv.2011.14439
Han, H., Yan, A., Li, L., Zhu, Y., Feng, B., Liu, X., & Zhou, Y. (2020). A signal cascade originated from epidermis defines apical-basal patterning of Arabidopsis shoot apical meristems. Nature Communications, 11(1), 1214. https://doi.org/10.1038/s41467-020-14989-4
Harris, J. A. (1913). On the calculation of intra-class and inter-class coefficients of correlation from class moments when the number of possible combinations is large. Biometrika, 9(3/4), 446–472. https://doi.org/10.1093/biomet/9.3-4.446
Hervieux, N., Dumond, M., Sapala, A., Routier-Kierzkowska, A.-L., Kierzkowski, D., Roeder, A. H., et al. (2016). A mechanical feedback restricts sepal growth and shape in Arabidopsis. Current Biology, 26(8), 1019–1028. https://doi.org/10.1016/j.cub.2016.03.004
Hesami, M., & Jones, A. M. P. (2021). Modeling and optimizing callus growth and development in Cannabis sativa using random forest and support vector machine in combination with a genetic algorithm. Applied Microbiology and Biotechnology, 105(12), 5201–5212. https://doi.org/10.1007/s00253-021-11375-y
Hohm, T., Zitzler, E., & Simon, R. (2010). A dynamic model for stem cell homeostasis and patterning in Arabidopsis meristems. PLoS One, 5(2), e9189. https://doi.org/10.1371/journal.pone.0009189
Hwang, S. W., & Sugiyama, J. (2021). Computer vision-based wood identification and its expansion and contribution potentials in wood science: A review. Plant Methods, 17(1), 47. https://doi.org/10.1186/s13007-021-00746-1
Jeyapoornima, B., Suganthi, P., Shankar, K., Chowdary, V. V., Sai, V. R. U., & Venkat, B. N. (2023). An automated system based on informational value of images for the purpose of classifying plant diseases. In 2023 International Conference on Artificial Intelligence and Knowledge Discovery in Concurrent Engineering (ICECONF). https://doi.org/10.1109/ICECONF57129.2023.10083618
Kanna, G. P., Kumar, S. J., Kumar, Y., Changela, A., Woźniak, M., Shafi, J., & Ijaz, M. F. (2023). Advanced deep learning techniques for early disease prediction in cauliflower plants. Scientific Reports, 13(1), 18475. https://doi.org/10.1038/s41598-023-45403-w
Kavitha, S., Kumar, T. S., Naresh, E., Kalmani, V. H., Bamane, K. D., & Pareek, P. K. (2023). Medicinal plant identification in real-time using deep learning model. SN Computer Science, 5(1), 73. https://doi.org/10.1007/s42979-023-02398-5 CrossRefGoogle Scholar
Kerstens, M., Strauss, S., Smith, R., & Willemsen, V. (2020). From stained plant tissues to quantitative cell segmentation analysis with MorphoGraphX. Plant Embryogenesis: Methods and Protocols, 6383. https://doi.org/10.1007/978-1-0716-0342-0_6 CrossRefGoogle ScholarPubMed
Khan, A. T., Jensen, S. M., Khan, A. R., & Li, S. (2023). Plant disease detection model for edge computing devices. Frontiers in Plant Science, 14, 1308528. https://doi.org/10.3389/fpls.2023.1308528 CrossRefGoogle ScholarPubMed
Kierzkowski, D., Runions, A., Vuolo, F., Strauss, S., Lymbouridou, R., Routier-Kierzkowska, A.-L., et al. (2019). A growth-based framework for leaf shape development and diversity. Cell, 177(6), 1405–1418.e17. https://doi.org/10.1016/j.cell.2019.05.011
Kuo, C.-C. J., Zhang, M., Li, S., Duan, J., & Chen, Y. (2019). Interpretable convolutional neural networks via feedforward design. Journal of Visual Communication and Image Representation, 60, 346–359. https://doi.org/10.1016/j.jvcir.2019.03.010
Labhsetwar, S. R., Haridas, S., Panmand, R., Deshpande, R., Kolte, P. A., & Pati, S. (2021, January). Performance analysis of optimizers for plant disease classification with convolutional neural networks. In 2021 4th Biennial International Conference on Nascent Technologies in Engineering (ICNTE) (pp. 1–6). IEEE. https://doi.org/10.1109/ICNTE51185.2021.9487698
Liaw, J. Z., Chai, A. Y. H., Lee, S. H., Bonnet, P., & Joly, A. (2025). Can language improve visual features for distinguishing unseen plant diseases? Paper presented at the International Conference on Pattern Recognition. https://doi.org/10.1007/978-3-031-78113-1_20
Liu, H., Tessema, B. B., Jensen, J., Cericola, F., Andersen, J. R., & Sørensen, A. C. (2019). ADAM-plant: A software for stochastic simulations of plant breeding from molecular to phenotypic level and from simple selection to complex speed breeding programs. Frontiers in Plant Science, 9, 1926. https://doi.org/10.3389/fpls.2018.01926
Liu, G., Chen, L., Wu, Y., Han, Y., Bao, Y., & Zhang, T. (2024a). PDLLMs: A group of tailored DNA large language models for analyzing plant genomes. Molecular Plant. https://doi.org/10.1016/j.molp.2024.12.006
Liu, X., Guo, J., Zheng, X., Yao, Z., Li, Y., & Li, Y. (2024b). Intelligent plant growth monitoring system based on LSTM network. IEEE Sensors Journal. https://doi.org/10.1109/JSEN.2024.3376818
Lowe, A., Harrison, N., & French, A. P. (2017). Hyperspectral image analysis techniques for the detection and classification of the early onset of plant disease and stress. Plant Methods, 13(1), 80. https://doi.org/10.1186/s13007-017-0233-z
Lu, Y., Du, J., Liu, P., Zhang, Y., & Hao, Z. (2022). Image classification and recognition of rice diseases: A hybrid DBN and particle swarm optimization algorithm. Frontiers in Bioengineering and Biotechnology, 10, 855667. https://doi.org/10.3389/fbioe.2022.855667
Madaki, A. Y., Muhammad-Bello, B., & Kusharki, M. (2024). AI-driven detection and treatment of tomato plant diseases using convolutional neural networks and OpenAI language models. Proceedings of the 5th International Electronic Conference on Applied Sciences, Computing and Artificial Intelligence, 4–6 December 2024. MDPI, Basel, Switzerland. https://sciforum.net/manuscripts/20907/slides.pdf
Mahajan, S., Raina, A., Gao, X.-Z., & Kant Pandit, A. (2021). Plant recognition using morphological feature extraction and transfer learning over SVM and AdaBoost. Symmetry, 13(2), 356. https://doi.org/10.3390/sym13020356
Majji, V. A., & Kumaravelan, G. (2021). Detection and classification of plant leaf disease using convolutional neural network on plant village dataset. International Journal of Innovative Research in Applied Sciences and Engineering (IJIRASE), 4(11), 931–935. https://doi.org/10.29027/IJIRASE.v4.i11.2021.931-935
Malagol, N., Rao, T., Werner, A., Töpfer, R., & Hausmann, L. (2025). A high-throughput ResNet CNN approach for automated grapevine leaf hair quantification. Scientific Reports, 15(1), 1590. https://doi.org/10.1038/s41598-025-85336-0
Mano, S., Miwa, T., Nishikawa, S.-i., Mimura, T., & Nishimura, M. (2009). Seeing is believing: On the use of image databases for visually exploring plant organelle dynamics. Plant and Cell Physiology, 50(12), 2000–2014. https://doi.org/10.1093/pcp/pcp128
Maraveas, C. (2024). Image analysis artificial intelligence technologies for plant phenotyping: Current state of the art. AgriEngineering, 6(3), 3375–3407. https://doi.org/10.3390/agriengineering6030193
Marconi, M., Gallemi, M., Benkova, E., & Wabnik, K. (2021). A coupled mechano-biochemical model for cell polarity guided anisotropic root growth. eLife, 10, e72132. https://doi.org/10.7554/elife.72132
Mendoza-Revilla, J., Trop, E., Gonzalez, L., Roller, M., Dalla-Torre, H., de Almeida, B. P., et al. (2024). A foundational large language model for edible plant genomes. Communications Biology, 7(1), 835. https://doi.org/10.1038/s42003-024-06465-2
Minowa, Y., Kubota, Y., & Nakatsukasa, S. (2022). Verification of a deep learning-based tree species identification model using images of broadleaf and coniferous tree leaves. Forests, 13(6), 943. https://doi.org/10.3390/f13060943
Mohameth, F., Bingcai, C., & Sada, K. A. (2020). Plant disease detection with deep learning and feature extraction using plant village. Journal of Computer and Communications, 8(6), 10–22. https://doi.org/10.4236/jcc.2020.86002
Mohan, A., & Peeples, J. (2024). Lacunarity pooling layers for plant image classification using texture analysis. Paper presented at the Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. https://doi.org/10.48550/arXiv.2404.16268
Mohanty, S. P., Hughes, D. P., & Salathé, M. (2016). Using deep learning for image-based plant disease detection. Frontiers in Plant Science, 7, 1419. https://doi.org/10.3389/fpls.2016.01419
Mokhtari, A., Ahmadnia, F., Nahavandi, M., & Rasoulzadeh, R. (2023). A comparative analysis of the Adam and RMSprop optimizers on a convolutional neural network model for predicting common diseases in strawberries. https://www.researchgate.net/publication/375372415
Mühlenstädt, T., & Frtunikj, J. (2024). How much data do you need? Part 2: Predicting DL class specific training dataset sizes. arXiv preprint arXiv:2403.06311. https://doi.org/10.48550/arXiv.2403.06311
Muller, B., & Martre, P. (2019). Plant and crop simulation models: Powerful tools to link physiology, genetics, and phenomics. Journal of Experimental Botany, 70, 2339–2344. https://doi.org/10.1093/jxb/erz175
Naidu, G., Zuva, T., & Sibanda, E. M. (2023). A review of evaluation metrics in machine learning algorithms. Paper presented at the Computer Science On-line Conference. https://doi.org/10.1007/978-3-031-35314-7_2
Nguyen, C. V., Fripp, J., Lovell, D. R., Furbank, R., Kuffner, P., Daily, H., & Sirault, X. (2016). 3D scanning system for automatic high-resolution plant phenotyping. Paper presented at the 2016 International Conference on Digital Image Computing: Techniques and Applications (DICTA). https://doi.org/10.48550/arXiv.1702.08112
Ogidi, F. C., Eramian, M. G., & Stavness, I. (2023). Benchmarking self-supervised contrastive learning methods for image-based plant phenotyping. Plant Phenomics, 5, 0037. https://doi.org/10.34133/plantphenomics.0037
Padhiary, M., Rani, N., Saha, D., Barbhuiya, J. A., & Sethi, L. (2023). Efficient precision agriculture with Python-based Raspberry Pi image processing for real-time plant target identification. IJRAR, 10(3). https://www.researchgate.net/publication/373825210_Efficient_Precision_Agriculture_with_Pythonbased_Raspberry_Pi_Image_Processing_for_Real-ime_Plant_Target_Identification
Pan, B., Liu, C., Su, B., Ju, Y., Fan, X., Zhang, Y., et al. (2024). Research on species identification of wild grape leaves based on deep learning. Scientia Horticulturae, 327, 112821. https://doi.org/10.1016/j.scienta.2023.112821
Pandey, A., & Vir, R. (2024). An improved approach to classify plant disease using CNN and random forest. Paper presented at the 2024 IEEE 3rd World Conference on Applied Intelligence and Computing (AIC). https://doi.org/10.1109/ICAIA57370.2023.10169830
Pandey, V., Tripathi, U., Singh, V. K., Gaur, Y. S., & Gupta, D. (2024). Survey of accuracy prediction on the plant village dataset using different ML techniques. EAI Endorsed Transactions on Internet of Things, 10. https://doi.org/10.4108/eetiot.4578
Patil, S., Kolhar, S., & Jagtap, J. (2024). Deep learning in plant stress phenomics studies – A review. International Journal of Computing and Digital Systems, 16(1), 305–316. https://doi.org/10.12785/ijcds/1571029493
Paudel, D., de Wit, A., Boogaard, H., Marcos, D., Osinga, S., & Athanasiadis, I. N. (2023). Interpretability of deep learning models for crop yield forecasting. Computers and Electronics in Agriculture, 206, 107663. https://doi.org/10.1016/j.compag.2023.107663
Petrellis, N. (2019). Plant disease diagnosis with colour normalization. Paper presented at the 2019 8th International Conference on Modern Circuits and Systems Technologies (MOCAST). https://doi.org/10.1109/MOCAST.2019.8741614
Picek, L., Šulc, M., Patel, Y., & Matas, J. (2022). Plant recognition by AI: Deep neural nets, transformers, and kNN in deep embeddings. Frontiers in Plant Science, 13, 787527. https://doi.org/10.3389/fpls.2022.787527
Polder, G., Blok, P. M., De Villiers, H. A., Van der Wolf, J. M., & Kamp, J. (2019). Potato virus Y detection in seed potatoes using deep learning on hyperspectral images. Frontiers in Plant Science, 10, 209. https://doi.org/10.3389/fpls.2019.00209
Pradhan, U. K., Meher, P. K., Naha, S., Rao, A. R., Kumar, U., Pal, S., & Gupta, A. (2023). ASmiR: A machine learning framework for prediction of abiotic stress–specific miRNAs in plants. Functional & Integrative Genomics, 23(2), 92. https://doi.org/10.1007/s10142-023-01014-2
Praharsha, C. H., Poulose, A., & Badgujar, C. (2024). Comprehensive investigation of machine learning and deep learning networks for identifying multispecies tomato insect images. Sensors, 24(23), 7858. https://doi.org/10.3390/s24237858
Rahman, H., Shah, U. M., Riaz, S. M., Kifayat, K., Moqurrab, S. A., & Yoo, J. (2024). Digital twin framework for smart greenhouse management using next-gen mobile networks and machine learning. Future Generation Computer Systems. https://doi.org/10.1016/j.future.2024.03.023
Rao, D. S., Ch, R. B., Kiran, V. S., Rajasekhar, N., Srinivas, K., Akshay, P. S., … Bharadwaj, B. L. (2022). Plant disease classification using deep bilinear CNN. Intelligent Automation & Soft Computing, 31(1), 161–176. https://doi.org/10.32604/iasc.2022.017706
Reich, Y., & Barai, S. (1999). Evaluating machine learning models for engineering problems. Artificial Intelligence in Engineering, 13(3), 257–272. https://doi.org/10.1016/S0954-1810(98)00021-1
Richardson, A., Cheng, J., Johnston, R., Kennaway, R., Conlon, B., Rebocho, A., et al. (2021). Evolution of the grass leaf by primordium extension and petiole-lamina remodeling. Science, 374(6573), 1377–1381. https://doi.org/10.1126/science.abf9407
Robinson, H. F., & Vink, J. N. (2024). Rapid and robust monitoring of Phytophthora infectivity through detached leaf assays with automated image analysis. In Phytophthora: Methods and protocols (pp. 93–104). Springer. https://doi.org/10.1007/978-1-0716-4330-3_7
Roeder, A. H. (2021). Arabidopsis sepals: A model system for the emergent process of morphogenesis. Quantitative Plant Biology, 2, e14. https://doi.org/10.1017/qpb.2021.12
Sa, I., Popović, M., Khanna, R., Chen, Z., Lottes, P., Liebisch, F., … Siegwart, R. (2018). WeedMap: A large-scale semantic weed mapping framework using aerial multispectral imaging and deep neural network for precision farming. Remote Sensing, 10(9), 1423. https://doi.org/10.3390/rs10091423
Sachar, S., & Kumar, A. (2021). Survey of feature extraction and classification techniques to identify plant through leaves. Expert Systems with Applications, 167, 114181. https://doi.org/10.1016/j.eswa.2020.114181
Sahoo, R. N., Gakhar, S., Rejith, R. G., Verrelst, J., Ranjan, R., Kondraju, T., … Kumar, S. (2023). Optimizing the retrieval of wheat crop traits from UAV-borne hyperspectral image with radiative transfer modelling using Gaussian process regression. Remote Sensing, 15(23), 5496. https://doi.org/10.3390/rs15235496
Saleem, M. H., Potgieter, J., & Arif, K. M. (2019). Plant disease detection and classification by deep learning. Plants, 8(11), 468. https://doi.org/10.3390/plants8110468
Saleem, M. H., Potgieter, J., & Arif, K. M. (2020). Plant disease classification: A comparative evaluation of convolutional neural networks and deep learning optimizers. Plants, 9(10), 1319. https://doi.org/10.3390/plants9101319
Shahi, T. B., Xu, C. Y., Neupane, A., & Guo, W. (2022). Machine learning methods for precision agriculture with UAV imagery: A review. Electronic Research Archive, 30(12), 4277–4317. https://doi.org/10.3934/era.2022218
Shahoveisi, F., Taheri Gorji, H., Shahabi, S., Hosseinirad, S., Markell, S., & Vasefi, F. (2023). Application of image processing and transfer learning for the detection of rust disease. Scientific Reports, 13(1), 5133. https://doi.org/10.1038/s41598-023-31942-9
Shapiro, B. E., Meyerowitz, E., & Mjolsness, E. (2013). Using cellzilla for plant growth simulations at the cellular level. Frontiers in Plant Science, 4, 408. https://doi.org/10.3389/fpls.2013.00408
Sharma, G., Kumar, A., Gour, N., Saini, A. K., Upadhyay, A., & Kumar, A. (2024). Cognitive framework and learning paradigms of plant leaf classification using artificial neural network and support vector machine. Journal of Experimental & Theoretical Artificial Intelligence, 36(4), 585–610. https://doi.org/10.1080/0952813X.2022.2096698
Shoaib, M., Shah, B., Ei-Sappagh, S., Ali, A., Alenezi, F., Hussain, T., & Ali, F. (2023). An advanced deep learning models-based plant disease detection: A review of recent research. Frontiers in Plant Science, 14, 1158933. https://doi.org/10.3389/fpls.2023.1158933
Shorten, C., & Khoshgoftaar, T. M. (2019). A survey on image data augmentation for deep learning. Journal of Big Data, 6(1), 1–48. https://doi.org/10.1186/s40537-019-0197-0
Silva, J. C. F., Teixeira, R. M., Silva, F. F., Brommonschenkel, S. H., & Fontes, E. P. (2019). Machine learning approaches and their current application in plant molecular biology: A systematic review. Plant Science, 284, 37–47. https://doi.org/10.1016/j.plantsci.2019.03.020
Singh, D., Jain, N., Jain, P., Kayal, P., Kumawat, S., & Batra, N. (2020). PlantDoc: A dataset for visual plant disease detection. In Proceedings of the 7th ACM IKDD CoDS and 25th COMAD (pp. 249–253). https://doi.org/10.1145/3371158.3371196
Singh, K. K. (2018). An artificial intelligence and cloud based collaborative platform for plant disease identification, tracking and forecasting for farmers. Paper presented at the 2018 IEEE International Conference on Cloud Computing in Emerging Markets (CCEM). https://doi.org/10.1109/CCEM.2018.00016
Soares, R., Ferreira, P., & Lopes, L. (2017). Can plant-pollinator network metrics indicate environmental quality? Ecological Indicators, 78, 361–370. https://doi.org/10.1016/j.ecolind.2017.03.037
Stringer, C., Wang, T., Michaelos, M., & Pachitariu, M. (2021). Cellpose: A generalist algorithm for cellular segmentation. Nature Methods, 18(1), 100–106. https://doi.org/10.1038/s41592-020-01018-x
Šulc, M., & Matas, J. (2017). Fine-grained recognition of plants from images. Plant Methods, 13, 114. https://doi.org/10.1186/s13007-017-0265-4
Sun, J., Di, L., Sun, Z., Shen, Y., & Lai, Z. (2019). County-level soybean yield prediction using deep CNN-LSTM model. Sensors, 19(20), 4363. https://doi.org/10.3390/s19204363
Sun, Y., Liu, Y., Zhou, H., & Hu, H. (2021). Plant diseases identification through a discount momentum optimizer in deep learning. Applied Sciences, 11(20), 9468. https://doi.org/10.3390/app11209468
Sun, Y., Zhang, Z., Sun, K., Li, S., Yu, J., Miao, L., et al. (2023). Soybean-MVS: Annotated three-dimensional model dataset of whole growth period soybeans for 3D plant organ segmentation. Agriculture, 13(7), 1321. https://doi.org/10.3390/agriculture13071321
Syarovy, M., Pradiko, I., Farrasati, R., Rasyid, S., Mardiana, C., Pane, R., et al. (2024). Pre-processing techniques using a machine learning approach to improve model accuracy in estimating oil palm leaf chlorophyll from portable chlorophyll meter measurement. Paper presented at the IOP Conference Series: Earth and Environmental Science. https://doi.org/10.1088/1755-1315/1308/1/012054
Tariku, G., Ghiglieno, I., Gilioli, G., Gentilin, F., Armiraglio, S., & Serina, I. (2023). Automated identification and classification of plant species in heterogeneous plant areas using unmanned aerial vehicle-collected RGB images and transfer learning. Drones, 7(10), 599. https://doi.org/10.3390/drones7100599
Tian, H., Wabnik, K., Niu, T., Li, H., Yu, Q., Pollmann, S., et al. (2014). WOX5–IAA17 feedback circuit-mediated cellular auxin response is crucial for the patterning of root stem cell niches in Arabidopsis. Molecular Plant, 7(2), 277–289. https://doi.org/10.1093/mp/sst118
Vakili, M., Ghamsari, M., & Rezaei, M. (2020). Performance analysis and comparison of machine and deep learning algorithms for IoT data classification. arXiv preprint arXiv:2001.09636. https://doi.org/10.48550/arXiv.2001.09636
Van den Broeck, L., Schwartz, M. F., Krishnamoorthy, S., Tahir, M. A., Spurney, R. J., Madison, I., et al. (2022). Establishing a reproducible approach to study cellular functions of plant cells with 3D bioprinting. Science Advances, 8(41), eabp9906. https://doi.org/10.1126/sciadv.abp9906
Vijayan, A., Mody, T. A., Yu, Q., Wolny, A., Cerrone, L., Strauss, S., … Kreshuk, A. (2024). A deep learning-based toolkit for 3D nuclei segmentation and quantitative analysis in cellular and tissue context. Development, 151(14). https://doi.org/10.1242/dev.202800
Wang, X.-F., Huang, D.-S., Du, J.-X., Xu, H., & Heutte, L. (2008). Classification of plant leaf images with complicated background. Applied Mathematics and Computation, 205(2), 916–926. https://doi.org/10.1016/j.amc.2008.05.108
Wang, Z., Yuan, H., Yan, J., & Liu, J. (2025). Identification, characterization, and design of plant genome sequences using deep learning. The Plant Journal, 121(1), e17190. https://doi.org/10.1111/tpj.17190
Wei, T., Chen, Z., Yu, X., Chapman, S., Melloy, P., & Huang, Z. (2024). PlantSeg: A large-scale in-the-wild dataset for plant disease segmentation. arXiv preprint arXiv:2409.04038. https://doi.org/10.48550/arXiv.2409.04038
Williamson, H. F., Brettschneider, J., Caccamo, M., Davey, R. P., Goble, C., Kersey, P. J., et al. (2023). Data management challenges for artificial intelligence in plant and agricultural research. F1000Research, 10, 324. https://doi.org/10.12688/f1000research.52204.2
Wolny, A., Cerrone, L., Vijayan, A., Tofanelli, R., Barro, A. V., Louveaux, M., et al. (2020). Accurate and versatile 3D segmentation of plant tissues at cellular resolution. eLife, 9, e57613. https://doi.org/10.7554/eLife.57613
Wongchai, A., Rao Jenjeti, D., Priyadarsini, A. I., Deb, N., Bhardwaj, A., & Tomar, P. (2022). Farm monitoring and disease prediction by classification based on deep learning architectures in sustainable agriculture. Ecological Modelling, 474, 110167. https://doi.org/10.1016/j.ecolmodel.2022.110167
Wu, Y., Wen, W., Gu, S., Huang, G., Wang, C., Lu, X., et al. (2024). Three-dimensional modeling of maize canopies based on computational intelligence. Plant Phenomics, 6, 0160. https://doi.org/10.34133/plantphenomics.0160
Wu, Z., Jiang, F., & Cao, R. (2022). Research on recognition method of leaf diseases of woody fruit plants based on transfer learning. Scientific Reports, 12(1), 15385. https://doi.org/10.1038/s41598-022-18337-y
Xing, D., Wang, Y., Sun, P., Huang, H., & Lin, E. (2023). A CNN-LSTM-att hybrid model for classification and evaluation of growth status under drought and heat stress in Chinese fir (Cunninghamia lanceolata). Plant Methods, 19(1), 66. https://doi.org/10.1186/s13007-023-01044-8
Xiong, J., Yu, D., Liu, S., Shu, L., Wang, X., & Liu, Z. (2021). A review of plant phenotypic image recognition technology based on deep learning. Electronics, 10(1), 81. https://doi.org/10.3390/electronics10010081
Yao, J., Tran, S. N., Garg, S., & Sawyer, S. (2024). Deep learning for plant identification and disease classification from leaf images: Multi-prediction approaches. ACM Computing Surveys, 56(6), 1–37. https://doi.org/10.48550/arXiv.2310.16273
Yu, R., Luo, Y., Zhou, Q., Zhang, X., Wu, D., & Ren, L. (2021). Early detection of pine wilt disease using deep learning algorithms and UAV-based multispectral imagery. Forest Ecology and Management, 497, 119493. https://doi.org/10.1016/j.foreco.2021.119493
Yuan, Y., Chen, L., Ren, Y., Wang, S., & Li, Y. (2022). Impact of dataset on the study of crop disease image recognition. International Journal of Agricultural and Biological Engineering, 15(5), 181–186. https://doi.org/10.25165/j.ijabe.20221505.7005
Zhang, J., Zhang, D., Cai, Z., Wang, L., Wang, J., Sun, L., … Zhao, J. (2022). Spectral technology and multispectral imaging for estimating the photosynthetic pigments and SPAD of the Chinese cabbage based on machine learning. Computers and Electronics in Agriculture, 195, 106814. https://doi.org/10.1016/j.compag.2022.106814
Zhang, C., Liu, Y., & Tie, N. (2023a). Forest land resource information acquisition with Sentinel-2 image utilizing support vector machine, K-nearest neighbor, random forest, decision trees and multi-layer perceptron. Forests, 14(2), 254. https://doi.org/10.3390/f14020254
Zhang, N., Zhou, X., Kang, M., Hu, B.-G., Heuvelink, E., & Marcelis, L. F. (2023b). Machine learning versus crop growth models: An ally, not a rival. AoB Plants, 15(2), plac061. https://doi.org/10.1093/aobpla/plac061
Zhang, P., Sun, X., Zhang, D., Yang, Y., & Wang, Z. (2023c). Lightweight deep learning models for high-precision rice seedling segmentation from UAV-based multispectral images. Plant Phenomics, 5, 0123. https://doi.org/10.34133/plantphenomics.0123
Zhang, X., Ibrahim, Z., Khaskheli, M. B., Raza, H., Zhou, F., & Shamsi, I. H. (2024). Integrative approaches to abiotic stress management in crops: Combining bioinformatics educational tools and artificial intelligence applications. Sustainability, 16(17), 7651. https://doi.org/10.3390/su16177651
Zhou, J., Li, J., Wang, C., Wu, H., Zhao, C., & Teng, G. (2021). Crop disease identification and interpretation method based on multimodal deep learning. Computers and Electronics in Agriculture, 189, 106408. https://doi.org/10.1016/j.compag.2021.106408
Zhou, N., Siegel, Z. D., Zarecor, S., Lee, N., Campbell, D. A., Andorf, C. M., et al. (2018a). Crowdsourcing image analysis for plant phenomics to generate ground truth data for machine learning. PLoS Computational Biology, 14(7), e1006337. https://doi.org/10.1371/journal.pcbi.1006337
Zhou, Y., Yan, A., Han, H., Li, T., Geng, Y., Liu, X., & Meyerowitz, E. M. (2018b). HAIRY MERISTEM with WUSCHEL confines CLAVATA3 expression to the outer apical meristem layers. Science, 361(6401), 502–506. https://doi.org/10.1126/science.aar8638
Zhou, X., Han, W., Dai, J., Liu, S., Gao, S., Guo, Y., … Zhu, X. (2023). SPA, a stigma-style-transmitting tract physical microenvironment assay for investigating mechano-signaling in pollen tubes. Proceedings of the National Academy of Sciences, 120(49), e2314325120. https://doi.org/10.1073/pnas.2314325120
Zhu, L., Li, Z., Li, C., Wu, J., & Yue, J. (2018). High performance vegetable classification from images based on AlexNet deep learning model. International Journal of Agricultural and Biological Engineering, 11(4), 217–223. https://doi.org/10.25165/IJABE.V11I4.2690
Zhu, W., Braun, B., Chiang, L. H., & Romagnoli, J. A. (2021). Investigation of transfer learning for image classification and impact on training sample size. Chemometrics and Intelligent Laboratory Systems, 211, 104269. https://doi.org/10.1016/j.chemolab.2021.104269

Table 1. Data acquisition techniques and their applications in plant sciences


Table 2. A comparison of common ML models in plant recognition and classification


Figure 1. CNN and data training process flowchart. A. DL-based image processing flowchart; B. Data training process flowchart.


Figure 2. A keyword network analysis of DL in plants. A keyword analysis of plant AI technologies reveals clear technological connections. Blue lines indicate image processing technologies, and green lines represent plant phenotyping and growth analysis. At its core, ‘DL’ links to ‘ML’, ‘image processing’, and ‘computer vision’, and connects outward to technologies such as ‘remote sensing’ and ‘precision agriculture’. The relationships between terms like ‘plant growth’, ‘diseases’, ‘phenomics’, and ‘smart agriculture’ indicate the growing integration of AI and ML in improving plant practices.


Table 3. Challenges and future trends of ML and image recognition technologies in plant science

Author comment: The pipelines of deep learning-based plant image processing — R0/PR1

Comments

Dear Editor,

Thank you so much for inviting us to submit a review article to Quantitative Plant Biology. Here, we are submitting a scientific review entitled “Image processing and machine learning in plant sciences” for consideration for publication in your reputable journal.

In the current digital era, computational tools and quantitative algorithms have fundamentally impacted and rapidly revolutionized various aspects of plant research, such as species identification, plant disease detection, cellular signaling quantification, and growth and development assessment. We believe that a comprehensive introduction and overview of the latest technology advancement, with a focus on image processing and machine learning in plant science, will be of broad interest and greatly facilitate quantitative studies in plant biology.

In this manuscript, we review the latest computational tools and methodologies used in plant science. We discuss data acquisition and preprocessing techniques such as high-resolution imaging and UAV photography, as well as advanced image processing methods. We also review feature extraction techniques such as color histograms and texture analysis, which are essential for accurate plant identification and health assessment. Furthermore, we highlight the impact of the latest deep learning techniques, particularly Convolutional Neural Networks (CNNs), in advancing plant image analysis compared to conventional approaches. Lastly, we discuss current challenges and future perspectives in applying these technologies in both basic and applied plant research.
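As a brief illustration of one such hand-crafted feature, a colour histogram can be computed in a few lines of Python with OpenCV. The snippet below is purely illustrative (it is not code from the manuscript) and assumes a hypothetical input file leaf.jpg:

```python
# A minimal sketch of colour-histogram feature extraction with OpenCV.
# 'leaf.jpg' is a hypothetical input image used only for illustration.
import cv2
import numpy as np

img = cv2.imread("leaf.jpg")  # BGR image as a NumPy array

# One 32-bin histogram per colour channel, normalized so that images of
# different sizes yield comparable feature vectors.
features = []
for channel in range(3):  # B, G, R
    hist = cv2.calcHist([img], [channel], None, [32], [0, 256])
    hist = cv2.normalize(hist, hist).flatten()
    features.append(hist)

feature_vector = np.concatenate(features)  # 96-dimensional descriptor
print(feature_vector.shape)  # (96,)
```

Such a vector can then be fed to a classical classifier (e.g., an SVM) for identification or health assessment.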

Thank you again for your consideration!

Sincerely,

Han Han Ph.D.

Assistant Professor

College of Life Sciences

Nanjing Forestry University

Yun Zhou Ph.D.

Associate Professor

Department of Botany and Plant Pathology

Purdue University

Review: The pipelines of deep learning-based plant image processing — R0/PR2

Conflict of interest statement

Reviewer declares none.

Comments

This review describes how advances in machine learning and image processing can be applied to plant science. It outlines major trends and approaches for applying these novel methods in the field of plant biology. It systematically illustrates different methods to acquire data, preprocess it, and extract features from it. It then goes on to cover different types of machine learning algorithms and how to train them. Finally, the authors describe previous important work in the field and discuss future trends and challenges.

Overall, I believe that this is an important manuscript that explains the trends of machine learning to plant biologists. As such, I believe that this manuscript needs to better explain the technical terms and when, generally, it is better to use one option over the other, in a language that is understandable to a trained biologist. I also think that it is important to further stress the specific challenges and considerations that arise when applying machine learning to plant science. Moreover, I found several cases in which the work described in the manuscript does not match the information supplied in the cited work.

Major points:

- It is worth explaining in the introduction that the usage of machine learning in the plant biology field lags behind the state of the art of neural networks. For example, these days language models are revolutionizing the field of AI, and we still don’t see a major use for these methods in plant biology.

- Terms should be better defined: image classification, image recognition, and object detection are somewhat similar tasks but have distinct, precise definitions and purposes. I think that when using such a term, a clear definition and explanation of the task that is performed by the algorithm should be added, and terms should be kept consistent – the FRCNN authors use the term object detection to describe their work.

- I think that for this review, the authors should emphasize more what is special about plants. For example, which preprocessing, feature extraction, and data augmentation techniques are generally used in machine learning, how they are better suited for plant research and why, and maybe even add these points to the tables.

- In Table 2, the techniques suggested are usually not labor intensive, and it is important to stress that: a. these methods also lead to information loss, and thus there is a need to find the right balance between the simplicity of the model and the loss of potentially important information; b. the labor-intensive part is the actual labeling of the training data, and in many cases that is the bottleneck that can cause such projects to fail.

- In the computer vision field, the labeled data is often referred to as ground truth, and it is important to introduce such commonly used terms in the review and explain them.

- Regarding Table 3: I believe that the trend in computer vision is not to hand-craft feature extraction but to allow the algorithm to learn which features to extract by itself. Also, the separation between Tables 2 and 3 is somewhat artificial. I’m not sure whether this table is needed.

- Table 4: can the manuscript include some general ranking of the performances of the different algorithms on a task (object detection, for example)? For someone not familiar with the field, this table can be overwhelming.

- Figure 1: This figure is hard to comprehend; processes seem to be happening in parallel. A more linear layout of the algorithm would be very beneficial.

- In lines 216-217, there is an explanation of data partitioning, but I think that the practical use of each set is missing from the description. When does one use the training set (for every image used while training), the validation set (once per epoch, as a way to evaluate performance mid-training on unseen data), and the test set (to compare between models and to evaluate performance at the end of training)? The way it is currently written is unclear for someone who is not from the field. What is the difference between a parameter and a hyper-parameter?
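For illustration, a minimal sketch of this three-way partitioning in Python with scikit-learn, using placeholder arrays X and y in place of real image features and labels:

```python
# A minimal sketch of train/validation/test partitioning with scikit-learn.
# X and y are synthetic placeholders for image features and class labels.
import numpy as np
from sklearn.model_selection import train_test_split

X = np.random.rand(1000, 64)       # 1000 samples, 64 features each
y = np.random.randint(0, 5, 1000)  # 5 hypothetical plant classes

# First carve off 30% of the data, then split that 30% half-and-half into
# a validation set (consulted once per epoch) and a test set (used only once,
# at the very end, to report final performance).
X_train, X_tmp, y_train, y_tmp = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y)
X_val, X_test, y_val, y_test = train_test_split(
    X_tmp, y_tmp, test_size=0.5, random_state=42, stratify=y_tmp)

# Model *parameters* (e.g., network weights) are learned from X_train;
# *hyper-parameters* (e.g., learning rate, number of epochs) are chosen by
# comparing performance on X_val.
print(len(X_train), len(X_val), len(X_test))  # 700 150 150
```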

- Evaluation metrics are mentioned but not explained. Each should include what it is measuring, why, and what it detects strongly compared to the other metrics.
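As one possible illustration, the standard classification metrics can be computed with scikit-learn on a toy set of predictions (the labels below are invented purely to show what each metric captures):

```python
# A small sketch of common evaluation metrics on made-up predictions.
# Accuracy counts all correct calls; precision penalizes false positives;
# recall penalizes false negatives; F1 is their harmonic mean and is more
# informative than accuracy on imbalanced datasets.
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score, confusion_matrix)

y_true = [0, 0, 0, 0, 1, 1, 1, 1, 1, 1]  # 1 = diseased leaf (toy example)
y_pred = [0, 0, 1, 0, 1, 1, 1, 0, 1, 1]  # one false positive, one false negative

print("accuracy :", accuracy_score(y_true, y_pred))   # 0.8
print("precision:", precision_score(y_true, y_pred))  # 5/6 ~ 0.83
print("recall   :", recall_score(y_true, y_pred))     # 5/6 ~ 0.83
print("F1       :", f1_score(y_true, y_pred))         # ~ 0.83
print(confusion_matrix(y_true, y_pred))               # [[TN FP], [FN TP]]
```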

- What are GANs? (explain and cite).

- What are a BP NN and YOLOv5s? What is different between these NNs and a regular CNN?

- Section 5.2.1 – you didn’t mention genomes previously at all. What differences are there between image and sequence ML? With which data? What about language models?

- I think that the NN plantseg that segments cells in plants is important to cite and explain as a different use case of image processing to analyze microscopy data (https://doi.org/10.7554/eLife.57613).

Minor points:

- Authors should ensure that they accurately explain work cited. For example: in line 54, Ullah et al. 2024 does not seem to state the number 83% in their paper.

- Line 54: citation missing for Mhango and colleagues

- In the data acquisition section, I think it makes sense to also mention the use of regular cameras and phones, which have lower image quality and precision but make it easier to obtain large amounts of data.

- In line 106: There is a need to explain what data augmentation actually means in practice: one performs operations on the original images and feeds those to the algorithm, which sees them as new data. This allows for higher richness and robustness in training.
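A minimal sketch of such an augmentation pipeline, assuming torchvision is available and a hypothetical input file leaf.jpg:

```python
# A minimal sketch of image augmentation with torchvision. Each call
# produces a randomly flipped, rotated, and colour-jittered variant of the
# original image, so the network effectively sees "new" training examples
# without any additional labeling effort.
from PIL import Image
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomRotation(degrees=15),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),  # HWC uint8 image -> CHW float tensor in [0, 1]
])

img = Image.open("leaf.jpg")  # hypothetical input image
augmented = augment(img)      # a different random variant on every call
print(augmented.shape)
```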

- In line 135: the word “with” should be replaced with “from”.

- In line 186: can the authors provide a rough estimate for an acceptable size of a dataset to yield good results?

- Line 191: this paragraph starts with “In summary” yet when the paragraph ends the section continues.

- Line 194: replace the word “field” with “plant biology”.

- The loss function can also be customized and designed by oneself, according to the needs of the project. The importance and role of the loss function should be further stressed.
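A minimal sketch of such a self-designed loss in PyTorch, here a class-weighted cross-entropy whose weights are invented purely for illustration (e.g., to up-weight a rare disease class):

```python
# A minimal sketch of a custom, project-specific loss in PyTorch:
# cross-entropy with per-class weights. The weights are illustrative only.
import torch
import torch.nn as nn

class WeightedDiseaseLoss(nn.Module):
    def __init__(self, class_weights):
        super().__init__()
        self.ce = nn.CrossEntropyLoss(weight=class_weights)

    def forward(self, logits, targets):
        return self.ce(logits, targets)

weights = torch.tensor([1.0, 1.0, 4.0])  # rare class 2 weighted 4x
criterion = WeightedDiseaseLoss(weights)

logits = torch.randn(8, 3)               # batch of 8 samples, 3 classes
targets = torch.randint(0, 3, (8,))
loss = criterion(logits, targets)        # scalar, differentiable
print(loss.item())
```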

- Line 237: citation in wrong format

- Line 238: please add a sentence about when one should evaluate their model.

- Line 380: what is SHAP?

- Line 385: 98% accuracy in what? What was the task at hand?

- Line 385: the authors start the sentence with Sun et al. but cited Zhang et al.

- The PlantVillage dataset has the potential to be an important resource; it should be mentioned sooner in the paper, and its importance, and that of other datasets like it, should be stressed.

- Line 426: Table 5, should be Table 6.

Review: The pipelines of deep learning-based plant image processing — R0/PR3

Conflict of interest statement

Reviewer declares none.

Comments

This is a review paper summarizing image processing and machine learning in plant science. A review paper is supposed to provide information that is well defined and clearly summarized so the readers can form an accurate knowledge base regarding a certain topic. But this paper is not well written, lacking effective summary and synthesis of the literature. The overall issues are that the title is too broad and that limited effort was put into synthesizing and summarizing the literature. I suggest the authors rename the title, narrow down the focus area, and restructure the outline.

Specifically, the authors piled multiple papers into tables and simply describe what was done in each paper. I suggest the authors digest the content and summarize it better. In addition, an insufficient number of papers is included for any particular topic.

For example, in the data acquisition section (Table 1), the categorization of the techniques lacks consideration. Should the categories be clearly divided by the type of camera (visual, multispectral, LiDAR), the type of platform (UAV or indoor settings?), or the type of application (plant morphology, plant physiology, growth, yield, etc.)? Does 3D scanning technology equate to LiDAR? An “environmental monitoring sensor” is also included, and that is not even an image dataset.

The authors are advised to clearly define the different types of algorithms at a higher level, including the differences among the commonly used terms AI, machine learning, and deep learning. You have combined image data and “spreadsheet” data, which may require different types of algorithms to handle. The applications included in this review are also limited: you included disease, but what about abiotic stress? Most of the studies are agriculture-related (crop science) rather than plant science.

Recommendation: The pipelines of deep learning-based plant image processing — R0/PR4

Comments

No accompanying comment.

Decision: The pipelines of deep learning-based plant image processing — R0/PR5

Comments

No accompanying comment.

Author comment: The pipelines of deep learning-based plant image processing — R1/PR6

Comments

Dear Editor,

Thank you so much for inviting us to submit a review article to Quantitative Plant Biology. Here, we are submitting a scientific review entitled “The Pipelines of Deep Learning-based Plant Image Processing” for consideration for publication in your reputable journal.

In the current digital era, computational tools and quantitative algorithms have fundamentally impacted and rapidly revolutionized various aspects of plant research, such as species identification, plant disease detection, cellular signaling quantification, and growth and development assessment. However, the application of machine learning (ML) and artificial intelligence (AI) in plant research is still in its early stages and lags behind their adoption in other scientific fields. We aimed to summarize recent advancements and encourage the future integration of ML and AI into plant science research. We believe that a comprehensive introduction and overview of the latest technology advancements, with a focus on deep learning-based image processing in plant science, will be of broad interest and greatly facilitate quantitative studies in plant biology.

In this manuscript, we review the latest computational tools and methodologies used in plant image processing. We discuss data acquisition and preprocessing techniques such as high-resolution imaging and UAV photography, as well as advanced image processing methods. We also review feature extraction techniques which are essential for accurate plant identification and health assessment. Furthermore, we highlight the impact of the latest deep learning techniques, particularly Convolutional Neural Networks (CNNs), in advancing plant image analysis. Lastly, we discuss current challenges and future perspectives in applying these technologies in both basic and applied plant research.

Thank you again for your consideration!

Sincerely,

Han Han Ph.D.

Assistant Professor

College of Life Sciences

Nanjing Forestry University

Yun Zhou Ph.D.

Associate Professor

Department of Botany and Plant Pathology

Purdue University

Review: The pipelines of deep learning-based plant image processing — R1/PR7

Conflict of interest statement

Reviewer declares none.

Comments

The review is now much better explained than before, and it is much more comprehensive.

Yet there is still some disconnect between the ML and plant sciences parts.

Even though the paper is much better written, there are still some points to address:

Major comments:

1. The explanations about the algorithms are much more straightforward now and appear much more didactic. Yet, in many cases (for example, in sections 3.1 and 3.3), a connection to the actual plant science is missing. I think that a small addition to each of the algorithms about how it performs in a plant-relevant study and what the cited work managed to do with it would add significant value (like in line 229). If no work related to plants exists, explain the need and potential for such work, or remove it entirely. Maybe the authors could combine sections 3 and 4?

2. Some errors occurred; Figure 2 is not visible in the final PDF.

3. Section 4.4 lacks flow. It lacks an introduction and summary paragraphs; it’s just a list of algorithm descriptions. Each section should have more insight than a list of existing manuscripts or tools.

4. The addition of the performance score is useful; a numerical score on known metrics would be better, even though I recognize that it can be challenging to curate such values due to the use of different metrics in different manuscripts. If it’s impossible, can the method by which the qualitative descriptions of performance (high/medium, etc.) were obtained be described?

Minor comments:

- Fig. 1 seems to have a black background with large white boxes. This may be due to my computer’s configuration, but in any case, please make sure that the background stays white across all platforms.

- Maybe combine the very short section 5 with section 6?

Line 74. The idea that performance has been tested misses a conclusion: how did the CNNs perform on the public databases? Were they any good?

Line 84. What are some of the new insights that were revealed by LLMs?

Line 115. Maybe it is necessary to say that CNNs can be fed raw images and learn the features by themselves.
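A minimal sketch of this idea in PyTorch, with purely illustrative layer sizes: the network consumes raw pixels directly, and the convolutional filters are learned during training rather than hand-crafted.

```python
# A minimal sketch of a CNN that takes raw RGB pixels as input; its
# feature detectors (the convolution kernels) are learned, not designed.
import torch
import torch.nn as nn

cnn = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # learned edge/colour filters
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # learned texture/shape filters
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 5),  # e.g., 5 hypothetical plant disease classes
)

raw_images = torch.rand(4, 3, 224, 224)  # batch of raw RGB images
logits = cnn(raw_images)
print(logits.shape)  # torch.Size([4, 5])
```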

Line 119. Who added the physiological features? Were they manually analyzed and added to the algorithm as metadata? Or maybe they were automatically computed?

Lines 135 - 144. The paragraph is a bit repetitive. The same sentence appears several times with slightly different wording. Consider condensing it.

Review: The pipelines of deep learning-based plant image processing — R1/PR8

Conflict of interest statement

Reviewer declares none.

Comments

The paper makes a valuable contribution by compiling tools and techniques used in plant image processing. It fits the journal’s scope, is timely, and is useful for both new and experienced researchers in the field. However, the authors should add more critical comparisons between methods to improve clarity and depth. For example, while many models and techniques are listed, the paper lacks critical insight or comparison — e.g., which methods are best for which tasks?

Recommendation: The pipelines of deep learning-based plant image processing — R1/PR9

Comments

Thank you very much for the time and effort you’ve dedicated to addressing the comments. The revised version looks excellent and represents a strong contribution. If possible, just a few minor remarks remain to be addressed to further strengthen the work.

Decision: The pipelines of deep learning-based plant image processing — R1/PR10

Comments

No accompanying comment.

Author comment: The pipelines of deep learning-based plant image processing — R2/PR11

Comments

No accompanying comment.

Recommendation: The pipelines of deep learning-based plant image processing — R2/PR12

Comments

No accompanying comment.

Decision: The pipelines of deep learning-based plant image processing — R2/PR13

Comments

No accompanying comment.