DETERMINING THE STAGES OF STRAWBERRY MATURITY USING CONVOLUTIONAL NEURAL NETWORK MODELS
Keywords:
artificial intelligence, computer vision, convolutional neural network, MobileNet, EfficientNet
Abstract
Throughout the history of agriculture, crop cultivation and harvesting technologies have been constantly improving, easing work in the field and increasing the efficiency of fruit harvesting. Today, modern technologies, including artificial intelligence, are penetrating the agro-industrial sector, and their use is becoming more widespread every year. This technology allows agricultural producers to obtain large amounts of data in real time, analyze them, and make decisions regarding fertilizer application, pesticide use, irrigation, and determining the ripeness of fruits or plants. An equally important application of artificial intelligence is the tracking of carbon footprints, which gives producers an advantage when entering European markets. Combined with the latest developments in unmanned vehicles and robots, these tools allow for increased productivity and production volumes in agriculture. An important part of the modern agro-industry is computer vision, a field of artificial intelligence focused on creating intelligent systems capable of processing and analyzing visual information in a way similar to the human sensory system. Neural networks are often used in this technology for both image recognition and classification. This article presents a comparative analysis of three convolutional neural network models for classifying the ripeness stage of strawberries: MobileNetV2, MobileNetV3Small, and EfficientNetB0. The networks were evaluated on training and validation accuracy, training and validation loss, and training time. The best results were achieved by MobileNetV3Small. The results and methodology of this research can be useful both for scientists and for entrepreneurs working in the agro-industrial sector and implementing artificial intelligence in the production process.
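The article's own code is not reproduced here, but the comparison it describes can be sketched in Keras by attaching an identical classification head to each of the three backbones and training them under the same regime. This is a minimal illustration, not the authors' implementation: the three-class label set, the input size, the frozen-backbone transfer-learning setup, and all hyperparameters are assumptions. (In practice `weights="imagenet"` would be used; `weights=None` keeps the sketch offline-friendly.)

```python
import tensorflow as tf

NUM_CLASSES = 3          # assumed label set, e.g. unripe / partially ripe / ripe
IMG_SIZE = (224, 224)    # assumed input resolution

def build_classifier(backbone_fn, name):
    """Wrap a Keras Applications backbone with a small classification head."""
    base = backbone_fn(input_shape=IMG_SIZE + (3,),
                       include_top=False,
                       weights=None)      # use weights="imagenet" for real transfer learning
    base.trainable = False                # freeze the backbone for the first training phase
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ], name=name)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# The three architectures compared in the article, ready for identical
# model.fit(...) runs so that accuracy, loss, and training time are comparable.
candidates = {
    "MobileNetV2": tf.keras.applications.MobileNetV2,
    "MobileNetV3Small": tf.keras.applications.MobileNetV3Small,
    "EfficientNetB0": tf.keras.applications.EfficientNetB0,
}
models = {name: build_classifier(fn, name) for name, fn in candidates.items()}
```

Holding the head, optimizer, and data pipeline constant across the three backbones is what makes the per-model accuracy, loss, and training-time figures directly comparable.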
References
1. Sady i Ogrody. (2024). Co czeka rynek truskawek? Produkcja do przetwórstwa staje się nieopłacalna [What awaits the strawberry market? Production for processing is becoming unprofitable]. https://www.sadyogrody.pl/owoce/101/co_czeka_rynek_truskawek_produkcja_do_przetworstwa_staje_sie_nieoplacalna,39056.html.
2. Wang, C., Wang, H., Han, Q., Zhang, Z., Kong, D., & Zou, X. (2024). Strawberry detection and ripeness classification using YOLOv8+ model and image processing method. Agriculture, 14(5), 751.
3. Miraei Ashtiani, S.-H., Javanmardi, S., Jahanbanifard, M., Martynenko, A., & Verbeek, F. J. (2021). Detection of mulberry ripeness stages using deep learning models. IEEE Access, 9, 100380–100394. https://ieeexplore.ieee.org/abstract/document/9481231.
4. Al-Masawabe, M. M., Samhan, L. F., Al-Farra, A. H., Aslem, Y. E., & Abu-Naser, S. S. (2021). Papaya maturity classifications using deep convolutional neural networks. International Journal of Academic Engineering Research, 5(12), 22–29. https://philpapers.org/rec/ALMPMC
5. Pardede, J., Sitohang, B., Akbar, S., & Khodra, M. L. (2021). Implementation of transfer learning using VGG16 on fruit ripeness detection. International Journal of Intelligent Systems and Applications, 13(2), 34–43. https://www.mecs-press.org/ijisa/ijisa-v13-n2/v13n2-4.html
6. Howard, A. G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., Andreetto, M., & Adam, H. (2017). MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv. https://arxiv.org/abs/1704.04861.
7. Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., & Chen, L.-C. (2018). MobileNetV2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 4510–4520). https://doi.org/10.1109/CVPR.2018.00474
8. Howard, A., Sandler, M., Chu, G., Chen, L.-C., Chen, B., Tan, M., Wang, W., Zhu, Y., Pang, R., Vasudevan, V., Le, Q. V., & Adam, H. (2019). Searching for MobileNetV3. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 1314–1324). https://arxiv.org/abs/1905.02244.
9. Tan, M., & Le, Q. V. (2019). EfficientNet: Rethinking model scaling for convolutional neural networks. In Proceedings of the 36th International Conference on Machine Learning (pp. 6105–6114). PMLR. https://arxiv.org/abs/1905.11946.
10. Zoph, B., & Le, Q. V. (2017). Neural architecture search with reinforcement learning. arXiv. https://arxiv.org/abs/1611.01578.
11. Hu, J., Shen, L., & Sun, G. (2018). Squeeze-and-excitation networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 7132–7141). https://doi.org/10.48550/arXiv.1709.01507.
12. Ramachandran, P., Zoph, B., & Le, Q. V. (2017). Searching for activation functions. arXiv. https://arxiv.org/abs/1710.05941
License
Copyright (c) 2025 Information technologies in economics and environmental sciences

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.