Battle Models: Inception ResNet vs. Extreme Inception for Marine Fish Object Detection
DOI: 10.33395/sinkron.v8i4.13130

Keywords: Computer Vision, Marine Fish Object Detection, Deep Learning, Inception ResNet model, Xception model

Abstract
In deep learning for computer vision, there is considerable interest in comparing two prominent models, Inception ResNet and Xception, particularly for marine fish object detection. This study presents a comparative analysis of these two advanced neural network architectures, assessing their efficacy in identifying and localizing marine fish species in underwater images. Both models were rigorously evaluated on their feature-extraction capabilities. The findings reveal a nuanced performance landscape: Inception ResNet achieves remarkable accuracy in identifying marine fish objects, while Xception demonstrates superior computational efficiency. The study thus highlights the inherent trade-off between precision and computational cost, offering practical perspectives on the implications of choosing one model over the other, and underscores the importance of selecting a model that matches the specific requirements of marine fish object detection applications. The aim is to guide practitioners and researchers in marine biology and computer vision toward well-informed decisions when applying deep learning to detect marine fish objects in underwater settings, with a specific focus on the comparison between the Inception ResNet and Xception models.
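As a rough illustration of the accuracy-versus-efficiency comparison described above, the following minimal sketch loads both backbones as feature extractors in TensorFlow/Keras and contrasts their parameter counts, feature dimensions, and inference time. The ImageNet weights, 299x299 input size, batch size, and timing loop are illustrative assumptions, not the authors' actual experimental setup or dataset.

```python
# Minimal sketch (assumption): comparing InceptionResNetV2 and Xception
# backbones as feature extractors with TensorFlow/Keras.
import time

import numpy as np
import tensorflow as tf

INPUT_SHAPE = (299, 299, 3)  # both backbones accept 299x299 RGB by default

backbones = {
    "InceptionResNetV2": tf.keras.applications.InceptionResNetV2(
        include_top=False, weights="imagenet",
        input_shape=INPUT_SHAPE, pooling="avg"),
    "Xception": tf.keras.applications.Xception(
        include_top=False, weights="imagenet",
        input_shape=INPUT_SHAPE, pooling="avg"),
}

# Dummy batch standing in for preprocessed underwater fish images.
batch = np.random.rand(8, *INPUT_SHAPE).astype("float32")

for name, model in backbones.items():
    start = time.perf_counter()
    features = model.predict(batch, verbose=0)
    elapsed = time.perf_counter() - start
    print(f"{name}: {model.count_params():,} params, "
          f"feature dim {features.shape[-1]}, {elapsed:.2f}s per batch of 8")
```

In a detection pipeline, either backbone's pooled features would feed a classification and localization head; the sketch only exposes the size/speed trade-off the abstract refers to.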
License
Copyright (c) 2023 Djarot Hindarto

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.