Journal of Optimization of Soft Computing (JOSC), Vol. 3, Issue 1, pp. 63-69, Spring 2025. Journal homepage: https://sanad.iau.ir/journal/josc

Paper Type: Research Paper
Learning Rate Optimization of U-Net Architecture Using Grasshopper Optimization Algorithm to Enhance Accuracy in CT Image Segmentation of COVID-19 Patients
Alireza Mehravin 1*, Mostafa Zaare 1, Reza Mortazavi 2
1. School of Mathematics and Computer Science, Damghan University, Damghan, Iran.
2. School of Engineering, Computer Department, Damghan University, Damghan, Iran.
Article Info
Article History: Received: 2025/04/28; Revised: 2025/06/11; Accepted: 2025/06/11
DOI: josc.2025.202504281205187

Abstract
Owing to its high learning accuracy and speed, rapid data processing, and independence from large databases for network training, the U-Net architecture is a well-known and popular deep learning architecture for image segmentation and feature extraction. Learning rate selection and updating are crucial in network training. Because U-Net is a highly nonlinear network, classical mathematical optimization algorithms increase the risk of convergence to local optima. This analytical research paper used the grasshopper optimization algorithm (GOA) as a metaheuristic approach to optimize the learning rate of U-Net. The network was trained on 256×256 CT images of the lungs of COVID-19 infected and uninfected individuals. A total of 400 CT images were employed as the training dataset, whereas 80 CT images were used as the testing data. The code was implemented in MATLAB. Optimizing the learning rate enhanced image segmentation accuracy by 2.23%. Iterative metaheuristic algorithms lead to longer network training times; however, the proposed optimization method can be very useful when large databases are not available for network training and higher accuracy is preferred over time savings.

Keywords: COVID-19, GOA, Image segmentation, Metaheuristic, U-Net.

*Corresponding Author's Email Address: alireza.mehravin@gmail.com
1. Introduction
U-Net is a convolutional neural network proposed by researchers at the University of Freiburg in 2015 for biomedical image segmentation. Built from convolutional layers, the U-Net architecture accelerates processing and achieves high learning accuracy even with a limited number of training samples. Figure 1 depicts the U-Net architecture and the operations in its different layers [1, 2].
The architecture of U-Net is based on an encoder-decoder model, where the encoder part of the network learns to extract high-level features from the input image, while the decoder part of the network learns to reconstruct the output image from the learned features. The U-Net model also includes skip connections between the encoder and decoder layers. These skip connections allow the decoder layers to use the features learned by the corresponding encoder layers, which helps to
preserve spatial information and improve the accuracy of segmentation [3].
Figure 1. U-Net architecture.
One of the key advantages of U-Net is its ability to perform well with limited training data. This is achieved by using data augmentation techniques such as rotation, flipping, and scaling during training. Additionally, U-Net can be modified to work with various input sizes, making it a versatile architecture for a wide range of image segmentation tasks. U-Net has been extensively implemented in medical imaging applications, such as tumor detection and cell segmentation, as well as in other image segmentation tasks, including the segmentation of roads and buildings in satellite imagery [4, 5]. The accuracy of the network's learning can be enhanced by employing appropriate parameters for its training.
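As a concrete illustration of the augmentation idea mentioned above, the following minimal sketch shows how random rotation, flipping, and scaling could be configured in MATLAB's Deep Learning Toolbox; the specific ranges are illustrative assumptions rather than settings reported in this paper.

```matlab
% Minimal augmentation sketch (illustrative ranges, not the paper's settings):
% random rotation, horizontal flipping, and scaling applied on the fly during
% training to effectively enlarge a small dataset.
augmenter = imageDataAugmenter( ...
    'RandRotation',    [-15 15], ...   % rotate by a random angle in degrees
    'RandXReflection', true, ...       % random horizontal flip
    'RandScale',       [0.9 1.1]);     % random isotropic scaling
% The augmenter can later be attached to the training datastore, e.g. via
% pixelLabelImageDatastore(imds, pxds, 'DataAugmentation', augmenter).
```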
2. Materials and Methods
To obtain optimal convergence in the learning process of a neural network, it is necessary to optimize the loss function [6]. Extensive and fruitful interactions between optimization and machine learning methods have produced important breakthroughs in state-of-the-art computing. Many optimization problems in the engineering sciences are complex and cannot be solved through conventional optimization approaches such as mathematical programming. Heuristic (or approximate) algorithms can be used to solve such problems. These algorithms do not guarantee that the returned solution is the exact optimum; they deliver an approximately optimal solution whose quality generally improves as more computation time is allowed [7, 8].
3. Data Description
Preprocessing is necessary to enhance the quality of image segmentation. The preprocessing stage involves operations such as image size reduction, noise reduction, and histogram equalization [9]. To train the network more accurately and to reduce the computational load, the images were scaled down to 256×256 pixels. The network was trained on a dataset of lung CT images of COVID-19 infected patients and uninfected individuals. A total of 400 CT images were used as the training dataset, whereas 80 CT images were used as the testing data. This dataset is publicly available on the Kaggle website. The code was implemented in MATLAB R2020b on a computer equipped with an Intel Core i7 processor running at 2.8 GHz, 16 GB of RAM, and a 4 GB GPU.
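A minimal data-preparation sketch consistent with the description above is shown below; the folder names, file name, and class/label encoding are assumptions made purely for illustration.

```matlab
% Assumed layout (illustrative): raw CT slices in 'imagesRaw/', resized 256x256
% slices in 'imagesTrain/', binary masks in 'masksTrain/' (1 = infection, 0 = background).
classNames = ["infection", "background"];
labelIDs   = [1, 0];

% Example offline preprocessing of one slice: resize to 256x256 as described above.
I = imread('imagesRaw/slice_001.png');      % hypothetical file name
I = imresize(I, [256 256]);
imwrite(I, 'imagesTrain/slice_001.png');

% Datastores pairing images with their pixel-level labels for training.
imds    = imageDatastore('imagesTrain');
pxds    = pixelLabelDatastore('masksTrain', classNames, labelIDs);
trainDS = pixelLabelImageDatastore(imds, pxds);
```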
4. Proposed Method
Among the hyper-parameters of the U-Net architecture, the initial learning rate has been observed to have a significant impact on network accuracy [10]. During training, a scheduling strategy is employed to gradually decrease the learning rate from its initial value, which in turn helps to minimize the loss function. Two hyper-parameters govern this schedule: the learning rate drop factor and the learning rate drop period. The former is a value between 0 and 1 that determines how strongly the learning rate is reduced at each drop, while the latter is the number of epochs after which the learning rate is decreased [11].
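In MATLAB's Deep Learning Toolbox, which the paper uses for its implementation, these two hyper-parameters correspond to the LearnRateDropFactor and LearnRateDropPeriod options of a piecewise learning-rate schedule. The sketch below only shows where they enter the training configuration; the numeric values are placeholders, not the paper's settings.

```matlab
% Piecewise schedule sketch: the learning rate is multiplied by
% LearnRateDropFactor once every LearnRateDropPeriod epochs (placeholder values).
opts = trainingOptions('sgdm', ...
    'InitialLearnRate',    1e-3, ...         % starting learning rate
    'LearnRateSchedule',   'piecewise', ...  % decay in steps rather than continuously
    'LearnRateDropFactor', 0.5, ...          % multiply the rate by 0.5 at each drop
    'LearnRateDropPeriod', 10, ...           % drop every 10 epochs
    'MaxEpochs',           80);
```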
As U-Net is a highly nonlinear network, the use of classical mathematical optimization algorithms increases the risk of convergence to local optima. Hence, metaheuristic algorithms can be employed to obtain solutions closer to the global optimum, at the cost of execution time [12]. Selecting and updating the learning rate is essential. A learning rate that is too small slows convergence excessively and can leave the network trapped in local minima, whereas a learning rate that is too large degrades performance, since the network is likely to overshoot the best solution to the problem [13]. The initialization of the learning rate is also crucial, since different starting points lead to different optimization paths and thus play a key role in reaching local or global minima [14]. Equation (1) expresses the role of the learning rate, $\eta$, in network training:
$$\theta_{t+1} = \theta_t - \eta \, \nabla_{\theta} L(\theta_t) \qquad (1)$$
where $\theta$ denotes the parameters that are adjusted to minimize the loss function $L$; in this context, $\theta$ can be taken to be the network weights [15]. Scheduling the learning rate can improve the performance of stochastic gradient descent when updating the weights. An optimal learning rate reduces the number of stochastic gradient descent iterations and, thus, the computational burden [16].
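For intuition, the tiny sketch below applies the update of Equation (1) to a toy quadratic loss; it is only a didactic illustration of how the learning rate scales each step and is not part of the paper's pipeline.

```matlab
% Toy illustration of Eq. (1): minimize L(theta) = (theta - 3)^2 by gradient descent.
eta   = 0.1;                      % learning rate
theta = 0;                        % initial parameter value
for t = 1:50
    grad  = 2 * (theta - 3);      % gradient of the loss at the current theta
    theta = theta - eta * grad;   % Eq. (1): theta <- theta - eta * dL/dtheta
end
disp(theta)                       % converges towards the minimizer theta = 3
```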
This paper adopted the grasshopper optimization algorithm (GOA) to optimize the learning rate hyper-parameter in U-Net training to enhance image segmentation accuracy. A visual representation of the step-by-step process used in our proposed method can be seen in Figure 2.
Figure 2. Diagram of the proposed method.
5. GOA
The GOA was introduced by Saremi et al. (2017) [18]. It is a population-based metaheuristic algorithm that exploits swarm intelligence and is inspired by the behavior of grasshopper swarms.
Exploration and exploitation are the two main features of metaheuristic algorithms, and the configuration of a metaheuristic algorithm is determined by the interplay of these two features. Saremi et al. [18] argued that the grasshopper lifecycle intrinsically exhibits both features: immature grasshoppers (nymphs) move softly and continuously and play the exploitation role, while mature grasshoppers move stochastically in abrupt jumps and play the exploration role. Because these two behaviors coexist in grasshopper swarms, an efficient model of grasshopper behavior yields a relatively powerful optimization algorithm. The GOA update for the position of each grasshopper is written as:
$$X_i^d = c\left(\sum_{\substack{j=1 \\ j\neq i}}^{N} c\,\frac{ub_d - lb_d}{2}\, s\!\left(\left|x_j^d - x_i^d\right|\right)\frac{x_j - x_i}{d_{ij}}\right) + \hat{T}_d \qquad (2)$$

where $ub_d$ and $lb_d$ are the upper and lower bounds of the solution space in dimension $d$, $N$ is the number of grasshoppers, $d_{ij} = \left|x_j - x_i\right|$ is the distance between grasshoppers $i$ and $j$, $\hat{T}_d$ denotes the best solution found up to the current iteration in dimension $d$, and $c$ is a decreasing coefficient.
The social force function is defined as:
$$s(r) = f\,e^{-r/l} - e^{-r} \qquad (3)$$

where $r$ is the distance between two grasshoppers (the function input), $f$ is the intensity of attraction, and $l$ is the attractive length scale. The function $s$ models the social interactions (attraction and repulsion) among grasshoppers, and its behavior depends on the values of $f$ and $l$.
The adaptive parameter $c$ appears twice in Equation (2). The outer $c$ is very similar to the inertia weight $w$ in particle swarm optimization (PSO) and reduces the movements of grasshoppers around the target [17-19]; in other words, it moderates the exploration and exploitation of the swarm around the target. The inner $c$ shrinks the attraction, comfort, and repulsion zones between grasshoppers: through the term $c\,\frac{ub_d - lb_d}{2}$ in Equation (2), it linearly reduces the space that the grasshoppers explore and exploit as the iterations proceed.
Figure 3 depicts the GOA procedure.
Figure 3. GOA flowchart.
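To make Equations (2) and (3) concrete, the sketch below implements a plain GOA loop for a generic objective function in MATLAB. The constants f, l, c_max, and c_min follow the commonly used settings of Saremi et al. [18]; the random initialization, bound clipping, and fixed iteration budget are simplifying assumptions, not the authors' exact implementation.

```matlab
function [bestX, bestCost] = goaSketch(costFcn, dim, lb, ub, N, maxIter)
% Minimal GOA sketch based on Eqs. (2)-(3). lb and ub are 1-by-dim bound vectors.
f = 0.5;  l = 1.5;                          % social-force constants (common defaults)
cMax = 1; cMin = 1e-5;                      % limits of the decreasing coefficient c
s = @(r) f*exp(-r/l) - exp(-r);             % Eq. (3): attraction/repulsion strength

X    = lb + rand(N, dim) .* (ub - lb);      % random initial swarm inside [lb, ub]
cost = arrayfun(@(i) costFcn(X(i,:)), (1:N)');
[bestCost, idx] = min(cost);  bestX = X(idx, :);

for t = 1:maxIter
    c = cMax - t*(cMax - cMin)/maxIter;     % linearly decreasing coefficient
    Xnew = X;
    for i = 1:N
        social = zeros(1, dim);
        for j = [1:i-1, i+1:N]              % sum over all other grasshoppers
            dij  = norm(X(j,:) - X(i,:)) + eps;           % distance d_ij
            unit = (X(j,:) - X(i,:)) / dij;               % (x_j - x_i) / d_ij
            social = social + c*((ub - lb)/2) .* s(abs(X(j,:) - X(i,:))) .* unit;
        end
        Xnew(i,:) = min(max(c*social + bestX, lb), ub);   % Eq. (2), clipped to bounds
    end
    X    = Xnew;
    cost = arrayfun(@(i) costFcn(X(i,:)), (1:N)');
    [minCost, idx] = min(cost);
    if minCost < bestCost, bestCost = minCost; bestX = X(idx, :); end
end
end
```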
6. Results
When initializing a neural network and selecting its hyper-parameters, two approaches are typically taken. In the first, values are assigned based on empirical knowledge or prior experience with similar models. The second involves using a metaheuristic algorithm to search for optimal values within a range that, based on experience, is more likely to lead to network convergence.
In the first approach, the initial learning rate, learning rate drop factor, and learning rate drop period were empirically set to 0.003, 0.4, and 8, respectively. In the proposed method, in contrast, these values were determined by the GOA. In both scenarios, the number of epochs was set to 80.
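With the baseline values just given (initial rate 0.003, drop factor 0.4, drop period 8), the piecewise schedule described in Section 4 can be written out explicitly; the short sketch below is a worked illustration of that schedule over the 80 epochs.

```matlab
% Effective learning rate per epoch under the baseline piecewise schedule:
% 0.003 for epochs 1-8, 0.003*0.4 = 0.0012 for epochs 9-16, 0.00048 for 17-24, ...
lr0 = 0.003;  dropFactor = 0.4;  dropPeriod = 8;  numEpochs = 80;
epochs = 1:numEpochs;
lr = lr0 * dropFactor .^ floor((epochs - 1) / dropPeriod);
disp(lr([1 8 9 16 17]))   % 0.0030  0.0030  0.0012  0.0012  0.00048
```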
Figure 4 shows the values proposed by the GOA for the three hyper-parameters and the best cost value in the last ten iterations of the algorithm. It should be noted that 20 grasshoppers and 30 iterations were employed in the GOA.
Figure 4. Values proposed by the GOA.
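One plausible shape for the cost function that the GOA minimizes in this setting is sketched below: each candidate vector holds the three hyper-parameters, a U-Net is trained with them, and the cost is one minus the validation accuracy. The datastore names, search bounds, single-channel input size, and use of unetLayers/evaluateSemanticSegmentation are assumptions made for illustration; the paper does not list its exact implementation.

```matlab
% Hypothetical GOA fitness function, x = [initLR, dropFactor, dropPeriod].
% trainDS, valImds (validation images), and pxdsVal (validation masks) are
% assumed to be prepared as in Section 3.
function cost = unetFitness(x, trainDS, valImds, pxdsVal)
    lgraph = unetLayers([256 256 1], 2);           % U-Net for 256x256 slices, 2 classes
    opts = trainingOptions('sgdm', ...
        'InitialLearnRate',    x(1), ...
        'LearnRateSchedule',   'piecewise', ...
        'LearnRateDropFactor', x(2), ...
        'LearnRateDropPeriod', round(x(3)), ...    % the drop period must be an integer
        'MaxEpochs',           80, ...
        'Verbose',             false);
    net = trainNetwork(trainDS, lgraph, opts);

    pxdsPred = semanticseg(valImds, net, 'WriteLocation', tempdir);
    metrics  = evaluateSemanticSegmentation(pxdsPred, pxdsVal, 'Verbose', false);
    cost = 1 - metrics.DataSetMetrics.GlobalAccuracy;   % the GOA minimizes this value
end

% Example call with 20 grasshoppers and 30 iterations over assumed search bounds:
% [bestHP, bestCost] = goaSketch(@(x) unetFitness(x, trainDS, valImds, pxdsVal), ...
%                                3, [1e-4 0.1 2], [1e-2 0.9 20], 20, 30);
```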
The normalized confusion matrix was calculated, as shown in Figure 5. Two classes were employed in network training: one label marking the COVID-19 damaged areas and one representing the background.
Figure 5. a) U-Net confusion matrix, b) Optimized U-Net confusion matrix.
Table 1 provides several tested samples and the results of base and optimized segmentation.
6.1. Evaluation Results
Performance was evaluated using the true positive, true negative, false positive, and false negative parameters:
TP: Correct classification of pixels corresponding to damaged areas of the lung,
TN: Correct classification of pixels in background regions,
FP: Pixels corresponding to background regions incorrectly classified as damaged regions,
FN: Pixels corresponding to damaged areas of the lung that are incorrectly classified as background regions.
In a CT image of the chest, the positive class refers to the damaged lung areas, while the negative class corresponds to the background.
Table 1. Base and optimized CT image segmentation
Ground Truth | Optimized U-Net | U-Net | Input Image
[Image rows: sample CT slices shown alongside their base and optimized segmentation masks]
Equations (4)-(7) define the evaluation criteria [20-23]:

$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \qquad (4)$$

$$\text{Precision} = \frac{TP}{TP + FP} \qquad (5)$$

$$\text{Recall} = \frac{TP}{TP + FN} \qquad (6)$$

$$\text{F1-Score} = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}} \qquad (7)$$
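The short sketch below computes these four criteria directly from pixel-level counts, as a convenience for checking the figures in Table 2; the counts shown are arbitrary placeholders, not the paper's numbers.

```matlab
% Compute Eqs. (4)-(7) from pixel-level counts (arbitrary placeholder values).
TP = 100;  TN = 100;  FP = 10;  FN = 10;   % replace with confusion-matrix counts
accuracy  = (TP + TN) / (TP + TN + FP + FN);
precision = TP / (TP + FP);
recall    = TP / (TP + FN);
f1score   = 2 * precision * recall / (precision + recall);
fprintf('Acc %.2f%%  Prec %.2f%%  Rec %.2f%%  F1 %.2f%%\n', ...
        100*[accuracy, precision, recall, f1score]);
```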
Table 2 compares the base and optimized U-Net models based on the confusion matrix and evaluation criteria.
Table 2. Evaluation results (%)
Evaluation Criteria | U-Net | Optimized U-Net
Accuracy | 90.61 | 92.84
Precision | 98.74 | 99.11
Recall | 82.26 | 86.45
F1-Score | 89.74 | 92.34
7. Discussion
As a result of the optimization conducted in this research, the overall accuracy of the network for CT image segmentation increased by 2.23%. Moreover, the accuracy in detecting the damaged areas (the recall) improved by 4.19%. The optimization method discussed here can be applied to any deep learning model during training. The use of metaheuristic algorithms for solving nonlinear problems is a widely accepted practice, and in this research the GOA was employed to optimize the learning rate of the network. In a study closely related to the present work, Kumar and Parthasarathy enhanced the segmentation accuracy of brain tumor MRI images using a metaheuristic known as the Adaptive Search Coyote Optimization Algorithm (AS-COA) [24]. Another study, carried out by Popat et al., demonstrated that the accuracy of a neural network for retinal blood vessel segmentation can be improved by optimizing its parameters with a genetic algorithm [25]. Nevertheless, the resulting increase in network training time cannot be overlooked. Evaluating the impact of optimizing other critical hyper-parameters during training, such as the batch size, is left for future investigation.
8. Conclusion
This paper implemented the GOA to optimize the U-Net learning rate and enhance the segmentation accuracy of chest CT images, enabling accurate detection of COVID-19 infected areas within the lungs. Although iterative metaheuristic algorithms lengthen training, they are very useful for network optimization when large databases are not available for network training and improved accuracy is preferred over time savings. When optimizing hyper-parameters in such models, a larger grasshopper population and a larger number of iterations should be used to obtain a solution closer to the optimum. In other words, this optimization approach is most useful for training networks with smaller databases and low-resolution images.
Acknowledgement
This article is derived from the master's thesis of Alireza Mehravin at Damghan University, Damghan, Iran.
References
[1] Ronneberger, O., Fischer, P., & Brox, T. (2015). “U-NET: Convolutional Networks for Biomedical Image Segmentation”. arXiv (Cornell University). https://doi.org/10.48550/arxiv.1505.04597.
[2] Shamim, S., Awan, M. J., Zain, A. M., Naseem, U., Mohammed, M. A., & Garcia-Zapirain, B. (2022). “Automatic COVID-19 Lung Infection Segmentation through Modified Unet Model”. Journal of Healthcare Engineering, 2022, 1–13. https://doi.org/10.1155/2022/6566982.
[3] Saood, A., & Hatem, I. (2021). “COVID-19 lung CT image segmentation using deep learning methods: U-Net versus SegNet”. BMC Medical Imaging, 21(1). https://doi.org/10.1186/s12880-020-00529-5.
[4] Kalane, P., Patil, S. D., Patil, B. A., & Sharma, D. P. (2021). “Automatic detection of COVID-19 disease using U-Net architecture based fully convolutional network”. Biomedical Signal Processing and Control, 67, 102518. https://doi.org/10.1016/j.bspc.2021.102518.
[5] Raj, A. N. J., Zhu, H., Khan, A., Zhuang, Z., Yang, Z., Mahesh, V. G. V., & Karthik, G. (2021). “ADID-UNET—a segmentation model for COVID-19 infection from lung CT scans”. PeerJ Computer Science, 7, e349. https://doi.org/10.7717/peerj-cs.349.
[6] Kaur, S., Kumar, Y., Koul, A., & Kamboj, S. K. (2022). “A Systematic Review on Metaheuristic Optimization Techniques for Feature Selections in Disease Diagnosis: Open Issues and Challenges”. Archives of Computational Methods in Engineering, 30(3), 1863–1895. https://doi.org/10.1007/s11831-022-09853-1.
[7] Antoniou, A., & Lu, W. (2021). Practical Optimization: Algorithms and engineering applications. https://doi.org/10.1007/978-1-0716-0843-2
[8] Wong, W., & Ming, C. I. (2019). “A Review on Metaheuristic Algorithms: Recent Trends, Benchmarking and Applications”. 2019 7th International Conference on Smart Computing & Communications (ICSCC). https://doi.org/10.1109/icscc.2019.8843624.
[9] Yang, D., Martinez, C., Visuña, L., Khandhar, H. M., Bhatt, C., & Carretero, J. (2021). “Detection and analysis of COVID-19 in medical images using deep learning techniques”. Scientific Reports, 11(1). https://doi.org/10.1038/s41598-021-99015-3.
[10] Elaziz, M. A., Dahou, A., Abualigah, L., Yu, L., Alshinwan, M., Khasawneh, A. M., & Lu, S. (2021). “Advanced metaheuristic optimization techniques in applications of deep neural networks: a review”. Neural Computing and Applications, 33(21),14079–14099. https://doi.org/10.1007/s00521-021-05960-5.
[11] Takase, T., Oyama, S., & Kurihara, M. (2018). “Effective neural network training with adaptive learning rate based on training loss”. Neural Networks, 101, 68–78. https://doi.org/10.1016/j.neunet.2018.01.016.
[12] Nematzadeh, S., Kiani, F., Torkamanian-Afshar, M., & Aydin, N. (2022). “Tuning hyperparameters of machine learning algorithms and deep neural networks using metaheuristics: A bioinformatics study on biomedical and biological cases”. Computational Biology and Chemistry, 97, 107619. https://doi.org/10.1016/j.compbiolchem.2021.107619.
[13] Kaveh, A., & Hamedani, K. B. (2022). “Advanced metaheuristic algorithms and their applications in structural optimization”. In Springer eBooks. https://doi.org/10.1007/978-3-031-13429-6.
[14] Li, C., Jiang, J., Zhao, Y., Li, R., Wang, E., Zhang, X., & Zhao, K. (2021). “Genetic Algorithm based hyper-parameters optimization for transfer Convolutional Neural Network”. arXiv (Cornell University). https://doi.org/10.48550/arxiv.2103.03875.
[15] Thavasimani, K., & Srinath, N. K. (2022). “Hyperparameter optimization using custom genetic algorithm for classification of benign and malicious traffic on internet of things–23 dataset”. International Journal of Power Electronics and Drive Systems, 12(4), 4031. https://doi.org/10.11591/ijece.v12i4.pp4031-4041.
[16] Gaspar, A., Oliva, D., Cuevas, E., Zaldivar, D., Pérez, M. A., & Pajares, G. (2021). “Hyperparameter optimization in a convolutional neural network using metaheuristic algorithms”. In Studies in computational intelligence (pp. 37–59). https://doi.org/10.1007/978-3-030-70542-8_2.
[17] Okwu, M. O., & Tartibu, L. K. (2021). “Metaheuristic Optimization: Nature-Inspired algorithms swarm and computational intelligence, theory and applications”. In Studies in computational intelligence. https://doi.org/10.1007/978-3-030-61111-8.
[18] Saremi, S., Mirjalili, S., & Lewis, A. (2017). “Grasshopper Optimisation Algorithm: Theory and application”. Advances in Engineering Software, 105, 30–47. https://doi.org/10.1016/j.advengsoft.2017.01.004.
[19] El-Shorbagy, M. A., & El-Refaey, A. M. (2020). “Hybridization of Grasshopper optimization algorithm with genetic algorithm for solving system of Non-Linear equations”. IEEE Access, 8, 220944–220961. https://doi.org/10.1109/access.2020.3043029.
[20] Fazli, M., & Faraji, S. (2023). A survey of meta-heuristic methods for optimization problems. Journal of Optimization in Soft Computing, 1(1), Article 14020615783207. https://doi.org/10.82553/josc.2023.14020615783207.
[21] Wang, R., Lei, T., Cui, R., Zhang, B., Meng, H., & Nandi, A. K. (2022). “Medical image segmentation using deep learning: A survey”. IET Image Processing, 16(5), 1243–1267. https://doi.org/10.1049/ipr2.12419.
[22] Polat, H. (2022). “Multi-task semantic segmentation of CT images for COVID-19 infections using DeepLabV3+ based on dilated residual network”. Physical and Engineering Sciences in Medicine, 45(2), 443–455. https://doi.org/10.1007/s13246-022-01110-w.
[23] Kiyoumarsi, P., Kiyoumarsi, F., Zamani Dehkordi, B., & Karbasiyoun, M. (2024). A feature selection method on gene expression microarray data for cancer classification. Journal of Optimization in Soft Computing, 2(3), Article 140308101189068. https://doi.org/10.82553/josc.2024.140308101189068.
[24] Kumar, G. M., & Parthasarathy, E. (2023). “Development of an enhanced U-Net model for brain tumor segmentation with optimized architecture”. Biomedical Signal Processing and Control, 81, 104427. https://doi.org/10.1016/j.bspc.2022.104427.
[25] Popat, V., Mahdinejad, M., Dalmau, O., Naredo, E., & Ryan, C. (2020). “GA-based U-Net Architecture Optimization Applied to Retina Blood Vessel Segmentation”. Proceedings of the 12th International Joint Conference on Computational Intelligence. SCITEPRESS - Science and Technology Publications. https://doi.org/10.5220/001011220192.