Diagnosis of Brain Tumor Position in Magnetic Resonance Images by Combining Bounding Box Algorithms, Artificial Bee Colonies and Grow Cut
Mahdi Shafiof1, Neda Behzadfar1,2*
1Department of Electrical Engineering, Najafabad Branch, Islamic Azad University, Najafabad, Iran
2Digital Processing and Machine Vision Research Center, Najafabad Branch, Islamic Azad University, Najafabad, Iran
*Corresponding Author: n.behzadfar@pel.iaun.ac.ir
Abstract: Tumor detection and delineation in magnetic resonance imaging (MRI) is an important task, but when performed manually it is very time consuming and may be inaccurate. Moreover, the appearance of tumor tissue varies from patient to patient, and there are similarities between tumor tissue and normal brain tissue. In this paper, we present an automated method for detecting and displaying brain tumors in MRI images. Images of patients with glioblastoma were used after preprocessing and removal of areas that carry no useful information (such as the eyes, scalp, etc.). We use a bounding box algorithm to obtain an initial estimate of the tumor region, an artificial bee colony algorithm to determine an initial point inside the tumor area, and finally the grow cut algorithm to extract the exact boundary of the tumor. Our method is automatic and largely independent of the operator. Comparison of the results for 12 patients with other similar methods indicates the high accuracy of the proposed method (about 98%).
Keywords: Glioblastoma tumor, brain tumor diagnosis, bounding box algorithm, artificial bee colony algorithm, grow cut algorithm
1. Introduction
The diagnosis of brain tumors in magnetic resonance images is very important because it provides useful information about anatomical structures as well as abnormal brain tissue. Currently, in clinical practice, the tumor area in brain MRI images is delineated manually; when the amount of data is large, this approach is very time consuming and tedious, and it leads to errors [1,2].
Although many methods have been proposed for the automatic diagnosis of brain tumors, automated segmentation is still very challenging because of the variation in the shape, position, tissue type and size of tumors. Since the available techniques are not feasible for certain types of tumors, there is a need for more accurate and more broadly applicable methods [3,4]. Moreover, most existing methods apply only a single algorithm after their preprocessing steps to locate the tumor, with the result that they cannot detect all cases [5,6].
In the first group of methods, evolutionary algorithms are used for tumor detection. For example, [7] combines a genetic algorithm with morphological operations: a T2-weighted image is taken as input and decomposed down to its smallest constituents (pixels), noise is removed by filtering, and the approximate tumor area is then detected by the genetic algorithm. These steps yield a segmentation accuracy of 91%. Laishram and colleagues [8] proposed an automated approach that recognizes and segments the tumor without user intervention. The skull and background are first removed in a preprocessing step, the image is then denoised by filtering, and the tumor is finally revealed in the image using a particle swarm optimization (PSO) based algorithm. They used hospital data for their study and achieved an accuracy of 95%.
The second category of articles detects tumors with a fuzzy classification approach [9,10]. These studies use structural features and quasi-statistical analyses such as histograms [11]. For example, one of these articles combines the FCM and PCM methods [12]: the resulting FPCM is used to detect the tumor based on the histogram, which is faster than the classic FPCM. In this method, the extracted brain is divided into five classes: GM, WM, CSF, tumor and background.
The results of the histogram analysis are used to initialize the class centers, which are taken from the mean gray levels of the GM, WM and CSF regions. The background is set to zero, and the highest brightness is selected for the tumor class. Also, to avoid classification errors, undesirable voxels that appear in the tumor class are corrected by morphological operations. A fuzzy method is then used to detect the tumor in the magnetic resonance image.
The third category consists of automatic tumor detection techniques based on image processing operations such as thresholding; examples are the Ouadfel method, the Grimthon method, and the method described in [13]. In the methods of Ouadfel and Grimthon, the pixels containing the tumor are selected after choosing an appropriate threshold: Grimthon removes the pixels of the image that contain the tumor and passes the remaining pixels through a filtering stage [14], while Ouadfel multiplies the tumor pixels by the original image to emphasize the tumor area [15]. The work in [13] uses the grow cut algorithm. In this method, a number of pixels that belong to the tumor area are first identified. These pixels are then compared with the pixels of adjacent regions, and if the conditions for tumor growth are favorable, the neighbors are marked to be moved to the tumor region once the process is completed. To decide whether a pixel is transferred to the region, either the average brightness can be computed or the maximum brightness intensity can be compared.
2. Proposed method
The previous section briefly reviewed the work carried out in this area, but most of the existing methods are still not accurate, fast or robust enough. Some of them do not use all the MRI modalities and therefore do not process all the available information; some are operator dependent; and some can only be used for certain types of tumors. In this paper, we propose a new method for tumor detection in multi-modal magnetic resonance images. The multi-modal MRI images include the T1 (longitudinal relaxation) image, the T2 (transverse relaxation) image, the Flair image, and the T1-post image. Unlike the aforementioned methods, which use only a single image modality as input, we combine all four modalities so that all the features of the MRI images are used and the detection process becomes more precise. In the first step, the preprocessing stages are applied and areas with no useful information are eliminated; in the second step, a bounding box algorithm based on the anatomical symmetry of the brain provides a preliminary estimate of the position of the tumor. In the third step, an artificial bee colony algorithm [16], an evolutionary optimization method, extracts a point of the tumor region that is brighter than the other areas as the optimal output, and finally the grow cut algorithm delineates and enlarges the tumor area. Fig. 1 shows an overview of the proposed method. The rest of the paper is organized as follows: Section 2 describes the proposed method together with the materials used in the research, Section 3 presents the results, and Section 4 compares these results with other methods and concludes the paper.
The purpose of this paper is to provide a combined method for determining the exact position of brain tumors in multi-modal magnetic resonance images. The images tested as inputs came from 12 patients (10 men and 2 women) who had a contrast-enhancing area in their brain tumors; the patients were aged 36 to 66 years, and the images were acquired under a 3 Tesla magnetic field as image matrices with FOV = 512×512. Table 1 shows the acquisition parameters of each image modality.
Each modality of the magnetic resonance image represents one of the features of the tumor region, and combining them allows these areas to be identified more reliably and thus improves the treatment process. In T1-post-contrast images the tumor is highlighted by the injected contrast agent, whereas in the T1 image the tumor is dark. The cerebrospinal fluid is better visible in T2 images, and the edema around the tumor is more distinct in Flair images [17,18]. Fig. 2 shows each of these modalities. Combining the features of these four modalities makes it possible to remove the background and the disturbing areas, including the skull, eyes and cerebrospinal fluid, and to increase the clarity of the tumor. This is accomplished by taking the difference between the T1 and T1-Post modalities.
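As a simple illustration of this differencing step, the sketch below (in Python, assuming the co-registered volumes are available as NumPy arrays with comparable intensity scales; the function name is ours, not the paper's) highlights the contrast-enhancing voxels:

```python
import numpy as np

def enhancement_map(t1, t1_post):
    """Highlight contrast-enhancing regions by differencing T1-Post and T1.

    Both inputs are assumed to be co-registered arrays with comparable
    intensity scales (e.g. after histogram matching)."""
    t1 = t1.astype(np.float32)
    t1_post = t1_post.astype(np.float32)
    diff = np.clip(t1_post - t1, 0, None)   # keep only positive change (enhancement)
    if diff.max() > 0:
        diff /= diff.max()                  # normalize to [0, 1] for display
    return diff
```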
Fig. 1. Overview of the proposed tumor site detection process
Table 1. Acquisition parameters of the MRI image modalities investigated
Modality | TI | TE | TR |
T1-Weighted | 1238 (ms) | 6 (ms) | 3000 (ms) |
T1-Post | 1238 (ms) | 6 (ms) | 3000 (ms) |
Flair | 2250 (ms) | 120 (ms) | 1000 (ms) |
T2-Weighted | -- | 103 (ms) | 3000 (ms) |
2.1. Applying Preprocessing Steps
The preprocessing process consists of two main steps. The first is a matching (registration) step, which aligns the modalities so that all the attributes of the image can be used together; the second is the removal of disturbing areas (including skull removal, eye removal, CSF removal, etc.), as follows:
2.1.1. Matching operations
At this stage, all selected images are registered to the standard T1-weighted MNI template: the MNI image with T1 weighting is taken as the reference and the other images are matched to it. This operation is implemented with the FLIRT tool of the FSL software package [19].
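A minimal sketch of this registration step is shown below, assuming FSL is installed and using placeholder file names; the paper does not state the exact FLIRT parameters, so the 12-degree-of-freedom affine setting here is only an assumption:

```python
import subprocess

def register_to_mni(moving_image, reference="MNI152_T1_2mm.nii.gz",
                    out_image="registered.nii.gz", out_matrix="to_mni.mat"):
    """Affine-register one MRI modality to the MNI T1 template with FSL FLIRT."""
    cmd = [
        "flirt",
        "-in", moving_image,    # modality to be aligned (T2, FLAIR, T1-Post, ...)
        "-ref", reference,      # MNI T1-weighted template used as the reference
        "-out", out_image,      # resampled, registered output volume
        "-omat", out_matrix,    # estimated affine transformation matrix
        "-dof", "12",           # full affine registration (assumption)
    ]
    subprocess.run(cmd, check=True)

# Example: align a FLAIR volume to the MNI template
# register_to_mni("patient01_flair.nii.gz")
```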
2.1.2. Skull Removal Operations
Fig. 2. Multimodal images of a patient's brain with a GBM tumor. A) T2 image. B) T1 image. C) T1-Post image. D) Flair-weighted image
At this stage, an image with T2 weighting is used as the input. A 3×3 low-pass filter is applied to smooth the sharp edges, in particular the CSF edges. The image is then binarized by selecting an appropriate threshold using the Ridler method [20]. With this method, an optimal threshold for separating the background from the original image is obtained and is equal to:
$T_{t+1} = \dfrac{\mu_b(t) + \mu_o(t)}{2}$ (1)
where the mean of the background pixels at stage t is:
$\mu_b(t) = \dfrac{\sum_{(x,y):\, f(x,y) \le T_t} f(x,y)}{\left|\{(x,y):\, f(x,y) \le T_t\}\right|}$ (2)
and the mean of the object (foreground) pixels at the same stage is:
$\mu_o(t) = \dfrac{\sum_{(x,y):\, f(x,y) > T_t} f(x,y)}{\left|\{(x,y):\, f(x,y) > T_t\}\right|}$ (3)
At the beginning of this method, it is assumed that we have no knowledge of the exact location of the objects in the image, so the corners of the image are considered as background and the other points as the main image. At each step, the obtained threshold is compared with the threshold of the previous step; if the two are equal, the current value is taken as the final threshold. Fig. 4 shows the removal of the image background from the original image by the Ridler method. Next, by applying the erosion morphology operation and selecting the largest connected component, the desired region can be extracted. This step is performed in Matlab in such a way that the edges in the image remain smooth and elliptical rather than sharp; a disk-shaped structuring element is used, for instance:
$\begin{bmatrix} 0 & 1 & 0 \\ 1 & 1 & 1 \\ 0 & 1 & 0 \end{bmatrix}$
Fig. 4. Skull removal steps. A) Original image. B) Binary image. C) Applying morphology and obtaining the final mask. D) Image after removal of the skull.
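The following sketch illustrates these skull-stripping steps (iterative Ridler thresholding as in Eqs. (1)-(3), followed by erosion and selection of the largest connected component), using NumPy/SciPy instead of Matlab; the initial threshold guess, structuring element and iteration count are illustrative assumptions rather than the exact values used in the paper:

```python
import numpy as np
from scipy import ndimage

def ridler_threshold(img, eps=0.5):
    """Iterative Ridler/ISODATA threshold: T_{t+1} = (mu_background + mu_object) / 2."""
    t = img.mean()                                # simple initial guess (the paper starts
                                                  # from the image corners as background)
    while True:
        mu_b = img[img <= t].mean()               # Eq. (2): mean of background pixels
        mu_o = img[img > t].mean()                # Eq. (3): mean of object pixels
        t_new = 0.5 * (mu_b + mu_o)               # Eq. (1)
        if abs(t_new - t) < eps:                  # stop when the threshold stabilizes
            return t_new
        t = t_new

def skull_strip_mask(t2_slice):
    """Binarize a T2 slice, erode it and keep the largest component as the brain mask."""
    smoothed = ndimage.uniform_filter(t2_slice.astype(np.float32), size=3)  # 3x3 low-pass
    binary = smoothed > ridler_threshold(smoothed)
    selem = ndimage.generate_binary_structure(2, 1)                          # small disk-like element
    eroded = ndimage.binary_erosion(binary, structure=selem, iterations=2)
    labels, n = ndimage.label(eroded)
    if n == 0:
        return eroded
    sizes = ndimage.sum(eroded, labels, index=range(1, n + 1))               # component sizes
    return labels == (int(np.argmax(sizes)) + 1)
```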
2.1.3. Removal of CSF and ventricles
CSF and the ventricles also cause problems in detecting the tumor site. These areas are bright in the T2-weighted image but darker than other areas in the Flair-weighted image. Therefore, by intersecting the regions that are brighter than a threshold (extracted from histogram characteristics) in the T2-weighted image with the regions that are darker than a threshold in the Flair image, they can be identified and removed from the images.
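As an illustration, such a CSF/ventricle mask can be obtained by intersecting the two thresholded images; in this sketch the threshold values are placeholders, whereas the paper derives them from histogram characteristics:

```python
import numpy as np

def csf_mask(t2_img, flair_img, t2_thr, flair_thr):
    """CSF and ventricles: bright in T2 but dark in FLAIR."""
    bright_in_t2 = t2_img > t2_thr          # brighter than the T2 threshold
    dark_in_flair = flair_img < flair_thr   # darker than the FLAIR threshold
    return bright_in_t2 & dark_in_flair     # intersection marks CSF/ventricles

def remove_csf(img, t2_img, flair_img, t2_thr, flair_thr):
    """Zero out CSF/ventricle voxels in a co-registered image."""
    cleaned = img.copy()
    cleaned[csf_mask(t2_img, flair_img, t2_thr, flair_thr)] = 0
    return cleaned
```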
2.1.4. The final pre-processed image
After applying the steps outlined, the final image is obtained as shown in Fig. 5. In the final picture, the tumor areas are distinctly clear.
Fig. 5. A) The final preprocessed image. B) Malignant tumor areas in the Color Map image
The red spots in the color map are considered the target areas, which we extract in the following steps using the combination of three algorithms: the bounding box algorithm, the artificial bee colony algorithm and the grow cut algorithm for growing the tumor region.
2.2. Determining the initial tumor range using the bounding box algorithm
The two hemispheres of the human brain show a high degree of bilateral symmetry in the normal state. This symmetry is not complete and exact, but it is a good basis for establishing a correspondence between the two sides of the brain [21]. Accurate computation of the true symmetry axis is a time-consuming problem, but in our proposed method the geometric axis of the brain obtained after the preprocessing and skull removal steps is a precise approximation of it. Since finding the geometric axis of the brain is much simpler, faster and more accurate than finding the actual axis of skull symmetry, the importance of the preprocessing stage becomes even more evident.
2.2.1. Tumor area detection and feature extraction
In this stage we first find the probable tumor areas. We then compare these areas with the tumor characteristics in the images. If a consistent region is found, we consider it part of the tumor, because the tumor may also have grown on the other side or across the supposed symmetry line. After these steps, we scan the brain to determine the definite presence of the tumor. To do this, we need a series of divisions of the image and the definition of several parameters, as shown in Fig. 6.
Fig. 6. Splits performed on the input image
At the beginning of the proposed algorithm, the rectangle D is assumed to be the suspected area, and we examine whether the tumor lies inside it. The four unknown parameters Lx, Ux, Ly and Uy are found in two linear scans of the image. First, the possible values of Ly and Uy are found in a vertical sweep, and then, for these first values, the values of Lx and Ux are found in a horizontal sweep over the two half images L and R. In this way, we obtain one or more candidate regions for the first values of Ly and Uy. In the next step, the properties of each candidate area are examined to determine whether it contains a tumor. If not, the candidate regions for the next values of Ly and Uy are obtained. This process continues until all pixels have been scanned. In these linear scans, the proposed algorithm computes a vertical and a horizontal objective function. The two functions are similar, except that the horizontal objective function is applied to the transpose of the image, so only the vertical objective function is explained here. T(l) and B(l) in Fig. 6 denote, respectively, the regions above and below the dotted line at distance l from the top of the image. The right and left half images of the brain have the same height h and width w:
$T(l) = \{(x, y) : 1 \le x \le w,\ 1 \le y \le l\}, \quad B(l) = \{(x, y) : 1 \le x \le w,\ l < y \le h\}$ (4)
We use the Bhattacharyya similarity coefficient for the objective function and define it as follows:
$E(l) = BC\!\left(h_L^{T(l)}, h_R^{T(l)}\right) - BC\!\left(h_L^{B(l)}, h_R^{B(l)}\right)$ (5)
Here $h_L^{T(l)}$ and $h_L^{B(l)}$ denote the normalized intensity histograms of image L inside the regions T(l) and B(l), and $h_R^{T(l)}$ and $h_R^{B(l)}$ are the corresponding intensity histograms of image R inside T(l) and B(l):
$BC(a, b) = \sum_{i} \sqrt{a(i)\, b(i)}$ (6)
BC denotes the Bhattacharyya coefficient between two normalized histograms a(i) and b(i), where i indexes the intensity bins of the histograms. The BC parameter measures the similarity between two normalized brightness histograms. Substituting Equation (6) into Equation (5) gives:
$E(l) = \sum_{i} \sqrt{h_L^{T(l)}(i)\, h_R^{T(l)}(i)} - \sum_{i} \sqrt{h_L^{B(l)}(i)\, h_R^{B(l)}(i)}$ (7)
When two normalized histograms are identical, the BC between them equals one; when they are completely dissimilar, the BC value is close to zero. It follows that when the upper regions of the two images are similar and the lower regions are not (or vice versa), E(l) takes its most extreme values. When there is asymmetry in one of these regions, the objective function, evaluated from 0 to h, first increases, then decreases, and then increases again. The positions where the function turns from increasing to decreasing and back again give Ly and Uy, the lower and upper boundaries of the suspected tumor region D. Repeating the same steps with the horizontal objective function yields Lx and Ux, the left and right boundaries of the suspected tumor area [22]. Fig. 7 shows the symmetry line, the initial detection range of the tumor and the plot of the objective function for the symmetric image, together with the tumor image in one of the slices.
The nature of this increase-decrease-increase pattern stems from the fact that the objective function for an image without a tumor is increasing as a whole, whereas in the presence of a tumor the value of the function drops over the tumor and then rises again after passing it. Fig. 8 shows the initial position of the detected tumor for 4 different patients.
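A minimal sketch of the vertical scan described above is given below, assuming the skull-stripped slice has already been split into left and right half images L and R along the symmetry axis and that intensities are scaled to [0, 1]; picking the extrema of E(l) as the turning points Ly and Uy is a simplification of the procedure in the text (the horizontal scan works the same way on the transposed images):

```python
import numpy as np

def bhattacharyya(a, b):
    """Bhattacharyya coefficient between two normalized histograms (Eq. 6)."""
    return float(np.sum(np.sqrt(a * b)))

def norm_hist(region, bins=64, value_range=(0.0, 1.0)):
    """Normalized intensity histogram of a region (intensities assumed in [0, 1])."""
    h, _ = np.histogram(region, bins=bins, range=value_range)
    s = h.sum()
    return h / s if s > 0 else h.astype(float)

def vertical_score(L, R, l):
    """E(l): similarity of the top parts minus similarity of the bottom parts (Eq. 5)."""
    top = bhattacharyya(norm_hist(L[:l]), norm_hist(R[:l]))
    bottom = bhattacharyya(norm_hist(L[l:]), norm_hist(R[l:]))
    return top - bottom

def vertical_bounds(L, R):
    """Scan every row position and return Ly, Uy as the turning points of E(l)."""
    h = L.shape[0]
    scores = np.array([vertical_score(L, R, l) for l in range(1, h)])
    ly = int(np.argmax(scores)) + 1   # strongest "top similar / bottom different" position
    uy = int(np.argmin(scores)) + 1   # strongest "top different / bottom similar" position
    return min(ly, uy), max(ly, uy)
```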
Fig. 7. Determining the position of the tumor with a bounding box algorithm
Fig. 8. Determining the position of the tumor in 4 different patients with the bounding box algorithm
2.3. Finding the initial point of tumor location with artificial bee colony algorithm
The artificial bee colony algorithm is inspired by the behavior of bees in nature, which always move toward positions with more favorable conditions. If we define the bright points of the image as the desirable food for the population, the bees can reach the tumor points (which are brighter than other areas). The method assumes a number of bees (we use 7 bees in this article) distributed randomly over the image. At each step, one of the bees moves randomly toward another bee. This move is based on the following relation:
$v_{ij} = x_{ij} + \varphi_{ij}\,(x_{ij} - x_{kj})$ (8)
where $v_{ij}$ is the new position of the bee, $x_{ij}$ is its previous position, $\varphi_{ij}$ is a random number between -1 and 1, and $(x_{ij} - x_{kj})$ is the distance between the bee and a randomly chosen neighboring bee $x_{kj}$. This hypothetical situation is shown in Fig. 9, where it is assumed that the bee marked with a different color moves randomly toward one of the adjacent bees.
At this point, three criteria determine whether such a random move contributes to finding the exact position of the tumor. These three criteria are:
2.3.1. Greedy selection criterion
If the new position of the bee is better than the previous one (the image intensity is higher), the bee stays in the new region; otherwise it returns to the previous region and one unit is added to its trial counter.
2.3.2. Trial criterion (trial counter)
This counter records the number of consecutive moves of a bee without improvement. If the trial counter becomes too high, the area is considered unpromising and is abandoned permanently.
2.3.3. Roulette wheel selection
A bee occupying a higher quality area (with more brightness) has a greater chance of being selected by the other bees. The fitness that governs the movement of the bees is calculated according to the following equation:
$fit_i = \begin{cases} \dfrac{1}{1 + f_i}, & f_i \ge 0 \\ 1 + \lvert f_i \rvert, & f_i < 0 \end{cases}$ (9)
where f_i is the objective value (the brightness) at the position of bee i. The fitness values are then normalized into selection probabilities according to the following equation:
$p_i = \dfrac{fit_i}{\sum_{n=1}^{N} fit_n}$ (10)
where N is the number of bees. In this way, the initial point (or points) with the highest brightness is detected and the bees accumulate in that area.
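The sketch below shows a simplified version of this search, using the pixel brightness as the food quality, 7 bees as in the text, and illustrative values for the iteration count and trial limit; the onlooker phase is a compact approximation of the roulette-wheel step of Eqs. (9)-(10):

```python
import numpy as np

def abc_brightest_point(img, box=None, n_bees=7, iters=200, trial_limit=10, seed=0):
    """Locate the brightest point with a simplified artificial bee colony search.

    `box` = (y0, y1, x0, x1) optionally restricts the search to the region found
    by the bounding box step (see Section 2.3.4)."""
    rng = np.random.default_rng(seed)
    y0, y1, x0, x1 = box if box is not None else (0, img.shape[0], 0, img.shape[1])
    lo, hi = np.array([y0, x0]), np.array([y1 - 1, x1 - 1])

    def random_pos():
        return np.array([rng.integers(y0, y1), rng.integers(x0, x1)])

    def brightness(p):                        # food quality = pixel intensity
        return float(img[p[0], p[1]])

    pos = [random_pos() for _ in range(n_bees)]
    trials = np.zeros(n_bees, dtype=int)

    for _ in range(iters):
        # Employed bees: move toward a random neighbor (Eq. 8) with greedy selection (2.3.1)
        for i in range(n_bees):
            k = int(rng.integers(n_bees))
            phi = rng.uniform(-1, 1, size=2)
            cand = np.clip(np.round(pos[i] + phi * (pos[i] - pos[k])).astype(int), lo, hi)
            if brightness(cand) > brightness(pos[i]):
                pos[i], trials[i] = cand, 0
            else:
                trials[i] += 1                # trial counter (2.3.2)
        # Onlooker bees: roulette-wheel preference for brighter sources (2.3.3)
        fits = np.array([brightness(p) for p in pos]) + 1e-9
        probs = fits / fits.sum()
        for _ in range(n_bees):
            j = int(rng.choice(n_bees, p=probs))
            phi = rng.uniform(-1, 1, size=2)
            k = int(rng.integers(n_bees))
            cand = np.clip(np.round(pos[j] + phi * (pos[j] - pos[k])).astype(int), lo, hi)
            if brightness(cand) > brightness(pos[j]):
                pos[j], trials[j] = cand, 0
        # Scout bees: abandon exhausted positions
        for i in range(n_bees):
            if trials[i] > trial_limit:
                pos[i], trials[i] = random_pos(), 0

    best = max(pos, key=brightness)
    return int(best[0]), int(best[1])
```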
Fig. 9. A) Initial placement of the bees in the search area. B) Random movement of one bee toward an adjacent bee. C) Placement of the bee in its new position
Fig. 10. A) Initial placement of the bees in the search area. B) Convergence of the bees to the tumor region
2.3.4. Combining algorithms
Having examined the tumor detection process, we now combine the two methods with the artificial bee colony algorithm. First, why is combining the methods necessary for tumor detection? Clearly, when the tumor lies exactly on the hypothetical midline of the brain or in a symmetric position, the bounding box algorithm alone cannot detect it. On the other hand, once the approximate location of the tumor is known, the search process of the bee colony algorithm is carried out with greater accuracy within that region.
Fig. 11. A) Search area without bounding box algorithm. B) Search area in the presence of a bounding box algorithm
2.4. Tumor growth
In Fig. 12, after identifying the initial point of the tumor, row E shows the extracted tumor region, displayed separately and enlarged. In order to grow the tumor region, at each stage we examine the pixels located in the neighborhood of the current tumor area. The brightness of a pixel that is added to the tumor area should be very similar to the tumor area and differ from the other areas. So, for each pixel, two conditions must be checked; if both are met, the pixel is added to the tumor area. The starting point for comparing other pixels is the point (or points) determined by the bee colony algorithm.
$\left| f(x, y) - \mu_t \right| \le std_t, \qquad \left| f(x, y) - \mu_b \right| > std_b$ (11)
In these equations, f(x, y) is the pixel intensity, µt and stdt are the mean and standard deviation of the intensity in the tumor region, and µb and stdb are the mean and standard deviation of the non-tumor region at stage t. This process continues until no new pixel is added to the tumor area. In this way, the entire tumor area is selected. The tumor area is then extracted from the original image and enlarged, as shown in row E of Fig. 12.
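A minimal sketch of this growing step is given below; the two tests correspond to the conditions of Eq. (11), and the weighting constant k is an assumption, since the paper does not state one explicitly:

```python
import numpy as np
from collections import deque

def grow_tumor(img, seeds, k=2.0):
    """Grow the tumor region from the seed point(s) found by the bee colony.

    A neighboring pixel is added when it is close to the tumor statistics and
    far from the non-tumor statistics (cf. Eq. 11); k is an illustrative weight."""
    mask = np.zeros(img.shape, dtype=bool)
    for s in seeds:
        mask[s] = True
    queue = deque(seeds)
    h, w = img.shape
    while queue:
        y, x = queue.popleft()
        mu_t, std_t = img[mask].mean(), img[mask].std() + 1e-6    # tumor statistics
        mu_b, std_b = img[~mask].mean(), img[~mask].std() + 1e-6  # non-tumor statistics
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):         # 4-connected neighbors
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                v = img[ny, nx]
                if abs(v - mu_t) <= k * std_t and abs(v - mu_b) > std_b:
                    mask[ny, nx] = True
                    queue.append((ny, nx))
    return mask

# Example: grow from the seed point found by the bee colony search
# tumor_mask = grow_tumor(preprocessed_slice, seeds=[(120, 95)])
```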
3. Results
Determining the initial position of the tumor and then identifying the bright points in the MR image causes the bee population to accumulate in that area. After identifying the tumor area with the artificial bee colony algorithm, the other areas are removed and the detected region is extracted, enlarged and displayed, as shown in Fig. 12.
Fig. 12. A) Preprocessed image. B) Initial position of the tumor. C) Initial placement of the bees in the suspected tumor range. D) Convergence of the bees to the exact position of the tumor. E) Extraction and magnification of the tumor
We evaluated the validity of the tumor detection and segmentation method using the validation criteria applied in most papers. These criteria include four parameters: the Jaccard similarity index (JSI), the DSS score, specificity and sensitivity. Validation is performed by comparing the results with manual segmentation by a radiologist. To evaluate the proposed method, the tumor area isolated manually by the radiologist is called Gt and the tumor obtained from the proposed algorithm is called Om. Comparing the two, four outcomes may occur:
True Positive (TP): where both Om and Gt mark the area as tumor.
True Negative (TN): where both Om and Gt mark the area as non-tumor.
False Negative (FN): where Gt marks the area as tumor but Om extracts it as non-tumor.
False Positive (FP): where Om extracts the area as tumor but the radiologist (Gt) does not recognize it as tumor.
Based on this categorization, the Jaccard similarity index and the DSS similarity score are defined as follows:
$JSI = \dfrac{TP}{TP + FP + FN}, \qquad DSS = \dfrac{2\,TP}{2\,TP + FP + FN}$ (12)
A Jaccard similarity index of 1 indicates complete agreement between the two sets Om and Gt, and a value of 0 indicates no overlap at all. However, these two criteria only express the degree of overlap between the tumor region detected by the algorithm and the region marked by the radiologist. Sensitivity measures how well the true positives are detected, and specificity measures how well the true negatives are correctly identified. These two criteria are defined as follows:
$Sensitivity = \dfrac{TP}{TP + FN}, \qquad Specificity = \dfrac{TN}{TN + FP}$ (13)
The results of the four validation criteria, namely the Jaccard similarity index, the DSS score, sensitivity and specificity, show the efficacy of the proposed method at the tumor diagnosis stage compared with the diagnosis of the radiologist. Table 2 shows the results of these four criteria for the 12 patients with glioblastoma.
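These four criteria can be computed directly from the binary masks Om and Gt; the sketch below uses the standard definitions of Eqs. (12)-(13):

```python
import numpy as np

def validation_metrics(om, gt):
    """JSI, DSS, sensitivity and specificity between algorithm (Om) and radiologist (Gt) masks."""
    om, gt = om.astype(bool), gt.astype(bool)
    tp = np.sum(om & gt)       # tumor in both
    tn = np.sum(~om & ~gt)     # non-tumor in both
    fp = np.sum(om & ~gt)      # detected by the algorithm but not marked by the radiologist
    fn = np.sum(~om & gt)      # marked by the radiologist but missed by the algorithm
    return {
        "JSI": tp / (tp + fp + fn),
        "DSS": 2 * tp / (2 * tp + fp + fn),
        "Sensitivity": tp / (tp + fn),
        "Specificity": tn / (tn + fp),
    }
```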
4. Conclusion
In this paper, a new method for tumor detection was presented in which MRI images of 12 patients with glioblastoma were used to determine the exact position and size of the tumor. In the first step, preprocessing was applied to the images; this step registers the modalities and removes the disturbing areas, which has a significant impact on exploiting all the features of the magnetic resonance images as well as on the speed and accuracy of tumor detection. In the second step, the initial location of the suspected tumor area was identified with the bounding box algorithm, and the precise location of the tumor was then extracted and enlarged for 4 patients with the artificial bee colony algorithm. The tumors extracted in these slices are shown in Fig. 13. Since many articles evaluate and validate their methods by computing a regression or plotting the overlap, we performed the same regression comparison and then compared it with other methods. If the tumor areas detected by the radiologist (the assessment reference) are plotted on one axis and the simulation results on the other, the regression diagram of Fig. 14 is obtained, which indicates a precision of 98%. In [7] and [8], despite the high accuracy of the results, not all MRI modalities were used, so the image features were not fully represented, whereas in our proposed method all four modalities are registered together and the detection operation uses all of them. In [9], [10] and [11], although all the modalities were used and the disturbing areas were removed, the use of time-consuming algorithms as well as the lack of automation and accuracy can be considered shortcomings of those methods. In [13] and [14], image processing techniques were used to extract the tumor without removing any disturbing areas, while our proposed method eliminates all the disturbing points, and the tumor area is grown and extracted only after the tumor has been completely detected.
Table 2. Results of the proposed method for the 12 patients
Patient number | JSI | DSS | Sensitivity | Specificity |
1 | 0.94 | 0.97 | 0.97 | 0.98 |
2 | 0.91 | 0.94 | 0.96 | 0.93 |
3 | 0.97 | 0.91 | 0.91 | 0.91 |
4 | 0.96 | 0.90 | 0.97 | 0.92 |
5 | 0.99 | 0.96 | 0.97 | 0.91 |
6 | 0.94 | 0.93 | 0.97 | 0.99 |
7 | 0.91 | 0.97 | 0.93 | 0.98 |
8 | 0.99 | 0.98 | 0.94 | 0.06 |
9 | 0.94 | 0.98 | 0.91 | 0.97 |
10 | 0.93 | 0.97 | 0.99 | 0.95 |
11 | 0.95 | 0.92 | 0.98 | 0.97 |
12 | 0.92 | 0.94 | 0.97 | 0.98 |
Average | 0.95 | 0.94 | 0.96 | 0.96 |
Fig. 13. Tumor region extracted from the initial image.
By comparing the results of this method with the methods described in Section 1, it can be concluded that the accuracy of our method for the four parameters JSI, DSS, sensitivity and specificity is significantly improved. The results of this comparison are also presented in Fig. 15.
Fig. 14. Correlation chart between the results of the proposed method and the radiologist's diagnosis
Fig. 15. Comparison of the proposed method with other methods
5. References
[1] L. Sallemi, I. Njeh, S. Lehericy, "Towards a computer aided prognosis for brain glioblastomas tumor growth estimation", IEEE Trans. on NanoBioscience, vol. 14, no. 7, pp. 727-733, Oct. 2015.
[2] S. Ghnomiey, "Medical image segmentation techniques: An overview", International Journal of Informatics and Medical Data Processing, vol. 1, no. 1, pp. 16-37, 2017.
[3] N. M. Aboelenein, P. Songhao, A. Koubaa, A. Noor, A. Afifi, "HTTU-net: Hybrid two track U-Net for Automatic brain tumor segmentation", IEEE Access, vol. 8, pp. 101406-101415, 2020.
[4] J. Zhang, Z. Jiang, J. Dong, Y. Hou, B. Liu, "Attention gate resU-net for automatic MRI brain tumor segmentation", IEEE Access, vol. 8, pp. 58533-58545, 2020.
[5] L. Georgiou et al., "Estimating breast tumor blood flow during neoadjuvant chemotherapy using interleaved high temporal and high spatial resolution MRI", Magnetic Resonance in Medicine, vol. 79, no. 1, pp. 317-326, Jan. 2018.
[6] S. Joshi, S. Gore, "Ischemic stroke lesion segmentation by analyzing MRI images using dilated and transposed convolutions in convolutional neural networks", Proceeding of the IEEE/ICCUBEA, pp. 1-5, Pune, India, Aug. 2018.
[7] S. Manikandan, K. Ramar, M. W. Iruthayarajan, K. Srinivasagan, "Multilevel thresholding for segmentation of medical brain images using real coded genetic algorithm", Measurement, vol. 47, pp. 558-568, Jan. 2014.
[8] R. Laishram, W.K. Kumar, A. Gupta, K.V. Prakash, "A novel MRI brain edge detection using PSOFCM segmentation and canny algorithm", Proceeding of the IEEE/ICESC, pp. 398-401, Nagpur, India, Jan. 2014.
[9] X. Zhang, W. Dou, M. Zhang, H. Chen, "A framework of automatic brain tumor segmentation method based on information fusion of structural and functional MRI signals", Proceeding of the IEEE/ICCSN, pp. 625-629, Beijing, China, June 2016.
[10] Y. Zhang, S. Ye, W. Ding, "Based on rough set and fuzzy clustering of MRI brain segmentation", International Journal of Biomathematics, vol. 10, no. 2 , pp. 1-11, 2017.
[11] D. Kumar, H. Verma, A. Mehra, R.K. Agrawal, “A modified intuitionistic fuzzy c-means clustering approach to segment human brain MRI image”, Multimedia Tools and Applications, vol. 78, pp. 12663–12687, 2019.
[12] H. Khotanloua, O. Colliot, J. Atif, I. Bloch, “3D brain tumor segmentation in MRI using fuzzy classification, symmetry analysis and spatially constrained deformable models”, Fuzzy Sets and Systems, vol. 160, no. 10, pp. 1457- 1473, May 2009.
[13] C. Sompong, S. Wongthanavasu, "An efficient brain tumor segmentation based on cellular automata and improved tumor-cut algorithm", Expert Systems with Applications, vol. 72, pp. 231-244, April 2017.
[14] R.P. Grimton, C.S. Singh, M. Manikandan, “Brain tumor MRI image segmentation and detection in image processing”, International Journal of Research in Engineering and Technology, vol. 3, no. 1, pp. 1-5, 2014.
[15] I. Ahmed, Q. Nida-Ur-Rehman, G. Masood, M. Nawaz, "Analysis of brain MRI for tumor detection & segmentation", Proceedings of the World Congress on Engineering, vol. 1, pp. 1-6 , London, U.K., June/July 2016.
[16] S. Fazeli Nejad, G. Shahgholian, M. Moazzami, "Simultaneous design of power system stabilizer and static synchronous compensator controller parameters using bee colony algorithm", Journal of Novel Researches on Electrical Power, vol. 9, no. 1, pp. 1-10, 2020.
[17] N. Behzadfar, H. Soltanian-Zadeh, "Automatic segmentation of brain tumors in magnetic resonance images", Proceedings of the IEEE-EMBS/BHI, pp. 329-332, Hong Kong, China, Jan. 2012.
[18] T.T. Tang, J.A. Zawaski, K.N. Francis, A.A. Qutub, M.W. Gaber, “Classification of brain tumors using texture based analysis of T1-post contrast MR scans in a preclinical model”, Proceedings of the SPIE, vol. 10575, Article Paper: 105753T, 2018.
[19] FMRIB Software Library, http://www.fmrib.ox.ac.uk/fsl, Retrieved on Feb. 6th 2010.
[20] http://www.cancerbackup.org.uk/Treatment/Biological therapies/Monoclonal antibody/Bevacizumab
[21] B. N. Saha, N. Ray, R. Greiner, A. Murtha, H. Zhang, "Quick detection of brain tumors and edemas: A bounding box method using symmetry", Computerized Medical Imaging and Graphics, vol. 36, no. 2, pp. 95-107, March 2012.
[22] M. Rajchl et al., "Deepcut: Object segmentation from bounding box annotations using convolutional neural networks", IEEE Trans. on Medical Imaging, vol. 36, no. 2, pp. 674-683, Feb. 2017.