Abstract

This study proposes an optimized approach for reducing salt-and-pepper, Gaussian, Poisson, and impulse noise in mammographic images so that masses can be detected accurately after noise reduction. It offers a noise reduction method called quantum wavelet transform filtering and a precise mass segmentation method based on image morphological operations, followed by classification with an atrous pyramid convolutional neural network (APCNN) as a deep learning model. The hybrid approach, called QWT-APCNN, is evaluated against previous methods using criteria such as peak signal-to-noise ratio (PSNR) and mean-squared error (MSE) for noise reduction and detection accuracy for mass area recognition. The proposed method shows better noise reduction and segmentation performance than state-of-the-art methods. In this paper, we used the APCNN, which is based on the convolutional neural network (CNN), as a new deep learning method able to extract features and perform classification simultaneously; empirically, for the purpose of this research, it is used to detect breast cancer, identify the exact area of the masses, and classify them into benign, malignant, and suspicious classes. The obtained results show that the proposed approach outperforms others on several evaluation criteria, with an accuracy of 98.57%, a sensitivity of 90%, a specificity of 85%, and an AUC from the ROC analysis of 86.77.

1. Introduction

Breast cancer is one of the most dreaded diseases affecting women worldwide and has led to many deaths [14]. Early recognition of breast masses prolongs life expectancy in women, and an automated system for detecting breast masses therefore supports radiologists in making an exact and accurate diagnosis. Providing an approach with high speed and accuracy, through computer-aided diagnosis techniques, to determine the exact area of breast tumors makes it possible to use a decision support system as an assistant to physicians. Breast cancer is recognized as a common disease among women in today's world, and its early detection leads to timely analysis of the disease and thus a better chance of survival. Before applying any image processing algorithm to mammographic images, it is important to perform preprocessing steps that detect boundaries that are not strongly distinguished from the mammographic background. Digital mammograms are difficult to interpret as medical images; therefore, a preparatory step is required to improve image quality and obtain more precise segmentation results [5]. The most important goal of this step is to enhance the image and facilitate processing by eliminating irrelevant parts of the mammographic background. Extraction of the breast border region and removal of the pectoral muscles are also preprocessing elements [6].

Preprocessing is regarded as a main step in finding mammographic image orientation and improving image quality. Digital mammographic images usually include noise artifacts in the background area and are very complicated to interpret, so preprocessing is important. This paper presents an analysis and investigation of suitable image processing techniques for mass area detection in mammographic images. Preprocessing is the first step in noise reduction for breast cancer images. Using the different types of filters presented to date and examined in [7], which are among the most effective filters available in this field, different noises can be detected and the appropriate filtering method identified. After reducing the noise in mammographic images, it is important to determine the exact area of the tumor using image segmentation. A common strategy for segmentation involves using an image segmentation method to detect local spots in an image and generate a possible output map. Therefore, this paper presents a technique based on image morphological operations for the segmentation of mammographic images with the aim of precise mass area detection.

Optimizing the mass area in mammographic images through noise reduction and image segmentation is an important step before classifying the type of masses, and it is the step this research addresses. In this paper, we used the quantum wavelet transform and an atrous pyramid convolutional neural network for breast sentinel lymph node cancer detection from mammographic images; this method is used here for the first time to detect breast cancer. Studying previous methods of noise reduction and segmentation aimed at determining the tumor area in mammographic images is therefore an important issue that shapes the subsequent processing. The review of prior work is accordingly divided into two general parts: noise reduction of mammographic images and image segmentation operations. A further goal of this study is to examine different filters, such as the mean filter, the median filter, and Wiener filters of varying window sizes, using the DDSM (Digital Database for Screening Mammography) dataset as a starting point.

2. Literature Review

The noise level in mammographic images strongly influences image analysis and classification accuracy [8]. In [9], nonlocal noise analysis-based methods for grating-based mammographic images were studied. X-ray grating-based mammography can revolutionize the radiological approach to breast imaging because it works well with conventional X-ray tubes and can recover attenuation, differential phase, and dark-field images. However, the images, particularly the differential phase and dark-field images, are contaminated by noise, which lowers image quality and necessitates noise treatment.

Preprocessing of digital mammograms of the breast area using an adaptive weighted Frost filter is presented in [10]. Since it can identify cancer up to two years before the tumor is visible, mammography is the most effective method for early detection of breast cancer. The computational cost of preprocessing and postprocessing in mammographic image identification is significant. Initial processing is a crucial part of any imaging approach, the most significant aspect being the execution of a process that can improve image quality and make it appropriate for further analysis and data extraction.

In [11], impulse noise reduction in ultrasound mammographic images was proposed using the homogeneity modified Bayes shrink (HMBS) method, and seven different criteria were used to assess image quality. The pixel intensities are first replaced with homogeneous neighborhood averages, and the HMBS threshold value is then used to distinguish homogeneous zones from noisy areas produced by uniform filters. In [12], a deep learning-based approach is used to reduce noise in mammographic images with a physics-driven data augmentation scheme. In this study, a deep learning approach based on a convolutional neural network (CNN) is proposed to reduce mammographic noise and improve image quality. The noise level is first increased, and a variance-stabilizing transform is used to convert the Poisson noise to white Gaussian noise. Using this data augmentation, a deep network is trained to learn the image noise mapping. The results showed better noise reduction than previous methods such as BM3D and DnCNN.

Analyzing cancerous masses in mammograms is a challenging task because of issues such as low contrast; unclear, fuzzy, or split boundaries; and the presence of serious distortions, as addressed in [13]. These facts complicate the development of computer-aided diagnosis (CAD) systems that help radiologists. In that paper, a mass segmentation algorithm for mammograms based on robust multiscale features and maximum a posteriori (MAP) estimation was proposed. The segmentation technique comprises four stages: a dynamic contrast enhancement scheme applied to a chosen region of interest (ROI), correction of background infiltration by template matching, recognition of mass candidate points from posterior probabilities computed at various scales with a robust feature integration element, and final delineation of the mass region by the MAP scheme.

In [14], a combination of wavelet analysis and a genetic algorithm is used to classify and diagnose breast cancer in mammographic images. According to this article, there is rising concern today about the sensitivity and reliability of detecting abnormalities in craniocaudal (CC), lateral oblique (LO), and mediolateral oblique (MLO) views of mammographic images. The article describes a collection of computational algorithms for segmenting and identifying masses in mammograms in CC and MLO views. Gray-level enhancement and amplification methods based on the wavelet transform and the Wiener filter are used together with an artifact removal algorithm. Finally, in mammograms randomly selected from the Digital Database for Screening Mammography (DDSM), a method is used to identify and separate the masses using different thresholds, wavelet transforms, and genetic algorithms. The area overlap metric (AOM) was used to assess the developed computational approach. Experimental results indicated that the proposed method could serve as a basis for mammographic mass segmentation in both CC and MLO views; an important limitation is that the method is restricted to CC and MLO view analysis.

Reference [15] also presented a semisupervised adaptive fuzzy GrowCut method for segmentation of regions of interest in mammographic images. The article proposed a semisupervised version of the GrowCut algorithm obtained by modifying the automata evolution rule with a Gaussian fuzzy membership function to model undefined boundaries. In this method, the manual selection of suspected lesion points is replaced by a semiautomatic step in which only the internal points are chosen using the differential evolution algorithm.

In [16], various encoder-decoder convolutional neural network methods are used for mammographic image segmentation. The convolutional neural network structure uses both SegNet and U-Net. This approach can distinguish the masses directly from the images, and its high segmentation accuracy in identifying masses demonstrated its superiority over previous methods.

Other similar approaches have already been presented. For example, in [17], deep learning based on the 2-Conductive U-Net method has been used to segment fibrous and fibroglandular tissue. Multitask segmentation of several regions of mammographic images to find deep masses has also been presented, using deep learning and a standard convolutional neural network [18]. Deep learning with V-Net convolutional neural networks has also been used for the segmentation of mammographic and prostate images.

In [19], a new fast unsupervised nuclear segmentation and classification scheme was proposed for automatic Allred cancer scoring in immunohistochemical breast tissue images. Adaptive local thresholding and an enhanced morphological procedure were used for extraction and segmentation, and the results showed 98% accuracy in tumor area determination. In [20], segmentation of mammographic images to identify and classify benign and malignant masses is presented with an optimized region growing approach. In the preprocessing phase, Gaussian filtering is used for noise reduction. The dragonfly optimization (DFO) algorithm is then used for image segmentation, and a combined GLCM and GLRLM approach is used to extract features as input to a feedforward neural network (FFNN) trained with backpropagation (BP). The results showed 97.8% accuracy.

Further studies have been performed on the segmentation and recognition of breast cancer masses in mammographic images, which can be reviewed in general. In [21], a region growing segmentation technique with a specific-threshold cellular neural network was used to segment and detect breast cancer masses, and detection and classification were further optimized with a genetic algorithm; the accuracy of this method is 96.47%. The use of microarray images to detect breast cancer masses was studied in [22], with 95.45% detection accuracy. The use of a backpropagation neural network for segmentation and detection was studied in [23], with 70.4% detection accuracy. A naïve Bayesian classification method based on Bayesian theory applied to mammographic images, with an accuracy of 98.54%, was presented in [24]. An adaptive intelligent decision-making system for the detection of breast cancer in mammographic images was employed in [25] based on regression-based evolutionary methods. Breast cancer recurrence prediction was presented in [26] using an optimized ensemble learning (HBPCR) method with 85% accuracy in tumor area detection.

In [27], a deep learning method is used to predict axillary lymph node (ALN) status from a contrast-enhanced CT (CECT) dataset, and various classical machine learning algorithms and convolutional neural network structures are compared. In [28], several approaches to developing the best classifier for sentinel lymph node biopsy images are presented.

Irfan et al. used a dilated semantic segmentation network to segment ultrasonic breast lesion images [29]. Jabeen et al. [30] used a probability-based optimal deep learning feature fusion method for breast cancer detection. Miraj et al. [31, 32] introduced a method based on a quantization-assisted U-Net with ICA and deep feature fusion for breast cancer detection in ultrasound images.

3. Materials and Methods

The primary goal of this research is to present a noise reduction and image segmentation approach that determines the exact area of breast tumors and supports mass detection and classification. This approach has two main parts that apply the principles of image processing, machine vision, and statistical and analytical pattern recognition: preprocessing, which aims to reduce mammographic image noise using methods such as quantum inverse MTF (modulation transfer function) filtering, and segmentation, which aims to accurately determine the mass area using image morphological operations. After these steps, deep learning methods such as the APCNN are applied.

First, some of the noise types in mammographic images need to be examined in detail. The noises that affect mammographic images are salt-and-pepper, Gaussian, Poisson, and impulse noises. Noise consists of random fluctuations in image intensity and appears as grains or particles in mammographic images. When noise is present, the image shows altered intensity values instead of the original values.
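As an illustration of how these degradations can be modeled (a minimal NumPy sketch under our own assumptions; this is not the simulation code used in this study), the following adds Gaussian, salt-and-pepper, and Poisson noise to a grayscale image scaled to [0, 1]:

```python
import numpy as np

def add_gaussian(img, sigma=0.05):
    # Additive white Gaussian noise on an image scaled to [0, 1]
    return np.clip(img + np.random.normal(0.0, sigma, img.shape), 0.0, 1.0)

def add_salt_and_pepper(img, amount=0.02):
    # Flip a fraction of pixels to pure black (pepper) or pure white (salt)
    noisy = img.copy()
    mask = np.random.rand(*img.shape)
    noisy[mask < amount / 2] = 0.0          # pepper
    noisy[mask > 1.0 - amount / 2] = 1.0    # salt
    return noisy

def add_poisson(img, peak=255.0):
    # Signal-dependent Poisson (quantum) noise, as in low-dose X-ray imaging
    return np.random.poisson(img * peak) / peak

if __name__ == "__main__":
    clean = np.random.rand(64, 64)  # stand-in for a mammogram patch
    noisy = add_poisson(add_gaussian(add_salt_and_pepper(clean)))
```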

3.1. Preprocessing Phase

First, it is necessary to normalize the images. In the preprocessing phase, the input data, which contain noise and need enhancement, must be normalized. Resizing the images to a specified size is performed with a logical filtering method named quantum inverse MTF filtering. After the preprocessing steps, the input image is normalized. In the integration of local thresholding and active contours, each image is represented by a two-dimensional array of pixels with integer values in the range [0, 255]. Local thresholding initializes the images in two stages. Initially, the input noisy image is defined as the initial image and is used to eliminate the image noise; local search operations then enhance the initial images using the quantum wavelet transform filtering method. Local thresholds and active contours are chosen because they are computationally faster than other methods and offer significant results in the literature. At the end of the initial step, there is a decomposed image. In the second step, thresholding is performed on the detail coefficients, and one of the decomposed parts is arbitrarily chosen and sent to a reconstruction operation.
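A conventional wavelet-domain analogue of this decompose-threshold-reconstruct step can be sketched with PyWavelets (our simplification for illustration; the quantum wavelet transform filtering described here is not part of this library, and the wavelet, level, and threshold below are assumed values):

```python
import numpy as np
import pywt

def wavelet_denoise(img, wavelet="db4", level=2, thr=0.04):
    # 1) Decompose the image into approximation and detail coefficients
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    approx, details = coeffs[0], coeffs[1:]
    # 2) Soft-threshold only the detail coefficients (where most noise lives)
    shrunk = [tuple(pywt.threshold(d, thr, mode="soft") for d in level_details)
              for level_details in details]
    # 3) Reconstruct the enhanced image from the modified coefficients
    return pywt.waverec2([approx] + shrunk, wavelet)

denoised = wavelet_denoise(np.random.rand(128, 128))
```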

When the selection value is below the [0, 1] range or below the rate of local searches in the quantum wavelet transform filtering algorithm, a newly decomposed image may pass through the local search operator. Each image is then arranged separately by its pixel values, and once the decomposition is finished, the best coefficients in the image are taken as the quantum value of the work in progress. A signal in a mammographic image can be separated into various shifted or rescaled presentations of the features used in the feature extraction process, and local thresholding and active contours can be used to decompose the image into these elements. In fact, image segmentation can be performed by applying quantum wavelet transform filtering with local thresholding and active contours; in this situation, the resulting coefficients can eliminate some details. Quantum wavelet transform filtering combined with local thresholding and active contours has the great advantage of localizing fine details in an image: the active contour can be used to isolate fine-grained details, while local thresholding detects gross details, combines them with the fine-grained details, and reads all rows and columns linearly and diagonally; structurally, quantum wavelet transform filtering keeps the noise in the mammographic image to a minimum. In the first step, we define a threshold value for the noise reduction method, and the quantum wavelet transform filtering is then applied in three parts, given in equations (1)-(3). In the noise reduction steps, which address noises such as Gaussian, salt-and-pepper, and blur effects, the active contour is applied to determine the variation of these noises and help the quantum wavelet transform filtering achieve further noise reduction. Quantum wavelet transform filtering based on local thresholding and active contours can produce a much smoother result. A local threshold function and active contour with quantum wavelet transform filtering have two main features, the first of which is that the function is oscillatory or has a wave-like appearance, as in the following equation:

Local thresholding values lie in the [0, 1] or [0, 255] intensity range. The second feature is that the vast majority of the wavelet energy is limited to a finite interval, whose relation is given in the following equation:

The noise reduction objective of the proposed method is expressed in general form by the following equation:

The functional in equation (3) is edge-aware and attempts to preserve the crucial characteristics of the image. Its data-fidelity term ensures a specific level of agreement between the estimated image and the original noisy image, one parameter controls the weight of the total-variation term, additional parameters balance the terms, and the summation runs over all points in the image. By minimizing equation (3), the goal is to reduce the overall image variability while maintaining fidelity. The total-variation term reflects the fact that a single mammographic image may contain several kinds of noise, such as Gaussian, salt-and-pepper, or blur effects; this variation is used to determine the kinds of noise present and to compute their sum. The quantum wavelet transform filtering proposed in this article is the innovative part of the mammographic noise reduction. QWT stands for the matched filter named quantum wavelet transform from [33] and for quantum image filtering in the frequency domain in [32, 34], which is based on the fast Fourier transform (FFT). It should be noted that the initial threshold value is experimental and is defined by trial and error. The method for finding noisy pixels is described in Figure 1.
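For reference, a standard total-variation denoising functional of the kind this description suggests (a hedged reconstruction under our assumptions; the exact form of equation (3) may differ in detail) is

\[
\hat{u} = \arg\min_{u} \; \frac{\lambda}{2}\sum_{i\in\Omega}\left(u_i - f_i\right)^{2} + \sum_{i\in\Omega}\left\lVert (\nabla u)_i \right\rVert ,
\]

where \(f\) is the noisy image, \(u\) is the estimated image, \(\Omega\) is the set of image points, and \(\lambda\) balances fidelity to the data against the edge-preserving total-variation term.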

To find noisy pixels in an image, four brightness ranges from white to black are defined for each pixel: one pair of bounds for gray, one for dark gray, one for black, and one for white.
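A minimal sketch of this brightness-range check follows (illustrative only; the exact bounds are defined experimentally and are not listed in the text, so the four ranges below are assumed placeholders, and the median comparison is our own simple impulse-noise heuristic):

```python
import numpy as np
from scipy.ndimage import median_filter

# Hypothetical brightness ranges on a [0, 255] scale (placeholders, not the paper's values)
RANGES = {"black": (0, 40), "dark_gray": (41, 110), "gray": (111, 200), "white": (201, 255)}

def brightness_class(value):
    # Assign a pixel to one of the four brightness ranges
    for name, (lo, hi) in RANGES.items():
        if lo <= value <= hi:
            return name
    return "unknown"

def noisy_pixel_mask(img, window=3):
    # Flag a pixel as noisy when its brightness class differs from the class
    # of the median of its neighborhood
    med = median_filter(img, size=window)
    classify = np.vectorize(brightness_class)
    return classify(img) != classify(med)

mask = noisy_pixel_mask(np.random.rand(64, 64) * 255)
```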

3.2. Image Segmentation Phase

Image segmentation is cited as a complex process in digital image processing systems. This complexity stems from the fact that precise identification of the image space requires identifying peak points with respect to the background and foreground. In this stage, edge detection is also possible, and based on the edges, different areas can be separated from each other in terms of light intensity and color. The output of the preprocessing stage, which performs the noise reduction on the mammograms, is the input of the segmentation stage. There are two reasons for using image morphological operations. First, an image is treated as a search space, and this search space can be refined by the segmentation operation; improved segmentation therefore supports dimensionality reduction, feature selection and extraction, and classification, increasing the accuracy and the other evaluation and validation criteria as much as possible. Second, high execution speed, fast convergence, and avoidance of local-optimum trapping in image processing systems can be achieved with this algorithm. Generally, any image contains many edges that separate objects and their color boundaries. These edges are also present in mammographic images, and image morphological operations can detect more edges in the segmentation part, in which the spiders of the social spider algorithm move to find edges according to the brightness values determined in the preprocessing phase by local thresholding in the [0, 1] or [0, 255] range.
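As an illustration of the kind of morphology-based segmentation step described above (a scikit-image sketch with our own choice of threshold and structuring elements; the exact pipeline used in this study may differ):

```python
import numpy as np
from skimage import filters, morphology, measure

def morphological_mass_segmentation(img):
    # 1) Threshold the denoised image to separate bright candidate regions
    binary = img > filters.threshold_otsu(img)
    # 2) Opening removes small bright specks; closing fills gaps inside regions
    cleaned = morphology.binary_opening(binary, morphology.disk(3))
    cleaned = morphology.binary_closing(cleaned, morphology.disk(5))
    # 3) Keep the largest connected component as the mass candidate
    labels = measure.label(cleaned)
    if labels.max() == 0:
        return cleaned
    sizes = np.bincount(labels.ravel())[1:]
    return labels == (np.argmax(sizes) + 1)

mask = morphological_mass_segmentation(np.random.rand(128, 128))
```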

3.3. Classification

In this section, an APCNN is used, which optimizes the CNN with the Moore-Penrose matrix. The classic CNN is a neural network, but most neural network structures have two general disadvantages related to the use of gradient descent to adjust the weights in the training phase, the time this takes, and the volume of training data required. First, training slows down considerably once large amounts of data become available for the training and testing phases. Second, neural networks cannot train and test reliably when a similar dataset is imported or new data are entered into the same dataset; that is, there is a lack of generalization, and many kinds of neural networks therefore do not generalize. First, we consider the CNN.

This research optimizes the CNN into an APCNN in order to provide an intelligent method that runs fast and generalizes well. The APCNN is used because of the problems in ordinary neural network structures and because of its high learning speed: only one parameter is adjusted in the training phase, as opposed to the many parameters adjusted during training in standard neural networks. The main shortcoming of the plain CNN is that it cannot perform dimension reduction, feature extraction, and classification simultaneously in the way normal learning methods do, but this is achieved by optimizing the CNN and building the APCNN structure. In the CNN, the input layer is attached to the hidden layer through a series of weights; these weights are initially assigned random values and do not need to be reset, although the CNN is time-consuming in the training phase. The hidden-layer neurons are ordinary neurons and do not need a centroid and sigma. Finally, the only parameters the network needs to adjust are the synaptic weights between the hidden layer and the output layer. In general, the network has a feedforward structure and uses pseudoinverse computation to calculate the synaptic weights in real time, which results in faster training and testing. The overall architecture of the CNN is illustrated in Figure 2.
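The "atrous pyramid" idea itself can be sketched as parallel dilated convolutions whose outputs are concatenated and fused (a minimal PyTorch sketch under our assumptions; the MATLAB implementation and layer sizes used in this study are not reproduced here, and the channel counts and dilation rates are illustrative):

```python
import torch
import torch.nn as nn

class AtrousPyramid(nn.Module):
    """Parallel 3x3 convolutions with increasing dilation ("atrous") rates."""
    def __init__(self, in_ch=1, out_ch=16, rates=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r)
            for r in rates
        )
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x):
        # Each branch sees a different receptive field; concatenation keeps
        # both fine detail and wider context before the 1x1 fusion.
        feats = torch.cat([torch.relu(b(x)) for b in self.branches], dim=1)
        return torch.relu(self.fuse(feats))

features = AtrousPyramid()(torch.randn(1, 1, 128, 128))
```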

In general, it can be argued that this network behaves quite differently from other deep learning methods and from alternative classification methods such as the support vector machine and naïve Bayesian methods. Because of its flexibility, it can use nonlinear activation functions. By default, the CNN has the following equation in its general form:

In this equation, one set of weights connects the input layer to the hidden layer and another connects the hidden layer to the output layer; a threshold term represents the bias of the hidden-layer neurons, the transfer (activation) function maps the weighted inputs, and the bias is randomly assigned. When recalibrated with a combination of the known parameters of the overall adjustment, this information yields the output layer as follows:

The most important target in all training-oriented methods is to minimize the error to the smallest conceivable value. The output error function is computed from the real output in the CNN and can be evaluated for both the training and testing sets. For both, the output obtained needs to equal the real (target) output; when this holds, the unknown parameters are supplied and the outcome is fulfilled. The hidden-layer output matrix H is generally not invertible, implying that the number of training samples may not equal the number of hidden-layer attributes. Therefore, inverting H to obtain the output weights β is a significant problem. To overcome this challenge, a matrix called the Moore-Penrose matrix is used, which provides an approximate (generalized) inverse and allows dimension reduction, feature extraction, and classification to be performed with excellent precision and remarkable speed in comparison with other approaches. Using the Moore-Penrose matrix, T is the target output matrix and H⁺ is the generalized Moore-Penrose inverse of H, so the output-weight problem of the CNN is solved as β = H⁺T, which turns the network into the APCNN, a Moore-Penrose extreme learning machine. In general, during the training phase, the APCNN transforms into a chain of repeated modules and functions as a conveyor, adding or subtracting information from neurons.
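This closing step, solving the output weights in one shot with a Moore-Penrose pseudoinverse instead of gradient descent, can be sketched as follows (an ELM-style NumPy illustration under our assumptions about the notation; it is not the APCNN implementation itself):

```python
import numpy as np

def pseudo_inverse_output_weights(X, T, n_hidden=64, seed=0):
    # Random, fixed input-to-hidden weights and biases (never retrained)
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)          # hidden-layer output matrix H
    beta = np.linalg.pinv(H) @ T    # output weights: beta = H^+ T
    return W, b, beta

def predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy usage: 200 feature vectors with 3 one-hot class targets
X = np.random.rand(200, 10)
T = np.eye(3)[np.random.randint(0, 3, 200)]
W, b, beta = pseudo_inverse_output_weights(X, T)
scores = predict(X, W, b, beta)
```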

Unlike other classification models, such as deep learning structures, support vector machines, and naïve Bayesian methods, no weight update operations are carried out while training with the APCNN method. At the intersections of segments, the APCNN can define properties. By minimizing the APCNN energy function, an appropriate model is trained and modeled as follows:

Under these circumstances, the energy function couples the labels with the particular pixels of the original image: the unary term is the negative logarithm of the probability assigned by the APCNN to each pixel. The pairwise term handles the relationship between each pair of pixels and is characterized by the following equation, which tests the compatibility of APCNN label pairs in a fully connected layer.

In equation (8), the number of Gaussian kernels is equal to 2, each kernel has an associated weight, and a label compatibility function is applied. The appearance kernel attempts to assign the same class labels to adjacent and neighboring pixels carrying similar intensities, while the smoothness kernel is associated with the aim of removing small, useless regions. These two kernels are presented in equations (8) and (9), respectively.
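A hedged sketch of two such Gaussian kernels, an appearance kernel that uses both position and intensity and a smoothness kernel that uses position only, follows (our own notation and parameter values; the exact forms of equations (8) and (9) are not reproduced in the text):

```python
import numpy as np

def appearance_kernel(p_i, p_j, I_i, I_j, theta_alpha=10.0, theta_beta=0.1):
    # Nearby pixels with similar intensity should share a label
    pos = np.sum((p_i - p_j) ** 2) / (2 * theta_alpha ** 2)
    col = (I_i - I_j) ** 2 / (2 * theta_beta ** 2)
    return np.exp(-pos - col)

def smoothness_kernel(p_i, p_j, theta_gamma=3.0):
    # Penalizes small isolated regions regardless of intensity
    return np.exp(-np.sum((p_i - p_j) ** 2) / (2 * theta_gamma ** 2))

k = appearance_kernel(np.array([5, 7]), np.array([6, 7]), 0.42, 0.40)
```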

In these kernels, the light intensities of each pixel pair and their spatial coordinates are used as features describing brightness and spatial data, and the bandwidth parameters of the Gaussian kernels control the influence of intensity and spatial distance, respectively. However, some points may not be captured by this method, which is why an optimization needs to be performed in the layers of this algorithm. In general, the APCNN uses an input layer with a given number of neurons; the structure to be trained and tested then has convolution layers, pooling layers, and fully connected layers along with the Moore-Penrose matrix. After that, a Soft-Max layer is embedded, followed by an output layer that shows the result; the Soft-Max layer is 7 × 7. The initial APCNN training and segmentation process takes place in the training layers, and while the deep network is trained, the Soft-Max layer and the APCNN are used to enhance the segmentation and feature extraction operations. Using the probabilistic principles of Bayesian filters, the state of a dynamic system can be estimated from a sequence of noisy sensory observations. First of all, the Bayesian law is stated for the APCNN method, in which some probabilities are skipped (which is the reason it is called an atrous pyramid), whose model is expressed by the following equation:

The Bayesian law is then used to update this assumption for the assumed cases, as shown in the following equation:

It is expected that, given all estimated perceptions, observations, and values up to and including the current time, the state of the dynamic system at that time can be approximated; using the Bayesian formula, this probability is determined by the following equation:

Here, one set contains all observations, the state values are expressed by another set, and a prior term defines the preceding data about the state of the system (before any perception). Accordingly, the Bayesian law takes the form of the following equation:

In these equations, one term is the new prediction, another is a scaling (normalization) factor, others are the observation likelihoods of the moving object, and another is the prior probability before investigating the tumor masses based on sentinel lymph node metastasis and assessment of mitotic density. In addition, the system dynamics and the preceding prediction in the tumor mass detection are included. Assuming that the observations are independent of each other, the system is defined as an operation of the probabilistic APCNN. The defined Bayesian models are somewhat complicated, and they are not easy to analyze with Gaussian distributions when linear models are involved; the equations can be simplified until the desired deep learning point is reached. However, the probabilistic APCNN techniques are used to solve the equations considering all possible alternatives.
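A minimal discrete illustration of this recursive Bayesian update follows (a toy NumPy sketch, not the probabilistic APCNN itself; the prior and likelihood values are assumed): the prior over states is multiplied by the observation likelihood and renormalized.

```python
import numpy as np

def bayes_update(prior, likelihood):
    # posterior ∝ likelihood × prior, then normalize (the scaling factor)
    posterior = likelihood * prior
    return posterior / posterior.sum()

# Three hypothetical states for a candidate region: benign, malignant, suspicious
prior = np.array([0.5, 0.3, 0.2])
likelihood = np.array([0.2, 0.7, 0.1])  # assumed observation model for one measurement
posterior = bayes_update(prior, likelihood)
```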

The APCNN can specify attributes. Since the segmented parts produced by the image morphological operations enter the feature extraction phase, and since the main features are brightness and edge intensity in images that were already denoised and enhanced in the earlier stages, the dimension reduction and feature selection operations are performed by the APCNN, which then represents the exact mass area in the spectral image. The APCNN is also able to classify the data into three main classes: benign, malignant, and suspicious. Table 1 gives a general description of the MIAS data. These properties and features are used in the simulation, which is run on the MATLAB platform with a 7-core Intel 3.4 GHz processor, 6 MB of cache, and 6 GB of memory.

4. Results and Discussion

The simulation was run on the MATLAB platform with a 7-core Intel 3.4 GHz processor, 6 MB of cache, and 6 GB of memory. In this research, the MIAS dataset is described using statistical properties. The dataset contains images with breast cancer features, images without breast cancer features, and suspicious cases; based on the statistical data of this section, the diagnosis is performed accordingly. The dataset can be downloaded at https://peipa.essex.ac.uk/info/mias.html.

The simulation proceeds step by step. Initially, the input image is loaded and displayed as shown in Figure 3.

The first part of the preprocessing is then carried out to resize the image and to perform an initial noise reduction with a simple median filter. Then, the proposed quantum wavelet transform filtering method is used to reduce noise and enhance the image, with the result shown in Figure 4.

Statistically, the proposed noise reduction approach has a higher capability than previous methods. Table 2 compares the proposed method with other methodologies on the MIAS data in terms of the assessment criteria.

The noise in real-world biomedical images is a well-known problem that reduces diagnostic accuracy. In this study, we explored the robustness of the proposed method to noise: we lowered the quality of the images by adding different types of noise and then analyzed the drop in performance. We also compared our results with those of other authors obtained on the same dataset.

The hyperparameters for the image segmentation experiments are initialized randomly and then tuned by trial and error.

The "segmentation with image morphological operations" button is then pressed, which performs the image segmentation operation with the social spider algorithm in about 0.5 seconds and produces the output shown in Figure 5.

In the social spider algorithm segmentation operation, the operators of the algorithm must be defined: an initial population of 100 spiders, a vibration rate equal to 2, and a prey attack rate of 0.02 as standard values, with the initial configuration of the algorithm taken into account. The segmentation is performed over 100 iterations using both edge and color properties. Statistically, the proposed image segmentation approach has a high capability compared with previous methods. Table 3 compares the proposed image segmentation method with other methods in terms of the evaluation criteria.

As can be seen, our method yields good results in the classification part. Based on Table 3, an accuracy of 99.50% can be achieved in classification, but it is corrected to 98.57% because the whole method is a combined fusion method for breast cancer detection in mammography. The entire proposed approach, from preprocessing and segmentation through feature extraction and classification, is represented as a ROC diagram; the output is shown in Figure 6.

A final output sample for the detection of a cancerous mass in MIAS imaging data is shown in Figure 7.

Each patient in the MIAS database has an identifier, and the classification shows the results for each patient based on this ID. The first image selected and presented as an example for visualization corresponds to the first output in the first line of the classification results.

For example, a patient with ID 915940 has malignant breast cancer, as indicated by the label Malignant, whereas a patient with ID 91762702 has benign breast cancer. The patient with ID 91376702 is considered a suspicious case, which is identified by the label Suspicious One after classification. At the end of this study, a general approach called the atrous pyramid CNN, an extended structure of the pyramid CNN with striking performance, is reviewed and compared.

The sensitivity, specificity, and accuracy for 15 tested images are shown in Figure 8.

The average value for sensitivity, specificity, and accuracy is shown in Figure 9.

The limitation of this study is the use of a deep learning method, which needs more time for learning and also needs a large amount of data to train well.

The most important reasons for using the APCNN in this study, compared with other intelligent methods in the classification and feature extraction phase, such as the standard convolutional neural network and the recursive neural network as two deep learning techniques and conventional classifiers such as the support vector machine and naïve Bayesian methods, are cited in Table 4.

5. Conclusions

In this study, we presented a noise reduction and segmentation method for mammographic images from the MIAS dataset in order to improve image quality and to determine the precise location of tumors. The proposed approach uses a two-step operation involving preprocessing and segmentation. In the preprocessing phase, a method called quantum wavelet transform filtering was presented that finds noisy pixels and reconstructs them so as to minimize the noise as much as possible; this method moves linearly, column-wise, and diagonally with minimal revisiting of the searched pixels. The segmentation operation aims to find the exact mass area with the optimized social spider algorithm: each spider is positioned on the pixels and, according to the operators of this algorithm, moves based on two features, namely, light intensity and edges, until the masses can be separated from the rest of the image. After these steps, a new deep learning technique is used that optimizes the convolutional neural network (CNN) with atrous filters, the atrous pyramid CNN (APCNN).

Data Availability

The data are available at https://peipa.essex.ac.uk/info/mias.html.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This study was supported by Altinbas University.