    2019-10-21


    Table 8
    The parameters of different comparative methods.

    Comparative methods    Parameters
    3-NN                   Set the parameter K equal to 3
    BP neural network      Step value of C and g is set to 0.5; set the initial epochs equal to 1000
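For concreteness, the 3-NN baseline of Table 8 (K = 3, majority vote over the nearest neighbours) can be sketched in a few lines of plain Python. The function name and the toy data points below are illustrative only, not drawn from the WBC/WDBC data sets.

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k nearest training
    points (Euclidean distance), as in the 3-NN baseline with K = 3."""
    dists = sorted(
        (math.dist(p, x), label) for p, label in zip(train_X, train_y)
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Toy example: two benign (0) and two malignant (1) points.
X = [(0.0, 0.0), (0.1, 0.0), (1.0, 1.0), (1.1, 1.0)]
y = [0, 0, 1, 1]
print(knn_predict(X, y, (0.05, 0.05)))  # → 0 (nearest neighbours are benign)
```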
    Fig. 2. Classification accuracy for the different comparative methods based on WBC data set.
    Fig. 3. Classification accuracy for the different comparative methods based on WDBC data set.
    Fig. 4. Misclassification cost for the different comparative methods based on WBC data set.
    Fig. 5. Misclassification cost for the different comparative methods based on WDBC data set.
    Fig. 6. G-mean for the different comparative methods based on WBC data set.
    Fig. 7. G-mean for the different comparative methods based on WDBC data set.
    select the top n features that obtain the best fitness. In this process, we adopt the SA algorithm to optimize the GA, which avoids trapping into a local optimum. On this basis, we obtain the best results, presented in Figs. 2 and 3, from which it can be seen that the SAGAW feature selection approach achieves better classification accuracy than the GAW method.
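To illustrate how the SA acceptance rule helps the GA escape local optima, the following minimal sketch applies the Metropolis criterion to single-bit-flip moves over a binary feature mask. This is our own toy code under assumed parameters (cooling schedule, step count, and the synthetic fitness function are all illustrative), not the paper's SAGAW implementation.

```python
import math
import random

def sa_accept(curr_fit, cand_fit, temp, rng):
    """Metropolis rule: always accept improvements; accept a worse
    candidate with probability exp(delta / temp), which lets the
    search escape local optima (the role SA plays inside SAGAW)."""
    if cand_fit >= curr_fit:
        return True
    return rng.random() < math.exp((cand_fit - curr_fit) / temp)

def sa_feature_search(fitness, n_features, steps=200, t0=1.0,
                      cooling=0.97, seed=0):
    """Search over feature masks, flipping one bit per step and
    cooling the temperature geometrically; returns the best mask."""
    rng = random.Random(seed)
    mask = [rng.random() < 0.5 for _ in range(n_features)]
    best = curr = fitness(mask)
    best_mask = mask[:]
    temp = t0
    for _ in range(steps):
        cand = mask[:]
        cand[rng.randrange(n_features)] ^= True  # flip one feature bit
        f = fitness(cand)
        if sa_accept(curr, f, temp, rng):
            mask, curr = cand, f
            if f > best:
                best, best_mask = f, cand[:]
        temp *= cooling
    return best_mask, best

# Toy fitness: reward selecting exactly the feature subset {0, 2, 3}
# (negative symmetric difference, so the maximum is 0).
target = {0, 2, 3}
fit = lambda m: -len(target ^ {i for i, b in enumerate(m) if b})
mask, score = sa_feature_search(fit, 6)
print(score)
```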
    6.4.2. Misclassification cost for the best solution
    In our proposed model, we take full account of the misclassification cost of breast cancer tumors and quantify the misclassification cost of the two scenarios described above. In this work, we set the cost of a correct classification to 0, while the misclassification cost must further distinguish two scenarios: the first misclassifies malignant tumors as benign, and the second misclassifies benign tumors as malignant. As noted before, the consequences of these two scenarios differ greatly; therefore, drawing on expert experience, we set mcmb = 10 and mcbm = 1 so as to reflect the difference between the two scenarios.
    Fig. 8. Running time for the different comparative methods based on WBC data set.
    Fig. 9. Running time for the different comparative methods based on WDBC data set.
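The cost setting above reduces to a one-line total-cost computation. Assuming malignant is taken as the positive class, a false negative is a malignant tumor classified as benign (cost mcmb = 10) and a false positive is a benign tumor classified as malignant (cost mcbm = 1); the function name and example counts below are our own, for illustration.

```python
def total_misclassification_cost(fn, fp, mc_mb=10, mc_bm=1):
    """Total cost under the asymmetric scheme above: each false
    negative (malignant predicted benign) costs mc_mb = 10, each
    false positive (benign predicted malignant) costs mc_bm = 1,
    and correct classifications cost 0."""
    return fn * mc_mb + fp * mc_bm

print(total_misclassification_cost(fn=2, fp=5))  # 2*10 + 5*1 = 25
```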
    Table 9
    The results of 10-fold cross-validation based on the WDBC data set.

    Underlying classifier    Accuracy    Misclassification cost    G-mean    Running time

    Note:
    a. Denotes the best results, but not the optimum results.
    b. Denotes the optimum results.
    The results of misclassification cost for the different comparative methods are presented in Figs. 4 and 5. From the results we can clearly see that the IGSAGAW algorithm achieves the lowest misclassification cost, followed by the GAW algorithm and then the baseline approaches. The main reason is that in the GAW feature selection approach we select the top n features that yield the maximum classification accuracy and the minimum misclassification cost, and during this feature selection process we applied three underlying classifiers, BP, 3-NN and CS-SVM, to perform the classification. In the IGSAGAW approach, we first used IG to rank the importance of the features and then applied the SAGAW algorithm to select the top n features with the best fitness. In this process, we adopt the SA algorithm to optimize the GA, which avoids trapping into a local optimum and achieves the optimum results.
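The IG ranking step can be sketched as follows, assuming discrete feature values and the standard definition IG(feature) = H(y) − Σ_v p(v) H(y | feature = v). The helper names and toy data are illustrative, not the paper's code.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy H(y) of a label list, in bits."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def info_gain(feature_values, labels):
    """IG = H(y) minus the weighted entropy of y within each
    feature-value group (assumes a discrete-valued feature)."""
    n = len(labels)
    h = entropy(labels)
    for v in set(feature_values):
        subset = [y for x, y in zip(feature_values, labels) if x == v]
        h -= len(subset) / n * entropy(subset)
    return h

def rank_features(X_cols, labels):
    """Return feature indices sorted by decreasing information gain,
    from which the top n features would be passed on to SAGAW."""
    gains = [info_gain(col, labels) for col in X_cols]
    return sorted(range(len(X_cols)), key=lambda i: -gains[i])

# Toy data: feature 0 perfectly predicts the label, feature 1 is noise.
y = [0, 0, 1, 1]
cols = [[0, 0, 1, 1], [0, 1, 0, 1]]
print(rank_features(cols, y))  # → [0, 1]
```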
    G-mean is the geometric mean of the true positive rate (TPR) and the true negative rate (TNR), and can be calculated by formulas (11) and (12). From the results in Figs. 6 and 7, we can clearly see that the IGSAGAW approach achieves the best results, followed by the GAW approach and then the baseline approaches. The main reason is that the numbers of FN and FP in IGSAGAW are smaller than in GAW, which in turn are smaller than in the baseline approaches.
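Concretely, with TPR = TP/(TP + FN) and TNR = TN/(TN + FP), G-mean = sqrt(TPR × TNR); a minimal sketch (the confusion-matrix counts below are illustrative, not from the experiments):

```python
import math

def g_mean(tp, fn, tn, fp):
    """Geometric mean of sensitivity (TPR) and specificity (TNR),
    matching the definition above."""
    tpr = tp / (tp + fn)
    tnr = tn / (tn + fp)
    return math.sqrt(tpr * tnr)

print(round(g_mean(tp=90, fn=10, tn=80, fp=20), 4))  # sqrt(0.9 * 0.8) ≈ 0.8485
```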