Building a highly efficient machine learning model requires sufficient data to allow robust feature extraction capable of recognizing the patterns of each class, so that the model can distinguish among different classes. It is therefore important to extract effective features from the available data without requiring more real data or enlarging the dataset with an augmentation technique. The task becomes more complicated when the data are images. In this paper, a new feature extraction approach called Feature Extraction Based on Region of Mines (FE_mines) is presented, with three versions to handle different medical images. The approach derives multiple formulas for each image using signal and image processing; the skew of the data distribution is then used to calculate three statistical measurements that capture hidden features, which increases discrimination among classes and yields powerful models with better performance and higher efficiency. Three experiments were conducted using three types of medical image datasets: Diabetic Retinopathy (color fundus photography), Brain Tumor (MRI), and COVID-19 chest (X-ray). The results show that the FE_mines approach achieved accuracy 1% to 13% higher across the three experiments than two traditional methods (the RGB and ASPS approaches). In addition, no augmentation technique is required to increase the size of the dataset, a step that can degrade performance. Furthermore, the approach simultaneously performs three preprocessing tasks: feature selection, reduction, and extraction.
Keywords: Feature Reduction, Feature Extraction, Medical Images, Classification
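The abstract mentions using the skew of the data distribution to derive statistical measurements from each image. The paper's actual FE_mines formulas are not given here, so the sketch below is only a generic illustration, assuming per-channel first-, second-, and third-moment features (mean, standard deviation, skewness); the function name and feature choice are hypothetical, not the authors' method.

```python
import numpy as np

def channel_moment_features(image):
    """Compute (mean, std, skewness) for each channel of an image.

    A generic sketch of skew-based statistical features, NOT the
    paper's FE_mines formulas. Accepts a 2-D grayscale array or a
    3-D (H, W, C) array; returns a flat feature vector of length 3*C.
    """
    img = np.asarray(image, dtype=np.float64)
    if img.ndim == 2:
        # Treat grayscale as a single-channel image.
        img = img[..., np.newaxis]
    feats = []
    for c in range(img.shape[-1]):
        x = img[..., c].ravel()
        mu = x.mean()
        sigma = x.std()
        # Third standardized moment; guard against constant channels.
        skewness = ((x - mu) ** 3).mean() / sigma ** 3 if sigma > 0 else 0.0
        feats.extend([mu, sigma, skewness])
    return np.array(feats)
```

A symmetric intensity distribution yields skewness near zero, while lesions or bright artifacts that push mass into one tail produce nonzero skewness, which is the kind of hidden, class-discriminating signal the abstract alludes to.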