Anales de la RANM

DEEP LEARNING GENITAL LESIONS IMAGE CLASSIFICATION
González-Alday R, et al. An RANM. 2022;139(03): 266-273

Using HTTPS, a secure transfer protocol, we establish a secure channel between client and server, ensuring the confidentiality of the communication as well as the integrity of the data, detecting and preventing transmission errors. Moreover, a web service makes it easy to add upgrades in the future, such as automated image submission, further security functionalities, time logs, and so on.

The web service includes user authentication: users register with an e-mail address, username and password to access a personal profile where they can upload, annotate and manage images. Besides specifying the image class, users can draw bounding boxes over the images in the web interface to locate the lesions precisely, which simplifies future preprocessing and enables the use of CNN architectures for object detection rather than classification alone. They can also add patient symptoms or notes that could help in the model's development. After uploading, the images are collected by the development team to expand the database used to train the deep learning model.

3. RESULTS

The CNN model was trained for 100 iterations (called epochs) over the training-set images (with the data augmentation transformations described above), starting from the pre-trained network weights. Training was manually supervised to ensure that accuracy improved consistently and that no anomalous oscillations occurred. As mentioned above, to carry out the 5-fold cross-validation, this process was repeated five times, rotating the training and testing images.
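The fold rotation described above can be sketched as follows. This is a minimal illustration assuming NumPy, not the authors' actual training code; `train_fn` and `eval_fn` are hypothetical placeholders for the 100-epoch fine-tuning (from pre-trained weights, with augmentation) and the accuracy evaluation steps.

```python
import numpy as np

def five_fold_splits(n_images, n_splits=5, seed=0):
    """Yield (train_idx, test_idx) pairs, rotating the held-out fold."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_images)
    folds = np.array_split(idx, n_splits)
    for k in range(n_splits):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(n_splits) if j != k])
        yield train, test

def cross_validate(images, labels, train_fn, eval_fn, n_splits=5):
    """Average test accuracy over the rotated folds.

    train_fn(train_images, train_labels) -> model (hypothetical: e.g. 100
    epochs starting from pre-trained weights); eval_fn(model, test_images,
    test_labels) -> accuracy, always measured on images unseen in training.
    """
    accs = []
    for train_idx, test_idx in five_fold_splits(len(images), n_splits):
        model = train_fn(images[train_idx], labels[train_idx])
        accs.append(eval_fn(model, images[test_idx], labels[test_idx]))
    return float(np.mean(accs))
```

Because each image appears in exactly one test fold, averaging the five fold accuracies gives a performance estimate over the complete dataset.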
We then evaluated the model's performance over the complete dataset, as well as its generalization capability, since testing was always performed on images "unseen" during training. After completing the cross-validation, the final testing accuracy was 86.6%. Figure 2 shows the confusion matrix for the different classes, which makes it easier to see which classes the CNN model predicts correctly and which it confuses.

Figure 2. Confusion matrix of the model's results during testing.
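As a sketch of how such a confusion matrix and the overall accuracy can be computed from the pooled test predictions (assuming NumPy and integer class labels; this is an illustration, not the authors' code):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """cm[i, j] counts images of true class i predicted as class j."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

def overall_accuracy(cm):
    """Correct predictions lie on the diagonal of the matrix."""
    return cm.trace() / cm.sum()
```

Each row of the matrix then shows where a given true class is being misclassified, which is the per-class picture Figure 2 visualizes.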
