TUESDAY, Jan. 27, 2026 (HealthDay News) -- A deep learning ensemble integrating three architectures shows potential for accurate melanoma detection, according to a study published in the December issue of Biosensors and Bioelectronics: X.

Eeshan G. Dandamudi, from the University of Missouri in Columbia, and colleagues developed a deep learning ensemble integrating the ConvNeXt-Base, ResNet-50, and Swin Transformer-Base architectures to identify melanoma using images from three-dimensional total body photography. The dataset contained 401,059 cropped images of skin lesions: 393 malignant and 400,666 benign.

The researchers found that the ensemble achieved an area under the curve (AUC) of 0.9208, outperforming the individual models (AUCs of 0.8763, 0.8722, and 0.8551 for Swin Transformer, ConvNeXt, and ResNet, respectively). Traditional machine learning models yielded lower AUCs: 0.777 for XGBoost, 0.719 for LightGBM, and 0.749 for CatBoost. The model's robustness was further verified through quadruple-stratified, leak-free fivefold cross-validation. The best generalization performance was seen with optimized ensemble weights of 40, 35, and 25 percent for Swin Transformer, ConvNeXt, and ResNet, respectively.

"It will be some time before this can be used as a tool by doctors in a health care setting, but this research is a promising proof of concept," lead author Kamal Singh, Ph.D., also from the University of Missouri, said in a statement. "As researchers, if we can get better at explaining why and how AI [artificial intelligence] comes to the conclusions it makes, more health care professionals will trust that it can be a helpful tool to ultimately support clinical decision-making and improve patient outcomes."

Several authors disclosed ties to the biopharmaceutical industry.

Abstract/Full Text
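The article does not specify how the three models' outputs were combined, but the reported weights (40/35/25 percent) suggest a weighted average of predicted malignancy probabilities, which is the standard approach. Below is a minimal sketch of that idea on synthetic data; the per-model probability arrays, their separability, and the rank-based AUC helper are all illustrative assumptions, not the study's implementation.

```python
import numpy as np

def auc_score(y_true, scores):
    # Rank-based AUC, equivalent to the Mann-Whitney U statistic
    # (assumes no tied scores, which holds for continuous values).
    order = np.argsort(scores)
    ranks = np.empty(len(scores), dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = int(y_true.sum())
    n_neg = len(y_true) - n_pos
    return (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Synthetic stand-ins for per-lesion malignancy probabilities from the
# three backbones (the real study used 3D total body photography crops).
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=1000)
p_swin     = np.clip(0.60 * y + rng.normal(0.2, 0.25, 1000), 0, 1)
p_convnext = np.clip(0.55 * y + rng.normal(0.2, 0.25, 1000), 0, 1)
p_resnet   = np.clip(0.50 * y + rng.normal(0.2, 0.25, 1000), 0, 1)

# Reported optimal weighting: 40% Swin Transformer, 35% ConvNeXt, 25% ResNet.
p_ensemble = 0.40 * p_swin + 0.35 * p_convnext + 0.25 * p_resnet
print(round(auc_score(y, p_ensemble), 3))
```

In practice such weights are tuned on a validation split so that the ensemble AUC exceeds each individual model's, which is the pattern the study reports (0.9208 versus 0.8551 to 0.8763 for the single architectures).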