Preprint
Article

This version is not peer-reviewed.

Architectural Diversity in Mixture of Experts: A Comparative Study

Submitted: 10 December 2025

Posted: 11 December 2025


Abstract
This work presents the integration of Mixture of Experts (MoE) architectures into the LEMUR neural network dataset to enhance model diversity and scalability. The MoE framework employs multiple expert networks and a gating mechanism for dynamic routing, enabling efficient computation and improved specialization across tasks. Eight MoE variants were implemented and benchmarked on CIFAR-10, achieving up to 93% accuracy with optimized routing, regularization, and training strategies. This integration provides a foundation for benchmarking expert-based models within LEMUR and supports future research in adaptive model composition and automated machine learning. The project and its plugins are available as open-source software under the MIT license at https://github.com/ABrain-One/nn-dataset.
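To make the routing idea concrete, the following minimal sketch shows a Mixture of Experts layer with top-k gating, written in PyTorch. It illustrates the general technique only: the layer sizes, number of experts, top_k value, and expert architecture are assumptions chosen for CIFAR-10-sized inputs, not the configuration used in the LEMUR implementation.

# Minimal sketch of a Mixture of Experts layer with top-k gating.
# Illustrative only: sizes, num_experts, and top_k are assumptions,
# not the LEMUR project's actual configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MoELayer(nn.Module):
    def __init__(self, in_dim: int, hidden_dim: int, out_dim: int,
                 num_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        # Each expert is a small feed-forward network.
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU(),
                          nn.Linear(hidden_dim, out_dim))
            for _ in range(num_experts)
        ])
        # The gate scores every expert for each input sample.
        self.gate = nn.Linear(in_dim, num_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        scores = self.gate(x)                                # (batch, experts)
        top_vals, top_idx = scores.topk(self.top_k, dim=-1)  # (batch, top_k)
        weights = F.softmax(top_vals, dim=-1)                # renormalize over selected experts
        out_dim = self.experts[0][-1].out_features
        out = torch.zeros(x.size(0), out_dim, device=x.device)
        # Route each sample only through its top-k experts and
        # combine their outputs with the gating weights.
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = top_idx[:, k] == e
                if mask.any():
                    out[mask] += weights[mask, k:k + 1] * expert(x[mask])
        return out


# Example: route a batch of flattened 32x32 RGB images (CIFAR-10-sized inputs).
if __name__ == "__main__":
    layer = MoELayer(in_dim=3 * 32 * 32, hidden_dim=256, out_dim=10)
    logits = layer(torch.randn(16, 3 * 32 * 32))
    print(logits.shape)  # torch.Size([16, 10])

Because each sample activates only its top-k experts, the per-input compute stays roughly constant as the number of experts grows, which is the property referred to above as efficient computation.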
Keywords: 
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.