K., Hemanth: MoE-TLM: A Neuro-Symbolic Routing Architecture for Domain-Specialized Micro-Models

This paper introduces MoE-TLM, a modular architecture for language intelligence based on domain-specialized micro-models coordinated through a neural routing system. In contrast to conventional large language models that rely on massive parameter scaling, the proposed approach distributes intelligence across multiple lightweight expert models, each trained for a specific domain. A central routing mechanism dynamically selects one or more experts per query using confidence-based decision logic.
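The confidence-based routing described above can be illustrated with a minimal sketch. The abstract does not specify the routing algorithm, so the following is an assumed implementation: a softmax over router logits, with experts selected when their confidence clears a threshold and a fallback to the single best expert otherwise. The function name, threshold, and `max_experts` cap are illustrative, not taken from the paper.

```python
import numpy as np

def route_query(router_logits, threshold=0.5, max_experts=2):
    """Hypothetical confidence-based router: pick experts whose softmax
    confidence exceeds `threshold`, capped at `max_experts`; fall back
    to the single highest-confidence expert if none qualify."""
    logits = np.asarray(router_logits, dtype=float)
    shifted = np.exp(logits - logits.max())   # numerically stable softmax
    conf = shifted / shifted.sum()
    ranked = np.argsort(conf)[::-1]           # experts by descending confidence
    chosen = [int(i) for i in ranked[:max_experts] if conf[i] >= threshold]
    if not chosen:                            # no expert is confident enough
        chosen = [int(np.argmax(conf))]
    return chosen, conf

# A query whose logits strongly favor expert 0 routes to that expert alone.
experts, confidences = route_query([2.0, 0.1, 0.1])
```

With near-uniform logits, no expert clears the threshold and the router degrades gracefully to the single most confident expert rather than activating none.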