Computer Science and Mathematics

Article
Computer Science and Mathematics
Geometry and Topology

Aymane Touat

Abstract: We study the local recovery of magnetic invariants in smooth n-dimensional manifolds equipped with general non-reversible Finsler metrics. We prove that the exterior derivative dβ is the unique second-order antisymmetric local invariant of the length functional, independently of higher-order Finsler perturbations. This generalizes previous 2-dimensional results to higher dimensions and establishes a rigorous, practically stable procedure for isolating magnetic invariants locally.

Article
Computer Science and Mathematics
Computational Mathematics

Zhazgul Ablakeeva

,

Burul Shambetova

Abstract: This extensive review provides a thorough examination of first-order ordinary differential equations (ODEs), covering fundamental theoretical concepts, diverse analytical solution techniques, stability analysis methods, numerical approximation algorithms, and interdisciplinary applications. The paper systematically explores classical models including exponential growth, logistic dynamics, and cooling laws, while extending to advanced topics such as bifurcation analysis, stochastic extensions, and modern computational approaches. Special attention is given to the interplay between analytical and numerical methods, with practical examples drawn from ecology, physics, engineering, and biomedical sciences. The work serves as both an educational resource for students and a reference for researchers and practitioners working with dynamical systems across scientific domains.
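
As a minimal illustration of the interplay between analytical and numerical methods the review emphasizes, the sketch below compares a forward-Euler approximation of the logistic equation against its closed-form solution; the parameters r, K, and y0 are illustrative, not taken from the paper.

```python
# Hypothetical sketch: forward-Euler approximation of the logistic equation
# dy/dt = r*y*(1 - y/K), compared against its closed-form solution.

import math

def logistic_exact(t, r=1.0, K=10.0, y0=0.5):
    """Closed-form solution y(t) = K / (1 + (K/y0 - 1) e^{-rt})."""
    return K / (1.0 + (K / y0 - 1.0) * math.exp(-r * t))

def logistic_euler(t_end, h=0.1, r=1.0, K=10.0, y0=0.5):
    """Forward Euler: y_{n+1} = y_n + h * r * y_n * (1 - y_n/K)."""
    y, t = y0, 0.0
    while t < t_end - 1e-12:
        y += h * r * y * (1.0 - y / K)
        t += h
    return y

if __name__ == "__main__":
    for h in (0.5, 0.1, 0.01):  # global error shrinks roughly linearly in h
        print(f"h={h}: euler={logistic_euler(5.0, h=h):.6f}  "
              f"exact={logistic_exact(5.0):.6f}")
```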

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Yingxin Ou

,

Sumeng Huang

,

Feiyang Wang

,

Kan Zhou

,

Yingyi Shu

Abstract:

Non-stationary time-series data poses significant challenges for anomaly detection systems due to evolving patterns and distribution shifts that render traditional static models ineffective. This paper presents a novel continual learning framework that integrates dynamic distribution monitoring mechanisms to enable adaptive anomaly detection in non-stationary environments. The proposed framework employs a dual-module architecture consisting of a distribution drift detector and an adaptive learning component. The distribution drift detector utilizes statistical hypothesis testing to identify temporal shifts in data distributions, while the adaptive learning module employs rehearsal-based continual learning strategies with dynamic memory management to maintain model performance across evolving patterns. We introduce a hybrid loss function that balances stability and plasticity, preventing catastrophic forgetting while enabling rapid adaptation to new distributions. Experimental results demonstrate an average F1-score improvement of 11.3% over the best-performing baseline, highlighting the robustness and adaptability of the proposed framework under non-stationary conditions while maintaining computational efficiency suitable for real-time applications.
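
The abstract does not give implementation details, but the two modules can be sketched in outline: a drift detector built on a standard two-sample Kolmogorov-Smirnov test, and a reservoir-sampled rehearsal buffer. The window sizes, alpha threshold, and buffer capacity below are assumptions, not the authors' settings.

```python
# Minimal sketch (not the authors' code) of the dual-module idea.

import random
from scipy.stats import ks_2samp

class DriftDetector:
    def __init__(self, ref_window, alpha=0.01):
        self.ref = list(ref_window)   # sample of the reference distribution
        self.alpha = alpha

    def drifted(self, recent_window):
        """Flag drift when the KS test rejects 'same distribution'."""
        stat, p = ks_2samp(self.ref, recent_window)
        return p < self.alpha

class RehearsalBuffer:
    """Reservoir-style memory so earlier regimes stay represented."""
    def __init__(self, capacity=512):
        self.capacity, self.items, self.seen = capacity, [], 0

    def add(self, x):
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append(x)
        else:                                  # reservoir sampling
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.items[j] = x
```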

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

T. Marques

,

J.B. Melo

,

A.J. Pontes

,

A. Gaspar-Cunha

Abstract:

In injection molding, advanced numerical modeling tools such as Moldex3D can significantly improve product development by optimizing part functionality, structural integrity, and material efficiency. However, the complex and nonlinear interdependencies between the many decision variables and objectives, across the various operational phases, compound the inherent complexity of injection molding processes. This complexity often exceeds the capacity of conventional optimization methods, necessitating more sophisticated analytical approaches. Consequently, this research evaluates the potential of integrating intelligent algorithms, specifically objective selection using Principal Component Analysis and Mutual Information/Clustering, metamodels using Artificial Neural Networks, and optimization using Multi-Objective Evolutionary Algorithms, to manage and solve complex, real-world injection molding problems effectively. Using surrogate modeling to reduce computational costs, the study systematically investigates multiple methodological approaches, algorithmic configurations, and parameter-tuning strategies to enhance the robustness and reliability of predictive and optimization outcomes. The results highlight the significant potential of data-mining methodologies, demonstrating their ability to capture and model complex relationships among variables accurately and to optimize conflicting objectives efficiently. Ultimately, the enhanced capabilities provided by these integrated data-mining techniques yield substantial improvements in mold design, process efficiency, product quality, and overall economic viability within the injection molding industry.
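
As a hedged illustration of the objective-selection step, the snippet below applies PCA to a synthetic matrix of objective values and counts how many components retain 95% of the variance; the data, threshold, and workflow are placeholders, not the paper's Moldex3D pipeline.

```python
# Illustrative only: reducing a set of correlated objectives with PCA.

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
F = rng.normal(size=(200, 5))          # 200 designs x 5 objective values
F[:, 3] = 0.9 * F[:, 0] + 0.1 * rng.normal(size=200)  # redundant objective

pca = PCA().fit(F)
explained = np.cumsum(pca.explained_variance_ratio_)
k = int(np.searchsorted(explained, 0.95) + 1)  # components for 95% variance
print(f"{k} principal components explain 95% of objective variance")
# Objectives loading weakly on the retained components are candidates to drop.
```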

Article
Computer Science and Mathematics
Computer Networks and Communications

Krishna Bajpai

Abstract: The evolution of high-performance computing (HPC) interconnects has produced specialized fabrics such as InfiniBand, Intel Omni-Path, and NVIDIA NVLink, each optimized for distinct workloads. However, the increasing convergence of HPC, AI/ML, quantum, and neuromorphic computing requires a unified communication substrate capable of supporting diverse requirements, including ultra-low latency, high bandwidth, collective operations, and adaptive routing. We present HyperFabric Interconnect (HFI), a novel design that combines the strengths of existing interconnects while addressing their scalability and workload-fragmentation limitations. Our evaluation on simulated clusters demonstrates HFI's ability to reduce job completion time (JCT) by up to 30%, improve tail-latency consistency by 45% under mixed loads, achieve 4× better jitter control in latency-sensitive applications, and sustain efficient scaling across heterogeneous workloads. Beyond simulation, we provide an analytical model and deployment roadmap that highlight HFI's role as a converged interconnect for the exascale and post-exascale era.
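
The paper's analytical model is not reproduced in the abstract; as a stand-in, the sketch below shows the kind of alpha-beta communication cost model commonly used to reason analytically about interconnect latency and collectives. All parameter values are placeholders, not HFI's published numbers.

```python
# Hedged sketch of a standard alpha-beta cost model for interconnect analysis.

def transfer_time(msg_bytes, alpha=1.0e-6, beta=1.0 / 25e9):
    """T = alpha + beta * m: startup latency plus per-byte bandwidth cost."""
    return alpha + beta * msg_bytes

def allreduce_ring(msg_bytes, p, alpha=1.0e-6, beta=1.0 / 25e9):
    """Ring all-reduce: 2(p-1) steps, each moving m/p bytes."""
    return 2 * (p - 1) * (alpha + beta * msg_bytes / p)

print(f"1 MiB point-to-point: {transfer_time(2**20) * 1e6:.1f} us")
print(f"1 MiB all-reduce over 64 nodes: {allreduce_ring(2**20, 64) * 1e6:.1f} us")
```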

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Towhidul Islam

,

Safa Asgar

,

Sajjad Mahmood

Abstract: Lung cancer remains one of the leading causes of cancer-related mortality worldwide, highlighting the importance of early detection for improving patient survival rates. However, current machine learning approaches for lung cancer prediction often depend on suboptimal model configurations, limited systematic ensemble comparisons, and insufficient interpretability. This study introduces a novel framework, called Lung Explainable Ensemble Optimizer (LungEEO), that integrates three methodological advances: (1) comprehensive hyperparameter optimization across 50 configurations of nine machine learning algorithms for base model selection, (2) a systematic comparison of Hybrid Majority Voting strategies, including unweighted hard voting, weighted hard voting, and soft voting with an ensemble stacking approach, and (3) a dual explainable AI (XAI) layer based on SHAP and LIME to provide parallel global and local explanations. Experiments conducted on two heterogeneous lung cancer datasets indicate that ensemble approaches consistently outperform individual models. Weighted hard voting achieved the best performance on Dataset 1 (Accuracy: 89.04%, F1-Score: 89.04%), whereas ensemble stacking produced superior outcomes on Dataset 2 (Accuracy: 87.95%, F1-Score: 87.95%). Following extensive hyperparameter tuning, Random Forest and Multi-Layer Perceptron performed consistently well as base learners on both datasets. In addition, integrating SHAP with LIME offers additional insights into model behavior, boosting the interpretability of ensemble predictions, and strengthening their potential clinical applicability. To the best of our knowledge, the combined use of these interpretability techniques within an ensemble framework has received limited attention in existing lung cancer prediction studies. Overall, the proposed LungEEO framework offers a promising balance between predictive performance and interpretability, supporting its potential use in clinical decision support.
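
A minimal sketch of the weighted hard voting strategy, using scikit-learn; the base models, weights, and data below are placeholders rather than LungEEO's tuned configurations.

```python
# Illustrative weighted hard voting over three base learners.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=600, n_features=15, random_state=0)

vote = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(random_state=0)),
        ("mlp", MLPClassifier(max_iter=1000, random_state=0)),
        ("lr", LogisticRegression(max_iter=1000)),
    ],
    voting="hard",
    weights=[2, 2, 1],   # e.g. validation-accuracy-derived weights
)
print("CV accuracy:", cross_val_score(vote, X, y, cv=5).mean())
```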

Article
Computer Science and Mathematics
Other

Felipe Oliveira Souto

Abstract: This work presents a series of interconnected mathematical constructions that take the zeros of the Riemann zeta function as primordial elements. Rather than seeking a conventional proof of the Riemann Hypothesis, we investigate: what kind of mathematical reality emerges when we postulate that these zeros form the spectrum of an operator within a specific geometric arena? Our constructions reveal a remarkable chain of coherence, linking geometry (minimal surfaces), topology (Möbius bands), statistics (GUE), and fundamental physical constants. Within the constructed framework, the critical line $\Re(s)=1/2$ appears as a necessary condition, GUE statistics as an intrinsic geometric property, and relations between the first four zeros encode the fine structure constant $\alpha^{-1} = 137.035999084\ldots$ to experimental precision [CODATA 2018]. We present these constructions not as final theorems, but as substantive insights from a perspective that treats the zeta function not merely as an object of analysis, but as a potential organizational principle of mathematical reality.

Article
Computer Science and Mathematics
Computer Networks and Communications

Galia Novakova Nedeltcheva

,

Denis Chikurtev

,

Eugenia Kovatcheva

Abstract: While smart campuses continue to evolve alongside technological advancements, existing data models often fail to comprehensively integrate the diverse array of Internet of Things (IoT) devices and platforms. This study presents a unified data model tailored to the operational requirements of campus decision-makers, facilitating seamless interconnection across heterogeneous systems. By integrating IoT, cloud computing, big data analytics, and artificial intelligence, the proposed model seeks to advance campus operations, sustainability, and educational outcomes by fostering cross-system harmonization and interoperability. The analysis demonstrates that a layered architecture, comprising data acquisition, processing and storage, analytics and decision support, application presentation, and security and privacy, constitutes the foundation of robust smart campus data models. The system is structured to collect, refine, process, and archive raw data for future reference. Analytics and decision support mechanisms generate actionable insights; application presentation delivers results to end users; and security and privacy measures safeguard information. The study further contends that artificial intelligence techniques, including predictive analytics (which forecasts outcomes using historical data), personalized learning (which customizes content to individual needs), and edge intelligence (which processes data at its source), are essential for advancing these models. These enhancements yield measurable benefits, including a 15% increase in student retention through personalized learning and a 20% reduction in energy consumption through predictive energy management [1]. Emerging technologies such as 5G networks, edge and fog computing, blockchain, and three-dimensional geographic information systems (3D GIS) are instrumental in enhancing campus intelligence. For example, the adoption of 5G has led to a 30% increase in data transmission speeds, thereby enabling real-time analytics and reliable connectivity (5G and IoT: How 5G is Transforming the Internet of Things, 2024). Building upon these technological advancements, innovative data models are shown to facilitate predictive energy management, resource optimization, and performance analytics within smart campuses. Nevertheless, ongoing challenges persist, including those related to system interoperability, scalability, and data governance. This study provides actionable design guidelines and offers a balanced evaluation of the achievements and challenges of smart campus implementations.
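
Purely as an illustration of the five-layer architecture described above, the sketch below expresses the layers as a minimal typed pipeline; the class and function names are hypothetical, not the paper's schema.

```python
# Illustrative sketch, not the paper's data model.

from dataclasses import dataclass

@dataclass
class Reading:
    sensor_id: str
    kind: str          # e.g. "energy", "occupancy"
    value: float
    timestamp: float

def acquire(raw: dict) -> Reading:                 # 1. data acquisition
    return Reading(raw["id"], raw["kind"], float(raw["value"]), raw["ts"])

def store(r: Reading, db: list) -> None:           # 2. processing and storage
    db.append(r)

def analyze(db: list, kind: str) -> float:         # 3. analytics / decision support
    vals = [r.value for r in db if r.kind == kind]
    return sum(vals) / len(vals) if vals else 0.0

def present(metric: float) -> str:                 # 4. application presentation
    return f"mean energy draw: {metric:.1f} kW"

# 5. security and privacy would wrap every boundary (authn, encryption);
# omitted here for brevity.
```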

Article
Computer Science and Mathematics
Computer Science

P. Selvaprasanth

Abstract: Food safety hazards, such as microbial contamination, chemical adulterants, and supply chain vulnerabilities, demand rapid, privacy-preserving detection systems capable of handling distributed data from global sensors and labs. This paper introduces a novel quantum-safe federated learning architecture that enables collaborative model training across decentralized food industry nodes without sharing raw data, leveraging lattice-based cryptographic primitives like Kyber and Dilithium to withstand future quantum attacks including Shor's and Grover's algorithms. CORS-secured RESTful APIs serve as the secure bridge, enforcing strict cross-origin policies with origin whitelisting, preflight validation, and quantum-resistant JWT signatures to relay inference results to React-based single-page applications (SPAs). The React SPA frontend employs virtual DOM optimizations, TanStack Query for reactive data fetching, and Recharts for interactive visualizations, achieving sub-200 ms end-to-end latency for hazard dashboards that display risk heatmaps, predictive alerts, and drill-down analytics on multimodal inputs like spectroscopic data and environmental telemetry. Implementation utilizes TensorFlow Federated augmented with Open Quantum Safe libraries, deployed on scalable microservices via Kubernetes for horizontal scaling. Experimental evaluation on synthetic USDA-inspired datasets across 50 simulated nodes yielded 96% AUC accuracy for multi-class hazard classification, with federated convergence in under 20 rounds using privacy-amplified FedAvg. The system outperformed centralized ML baselines by 40% in differential privacy metrics (epsilon < 1.0) and reduced cryptographic overhead to 15% of training time. Security analysis via formal verification tools like Tamarin confirmed resilience against man-in-the-middle, replay, and side-channel exploits. This framework provides a deployable blueprint for regulatory compliance and proactive risk mitigation in cyber-physical food systems, paving the way for edge integrations like smart IoT appliances.
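
For orientation, the snippet below sketches the plain FedAvg aggregation step that the privacy-amplified variant builds on, without the lattice-based cryptography or differential privacy noise the full system adds; the shapes and client sizes are arbitrary stand-ins.

```python
# Sketch of the FedAvg aggregation step (plain NumPy, no crypto or DP).

import numpy as np

def fedavg(client_weights, client_sizes):
    """Average client parameter vectors, weighted by local dataset size."""
    total = sum(client_sizes)
    agg = np.zeros_like(client_weights[0])
    for w, n in zip(client_weights, client_sizes):
        agg += (n / total) * w
    return agg

clients = [np.random.randn(10) for _ in range(5)]   # stand-in local models
sizes = [120, 80, 200, 60, 140]
global_model = fedavg(clients, sizes)
```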

Article
Computer Science and Mathematics
Robotics

Lucas Pereira

,

Martina Kovács

,

Ahmed El-Masry

,

Feidlimid Shyama

Abstract: Multimodal Large Vision-Language Models (LVLMs) have emerged as a central paradigm in contemporary artificial intelligence, enabling machines to jointly perceive, reason, and communicate across visual and linguistic modalities at unprecedented scale. By integrating advances in large language models with powerful visual representation learning, LVLMs offer a unifying framework that bridges perception, cognition, and interaction. This capability is particularly consequential for Human-Computer Interaction (HCI) and robotic applications, where effective intelligence must be grounded in sensory input, responsive to human intent, and robust in dynamic, real-world environments. This review provides a comprehensive and in-depth examination of LVLMs from the perspective of interactive and embodied systems. We begin by situating LVLMs within the broader evolution of multimodal learning, highlighting the theoretical foundations and mathematical formulations that underpin vision-language alignment, representation fusion, and autoregressive generation. We then analyze dominant architectural paradigms, including dual-encoder models, fusion-based designs, and unified token-based transformers, discussing their respective trade-offs in terms of scalability, grounding fidelity, computational efficiency, and suitability for interaction-driven and robotic contexts. Building on these foundations, the review surveys a wide range of applications in HCI and robotics. In HCI, LVLMs enable visually grounded conversational agents, intelligent user assistance, explainable interfaces, and novel forms of human-AI co-creation that lower barriers to interaction and expand accessibility. In robotics, they support language-guided manipulation, navigation, exploration, and human-robot interaction by linking high-level natural language instructions with perceptual understanding and physical action. Across both domains, LVLMs facilitate generalization, adaptability, and more natural communication, while also exposing new challenges related to reliability, safety, and user trust. We further provide a critical analysis of current limitations and open research problems, including hallucination and weak grounding, limited temporal and causal reasoning, high computational cost, lack of interpretability, dataset bias, and insufficient evaluation methodologies for long-term interaction and embodied performance. These challenges highlight the gap between impressive benchmark results and the demands of real-world deployment. Finally, we outline key future research directions, emphasizing stronger grounding mechanisms, temporal and memory-aware modeling, efficiency and sustainability, human-centered and ethical design, and interdisciplinary evaluation and governance. By synthesizing insights across machine learning, HCI, and robotics, this review frames LVLMs not merely as technical artifacts but as interactive agents embedded in social and physical contexts. Our goal is to provide researchers and practitioners with a holistic understanding of the state of the field, clarify the opportunities and risks associated with deploying LVLMs in interactive and embodied systems, and chart a path toward multimodal AI technologies that are powerful, trustworthy, and aligned with human values.
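
As a concrete anchor for the dual-encoder paradigm discussed above, the sketch below computes a symmetric InfoNCE-style contrastive objective over a batch of paired embeddings (CLIP-style alignment); the random arrays stand in for image and text encoder outputs.

```python
# Numerical sketch of symmetric InfoNCE for vision-language alignment.

import numpy as np
from scipy.special import logsumexp

def info_nce(img, txt, tau=0.07):
    """Symmetric contrastive loss over an L2-normalized batch of pairs."""
    img = img / np.linalg.norm(img, axis=1, keepdims=True)
    txt = txt / np.linalg.norm(txt, axis=1, keepdims=True)
    logits = img @ txt.T / tau                  # pairwise similarities
    idx = np.arange(len(img))                   # matched pairs on the diagonal
    log_p_i = logits - logsumexp(logits, axis=1, keepdims=True)
    log_p_t = logits.T - logsumexp(logits.T, axis=1, keepdims=True)
    return -(log_p_i[idx, idx].mean() + log_p_t[idx, idx].mean()) / 2

batch = np.random.randn(8, 64), np.random.randn(8, 64)
print("loss on random (unaligned) embeddings:", info_nce(*batch))
```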

Article
Computer Science and Mathematics
Algebra and Number Theory

Haoyuan Wang

Abstract: We extend classical canonical number systems to an $n$-dimensional setting and investigate the scope of representation: which rings and modules admit digit expansions, with illustrative constructions and stability under products.
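
For readers new to canonical number systems, a classical one-dimensional instance helps fix ideas: every integer has a unique expansion in base -2 with digits {0, 1}. The sketch below computes it; the paper's n-dimensional theory generalizes this picture to rings and modules.

```python
# A worked 1-dimensional canonical number system: digit expansion in base -2.

def base_minus_two(n: int) -> str:
    """Digits of n in base -2 (classical CNS example)."""
    if n == 0:
        return "0"
    digits = []
    while n != 0:
        r = n % 2            # remainder in {0, 1}
        n = -(n - r) // 2    # divide by the base -2
        digits.append(str(r))
    return "".join(reversed(digits))

assert base_minus_two(-3) == "1101"   # 1*(-8) + 1*4 + 0*(-2) + 1*1 = -3
```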

Review
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

George Obaido

,

Ibomoiye Domor Mienye

,

Kehinde Aruleba

,

Chidozie Williams Chukwu

,

Ebenezer Esenogho

,

Cameron Modisane

Abstract: Medical artificial intelligence (AI) systems depend heavily on high-quality data representations to enable accurate prediction, diagnosis, and clinical decision-making. Yet, the availability of large, well-annotated medical datasets is often limited by cost, privacy concerns, and the need for expert labeling, motivating increased interest in self-supervised representation learning approaches. Among these, contrastive learning has emerged as one of the most influential paradigms, driving significant progress in representation learning across computer vision and natural language processing. This paper presents a comprehensive review of contrastive learning in medical AI, highlighting its theoretical foundations, methodological advances, and practical applications in medical imaging, electronic health records (EHRs), physiological signal analysis, and genomics. Furthermore, the study identifies common challenges such as pair construction, augmentation sensitivity, and evaluation inconsistencies, while discussing emerging trends including multimodal alignment, federated learning, and privacy-preserving frameworks. Through a synthesis of current developments and open research directions, this paper offers insights that advance data-efficient, reliable, and generalizable medical AI systems.

Article
Computer Science and Mathematics
Applied Mathematics

Malika Ashirbekova

,

Burul Shambetova

Abstract: Public transport reliability is strongly influenced by the regularity of vehicle headways, defined as the time intervals between consecutive vehicles serving the same route. Irregular headways increase passenger waiting times, cause vehicle bunching, and reduce overall system efficiency. This paper presents a graph-based approach to the analysis and optimization of public transport headways, using the city of Bishkek, Kyrgyzstan, as a case study. The public transport network is modeled as a weighted graph, where stops are represented as vertices and route segments as edges. Headways are incorporated as temporal attributes associated with routes and vehicle movements. An optimization objective is formulated to minimize headway variability across selected routes. Using simulated operational data, the proposed approach demonstrates that graph-based modeling provides a flexible and effective framework for analyzing headway irregularities and evaluating optimization strategies. The results highlight the potential of graph-based methods to support planning and operational decision-making in urban public transport systems.
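
A minimal sketch of the modeling setup, assuming networkx for the weighted graph and plain variance as the variability objective; the stop names and headway values are illustrative, not the Bishkek case-study data.

```python
# Illustrative route graph with headways as temporal attributes.

import statistics
import networkx as nx

G = nx.DiGraph()
G.add_edge("Osh Bazaar", "Philharmonia", travel_min=7)
G.add_edge("Philharmonia", "Ala-Too Square", travel_min=5)

# Observed headways (minutes) on the route; a scheduler would adjust
# departures or holding times to shrink this spread toward the plan.
headways = [6, 14, 4, 16, 10]
objective = statistics.pvariance(headways)   # minimize Var(h) over routes
print(f"mean={statistics.mean(headways)} min, variance={objective}")
```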

Article
Computer Science and Mathematics
Data Structures, Algorithms and Complexity

Tolga Topal

Abstract: Shannon entropy and Kolmogorov complexity describe complementary facets of information. We revisit Q2 from 27 Open Problems in Kolmogorov Complexity: whether all linear information inequalities, including non-Shannon-type ones, admit $\mathcal{O}(1)$-precision analogues for prefix-free Kolmogorov complexity. We answer in the affirmative via two independent arguments. First, a contradiction proof leverages the uncomputability of $K$ to show that genuine algorithmic dependencies underlying non-Shannon-type constraints cannot incur length-dependent overheads. Second, a coding-theoretic construction treats the copy lemma as a bounded-overhead coding mechanism and couples prefix-free coding (Kraft's inequality) with typicality (Shannon-McMillan-Breiman) to establish $\mathcal{O}(1)$ precision; we illustrate the method on the Zhang-Yeung (ZY98) inequality and extend to all known non-Shannon-type inequalities derived through a finite number of copy operations. These results clarify the structural bridge between Shannon-type linear inequalities and their Kolmogorov counterparts, and formalize artificial independence as the algorithmic analogue of copying in entropy proofs. Collectively, they indicate that the apparent discrepancy between statistical and algorithmic information manifests only as constant-order effects under prefix complexity, thereby resolving a fundamental question about the relationship between statistical and algorithmic information structure.
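
For reference, the Zhang-Yeung (ZY98) inequality mentioned above, in its standard entropy form, together with the $\mathcal{O}(1)$-precision prefix-complexity analogue the paper argues for (written here with algorithmic mutual information; the constant-order precision is the paper's claim, not derived here):

```latex
% ZY98 in entropy form, and the claimed prefix-complexity analogue
% (I(x:y) denotes algorithmic mutual information; +O(1) is the precision
% at stake in Q2).
\begin{align*}
\text{ZY98:}\quad & 2I(C;D) \le I(A;B) + I(A;C,D) + 3I(C;D\mid A) + I(C;D\mid B) \\
\text{analogue:}\quad & 2I(c:d) \le I(a:b) + I(a:cd) + 3I(c:d\mid a) + I(c:d\mid b) + \mathcal{O}(1)
\end{align*}
```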

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Roman Parovik

Abstract: The paper proposes single-layer neural network algorithms for solving second-order ordinary differential equations, built on the functional-link principle. Under this principle, the hidden layer of the neural network is replaced by a functional expansion block that enriches input patterns using orthogonal Chebyshev, Legendre, and Laguerre polynomials. The polynomial neural network algorithms were implemented in Python in the PyCharm environment and tested by solving initial and boundary value problems for the nonlinear Lane-Emden equation. The results are compared with the exact solutions of the problems under consideration, as well as with solutions obtained using a multilayer perceptron. It is shown that polynomial neural networks can work more efficiently than multilayer neural networks. Overfitting of polynomial neural networks and scenarios for overcoming it are also considered.
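
A sketch of the functional-link idea under stated assumptions: the hidden layer is replaced by a Chebyshev feature expansion, and a trial solution enforces the initial condition by construction. The degree, trial form, and training loop are illustrative, not the paper's exact setup.

```python
# Sketch (assumed details): Chebyshev functional-link expansion with a
# trial solution y(x) = y0 + x * N(x), so y(0) = y0 holds automatically.

import numpy as np

def chebyshev_features(x, degree=6):
    """T_0..T_degree via the recurrence T_{k+1} = 2x T_k - T_{k-1}."""
    T = [np.ones_like(x), x]
    for _ in range(2, degree + 1):
        T.append(2 * x * T[-1] - T[-2])
    return np.stack(T, axis=1)           # shape (n_points, degree+1)

x = np.linspace(-1, 1, 50)
Phi = chebyshev_features(x)
w = np.zeros(Phi.shape[1])               # the single trainable layer
y_trial = 1.0 + x * (Phi @ w)            # satisfies y(0) = 1 by construction
# Training would minimize the squared ODE residual of y_trial w.r.t. w.
```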

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Ade Kurniawan

,

Amril Mutoi Siregar

,

Mochammad Ariyanto

,

Muhammad Khaerul Naim Mursalim

Abstract: Deep learning-based human activity recognition (HAR) systems, despite achieving high accuracy, remain vulnerable to adversarial attacks that pose severe threats to safety-critical deployments. This paper presents a comprehensive framework for sensor-specific targeted adversarial attacks on multimodal HAR systems. We propose a hybrid optimization strategy combining momentum-based Projected Gradient Descent with adaptive Carlini-Wagner optimization, incorporating dynamic early stopping and intelligent fallback mechanisms. Our approach constrains perturbations to individual sensor modalities, enabling systematic vulnerability assessment across heterogeneous configurations. Through extensive evaluation on the MHealth dataset with 96 sensor-target combinations and more than 38,000 adversarial examples, our hybrid strategy achieves a 96.46% targeted attack success rate, a 45% improvement over baseline C&W and 8% over enhanced PGD, while maintaining a 49× computational efficiency advantage. Analysis reveals that accelerometers exhibit the highest vulnerability (99.83%), followed by gyroscopes (96.67-99.00%) and magnetometers (91.00-95.50%). High-motion activities prove universally vulnerable (100%), while sedentary activities show sensor-dependent robustness (66-100%). Statistical validation confirms a strong correlation between model confidence and vulnerability (r = 0.71, p < 0.01). Limited cross-sensor transferability (28-42%) suggests promising defense directions through sensor redundancy and ensemble methods. Our findings underscore the urgent need for adversarially robust HAR design in safety-critical applications.
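
The hybrid attack itself is the paper's contribution, but its momentum-PGD component with a per-sensor constraint can be sketched abstractly as below; grad_fn, eps, and the mask layout are assumptions, and the adaptive C&W fallback stage is omitted.

```python
# Conceptual sketch of a sensor-constrained momentum-PGD step (targeted,
# L-inf ball); not the authors' exact settings.

import numpy as np

def masked_momentum_pgd(x, grad_fn, sensor_mask, eps=0.1, alpha=0.01,
                        mu=0.9, steps=40):
    """Perturb only the channels of one sensor modality."""
    x_adv, g = x.copy(), np.zeros_like(x)
    for _ in range(steps):
        grad = grad_fn(x_adv)                        # d(targeted loss)/dx
        g = mu * g + grad / (np.abs(grad).sum() + 1e-12)  # momentum term
        x_adv -= alpha * np.sign(g) * sensor_mask    # step toward the target
        delta = np.clip(x_adv - x, -eps, eps)        # project into eps-ball
        x_adv = x + delta * sensor_mask              # other sensors untouched
    return x_adv
```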

Review
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Erçin Dinçer

,

Zeynep Hilal Kilimci

Abstract: Large Language Models (LLMs) have recently gained prominence for deployment on edge devices, owing to their potential to support privacy-preserving, low-latency, and offline inference. Nevertheless, their considerable computational and memory requirements present fundamental challenges in both real-time and offline scenarios. This systematic review synthesizes evidence from 49 studies, of which 40 were analyzed in depth, to investigate techniques, challenges, and applications of LLM deployment on edge devices. The studies were identified through a structured search and screening process, and data were extracted regarding model types, hardware platforms, optimization strategies, and performance outcomes. Findings indicate that hardware acceleration, model compression, and hybrid edge–cloud strategies can yield latency reductions of up to 972×, memory savings of up to 130×, and energy efficiency improvements exceeding 1600×, while largely preserving accuracy. Real-time deployments are predominantly applied in robotics, healthcare monitoring, and autonomous driving, whereas offline deployments are tailored to privacy-sensitive or batch-oriented contexts. The review also identifies persistent research gaps, including the absence of standardized benchmarks and the limited generalizability of results to real-world environments. It concludes by outlining future research directions, with particular emphasis on hardware–software co-design, federated learning, and secure task offloading.
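
As one concrete instance of the compression family the review covers, the sketch below shows symmetric per-tensor int8 weight quantization; it is illustrative and not tied to any specific surveyed system.

```python
# Minimal symmetric per-tensor int8 quantization sketch.

import numpy as np

def quantize_int8(w):
    scale = np.abs(w).max() / 127.0        # map [-max, max] onto [-127, 127]
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(4096).astype(np.float32)
q, s = quantize_int8(w)
err = np.abs(w - dequantize(q, s)).mean()
print(f"4x smaller weights, mean abs reconstruction error {err:.5f}")
```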

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Christopher Wanjohi

,

Tanyaradzwa Chitsuro

,

Augustine Kamara

,

Moses Kiprono

Abstract: This paper presents GenAI Financial Reporter, a multimodal artificial intelligence system designed to automate the generation of comprehensive financial analysis reports. The system leverages large language models (GPT-4o), retrieval-augmented generation (RAG) with a ChromaDB vector database, and multi-agent architectures to transform raw financial data into professional reports enriched with text summaries, interactive visualizations, and audio narration. By integrating real-time market data from Yahoo Finance and SEC EDGAR filings, the system computes 27 key performance indicators (KPIs) from structured financial data stored in PostgreSQL and generates contextually grounded analysis using RAG over SEC filing text. Our evaluation demonstrates vector similarity search completing in 1.3 ms and full RAG queries averaging 15 seconds. An ablation study shows that RAG-enabled queries cite an average of 4 SEC filing sources per response, compared to zero for baseline approaches, improving answer provenance. The system supports multi-company comparisons, historical trend analysis, and exports to multiple formats including PDF, DOCX, HTML, and MP3 audio. Deployed on AWS EC2 with Docker containerization, the system achieves production-ready reliability. We note that this is a systems paper emphasizing practical deployment; rigorous evaluation against financial benchmarks remains future work.
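
A hedged sketch of the RAG retrieval step over SEC filing text using the ChromaDB client API; the collection name, document, and metadata are placeholders, and the GPT-4o generation and report-assembly layers are omitted.

```python
# Illustrative ChromaDB retrieval for RAG over filing text.

import chromadb

client = chromadb.Client()                       # in-memory instance
filings = client.create_collection(name="sec_filings")
filings.add(
    documents=["Item 7. Management's discussion of liquidity and capital..."],
    metadatas=[{"ticker": "ACME", "form": "10-K"}],
    ids=["acme-10k-2024-item7"],
)
hits = filings.query(query_texts=["What drove the change in liquidity?"],
                     n_results=4)
for doc in hits["documents"][0]:                 # grounded context for the LLM
    print(doc[:80])
```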

Technical Note
Computer Science and Mathematics
Software

Rehnumah Taslim Munmun

Abstract: Aquaculture is a major contributor to Bangladesh’s economy, but farmers still struggle to maintain proper water quality because manual testing is slow, inaccurate, and difficult to manage in rural areas. This project introduces a low-cost, real-time monitoring system using an ESP32-32 N4 with temperature, pH, and TS300B turbidity sensors. The system collects water quality data and sends it to a mobile device, allowing farmers to track pond conditions remotely and receive alerts when values cross safe limits. Field tests show that the system provides reliable readings and helps reduce fish mortality by enabling quick action. This approach offers an affordable and practical solution for small-scale farmers, supporting better farm management and promoting wider technological adoption in the aquaculture sector.
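
The firmware is not described in enough detail to reproduce, but the core threshold-alert logic can be sketched in MicroPython; the pin number, calibration curve, and safe limit below are placeholders, and the sensor drivers and mobile link are omitted.

```python
# Hedged MicroPython-style sketch of the turbidity alert path (ESP32).

from machine import ADC, Pin

turbidity = ADC(Pin(34))          # TS300B analog output (pin assumed)
turbidity.atten(ADC.ATTN_11DB)    # full 0-3.3 V input range

SAFE_TURBIDITY_MAX = 50.0         # NTU, illustrative threshold

def read_turbidity_ntu():
    raw = turbidity.read()                    # 0..4095
    volts = raw * 3.3 / 4095
    return max(0.0, (3.0 - volts) * 100.0)    # placeholder calibration curve

ntu = read_turbidity_ntu()
if ntu > SAFE_TURBIDITY_MAX:
    print("ALERT: turbidity", ntu, "NTU exceeds safe limit")  # push to app
```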

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Yingyi Shu

,

Kan Zhou

,

Yingxin Ou

,

Ruobing Yan

,

Sumeng Huang

Abstract: Time-series anomaly detection faces significant challenges when dealing with imbalanced data distributions, distribution shifts, and heterogeneous feature types. Traditional supervised methods struggle due to limited labeled anomaly samples, while unsupervised approaches often produce high false positive rates. We propose a novel self-supervised learning framework that integrates contrastive representation learning with adaptive distribution monitoring and explainable AI techniques. Our framework employs tailored augmentation strategies for time-series data, learns robust representations through contrastive objectives, and utilizes SHAP values to provide interpretable anomaly explanations across heterogeneous features. Experimental results on multiple benchmark datasets demonstrate that our approach achieves an F1-score of 0.823, outperforming state-of-the-art methods by 8.9% while maintaining interpretability and robustness to distribution shifts. The framework effectively handles imbalance ratios up to 1:100 and provides actionable insights for real-world deployment.
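
As an illustration of the tailored augmentation strategies mentioned above, the sketch below generates two views of a multivariate window via jitter and scaling, two common time-series augmentations; the paper's exact augmentation set and parameters are assumptions here.

```python
# Sketch of time-series augmentations for contrastive view generation.

import numpy as np

rng = np.random.default_rng(0)

def jitter(x, sigma=0.03):
    """Additive Gaussian noise, preserving overall shape."""
    return x + rng.normal(0.0, sigma, size=x.shape)

def scale(x, sigma=0.1):
    """Random per-channel amplitude scaling."""
    factors = rng.normal(1.0, sigma, size=(1, x.shape[1]))
    return x * factors

window = rng.normal(size=(128, 3))      # one multivariate window
view_a, view_b = jitter(window), scale(window)
# A contrastive objective then pulls (view_a, view_b) together in embedding
# space while pushing apart views of other windows.
```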
