Computer Science and Mathematics


Article
Computer Science and Mathematics
Analysis

Branko Sarić

Abstract: On the basis of the isomorphic algebraic structures of the field of complex numbers ℂ and the 2-dimensional Euclidean field of real vectors V₂, in terms of identical geometric products of elements, this paper presents integral identities for scalar and vector fields in V₂ that are vector analogues of the well-known integral identities of complex analysis. In particular, Theorem 1, a generalized fundamental theorem of integral calculus in the field V₂, is the vector analogue of the Cauchy theorem of complex analysis. Special attention is therefore paid to the vector analogue of Cauchy's calculus of residues in the field V₂. Finally, at the very end of the paper, the algebraic structure of the 3D field of vectors V₃ is presented, together with the corresponding fundamental integral identities.

Short Note
Computer Science and Mathematics
Computer Vision and Graphics

Brennan Sloane, Landon Vireo, Keaton Farrow

Abstract: High-fidelity telepresence requires the reconstruction of photorealistic 3D avatars in real-time to facilitate immersive interaction. Current solutions face a dichotomy: they are either computationally expensive multi-view systems (e.g., Codec Avatars) or lightweight mesh-based approximations that suffer from the "uncanny valley" effect due to a lack of high-frequency detail. In this paper, we propose Mono-Splat, a novel framework for reconstructing high-fidelity, animatable human avatars from a single monocular webcam video stream. Our method leverages 3D Gaussian Splatting (3DGS) combined with a lightweight deformation field driven by standard 2D facial landmarks. Unlike Neural Radiance Fields (NeRFs), which typically suffer from slow inference speeds due to volumetric ray-marching, our explicit Gaussian representation enables rendering at >45 FPS on consumer hardware. We further introduce a landmark-guided initialization strategy to mitigate the depth ambiguity inherent in monocular footage. Extensive experiments demonstrate that our approach outperforms existing NeRF-based and mesh-based methods in both rendering quality (PSNR/SSIM) and inference speed, presenting a viable, accessible pathway for next-generation VR telepresence.

Short Note
Computer Science and Mathematics
Computer Vision and Graphics

Landon Vireo, Brennan Sloane, Arden Piercefield, Greer Holloway, Keaton Farrow

Abstract: Diminished Reality (DR)—the ability to visually remove real-world objects from a live Augmented Reality (AR) feed—is essential for reducing cognitive load and decluttering workspaces. However, existing techniques face a critical challenge: removing an object creates a visual void ("hole") that must be filled with a plausible background. Traditional 2D inpainting methods lack temporal consistency, causing the background to flicker or slide as the user moves. In this paper, we propose Clean-Splat, a novel framework for real-time, multi-view consistent object removal. We leverage 3D Gaussian Splatting (3DGS) for scene representation and integrate a View-Consistent Diffusion Prior to hallucinate occluded background geometry and texture. Unlike previous NeRF-based inpainting which is prohibitively slow, our method updates the 3D scene representation in near real-time, enabling rendering at >30 FPS on consumer hardware. Extensive experiments on real-world cluttered scenes demonstrate that Clean-Splat achieves state-of-the-art perceptual quality (LPIPS) and temporal stability compared to existing video inpainting approaches.

Article
Computer Science and Mathematics
Software

Michael Dosis, Antonios Pliatsios

Abstract: This paper presents Sem4EDA, an ontology-driven and rule-based framework for automated fault diagnosis and energy-aware optimization in Electronic Design Automation (EDA) and Internet of Things (IoT) environments. The escalating complexity of modern hardware systems, particularly within IoT and embedded domains, presents formidable challenges for traditional EDA methodologies. While EDA tools excel at design and simulation, they often operate as siloed applications, lacking the semantic context necessary for intelligent fault diagnosis and system-level optimization. Sem4EDA addresses this gap by providing a comprehensive ontological framework developed in OWL 2, creating a unified, machine-interpretable model of hardware components, EDA design processes, fault modalities, and IoT operational contexts. We present a rule-based reasoning system implemented through SPARQL queries, which operates atop this knowledge base to automate the detection of complex faults such as timing violations, power inefficiencies, and thermal issues. A detailed case study, conducted via a large-scale trace-driven co-simulation of a smart city environment, demonstrates the framework’s practical efficacy: by analyzing simulated temperature sensor telemetry and Field-Programmable Gate Array (FPGA) configurations, Sem4EDA identified specific energy inefficiencies and overheating risks, leading to actionable optimization strategies that resulted in a 23.7% reduction in power consumption and 15.6% decrease in operating temperature for the modeled sensor cluster. This work establishes a foundational step towards more autonomous, resilient, and semantically-aware hardware design and management systems.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Tekin Ahmet Serel, Esin Merve Koç, Oğuz Uğur Aydın, Eda Uysal Aydın, Furkan Umut Kılıç

Abstract: Placental abruption is the detachment of the placenta from the implantation site before delivery, which can develop into a life-threatening clinical emergency. The multifactorial nature of the disorder, together with the absence of laboratory tests or procedures that can diagnose placental abruption, makes it difficult to predict. Artificial intelligence (AI) and machine learning (ML) have the potential to enhance clinical decision-making and enable precise assessments. This study aimed to develop 15 predictive ML models for placental abruption, highlighting input characteristics, performance metrics, and validation. The medical records of 564 patients treated between 2021 and 2025 were analyzed to develop predictive models for placental abruption using AI. Findings were analyzed with Python software and the PyCaret library. The model integrated 5 variables (features) for prediction. Among the 15 machine learning algorithms, logistic regression was chosen as the best model. The performance metrics were as follows: accuracy of 0.85, AUC of 0.91, recall of 0.85, precision of 0.85, and F1 score of 0.85. In the ranking based on importance in the classification model, gestational age at delivery had the highest importance. Twenty-eight unseen cases were used as an additional validation step; the model achieved high accuracy on this set, with 21 cases correctly predicted. The 15 ML models presented in our study showed significant accuracy in predicting placental abruption, but they require further development before they can be applied in a clinical setting.
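The reported external-validation figures can be reproduced from first principles. The sketch below is a minimal stand-in using only the Python standard library; the label split on the 28-case hold-out is hypothetical (the paper does not give per-class counts), chosen only so that 21 of 28 predictions come out correct:

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 for binary labels (1 = abruption)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    acc = (tp + tn) / len(y_true)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return acc, prec, rec, f1

# Hypothetical hold-out mirroring the reported 28-case validation:
y_true = [1] * 14 + [0] * 14
y_pred = [1] * 11 + [0] * 3 + [0] * 10 + [1] * 4   # 21 of 28 correct
acc, prec, rec, f1 = binary_metrics(y_true, y_pred)
print(round(acc, 3))  # 0.75
```

This makes the hold-out result concrete: 21 correct out of 28 corresponds to 75% accuracy, noticeably below the 0.85 cross-validated accuracy, which is consistent with the authors' caution about clinical deployment.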

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Kenji Yoshitsugu, Kazumasa Kishimoto, Tadamasa Takemura

Abstract: Deep Learning (DL) has undergone widespread adoption for medical image analysis and diagnosis. Numerous studies have explored mammographic image analysis for breast cancer screening. In this study, we assessed the hypothesis that stratifying mammography images based on the presence or absence of a corresponding region of interest (ROI) improves classification accuracy for both normal–abnormal and benign–malignant classifications. Our methodology involves independently training models and performing predictions on each subgroup, with subsequent integration of the results. We used several DL models, including ResNet, EfficientNet, SwinTransformer, ConvNeXt, and MobileNet. For experimentation, we used the publicly available VinDr, CDD-CESM, and DMID datasets. Our comparison with prediction results obtained without ROI-based stratification demonstrated that the utility of considering ROI presence to enhance diagnostic accuracy in mammography increases with data volume. These findings support the usefulness of our stratification approach, particularly as dataset size grows.
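The stratify-then-integrate pipeline can be sketched as follows. All names here (`has_roi`, the toy subgroup models) are illustrative assumptions, not the paper's code; the point is only the routing of each image to the model trained on its ROI subgroup:

```python
# Sketch of ROI-based stratification: train one model per subgroup
# (images with an ROI vs. without), then merge subgroup predictions.
def stratified_predict(images, has_roi, model_with_roi, model_without_roi):
    """Route each image to the model trained on its ROI subgroup."""
    preds = {}
    for img_id in images:
        model = model_with_roi if has_roi(img_id) else model_without_roi
        preds[img_id] = model(img_id)
    return preds

# Toy stand-ins for the two subgroup classifiers:
with_roi = lambda i: "abnormal"
without_roi = lambda i: "normal"
out = stratified_predict(["a", "b"], lambda i: i == "a", with_roi, without_roi)
print(out)  # {'a': 'abnormal', 'b': 'normal'}
```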

Article
Computer Science and Mathematics
Algebra and Number Theory

Rafik Zeraoulia, Sobhan Sobhan Allah

Abstract: Let 1 < a1 < a2 < · · · be integers with \( \sum_{k=1}^\infty a_k^{-1}<\infty \), and set \( F(s)=1+\sum_{k=1}^\infty a_k^{-s}, \qquad \Re s>1. \) A question of Erdős and Ingham, recorded as Erdős Problem #967 in a compilation by T. F. Bloom (accessed 2025-12-01), asks whether one always has \( F(1+it)\neq 0 \) for all real t. This paper does not resolve the problem; instead, it develops a modern dynamical-systems framework for its study. Using the Bohr transform, we realise \( F \) as a Hardy function on a compact abelian Dirichlet group and interpret \( F(1+it) \) as an observable along a Kronecker flow. Within this setting we establish a quantitative reduction of the nonvanishing question to small-ball estimates for the Bohr lift, formulated as a precise conjecture, and we obtain partial results for finite Dirichlet polynomials under Diophantine conditions on the frequency set. The approach combines skew-product cocycles, ergodic and large-deviation ideas, and entropy-type control of recurrence to small neighbourhoods of \( -1 \), aiming at new nonvanishing criteria on the line \( \Re s=1 \).

Article
Computer Science and Mathematics
Algebra and Number Theory

Huan Xiao

Abstract: The Bateman-Horn conjecture is a conjecture on prime values in polynomials. We prove it by Golomb's method.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Chong Zhang, Chihui Shao, Junjie Jiang, Yinan Ni, Xiaoxuan Sun

Abstract: To address the practical challenges of diverse anomaly patterns, strongly coupled dependencies, and high labeling costs in large-scale complex infrastructures, this paper presents an unsupervised anomaly detection method that integrates graph neural networks with Transformer models. The approach learns normal system behavior and identifies deviations without relying on anomaly labels. Infrastructure components are abstracted as nodes in a dependency graph, where nodes are characterized by multiple source observability signals. A graph encoder aggregates neighborhood information to produce structure-enhanced node representations. Self-attention mechanisms are introduced along the temporal dimension to capture long-range dynamic dependencies. This design enables joint modeling of structural relations and temporal evolution. A reconstruction-based training strategy is adopted to constrain the learning of normal patterns. Reconstruction error is used to derive anomaly scores for detection. To ensure reproducibility and ease of deployment, complete specifications of data organization, training procedures, and key hyperparameter settings are provided. Comparative experiments on public benchmarks demonstrate overall advantages across multiple evaluation metrics and confirm the effectiveness of the proposed framework in representing anomaly propagation and temporal drift characteristics in complex systems.
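At inference time, reconstruction-based scoring of this kind reduces to comparing observed and reconstructed signals. A minimal standard-library sketch (the data and threshold are illustrative, not the paper's; the reconstruction would come from the trained encoder-decoder):

```python
import math

def anomaly_scores(observed, reconstructed):
    """Per-node anomaly score = reconstruction RMSE over features.
    A model trained only on normal behavior reconstructs anomalous
    nodes poorly, so larger scores flag likely anomalies."""
    scores = []
    for obs, rec in zip(observed, reconstructed):
        se = sum((o - r) ** 2 for o, r in zip(obs, rec))
        scores.append(math.sqrt(se / len(obs)))
    return scores

observed = [[1.0, 2.0], [1.1, 2.1]]
recon    = [[1.0, 2.0], [0.1, 0.2]]   # second node reconstructed badly
scores = anomaly_scores(observed, recon)
flagged = [i for i, s in enumerate(scores) if s > 0.5]
print(flagged)  # [1]
```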

Technical Note
Computer Science and Mathematics
Computer Science

Daisuke Sugisawa

Abstract: In the modern microservice environment, library dependencies for inter-system communication have become bloated, and conflicts and complications during build and operation have become problems. In particular, in the conventional communication architecture that depends on the MySQL database, the multi-layer dependencies included in libmysqlclient restrict the flexibility of system design. In this study, a replication-protocol-compatible patch was applied to the lightweight MySQL client library Trilogy, and a loosely coupled, low-footprint IPC library connecting the control plane and the data plane was implemented. The proposed method eliminates dependencies on the internal static library group of MySQL Server, while enabling binary log events to be processed directly at the application layer. Stable operation has been achieved for more than one year in a commercial system environment, and its effectiveness has been verified through long-term operation.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Juan E Cedrún-Sánchez, Ricardo Bernárdez-Vilaboa, Laura Sánchez-Alamillos, Marina Medina-Galdeano, Carla Otero-Curras, F. Javier Povedano-Montero

Abstract: Background: Automated visual field testing is fundamental in ophthalmology, but differences in stimulus scaling and luminance between devices hinder direct comparison of sensitivity values. Virtual reality (VR)–based perimetry has emerged as a portable alternative, yet its relationship with conventional perimetry requires clarification. Methods: This prospective cross-sectional study included 60 healthy participants stratified into younger (< 50 years) and older (≥ 50 years) groups. Differential light sensitivity was assessed in the right eye using Humphrey Automated Perimetry (HFA 30-2) and a VR-based perimeter (Dicopt-Pro) in randomized order. Pointwise sensitivity values were analyzed using linear regression and Bland–Altman analysis, and sensitivity profiles were examined as a function of visual field eccentricity. Results: A strong linear relationship was observed between HFA and Dicopt-Pro sensitivity values in both age groups (R ≥ 0.96). A systematic and approximately constant inter-device offset was identified, with mean differences of 15.7 ± 0.4 dB in younger subjects and 13.7 ± 0.5 dB in older subjects. Bland–Altman analysis showed consistent bias without proportional error. Dicopt-Pro sensitivity profiles demonstrated an eccentricity-dependent decline comparable to HFA while preserving age-related differences. Conclusions: VR-based perimetry using Dicopt-Pro shows sensitivity patterns closely aligned with conventional Humphrey perimetry when a systematic, age-specific inter-device offset is considered.

Article
Computer Science and Mathematics
Security Systems

Arjun Mehta, Rohan Srinivasan, Neha Kapoor

Abstract: We integrate static taint analysis with dynamic fuzzing to target high-impact kernel code paths. A pruning mechanism removes irrelevant taint propagation, while symbolic constraints are applied only to tainted regions to control overhead. Evaluated on 18 kernel subsystems, the hybrid fuzzer achieves 44% more taint-relevant path hits, identifying 13 bugs, including buffer overflows and pointer dereferences. Symbolic overhead remains limited (≤18%) through selective propagation. This hybrid design efficiently directs fuzzing toward semantically meaningful kernel logic, demonstrating a productive balance of taint tracking and dynamic mutation.

Article
Computer Science and Mathematics
Computer Science

Faria Nassiri-Mofakham, Shadi Farid, Katsuhide Fujita

Abstract:

Lexicographic Preference Trees (LP-Trees) offer a compact and expressive framework for modeling complex decision-making scenarios. However, efficiently measuring similarity between complete or partial structures remains a challenge. This study introduces PLPSim, a novel metric for quantifying alignment between Partial Lexicographic Preference Trees (PLP-Trees), and develops three coalition formation algorithms—HRECS1, HRECS2, and HRECS3—that leverage PLPSim to group agents with similar preferences. We further propose ContractLex and PriceLex protocols (comprising five lexicographic protocols CLF, CFB, CFW, CFA, CFP), along with a new evaluation metric, F@LeX, designed to assess satisfaction under lexicographic preferences. To illustrate the framework, we generate a synthetic dataset (PLPGen) contextualized in a hybrid renewable energy market, where consumer PLP-Trees are matched with supplier tariffs to optimize coalition outcomes. Experimental results, evaluated using Normalized Discounted Cumulative Gain (nDCG), Davies–Bouldin dispersion, and F@LeX, show that PLPSim-based coalitions outperform baseline approaches. Notably, the combination HRECS3 + CFP yields the highest consumer satisfaction, while HRECS3 + CFB achieves balanced satisfaction for both consumers and suppliers. Although electricity tariffs and renewable energy contracts—both static and dynamic—serve as the motivating example, the proposed framework generalizes to broader multiagent systems, offering a foundation for preference-driven coalition formation, adaptive policy design, and sustainable market optimization.

Article
Computer Science and Mathematics
Mathematics

Rakhimjon Zunnunov, Roman Parovik, Akramjon Ergashev

Abstract: In the theory of mixed-type equations, there are many works in bounded domains with smooth boundaries bounded by a normal curve for first and second-kind mixed-type equations. In this paper, for a second-kind mixed-type equation in an unbounded domain whose elliptic part is a horizontal half-strip, a Bitsadze-Samarskii type problem is investigated. The uniqueness of the solution is proved using the extremum principle, and the existence of the solution is proved by the Green’s function method and the integral equations method. When constructing the Green’s function, the properties of Bessel functions of the second kind with imaginary argument and the properties of the Gauss hypergeometric function are widely used. Visualization of the solution to the Bitsadze-Samarskii type problem is performed, confirming its correctness from both mathematical and physical points of view.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Yutong Wang, Ruobing Yan, Yujie Xiao, Jinming Li, Zizhao Zhang, Feiyang Wang

Abstract: This study addresses the challenge of long-term dependency modeling in agent behavior planning for long-horizon tasks and proposes a memory-driven agent planning framework. The method introduces hierarchical memory encoding and dynamic memory retrieval structures, enabling the agent to selectively retain and effectively utilize historical information across multiple time scales, thereby maintaining policy stability and goal consistency in complex dynamic environments. The core idea is to construct an interaction mechanism between short-term and long-term memory, where attention-guided retrieval integrates historical experience with current perception to support continuous planning and decision optimization in long-term tasks. The proposed framework consists of four key modules: perception input, memory encoding, state updating, and behavior generation, forming an end-to-end task-driven learning process. Experimental evaluations based on success rate, average planning steps, memory consistency score, and policy stability demonstrate that the proposed algorithm achieves superior performance in long-term task scenarios, effectively reducing planning redundancy and improving strategy coherence and task efficiency. The results confirm that the memory-driven mechanism provides a novel theoretical foundation and algorithmic framework for developing long-term task agents, establishing a solid basis for adaptive decision-making and continuous planning in complex environments.

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Kangning Gao, Haotian Zhu, Rui Liu, Jinming Li, Xu Yan, Yi Hu

Abstract: Large Language Model (LLM)-based multi-agent systems have emerged as a promising paradigm for tackling complex tasks that exceed individual agent capabilities. However, existing approaches often suffer from coordination inefficiencies, a lack of trust mechanisms, and suboptimal role assignment strategies. This paper presents a novel trust-aware coordination framework that enhances multi-agent collaboration through dynamic role assignment and context sharing. Our framework introduces a multi-dimensional trust evaluation mechanism that continuously assesses agent reliability based on performance history, interaction quality, and behavioral consistency. The coordinator leverages these trust scores to dynamically assign roles and orchestrate agent interactions while maintaining a shared context repository for transparent information exchange. We evaluate our framework across eight diverse task scenarios with varying complexity levels, demonstrating significant improvements over baseline approaches. Experimental results show that our trust-aware framework achieves an 87.4% task success rate, reducing execution time by 36.3% compared to non-trust-based methods, while maintaining 43.2% lower communication overhead. The framework's ability to adapt agent roles based on evolving trust scores enables more efficient resource utilization and robust fault tolerance in dynamic multi-agent environments.
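The trust-update and role-assignment loop can be illustrated with a minimal sketch. The weights, smoothing factor, and function names below are assumptions for illustration only, not the framework's actual parameters: trust is modeled as an exponential moving average over the three dimensions the abstract names (performance, interaction quality, behavioral consistency), and the most-trusted agents receive the most critical roles.

```python
# Illustrative trust evaluation (assumed weights/alpha, not the paper's).
def update_trust(trust, performance, quality, consistency,
                 weights=(0.5, 0.3, 0.2), alpha=0.3):
    """EMA of a weighted blend of the three trust dimensions."""
    observation = sum(w * s for w, s in
                      zip(weights, (performance, quality, consistency)))
    return (1 - alpha) * trust + alpha * observation

def assign_roles(trust_scores, roles):
    """Hand out roles (most critical first) to the most-trusted agents."""
    ranked = sorted(trust_scores, key=trust_scores.get, reverse=True)
    return dict(zip(roles, ranked))

trust = {"a1": 0.9, "a2": 0.4}
trust["a2"] = update_trust(trust["a2"], 1.0, 0.8, 0.9)  # a2 performed well
assignment = assign_roles(trust, ["coordinator", "worker"])
print(assignment["coordinator"])  # a1
```

Even after a strong round, a2's trust rises only gradually (0.4 → 0.556 here), so role changes track sustained behavior rather than single interactions, which is the usual motivation for EMA-style trust scores.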

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Rodrigo Ying, Qianxi Liu, Yuliang Wang, Yujie Xiao

Abstract: This paper addresses the challenges in traditional enterprise performance analysis, including complex multi-source data structures, ambiguous indicator correlations, and poor decision interpretability. It proposes an enterprise performance optimization decision model that integrates knowledge graphs with causal inference. The model constructs a multi-entity and multi-relation knowledge graph to semantically integrate heterogeneous information from financial, market, and operational dimensions, enabling high-level representation of structured relationships among enterprise features. It further incorporates causal structure learning and inference mechanisms to identify key performance drivers and estimate intervention effects, revealing the true causal pathways among variables. In the optimization layer, the model combines knowledge representation with causal relationships to establish an interpretable decision objective function, ensuring that predictions possess both numerical accuracy and causal consistency with logical traceability. Experiments conducted on public enterprise datasets demonstrate that the proposed method outperforms mainstream deep learning and sequence modeling approaches in terms of error control and generalization performance, showing higher robustness and stability. Sensitivity analysis further confirms that the model maintains strong adaptability and consistent performance under different embedding dimensions, noise levels, and optimization strategies. This study provides a novel methodological framework and theoretical foundation for building interpretable and intervention-oriented intelligent decision systems, offering significant implications for data-driven performance evaluation and decision optimization.

Article
Computer Science and Mathematics
Algebra and Number Theory

Hassan Bouamoud

Abstract: In this article we show that the polynomial \( t^2(4x - n)^2 - 2ntx \) is not always a perfect square for \( n\geq 2 \) and \( (x,t)\in (\mathbb{N}^*)^2 \). We prove this for \( n=3 \), showing by contradiction that if \( t^2(4x - 3)^2 - 6tx \) were a perfect square, then one of x or t would not be an integer.
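The claim can be probed numerically. This brute-force sketch (an illustration, not the paper's proof) confirms for n = 3 that many pairs (x, t) yield a value of the expression that is not a perfect square:

```python
import math

def is_perfect_square(m):
    """True iff m is a non-negative perfect square."""
    if m < 0:
        return False
    r = math.isqrt(m)
    return r * r == m

# Scan small (x, t) for n = 3 and count pairs where
# t^2 (4x - 3)^2 - 6 t x is NOT a perfect square.
pairs = [(x, t) for x in range(1, 30) for t in range(1, 30)]
non_squares = sum(1 for x, t in pairs
                  if not is_perfect_square(t * t * (4 * x - 3) ** 2 - 6 * t * x))
print(non_squares > 0)  # True: e.g. x = t = 1 gives 1 - 6 = -5
```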

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Kewei Cao, Yinghao Zhao, Hejing Chen, Xinyi Liang, Yihan Zheng, Sumeng Huang

Abstract: Credit fraud detection is a central problem in financial risk management. It is challenging due to complex relationships among transaction entities, strong coordination of fraudulent activities, and high levels of behavioral camouflage at the individual level. Traditional approaches mainly rely on features from independent samples and often fail to represent multi-entity interaction structures effectively. From a relational modeling perspective, this study investigates a graph neural network-based method for credit fraud detection. The method represents transaction entities and their interactions as a unified graph, performs information propagation and neighborhood aggregation in graph space, and jointly models node attributes and structural context, thereby learning more discriminative risk representations. During modeling, the network captures both local interaction patterns and multi-hop relational information, making anomalous patterns embedded in complex transaction networks more explicit. Based on the proposed method, this study establishes a standardized comparative evaluation on a public credit fraud dataset and systematically compares the method with multiple existing models. The overall results confirm the effectiveness of graph-structured modeling for credit fraud detection. Compared with detection methods that ignore or weaken relational information, graph neural network models achieve clear advantages in the stability and consistency of risk identification and better reflect the propagation characteristics of fraudulent behavior within network structures. These results support modeling credit fraud detection as a relation-aware graph learning problem, which enhances risk characterization in complex financial scenarios and provides a structured modeling perspective for building intelligent risk control systems.

Article
Computer Science and Mathematics
Mathematics

Yuanwen Zheng, Fang Gao

Abstract: Calculating finite sums and products of trigonometric functions is an important and fascinating problem; one straightforward method is to use infinite series or infinite products. In this paper, we calculate four finite sums and a finite product of trigonometric functions using this method, contributing to a deeper understanding of the problem.
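As a classical illustration of this style of argument (a standard textbook identity, not one of the paper's new results), factoring \( z^n - 1 \) over the roots of unity yields a finite product of sines in closed form:

```latex
% From z^n - 1 = \prod_{k=0}^{n-1} \bigl(z - e^{2\pi i k/n}\bigr),
% dividing by z - 1 and letting z \to 1 gives
% n = \prod_{k=1}^{n-1} \bigl(1 - e^{2\pi i k/n}\bigr),
% and since \lvert 1 - e^{i\theta} \rvert = 2\sin(\theta/2),
\prod_{k=1}^{n-1} \sin\frac{k\pi}{n} = \frac{n}{2^{\,n-1}},
\qquad \text{e.g. } n = 3:\quad
\sin\frac{\pi}{3}\,\sin\frac{2\pi}{3}
  = \Bigl(\tfrac{\sqrt{3}}{2}\Bigr)^{2} = \frac{3}{4} = \frac{3}{2^{2}}.
```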



Preprints.org is a free preprint server supported by MDPI in Basel, Switzerland.


© 2025 MDPI (Basel, Switzerland) unless otherwise stated