Computer Science and Mathematics

Article
Computer Science and Mathematics
Computer Science

Yung-Hoh Sheu, Li-Wei Tai, Sheng-K Wu, Tz-Yun Chen, Li-Chun Chang

Abstract: This study proposes an integrated agility assessment system that combines Millimeter-Wave (MMW) radar, Ultra-Wideband (UWB) ranging, and Mixed Reality (MR) technologies to quantitatively evaluate athlete performance with high accuracy. The system utilizes the fine motion-tracking capability of MMW radar and the immersive real-time visualization provided by MR to ensure reliable operation under low-light conditions and multi-object occlusion, thereby enabling precise measurement of mobility, reaction time, and movement distance. To address the challenge of player identification during doubles testing, a one-to-one UWB configuration was adopted, in which each base station was paired with a wearable tag to distinguish individual athletes. UWB identification was not required during single-player tests. The experimental protocol included three specialized agility assessments—Table Tennis Agility Test I (TTAT I), Table Tennis Doubles Agility Test II (TTAT II), and the Agility T-Test (ATT)—conducted with more than 80 table tennis players of different technical levels (80% male and 20% female). Each athlete completed two sets of two trials to ensure measurement consistency and data stability. Experimental results demonstrated that the proposed system effectively captured displacement trajectories, movement speed, and reaction time. The MMW radar achieved an average measurement error of less than 10%, and the overall classification model attained an accuracy of 91%, confirming the reliability and robustness of the integrated sensing pipeline. Beyond local storage and MR-based live visualization, the system also supports cloud-based data uploading for graphical analysis and enables MR content to be mirrored on connected computer displays. This feature allows coaches to monitor performance in real time and provide immediate feedback. By integrating the environmental adaptability of MMW radar, the real-time visualization capability of MR, UWB-assisted athlete identification, and cloud-based data management, the proposed system demonstrates strong potential for professional sports training, technical diagnostics, and tactical optimization. It delivers timely and accurate performance metrics and contributes to the advancement of data-driven sports science applications.
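As a rough illustration of how the reported metrics (movement distance, speed, reaction time) can be derived from sampled tracking positions, the following Python sketch is not taken from the paper; the trajectory format, sampling convention, movement threshold, and function name are assumptions.

```python
import numpy as np

def agility_metrics(positions, timestamps, stimulus_time, move_threshold_m=0.05):
    """Estimate distance, mean speed, and reaction time from a sampled 2-D trajectory.

    positions        : (N, 2) array of x/y coordinates in metres (e.g., radar track)
    timestamps       : (N,) array of sample times in seconds
    stimulus_time    : time (s) at which the movement cue was given
    move_threshold_m : displacement from the start pose that counts as movement onset
    """
    positions = np.asarray(positions, dtype=float)
    timestamps = np.asarray(timestamps, dtype=float)

    # Total path length: sum of distances between consecutive samples.
    step_lengths = np.linalg.norm(np.diff(positions, axis=0), axis=1)
    total_distance = step_lengths.sum()

    # Mean speed over the whole trial.
    duration = timestamps[-1] - timestamps[0]
    mean_speed = total_distance / duration if duration > 0 else 0.0

    # Reaction time: first sample after the cue whose displacement from the
    # starting position exceeds the movement threshold.
    displacement = np.linalg.norm(positions - positions[0], axis=1)
    after_cue = (timestamps >= stimulus_time) & (displacement > move_threshold_m)
    reaction_time = timestamps[after_cue][0] - stimulus_time if after_cue.any() else np.nan

    return total_distance, mean_speed, reaction_time
```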
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Caijian Hua, Fangjun Ren

Abstract: Current management strategies for sorghum aphids heavily rely on indiscriminate chemical application, leading to severe environmental consequences and impacting food safety. While precision spraying offers a viable remediation for pesticide overuse, its effectiveness depends on accurate pest location and classification. To address the critical challenge of segmenting small, swarming aphids in complex field environments, we propose FESW-UNet, a dual-domain attention network that integrates Fourier-enhanced attention, spatial attention, and wavelet-based downsampling into a UNet backbone. We introduce an Efficient Multi-scale Aggregation (EMA) module between the encoder and decoder to improve global context perception, allowing the model to better capture relationships between global and local features in the field. In the feature extraction stage, we embed a Similarity-Aware Activation module (SimAM) to target key infestation regions while suppressing background noise, thereby enhancing pixel-level discrimination. Furthermore, we replace conventional downsampling with Haar Wavelet Decomposition (HWD) to reduce resolution while preserving structural edge details. Finally, a Fourier-enhanced attention module (FEAM) is added to the skip-connection layers. By using complex-valued weights to regulate frequency-domain features, FEAM fuses global low-frequency structures with local high-frequency details, improving feature representation diversity. Experiments on the Aphid Cluster Segmentation dataset show that FESW-UNet outperforms other models, achieving an mIoU of 68.76% and mPA of 78.19%. The model also demonstrated strong adaptability on the AphidSeg-Sorghum dataset, reaching an mIoU of 81.22% and mPA of 87.97%. The proposed method provides an efficient and feasible technical solution for monitoring and controlling sorghum aphids via image segmentation and demonstrates broad application potential.
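The Haar Wavelet Decomposition downsampling step can be pictured with the short PyTorch sketch below. It is a generic 2x2 Haar decomposition followed by a 1x1 projection, not the authors' exact HWD module; the class name, projection layer, and channel layout are assumptions.

```python
import torch
import torch.nn as nn

class HaarWaveletDownsample(nn.Module):
    """Halve spatial resolution with a 2x2 Haar decomposition instead of strided
    convolution or pooling (assumes even input height and width).

    The four sub-bands (LL, LH, HL, HH) are stacked along the channel axis so that
    high-frequency edge detail is retained, then mixed by a 1x1 convolution."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.proj = nn.Conv2d(4 * in_channels, out_channels, kernel_size=1)

    def forward(self, x):
        # The four pixels of every 2x2 block.
        a = x[:, :, 0::2, 0::2]  # top-left
        b = x[:, :, 0::2, 1::2]  # top-right
        c = x[:, :, 1::2, 0::2]  # bottom-left
        d = x[:, :, 1::2, 1::2]  # bottom-right

        ll = (a + b + c + d) / 2  # low-frequency approximation
        lh = (a - b + c - d) / 2  # horizontal detail
        hl = (a + b - c - d) / 2  # vertical detail
        hh = (a - b - c + d) / 2  # diagonal detail

        return self.proj(torch.cat([ll, lh, hl, hh], dim=1))

# Example: a 64-channel feature map is reduced from 128x128 to 64x64.
# y = HaarWaveletDownsample(64, 128)(torch.randn(1, 64, 128, 128))
```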
Article
Computer Science and Mathematics
Probability and Statistics

Ali Laksaci, Ibrahim M. Almanjahi, Mustapha Rachdi

Abstract: In this paper, we propose an alternative kernel estimator for the regression operator of a scalar response variable S given a functional random variable T that takes values in a semi-metric space. The new estimator is constructed through the minimization of the least absolute relative error (LARE). The latter is characterized by its ability to provide a more balanced and scale-invariant measure of prediction accuracy than the traditional absolute or squared error criteria. LARE is an appropriate tool for reducing the influence of extremely large or small response values, enhancing robustness against heteroscedasticity and/or outliers. This feature makes LARE suitable for functional or high-dimensional data, where variations in scale are common. The high feasibility and strong performance of the proposed estimator are theoretically supported by establishing its stochastic consistency, derived with a precise convergence rate under mild regularity conditions. The ease of implementation and the stability of the estimator are justified by simulation studies and an empirical application to near-infrared (NIR) spectrometry data. To explore the functional architecture of these data, we employ random matrix theory (RMT), a principal analytical tool of econophysics.
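For concreteness, one plausible form of a LARE-based local kernel criterion, written in the abstract's notation (scalar response \( S \), functional covariate \( T \), semi-metric \( d \), kernel \( K \), bandwidth \( h \)) but not quoted from the paper, is

\[
\widehat{r}(t) \;=\; \arg\min_{s>0}\ \sum_{i=1}^{n}
\left( \left|\frac{S_i - s}{S_i}\right| + \left|\frac{S_i - s}{s}\right| \right)
K\!\left(\frac{d(t, T_i)}{h}\right),
\]

where the two relative-error terms penalize under- and over-prediction on a relative scale, which is what makes the criterion insensitive to the magnitude of the response.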
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Ammar Oad, Imtiaz Hussain Koondhar, Feng Dong, Weibing Liu, Beiji Zou, Weichun Liu, Yun Chen, Wu Yaoqun

Abstract: Accurate segmentation of thyroid nodules on ultrasound images remains a challenging task in computer-aided diagnosis (CAD), mainly because of low contrast, speckle noise, and large inter-patient variability of nodule appearance. Here, a new deep learning-based segmentation method has been developed on the SwinUNet architecture, supported by spatial attention mechanisms to enhance feature discrimination and localization accuracy. The model takes advantage of the hierarchical feature extraction ability of the Swin Transformer to learn both global context and local fine-grained details, while attention modules in the decoder selectively highlight informative areas and suppress irrelevant background features. We evaluated the design on the publicly available TN3K thyroid ultrasound dataset. Performance improved steadily with training, peaking around the 800th epoch with a Dice Similarity Coefficient (F1 Score) of 85.51%, Precision of 87.05%, Recall of 89.13%, IoU of 78.00%, Accuracy of 97.02%, and an AUC of 99.02%. These results represent substantial gains over the initial training stage (a 15.38% increase in IoU and a 12.05% increase in F1 Score), indicating that the model learns complex nodule shapes and boundaries well and, with longer training, becomes better at detecting even subtle thyroid nodules. The proposed SwinUnet_withAttention model therefore shows strong potential for clinical use in supporting thyroid diagnosis.
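A minimal sketch of the kind of spatial attention gate that can be attached to a decoder stage is shown below; it is a generic CBAM-style spatial attention rather than necessarily the exact module used in the paper, and the class name, kernel size, and fusion pattern are assumptions.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Generic spatial attention: weight each pixel by a mask built from
    channel-wise average and max statistics."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg_map = x.mean(dim=1, keepdim=True)         # (B, 1, H, W)
        max_map = x.max(dim=1, keepdim=True).values   # (B, 1, H, W)
        mask = torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * mask  # informative regions are amplified, background is damped

# In a decoder stage the mask would typically be applied to the upsampled
# features before they are fused with the skip connection, e.g.:
# fused = torch.cat([SpatialAttention()(up_feat), skip_feat], dim=1)
```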

Article
Computer Science and Mathematics
Analysis

Masatake Hoshi, Yutaka Tachimori

Abstract: Background: In Japan, the number of older adults living alone has been increasing, raising the risk of unnoticed health decline or solitary death. Continuous monitoring using sensors can help detect behavioral changes indicating health issues and has the potential to support both older adults and their families. Methods: We obtained behavior and temperature data, continuously recorded over a long period at 15-min intervals from sensors installed in the homes of nine older adults living alone. After data cleaning, behavioral signals were analyzed using Fourier spectral analysis and multiple regression to extract 13-dimensional behavioral feature vectors. We attempted to detect temporal changes and behavioral characteristics by whitening these data and performing correspondence analysis. Results: Spectral analysis revealed 24-hour periodicity in all users’ behavior. Based on changes in the maximum component value and adjusted R², individuals were classified into a stable group (SG) and a fluctuating group (FG). Boundary variance and false error analyses confirmed that behavioral temporal changes and individual characteristics could be detected objectively. Conclusions: The findings showed that temporal changes in daily behavior among older adults living alone can be detected using simple continuous sensor data, suggesting potential for early detection of health-related changes and preventive support at home.
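The 24-hour periodicity check described here can be reproduced in a few lines of numpy; a minimal sketch, assuming one activity count per 15-minute slot (96 samples per day) and omitting the cleaning, regression, and whitening steps. The function name and demo signal are illustrative only.

```python
import numpy as np

def dominant_period_hours(counts, samples_per_day=96):
    """Return the dominant period (in hours) of a behaviour signal sampled every
    15 minutes, using the amplitude spectrum of the centred signal."""
    x = np.asarray(counts, dtype=float)
    x = x - x.mean()                                           # remove the DC component
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / samples_per_day)   # cycles per day
    k = spectrum[1:].argmax() + 1                              # skip the zero-frequency bin
    return 24.0 / freqs[k]                                     # period in hours

# A signal with a strong daily rhythm returns a value close to 24:
# rng = np.random.default_rng(0)
# t = np.arange(96 * 30)                                       # 30 days of 15-min slots
# demo = 5 + 3 * np.sin(2 * np.pi * t / 96) + rng.normal(0, 1, t.size)
# print(dominant_period_hours(demo))                           # ~24.0
```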
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Ashutosh Agarwal

Abstract: This paper proposes and evaluates a unified machine-learning framework for enterprise portfolio management that integrates multi-horizon financial forecasting, unsupervised risk detection, and explainable reporting within a single pipeline. Using a synthetic but structurally realistic ERP-style dataset comprising 162,000 project–month records with 24 financial and operational features, the study adopts a quantitative design based on multi-source feature engineering, expanding-window temporal cross-validation, and benchmarking of five forecasting models (Linear Regression, Random Forest, XGBoost, LightGBM, CatBoost) across 1-, 3-, and 6-month horizons. Hyperparameters for the strongest models are tuned with Optuna, and three unsupervised detectors (Isolation Forest, COPOD, LODA) are applied to scaled numeric features, while SHAP is used to generate global and local explanations. Results show that gradient-boosted trees substantially outperform linear baselines, reducing MAE by roughly 25–40% and achieving R² ≈ 0.63 at 1 month, ≈ 0.57 at 3 months, and ≈ 0.43 at 6 months, with open commitments, backlog, change orders, and schedule slippage emerging as dominant drivers of future spend. The anomaly layer flags around 2% of records as high risk, capturing patterns such as vendor rate spikes, zero-commitment overspend, stalled backlog, and abrupt forecast collapses. Rather than introducing novel algorithms, the contribution of this work lies in a unified, SHAP-enabled architecture that enhances auditability and governance by turning model outputs into defensible financial narratives and providing a practical blueprint that future work can extend to real ERP data, streaming architectures, and human-in-the-loop risk governance.
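The expanding-window temporal cross-validation and the unsupervised risk layer can be sketched as follows; this is a generic illustration using scikit-learn's TimeSeriesSplit, GradientBoostingRegressor, and IsolationForest rather than the paper's exact models (the paper benchmarks XGBoost, LightGBM, and CatBoost, and also uses COPOD and LODA), and the function names are assumptions.

```python
import numpy as np
from sklearn.model_selection import TimeSeriesSplit
from sklearn.ensemble import GradientBoostingRegressor, IsolationForest
from sklearn.metrics import mean_absolute_error

def expanding_window_mae(X, y, n_splits=5):
    """Train on an expanding history and validate on the next time block."""
    maes = []
    for train_idx, test_idx in TimeSeriesSplit(n_splits=n_splits).split(X):
        model = GradientBoostingRegressor(random_state=0)
        model.fit(X[train_idx], y[train_idx])
        maes.append(mean_absolute_error(y[test_idx], model.predict(X[test_idx])))
    return float(np.mean(maes))

def flag_high_risk(X, contamination=0.02):
    """Unsupervised anomaly layer: mark roughly 2% of records as high risk."""
    labels = IsolationForest(contamination=contamination, random_state=0).fit_predict(X)
    return labels == -1   # True where the record is flagged as anomalous
```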
Review
Computer Science and Mathematics
Computer Vision and Graphics

Mohammad Osman Khan, Imran Khan Apu

Abstract: Human Action Recognition (HAR) has grown into one of the most active areas in computer vision, finding uses in healthcare, smart homes, security, autonomous driving, and even human–robot interaction. Over the past decade, deep learning has transformed how HAR is approached. Instead of relying on handcrafted features, modern models learn directly from raw data, whether that comes from RGB videos, skeleton sequences, depth maps, wearable devices, or wireless signals. Existing surveys typically focus on either technical architectures or specific modalities, lacking comprehensive integration of recent advances, practical applications, and explainability. This survey addresses this gap by examining cutting-edge deep learning methods alongside their real-world deployment in fall detection, rehabilitation monitoring, and navigation systems. We analyze emerging techniques driving HAR forward: transformer architectures for temporal modeling, self-supervised learning reducing annotation requirements, contrastive learning for robust representations, and graph neural networks excelling in skeleton-based recognition through joint relationship modeling. Advanced approaches, including few-shot and meta-learning, enable novel activity recognition with limited data, while cross-modal learning facilitates knowledge transfer between sensor modalities. Federated learning preserves privacy across distributed devices, neural architecture search automates design optimization, and domain adaptation improves generalization across environments and populations, collectively advancing HAR toward efficient, adaptable, deployment-ready solutions. By synthesizing recent advances, real-world applications, and explainability requirements, this survey provides researchers and practitioners a consolidated roadmap for developing HAR systems that are accurate, interpretable, and ready for practical deployment across diverse domains.
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Aravindh Sekar, Deb Tech, Cherie Noteboom

Abstract: The convergence of Artificial Intelligence (AI) and Blockchain Technology (BCT) is transforming supply-chain ecosystems by enhancing transparency, intelligence, and automation. However, existing research lacks a unified theory explaining how these technologies jointly create resilience across organizational levels. This paper extends the Strategic–Decentralized Resilience Theory (SDRT), originally developed to guide effective blockchain implementation, by integrating Agentic AI capabilities to form the SDRT–Agentic AI framework. The framework conceptualizes how predictive, adaptive, and agentic (autonomous) AI capabilities reinforce SDRT’s three pillars: Strategic, Organizational, and Decentralized Resilience. The framework draws on three AI modalities—predictive AI for strategic foresight and agility, adaptive AI for organizational learning and flexibility, and agentic AI for self-governed, trustless coordination within blockchain ecosystems. Together, these mechanisms explain how intelligent and decentralized systems co-evolve to generate dynamic, multi-level resilience. This conceptual paper develops a comprehensive model and propositions describing interactions between AI capabilities and blockchain-based organizational structures. It contributes to information systems and supply-chain research by unifying two fragmented domains, AI and blockchain, under a resilience-oriented mid-range theory. Practically, the framework provides managers with a roadmap to align AI investments with decentralized governance mechanisms, enabling proactive decision-making, adaptability, and sustainable competitiveness in increasingly autonomous digital environments.
Article
Computer Science and Mathematics
Mathematics

Anatoli Torokhti, Peter Pudney

Abstract: Suppose $K_Y$ and $K_X$ are the image and the preimage of a nonlinear operator $\mathcal{F}: K_Y \rightarrow K_X$. It is supposed that the cardinality of each of $K_Y$ and $K_X$ is $N$, and that $N$ is large. We provide an approximation to the map $\mathcal{F}$ that requires prior information on only a few elements, $p$ of them, from $K_Y$, where $p \ll N$, but still effectively represents $\mathcal{F}(K_Y)$. This is achieved under quite non-restrictive assumptions. The device behind the proposed method is a special extension of the piecewise linear interpolation technique to the case of sets of stochastic elements. The proposed technique provides a single operator that transforms any element from the arbitrarily large set $K_Y$. The operator is determined in terms of pseudo-inverse matrices, so it always exists.
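One schematic way to read the construction (not taken from the paper; the notation below is purely illustrative) is as a piecewise-linear interpolant built from $p$ reference pairs $(y_j, \mathcal{F}(y_j))$, $j = 1, \dots, p$:

\[
\widetilde{\mathcal{F}}(y) \;=\; \mathcal{F}(y_j) + B_j\,(y - y_j),
\qquad
B_j \;=\; \bigl(\mathcal{F}(y_{j+1}) - \mathcal{F}(y_j)\bigr)\,(y_{j+1} - y_j)^{\dagger},
\]

for $y$ associated with the piece spanned by $(y_j, y_{j+1})$, where $(\cdot)^{\dagger}$ denotes the Moore–Penrose pseudo-inverse. Because the pseudo-inverse exists for any matrix, an operator of this form is always defined, which is the property the abstract emphasizes.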
Article
Computer Science and Mathematics
Analysis

Ramesh Anusha Katta

Abstract: Social media platforms have become critical spaces where consumers and investors publicly react to major corporate events. These online reactions provide rich text data for analyzing brand sentiment and evaluating marketing campaigns. This study examines how sentiment toward Apple changed around the company’s 2020 product launch within Reddit finance communities. Using a dataset of 297,533 Reddit comments mentioning Apple’s ticker (“AAPL”) posted between November 2016 and October 2021 in finance-related subreddits, comments were labeled as occurring before or after the September 11, 2020, launch. Sentiment was measured using VADER, a lexicon‐ and rule‐based sentiment analyzer optimized for social media text (Hutto & Gilbert, 2014). Descriptive statistics, correlation analyses, and independent‐samples t tests compared sentiment and engagement (upvotes) across periods and explored relationships among sentiment, text length, and upvotes. Overall sentiment was slightly positive (M = 0.13), with a small but statistically significant increase after the launch (Before: M = 0.12; After: M = 0.14). Upvotes did not differ meaningfully by period. Correlations showed that stronger sentiment was associated with longer comments but was essentially unrelated to upvotes. As an exploratory extension, a small labeled subset of comments was used to pilot fine‐tuning a transformer-based model with the Unsloth framework, building on evidence that domain-specific transformers such as FinBERT outperform lexicon-based methods on financial text (Araci, 2019). The findings suggest that Apple’s 2020 launch modestly improved conversational tone in Reddit finance discussions without changing engagement, and they highlight the value of combining fast lexicon methods with modern transformers for campaign evaluation.
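The core measurement step (VADER compound scores plus an independent-samples t test) can be reproduced with a few lines of Python; a minimal sketch, assuming the comments have already been split into pre- and post-launch lists (the variable names below are hypothetical).

```python
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
from scipy import stats

analyzer = SentimentIntensityAnalyzer()

def compound_scores(comments):
    """VADER compound score in [-1, 1] for each comment string."""
    return [analyzer.polarity_scores(text)["compound"] for text in comments]

# Pre/post comparison around the launch date (Welch's independent-samples t test):
# before = compound_scores(comments_before_launch)
# after = compound_scores(comments_after_launch)
# t, p = stats.ttest_ind(before, after, equal_var=False)
```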
Article
Computer Science and Mathematics
Algebra and Number Theory

Anastasiia Boikova

Abstract: The distribution of prime numbers has long been a central topic in analytic number theory. The Prime Number Theorem (PNT), which states that the number of primes less than x is asymptotically \( x/\log(x) \), provides a foundational understanding of this phenomenon. The search for deeper insight has led to the Riemann Hypothesis (RH), which implies explicit bounds on the error term in the PNT, thus enhancing the precision of the results derived from it. In this work, an algorithm is proposed for estimating the prime-counting function by counting the number of composite numbers eliminated during the sieving of the odd-number sequence. Applying this approach, it was found that variations in the length of the odd-number sequence during removal of composite numbers follow an oscillation pattern governed by the sinc function. Further analysis of these oscillations suggests that the function \( \pi(x) \) is composed of two terms: the main term, which makes the largest contribution to the distribution of the primes, and the error term, which is responsible for the accumulation of errors during the calculation of the \( \text{sinc} \) function values up to the limit \( m \), defined as \( \sqrt{x}/2 \). This leads to the proposal that the bound coefficient of the error term should be equal to \( \sqrt{x}/\log x \), which correlates with an estimate of this coefficient derived under the assumption that the RH is true. We hope that this perspective stimulates future work toward formalizing this approach and uncovering deeper connections to the zeta function and prime number theory at large scale.
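A minimal Python sketch of the counting idea (sieve the odd numbers up to x, tally the composites eliminated, and recover \( \pi(x) \)); this follows the abstract's description only loosely, and the implementation details and function name are assumptions.

```python
def primes_by_elimination(x):
    """Sieve the odd numbers up to x and count how many composites are removed.

    pi(x) = (number of odd numbers kept) + 1, the +1 accounting for the prime 2."""
    odds = list(range(3, x + 1, 2))
    is_composite = [False] * len(odds)
    eliminated = 0
    m = int(x ** 0.5)
    for i, n in enumerate(odds):
        if n > m:
            break
        if is_composite[i]:
            continue
        for multiple in range(n * n, x + 1, 2 * n):   # odd multiples of n only
            j = (multiple - 3) // 2
            if not is_composite[j]:
                is_composite[j] = True
                eliminated += 1
    pi_x = 1 + len(odds) - eliminated                 # +1 for the prime 2
    return pi_x, eliminated

# print(primes_by_elimination(100))  # (25, 25): 25 primes below 100, 25 odd composites removed
```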
Article
Computer Science and Mathematics
Mathematics

Yuxuan Zhang, Weitong Hu, Wei Zhang

Abstract: We propose a highly speculative "toy model of everything" based on the finite-dimensional Z3-graded Lie superalgebra with cubic vacuum triality introduced in Ref. [1]. Interpreting the grade-2 sector as the physical vacuum carrying an invariant cubic form, we derive—purely from algebraic constraints and representation theory—the essential features of particle physics, cosmology, gravity, black holes, and quantum entanglement, with literally zero free parameters. While awaiting experimental tests, this framework illustrates the unifying potential of ternary vacuum symmetries beyond conventional Z2 supersymmetry.
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Krunal Jesani, Dmitry Ignatov, Radu Timofte

Abstract: Neural architecture search (NAS) traditionally requires significant human expertise or automated trial-and-error to design deep learning models. We present NN-Caption, an LLM-guided neural architecture search pipeline that generates runnable image-captioning models by composing CNN encoders from LEMUR’s classification backbones with sequence decoders (LSTM/GRU/Transformer) under a strict Net API [1,2]. Using DeepSeek-R1-0528-Qwen3-8B as the primary generator [3], we present the prompt template and examples of generated architectures. We evaluate on MS COCO with BLEU-4 [4,5]. The LLM generated dozens of captioning models, with over half successfully trained and producing meaningful captions. We analyse the outcomes of using different numbers of input model snippets (5 vs. 10) in the prompt, finding a slight drop in success rate when providing more candidate components. We also report training dynamics (caption accuracy vs. epochs) and the highest BLEU-4 attained. Our results highlight the promise of LLM-guided NAS: the LLM not only proposes architectures but also suggests hyperparameters and training practices. We identify the challenges encountered (e.g., code hallucinations or API compliance issues) and detail how prompt rules and iterative code fixes addressed them. This work presents a pipeline that integrates prompt-based code generation with automatic evaluation, and adds dozens of novel captioning models to the open LEMUR dataset to facilitate reproducible benchmarking and downstream AutoML research.
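The BLEU-4 scoring of generated captions can be reproduced with nltk's corpus_bleu; a minimal sketch, not the paper's exact evaluation script (tokenization, smoothing choice, and the function name are assumptions).

```python
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

def bleu4(references, hypotheses):
    """Corpus-level BLEU-4.

    references : list where each item is a list of reference captions (each a token list)
    hypotheses : list of generated captions (each a token list)
    """
    smooth = SmoothingFunction().method1  # avoids zero scores on short captions
    return corpus_bleu(references, hypotheses,
                       weights=(0.25, 0.25, 0.25, 0.25),
                       smoothing_function=smooth)

# refs = [[["a", "dog", "runs", "on", "the", "beach"]]]
# hyps = [["a", "dog", "is", "running", "on", "the", "beach"]]
# print(bleu4(refs, hyps))
```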
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Lixin Hou, Ning Wei, Mengke Wang, Xiaoran Yu, Wenshuang Tu, Jing Zhou, Hongjun Gu

Abstract: Seed quality is a crucial factor in determining yield before sowing. Ginseng seeds undergo several processes before sowing, including picking, washing, and germination. The germination process is susceptible to damage or failure, which can directly impact the final yield of subsequent cultivation. Therefore, precise and reliable quality inspection and screening must be done before sowing to ensure a high germination rate. Based on YOLOv11n, this study proposes the YOLO-GS model to test the quality of ginseng seeds. Firstly, a SELP module was designed to enhance the network's ability to focus on the key features of ginseng seeds and improve the model's detection accuracy. Secondly, the Channel Prior Convolution Attention (CPCA) mechanism was introduced into the backbone network to dynamically assign attention weights to the feature map in both the channel and spatial dimensions, thereby enhancing the network's ability to extract features from the target. Thirdly, the C3k2 structure in the backbone was improved to account for both local feature extraction and global dependency modeling, thereby enhancing the model's accuracy. Finally, a Convolutional Attention Module (CloFormerAttnConv) based on the multi-frequency position-sensitive attention mechanism in C2PSA was introduced to achieve a dual perception of local details and global semantics while maintaining computational efficiency and improving feature extraction capabilities. The experimental findings demonstrated that the YOLO-GS model attained 97.7% mAP@0.5, with Precision, Recall, F1-Score, and mAP@0.5:0.95 reaching 96.7%, 96.4%, 90.5% and 90.3%, respectively. The model has only 4.2 million parameters. When deployed on the Jetson edge device, the model inference time is 0.6 ms, providing an effective solution for real-time detection tasks in ginseng seed quality assessment. In conclusion, the YOLO-GS model is applicable for the precise detection of ginseng seed quality.
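A generic channel-then-spatial attention block in PyTorch, illustrating the pattern that a channel-prior attention mechanism follows (channel weights first, then spatial weights on the refined map); this is not the authors' CPCA module, and the reduction ratio, kernel size, and class name are assumptions.

```python
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    """Weight channels first (squeeze-and-excitation style), then weight spatial
    positions with a depthwise convolution over the channel-refined features."""
    def __init__(self, channels, reduction=4, kernel_size=7):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())
        self.spatial = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size,
                      padding=kernel_size // 2, groups=channels),
            nn.Sigmoid())

    def forward(self, x):
        x = x * self.channel_mlp(x)   # channel attention (the "prior")
        return x * self.spatial(x)    # spatial attention on the refined map

# y = ChannelSpatialAttention(64)(torch.randn(1, 64, 80, 80))
```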

Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Haige Wang, Cong Nie, Chifu Chiang

Abstract: This paper addresses the problem of anomaly detection in multi-source heterogeneous data within the ETL (Extract-Transform-Load) process and proposes an intelligent detection framework that integrates temporal modeling and attention mechanisms. The method achieves effective dynamic aggregation of multidimensional features and temporal dependency modeling in ETL logs through the coordinated design of feature encoding, gated recurrent modeling, and multi-head attention allocation. At the feature level, the model uses a unified encoding structure to map raw logs, monitoring metrics, and task status information into a high-dimensional latent space, ensuring consistency of feature scales and completeness of information. At the temporal level, a GRU-based time modeling structure is introduced to capture long-term dependencies, enhancing the model's ability to perceive the evolution of anomaly patterns. At the attention level, a multi-head mechanism is applied to weight different time segments and feature dimensions, enabling adaptive focus on key moments and important features. Finally, the model combines anomaly scoring with distribution consistency constraints to achieve accurate identification and discrimination of potential anomalies. Experimental results show that the proposed framework significantly outperforms traditional rule-based detection, statistical methods, and basic deep models across various ETL task scenarios, demonstrating higher detection accuracy, stability, and generalization capability. The findings verify the effectiveness of integrating temporal modeling and attention mechanisms for anomaly detection in complex data streams and provide a feasible solution for building reliable and scalable intelligent ETL monitoring systems.
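The encoder, GRU, multi-head attention, and anomaly-scoring chain described above can be sketched in PyTorch as follows; this is a generic illustration of the described architecture rather than the authors' implementation, and all layer sizes and names are assumptions.

```python
import torch
import torch.nn as nn

class ETLAnomalyDetector(nn.Module):
    """Encode heterogeneous ETL features, model temporal dependencies with a GRU,
    re-weight time steps with multi-head attention, and emit a per-window anomaly score."""
    def __init__(self, in_dim, hidden_dim=64, n_heads=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        self.gru = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.attn = nn.MultiheadAttention(hidden_dim, n_heads, batch_first=True)
        self.scorer = nn.Linear(hidden_dim, 1)

    def forward(self, x):                      # x: (batch, time, in_dim)
        h = self.encoder(x)                    # unified feature encoding
        h, _ = self.gru(h)                     # long-range temporal dependencies
        h, _ = self.attn(h, h, h)              # adaptive focus on key time steps
        score = self.scorer(h.mean(dim=1))     # pooled window-level anomaly score
        return torch.sigmoid(score).squeeze(-1)

# detector = ETLAnomalyDetector(in_dim=32)
# scores = detector(torch.randn(8, 48, 32))    # 8 windows of 48 time steps each
```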
Article
Computer Science and Mathematics
Artificial Intelligence and Machine Learning

Yosef Akhtman

Abstract: We propose a unified mathematical framework showing that the representational universality of modern foundational models arises from a shared finite latent domain. Building on the Finite Ring Continuum (FRC), we model all modalities as epistemic projections of a common latent set Z ⊂ Ut, where Ut is a symmetry-complete finite-field shell. Using the uniqueness of minimal sufficient representations, we prove the Universal Subspace Theorem, establishing that independently trained embeddings coincide, up to bijection, as coordinate charts on the same latent structure. This result explains cross-modal alignment, transferability, and semantic coherence as consequences of finite relational geometry rather than architectural similarity. The framework links representation learning, sufficiency theory, and FRC algebra, providing a principled foundation for universal latent structure in multimodal models.
Article
Computer Science and Mathematics
Software

Oras Baker, Ricky Lim, Kasthuri Subaramaniam, Sellappan Palaniappan

Abstract: This research investigates secure recommender systems through federated learning (FL) on educational platforms, where online education services face increasing threats to student data privacy. It develops a system that merges FL with collaborative filtering to generate personalised course recommendations while keeping user data on client devices. Performance was evaluated on data from major platforms, including edX, Coursera, Udemy, and others, using MSE, R-squared, precision, recall, and F1-score metrics. The evaluation shows that FL maintains user privacy by restricting data aggregation, but at the cost of lower recommendation quality than centralised systems offer. The research establishes two essential findings: FL maintains user privacy in secure educational settings, and the performance reduction caused by limited data constitutes a core challenge for distributed systems. It makes two primary methodological contributions: integrating data-preprocessing methods for handling missing information and developing a complete evaluation framework for federated recommendation platforms. The results differ from previous studies in demonstrating how model performance deteriorates under federated system constraints. Overall, the work advances FL applications in educational technology by analysing privacy-accuracy tradeoffs and presenting methods to improve federated recommender systems in protected data environments.
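The federated training loop at the heart of such a system can be pictured with this minimal FedAvg-style sketch in numpy; it is a generic illustration (matrix-factorisation recommender, equal-weight averaging) rather than the paper's implementation, and all names and hyperparameters are assumptions.

```python
import numpy as np

def local_update(item_factors, interactions, lr=0.01, epochs=1):
    """One client's local training on its private (item_id, rating) pairs.
    Only the shared item-factor matrix is returned; user factors never leave the device."""
    w = item_factors.copy()
    user_vec = np.random.default_rng(0).normal(scale=0.1, size=w.shape[1])
    for _ in range(epochs):
        for item, rating in interactions:
            err = rating - user_vec @ w[item]
            user_vec += lr * err * w[item]   # user factors stay on the device
            w[item] += lr * err * user_vec   # item factors are the shared update
    return w

def federated_round(global_item_factors, clients):
    """Server round: each client trains locally, the server averages the updates (FedAvg)."""
    updates = [local_update(global_item_factors, c) for c in clients]
    return np.mean(updates, axis=0)
```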
Article
Computer Science and Mathematics
Computer Science

Xiaoyu Deng

Abstract: To enhance the integrity assurance of encrypted data in cloud storage environments, a homomorphic encryption-based data remote verification and anti-tampering mechanism is proposed. The design incorporates a homomorphic verification protocol to enable consistency checks of data blocks in the encrypted domain, and combines a lightweight tag structure with a Merkle Hash Tree to support dynamic data operations, including insertion, modification, and deletion. Corrective codes and tag version control are employed to enable rapid data recovery following tampering detection. Additionally, attribute-encryption-based access control and a blockchain auditing mechanism are integrated to strengthen the system's security closed-loop. Analysis shows that the mechanism offers strong scalability with low computational and communication overhead while preserving data privacy, making it suitable for trusted data storage in multi-user cloud environments.
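The Merkle Hash Tree component can be illustrated with a short Python sketch; this is a textbook construction for block-level integrity checks, not the paper's exact tag structure, and the hash choice and helper names are assumptions.

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(blocks):
    """Root hash over a list of (possibly encrypted) data blocks.
    Changing, inserting, or deleting any block changes the root."""
    level = [_h(b) for b in blocks]
    if not level:
        return _h(b"")
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# The verifier stores only the root; after a dynamic update (insert/modify/delete),
# the client recomputes the root from the new blocks (or an authentication path)
# and compares it with the stored value.
# root = merkle_root([b"block-1-ciphertext", b"block-2-ciphertext"])
```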
Article
Computer Science and Mathematics
Geometry and Topology

Irem Eroğlu

Abstract: In this study, we establish several coupled fixed point results in quantale-valued quasi-metric spaces, which constitute a generalization of metric and probabilistic metric spaces. The obtained results are illustrated with concrete examples. Furthermore, we introduce the concept of θs-completeness and, as an application of the main theorems, we derive some results in both quantale-valued partial metric spaces and probabilistic metric spaces.
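For readers unfamiliar with the notion, recall the standard definition in the ordinary metric setting, which the paper generalizes to quantale-valued quasi-metric spaces: a pair \( (x, y) \in X \times X \) is a coupled fixed point of a mapping \( F : X \times X \to X \) if

\[
F(x, y) = x \quad \text{and} \quad F(y, x) = y .
\]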
Article
Computer Science and Mathematics
Computer Networks and Communications

Robert Campbell

Abstract: The U.S. Department of Defense (DoD) faces three concurrent cybersecurity modernization mandates that together constitute what we term the Next-Generation Security Triad: post-quantum cryptography (PQC) migration by 2030–2035, Zero Trust Architecture (ZTA) implementation by FY2027, and AI system security assurance under CDAO governance. These Triad components operate under distinct timelines, funding streams, workforce competencies, and compliance frameworks, creating significant coordination challenges for CIOs, Commanding Officers, Program Management Offices, and Authorizing Officials. Current approaches treat these as separate migrations, resulting in duplicative investments, architectural misalignment, and uncoordinated risk exposure. This paper argues that the solution is not to merge the three Triad programs, since each serves distinct operational purposes, but to establish a shared modernization substrate. We present a unified architectural framework comprising four substrate layers: (1) cryptographic services infrastructure, (2) identity and access management fabric, (3) telemetry and analytics pipeline, and (4) policy orchestration engine. This substrate-based approach enables each Triad component to proceed at its own pace while ensuring interoperability, reducing lifecycle technical debt, and providing measurable compliance pathways.

