Preprint
Article

This version is not peer-reviewed.

Nested Learning in Higher Education: Integrating Generative AI, Neuroimaging, and Multimodal Deep Learning for a Sustainable and Innovative Ecosystem

Submitted: 07 December 2025
Posted: 09 December 2025


Abstract
Industry 5.0 challenges higher education to integrate human-centred and sustainable uses of artificial intelligence, yet current deployments rarely connect generative AI, neuroadaptive sensing and governance in a single framework. This article introduces Nested Learning as a neuro-adaptive ecosystem design in which generative AI agents, IoT infrastructures and multimodal deep learning orchestrate instructional support while preserving student agency and a “pedagogy of hope”. We present an exploratory two-phase mixed-methods study as an early empirical illustration of this proposal. First, a neuro-experimental calibration with 18 undergraduate students used mobile EEG while they interacted with ChatGPT in problem-solving tasks. Second, a field implementation at a university in Madrid involved 380 participants (300 students and 80 lecturers), embedding the Nested Learning ecosystem into regular courses. Data sources included EEG (P300) signals, interaction logs, self-report measures of self-regulated learning, emotional experience and ethical concerns, and semi-structured interviews. In the lab phase, P300 dynamics aligned with key instructional events, providing preliminary evidence that the neuro-adaptive pipeline is sensitive enough to justify larger-scale studies. In the field phase, 87% of students reported higher engagement and 73% perceived improved learning outcomes, while qualitative data highlighted greater clarity, adaptive support and cognitive safety, alongside concerns about privacy and data sovereignty. Perceived Nested Learning and neuro-adaptive adjustments were moderately associated with enhanced self-regulatory strategies (correlations up to r=0.57, p<0.001). We argue that, under robust ethical, data-protection and sustainability frameworks, Nested Learning can strengthen academic resilience, learner autonomy and human-centred uses of AI in higher education.
Keywords: 
Subject: Social Sciences - Education

1. Introduction

Generative artificial intelligence (AI) chatbots (e.g., ChatGPT, Gemini, Claude) and ubiquitous Internet of Things (IoT) infrastructures are rapidly transforming higher education into an agent-mediated ecosystem [1,2,3,4]. While early deployments primarily targeted individual cognitive performance or administrative efficiency, the Industry 5.0 paradigm calls for human-centred, sustainable and resilient educational models in which technology not only instructs but also cares for learners [5,6]. In this view, AI systems should help maintain cognitive health, emotional balance and ethical awareness over time, especially in high-pressure academic environments [7,8]. We use the term Nested Learning to describe such environments: digital ecosystems that envelop students in layered support structures and treat errors not as failings but as opportunities for adaptive scaffolding [9,10].
In this article, Nested Learning is introduced as a new conceptual and design proposal, rather than as an already validated theory. We use the term to denote learning environments in which learners are surrounded by nested envelopes of support—human, artificial and contextual—that jointly modulate pacing, representation and difficulty over time. Nested Learning is thus not a generic label for any AI-enhanced course, but a specific configuration that: (i) integrates generative AI agents, neuroeducational protocols and smart-campus data in a unified orchestration layer; (ii) foregrounds cognitive safety, self-regulated learning and a Pedagogy of Hope as explicit sustainability goals; and (iii) treats learner trajectories as dynamical processes to be guided, rather than as static outcomes to be measured. The present study offers an initial operationalisation and exploratory empirical illustration of this proposal; future work will be needed to test and refine its boundaries across contexts.
Nested Learning is conceived as a neuro-adaptive ecosystem that integrates generative AI agents, campus IoT networks and neuroeducational protocols into a unified orchestration layer. At its core, the learner’s neurocognitive state (e.g., attention, cognitive load, affect) is continuously sensed through mobile electroencephalography (EEG) and interaction traces, and then coupled with the external instructional context in real time [11,12,13,14]. In this work, the P300 component is used as a key attentional marker, embedded within dedicated neuroeducational protocols (PRONIN and SIEN) that structure sessions into micro-events and map instructional phases to neurophysiological windows [9]. This coupling enables the ecosystem to modulate pacing, representation and task difficulty in response to measurable changes in students’ cognitive states.
In parallel, multimodal learning analytics and deep learning have broadened the range of signals available to educators, from keystroke and clickstream data to affective cues and biosignals [15,16,17]. Recent reviews on AI in education and intelligent tutoring emphasise substantial progress in automated support and personalisation, but also highlight the lack of integrative frameworks that connect multimodal sensing, adaptive policies and ethical governance in a single model [10,18,19]. Most current applications either (i) use generative AI as a standalone tool for content generation or feedback, or (ii) treat neuroimaging within tightly controlled laboratory settings that are difficult to scale to authentic classrooms [2,3]. There remains a gap between these strands: we lack conceptual and technical frameworks that jointly orchestrate generative AI agents, multimodal deep learning and neurophysiological sensing in ways that are transparent, reproducible and aligned with educational values such as autonomy, equity and sustainability [6,20].
Alongside these optimistic narratives, a substantial body of critical scholarship warns that AI in education can intensify datafication, commercialisation and automation of pedagogical decisions, with problematic implications for equity, professional autonomy and student agency [21,22,23,24,25,26]. Our work is explicitly situated at this intersection: Nested Learning is proposed not as a neutral technical fix, but as a design framework that must be evaluated against these concerns, particularly in relation to cognitive safety, data governance and sustainable workloads for teachers and students.
Study-design note (neuro–field integration). To address this gap, we adopt a two-phase mixed-methods design that calibrates the Nested Learning ecosystem in a laboratory setting and then evaluates it at scale in authentic courses [27]. As summarised in Table 1, a first neuro-experimental phase with n = 18 biomedical undergraduates uses mobile EEG (Emotiv EPOC, 14 channels, 128 Hz) during problem-solving tasks mediated by ChatGPT to examine whether P300 dynamics and derived neuroeducational indices are sufficiently sensitive and stable under the proposed protocol to justify larger-scale studies [11,12]. This phase is explicitly conceived as an exploratory calibration study rather than a fully powered validation of the neuro-adaptive system. A second field phase in a Madrid-based university involves n = 380 participants (300 students, 80 lecturers), embedding the ecosystem into regular courses with multiple generative AI platforms (ChatGPT, Gemini, Claude, Copilot, DeepSeek, GPT-o1) orchestrated through distributed-agent frameworks (JADE, PADE, LangChain) and connected to the campus IoT infrastructure. This combination allows us to study both the neurophysiological underpinnings of Nested Learning and its perceived impact on engagement, self-regulation, cognitive safety and ethical concerns in real-world contexts. Figure 1 provides a temporal overview of the two phases.
Beyond the technical challenge, AI-mediated higher education raises a broader pedagogical question: what forms of knowledge, agency and wellbeing should be valued when AI can already produce sophisticated outputs on demand? From a sustainability perspective, focusing solely on product quality (grades, performance metrics) is insufficient. Sustainable higher education requires attention to students’ capacity for self-regulated learning, their experience of cognitive safety and their sense of hope and meaning in the learning process [7,8,20]. We therefore position Nested Learning within a “Pedagogy of Hope” perspective, in which AI and neuroadaptive technologies are used to expand, rather than erode, human agency—particularly for students who are at risk of disengagement, burnout or marginalisation [9,10,28,29].
In this framing, the role of generative AI agents extends beyond delivering feedback or solving tasks. Agents become mediators of process visibility: they expose planning traces, testing strategies, evidence citations and neuro-adaptive adjustments in ways that students and teachers can inspect and critique [2,3,6,30]. The IoT layer enriches this picture by bringing contextual signals from physical spaces (e.g., classroom occupancy, environmental conditions) into the decision loop, supporting more holistic understandings of when and why learners thrive or struggle [31]. Figure 2 provides a high-level overview of this multi-layer ecosystem and its multimodal data pipeline, while Figure 3 situates Nested Learning conceptually at the intersection of generative AI, neuroeducation and sustainable higher education.
To make the study design transparent, Table 1 summarises the two phases and their main characteristics.
This study introduces Nested Learning as a neuro-adaptive, agent-mediated ecosystem for sustainable higher education. Our contribution is fourfold:
(i) Conceptually, we articulate Nested Learning as a multi-layer architecture that integrates generative AI, neuroeducation and IoT within a Pedagogy-of-Hope framework, positioning cognitive safety, resilience and autonomy as explicit sustainability goals [5,6,9].
(ii) Methodologically, we propose a multimodal deep-learning pipeline that aligns EEG (P300) dynamics with instructional events and agent interventions, enabling neuro-adaptive policies that remain interpretable for educators [11,12,13,14].
(iii) Empirically, we provide preliminary findings from a two-phase mixed-methods design: a neuro-experimental calibration with n = 18 students and a field implementation with n = 380 participants, focusing on engagement, self-regulated learning, perceived clarity, adaptive support and ethical concerns [8,20,27].
(iv) From a governance perspective, we discuss how privacy-by-design, data sovereignty and ethical stewardship are necessary to deploy Nested Learning ecosystems in ways that are compatible with sustainable higher education and the broader goals of Industry 5.0, while responding to critical concerns about datafication and automation in AI in education [6,21,25,30,31].
As illustrated in Figure 4, Figure 5 and Figure 6, the proposed Nested Learning ecosystem combines low-cost neuroimaging, real-time interaction with generative AI agents, and a multi-platform LLM workstation deployed in authentic higher-education settings.

2. Theoretical Framework

This section outlines the theoretical foundations that inform the Nested Learning ecosystem. We organise the framework into four interrelated strands: (i) Nested Learning and the Pedagogy of Hope; (ii) multimodal deep learning and neuroimaging; (iii) agent-based generative AI and IoT infrastructures; and (iv) the integration of neurotheoretical, neuromethodological and neurodidactic perspectives. Figure 7 provides a synthetic overview of how these strands converge, and Table 2 summarises how they are operationalised in the present study.

2.1. Nested Learning and the Pedagogy of Hope

We conceptualise Nested Learning as the capacity of an educational ecosystem to envelop learners in a layered structure of cognitive, emotional and ethical support, while preserving their autonomy and agency. This notion is aligned with Freirean traditions of critical pedagogy and the Pedagogy of Hope, which emphasise dialogic relationships, co-construction of meaning and the refusal to naturalise exclusion or failure [9,10,28,29]. In a Nested Learning environment, mistakes are treated as diagnostic signals that trigger adaptive scaffolds rather than as terminal judgements, and learners are invited to interpret AI outputs critically rather than passively accepting them.
From a neuroeducational standpoint, this orientation resonates with research showing that learning is tightly coupled to emotional and social experience [7,10]. Feelings of safety, belonging and hope modulate attention, memory consolidation and self-regulated learning (SRL), especially in high-stakes contexts [8]. SRL frameworks further highlight the importance of metacognitive monitoring, strategic planning and reflective evaluation as learners navigate complex tasks [20]. In Nested Learning, these SRL processes are not only internal; they are distributed across learners, peers, educators and AI agents, forming a socio-technical network that supports planning, feedback and reflection over time [3,16].
Critically, Nested Learning is also informed by the critical literature on AI in education, which warns against unexamined datafication, commercialisation and automation of pedagogical decisions [21,22,23,24,25,26]. Rather than assuming that more sensing and more AI are inherently beneficial, the framework treats neuro-adaptive capabilities as tools that must be constrained by human-centred values and institutional governance. We therefore interpret Nested Learning as a design stance with three key implications: (i) relational, in that learners are embedded in supportive human and technological relationships; (ii) temporal, in that learning trajectories are followed over extended periods instead of being reduced to single assessments; and (iii) ethical, in that the ecosystem is explicitly oriented towards inclusion, dignity and sustainability in line with Industry 5.0 principles [5,6,32]. These implications guide the construction of our neuroadaptive ecosystem and its evaluation in higher education.

2.2. Multimodal Deep Learning and Neuroimaging

To operationalise the “nesting” of support, the system must perceive and interpret the learner’s state through multiple channels. Multimodal learning analytics brings together digital interaction traces (clicks, keystrokes, navigation patterns), physiological signals (e.g., heart rate, skin conductance) and behavioural cues (e.g., gaze, posture) to model engagement and cognitive load [15,16,17]. Recent advances in deep learning enable the joint modelling of such heterogeneous data streams, capturing temporal dependencies and non-linear interactions that are difficult to express in traditional models [30].
In this work, EEG plays a central role within the multimodal pipeline. Mobile headsets such as Emotiv EPOC+ allow the recording of cortical dynamics in authentic educational settings, albeit with lower spatial resolution than clinical devices. We focus on the P300 component of the event-related potential (ERP), a well-established marker of attention and context updating that typically emerges 250–500 ms after salient stimuli [13,14]. When embedded in structured protocols that align experimental events with pedagogical micro-tasks, P300 dynamics can index the extent to which learners detect and process critical information [11,12].
Recent work on EEG-based adaptive learning has shown that neural markers of workload, attention and engagement can be used to modulate task difficulty, feedback timing or multimedia presentation in real or near-real time [11]. However, most studies operate in tightly constrained scenarios (e.g., single-task training, laboratory-based interfaces) and rarely involve generative AI. Our neuroeducational protocols (PRONIN and SIEN) specify sequences of tasks, prompts and feedback episodes that are time-locked to EEG markers, enabling a principled coupling between the instructional script and neurophysiological responses [9]. Within the multimodal deep-learning layer, P300 amplitudes and latencies are combined with interaction logs and contextual features to generate state representations that can inform neuro-adaptive policies. These policies remain interpretable for educators, who can relate changes in attention and engagement to concrete instructional events rather than to opaque numerical scores [10].

2.3. Agent-Based Generative AI and IoT

Generative AI systems—particularly large language models (LLMs)—are increasingly used to provide feedback, explanations and examples in higher education [3]. However, single-agent configurations often operate as black boxes and are difficult to embed into broader pedagogical workflows. Recent work on retrieval-augmented generation (RAG) and agentic orchestration has proposed architectures in which multiple specialised agents collaborate, each with access to tools, external knowledge bases and evaluation routines, making intermediate reasoning steps and evidence sources explicit.
In the Nested Learning ecosystem, we adopt an agent-based architecture in which different LLM-based agents are responsible for functions such as task decomposition, formative feedback, ethical guidance and data-governance monitoring. Agents are instantiated across several commercial and open platforms (ChatGPT, Gemini, Claude, Copilot, DeepSeek, GPT-o1), orchestrated through frameworks such as JADE, PADE and LangChain, and instrumented via logging for research purposes. The multi-agent station depicted in Figure 6 exemplifies this design and aligns with recent proposals for AI “orchestrators” in education [1,3].
The IoT and smart-campus layer complements the agent infrastructure by providing contextual data from physical spaces: occupancy sensors, environmental measurements and device telemetry. These signals inform decisions about pacing, modality and timing (e.g., slowing down activities during periods of high noise or overload, or suggesting asynchronous reflection when cognitive fatigue is detected). By connecting generative AI agents to IoT streams, the system can coordinate actions across digital and physical environments, making Nested Learning a truly cyber-physical ecosystem and extending classical adaptive tutoring and AIED models towards whole-campus, sustainability-oriented deployments [18,19].

2.4. Neurotheoretical, Neuromethodological and Neurodidactic Integration

The final strand of the framework concerns the integration of theoretical, methodological and didactic levels. Neurotheoretically, our work is grounded in models that link attention, prediction and emotion to learning, including theories of context updating, predictive coding and affective modulation of cognition [7,8,13]. Neuromethodologically, we align EEG protocols, behavioural tasks and self-report measures in a convergent mixed-methods design, combining laboratory calibration with ecological deployment in real courses [10,12,27]. Neurodidactically, we translate these insights into concrete instructional patterns (e.g., micro-cycles of challenge–support–reflection) that can be orchestrated by AI agents while remaining legible and adjustable for teachers [9].
Figure 7 situates these three levels—neurotheoretical, neuromethodological and neurodidactic—within the broader Nested Learning ecosystem. At the micro level, individual learners’ neurocognitive states are modelled through P300 and multimodal signals. At the meso level, instructional episodes and AI-mediated interactions implement adaptive strategies grounded in SRL and Pedagogy-of-Hope principles. At the macro level, institutional policies, data-governance frameworks and sustainability agendas constrain and enable how the ecosystem can be deployed [5,6,32]. Table 2 synthesises how these levels map onto concrete constructs and indicators in the present study.

2.5. Mathematical Framing of Nested Learning Dynamics

Beyond the conceptual and neuroeducational arguments, the Nested Learning ecosystem can be viewed as a discrete-time dynamical system that updates learners’ states in response to AI-mediated and human-mediated support. Formalising this perspective helps clarify which parameters are responsible for growth, saturation and potential inequalities, in line with prior work on algorithmic models of feedback and learning trajectories.
Let $i$ index learners and $t = 0, 1, 2, \ldots$ index instructional episodes (e.g., task cycles or neuroeducational segments). We define a latent state vector
$$\mathbf{z}_{i,t} = \big(S_{i,t},\, E_{i,t},\, R_{i,t},\, C_{i,t}\big)^{\top} \in [0,1]^{4}, \tag{1}$$
where $S_{i,t}$ denotes task-related performance (product quality), $E_{i,t}$ engagement, $R_{i,t}$ self-regulated learning (SRL) and $C_{i,t}$ perceived cognitive safety. The $[0,1]$ range represents normalised indices relative to pedagogical targets.
During episode $t$, the Nested Learning ecosystem applies an intervention or action $a_{i,t}$ chosen by educators and/or AI agents (e.g., changing difficulty, modality or pacing). We summarise the effective support quality on each dimension by a vector
$$\mathbf{F}_{i,t} = \big(F_{i,t}^{(S)},\, F_{i,t}^{(E)},\, F_{i,t}^{(R)},\, F_{i,t}^{(C)}\big)^{\top} \in [0,1]^{4}, \tag{2}$$
where $F_{i,t}^{(k)} = F^{(k)}\big(\mathbf{z}_{i,t}, a_{i,t}, c_{t}\big)$ depends on the current state, the action and contextual conditions $c_{t}$ (e.g., classroom density, time pressure, environmental noise).

2.5.1. Component-Wise Logistic Update

For each dimension $k \in \{S, E, R, C\}$, we posit a discrete-time update of the form
$$z_{i,t+1}^{(k)} = z_{i,t}^{(k)} + \eta_{i}^{(k)}\, F_{i,t}^{(k)}\, \big(1 - z_{i,t}^{(k)}\big) - \delta_{i}^{(k)}\, \big(1 - F_{i,t}^{(k)}\big)\, z_{i,t}^{(k)} + \xi_{i,t}^{(k)}, \tag{3}$$
where $\eta_{i}^{(k)} \geq 0$ controls growth in response to high-quality support, $\delta_{i}^{(k)} \geq 0$ captures decay or erosion in the absence of such support, and $\xi_{i,t}^{(k)}$ is a zero-mean disturbance term representing unmodelled factors. When $F_{i,t}^{(k)}$ is high and $z_{i,t}^{(k)}$ is far from 1, the first term dominates and the state grows; when support is weak and the state is high, the second term dominates and the state can regress. Equation (3) thus generalises logistic growth with forgetting and aligns with prior models for iterative learning under feedback [19].
In vector form, we can write
$$\mathbf{z}_{i,t+1} = \mathbf{z}_{i,t} + \boldsymbol{\eta}_{i} \odot \mathbf{F}_{i,t} \odot \big(\mathbf{1} - \mathbf{z}_{i,t}\big) - \boldsymbol{\delta}_{i} \odot \big(\mathbf{1} - \mathbf{F}_{i,t}\big) \odot \mathbf{z}_{i,t} + \boldsymbol{\xi}_{i,t}, \tag{4}$$
where $\boldsymbol{\eta}_{i}$ and $\boldsymbol{\delta}_{i}$ are learner-specific parameter vectors, $\odot$ denotes element-wise multiplication and $\mathbf{1}$ is a vector of ones. Analytically, studying the increments
$$\Delta z_{i,t}^{(k)} = z_{i,t+1}^{(k)} - z_{i,t}^{(k)} \tag{5}$$
allows us to characterise empirical convergence patterns (e.g., diminishing returns near the ceiling, differential growth across $S$, $E$, $R$, $C$) and to relate them to support quality and parameter profiles.
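To make the dynamics of Equation (3) concrete, the following minimal Python sketch simulates a single state dimension under strong versus weak support. The parameter values (η = 0.15, δ = 0.05) and support levels are illustrative choices, not estimates from the study's data.

```python
# Minimal simulation of the component-wise logistic update:
# growth under high-quality support, decay when support is weak.
# All parameter values are illustrative, not fitted to study data.

def nested_update(z, F, eta, delta, xi=0.0):
    """One episode of the state update for a single dimension k."""
    z_next = z + eta * F * (1.0 - z) - delta * (1.0 - F) * z + xi
    return min(1.0, max(0.0, z_next))  # keep the state in [0, 1]

# High-quality support (F = 0.9) versus weak support (F = 0.2)
z_strong, z_weak = 0.3, 0.3
for t in range(20):
    z_strong = nested_update(z_strong, F=0.9, eta=0.15, delta=0.05)
    z_weak = nested_update(z_weak, F=0.2, eta=0.15, delta=0.05)

print(round(z_strong, 3), round(z_weak, 3))
```

With the disturbance set to zero, each trajectory approaches a fixed point determined by the balance between the growth and decay terms, illustrating the diminishing-returns behaviour near the ceiling mentioned above.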

2.5.2. Neuro-Adaptive Measurement Model

The latent state $\mathbf{z}_{i,t}$ is not directly observed. Instead, the ecosystem collects a multimodal observation vector
$$\mathbf{y}_{i,t} = \big(\mathbf{p}_{i,t},\, \mathbf{b}_{i,t},\, \mathbf{q}_{i,t}\big)^{\top}, \tag{6}$$
where $\mathbf{p}_{i,t}$ contains EEG features (including P300 amplitudes and latencies), $\mathbf{b}_{i,t}$ behavioural traces (logs, timings, error patterns) and $\mathbf{q}_{i,t}$ self-report scores (engagement, SRL, cognitive safety). We assume a measurement model
$$\mathbf{y}_{i,t} = g\big(\mathbf{z}_{i,t}, c_{t}\big) + \boldsymbol{\varepsilon}_{i,t}, \tag{7}$$
with $g(\cdot)$ implemented by multimodal deep learning models (e.g., recurrent or attention-based architectures) [17] and $\boldsymbol{\varepsilon}_{i,t}$ residual noise.
For P300, a simplified linear mixed-effects relationship linking attentional state to ERP amplitude can be written as
$$\mathrm{P300}_{i,t} = \beta_{0} + \beta_{1} E_{i,t} + \beta_{2}\, \mathrm{cond}_{t} + u_{i} + \epsilon_{i,t}, \tag{8}$$
where $\mathrm{cond}_{t}$ encodes task condition (e.g., baseline vs. Nested Learning), $u_{i}$ is a learner-specific random intercept and $\epsilon_{i,t}$ is an error term [11,13]. In a logistic formulation, the probability of observing a high-amplitude P300 response can be expressed as
$$\Pr\big(\mathrm{HighP300}_{i,t} = 1 \mid \mathbf{z}_{i,t}, c_{t}\big) = \sigma\big(\gamma_{0} + \gamma_{1} E_{i,t} + \gamma_{2} C_{i,t} + \gamma_{3}\, \mathrm{cond}_{t}\big), \tag{9}$$
where $\sigma(x) = 1/(1 + e^{-x})$ is the logistic function. Equations (8) and (9) formalise the intuition that stronger engagement and cognitive safety should be associated with clearer attentional signatures under Nested Learning protocols.
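As a toy illustration of this logistic formulation, the sketch below evaluates the probability of a high-amplitude P300 for two hypothetical learner profiles. All coefficients (γ₀ through γ₃) are invented placeholders, not values fitted to the study's EEG data.

```python
import math

def sigmoid(x):
    """Logistic function sigma(x) = 1 / (1 + exp(-x))."""
    return 1.0 / (1.0 + math.exp(-x))

def p_high_p300(E, C, cond, g0=-2.0, g1=2.5, g2=1.5, g3=0.8):
    """Probability of a high-amplitude P300 given engagement E,
    cognitive safety C and condition cond (0 = baseline,
    1 = Nested Learning). Coefficients are illustrative placeholders."""
    return sigmoid(g0 + g1 * E + g2 * C + g3 * cond)

# Engaged, safe learner under Nested Learning vs. disengaged baseline case
p_engaged = p_high_p300(E=0.8, C=0.7, cond=1)
p_disengaged = p_high_p300(E=0.2, C=0.3, cond=0)
print(round(p_engaged, 3), round(p_disengaged, 3))
```

Under these placeholder coefficients, the engaged profile yields a markedly higher probability of a clear attentional signature, mirroring the qualitative claim of Equations (8) and (9).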

2.5.3. Policy-Level View

From the standpoint of AI agents, Equations (4)–(7) define a partially observed Markov decision process (POMDP) over learner states. At each time step, the orchestrator maintains an estimate $\hat{\mathbf{z}}_{i,t}$ (e.g., via filtering or recurrent encoders) and chooses an action $a_{i,t}$ according to a policy
$$\pi\big(a_{i,t} \mid \hat{\mathbf{z}}_{i,t}, c_{t}\big), \tag{10}$$
with the objective of maximising long-term sustainable outcomes (e.g., a weighted combination of $S_{i,t}$, $E_{i,t}$, $R_{i,t}$ and $C_{i,t}$) while respecting ethical and governance constraints [6,32]. Although a full reinforcement-learning treatment lies beyond the scope of this article, the dynamical system in Equations (3)–(4) provides a principled basis for analysing convergence, sensitivity to support quality and potential disparities between learners given different parameter profiles $(\boldsymbol{\eta}_{i}, \boldsymbol{\delta}_{i})$.
This mathematical framing is intended as a conceptual scaffold rather than as a fully identified model in the present study. It complements the qualitative and neuroeducational perspectives outlined above by making explicit how Nested Learning can be understood as a controlled process over multi-dimensional learner states, driven by neuro-adaptive observations and governed by policy-level design choices in the generative-AI and IoT layers.

3. Materials and Methods

3.1. Overall Research Design

The study followed a mixed-methods design combining a neuro-experimental phase and a large-scale field implementation in higher education. Quantitative and qualitative strands were articulated within an interpretivist paradigm, allowing us to examine both neurophysiological sensitivity to Nested Learning interventions and participants’ lived experience of the ecosystem in situ [10,27]. As summarised in Table 1, Phase 1 focused on the calibration of neuro-adaptive markers under controlled conditions, whereas Phase 2 examined perceived Nested Learning, interaction with generative AI, perceived neuroadaptive adjustments and the climate of hope and cognitive safety in a real university context.
Within the mathematical framing introduced in Section 2.5, these two phases provide complementary information. Phase 1 informs the measurement model g ( · ) and the relation between attentional states and P300 dynamics (Equations (7)–(9)), while Phase 2 provides empirical data on the components of the latent state vector z i , t (performance, engagement, self-regulation and cognitive safety) and their evolution under the Nested Learning ecosystem, approximating the update structure in Equation (3). Overall, the study is primarily theory-driven and exploratory in orientation, using empirical data to probe the plausibility and boundaries of the proposed framework rather than to exhaustively validate all its components.

3.2. Sample Size and Power Considerations

Phase 1 was designed as a neuro-experimental calibration and feasibility study rather than as a definitive validation of the neuro-adaptive system. The sample size of n = 18 was determined pragmatically, balancing laboratory capacity, time constraints and the intensive nature of EEG-based protocols, and is consistent with typical sample sizes in within-subject ERP studies that focus on detecting medium-to-large effects in P300 amplitude and latency [11,13]. No formal a priori power analysis was conducted; consequently, Phase 1 is underpowered for small effects and its results are interpreted as sensitivity checks and proof-of-concept evidence, not as normative benchmarks.
Phase 2 was conceived as a field study to explore how the Nested Learning ecosystem is perceived in authentic higher-education contexts. The resulting sample of n = 380 participants (300 students, 80 lecturers) exceeds common rules of thumb for factor-analytic work (e.g., a minimum of 5–10 participants per item) and is adequate for the psychometric analyses reported in Section 4 [10]. Nonetheless, both phases should be considered as initial empirical instantiations of the framework, and future work with larger and more diverse samples is needed to refine and stress-test the proposed models.

3.3. Phase 1: Neuro-Experimental Calibration

3.3.1. Participants and Context

The neuro-experimental phase used a quantitative, experimental design with n = 18 undergraduate students enrolled in the fourth year of a biomedical degree (age range 21–24 years). Participants were recruited via email announcements and classroom visits and volunteered to take part in a laboratory session focused on problem-solving tasks mediated by ChatGPT, under a Nested Learning protocol that alternated segments of challenge, scaffolded support and reflection. Inclusion criteria were enrolment in the degree, absence of diagnosed neurological conditions and normal or corrected-to-normal vision; no monetary incentives were provided.
To partially control for potential confounders, sessions were scheduled in late morning or early afternoon, avoiding extreme times of day. Before the EEG setup, participants completed a brief self-report checklist on sleep quality, perceived fatigue and stress level on the day of the experiment. Participants who reported acute illness or extreme sleep deprivation were rescheduled. Within sessions, short breaks were offered between blocks to mitigate fatigue, and task order was counterbalanced across participants to reduce systematic ordering effects.
Sessions took place in a university neuroscience laboratory equipped with an EEG recording station, multiple display screens and controlled ambient conditions (lighting, noise, seating). Each session lasted approximately 45–60 minutes and followed a scripted sequence including: (i) baseline rest, (ii) instruction and calibration, (iii) task blocks with ChatGPT-mediated problem solving, and (iv) reflective segments with explicit prompts aligned to the Pedagogy-of-Hope principles in Section 2.1. The objective of Phase 1 was to calibrate the sensitivity of P300 and related EEG markers to instructional micro-events orchestrated under the Nested Learning paradigm, acknowledging that the small sample size limits generalisability.

3.3.2. Task Script and Event Structure

The experimental script defined a sequence of problem-solving tasks grounded in domain-relevant biomedical scenarios (e.g., interpreting simplified clinical cases, reasoning about physiological mechanisms, comparing alternative treatment rationales). Each task consisted of several steps: initial case presentation, prompting the student to propose a hypothesis or explanation, interaction with ChatGPT to obtain feedback or alternative viewpoints, and a brief reflective prompt asking the student to justify or revise their reasoning.
Within this script, we defined micro-events as pedagogically meaningful units that could elicit discrete attentional responses: (i) onset of a new case or key piece of information; (ii) display of an AI-generated explanation or hint; (iii) explicit feedback highlighting an error or misconception; and (iv) reflective prompts explicitly inviting the student to reconsider or extend their answer. Events were tagged according to dimensions such as novelty (new vs. repeated information), cognitive demand (low vs. high), and explicitness of feedback (implicit vs. explicit). These tags were later used as predictors in the P300 models in Equations (8) and (9).
Pedagogical events and ChatGPT interactions were coordinated through a central controller that sent time-stamped messages to an Apache Kafka bus. Each message encoded the event type, its pedagogical intensity and the current phase of the Nested Learning micro-cycle (challenge, support, reflection). Figure 8 summarises the end-to-end pipeline for Phase 1.
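A hypothetical sketch of such a time-stamped event message is shown below; the schema, field names and values are illustrative assumptions rather than the study's actual Kafka payload format, and the producer call itself is omitted.

```python
import json
import time
from dataclasses import dataclass, asdict

# Hypothetical schema for the time-stamped markers published to the
# Kafka bus; field names are illustrative, not the study's real schema.

@dataclass
class MicroEvent:
    event_type: str    # e.g., "ai_hint", "explicit_feedback", "reflective_prompt"
    phase: str         # micro-cycle phase: "challenge", "support" or "reflection"
    intensity: int     # pedagogical intensity, e.g., 1 (low) to 3 (high)
    timestamp_ms: int  # epoch milliseconds, used to time-lock EEG epochs

def encode_event(event: MicroEvent) -> bytes:
    """Serialise an event for publication on the message bus."""
    return json.dumps(asdict(event)).encode("utf-8")

evt = MicroEvent("explicit_feedback", "support", 2, int(time.time() * 1000))
payload = encode_event(evt)      # bytes ready for a Kafka producer.send(...)
decoded = json.loads(payload)
print(decoded["event_type"], decoded["phase"])
```

Carrying an explicit millisecond timestamp in each message is what allows the acquisition software, described next, to write event codes into the EEG stream with tight synchronisation.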

3.3.3. EEG Acquisition and Synchronisation

Cortical activity was recorded using a low-cost Emotiv EPOC device with 14 channels, sampling at 128 Hz and with a semi-standard electrode placement aligned with the international 10–20 system. Data acquisition software subscribed to the Kafka topics carrying event markers and wrote event codes into the EEG stream as auxiliary channels, ensuring millisecond-level synchronisation between pedagogical stimuli and brain signals.
Signal quality was monitored in real time using manufacturer-provided indicators (contact quality) and visual inspection. If one or more channels consistently failed to reach acceptable quality, the headset was re-adjusted before continuing. Sessions with pervasive poor signal quality were terminated and data excluded from analysis; the number of excluded sessions and retained participants is reported in Section 4.

3.3.4. Preprocessing, Artefact Handling and Feature Engineering

EEG preprocessing followed standard guidelines for educational and mobile EEG studies [11,13]. Signals were:
(a) band-pass filtered between 0.1 Hz and 30 Hz and notch filtered at 50 Hz to remove line noise;
(b) re-referenced to the average of mastoid channels;
(c) segmented into epochs from −200 ms to +800 ms around event onsets;
(d) baseline-corrected using the pre-stimulus interval (−200 to 0 ms);
(e) subjected to semi-automatic artefact rejection (amplitude and gradient criteria), complemented by visual inspection.
Artefact rejection thresholds were defined a priori (e.g., absolute amplitude > ±100 μV or rapid gradients indicative of muscle artefacts). When signal quality allowed, an independent component analysis (ICA) was applied to identify and remove components corresponding to ocular (EOG) or muscular (EMG) activity; otherwise, contaminated epochs were discarded. For each participant and condition, we required a minimum number of clean epochs per event type (e.g., at least 25–30) to retain the data for analysis. The proportion of discarded epochs per participant and condition is summarised in Section 4.
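As a minimal illustration of these a priori criteria, the following NumPy sketch flags epochs whose absolute amplitude exceeds ±100 μV or whose sample-to-sample gradient exceeds a threshold. The gradient threshold of 50 μV per sample and the synthetic data are illustrative assumptions, not values taken from the study.

```python
import numpy as np

def reject_artefacts(epochs, amp_thresh=100.0, grad_thresh=50.0):
    """Return a boolean mask of clean epochs.

    epochs: array (n_epochs, n_channels, n_samples) in microvolts.
    An epoch is rejected if any sample exceeds +/- amp_thresh (the a priori
    100 uV criterion) or if any sample-to-sample gradient exceeds grad_thresh
    (an illustrative proxy for the muscle-artefact criterion).
    """
    amp_ok = np.all(np.abs(epochs) <= amp_thresh, axis=(1, 2))
    grad_ok = np.all(np.abs(np.diff(epochs, axis=2)) <= grad_thresh, axis=(1, 2))
    return amp_ok & grad_ok

rng = np.random.default_rng(0)
epochs = rng.normal(0.0, 5.0, size=(40, 14, 128))  # 40 synthetic 1 s epochs, 14 channels
epochs[3] += 500.0                                 # inject one gross amplitude artefact
mask = reject_artefacts(epochs)
clean = epochs[mask]                               # epochs retained for ERP averaging
```

In a real pipeline the mask would be applied per event type before checking the minimum-epoch requirement described above.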
P300 features were then extracted at posterior channels corresponding to Pz in the Emotiv configuration, focusing on the 250–400 ms post-stimulus window [13,14]. For each participant and event type, we computed:
  • mean P300 amplitude (in μV) in the 250–400 ms window;
  • P300 peak latency (ms) within the same window;
  • binary indicators of “high P300” responses based on participant-specific thresholds (e.g., amplitude exceeding mean + one standard deviation for that participant).
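The three feature computations above can be sketched as follows, assuming a 128 Hz sampling rate and epochs spanning −200 to +800 ms; the Gaussian "P300" used in the demonstration is synthetic and purely illustrative.

```python
import numpy as np

def p300_features(epoch_pz, sfreq=128.0, tmin=-0.2, win=(0.25, 0.40)):
    """Mean amplitude (uV) and peak latency (ms) at Pz in the 250-400 ms window.

    epoch_pz: 1-D baseline-corrected epoch starting at tmin seconds
    relative to stimulus onset, sampled at sfreq Hz.
    """
    times = tmin + np.arange(epoch_pz.size) / sfreq
    sel = (times >= win[0]) & (times <= win[1])
    mean_amp = float(epoch_pz[sel].mean())
    peak_latency_ms = float(times[sel][np.argmax(epoch_pz[sel])] * 1000.0)
    return mean_amp, peak_latency_ms

def high_p300_flags(mean_amps):
    """Participant-specific binary indicator: amplitude > mean + 1 SD."""
    amps = np.asarray(mean_amps, dtype=float)
    return amps > amps.mean() + amps.std()

# Synthetic demonstration: a Gaussian bump peaking exactly at 300 ms
t = -0.2 + np.arange(128) / 128.0
epoch = 5.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.02 ** 2))
amp, lat = p300_features(epoch)
```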
Algorithm 1 summarises the processing pipeline for a single participant.
Algorithm 1: Neuro-experimental pipeline for a single participant (Phase 1)
Require: EEG raw data $E_i$, event markers $M_i$, event descriptors $D_i$
Ensure: P300 features and model-ready dataset $T_i$
1: Band-pass and notch filter $E_i$
2: Re-reference and align $E_i$ to markers $M_i$
3: for all events $m \in M_i$ do
4:    Extract epoch $e_{i,m}$ from −200 ms to +800 ms
5:    Baseline-correct $e_{i,m}$ using −200 to 0 ms
6:    if artefact detected in $e_{i,m}$ then
7:        discard $e_{i,m}$ ▹ or correct via ICA if applicable
8:    else
9:        Compute mean P300 amplitude and peak latency at Pz
10:       Append features and corresponding descriptors from $D_i$
11:    end if
12: end for
13: Construct $T_i$ by merging all valid epochs and their descriptors
14: return $T_i$

3.3.5. Statistical Modelling of P300 Sensitivity

P300 features were analysed using linear and logistic mixed-effects models aligned with Equations (8) and (9). For amplitude, we estimated models of the form
\[
\mathrm{P300Amp}_{i,m} = \beta_0 + \beta_1\,\mathrm{EventIntensity}_m + \beta_2\,\mathrm{Nested}_m + u_i + \epsilon_{i,m},
\]
where $\mathrm{EventIntensity}_m$ captured pedagogical descriptors (e.g., novelty, demand), $\mathrm{Nested}_m$ distinguished Nested Learning segments from baseline conditions, $u_i \sim \mathcal{N}(0, \sigma_u^2)$ was a participant-level random intercept and $\epsilon_{i,m}$ an error term [11,13]. For high-P300 probability, we used logistic mixed models
\[
\log \frac{\Pr(\mathrm{HighP300}_{i,m}=1)}{1 - \Pr(\mathrm{HighP300}_{i,m}=1)} = \gamma_0 + \gamma_1\,\mathrm{EventIntensity}_m + \gamma_2\,\mathrm{Nested}_m + v_i,
\]
with $v_i$ a random intercept. These models provide empirical estimates for the parameters in Equations (8) and (9), anchoring the measurement model $g(\cdot)$ in observed data. Given the limited sample size, these analyses are treated as exploratory–confirmatory probes of sensitivity rather than as definitive tests of complex interaction structures.
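In practice such models would be fitted with dedicated mixed-model software. As a self-contained illustration of the amplitude model's structure, the sketch below simulates data from it and recovers the fixed effects $\beta_1$ and $\beta_2$ by within-participant demeaning, which absorbs the random intercepts $u_i$; all numerical values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
n_part, n_events = 18, 60
beta0, beta1, beta2 = 2.0, 1.5, 0.8            # illustrative "true" fixed effects (uV)
u = rng.normal(0.0, 0.5, size=n_part)          # participant-level random intercepts

blocks = []
for i in range(n_part):
    intensity = rng.uniform(0.0, 1.0, size=n_events)   # EventIntensity_m
    nested = rng.integers(0, 2, size=n_events)         # Nested_m indicator
    eps = rng.normal(0.0, 1.0, size=n_events)
    amp = beta0 + beta1 * intensity + beta2 * nested + u[i] + eps
    blocks.append(np.column_stack([np.full(n_events, i), intensity, nested, amp]))
data = np.vstack(blocks)

# Demeaning within each participant removes u_i, so ordinary least squares
# on the demeaned data recovers the fixed effects beta1 and beta2.
pid = data[:, 0].astype(int)
X, y = data[:, 1:3].copy(), data[:, 3].copy()
for i in range(n_part):
    m = pid == i
    X[m] -= X[m].mean(axis=0)
    y[m] -= y[m].mean()
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
```

The within-participant estimator shown here only recovers the fixed effects; variance components and the logistic model would require a full mixed-model fit.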

3.4. Phase 2: Field Implementation in Higher Education

3.4.1. Participants, Courses and Ecosystem Deployment

Phase 2 was conducted at a university in Madrid through the deployment of the Nested Learning ecosystem in regular higher-education courses. A total of n = 380 participants took part, comprising 300 students and 80 lecturers from diverse degree programmes (STEM and non-STEM). Participation was voluntary and integrated into ordinary teaching activities; no changes were made to official grading schemes.
The ecosystem was implemented by integrating six generative-AI platforms (ChatGPT, Gemini, Claude, Copilot, DeepSeek and GPT-o1) into a unified multi-agent station, orchestrated through JADE, PADE and LangChain, and connected to the campus IoT infrastructure. Each AI platform instantiated specialised agents (e.g., “Explainer”, “Critic”, “Ethics Monitor”, “Planner”) that coordinated via message passing and tool calls. IoT sensors provided contextual information (room occupancy, temperature, noise proxies) that could be consulted by agents when deciding on pacing and modality, consistent with the cyber-physical view in Section 2.3.
In addition to demographic information (age, gender, degree programme), we collected contextual variables such as course type (STEM vs. non-STEM), course level (introductory vs. advanced) and delivery mode (face-to-face vs. hybrid). These variables were later used as covariates to partially account for differences in baseline stress, workload and pedagogical culture across courses, acknowledging that unmeasured confounders remain.
Figure 9 depicts the high-level orchestration pipeline governing Phase 2.

3.4.2. Procedures and Micro-Cycle Design

Instructors attended a workshop introducing the Nested Learning paradigm and the orchestration interface. They learned to design micro-cycles of challenge–support–reflection aligned with their course content and to parameterise the autonomy and tone of AI agents (e.g., more Socratic vs. more directive support).
During teaching sessions, students interacted with generative-AI agents for tasks such as brainstorming, code generation, explanation of concepts and feedback on drafts. The orchestrator monitored interaction patterns, simple behavioural indicators (e.g., latency between prompts, rapid-fire requests, repeated queries) and IoT context to infer a coarse estimate $\hat{z}_{i,t}$ of each learner's state, in line with the measurement model $g(\cdot)$ in Equation (7). Based on this estimate and a configurable policy $\pi$, the orchestrator adjusted suggested agent actions (e.g., switching from solution provision to metacognitive prompts, slowing down the pace, or encouraging a reflective pause).
Algorithm 2 provides a simplified pseudocode representation of the online orchestration loop for a single learner.
Algorithm 2: Online orchestration loop for a single learner (Phase 2)
Require: Initial state estimate $\hat{z}_{i,0}$, policy $\pi$, context stream $c_{0:T}$
1: for $t = 0$ to $T-1$ do
2:    Observe new interaction data $y_{i,t}$ and context $c_t$
3:    Update state estimate $\hat{z}_{i,t} \leftarrow \mathrm{Encoder}(\hat{z}_{i,t-1}, y_{i,t}, c_t)$
4:    Sample or select action $a_{i,t} \sim \pi(\cdot \mid \hat{z}_{i,t}, c_t)$
5:    Route $a_{i,t}$ to the appropriate agent(s) (e.g., Explainer, Critic, Ethics Monitor)
6:    Deliver agent response to learner and log $(\hat{z}_{i,t}, a_{i,t}, y_{i,t})$
7: end for
The encoder in line 3 can be implemented using recurrent or attention-based models that integrate past and current observations. Although we do not optimise $\pi$ via reinforcement learning in this study, Algorithm 2 formalises the policy-level view discussed in Section 2.5 and provides a scaffold for future work.
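A toy, fully self-contained rendering of this loop is sketched below; the encoder is replaced by simple exponential smoothing of a scalar load proxy, and the policy is a hand-crafted threshold rule. Both stand in for the recurrent/attention encoders and configurable policies described in the text, and all thresholds, weights and action names are illustrative assumptions.

```python
import random

def encoder(z_prev, y_t, c_t, alpha=0.3):
    """Toy encoder: exponential smoothing of a scalar load estimate.

    Stand-in for the recurrent/attention-based encoders mentioned in the text;
    y_t is a behavioural load proxy and c_t an IoT context proxy, both in [0, 1].
    """
    return (1 - alpha) * z_prev + alpha * (0.7 * y_t + 0.3 * c_t)

def policy(z_hat):
    """Hand-crafted threshold policy pi (not learned via reinforcement learning)."""
    if z_hat > 0.7:
        return "reflective_pause"        # learner appears overloaded
    if z_hat > 0.4:
        return "metacognitive_prompt"
    return "solution_support"

def orchestrate(y_stream, c_stream, z0=0.5):
    z_hat, log = z0, []
    for y_t, c_t in zip(y_stream, c_stream):
        z_hat = encoder(z_hat, y_t, c_t)              # line 3 of Algorithm 2
        log.append((round(z_hat, 3), policy(z_hat)))  # lines 4-6, condensed
    return log

random.seed(1)
y = [random.random() for _ in range(5)] + [0.95] * 5  # load rises late in the session
trace = orchestrate(y, [0.5] * 10)
```

As the load proxy rises, the selected action shifts away from direct solution support, mirroring the adaptive behaviour described in Section 3.4.2.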
At the end of the field implementation period, both students and lecturers completed the Nested Learning Experience Questionnaire (Section 3.5) and responded to open-ended questions about their experience with the ecosystem. These instruments provide proxies for the components of $z_{i,t}$ and for perceived properties of the policy $\pi$ (e.g., fairness, transparency, supportiveness).

3.5. Instruments

3.5.1. Nested Learning Experience Questionnaire

To capture participants’ perceptions of the ecosystem, we designed a Likert-type questionnaire with responses on a five-point scale (1 = strongly disagree, 5 = strongly agree). The instrument comprised 20 items grouped into four conceptual dimensions:
(a)
Perception of Nested Learning, assessing the extent to which learners felt enveloped in multi-layered cognitive and emotional support structures (e.g., “I feel accompanied by both teachers and AI tools when I get stuck”);
(b)
Interaction with Generative AI, focusing on perceived usefulness, clarity and trust in AI-mediated support (e.g., “The AI gives explanations that help me understand, not just the final answer”);
(c)
Perceived Neuroadaptive Adjustments, capturing how participants experienced changes in pacing, representation and difficulty in response to their state (e.g., “The system slows down or changes approach when I seem overloaded”);
(d)
Climate of Hope and Cognitive Safety, reflecting feelings of hope, non-punitive treatment of errors and protection of cognitive privacy (e.g., “I can make mistakes without feeling judged by the system”).
Each dimension contained five quantitative items. Items were formulated to align explicitly with the theoretical pillars described in Section 2 (Nested Learning, Pedagogy of Hope, multimodal neuroadaptation and governance/sustainability) [6,7,20,28,32]. The mapping between dimensions and theoretical constructs is summarised in Table 2. Negatively keyed items were reverse-coded before computing scale scores.
Internal consistency was assessed for each dimension using Cronbach’s α and McDonald’s ω , yielding satisfactory reliability indices for the study sample ( n = 380 ): Perception of Nested Learning ( α = 0.88 , ω = 0.89 ); Interaction with Generative AI ( α = 0.85 , ω = 0.86 ); Perceived Neuroadaptive Adjustments ( α = 0.82 , ω = 0.84 ); and Climate of Hope and Cognitive Safety ( α = 0.91 , ω = 0.92 ).
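Cronbach's α can be computed directly from the item-score matrix. The sketch below applies the standard formula to a simulated five-item scale; the simulated loadings and sample are illustrative and are not the study data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the sum score
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Simulated five-item scale: one common trait plus item-specific noise
rng = np.random.default_rng(7)
trait = rng.normal(size=(380, 1))
items = trait + rng.normal(scale=0.7, size=(380, 5))
alpha = cronbach_alpha(items)
```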
The questionnaire was conceived as a theory-driven, primarily confirmatory instrument; exploratory analyses are reported in Section 4 but do not override the original four-dimensional structure.

3.5.2. Qualitative Protocol

To complement quantitative data and preserve conceptual coherence, we developed a qualitative protocol consisting of 12 open-ended questions derived directly from the four quantitative dimensions (three questions per dimension). For example, the Nested Learning dimension included prompts such as “Describe a moment when you felt especially supported by the combination of teachers and AI tools”, whereas the cognitive-safety dimension included questions like “Have you ever felt that the system was invading your privacy? Why or why not?”.
This design ensured a one-to-one correspondence between quantitative and qualitative strands without the need for additional coding frameworks. Qualitative data were later analysed using a selective coding strategy, taking the four quantitative dimensions as central categories and identifying patterns, tensions and illustrative excerpts within each [9,27]. Coding was conducted by at least two researchers. An initial subset of responses (approximately 25%) was double-coded independently to refine the codebook and assess intercoder agreement (Cohen’s κ ), which is reported in Section 4. Discrepancies were resolved through discussion and, when necessary, consultation with a third reviewer. The remaining data were coded using the agreed scheme. This procedure reinforces the primarily confirmatory nature of the qualitative analysis, while still allowing for the identification of emergent subthemes within each dimension.
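Intercoder agreement of the kind reported here can be computed with Cohen's κ as follows; the dimension labels and the toy double-coded excerpts are illustrative only.

```python
def cohen_kappa(labels_a, labels_b):
    """Cohen's kappa for two coders' category assignments (equal-length lists)."""
    n = len(labels_a)
    assert n == len(labels_b) and n > 0
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n   # observed agreement
    cats = set(labels_a) | set(labels_b)
    p_e = sum((labels_a.count(c) / n) * (labels_b.count(c) / n) for c in cats)
    return (p_o - p_e) / (1.0 - p_e)                            # chance-corrected

# Toy double-coded excerpts over the four dimensions (labels hypothetical)
coder_a = ["NL", "AI", "NA", "SAF", "NL", "AI", "NL", "NA"]
coder_b = ["NL", "AI", "NA", "SAF", "AI", "AI", "NL", "SAF"]
kappa = cohen_kappa(coder_a, coder_b)
```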

3.6. Data Analysis

3.6.1. Neuro-Experimental Data

For Phase 1, EEG data were processed as described above. Descriptive statistics were computed for P300 amplitude and latency across conditions and participants. To examine neuroadaptive sensitivity, we estimated Pearson correlations between P300 amplitude and continuous indices of pedagogical-event intensity, and fitted linear mixed-effects models with P300 amplitude as the dependent variable and event descriptors, condition indicators and participant-level random effects as predictors, following Equation (8) [11,13]. Logistic mixed models of the form in Equation (9) were used to model the probability of high-amplitude P300 responses as a function of event intensity and Nested Learning condition.
Model assumptions (normality of residuals, homoscedasticity, absence of influential outliers) were checked using standard diagnostic plots. Where necessary, robust standard errors were computed. Effect sizes (e.g., standardised β coefficients, odds ratios) are reported in Section 4. Given the limited sample size and calibration focus, these analyses are interpreted as providing upper-bound estimates of sensitivity and as informing the design of future, larger-scale neuroadaptive studies.

3.6.2. Questionnaire and Interaction Data

For Phase 2, item-level responses were first inspected for missingness and distributional properties. Scale scores for each of the four dimensions were computed as the mean of their five constituent items. Descriptive statistics (means, standard deviations, skewness, kurtosis) were obtained for each dimension and for subgroups of interest (e.g., students vs. lecturers, programme type, course level).
Internal consistency of the scales was assessed using Cronbach’s α and McDonald’s ω , with bootstrapped confidence intervals. Construct validity was examined through exploratory factor analysis (principal axis factoring with oblique rotation) followed by confirmatory factor analysis (CFA) in a separate split-half subsample to verify the four-factor structure and its alignment with the theoretical framework [10,20]. Global fit indices (CFI, TLI, RMSEA, SRMR) are reported in Section 4. Here again, analyses are primarily confirmatory, testing whether data are compatible with the a priori four-dimensional model derived from the theoretical framework, rather than searching for an entirely new latent structure.
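Principal axis factoring with oblique rotation requires specialised tooling; as a minimal stand-in for the full EFA/CFA workflow, the sketch below reproduces only the random split-half logic and the eigenvalue (Kaiser) screen on a simulated four-factor item set. All loadings, variances and the inter-factor correlation are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a 380 x 20 item matrix: four correlated latent factors,
# five items per factor (structure mirrors the questionnaire design).
n = 380
factor_cov = 0.4 + 0.6 * np.eye(4)                 # inter-factor correlation 0.4
factors = rng.multivariate_normal(np.zeros(4), factor_cov, size=n)
loadings = np.zeros((20, 4))
for f in range(4):
    loadings[5 * f:5 * (f + 1), f] = 0.8
items = factors @ loadings.T + rng.normal(scale=0.6, size=(n, 20))

# Random split-half: screen one half (EFA role), reserve the other (CFA role)
idx = rng.permutation(n)
half_efa, half_cfa = items[idx[:190]], items[idx[190:]]

corr = np.corrcoef(half_efa, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
n_factors = int((eigvals > 1.0).sum())             # Kaiser criterion
```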
To connect questionnaire scores with interaction traces, we computed correlations between dimension scores and behavioural indicators derived from the multi-agent station (e.g., number of AI interactions, diversity of agents used, frequency of reflective prompts, average conversation depth). Mixed-effects models were then estimated with dimension scores as outcomes and course-level variables (course type, level, delivery mode) and interaction indicators as fixed effects, plus random intercepts for course and instructor. These models provide an empirical bridge between the descriptive questionnaire data and the dynamical perspective articulated in Equation (3), with interaction indicators serving as proxies for support-quality components F i , t .

3.6.3. Qualitative Analysis and Triangulation

Qualitative responses were analysed using thematic coding with a deductive–inductive approach. A first coding cycle assigned excerpts to the four pre-defined dimensions (Nested Learning, interaction with generative AI, neuroadaptive adjustments, hope and cognitive safety). A second cycle identified subthemes within each dimension (e.g., transparency of AI, perceived over-reliance, feelings of being “accompanied” by the ecosystem, concerns about data use, experiences of empowerment or creative flow). As noted above, the main analytic stance was confirmatory, using the four dimensions as guiding categories; genuinely novel themes that did not fit the framework were noted and illustrated but not developed into a separate coding scheme, which we acknowledge as a methodological limitation.
Triangulation proceeded by constructing a composite dashboard that integrated key quantitative indicators (e.g., mean scores per dimension, distribution plots), neuroeducational markers from Phase 1 (e.g., P300 sensitivity indices) and qualitative themes. This dashboard enabled the inspection of convergent and divergent patterns (e.g., high Nested Learning scores combined with lingering concerns about privacy) and informed the interpretation of the dynamical model parameters $(\eta_i, \delta_i)$ at a conceptual level.

3.7. Ethical Considerations

All procedures complied with institutional and national guidelines on research ethics in human participants, as well as with current recommendations on AI and data protection in education [6,22,26,31,32]. For the neuro-experimental phase, participants provided written informed consent, including specific information about EEG recording, the nature of neuroeducational protocols and the right to withdraw at any time without penalty. Brain data were anonymised at the point of acquisition, with identifiers stored separately from EEG recordings.
For the field implementation, both students and lecturers received extended information sheets describing the goals of the study, the role of generative-AI agents, the types of data collected (interaction logs, contextual signals, questionnaires), and the measures adopted to protect privacy and cognitive sovereignty. Data-minimisation principles were applied at all stages, and only aggregated results are reported in this article. The study protocol was reviewed and approved by the relevant institutional ethics committee (approval number and date to be specified in the final version), including an explicit emphasis on cognitive privacy, the potentially sensitive nature of neuroeducational inferences and the right to withdraw at any time without consequences for academic evaluation.

4. Results

Before presenting the neuro-experimental findings, Figure 10 summarises the Emotiv EPOC+ electrode montage and its correspondence with the main cortical lobes used in Phase 1.
Phase 1 was designed as a small-sample neuro-experimental calibration, not as a fully powered validation of a neuroadaptive system, and Phase 2 is an ecological field study without random assignment or tight experimental control. The analyses therefore probe the plausibility and boundaries of the Nested Learning model rather than offering a definitive or exhaustive empirical demonstration.

4.1. Phase 1: Neuroimaging Processing and Attentional Coupling

The neuro-experimental phase suggests that the Nested Learning protocol can be monitored through low-cost mobile EEG while maintaining acceptable data quality and interpretable attentional markers in this specific sample and context. Across the 18 biomedical students, signal quality remained generally stable: after filtering and artefact handling, approximately 91.6% of epochs were retained as valid for analysis, enabling estimation of event-related potentials (ERPs) at the single-participant level.
Topographical inspection of the averaged ERPs revealed a pattern compatible with parietal P300 generators, with maximal amplitudes over centro-parietal sites and a gradual decrease towards frontal and occipital electrodes, consistent with the expected distribution for attention-related P300 components in educational tasks [11,13,14]. Frequency-band analysis (delta, theta, alpha, beta) showed an initial increase in frontal theta and a modulation of parietal alpha during Nested Learning segments, compatible with phases of heightened cognitive control followed by consolidation and stabilisation of the task set.
Figure 11 provides a schematic summary of the ERP results, highlighting the contrast between baseline and Nested Learning segments.
Linear mixed-effects models with P300 amplitude as outcome and pedagogical-event descriptors as predictors (Section 3.3) showed that higher event intensity and Nested Learning segments were associated with increased P300 amplitude, with statistically significant fixed effects and participant-level random intercepts. Logistic mixed models for the probability of high-amplitude P300 responses yielded similar patterns, supporting the assumptions in Equations (8) and (9): moments of carefully orchestrated challenge–support–reflection elicited clearer attentional signatures than baseline conditions in this sample.
Given the modest sample size and the feasibility-oriented nature of Phase 1, these results should be interpreted as preliminary evidence that the neuro-adaptive pipeline can track, in real time, key aspects of attentional coupling between the pedagogical flow and students’ cortical responses. They provide an empirical anchor for the engagement-related component $E_{i,t}$ in the dynamical model of Equation (3), but do not exhaustively validate the full state-space formulation.

4.2. Phase 2: Impact on Engagement and Self-Regulation

4.2.1. Descriptive Patterns, Reliability and Subgroup Analysis

In the large-scale field implementation ( n = 380 ), questionnaire responses indicated generally positive perceptions across all four dimensions and the two outcome scales. Table 3 summarises the descriptive statistics, distributional properties and reliability indices for each construct.
As shown in Table 3, all dimensions exhibited means well above the scale midpoint (3.0), ranging from 3.88 (Neuroadaptive Adjustments) to 4.25 (Climate of Hope). The indices of skewness were consistently negative (ranging from −0.15 to −0.68), confirming a left-skewed distribution in which the majority of participants reported high levels of perceived support and safety. Kurtosis values remained within the acceptable range (±1.0) for normal theory-based estimation methods, supporting the use of parametric analyses. Reliability coefficients (α and ω) exceeded 0.80 for all scales, demonstrating robust internal consistency [27].
Regarding the subgroups of interest specified in the methodology, independent samples t-tests were conducted to compare Students ( n = 300 ) and Lecturers ( n = 80 ). Results revealed a high degree of convergence in perceptions ( p > 0.05 for NL, SAF and ENG), suggesting that the ecosystem was experienced similarly by both learners and educators. A statistically significant, albeit small, difference was observed in Interaction with Generative AI, where students reported slightly higher usage and perceived utility ( M = 4.01 , S D = 0.79 ) compared to lecturers ( M = 3.76 , S D = 0.85 ; t ( 378 ) = 2.45 , p = 0.015 , d = 0.31 ). This disparity likely reflects the students’ more intensive hands-on use of the agents for task resolution, whereas lecturers operated primarily at the orchestration level. No significant differences were found based on degree programme (STEM vs. non-STEM) or course level.
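The reported comparison can be reproduced structurally with a pooled-variance t statistic and Cohen's d, as sketched below; the simulated scores merely match the published group means and SDs for Interaction with Generative AI and are not the study data.

```python
import numpy as np

def pooled_t_and_d(x, y):
    """Independent-samples t statistic (pooled variance) and Cohen's d."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    nx, ny = x.size, y.size
    sp2 = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
    d = (x.mean() - y.mean()) / np.sqrt(sp2)       # Cohen's d
    t = d / np.sqrt(1.0 / nx + 1.0 / ny)           # pooled t, df = nx + ny - 2
    return t, d

# Scores simulated to match the published group statistics (illustrative)
rng = np.random.default_rng(3)
students = rng.normal(4.01, 0.79, size=300)
lecturers = rng.normal(3.76, 0.85, size=80)
t_stat, d_stat = pooled_t_and_d(students, lecturers)
```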
Reflective prompts and metacognitive interventions were frequently triggered in segments where the orchestrator inferred high cognitive load or decreasing engagement, consistent with the behavioural logs. These descriptive patterns provide the empirical foundation for the correlational and regression analyses that follow.

4.2.2. Construct Validity: Factor Structure

To examine the construct validity of the instrument, the total sample ($n = 380$) was randomly split into two independent subsamples ($n_1 = 190$ for EFA; $n_2 = 190$ for CFA).
First, an Exploratory Factor Analysis (EFA) was conducted on the first subsample using principal axis factoring with direct oblimin rotation, assuming correlations between dimensions. The Kaiser-Meyer-Olkin (KMO) measure was 0.89 and Bartlett’s test of sphericity was significant ($\chi^2 = 2456.3$, $p < 0.001$), supporting the factorability of the data. The analysis yielded four distinct factors with eigenvalues greater than 1.0, explaining 68.4% of the total variance. All items loaded significantly ($> 0.50$) on their respective theoretical dimensions (Nested Learning, Interaction with AI, Neuroadaptive Adjustments, and Cognitive Safety), with no substantial cross-loadings.
Second, a Confirmatory Factor Analysis (CFA) was performed on the second subsample to verify the four-factor structure. The model specification allowed latent factors to correlate. As shown in Table 4, the Global Fit Indices demonstrated an excellent fit to the data, meeting standard thresholds for adequacy.
The combination of EFA and CFA results confirms that the instrument possesses a robust four-dimensional structure consistent with the theoretical framework proposed in Section 2, justifying the use of the four composite scores in subsequent analyses.

4.2.3. Correlations Between Nested Learning, AI Interaction and Outcomes

Pearson correlations among the five composite scales considered here (Nested Learning, Interaction with Generative AI, Perceived Neuroadaptive Adjustments, Engagement and Self-Regulation) were positive and moderate to strong. Table 5 summarises the inter-scale correlation matrix.
Nested Learning showed the strongest associations with Perceived Neuroadaptive Adjustments ( r = 0.63 ) and Engagement ( r = 0.57 ), while Interaction with Generative AI exhibited moderate correlations with the other scales (r between 0.41 and 0.54). Engagement and Self-Regulation were strongly correlated ( r = 0.62 ), consistent with the view that sustained involvement and metacognitive control are tightly linked in Nested Learning contexts [7,20].
From the dynamical perspective of Equation (3), these correlations are compatible with the interpretation that higher perceived Nested Learning and neuroadaptive adjustments are associated with higher levels of the outcome components $E_{i,t}$ (engagement) and $R_{i,t}$ (self-regulation). However, the cross-sectional nature of the questionnaire data prevents strong causal claims.

4.2.4. Predictive Models for Engagement and Self-Regulation

To examine the joint contribution of Nested Learning and Perceived Neuroadaptive Adjustments, we fitted multiple linear regression models with Engagement and Self-Regulation as dependent variables. Table 6 summarises the standardised coefficients, t-values, p-values and explained variance.
Nested Learning emerged as the strongest predictor in both models ( β = 0.41 for Engagement; β = 0.38 for Self-Regulation), with Perceived Neuroadaptive Adjustments also contributing substantially ( β = 0.33 and 0.29 , respectively). The models explain 39% and 34% of the variance in Engagement and Self-Regulation, respectively, which is considerable given the complex, multi-layer nature of the ecosystem, but should still be understood as indicative patterns rather than as definitive structural relations.
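Standardised coefficients of the kind reported in Table 6 can be obtained by z-scoring predictors and outcome before ordinary least squares, as sketched below; the data are simulated with effect sizes loosely in the vicinity of the reported ones, for illustration only.

```python
import numpy as np

def standardized_betas(X, y):
    """Standardised OLS coefficients and R^2: z-score predictors and outcome,
    then solve least squares (centring removes the need for an intercept)."""
    Xz = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    yz = (y - y.mean()) / y.std(ddof=1)
    beta, *_ = np.linalg.lstsq(Xz, yz, rcond=None)
    r2 = 1.0 - ((yz - Xz @ beta) ** 2).sum() / (yz ** 2).sum()
    return beta, r2

# Simulated NL/NA predictors and Engagement outcome (all values illustrative)
rng = np.random.default_rng(11)
n = 380
nl = rng.normal(size=n)
na = 0.6 * nl + 0.8 * rng.normal(size=n)                 # correlated predictors
engagement = 0.41 * nl + 0.33 * na + rng.normal(scale=0.9, size=n)
beta, r2 = standardized_betas(np.column_stack([nl, na]), engagement)
```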
Figure 12 synthesises these relationships as a path diagram, emphasising that Nested Learning and neuroadaptive adjustments jointly drive engagement- and self-regulation-related components of the latent state vector $z_{i,t}$.
From the standpoint of the dynamical system in Equation (3), these findings are consistent with (but do not prove) the interpretation that Nested Learning and neuroadaptive quality modulate the effective growth parameters $\eta_i^{(E)}$ and $\eta_i^{(R)}$ for engagement and self-regulation, increasing the probability that learners move towards desirable regions of the state space without destabilising the process.

4.3. Qualitative Results: Four Thematic Families of Nested Learning

The thematic analysis of the 12 open-ended questions (Section 3.5) yielded four main families that mirror the quantitative dimensions: (1) Task Engagement, (2) Academic Self-Regulation, (3) Perceived Neuroadaptive Adjustments and (4) Emotional Climate and Cognitive Safety. Each family contained several hundred coded excerpts, reflecting rich and nuanced student narratives.

Family 1: Task Engagement.

The first family (Task Engagement) comprised 512 coded excerpts, dominated by codes such as sustained attention, immersion, sense of progress, persistence and avoidance of dropout. Students frequently described uninterrupted focus, a feeling that time “passed quickly” and a sense of advancing through clearly structured stages. Many accounts emphasised that AI-mediated scaffolding, combined with the teacher’s presence, helped them remain engaged during complex segments rather than giving up prematurely. These narratives align with the strong quantitative correlations between Nested Learning, Perceived Neuroadaptive Adjustments and Engagement (Table 5).

Family 2: Academic Self-Regulation.

The second family (Academic Self-Regulation) involved 389 coded excerpts with codes such as planning, monitoring learning, time management, strategy adjustment and autonomous help-seeking. Participants described being able to detect when they were “getting lost”, adjust their approach and use AI agents proactively for clarification instead of waiting passively for the lecturer. Immediate feedback and the possibility of iterating with generative AI allowed them to correct misunderstandings earlier in the process, which resonates with the predictive role of Nested Learning and neuroadaptive adjustments in the Self-Regulation model (Table 6).

Family 3: Perceived Neuroadaptive Adjustments.

The third family (Perceived Neuroadaptive Adjustments) contained 301 excerpts and included codes such as personalisation, pacing adjustment, level adaptation, progressive guidance and perceived adaptation. A recurrent theme was the sensation that the system “adapted to me” rather than enforcing a fixed script. Students noted that, when they were stuck, the AI changed the type of example or explanation, enabling progress without feeling overwhelmed. Qualitative co-occurrence patterns between personalisation codes and references to feeling supported mirror the statistical contribution of neuroadaptive adjustments in predicting Engagement and Self-Regulation ( β = 0.33 and 0.29 ).

Family 4: Emotional Climate and Cognitive Safety.

The fourth family (Emotional Climate and Cognitive Safety) included 224 excerpts and was characterised by codes such as reduced anxiety, sense of support, confidence to make mistakes and renewed motivation. Students reported feeling emotionally safe, able to experiment and fail without fear of judgement, and consistently accompanied by some form of support (human and/or AI). These narratives align with the role of the cognitive-safety component $C_{i,t}$ in the dynamical model and with the ethical and governance principles in Section 3.7: Nested Learning is experienced not only as a cognitive scaffold but as an affective and ethical envelope.

Cross-Family Synthesis.

Across the four families, a convergent pattern emerged:
  • Engagement arises from the clarity of the process and the immersive structure of Nested Learning;
  • Self-regulation is strengthened by immediate feedback and adaptive structure;
  • Neuroadaptive adjustments are perceived as genuine personalisation rather than generic automation;
  • A safe emotional climate acts as a key mediator of attentional stability and persistence.
Figure 13 summarises these relationships as a qualitative map around the Nested Learning ecosystem.
This convergence between qualitative themes, questionnaire scores and neuroeducational markers supports the interpretation of Nested Learning as a neuro-adaptive, agent-mediated cycle that shapes the trajectory of the multidimensional learner state $z_{i,t}$ over time, while acknowledging that these inferences are based on observational data.

4.4. Technical Performance Versus Ethical Constraints

A comparative analysis of the six generative-AI platforms (ChatGPT, Gemini, Claude, Copilot, DeepSeek and GPT-o1) within the multi-agent station revealed differentiated profiles in terms of responsiveness, controllability and perceived transparency. Some agents excelled in rapid content generation and code completion, while others were preferred for their explanatory style or perceived alignment with ethical guidelines.
From a governance perspective, participants and instructors highlighted a tension between deep personalisation—enabled by fine-grained tracking of learner states and interactions—and concerns about privacy, over-surveillance and potential misuse of neurocognitive and behavioural data. These concerns echo the ethical constraints discussed in Section 3.7 and frame the practical challenge of implementing the policy $\pi(a_{i,t} \mid \hat{z}_{i,t}, c_t)$ within strict boundaries of data minimisation, transparency and cognitive sovereignty [6,32].
Phase 1 suggests that Nested Learning micro-events can be tracked through robust P300 markers using affordable EEG; Phase 2 shows that Nested Learning and neuroadaptive adjustments are moderately to strongly associated with engagement and self-regulation, both quantitatively and qualitatively; and the technical–ethical analysis underscores that sustaining such an ecosystem in higher education requires a careful balance between technical performance and the protection of learners' rights.

4.5. Empirical Convergence Patterns in the Latent Learner State

Section 2.5 formalised the Nested Learning ecosystem as a discrete-time dynamical system in which each learner is represented by a latent state vector. For the purposes of this section, we focus on the four components most directly informed by questionnaire and qualitative data,
$$z_{i,t} = \left( z_{i,t}^{(E)},\; z_{i,t}^{(R)},\; z_{i,t}^{(C)},\; z_{i,t}^{(H)} \right),$$
encoding, respectively, engagement, self-regulation, cognitive safety and hope-related climate, while the performance component is treated separately. Component-wise updates were described by Equation (3),
$$z_{i,t+1}^{(k)} - z_{i,t}^{(k)} = \eta_i^{(k)}\, \phi^{(k)}\!\left(u_{i,t}, c_t\right) \left(1 - z_{i,t}^{(k)}\right) + \varepsilon_{i,t}^{(k)},$$
where $u_{i,t}$ summarises agent actions and neuroadaptive adjustments, $c_t$ denotes contextual variables (e.g., classroom, workload, time of semester), $\eta_i^{(k)}$ is an individual learning-speed parameter and $\varepsilon_{i,t}^{(k)}$ captures idiosyncratic variability.
Empirically, we approximate $z_{i,t}^{(E)}$ and $z_{i,t}^{(R)}$ from the questionnaire scales for Engagement and Self-Regulation, rescaled to $[0,1]$, while $z_{i,t}^{(C)}$ and $z_{i,t}^{(H)}$ are informed by items on perceived cognitive safety and hope-related climate. The predictors in Table 6—Nested Learning (NL) and Perceived Neuroadaptive Adjustments (NA)—can be interpreted as coarse-grained proxies of the control term $\phi^{(k)}(u_{i,t}, c_t)$, since they aggregate perceived clarity of structure, adaptivity and contextual sensitivity of the ecosystem.
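As a concrete illustration of this rescaling step (the variable names and the 1–5 Likert range below are our own assumptions for the sketch, not the study's actual instruments), questionnaire means can be min–max mapped to $[0,1]$ proxies of the state components:

```python
def rescale(score, lo=1.0, hi=5.0):
    """Min-max rescale a Likert score from [lo, hi] to [0, 1]."""
    return (score - lo) / (hi - lo)

# Hypothetical questionnaire means for one learner (1-5 Likert items).
raw = {"E": 4.2, "R": 3.8, "C": 4.5, "H": 4.0}

# Coarse proxies for the state components z^(k), k in {E, R, C, H}.
z = {k: round(rescale(v), 3) for k, v in raw.items()}
print(z)  # {'E': 0.8, 'R': 0.7, 'C': 0.875, 'H': 0.75}
```

Such proxies are deliberately coarse; they stand in for the latent components only at the resolution the questionnaire allows.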
Under this mapping, the regression results imply that, for a large majority of learners, the expected increments
$$\Delta z_{i,t}^{(E)} := z_{i,t+1}^{(E)} - z_{i,t}^{(E)} \quad \text{and} \quad \Delta z_{i,t}^{(R)} := z_{i,t+1}^{(R)} - z_{i,t}^{(R)}$$
are positive whenever NL and NA are above minimal levels. Descriptively, more than 80% of participants reported increases in perceived engagement and learning outcomes after exposure to the ecosystem, with particularly high percentages for commitment and self-regulation (self-report comparisons between “before” and “after” items). In the dynamical formulation, this pattern corresponds to $\Delta z_{i,t}^{(k)} > 0$ for most learners and components $k \in \{E, R\}$, together with a progressive reduction of the gap $1 - z_{i,t}^{(k)}$.
Phase 1 provides a complementary view for the attentional component of the state. The P300 dynamics and frequency-band patterns described in Section 4.1 show that, over the course of the neuroexperimental session, neuroeducational markers such as attention, commitment and interest stabilise in an “optimal” range (e.g., high attention and interest, low cognitive stress). This can be read as empirical evidence that the attentional component $z_{i,t}^{(E)}$ approaches a high but sub-maximal equilibrium, compatible with logistic-type updates: early micro-events produce larger increments $\Delta z_{i,t}^{(E)}$, while later events yield smaller gains as the state approaches the ceiling.
Figure 14 depicts this behaviour schematically for two learners with different effective learning-speed parameters $\eta_i^{(E)}$, illustrating how the same Nested Learning policy can yield fast or slow, but ultimately convergent, trajectories in the engagement dimension.
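The convergence behaviour of Equation (3) can be reproduced with a few lines of simulation. All parameter values below (learning speeds, the control level $\phi$, noise scale and starting state) are illustrative assumptions rather than quantities estimated from the data:

```python
import random

def simulate(eta, steps=20, phi=0.6, noise=0.01, z0=0.2, seed=0):
    """Simulate z_{t+1} - z_t = eta * phi * (1 - z_t) + eps for one component."""
    rng = random.Random(seed)
    z = z0
    traj = [z]
    for _ in range(steps):
        eps = rng.gauss(0.0, noise)
        z = z + eta * phi * (1.0 - z) + eps
        z = min(max(z, 0.0), 1.0)  # keep the state in [0, 1]
        traj.append(z)
    return traj

fast = simulate(eta=0.5)  # large effective learning speed
slow = simulate(eta=0.1)  # small effective learning speed

# Both trajectories rise towards a high level, the fast one sooner;
# increments shrink as the gap (1 - z) closes.
print(round(fast[5], 2), round(slow[5], 2))
```

The logistic-type structure is visible in the output: early steps produce large increments, later steps progressively smaller ones, for both learners.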
In summary, although the field phase uses cross-sectional questionnaires rather than repeated measurements of $z_{i,t}$ across tasks, the combination of (i) robust positive associations between Nested Learning, neuroadaptive adjustments, engagement and self-regulation; (ii) high proportions of learners reporting perceived improvements; and (iii) stabilised neuroeducational markers in Phase 1, is consistent with the view that the ecosystem induces positive increments $z_{i,t+1}^{(k)} - z_{i,t}^{(k)}$ that gradually shrink as learners approach desirable regions of the state space. This consistency should be understood as model-supporting rather than as a formal convergence proof.

4.6. From Local Increments to Sustainable Trajectories

The dynamical framing also clarifies the sustainability implications of Nested Learning. At each step, the policy $\pi(a_{i,t} \mid \hat{z}_{i,t}, c_t)$ selects agent actions and neuroadaptive adjustments that are expected to produce increments
$$\Delta z_{i,t}^{(k)} = z_{i,t+1}^{(k)} - z_{i,t}^{(k)}$$
in directions aligned with the pedagogical goals of the ecosystem: increased engagement ($k = E$), stronger self-regulation ($k = R$), higher cognitive safety ($k = C$) and a more hopeful, resilient outlook ($k = H$). Phase 1 suggests that this can be done without pushing the system into pathological regimes of over-arousal or stress, as indicated by the balanced profile of EEG bands and neuroeducational markers described in Section 4.1. Phase 2 demonstrates that the same policy is associated with high levels of perceived clarity, adaptivity and emotional safety.
Within this perspective, sustainability can be interpreted as the existence of regions of the state space where:
(a) the expected increments $\mathbb{E}\left[\Delta z_{i,t}^{(k)} \mid z_{i,t}\right]$ are small but non-negative for desirable components (no rapid decay of engagement or self-regulation);
(b) fluctuations $\varepsilon_{i,t}^{(k)}$ remain bounded, avoiding oscillatory or unstable trajectories;
(c) ethical and governance constraints on data usage and cognitive sovereignty are enforced at the policy level, constraining the admissible control signals $u_{i,t}$.
The empirical configuration observed in this study—high engagement and self-regulation, low cognitive stress, strong perceptions of personalisation and safety, and explicit concerns about privacy that feed back into governance design—is compatible with such a “sustainable region”. In that region, the nested ecosystem acts as a regulator that gently pushes trajectories back towards pedagogically desirable zones when disturbances (e.g., overload, anxiety, loss of motivation) displace learners from them, rather than as an engine that maximises short-term performance at the cost of long-term wellbeing.
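Conditions (a) and (b) lend themselves to a simple numerical check on logged or simulated trajectories. The sketch below (thresholds are arbitrary illustrative choices, not values derived from the study) flags whether the tail of a trajectory sits in such a sustainable region:

```python
def in_sustainable_region(traj, tail=10, max_mean_inc=0.05, max_fluct=0.1):
    """Heuristic check of conditions (a)-(b) on the last `tail` increments."""
    incs = [b - a for a, b in zip(traj[-tail - 1:-1], traj[-tail:])]
    mean_inc = sum(incs) / len(incs)
    # (a) expected increments small but non-negative
    cond_a = 0.0 <= mean_inc <= max_mean_inc
    # (b) fluctuations around the mean increment stay bounded
    cond_b = all(abs(i - mean_inc) <= max_fluct for i in incs)
    return cond_a and cond_b

# A trajectory settling near a high engagement level with tiny positive drift.
traj = [0.2, 0.45, 0.62, 0.74, 0.82, 0.86, 0.88, 0.895, 0.905, 0.91,
        0.913, 0.915, 0.917, 0.918, 0.919, 0.920, 0.921]
print(in_sustainable_region(traj))  # True for this settled trajectory
```

Condition (c) is not numerical and would instead be enforced as hard constraints on which control signals the policy may emit at all.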
This interpretation reinforces the central claim of the paper: Nested Learning is not only a technical architecture but a policy-level design for shaping the temporal evolution of learner states in ways that are mathematically tractable, empirically grounded and aligned with the human-centric aspirations of Industry 5.0 and a Pedagogy of Hope.

5. Discussion

This section interprets the empirical results through the lens of the Nested Learning framework and the underlying dynamical model, and discusses their implications for sustainable higher education. In line with the mixed-methods and feasibility-oriented nature of the study, the aim is not to claim definitive validation of the framework, but to examine how far the empirical patterns are compatible with the proposed conceptual and mathematical structure, and where important uncertainties remain. We organise the discussion around four axes: (i) the viability of low-cost neuro-adaptive monitoring, (ii) the role of Nested Learning and neuroadaptive adjustments in relation to engagement and self-regulation, (iii) the contribution of the dynamical perspective to the design of sustainable AI-mediated ecosystems, and (iv) the ethical and governance constraints that bound acceptable trajectories of learner states.

5.1. Neuro-Adaptive Viability in Authentic Educational Contexts

One of the central questions of this study was whether a neuro-adaptive ecosystem can be instantiated with affordable hardware and realistic constraints, without reverting to highly controlled, laboratory-only scenarios. Within the limits of a small, homogeneous sample, the Phase 1 results suggest that this is feasible. Using a low-cost, 14-channel mobile EEG device, we obtained reasonably stable recordings with more than 90% valid epochs after artefact handling, and clear P300 signatures that differentiated Nested Learning segments from baseline activity in this specific group.
The topographical and temporal patterns of the P300 component are compatible with classical accounts of attention and context updating in educational tasks [11,13,14]. Importantly, these signatures were obtained in tasks mediated by ChatGPT and embedded in a pedagogical micro-cycle of challenge, scaffolding and reflection, rather than in artificially simplified stimulus paradigms. This supports, at least at a feasibility level, the idea that neuroeducational markers can be harnessed in ecologically valid conditions, aligning with the broader aims of neuroeducation to bridge the gap between laboratory findings and classroom practice [7,10].
From the standpoint of the measurement function $g(\cdot)$ in Equation (7), P300 amplitude and related neuroeducational indices (attention, commitment, interest, stress) act as partial readouts of the engagement component $z_{i,t}^{(E)}$. The observed stabilisation of these markers in an optimal range suggests that, in this context, the Nested Learning protocol can steer the system away from both under-stimulation (boredom) and over-stimulation (overload), a prerequisite for sustainable cognitive performance. At the same time, the modest sample size, the lack of a control group with alternative pedagogical scripts, and possible novelty effects mean that these findings should be treated as preliminary and in need of replication.
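To make the idea of a partial readout concrete (Equation (7) in the original specifies only the general form $g(\cdot)$; the weights and the normalisation range for P300 amplitude below are purely illustrative assumptions), a linear combination of normalised markers could yield a scalar engagement estimate:

```python
def engagement_readout(p300_uv, attention, interest, stress):
    """Toy readout g(.): map neuroeducational markers to an engagement
    estimate in [0, 1]. The weights and the 2-12 microvolt P300 range
    are illustrative assumptions, not values from the study."""
    p300_norm = min(max((p300_uv - 2.0) / 10.0, 0.0), 1.0)
    # Stress enters negatively: high stress lowers estimated engagement.
    score = (0.4 * p300_norm + 0.3 * attention
             + 0.2 * interest + 0.1 * (1.0 - stress))
    return min(max(score, 0.0), 1.0)

# High attention and interest, low stress -> high estimated engagement.
print(round(engagement_readout(p300_uv=8.0, attention=0.8,
                               interest=0.9, stress=0.2), 3))
```

In practice the weights would have to be calibrated against behavioural ground truth rather than fixed a priori, which is precisely the kind of estimation the limitations in Section 5.6 call for.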

5.2. Nested Learning, Engagement and Self-Regulation

The Phase 2 results show that Nested Learning and Perceived Neuroadaptive Adjustments are moderately to strongly associated with Engagement and Self-Regulation. Correlations in Table 5 and regression paths in Figure 12 indicate that learners who perceive higher levels of nested support (multi-layered human–AI scaffolding) and adaptation (pacing, representation, difficulty) tend to report higher engagement and stronger self-regulatory practices. This pattern is reinforced qualitatively by the four thematic families (Figure 13), where narratives emphasise immersion, sense of progress, planning and strategic help-seeking.
These findings resonate with previous work on self-regulated learning and affect in education, which highlights the importance of structured support, timely feedback and emotionally safe environments for sustaining engagement and developing metacognitive competences [7,8,20]. The present study adds an additional layer by situating these processes within a distributed socio-technical network: the nested envelope of support involves not only teachers and peers, but also generative AI agents and IoT-informed orchestration.
In this sense, Nested Learning can be seen as an operationalisation of a Pedagogy of Hope [28] for AI-rich contexts. Instead of using AI primarily to accelerate traditional, grade-centric metrics, the ecosystem foregrounds process visibility, cognitive safety and reflexive dialogue, aiming to empower students who might otherwise disengage or experience burnout. The positive associations between Nested Learning, neuroadaptive adjustments and self-regulation suggest that generative AI, when deliberately framed and governed, can support rather than erode learner agency. Nevertheless, the observational nature of the data, the lack of a comparison condition without Nested Learning, and potential confounders (e.g., teacher enthusiasm, novelty of AI use, local institutional culture) mean that alternative explanations cannot be ruled out.

5.3. Dynamical Interpretation: From Correlations to Trajectories

The dynamical model in Section 2.5 conceptualised the ecosystem as a discrete-time process in which each learner is described by a state vector $z_{i,t}$, with components such as performance, engagement, self-regulation and cognitive safety. The empirical results do not provide dense time series for each individual, but they do allow us to reason about average increments
$$\Delta z_{i,t}^{(k)} = z_{i,t+1}^{(k)} - z_{i,t}^{(k)}$$
and the qualitative shape of trajectories under a given policy $\pi(a_{i,t} \mid \hat{z}_{i,t}, c_t)$.
The combination of (i) positive associations between Nested Learning/neuroadaptation and Engagement/Self-Regulation; (ii) qualitative evidence of perceived improvement in planning, monitoring and emotional climate; and (iii) stabilised attentional markers in Phase 1, is consistent with a regime in which $\Delta z_{i,t}^{(E)}$ and $\Delta z_{i,t}^{(R)}$ are, on average, positive but decreasing as learners approach higher levels. This matches the structure of Equation (3), where increments scale with the gap-to-target $1 - z_{i,t}^{(k)}$ and with a control term $\phi^{(k)}(u_{i,t}, c_t)$ that encodes the quality and alignment of agent actions.
In contrast to purely predictive models such as classical Knowledge Tracing, our framing emphasises policy design rather than state estimation alone. The question is not only “What is the probability of a correct answer next time?” but also “What sequence of actions should the ecosystem take to nudge the learner towards a sustainable region of the state space?”. In this sense, the empirical results support the conceptual shift towards viewing feedback, pacing and representational choices as control signals in a dynamical system, with interpretable parameters and stability properties.
At the same time, the current data do not allow for formal estimation of individual parameters $\eta_i^{(k)}$ or rigorous stability analysis. The dynamical equations should therefore be read as a generative model that organises hypotheses and informs design, rather than as a structure fully identified from data. Future longitudinal and experimental work will be required to test whether the trajectories of $z_{i,t}$ under Nested Learning indeed exhibit the convergence patterns suggested by Figure 12 and Figure 14.

5.4. Design Principles for Sustainable Nested Learning Ecosystems

The synthesis of neuroexperimental, behavioural, self-report and qualitative evidence suggests a set of provisional design principles for Nested Learning ecosystems in higher education:
(a) Layered orchestration rather than single-tool usage. The most positive experiences arise when learners perceive a coherent envelope of support (teacher, AI agents, peers, physical context), rather than isolated interactions with a single chatbot or tool. This aligns with the “nested” metaphor and the multi-layer architecture in Figure 2.
(b) Process visibility and trace-based assessment. Making planning traces, testing strategies, evidence citations and neuroadaptive adjustments visible helps learners understand how they are progressing, not just whether outputs are correct. This supports self-regulation and reduces dependence on opaque AI decisions.
(c) Neuroadaptive moderation, not maximisation. The neuroeducational results suggest that effective orchestration aims at stabilising attention and commitment in a mid-to-high range, avoiding both under-challenge and over-stimulation. This implies that the goal of the policy $\pi$ is not to maximise engagement at all costs, but to regulate it within a sustainable corridor.
(d) Hope-centred error culture. Qualitative data highlight that students value environments where errors are treated as natural and productive. Designing prompts, feedback and assessment formats that normalise uncertainty and iterative refinement is crucial for maintaining a hopeful stance towards learning, especially under the pressure of high-stakes evaluation.
(e) Human-in-the-loop governance. Finally, teachers and institutional actors must retain meaningful control over the ecosystem. Configuring agent roles, thresholds for neuroadaptive interventions, and data-retention policies cannot be delegated to AI alone; they require ongoing pedagogical and ethical deliberation [6,32].
These principles are not exhaustive and should be treated as context-sensitive heuristics rather than universal rules. Nonetheless, they illustrate how the mathematical and empirical layers of the study can translate into concrete design choices that universities may consider when deploying AI-rich infrastructures.
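As one way to make principles (c) and (e) concrete, a rule-based policy of the kind used in this study can be sketched as a small decision function whose thresholds remain instructor-configurable. The specific thresholds and action labels here are illustrative assumptions, not the actual configuration deployed:

```python
def corridor_policy(engagement, stress, low=0.4, high=0.85, stress_cap=0.7):
    """Rule-based policy: keep estimated engagement inside a sustainable
    corridor [low, high] without pushing stress past stress_cap.
    Thresholds are instructor-configurable (human in the loop)."""
    if stress > stress_cap:
        return "reduce_load"         # cognitive safety overrides everything
    if engagement < low:
        return "increase_challenge"  # under-stimulation: add novelty/difficulty
    if engagement > high:
        return "consolidate"         # near ceiling: shift to reflection
    return "maintain"                # inside the corridor: no intervention

print(corridor_policy(engagement=0.3, stress=0.2))  # increase_challenge
print(corridor_policy(engagement=0.6, stress=0.8))  # reduce_load
```

The point of the sketch is not the particular rules but their auditability: each branch is inspectable and adjustable by educators, unlike an opaque learned policy.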

5.5. Ethical and Governance Implications

The study also foregrounds the tension between personalisation and privacy. On the one hand, fine-grained monitoring of engagement, affect and self-regulation is precisely what enables the ecosystem to deliver timely, adaptive support. On the other hand, students and lecturers expressed concerns about potential over-surveillance, data misuse and loss of cognitive sovereignty, especially when neuroimaging data are involved.
From the dynamical perspective, this tension can be framed as a constraint on admissible control signals $u_{i,t}$. Not all theoretically effective interventions are ethically acceptable. For instance, an aggressive modulation strategy that pushes $z_{i,t}^{(E)}$ to very high levels of arousal might improve short-term performance but violate principles of cognitive safety and autonomy. Similarly, continuously tracking micro-fluctuations in attention might provide rich data for optimisation, but at the cost of an invasive experience that undermines trust.
Accordingly, any practical deployment of Nested Learning must embed privacy-by-design and data-minimisation principles at the architectural level [6,31]. This includes limiting the granularity and retention of neurocognitive data, giving learners meaningful control over what is collected and how it is used, and ensuring that high-level summaries are sufficient for pedagogical decisions. Transparent communication about these policies is essential for aligning perceived and actual governance. More broadly, institutional AI strategies and regulatory frameworks (e.g., GDPR, sector-specific guidelines) will shape what kinds of Nested Learning implementations are legally and socially acceptable.

5.6. Limitations and Directions for Future Research

Several limitations qualify the interpretation of our findings and point to avenues for future work.
First, the neuro-experimental phase involved a relatively small and homogeneous sample of biomedical students; while the results are consistent with prior literature on P300 and attention, generalisation to other disciplines, age groups and cultural contexts must be approached cautiously. Replications with larger and more diverse samples—including non-STEM programmes and different institutional profiles—are necessary to validate the robustness of neuro-adaptive markers in Nested Learning scenarios.
Second, the field implementation, although large ($n = 380$), relied on self-report measures and interaction logs collected at limited time points. We did not track individual state vectors $z_{i,t}$ across multiple discrete tasks in the same way as classical learning-analytics studies, nor did we include a control group exposed to an alternative (non-nested) AI-mediated design. Future research should combine longitudinal measurement of performance, engagement and self-regulation with the dynamical modelling introduced here, enabling direct estimation of individual parameters $\eta_i^{(k)}$, comparison across conditions and empirical identification of convergence patterns.
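To illustrate what "direct estimation of individual parameters" could look like once longitudinal data exist, the sketch below simulates a trajectory under Equation (3) with a known $\eta$ and recovers it by least squares on the observed increments. All numerical values are assumptions chosen for the simulation:

```python
import random

def estimate_eta(z, phi):
    """Least-squares estimate of eta from increments
    dz_t = eta * phi_t * (1 - z_t) + noise (Equation (3))."""
    num = den = 0.0
    for t in range(len(z) - 1):
        x = phi[t] * (1.0 - z[t])  # regressor
        y = z[t + 1] - z[t]        # observed increment
        num += x * y
        den += x * x
    return num / den

# Simulate a trajectory with a known eta, then recover it.
rng = random.Random(1)
true_eta, z, phi = 0.3, [0.2], []
for _ in range(50):
    p = rng.uniform(0.4, 0.8)  # time-varying control term phi_t
    phi.append(p)
    z.append(z[-1] + true_eta * p * (1.0 - z[-1]) + rng.gauss(0.0, 0.005))

print(round(estimate_eta(z, phi), 2))  # close to the true value 0.3
```

With real data, the control term $\phi_t$ would itself have to be operationalised from logged agent actions, which is a substantially harder measurement problem than this sketch suggests.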
Third, the control policy $\pi$ used in this study was rule-based and configured by instructors; it was not optimised via reinforcement learning or other adaptive methods. While this choice ensured interpretability and ethical oversight, more sophisticated, yet constrained, optimisation schemes could be explored, provided that they remain transparent and auditable for educators and students [3,32]. Hybrid approaches, where policies are learned within strict safety envelopes defined by human stakeholders, are a promising direction.
Fourth, the mapping between questionnaire scales and latent state components is necessarily coarse. Engagement, self-regulation and cognitive safety are complex constructs that cannot be fully captured by a small number of Likert items. Integrating richer behavioural signals (e.g., task persistence metrics, revision patterns, temporal structure of AI interactions), longitudinal performance data and qualitative insights into the state estimation process is an important next step.
Finally, the study was conducted within a single national and institutional context, with particular regulatory and cultural features and a specific configuration of AI tools. Comparative studies across countries, institutional types and regulatory environments would shed light on how different governance regimes and AI cultures influence the design and acceptance of Nested Learning ecosystems, especially in relation to GDPR, institutional AI policies and local notions of student agency.
Despite these limitations, the combination of neuroexperimental evidence, large-scale field data and a mathematically grounded framework suggests that Nested Learning is a promising direction for designing AI-mediated, neuro-adaptive and sustainable higher-education ecosystems. Future work should refine the models, expand the empirical base and co-design policies with students, teachers and policymakers to ensure that such ecosystems remain aligned with human-centred values and the broader goals of Industry 5.0.

6. Conclusions

This article has outlined Nested Learning as a proposed neuro-adaptive, agent-mediated ecosystem for sustainable higher education. Building on the convergence of generative AI, neuroeducation and smart-campus infrastructures, we have described a multi-layer architecture (Figure 2) and a dynamical model of learner states that treat assessment, feedback and support as policy-level design variables rather than as static procedures. The intention is not to claim definitive validation of this construct, but to show that a mathematically grounded and ethically constrained framing can help organise design decisions in AI-rich environments that aspire to sustain engagement, self-regulation and cognitive safety over time in line with the human-centric aspirations of Industry 5.0 [5,6].
Empirically, the study has combined a neuro-experimental phase with an 18-student laboratory calibration and a large-scale field phase with 380 participants in authentic courses. Within the limits of a small and homogeneous sample, Phase 1 suggests that low-cost mobile EEG can capture interpretable P300 dynamics and balanced neuroeducational markers in ChatGPT-mediated tasks, supporting the feasibility of neuro-adaptive pipelines in ecologically valid settings. Phase 2 indicates that perceived Nested Learning and neuroadaptive adjustments are moderately to strongly associated with engagement and self-regulation, both quantitatively (correlations and regression models) and qualitatively (four thematic families centred on task immersion, planning, personalisation and emotional safety). When interpreted through the dynamical lens, these patterns are consistent with trajectories in which the relevant components of the learner state vector $z_{i,t}$ move towards desirable regions with diminishing increments $z_{i,t+1}^{(k)} - z_{i,t}^{(k)}$, although the present data do not allow formal estimation of individual parameters or rigorous stability analysis.
From a sustainability perspective, three provisional conclusions stand out. First, human-centred AI in higher education appears to require multi-layer orchestration: the most positive experiences arise when teacher, AI agents, peers and physical context are perceived as part of a coherent envelope of support, rather than as isolated tools. Second, cognitive safety and a Pedagogy of Hope are not by-products but design targets: treating errors as learning opportunities, foregrounding process visibility and regulating arousal within a safe corridor emerge as central to long-term wellbeing and resilience, particularly for students at risk of disengagement or burnout [9,28]. Third, governance is integral to the technical architecture: privacy-by-design, data minimisation and meaningful human oversight over agent policies are necessary conditions for aligning neuro-adaptive capabilities with ethical and regulatory frameworks [6,32].
The proposed framework suggests several potential implications for universities seeking to advance the Sustainable Development Goals (especially SDG 4 on quality education and SDG 3 on health and wellbeing). Institutions can (i) experiment with multi-agent, generative-AI stations that foreground process and reflection rather than only answers; (ii) cautiously integrate lightweight neuroeducational sensing to monitor attentional and emotional balance at aggregate or cohort level; and (iii) adopt trace-based assessment formats that value planning, testing discipline and evidence use alongside product quality. These steps point beyond purely efficiency-driven narratives towards a model of higher education that explicitly cares for learners’ cognitive and emotional trajectories, while remaining sensitive to privacy and cognitive sovereignty.
In addition, Figure 15 summarises a standard neuroimaging view of the real-time neuroelectric dynamics underpinning cognitive nesting, visually illustrating how the ecosystem couples pedagogical micro-events with the temporal evolution of cortical activity.
At the same time, the study has clear limitations in sample diversity, longitudinal depth and scope of the control policy, as discussed in Section 5. These constraints mean that the findings should be interpreted as exploratory and context-bound rather than as generalisable evidence of effectiveness. Future research should (i) replicate and extend the approach in different disciplines, countries and institutional cultures; (ii) develop longitudinal datasets that allow direct estimation of dynamical parameters and empirical identification of sustainable regions in the state space; and (iii) co-design governance protocols with students, teachers and policymakers to ensure that evolving AI capabilities remain aligned with human values and rights.
In conclusion, Nested Learning should be understood as a working model that illustrates one possible way in which mathematically informed, neuro-aware and ethically constrained design could help turn AI and IoT infrastructures into engines of more sustainable education, rather than sources of additional pressure or inequality. By treating learner states as trajectories to be carefully guided—not merely as scores to be extracted—higher education can begin to harness generative AI as a catalyst for Industry 5.0 and a Pedagogy of Hope, where technological sophistication and human flourishing are pursued together and under explicit governance.

Author Contributions

Conceptualization, R.J. and A.H.-F.; methodology, C.d.B.-C.; software and data analysis, R.J. and C.d.B.-C.; theoretical framework and literature review, A.H.-F. and D.M.; writing—original draft, R.J. and A.H.-F.; writing—review and editing, C.d.B.-C. and D.M.; supervision, R.J. and A.H.-F. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Universidad de Jaén through its Teaching Innovation Plan (PID-UJA 2025–2029), under the Teaching Innovation Project “Diseño de entornos neurosaludables y afectivos en la universidad: prácticas neurodidácticas para la conexión docente–estudiante” (Project reference: PID2025_24 UJA), funded by the Vicerrectorado de Formación Permanente, Tecnologías Educativas e Innovación Docente.

Institutional Review Board Statement

This work is part of the research line titled Neuroscience, Neuroeducation, and Neurodidactics. Multiculturalism, Interculturalism, Intraculturalism, and Transculturalism. Sustainability in Education. The study was conducted in accordance with the Declaration of Helsinki and was approved by the Research Ethics Committee of the University of Jaén (Spain); approval code JUL.22/4-LÍNEA.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The EEG and educational datasets generated and analysed during the current study contain information that could compromise the privacy of participants. For this reason, they are not publicly available but can be obtained from the corresponding author on reasonable request and subject to approval by the Research Ethics Committee of the University of Jaén.

Acknowledgments

We thank the participating students and the Universidad de Jaén for supporting the innovation project under which this research was conducted.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Luckin, R.; Holmes, W.; Griffiths, M.; Forcier, L.B. Intelligence Unleashed: An Argument for AI in Education; Pearson: London, UK, 2016; Available online: https://discovery.ucl.ac.uk/id/eprint/1537795/1/Luckin_et_al_2016_Intelligence_Unleashed.pdf (accessed on 1 December 2025).
  2. Holmes, W.; Bialik, M.; Fadel, C. Artificial Intelligence in Education: Promises and Implications for Teaching and Learning; Center for Curriculum Redesign: Boston, MA, USA, 2019; Available online: https://curriculumredesign.org/our-work/artificial-intelligence-in-education/.
  3. Holmes, W.; Tuomi, I. State of the art and practice in AI in education. Eur. J. Educ. 2022, 57, 542–570.
  4. OpenAI. Introducing ChatGPT. OpenAI Blog, 30 November 2022. Available online: https://openai.com/index/chatgpt/ (accessed on 1 December 2025).
  5. European Commission. Industry 5.0: Towards a Sustainable, Human-Centric and Resilient European Industry; Publications Office of the European Union: Luxembourg, 2021.
  6. IEAIED. The Ethical Framework for AI in Education; The Institute for Ethical AI in Education: Buckingham, UK, 2021. Available online: https://www.buckingham.ac.uk/wp-content/uploads/2021/03/The-Institute-for-Ethical-AI-in-Education-The-Ethical-Framework-for-AI-in-Education.pdf (accessed on 1 December 2025).
  7. Immordino-Yang, M.H. Emotions, Learning, and the Brain: Exploring the Educational Implications of Affective Neuroscience; W.W. Norton & Company: New York, NY, USA, 2015. Available online: https://wwnorton.com/books/9780393709810.
  8. Camacho-Morles, J.; Pekrun, R.; Loderer, K.; Salmela-Aro, K. Achievement Emotions and Academic Performance: A Meta-Analysis. Educ. Psychol. Rev. 2021, 33, 1–45.
  9. Hernández-Fernández, A.; de Barros-Camargo, C. Desnudando el cerebro: Neuropedagogía y neuroimagen; Masquelibros: Madrid, Spain, 2018. Available online: https://dialnet.unirioja.es/servlet/libro?codigo=858638.
  10. Tokuhama-Espinosa, T. Neuromyths: Debunking False Ideas about the Brain; W.W. Norton & Company: New York, NY, USA, 2018. Available online: https://wwnorton.com/books/9780393713206.
  11. Aricò, P.; Borghini, G.; Di Flumeri, G.; Colosimo, A.; Bonelli, S.; Golfetti, A.; Pozzi, S.; Imbert, J.P.; Granger, G.; Benhacene, R.; Babiloni, F. Adaptive automation triggered by EEG-based mental workload index: A passive brain–computer interface application in realistic air traffic control environment. Front. Hum. Neurosci. 2016, 10, 539.
  12. Matusz, P.J.; Dikker, S.; Huth, A.G.; Perrodin, C. Are we ready for real-world neuroscience? Eur. J. Neurosci. 2019, 49, 8–14.
  13. Polich, J. Updating P300: An Integrative Theory of P3a and P3b. Clin. Neurophysiol. 2007, 118, 2128–2148.
  14. Başar, E.; Başar-Eroglu, C.; Karakaş, S.; Schürmann, M. Brain oscillations in perception and memory. Int. J. Psychophysiol. 2000, 35, 95–124.
  15. Azevedo, R.; Gašević, D. Analyzing Multimodal Multichannel Data about Self-Regulated Learning with Advanced Learning Technologies. In Handbook of Self-Regulation of Learning and Performance, 2nd ed.; Schunk, D.H., Greene, J.A., Eds.; Routledge: New York, NY, USA, 2018; pp. 300–316.
  16. Dede, C. Immersive Interfaces for Engagement and Learning. Science 2009, 323, 66–69.
  17. D’Mello, S.K.; Graesser, A.C. Feeling, Thinking, and Computing with Affect-Aware Learning Technologies. In The Oxford Handbook of Affective Computing; Calvo, R.A., D’Mello, S.K., Gratch, J., Kappas, A., Eds.; Oxford University Press: Oxford, UK, 2015; pp. 419–434.
  18. Handbook of Artificial Intelligence in Education; du Boulay, B., Mitrovic, A., Yacef, K., Eds.; Edward Elgar Publishing: Cheltenham, UK, 2023. Available online: https://www.e-elgar.com/shop/gbp/handbook-of-artificial-intelligence-in-education-9781800375406.html.
  19. Woolf, B.P. Building Intelligent Interactive Tutors: Student-Centered Strategies for Revolutionizing E-Learning; Morgan Kaufmann: Burlington, MA, USA, 2010. Available online: https://www.sciencedirect.com/book/9780123735942/building-intelligent-interactive-tutors.
  20. Panadero, E. A Review of Self-Regulated Learning: Six Models and Four Directions for Research. Front. Psychol. 2017, 8, 422.
  21. Selwyn, N. Should Robots Replace Teachers? AI and the Future of Education; Polity Press: Cambridge, UK, 2019. Available online: https://politybooks.com/should-robots-replace-teachers/.
  22. Selwyn, N. The future of AI and education: Some cautionary notes. Eur. J. Educ. 2022, 57, 620–631.
  23. Knox, J. AI and Education in China: Imagining the Future, Excavating the Past; Routledge: London, UK, 2023.
  24. Williamson, B. The social life of AI in education. Int. J. Artif. Intell. Educ. 2024, 34, 97–104.
  25. Williamson, B.; Eynon, R.; Knox, J.; Davies, H.C. Critical perspectives on AI in education: Political economy, discrimination, commercialization, governance and ethics. In Handbook of Artificial Intelligence in Education; du Boulay, B., Mitrovic, A., Yacef, K., Eds.; Edward Elgar Publishing: Cheltenham, UK, 2023; pp. 555–573. Available online: https://www.e-elgar.com/shop/gbp/handbook-of-artificial-intelligence-in-education-9781800375406.html.
  26. Williamson, B.; Molnar, A.; Boninger, F. Time for a Pause: Without Effective Public Oversight, AI in Schools Will Do More Harm Than Good; National Education Policy Center: Boulder, CO, USA, 2024. Available online: https://nepc.colorado.edu/publication/ai.
  27. Creswell, J.W.; Creswell, J.D. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches, 5th ed.; SAGE: Thousand Oaks, CA, USA, 2018. Available online: https://us.sagepub.com/en-us/nam/research-design/book255675.
  28. Freire, P. Pedagogy of Hope: Reliving Pedagogy of the Oppressed; Bloomsbury Academic: London, UK, 2014. Available online: https://www.bloomsbury.com/uk/pedagogy-of-hope-9781472533401/.
  29. Freire, P. Pedagogy of the Oppressed, 50th Anniversary ed.; Bloomsbury Academic: New York, NY, USA, 2018. Available online: https://www.bloomsbury.com/us/pedagogy-of-the-oppressed-9781501314131/.
  30. Holmes, W.; Porayska-Pomsta, K.; Holstein, K.; Sutherland, E.; Baker, T.; Shum, S.B.; Santos, O.C.; Rodrigo, M.T.; Cukurova, M.; Bittencourt, I.I.; et al. Ethics of AI in Education: Towards a Community-Wide Framework. Int. J. Artif. Intell. Educ. 2022, 32, 504–526.
  31. Kasneci, E.; Sessler, K.; Küchemann, S.; Bannert, M.; Dementieva, D.; Fischer, F.; Gasser, U.; Groh, G.; Günnemann, S.; Hüllermeier, E.; et al. ChatGPT for good? On opportunities and challenges of large language models for education. Learn. Individ. Differ. 2023, 103, 102274.
  32. UNESCO. Guidance for Generative AI in Education and Research; UNESCO: Paris, France, 2023. Available online: https://unesdoc.unesco.org/ark:/48223/pf0000386693 (accessed on 1 December 2025).
Figure 1. Temporal overview of the two-phase mixed-methods design underlying the Nested Learning ecosystem. Phase 1 provides an exploratory calibration of the neuro-adaptive pipeline; Phase 2 evaluates its pedagogical and ethical impact in authentic courses.
Figure 2. Conceptual view of the Nested Learning ecosystem. (a) Multi-layer architecture surrounding the learner, from neurocognitive states to AI agents, smart-campus infrastructure and governance. (b) Multimodal neuro-adaptive pipeline that integrates EEG, interaction and contextual data through deep learning models to inform agent policies and educational outcomes.
Figure 3. Conceptual positioning of Nested Learning at the intersection of generative AI, neuroeducation and sustainable higher education.
Figure 4. Snapshot of a neuroadaptive Nested Learning session in the EEG lab. A student wears a low-cost 14-channel Emotiv EPOC+ headset while real-time cortical activity and brain topographies are displayed on multiple monitors.
Figure 5. Use of ChatGPT together with real-time EEG reading and P300 visualization during a Nested Learning activity. The left side of the screen shows the interaction with the generative AI agent, while the right side displays event-related potentials and cortical maps used to monitor attentional dynamics.
Figure 6. Multi-agent generative-AI station used in the large-scale phase of the study, integrating six platforms (ChatGPT, Gemini, Claude, Copilot, DeepSeek and GPT-o1) in a single workspace. This photograph illustrates the practical implementation of the Nested Learning ecosystem in a real university setting.
Figure 7. Theoretical integration of the Nested Learning ecosystem. Micro-level neurocognitive states are modelled by multimodal and P300-based analytics; meso-level instructional episodes are orchestrated by generative-AI agents; macro-level institutional and sustainability policies constrain and enable deployment. These levels jointly support sustainable higher-education outcomes.
Figure 8. End-to-end pipeline for Phase 1. Pedagogical events and ChatGPT interactions are scripted and pushed to a Kafka bus, which synchronises event markers with EEG acquisition. Data are preprocessed, P300 features are extracted and statistical models are estimated to calibrate the neuro-adaptive sensitivity of the Nested Learning ecosystem.
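The marker-synchronisation step that Figure 8 attributes to the Kafka bus can be illustrated independently of any messaging infrastructure. The sketch below is not the study's implementation; it only shows, assuming a shared clock and the EPOC+'s nominal 128 Hz sampling rate, how a timestamped pedagogical event would map to EEG sample indices (the marker name `hint_delivered` and the epoch window are hypothetical).

```python
# Minimal sketch (not the authors' pipeline): align pedagogical event
# markers, timestamped on a shared clock, to EEG sample indices at the
# Emotiv EPOC+ nominal sampling rate of 128 Hz.

FS = 128  # samples per second

def marker_to_sample(marker_ts: float, eeg_start_ts: float, fs: int = FS) -> int:
    """Convert a wall-clock marker timestamp to the nearest EEG sample index."""
    return round((marker_ts - eeg_start_ts) * fs)

def epoch_bounds(sample: int, pre_ms: int = 200, post_ms: int = 800, fs: int = FS):
    """Return (start, stop) sample indices for a peri-stimulus epoch."""
    return sample - pre_ms * fs // 1000, sample + post_ms * fs // 1000

# Example: a 'hint_delivered' marker 2.5 s after recording onset
idx = marker_to_sample(1000.0 + 2.5, 1000.0)   # -> sample 320
start, stop = epoch_bounds(idx)                 # -> (295, 422)
```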
Figure 9. High-level orchestration pipeline for Phase 2. Students and lecturers interact with a multi-agent generative-AI layer, which also consults smart-campus IoT signals. All interactions and contexts are logged for subsequent analysis of the latent state dynamics $z_{i,t}$ and policy effects.
Figure 10. Emotiv EPOC+ electrode positions and their correspondence with cortical lobes.
Figure 11. Schematic representation of P300 dynamics in Phase 1. Nested Learning segments show higher parietal P300 amplitudes than baseline segments in the 250–400 ms window, consistent with increased attentional engagement and neuro-adaptive sensitivity. Curves are illustrative rather than raw waveforms. Legend: blue curve = Nested Learning segments; grey curve = baseline segments.
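To connect the schematic in Figure 11 to an analysis step, the snippet below computes the standard P300 feature it depicts: the mean amplitude in the 250–400 ms post-stimulus window. It assumes baseline-corrected epochs sampled at 128 Hz and time-locked at sample 0; it is a generic illustration, not the study's preprocessing code.

```python
import numpy as np

FS = 128  # Emotiv EPOC+ nominal sampling rate

def p300_mean_amplitude(epochs: np.ndarray, t0_ms: int = 250,
                        t1_ms: int = 400, fs: int = FS) -> np.ndarray:
    """Per-trial mean amplitude (e.g. in microvolts) in [t0_ms, t1_ms).

    `epochs` is shaped (n_trials, n_samples), stimulus onset at sample 0.
    """
    s0, s1 = t0_ms * fs // 1000, t1_ms * fs // 1000
    return epochs[:, s0:s1].mean(axis=1)

# Synthetic check: trials with a flat 5 uV plateau covering the window
epochs = np.zeros((4, FS))          # 4 trials, 1 s of post-stimulus data
epochs[:, 32:51] = 5.0              # 250 ms ~ sample 32, 400 ms ~ sample 51
amps = p300_mean_amplitude(epochs)  # -> four values of 5.0
```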
Figure 12. Path representation of regression results. Nested Learning (NL) and Perceived Neuroadaptive Adjustments (NA) jointly predict Engagement (ENG) and Self-Regulation (SRL), with standardised coefficients shown on each path (all p < 0.001). Arrows represent statistical associations, not causal effects.
Figure 13. Qualitative map of the four thematic families surrounding the Nested Learning ecosystem. Each family feeds into and is reinforced by the ecosystem, highlighting the interdependence between engagement, self-regulation, neuroadaptive personalisation and emotional safety.
Figure 14. Illustrative convergence trajectories for the engagement component $z_{i,t}^{(E)}$ under a Nested Learning policy. Both learners move from low to high engagement, but with different effective learning-speed parameters $\eta_i^{(E)}$. The empirical results (high engagement, strong self-regulation and stabilised neuroeducational markers) are consistent with trajectories that approach a high-value equilibrium with diminishing increments $\Delta z_{i,t}^{(E)}$.
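The diminishing-increment behaviour described in the caption can be reproduced with a simple relaxation update. The rule $z_{t+1} = z_t + \eta\,(z^{*} - z_t)$ used below is an assumed stand-in for the unspecified latent-state model; it merely illustrates how different learning-speed parameters $\eta$ produce the two trajectory shapes sketched in Figure 14.

```python
# Illustrative sketch of the convergence dynamics in Figure 14, under the
# ASSUMED relaxation update z_{t+1} = z_t + eta * (z_star - z_t); the
# study's actual latent-state model is not specified in this excerpt.

def trajectory(z0: float, eta: float, z_star: float = 1.0, steps: int = 20):
    """Iterate the relaxation update and return the full path of z values."""
    z, path = z0, [z0]
    for _ in range(steps):
        z = z + eta * (z_star - z)
        path.append(z)
    return path

fast = trajectory(0.1, eta=0.5)   # larger learning-speed parameter
slow = trajectory(0.1, eta=0.2)   # smaller learning-speed parameter
# Both paths approach z_star = 1.0, with increments shrinking each step.
```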
Figure 15. Standard neuroimaging view of the real-time neuroelectric dynamics of cognitive nesting.
Table 1. Overview of the two-phase mixed-methods design underpinning the Nested Learning ecosystem.

| Phase | Context | Participants | Main data sources and aims |
|---|---|---|---|
| Neuro-experimental calibration | Laboratory session with generative-AI-mediated problem solving and controlled instructional events | 18 biomedical undergraduates (21–24 years) | Mobile EEG (Emotiv EPOC+, 14 channels), P300 dynamics, event markers, interaction logs; exploratory validation of the sensitivity and stability of the neuro-adaptive pipeline to instructional micro-events. |
| Field implementation in higher education | Regular courses at a university in Madrid using multiple generative AI platforms and smart-campus integration | 380 participants (300 students, 80 lecturers) | Questionnaires (engagement, self-regulated learning, cognitive safety, ethical concerns), platform and agent logs, IoT context signals; evaluation of perceived Nested Learning, pedagogical impact and sustainability-related issues at scale. |
Table 2. Synthesis of the theoretical framework and its operationalisation in the present study.

| Theoretical pillar | Key constructs | Operationalisation in this study | Main indicators |
|---|---|---|---|
| Nested Learning and Pedagogy of Hope | Hope, agency, belonging, self-regulated learning | Design of nested support structures (human + AI), emphasis on dialogic feedback and non-punitive error treatment | Questionnaire scales on engagement, SRL and perceived cognitive safety; qualitative reports of hope and trust. |
| Multimodal deep learning and neuroimaging | Attention, cognitive load, context updating (P300), multimodal analytics | Mobile EEG recordings during LLM-mediated tasks; alignment of P300 with instructional events; fusion of EEG and interaction logs through deep models | P300 amplitude/latency, anomaly scores, temporal alignment metrics, task-level engagement markers. |
| Agent-based generative AI and IoT | Multi-agent orchestration, tool use, contextual adaptation via IoT | Distributed LLM-based agents for feedback, guidance and governance; integration with smart-campus sensors and learning platforms | Log traces of agent interactions, tool calls and IoT-triggered adaptations; perceived usefulness and transparency of AI support. |
| Governance and sustainability in higher education | Ethics, data protection, inclusion, long-term resilience | Privacy-by-design policies, informed consent, data-minimisation strategies and explicit attention to marginalised students | Compliance checks, participant consent records, reported ethical concerns, perceived fairness and inclusiveness. |
Table 3. Descriptive statistics, distributional properties and reliability indices for Nested Learning dimensions and outcomes (n = 380).

| Dimension / Scale | Mean | SD | Skewness | Kurtosis | α | ω |
|---|---|---|---|---|---|---|
| Perception of Nested Learning (NL) | 4.12 | 0.78 | 0.45 | 0.30 | 0.88 | 0.89 |
| Interaction with Generative AI (INT) | 3.95 | 0.82 | 0.28 | 0.45 | 0.85 | 0.86 |
| Perceived Neuroadaptive Adjustments (NA) | 3.88 | 0.75 | 0.15 | 0.50 | 0.82 | 0.84 |
| Climate of Hope and Cognitive Safety (SAF) | 4.25 | 0.71 | 0.68 | 0.12 | 0.91 | 0.92 |
| Engagement (ENG) | 4.18 | 0.76 | 0.55 | 0.10 | 0.87 | 0.88 |
| Self-Regulation (SRL) | 4.05 | 0.80 | 0.32 | 0.41 | 0.86 | 0.87 |

Note. Range 1–5. SD = standard deviation; α = Cronbach's alpha; ω = McDonald's omega.
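As a reading aid for the reliability column of Table 3, Cronbach's alpha can be computed from an items matrix in a few lines. The snippet below implements the generic textbook formula, not the study's analysis script, and the two-item data are synthetic.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an items matrix shaped (n_respondents, n_items).

    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Perfectly parallel items: every respondent gives identical answers on
# both items, so internal consistency is maximal and alpha = 1.0.
x = np.array([[1, 1], [2, 2], [3, 3], [4, 4]], dtype=float)
alpha = cronbach_alpha(x)   # -> 1.0
```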
Table 4. Global fit indices for the four-factor Confirmatory Factor Analysis (CFA) model (n = 190).

| Fit Index | Observed Value | Recommended Threshold |
|---|---|---|
| χ²/df (chi-square / degrees of freedom) | 1.84 | < 3.0 |
| CFI (Comparative Fit Index) | 0.96 | > 0.95 |
| TLI (Tucker–Lewis Index) | 0.95 | > 0.95 |
| RMSEA (Root Mean Square Error of Approximation) | 0.054 | < 0.06 |
| SRMR (Standardized Root Mean Square Residual) | 0.042 | < 0.08 |

Note. Indices indicate a good model fit according to Hu and Bentler (1999).
Table 5. Correlation matrix between Nested Learning (NL), Interaction with Generative AI (INT), Perceived Neuroadaptive Adjustments (NA), Engagement (ENG) and Self-Regulation (SRL).

| Variable | 1. NL | 2. INT | 3. NA | 4. ENG | 5. SRL |
|---|---|---|---|---|---|
| 1. Nested Learning (NL) | – | 0.54** | 0.63** | 0.57** | 0.52** |
| 2. Interaction (INT) | 0.54** | – | 0.49** | 0.46** | 0.41** |
| 3. Neuroadaptive (NA) | 0.63** | 0.49** | – | 0.49** | 0.44** |
| 4. Engagement (ENG) | 0.57** | 0.46** | 0.49** | – | 0.62** |
| 5. Self-Regulation (SRL) | 0.52** | 0.41** | 0.44** | 0.62** | – |

Note. n = 380. ** p < 0.001 for all correlations.
Table 6. Multiple linear regression models for Engagement (ENG) and Self-Regulation (SRL) with Nested Learning (NL) and Perceived Neuroadaptive Adjustments (NA) as predictors.

| Dependent | Predictor | β | t | p | R² |
|---|---|---|---|---|---|
| Engagement (ENG) | Nested Learning (NL) | 0.41 | 9.12 | < 0.001 | 0.39 |
| | Neuroadaptive (NA) | 0.33 | 7.08 | < 0.001 | |
| Self-Regulation (SRL) | Nested Learning (NL) | 0.38 | 8.44 | < 0.001 | 0.34 |
| | Neuroadaptive (NA) | 0.29 | 6.17 | < 0.001 | |

Note. Standardised coefficients are reported. Both models met assumptions of linearity, homoscedasticity and absence of problematic multicollinearity; nevertheless, their cross-sectional and observational nature means that they should be interpreted as associative rather than causal.
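Standardised coefficients such as those in Table 6 are conventionally obtained by z-scoring all variables before an ordinary least-squares fit. The sketch below illustrates this procedure on synthetic data; the effect sizes are invented for the example and do not reproduce the study's estimates.

```python
import numpy as np

# Synthetic two-predictor data loosely mirroring the structure of Table 6
# (values are invented for illustration, not the study's data).
rng = np.random.default_rng(0)
n = 380
nl = rng.normal(size=n)                    # Nested Learning predictor
na = 0.6 * nl + 0.8 * rng.normal(size=n)   # correlated neuroadaptive predictor
eng = 0.4 * nl + 0.3 * na + rng.normal(size=n)  # engagement outcome

def standardised_betas(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """z-score predictors and outcome, then fit OLS without an intercept."""
    Xz = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    yz = (y - y.mean()) / y.std(ddof=1)
    beta, *_ = np.linalg.lstsq(Xz, yz, rcond=None)
    return beta

betas = standardised_betas(np.column_stack([nl, na]), eng)
# Both betas are positive, as in the reported models.
```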
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.