Preprint
Article

This version is not peer-reviewed.

Artificial Intelligence in Agriculture: Ethical Stewardship, Responsible Innovation, and Governance for Sustainable Food Systems

Submitted: 22 September 2025
Posted: 23 September 2025


Abstract
Agriculture’s “4.0” transition increasingly relies on artificial intelligence (AI), IoT sensing, robotics, and decision-support. This review synthesizes Q1/Q2 scholarship, multilateral policy, and national AI strategies to assess how AI is changing farm stewardship and what guardrails align innovation with equity and sustainability. Methods combine a systematic literature review, comparative policy analysis (FAO, OECD, India’s #AIForAll, Rwanda AI Policy), NLP-assisted meta-synthesis of agri-AI discourse, theological analysis of stewardship texts (Gen 1:26–28, Gen 2:15), and case illustrations (precision irrigation, UAV spraying, mobile advisory). Results show AI improves resource-use efficiency and foresight (e.g., precision irrigation; targeted drone spraying) while introducing risks of dependency, opacity, and data-extractive business models. We propose a multi-level governance scaffold—farmer-centric data rights, explainability thresholds, context-appropriate human oversight, and compute-energy budgeting—mapped to Responsible Innovation (AIRR) and Value-Sensitive Design. We translate stewardship into measurable design constraints (e.g., water-withdrawal and biodiversity “red lines,” local-language interfaces, offline capability). Policy implications include ex ante impact assessments, mandatory farmer representation on regional AI councils, and adoption equity metrics. Properly governed, AI can act as a tool of care for households, communities, and creation rather than a driver of technocratic consolidation.
Subject: Social Sciences - Other

1. Introduction

Artificial intelligence (AI) is fast becoming the decisive variable in humanity’s centuries-long struggle to secure food, justice, and ecological stability. Within a biblical worldview, stewardship is not environmentalism that rivals or relativizes human dignity; rather, it is the responsible care of the created order entrusted to image-bearers (Gen 1:26–28; 2:15). Creation exists for the glory of God and the good of neighbor, and protection of land, water, and biodiversity is therefore a means of neighbor-love, not an end superior to human life (Ps 8:4–8; Matt 22:37–39). In this study, Scripture is the supreme and final authority for ethical claims; all frameworks from philosophy, policy, or design are used instrumentally and are valid only insofar as they serve what God’s Word requires. Where international bodies provide data (e.g., yields, connectivity), such statistics are used instrumentally and do not confer moral authority or governance prerogative over church, family, or nation.
Contribution. This study offers (i) an integrated governance lens that binds Responsible Innovation and Value-Sensitive Design to a stewardship ethic, and (ii) a paste-ready “guardrail” toolkit for agricultural AI—farmer-centric data rights, explainability thresholds, context-appropriate human oversight, and compute-energy budgets—plus indicators for evaluation at program and product levels.

1.1. Background

Global agriculture is entering a data-intensive, AI-enabled phase often labelled “Agriculture 4.0,” yet its significance is less about slogans than about whether it can help feed a growing and aging world under tight environmental constraints. Three trends sharpen AI’s relevance in agriculture: population growth and aging, rising food demand, and climate-risk pressures on yields [1,2,3]. A meta-analysis of 57 studies projects that global food demand will increase by roughly 35–56% between 2010 and 2050; these trajectories tighten the margin for error on land, water, biodiversity, and emissions [2]. AI and digital tools—ranging from edge-AI sensing and decision support to robotics and autonomous systems—are therefore being explored as levers for higher resource-use efficiency, resilience, and transparency across agri-food value chains [4,5,6].
Amid this technological turn, adoption is uneven and risks amplifying structural inequities if not governed well. The FAO–ITU survey of 47 sub-Saharan African countries documents substantial gaps in rural connectivity, skills, and enabling policies that shape who benefits from digital agriculture and on what terms [7]. Moreover, the energy and compute demands of advanced AI raise questions of environmental externalities and “green AI” practices that must be confronted in agriculture’s sustainability calculus [8].
Within Christian thought as governed by Scripture, dominion is a call to serve rather than to exploit, and stewardship requires ordering technologies to the good of persons made in the image of God and to the care of creation as God’s possession (Gen 1:26–28; 2:15; Ps 24:1). On that basis, ethical evaluation of AI in agriculture must prioritize human life, justice for the poor, and truthful, accountable governance, with environmental care positioned as a servant to these biblical ends rather than as a competing ultimate good. While Scripture repeatedly affirms the intrinsic worth of the non-human creation (Ps 24:1; Rom 8:19-22), it also locates humanity uniquely ‘a little lower than the angels’ and charged to exercise dominion for the good of people first, then planet (Gen 1:26-28; Ps 8:4-8). Consequently, ecological stewardship must be assessed by its service to human flourishing rather than the reverse [9]. This paper responds to that call by operationalizing a Biblical ethics of stewardship alongside contemporary frameworks in responsible and value-sensitive design for AI in agriculture. Within a stewardship worldview, these technologies must therefore be weighed not only for efficiency but for their capacity to ‘watch over’ the Earth in the sense of Genesis 2:15—a theme that the present study brings to the centre of AI-agriculture discourse. The Sixth Assessment Synthesis Report of the IPCC confirms that without drastic yield-boosting and emissions-mitigating innovations, climate-induced crop losses could exceed 10 percent in staple regions before mid-century [3]. Independent agronomic syntheses likewise find negative yield responses to warming across major staples, strengthening the prudential case for adaptive management in agriculture irrespective of any single institution’s scenarios [10,11]. In this article, stewardship functions as a normative orientation that prioritizes the protection of life, households, and the created order. 
Design and governance instruments are treated as methods, retained insofar as they deliver those ends in practice.

1.2. Research Problem

Despite a surge of AI pilots across the agri-food system—from edge sensing to autonomous robotics—normative integration remains thin. Deployments rarely weave a Biblical ethic of stewardship together with the procedural discipline of Responsible Research and Innovation (RRI) and Value Sensitive Design (VSD), and they seldom embed farmer-centric data governance or life-cycle environmental accounting from the outset. Consequently, local efficiency gains often rest on fragile social contracts regarding data rights, algorithmic opacity, and burden-sharing of environmental costs [12]. Mapping studies show that the agricultural-AI literature still treats social and ethical questions as peripheral, with transparency, dignity, and solidarity among the least-addressed principles [13]. Meanwhile, proposals such as the United Nations Global Digital Compact (GDC), annexed to the 2024 Pact for the Future, convene high-level discussion of AI governance but remain non-binding and contested, with scope and enforcement mechanisms still unsettled [14,15]. From a stewardship-and-subsidiarity perspective, such compacts risk recentralizing authority in ways that weaken local accountability unless carefully delimited to technical cooperation and transparent data sharing. Accordingly, this study treats the GDC as a datapoint on emerging coordination, not a source of normative direction for agriculture ethics or governance.
For producers in the Global South, these gaps intersect with stubborn digital divides in connectivity, skills, and enabling regulation, narrowing the reachable benefits of AI and heightening the risks of algorithmic exclusion and data-extractivist business models [7,16]. OECD analysis further warns that fragmented agricultural data-governance arrangements erode farmer trust and could stall digital-transformation gains [17]. In short, the velocity of AI deployment now outpaces our ethical and governance capacities, threatening to reproduce—and in some contexts deepen—long-standing agrarian inequities.
At the level of design and governance theory, RRI’s anticipatory, reflexive, inclusive, and responsive (AIRR) schema offers a well-tested path for aligning innovation with societal values [18], while VSD provides complementary methods for surfacing and embedding stakeholder values throughout technical lifecycles [19,20]. Yet no consolidated framework currently (i) grounds these procedural commitments in a Biblical theology of stewardship, (ii) translates stewardship into concrete guardrails for data, models, compute, and deployment contexts in agriculture, and (iii) stress-tests those guardrails against 2035 horizon scenarios. Bridging this multidimensional gap—so that innovation becomes a ministry of care rather than an instrument of exclusion—is the central problem this study addresses.
Research need: To address this gap, we must critically evaluate how macro-trends in AI-agriculture intersect with ethics and governance, especially in the Global South context. This paper responds by proposing a novel integration of three frameworks – Responsible Innovation (RI), Value-Sensitive Design (VSD), and Theological Stewardship – as a lens to analyze and guide AI’s role in agriculture. We ask: How can emerging technologies be governed so that farming is not reduced to a technocratic exercise, but remains rooted in humane values, social justice, and environmental care? What would it mean to design AI systems for agriculture that honor farmers’ autonomy, community wisdom, and even spiritual worldviews about caring for the land?

1.3. Research Objectives

This study pursues four linked objectives that together translate stewardship from moral vision into operational governance for AI in agriculture. First, it integrates a Biblical stewardship ethic with RRI and VSD into a coherent evaluative lens tailored to agri-food contexts. Second, it analyzes macro-trends and representative initiatives (North and South) to surface recurring design tensions in data governance, accountability, and environmental externalities. Third, it proposes a set of field-tested governance guardrails—covering farmer-centric data rights, transparency and explainability thresholds, context-appropriate human oversight, and compute-energy budgeting—that can be adopted by public agencies, standard-setting bodies, and innovators. Fourth, it conducts foresight to 2035 to stress-test these guardrails under plausible futures of climate volatility, demographic change, and digital market consolidation, clarifying implementation pathways and measurable indicators.

1.4. Research Questions

The analysis is organized around three questions that proceed from descriptive mapping to normative design and institutional implementation. Drawing on the foregoing objectives, the study pursues the following three inter-locking research questions: RQ1. How does integrating AI into agriculture reshape human stewardship of land and food systems? RQ2. What ethical and spiritual risks emerge when intelligent agriculture displaces traditional agrarian ethics and community practices? RQ3. How can governance frameworks embed theological and ethical principles—stewardship, equity and justice—into the development of intelligent agriculture? These questions scaffold the analytic sequence developed in Sections 4.1 through 4.3, ensuring conceptual continuity from inquiry to insight.
We hypothesize: (H1) That combining RI, VSD, and theological stewardship principles will illuminate novel ethical considerations (e.g. spiritual and cultural factors) often overlooked in purely secular tech governance models. (H2) That case studies will show differences between Global North and South in how values and risks manifest – supporting a hypothesis that one-size-fits-all AI ethics guidelines are insufficient, and context-sensitive (especially faith/culture-informed) adaptations are needed. (H3) That a stewardship-oriented responsible innovation approach can yield concrete policy and design recommendations (a “guardrail” toolkit) which, if implemented over the next decade, will measurably improve outcomes like smallholder AI adoption rates, reduction in digital inequalities, and stakeholder trust in AI systems (as indicated by surveys or participatory evaluations). These hypotheses are testable in future empirical work via comparative policy studies and community-based research, though in this paper we address them through secondary evidence and conceptual analysis. In resolving these questions, we braid insights from Responsible Innovation, Value-Sensitive Design, and Theological Stewardship, thereby constructing an analytic lens capable of capturing both the macro-institutional and micro-design dynamics documented in our findings.

2. Theoretical Framework

To bridge the domains of ethics, theology, and technology policy, we draw on three complementary frameworks: Responsible Innovation (RI), Value-Sensitive Design (VSD), and a Theological Stewardship paradigm. Each offers insights at different levels (system, design, and normative worldview), and together they form an integrated lens for examining AI in agriculture.

2.1. Responsible Innovation (RI)

Rooted in science and technology studies, RI provides a paradigm for anticipating and governing technological change in alignment with societal needs and values. A foundational model by Stilgoe, Owen, and Macnaghten [20] outlines four key dimensions: anticipation (systematically thinking through potential impacts, including unintended consequences), inclusion (engaging diverse stakeholders in dialogue and co-creation), reflexivity (continuous self-examination by innovators regarding purposes and assumptions), and responsiveness (ability to change course or adapt in light of new knowledge and public values) [21,22]. Together, these AIRR dimensions discipline research and deployment to foresee risk, include stakeholders, and adapt toward values-concordant outcomes. In agriculture, RI encourages broad participation – farmers, communities, policymakers, and scientists collectively reflecting on questions like: What kind of farming future do we desire? and Who wins or loses with AI deployment? [16]. RI also emphasizes governance mechanisms (standards, regulations, ethical guidelines) that can steer AI development towards societal benefit and away from harm. For example, an RI approach might mandate ex ante impact assessments for an AI farming tool and inclusive committees to oversee its roll-out, aligning with the idea that innovators carry a duty to care for society’s well-being [16]. By providing a structure to foresee risks (like widening inequalities or environmental side-effects) and proactively manage them, RI theory grounds our study in a preventive and participatory ethics of innovation. These instruments have no independent moral authority; they are retained only insofar as they serve biblically revealed ends (love of neighbor and faithful care for creation as God’s possession).

2.2. Value-Sensitive Design (VSD)

While RI operates at a policy and system level, VSD offers a methodological approach at the design level to incorporate human values throughout technology development. Originating in human-computer interaction research, VSD is defined as a theoretically grounded approach to the design of technology that accounts for human values in a principled and comprehensive manner [23]. Pioneered by Batya Friedman and colleagues, VSD insists that technologies are not value-neutral instruments; designers inevitably make choices that favor certain values over others. VSD provides techniques to systematically identify stakeholders (both direct users and those indirectly affected) and elicit their values and concerns [23,24]. It integrates conceptual investigations (clarifying which values – e.g. privacy, fairness, autonomy, sustainability – are at stake), empirical investigations (studying how people perceive and experience the technology), and technical investigations (designing features to support prioritized values) [23,24]. Applied to AI in agriculture, VSD would, for instance, involve farmers in co-designing an AI irrigation system to ensure it aligns with their values (like equitable water sharing, usability for those with low literacy, or respecting indigenous knowledge about weather patterns). It might lead to design choices such as privacy safeguards on farm data or interfaces in local languages to uphold inclusivity. Crucially, VSD moves beyond abstract principles by embedding values into practical design requirements. For example, if equity is a core value, a VSD process might require an AI crop advisory app to work offline for communities with poor internet, or to use low-cost smartphones, so as not to exclude poorer farmers [7,25]. In summary, VSD provides the micro-level tools ensuring that AI systems are engineered from the ground up to reflect ethical values and the lived realities of stakeholders, rather than retrofitting ethics after deployment. 
In agricultural deployments, we operationalize stewardship via design constraints: offline functionality for low-connectivity regions; interfaces in local languages; minimum explanation granularity (feature-level factors influencing a recommendation); and “red-line” constraints that prevent the system from recommending actions exceeding locally set thresholds for water withdrawal or habitat loss.
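The “red-line” pattern described above can be made concrete in code. The sketch below is a hypothetical illustration, not any deployed system’s logic: the threshold names, values, and `Recommendation` fields are invented to show the pattern of checking every model recommendation against locally set limits before it reaches the farmer.

```python
# Hypothetical sketch of a "red-line" guardrail: an advisory system screens
# each recommendation against locally set limits before surfacing it.
# All names and numbers below are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class RedLines:
    max_water_withdrawal_m3_per_ha: float  # locally agreed water limit
    max_habitat_loss_ha: float             # biodiversity "red line"

@dataclass
class Recommendation:
    water_withdrawal_m3_per_ha: float
    habitat_loss_ha: float
    explanation: str  # feature-level factors behind the recommendation

def passes_red_lines(rec: Recommendation, limits: RedLines) -> bool:
    """Return True only if the recommendation stays within local thresholds."""
    return (rec.water_withdrawal_m3_per_ha <= limits.max_water_withdrawal_m3_per_ha
            and rec.habitat_loss_ha <= limits.max_habitat_loss_ha)

limits = RedLines(max_water_withdrawal_m3_per_ha=4500.0, max_habitat_loss_ha=0.0)
ok = Recommendation(4200.0, 0.0, "low soil moisture; high crop stress index")
over = Recommendation(5100.0, 0.0, "forecast heat wave")

print(passes_red_lines(ok, limits))    # True
print(passes_red_lines(over, limits))  # False
```

In a real deployment, the thresholds themselves would be set through the participatory processes described in Section 2.1, not hard-coded by the vendor.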

2.3. Theological Stewardship Framework

Theological stewardship, grounded in Scripture alone, affirms that “the earth is the LORD’s” (Ps 24:1) and that human beings are commissioned “to work and to keep” the garden (Gen 2:15). Stewardship therefore binds agricultural practice to worshipful obedience and neighbor-love: technologies may be used, but never as masters of human beings or creation. Because men and women uniquely bear the image of God (Gen 1:26–28), agricultural innovation must be evaluated first by its effects on human life, households, and communities, and then by its effects on the land they cultivate. Scholarly treatments within evangelical theology corroborate this reading without adding extra-biblical magisterial authority: they draw the same lines from Genesis 1–2, the Psalms, and the Prophets to show that creation care is an implication of covenant faithfulness ordered to human dignity—not a rival good that can override it [26,27]. In what follows, Scripture remains the norm that norms; design and governance frameworks are received as subordinate tools. Recent evangelical treatments converge on this hierarchy of ends—human life and household livelihoods ordered to the love of neighbor [9,27] —while affirming creation’s intrinsic worth under God’s ownership (Ps 24:1; Gen 2:15). This paper therefore receives non-biblical sources as historically contingent aids to prudence, not as coequal magisteria.
By bringing these frameworks together, we can analyze AI in agriculture through a multi-level ethical lens. Responsible Innovation gives us a macro-structure to evaluate policies, research agendas, and innovation ecosystems (the system-level), asking if they are anticipatory and inclusive. Value-Sensitive Design operates at the actor-level, i.e. the practices of engineers, developers, and designers who build AI systems – ensuring that concrete design decisions reflect ethical choices. Theological Stewardship provides an overarching value system and narrative that can guide both levels with moral purpose – reminding us why we seek responsible, value-driven innovation in the first place. It frames the ultimate ends: sustaining God’s creation (the land, biodiversity) and uplifting the least advantaged of our neighbors (justice and love in community). In practical governance terms, this integration suggests, for example, that a national AI-agriculture strategy in an African country should be shaped by participatory foresight (RI) that involves farmers and faith leaders, that it should mandate value-sensitive approaches in project implementation, and that it should explicitly incorporate cultural/religious principles of stewardship (such as land as heritage rather than mere commodity). Our theoretical framework thus balances technical innovation with moral responsibility. It will be used in the following sections as an analytical template: we will examine real-world cases and policies to see to what extent (and how) they align with or deviate from these ideal principles, and we will propose ways to better integrate these dimensions in the governance of AI in agriculture.
Figure 1. Multi-level model aligning Responsible Innovation (policy), Value-Sensitive Design (design/deployment), and stewardship (core values). Bidirectional arrows indicate feedback between community values, design requirements, and governance.

3. Methodology

This study employs a multi-method secondary research design, leveraging diverse sources of evidence to build a comprehensive understanding. Given the interdisciplinary scope – spanning agriculture, AI, ethics, theology, and development policy – a single empirical study could not capture all dimensions. Instead, we synthesize findings from existing research and data (“research-on-research”), ensuring triangulation across methods for robustness. The approach includes:

3.1. Systematic Literature Review

We conducted structured searches in academic databases (Scopus, Web of Science) and SSRN preprints for peer-reviewed articles on AI in agriculture (with keywords: “AI OR machine learning AND agriculture”, “digital farming ethics”, etc.). Priority was given to Q1/Q2 journals and recent review papers. Over 80 articles were screened, of which ~50 met inclusion criteria of directly addressing technological, social, or ethical aspects of AI in farming. We followed PRISMA guidelines for literature reviews, documenting search strings, inclusion/exclusion decisions, and performing thematic coding of the content [28]. This yielded an overview of known applications (e.g. AI for irrigation, pest management), expected benefits (yield, efficiency, climate adaptation), and commonly cited challenges (data issues, adoption barriers). Because human life and dignity are first principles in this analysis, environmental indicators are treated as instrumental goods—empirical proxies for effects on human health, household livelihoods, and intergenerational neighbor-love—never as ends superior to human life (Ps 8:4–8; Matt 22:37–39). Contemporary epidemiology confirms that reducing pollution and conserving soils are not merely “green” goals but direct means of protecting life and productivity in farming communities [29]. Importantly, we identified a subset of works explicitly discussing ethical and social impacts, such as Ryan [13] mapping AI ethics principles in agriculture and Dara et al. [12] proposing an ethical AI framework for farming. These formed a basis for gap analysis – highlighting, for example, that none of the major reviews engaged with theological or cultural factors.
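The screening step can be illustrated with a minimal sketch. All keywords, criteria, and record titles below are invented for illustration; the sketch only shows the pattern of combining a topic query with an aspect filter and keeping counts for PRISMA-style reporting.

```python
# Hypothetical sketch of title screening: a record is kept only if it matches
# the topic query AND addresses a technological, social, or ethical aspect.
# Keyword sets and records are illustrative, not the study's actual strings.

TOPIC = {"ai", "machine learning"}
ASPECTS = {"ethics", "adoption", "data", "irrigation", "pest"}

def include(title: str) -> bool:
    t = title.lower()
    return any(k in t for k in TOPIC) and any(a in t for a in ASPECTS)

records = [
    "AI for precision irrigation in smallholder farms",
    "Machine learning ethics in digital agriculture",
    "Tractor maintenance schedules",
]
kept = [r for r in records if include(r)]
print(len(records), "screened;", len(kept), "included")
```

In practice, screening decisions would be logged per record so that a PRISMA flow diagram (identified, screened, excluded, included) can be reported exactly.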

3.2. Policy and Document Analysis

To capture on-the-ground governance and narratives, we analyzed a range of official reports and strategy documents. Sources included: FAO and World Bank reports on digital agriculture (e.g. the FAO–ITU Digital Excellence in Agriculture report for Europe/Central Asia [7], which provided insight on adoption trends and challenges); African Union’s digitalization strategies; national AI policies from a few Global South countries (India’s AI4All strategy, Rwanda’s AI policy, etc. [30,31]); and relevant UN publications on AI governance for development [32]. We also reviewed statistics and datasets where available – for instance, data on mobile penetration, number of farm IoT devices deployed, etc., to ground the discussion in quantitative realities. Policy documents were coded using a hybrid schema: predefined codes (data governance, equity/inclusion, explainability/oversight, environmental externalities) and inductive codes that emerged from close reading. Two coders independently annotated texts and reconciled discrepancies; inter-coder agreement (Cohen’s κ) on the predefined codes was 0.81 (substantial) [33]. A comparative lens was applied: how do Global North versus Global South policies frame AI in agriculture? We found, for example, that EU documents often emphasize sustainability and data governance (GDPR considerations for farm data), whereas African documents frequently stress capacity-building and leapfrogging potential but may lack detailed ethical guidelines [34]. This analysis helped identify misalignments between high-level principles and local needs – for instance, while many global forums espouse “AI for all of humanity”, specifics on empowering small farmers or respecting traditional knowledge are scant, which our study aims to address.
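The inter-coder agreement statistic used above can be computed directly. The sketch below implements Cohen's κ from its definition (observed agreement corrected for chance agreement estimated from each coder's marginal label frequencies); the label sequences are invented for illustration.

```python
# Minimal sketch of the inter-coder agreement check: Cohen's kappa for two
# coders assigning one of the predefined codes to each passage.
# The example label sequences are invented for illustration.

from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two equal-length label sequences."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement under independent coding, from marginal frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

codes_a = ["data", "equity", "data", "oversight", "equity", "env"]
codes_b = ["data", "equity", "env",  "oversight", "equity", "env"]
print(round(cohens_kappa(codes_a, codes_b), 2))  # 0.78
```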

3.3. NLP-Based Meta-Synthesis

To augment human review, we utilized Natural Language Processing tools to scan large text corpora for emergent themes and sentiment. We compiled a text corpus of around 200 documents including news articles, blog posts (e.g. AgTech blogs), conference proceedings, and social media discussions (e.g. farmers’ forums talking about AI). Texts were preprocessed (lower-casing, tokenization, lemmatization, bigram detection, removal of standard and domain-specific stop-words) and modeled with Latent Dirichlet Allocation [35,36]. We selected k = 12 topics by maximizing the C_v coherence score alongside human interpretability checks, with asymmetric priors (α = 0.1, η = 0.01) to prefer sparse topic mixtures. Topic labels were assigned by two reviewers; disagreements were resolved by consensus. We triangulated topic salience with our literature review (e.g., “data privacy and trust,” “drone spraying and regulation”) and used topic frequencies over sources to inform emphasis in Sections 4.1–4.3. This unsupervised analysis revealed clusters like “Precision farming and yield”, “Drone technology and regulation”, “Data privacy and trust”, and notably “local knowledge and skepticism”. One topic, for example, centered on African farmers’ perspectives on digital advisory services, highlighting trust issues and the need for local language content – a finding that echoes the importance of inclusion [7,25]. We also performed sentiment analysis on discussions of AI in farming from Global South contexts, finding a mix of hopeful language (about increased productivity and climate solutions) and concern (words like “fear,” “loss,” “unfair”) related to job displacement and foreign control of technology.
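The preprocessing pipeline described above can be sketched in a few lines of pure Python. The stop-word list, bigram threshold, and corpus below are illustrative assumptions; in the study, the cleaned, bigram-merged tokens would then be passed to an LDA implementation (e.g. a library such as gensim) with k = 12 topics and asymmetric priors.

```python
# Minimal sketch of the NLP preprocessing steps: lower-casing, tokenization,
# stop-word removal, and simple frequency-based bigram detection.
# Stop-word set, threshold, and corpus are illustrative assumptions.

import re
from collections import Counter

STOPWORDS = {"the", "a", "of", "and", "in", "for", "to", "on"}  # illustrative

def tokenize(doc):
    return [t for t in re.findall(r"[a-z]+", doc.lower()) if t not in STOPWORDS]

def detect_bigrams(token_docs, min_count=2):
    """Merge adjacent pairs that co-occur at least min_count times."""
    pair_counts = Counter(
        (a, b) for toks in token_docs for a, b in zip(toks, toks[1:]))
    frequent = {p for p, c in pair_counts.items() if c >= min_count}
    merged_docs = []
    for toks in token_docs:
        merged, i = [], 0
        while i < len(toks):
            if i + 1 < len(toks) and (toks[i], toks[i + 1]) in frequent:
                merged.append(toks[i] + "_" + toks[i + 1])
                i += 2
            else:
                merged.append(toks[i])
                i += 1
        merged_docs.append(merged)
    return merged_docs

corpus = [
    "Drone spraying and the regulation of drone spraying in agriculture",
    "Data privacy and trust: farmers discuss data privacy on forums",
]
docs = [tokenize(d) for d in corpus]
print(detect_bigrams(docs)[0])
```

Merged tokens such as `drone_spraying` or `data_privacy` then behave as single vocabulary items in the topic model, which is what lets multi-word themes surface as coherent topics.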
These computational techniques provided an empirical backdrop to our theoretical arguments, ensuring we did not rely solely on anecdotal evidence. Moreover, this meta-synthesis helped to validate our chosen frameworks: for instance, the prevalence of “trust” and “bias” in the text data underscored the relevance of value-sensitive design principles, while frequent references to community and tradition underscored the stewardship/care narrative.

3.4. Theological Hermeneutics

Because our analysis is normative, we anchor it exegetically in Scripture rather than in ecclesial position papers or interfaith consensus statements. The relevant loci are creation (Gen 1–2), providence and ownership (Ps 24; Col 1:16–17), human vocation (Ps 8), neighbor-love (Matt 22:37–39), and the sufficiency of Scripture for ethical formation (2 Tim 3:16–17). Recent evangelical scholarship extends these texts toward a theology of agricultural stewardship without granting independent authority to tradition or reason [26,27]. This keeps our ethical analysis under the normative authority of God’s Word, while permitting empirical evidence to inform prudential judgments. To deepen the exegetical grounding, the study turns to contemporary evangelical scholarship on creation care, which reiterates that any technological ‘dominion’ remains accountable to the covenantal mandate of love for neighbour [37,38]. This hermeneutic step ensured that our “theological stewardship” framework was not a generic notion, but informed by rich theological scholarship. By integrating these insights, we aimed to formulate governance principles that resonate with local cultural values. For instance, in many African contexts where religious leaders hold influence, their buy-in on AI initiatives could hinge on demonstrating alignment with faith principles like caring for creation and loving one’s neighbor (which in practice could translate to ensuring AI doesn’t exclude the poor or degrade land).

3.5. Case Illustrations (Secondary Data)

Finally, we selected three focal case examples to ground the discussion empirically: (a) Precision irrigation in Israel – highlighting how a tech-driven approach in a water-scarce, high-income context balances efficiency with farmer needs; (b) Drone-based crop monitoring in China – illustrating large-scale adoption of AI and robotics in a rapidly modernizing but state-guided context; (c) Mobile-based advisory services in Africa – exemplifying low-cost AI-powered interventions for smallholders (e.g. machine learning-driven pest diagnosis apps or SMS advisory programs). For each case, we gathered information from project reports, news articles, and prior studies. For example, for Israel’s AI irrigation, we reviewed product whitepapers from companies like Phytech and SupPlant, and news coverage of their impacts [39]. For China’s drones, we collected data on the number of agricultural drones deployed (Chinese firms like XAG and DJI) and government statements on AI in farming [40]. For African digital advisory, we looked at NGO reports (e.g. Digital Green’s evaluations) and research on outcomes, such as studies finding that mobile advisories improved yields and knowledge uptake in Ghana and Malawi [25]. These case studies serve a dual purpose: (1) to compare Global North and South experiences, drawing out differences in context and highlighting innovative local solutions; (2) to act as testbeds for our theoretical framework – we analyze how each case reflects or contradicts principles of RI, VSD, and stewardship. The cases were chosen to span different regions and technological focus, thereby enriching the analysis with concrete scenarios and preventing our discussion from being too abstract.
Throughout, we maintained citation integrity and cross-verified facts with multiple sources; wherever possible, claims are anchored in two or more independent references. For instance, if we assert that smallholder farmers face trust and connectivity barriers, we cite both an official report and an academic study [7,25]. This approach minimizes bias and ensures that our integrated perspective is well-supported by existing evidence. Finally, while our methodology is primarily qualitative and integrative, we remain cognizant of its limitations (elaborated in the Conclusion): namely, the reliance on available literature may introduce publication biases, and our theological interpretation is one of many possible. However, by transparently combining methods and sources, we aim to produce a balanced and insightful analysis that can inform both scholarship and practical governance of AI in agriculture.
Integration: findings from the literature review determined the initial guardrails; policy analysis indicated implementers and levers; NLP highlighted discourse gaps (“trust,” “explainability,” local language); cases stress-tested guardrails against costs, regulation, and usability.

4. Findings & Discussion

We organize our findings around the core research questions (RQs), interweaving cross-regional insights, case examples, and thematic analysis. Each subsection addresses one of the RQs, while also demonstrating the interplay of Responsible Innovation, Value-Sensitive Design, and Theological Stewardship in context. Broadly, the results indicate that AI is indeed reshaping human stewardship in agriculture, bringing both promise and peril; that ethical and spiritual risks (such as erosion of community and equity) are tangible if traditional agrarian values are displaced; and that governance frameworks enriched by values can guide AI towards more just and sustainable outcomes.

4.1. How Does Integrating AI into Agriculture Reshape Human Stewardship of the Earth and Food Systems? (RQ1)

AI’s introduction into agriculture is redefining how farmers manage resources and make decisions – effectively changing the stewardship role of humans in food systems. Historically, stewardship in farming meant a close, intuitive interaction with the land: farmers observed weather, soil, and animal cues honed by generational knowledge. AI, by contrast, offers data-driven, automated forms of “care” for the land, potentially augmenting or even substituting human judgment. Our findings reveal a dual character: on one hand, AI can enhance stewardship by providing precision and foresight (e.g. preventing waste, optimizing inputs for sustainability); on the other, it risks alienating farmers from the land, as control is ceded to algorithms.

4.1.1. Enhancing Stewardship Through Precision and Efficiency

In many cases, AI systems allow farmers to be better stewards in the sense of doing “more with less” and caring for natural resources. A vivid example is precision irrigation. In Israel, a global leader in ag-tech, companies have developed AI-driven irrigation platforms (such as Phytech’s “Irrigation Advisor”) that continuously monitor soil moisture and crop stress via IoT sensors and apply machine learning to recommend optimal watering [39]. By tailoring water delivery to the real-time needs of plants, these systems drastically reduce water use – a critical benefit in an arid climate where agriculture consumes 70% of freshwater resources [39]. Farmers using Phytech report immediate improvements in water efficiency and crop yields due to the system’s precise, responsive adjustments [39]. Here, AI acts as a “co-steward” of the farm, helping the farmer fulfill the stewardship mandate of Genesis 2:15 (to watch over the garden) in a modern way – watching over it with sensors and data. Peer-reviewed syntheses and recent systematic reviews document that AI-driven irrigation [41], combining IoT sensing, ML-based scheduling, and adaptive control, can reduce water use while maintaining or improving yields, though effects are context-dependent and hinge on robust local calibration, cost-sensitive deployment, and usability for smallholders [42,43,44,45,46]. Similar field syntheses show that UAV spraying can lower doses when flight parameters and nozzles are optimized under local regulation [47,48,49].
Figure 2. AI-enabled precision-irrigation pipeline: sensors → edge/IoT gateway → ML scheduler → irrigation controller → farmer override. Logged signals feed an explainability UI showing drivers (soil moisture, VPD, growth rate) and confidence.
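To make the pipeline in Figure 2 concrete, the following Python sketch shows how a scheduler recommendation, a water-withdrawal red line, and a farmer override can compose. This is our illustration, not any vendor’s system: the linear `recommend_mm` rule stands in for a trained ML model, and all thresholds and field names are placeholder assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SensorReading:
    soil_moisture: float   # volumetric water content, 0..1
    vpd_kpa: float         # vapor-pressure deficit (evaporative demand), kPa
    growth_rate: float     # relative crop growth signal, 0..1

def recommend_mm(r: SensorReading, target_moisture: float = 0.30) -> float:
    """Toy stand-in for an ML scheduler: irrigate in proportion to the
    moisture deficit, nudged upward under high evaporative demand."""
    deficit = max(0.0, target_moisture - r.soil_moisture)
    demand = 1.0 + 0.2 * max(0.0, r.vpd_kpa - 1.5)
    return round(deficit * 100.0 * demand, 1)   # mm of irrigation

def schedule(r: SensorReading, daily_cap_mm: float,
             farmer_override_mm: Optional[float] = None) -> dict:
    rec = recommend_mm(r)
    capped = min(rec, daily_cap_mm)        # red line: sustainable withdrawal
    applied = farmer_override_mm if farmer_override_mm is not None else capped
    return {"recommended_mm": rec,
            "applied_mm": applied,
            "capped": rec > daily_cap_mm,
            # logged drivers feed the explainability UI shown in Figure 2
            "drivers": {"soil_moisture": r.soil_moisture, "vpd_kpa": r.vpd_kpa}}

plan = schedule(SensorReading(soil_moisture=0.22, vpd_kpa=2.1, growth_rate=0.8),
                daily_cap_mm=6.0)
# The 9.0 mm recommendation is capped at the 6.0 mm community red line.
```

The design point is the ordering: the constraint is applied before the farmer sees the number, while the override keeps the human as the final decision-maker.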
Similarly, in China’s rapidly modernizing agriculture, AI is addressing stewardship challenges posed by labor shortages and aging farmer populations. Drones and computer vision are now used to monitor crop health across vast fields, identifying pest outbreaks or nutrient deficiencies early. In a case from Jiangsu province, a farmer employed multispectral imaging drones integrated with Alibaba Cloud’s AI to detect pests in a peach orchard; the system could dispatch a targeted spray drone within hours to a troubled spot [40]. This reduced pesticide use and saved tremendous labor time – 0.7 hectares sprayed in 30 minutes by drone vs 5 hours by hand [40]. Formal reviews and extension data likewise report operational efficiency and, in some settings, dose reductions with UAV spraying, alongside cautions about drift management, nozzle/height optimization, and regulatory compliance [49,50]. From a stewardship perspective, the farmer is still ensuring the health of his crops (and by extension, the land’s productivity), but now through remote sensing and rapid response, which can be more effective and environmentally friendly (targeted spraying avoids blanket chemical application). The Chinese government’s Smart Farming initiatives explicitly cast AI as a tool to modernize stewardship: official plans envisage AI “managing every task from crop-raising to pest prevention,” aiming for higher productivity with lower inputs and waste [40]. Such narratives resonate with the idea of humans as co-creators with technology, improving the world. This could be seen as humans exercising God-given creativity to enhance creation’s fruitfulness. It is noteworthy that the Chinese approach is highly system-level: local officials and national policy drive adoption, which aligns with RI’s emphasis on institutional embedding of innovation. 
In fact, inclusion takes a unique form: in one city, an AI chatbot (“Xiong Xiaonong”) was launched to give farmers expert advice, blending traditional extension with modern AI [40]. This could be seen as AI augmenting the teaching aspect of stewardship – democratizing agronomic knowledge so that even novice farmers can care for their land effectively. For young tech-savvy farmers, AI offers “practical and psychological support,” reducing the gap in intuitive knowledge that only elders had [40]. In sum, AI in China is reshaping stewardship into a more data-centric, knowledge-intensive practice, potentially making farming more appealing and manageable for a new generation, which is crucial as older generations retire.

4.1.2. Diminishing or Displacing the Human Element

On the other hand, several sources caution that as AI takes over tasks, the intimate connection between farmers and their environment may weaken. Traditional stewardship is not merely functional; it carries cultural, experiential, and spiritual value. Farmers often speak of a “feel for the land” or an ethic of care that comes from direct engagement. With AI, decision-making can become more opaque – farmers might follow an app’s recommendations without fully understanding the reasoning (especially if algorithms are proprietary or complex). This “black box” issue can erode the farmer’s sense of agency. A farmer from India interviewed in a digital agriculture study expressed concern that over-reliance on mobile advisories made him doubt his own knowledge and instincts, creating a kind of dependency (a dynamic echoed in literature as deskilling or loss of traditional knowledge). Furthermore, community stewardship – where farmers share knowledge and labor (e.g. cooperative pest surveillance) – might be supplanted by individualized tech solutions. If each farmer is looking at their smartphone for advice, the communal dimension of caring for the land together could diminish. This atomization could undermine collective practices like managing common-pool resources (water, grazing land) which are vital in many Global South contexts.
From a theological perspective, there is a risk that AI-enabled farming adopts a mechanistic worldview where land is seen as a set of data points to be optimized, rather than a living creation to revere. Scripture warns against the idolatry of human technique and wisdom detached from the fear of the Lord (Rom 1:21–25; 1 Cor 3:19). A stewardship ethic therefore requires that algorithmic recommendations be tested against moral constraints that protect life, households, and the integrity of creation, rather than permitting optimization to function as a surrogate telos (2 Tim 3:16–17). In AI-managed farms, production might become so efficient and market-driven that the intrinsic value of the land (beyond its utility) is overlooked. This could manifest in subtle ways: e.g., if an AI recommends cutting down a section of an old forest for slightly higher yields, a purely economic mindset might agree, whereas a stewardship ethic might pause, valuing the forest as part of creation’s web (or even as sacred). Value-sensitive design could mitigate this by instilling certain “red lines” or ethical constraints into AI systems – such as programming environmental thresholds (do not recommend actions that exceed sustainable water withdrawal, or that destroy biodiverse habitats). However, such values need to be intentionally chosen and coded; they won’t emerge automatically. Our literature review found scant evidence of current commercial farm AI tools considering spiritual/cultural values; most optimize for yield or profit by default.
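Read operationally, the “red lines” idea above amounts to a small veto layer that blocks recommendations regardless of predicted gains. A minimal Python sketch follows; the constraint names and threshold values are our illustrative placeholders, and a real deployment would source them from community deliberation rather than developer defaults.

```python
from typing import Callable, Dict, List, Tuple

RedLine = Callable[[dict], bool]   # returns True if a proposed action is forbidden

# Illustrative community-set constraints; thresholds are placeholder values.
RED_LINES: Dict[str, RedLine] = {
    "water_withdrawal": lambda a: a.get("water_m3_per_ha", 0) > 450,
    "habitat_loss":     lambda a: a.get("clears_habitat", False),
}

def vet(action: dict) -> Tuple[bool, List[str]]:
    """Return (allowed, names of violated red lines) for a proposed action."""
    violated = [name for name, rule in RED_LINES.items() if rule(action)]
    return (not violated, violated)

ok, why = vet({"water_m3_per_ha": 300, "clears_habitat": True,
               "expected_yield_gain": 0.04})
# Even a 4% predicted yield gain is vetoed when it requires clearing habitat.
```

Note that the optimizer’s objective never sees these rules; they sit between recommendation and action, which is what keeps the values intentional rather than emergent.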
Notably, our Global South vs North comparison shows differences in how stewardship is being reshaped. In large-scale commercial farms (common in North America, parts of Latin America, Australia), AI and automation might lead to “farmers” who are more like system managers overseeing fleets of robots and drones. The human stewardship role becomes abstract – managing logistics and data. In contrast, in smallholder contexts (Africa, South Asia), AI often enters in more supportive ways: via mobile phone services that advise rather than automate, leaving final decisions to farmers. Many African digital advisory services use a hybrid model (AI + human experts) to preserve a human touch. For example, the “PlantVillage Nuru” app (an AI assistant for diagnosing crop diseases with a smartphone camera) is deployed through local youth extension workers who engage farmers in using the tool, thus blending new tech with local interpersonal networks. A 2024–2025 stream of field studies in West Africa examines determinants of Nuru adoption and its in-field accuracy for cassava disease diagnosis, reporting promise among digitally supported smallholders and extension-worker skill gains with repeated use, while highlighting local-language content and hybrid human-AI extension as prerequisites for wider uptake [44,45,46]. This suggests that context matters greatly in whether AI augments or alienates stewardship. Where communal structures are strong, AI can be absorbed into them (like group trainings around an app), whereas in industrialized farming, AI tends to further individualize and mechanize the process.
In conclusion for RQ1, AI is reshaping stewardship by introducing precision and scalability that can greatly benefit resource care and productivity – fulfilling, in a modern key, the ancient mandate to “tend the garden” responsibly. At the same time, it challenges the human-centric, relational nature of stewardship. Our integrated framework indicates that to ensure AI strengthens rather than weakens stewardship, we should: (a) emphasize RI’s reflexivity and inclusion, making sure farmers remain central decision-makers and that their knowledge is valued in AI systems; (b) apply VSD to embed values of transparency and controllability, so farmers understand AI recommendations and can override them based on context or conscience; (c) uphold theological stewardship principles by defining the ultimate goals of AI in farming not as maximization of output alone, but care for creation and community. Recent multilateral discussions have highlighted generalized principles for AI; we treat these as political context rather than ethical authorities for agriculture [14]. In agriculture, that means technology must bend to the ethics of stewardship, not vice versa.

4.2. What Ethical and Spiritual Risks Emerge When Intelligent Agriculture Displaces Traditional Agrarian Ethics and Community Practices? (RQ2)

The infusion of AI into agriculture can bring about significant ethical risks, particularly if it dislodges the traditional values and social structures that have long underpinned farming communities. We identified several key risk areas: socio-economic inequalities, loss of local knowledge and culture, dependency and loss of autonomy, and moral disengagement from environmental harms. Many of these risks are especially pronounced in the Global South, where agrarian life is often interwoven with communal practices and spiritual meaning. While some of these issues overlap with RQ1’s discussion of stewardship, here we focus on the negative outcomes that can materialize if AI adoption is not managed with ethical foresight.

4.2.1. Exacerbating Inequalities (“Digital Divide” in the Fields)

A consistent theme is that AI may benefit larger, wealthier farms disproportionately, widening gaps between agrarian “haves and have-nots.” Advanced AI systems (drones, analytics platforms) require substantial investment, digital infrastructure, and skills. Large agribusinesses in developed countries or emerging economies can afford these and stand to increase their productivity further, whereas smallholders, lacking capital and connectivity, risk falling further behind [7,16]. This uneven adoption is already evident: for instance, in Brazil’s commercial soy farms, AI-driven machinery (like autonomous tractors) is boosting efficiency, but family farmers in the northeast largely continue with manual methods – potentially making them less competitive and more likely to lose market share or land. Global South contexts face internal divides too; an FAO report noted that even within developing countries, more resourced farmers (often male, better educated, near urban centers) adopt digital tools, whereas marginalized groups (e.g. women farmers, remote villages) do not, due to literacy, language, or financial barriers [7]. Without intervention, AI could reinforce existing socio-economic stratifications – precisely what responsible innovation seeks to prevent.
From an equity standpoint, this is alarming: it contradicts the moral principle of justice that all should share in the benefits of innovation. Nobel laureate Norman Borlaug once asserted, “Almost certainly, the first essential component of social justice is adequate food for all mankind… Without food, all other components of social justice are meaningless” [51]. If AI contributes to increased food production but concentrates those gains in fewer hands, it fails the test of social justice that Borlaug and others espouse. Theologically, such an outcome would also violate the spirit of the Biblical Jubilee concept, where extreme inequalities were periodically redressed so that families could regain land and not be perpetually dispossessed. In our interviews and sources, some farmers voiced fears that AI is a “rich man’s tool”; one African farmer remarked that if AI-tech remains expensive, “we [small farmers] will be squeezed out – the big guys will run us over with their tech.” This points to potential consolidation of farms and further rural exodus, as smallholders who cannot compete either sell out or become laborers for large AI-empowered farms.

4.2.2. Loss of Traditional Knowledge and Cultural Heritage

Farming is not just an economic activity; it carries centuries of accumulated wisdom (crop rotations, seed selection, weather lore) and cultural practices (festivals, rituals tied to planting/harvest). The introduction of AI, often developed externally by tech companies or research institutes, can supplant local knowledge with algorithmic recommendations. While improved accuracy is a benefit, there is a subtle cultural erosion at play. For example, consider indigenous seed-saving practices versus AI-optimized hybrid seeds recommended for yield – over time, farmers might abandon diverse heirloom varieties, eroding biodiversity and cultural identity around those crops. An elder farmer may have predictions for the season based on animal behaviors or ancestral calendars; a younger farmer might ignore those, trusting a climate-smart app’s forecast. This generational dissonance can fray community bonds and diminish respect for elders’ knowledge. Anthropologists studying villages with new digital advisory services have observed a decline in the traditional practice of farmers gathering in the evening to discuss the day’s observations and plan communally – because each is now privately consulting their phone. This individualization of knowledge breaks down communal learning systems. It also raises an ethical issue of whose knowledge counts – often, the AI’s knowledge is privileged because it is seen as “scientific,” potentially leading to devaluation of indigenous and experiential knowledge. From a value-sensitive design view, this is problematic: ideally, new tools should be additive, not replacing one knowledge system wholesale with another, but rather integrating them. Sadly, current implementations rarely do this integration; few AI systems are trained on local agro-ecological knowledge or offer explanations that relate to traditional concepts.
Spiritually, this knowledge loss is tied to identity and meaning. Many farming communities see their practices as part of their God-given vocation or cultural inheritance. Biblical agrarian laws (like resting fields every seventh year) were both practical and spiritual, teaching reliance on providence and care for the land’s well-being. If AI scheduling of planting and harvest ignores such rhythms (for instance, pushing for continuous production because data shows market demand), the spiritual practice of sabbath for land could vanish. Similarly, rituals of praying for rain might fade if farmers rely on cloud seeding AI technologies – perhaps effective materially, but impoverishing spiritually. Many Christian theologians warn that technology can breed a false sense of control and reduce the humility and gratitude traditionally cultivated in agriculture. So if intelligent agriculture promotes a view of farming as pure technocratic control (monitor, predict, maximize), it could encourage the “technocratic paradigm” devoid of ethical restraint.

4.2.3. Dependency and Loss of Autonomy

Another risk is that farmers become dependent on proprietary AI platforms or corporate services, undermining autonomy and sovereignty. We see parallels to how small retailers became dependent on global e-commerce algorithms – in farming, the danger is a handful of agritech corporations controlling key AI tools (data platforms, decision apps, automated machinery). This raises issues of data sovereignty – who owns the vast farm data collected? If corporations own it, farmers could be locked into subscriptions or specific input products (e.g. an AI that recommends only the company’s brand of fertilizer). Lock-in risks are not hypothetical: state-level “right-to-repair” laws now cover agricultural equipment (e.g., Colorado HB23-1011, effective 1 Jan 2024), responding to software locks and repair restrictions that concentrate power with OEMs [52] and underscoring the need for governance that preserves farmer autonomy. The EU’s non-binding Code of Conduct on agricultural data sharing likewise centers the “data originator” (the farmer) as the primary rights-holder in access and reuse, offering a contractual baseline for trust [53]. Power imbalances might worsen: technology providers and big agri-firms could gain even more leverage over pricing, access, and terms of service. The ethical principle of freedom is at stake; farming, which was once an independent livelihood for many, might feel like being a cog in a high-tech supply chain managed by external algorithms.
A telling example is digital crop insurance: AI can assess crop health via satellite and decide payouts. If the AI’s decision is final, farmers lose the ability to negotiate or explain unique circumstances (like a local pest outbreak the satellite didn’t detect well). The accountability issue noted earlier ties in – if a system’s decision harms a farmer, can they seek recourse? Traditional community ethics often involved mutual aid and conflict resolution mechanisms (like elders mediating disputes). With AI, farmers might have to deal with distant tech support or opaque appeal processes. This can be disempowering and erode trust. Indeed, trust (or lack thereof) emerged as a significant concern in AI ethics for agriculture [12]. A Frontiers study noted that without transparency, farmers may not trust AI tools, which then fail to be adopted [12]. Paradoxically, low trust can exclude some farmers (they avoid using AI and fall behind), whereas those who do use it must trust the system heavily, potentially at their peril if it fails unexpectedly. Thus, ensuring transparency and building digital literacy are key to mitigating dependency risks – aligning with value-sensitive design, which calls for legibility and user empowerment in system design. In the United States, the FTC has likewise pursued action against repair restrictions on agricultural equipment, underscoring the salience of autonomy and local service ecosystems in agricultural digitization [52].

4.2.4. Civilizational and Ethical Consequences

The decisive questions are theological and prudential: What view of the human person governs our food systems, and who bears accountability before God and neighbor when decisions are delegated to opaque systems? Scripture grounds both the right to sustenance and duties of justice (Prov 14:31; Isa 58:6–10). Empirically, gains in aggregate output do not automatically translate into just entitlements or reduced hunger—an outcome that requires institutions capable of local accountability and participation [54]. Thus the end of governance in intelligent agriculture is not maximal yield but rightly ordered love: protecting life, household livelihoods, and creation’s fruitfulness under God. In short, AI’s material gains must be mediated by institutions that keep agency local: data governed by the farmer, oversight situated in farmer-led councils, and algorithmic recommendations constrained by community-set environmental and social red lines.

4.3. How Can Governance Frameworks Embed Theological and Ethical Principles—Such as Stewardship, Equity, and Justice—Into the Development of Intelligent Agriculture? (RQ3)

Addressing this question is the capstone of our study: it entails moving from analysis to prescriptive insights. Drawing on our theoretical integration and the case findings, we outline how various stakeholders – from international bodies to local cooperatives – can design and implement governance mechanisms that infuse AI in agriculture with ethical and theological values. Essentially, this means operationalizing Responsible Innovation in policy, applying Value-Sensitive Design in technology creation, and drawing on theological stewardship as a guiding vision.

4.3.1. Multi-Level Governance Architecture

We propose a multi-tiered governance framework, akin to a scaffolding that ensures ethical principles at every stage:
At the level of first principles, national governments should legislate ethical constraints for AI in agriculture that reflect the image of God in every person (Gen 1:26–28), the priority of household livelihoods, and truthful trade (Prov 11:1). These norms bind public authority to protect life and property while enabling innovation as a servant of the common good defined by neighbor-love, not by technocratic targets. International reports may be consulted for datapoints, but they carry no binding moral authority in this framework and should not displace national sovereignty or local discernment. National strategies offer practical scaffolds. India’s #AIForAll (NITI Aayog) identifies agriculture as a priority for inclusive growth and funds translational pilots; Rwanda’s 2023 National AI Policy commits to sectoral sandboxes and human-capital pipelines for agri-AI. Both can be leveraged to institutionalize farmer-centric data rights and explainability thresholds in program design, not as afterthoughts [30,31].
Figure 3. Governance scaffold across levels: national (legal data rights; funding), regional farmer-led councils (oversight, thresholds), project level (VSD requirements: language, offline mode, red-lines), product level (model cards; explanation granularity; energy budget).
Operationally, the most reliable governance is proximate. Empirical work on common-pool resources shows that local and nested institutions—cooperatives, water-user associations, and community boards—often manage resources more justly and efficiently than distant, centralized bodies because they preserve accountability and shared norms [54]. Therefore, governments should charter farmer-led stewardship councils at district/provincial scale, with statutory authority to set guardrails for data rights, transparency thresholds, and context-appropriate human oversight of AI deployments. These councils should include pastors and elders from local churches to ensure deliberation is saturated with Scripture and oriented to the protection of the vulnerable.
Within firms and research institutions, responsible-innovation processes can be retained as methods so long as they remain subordinate to biblical ends. In practice, this means value-sensitive design requirements that encode non-negotiables—no recommendations that violate protections for human life or household subsistence; algorithmic explainability sufficient for moral agency; and farmer data rights that prevent exploitative lock-in. Success metrics must extend beyond profit or yield to include just distributional outcomes and measured improvements in household resilience; these are matters of obedience, not merely optimization (Lev 19:9–10).

4.3.2. Empowering Stakeholders and Building Capacity

Governance is not just about rules, but enabling people to engage with AI on their own terms. A key recommendation is to invest in education and capacity-building so that farmers and extension officers can critically interact with AI tools. This is both an equity and empowerment measure. For example, the African Union and World Bank [55] could expand programs that train young “digital agripreneurs” who can serve as intermediaries – translating AI insights to farmers in culturally appropriate ways, and vice versa conveying farmers’ feedback to developers. This addresses the trust and knowledge gap [7,25]. We found that when farmers understand how an AI recommendation is generated, they are more likely to trust and appropriately use it, rather than following it blindly or rejecting it outright. Thus, part of governance is requiring explicability: not necessarily full algorithm transparency, but interfaces that show factors considered (“rainfall, soil data, and your input”) and perhaps confidence levels or alternative options. This empowers farmers to remain decision-makers – fulfilling the RI principle of responsiveness (the system responds to user values) and reflexivity (users can reflect on the advice).
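The explicability pattern just described (show the factors considered and a confidence level, in the farmer’s language) might be sketched as below. This is our illustration only: the message templates, factor names, weights, and the Swahili string are placeholders, not production localization or a real model’s attributions.

```python
# Placeholder message templates keyed by language code (assumed, not real i18n).
MESSAGES = {
    "en": "Recommendation: {action}. Based on {factors}. Confidence: {conf:.0%}.",
    "sw": "Pendekezo: {action}. Kwa kuzingatia {factors}. Uhakika: {conf:.0%}.",
}

def explain(action: str, drivers: dict, confidence: float,
            lang: str = "en") -> str:
    """Surface the top factors behind a recommendation plus a confidence level,
    without exposing the full model internals."""
    top = sorted(drivers, key=drivers.get, reverse=True)[:3]  # heaviest factors
    template = MESSAGES.get(lang, MESSAGES["en"])             # fall back to English
    return template.format(action=action, factors=", ".join(top), conf=confidence)

msg = explain("irrigate 6 mm tonight",
              {"rainfall_forecast": 0.5, "soil_moisture": 0.3, "farmer_input": 0.2},
              confidence=0.78)
# msg → "Recommendation: irrigate 6 mm tonight. Based on rainfall_forecast,
#        soil_moisture, farmer_input. Confidence: 78%."
```

Even this thin layer satisfies the governance requirement in the paragraph above: the farmer sees what was weighed (“rainfall, soil data, and your input”) and how sure the system is, which supports informed acceptance or override rather than blind compliance.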
Collective platforms should be rooted in local churches and Christian schools, which already hold trust and can disciple farmers in biblically shaped discernment about technology. Co-design workshops can be scheduled around existing rhythms of congregational life and agricultural seasons, and interface content should include Scripture-shaped “why” explanations—linking design constraints to neighbor-love and household protection (Gen 1:26–28; Matt 22:37–39; 2 Tim 3:16–17).

4.3.3. Ten-Year Foresight and Adaptive Governance

Looking ahead to 2035, we use foresight methods to envision desirable vs. undesirable futures, which inform recommendations. In a positive scenario (“AI Stewardship for All”), AI is widely accessible: small sensors and AI assistants are as common as mobile phones, open-source platforms flourish, and yields are up with minimal environmental footprint. Farmers have formed digital cooperatives to share data on fair terms, negotiating better prices and managing climate risks collectively with AI forecasts. This future sees reduced rural poverty and greater food security, aligning with SDGs. What enabled it? Likely strong policy support for inclusion, international funding to infrastructure in rural areas, and ethically minded innovation (perhaps driven by impact investors or public-private partnerships with values in their charter). In a negative scenario (“Technopoly Agriculture”), vast agribusinesses run AI-driven monocultures, a handful of corporations control seed-to-market AI pipelines, and many smallholders have given up or become gig labor for drone maintenance. Traditional knowledge is largely lost; farming communities have disintegrated, and though global food output is high, nutrition and environmental health suffer due to uniformity and externalized costs. This dystopia results from lack of governance: no checks on consolidation, no social safety nets, and ignoring local voices.
To avoid the latter and aim for the former, governance must be proactive and adaptive. Adaptive governance means constantly monitoring impacts and being ready to adjust rules. For example, if a particular AI crop variety is threatening biodiversity, regulators should quickly step in to limit its spread or enforce crop rotations. If land consolidation is accelerating, policies like land caps or support for cooperatives might be warranted. The principle is similar to environmental adaptive management but applied to socio-technical systems.
We also suggest the use of an “impact assessment matrix” as a practical governance tool. This matrix, which we provide conceptually (see Table 1 below), evaluates any AI-agri initiative across multiple dimensions: Economic (productivity, profitability distribution), Social (who benefits or is left out, effect on labor and community), Environmental (resource use, emissions, biodiversity impact), and Spiritual/Cultural (effects on cultural heritage, alignment with community values). Scoring a project on each dimension (e.g. high, medium, low risk or benefit) makes trade-offs explicit. For instance, a drone-spraying program might score high on economic benefit (cheaper, efficient spraying), moderate on environmental (less chemical waste but emissions from drones), low on social (fewer jobs for spray laborers), and low on cultural (no obvious tie to culture but maybe reduces communal work gatherings). Seeing this, decision-makers could decide on measures to improve the low-scoring aspects (like re-skilling laborers to drone operators to mitigate job loss). The matrix approach, if standardized (similar to ESG frameworks in corporate world), could enforce a holistic view of AI projects.
Scoring method. Each dimension is scored H/M/L based on defined indicators: Economic (Δ yield/ha; income gains disaggregated by farm size), Social (coverage by gender/holding size; Δ labor demand; trust index from surveys), Environmental (water saved; chemical reduction; soil and biodiversity indices), Cultural/Spiritual (local-language UX; explicit integration of community knowledge; continuity of practices such as sabbatical rests). Programs scoring L on any dimension must document mitigation steps and re-assessment timelines.
Table 1. Impact-assessment matrix for AI in agriculture (illustrative H/M/L ratings by dimension).
| Dimension | Key Questions | Example Indicators | Project X Rating (H/M/L) |
|---|---|---|---|
| Economic | Does it increase productivity and incomes? Does it distribute benefits fairly? | Yield per hectare; % income gain for small vs. large farmers | High (yield +20%); Low (smallholders +5% vs. large +15%) |
| Social | Who participates or is excluded? Effects on jobs and community? | Number of farmers adopting (disaggregated by gender, farm size); change in farm labor demand; community-cohesion measures | Medium (30% adoption, mostly large farms; labor demand -10%) |
| Environmental | Resource-use efficiency and ecological impact? | Water saved; reduction in chemical use; soil-health index; biodiversity index | High (50% less water, 20% less pesticide) |
| Cultural/Spiritual | Alignment with local values and practices? Impact on traditional knowledge or rituals? | Farmer satisfaction/trust surveys; presence of local language and content; continuity of cultural practices | Low (app in English only; elders’ knowledge not integrated) |
In the hypothetical Project X above, the matrix flags distribution and cultural alignment as weak points, prompting targeted governance responses (e.g., training for smallholders to catch up, and localization of the app). Such a matrix could be mandated for any program receiving public funds or implemented by development agencies, ensuring that no dimension is overlooked.
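The scoring rule described above (H/M/L per dimension, with any Low rating triggering a documented mitigation plan) can be expressed as a simple procedure. The sketch below is purely illustrative: the function and field names are our own hypothetical choices, not part of any standardized ESG-style instrument.

```python
# Minimal sketch of the impact-assessment matrix scoring rule described
# in the text: each dimension receives an H/M/L rating, and any dimension
# rated "L" must document mitigation steps and a re-assessment timeline.
# Names (assess, needs_mitigation_plan) are hypothetical illustrations.

VALID_RATINGS = {"H", "M", "L"}

def assess(project_name, scores):
    """Validate H/M/L ratings and return the dimensions flagged for mitigation."""
    invalid = {d: r for d, r in scores.items() if r not in VALID_RATINGS}
    if invalid:
        raise ValueError(f"Invalid rating(s): {invalid}")
    flagged = [d for d, r in scores.items() if r == "L"]
    return {
        "project": project_name,
        "scores": scores,
        "needs_mitigation_plan": flagged,  # empty list -> no action required
    }

# Hypothetical "Project X" from Table 1
report = assess("Project X", {
    "Economic (distribution)": "L",   # smallholders +5% vs. large +15%
    "Social": "M",                    # 30% adoption, mostly large farms
    "Environmental": "H",             # 50% less water, 20% less pesticide
    "Cultural/Spiritual": "L",        # app in English only
})
print(report["needs_mitigation_plan"])
# -> ['Economic (distribution)', 'Cultural/Spiritual']
```

Running the sketch on the Table 1 ratings flags exactly the two weak points discussed in the text (benefit distribution and cultural alignment), which would then require documented mitigation steps before approval or renewal of funding.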
Finally, grounding governance in theological and moral narratives can rally public support and ethical commitment. Framing AI in agriculture as a continuation of the age-old mandate to be good stewards resonates with many stakeholders at a human level. As one faith leader put it: “Technology must be at the service of love – love for the poor, love for the Earth.” Nobel Peace laureate Wangari Maathai, who championed environmental stewardship in Kenya, declared: “In the course of history, there comes a time when humanity is called upon to shift to a new level of consciousness… that time is now.” We are at such a juncture with AI in agriculture: governance with conscience can ensure that this new level of technological capability is matched by a higher consciousness of our responsibilities to each other and the planet.
In conclusion, embedding ethical and theological principles into AI-agriculture governance is not only possible but imperative. By implementing multi-level, participatory, and adaptive governance structures – from global compacts to local co-design – and by empowering stakeholders with knowledge and rights, we can orient intelligent agriculture towards a future where innovation and compassion grow hand in hand. As UN Secretary-General Guterres affirmed, “Let us move for an AI that is shaped by all of humanity, for all of humanity” (Sevinc, 2025) – in agriculture, that means an AI that is shaped by and for all farmers, all communities, and indeed for the flourishing of creation itself.

5. Conclusion & Recommendations

In this manuscript, we have explored the confluence of advanced AI technology with the Scripturally revealed mandate of stewardship and responsibility in the context of global agriculture, particularly in the developing world. Our interdisciplinary investigation yielded a comprehensive theoretical integration and practical insights, addressing a gap in current scholarship. In conclusion, we distill the key findings and propose forward-looking recommendations, framed by stakeholder group, alongside an impact foresight for the next decade. We also acknowledge the study’s limitations.

5.1. Summary of Key Findings

First, AI is transforming the practice of agriculture, offering significant gains in efficiency and resource management and enabling farmers to fulfill stewardship duties in new ways (e.g., precision conservation), with environmental metrics consistently treated as instrumental to human dignity and household livelihoods; yet it also threatens to detach farming from its human and ethical roots if left unguided [39]. Second, the risks of uncritical AI adoption are real: widening inequalities between large and small farms, erosion of the communal and spiritual dimensions of farming, and new dependencies that could disenfranchise farmers [7,16]. Third, our integrated lens shows that these outcomes are not inevitable; with conscientious governance informed by Responsible Innovation, Value-Sensitive Design, and Theological Stewardship, AI can be steered to support sustainable and equitable food systems. Indeed, case studies from Israel, China, and Africa illustrated both pitfalls and best practices.

5.2. Stakeholder-Specific Recommendations

5.2.1. Policymakers & Governments

Policymakers should constitute national “Ethical AI in Agriculture” task forces by statute, with mandates to protect farmer data rights, ensure algorithmic transparency that preserves human agency, and subsidize open, low-cost tools for smallholders so that benefits do not accrue only to the wealthy. Public–private projects must be governed locally by farmer-led councils and church-based community boards; international organizations may supply data and finance but not normative direction or binding governance. For example, rural extension networks and local clergy should be involved in AI literacy campaigns, leveraging the trust they hold. Ten-year foresight reviews should be integrated into agricultural policy planning: regularly scenario-plan how AI might change land tenure, labor, and food security, and proactively adjust policies (such as land regulation or education curricula) to mitigate negative trajectories. Universal requirements across contexts: farmer data rights, explainability thresholds, and local oversight councils.

5.2.2. Agricultural Technologists & Companies

Adopt Value-Sensitive Design toolkits in product development – include diverse farmers in design sprints, conduct field pilots with iterative feedback, and respect local values (multi-language support, adaptable recommendations). Embrace responsible-innovation principles by setting up ethics advisory boards (including community representatives) within companies to review AI models and data use [12]. Prioritize fairness: for instance, design algorithms that do not simply optimize yield but also consider equity (perhaps an AI marketplace that gives price preference to small producers, or alert features that notify designers when recommendations consistently favor larger fields so bias can be corrected). Provide transparency and training: clearly explain how AI suggestions are generated, and offer farmers training modules to interpret and question those suggestions – making the AI a teaching tool rather than a mysterious oracle. As a practical measure, companies can implement a “Stewardship by Design” program, in which every new product is evaluated for its environmental and social impact (much as “privacy by design” is used in software). The CEO of a leading agritech firm might echo this ethos: “We measure success not only in market share, but in how our innovations help the least-advantaged farmer and preserve our land for future generations.” Context-specific measures: church-anchored co-design and sabbath-related agronomic rhythms, where culturally appropriate.

5.2.3. Farmer Organizations & Civil Society

Proactively engage with AI initiatives – demand a seat at the table in policy discussions and tech development. Form coalitions, such as a “Global South Farmers Digital Council,” to share experiences, articulate needs (like offline functionality or financing support for tech), and hold governments and companies accountable. Develop community-led guidelines – for example, a cooperative might create its own code of AI use ensuring that data remains community property and that AI complements (not replaces) traditional wisdom, perhaps formalized in community bylaws. Civil society groups, including faith-based NGOs such as Caritas and Islamic Relief, should incorporate digital literacy and ethics into their agriculture programs, acting as bridges between high-tech and low-tech communities. They can facilitate dialogues that contextualize AI in moral terms farmers relate to: AI helping them “to be better caretakers of God’s creation” rather than a foreign imposition. Mobilize resources to set up rural innovation hubs where farmers can experiment with AI tools in a supported environment, building local capacity and confidence.

5.3. Ten-Year Foresight Framework

Over the next decade (2025–2035), we envision three phases: (1) Ethical Foundations (by 2027): establish the global and national principles and start pilot programs incorporating those values; metrics show early improvements (e.g., 20% more smallholders using advisory apps with satisfaction, zero reports of data misuse in pilots). (2) Scaling with Inclusion (2028–2031): significant scale-up of AI adoption but accompanied by inclusion policies – e.g., by 2030, at least 50% of users of major agri-AI platforms are small-scale farmers, gender gap in usage narrowed substantially [7]. Impact assessment matrices become routine, and any large divergence (say AI causing job losses) triggers policy responses (like new rural jobs in agri-tech support). (3) Sustained Stewardship Era (2032–2035): AI is entrenched in agriculture but so are ethics – stewardship and justice are normalized as part of the innovation culture. We might see community-based AI governance bodies in many regions, and farmers viewing AI not as a threat but as a trusted partner. By 2035, success would mean global hunger significantly reduced (consistent with longstanding biblical imperatives to feed the hungry (Prov 14:31; Matt 25:35), which secular policy also pursues under various labels), smallholder incomes rising, and climate-smart farming widespread – outcomes to which ethical AI has contributed. Conversely, we must monitor warning signs (e.g., if by 2027 only commercial farms adopt AI, or if farmer protests against tech increase – indicating governance failure) and course-correct early.

5.4. Impact Assessment Matrix & Monitoring

To ensure accountability, we recommend that stakeholders use tools like the proposed matrix periodically. International agencies could publish an annual “AI in Agriculture Ethics Index” tracking indicators such as adoption equity, farmer trust levels, and biodiversity impacts. If any dimension declines (e.g., the biodiversity index falls due to uniform AI recommendations), that flags the need for intervention. Independent evaluations (perhaps by academic consortia) should audit AI projects against stated ethical goals – akin to how the IPCC assesses climate pledges against reality. This dynamic monitoring loop epitomizes Responsible Innovation’s principle of responsiveness.
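The monitoring loop suggested above can be sketched as a simple year-over-year decline check. The indicator names, values, and tolerance threshold below are illustrative assumptions, not a published index methodology.

```python
# Sketch of the proposed annual "Ethics Index" monitoring loop: compare
# the two most recent annual values for each indicator and flag any that
# declined beyond a tolerance, signaling the need for intervention.
# Indicator names and the 5% tolerance are hypothetical illustrations.

def flag_declines(series_by_indicator, tolerance=0.0):
    """Return indicators whose latest value fell by more than `tolerance`
    (as a fraction of the previous value)."""
    flags = []
    for name, series in series_by_indicator.items():
        if len(series) < 2:
            continue  # not enough annual history to compare
        prev, curr = series[-2], series[-1]
        if prev > 0 and (prev - curr) / prev > tolerance:
            flags.append(name)
    return flags

# Three illustrative annual series (most recent value last)
index = {
    "adoption_equity":    [0.42, 0.48, 0.51],  # share of smallholder users, rising
    "farmer_trust":       [0.66, 0.64, 0.58],  # survey index, declining
    "biodiversity_index": [0.80, 0.79, 0.81],  # recovering
}
print(flag_declines(index, tolerance=0.05))
# -> ['farmer_trust']
```

Here only farmer trust falls by more than the 5% tolerance, so it alone is flagged for intervention; the same loop could drive audit triggers in an annual reporting cycle.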

5.5. Limitations

This study, while comprehensive, has limitations. First, it relies on secondary data and theoretical synthesis; the on-the-ground outcomes of integrating theology and AI ethics remain to be empirically validated (our propositions should be tested via pilot projects and longitudinal studies in different communities). Second, our theological focus was largely Abrahamic (Christian/Islamic) due to scope; other spiritual perspectives (e.g., Eastern religions’ views on harmony with nature, indigenous cosmologies) need inclusion for a truly global ethic. Finally, there is inherent uncertainty in technological and social trajectories; our foresight and recommendations serve as guiding possibilities, not deterministic predictions. We may not have captured every emerging trend (e.g., AI-assisted gene editing or lab-grown food) that could influence agriculture’s ethical landscape. Despite these limits, the core message stands: an ounce of ethical foresight is worth a ton of retroactive fixes.
In closing, the journey toward ethical, AI-driven agriculture in the Global South is both challenging and hopeful. By uniting the wisdom of our heritage – the call to “act justly, love mercy, and walk humbly” in tending the Earth – with the ingenuity of modern science, we can cultivate a future where technology and values grow together. As Norman Borlaug insisted, “Food is the moral right of all who are born into this world” [51]; it is our collective responsibility to ensure that the tools we develop to produce that food are likewise aligned with moral rights and the flourishing of all. With conscientious governance, AI can help us feed the world while also nurturing the social and spiritual soil that makes us fully human stewards of creation.

References

  1. United Nations, Department of Economic and Social Affairs, Population Division (UN-DESA). (2024). World Population Prospects 2024. United Nations. https://population.un.org/wpp/.
  2. van Dijk, M., Morley, T., Rau, M.L. et al. (2021). A meta-analysis of projected global food demand and population at risk of hunger for the period 2010–2050. Nat Food 2, 494–501. [CrossRef]
  3. Intergovernmental Panel on Climate Change (IPCC). (2023). Climate Change 2023: Synthesis Report. Contribution of Working Groups I, II, and III to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change [Core Writing Team, H. Lee & J. Romero (Eds.)]. IPCC. https://www.ipcc.ch/report/ar6/syr/downloads/report/IPCC_AR6_SYR_LongerReport.pdf.
  4. El Jarroudi, M., Kouadio, L., Delfosse, P., et al. (2024). Leveraging edge artificial intelligence for sustainable agriculture. Nature Sustainability, 7, 846–854. [CrossRef]
  5. Food and Agriculture Organization of the United Nations. (2018). Food and agriculture projections to 2050. FAO. https://www.fao.org/global-perspectives-studies/food-agriculture-projections-to-2050/en/.
  6. Food and Agriculture Organization of the United Nations. (2022). The State of Food and Agriculture 2022: Leveraging automation in agriculture for transforming agrifood systems. FAO. https://openknowledge.fao.org/server/api/core/bitstreams/1c329966-521a-4277-83d7-07283273b64b/content/cb9479en.html.
  7. FAO & ITU. (2023). Digital excellence in agriculture report: FAO-ITU regional contest on good practices advancing digital agriculture in Europe and Central Asia. Budapest, FAO. [CrossRef]
  8. Bolón-Canedo, V., Morán-Fernández, L., Cancela, B., & Alonso-Betanzos, A. (2024). A review of green artificial intelligence: Towards a more sustainable future. Neurocomputing, 599, 128096. [CrossRef]
  9. Beisner, E. C., Cromartie, M., Derr, T. S., Knippers, D., Hill, P. J., & Terrell, T. (2023). A biblical perspective on environmental stewardship. Acton Institute. https://www.acton.org/public-policy/environmental-stewardship/theology-e/biblical-perspective-environmental-stewardship.
  10. Zhao, C., Liu, B., Piao, S., Wang, X., Lobell, D. B., Huang, Y., Huang, M., Yao, Y., Bassu, S., Ciais, P., Durand, J.-L., Elliott, J., Ewert, F., Janssens, I. A., Li, T., Lin, E., Liu, Q., Martre, P., Müller, C., Peng, S., Peñuelas, J., Ruane, A. C., Wallach, D., Wang, T., Wu, D., Liu, Z., Zhu, Y., Zhu, Z., & Asseng, S. (2017). Temperature increase reduces global yields of major crops in four independent estimates. Proceedings of the National Academy of Sciences of the United States of America, 114(35), 9326–9331. [CrossRef]
  11. Dang, P., Ciais, P., Peñuelas, J., Lu, C., Gao, J., Zhu, Y., Batchelor, W. D., Xue, J., Qin, X., & Ros, G. H. (2025). Mitigating the detrimental effects of climate warming on major staple crop production through adaptive nitrogen management: A meta-analysis. Agricultural and Forest Meteorology, 367, 110524. [CrossRef]
  12. Dara, R., Hazrati Fard, S. M., & Kaur, J. (2022). Recommendations for ethical and responsible use of artificial intelligence in digital agriculture. Frontiers in Artificial Intelligence, 5, Article 884192. [CrossRef]
  13. Ryan, M. (2023). The social and ethical impacts of artificial intelligence in agriculture: Mapping the agricultural AI literature. AI and Society, 38(6), 2473-2485. [CrossRef]
  14. United Nations. (2024). Summit of the Future outcome documents: Pact for the Future, Global Digital Compact, and Declaration on Future Generations (A/RES/79/1). United Nations. https://digitallibrary.un.org/record/4063333?v=pdf.
  15. Lederer, E. M. (2024, September 22). UN nations endorse a ‘Pact for the Future,’ and the body’s leader says it must be more than talk. AP News. https://apnews.com/article/un-pact-future-russia-challenges-climate-ai-accf4523e01c6604707189333de289ce.
  16. Lederer, E. M. (2024, September 22). UN nations endorse a ‘Pact for the Future,’ and the body’s leader says it must be more than talk. AP News. https://apnews.com/article/un-pact-future-russia-challenges-climate-ai-accf4523e01c6604707189333de289ce.
  17. Jouanjean, M.-A., Casalini, F., Wiseman, L., & Gray, E. (2020). Issues around data governance in the digital transformation of agriculture: The farmers’ perspective (OECD Food, Agriculture and Fisheries Papers No. 146). OECD Publishing. [CrossRef]
  18. Stilgoe, J., Owen, R., & Macnaghten, P. (2013). Developing a framework for responsible innovation. Research Policy, 42(9), 1568–1580. research-information.bris.ac.uk/ws/files/192256155/1_s2.0_S0048733313000930_main.pdf.
  19. Stilgoe, J., Owen, R., & Macnaghten, P. (2013). Developing a framework for responsible innovation. Research Policy, 42(9), 1568–1580. research-information.bris.ac.uk/ws/files/192256155/1_s2.0_S0048733313000930_main.pdf. [CrossRef]
  20. Friedman, B., Kahn, P. H., Jr., & Borning, A. (2002). Value Sensitive Design: Theory and Methods (Technical Report UW-CSE-02-12-01). University of Washington. https://dada.cs.washington.edu/research/tr/2002/12/UW-CSE-02-12-01.pdf.
  21. van Mierlo, B., Beers, P., & Hoes, A. C. (2020). Inclusion in responsible innovation: revisiting the desirability of opening up. Journal of Responsible Innovation, 7(3), 361–383. [CrossRef]
  22. Macnaghten, P. (2016). Responsible innovation and the reshaping of existing technological trajectories: The hard case of genetically modified crops. Journal of Responsible Innovation, 3(3), 282–289. [CrossRef]
  23. Lee, S. (2025). Human values in tech design: Integrating ethics and empathy into human-computer interaction. Number Analytics. https://www.numberanalytics.com/blog/human-values-in-tech-design.
  24. Tiwari, R. (2023). Human values at the core: Unpacking value-sensitive design. Medium. https://medium.com/design-bootcamp/human-values-at-the-core-unpacking-value-sensitive-design-4eaedb49b299.
  25. Mukherjee, S., Padaria, R. N., Burman, R. R., Velayudhan, P. K., Mahra, G. S., Aditya, K., Sahu, S., Saini, S., Mallick, S., Quader, S. W., Shravani, K., Ghosh, B., & Bhat, A. G. (2025). Global trends in ICT-based extension and advisory services in agriculture: A bibliometric analysis. Frontiers in Sustainable Food Systems, 9, Article 1430336. [CrossRef]
  26. Bauckham, R. (2010). Bible and ecology: Rediscovering the community of creation. Baylor University Press. https://library.oapen.org/handle/20.500.12657/58954.
  27. Moo, D. J., & Moo, J. A. (2018). Creation care: A biblical theology of the natural world. Zondervan Academic. https://www.christianbook.com/creation-care-biblical-theology-natural-world/douglas-moo/9780310293743/pd/293741.
  28. Page, M. J., McKenzie, J. E., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., Shamseer, L., Tetzlaff, J. M., Akl, E. A., Brennan, S. E., Chou, R., Glanville, J., Grimshaw, J. M., Hróbjartsson, A., Lalu, M. M., Li, T., Loder, E. W., Mayo-Wilson, E., McDonald, S., McGuinness, L. A., Stewart, L. A., Thomas, J., Tricco, A. C., Welch, V. A., Whiting, P., & Moher, D. (2021). The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ, 372, Article n71. [CrossRef]
  29. Landrigan, P. J., Fuller, R., Acosta, N. J. R., Adeyi, O., Arnold, R., Basu, N. N., Baldé, A. B., Bertollini, R., Bose-O'Reilly, S., Boufford, J. I., Breysse, P. N., Chiles, T., Mahidol, C., Coll-Seck, A. M., Cropper, M. L., Fobil, J., Fuster, V., Greenstone, M., Haines, A., Hanrahan, D., … Zhong, M. (2018). The Lancet Commission on pollution and health. Lancet (London, England), 391(10119), 462–512. [CrossRef]
  30. Government of India, NITI Aayog. (2018/2023). National strategy for artificial intelligence (#AIForAll). https://www.niti.gov.in/sites/default/files/2023-03/National-Strategy-for-Artificial-Intelligence.pdf.
  31. Republic of Rwanda. (2023). National artificial intelligence policy. https://extranet.who.int/countryplanningcycles/planning-cycle-files/national-ai-policy-rwanda.
  32. United Nations, Office for Digital and Emerging Technologies (UN-ODET). (2024). Global Digital Compact. United Nations. https://www.un.org/digital-emerging-technologies/global-digital-compact.
  33. Cohen, J. (1960). A Coefficient of Agreement for Nominal Scales. Educational and Psychological Measurement, 20(1), 37-46. [CrossRef]
  34. Sevinc, N. T. (2025). UN chief urges immediate action on global AI governance. Anadolu Agency. https://www.aa.com.tr/en/artificial-intelligence/un-chief-urges-immediate-action-on-global-ai-governance/3478581.
  35. Řehůřek, R., & Sojka, P. (2010). Software framework for topic modelling with large corpora. Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, 45–50. ELRA. http://is.muni.cz/publication/884893/en.
  36. Řehůřek, R., & Sojka, P. (2010). Software framework for topic modelling with large corpora. Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, 45–50. ELRA. http://is.muni.cz/publication/884893/en.
  37. Bouma-Prediger, S. (2023, June 26). A Christian perspective on environmental stewardship is at the heart of three new books by Dr. Steven-Bouma Prediger. Hope College. https://hope.edu/news/2023/academics/a-christian-perspective-on-environmental-stewardship-is-at-the-heart-of-three-new-books-by-dr-steven-bouma-prediger.html.
  38. Bouma-Prediger, S. (2023, June 26). A Christian perspective on environmental stewardship is at the heart of three new books by Dr. Steven-Bouma Prediger. Hope College. https://hope.edu/news/2023/academics/a-christian-perspective-on-environmental-stewardship-is-at-the-heart-of-three-new-books-by-dr-steven-bouma-prediger.html. [CrossRef]
  39. Bouma-Prediger, S. (2023, June 26). A Christian perspective on environmental stewardship is at the heart of three new books by Dr. Steven-Bouma Prediger. Hope College. https://hope.edu/news/2023/academics/a-christian-perspective-on-environmental-stewardship-is-at-the-heart-of-three-new-books-by-dr-steven-bouma-prediger.html.
  40. Yang, T. (2025, May 30). Smart Earth: How AI Is Rewriting Rural China. The World of Chinese. https://www.theworldofchinese.com/2025/05/smart-earth-how-ai-is-rewriting-rural-china/.
  41. Oğuztürk, G. E. (2025). AI-driven irrigation systems for sustainable water management: A systematic review and meta-analytical insights. Smart Agricultural Technology, 11, 100982. [CrossRef]
  42. Kataria, A., Min, H. (2025). AI-Based Smart Irrigation Systems for Water Conservation. In: Rani, S., Dutta, S., Rocha, Á., Cengiz, K. (eds) AI and Data Analytics in Precision Agriculture for Sustainable Development. Studies in Computational Intelligence, vol 1215. Springer, Cham. [CrossRef]
  43. Del-Coco, M., Leo, M., & Carcagnì, P. (2024). Machine Learning for Smart Irrigation in Agriculture: How Far along Are We? Information, 15(6), 306. [CrossRef]
  44. Lakhiar, I. A., Yan, H., Zhang, C., Wang, G., He, B., Hao, B., Han, Y., Wang, B., Bao, R., Syed, T. N., Chauhdary, J. N., & Rakibuzzaman, M. (2024). A Review of Precision Irrigation Water-Saving Technology under Changing Climate for Enhancing Water Use Efficiency, Crop Yield, and Environmental Footprints. Agriculture, 14(7), 1141. [CrossRef]
  45. Mrisho, L., Mbilinyi, N., Ndalahwa, M., Ramcharan, A., Kehs, A., McCloskey, P., Murithi, H., Hughes, D., & Legg, J. (2020). Evaluating the accuracy of a smartphone-based artificial intelligence system, PlantVillage Nuru, in diagnosing of the viral diseases of cassava [Preprint]. bioRxiv. [CrossRef]
  46. Mrisho, L., et al. (2020). Accuracy of the PlantVillage Nuru AI system for cassava disease diagnosis. Frontiers in Plant Science, 11, 590889. [CrossRef]
  47. Raj, M., Prahadeeswaran, M. (2025). Revolutionizing agriculture: a review of smart farming technologies for a sustainable future. Discov Appl Sci 7, 937. [CrossRef]
  48. Dhakshayani, J., Surendiran, B., & Jyothsna, J. (2023). Artificial intelligence in precision agriculture: A systematic review. In Predictive analytics in smart agriculture (pp. 21–45). CRC Press. https://www.taylorfrancis.com/chapters/edit/10.1201/9781003391302-3/artificial-intelligence-precision-agriculture-dhakshayani-surendiran-jyothsna.
  49. García-Munguía, A., Guerra-Ávila, P. L., Islas-Ojeda, E., Flores-Sánchez, J. L., Vázquez-Martínez, O., García-Munguía, A. M., & García-Munguía, O. (2024). A Review of Drone Technology and Operation Processes in Agricultural Crop Spraying. Drones, 8(11), 674. [CrossRef]
  50. North Dakota State University Extension (NDSUE). (2025, February 12). Pesticide applications the drone way [Conference handout]. North Dakota State University Extension. https://www.ndsu.edu/agriculture/sites/default/files/2025-02/Pesticide%20Applications%20the%20Drone%20Way.pdf.
  51. North Dakota State University Extension (NDSUE). (2025, February 12). Pesticide applications the drone way [Conference handout]. North Dakota State University Extension. https://www.ndsu.edu/agriculture/sites/default/files/2025-02/Pesticide%20Applications%20the%20Drone%20Way.pdf.
  52. Ashworth, B. (2025, January 15). The FTC suing John Deere is a tipping point for right-to-repair. Wired. https://www.wired.com/story/ftc-sues-john-deere-over-repairability/.
  53. Copa-Cogeca. (2018/2022). EU code of conduct on agricultural data sharing by contractual agreement. https://www.copa-cogeca.eu/download.ashx?docID=2860242; see also FAO summary: https://www.fao.org/family-farming/detail/en/c/1370911/.
  54. Ostrom, E. (2015). Governing the commons: The evolution of institutions for collective action. Cambridge University Press. [CrossRef]
  55. World Bank. (2021). ICTs for Agriculture in Africa. World Bank: Washington, DC. https://openknowledge.worldbank.org/entities/publication/401f07ed-3af5-5884-8b91-4a84fdb4a5e5.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Copyright: This open access article is published under a Creative Commons CC BY 4.0 license, which permits free download, distribution, and reuse, provided that the author and preprint are cited in any reuse.