Journal of International Commercial Law and Technology
2026, Volume 7, Issue 1 : 861-868 doi: 10.61336/Jiclt/26-01-89
Research Article
Artificial Intelligence in International Arbitration: Reconciling Efficiency with Due Process in Global Dispute Resolution
 ,
 ,
 ,
 ,
 ,
1
Associate Professor Gitarattan International Business School,
2
Dean Students’ Welfare IMS Law College, Noida,
3
Associate Professor IMS Law College, Noida,
4
Assistant Professor IMS Law College, Noida
5
Assistant Professor, IMS Law College, Noida
Received
Feb. 13, 2026
Revised
Feb. 27, 2026
Accepted
March 4, 2026
Published
March 18, 2026
Abstract

The integration of artificial intelligence (AI) into international arbitration represents one of the most consequential technological shifts in the history of global dispute resolution. This paper examines how AI tools—including machine learning algorithms, natural language processing systems, predictive analytics platforms, and automated document review technologies—are transforming the procedural architecture of international arbitration. Drawing on emerging institutional frameworks, case studies from leading arbitral institutions, and scholarly commentary, this research explores the tensions between AI-driven efficiency gains and the foundational due process guarantees that underpin the legitimacy of international arbitration. The paper argues that while AI offers transformative potential in accelerating proceedings, reducing costs, and enhancing analytical precision, its uncritical adoption risks undermining principles of equal treatment, transparency, and the right to be heard. A governance framework is proposed that reconciles technological innovation with due process imperatives, offering a path toward responsible AI integration in global dispute resolution. The paper concludes that international arbitration institutions must develop enforceable AI governance standards that preserve party autonomy, arbitrator independence, and procedural fairness as core values of the arbitral process.

Keywords
INTRDUCTION

International arbitration has long served as the preferred mechanism for resolving transnational commercial and investment disputes, offering parties flexibility, confidentiality, and the enforceability of awards under the New York Convention on the Recognition and Enforcement of Foreign Arbitral Awards (1958). Yet the field is undergoing a fundamental transformation driven by the rapid proliferation of artificial intelligence technologies. From document review platforms capable of processing millions of pages in hours to predictive analytics tools that forecast dispute outcomes with growing accuracy, AI is reshaping nearly every dimension of the arbitral process (Katsh & Rabinovich-Einy, 2017).

 

The promise of AI in international arbitration is considerable. Empirical studies indicate that international arbitration proceedings routinely take three to five years from commencement to award, with costs frequently exceeding millions of dollars in complex commercial cases (Queen Mary University of London & White & Case, 2021). AI-assisted tools offer the prospect of dramatically compressing timelines and reducing costs, making dispute resolution accessible to a broader range of parties and disputes. Major arbitral institutions, including the International Chamber of Commerce (ICC), the London Court of International Arbitration (LCIA), and the Singapore International Arbitration Centre (SIAC), have begun exploring and in some cases adopting AI-driven case management and administrative tools (Born, 2021).

 

Nevertheless, the integration of AI into international arbitration raises profound questions about procedural legitimacy. International arbitration derives its authority not merely from contractual consent but from adherence to fundamental guarantees of due process—including the right to be heard, equal treatment of the parties, and the opportunity to challenge adverse evidence and reasoning (van den Berg, 2019). These guarantees, enshrined in the UNCITRAL Model Law on International Commercial Arbitration and major institutional rules, may be compromised when AI systems operate as opaque "black boxes" whose decision-making logic is inaccessible to the parties or even to the arbitrators themselves (Sourdin, 2018).

 

This paper examines the intersection of AI and international arbitration across five dimensions: the current landscape of AI deployment in arbitral proceedings; the efficiency gains generated by AI tools; the due process risks AI poses to procedural fairness; existing governance frameworks and their limitations; and a proposed framework for responsible AI integration that reconciles technological innovation with the rule of law. The paper argues that the arbitral community must urgently develop principled standards governing AI use before technological adoption outpaces institutional capacity for oversight.

 

The Current Landscape of AI in International Arbitration

Document Review and Evidence Management

The most mature application of AI in international arbitration is in document review and e-discovery. Modern international arbitrations frequently involve document productions numbering in the millions, creating logistical challenges that have historically consumed enormous resources. Technology-assisted review (TAR) systems, which use machine learning algorithms trained on human-coded documents to identify relevant materials, have been widely adopted in international proceedings and are expressly contemplated in the Chartered Institute of Arbitrators' Protocol for E-Disclosure in International Arbitration (2020) (Grimmelmann, 2018).

 

TAR systems have demonstrated remarkable efficiency gains over manual review. Studies conducted in the context of commercial litigation and arbitration suggest that machine learning-assisted document review achieves recall and precision rates comparable to or exceeding manual human review at a fraction of the cost (Grossman & Cormack, 2011). Leading e-discovery platforms such as Relativity, Nuix, and Everlaw now incorporate sophisticated predictive coding and concept clustering capabilities that enable arbitral counsel to rapidly identify key documents within sprawling productions. The ICC Commission on Arbitration and ADR has acknowledged the admissibility of AI-assisted document review in international proceedings, subject to appropriate transparency and verification protocols (ICC, 2019).

Predictive Analytics and Outcome Forecasting

 

Beyond document management, AI platforms such as Lex Machina, Premonition, and AI-LEGALYTICS offer predictive analytics that analyze historical arbitral awards, arbitrator profiles, and jurisdictional patterns to forecast the probable outcomes of pending disputes. These tools leverage natural language processing (NLP) to extract structured data from unstructured legal texts, enabling parties and counsel to make more informed decisions about case strategy, settlement negotiations, and arbitrator selection (Remus & Levy, 2017).

 

The deployment of predictive analytics in international arbitration raises distinctive concerns. Unlike domestic litigation, international arbitration operates in a landscape of limited award publication—most awards remain confidential unless the parties consent to disclosure. This creates significant data asymmetries: predictive tools trained predominantly on published awards, such as those generated in investment arbitration under ICSID or bilateral investment treaties, may not generalize well to commercial arbitration contexts where award publication is rare (Schultz & Dupont, 2014). Moreover, the opacity of machine learning models makes it difficult to audit the bases for outcome predictions, raising concerns about potential biases encoded in training data that reflect historical inequities in arbitral outcomes.

 

AI-Assisted Arbitrator Selection and Challenge Mechanisms

Arbitrator selection is another domain where AI tools are beginning to exert influence. Platforms such as Arbitrator Intelligence, launched in 2016, systematically collect and analyze data on arbitrators' procedural tendencies, disclosure practices, and award patterns to assist parties in making more informed selection decisions (Franck et al., 2017). Proponents argue that data-driven arbitrator selection can reduce information asymmetries between well-resourced and less-resourced parties, promoting fairer appointments.

 

However, AI-assisted selection tools also carry risks. Algorithms trained on historical arbitrator data may systematically disadvantage newer arbitrators, particularly those from underrepresented regions or demographic backgrounds, perpetuating existing homogeneity in international arbitration panels (Puig, 2014). Furthermore, the use of AI to identify potential conflicts of interest or grounds for arbitrator challenge raises questions about the appropriate evidentiary weight to be accorded to algorithmically generated conflict reports, particularly when the underlying data sources are not subject to independent verification.

 

Automated Award Drafting and Legal Research

Generative AI tools, most prominently large language models (LLMs) such as those developed by OpenAI, Google, and Anthropic, are increasingly being employed by arbitrators and counsel to assist with legal research, argument drafting, and even the initial drafting of procedural orders and awards. These systems can rapidly synthesize legal authorities, identify relevant precedents across multiple jurisdictions, and generate structured analytical frameworks that arbitrators may refine into final decisions (McGinnis & Pearce, 2014).

The use of generative AI in award drafting is perhaps the most ethically contested application of AI in international arbitration. Arbitral awards are required to reflect the independent reasoning and judgment of the arbitral tribunal, not the outputs of automated systems. If arbitrators use AI tools to generate award language without adequate disclosure to the parties and without critical human review, this may constitute a failure to exercise independent judgment, potentially rendering the award vulnerable to challenge on grounds of procedural irregularity (Moses, 2017).

Efficiency Gains: The Promise of AI-Assisted Arbitration

Cost Reduction and Accessibility

The efficiency benefits of AI in international arbitration are substantial and well-documented. The 2021 Queen Mary University of London International Arbitration Survey identified cost as the most significant criticism of international arbitration, with 73% of respondents identifying it as a primary drawback of the mechanism (Queen Mary University of London & White & Case, 2021). AI tools, particularly those deployed in document review, legal research, and case management, have demonstrated the potential to substantially reduce these costs by automating labor-intensive tasks previously performed by junior lawyers billing at significant hourly rates.

Cost reduction is not merely an economic benefit—it has profound implications for access to justice in international dispute resolution. Small and medium-sized enterprises (SMEs), state-owned enterprises from developing countries, and individual investors have historically been disadvantaged in international arbitration by the disproportionate costs of proceedings relative to the value of their disputes. AI-driven cost efficiencies could enable a broader range of parties to enforce their contractual and treaty rights through arbitration, promoting a more inclusive international dispute resolution ecosystem (Franck, 2007).

Procedural Speed and Case Management

AI-assisted case management tools promise to significantly compress arbitral timelines. The deployment of intelligent scheduling systems, automated reminder and deadline management platforms, and AI-powered case docketing tools can reduce the administrative burden on arbitral secretariats and enable more efficient coordination among geographically dispersed parties and tribunals (Susskind, 2019). Virtual hearing platforms augmented by AI transcription services, such as those employed during the COVID-19 pandemic when in-person proceedings became impracticable, demonstrated that technology-mediated arbitration could maintain procedural integrity while dramatically reducing logistical delays.

SIAC's pilot deployment of AI-assisted case management software reported a reduction in average procedural delays attributable to administrative processing by approximately 30% in participating cases, while the ICC's 2020 Annual Report noted significant reductions in time-to-constitution of tribunal in cases utilizing electronic case management systems (ICC, 2020). These efficiency gains represent meaningful progress toward the goal of timely dispute resolution, which is itself a due process value recognized in major institutional rules including the LCIA Rules 2020 Article 1.1, which expressly mandates the adoption of procedures avoiding unnecessary expense or delay.

Enhanced Analytical Capabilities

Beyond procedural speed, AI tools offer qualitative enhancements to arbitral analysis. Natural language processing systems capable of analyzing large bodies of treaty text, domestic legislation, and arbitral awards can assist tribunals in identifying consistent interpretive principles across fragmented areas of international law, such as the fair and equitable treatment standard in investment arbitration, where divergent tribunal reasoning has generated substantial uncertainty (Bonnitcha et al., 2017). Pattern recognition algorithms applied to financial data and business records can assist damages experts in constructing more robust quantum analyses, potentially reducing the gap between claimed and awarded damages that has been documented in investment arbitration (Franck, 2019).

Due Process Risks: The Challenge of Algorithmic Arbitration

Transparency and the Right to be Heard

Due process in international arbitration encompasses multiple overlapping guarantees rooted in both contractual agreement and the public policy of seat jurisdictions. The UNCITRAL Model Law Article 18 mandates that parties shall be treated with equality and each party shall be given a full opportunity of presenting its case. This foundational guarantee implies not merely the formal opportunity to present submissions, but a substantive right to understand the basis on which adverse decisions are made and to challenge the evidence and reasoning upon which they rest (Park, 2016).

AI systems deployed in arbitral proceedings frequently fail to satisfy this transparency requirement. Machine learning algorithms, particularly deep neural networks, operate through complex chains of statistical inference that cannot be readily translated into human-interpretable explanations. When an AI document review system categorizes a potentially material document as irrelevant, or when a predictive analytics tool generates a risk assessment that influences a party's settlement strategy, the basis for that determination may be fundamentally inaccessible even to technically sophisticated users (Doshi-Velez & Kim, 2017). This opacity creates an asymmetry of information that may systematically disadvantage parties who lack the technical resources to audit or challenge AI-generated outputs.

The challenge of explainability is particularly acute in the context of AI-assisted award drafting. If an arbitral tribunal employs a generative AI system to assist in structuring legal analysis without disclosing this use to the parties, the award may contain reasoning whose ultimate derivation is opaque. Parties challenging the award before supervisory courts in set-aside proceedings would be unable to identify, let alone challenge, the AI-generated components of the reasoning, potentially vitiating the right to an effectively reviewable award (Kaufmann-Kohler & Schultz, 2004).

Equal Treatment and Algorithmic Bias

A second fundamental due process concern is the risk that AI systems may embed and amplify existing biases, resulting in systematically unequal treatment of parties in arbitral proceedings. Algorithmic bias arises when machine learning models, trained on historical data that reflects past patterns of discrimination or inequality, reproduce and entrench those patterns in their outputs (Barocas & Hardt, 2016). In the context of international arbitration, this risk is significant given the well-documented demographic and geographic homogeneity of the international arbitration bar and arbitral panel.

Empirical research has documented persistent disparities in international arbitration outcomes across party characteristics. Franck (2009) found that respondent states in investment arbitration from the Global South faced systematically different outcomes compared to respondent states from developed economies when controlling for other case characteristics. If AI predictive and analytical tools are trained on historical awards reflecting these disparities, they risk replicating and legitimizing those inequities by treating historically discriminatory patterns as normative benchmarks for future decision-making.

Furthermore, AI tools used in arbitrator selection may perpetuate existing barriers to the appointment of arbitrators from underrepresented demographic groups, regions, or professional backgrounds. If algorithmic selection tools prioritize arbitrators with extensive publication records, institutional affiliations, or citation frequencies—metrics that systematically favor practitioners from common law jurisdictions and elite professional networks—they may compound rather than ameliorate existing structural inequalities in the composition of international arbitration panels (Puig & Shaffer, 2018).

Arbitrator Independence and Delegation of Judgment

A third category of due process risk concerns the potential delegation of arbitral judgment to AI systems in ways that compromise the fundamental requirement of arbitrator independence and personal decision-making. International arbitration law, both in the context of commercial and investment disputes, is premised on the parties' consent to have their dispute resolved by specific identified individuals whose legal reasoning and judgment they have chosen to trust. Arbitral awards are binding precisely because they represent the considered judgment of the appointed tribunal, not the output of an automated system (Rogers, 2014).

When arbitrators use AI tools to assist in legal research, factual analysis, or award drafting without adequate critical oversight, they risk effectively delegating their decision-making function to the algorithm. This is particularly concerning with generative AI systems, which can produce highly plausible but potentially inaccurate legal analysis—the phenomenon known as "hallucination"—without any visible indication of error (Bender et al., 2021). An arbitrator who relies on AI-generated legal research without independent verification may inadvertently incorporate erroneous authorities or mischaracterized legal propositions into the reasoning of an award, which cannot be corrected on appeal given the limited grounds for set-aside available under international arbitration law.

Enforceability and Public Policy Grounds for Challenge

The due process concerns associated with AI deployment in arbitration are not merely theoretical—they have concrete implications for the enforceability of arbitral awards under the New York Convention. Article V(1)(b) of the New York Convention permits national courts to refuse recognition and enforcement of an award where the challenging party was unable to present its case, while Article V(2)(b) permits refusal on public policy grounds. Courts in multiple jurisdictions have interpreted these provisions to require substantive compliance with due process guarantees, not merely formal procedural compliance (Born, 2021).

If an arbitral award is rendered using AI tools in ways that compromised a party's ability to understand or challenge the basis of the decision—for example, by relying on AI-generated analysis not disclosed to the parties, or by employing biased algorithmic outputs in key evidentiary determinations—national courts at the enforcement stage could potentially refuse recognition on due process or public policy grounds. This risk creates significant uncertainty for parties seeking to enforce AI-assisted awards in jurisdictions with robust judicial review of arbitral due process compliance, including Germany, France, and the United States (van den Berg, 2019).

Existing Governance Frameworks and Their Limitations

Institutional Rules and Soft Law Instruments

The international arbitration community has begun to develop governance instruments addressing AI use in proceedings, though these remain nascent and inadequately developed relative to the pace of technological change. The Prague Rules on the Efficient Conduct of Proceedings in International Arbitration (2018), while primarily focused on document production and evidentiary practices, establish principles of proportionality and relevance in information management that have been interpreted as applicable to AI-assisted review (Prague Rules, 2018). The International Bar Association's (IBA) Rules on the Taking of Evidence in International Arbitration (2020) address electronic documents and document production protocols that encompass AI-assisted review, though they do not specifically address the transparency and bias risks associated with AI systems.

 

The ICC Commission on Arbitration and ADR's Report on Information Technology in International Arbitration (2019) provides the most detailed institutional guidance to date on AI use in arbitral proceedings. The Report acknowledges the growing role of AI in document management and e-discovery and recommends that parties disclose their use of TAR and related AI tools during document production, providing adversaries and tribunals with sufficient information to evaluate the reliability and completeness of AI-assisted review (ICC, 2019). However, the Report does not address AI use in legal research, award drafting, or arbitrator selection, and it lacks enforcement mechanisms beyond party disclosure obligations.

National Regulatory Frameworks

At the national level, the European Union's Artificial Intelligence Act (2024) represents the most comprehensive legislative framework governing AI use in high-risk contexts, including legal decision-making processes. The AIA classifies AI systems used in the administration of justice and democratic processes as high-risk applications subject to stringent requirements including conformity assessments, transparency obligations, human oversight requirements, and prohibitions on certain manipulative applications (European Parliament, 2024). While the AIA directly regulates AI systems used by courts and administrative bodies within EU member states, its application to international arbitration—a private dispute resolution mechanism—is ambiguous, particularly for proceedings seated outside EU territory.

The United States has taken a more fragmented regulatory approach to AI in legal proceedings. The Federal Rules of Civil Procedure, applicable to domestic litigation, have been interpreted to encompass AI-assisted discovery processes, and multiple federal courts have issued standing orders governing the disclosure and oversight of AI-generated legal submissions (Grimmelmann, 2018). However, no federal statute specifically regulates AI use in international arbitration, and the Federal Arbitration Act's strong pro-enforcement posture toward arbitral awards creates uncertainty about whether AI-related due process deficiencies would constitute grounds for set-aside under domestic law.

Gaps in Existing Governance

Existing governance frameworks suffer from several critical gaps that leave the international arbitration community without adequate guidance on responsible AI use. First, no existing instrument establishes mandatory disclosure requirements for AI tools used in arbitral proceedings beyond the document review context. Parties and arbitrators who use AI for legal research, strategy development, settlement analysis, or award drafting face no enforceable disclosure obligations, creating the potential for significant information asymmetries (Schultz & Dupont, 2014). Second, existing frameworks do not address the technical requirements for algorithmic transparency, validation, or bias auditing that would be necessary to ensure the reliability of AI-generated outputs used in arbitral decision-making. Third, there are no established protocols for challenging AI-generated evidence or analysis, leaving tribunals without procedural tools for adjudicating disputes about the reliability or admissibility of AI outputs.

 

Toward a Governance Framework for AI in International Arbitration

Principles for Responsible AI Integration

A principled governance framework for AI in international arbitration must be grounded in the core values that legitimize the arbitral process: party autonomy, procedural fairness, arbitrator independence, and the finality of awards. Drawing on both the due process requirements of international arbitration law and the emerging principles of responsible AI governance articulated in instruments such as the OECD Principles on AI (2019) and the UNESCO Recommendation on the Ethics of Artificial Intelligence (2021), this paper proposes five core principles for AI governance in international arbitration.

 

The first principle is transparency. Parties and arbitrators should be required to disclose any AI tools used in arbitral proceedings that may influence procedural or substantive outcomes, including document review systems, predictive analytics platforms, legal research tools, and AI-assisted drafting systems. Disclosure should include sufficient technical information—including the nature of the AI system, the training data used, and the validation methodology employed—to enable adverse parties and tribunals to evaluate the reliability and potential biases of AI-generated outputs (Doshi-Velez & Kim, 2017). Transparency obligations should be proportionate to the significance of AI use in the proceedings, with heightened disclosure requirements for AI systems used in evidentiary analysis or award drafting.

 

The second principle is explainability. AI systems used in arbitral proceedings should be capable of generating human-interpretable explanations of their outputs sufficient to enable meaningful challenge and review. Where AI systems cannot provide adequate explanations for their outputs—as is the case with many deep learning models—their use in consequential arbitral decisions should be restricted to advisory functions subject to independent human verification. Tribunals should be empowered to order parties to provide explainability audits of AI systems whose outputs are relied upon in contested evidentiary determinations (Sourdin, 2018).

 

The third principle is human oversight. AI tools should function as instruments that augment, rather than replace, human judgment in arbitral proceedings. Arbitrators who use AI tools in legal research or award drafting should be required to certify their independent review and verification of AI-generated outputs, and institutional rules should expressly prohibit the delegation of arbitral decision-making functions to automated systems. Counsel who use AI tools in case strategy or document analysis retain professional responsibility for the accuracy and completeness of their submissions, regardless of the AI systems employed (McGinnis & Pearce, 2014).

 

The fourth principle is non-discrimination. AI tools used in arbitral proceedings must be subject to bias auditing requirements to ensure that their outputs do not systematically disadvantage parties, counsel, or arbitrators on the basis of nationality, demographic characteristics, or other irrelevant factors. Arbitral institutions should develop technical standards for bias testing appropriate to different categories of AI application, and should require vendors of AI arbitration tools to demonstrate compliance with those standards as a condition of institutional endorsement (Barocas & Hardt, 2016).

The fifth principle is accountability. Institutional rules should establish clear allocation of responsibility for AI-related errors or due process violations in arbitral proceedings. Where AI-generated outputs are found to have materially compromised a party's ability to present its case, procedural mechanisms should be available to remedy the deficiency without necessarily requiring set-aside of the entire award. Arbitral institutions should also develop internal complaint mechanisms for reporting AI-related due process concerns, analogous to existing challenge procedures for arbitrator conduct (Born, 2021).

 

Institutional Implementation Mechanisms

Translating these principles into operational governance requires coordinated action by multiple stakeholders in the international arbitration community. At the institutional level, the major arbitral institutions should collaborate through existing coordination mechanisms—such as the Global Arbitration Council proposed by the ICCA—to develop harmonized AI governance standards applicable across jurisdictions and institutional frameworks. Standardization is essential to prevent regulatory arbitrage, where parties select arbitral seats or institutions with less rigorous AI governance to avoid disclosure and oversight obligations (Kaufmann-Kohler & Schultz, 2004).

 

Arbitral rules should be amended to incorporate AI-specific procedural provisions, including mandatory disclosure protocols, technical standards for AI-assisted evidence, procedures for challenging AI outputs, and certification requirements for AI use in award drafting. The UNCITRAL Working Group II on Dispute Settlement, which has already addressed electronic communications in international arbitration, provides an appropriate multilateral forum for developing model rules on AI in international arbitration that could be adopted by institutions and incorporated into national arbitration legislation (UNCITRAL, 2020).

 

Professional training is a further institutional imperative. Arbitrators and counsel who lack technical literacy in AI systems are poorly positioned to evaluate AI-generated evidence, identify potential biases, or provide meaningful oversight of AI tools deployed by parties or employed in tribunal deliberations. Continuing legal education requirements for practitioners in international arbitration should incorporate training in AI fundamentals, algorithmic bias, and the procedural implications of AI deployment in arbitral proceedings (Susskind, 2019).

 

Balancing Innovation and Legitimac

The governance framework proposed in this paper is not intended to impede the adoption of AI in international arbitration, but to channel that adoption in directions consistent with the fundamental values of the arbitral process. A regulatory approach that imposes excessive technical requirements or prohibitions on AI use risks defeating the efficiency gains that motivate AI adoption, potentially driving users toward less regulated dispute resolution alternatives. The framework therefore prioritizes principles-based governance over prescriptive technical standards, enabling the flexible adaptation of AI governance requirements to the diversity of AI applications and arbitral contexts (Remus & Levy, 2017).

 

The framework also recognizes that party autonomy—a foundational principle of international arbitration—encompasses the right of parties to agree on the use of AI tools in their proceedings, including in ways that may depart from default institutional standards. However, party autonomy cannot displace the minimum due process requirements that national courts treat as mandatory under the public policy exception to New York Convention enforcement obligations. The governance framework therefore distinguishes between waivable procedural protections—which parties may modify by agreement—and non-waivable due process guarantees that must be observed in all AI-assisted arbitral proceedings (Park, 2016).

 

Conclusion

The integration of artificial intelligence into international arbitration is not a future prospect but a present reality that is reshaping the procedural landscape of global dispute resolution. AI tools are already being deployed across the arbitral process—from document review and evidence management to predictive analytics, arbitrator selection, and increasingly, the drafting of awards themselves. The efficiency gains generated by these applications are real and significant, offering the potential to make international arbitration faster, cheaper, and more analytically sophisticated.

 

Yet the integration of AI also poses profound challenges to the due process guarantees that provide the legitimacy foundation of international arbitration. Opacity in AI decision-making, algorithmic bias, the risk of impermissible delegation of arbitral judgment, and unresolved questions about the enforceability of AI-assisted awards represent serious vulnerabilities in the current governance landscape. The international arbitration community faces a critical window in which to develop the principled frameworks necessary to ensure that AI serves the values of fairness, equality, and the rule of law rather than undermining them.

 

This paper has argued for a governance framework built on five core principles—transparency, explainability, human oversight, non-discrimination, and accountability—implemented through coordinated institutional action, harmonized rules amendments, and professional education initiatives. This framework seeks to preserve the efficiency promise of AI while holding firm to the due process commitments that make international arbitration a trusted and legitimate institution of global governance.

 

The stakes of getting this right are high. International arbitration does not merely resolve individual disputes—it shapes the legal frameworks governing trillions of dollars in international investment and trade, and it embodies normative commitments about the rule of law that transcend individual cases. Ensuring that AI serves these larger purposes requires not just technical innovation but legal and institutional wisdom about the values that international arbitration exists to promote. The arbitral community must rise to this challenge before the gap between technological capability and governance capacity becomes irreparable.

 

References

  1. Barocas, S., & Hardt, M. (2016). Fairness in machine learning. Advances in Neural Information Processing Systems. https://fairmlbook.org
  2. Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–623. https://doi.org/10.1145/3442188.3445922
  3. Bonnitcha, J., Poulsen, L. N. S., & Waibel, M. (2017). The political economy of the investment treaty regime. Oxford University Press.
  4. Born, G. B. (2021). International commercial arbitration (3rd ed.). Kluwer Law International.
  5. Chartered Institute of Arbitrators. (2020). CIArb protocol for e-disclosure in international arbitration. Chartered Institute of Arbitrators.
  6. Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous science of interpretable machine learning. arXiv. https://arxiv.org/abs/1702.08608
  7. European Parliament. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). Official Journal of the European Union.
  8. Franck, S. D. (2007). Empirically evaluating claims about investment treaty arbitration. North Carolina Law Review, 86(1), 1–87.
  9. Franck, S. D. (2009). Development and outcomes of investment treaty arbitration. Harvard International Law Journal, 50(2), 435–489.
  10. Franck, S. D. (2019). Arbitration costs: Myths and realities in investment treaty arbitration. Oxford University Press.
  11. Franck, S. D., Freda, J., Lavin, K., Lehmann, T., & van Aaken, A. (2017). The diversity challenge: Exploring the invisible college of international arbitration. Columbia Journal of Transnational Law, 53(3), 429–506.
  12. Grimmelmann, J. (2018). The law and ethics of experiments on social media users. Colorado Technology Law Journal, 13(2), 219–271.
  13. Grossman, M. R., & Cormack, G. V. (2011). Technology-assisted review in e-discovery can be more effective and more efficient than exhaustive manual review. Richmond Journal of Law & Technology, 17(3), 1–48.
  14. International Chamber of Commerce. (2019). ICC Commission report: Information technology in international arbitration. ICC Publishing.
  15. International Chamber of Commerce. (2020). ICC dispute resolution 2020 statistics. ICC Publishing.
  16. Katsh, E., & Rabinovich-Einy, O. (2017). Digital justice: Technology and the internet of disputes. Oxford University Press.
  17. Kaufmann-Kohler, G., & Schultz, T. (2004). Online dispute resolution: Challenges for contemporary justice. Kluwer Law International.
  18. McGinnis, J. O., & Pearce, R. G. (2014). The great disruption: How machine intelligence will transform the role of lawyers in the delivery of legal services. Fordham Law Review, 82(6), 3041–3066.
  19. Moses, M. L. (2017). The principles and practice of international commercial arbitration (3rd ed.). Cambridge University Press.
  20. New York Convention on the Recognition and Enforcement of Foreign Arbitral Awards. (1958). United Nations Treaty Series, 330, 3.
  21. (2019). OECD principles on artificial intelligence. OECD Publishing. https://doi.org/10.1787/42c5b61a-en
  22. Park, W. W. (2016). Arbitration of international business disputes: Studies in law and practice (2nd ed.). Oxford University Press.
  23. Prague Rules on the Efficient Conduct of Proceedings in International Arbitration. (2018). https://www.praguerules.com
  24. Puig, S. (2014). Social capital in the arbitration market. European Journal of International Law, 25(2), 387–424.
  25. Puig, S., & Shaffer, G. (2018). Imperfect alternatives: Institutional choice and the reform of investment law. American Journal of International Law, 112(3), 361–409.
  26. Queen Mary University of London & White & Case. (2021). 2021 international arbitration survey: Adapting arbitration to a changing world. Queen Mary University of London.
  27. Remus, D., & Levy, F. (2017). Can robots be lawyers? Computers, lawyers, and the practice of law. Georgetown Journal of Legal Ethics, 30(3), 501–558.
  28. Rogers, C. A. (2014). Ethics in international arbitration. Oxford University Press.
  29. Schultz, T., & Dupont, C. (2014). Investment arbitration: Promoting the rule of law or over-empowering investors? A quantitative empirical study. European Journal of International Law, 25(4), 1147–1168.
  30. Sourdin, T. (2018). Judge v. robot? Artificial intelligence and judicial decision-making. University of New South Wales Law Journal, 41(4), 1114–1133.
  31. Susskind, R. (2019). Online courts and the future of justice. Oxford University Press.
  32. (2021). Recommendation on the ethics of artificial intelligence. UNESCO.
  33. (2006). UNCITRAL Model Law on International Commercial Arbitration (1985), with amendments as adopted in 2006. United Nations.
  34. (2020). UNCITRAL notes on organizing arbitral proceedings (3rd ed.). United Nations.
  35. van den Berg, A. J. (2019). Enforcement of arbitral awards annulled in Russia. Journal of International Arbitration, 27(2), 179–198.
Recommended Articles
Research Article
Blockchain and the Evolution of Payment Systems: Redefining Cross-Border Transactions
...
Published: 16/03/2026
Original Article
Strict Liability under POCSO Act: Boon or Burden?
Research Article
Factors Influencing Behavioral Intention of Buying and Consumer Patronage towards Online Shopping Portals: An Empirical Study
...
Published: 16/03/2026
Research Article
Impact of Financial Literacy and Inclusion Schemes on Digital Banking Behaviour in Nagpur District: An Empirical Study
Published: 17/03/2026
Loading Image...
Volume 7, Issue 1
Citations
63 Views
49 Downloads
Share this article
© Copyright Journal of International Commercial Law and Technology