Digital Disruption of Evidence

José Pablo Valverde

In early February, the XV International Arbitration Congress CAI Costa Rica 2024 was held. This event brought together key figures in Ibero-American arbitration to discuss the latest developments, experiences, and trends in international arbitration.

At this event, I had the honor of moderating one of the final panels, titled “Digital Disruption and the Integrity of Evidence,” a timely topic given the advent of Artificial Intelligence (AI) and, in particular, Generative AI, which can create and reproduce images, videos, and voice recordings, among other content.

The panel analyzed robot arbitrators and whether arbitrations should be resolved by them. The discussion recalled the image of Themis, the Goddess of Justice, who holds a perfectly balanced scale and wears a blindfold over her eyes. This image fits robot arbitrators well: in resolving a dispute presented to it, a robot will consider the facts between the parties, the background, and most likely how similar cases have been resolved in the past. What it will not do is apply subjective criteria, personal interpretations, reasoning, or sound judgment.

Currently, in some jurisdictions, such as China, AI has been used to resolve judicial disputes. One case occurred in the city of Hangzhou, where ten debtors were unable to meet their financial obligations and became parties to a judicial process in which the dispute was resolved in a single thirty-minute hearing before AI.

The use of robot arbitrators is an alternative that has been implemented in simple cases to expedite proceedings and help reduce judicial backlogs. However, their use in complex cases presents serious challenges where logic, sound judgment, reasoning, and interpretation are required, qualities that form part of a human judge’s analysis when resolving a dispute.

Another element discussed at the conference was the use of false or adulterated evidence created through Generative AI. Generative AI, as noted above, is a type of artificial intelligence that can create new content, such as conversations, stories, images, videos, and music. With this tool, a conversation or video can be fabricated, which could directly affect the thesis of one of the parties to a trial and subsequently become a crucial element in resolving the dispute.

In July 2023, the Silicon Valley Arbitration & Mediation Center (SVAMC), an arbitration and mediation center specializing in the technology sector, announced the development of a “Guide on the Use of Artificial Intelligence in International Arbitration.” The center was founded in 2014 in California, United States, and, in addition to resolving disputes, annually publishes “The Tech List,” a ranking of the leading arbitrators and mediators worldwide with particular experience in the technology sector. The guide was created to offer a set of “best practices” on the use of AI tools for all participants in arbitration. The first draft was published in August 2023, and a comment period was then opened for lawyers, students, and even international dispute resolution centers to submit suggestions for inclusion in the final version.

This guide is the first to address the ethical and practical implications of integrating AI tools into the legal world. It contains three chapters, and its main elements are:

    • It provides a broad definition of Artificial Intelligence.
    • It requires participants in an arbitration process to know and understand the uses, limitations, and risks of AI applications, and to mitigate them as much as possible.
    • It places special emphasis on protecting confidentiality: the proper use of AI in arbitration must also safeguard the confidentiality of information.
    • It discusses the duty to disclose the use of AI. Two options were drafted, and which one to adopt is still under analysis. Option A makes disclosure depend on how the AI is used, the impact it could have on the process, and whether it is an unexpected tool. Option B determines that, as a rule, there is no obligation to disclose the use of this technology, except when it has been used to prepare evidence, important documents, or expert opinions.
    • It prohibits arbitrators from delegating decision-making to an AI tool; arbitrators also may not base their decisions on AI-generated information that is not included in the record.

The disruption caused by AI in judicial processes will necessarily require Arbitration Centers (local and international) and the procedural rules of each local jurisdiction to regulate its use, and to establish sanctions for parties who knowingly attempt to introduce adulterated or AI-generated evidence to benefit their interests. Such sanctions must also be analyzed from the perspective of professional ethics.