Our era is one of constant evolution, and Artificial Intelligence (AI) has demonstrated its ability to transform entire industries and improve our quality of life in many respects. In an interconnected world, organizations are already implementing AI for various purposes, from improving processes to changing the way they do business. Regulation of AI could set standards that ensure the technology is used for the benefit of society, balancing responsible technological progress with the protection of individuals' rights and interests.
In the context of cybersecurity and online fraud, malicious use of AI to launch more sophisticated and scalable phishing attacks is on the rise. In addition, so-called “deepfakes” are used to spoof victims’ biometric data and take over their records and accounts.
Artificial Intelligence is a rapidly evolving set of technologies that can generate a broad spectrum of economic and social benefits across all sectors by improving prediction, optimizing operations and resource allocation, and personalizing service delivery. It can facilitate positive social and environmental outcomes and provide essential competitive advantages for companies and for the economy of any country, especially if used appropriately in the Central and Latin American region. The development of these technologies generates a multitude of cases that raise questions of legality under current legal regulations.
In this context, it is important to have a general understanding of the following techniques:
- Machine Learning: Systems learn to extract patterns and conclusions from input data and then, autonomously, apply what they have learned to other data sets, as image recognition software does when it identifies shapes, figures, objects, and so on.
- Deep Learning: Takes Machine Learning a step further, enabling continuous learning that evolves over time with information gathered from the environment. It imitates the functioning of the human brain through a series of neural layers that capture different aspects of the input data, process the information, and produce a value or decision to be applied.
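The Machine Learning idea described above can be illustrated with a minimal sketch. The following hypothetical example (standard-library Python only, with invented data) trains a simple nearest-centroid classifier on labeled points and then applies the learned pattern, autonomously, to new data it has never seen:

```python
# Minimal illustration of machine learning: learn a pattern (class centroids)
# from labeled training data, then apply it autonomously to new data.
from statistics import mean


def train(samples):
    """Compute the centroid (average point) of each class."""
    centroids = {}
    for label in {lbl for _, lbl in samples}:
        points = [p for p, lbl in samples if lbl == label]
        centroids[label] = tuple(mean(dim) for dim in zip(*points))
    return centroids


def predict(centroids, point):
    """Assign the class whose centroid is closest to the point."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lbl: dist2(centroids[lbl], point))


# Hypothetical training data: 2-D feature vectors labeled "circle"/"square".
training = [((1.0, 1.0), "circle"), ((1.2, 0.8), "circle"),
            ((5.0, 5.0), "square"), ((4.8, 5.2), "square")]
model = train(training)
print(predict(model, (1.1, 0.9)))  # a point near the "circle" cluster
print(predict(model, (5.1, 4.9)))  # a point near the "square" cluster
```

Real-world systems use far richer models, but the principle is the same: the "rule" applied to new data is extracted from examples rather than programmed by hand.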
Within the legal profession, there is ongoing discussion of the need for regulation and for codes of good practice to avoid litigation arising from problems of interpretation, as well as of the need for legal advice and assistance in the face of problems caused by this technology.
We could assume a very concrete role in the transformation processes that our societies, institutions, and markets are undergoing, since Artificial Intelligence technologies are a clear opportunity for efficiency and a versatile data- and information-processing tool for companies, generating competitive advantages. However, they also involve risks that will materialize in real harm if they are not properly regulated and managed, taking into account the rights and interests of all affected groups.
In Europe, a regulatory framework on artificial intelligence has been proposed with the following objectives:
- Ensure that AI systems introduced and used in the EU market are safe and respect existing legislation on fundamental rights and EU values.
- Ensure legal certainty to facilitate investment and innovation in Artificial Intelligence.
- Improve governance and effective enforcement of existing fundamental rights legislation and security requirements applicable to AI systems.
- Facilitate the development of a single market for legal, safe, and reliable AI applications and avoid market fragmentation.

Current European regulatory trends follow an approach that distinguishes the level of risk generated by the use of AI:
- Unacceptable risk.
- High risk.
- Limited risk.
- Low or minimal risk.
For its part, the U.S. National Institute of Standards and Technology (NIST) has published the AI Risk Management Framework to guide organizations that design, develop, deploy or use AI systems to help limit the many risks of these technologies. This framework is non-binding, so companies may adhere to it on a voluntary basis.
The trends in our region can be seen in the following aspects:
- Privacy and data security: Because AI draws on large amounts of data, it is crucial to ensure that data is collected and used ethically and lawfully. This includes ensuring that individuals consent to sharing their data and that it is adequately protected against breaches or attacks. Organizations should design, together with data protection and information security specialists, a set of risk-management best practices to safeguard the confidentiality of both their own information and that of third parties. These best practices should include teaching users to apply data anonymization and pseudonymization mechanisms.
- Analyze the level of user knowledge and design awareness plans for generative AI: Design and implement specific communication campaigns on text-generative AI within organizations; implement continuous training programs for different levels of users; train staff on risk-management policies and practices adapted to text-generative AI; emphasize the role of individuals in validating and verifying AI-generated texts; and establish multidisciplinary teams that can advise and support users on the use of text-generative AI in their organizations.
- Liability in the event of accidents or errors: AI is gradually being used in critical applications, such as medical diagnostics and the control of autonomous vehicles. It will therefore be essential to establish protocols for investigating incidents and determining responsible parties in the event of an accident or error.
- Ethical aspects of AI: As AI evolves and increasingly executes automated decisions, it is important to ensure that those decisions are not based on bias or stereotypes and do not discriminate against certain groups of people.
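The pseudonymization mechanism mentioned under privacy and data security can be sketched in a few lines. This is a minimal, hypothetical illustration (invented records and key name; in practice the secret key would live in a managed key vault), in which direct identifiers are replaced by a keyed hash so records remain linkable for analysis without exposing personal data:

```python
# Minimal sketch of pseudonymization: replace direct identifiers with a
# keyed hash so records stay linkable without exposing personal data.
import hashlib
import hmac

SECRET_KEY = b"org-managed-secret"  # hypothetical; store in a key vault in practice


def pseudonymize(identifier: str) -> str:
    """Return a stable pseudonym for an identifier using HMAC-SHA256."""
    digest = hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]


# Hypothetical customer records: the name is replaced by a pseudonym, while
# non-identifying attributes needed for analysis are kept.
records = [{"name": "Ana Pérez", "country": "CR"},
           {"name": "Luis Gómez", "country": "DO"}]
pseudonymized = [{"id": pseudonymize(r["name"]), "country": r["country"]}
                 for r in records]

# The same identifier always maps to the same pseudonym (records stay
# linkable), but the name cannot be recovered without the secret key.
assert pseudonymize("Ana Pérez") == pseudonymized[0]["id"]
```

Note that pseudonymized data is still personal data under most regimes (the mapping is reversible by whoever holds the key), which is why the key itself must be protected as rigorously as the original identifiers.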
AI is increasingly present in our daily activities and is a tool that can improve and optimize processes. Generative AI is a specific type of artificial intelligence designed to create diverse content, including text, images, audio, and video. However, as AI develops and is used in a variety of fields, legal and ethical concerns also arise that need to be addressed. Many countries are developing specific legal regulations for the legal challenges related to AI, which will require flexible and adaptable legal frameworks that evolve along with the technology. This includes aspects mentioned throughout this article:
- The protection of personal data.
- The prevention of bias and discrimination.
- The security of data systems.
All this implies creating a functional ecosystem in which 5G infrastructure, high bandwidth, cloud systems, edge computing, content generation, application development, cybersecurity, the regulatory framework, and coordination with the economic agents of the various sectors of our countries, among other elements, are articulated.
In line with this, UNESCO published in July 2023 its AI readiness assessment methodology, a diagnostic tool to help governments ensure that Artificial Intelligence is developed and deployed ethically, in line with its Recommendation on the Ethics of Artificial Intelligence, unanimously adopted by its Member States in November 2021.
The methodology is a comprehensive assessment that tests the adequacy and relevance of existing national laws and policies to frame technological development positively and to calibrate the technical capabilities of the public and productive sectors.
Countries are at different stages of readiness to implement UNESCO’s recommendation on the ethics of AI and there is no one-size-fits-all approach. There are also different social preferences and conditions, risk thresholds and innovation environments. The tool developed by UNESCO considers these specificities while bringing an international perspective.
In 2023, fifty countries are collaborating with UNESCO in applying the assessment tool, among them Costa Rica, the Dominican Republic, Antigua and Barbuda, Cuba, and Barbados. The country reports, based on the diagnostic assessment, will be published in the UNESCO AI Ethics Observatory, to be launched in the coming weeks with the Alan Turing Institute (UK). Its main objective is to serve as an online transparency portal for the latest data and analysis on the ethical development and use of AI worldwide, as well as a platform for sharing best practices.
As a law firm, we join efforts to promote, raise awareness of, and disseminate the UNESCO Recommendation on the Ethics of Artificial Intelligence, which constitutes the first global framework for the ethical use of AI and guides countries on how to maximize its benefits and reduce its risks. The recommendation contains values and principles, but also detailed policy recommendations in all relevant areas. UNESCO is particularly concerned about the ethical issues these innovations raise regarding discrimination and stereotyping, including gender issues, the reliability of information, privacy and data protection, human rights, and the environment.
Our article echoes the call of UNESCO’s Director-General, Audrey Azoulay, with whose words we conclude: “The world needs higher ethical standards for artificial intelligence: this is the great challenge of our time. UNESCO’s Recommendation on the Ethics of AI sets the appropriate normative framework. All our Member States adopted this recommendation in November 2021, and it is time to implement strategies and regulations at the national level. We must lead by example and ensure that the recommendation’s objectives are met.”