
Brazil: AI landscape and what to expect from the upcoming legislation

The Brazilian Federal Senate established a commission of legal experts ('the AI Commission') tasked with drafting an Artificial Intelligence Legal Framework ('AI Legal Framework').

In this Insight article, Fabio Ferreira Kujawski and Ingrid Soares, from Mattos Filho Advogados, summarise the main findings of the AI Commission that shall guide the discussions in the Brazilian Congress about an upcoming federal statute for AI systems, whilst also analysing existing laws and regulations that currently impact the use and development of AI systems in Brazil.


Background

Artificial intelligence ('AI') is probably one of the most important technologies of our times. Over the past few years, the debate on AI-related regulation and the risks posed by AI applications has matured significantly in Brazil, driving changes in the AI legal landscape.

In December 2022, the AI Commission approved the AI Legal Framework, which was preceded by public hearings and consultations to broaden society's contribution. The AI Legal Framework must be approved by the National Congress to become law.

Such changes were driven by civil society organisations that demanded greater transparency regarding AI algorithms and systems, as well as effective mechanisms to ensure the accountability of AI developers and the protection of consumers' rights. The challenge for lawmakers addressing AI systems (and all other disruptive technologies) is to design laws and regulations that grant sufficient protection to individuals without obstructing innovation.

AI Legal Framework approved by the AI Commission

The AI Commission analysed several existing bills affecting AI systems, including Bill No. 5,051/2019, Bill No. 872/2021, and Bill No. 21/2020 (approved by the House of Representatives in early 2021).

The new version of the AI Legal Framework defines the principles of AI systems and the rights of individuals affected by them (notwithstanding other rights set forth in consumer laws and other statutes). The supplier of an AI system has the duty to classify the system based on the risks it may pose to individuals, and this classification may be changed by a supervising authority. The supplier of AI systems shall adopt minimum governance and transparency requirements. We will tackle these rights in more detail below.

The AI Legal Framework also introduced certain definitions that are relevant to understanding the scope of the systems contemplated by the law and the stakeholders subject to it. According to the AI Legal Framework:

  • AI system: means a computer system with different degrees of autonomy, designed to infer how to achieve a given set of goals, through approaches based on machine learning or logic and knowledge representation, by means of machine or human-inputted data, in order to produce predictions, recommendations, or decisions that can influence the virtual or real environment.
  • AI system supplier: means an individual or public or private legal entity that develops an AI system directly or upon request, aiming at placing it in the market or applying the system into a service it provides, under its name or brand, either for a charge or free of charge.
  • AI system operator: means an individual or public or private legal entity who deploys or uses an AI system for its benefit, except for personal (non-professional) use purposes.
  • AI agents: mean both suppliers and operators of AI systems.
  • Discrimination: means any distinction, exclusion, restriction, or preference, in any area of public or private life, whose primary purpose or effect is to nullify or restrict the recognition, enjoyment, or exercise, on equal conditions, of one or more rights established in applicable laws, due to personal characteristics, such as geographical origin, race, colour, or ethnicity, gender, sexual orientation, socioeconomic class, age, disability, religion, or political opinions.

AI principles

The AI Legal Framework included new principles for the development, deployment, and use of AI systems, such as:

  • human participation in the AI cycle and effective human supervision;
  • trustworthiness and robustness of AI systems and information security;
  • due process of law and the right to challenge AI system decisions;
  • traceability of decisions during the life cycle of AI systems as a means to hold individuals or entities accountable for any harm caused by the system;
  • prevention, precaution, and mitigation of systemic risks originating from intentional or non-intentional use and non-foreseeable effects of AI systems; and
  • non-maleficence and proportionality between the methods employed and the legitimate purposes of the AI systems.

Rights of individuals affected by AI systems

The AI Legal Framework identified certain rights for individuals affected by AI systems. The primary rights conferred on individuals regarding AI systems include:

  • the right to be informed about interactions with AI systems before they take place;
  • the right to receive explanations regarding decisions, recommendations, or predictions made by an AI system within 15 days;
  • the right to contest decisions made by AI systems;
  • the right to non-discrimination and to correct system biases;
  • the right to human participation in certain decisions; and
  • the right to privacy and data protection.

Risk classification

Similar to the Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence ('the AI Act'), the new version of the AI Legal Framework adopts a risk-based approach toward regulating AI systems. If approved, the AI Legal Framework will establish that AI suppliers will have to classify the degree of risk of the system before offering it to the market or using it.

The risks posed by AI systems are twofold:

Excessive risk

AI systems that pose excessive risks would not be permitted to operate in Brazil. Such systems include:

  • systems that employ subliminal techniques to induce individuals to behave in a way that is harmful to their health and safety;
  • systems that exploit the vulnerabilities of specific groups of people (associated with age and specific disabilities) in order to induce them to behave in a way that is harmful to their health and safety; and
  • systems employed by the Government to evaluate, classify, and rank individuals based on their social behaviour and attributes through universal scoring, such as social credit systems.

High risk

High-risk AI systems include:

  • critical infrastructure management and operation;
  • professional evaluation;
  • credit rating and evaluation systems;
  • autonomous vehicles (when their use may create risks to people's health and safety);
  • health systems intended to aid medical procedures and diagnosis; and
  • biometric identification systems.

Governance of AI systems

AI agents must implement governance mechanisms throughout the entire life cycle of the system, capable of guaranteeing the security of systems and assuring the rights of individuals. In general, the governance structure and internal processes shall include:

  • transparency measures regarding the use of AI systems in the interaction with individuals, including adequate human-machine interfaces that are sufficiently clear and informative;
  • appropriate governance measures to prevent and mitigate potential discriminatory biases;
  • adoption of Privacy by Design and Privacy by Default techniques to minimise the use of personal data; and
  • adoption of adequate data segregation and organisation parameters for training, testing, and validation of system results.

For high-risk AI systems, in addition to the above, AI agents will need to implement additional measures, such as tools for the automatic registration of system operations to allow assessment of their accuracy and robustness, and to carry out tests to assess appropriate levels of reliability. Moreover, AI agents will need to assemble an inclusive team responsible for the design and development of the AI system, guided by the pursuit of diversity.

Finally, the AI Legal Framework provides that AI agents are obligated to prepare an Algorithmic Impact Assessment ('AIA') whenever the preliminary assessment classifies the AI system as high risk. The AIA shall consider, at least, the following:

  • known and foreseeable risks associated with the AI system at the time it was developed;
  • benefits associated with the AI system;
  • the likelihood of adverse consequences;
  • operating logic of the AI system; and
  • mitigation measures and an indication of the residual risks of the AI system, accompanied by quality control tests, among others.

The AI Legal Framework imposed a reporting obligation in cases of relevant security incidents, disruptions of critical infrastructure operations, severe damage to property or the environment, and serious violations of fundamental rights.

Government procurement of AI Systems

Governments at the federal, state, and municipal levels wishing to procure AI systems shall carry out a prior public consultation and hearing on the planned use of the AI systems. The use of biometric systems by public authorities must be preceded by a normative act establishing guarantees for the exercise of rights by affected individuals and protection against unlawful or abusive use. The processing of race, colour, or ethnicity data is prohibited, unless otherwise expressly provided by law.

Sanctions and liability

The AI Legal Framework established the possibility of strict liability for both the AI system operator and the AI system supplier, especially in cases of excessive-risk or high-risk systems, to the extent of each agent's culpability.

The suggested sanctions for breach of the law include a fine of up to BRL 50 million (approx. €9,080,000) per violation, a fine of up to 2% of the total net turnover in Brazil of the infringing entity's conglomerate in the previous fiscal year, as well as the temporary or permanent suspension (partial or total) of the development, supply, or operation of the AI systems.

If the new AI Legal Framework is approved, the executive branch must appoint a competent authority to implement and enforce the law.

Overview of other statutes that may also apply to AI systems

We can categorise Brazilian AI law-making initiatives into three major groups:

  • federal laws;
  • the AI National Strategy; and
  • international commitments.

Federal laws

In addition to general consumer laws, there are certain federal laws that apply to AI systems and therefore deserve attention from AI developers.

Law No. 12.965 of 23 April 2014, Establishing Rights and Duties for the Use of the Internet in Brazil ('the Internet Law')

The Internet Law establishes principles, rights, and duties for internet use in Brazil. The Internet Law assures a series of rights to users, which are especially relevant for the implementation of online AI systems, including:

  • the right to privacy and freedom of expression;
  • inviolability of intimacy and private life;
  • publicity and clarity of use policies; and
  • accessibility, considering the user's physical-motor, perceptual, sensory, intellectual, and mental characteristics, among others.

The Internet Law also contemplates specific sections concerning:

  • law enforcement authorities' rights to access metadata and subscriber information (and when a court order is required for this purpose);
  • net neutrality; and
  • a safe harbour for user-generated content, exempting platforms from liability over user content, save for certain exceptions.

Law No. 13.709 of 14 August 2018, General Personal Data Protection Law (as amended by Law No. 13.853 of 8 July 2019) ('LGPD')

The LGPD is Brazil's first comprehensive data protection legislation. The LGPD applies to any processing operation performed by an individual or legal entity, whether public or private, regardless of the means, the country where the controller/processor is headquartered, or the country where the data is located, provided that:

  • the processing operation occurs in Brazil;
  • the objective of the processing operation is to offer or provide goods or services to individuals in Brazil; or
  • the personal data subject to processing was collected in Brazil.

Compliance with the LGPD is essential to the processing of personal data in AI systems and applications. Regarding algorithmic transparency, the LGPD confers on data subjects the right to request the review of automated decisions (including for profiling purposes). Under the LGPD, the review does not need to be carried out by a natural person, differently from what is currently contemplated in the AI Legal Framework. The data controller must provide, whenever requested, precise and adequate information regarding the criteria and procedures used for the automated decision, save for information protected by commercial and industrial secrecy, which may be withheld. Where information is withheld on the grounds of commercial and industrial secrecy, the Brazilian data protection authority ('ANPD') may audit the company to verify discriminatory aspects in the automated processing of personal data.

Law No. 9.503 of 12 September 1997, Brazilian Traffic Code

The Traffic Code establishes that drivers must always have control of the vehicle, and several regulations issued by traffic authorities impose restrictions on driver distraction while driving. All such laws and regulations pose significant challenges for connected car AI systems.

Ordinary Law No. 17.611 ('Ceará State AI Act')

The Ceará State AI Act establishes guidelines for using AI systems within the State of Ceará. Notably, the Ceará State AI Act determines that all AI systems in the State of Ceará must be managed and supervised by individuals.

AI National Strategy

National Strategy for Artificial Intelligence ('EBIA')

In 2021, the Ministry of Science, Technology and Innovation published the first Brazilian AI National Strategy, which aims to guide future regulations and public policies on AI in Brazil. The EBIA intends to balance the ethical use of the technology while boosting research and innovation. The EBIA establishes nine thematic axes, as follows:

  • legislation, regulation, and ethical use of AI;
  • AI governance;
  • international aspects;
  • qualifications for a digital future;
  • workforce and training;
  • research, development, innovation, and entrepreneurship;
  • AI application in productive sectors;
  • AI application in the public sector; and
  • public security.

Finally, the EBIA presents a diagnosis of the current situation of AI in the world and in Brazil, highlights the challenges to be faced, and offers a vision of the future.

International commitments

Brazil has been making significant efforts to become a member country of the Organisation for Economic Co-operation and Development ('OECD') and has endorsed the OECD and G20 Principles on AI. Brazil also joined the Global Partnership on AI ('GPAI'), an international initiative to promote responsible AI use that respects human rights and democratic values. Therefore, the Brazilian Congress is expected to ensure that future legislation and sectoral regulations observe the OECD Principles on AI.

In November 2021, Brazil adopted the Recommendation on the Ethics of Artificial Intelligence prepared by the United Nations Educational, Scientific and Cultural Organization ('UNESCO'), and was recognised as one of the 'early adopters' of the UNESCO AI Recommendation.

Conclusion

Although the final version of the AI Legal Framework approved by the AI Commission is the result of considerable debate and public contribution, the draft may still be subject to changes during the voting process in the Senate and the House of Representatives. However, if approved without modifications and turned into law, the AI Legal Framework will become effective one year thereafter.

The version approved by the AI Commission reflects the most sophisticated and advanced discussions on AI regulation worldwide. However, some particularities of the AI Legal Framework must be taken into consideration. Despite being a comprehensive normative proposal, the AI Legal Framework imposes obligations on AI agents that may require significant compliance efforts. Not all organisations that develop or implement AI systems in Brazil have the necessary infrastructure and financial bandwidth to comply with all such rules.

One possibility to consider would be to allow a lighter regime for small businesses and start-ups, following the example of the ANPD, which in October 2021 approved a resolution exempting these entities from certain burdensome obligations under the LGPD.

In addition, there are opportunities for improvement in the definitions introduced by the AI Legal Framework, in particular regarding the liability regime among AI agents and the definition of an AI system's 'life cycle', given the recurrence of this expression in the text.

While the new AI Legal Framework is incisive in determining algorithmic transparency measures, the approved version lacks incentives for innovation and AI research in Brazil, two essential factors to ensure technological development and the country's position in the technological race. In this regard, the final version of the AI Legal Framework should follow the EU's approach to AI, centred on excellence and trust, aiming to boost research and industrial capacity while ensuring safety and fundamental rights.

Despite the criticism, topics such as the risk classification of AI systems, governance and transparency mechanisms, and the ethical use of AI should continue to prevail and be reflected in other spheres, public policies, and court rulings.

Organisations should closely follow the progress and discussions of the AI Legal Framework in the Brazilian Congress.

Fabio Ferreira Kujawski Partner
[email protected]
Ingrid Soares Associate
[email protected]
Mattos Filho Advogados, São Paulo
