
Israel: AI policy - regulation and ethics for responsible AI innovation

On December 18, 2023, in order to mitigate risks and address challenges arising in the field of artificial intelligence (AI), the Israeli Ministry of Innovation, Science, and Technology, together with the Ministry of Justice, published a final AI Policy detailing policy principles of regulation and ethics for the development and use of AI systems in Israel's private sector.

In this Insight article, Dalit Ben-Israel and Lior Haliva Wasserstein, of Naschitz Brandes Amir, review the AI Policy's findings in the field of data protection, outline the emerging challenges, examine the application of existing law, and explore the principles that the AI Policy seeks to adopt in this context.


This policy was developed pursuant to a government resolution (available only in Hebrew) tasking the Ministry of Innovation, Science, and Technology with advancing a national AI plan for Israel. The final version of the AI Policy was formulated following public comments on a draft previously published for consultation, and after comprehensive consultations with diverse stakeholders, including government departments, the Israel Innovation Authority, civil society organizations, academic institutions, and private sector actors.

The policy principles establish that regulation in the field of AI will be enacted on a sectoral basis, will be advanced and flexible, and will be as consistent as possible with the emerging regulatory policies and regulations in the international arena.

Following the public comments process, the main addition to the AI Policy is the proposal to establish an AI Policy Coordination Center. This center would serve as an expert-based interagency body tasked with assisting relevant government bodies in developing and implementing regulations, ensuring consistency, promoting interagency coordination, and presenting comprehensive policy proposals on AI regulation to decision-makers. The center would also enable the Government to track global developments and, if necessary, recommend specific aspects requiring regulation in Israel, as well as lead Israel's representation and involvement in international forums on AI regulation and standards.

The main recommendations on AI regulation and ethics policy

In addition to its many benefits and high economic potential, the use of AI poses significant challenges to the legal and regulatory systems around the world and in Israel. The AI Policy identifies seven main challenges arising from the use of AI in the private sector:

  • discrimination;
  • human oversight;
  • disclosure of AI interactions;
  • safety;
  • accountability;
  • explainability; and
  • privacy.

To address these challenges, the AI Policy sets forth common policy principles and several practical recommendations. One of the main recommendations is to establish a governmental policy framework for AI regulation. The AI Policy outlines key strategies such as adopting sector-specific regulations, aligning with international practices, implementing a risk-based approach, using soft regulatory tools for gradual development of the framework, and enhancing collaboration between the public and private sectors.

The applicability of existing privacy laws in the context of AI

The development and use of AI systems require the use of large quantities of data, including personal information, the collection and processing of which is regulated by privacy laws. Existing privacy laws are expected to apply to data processing activities conducted as part of 'big data' or machine learning processes, but new challenges have emerged in the AI context that may not be adequately addressed.

The Israeli Protection of Privacy Law, 5741-1981 (PPL) sets only two legal bases for the processing of personal information: legal authorization or the informed consent of the individual concerned. The PPL also establishes the requirement to inform the individual from whom personal information is collected about the types of information collected and the purposes for which the information is being collected. In addition, under the purpose limitation principle, personal information collected may be used only for the purposes for which it was collected. It is not permissible to collect information with the informed consent of the individual for one purpose and then use it for another purpose without obtaining additional consent or without legal authorization for such processing. In addition, in the process of registering a database with the Israel Privacy Protection Authority (PPA), the purposes of the database must be listed and any deviation from such purposes requires an amendment of the registration.

The PPA published an opinion setting out its interpretation of the notification obligation in the context of the collection and use of personal information by algorithm- and AI-based systems. The PPA notes that when personal information is collected using automated systems, all the details required under the PPL's notification provisions must be presented to the data subject, despite the intrinsic difficulty of providing transparency on how such systems reach decisions. The PPA further notes that when the processing of personal information is carried out using automated systems, the notification at the collection stage must specify how the systems operate and explain the AI decision process, to the extent relevant to the formation of consent and to the extent such a specification is possible from a legal, technological, and commercial perspective. The PPA also recommends informing the data subject about the types of personal information the systems may use and the source of that information.

If such information is not provided, the data subject's consent to AI-based processing cannot be deemed informed, rendering the consent void, such that the use of the personal information will constitute a violation of privacy. The PPA's opinion is significant because it provides guidance to organizations that collect and use personal information through AI, emphasizing transparency and accountability and offering practical direction on how to comply with the duty to inform in this context.

It is noteworthy that, in its guidance on telemedicine services, the PPA refers to the privacy risks stemming from AI-based diagnosis using big data and stresses the importance of full transparency, in comprehensible language, regarding the categories of personal information collected from data subjects, the purposes of collection, the manner in which the personal information will be analyzed, and what will be done with it after the diagnosis is complete. In addition, the guidance states that if the personal information collected may also serve purposes other than treatment, such as research or training the AI system, this should be explicitly clarified to the data subject and separate consent should be obtained (in line with the purpose limitation principle).

In addition, the PPL and the Protection of Privacy Regulations (Data Security), 5777-2017, establish provisions regarding the security of information in a database, which apply equally to AI systems that collect and use personal information. Under the PPL, a person has the right to access the personal information about themselves included in a database and, in certain circumstances, the right to request its correction or deletion if it is outdated, incorrect, or incomplete. Exercising these rights in connection with AI systems is a genuine challenge.

These frameworks are complemented by sector-specific regulation in fields such as medical diagnosis, secondary use of medical data for research purposes, banking, and insurance. As stated above, these and other provisions are expected to apply to personal data in relation to the use of AI, despite the unique challenges presented by AI.

Privacy challenges

As described below, the development and use of AI-based systems raise a number of unique challenges related to the need to ensure the protection of the fundamental right to privacy.

Firstly, AI systems challenge the ability to comply with the purpose limitation principle. This principle is strained at the development stage, as training AI models often requires using personal information originally collected for other purposes. Obtaining data subjects' consent for such additional use raises many practical difficulties, such as contacting the relevant data subjects again and securing informed consent, especially when the information was collected long ago and from many individuals. The existing exceptions to a breach of privacy under Section 18 of the PPL apply in the traditional context of a breach of privacy and not to the PPL chapter on databases, and therefore cannot be relied upon to permit secondary uses of personal data.

The use of AI-based systems can also challenge the ability to meet the requirements of transparency and the duty to inform. Privacy laws are designed to ensure that data subjects can control their personal information and determine how it is used. Transparency also improves data subjects' ability to exercise their rights under the PPL to access personal information and enforce rectification, in a meaningful and effective way. However, where AI-based systems that are not explainable are used, meeting this requirement may be difficult: some aspects of the system's operation will not be known even to the operator and therefore cannot be disclosed to the data subject. In other cases, intellectual property and trade secret protection conflict with the level of explainability offered to users of AI-based systems.

The use of AI-based systems may also challenge the principle of informed consent. As mentioned above, the relevant legal basis for the use of personal information in Israel is consent. The PPL states that such consent must be informed consent. The requirement of informed consent obligates the entity collecting the personal information to present the data subject with sufficient information regarding the types of personal information collected, the processing purposes, and the third-party recipients so that the data subject will have a complete picture before deciding whether to agree to the disclosure of personal information. The difficulty in meeting the transparency requirement described above, and the difficulty in fulfilling the duty to inform due to the nature of some AI systems that are not explainable, may, in certain cases, challenge the ability to obtain informed consent from the data subject and thus prevent the processing.

The use of AI-based systems can create an inherent tension between the need for big data and the obligation to delete excess personal information under the data minimization principle. The ability to access and analyze large amounts of personal information has the potential to promote the development of AI-based systems, and during the model development phase the systems are trained on large sets of personal information. However, at the collection stage it is not always possible to know what insights and benefits the system will be able to derive from the personal information collected. As a result, collecting and retaining large amounts of personal information can assist in developing the systems, even where some of that information is not essential for providing the service itself and therefore does not comply with data minimization principles.

Further, AI-based systems can be used to extract sensitive information. The ability to draw inferences from big data and machine learning allows different components of personal information to be integrated and analyzed, such that sensitive conclusions can be drawn even from non-sensitive information. Studies show that, under certain circumstances, AI can identify patterns that predict a person's tendencies and behavior, or that reflect on their health condition or sexual orientation, from biometric information (such as bone structure or behavior). This capability raises the question of whether it is appropriate to limit the types of personal information on which AI-based systems are allowed to base decisions.

There is a concern that AI-based systems will be used to re-identify de-identified or anonymized information. Due to the ability of AI-based systems to process a wide variety of personal information from a variety of sources, their use can increase the risk of identifying data subjects in various datasets. This concern is exacerbated when it comes to the combination of personal information from different sources and in large volumes.

The use of AI-based systems can also raise difficulties in defining roles and areas of responsibility in relation to personal information. Personal information may be used in the development phase of the AI system, in the testing phase of the system, and in the use phase of the system. At times, in the development phase of the system, developers rely on third-party databases that contain personal information for the purpose of training the system. In these cases, defining the areas of responsibility and liability of each party can be complex and challenging.

Measures to mitigate privacy challenges

In light of these concerns about privacy risks, the AI Policy describes common measures intended to reduce the risk of privacy violations. This is not an exhaustive list, and there may be additional measures and mechanisms to reduce these risks.

With regard to the need to obtain informed consent when personal information is used, the PPA's position is that when personal information is collected using automated systems (such as bots), including AI-based systems, all the details required under Section 11 of the PPL must be presented to the data subject - that is, the data subject must be told, at the time of the approach, what personal information will be collected, for what purposes, and with whom the information may be shared and for what purpose. The PPA also recommends informing the data subject about the types of personal information the AI systems may use and the source of that information.

It should be noted that Section 11 of the PPL and the notification obligation apply only when personal information is collected directly from a data subject. Therefore, when the personal information used to train an AI system is provided to the developer by a third party, the developer has no notification obligation toward the data subject. In such cases, the concerns of deviation from the original purposes for which the personal information was collected, and of defeating the data subject's expectations, remain unresolved.

In accordance with the data minimization principle, organizations should take steps to ensure that the purposes for which the personal information is being processed are clearly defined and documented and meet the reasonable expectations of data subjects regarding the use of their personal information. In this context, it is important to consider the need to minimize the personal information used in the AI development processes so that it is limited to the relevant and necessary personal information for the defined purposes.

In some cases, it is appropriate to consider the possibility of using privacy-enhancing mechanisms that will help to reduce the amount of personal information, such as:

  • de-identification, which involves removing or altering personal identifiers, such as names, addresses, and phone numbers;
  • synthetic data, which involves creating artificial data that resembles real personal information but contains no personally identifiable information; and
  • noise addition, which involves adding random noise to personal information, making it more difficult to identify individuals.

These mechanisms can be used to help protect privacy while still allowing for the development and use of AI systems.

The use of privacy-enhancing mechanisms is a complex issue and there is no one-size-fits-all solution. The specific mechanism that is most appropriate will depend on the specific AI system and the personal information that is being collected.
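To make the three mechanisms above concrete, here is a minimal Python sketch. It is a hypothetical illustration only, not a compliance tool: the record fields and function names are invented for this example, and real-world de-identification, synthetic data generation, and noise calibration (for example, under differential privacy) require far more rigorous methods than shown here.

```python
import random

def de_identify(record, identifiers=("name", "phone")):
    """De-identification: drop direct identifiers from a record."""
    return {k: v for k, v in record.items() if k not in identifiers}

def add_laplace_noise(value, scale=1.0, rng=None):
    """Noise addition: perturb a numeric value with Laplace noise,
    generated as the difference of two exponential samples."""
    rng = rng or random.Random()
    return value + rng.expovariate(1.0 / scale) - rng.expovariate(1.0 / scale)

def synthesize(records, field, n, rng=None):
    """Synthetic data (naive): resample one field's empirical
    distribution to produce artificial values."""
    rng = rng or random.Random()
    values = [r[field] for r in records]
    return [rng.choice(values) for _ in range(n)]

# Hypothetical records for illustration only.
patients = [
    {"name": "A. Cohen", "phone": "03-555-0101", "age": 34},
    {"name": "B. Levi", "phone": "03-555-0102", "age": 58},
]

safe = [de_identify(p) for p in patients]                  # identifiers removed
noisy_ages = [add_laplace_noise(p["age"], 2.0) for p in patients]
fake_ages = synthesize(patients, "age", 5)                 # artificial sample
```

Note that each mechanism trades utility for privacy differently: de-identification preserves exact values but can be reversed by linkage, noise addition degrades accuracy, and naive resampling like the above can still leak rare values, which is why the choice of mechanism must be assessed per system, as the AI Policy suggests.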


Unlike the EU, the Israeli Government has chosen to regulate AI through 'soft regulation' supplemented by sector-specific guidelines and rules. Legal adaptation to new technology typically lags behind the development of the technology itself, and it seems that the Israeli Government currently prefers to monitor the legislation and regulatory frameworks evolving in other jurisdictions as a basis for the further development of its own regulatory framework for AI and, at this stage, not to position itself at the forefront of the legislative process.

On the one hand, the resources and effort required to enact AI-specific legislation are immense, so the Government prefers to defer such decisions to a later stage and to watch and learn from the EU and the US before pursuing formal legislation. The General Data Protection Regulation (GDPR) is an example of a foreign law, not directly applicable in Israel, that became a gold standard for privacy compliance and for the interpretation of privacy laws around the globe; the same is expected of the recently enacted EU AI Act. In addition, existing privacy laws and principles, with additional guidance from regulators such as the PPA, can already address some of AI's privacy challenges. Israeli companies focusing their products on foreign markets will have to comply with foreign AI legislation (mainly in the EU and the US) if they wish to sell into those markets, and such compliance will indirectly benefit the local market as well.

On the other hand, certain stakeholders are concerned that, lacking formal legislation and regulation, Israel will fall behind and may be overlooked in the race to develop and deploy AI-based systems. In this respect, the PPL, enacted in 1981, is currently being amended to align parts of it (such as the definition of personal information) with modern privacy laws, mainly the GDPR. However, this process is slow and partial, with the Government already pursuing an additional amendment (Amendment 15) to follow immediately after.

At this pace, AI-specific legislation seems remote, even in the coming years. The purpose limitation principle in the current version of the PPL is linked to the obligation to register databases, which the proposed bill recommends limiting to public entities and data brokers. If an alternative purpose limitation provision is not added to the law, the use of previously collected personal information to train AI models, without proper notification or consent and contrary to data subjects' expectations, may become common practice. In addition, setting basic principles, such as banned uses of AI and permitted secondary uses under a set of common guidelines, is essential for a holistic approach across market sectors. The lack of formal legislation may lead to the adoption of different rules in different sectors and a non-uniform approach.

Many hope that the proposed AI Policy Coordination Center will be established sooner rather than later, and that it will efficiently align the various proposed sectoral AI regulations into a common approach that can serve as a basis for future central legislation, while at the same time monitoring development and implementation abroad, so that relevant legislation can be imported once lessons have been learned and experience gained elsewhere.

Dalit Ben-Israel Partner, Head of IT & Data Protection Practice
[email protected]
Lior Haliva Wasserstein Associate IT & Data Protection Practice, AI Lead
[email protected]
Naschitz Brandes Amir, Tel-Aviv