Insights

In this Insight article, Roger Vilanova Jou, Senior Associate at PwC, delves into the growing impact of generative artificial intelligence (AI), which is sparking debate and regulatory consideration worldwide. As organizations grapple with AI's transformative potential, the question arises: Do we need a new role to navigate its governance effectively?

In this article, Arun Babu and Gayathri Poti, from Kochhar & Co., delineate the primary disparities between the Digital Personal Data Protection Act (DPDPA) and the General Data Protection Regulation (GDPR) from a business perspective, analyzing the rationale behind these distinctions and their practical implications.

In part one of this insight series, Dr. Paolo Balboni, Noriswadi Ismail, Davide Baldini, and Kate Francis, of ICT Legal Consulting, delved into the growing influence of artificial intelligence (AI) in areas such as recruitment, talent management, and cybersecurity. In part two, they outlined potential concerns that may arise from the use of AI in the provision of health services. In part three, they explore the imperative of addressing bias, siloed governance, and data breach risks in healthcare, emphasizing the critical need for comprehensive mitigation strategies and interdisciplinary collaboration to ensure AI's responsible integration into healthcare systems.

From enhanced diagnostic precision to improved treatment efficiency, from new drug discovery to appointment scheduling, artificial intelligence (AI) is revolutionizing healthcare as we know it. As is often the case with disruptive technologies, however, significant risks may arise from the use of AI in healthcare. In part one of this insight series, Dr. Paolo Balboni, Noriswadi Ismail, Davide Baldini, and Kate Francis, of ICT Legal Consulting, delved into the growing influence of AI in areas such as recruitment, talent management, and cybersecurity. In part two, they outline potential concerns that may arise from the use of AI in the provision of health services. These concerns are legal, technical, and ethical in nature, and must be duly considered by developers and deployers of AI systems so that society can reap the benefits of AI in healthcare while mitigating, to the extent possible, the high-stakes risks involved.

In part one of this Insight article, Daniel Necz, Associate from Eversheds Sutherland, highlighted key considerations and regulatory frameworks essential to navigating the landscape of generative artificial intelligence (AI) systems. In part two, Daniel explores automated decision-making and data security in AI employment practices, offering insights on transparency, bias mitigation, and regulatory compliance for organizations.

Generative artificial intelligence (AI) has emerged as a ground-breaking technology, raising creativity to a new level and testing the limits of various fields of law such as intellectual property and data privacy. With respect to the privacy compliance of generative AI systems, a new resolution of the 45th Global Privacy Assembly (GPA) puts forward important insights and explores relevant issues, challenges, and approaches to be followed by providers of such systems where they engage in processing personal data. In part one of this Insight article, Daniel Necz, Associate from Eversheds Sutherland, highlights key considerations and regulatory frameworks essential to navigating this evolving landscape.

The recent conclusion of negotiations in the EU has brought forth the final version of the Artificial Intelligence Act (AI Act), an important milestone in shaping artificial intelligence (AI) governance. Now that the AI Act is upon us, it is more important than ever for organizations to review their AI and machine learning (ML) tools in a timely manner and assess whether they can be used in a responsible and trustworthy way: in particular, whether those tools are transparent, explainable, fair, non-discriminatory, unbiased, meaningful, and secure.

In part one of this AI series, Danique Knibbeler and Sarah Zadeh, from NautaDutilh N.V., analyzed the interplay between the AI Act and the General Data Protection Regulation (GDPR). Part two delves into how these criteria can be implemented during the design of an AI/ML tool, and what default settings can be used to safeguard its trustworthy use. More specifically, they analyze whether Privacy by Design and by Default can help to implement these settings in the right way. It must be noted that the concept of Privacy by Design is not a recent development: coined by Dr. Ann Cavoukian in the 1990s, it aimed to integrate privacy-enhancing features into the design and production of technology from the very start. This proactive approach would ensure that the technology made available to consumers was equipped with inherently robust privacy settings by default or by design.

The use of artificial intelligence (AI) technologies has exploded in recent years, with businesses eager to harness the power of machine learning and data analytics. However, the rapid adoption of AI has raised significant privacy concerns, leading to increased scrutiny from regulators. In this Insight article, Iain Borner, Chief Executive Officer at The Data Privacy Group, explores the intersection of privacy and AI, unraveling the complexities surrounding responsible implementation in the ever-evolving regulatory landscape.

The rapid development of artificial intelligence (AI) has created much debate across the legal community about its risks and benefits. AI can increase productivity and efficiency, and create new opportunities for communities and businesses. However, the use of AI is not without challenges, including the regulatory demands it places on governments. In November 2021, the United Nations Educational, Scientific and Cultural Organization (UNESCO) published its Recommendation on the Ethics of Artificial Intelligence (the Recommendation). Katherine Sainty, Katherine Voukidis, Kaelah Dowman, and Sarah Macken, from Sainty Law, take a look at what the Recommendation is and the 11 key policy areas that governments should consider when using AI.

In this Insight article, Conor Hogan and Matthew Goodbun, from the British Standards Institution (BSI), delve into the transformative impact of generative artificial intelligence (AI) on diverse industries, exploring its acceleration of business outcomes and the concurrent rise of privacy compliance challenges.

In this Insight article, Sarah Cameron and Krish Khanna, from Pinsent Masons LLP, delve into the intricacies of global artificial intelligence (AI) regulation, examining diverse national approaches and their implications for businesses, standards, and international collaboration.

Since October 12, 2023, businesses have been able to use the new UK-US Data Bridge, a partial adequacy decision covering in-scope US organizations that have self-certified under it, to transfer personal data from the UK to the US. For many UK and US businesses, this has been an important and much-needed addition to the EU-US Data Privacy Framework (EU-US DPF), which has been available for transfers of personal data from the EU to the US since July 2023. Jonathan McDonald and Emily Barwell, from Osborne Clarke, provide an overview of the UK-US Data Bridge and what it covers, as well as a look at other UK transfer mechanisms.