Asia-Pacific Insights

At a time when competing approaches to artificial intelligence (AI) governance are emerging in different parts of the world, Singapore is charting a path that emphasizes pragmatism and enablement.

The National AI Strategy, a high-level strategy statement by the Singaporean Government, envisions Singapore as a global hub for developing, test-bedding, deploying, and scaling AI solutions, with an additional focus on strengthening the country's AI ecosystem enablers. Since its publication four years ago, developments in Singapore's AI governance landscape have been consistent with this approach, taking a decidedly 'light touch' in regulation and emphasizing the provision of practical tools and frameworks for responsible development and adoption. In this Insight article, Jeffrey Lim, Director at Joyce A. Tan & Partners LLC, summarizes Singapore's approach to AI governance in this context.

It has been almost 60 days since the enactment of the Vietnam Government's Decree No. 13/2023/ND-CP on the Protection of Personal Data (PDPD), the country's first comprehensive legislative instrument governing personal data protection, and the deadline for submitting outbound data transfer impact assessments (OTIA) is therefore approaching. Through the PDPD's broad extraterritorial scope,[1] Vietnam's Government has set a bold goal of effectively safeguarding Vietnamese citizens' rights and interests in their personal data. Businesses, particularly multinational corporations frequently involved in the international transfer of personal data, must therefore be aware of how the PDPD regulates their transfer activities.

This Insight article provides a comprehensive look at the outbound transfer of personal data and its related requirements, including the matters necessary for compiling an OTIA and the differences between an OTIA and a data processing impact assessment (DPIA) under the PDPD, highlighting key compliance takeaways for businesses.

In many aspects, the Personal Information Protection Law (PIPL), which became effective on November 1, 2021, looks very similar to the EU's General Data Protection Regulation (GDPR). However, many of these similarities remain high-level principles under the PIPL, while more detailed content has been rolled out step by step. Earlier this year, the Cyberspace Administration of China (CAC) established export security assessment procedures and Standard Contractual Clauses (SCCs) for data exports. Now, the CAC is shifting its focus to compliance audits. On August 3, 2023, the CAC released the draft Administrative Measures for Compliance Audit of Personal Information Protection (Draft Audit Measures) for public comment. For Data Protection Officers (DPOs) and compliance officers, this topic will become another important task to include in their planning for implementation in 2024.

In this Insight article, Julian Sun, from Taylor Wessing, delves into the key provisions of the Draft Audit Measures and sheds light on the evolving compliance audit framework, highlighting its importance, nuances, and potential impacts for companies operating in China.

The emergence of artificial intelligence (AI), particularly with the introduction of powerful generative AI-powered chatbots like OpenAI's ChatGPT, Google LLC's Bard, Microsoft Corporation's Bing Chat, Baidu, Inc.'s ERNIE Bot, and Alibaba's Tongyi Qianwen, has captured considerable attention this year. These powerful language tools are revolutionizing human-technology interactions due to their increasing ability to generate text indistinguishable from that written by humans. Generative AI is also being used to generate other content, such as images, videos, and computer code. That said, various experts have warned that advancing the development of AI technologies without appropriate safeguards could have detrimental effects on humanity. In fact, in July 2023, seven tech companies jointly expressed their voluntary commitment to developing AI responsibly according to the principles of safety, security, and trust.[1] Ada Chung Lai-Ling, Privacy Commissioner for Personal Data, Hong Kong, China, discusses the considerations and risks regarding the use of generative AI, as well as the ever-evolving regulatory landscape.

The EU's General Data Protection Regulation (GDPR) is often touted as the 'gold standard' for personal data protection and has been in force for more than five years. In August 2023, India enacted the much-awaited Digital Personal Data Protection Act, 2023 (the DPDP Act).[1] While the DPDP Act may not be as granular as the GDPR in many aspects, it signifies a crucial milestone in India's journey towards upholding digital data protection. Harsh Walia, Supratim Chakraborty, Shobhit Chandra, Sumantra Bose, Sanjuktha Yermal, Shramana Dwibedi, and Vanshika Lal, from Khaitan & Co., provide a comparison between the GDPR and the DPDP Act and their approaches to areas such as data processor obligations, children's data, and cross-border data transfers.

The absence of a comprehensive data protection law has affected India's progress towards becoming a global leader in business, technology, and outsourcing. The enactment of the Digital Personal Data Protection Act, 2023 (the Act) brings with it a promise of enabling the processing of personal data in a way that respects both individual rights and the legitimate needs of businesses to process data for lawful purposes. But what are the key implications businesses need to be aware of? Since the Act is yet to take effect and will likely be rolled out in phases, readiness to comply will be paramount.

In this Insight article, Harsh Walia, Partner at Khaitan & Co., explores the implications of the Act for businesses, offering guidance on how to navigate the new obligations. This proactive approach will not only ensure adherence to legal requirements but also cultivate a culture of responsible data practices in this digital age.

On August 11, 2023, the Digital Personal Data Protection Act, 2023 (the Act) received the assent of the President of India and was published in the Official Gazette, representing a landmark in India's long journey towards the adoption of a comprehensive privacy framework. In particular, the Act aims to regulate the processing of digital personal data in a manner that balances the need to lawfully process data with the rights of individuals regarding the protection of their personal data.

In this Insight article, OneTrust DataGuidance Research provides an overview of the Act, highlighting its key requirements.

Success in the legal industry requires firms to streamline business processes, improve efficiency, and focus their attention on delivering exceptional client service. The race to success has seen many firms adopt tools powered by artificial intelligence (AI). AI-powered tools have the potential to help lawyers conduct legal research, contract drafting, and document review more efficiently, allowing them to devote the time saved to more complex legal issues, client service, and business development.

However, there are concerns about how lawyers can use AI-powered tools while simultaneously fulfilling their legally mandated professional duties when the security, accuracy, and quality of information of such tools are continuously questioned. Katherine Sainty and Julia Colubriale, from Sainty Law, explore the application of AI-powered tools in the Australian legal profession and their interaction with the professional responsibilities under the Legal Profession Uniform Law 2014 (LPUL)[1] and the Australian Solicitors' Conduct Rules 2015 (ASCR)[2] that lawyers must adhere to when incorporating these tools into their legal practice.

In this Insight article, Huzaifah Sehgal, from Walker Martineau Saleem LLP, explores the rapid rise of artificial intelligence (AI) in the Asia-Pacific region, its impact on various industries, the challenges related to AI regulation, and the need for an accountable and ethical approach to foster trust and further integration of AI technologies in the APAC region.

The use of artificial intelligence (AI) in Australia's tech landscape is growing rapidly, presenting unique and unprecedented challenges to businesses and consumers. Since late 2022, the use of a form of AI known as large language models (LLMs) has grown exponentially. The generative AI market is projected to reach a value of $20.6 billion by 2032.

Katherine Sainty and Ottilia Thomson, from Sainty Law, examine LLMs, the balance between their potential benefits for businesses and the accompanying privacy concerns, and potential AI-specific legislative reform.

On April 11, 2023, the Cyberspace Administration of China (CAC) released the draft Administrative Measures for Generative Artificial Intelligence (Draft Measures). The Draft Measures, which comprise 21 articles, aim to promote the healthy development and standardized application of generative artificial intelligence (AI) technology, while allowing room for research and development in this area.

Kevin Duan, Partner at Han Kun Law Offices, analyzes the regulatory issues and potential challenges that the Draft Measures may pose in practice.

The requirements for personal information protection in the health and pharmaceutical industries are complex. Danjun Wu, Partner at Guantao Law Firm, provides a two-part overview: part one introduces the basic requirements for personal information protection to be met by medical institutions and pharmaceutical companies, and part two introduces the specific protection obligations they must fulfill in key personal information processing scenarios.