Data Protection Leader | September 2020
The latest issue of our bi-monthly publication, Data Protection Leader, is here!
As always, this edition brings you a selection of news, features, and informative editorials from top voices in the industry, including insight from Alex Sharpe, Principal at Sharpe Management Consulting LLP, on CMMC as a competitive advantage, and Claire François' look at the practical steps that can be taken post-Schrems II. There are also interviews with Dr. Marie-Louise Gächter-Alge, Data Protection Commissioner of the Principality of Liechtenstein, and Evan Davies, Data Protection Officer at YouGov.
In his regular editorial, Eduardo Ustaran, Partner at Hogan Lovells, discusses the unspoken truths regarding the Schrems II decision:
“Every instance of government access to data creates a risk, so what the CJEU is saying is that when European data becomes available to foreign states, we must remain vigilant about this risk and take steps to ensure that the democratic balance is not lost. This is not radical political grandstanding, but a court doing its job.”
Dr. Carlo Piltz and Philipp Quiel of reuschlaw Legal Consultants assess the guidance on data transfers issued by the LfDI Baden-Württemberg authority:
“As one of the first statements, the Guidelines recall what has been written and said many times after the judgment was published: data transfers based solely on the EU-U.S. Privacy Shield are no longer lawful, since the Privacy Shield was declared invalid by the CJEU with immediate effect. In contrary to what companies that are located in the US may hear from the U.S. Department of Commerce, data importers continuing to rely on the Privacy Shield is not of help for ensuring that data transfers meet the conditions set out in Chapter V of the GDPR.”
In an article on the guidance issued by the ICO and the Alan Turing Institute, Bridget Treacy and Olivia Lee of Hunton Andrews Kurth LLP examine how to explain decisions made using AI systems that process personal data:
“The core purpose of the Guidance is to help organisations explain decisions made by AI systems to those who are affected by them, with a view to improving transparency and accountability. To be effective, an explanation needs to take into account the context in which it is given, and the target audience. Explanations should not necessarily be approached in the same way, and there is no 'one size fits all' formula for creating an appropriate explanation.”