Over the last few years, we have seen applications of Artificial Intelligence (AI) permeate many aspects of our lives.
If used well, Artificial Intelligence has vast potential to make organisations more efficient, effective and innovative. However, AI also raises significant data protection risks and compliance challenges for organisations.
For this reason, the Information Commissioner’s Office (ICO) has published brief guidance for organisations on managing the data protection risks arising from AI applications.
In this post we summarise its key takeaways.
Artificial Intelligence & Data Protection
Accountability is a legal obligation for any organisation processing personal data and a key principle of the General Data Protection Regulation (GDPR).
However, when adopting AI, data controllers will need to re-assess whether their existing governance and risk management practices remain fit for purpose.
To do so, they will need to understand and manage the key risk areas specific to Artificial Intelligence.
Risk Areas Specific To Artificial Intelligence
- Fairness, transparency and accuracy of data
Establishing clear policies and good practices for the procurement of high-quality training and test data is important, especially if organisations do not have enough data internally, or have reason to believe their data may be unbalanced or contain bias.
Accuracy and the appropriate measures to evaluate it should be considered from the design phase, and should also be tested throughout the AI lifecycle.
- Fully automated decision-making models
The degree and quality of human review and intervention before a final decision is made about an individual is the key factor in determining whether an AI system is solely or non-solely automated (a classification that matters because the GDPR applies differently in each case).
For effective Artificial Intelligence risk management, training is a vital component. Reviewers need to understand how an AI system works and its limitations; to anticipate when the system may be misleading or wrong, and why; to understand how their own expertise is meant to complement the system; and to provide meaningful explanations for either rejecting or accepting the AI system’s output.
- Security
Personal data must always be processed in a manner that ensures an appropriate level of security against unauthorised processing, accidental loss, destruction or damage.
Some of the unique characteristics of Artificial Intelligence mean that compliance with security requirements can be more challenging than with more established technologies. It is therefore important to review risk management practices to ensure personal data is secure in an AI context, taking into account testing and verification challenges, outsourcing risks, and re-identification risks.
- Trade-offs
AI systems must satisfy a number of data protection principles and requirements, which may at times pull organisations in different directions. For example, while more data can make AI systems more accurate, collecting more personal information erodes privacy.
Organisations using Artificial Intelligence need to identify and assess such trade-offs, and strike an appropriate balance between competing requirements.
- Data minimisation and purpose limitation
AI systems generally require large amounts of data, but organisations using personal data must comply with the data minimisation principle under data protection law.
This means ensuring that any personal data is adequate, relevant and limited to what is necessary for the purposes for which it is processed. Data minimisation techniques have to be fully considered from the design phase.
- Individuals’ right to be forgotten, right to data portability, and right of access to personal data
Under the GDPR individuals have a number of rights relating to their personal data. These rights apply to personal data used at the various points in the development and deployment lifecycle of an AI system.
For example, while it may be difficult to identify the individual to whom the training data relates, it may still be personal data for the purposes of the GDPR, and so will still need to be considered when responding to data subject rights requests.
A case-by-case assessment may be needed, but organisations that have considered this question and implemented processes to help deal with such requests will be better placed than organisations that fail to think about the question in advance.
To meet these increasingly complex challenges, organisations will need diverse and well-resourced teams to support them.
If you need legal advice on data protection, book a call with our legal team and we will provide legal solutions tailored to your business.