You will most likely have heard about ChatGPT in the news recently, along with much debate about AI in general.
What is AI?
AI means different things to different people but, in the context of data protection, the Information Commissioner’s Office describes it as “the theory and development of computer systems able to perform tasks normally requiring human intelligence.”
AI is one of the fastest-growing tech sectors in the UK, and the government has been considering how best to unleash its full potential whilst taking into account a number of the legitimate concerns that have been raised. The government has released a 10-year plan to “make Britain a global AI superpower” and, as part of that, has been looking at AI processing of personal data.
AI is increasingly used across a number of important sectors, including healthcare, finance, share trading and recruitment. Along with its benefits, however, AI also brings a number of risks and legal considerations.
What changes are in the pipeline?
The government introduced the Data Protection and Digital Information (No. 2) Bill on 8th March 2023 (Data Protection and Digital Information (No. 2) Bill (parliament.uk)) and on 29th March 2023 released a white paper setting out a pro-innovation approach to AI regulation (A pro-innovation approach to AI regulation – GOV.UK (www.gov.uk)); the Information Commissioner’s response can be found here. One of the stated aims of the Bill is to increase public and business confidence in AI technologies. Having received some significant pushback to the initial proposals in its consultation paper (see GDPR changes could ease AI rules for insurers using personal health data – Health & Protection (healthcareandprotection.com)), the government seems to have taken a step back, for now at least. The Bill is, of course, subject to change as it makes its way through the parliamentary process. The EU is also considering an AI Act which may have a significant impact on the use of AI.
What are the concerns about the use of AI to process personal data?
Aside from fears about the creation of time-travelling robots, a number of other concerns have been raised about the use of AI, including job security, the security of IT systems and the processing of personal data by AI.
There have been a number of recent examples of the difficult relationship between data protection and AI processing:
- the Italian data protection authority has recently banned the use of ChatGPT in Italy, having found that there did not appear to be a legal basis for the collection and processing of personal data, that there had been a data breach and that there were no age verification safeguards (NB the authority has subsequently confirmed that it will lift its ban if ChatGPT complies with certain measures by 30th April);
- recent congressional hearings in the United States into TikTok heard concerns about the Chinese government using AI to process personal data and serve targeted adverts to users; and
- worryingly, a law professor has recently claimed that ChatGPT cited a non-existent newspaper report which falsely accused him of sexual harassment on a university trip that he never went on, for a university that never employed him!
The Information Commissioner’s Office (“ICO”)
The ICO updated its guidance on AI and data protection on 15th March 2023 (Guidance on AI and data protection | ICO).
The first steps in using AI are to understand the risks to individuals in processing their personal data, to determine how to address those risks and to assess the impact that addressing those risks has on the use of the AI. A recent report (2023 Landscape – AI Now Institute) by the AI Now Institute has suggested that the burden should be placed “on companies to affirmatively demonstrate that they are not doing harm, rather than on the public and regulators to continually investigate, identify, and find solutions for harms after they occur”.
Bias and discrimination
A significant issue with AI is the introduction of bias into the system and the resulting discriminatory outputs. Bias can be introduced in a number of ways, including issues with the data that the AI learns from, the design and testing of the system and poorly designed objectives. It is important to continuously review and assess your AI systems and their output to ensure that they meet the fairness requirements of data protection law and the requirements of other legislation.
Automated decision-making
Article 22 of the UK GDPR, as currently drafted, precludes organisations from taking decisions based solely on automated processing (“automated decision-making”) where those decisions produce a legal effect (or a similarly significant effect) for an individual (“significant decisions”).
Currently, the only exceptions to this are where the decision is: a) necessary for the performance of a contract; b) authorised by law; or c) based on the data subject’s explicit consent.
If enacted, the Bill will no longer restrict automated decision-making to the three circumstances outlined above for the processing of personal data that is not special category data (although those circumstances will still apply to the processing of special category data). However, other protections will remain, including in relation to law enforcement processing. Where significant decisions are made: the data subject has to be notified after the decision has been taken; the data subject must be able to make representations about the decision; the data subject must be able to obtain human intervention (e.g. a review of the decision); and the data subject must be able to contest the decision.
The bit of government can-kicking that appears to have disturbed some is that the Secretary of State may by regulations amend: a) what constitutes a significant decision that would produce an effect on a data subject similarly significant to a legal one; and b) the safeguards for decisions taken through solely automated processing. The stated reason for this is to allow the law to keep pace with advancing technologies (something that the law is often slow to do) and changing societal expectations of what a significant decision is in a privacy context.
The question remains whether the law will keep up with AI to allow the country to reap its benefits whilst also protecting against some of the risks. The government has significant ambitions for AI and has left itself some room to manoeuvre with the proposed legislation.
For more information on artificial intelligence and the legal precautions to take, contact our Commercial team here.
This note is not intended to consider all of the ways that data protection legislation applies to the use of AI; instead it considers some issues that are specific to AI processing of personal data and looks broadly at automated decision-making.