2023 will be the year of AI. In January, Microsoft announced a multi-billion dollar investment in OpenAI, the company behind the image generation tool DALL·E and the chatbot ChatGPT.
The term “artificial intelligence” or “AI” refers to the use of machines and computer systems capable of performing tasks that normally require human intelligence. The field has advanced rapidly over the last few years. Advances in deep learning algorithms, cloud computing, and data storage have enabled machines to process and analyze large amounts of data quickly and accurately. The ability of AI to interpret human language means that virtual assistants like Siri and Alexa can now understand and respond to complex voice commands at lightning speed.
The public sector is increasingly leveraging the power of AI to perform administrative tasks and create more customized services that meet user needs. Local governments are using AI to simplify staff scheduling, predict demand for services, and estimate the risk of individuals committing fraud. Healthcare providers can now offer automated diagnoses based on patients' medical imaging data, reducing wait times.
Risks
Every major technological advance has potential risks and downsides. On Monday, AI voice software company ElevenLabs said it had found an "increasing number of voice duplication exploits." According to reports, hackers used ElevenLabs' software to create deepfake voices of celebrities (including Emma Watson and Joe Rogan) making racist, transphobic, and violent comments.
There are also concerns about the impact of AI on jobs and the future of work. In April 2021, an Amsterdam court ordered Uber to reinstate British and Portuguese taxi drivers who had been dismissed by "robo-firing": the use of algorithms to make termination decisions without human involvement. The court concluded that Uber had made its decision "based solely on automated processing" within the meaning of Article 22(1) of the GDPR. Uber was ordered to reinstate the drivers' accounts and pay compensation.
There are also ethical issues surrounding the use of AI in decision-making processes that affect people's lives; AI-driven algorithms can lead to unintended bias and inaccurate decisions if not properly monitored and regulated. In 2021, privacy pressure group NOYB filed a GDPR complaint claiming that Amazon's algorithms discriminate against some customers by denying them the option to pay for their purchases by monthly invoice.
There is also the risk that AI will be deployed without considering its privacy implications. In May 2022, the UK Information Commissioner's Office fined Clearview AI Inc more than £7.5m for GDPR violations. Clearview's online database contains 20 billion images of people's faces, with data gleaned from information published on the internet and social media platforms around the world. The company, which bills itself as "the world's largest facial network," allows customers, including the police, to upload images of people to its app, which uses AI to match them against all the images in the Clearview database. The app then provides a list of matching images and links to the websites where the images originated.
Practical steps
Recently, the ICO conducted an inquiry after concerns were raised about the use of algorithms in welfare decision-making by local authorities and the DWP. In this case, the ICO found no evidence to suggest that claimants had suffered any harm or financial loss as a result of the use of the algorithms. However, it highlighted some practical steps that local and central government can take when using algorithms and AI:
1. Adopt a data protection by design and default approach
Data processed using algorithms, data analytics or similar systems should be reviewed both reactively and proactively to ensure it is accurate and up to date. If a local authority decides to engage a third party to process personal data using algorithms, data analytics or AI, it is responsible for assessing that party's ability to process the data in accordance with the UK GDPR.
2. Be transparent about how you use people’s data
Local authorities should regularly review their privacy policies to ensure compliance with Articles 13 and 14 of the UK GDPR and to identify areas for improvement. They should also draw individuals' attention to any new uses of their personal data.
3. Identify potential risks to people’s privacy
Local authorities should consider conducting a data protection impact assessment (DPIA) so they can identify and minimize the data protection risks of using algorithms, AI, or data analytics. A DPIA must consider not only compliance risks, but also broader risks to people's rights and freedoms, including the potential for serious social or economic harm.
In April 2021, the European Commission announced its proposal for a European Union AI Act. The proposal still has a long way to go before it becomes law, but it could be a catalyst for the UK to regulate AI further.
Using AI has significant benefits. However, it can also negatively impact people's lives and deny them basic rights. Understanding what AI technology involves and how to use it fairly and lawfully is therefore critical for data protection and information governance practitioners.
Interested in learning more about this rapidly developing field? Our AI and Machine Learning workshop explores the common challenges presented by this subject, with a focus on GDPR and other information governance and records management issues.
Are you an experienced GDPR practitioner looking to take your skills to the next level? See our Advanced Certificate in GDPR Practice.