Artificial Intelligence, or AI for short, is a broad and vague term that has been defined in many ways. Maybe the first thing that comes to your mind when thinking about AI is humanoid robots that try to take over the world in a futuristic scenario. Or perhaps you associate AI with machines in a more traditional sense, like a computer that behaves like a human to take over intelligence tasks and decision-making. In any case, most of what people mean when they talk about AI is machine learning, a sub-domain of AI which “uses statistical methods to find rules in the form of correlations that can help to predict certain outcomes.”
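To make that definition a little more concrete, here is a minimal sketch in Python, assuming the widely used scikit-learn library and entirely invented numbers, of what “finding rules in the form of correlations to predict outcomes” can look like in practice:

```python
# A toy example of machine learning in the sense described above: the model
# "learns" a statistical relationship (a correlation) between hours studied
# and exam scores, then uses it to predict an outcome for a new, unseen case.
# All numbers are invented purely for illustration.
from sklearn.linear_model import LinearRegression

hours_studied = [[1], [2], [3], [4], [5]]   # input feature (one column)
exam_scores = [52, 58, 65, 71, 78]          # observed outcomes

model = LinearRegression()
model.fit(hours_studied, exam_scores)       # find the "rule" (a fitted line)

print(model.predict([[6]]))                 # predicted score for 6 hours of study
```

The “rule” the model finds here is nothing more than a line fitted through a handful of points; real-world systems learn from far more data and far more features, but the principle is the same.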
This series will introduce you to some of the most talked-about and fastest-developing modern technologies: artificial intelligence, algorithms, machine learning systems, and the phenomenon of Big Data. Besides digging into the functionality and uses of these systems, we will critically assess their real-world implications for individuals and their data privacy.
Definition — Algorithm
An algorithm is “a process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer.”
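For illustration, the short Python sketch below spells out one such set of rules, a hypothetical step-by-step procedure for finding the largest number in a list:

```python
# A very simple algorithm: a fixed set of rules for finding the largest
# number in a list. A computer follows exactly these steps every time.
def find_largest(numbers):
    largest = numbers[0]           # rule 1: start with the first number
    for n in numbers[1:]:          # rule 2: look at every remaining number
        if n > largest:            # rule 3: if it is bigger, remember it instead
            largest = n
    return largest                 # rule 4: report the result

print(find_largest([12, 7, 31, 5]))   # prints 31
```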
Definition — Big Data
Big Data refers to the collection and existence of very large datasets that are used by machines to generate new insights about us and the world, finding correlations and causes that we humans, by ourselves, would not be able to find. Because of this, Big Data practices can turn facts that were previously useless or seemingly irrelevant to us into valuable information. As a result, more and more personal data is collected and linked with other pieces of information in case it could become useful through Big Data methods later on. The process of turning something, anything, into data is called ‘datafication’.
Where do we come across it?
Nowadays, there’s almost no sector in the world that is unaffected by artificial intelligence. We are, therefore, likely to come across AI wherever we go, both in person and, even more so, online.
Finance
In the financial sector, for example, AI fuels credit scoring systems that determine people’s eligibility for loans or the amount of interest they have to pay. AI also supports or fully takes over tasks such as portfolio optimisation and asset management, and comes into play for market prediction and trading purposes.
Transport
AI has also found applications in the transport sector, for example, to guide traffic management or feed into the online tools we all use on a regular basis, such as Google Maps or Apple Maps.
Health
In the health sector, AI is used for a variety of different tasks. For example, systems can be trained to diagnose illnesses in patients, or to model disease transmission and outbreaks.
Advertising
Applications in the advertising and marketing sectors include the use of AI systems to regulate online advertising, build profiles on people’s preferences, and allow companies and organisations to target tailored groups of society with their ads.
Social Media
Social media companies and platforms rely heavily on AI systems as well. The biggest of them, like Alphabet/Google, Facebook/Instagram/Meta and Twitter, are also among the most dedicated to developing such systems. They use AI to automate tasks, such as content moderation, or to enable new platform functions, including the building of profiles of their users (as well as of people who aren’t signed up to their platforms).
Military & Defence
In the military and defence sector, AI systems come into play for the development of autonomous and semi-autonomous weapons, or the prediction of armed attacks.
Supply Chain Management
Other sectors and areas of work that benefit from the use of AI include, for example, supply chain management, where AI systems help with resource gathering, optimisation and energy distribution.
Recruiting & Hiring
AI can also come into play for recruiting and hiring tasks, for example, to scan and filter CVs or even whole applications.
Law Enforcement
Law enforcement entities use AI for surveillance, e.g. through highly criticised systems such as facial recognition technology in street cameras, or through predictive policing methods that guide policing operations.
Digital Tools
Lastly, AI systems are present in many of the digital tools and devices that we use on a daily basis: our mobile phones detect faces through their cameras and recognise our voices, apps use automatic machine translation to help us read foreign-language websites, and chatbots assist customers with their shopping on e-commerce platforms.
AI & Data Privacy
So, how is artificial intelligence affecting data privacy, and, conversely, how is it being affected by data privacy laws and regulations? A simplified answer to this question is that AI is driven by data, including personal data, i.e. data about us that can directly or indirectly identify us, and as such, AI concerns the field of data privacy.
Besides the many laws that apply to the use of personal data, and therefore, also to any AI system that is trained and fuelled by this data, there are specific legal provisions in data privacy laws that address AI and/or automated decision-making.
For example, the EU framework on data protection, the General Data Protection Regulation (GDPR), explicitly requires those collecting personal data to obtain individuals’ consent before using their data in automated decision-making or profiling. It also obliges data controllers to inform individuals about the functioning of those automated systems in a way that is understandable and accessible. These are important legal steps towards giving individuals more knowledge about what is happening to their data, along with the necessary control to opt out of systems they feel uncomfortable with.
The reality, however, is that this provision alone contains too many loopholes and does not provide individuals with real control. Many services driven by AI are only accessible to us if we consent to our data being used; as such, the choice is not between sharing our data or not, but between being able to use a service or not.
On top of that, the GDPR as well as other data privacy laws and regulations around the world, are not yet as well understood by those collecting and using data as they should be. Companies, organisations, and institutions still struggle with incorporating data privacy practices in their environments. This is also true for those developing AI systems. Data privacy and other considerations around the ethical use of data are rarely part of the compulsory curricula in AI, data science and IT-related subjects at universities. As a result, AI professionals may not think enough about the implications for data privacy when creating a new system.
Another privacy-related problem with some AI systems is their ability to infer information about people based on the data they have given away. If trained on enough data from a large variety of people, AI systems can become sophisticated enough to correlate pieces of information about us and then draw conclusions about other things in our lives. To give an obvious example: without you telling anyone online about your eating habits, your social media activity, search history and other online behaviour might hint at your dietary restrictions, perhaps even including any allergies and intolerances. AI systems can gather this sort of information and draw fairly precise conclusions from it. However, this ability does not end with the fairly obvious conclusions. AI systems can nowadays work out your political affiliation, sexual preferences or religious denomination solely based on the sort of things you like, follow and/or say on social media. Alongside these pieces of sensitive information and information on protected characteristics, there is a lot more, and more fine-grained, data that AI systems can produce about people without them consciously giving it away. This raises multiple, very significant data privacy issues.
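To make this inference problem more tangible, here is a deliberately simplified sketch (Python with scikit-learn, and entirely invented data) of how a classifier could learn to guess a sensitive attribute, such as a dietary restriction, from seemingly harmless signals like the pages someone follows. Real systems work with vastly more data and subtler signals, but the principle is the same:

```python
# Toy illustration of attribute inference: guessing a sensitive trait
# (here, a dietary restriction) from seemingly harmless behavioural signals.
# All data below is invented for illustration only.
from sklearn.tree import DecisionTreeClassifier

# Each row: [follows_vegan_recipe_pages, follows_bbq_pages, likes_running_groups]
behaviour = [
    [1, 0, 1],
    [1, 0, 0],
    [0, 1, 1],
    [0, 1, 0],
]
is_vegetarian = [1, 1, 0, 0]   # the sensitive attribute the model learns to infer

model = DecisionTreeClassifier().fit(behaviour, is_vegetarian)

# A new user who never disclosed their diet but follows vegan recipe pages:
print(model.predict([[1, 0, 0]]))   # the model infers [1], i.e. "vegetarian"
```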
Finally, it is important to mention that AI systems have become sophisticated enough to de-anonymise data that we might feel is safe in data controllers’ hands. What does this mean? Many data privacy laws and regulations treat fully anonymous data, namely data that cannot be traced back to us individually, as exempt from many obligations. A data controller could choose to collect only data that isn’t identifiable, or to anonymise personal data after the purpose of its collection has lapsed. However, some AI systems have been trained on enough data about people and the world to be able to identify individuals even if most traces have been deleted from the records. Using AI systems to systematically de-anonymise data might not be legal, but even so, the existence of such systems means that AI has become a direct threat to data privacy.
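The underlying idea is easier to see in a simpler, non-AI form: a so-called linkage attack, in which quasi-identifiers (such as postcode, birth year and gender) in an “anonymous” dataset are matched against another dataset that does contain names. The sketch below (Python with pandas, using invented records) illustrates this; AI systems extend the same principle to far messier and less obvious traces:

```python
# Toy linkage attack: re-identifying records in an "anonymous" dataset by
# joining it with a public one on shared quasi-identifiers.
# All records below are invented for illustration only.
import pandas as pd

# A dataset released "anonymously": names removed, sensitive attribute kept
anonymous = pd.DataFrame({
    "postcode":   ["10115", "80331"],
    "birth_year": [1985, 1992],
    "gender":     ["F", "M"],
    "diagnosis":  ["diabetes", "asthma"],
})

# A public dataset (e.g. a leaked profile list) that does contain names
public = pd.DataFrame({
    "name":       ["Alice Example", "Bob Example"],
    "postcode":   ["10115", "80331"],
    "birth_year": [1985, 1992],
    "gender":     ["F", "M"],
})

# Joining on the quasi-identifiers re-attaches names to the "anonymous" records
reidentified = anonymous.merge(public, on=["postcode", "birth_year", "gender"])
print(reidentified[["name", "diagnosis"]])
```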
Besides the many privacy-related issues around AI, we can also observe increasing numbers of ethical problems connected to the use of these sophisticated systems. Many of these will be touched upon in this series, as well as in our ‘Demystifying Data Ethics’ series, which looks at the larger context of the ethical use of data.