Ethics refers to what constitutes a good life (in theory) and how one should act in a given situation to achieve the best possible outcome (in practice). It’s about the moral considerations involved in our actions.
Over time, ethics has found applications in many personal, political, and professional aspects of life. With the rapid development of technology, ethics is becoming an increasingly important factor in technical fields and in the education thereof. This is because technologies are built by people and necessarily reflect their values, beliefs, and biases. As such, these technologies can embed and support ethical ideas that help people lead better lives just as much as they can reinforce and replicate the bad in society, whether accidentally or deliberately.
While this dynamic applies to any human-made innovation, the risks and problems arising from it are exacerbated by the data-driven revolution we’re currently experiencing. New technologies, especially those driven by data, Big Data, and data analytics, move at unprecedented speed and scale, and are more pervasive than anything we’ve seen before. As a tool to guide and regulate innovation, the law often reacts too slowly to keep up with new developments, and lawmakers lack the expertise to make effective and sustainable policy decisions about these advancements. Meanwhile, the BigTech sector and its main players, such as Google, Meta, Microsoft, Apple, and Amazon, have gained so much influence that they can play by their own rules and lobby, or even threaten, states that decide to do something about it.
The study and practice of data ethics has emerged from this. In short, data ethics is concerned with the moral and ethical implications of the collection and use of data, and with the ways in which we can anticipate, understand, and shape these implications: ideally, to prevent replicating the bad and to accelerate the good.
What kind of issues does it touch on?
Because Data Ethics, as a field of study and practice, deals with the implications of novel data practices (which are built by humans and, in turn, target, use, and leverage individuals’ data), and because those practices come into play in virtually all areas of life, it touches on many kinds of issues.
Definition — Data Ethics
Floridi and Taddeo (2016) have defined data ethics “as a new branch of ethics that studies and evaluates moral problems related to data (including generation, recording, curation, processing, dissemination, sharing and use), algorithms (including artificial intelligence, artificial agents, machine learning and robots) and corresponding practices (including responsible innovation, programming, hacking and professional codes), in order to formulate and support morally good solutions (e.g. right conducts or right values).”
A number of ethical risks and problems arising from the use of data have been brought about by the emergence of ‘Big Data’, i.e., the collection and existence of very large datasets that are used by machines to generate new insights about us and the world, finding correlations and causes that we humans, by ourselves, would not be able to detect. Because of this, Big Data practices can turn facts that were previously useless or irrelevant to us into valuable information. As a result, more and more personal data is collected and linked with other pieces of information, in case it could become useful through Big Data methods later on. The process of turning something, anything, into data is called ‘datafication’. It comes in many different forms, many of which may be ethically worrying. For example, a person’s likes, shares, and tweets on social media can be fed into algorithms that find correlations between this seemingly unrelated information and people’s political affiliation or sexual orientation. This, in turn, enables advertisers and companies to target very particular groups, including those that are vulnerable, marginalised, or open to manipulation.
Taking a step back, Data Ethics also considers who works on the development of data practices and data-driven technologies, and whose data is being used to fuel these innovations. Why is this important? As mentioned, humans create machines against the backdrop of their world views, their knowledge, and their personal and cultural backgrounds. This means they create them based on how they themselves understand the world, deliberately and/or subconsciously. Even when they work together in teams or organisations, these groups are often too homogeneous to bring in other important perspectives that could help avoid biases, risks, or discriminatory outcomes. Specifically, the people making decisions in the BigTech industry about data, and about how it’s used, tend to be white men from Western countries. With BigTech companies such as Google, Facebook, and Amazon providing people worldwide with critical communications infrastructure, this means that the lives of billions of people using these technologies are directly affected by the decisions of a small and extremely homogeneous group.
It’s also important to look at who our data is about. If we base our decisions on data that is not representative enough, or use this data to train algorithms to make decisions for us, we risk arriving at biased or unfair outcomes. A prevalent example is facial recognition technology, which has been shown to be far less accurate for women and people of colour. One of the main reasons for this inaccuracy is that the datasets used to train facial recognition algorithms are predominantly white and male. It is, therefore, not surprising, even to non-experts in artificial intelligence, that the technology might not know enough about people with other characteristics.
Another question raised within data ethics is whether certain data-driven projects should be conducted at all, or whether they are too risky and prone to misuse to go forward. Facial recognition is a good example of this. Just because something sounds like a great idea and turns out to be operational doesn’t mean it necessarily ends up benefiting society.
There are, for example, many instances of accidental misuse of data or data-driven technologies. While most technologies were never intended to be used in ways that harm people, they can still be exploited by those with bad intentions. Social media platforms, for example, have been struggling to fight the hate speech, propaganda, and mis/disinformation occurring in their communities. Even more worryingly, they have been found to be used for human trafficking and sexual exploitation. While crimes and misuses of this kind also take place in the absence of technology, access to data and data-driven systems can exacerbate and multiply these problems.
Besides unintended negative impacts arising from the use of data, there are many issues caused by the deliberate misuse of data. These include, for example, the weaponisation and exploitation of data to manipulate or threaten people, and to drive misinformation campaigns. Data collection and processing, particularly with the aid of artificial intelligence, are also known to be used as tools for surveillance and law enforcement activities, some of which are meant to target specific groups of people or curb free speech. This is a particularly big and dangerous problem in authoritarian countries, where the state is involved in the targeting of certain individuals and/or communities, but it is becoming an increasing problem in democracies as well.
Why is it important & where does data privacy come into this?
In a previous insight, we defined data privacy as the practice of handling data, especially personal data, in a way that gives us the ability to decide what happens to our data, and that prevents access to it by anyone who should not have it. Nowadays, this practice is underpinned by laws and regulations, which provide individuals with further rights regarding their personal data and which impose obligations on those collecting that data. So, if data privacy has been enshrined in law to protect our data, why is there a need to think about data ethics? And why should we, as those working in and thinking about data privacy, also pay close attention to data ethics?
Despite having an impact on many data practices, data privacy, by definition, does not and cannot cover all the potential harms arising from the use of data. It focuses heavily on protecting personal data and giving individuals the power to decide what happens to it, but it cannot govern everything that is done with our data, especially when that data was acquired legally. That is to say, data practices can be legal on paper and still yield unethical and harmful consequences for the people affected. Similarly, data privacy laws and regulations, despite representing crucial steps in protecting our data, are still quite recent developments and can, therefore, contain loopholes and inconsistencies. And because they are recent, and can be quite complex, data privacy laws and regulations are not yet well enough understood, by companies, organisations, and institutions on the one hand, and by individuals on the other, to be applied effectively. Those controlling and processing data often don’t do enough to adhere to the rules, sometimes even exploiting the loopholes. And those whose data is collected often don’t know about their rights or, more generally, about what happens with their data and what kinds of adverse effects these uses can have on them and on society as a whole.
The use of data is omnipresent and touches on most areas of our lives. Data Ethics, and the guidance we can draw from it, therefore plays a big role for all of us. Knowing what may happen to our data once we’ve given it away, and what consequences the uses of our data may bring about, can help us navigate our online behaviour more consciously.