September 12, 2018

Cosima Rughiniș – Algorithmic ageism

Cosima RUGHINIȘ,
University of Bucharest
cosima.rughinis@gmail.com

 

We are not getting any younger. Age can be expressed as a number, and that number tends to increase with time. We live in times in which numbers and the quantified life, be it clockwork time versus experiential time, or numerical, calendar age versus subjective or interpersonal age, become increasingly consequential in the world of digital data. The main question that I will address is: how can we make sense of ageism in a new age of surveillance capitalism [1]?

Ageism refers to a socially organized loss of value for the lives and living of older persons. Although humans have long striven to live longer, and reaching old age is a widely shared value across societies, in present-day Western capitalist societies older age, especially the so-called old-old stage, or the fourth age, is organized as a process of gradually losing personhood. The experience of old age is organized through overlapping processes, including inter-generational dynamics in the family and in broader social circles, employment and retirement patterns, intimacy and sexual life, representations of older people in the media, the organization of lifestyles and healthcare, and opportunities for community engagement, among others. As more and more of our lives are influenced by the digital sphere, how are we to observe and make sense of digitally shaped forms of ageism?

Algorithmic ageism is one form of algorithmic discrimination, among others – such as algorithmic racism, sexism or classism. Algorithmic discrimination occurs when algorithmic decision making, in areas as diverse as access to and pricing of various goods or services, is influenced by users’ social labels or membership in social categories that are unrelated to meritocratic or equal-access criteria. For example, if our credit rating, our health insurance premium or the individualized prices computed through our hypermarket loyalty card [5] depend on our race, gender, age, sexual preferences or class, we talk about discrimination. If these decisions are at least partially made through the work of algorithms, we talk about algorithmic discrimination.
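
To make the mechanism concrete, here is a minimal sketch, with invented names and numbers (quote_premium, the 1.8 surcharge), of how a dependency on calendar age can be written directly into pricing logic. It illustrates the concept only; it does not describe any real insurer or retailer.

```python
# Hypothetical pricing rule, invented purely for illustration: taking
# calendar age as an input writes ageism directly into the decision logic.

def quote_premium(base_rate: float, age: int, loyalty_years: int) -> float:
    """Return a monthly premium; the age surcharge is the discriminatory step."""
    premium = base_rate
    if age >= 65:
        # Membership in a social category, rather than any meritocratic
        # or equal-access criterion, drives the price up.
        premium *= 1.8
    premium *= 1.0 - min(loyalty_years, 10) * 0.01  # small loyalty discount
    return round(premium, 2)

if __name__ == "__main__":
    # Two customers identical in every respect except calendar age
    print(quote_premium(100.0, age=40, loyalty_years=5))  # 95.0
    print(quote_premium(100.0, age=70, loyalty_years=5))  # 171.0
```

In practice the age dependency is rarely this explicit; it is more often learned from data or smuggled in through correlated features, which is precisely what makes it hard to see.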

The risk of algorithmic discrimination is enhanced by several processes. Firstly, there is low diversity among ICT specialists, the clear majority of whom remain young, white and male. Secondly, the increasing complexity of algorithmic decision making, driven by machine learning and neural networks, decreases accountability and the possibility of inspecting digitally mediated decisions. Finally, there is asymmetric transparency [4]: huge quantities of data become available about the individual users of a service, while strong secrecy surrounds the proprietary algorithms and other workings of the organizations that provide it.
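
Asymmetric transparency also shapes how such systems can be studied: researchers cannot read the code, but they can probe a service from the outside. The sketch below shows one such audit design, paired synthetic profiles that differ only in age, in the spirit of the algorithm-auditing methods discussed by Sandvig et al. [3]. Here query_price is a hypothetical stand-in for the opaque system under test; all names and numbers are invented.

```python
# Schematic external audit: since the model is secret, probe it with
# paired synthetic profiles that differ in a single attribute (age).

import random
from statistics import mean

def query_price(profile: dict) -> float:
    """Placeholder for a call to the black-box service being audited."""
    base = 50.0 + random.gauss(0, 1)  # noise in the observed quotes
    return base * (1.5 if profile["age"] >= 65 else 1.0)

def audit_age_gap(n_pairs: int = 1000) -> float:
    """Mean price difference across profile pairs identical except for age."""
    gaps = []
    for _ in range(n_pairs):
        shared = {"postcode": random.choice(["A", "B", "C"]),
                  "loyalty_years": random.randint(0, 10)}
        old = query_price({**shared, "age": 70})
        young = query_price({**shared, "age": 40})
        gaps.append(old - young)
    return mean(gaps)

if __name__ == "__main__":
    print(f"Mean old-young price gap: {audit_age_gap():.2f}")  # roughly 25
```

Such audits can reveal disparate outcomes, but without access to the algorithm itself they cannot show why the disparity arises, which is one reason accountability remains limited.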

In this brave new world, many questions arise for sociologists, including – but not limited to:

  1. What sociological theories and concepts are most useful for grasping algorithmic agency and discrimination, and their relationships with human agency and discrimination?
  2. How do algorithms contribute to decreasing, maintaining and increasing ageism – in relation with existing institutionalization of age and particularly old age?
  3. What is the specificity of algorithmic ageism in relation to other forms of algorithmic violence and bias? Is it worthwhile to focus on ageism, or should we rather examine inequality in its broader outlines and see which forms emerge as most powerful?

Unlike racism, sexism or classism, algorithmic ageism is likely to reach us all in one form or another, in intersection with other forms of inequality. It therefore offers a good opportunity to examine novel forms of algorithmic regulation and their impact on social inequality, in a frame that is less affected by social conflict.

 

References

[1] Shoshana Zuboff, 2016. The Secrets of Surveillance Capitalism. Frankfurter Allgemeine Zeitung.

[2] Dan McQuillan, 2016. Algorithmic paranoia and the convivial alternative. Big Data & Society. Sage Publications.

[3] Christian Sandvig et al., 2014. Auditing Algorithms: Research Methods for Detecting Discrimination on Internet Platforms. Paper presented at “Data and Discrimination: Converting Critical Concerns into Productive Inquiry”, a preconference of the Annual Meeting of the International Communication Association (ICA).

[4] Evgeny Morozov, 2014. Like clueless guinea pigs. Frankfurter Allgemeine Zeitung.

[5] Joseph Turow, 2017. The Aisles Have Eyes: How Retailers Track Your Shopping, Strip Your Privacy, and Define Your Power. Yale University Press.
