By Melissa McGrath

The Dangers in Being Defined by the Algorithm

Updated: Mar 30

Photograph by Kenny Eliason

Social media has become woven into the fabric of most global societies. What began as a seemingly harmless format true to its name has since evolved into a complex web of business models, too vast to be captured in a two-word phrase. Tangled up in this virtual phenomenon is a very real, tangible impact on humanity's mental health. Algorithms, the biases of artificial intelligence, and the pressures of our online presence have introduced new stressors and triggers, disguised by satisfying aesthetics and clever captions.


Alex Hern, in an article for the UK media outlet The Guardian titled "Instagram is supposed to be friendly. So why is it making people so miserable?", takes a particular interest in what makes a platform a hotbed for insecurity and anxiety. Is it our fellow social media users who instigate these negative emotions, or is it the platform itself? A 2017 UK-based research study asked 14- to 24-year-olds to rank how various social media platforms impact their lives. As Hern cites in his article, Instagram ranked the lowest, with a particularly bad impact on quality of sleep and body image. While Instagram has always encouraged users to put an unrealistic best foot forward, what has, in recent years, changed our feeds from a best-moments reel into something more toxic? As Hern explains, 2016 brought the introduction of Instagram's "algorithmic timeline." This timeline no longer prioritized the most recent posts and instead "began populating feeds with the most noteworthy posts" from followed accounts. Unblemished skin, perfectly tidy homes, the latest cars, and the best travel spots rose to the top of our timelines, along with rising anxieties about money and self-image. Hern aptly describes this as "promoting a curated, unrealistic version of an already curated, unrealistic feed."
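To make that shift concrete, here is a minimal Python sketch of the difference between a chronological feed and an engagement-ranked one. The posts, captions, and engagement numbers are entirely hypothetical, and real ranking systems weigh far more signals than this; the sketch only shows how reordering by engagement pushes the flashiest content to the top.

```python
from datetime import datetime, timedelta

# Hypothetical posts: each has a timestamp and an engagement score
# (think likes + comments). All values here are illustrative only.
now = datetime(2024, 1, 1, 12, 0)
posts = [
    {"caption": "morning coffee", "time": now - timedelta(hours=1),  "engagement": 12},
    {"caption": "beach vacation", "time": now - timedelta(hours=20), "engagement": 950},
    {"caption": "new car",        "time": now - timedelta(hours=8),  "engagement": 480},
]

# Chronological timeline: newest post first.
chronological = sorted(posts, key=lambda p: p["time"], reverse=True)

# "Algorithmic" timeline: most engaging post first, regardless of recency.
algorithmic = sorted(posts, key=lambda p: p["engagement"], reverse=True)

print([p["caption"] for p in chronological])  # ['morning coffee', 'new car', 'beach vacation']
print([p["caption"] for p in algorithmic])    # ['beach vacation', 'new car', 'morning coffee']
```

An ordinary day slips down the feed while the highlight reel stays on top, which is exactly the "curated, unrealistic" effect Hern describes.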


Not unlike humans, technology has the capacity to exploit aspects of our identity. The 2020 documentary film Coded Bias explores the depth and damage of machine-learning algorithms that have infiltrated systems of advertising, hiring, and more. The particular damage Coded Bias refers to is the inequity perpetuated by AI and algorithms; Instagram and its impact on our image of life and body is only one small piece of the puzzle that is technology and mental health. The New York Times review of the film, by Devika Girish, begins by recounting the story of MIT Media Lab researcher Joy Buolamwini, who, while working with facial recognition, "found that the algorithm couldn't detect her face — until she put on a white mask." Buolamwini's further investigation revealed that "artificial-intelligence programs are trained to identify patterns based on data sets that skew light-skinned and male." Even artificial intelligence holds blatant, polarizing biases that contribute to serious ethical and social issues.


Social media platforms and their business models rely on that judgement capability of technology. The monetization of social media has kick-started ad campaigns that cater to the individual, based both on data collected and sold via social media and on generalized judgements about our online presence, age, ethnicity, and more. Algorithms can now look at identifiable aspects of our virtual identity and determine what we are most likely to buy. While this might sound convenient, these predetermined judgements carry and perpetuate prejudices. The stressors and triggers that run rampant in the online space are not simply superficial, nor are they perpetuated solely by fellow users; they are ingrained in our systems of marketing, production, and consumption as well. The algorithm, the posts it prioritizes, and the ads it suggests attempt to reshape and define us, as users, in whatever way solicits maximum profit and engagement.


However, there is hope yet. Annie Brown, in a Forbes article titled "Mental Health & Privacy In The Era Of Data-For-Revenue Algorithmic Models," looks at the increased consumption of social media content during the pandemic and its subsequent impacts. Brown points to a new era of social media: Vigyaa.io, a platform focused on anonymity that abandons the micro-marketing habits popularized by the major social media moguls. Brown goes on to identify "Sentiment Analysis machine learning" as a healthier alternative to current social media algorithms, one that "uses behavioural data (as opposed to personal data) to generate positive mental health benefits, instead of only generating profit." Sentiment analysis allows you to define the algorithm, rather than vice versa.
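For readers curious what sentiment analysis looks like in practice, here is a minimal lexicon-based sketch in Python. It illustrates the general technique only, not Vigyaa.io's or any platform's actual system, and the word lists are my own assumptions; production models use far richer methods than word counting.

```python
# Tiny illustrative sentiment lexicons (assumptions, not a real dataset).
POSITIVE = {"great", "happy", "love", "calm", "hopeful"}
NEGATIVE = {"awful", "anxious", "hate", "miserable", "stressed"}

def sentiment_score(text: str) -> int:
    """Return positive-word count minus negative-word count for a post."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(sentiment_score("I love this calm morning"))      # 2
print(sentiment_score("feeling anxious and stressed"))  # -2
```

The point of the technique, as Brown frames it, is that the signal comes from the content's emotional tone rather than from who you are, so a feed could be tuned toward well-being instead of toward personal-data-driven profit.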


—————

Melissa is a 20-year-old undergraduate student at the University of Toronto, heading into her fourth year of study. She is currently pursuing her Bachelor of Arts with a major in English and History and a minor in Religion. She has a passion for reading and writing, and intends to pursue a career in publishing.




