What links QAnon to A-levels, and why should we in housing care?

By Stewart Davison

This past week has seen an unfolding crisis of confidence around the use of technology, specifically algorithms. In the UK education system, all I hear on the news is “how the ‘algorithm’ got it wrong” or “it can’t account for this year’s variations”. I have also noticed a steady drip of news stories from around the world where algorithms and so-called ‘Artificial Intelligence’ (AI) are having detrimental impacts upon individuals and, more worryingly, upon society at large.

Life seemed simpler back in the ’80s and ’90s, when we all thought that all we would have to contend with if AI went rogue was 7ft-tall, gleaming robotic endoskeletons marching across a post-apocalyptic landscape à la Terminator. However, the recent news stories hint at a different kind of takeover of human society by the machines; one more insidious than Skynet deciding in a millisecond that, to end all war, humanity must die.

Anyone who follows any of my musings will know I am an advocate of technology. I focus on the UK social housing sector, as this is where I have worked for the past 20+ years. I have seen how, with the correct application, technology has benefited housing organisations and, more importantly, the residents these organisations serve. This appreciation for the benefits technology brings ensures that I am constantly scouring the web, asking questions and researching where emerging technologies can be applied to the housing sector.

This enquiring approach means I usually come across some different and interesting tech, whether it be augmented reality (AR) or the Internet of Things (IoT). These new applications of technology have some potentially transformational benefits for housing organisations, and again could see massive changes in the way residents live and interact with their housing providers. As tech like AR and IoT becomes more mainstream, I see more stories, blogs and, perhaps more importantly, businesses talking about machine learning (ML) and AI, and how they could be used in social housing.

I have been intrigued by the possibilities that ML and AI could deliver for the housing sector. Could they mean increased adoption of virtual assistants like Alexa, or chatbots for residents and the provider? Could we see more providers using the technology to analyse their data? I think the answer to all of these is ‘yes’, as some providers are already doing it.

When I first started talking about the applications of ML and AI in housing, I purposefully put the rictus-grinning face of a Terminator robot on the screen. Why? Because this is, apart perhaps from HAL 9000 of 2001 fame, the most recognised pop-culture reference point when it comes to AI. I would then move on to talking about what ML and AI are not: a replacement for humans.

I would speak at length about how these emerging technologies should be harnessed to support the human decision-making process, and that ML and AI shouldn’t become a shortcut to delivering ‘efficiencies’ where they impact real people’s lives; more importantly, they should not replace the human making the final decision.

Algorithms, Artificial Intelligence and Machine Learning

The terms ML and AI can be a bit murky, and I think they are sometimes used when what we are really talking about is simply an algorithm.

So, what is an algorithm? Google defines an algorithm as “a process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer”.
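
To make that definition concrete, here is a minimal, purely hypothetical sketch in Python of what a hand-written “set of rules” can look like: a made-up rule for flagging a rent account for follow-up, with every threshold chosen by a human and invented purely for illustration.

# A hypothetical, hand-written algorithm: a fixed set of rules applied to input data.
# Every threshold below is chosen by a person and invented purely for illustration.

def flag_rent_account(weeks_in_arrears: int, balance_owed: float) -> str:
    """Classify a rent account for follow-up using explicit, human-authored rules."""
    if weeks_in_arrears == 0:
        return "no action"
    if weeks_in_arrears <= 4 and balance_owed < 500:
        return "send reminder letter"
    return "refer to income officer"

print(flag_rent_account(weeks_in_arrears=3, balance_owed=250.0))   # send reminder letter
print(flag_rent_account(weeks_in_arrears=8, balance_owed=1200.0))  # refer to income officer

The point is simply that every rule and number here is written down and decided by a person; the computer only follows them.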

What about ML? This is where the murkiness creeps in. Here is one explanation from the Royal Society:

‘…computers are given real-world examples of data to learn from. They can then apply what they’ve learnt to new situations…’ A slightly longer explanation is: ‘Machine learning is an application of artificial intelligence (AI) that provides systems the ability to automatically learn and improve from experience without being explicitly programmed. Machine learning focuses on the development of computer programs that can access data and use it to learn for themselves.’

Okay, so ML is an application of AI; let’s not mix the two up. The distinction matters, because ML relies upon machine learning algorithms, which use computational methods to “learn” information directly from data without relying on a predetermined equation as a model.
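
To show the contrast, here is a minimal sketch of the machine learning side, assuming the scikit-learn library and using entirely made-up data: the rules are not written by a person but inferred from the examples the model is shown.

# A minimal contrast with hand-written rules: here the decision rules are learnt
# from example data. The data and labels are invented purely for illustration.
from sklearn.tree import DecisionTreeClassifier

# Each row: [weeks_in_arrears, balance_owed]; label 1 = needs follow-up, 0 = no action.
X = [[0, 0], [1, 100], [3, 250], [6, 800], [10, 1500], [12, 2000]]
y = [0, 0, 0, 1, 1, 1]

model = DecisionTreeClassifier().fit(X, y)   # the "rules" are inferred from the examples
print(model.predict([[8, 1200]]))            # e.g. [1] -> flagged for follow-up

No one typed in a threshold here; the model derived its own from the examples it was given, which is exactly why the quality and coverage of that data matter so much.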

I don’t want to dive much deeper than that at the moment, as this article isn’t about the technology of ML or algorithms themselves. It focuses more on what we as a society perceive these technologies to be, and the problems that are arising from this perception.

QAnon

The title of this article includes QAnon and A-levels. Why, and what is QAnon? Okay, strap in, folks; this could get a little bumpy, but bear with me.

Well, I mentioned earlier in this article that I had increasingly been seeing examples where algorithms had started to deliver unintended and, in many cases, detrimental decisions, and that humans had forgotten that an algorithm isn’t a replacement for human decision-making.

Wikipedia defines QAnon as a conspiracy theory detailing a supposed secret plot by an alleged deep state against President Donald Trump and his supporters.

The theory began with a post on the anonymous imageboard 4chan by someone who named themselves “Q” in October 2017. Q was presumably an American individual, and claimed to have access to classified information involving the Trump administration and its opponents in the United States.

QAnon was responsible for an infamous story dubbed ‘Pizzagate’, which alleged that many establishment figures in US society, most prominently Hillary Clinton, were operating a paedophile ring from the basement of a pizza parlour.

So, again, what does QAnon have to do with algorithms?

The proliferation of QAnon in the US, and now internationally, can be laid at the door of social media (SM) sites and the algorithms they employ to recommend content. Whether it be YouTube and its recommended videos to watch, or Facebook emailing you with other pages or groups it recommends you look at, all are controlled by an algorithm: an algorithm that wasn’t designed to spread baseless conspiracy theories like QAnon, but one that was originally built and aimed at attracting more page reads, watches, clicks and likes.
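
To illustrate the point in the simplest possible terms, here is a toy sketch of engagement-driven ranking. It is emphatically not how any real platform works, and the posts, numbers and weights are invented; it only shows how an objective built purely around clicks and likes has no notion of whether content is baseless.

# A toy sketch of engagement-driven ranking: score content purely on predicted
# clicks and likes. Not how any real platform works; all data here is invented.

posts = [
    {"title": "Local housing repairs update",     "predicted_clicks": 40,  "predicted_likes": 10},
    {"title": "SHOCKING secret plot exposed!!!",  "predicted_clicks": 900, "predicted_likes": 300},
    {"title": "Community garden opens next week", "predicted_clicks": 120, "predicted_likes": 60},
]

def engagement_score(post: dict) -> float:
    # The objective rewards engagement alone; nothing here penalises baseless content.
    return post["predicted_clicks"] + 2 * post["predicted_likes"]

for post in sorted(posts, key=engagement_score, reverse=True):
    print(engagement_score(post), post["title"])

The sensational post rises to the top not because anyone decided it should, but because the only thing the rule measures is attention.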

The following article in the New York Times gives a background to QAnon, how Facebook especially has driven interest towards it, and how mainstream politicians are embracing the conspiracy to tap into potential voters – https://www.nytimes.com/2020/08/15/opinion/qanon-marjorie-greene-congress.html

I have seen stories in the press about facial recognition software used in law enforcement, where the algorithms behind it have led to innocent people being arrested and spending time in jail.

A great overview of one such case, highlighting the issues with the technology and the algorithm behind it, coupled with first-hand testimony, can be found in the New York Times podcast The Daily: https://www.nytimes.com/2020/08/03/podcasts/the-daily/algorithmic-justice-racism.html?searchResultPosition=3

Most recently we have seen the fallout from the application of an algorithm in the UK’s state education system, which has led to widespread anger, frustration and bewilderment surrounding A-level results. Just today the Government has reversed the outcomes produced by the algorithm and returned to the grades suggested by students’ teachers.

On the face of it these are all different sectors, utilising different technologies, which all have algorithms driving these negative or unintended outcomes. But what commonality, if any, can we see in their application?

What do QAnon, Facial Recognition and the latest UK A-Level scandal have in common?

Increasingly it seems it’s the failure of humans to take responsibility for a decision, instead abdicating that responsibility to an algorithm. In the case of QAnon, it’s the creation of a set of rules to drive increased traffic to social media sites and to keep users engaged with content. With facial recognition software, it’s insufficient datasets on ethnicity being fed into the machine learning that drives the algorithm. This mistake is then compounded when the humans who are meant to examine the outputs of the algorithm fail to carry out their responsibilities, instead assuming the algorithm is always correct. In the current crisis around A-levels, it seems the algorithm employed wasn’t able to account for individuals outperforming historical trends at their colleges and sixth forms. In all these cases, it seems humans are allowing the machines to take over, ceding their control (and with it, responsibility for any errors) to the algorithm.
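
To illustrate just that last failure mode, here is a toy sketch; it is emphatically not the actual Ofqual model, and all the numbers are invented, but it shows how moderating individual results against a school’s historical grade distribution can pull down a pupil who genuinely outperformed previous cohorts.

# A toy illustration (not the real grading model) of moderating teacher-assessed
# grades against a school's historical distribution. All numbers are invented.

teacher_assessed = ["A*", "A*", "A", "B", "B", "B", "C", "C", "C", "D"]
historical_top_grade_rate = 0.10  # assume only 10% of past pupils got the top grade

# Cap the number of top grades at the historical rate, regardless of which
# individual pupils genuinely outperformed their school's past cohorts.
allowed_top = int(len(teacher_assessed) * historical_top_grade_rate)  # -> 1

moderated, top_awarded = [], 0
for grade in teacher_assessed:
    if grade == "A*" and top_awarded >= allowed_top:
        moderated.append("A")  # the second high-achiever is pulled down to fit history
    else:
        if grade == "A*":
            top_awarded += 1
        moderated.append(grade)

print(moderated)  # one of the two A* pupils is downgraded purely by the rule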

Way back in 2015, Ian Bogost, in his article ‘The Cathedral of Computation’, concluded with the following:

Let’s keep the computer around without fetishizing it, without bowing down to it or shrugging away its inevitable power over us, without melting everything down into it as a new name for fate. I don’t want an algorithmic culture, especially if that phrase just euphemizes a corporate, computational theocracy.

If we take the examples cited in this article, we should conclude that, if algorithms are to be increasingly adopted within technology designed for use in social housing, we have to assume that bias, unforeseen outcomes and human failure could have significantly detrimental consequences.

Therefore it is the responsibility of suppliers, providers and consultants to ensure that these inevitable issues are mitigated. We cannot allow human society to be guided, moulded and managed in the image the machine shows us; we, the humans, have to take ultimate responsibility for our decisions and ensure we understand the limitations of the machine.