You are entitled to your own opinions, but not to your own facts.
Daniel Patrick Moynihan, former US Senator
Cast your mind back to a less febrile time. It’s 2013. No one has ever heard the word ‘Brexit’. Cameron is pursuing “austerity” to fix “Broken Britain”, Obama is in the White House, Trump is on TV, and Mel and Sue still present The Great British Bake Off.
With your mind firmly lodged in those halcyon days, ask yourself a question. How often did you check a fact-checking website? I certainly didn’t. It’s only in the last few years that I’ve begun to look regularly at the day’s news, and then look again for a verifiable source behind it.
The rise of, and perhaps the need for, fact-checking websites has been meteoric over the latter half of this decade. Sites such as FullFact, FactCheck.org, PolitiFact and Snopes all regularly publish fact-finding articles based on claims made in the media or by politicians. They are so ubiquitous now that I’ve never really questioned why we have them at all. What has changed so profoundly that they are now such an integral part of our daily discourse?
Are we more cynical? Do we demand a higher level of scrutiny in the Internet age than our forebears? Certainly we expect to know everything instantly, and are far more unwilling to accept something is not known, or not yet available to us. Perhaps we are not cynical enough? How often have you shared or retweeted social media news stories or posts without reading more than the headline? Are we clickbait fiends, simply clicking and sharing without scrutiny? Humans beginning and ending at our thumbs, spinning the wheels of nonsense machines, tens of thousands of likes at a time.
Perhaps there is simply more information in the modern era? When I can simply Google the last ten Prime Ministers of Kazakhstan, the prognosis of feline pancreatitis or the US-Chinese trade tariff rate on pork products, why should I not expect every media story to contain the same level of detail? Never before has the human race had such universal access to the entirety of human knowledge. My ten-year-old nephew could not believe there was a time without the Internet. I showed him a dusty encyclopaedia we still had and he cried with laughter. Imagine what the Ancient Greeks would’ve made of Google, or Twitter. Imagine what they’d have made of Siri.
Year on year we have deeper and wider access to more and more information. Perhaps there is too much? Is that possible? The balance of resources has changed. Once, information was the limiting factor: even a century ago, everything ever published in a single field could be read by one person. Now it’s our own mortality, our own limited time, that limits us. Instagram is a useful example. When you browse through photos selected for you, based on your preferences and who you follow, you feel you are spending your time in a finite way. There is only a limited number of images on Instagram, but that number expands by tens of thousands of new posts every minute. Even with an average lifespan of 83 years, around 43.6 million minutes, you could never view them all. To your brain, the information is effectively infinite.
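The arithmetic above can be checked with a quick back-of-the-envelope calculation. Note the posting rate used here is an illustrative assumption, not an official Instagram figure; published estimates vary by year:

```python
# Back-of-the-envelope check: a human lifetime in minutes versus
# the number of posts added to a feed during that lifetime.
YEARS = 83
MINUTES_PER_YEAR = 365.25 * 24 * 60            # ~525,960 minutes per year
lifetime_minutes = YEARS * MINUTES_PER_YEAR    # ~43.65 million minutes

# Illustrative assumption: tens of thousands of new posts per minute.
POSTS_PER_MINUTE = 65_000
posts_in_a_lifetime = POSTS_PER_MINUTE * lifetime_minutes

print(f"Lifetime in minutes:            {lifetime_minutes:,.0f}")
print(f"Posts added during a lifetime:  {posts_in_a_lifetime:,.0f}")
```

Even viewing one image per second, every waking second, for 83 years, you could not keep pace: the backlog grows faster than any one person can consume it.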
The same goes for the Internet as a whole. An entire website comes online roughly every half a second, and the total has exploded over the last two years to a staggering 1.9 billion. The web has become so large that no one actually knows anymore how much data is accessible. For practically any subject the pool of information is too vast for any single human brain to ever fully consume, and even if you could read every book, news article, blog post, website, Internet forum, comment, Twitter thread or Insta-story, by the time you’d finished, a whole lifetime of information would’ve already been added. I’m not sure we know what effect this has on a human brain. It’s not a situation we’ve ever encountered before.
And for all the wealth of information out there, there is a seemingly equal amount of misinformation. “Fake News”: a term applied so liberally, from so many diverse corners of society, to nearly everything. And like the ever-expanding internet of information, there is an infinite pool of misinformation as well, an overwhelming barrage of untruths, half-truths, white lies, bold lies, spin, slander and plain nonsense. It’s like white noise for reality: we can’t really hear or see what is real anymore. It’s all woods, and perhaps we can’t see any trees at all. Perhaps fact-checking websites have become markers in the forest, waypoints to reorient us, to prevent us becoming entirely lost.
With new knowledge comes new uncertainties, and with new uncertainties comes new fears. The environment globally is one of general distrust: of media, of politicians, of each other.
We seem to hold ever more polarised opinions, which are ever more entrenched. Is this something new?
In several well-known experiments, researchers introduced pieces of information to a group hand-picked to hold one of two differing opinions. Gun control, abortion, conservative or liberal: the group was set up to divide naturally into two sides on an issue. The researchers theorised that introducing more information on a topic would bring the two sides together towards a common viewpoint. What they found was the opposite: the more information was introduced, the more divided and intransigent the participants became. Once the participants had declared for a viewpoint, it became harder, not easier, for them to change their minds as new information came to hand.
Is our current deeply divided society, especially in the U.K. and the US, a symptom of information overload?
Perhaps it’s deeper than that.
In 2010 a book called The Shallows, by Nicholas Carr, climbed the New York Times bestseller list. The Shallows explored the impact of the Internet on the human brain and consciousness. Even then, when the uptake of social media was exponentially lower than it is now, the body of research on how the use of smartphones and the internet fundamentally changes the human brain was startling. Early studies of even the simplest element of a webpage, the hyperlink, showed that, far from enhancing and deepening our reading, the subconscious decision-making required to click a link or not massively reduced the processing power available to actually understand and retain the information presented. In head-to-head tests of retention and understanding of the same article presented in two different formats, the webpage fared substantially worse. This suggests that even distractions so small we barely register them have a significant impact on our ability to process information.
Worse still, the brain is by design extremely malleable. Its sole evolutionary purpose is to learn new habits. Every time two brain cells, or neurones, are stimulated simultaneously, they grow new connections so they fire more easily together next time. They physically wire together. This is the basis of association in the brain. It’s how we form habits like riding a bike, learning a language or the way to our aunt’s house. It also dictates our behaviour. The combination of the speed at which we receive information, the distracting nature of its presentation and the stimulation of something new has wired our brains to be easily distracted and to pay little attention to whatever we subsequently look at. When we try to apply these re-wired machines to a different task we find them no longer fit for purpose. Our ability to concentrate and retain information is stunted.
I recently had to sit another post-graduate exam, this time in specialist skills like heart ultrasound. At medical school we sat two exams, sometimes three, on average every six months, for six years. A short study session prepping for those exams was four hours, a long one would be an entire day.
My brain used to be able to do this, I thought, as I sat down with my ultrasound books. Three minutes passed and I picked up my phone, checked my mail, changed the song on Spotify. Put it down again. Eyes back to the page, but then my brain skitters away. Two minutes pass. Phone, email, WhatsApp, down again. Nothing is going in. What is happening?
Eventually I had to install an app on my phone that locks it out for a set time. To encourage concentration, the app grows a small tree for you while you abstain; if you pick up your phone, your tree dies prematurely. In order to pass my exam, I had to grow an entire forest.
Like any drug, the cycle of stimulus-reward had to be weaned off and then broken. This is your brain on the Internet. I’d like to tell you I stayed ‘clean’. I’d really like to.
So where does that leave us? Awash in a tsunami of information, addicted to new content, without the attention span to process it. Perpetually swimming in The Shallows.
This state of mind gives us the false impression that we have a deep understanding of issues we have actually given only a cursory glance. Our brains have evolved to identify patterns, to make connections and to build from them a map of how the world is, so that we can navigate it and try to predict events. A rustle in the leaves, the sudden lack of birdsong, a low growl behind us. What colour is the tiger?
Now our brains are set up for constant distraction, jumping from email, to WhatsApp, to Twitter, to Instagram, scrolling through hundreds of links, photos and stories per day. Our brains cannot help but make connections, and as we don’t delve deeply into the information streaming through them, we make those links on very shallow data points. But because we see multiple data points, in multiple modalities, in a short time frame, our brain constructs the message that this pattern is deeply significant, that it is concretely true. Worse, the brain assumes the information we are sampling is an accurate representation of the world, and therefore that its conclusions are an accurate representation of reality. Of what is true.
Except the majority of the information coming to us, from social media or news outlets or Internet forums, is not an accurate representation of the whole world, only a tiny sliver of it. This is your ‘bubble’.
In an intriguing study of this ‘bubble effect’ during the last US Presidential election, possibly one of the ugliest and most divisive election seasons in modern US history, researchers asked a handful of Democrats and Republicans to swap Facebook pages for several weeks. Both sides were shocked at what they found: not just the attitudes behind what was shared and with whom, but the fact that whole parts of their realities were simply missing from their counterparts’. News stories, concepts, generally accepted beliefs, parts of their culture that they felt were immutably true were simply not part of their immediate neighbours’ world.
Barack Obama said in 2016: “We used to disagree in our opinions. Now we cannot even agree on reality itself.”
A democratic society is built on the principle that each individual is informed, that all parties form opinions based on shared facts and cast their votes accordingly. If we cannot even agree on the facts, we cannot have a democracy. We will only have chaos.
The issue remains that we all feel we are very well informed. How can we not be, with the entire codex of human knowledge at our literal fingertips, 24 hours a day? Do you know the way to the beach? No, but I will do by the time we are in the car. Do you know how to build a tree house? No, but I’ve got a great YouTube video on it. Diagnosed with lung cancer? I’ll be an expert by tomorrow morning.
This new cultural paradigm is evident everywhere. Michael Gove confidently telling the nation we are “tired of experts” couldn’t be a truer expression of it. We no longer see information and knowledge as separate entities: anyone can Google anything and have the same level of knowledge. The problem is precisely the difference between information and knowledge. I can google Fermat’s Last Theorem, I could even write down the mathematics behind it, but I couldn’t begin to tell you what it meant or how I’d apply it. I recently tried to delve into World Trade Organisation rules, too ashamed to admit that with all the information available to me there was something I couldn’t understand. In the end I asked for help, and someone explained it to me as if I were a six-year-old. Which was exactly what I had asked for. Our shallow-swimming brains automatically file any subject they touch on as “understood”. Tick, store, move on to the next thing. It isn’t arrogance; that would imply a fault in our characters. It’s a fault in our biology, in our brains.
Very recently I was faced with a similar situation at work. Day to day I work as a hospital doctor specialising in conditions of the heart: heart attacks, heart failure, funny heart rhythms, sometimes even cardiac arrest. One Monday morning a young woman collapsed while out shopping, one second completely well, then a twinge of chest pain and she dropped to the floor like a marionette with its strings cut. Her friend, screaming and panicked, scrabbled to feel for a pulse. Not finding one, she started CPR. Fortunately the young woman coughed and spluttered and rolled over after two minutes, and by the time the ambulance got there she was lying on the floor, scared and worried, but alive and awake.
When she got to us, her investigations painted an even more concerning picture. The electrical tracing of her heart was very abnormal, and her markers of heart damage and her heart ultrasound all suggested an inherited heart condition. Potentially life-threatening.
Her mother, however, had very different ideas as I explained all of this to them.
“She just had a faint, she hadn’t eaten that day.”
I tried to explain the irregularities in the investigations and her lack of a pulse all suggested this was more than a simple faint.
“Well, it’s very unlikely, isn’t it? Most collapses are faints, right?”
I tried to explain again.
“Heart conditions in the young are very rare: only 1 in 10,000 people have one, right?”
“We are going to go home now, we have tickets to see Harry Potter tomorrow night and we’ve waited a year for this.”
I explained again, how this condition has a high risk of sudden death, how we needed further investigations and possibly a special pacemaker to prevent it happening again. I left them to discuss it between them. The young woman hadn’t said much, but she seemed as sceptical as her mother.
I was quite struck by the cognitive dissonance on display. On the one hand, she’d had a possible cardiac arrest in the street, called an ambulance straight away and come to a hospital bed. On the other, when told the hospital investigations had shown a potential heart condition, Dr Google had trumped the actual doctors in front of them. They had all the information, and in their minds that was sufficient knowledge to apply to themselves. Heart conditions were very rare, and therefore this wasn’t very likely to be the diagnosis. Their logic missed the fact that while heart conditions are very rare in the general population, in those who have an unexpected cardiac arrest, especially the young, they are the most likely diagnosis. There’s a depth and form to applying information that makes it knowledge. Just because you can tell me a Ferrari 458 has 562 brake horsepower doesn’t mean I’d get into one you built in your garage.
I came into work a few days later to find the patient had self-discharged against medical advice. She was still in college. I hope we were wrong. I hope she’s living a normal healthy life, or perhaps changed her mind and went to a doctor somewhere else. In the kindest way, I hope to never see her again. But I suspect that we will.
In recent years, politicians have begun to realise the fundamental landscape has changed. Truth, belying its definitive nature, has splintered in the eye of the beholder: each “bubble” knows with certainty the world is one way, while its neighbour believes with equal ferocity the world is another. Both are presented daily with multiple pieces of information that confirm what they believe. Information to the contrary is ignored or, more likely, never presented to them at all. This is ‘confirmation bias’: your brain samples only a portion of reality, and constructs its map of the world based on that alone.
PizzaGate is a golden example of this effect. During the 2016 US Presidential campaign, several right-wing websites began sharing stories about John Podesta, a Hillary Clinton aide, being involved in an underground child sex ring run out of a basement beneath a pizza restaurant in Washington, D.C. As wild as this sounds now, multiple articles on various “alt-news” websites, Facebook posts and groups were all sharing and pumping out the same information. In the mind of at least one person, these multiple unverified stories coalesced into such a concrete reality that he stormed the restaurant in question, armed with a rifle, demanding the release of the children in the basement. Of course, there was no sex ring, there were no children, and there was no basement.
Such an extreme example is easily dismissed, but it illustrates how influential and pervasive frequently repeated messages in multiple forms can be. We absorb all this information at speed and without question, and the brain constructs a concrete pattern that changes our beliefs and our behaviour. This is the basis of propaganda, but that implies a deliberate process, a puppeteer. While there is good evidence that certain actors such as Russia have sponsored coordinated messaging campaigns through networked bots and fake social media accounts, they are manipulating what is already a vulnerable and broken system. Even without an external force, the natural whorls of the Internet and digital culture are prone to widespread and rapid uptake of ideas, whether they are true or not. What goes ‘viral’ these days may or may not be objectively true, but that doesn’t seem to matter.
A lie can race around the world before the truth has got its boots on. Certainly that used to be true, but now lies can travel with such speed and force the truth becomes irrelevant. Such is the speed we absorb, accept and decide on information- and once we’ve decided it becomes nearly impossible to change our minds, as our ‘bubbles’ silo us in.
Politicians are now faced with two problems:
1. How to propagate a message that will attract votes
2. How to reach people in their bunkers who have already decided against you.
The answer to number 1 is tricky: a message has to be simple enough to hold a voter’s attention, frequent enough to create an impact sufficient to generate a vote, and already aligned with the voter’s pre-defined ‘bubble’, otherwise it will never even reach them. I’m sure there are people who read each party’s manifesto before each election, weigh the pros and cons, and then vote accordingly. I just don’t know anyone like that. In the modern era, voting preferences are manufactured in these ‘bubbles’, based on perceptions of messages and individuals.
We see this ‘messaging’ politics, imported from the big-business election cycles of the US, in the U.K. today. How many times did you hear “Brexit means Brexit”, despite it not meaning anything at all? It’s simple, it’s repeated frequently, and it slots neatly into several different bubbles of voters.
As for number 2, that’s a lot easier. If a silo of voters has already decided against you, in this era it would take enormous energy, time and political capital to sway them back to your side. So you don’t bother. Or better yet, you poison their perception of their own candidate: the so-called “smear campaign”. These messages get into those bubbles because they are relevant, and can be kept simple and frequent enough to create apathy, to make those voters feel they can’t vote for any candidate. A voter not voting for your opponent is nearly as good as a vote for you.
My brother lives in California, and his bubble, geographically and socio-politically, is far removed from mine. During the 2016 US election we would talk about the campaign, and I would be shocked by his perception of the candidates, and he by mine.
“Sure, Trump is a crook and a racist, but Hillary wants to change the law to kill babies,” he said to me one cold weekend in September.
I went and fact-checked it: it was based on a campaign about Clinton’s stance on the upper term limit for abortion. He felt he couldn’t vote for Clinton either. In California that probably had no effect, but imagine how the same message might change the result in a swing state.
Such is the power of misinformation that politicians have adapted to exploit this new environment. Previously, when caught propagating a mistruth, an apology was made and the statement retracted. Now there is a doubling down, a reframing, further misinformation to counter the criticism. Even when a statement is retracted, the retraction may never make it to the bubbles where the original message was delivered. By design.
I cannot remember a time when we were more divided, based so heavily on perception and so little on fact.
Who is responsible? Is anyone? Is this the fault of the tech companies that run and profit from the platform?
Facebook’s algorithms play a big part in what you see on social media and what you don’t. Your preferences and friendships shape your feed, as does what Facebook perceives as spam and what it does not, along with more than a few targeted adverts and promoted posts. Perhaps ahead of everyone, Facebook recognised the inherent value of these voter clusters, for commercial and for political gain. The Cambridge Analytica scandal, involving the firm associated with Steve Bannon and owned by Robert Mercer, saw millions of Facebook users’ data harvested and used to target specific messages at specific individuals. Facebook was essentially selling a map: a way into the silos, and a record of who was in them. A way to deliver messages that bypassed the filters of newspapers and journalists, which at least required a veneer of truth. In our digital culture, this was an absolute jackpot.
Vote Leave also utilised this technology for the Brexit campaign, dropping unlabelled adverts and memes onto the timelines of specific voters at specific times, making dubious claims: that Turkey was imminently joining the EU, or that immigrants were doubling the demand on NHS ambulances. Verifiably untrue. And yet multiple shallow data points construct a concrete message for the user.
Facebook is now taking measures to restrict this type of behaviour, but the bubbles will remain.
Similarly, Google has large control over how and what information is returned when you search. Again, its algorithms are increasingly based on your previous searches, location, preferences and so on. Could Google control the results its search engine returns to deliver targeted information?
Is this the fault of the media? The so-called “mainstream” media, a phrase now so recurrent that we don’t stop to question where it arose. By labelling traditional news outlets, with their associated, albeit limited, accountability laws and regulation, as “mainstream”, it implies the content available on social media and “alt-news” sites is of an equal standing, the information of an equal but different value. Again, we aren’t scrutinising the information presented to us.

Worse, in an effort to compete with social media, traditional newspapers and news stations have become increasingly sensationalist and reactionary. Being first to break a story has become far more important than covering it most accurately or thoroughly. Headlines have to be as provocative as possible, in order to capture attention in the online space and generate a click. The problem is that social media and alt-news will always outcompete on speed and sensationalism; they aren’t constrained by the same regulations, the same need to verify. Their business model relies on clicks, just as newspapers rely on copies sold and news channels on viewership. Perhaps the ever more extreme positions of traditional news outlets are an effort to compete with the online space. In my view, that’s misguided. They will never be able to compete on that level, and the falling readership and viewership of many giants of news broadcasting in the Internet era reflects that. The new niche is for objective truth, something that can cut through the white noise and deal in verifiable fact. The traditional media is supposed to be the bulwark against which the onslaught of misinformation breaks. Some have adapted, some have conspired, and too many have done nothing. If democracy depends on an electorate that is fully informed, the free press should be responsible for that information.
If there is no journalistic scrutiny of the reams of misinformation in the public domain then we lose one of the key pillars holding up our society.
To be fair, vying for clickbait means many traditional news outlets need to be faster and cheaper, which means less scrutiny of what is said and more reporting of those who say it. It’s not uncommon for a politician’s words, verbatim, to form a headline: “Global warming no threat,” says minister. This “he said, she said” reporting is lazy. It also generates a false sense of “balance”, legitimising a side of the debate which is not a normally accepted position. Climate change is a great example: 98% of scientists and academic research agree climate change is happening, and yet most televised interviews will include a climate change denier for ‘balance’. What scientific topics like this, based on fact, need is good-quality information explored in depth by specialist journalists, not very short televised arguments that create the impression of an unresolved issue. And yet that’s what the news cycle demands. The sheer pace and lack of verification of the modern information stream needs a filter.
Enter fact-checking websites. FullFact in the U.K. and Snopes in the US are both good examples. Scrutinising politicians and high-profile claims, these websites seem to be stepping into a space traditional journalism has deserted, simultaneously filling the need for someone else to do our critical thinking for us.
Ultimately, there is no external malignant force we can defeat to fix this issue. It isn’t social media, politicians, journalists or foreign powers driving this new digital Dark Age. Yes, each element has played a part, each taking advantage of the situation it finds, but none would be able to without the core element: us. It’s our clicks and our votes, what we accept as fact and truth, what we believe, that determine the society we live in.
So what can we do to fix this?
In medicine, we’ve been here before. A hundred and fifty years ago the medical field was awash with new discoveries, new understandings of human anatomy and physiology. Equally, quackery and profiteering were rife. There was information and misinformation in abundance, without a means to filter between them, to find objective truth.
What developed from this primordial condition was a system for deciding what was and wasn’t medically true, what did and didn’t work. The earliest cited example comes from James Lind, a British naval surgeon, who in 1747 conducted a controlled experiment aboard his ship while at sea. At the time, scurvy was epidemic aboard sailing ships, its cause unknown, and by some estimates it caused a million deaths during the era. Lind took 12 sailors dying of scurvy and divided them into pairs. Crucially, he matched each pair by similar symptoms and controlled all the other variables he could: their diet, their sleeping conditions. He then gave each pair a different supplement, including, for one pair, daily citrus fruits. Only the pair that received the oranges and lemons had improved after six days. Unfortunately Lind didn’t make the connection at the time, and it would be another fifty years before citrus fruit became a mandatory part of a sailor’s diet.
As time passed, the methodology for testing treatments and medicines developed into the field we call “evidence-based medicine” today. Essentially, all new medicines and treatments are subject to much larger and much more rigorous trials than Lind performed, but the principles are the same: controlling all the possible variables, can we prove conclusively that a medicine works, or that a disease is linked to an exposure? This methodology has saved millions of lives: steroids for premature babies, multiple medicines for heart attacks and strokes, tuberculosis treatments; the list goes on and on. Gone are the days of charlatans in caravans with Cure-All ointments in brown glass bottles. Nowadays if you say “This is a new wonder drug!”, the medical community will arch an eyebrow and say “Prove it.”
It’s our collective distrust that keeps the process honest. That’s not to say it’s perfect. Far from it. In his book Bad Pharma, UK doctor Ben Goldacre details the myriad ways drug companies sometimes manipulate data and trials to sell their wares. But the process is sound: it’s reproducible, verifiable and transparent. Evidence is king.
Evidence-based medicine is a key part of medical training and continues to be the essential requirement to staying up to date throughout your career. As new knowledge is acquired and released, as new information is published, we have to have the tools to dissect and critique them. If we didn’t, we’d simply pass on to our patients whatever study or paper came to us, and they would come to harm as a result.
Once upon a time we relied upon others to write our newspapers and textbooks, others to have verified the information before re-producing it. The lay person didn’t need these skills. But perhaps that has changed.
After my exam (which I thankfully passed), I began to wonder at what harm this little box of light in my hand was doing to my brain. Was it destroying my concentration? Was it destroying society? Is it destroying the objective truth?
My phone now goes into a little plastic box by the front door, and stays there all night. Too much of my day-to-day life requires a smartphone to give it up entirely, but I’m certainly much happier without it. Our time on this planet is achingly finite, and yet we are wasting our lives and minds on endless, infinite distractions. These aren’t harmless endeavours; look up, and look around you. We disagree on even the utter basics of our reality, even the shape of our planet. The dawn of the Internet has passed, and now we are in its formative years, deciding who we are, and whether we will evolve or regress. It certainly feels like society as a whole is regressing: more divided, more certain, less willing to compromise or empathise. These are the perfect conditions to brew conflict.
None of this is to deny the immeasurable benefits a globalised world connected by the Internet has brought all of us. The Arab Spring, although short-lived, was organised almost entirely through social media. Education can be delivered entirely online, around the world. Satellite navigation, instant global communication, online shopping: our lives are more convenient, more connected and safer. Are we better informed? Certainly atrocities and natural disasters in distant regions such as the Middle East, Myanmar and Kashmir become public knowledge where once they might not have. This creates public pressure on politicians, who in turn may or may not take action.
We are certainly more informed. That is inarguable. Where once there was a paucity of information, now there is a deluge of it. But the information we absorb, that changes our neurones and our beliefs, that changes our behaviours, is unfiltered, unverified, and often untrue. The way we consume it, greedily, without pause or reflection, re-wires our minds to seek out and value new information, a “hit” like any other drug. The inner world we build and inhabit on this basis becomes warped, to the point where one of us lives on a globe, and the other lives on a disc. This is your brain on the Internet.
We are not informed, we are intoxicated. Perhaps it’s time we sober up.