The Dunning-Kruger effect refers to a cognitive bias in which people assess their own knowledge of a topic or subject area as greater than it actually is. Psychologists note that it occurs most often in people with a small amount of knowledge on a topic. In other words, it takes a certain amount of knowledge before we can actually know how little we know.
The last week has been filled with announcements from big tech firms:
Facebook tells us “the future is private”.
Google tells us they’re “here to help”.
Amazon tells us it’s a friend to small businesses.
In developers’ conferences and earnings calls, the biggest of the big tech companies are trying to develop unique value propositions that paint them as friendly, responsive, and attuned to the needs of their customers. The mainstream technology media (often overworked, understaffed, and reliant on the good graces of big tech for continued access to stories) then generally reports these messages at face value. News in the last week focused on Facebook’s pivot toward community groups, Google’s exciting universal translator, and Amazon’s claim that its small and medium-sized business partners made an average of $90K last year through its platform.
The beauty of participatory and social media has always been its ability to connect people. That is also the great evil of these platforms.
Social media allows what Barry Wellman calls networked individualism. In contrast to geographic or familial communities where we are brought together through accidents of fate like where or with whom we were born, networked individuals are not forced to conform to community norms to fit in. Instead, they can use network technologies to maintain their individual quirks and find others who share their unique interests and ideals in online communities. This is a beautiful and terrifying vision.
A year ago today, 10 people were killed and 16 injured when a young man rented a van and drove it onto the sidewalk. The perpetrator framed this attack as part of an “incel” or “involuntarily celibate” rebellion. Incels are an online community of men who openly express misogyny and claim that they should be “given” women to have sex with. The attack targeted women, and incel communities on the internet celebrated it afterward.
Is social media becoming less social?
In early 2018, Facebook users were stunned to learn that Cambridge Analytica had used a loophole in Facebook’s API to harvest data from millions of users who had not given free and informed consent for the use of their data. Prior to this reveal, people around the world were already growing concerned about the spread of fake news and misinformation on social media and how this information may influence elections. This event sent apprehensions into overdrive and even sparked a #DeleteFacebook online movement, of sorts.
Popular opinion holds that fake news and distrust of the mainstream media were mostly problems during the 2016 US election and the ill-fated Brexit vote in the UK. However, before either of these things happened, we actually saw anti-news sentiment in small pockets of Canadian social media chatter. During our last election in 2015, people were beginning to use the hashtag #CdnMedia to criticize mainstream media sources and accuse journalists of working for the Liberal government. As we enter another election year, we may want to learn from what happened before, as I suspect this type of chatter will only become a bigger player in 2019.
Today is the International Day of Women and Girls in Science!
Truly, women and girls have made tremendous advances in the sciences. However, the UN reports that women still make up less than 30 percent of researchers worldwide. This means we must do more to ensure that this type of work is welcoming to women and doesn’t push them out. Many initiatives focus on growing the pipeline for women and girls in science by providing new opportunities to involve girls in science and STEM, and this is certainly a laudable goal. But there are fewer initiatives that address the stresses women face when they enter traditionally male-dominated fields. This is what I’d like to address here.
Why is the entertainment industry one of the most robust industries even during a recession?
Why did Donald Trump experience such a strong rise in his path to the presidency?
Why do we identify with people who share the same national identity as us, even in a large country where we may not share geography, living situation, or other demographic similarities?
In a recent article, Tristan Harris, a design ethicist and former magician, lays out the various ways technology companies have taken advantage of our psychological limitations in order to create a compulsive media environment.
These companies make money by ensuring we spend as much time on their platforms as possible, so they use various tricks like creating an illusion of choice, hijacking our natural tendencies as social animals, and producing the compelling draw of variable rewards to capture and hold our attention. In his article, Harris explains why each of these tactics is problematic for people, community, and society, and he also suggests different ways we could design and approach technology in our lives. I’d like to build on his ideas specifically with respect to weaponized misinformation and propaganda. Harris doesn’t really get into this in his article, but I’d like to suggest why I think the hacking of the human mind has left us far more vulnerable to this type of message manipulation.
Propaganda is not new, nor is the attempt of foreign powers to sow the seeds of division among the population of a country against which they are engaging in information ops. For example, in the 1950s and 1960s, the Soviet Union helped support the burgeoning human rights movement as a way to sow deep division and distrust of power. It’s a complicated relationship, and one that likely had both intended and unintended outcomes.
When social media platforms hack our brains for attention, they also supercharge the propaganda, misinformation, and black-ops tactics that were already being deployed at a slower, grassroots scale. Just as we are wired to seek variable rewards from social media notifications, we are wired to respond to emotionally charged (particularly negative) posts. The human mind, evolutionarily speaking, is optimized to ignore the mundane but attend to threats to ourselves or our tribes. Thus, when we see a viral video showing a confrontation between two groups, one of whom we identify with, we are likely to pay attention to the video and then share it with our tribe without thinking critically about what is not shown in the video.
This type of uncritical engagement with media is not particularly new either. As anyone who has taken a media studies class can tell you, we tend to trust what we see with our own eyes, which is why video is so successful a medium for building and reinforcing cultural norms. But as social media platforms use popularity and autoplay to hold our attention, they also facilitate the spread of video, increasing the global scale at which they can effectively influence people’s views.
So as Harris points out, we are all being hacked for our attention. And as the companies hack our brains, they pave the way for propagandists to do so as well. This adds additional weight to Harris’ call for a social media bill of rights, and I would add, suggests that we need to carefully think through the question of regulation for platforms and whether we need to develop an international and enforceable standard of practice.
We’ve all done it.
Well, most of us have, anyway. The infamous addendum to your Twitter bio. Come on, you know it – it goes something like this: “RTs are not endorsements” or “RTs do not equal endorsements” or something along those lines.
Heck, I have one myself, you can check it out on Twitter if you look up @SocMedDr. It serves as a little disclaimer. A little “I may not have done my homework, but I liked a tweet so I retweeted it, don’t hassle me later” disclaimer.
And today, I’m going to tell you why I think we’re all wrong to do this, especially now, in an age of online information operations and fake news.
So, in the news last week, it turns out Facebook behaved like many other large and not particularly ethical companies. Sheryl Sandberg is implicated in the hiring of a right-wing PR firm known for its “black ops” style of engagement. This firm created messages suggesting that anti-Facebook activists had ties to George Soros (a known Republican dog-whistle tactic). It has also been suggested that Sandberg wanted to suppress information about Russian election meddling (even information that originated with Facebook’s own security people). All this and more is detailed in a recent New York Times article that commentators are saying shocked, and I mean SHOCKED!, the world.
Or maybe, on second thought, it’s not so shocking after all. In fact, I would ask why, after the countless apologies Zuckerberg has made over the years (see here, here and here, for just a few examples), we are shocked by this. Furthermore, I would ask us to similarly consider why we’re so shocked when Amazon mistreats employees, or when Google is implicated in government censorship in other countries.