“The beginning of knowledge” by dvidal.lorente is licensed under CC BY-NC 2.0
The Dunning-Kruger effect refers to a type of cognitive bias in which people assess their own knowledge of a topic or subject area as being greater than it actually is. Psychologists note that it tends to occur frequently in those people with a small amount of knowledge on a topic. In other words, it takes a certain amount of knowledge before we can actually know how little we know.
“War is Peace, Freedom is Slavery Ignorance is Strength” by Nney is licensed under CC BY-NC-SA 2.0
In developers’ conferences and earnings calls, the biggest of the big tech companies are trying to develop unique value propositions that paint them as friendly, responsive, and attuned to the needs of their customers. The mainstream technology media (often overworked, understaffed, and reliant on the good graces of big tech for continued access to stories) then generally reports these messages at face value. News in the last week focused on Facebook’s pivot toward community groups, Google’s exciting universal translator, and Amazon’s claim that small and medium-sized business partners made an average of $90K last year through its platform.
The beauty of participatory and social media has always been its ability to connect people. That is also the great evil of these platforms.
Social media allows what Barry Wellman calls networked individualism. In contrast to geographic or familial communities where we are brought together through accidents of fate like where or with whom we were born, networked individuals are not forced to conform to community norms to fit in. Instead, they can use network technologies to maintain their individual quirks and find others who share their unique interests and ideals in online communities. This is a beautiful and terrifying vision.
“Frankenstein” by Britta Frahm is licensed under CC BY 2.0
In early 2018, Facebook users were stunned to learn that Cambridge Analytica had used a loophole in Facebook’s API to harvest data from millions of users who had not given free and informed consent for the use of their data. Prior to this reveal, people around the world were already growing concerned about the spread of fake news and misinformation on social media and how this information may influence elections. This event sent apprehensions into overdrive and even sparked a #DeleteFacebook online movement, of sorts.
“Elon Musk backs #DeleteFacebook, and Tesla’s and SpaceX’s Facebook pages vanish” by marcoverch is licensed under CC BY 2.0
Popular opinion holds that fake news and distrust of the mainstream media were mostly problems during the 2016 US election and the ill-fated Brexit vote in the UK. However, before either of these events, we actually saw anti-news sentiment in small pockets of Canadian social media chatter. During our last election in 2015, people were beginning to use the hashtag #CdnMedia to criticize mainstream media sources and accuse journalists of working for the Liberal government. As we enter another election year, we may want to learn from what happened before, as I suspect this type of chatter will only become a bigger player in 2019.
“Vizrt Kurdsat 1 News @ 6 HD Graphics.” by arshan khan is licensed under CC BY-NC-ND 4.0
“Science Careers in Search of Women 2009” by Argonne National Laboratory is licensed under CC BY-NC-SA 2.0. To view a copy of this license, visit: https://creativecommons.org/licenses/by-nc-sa/2.0
“203d Speed Western Stories May-1945 Includes Rawhidin’ Tenderfoot by E. Hoffmann Price” by CthulhuWho1 (Will Hart) is licensed under CC BY 2.0. To view a copy of this license, visit: https://creativecommons.org/licenses/by/2.0
“Sleepy hacker” by thomasbonte is licensed under CC BY 2.0
These companies make money by ensuring we spend as much time on their platforms as possible, so they use various tricks, like creating an illusion of choice, hijacking our natural tendencies as social animals, and producing the compelling draw of variable rewards, to capture and hold our attention. In his article, Harris explains why each of these tactics is problematic for people, communities, and society, and he suggests different ways we could design and approach technology in our lives. I’d like to build on his ideas, specifically with respect to weaponized misinformation and propaganda. Harris doesn’t really get into this in his article, but I’d like to suggest why I think the hacking of the human mind has left us far more vulnerable to this type of message manipulation.
Propaganda is not new, nor are attempts by foreign powers to sow the seeds of division among the population of a country against which they are conducting information operations. For example, in the 1950s and 1960s, Russia helped to support the burgeoning human rights movement as a way to sow deep division and distrust of power. It’s a complicated relationship, and one that likely had both intended and unintended outcomes.
When social media platforms hack our brains for attention, they also supercharge the propaganda, misinformation, and black-ops tactics that were already being deployed at a slower, grassroots scale. Just as we are wired to seek variable rewards from social media notifications, we are wired to respond to emotionally charged (particularly negative) posts. The human mind, evolutionarily speaking, is optimized to ignore the mundane but attend to threats to ourselves or our tribes. Thus, when we see a viral video showing a confrontation between two groups, one of whom we identify with, we are likely to pay attention to the video and then share it with our tribe without thinking critically about what is not shown in the video.
This type of uncritical engagement with media is not particularly new either. As anyone who has taken a media studies class can tell you, we tend to trust what we see with our own eyes, which is why video is so successful a medium for building and reinforcing cultural norms. But as social media platforms use popularity and autoplay to hold our attention, they also facilitate the spread of video, increasing the global scale at which they can effectively influence people’s views.
So as Harris points out, we are all being hacked for our attention. And as the companies hack our brains, they pave the way for propagandists to do so as well. This adds additional weight to Harris’ call for a social media bill of rights, and I would add, suggests that we need to carefully think through the question of regulation for platforms and whether we need to develop an international and enforceable standard of practice.
Well, most of us have, anyway. The infamous addendum to your Twitter bio. Come on, you know it; it goes something like this: “RTs are not endorsements” or “RTs do not equal endorsements” or something along those lines.
Heck, I have one myself; you can check it out on Twitter if you look up @SocMedDr. It serves as a little disclaimer. A little “I may not have done my homework, but I liked a tweet so I retweeted it, don’t hassle me later” disclaimer.
“Twittering your Business 06: retweet” by Hugger Industries is licensed under CC BY-NC-SA 2.0
And today, I’m going to tell you why I think we’re all wrong to do this. Especially now in an age of online information operations and fake news.
So, in the news last week, it turns out Facebook behaved like many other large and not particularly ethical companies. Sheryl Sandberg is implicated in the hiring of a right-wing PR firm known for its “black ops” style of engagement. This firm created messages suggesting that anti-Facebook activists had ties to George Soros (a known Republican dog-whistle tactic). It has also been suggested that Sandberg wanted to suppress information about Russian election meddling (even information that originated with Facebook’s own security people). All this and more is detailed in a recent New York Times article that commentators are saying shocked, and I mean SHOCKED, the world.