But is open by default really the best approach? Particularly in the area of government? Can we not conceive of information that really should not be freed? Whether for national security purposes, or personal privacy, or even efficiency’s sake?
The beauty of participatory and social media has always been its ability to connect people. That is also the great evil of these platforms.
Social media allows what Barry Wellman calls networked individualism. In contrast to geographic or familial communities where we are brought together through accidents of fate like where or with whom we were born, networked individuals are not forced to conform to community norms to fit in. Instead, they can use network technologies to maintain their individual quirks and find others who share their unique interests and ideals in online communities. This is a beautiful and terrifying vision.
In early 2018, Facebook users were stunned to learn that Cambridge Analytica had used a loophole in Facebook’s API to harvest data from millions of users who had not given free and informed consent for the use of their data. Prior to this reveal, people around the world were already growing concerned about the spread of fake news and misinformation on social media and how this information may influence elections. This event sent apprehensions into overdrive and even sparked a #DeleteFacebook online movement, of sorts.
Popular opinion holds that fake news and distrust of the mainstream media were mostly a problem during the 2016 US election and the ill-fated Brexit vote in the UK. However, before either of those events, we were already seeing anti-news sentiment in small pockets of Canadian social media chatter. During our last election in 2015, people began using the hashtag #CdnMedia to criticize mainstream media sources and accuse journalists of working for the Liberal government. As we enter another election year, we may want to learn from what happened before, as I suspect this type of chatter will only become a bigger player in 2019.
For the past 18 months, I have been part of a research team studying the effects of online harassment on researchers and scholars who need to be on social media for the purposes of their work.
This project relates to my general research program of understanding how information that is in the public interest can be spread online, and what the barriers are to the spread of information in this context.
Working on the question of online harassment has given me the opportunity to work with a fantastic team of super smart and caring people. We’ve interviewed scholars and researchers, launched a large-scale survey, and produced papers, conference presentations, op-eds, and YouTube explainer videos. Now we’re very excited to launch a website intended to showcase our research on this project to date and also serve as a resource for scholars and researchers who use online tools to promote themselves or their work.
These companies make money by ensuring we spend as much time on their platforms as possible, so they use various tricks to capture and hold our attention: creating an illusion of choice, hijacking our natural tendencies as social animals, and producing the compelling draw of variable rewards. In his article, Harris explains why each of these tactics is problematic for people, community, and society, and he suggests different ways we could design and approach technology in our lives. I’d like to build on his ideas specifically with respect to weaponized misinformation and propaganda. Harris doesn’t really get into this in his article, but I’d like to suggest why I think the hacking of the human mind has left us far more vulnerable to this type of message manipulation.
When social media platforms hack our brains for attention, they also supercharge the propaganda, misinformation, and black-ops tactics that were already being deployed at a slower, grassroots scale. Just as we are wired to seek variable rewards from social media notifications, we are wired to respond to emotionally charged (particularly negative) posts. The human mind, evolutionarily speaking, is optimized to ignore the mundane but attend to threats to ourselves or our tribes. Thus, when we see a viral video showing a confrontation between two groups, one of whom we identify with, we are likely to pay attention to the video and then share it with our tribe without thinking critically about what is not shown in the video.
This type of uncritical engagement with media is not particularly new either. As anyone who has taken a media studies class can tell you, we tend to trust what we see with our own eyes, which is why video is so successful a medium for building and reinforcing cultural norms. But as social media platforms use popularity metrics and autoplay to hold our attention, they also facilitate the spread of video, increasing the global scale at which it can effectively influence people’s views.
So as Harris points out, we are all being hacked for our attention. And as the companies hack our brains, they pave the way for propagandists to do so as well. This adds additional weight to Harris’ call for a social media bill of rights, and I would add, suggests that we need to carefully think through the question of regulation for platforms and whether we need to develop an international and enforceable standard of practice.
With COP24 coming to a close at the end of this week, climate change has been relatively newsworthy, which means people are probably turning to their favorite search engines for information about it. In a recent survey by the Association for Canadian Studies, Canadians reported that they believe the Internet makes them smarter and that they do not have to remember facts or events because they can so easily search for them online.
But is it wise to trust search engines as information sources? A growing number of critical information scholars, including Safiya Noble, would say otherwise. In honor of COP24, I decided to test three of the most popular search engines on the topic of climate change. I entered “climate change is” into Google, Yahoo, and Bing and took screen captures of their suggested searches. The differences were very interesting.
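For anyone who wants to repeat this kind of check programmatically rather than by screenshot, here is a minimal sketch. It assumes the well-known but unofficial autocomplete endpoints for Google and Bing; these are undocumented, may change without notice, and may return results personalized by locale, so the function names and endpoints below are illustrative rather than a supported API.

```python
# Hypothetical helper for querying search-engine autocomplete suggestions.
# NOTE: the endpoints below are unofficial, undocumented suggest APIs and
# may change, be rate-limited, or return locale-personalized results.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

SUGGEST_ENDPOINTS = {
    "google": "https://suggestqueries.google.com/complete/search",
    "bing": "https://api.bing.com/osjson.aspx",
}

def build_suggest_url(engine: str, query: str) -> str:
    """Build the suggest-API URL for a given engine and query prefix."""
    params = {"q": query}
    if engine == "google":
        params["client"] = "firefox"  # request plain JSON instead of JSONP
    return SUGGEST_ENDPOINTS[engine] + "?" + urlencode(params)

def fetch_suggestions(engine: str, query: str) -> list:
    """Fetch suggestions; both endpoints return [query, [suggestion, ...]]."""
    with urlopen(build_suggest_url(engine, query), timeout=10) as resp:
        return json.load(resp)[1]

# Show the URLs that would be queried for the experiment in this post.
for engine in SUGGEST_ENDPOINTS:
    print(engine, "->", build_suggest_url(engine, "climate change is"))
```

Yahoo is omitted here because its search results are powered by Bing; in any case, suggestions vary by time, location, and personalization, so dated screenshots of the live interfaces remain the more reliable record for comparison.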
Well, most of us have, anyway. The infamous addendum to your Twitter bio. Come on, you know it – it goes something like this: “RTs are not endorsements” or “RTs do not equal endorsements” or something along those lines.
Heck, I have one myself; you can check it out on Twitter if you look up @SocMedDr. It serves as a little disclaimer. A little “I may not have done my homework, but I liked a tweet so I retweeted it, don’t hassle me later” disclaimer.
And today, I’m going to tell you why I think we’re all wrong to do this. Especially now in an age of online information operations and fake news.
Think about the last time you shared something on Facebook or Twitter. What was your primary motivation for doing so?
Perhaps it was because you learned about some news or current events that you felt others should know about. Perhaps it was because you saw a cute video or picture that you thought other people would like to see too. Perhaps it was because you saw a pithy quote or saying that you felt really described the way you were feeling in that moment, and you wanted to convey that feeling to other people in your network.
So, in the news last week, it turns out Facebook behaved like many other large and not particularly ethical companies. Sheryl Sandberg is implicated in the hiring of a right-wing PR firm known for its “black ops” style of engagement. This firm created messages suggesting that anti-Facebook activists had ties to George Soros (a known Republican dog-whistle tactic). It has also been suggested that Sandberg wanted to suppress information about Russian election meddling (even information that originated from Facebook’s own security people). All this and more is detailed in a recent New York Times article that commentators are saying shocked, and I mean SHOCKED, the world.