The beauty of participatory and social media has always been its ability to connect people. That is also the great evil of these platforms.
Social media allows what Barry Wellman calls networked individualism. In contrast to geographic or familial communities where we are brought together through accidents of fate like where or with whom we were born, networked individuals are not forced to conform to community norms to fit in. Instead, they can use network technologies to maintain their individual quirks and find others who share their unique interests and ideals in online communities. This is a beautiful and terrifying vision.
These companies make money by keeping us on their platforms as long as possible, so they use various tricks — creating an illusion of choice, hijacking our natural tendencies as social animals, and producing the compelling draw of variable rewards — to capture and hold our attention. In his article, Harris explains why each of these tactics is problematic for people, communities, and society, and he suggests different ways we could design and approach technology in our lives. I’d like to build on his ideas specifically with respect to weaponized misinformation and propaganda. Harris doesn’t really address this in his article, but I’d like to suggest why I think the hacking of the human mind has left us far more vulnerable to this type of message manipulation.
When social media platforms hack our brains for attention, they also supercharge the propaganda, misinformation, and black-ops tactics that were already being deployed at a slower, grassroots scale. Just as we are wired to seek variable rewards from social media notifications, we are wired to respond to emotionally charged (particularly negative) posts. The human mind, evolutionarily speaking, is optimized to ignore the mundane but attend to threats to ourselves or our tribes. So when we see a viral video showing a confrontation between two groups, one of which we identify with, we are likely to pay attention to the video and then share it with our tribe without thinking critically about what the video does not show.
This type of uncritical engagement with media is not particularly new, either. As anyone who has taken a media studies class can tell you, we tend to trust what we see with our own eyes, which is why video is such a successful medium for building and reinforcing cultural norms. But as social media platforms use popularity metrics and autoplay to hold our attention, they also facilitate the spread of video, increasing the global scale at which it can effectively influence people’s views.
So, as Harris points out, we are all being hacked for our attention. And as the companies hack our brains, they pave the way for propagandists to do the same. This adds weight to Harris’ call for a social media bill of rights and, I would add, suggests that we need to think carefully about regulating platforms and whether we need to develop an international, enforceable standard of practice.
Recently, Dr. Ann Dale, researcher Brigitte Petersen and I conducted a study in which we looked at the hashtag community formed by people who post using the tag #ClimateChange on Instagram. We wanted to see whether this community showed evidence of the potential for community agency, that is to say, do the posts related to this hashtag seem like content that could, under the right circumstances, inspire community action around the issue of global warming?
What good are 300,000 Facebook friends, or a viral video viewed by 3 billion people, if only a fraction of those people are actually interested in what you have to share with them or sell to them? The answer is: not much. Rather than taking a broadcast approach to participatory media and aiming for a large audience, those of us without the money or other resources to spread our message far and wide need to be more strategic. The brands that have grown a movement have tapped into just these principles. For this reason, smaller organizations, artists, and individuals probably don’t gain much by focusing on dramatically increasing follower counts over a short amount of time. That kind of campaign takes time, energy, and money that could be better spent actually growing a small business. Instead, for most of us, it’s better to have 3,000 of the right followers: people who are most likely to convert.
I learned it on Instagram today: The Chive has officially left Facebook.
Ok, well, they haven’t fully left. But they will no longer be posting their articles, videos, and other content directly to their Facebook page. Instead, they will share only links on Facebook, and those links will take people back to their own website. The way God Herself intended.
One of the hallmarks of the last year of US politics has been a steady stream of messages from the president about “fake news” or the “lying media”. Arguably, this has been a mainstay of Trump’s political strategy since he announced his run for the presidency, and it remains a tactic he employs and that his followers seem to take at face value. So it’s not surprising to learn that in the US, trust in traditional media is at an all-time low. In fact, recent research from the American Press Institute and Associated Press shows that 41% of Americans report having hardly any confidence in the traditional press.
What was surprising for me, though, as a Canadian researcher, was learning that this is more than just an American issue. Statistics Canada reports that only 40% of Canadians have confidence in the national media. With so much information available from so many different sources, it seems we just don’t know who or what to trust anymore. This is true in 2017 and, unfortunately, my research also shows evidence of this trend as early as our federal election in 2015. We collected thousands of tweets in the month leading up to the 2015 federal election, and we analyzed a sample of them using corpus analysis software along with human content analysis.
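For readers unfamiliar with corpus analysis, at its simplest it boils down to counting word frequencies across a body of texts and looking for patterns a human coder can then interpret. Here is a minimal sketch in Python — the tweets, the stopword list, and the #elxn42 hashtag are purely illustrative stand-ins, not our study’s actual data or software:

```python
from collections import Counter
import re

# Illustrative sample tweets (made up for this sketch, not the study data).
tweets = [
    "Can't trust the media coverage of this election #elxn42",
    "Great debate coverage tonight! #elxn42",
    "Why does the media ignore the real issues? #elxn42",
]

# A tiny stopword list; real corpus tools ship much longer ones.
STOPWORDS = {"the", "of", "this", "does", "why"}

def token_frequencies(texts):
    """Lowercase each text, pull out word-like tokens, and count them."""
    counts = Counter()
    for text in texts:
        tokens = re.findall(r"[a-z0-9']+", text.lower())
        counts.update(t for t in tokens if t not in STOPWORDS)
    return counts

freqs = token_frequencies(tweets)
print(freqs.most_common(3))  # the corpus's most frequent tokens
```

In a real study, frequency lists like this are only the starting point; the interesting work happens when human coders read the high-frequency terms back in context, which is why we paired the software with human content analysis.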
In a world of 24/7 cable news, round-the-clock Amazon deliveries, and infinite social media notifications, it’s been hard not to notice how the ways we interact with each other and with information are changing by the day. Chief among these changes, of course, is the problem of fake news (#fakenews, I guess), or the spread of misinformation in an online environment. Some people blame Facebook and Twitter, which are currently in the news for the role their platforms played in Russian trolling around the 2016 US election. Their involvement notwithstanding, it seems to me that while the platforms had a role to play in the spread of fake news, it was more a passive role in which they took advantage of human frailty, rather than an active one in which their algorithms promoted fake news because it was more exciting content. Allow me to explain.