Innovation first, then security

The internet of things: it can help us manage our energy use even when we're away from home, let people into the house remotely, or make receiving packages easier. It can help us monitor our family, our home security, or our grocery use... and it can help others monitor us.

A recent Gizmodo investigation, for example, revealed that Amazon's smart doorbell/home security system Ring had major security vulnerabilities, despite a company pledge to protect user privacy. Gizmodo was able to uncover the locations of thousands of Ring devices in a randomly chosen area of Washington, DC. While only the Ring users who chose to use the Neighbors app were revealed, this still represents a major vulnerability, one that is ripe for exploitation.

A map reflecting the density of Ring cameras that have been used to share footage on Neighbors over the past 500 days. Screenshot: Gizmodo

Continue reading “Innovation first, then security”


It’s Not You, Or Me!

Pop quiz: What do climate change and social media privacy have in common?

“Stop Global Warming” by Piera Zuliani is licensed under CC BY-ND 4.0

If you said, “a distracting and inaccurate focus on individual actions” you’re correct! Congratulations! Pat yourself on the back and pour yourself a congratulatory beer, glass of wine, coffee, or soda.

Continue reading “It’s Not You, Or Me!”


Unintended Consequences

Researchers at MIT, who are at the forefront of autonomous vehicle technology, have noticed a paradox: when a little bit of assistive technology is added to a car, drivers become less safe. In other words, when people feel like technology is behind the wheel, they are more likely to be distracted, and thus many of the autonomous technologies intended to make people safer actually do the opposite.

This is a classic unintended consequence of technology, like the ones described by Edward Tenner in his 1997 book Why Things Bite Back. To combat this issue, the smart folks at MIT decided to put a driver-facing camera in the vehicle, which looks for distracted driving and compensates accordingly, as seen in this YouTube video. Rather than asking what social and psychological reasons drive people toward distracted driving, so that those reasons might be minimized, the best solution was determined to be adding another layer of technological assistance: technology to solve the problem created by technology.

A screen capture from the MIT Human-Centered Autonomous Vehicle demo video, available from https://www.youtube.com/watch?v=OoC8oH0CLGc
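The "compensate accordingly" loop shown in the video can be sketched roughly as follows. This is a hypothetical illustration of the idea only, not MIT's actual code; the tolerance and following-gap values are assumptions made for the sake of the example.

```python
# Hypothetical sketch of a distraction-compensation loop: if the
# driver-facing camera reports that the driver's gaze has been off the
# road for too long, the vehicle behaves more conservatively.
OFF_ROAD_LIMIT_S = 2.0  # assumed tolerance before compensating strongly

def control_step(gaze_on_road: bool, off_road_time_s: float,
                 base_gap_s: float = 1.5) -> tuple[float, bool]:
    """Return (following_gap_seconds, alert_driver) for one control cycle."""
    if gaze_on_road:
        # Attentive driver: normal following gap, no alert.
        return base_gap_s, False
    if off_road_time_s > OFF_ROAD_LIMIT_S:
        # Distracted too long: double the following gap and alert the driver.
        return base_gap_s * 2, True
    # Briefly distracted: be somewhat more cautious, but don't alert yet.
    return base_gap_s * 1.5, False
```

Even in sketch form, the point of the post stands: the loop treats distraction as a signal to be detected and compensated for, not as a behavior to be understood.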

Continue reading “Unintended Consequences”


AI in the Canadian Government: The Immigration Edition

Over the last two years or so, the Canadian government has been openly exploring how some government processes, such as the processing of lower-risk or routine immigration files, can be made more efficient through the use of AI (machine learning) systems.

The good news is that the adoption of these systems has so far been guided by a digital framework that includes making the processes and software open by default whenever possible. These guidelines hint at the transparency that is necessary to mitigate algorithmic bias.

“Input Creativity” by Row Zero – Simon Williamson is licensed under CC BY-NC 4.0

Continue reading “AI in the Canadian Government: The Immigration Edition”


AI security hits a Canadian University: Proceed with Caution

I usually only post to this blog once per week, but a news story caught my eye today since it concerns my sector (higher education), my country (Canada) and my passion (technology critique).

Mount Royal University: Image from https://www.cbc.ca/news/canada/calgary/mru-ai-security-1.5136407

Mount Royal University in Calgary, Alberta is going to be the first organization in Canada to install an AI system for the purposes of security. This system consists of a network of cameras and a machine learning algorithm that spends its first few weeks learning what "normal" movement looks like on campus, then uses that baseline to detect whether there might be a security issue. Deviations from normal, in this case, signal a potential "threat," or at least an event worth looking into. As described by the Vice-President of Security Management in a recent CBC article:

“when that pattern breaks, what it does, that screen comes to life and it shows the people in the security office where the pattern is now different and then it’s up to a human being to decide what to do about it,”
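The baseline-then-deviation idea described in that quote can be sketched in a few lines. This is my own minimal illustration of the general technique, not MRU's actual system; the class, the per-camera/per-hour grouping, and the three-sigma threshold are all assumptions for the sake of the example.

```python
# Minimal sketch of baseline anomaly detection: learn the typical amount
# of movement per camera and hour during a training period, then flag
# readings that fall far outside that learned "normal."
import statistics

class MovementBaseline:
    def __init__(self, threshold_sigmas: float = 3.0):
        self.threshold = threshold_sigmas
        # (camera_id, hour) -> list of movement counts seen during training
        self.samples: dict[tuple[str, int], list[float]] = {}

    def observe(self, camera_id: str, hour: int, movement_count: float) -> None:
        """Record a 'normal' reading during the learning phase."""
        self.samples.setdefault((camera_id, hour), []).append(movement_count)

    def is_anomalous(self, camera_id: str, hour: int, movement_count: float) -> bool:
        """After learning, flag readings far from the learned mean."""
        history = self.samples.get((camera_id, hour))
        if not history or len(history) < 2:
            return False  # not enough baseline data; leave it to a human
        mean = statistics.mean(history)
        stdev = statistics.stdev(history) or 1.0
        return abs(movement_count - mean) > self.threshold * stdev
```

Note what the sketch makes visible: the system has no concept of a "threat," only of statistical unusualness, which is exactly why a human still has to decide what a broken pattern means.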

Continue reading “AI security hits a Canadian University: Proceed with Caution”


This Week In Tech News: Orwellian Doublethink

The last week has been filled with announcements from big tech firms:

Facebook tells us “the future is private“.

Google tells us they’re “here to help“.

Amazon tells us it’s a friend to small businesses.

“War is Peace, Freedom is Slavery Ignorance is Strength” by Nney is licensed under CC BY-NC-SA 2.0

At developers’ conferences and on earnings calls, the biggest of the big tech companies are trying to develop unique value propositions that paint them as friendly, responsive, and attuned to the needs of their customers. The mainstream technology media (often overworked, understaffed, and reliant on the good graces of big tech for continued access to stories) then generally reports these messages at face value. News in the last week focused on Facebook’s pivot toward community groups, Google’s exciting universal translator, and Amazon’s claim that its small and medium-sized business partners made, on average, 90K last year through its platform.

Continue reading “This Week In Tech News: Orwellian Doublethink”


The Private Turn

Is social media becoming less social?

In early 2018, Facebook users were stunned to learn that Cambridge Analytica had used a loophole in Facebook’s API to harvest data from millions of users who had not given free and informed consent for its use. Prior to this revelation, people around the world were already growing concerned about the spread of fake news and misinformation on social media and how this information might influence elections. The scandal sent these apprehensions into overdrive and even sparked a #DeleteFacebook online movement, of sorts.

“Elon Musk backs #DeleteFacebook, and Tesla’s and SpaceX’s Facebook pages vanish” by marcoverch is licensed under CC BY 2.0

Continue reading “The Private Turn”


The Privacy Paradox

In a post-Cambridge Analytica world, why are people still using social media platforms like Facebook and Instagram? Despite threatening to leave the social network in droves after the data breach was revealed, most people stayed, and things returned to normal. Why is this the case?

A window printed with the words "private meeting room"
“Private” by Duane Tate is licensed under CC BY 2.0

Social media scholars have identified a phenomenon called the privacy paradox that can help to explain this behavior. Put succinctly, the privacy paradox refers to the fact that although people state they do not intend to disclose their personal information online, they do so anyway; or that although people say they do not trust their information will remain private online, they still end up disclosing a large amount of personal information on social networks.

Continue reading “The Privacy Paradox”


Social Media Mindfulness Is Not Enough

A group of virtual reality avatars sitting in a circle engaging in meditation
“VR Meditation Guided by Jeremy Nickel” posted to Flickr by Sansar VR. Available at https://flic.kr/p/GBrspR CC-BY 2.0

It used to be only a few voices on the margin, such as Ian Bogost, Sherry Turkle, Geert Lovink, or Evgeny Morozov, who urged people to think a little more about the time they were spending on social media. But soon the whisper grew, and now the movement may be reaching the mainstream. With the rise to prominence of former Google design ethicist Tristan Harris and his Center for Humane Technology, and with the Facebook/Cambridge Analytica privacy scandal all over Congress and the world news, people are starting to have conversations that were considered almost laughable before.

Continue reading “Social Media Mindfulness Is Not Enough”
