Innovation first, then security

The internet of things: it can help you manage your energy use even when you’re away from home, let people into your house remotely, or make receiving packages easier. It can help you monitor your own family, your home security, or your grocery use… and it can help others monitor you.

A recent Gizmodo investigation, for example, revealed that Ring, Amazon’s smart doorbell and home security system, had major security vulnerabilities despite a company pledge to protect user privacy. Gizmodo was able to uncover the locations of thousands of Ring devices within a randomly chosen area of Washington, DC. While only the Ring users who chose to use the Neighbors app were exposed, this still represents a major vulnerability, one that is ripe for exploitation.

A map reflecting the density of Ring cameras that have been used to share footage on Neighbors over the past 500 days. Screenshot: Gizmodo

Continue reading “Innovation first, then security”


It’s Not You, Or Me!

Pop quiz: What do climate change and social media privacy have in common?

“Stop Global Warming” by Piera Zuliani is licensed under CC BY-ND 4.0

If you said, “a distracting and inaccurate focus on individual actions” you’re correct! Congratulations! Pat yourself on the back and pour yourself a congratulatory beer, glass of wine, coffee, or soda.

Continue reading “It’s Not You, Or Me!”


Hello Shadow

Note: This post is for Laura, by request – hi Laura!

Note 2: This post is far more philosophical than I normally go, but I thought, what the heck, why not have fun with it?

“shadow” by mandaloo is licensed under CC BY-NC-SA 2.0

Your online Shadow.

Everyone has one. Even if you take care not to use platforms like Facebook, it’s highly likely that you have a shadow profile following you around the internet.

Platforms like Facebook and Google do it best. They collect all the data you and your friends or colleagues give them when you use their free services (But Google Maps is SO CONVENIENT!) and combine that with collected data about the other sites you visit (even after you log out), where you’re logging in from, and whether you’re on a mobile device or a computer. They combine all of this data and start to make predictions about you, which are either confirmed or adjusted depending on your online habits and the habits of those you are connected to. This is precisely why so many people think their phones are secretly listening to them – and then delivering ads based on something they said. Your phone is not listening to you. It’s more troubling than that. Your shadow profile has revealed your secrets (she’s not very discreet!).
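The mechanics are easy to caricature. Here is a toy sketch of how scattered signals become a prediction – every name, weight, and signal below is invented for illustration; real platforms use vastly richer models:

```python
from collections import Counter

def predict_interests(own_signals, friends_interests, top_n=2):
    """Toy shadow-profile builder: merge a user's own signals with
    interests observed among their connections, then rank them."""
    tally = Counter()
    for signal in own_signals:      # direct signals count double
        tally[signal] += 2
    for interests in friends_interests:
        for interest in interests:  # connections fill in the gaps
            tally[interest] += 1
    return [interest for interest, _ in tally.most_common(top_n)]

# A user who never searched for camping is still predicted to like it,
# because most of their connections do.
guess = predict_interests(
    own_signals=["maps", "coffee"],
    friends_interests=[["camping"], ["camping", "hiking"], ["camping"]],
)
# guess[0] == "camping"
```

The point of the toy: the strongest prediction about this user comes entirely from other people’s data, which is exactly why opting out of a platform doesn’t make your shadow disappear.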

Continue reading “Hello Shadow”


Unintended Consequences

Researchers at MIT, who are at the forefront of autonomous vehicle technology, have noticed a paradox: when a little bit of assistive technology is added to a car, drivers become less safe. In other words, when people feel like technology is behind the wheel, they are more likely to drive distracted, and so many of the autonomous technologies intended to make people safer actually do the opposite.

This is a classic unintended consequence of technology, like the ones described by Edward Tenner in his 1997 book Why Things Bite Back. To combat this issue, the smart folks at MIT decided to put a human-facing camera in the vehicle, which looks for distracted driving and compensates accordingly, as seen in this YouTube video. Rather than asking what social and psychological reasons drive people toward distracted driving, so that those reasons might be minimized, the best solution was determined to be adding another layer of technological assistance to the issue. Technology to solve the problem created by technology.

A screen capture from the MIT Human-Centered Autonomous Vehicle demo video, available from https://www.youtube.com/watch?v=OoC8oH0CLGc
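The compensating loop can be caricatured in a few lines. This is not MIT’s actual logic – the signals and the two-second threshold below are invented purely for illustration:

```python
def assess_driver(gaze_off_road_seconds, hands_on_wheel):
    """Toy driver-monitoring rule: escalate assistance when the
    in-cabin camera sees sustained inattention."""
    if gaze_off_road_seconds > 2.0 or not hands_on_wheel:
        return "compensate"  # e.g., slow down, tighten lane-keeping
    return "normal"

assess_driver(0.5, True)   # attentive driver: "normal"
assess_driver(3.5, True)   # eyes off the road too long: "compensate"
```

Even in this cartoon form, the pattern is visible: the system doesn’t ask why the driver looked away, it just adds more automation when they do.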


Continue reading “Unintended Consequences”


AI in the Canadian Government: The Immigration Edition

Over the last two years or so, the Canadian government has been openly exploring how some government processes, such as the processing of lower-risk or routine immigration files, can be made more efficient through the use of AI (machine learning) algorithms.

The good news is that the adoption of these systems has so far been guided by a digital framework that includes making the processes and software open by default whenever possible. These guidelines hint at the transparency necessary to mitigate algorithmic bias.

Input Creativity
“Input Creativity” by Row Zero – Simon Williamson is licensed under CC BY-NC 4.0

Continue reading “AI in the Canadian Government: The Immigration Edition”


AI security hits a Canadian University: Proceed with Caution

I usually only post to this blog once per week, but a news story caught my eye today since it concerns my sector (higher education), my country (Canada) and my passion (technology critique).

Mount Royal University: Image from https://www.cbc.ca/news/canada/calgary/mru-ai-security-1.5136407

Mount Royal University in Calgary, Alberta is set to be the first organization in Canada to install an AI system for security purposes. The system consists of a network of cameras and a machine learning algorithm that spends its first few weeks learning what “normal” movement looks like on campus, then uses that baseline to detect potential security issues. Deviations from normal, in this case, signal a potential “threat,” or at least an event worth looking into. As described by the Vice-President, Securities management, in a recent CBC article:

“When that pattern breaks, what it does, that screen comes to life and it shows the people in the security office where the pattern is now different and then it’s up to a human being to decide what to do about it.”
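The underlying idea is textbook anomaly detection: learn a baseline, then flag deviations. A minimal sketch – using invented foot-traffic numbers and a simple standard-deviation rule, not MRU’s actual system:

```python
import statistics

def learn_baseline(counts):
    """Learn what 'normal' looks like from weeks of activity counts."""
    return statistics.mean(counts), statistics.stdev(counts)

def pattern_breaks(count, baseline, threshold=3.0):
    """Flag an interval whose activity deviates too far from normal;
    a human still decides what, if anything, to do about it."""
    mean, stdev = baseline
    return abs(count - mean) > threshold * stdev

# Invented hourly foot-traffic counts from the "learning" weeks.
normal = [50, 48, 52, 47, 51, 49, 53, 50, 46, 54, 49, 51, 48, 52]
baseline = learn_baseline(normal)

pattern_breaks(51, baseline)    # False: usual traffic
pattern_breaks(120, baseline)   # True: the screen "comes to life"
```

Note what the sketch makes plain: “threat” here just means “statistically unusual,” which is exactly why the cautionary framing matters – unusual is not the same as dangerous.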

Continue reading “AI security hits a Canadian University: Proceed with Caution”


A little knowledge is a dangerous thing

The beginning of knowledge
“The beginning of knowledge” by dvidal.lorente is licensed under CC BY-NC 2.0

The Dunning-Kruger effect refers to a type of cognitive bias in which people assess their own knowledge of a topic or subject area as being greater than it actually is. Psychologists note that it tends to occur most frequently in people with a small amount of knowledge on a topic. In other words, it takes a certain amount of knowledge before we can actually know how little we know.

Continue reading “A little knowledge is a dangerous thing”


This Week In Tech News: Orwellian Doublethink

The last week has been filled with announcements from big tech firms:

Facebook tells us “the future is private“.

Google tells us they’re “here to help“.

Amazon tells us it’s a friend to small businesses.

“War is Peace, Freedom is Slavery Ignorance is Strength” by Nney is licensed under CC BY-NC-SA 2.0

In developers’ conferences and earnings calls, the biggest of the big tech companies are trying to develop unique value propositions that paint them as friendly, responsive, and attuned to the needs of their customers. The mainstream technology media (often overworked, understaffed, and reliant on the good graces of big tech for continued access to stories) then generally reports these messages at face value. News in the last week focused on Facebook’s pivot toward community groups, Google’s exciting universal translator, and Amazon’s claim that its small and medium-sized business partners made on average $90K last year through its platform.

Continue reading “This Week In Tech News: Orwellian Doublethink”


What if finding your online community hurts others?

The beauty of participatory and social media has always been its ability to connect people. That is also the great evil of these platforms.

Social media allows what Barry Wellman calls networked individualism. In contrast to geographic or familial communities where we are brought together through accidents of fate like where or with whom we were born, networked individuals are not forced to conform to community norms to fit in. Instead, they can use network technologies to maintain their individual quirks and find others who share their unique interests and ideals in online communities. This is a beautiful and terrifying vision.

Frankenstein
“Frankenstein” by Britta Frahm is licensed under CC by 2.0

A year ago today, 10 people were killed and 16 injured when a young man rented a van and drove it onto the sidewalk. The perpetrator framed this action as part of an “incel,” or “involuntarily celibate,” rebellion. Incels are an online community of men who openly express misogyny and claim that they should be “given” women to have sex with. The attack targeted women, and incel communities on the internet celebrated it afterward.

Continue reading “What if finding your online community hurts others?”
