The internet of things: it can help us manage our energy use even when we're away from home, let people into the house remotely, or make receiving packages easier. It can help us monitor our own families, home security, or grocery use... and it can help others monitor us.
A recent Gizmodo investigation, for example, revealed that Amazon's smart doorbell/home security system Ring had major security vulnerabilities, despite the company's pledge to protect user privacy. Gizmodo was able to uncover the locations of thousands of Ring devices in a randomly chosen area of Washington, DC. While only the Ring users who chose to use the Neighbors app were exposed, this still represents a major vulnerability that is ripe for exploitation.
Pop quiz: What do climate change and social media privacy have in common?
If you said, “a distracting and inaccurate focus on individual actions” you’re correct! Congratulations! Pat yourself on the back and pour yourself a congratulatory beer, glass of wine, coffee, or soda.
Researchers at MIT, who are at the forefront of autonomous vehicle technology, have noticed a paradox: when a little bit of assistive technology is added to a car, drivers become less safe. In other words, when people feel that technology is behind the wheel, they are more likely to drive distracted, and so many of the autonomous features intended to make people safer actually do the opposite.
This is a classic unintended consequence of technology, like the ones described by Edward Tenner in his 1997 book Why Things Bite Back. To combat this issue, the smart folks at MIT decided to put a human-facing camera in the vehicle, which looks for distracted driving and compensates accordingly, as seen in this YouTube video. Rather than asking what social and psychological factors drive people to engage in distracted driving, so that those factors might be minimized, the best solution was determined to be adding another layer of technological assistance: technology to solve the problem created by technology.
Over the last two years or so, the Canadian government has been openly exploring how some government processes, such as the processing of lower-risk or routine immigration files, can be made more efficient through the use of AI (machine learning) algorithms.
The good news is that the adoption of these systems has so far been guided by a digital framework which includes making the processes and software open by default whenever possible. These guidelines hint at the transparency that is necessary to mitigate algorithmic bias.
I usually only post to this blog once per week, but a news story caught my eye today since it concerns my sector (higher education), my country (Canada) and my passion (technology critique).
Mount Royal University in Calgary, Alberta is going to be the first organization in Canada to install an AI system for the purposes of security. This system consists of a network of cameras and a machine learning algorithm that spends the first few weeks learning what "normal" movement looks like on campus, then uses that baseline to detect whether there might be a security issue. Deviations from normal, in this case, signal a potential "threat," or at least an event worth looking into. As the Vice-President of security management described it in a recent CBC article:
“when that pattern breaks, what it does, that screen comes to life and it shows the people in the security office where the pattern is now different and then it’s up to a human being to decide what to do about it,”
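The "pattern breaks" logic described above is, at its core, baseline anomaly detection. A minimal sketch of the idea, assuming nothing about the vendor's actual system (the function names, the use of hourly counts, and the three-sigma threshold are all illustrative assumptions): learn the normal range of activity during a training period, then flag readings that fall far outside it.

```python
# Illustrative sketch of baseline anomaly detection, as the article describes it:
# learn what "normal" looks like, then flag large deviations for a human to review.
# Names, data, and thresholds are assumptions, not the actual campus system.
from statistics import mean, stdev

def fit_baseline(observations):
    """Learn a simple baseline (mean and spread) from 'normal' activity counts."""
    return mean(observations), stdev(observations)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag a reading that deviates from the learned pattern by > threshold sigmas."""
    mu, sigma = baseline
    return abs(value - mu) > threshold * sigma

# e.g. hourly counts of people passing a camera during the learning weeks
normal_counts = [12, 15, 11, 14, 13, 16, 12, 14]
baseline = fit_baseline(normal_counts)

print(is_anomalous(13, baseline))  # within the learned pattern
print(is_anomalous(80, baseline))  # the pattern "breaks": escalate to a human
```

Note that the model only learns whatever happened to be "normal" during training, which is exactly where the critique bites: anything unusual, however innocent, becomes a potential "threat."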
At developer conferences and on earnings calls, the biggest of the big tech companies are trying to develop unique value propositions that paint them as friendly, responsive, and attuned to the needs of their customers. Then the mainstream technology media (often overworked, understaffed, and reliant on the good graces of big tech for continued access to stories) generally reports these messages at face value. News in the last week focused on Facebook's pivot toward community groups, Google's exciting universal translator, and Amazon's claim that its small and medium-sized business partners made an average of $90K last year through its platform.
In early 2018, Facebook users were stunned to learn that Cambridge Analytica had used a loophole in Facebook's API to harvest data from millions of users who had not given free and informed consent for the use of their data. Prior to this revelation, people around the world were already growing concerned about the spread of fake news and misinformation on social media, and how this information might influence elections. This event sent apprehensions into overdrive and even sparked a #DeleteFacebook online movement, of sorts.