AI security hits a Canadian University: Proceed with Caution

I usually only post to this blog once per week, but a news story caught my eye today since it concerns my sector (higher education), my country (Canada) and my passion (technology critique).

Mount Royal University (image source: https://www.cbc.ca/news/canada/calgary/mru-ai-security-1.5136407)

Mount Royal University in Calgary, Alberta is set to become the first organization in Canada to install an AI system for security purposes. The system consists of a network of cameras and a machine learning algorithm that spends its first few weeks learning what “normal” movement looks like on campus, then uses that baseline to detect potential security issues. In this case, deviations from normal signal a potential “threat”, or at least an event worth looking into. As the Vice-President of Security Management described it in a recent CBC article:

“when that pattern breaks, what it does, that screen comes to life and it shows the people in the security office where the pattern is now different and then it’s up to a human being to decide what to do about it.”
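To make the mechanism concrete, here is a minimal sketch of baseline-then-deviation anomaly detection, assuming the system reduces camera feeds to movement counts per camera and time-of-day bucket and flags large deviations from the learned norm. The CBC article doesn’t describe the vendor’s actual algorithm, so the function names, data layout, and threshold below are all illustrative assumptions.

```python
import statistics
from collections import defaultdict

def learn_baseline(observations):
    """Group movement counts by (camera, hour) and record their mean/stdev.

    observations: iterable of (camera_id, hour_of_day, movement_count)
    collected during the initial "learning what normal looks like" weeks.
    """
    buckets = defaultdict(list)
    for camera, hour, count in observations:
        buckets[(camera, hour)].append(count)
    return {
        key: (statistics.mean(counts), statistics.stdev(counts))
        for key, counts in buckets.items()
        if len(counts) >= 2  # stdev needs at least two samples
    }

def is_unusual(baseline, camera, hour, count, z_threshold=3.0):
    """Flag a count whose z-score against the learned baseline is large."""
    if (camera, hour) not in baseline:
        return True  # never observed this bucket during training
    mean, stdev = baseline[(camera, hour)]
    if stdev == 0:
        return count != mean
    return abs(count - mean) / stdev > z_threshold

# Train on a few weeks of "normal" campus movement, then monitor live counts.
training = [("cam-7", 9, c) for c in (40, 38, 44, 41, 39)]
baseline = learn_baseline(training)

print(is_unusual(baseline, "cam-7", 9, 42))   # False: within the normal range
print(is_unusual(baseline, "cam-7", 9, 120))  # True: the pattern has broken
```

Note that even in this toy version the software only raises a flag; the decision about what to do rests entirely with the humans watching the screen, which is exactly where the concerns below come in.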

While the security and facilities managers at MRU claim the system poses no threat to privacy, since it only analyzes “pixels” and the movements of people in aggregate, many students are concerned, and MRU security staff have admitted that under certain circumstances an individual can be tracked across campus (for example, when caught stealing something). This admission of tracking rather undercuts the privacy claims. What it tells us is that privacy is ensured only by the way people use the software: the affordances of the tool itself do not guarantee privacy, and so the system is open to misuse.

Privacy isn’t the only concern, in my view. The software identifies “unusual” behavior, and human agents then decide what to do about it. Depending on the security staff, this opens up the potential for all kinds of misuse or social control. People who do not behave in ways the software expects may be singled out for attention. Even if a security guard simply shows up to speak with students who are behaving differently and takes no further action, the knowledge that they are being watched will itself become a type of social control, one that stretches far beyond discouraging bad behavior.

This also opens up the potential for abuse directed at marginalized groups. A group of white students behaving unusually might get a pass from security, while a group of Indigenous students behaving in exactly the same way could be singled out.

And I wonder why any of this is necessary. I’m a big fan of Mount Royal University. I think their practical, student-centered education model is fantastic, and I’ve visited campus many times. I have never gotten the sense that MRU is an unsafe place that needs extra security measures.

We can have complete security, or we can have some freedom from surveillance; the two have to be balanced. I’m not convinced we should err on the side of more surveillance in a community that is already relatively safe.
