AI in the Canadian Government: The Immigration Edition

Over the last two years or so, the Canadian Government has been openly exploring how some government processes, such as the processing of lower-risk or routine immigration files, can be made more efficient through the use of AI (machine learning) systems.

The good news is that the adoption of these systems has so far been guided by a digital framework that includes making the processes and software open by default whenever possible. These guidelines point toward the kind of transparency necessary to mitigate algorithmic bias.

[Image: “Input Creativity” by Row Zero – Simon Williamson, licensed under CC BY-NC 4.0]

However, code transparency isn’t really sufficient, for a few different reasons. First, there is the way machine learning algorithms work: while the original code may very well be open by default, as the software adapts and learns over time, the system in practice may end up in a very different place than originally intended. People can also use the source code to potentially “game the system,” which would in turn influence how the software develops over time. And any decisions made by humans as part of the chain will likewise shape the way the code changes.

Secondly, what happens if the code starts making “bad decisions”? Many people still trust that algorithmic decisions are superior to human ones, but even algorithms that are open by default are difficult to hold to account, compared to a human actor. Who is to blame if an algorithm starts making biased decisions that put human lives at risk? The original coder? The human beings who made small decisions to change the code along the way? The code doesn’t care if it loses its job, and it doesn’t know how to act morally; it just executes the commands and inputs it is given.

Thirdly, as the code changes over time, does the government have enough resources in place to continually monitor it and ensure it’s working optimally and in the best interests of all Canadians? Machine learning is not a “set it and forget it” scenario; it requires ongoing testing and oversight. What happens if government priorities and resources change over time? Will the money for ongoing oversight still be there?

In China, AI is currently being used to identify and single out specific ethnic and cultural groups, which should remind us how this technology can be misused for the purposes of social control. Do we trust that every future government can resist the temptation to use AI to consolidate power? Do we trust that they will uphold the values of transparency and accountability, and that they will have the resources to continue doing so indefinitely?

Immigration is a sensitive issue, and one that’s linked to population control. The Citizen Lab has already sounded the alarm about the use of AI in this context. I think it’s important that we heed their warning.
