Use of New Artificial Intelligence Technology Policy - Public Consultation
The Use of Artificial Intelligence Technology Policy public consultation has ended. This page has now been archived.
The Toronto Police Services Board (the Board) has now concluded the public consultation on a draft Policy that will govern the way the Toronto Police Service (the Service) can obtain and use new artificial intelligence (AI) technologies. The Board is developing this Policy to create transparency about the Service's use of AI technology, and to ensure that AI technologies are used in a manner that is fair, equitable, and does not breach the privacy or other rights of members of the public. This Policy is being developed in a field with few existing examples and no established guidelines or best practices. To our knowledge, this will be the first Policy of its kind among Canadian police boards or commissions. The public's participation in the development process has been crucial to effectively and meaningfully achieving these goals. We are grateful to all the individuals and organizations who participated in this consultation.
Background
Innovative AI technology promises to streamline, simplify and improve many aspects of modern life, providing efficiencies and cost savings. However, AI technology also carries potential risks to privacy, equality, accountability and fairness. There is currently no legislation in Ontario that fully regulates the use of AI technologies, nor are there any comprehensive guidelines. The Board is stepping into this new space in recognition of the importance of governing this field, while also recognizing that a Policy governing the Service's use of AI technology will itself have to be innovative. The Board is therefore developing a Policy to govern the Service's use of new AI technologies, one that will allow the Service to reap the benefits of these technologies, in the form of effective and efficient investigations that contribute to positive community safety outcomes, while minimizing the identified risks and ensuring transparency in the use of these technologies.
If approved by the Board, this Policy will establish clear guidelines, safeguards and reporting requirements with regard to the procurement and use of AI technology by the Service. The proposed Policy will employ a risk-based process to assess the impacts of potential technologies, transparently report the benefits and risks associated with new technologies under consideration, and ensure the monitoring of actual impacts after the deployment of approved technologies to minimize the risk of adverse unintended consequences.
Key Elements of the Proposed Policy
Purpose
This Policy, if approved, will ensure the thoughtful and transparent consideration of the benefits and risks of obtaining and deploying any new technology using AI, including impacts on public trust in the Service, community safety and sense of security, individual dignity, privacy, and equitable delivery of policing services. In particular, it will help to ensure that new technologies do not introduce or perpetuate biases into policing decisions, including biases against vulnerable populations such as people with disabilities, children and older persons, Indigenous, Black and racialized communities, and low-income and LGBTQ2+ communities. The Policy will achieve this by requiring public consultations on the adoption of any AI technologies that may pose risks, and by developing an evidence-based approach to evaluating new AI technologies both before and after deployment.
Risk Categories
AI technologies are becoming increasingly common, and many are incorporated into everyday applications to assist users and simplify tasks. The risk posed by a given application of AI technology depends both on how the technology was developed and on how it will be used.
The proposed Policy establishes five risk-based categories:
- Extreme Risk, e.g., facial recognition software built on illegally sourced data that could result in mass surveillance
- High Risk, e.g., an analytics system that recommends where units should be deployed in order to maximize crime suppression
- Medium Risk, e.g., a traffic analysis system that recommends where officers should be deployed
- Low Risk, e.g., speech-to-text software that transcribes audio from body-worn camera recordings
- Minimal Risk, e.g., a translation engine that helps convert the Service website into different languages
Evaluation and Reporting
Every AI technology the Service proposes to use will first be evaluated using a risk-assessment tool. The resulting risk category will determine the level of further evaluation required before the technology may be used.
Extreme-risk technologies will not be allowed for use by the Service.
High- and Medium-risk technologies will be subject to a set of evaluations and consultations. Before the Service may use any such technology, the Chief will be required to justify to the Board why the technology should be approved despite its risks, including by providing a risk-mitigation plan. These technologies will also be evaluated for one year after full deployment, to ensure that their use does not result in unanticipated negative consequences, such as policing outcomes that are biased against particular communities.
Low-risk technologies will be reported to the Board, and members of the public will receive basic information on the technology in use (i.e., its name and purpose) and why it was determined to be of low risk.
Minimal-risk technologies, which are only for internal use and are not used to identify, categorize, or make any other decision pertaining to members of the public, will not require any reporting to the Board.
In addition, the Service will maintain a public list of all high-, medium- and low-risk AI technologies currently in use.
Public Involvement
Members of the public will be provided with a method to inform the Board of concerns regarding specific AI technologies used by the Service. For example, a member of the public may believe that a technology deemed low risk has been mischaracterized, or is being used in ways that would render it medium risk and therefore subject to a more thorough review process. Concerns raised by the public will be evaluated by the Board Office and reported to the Board for its action.
Continuous Review
The proposed Policy also establishes a schedule for the continuous review of all high-, medium- and low-risk AI technologies in use by the Service, to ensure that the associated risks have not changed and that continued use remains justified.
The Consultation Process So Far
The Board Office has worked in close collaboration with the Service's Chief Information Officer to develop a risk-based approach to AI technology assessment. In addition, Board Staff have consulted with the Ontario Information and Privacy Commissioner (IPC), as the main regulatory body in charge of Ontario's access and privacy laws, the Ontario Human Rights Commission (OHRC), the Canadian Civil Liberties Association (CCLA), the Law Commission of Ontario (LCO), and various academic experts. The Board is grateful to all of these organizations for their input on the draft Policy, which is reflected in the draft below. Copies of the written statements provided by these bodies are available below. Additional written statements will be added as they become available.
The Public Consultation Phase
The Board thanks the public for their interest in this consultation. More than 40 submissions were made by individuals and organizations as part of the consultation; see below for the full list of submissions received by the Board. This is an important and complex issue, and your thoughts and perspectives were very much valued as we worked through the various elements that make up this Policy. Through your input, you have helped us ensure that all residents of Toronto can enjoy fair, effective and accountable policing services.
Materials
Received Submissions
We are grateful to all of the individuals, organizations and experts who provided feedback on an earlier draft of this Policy.
The following organizations provided us with a written response prior to the public consultation phase:
- Ontario Human Rights Commission
- Law Commission of Ontario
- Information and Privacy Commissioner of Ontario
The following submissions were received during the public consultation phase (please note this list was generated automatically and reflects the information as it was provided to the Board):
- Rachel Hoecke
- Brendan M
- Yuka Sai (Public Interest Advocacy Centre (PIAC))
- Fenwick McKelvey (Montréal Society and Artificial Intelligence Collective)
- Blythe Haynes
- Daniel Kligerman (TELUS)
- Amanda Gonzalez (Clearview AI)
- Ushnish Sengupta (University of Toronto)
- Kristen Thomasen, Suzie Dunn, Kate Robertson, et al
- Dr. Mai Phan, Laura Flyer, and Nicole Rebelo (Equity, Inclusion & Human Rights Unit, Toronto Police Service)
- Molly Johnson
- Jason Flint
- Willie Costello (Aggregate Intellect)
- Andrea Slane (Ontario Tech University)
- Leanne Huneault
- Jack Gemmell (Policing Committee, Law Union of Ontario) + attachment
- Alexa MacDougall
- Tilman Lewis
- Madelin Burt-D'Agnillo
- James Mackey
- Nicole Corrado
- Keith Cameron
- Riley Vainionpaa
- Helena Kita
- Peirce Trifonas
- albert venczel (Ryerson)
- john sewell (Toronto Police Accountability Coalition)
- Barbara Spyropoulos (CPLC 12 Division)
- Concerned Citizen
- Jennifer Beer
- Mike Mattos (Mount Dennis Community Association)
- Lisa (Private Citizen)
- Derrick Lau (n/a)
- Al
- Amanjeev Sethi
- JT
- Atila Rist
- Michele S
- Ruben Charles
- Mary Moreno
- Ben Smieja
- Michael Kos
- Zack
- Randy Barlow (Toronto)
- Lifeok432
Other Resources
These resources were helpful in developing this proposed Policy:
- IPC Comments on the Ontario Government’s Consultation on Ontario’s Trustworthy Artificial Intelligence (AI) Framework
- Government of Canada Directive on Automated Decision-Making
- To Surveil and Predict: A Human Rights Analysis of Algorithmic Policing in Canada, The Citizen Lab
- Algorithmic Policing in Canada Explained, The Citizen Lab
Submitting Written Comments
The Board is no longer accepting submissions for this consultation. Thank you to the many individuals and organizations who provided their feedback.
Outcome
The Board approved the proposed Use of Artificial Intelligence Technology Policy at its meeting of February 28, 2022. The report to the Board on the proposed Policy describes some of the changes made to the Policy following the consultation process, and discusses some recommendations that were heard but not addressed in the final Policy. In particular, changes in the final Policy include the elucidation and development of the guiding principles, clearer guidance for meaningful consultation at all stages of the AI technology adoption process, several improvements to the risk categories, better definitions for the terms used in the Policy, and enhancements to the post-deployment monitoring and reporting regime. Executive Director Ryan Teschner and Senior Advisor Dubi Kanengisser gave a presentation to the Board on the proposed Policy during the Board meeting. The approved Policy has been posted to the Board Policies page on the Board's website.