Building “machines with morals” and addressing the ethics of artificial intelligence are among the goals of a new research and teaching centre announced today at the University of Guelph.
“AI has the potential to do harm and the potential to improve life,” said Prof. Graham Taylor, the academic director of U of G’s new Centre for Advancing Responsible and Ethical Artificial Intelligence (CARE-AI).
“We want to connect researchers trying to solve real problems that are important to people,” said Taylor, an engineering professor and holder of the Canada Research Chair in Machine Learning Systems. He is also a member of Toronto’s Vector Institute for Artificial Intelligence.
One of only a handful of Canadian centres of its kind, the new U of G centre will ensure that AI technologies – now being rapidly deployed in numerous fields from health care to feeding the world – benefit people and minimize harm. It also aims to influence public policy and regulations.
“We hope to help guide the development and implementation of AI, including ensuring that we don’t lose sight of the human side of this technology,” said Charlotte Yates, U of G provost and vice-president (academic), who helped launch the initiative.
Malcolm Campbell, U of G’s vice-president (research), added: “This interdisciplinary centre will help foster partnerships among U of G researchers and experts in private and public organizations, all looking to address real-world issues and challenges with people implementing artificial intelligence in a range of applications. With a focus on humanistic aspects of AI, it’s an excellent example of how U of G looks to improve life.”
The centre will involve almost 90 researchers and scholars from across campus and include an advisory panel of academic and industry leaders. It will focus on applying machine learning and AI to U of G strengths, including human and animal health, environmental sciences, and agri-food and the bio-economy.
“There are only so many resources on the planet. AI will allow us to optimize the limited resources we have and equalize opportunities,” said Mary Wells, dean of the College of Engineering and Physical Sciences.
Members of the new centre will look at the humanistic and social aspects of AI, she added. “Often we think of technology in isolation from social applications. When we use this intellectual property, we have a responsibility to society to do good.”
CARE-AI researchers will investigate methodologies, including learning algorithms, human-computer interfaces, data analytics, sensors and robots.
Humanities scholars can address moral and legal aspects of artificial intelligence, said philosophy professor Andrew Bailey, associate dean (research and graduate studies) for the College of Arts.
These aspects include AIs as potential entities with emotion and consciousness, and how humans and machines will interact, especially as intelligent machines begin to design and build themselves.
“It sounds very science fiction, but it’s rooted in actual technology that we can see on the horizon now,” Bailey said.