College AI Courses Get an Ethics Makeover

By Jeremy Hsu | April 26, 2018 7:39 pm
Poster for the course “Artificial Intelligence Methods for Social Good.” Credit: Fei Fang | Carnegie Mellon University

Years after it became a running gag on HBO’s show “Silicon Valley,” the idea of companies automatically “making the world a better place” through profit-driven technological development has lost much of its shine. The next generation of computer engineers and tech entrepreneurs may benefit from a more socially conscious education that combines training in artificial intelligence with teachings on societal issues and ethics.

A growing number of universities such as Harvard and Stanford have been introducing or developing new courses that teach computer science students about ethics and the societal implications of AI technology. But Carnegie Mellon’s Artificial Intelligence Methods for Social Good course may go even further with a hands-on experience that requires students to apply what they learn about AI techniques to societal issues in healthcare, social welfare, security and privacy, and environmental sustainability.

“People are realizing that AI is not just another technique; there are important aspects of society we need to think about and discuss,” said Fei Fang, an assistant professor at the Institute for Software Research at Carnegie Mellon University in Pittsburgh. “The reason why I say this is different from other courses is that the emphasis of this course is to link AI methods directly to the societal challenges we are facing.”

The spring semester now drawing to a close marks the first time Fang has taught this course, which includes a 12-unit version geared toward master’s and Ph.D. students in computer science and engineering. Early interest has been relatively strong: Fang had to expand the class after initially capping enrollment at just 30 students.

Part of the course introduces popular AI methods such as pattern recognition and machine learning algorithms. But the course also dives into real-life examples of how various AI techniques have been used to tackle societal issues such as figuring out the best traffic patterns or protecting endangered animals from poachers. The final project requires students to propose how certain AI methods could make a positive impact on a particular issue.

The course readings include a research paper on software that has randomized roadway security checkpoints and canine patrol routes at Los Angeles International Airport (LAX) since 2007. Another reading covers a machine learning technique that analyzed satellite imagery of five African countries to extract measures of socioeconomic activity. Several readings also touch on the challenges of regulating related technologies such as self-driving cars.

Fang’s own work may also serve as inspiration for students. She helped develop an AI system that enables drones armed with thermal infrared vision to automatically detect people and animals at night. Such high-flying surveillance is being tested by a wildlife conservation group called Air Shepherd at national parks in Africa.

The course also features a number of guest lectures by experts who have been conducting such research or even developing related applications. “Much of the work we’ll introduce in the course is already being tested in the field or even deployed,” Fang said.

Fang hopes that her course’s format, which pairs teaching AI techniques with applying them to societal issues, may inspire other educators seeking to create similar courses. She pointed to a comparable course taught at the Center for AI and Society at the University of Southern California in Los Angeles. But even with the growing urgency to teach computer science ethics, most courses apparently teach AI methods and ethics in isolation from one another.

“It’s more like AI researchers are working on the AI part while other researchers from philosophy departments or the law school discuss the implications of such AI,” Fang said. “It would be good if there was deeper collaboration between the AI researchers and non-computer science researchers who care about the ethics aspects of AI.”


Lovesick Cyborg

Lovesick Cyborg examines how technology shapes our human experience of the world on both an emotional and physical level. I’ll focus on stories such as why audiences loved or hated Hollywood’s digital resurrection of fallen actors, how soldiers interact with battlefield robots and the capability of music fans to idolize virtual pop stars. Other stories might include the experience of using an advanced prosthetic limb, whether or not people trust driverless cars with their lives, and how virtual reality headsets or 3-D film technology can make some people physically ill.

About Jeremy Hsu

Jeremy Hsu is a journalist who writes about science and technology for Scientific American, Popular Science, IEEE Spectrum and other publications. He received a master’s degree in journalism through the Science, Health and Environmental Reporting Program at NYU and currently lives in Brooklyn. His side interests include an ongoing fascination with the history of science and technology and military history.

