PALO ALTO, Calif. — The medical profession has an ethic: First, do no harm.
Silicon Valley has an ethos: Build it first and ask for forgiveness later.
Now, in the wake of fake news and other troubles at tech companies, universities that helped produce some of Silicon Valley’s top technologists are hustling to bring a more medicine-like morality to computer science.
This semester, Harvard University and the Massachusetts Institute of Technology are jointly offering a new course on the ethics and regulation of artificial intelligence. The University of Texas at Austin just introduced a course titled “Ethical Foundations of Computer Science” — with the idea of eventually requiring it for all computer science majors.
And at Stanford University, the academic heart of the industry, three professors and a research fellow are developing a computer science ethics course for next year. They hope several hundred students will enroll.
The idea is to train the next generation of technologists and policymakers to consider the ramifications of innovations — like autonomous weapons or self-driving cars — before those products go on sale.
“It’s about finding or identifying issues that we know in the next two, three, five, 10 years, the students who graduate from here are going to have to grapple with,” said Mehran Sahami, a popular computer science professor at Stanford who is helping to develop the course. He is renowned on campus for bringing Mark Zuckerberg to class.
“Technology is not neutral,” said Professor Sahami, who formerly worked at Google as a senior research scientist. “The choices that get made in building technology then have social ramifications.”
The courses are emerging at a moment when big tech companies have been struggling to handle the side effects — fake news on Facebook, fake followers on Twitter, lewd children’s videos on YouTube — of the industry’s build-it-first mind-set. They amount to an open challenge to a common Silicon Valley attitude that has generally dismissed ethics as a hindrance.
“We need to at least teach people that there’s a dark side to the idea that you should move fast and break things,” said Laura Norén, a postdoctoral fellow at the Center for Data Science at New York University who began teaching a new data science ethics course this semester. “You can patch the software, but you can’t patch a person if you, you know, damage someone’s reputation.”
To be accredited by ABET, a global accreditation group for university science and engineering programs, computer science programs must ensure that students understand the ethical issues related to computing. Some computer science departments have folded the topic into a broader class, and others offer stand-alone courses.
But until recently, ethics did not seem relevant to many students.
“Compared to transportation or doctors, your daily interaction with physical harm or death or pain is a lot less if you are writing software for apps,” said Joi Ito, director of the M.I.T. Media Lab.
One reason that universities are pushing tech ethics now is the popularization of powerful tools like machine learning — computer algorithms that can autonomously learn tasks by analyzing large amounts of data. Because such tools could ultimately alter human society, universities are rushing to help students understand the potential consequences, said Mr. Ito, who is co-teaching the Harvard-M.I.T. ethics course.
“As we start to see things, like autonomous vehicles, that clearly have the ability to save people but also cause harm, I think that people are scrambling to build a system of ethics,” he said. (Mr. Ito is a director of The New York Times Company.)
Last fall, Cornell University introduced a data science course in which students learned to deal with ethical challenges, such as biased data sets that include too few lower-income households to be representative of the general population. Students also debated the use of algorithms to help automate life-changing decisions like hiring or college admissions.
“It was really focused on trying to help them understand what in their everyday practice as a data scientist they are likely to confront, and to help them think through those challenges more systematically,” said Solon Barocas, an assistant professor in information science who taught the course.
In another Cornell course, Karen Levy, also an assistant professor in information science, is teaching her students to focus more on the ethics of tech companies.
“A lot of ethically charged decision-making has to do with the choices a company makes: what products they choose to develop, what policies they adopt around user data,” Professor Levy said. “If data science ethics training focuses entirely on the individual responsibility of the data scientist, it risks overlooking the role of the broader enterprise.”
The Harvard-M.I.T. course, which has 30 students, focuses on the ethical, policy and legal implications of artificial intelligence. It was spurred and financed in part by a new artificial intelligence ethics research fund whose donors include Reid Hoffman, a co-founder of LinkedIn, and the Omidyar Network, the philanthropic investment firm of Pierre Omidyar, the eBay founder.
The curriculum also covers the spread of algorithmic risk scores that use data — like whether a person was ever suspended from school, or how many of his or her friends have arrest records — to forecast whether someone is likely to commit a crime. Mr. Ito said he hoped the course would spur students to ask basic ethical questions like: Is the technology fair? How do you make sure that the data is not biased? Should machines be judging humans?
Some universities offer such programs in their information science, law or philosophy departments. At Stanford, the computer science department will offer the new ethics course, tentatively titled “Ethics, Public Policy and Computer Science.”
Expectations for the course are running high, in part because of Professor Sahami’s popularity on campus. About 1,500 students take his introductory computer science course every year.
The new ethics course covers topics like artificial intelligence and autonomous machines; privacy and civil rights; and platforms like Facebook. Rob Reich, a Stanford political science professor who is helping to develop the course, said students would be asked to consider those topics from the point of view of software engineers, product designers and policymakers. Students will also be assigned to translate ideal solutions into computer code.
“Stanford absolutely has a responsibility to play a leadership role in integrating these perspectives, but so does Carnegie Mellon and Caltech and Berkeley and M.I.T.,” said Jeremy Weinstein, a Stanford political science professor and co-developer of the ethics course. “The set of institutions that are generating the next generation of leaders in the technology sector have all got to get on this train.”