This essay is part of the Scientific American & Macmillan Learning STEM Summit. The STEM Summit is an annual event that attracts diverse stakeholders, including teachers, policy makers, journalists, entrepreneurs and students. The theme of the 2019 Summit is “The Future of Work,” and the event will explore critical questions such as: What are we doing to prepare students for careers in our automated future? What skills—both “hard” and “soft”—will students need to thrive in the “4th Industrial Revolution”? And what strategies, tools and technologies will best help students achieve that success? You can learn more about the annual event and view the livestream of this year’s Summit on Thursday, September 26th.
As millions of students head back to school this fall, it is worth looking at how the education they receive now will prepare them to take on society’s big new challenges. Concerns about personal privacy are growing in an age of increasing digital corporate surveillance, social media platforms have become centralized points of attack for hate-mongers, and we see a deficit of trust in information found online. Technology is emerging as one of the most prolific sources of new headaches for politicians and voters alike.
As we begin another semester, how are our educational institutions preparing the next generation of leaders to deal with these new digital problems? One interesting trend is in STEM (science, technology, engineering and math) education. Over the past 40 years, the number of graduate students studying STEM has more than doubled to almost 700,000. Yet over that same time period, relatively little has been done to educate those students about the political, psychological, economic, social and ethical dimensions of their work.
Yesterday’s STEM curriculum produced an environment where tech platforms and products were developed in isolation from the broader effects they had on society. We need to update the syllabus so society gains a wider understanding of both the good and bad that come with massively accelerated technological development.
Concerns about how emerging technology will be used not just for its promised benefits but also to wreak havoc are not new. In 1951 Alan Turing presented a paper titled “Intelligent Machinery, A Heretical Theory.” He predicted: “It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers…at some stage therefore we should have to expect the machines to take control.”
Finding ways to enhance the STEM curriculum so graduates have a deeper understanding of how their work affects humanity will at least give us a fighting chance against technological takeover. Enhancing education in this way will imbue society with STEM professionals who are also trained to investigate and describe STEM’s impact on humanity and to help drive a conversation that is infused with technical expertise, as well as broader goals.
To help understand how we can enhance STEM education, I have been meeting with academics and experts in the U.S. and abroad to think through the problem. A key point we have learned is that there exists a set of professors who are already working on enhancing STEM education. These academics span many disciplines, from computer science and philosophy to the social sciences, economics and many branches of ethical study. Often, their work is described as adding “ethics” to computer science, but its scope is broader than that single philosophical discipline of morality.
These professors are pioneers. They do this work because they feel called to it, because they are convinced enhancing STEM education is critical to developing well-rounded scientists and citizens. A number have told me that at the meetings I’ve held, for the first time they have started to feel a sense of “community,” a recognition that many others are trying to make progress in the same area. These individuals often find their work is only now starting to be understood as a legitimate part of a STEM education. Many still struggle with the perception that “real” science or STEM does not need any connection to the human experience.
Another key point concerns the complexity of the issues. The implications of a technology are hard to predict. And if the technology is flexible—say, an algorithm for machine learning—it can be picked up and used in unexpected ways, making an early understanding of potential consequences much, much harder. Individual employees may have only a limited view of the system they are working on and even less impact on the business model or practices of their employer. It’s not clear that one can always know beforehand how “ethical” a technology will end up being.
All of these factors argue for a multifaceted approach. First, we need to commit to enhancing STEM education. The pace of technological change is so fast now that we can no longer rely on individual STEM practitioners developing this broader awareness on their own, as we have to date. We must support the pioneers in this field and encourage and expect institutions of learning to develop and bolster these programs. We should encourage the creation of a community of scholars in this area.
Second, we need to build on pioneering work to cover more than “ethics in artificial intelligence” and to include more than university computer science classes. I’ve met people working on curricula for all ages, including the very young. We should support this form of exploration.
In addition, we must look at the related but separate structural issues of whether today’s business models encourage bad behavior, and develop ways for citizens and consumers to protect our own mental and public health. Enhancing STEM education will not solve all of these problems. Doing so, however, gives us a broad set of technologists and scientists who can help all of us develop a better understanding of, and innovation in, alternative solutions.
At Mozilla, which I chair, we are working toward this vision through the Responsible Computer Science Challenge. Along with Omidyar Network, Schmidt Futures and Craig Newmark Philanthropies, Mozilla is providing $3.5 million in funding to undergraduate computer science professors to integrate ethics into their curricula.
In April we announced the first round of funding. The winning ideas are novel and encouraging. They push students studying fields such as artificial intelligence and data analytics to investigate potential ethical and societal dilemmas related to their work. And they compel STEM students to read relevant case studies, engage with ethical reasoning modules, and more.
Recently, my colleague Kathy Pham, who is co-leading the Responsible Computer Science Challenge with Jenn Beard, wrote about her hopes for the initiative: “By recasting computer science and social science as compatible, and not mutually exclusive, we can make real progress on these problems, and also help prevent future ones.”
Originally posted by: Mitchell Baker