LSU to Embed Ethics in the Development of New Technologies, Including AI
April 28, 2022
Deborah Goldgaber, director of the LSU Ethics Institute and associate professor in the Department of Philosophy & Religious Studies, has received a $103,900 departmental enhancement grant from the Louisiana Board of Regents to begin to reshape LSU's science, technology, engineering, and math (STEM) curriculum around ethics and human values.

Deborah Goldgaber is the director of the LSU Ethics Institute and will lead the work to embed ethics in LSU STEM education and technology development.
Baton Rouge – While many STEM majors are required to take at least one standalone ethics course in addition to their "regular" science and engineering classes, LSU students in multiple disciplines will soon see ethics as more integral to STEM. By partnering with 10 faculty across LSU's main campus, from library science to engineering, Goldgaber will build open-source teaching modules to increase moral literacy – "knowing how to talk about values" – and encourage the development of more human-centered technology and design.
The LSU program will be modeled on Harvard's Embedded EthiCS program, where the last two letters of EthiCS stand for computer science. Embedded ethics education runs counter to the prevalent tendency to treat research objectives and ethics objectives as distinct or opposed, with ethics becoming "someone else's" responsibility.
"If we want to educate professionals who not only understand their professional obligations but become leaders in their fields, we need to make sure our students understand ethical conflicts and how to resolve them," Goldgaber said. "Leaders don't just do what they're told – they make decisions with vision."
The rapid development of new technologies has put researchers in her field, the world of Socrates and Rousseau, in the new and not-altogether-comfortable role of providing what she calls "ethics emergency services" when emerging capabilities have unintended consequences for specific groups of people.
"We can no longer rely on the traditional division of labor between STEM and the humanities, where it's up to philosophers to worry about ethics," Goldgaber said. "Nascent and fast-growing technologies, such as artificial intelligence, disrupt our everyday normative understandings, and most often, we lack the mechanisms to respond. In this scenario, it's not always right to 'stay in your lane' or 'just do your job.'"
Artificial intelligence, or AI, is increasingly helping humans make decisions, come up with answers, and discover solutions both faster and better than ever before. But since AI becomes intelligent by learning from established patterns (seen or unseen) in available data, it can also inherit existing prejudices and biases. Some data, such as a person's ZIP code, can inadvertently become a proxy for other data, such as a person's race.
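To make the proxy effect concrete, here is a minimal, hypothetical sketch in Python. It is not from the article, and the records are invented toy data: a rule that looks only at ZIP code never "sees" race, yet it still reproduces the historical approval gap between groups, because each ZIP code in the data is populated almost entirely by one group.

```python
# Illustrative sketch (not from the article): how a feature like ZIP code can
# act as a proxy for a protected attribute. All records below are made up.

# Hypothetical loan records: (zip_code, group, approved)
records = [
    ("70801", "A", 1), ("70801", "A", 1), ("70801", "A", 0),
    ("70805", "B", 0), ("70805", "B", 0), ("70805", "B", 1),
]

# A rule trained only on ZIP code never uses the protected attribute...
by_zip = {}
for zip_code, _, approved in records:
    by_zip.setdefault(zip_code, []).append(approved)
approval_rate_by_zip = {z: sum(v) / len(v) for z, v in by_zip.items()}

# ...but because each ZIP is populated almost entirely by one group,
# predicting from ZIP reproduces the historical gap between groups.
by_group = {}
for _, group, approved in records:
    by_group.setdefault(group, []).append(approved)
approval_rate_by_group = {g: sum(v) / len(v) for g, v in by_group.items()}

print("By ZIP:  ", approval_rate_by_zip)    # ≈ {'70801': 0.67, '70805': 0.33}
print("By group:", approval_rate_by_group)  # same gap, group never used
```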
"I think there's a fear that technology could get beyond our control, which is alienating," Goldgaber said. "Especially in such a massive terrain like AI, which already is supplementing, complementing, and taking over areas of human decision-making."
"For us to benefit as much as possible from emerging technologies, rights and human values have to shape technology development from the beginning," Goldgaber continued. "They cannot be an afterthought."
Goldgaber gives the example of a mortgage application to illustrate the so-called "black box problem" in AI and how technologies developed to increase efficiency can inadvertently undermine equality and fairness, including our ability to judge what's just or unjust for ourselves. Risk assessment based on large amounts of data has become one of the core applications of AI.
"You get a report that you're not accepted and don't get the loan, but you don't know why, or the basis on which the decision was made," Goldgaber said. "If you can't know and evaluate for yourself the kinds of reasons or logic used, it undercuts your rights and your autonomy. This goes back to the foundation of our normative culture, where we believe we have a right to know the reasons behind the decisions that affect our lives."
Western philosophy is based on the Enlightenment idea that humans have worth because they can reason and make decisions for themselves, and that because they have these capacities, they should also have the right to choose and act autonomously. AI, meanwhile, often delivers answers without any explanation of the reasoning behind the output; there is no further information to evaluate.
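The contrast Goldgaber describes can be sketched in a few lines of Python. The example below is not from the article; the scoring rule, threshold, and applicant figures are invented for illustration. It shows the same lending decision delivered two ways: as a bare verdict, and with the contributing factors reported so the applicant can evaluate the basis for it.

```python
# Illustrative sketch (not from the article) of the "black box" problem:
# the same decision, delivered with and without reasons. The scoring rule
# and threshold are invented for illustration only.

def opaque_decision(applicant: dict) -> str:
    """Black box: the applicant only ever sees the verdict."""
    score = 0.4 * applicant["income"] / 1000 - 0.6 * applicant["debt"] / 1000
    return "approved" if score >= 10 else "denied"

def explained_decision(applicant: dict) -> dict:
    """Same rule, but the contributing factors are reported alongside it."""
    factors = {
        "income contribution": round(0.4 * applicant["income"] / 1000, 1),
        "debt penalty": round(-0.6 * applicant["debt"] / 1000, 1),
    }
    score = sum(factors.values())
    return {
        "decision": "approved" if score >= 10 else "denied",
        "threshold": 10,
        "factors": factors,
    }

applicant = {"income": 42_000, "debt": 15_000}
print(opaque_decision(applicant))     # "denied" -- no basis given
print(explained_decision(applicant))  # verdict plus the reasons behind it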
Her collaborative curriculum development at LSU will span four areas: AI and data science; research ethics and integrity; bioethics; and human-centered design. Over the last year, she has also been collaborating with Hartmut Kaiser, senior research scientist at LSU's Center for Computation & Technology, on an AI-focused speaker series where ethics is one of the recurring themes.
"Our goal is to place LSU at the forefront of emerging efforts of the world's greatest universities and research institutions to embed ethics in all facets of knowledge production," Goldgaber said. "LSU students must know that what they do matters to the world we live in."
The LSU Ethics Institute was founded in 2018 with generous seed funding from James Maurin. It is the center for research, teaching, and training in the domain of ethics at LSU.