
Raising ‘Responsible AI’ from the Ground Up

CBS’s Hongseok Namkoong discusses the challenges of operationalizing responsible AI and the ethical implications surrounding it.

By Tom di Mino
November 29, 2023 | Digital Future
[Image: A panel discussion at the recent CBS conference “Challenges in Operationalizing Responsible AI.”]


About the Researcher(s)

Hongseok Namkoong


Assistant Professor of Business
Decision, Risk, and Operations Division


In the wake of the Biden administration’s “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” industry and academic leaders in machine learning and artificial intelligence have been busy digesting and interpreting its implications for tech at large. But the discourse over “responsible AI” is nothing new to Columbia Business School's Digital Future Initiative (DFI).

Earlier this fall, DFI conducted a workshop that brought together some of the leading experts on operationalizing responsible AI. Hosted by Omar Besbes, the Vikram S. Pandit Professor of Business, and organized by Assistant Professor Hongseok Namkoong, the event included thought leaders from Harvard, MIT, LinkedIn, and Meta, among others.

Namkoong is an interdisciplinary scholar working at the interface of AI and operations management. CBS spoke with Namkoong to discuss his takeaways from the workshop and what responsible AI means — both today and for the foreseeable future.

CBS: How did you become interested in responsible AI?

Hongseok Namkoong: My particular field is still nascent, by and large, as a research area. Its connection to topics like responsible AI is this: If you try to apply AI models in a high-stakes domain, like a large business, you soon realize that you can’t always trust the output of an AI, because it can silently fail in unexpected ways. It doesn’t fully grasp the extent of what it doesn’t know.

All of this goes under slightly different names, like robustness, fairness, causality, and so on. And all these different rules that we encode in AI are learned from its training data. You could think of data as a byproduct of the socioeconomic system that we’re operating under.
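The kind of silent failure Namkoong describes can be made concrete with a small sketch: a model that abstains when its confidence is low, deferring to a human instead of returning an answer it cannot support. (This is an illustrative sketch, not code from Namkoong's research; the probabilities and the 0.8 threshold are hypothetical stand-ins for any model's output.)

```python
# Sketch: abstain on low-confidence inputs instead of silently
# returning an unreliable prediction.

def predict_or_abstain(probs, threshold=0.8):
    """probs: label -> probability. Return the top label only if the
    model is confident enough; otherwise return None (abstain)."""
    label, confidence = max(probs.items(), key=lambda kv: kv[1])
    if confidence < threshold:
        return None  # defer to a human or a fallback process
    return label

# A confident prediction passes through...
print(predict_or_abstain({"approve": 0.95, "deny": 0.05}))  # approve
# ...but a near-coin-flip output is flagged rather than trusted.
print(predict_or_abstain({"approve": 0.55, "deny": 0.45}))  # None
```

The design choice is the point: the system's failure mode becomes an explicit, auditable "I don't know" rather than a silently wrong answer.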
[Image: Professor Hongseok Namkoong]


CBS: That raises the question of the purity of the data. Is there bias inherent in the data that the AI model itself is trained on? Are there privacy concerns — even ethical concerns — in terms of how the data set has been collected or scraped without someone’s consent?

Namkoong: Technology does not live in a vacuum; it's contingent upon the social context in which it was developed and is used. So, in some sense, these data sets all embody the capital interests and the social relations of the world that we live in. The same applies to all the models that we end up building off of this infrastructure. A lot of researchers have started adopting this concept of “infrastructure” because these data sets are very much like roads and sewers: once they're established, it's really hard to change them. So any models trained off of this data are going to essentially reproduce and replicate the power structures that we see in society.

In practice, every data set is biased. We live in a society full of structural racism. Any model we develop in that context will have these given limitations. To me, then, the problem before us is that we need to develop a language to capture that — either in a legal sense, a regulatory sense, a socially conscious sense, or a corporate sense. How are we going to articulate these biases and mitigate them? Some of it is quite straightforward — and some of it really isn't. 

CBS: What is CBS doing to advance the development of responsible AI?

Namkoong: CBS is pushing the envelope in providing a community in which we can have a grounded discussion on what it means to implement responsibility practices. What are the best practices, and what are the key challenges we're facing? All of these efforts don't happen in a vacuum, right? There are all kinds of vested capital interests that are interfering with or facilitating these endeavors. There are different types of regulatory pressures and compliance topics that corporate firms need to think about. With the AI workshop, I wanted to bring together folks who are on the ground trying to do things within their organizational context, under resource constraints. You could say that this is part of my identity as an interdisciplinary AI researcher: Resource constraints are something we too often ignore in AI academia, although they are a central focus of operations management.

CBS: Ideally, we’d have responsible AI by the end of the week. But realistically, how quickly can that happen? Does the pace of integrating it depend on how a company is using AI and how quickly it can scale? It seems like that would differ across industries, from organization to organization.

Namkoong: Exactly. It's also dependent on the extent to which interests are aligned. So a company like LinkedIn cares deeply about this, even at the C level, because it's a professional network. Incentives are extremely aligned in terms of LinkedIn's ability to handle AI with care and responsibility, and the trustworthiness of the platform is integral to its business model. But for other platforms, you can imagine the extent to which responsible AI is a central topic can vary substantially.

CBS: Do you think even the definition of responsible AI will have to be continuously revisited because of the nature of this technology and how rapidly it’s evolving?

Namkoong: For sure, I don't think there is a definition that people agree on. I don't think there is a definition that I even agree on in a consistent way. I mostly think of it as caring about optimal decisions. We have the current status quo. We need to be able to take gradient steps towards a better equilibrium. So, in that sense, what are the more responsible practices we can institute and operationalize, and what are the different parties that we can move to get buy-in? That’s often how I think about these things.

CBS: Judging by your workshop’s panel, one of the first steps for any organization would be to figure out how to actually audit its operational use of AI, to see how it compares to industry standards and benchmarks. 

Namkoong: Right. And that audit comes in multiple layers. One is compliance. What is the current legislation saying about the bare minimum that a company should be doing? For example, this is fairly well defined in banking. Commercial banks have for decades been subject to laws that say that whenever you deny an applicant a loan, you're responsible for giving them some information on why they were rejected. And your practices really cannot be discriminatory. So a lot of these AI lending startups, and even the largest commercial banks like Chase, have AI teams in charge of compliance of their AI models. Relatively speaking, the legal landscape there is fairly well established when it comes to operationalizing compliance.
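One layer of the audit Namkoong describes, checking that lending decisions are not discriminatory, can be sketched as a simple approval-rate comparison across groups. (This is an illustrative sketch, not any lender's actual methodology; the data, the group labels "A" and "B", and the 80% rule-of-thumb threshold are all hypothetical.)

```python
# Sketch: audit approval rates across applicant groups and flag any
# group whose rate falls well below the best-treated group's rate.

from collections import defaultdict

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_flags(rates, threshold=0.8):
    """Flag groups whose rate is below `threshold` x the highest rate."""
    best = max(rates.values())
    return {g: r / best < threshold for g, r in rates.items()}

# Hypothetical decisions: group A approved 8/10, group B approved 5/10.
decisions = ([("A", True)] * 8 + [("A", False)] * 2 +
             [("B", True)] * 5 + [("B", False)] * 5)
rates = approval_rates(decisions)       # {"A": 0.8, "B": 0.5}
flags = disparate_impact_flags(rates)   # B: 0.5/0.8 = 0.625 < 0.8 -> flagged
```

A real compliance pipeline would go much further (explanations for individual denials, proxy-variable analysis, legal review), but the shape is the same: measure, compare, and flag before decisions ship.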

CBS: What would you want people to know about machine learning and AI that has gone largely ignored in all the discourse?

Namkoong: My sense is that the set of people who are actually making progress in AI is a little bit separate from the people who are engaging in these “doomer debates.” I have never seen a productive debate on that topic where anyone has walked away with a deeper understanding of the space than they had before. That's why I intentionally chose to focus on operationalizing best practices in this workshop. I feel like that's something that's really difficult to do, particularly given a high-interest-rate environment with resource constraints.

How do I convince my boss's boss to assign more resources so we can develop better best practices that allow us to operate in a more responsible way? That's not an easy thing to do, and it comes with a whole lot of different nuances. That's something that our MBA population would be excellent at doing.
