The Tricky Ethics of Using AI in Journalism



KLARA BAUTERS, HOST: For a lot of us, ChatGPT has been our first introduction to the seeming magic of artificial intelligence. For news organizations, ChatGPT is only the tip of the iceberg. Big outlets like Bloomberg, the Washington Post and Reuters have all created their own AI tools, including ones that turn massive amounts of data into information for their reporting. But as Pascal Hogue reports, journalists are grappling with the ethics of using these new tools.


DHRUMIL MEHTA: So today we’re gonna make some charts with code. 


PASCAL HOGUE, BYLINE: On a weekday evening, about 20 people are gathered around a large table at Columbia Journalism School with their laptops, eager to learn an increasingly relevant skill: harnessing AI to analyze data.


MEHTA: If you’re stuck just raise your hand or you can put one of these pink stickies on the back of your computer and we’ll come by and help you when we see it.


HOGUE: Even here, what AI can do is moving so fast that the instructors themselves are barely keeping pace. Dhrumil Mehta, one of the instructors, tells the attendees that it's reordering his teaching.

 

MEHTA: Lessons that I used to give at like near the end of the second semester of the data program are now like things that we talk about at the very beginning. 

 

HOGUE: Sarah Gotfredsen is an experienced journalist who uses a lot of big data in her reporting. She's taking the workshop to learn how to prompt ChatGPT to write code that will rapidly visualize numbers.


SARAH GOTFREDSEN: So for projects that are more coding heavy, it has been very useful for me. I would never use it to produce texts or content that people will read – I’ll only use it for behind the scenes reporting and data analysis.
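
A minimal sketch, assuming Python with pandas and matplotlib, of the kind of chart-making code the workshop covers; the file name and column names here are hypothetical stand-ins:

    import pandas as pd
    import matplotlib.pyplot as plt

    # Load a hypothetical CSV of monthly donation totals.
    # Assumed columns: "month" and "total".
    df = pd.read_csv("donations.csv")

    # Draw a quick bar chart, the sort of visualization a reporter
    # might prompt ChatGPT to generate for behind-the-scenes analysis.
    plt.bar(df["month"], df["total"])
    plt.xlabel("Month")
    plt.ylabel("Total donations (USD)")
    plt.title("Monthly donations")
    plt.tight_layout()
    plt.savefig("donations_chart.png")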


 

HOGUE: Of course, there are risks with using these tools. But Gotfredsen says journalists have had to be cautious before.

 

GOTFREDSEN: It’s kind of like how we approached Wikipedia when it first came out. It’s a great source of information, but it’s also very wrong, and you shouldn’t take it at face value.

 

HOGUE: That analogy to the early days of Wikipedia is… close.


ALEX MAHADEVAN: Maybe? Because it still could be edited by anyone, but they didn't have a lot of the protections on important pages and important information that we have nowadays.


HOGUE: The Poynter Institute's Alex Mahadevan says the difference is that Wikipedia has built-in transparency.


MAHADEVAN: If I don’t trust it, I can go follow a link and see the source for myself. You can’t really do that with ChatGPT or Claude or Bing. You can’t trust anything that it spits out. 

 

HOGUE: Mahadevan has taught media literacy to hundreds of journalists and news consumers. He says that right now, AI works like a "black box," an impenetrable system, so when it comes to journalism, there's always going to be a question of trust.


MAHADEVAN: AI has been used by investigative journalists for a long time. They’ve used it to analyze documents, to analyze images. And generally, with good investigations, they have like a methodology box at the bottom where they explain, “here’s how we generated this data.” And so they would explain how the AI works. 

 

HOGUE: Back at Columbia Journalism School, Aarushi Sahejpal, one of the instructors, has some final thoughts: the fundamentals remain the same.

 

AARUSHI SAHEJPAL: Do what journalists do best. Ask questions.


HOGUE: And that means applying the same ethical standards to AI.


SAHEJPAL: You should always critique and fact-check and double-check what you see so that if you’re using technology, use it in a way that you’re not trying to dismay an audience or say something that is not true. And that comes from having working knowledge. 

 

HOGUE: And with AI influencing how we create and consume news, grasping its ins and outs helps journalists navigate the digital age like pros.


Full disclosure: ChatGPT wrote that last line.

 

Pascal Hogue. Columbia Radio News.

