At last weekend’s Bicentennial Commencement, Emily Warren Roebling was awarded the first posthumous honorary doctorate in the history of RPI. Actor Liz Wisan, who played Roebling on HBO’s The Gilded Age, portrayed her in the ceremony and at Colloquy. In keeping with RPI’s tradition of innovation, Roebling’s remarks were generated by an AI trained on her writings and historical archives, including letters to her husband, Washington Roebling, RPI Class of 1857.

Project Bridge, as this collaboration was called, was planned by a team including Rensselaer archivist Jenifer Monger; computer scientist Sola Shirai ’24, Ph.D.; and Roebling descendant Antoinette Maniatty, Ph.D., chair of the Department of Mechanical, Aerospace, and Nuclear Engineering at RPI. Jim Hendler, Ph.D., director of RPI’s Future of Computing Institute and Tetherless World Chair of Computer, Web, and Cognitive Sciences, led the generative AI work of the team.

We asked Hendler a few questions about how RPI’s AI research built this bridge between the 19th and 21st centuries and how it adds to our understanding of generative AI.

What makes RPI unique in the study and exploration of the capabilities of generative AI?

Many people are using generative AI, but to really use it correctly, you need to understand prompt engineering, the way you refine queries and commands to optimize output, so we try to focus on that. ChatGPT and these other large language models are trained on general datasets. But you can make one that's very specific to an area. For example, I just had a meeting with a student who's working on how we can make much better generative AI for medical imagery. So I'd say that, one, we use all of the tools that generative AI offers, and, two, we extend those tools to use them in new domains, as we did with Project Bridge.
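As a loose illustration of the prompt refinement Hendler describes, a refined query typically adds a role, explicit constraints, and domain context that a bare question lacks. Everything below (the function names, the role text, the constraints) is invented for this sketch and is not drawn from Project Bridge:

```python
# A minimal sketch of prompt refinement: the same question asked two ways.
# All strings here are illustrative, not actual Project Bridge prompts.

def bare_prompt(question: str) -> str:
    """An unrefined query: just the question."""
    return question

def refined_prompt(question: str, role: str,
                   constraints: list[str], context: str) -> str:
    """A refined query: assigns the model a role, states constraints,
    and supplies domain context before asking the question."""
    constraint_text = "\n".join(f"- {c}" for c in constraints)
    return (
        f"You are {role}.\n"
        f"Constraints:\n{constraint_text}\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

question = "Describe the completion of the Brooklyn Bridge."
print(bare_prompt(question))
prompt = refined_prompt(
    question,
    role="a historian of 19th-century American engineering",
    constraints=[
        "Use only facts documented before 1903.",
        "Say 'I don't know' rather than guess.",
    ],
    context="The Brooklyn Bridge opened on May 24, 1883.",
)
print(prompt)
```

The refined version constrains what the model may draw on, which is the kind of control a general-purpose chatbot does not impose by default.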

How did you apply that approach to Project Bridge?

With Project Bridge, we were really trying to personalize ChatGPT, and that was a challenge because it’s not built to do that. We had a very specific purpose in mind. We weren’t trying to make some kind of chatbot where you could sit and talk to Emily Roebling. Some people are trying to “bring back the dead,” to pretend you can chat with Albert Einstein or something, but that doesn’t really work. What we were doing was very targeted — looking at historical and archival documents that let us understand what people were doing at the time, what they knew at the time — and building that in. We were also trying to capture the personal style of Emily Roebling, so we used a lot of her own writing to train the AI to base its answers on it, and we had family members who could help us “edit” in terms of how she might say things in public talks, as opposed to personal letters. It’s a fascinating combination of computing, history, and social science, because to do this sort of project, you need people who can really understand whether the answers being generated make sense in context.
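One way to picture the targeted, document-grounded approach Hendler contrasts with open-ended chat is a small retrieval step that pulls the most relevant archival excerpts into the prompt, alongside writing samples that set the voice. Everything below (the placeholder excerpts, the overlap scoring, the field names) is a simplified, hypothetical sketch, not the project's actual pipeline:

```python
# Hypothetical sketch: ground answers in archival excerpts by simple
# keyword overlap, then condition the prompt on writing-style samples.
# The excerpts below are placeholders, not real archival text.

archive = {
    "letter_1882": "Notes on caisson work and cable spinning for the bridge",
    "letter_1883": "Remarks on the opening ceremony of the Brooklyn Bridge",
    "diary_1870": "Daily accounts of household matters in Trenton",
}

def tokenize(text: str) -> set[str]:
    return set(text.lower().split())

def retrieve(query: str, docs: dict[str, str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query; return top-k ids."""
    q = tokenize(query)
    scored = sorted(docs, key=lambda d: len(q & tokenize(docs[d])), reverse=True)
    return scored[:k]

def build_prompt(query: str, docs: dict[str, str],
                 style_samples: list[str]) -> str:
    """Assemble a prompt grounded in retrieved excerpts and style samples."""
    excerpts = "\n".join(docs[i] for i in retrieve(query, docs))
    style = "\n".join(style_samples)
    return (f"Answer in the voice of these writing samples:\n{style}\n\n"
            f"Use only these sources:\n{excerpts}\n\nQuestion: {query}")

top = retrieve("the opening of the Brooklyn Bridge", archive, k=1)
print(top)  # the 1883 letter scores highest on overlap
```

A production system would use semantic embeddings rather than word overlap, but the shape is the same: the model only sees material the team has vetted, which is what makes "editing" by archivists and family members possible.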

What did you learn from Project Bridge in terms of what needs to be avoided when using generative AI?

I think one of our biggest lessons was that in some ways ChatGPT is overly helpful. We were trying to build a historically correct narrative, but with certain prompts, ChatGPT will say historically incorrect things. For example, I jokingly asked it what Emily Roebling’s response was when she saw an unidentified flying object over the Brooklyn Bridge. Of course, she never saw one. ChatGPT, though, because we were asking as if it had happened, answered that she would have said XYZ. And so maintaining historical accuracy was probably the hardest part of this entire project.
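The failure mode Hendler describes, where the model accepts a false premise embedded in the question, is sometimes mitigated with an explicit premise check before generation. The sketch below shows one hypothetical way to do that with a simple anachronism filter; the term list and both functions are invented for illustration:

```python
# Hypothetical sketch: flag questions whose premises mention things
# outside the documented historical record before letting the model answer.
# The term list is illustrative only.

ANACHRONISTIC_TERMS = {
    "unidentified flying object", "ufo", "television", "radio broadcast",
}

def premise_check(question: str) -> bool:
    """Return True if the question looks historically answerable,
    False if it smuggles in an undocumented or anachronistic premise."""
    q = question.lower()
    return not any(term in q for term in ANACHRONISTIC_TERMS)

def answer(question: str) -> str:
    """Refuse false premises instead of elaborating on them."""
    if not premise_check(question):
        return "There is no historical record supporting that premise."
    return "(pass the question on to the language model)"

print(answer("What did Emily Roebling say when she saw an unidentified flying object over the Brooklyn Bridge?"))
```

A fixed term list obviously cannot catch every false premise, which is why, as the interview notes, human reviewers with historical knowledge remained the real safeguard.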

Do you have any other takeaways about generative AI from working on Project Bridge?

We’ve all known for a long time that ChatGPT plus humans outperforms either alone, but putting together a team that had the technical knowledge, the historical depth, and the ability to find the correct documents made this possible. We also needed somebody from the family who was able to look at the results and say, yes, that’s plausible. So what we learned is that if you really want to do a complex project using ChatGPT, you have to have a team of people. You need people who know the domain. You need people who know the technology and the context around the technology. To really do significant things with generative AI, we’re going to have to work in multidisciplinary teams. We’re already assembling some to address more pressing problems, and we’re creating curricular activities that will bring these teams together to tackle some of the global challenges facing humanity.