For Austinites facing future weather disasters, artificial intelligence could provide lifesaving information, and according to researchers at The University of Texas at Austin, you might even trust it more than a human.
Through a partnership with the city of Austin, a team of UT researchers is nearing the end of a three-phase, interdisciplinary exploration of AI’s ability to provide immediate, accurate emergency guidance across a wide range of languages. Funded by a grant from the city, the project combines rhetorical analysis and technical development to create an AI chatbot that can deliver real-time emergency messaging in different languages.
Lucy Atkinson, an associate professor in the Stan Richards School of Advertising & Public Relations at the Moody College of Communication, is a core member of the team, providing her expertise in communication strategies and use of artificial intelligence.
“Trust and receptivity to AI is not uniform across populations,” Atkinson says. “We ran into a lot of challenges with recruiting people. … The city told us that there are communities who are a little less trusting of institutions, and so that took a lot of effort, but we were able to get some really great insights.”
In the first phase, the research team surveyed non-English-speaking Austinites about their views on weather emergency preparedness, government trust and AI-generated communications. Aiming to reflect Austin’s diverse community of language speakers, Atkinson and her colleagues gathered responses from speakers of Spanish, Vietnamese, Arabic, Chinese and Korean.
After examining these communities’ attitudes toward AI, Atkinson and her team launched the second phase of their study: comparing engagement with human-written and AI-written content.
To do this, researchers ran AI-generated and human-written emergency preparedness advertisements on Facebook and analyzed engagement metrics such as clicks, comments and likes to gauge how each version performed.
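In outline, that comparison comes down to an engagement rate for each ad variant. Here is a minimal sketch in Python with entirely hypothetical numbers; the study’s actual figures are not published in this article:

```python
# Hypothetical engagement data for the two ad variants; none of these
# numbers come from the UT study.
ads = [
    {"source": "ai",    "clicks": 212, "comments": 31, "likes": 148, "impressions": 9800},
    {"source": "human", "clicks": 187, "comments": 44, "likes": 203, "impressions": 10150},
]

for ad in ads:
    # Total interactions divided by impressions gives a simple engagement rate.
    interactions = ad["clicks"] + ad["comments"] + ad["likes"]
    rate = interactions / ad["impressions"]
    print(f'{ad["source"]:>5}: {interactions} interactions, {rate:.2%} engagement rate')
```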
The ads ran in February 2025, a month Atkinson says was chosen specifically for its high likelihood of atypically cold weather, unexpected freezes and heavy stress on the city’s infrastructure.

Results revealed that AI-generated messages created more concern among viewers, while human-written ads increased feelings of safety. The research team knew that feeling safe could undermine the effectiveness of emergency messaging.
“The initial thought may be, ‘Well, humans are better than AI since it makes people feel safer,’” Atkinson says. “But if people feel safe, they may not be motivated or inclined to actually be prepared. They may feel: ‘Oh, I’m fine. I don’t need to do anything.’ It’s not that we want to make people scared, but we do want to make them understand the severity of what’s going on.”
Further analysis revealed another surprising finding: AI-generated messages were often perceived as more credible than human-written ones.
Junfeng Jiao, another member of the research team, described the generational divide as one of the biggest influences on trust in AI. A 2025 report from the AI platform Pearl.com found that about 40% of both Gen Z and millennials say they trust AI more than humans in most cases, significantly higher than older generations. The report also found that Gen Alpha, children born between 2010 and 2024, use AI 280% more than other generations.
“I’m not a psychologist, but I think people, depending on age group … trust that all technology is on our side,” Jiao says. “They think technology is neutral and will give you a robust and accurate answer. … Some people just trust technology more than people.”
Jiao is an associate professor in the Community and Regional Planning Program in the School of Architecture and the director of UT’s Ethical AI program. He primarily handles the final phase of the project, which involves developing the actual AI chatbot. He and Atkinson began their research together after submitting similar proposals to the city that were later merged into one project.
“I’m more from the technical side, she’s from the response-emergency side, and the city was saying, ‘Why not combine the proposals and they just do it together?’” Jiao says.

As the project entered its final phase, Atkinson, Jiao and their team were still wrestling with one persistent challenge: translation accuracy.
To train the AI, researchers had to translate messaging from other languages into English and back into the original language. The technology, however, struggled with languages’ many regional variations and with words that have no English equivalent.
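The procedure the article describes is a round-trip, or back-translation, check. Below is a minimal sketch of the idea; `translate` is a hypothetical placeholder for whatever machine-translation service gets plugged in, not the team’s actual tooling, and the exact-match comparison is deliberately naive:

```python
def translate(text: str, source: str, target: str) -> str:
    """Stand-in for a machine-translation call; replace with a real service."""
    raise NotImplementedError("plug in an MT service here")

def round_trip_check(message: str, lang: str) -> bool:
    """Translate a message into English and back, then flag drift.

    In practice a bilingual reviewer (or a similarity score) would judge
    the back-translation, since regional terms and words with no English
    equivalent rarely survive an exact round trip.
    """
    english = translate(message, source=lang, target="en")
    back = translate(english, source="en", target=lang)
    return back == message
```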
“Right now, most of the language models are trained by the English context, and I can definitely see that,” Jiao says. “But specialized language models will be developed. There will be Chinese, Japanese, French or Spanish and some subtle ways for interactions to be captured by these different AI large language models.”
Despite these shortcomings, Atkinson remains hopeful that AI translation will continue to advance, noting that the technology has made big improvements just within the time frame of the study.
Still, language is only one part of a much bigger conversation about AI accuracy. Jiao and Atkinson are both invested in finding ways to use artificial intelligence to better society, but to do that, users must be able to trust the chatbot.
AI tools are flawed and make frequent mistakes, but Jiao says five characteristics can make AI more ethical: safety, unbiasedness, transparency, explainability and inclusivity.
“If you look closely, not every AI system has these five characteristics,” Jiao says. “Some are biased, some are not safe, most are not explainable and some are not transparent, so we are still on the way to trying to achieve that goal.”
The data collection concluded in February 2025, and Jiao says he hopes the project and the chatbot will be complete by the end of the year. Pending the city’s approval, the chatbot could be integrated into Austin’s emergency preparedness system.
Jiao and Atkinson are both dedicated to using AI as a tool to create a better, more inclusive society, but the technology still has the potential to cause irreparable harm through the spread of misinformation, job displacement and the environmental toll of AI data centers. AI can be both promising and precarious, Atkinson says.
“I think AI, just as with all technology, can be a force for good and it can be a force for evil,” Atkinson says. “AI is one of those things where it’s not going away. It’s only getting bigger and bigger in terms of how we use it in our daily lives.”