Like most technology advancements that came before it, artificial intelligence feels like the Wild West. But instead of Clint Eastwood’s Man With No Name, this villain (or antihero, depending on your outlook) emerges as RoboCop.
The lack of regulation amid AI’s exponential growth is probably exciting to most attendees of 2025’s South by Southwest (SXSW) Interactive conference in Austin, Texas. The festival began on March 6, and alongside futurism and psychedelic drugs, AI was a primary focus of sessions across industries.
The eager anticipation is easy to understand: AI’s uses seem beyond imagination, and the profit incentives are clear. A 2023 McKinsey & Company report found that generative AI is poised to unlock $4.4 trillion in value, and it could inject $1 trillion into the U.S. GDP alone.
However, as much as panelists throughout the festival touted the benefits of AI, several could not ignore its pitfalls.
AI experts have concerns
“There’s so many ways this could go wrong,” said Oji Udezue, an AI product expert and investor, at a panel on the risks of simulated data. “Unfortunately, this stuff is above regulation… We need a way to force people, lazy people to do this. Finding high ethics builders in the world today is an exercise in frustration.”
Another panel at the conference took a more systematic approach to discussing AI risks. However, it was framed as a way to protect companies utilizing AI from reputation-crushing headlines rather than as a way to protect their customers.
“AI headlines are good, right? Innovation of AI in headlines is good,” said Liz Grennan, a partner at McKinsey & Company and expert on operational risk assessment. “But there are headlines you do not want. We do not want consumers, patients as victims for consuming AI. So we’re trying to avoid that.”
Alongside slides that appeared to feature AI-generated art, the panelists proposed a hypothetical situation involving a medical patient from an “underrepresented background.” A company’s AI-powered cancer screening software repeatedly marked the patient as clear, only for her to later be diagnosed with Stage 4 cancer, a result of the AI’s incomplete or biased training data.
This situation doesn’t just pose a risk to companies’ reputations; it quite literally becomes a matter of life or death for everyday people when AI is used in a medical context.
Will AI eliminate human bias or perpetuate it?
It’s no secret that human-conducted research and the resulting data contain bias. For example, the National Highway Traffic Safety Administration reported that women were 17% more likely than men to be killed in car accidents. But when the agency audited car manufacturers’ test processes, it found that several were not using female crash test dummies at all, skewing the results.
Such human biases in research have a domino effect in our daily lives, from what legislation is passed to who receives certain kinds of medical treatment. But many AI advocates argue that simulated, or synthetic, data can help eliminate human bias in training AI models by filling in data for underrepresented groups.
“Synthetic data is artificially generated rather than collected from real-world events, allowing for the creation of datasets that include diverse and equitable representations of various demographic groups,” BetterData, a synthetic data company, argues in a blog post. “This helps address issues like underrepresentation and limited features that often lead to biased AI models.”
Synthetic data is also an attempt to cut costs, namely the cost of acquiring robust datasets. As its use grows, real-world data would, in turn, become all the more valuable. And despite the argument that synthetic data will create fuller datasets, many believe that data not grounded in reality will only compound the biases in our already unrepresentative data.
But synthetic data has other risks.
AI models are at risk of collapsing
A Nature study from July 2024 found that AI models collapse when trained repeatedly on synthetic data. As AI-generated content proliferates across the internet and models scrape that content to train themselves, the result could be a terrifying ouroboros.
That’s without mentioning the environmental impact of AI. The data centers that power and train AI models are set to approach 1,050 terawatt-hours of electricity consumption by 2026, per MIT News. That would make data centers the fifth-largest electricity consumer in the world, slotting in between the entire countries of Japan and Russia.
So, how do we get ahold of this looming loose cannon?
Most industry leaders at SXSW argued for human intervention at several steps in the AI model-building process. Their vision of AI regulation isn’t much different from the U.S.’s attempts at transparency around food.
What are AI nutrition labels?
Several experts have proposed AI “nutrition labels,” which, like the FDA-mandated ones on your box of Cheez-Its or your bottle of Advil, would detail important information about the “ingredients” of a model. This transparency would make the intent, uses, and risks of AI models clear to both individual consumers and businesses.
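To make the idea concrete, here is a minimal sketch of what such a label might look like if expressed in code. The field names and example values are illustrative assumptions for this article, not any published standard or a real product’s label.

```python
# Hypothetical sketch of an AI "nutrition label": a plain summary of a model's
# "ingredients," intended uses, and known risks. Field names are illustrative.
from dataclasses import dataclass, field, asdict
import json


@dataclass
class ModelNutritionLabel:
    """A transparency summary a model's maker could publish alongside it."""
    model_name: str
    intended_uses: list[str] = field(default_factory=list)
    training_data_sources: list[str] = field(default_factory=list)  # the "ingredients"
    known_limitations: list[str] = field(default_factory=list)
    risk_notes: list[str] = field(default_factory=list)


if __name__ == "__main__":
    # Example label for an imaginary screening model, echoing the panel's hypothetical.
    label = ModelNutritionLabel(
        model_name="example-cancer-screening-v1",
        intended_uses=["assist clinicians; not a standalone diagnosis"],
        training_data_sources=["de-identified scans from a small number of hospital systems"],
        known_limitations=["underrepresentation of some demographic groups in training data"],
        risk_notes=["false negatives possible; human review required"],
    )
    print(json.dumps(asdict(label), indent=2))
```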
Several panelists also pointed to easing in with low-risk use cases as an important part of ramping up AI’s role in our lives. For example, using AI in mental healthcare sounds incredibly risky given how sensitive that context is, especially amid a lawsuit alleging that a Character.AI chatbot pushed a teen to suicide.
But Emily Feinstein, executive vice president at the nonprofit Partnership to End Addiction, explained how the organization uses AI to train its mentors. Rather than making AI chatbots the direct point of contact for people seeking mentorship, the organization uses ReflexAI’s generative AI to train real human mentors in a more personalized way and at a faster pace.
“If you’re not thinking about, talking about risk, and in the habit of thinking about, talking about risk at work, then you’re probably missing things, and you’re probably not taking it seriously enough,” she said at a panel on Tuesday.
Should we risk it?
But mental healthcare, which is naturally “sensitive to risk,” Feinstein said, is different from other use cases. Other industries, say, creative ones like music, are much less touchy, at least from the point of view of those trying to implement AI.
Music generator Suno, which launched last year, admitted to training its model on essentially all publicly available music on the internet, copyrighted works included. The company, alongside another company called Udio, was sued by the Recording Industry Association of America for using its members’ copyrighted music. Suno claimed the practice fell under fair use.
The controversial company has been called out by others in the industry for sucking the creativity out of music. Suno co-founder Mikey Shulman was flamed online last month for saying that making music is “hard” and that that’s the problem the company is trying to solve. Vickie Nauman, founder of digital music consulting agency CrossBorderWorks, said at an SXSW panel on AI in the music industry that Shulman has a fundamental disconnect from the power and purpose of music.
“Well, I mean, usually the best things in life are kind of hard,” she said at the panel Thursday. “You do have to put a little effort into life, and then you do have that sense of ownership. I don’t know that making 10,000 songs in an hour for a penny apiece is that, you know? How is that satisfying? How is that pushing creativity? It’s actually not very creative.”
But unless leaders proactively consider ethics as they implement AI, the regulation of AI will play out in the courts. At least, that’s the view of Grennan, who spoke at the panel on company ethics and AI governance.
“There will be people suing, and then court findings, and then standards being set,” she said.
If it’s all a lawsuit waiting to happen, then the most the average Joe can do is proceed with caution and laugh at AI’s attempts at capturing human creativity. Oh, and maybe try to avoid the chatbots.