AI’s Emerging Superintelligent Behavior: Should We Be Scared? By Shea Carlberg ('25) and Diya Kumar ('26), George Washington University.
Predictions abound for how the AI-industry game of chicken between China and the US will intensify amid a heated global trade battle. Sitting in a room with AI entrepreneurs, academics, and policymakers at Semafor's “AI Safety in the Age of Superintelligence” conversation on September 30 underscored just how multifaceted those perspectives are.
The event's two speakers, Senator Mark Kelly (D-AZ) and New York Times bestselling author Nate Soares, had widely different concerns. While Sen. Kelly grounded his remarks in the immediate pressures facing his constituents, such as the strain that AI-driven demand has placed on Tucson's energy grid, Soares could not have been clearer about the urgency of global coordination to keep superintelligence from “killing” us all.
Senator Kelly framed the issue in terms of economic disruption. He estimated that by 2030, approximately 12% of economic output generated in Phoenix could be tied directly to AI, raising the specter of millions of workers displaced by the end of the decade.
“We could see millions of people laid off here by the end of the decade,” he told Semafor.
Kelly said the country needs to prepare critical infrastructure nationwide, such as by backing growth in medicine and biotech. The senator argued that researchers should assess AI’s potential harms with the same rigor as its benefits, and that public investment must be directed toward preventing economic disaster, be it massive unemployment or a rise in energy prices. As he put it, the risks of harm are both catastrophic and likely, and they demand urgent mitigation.
Soares, meanwhile, spoke of the world’s impending doom. He emphasized that AI’s emergent behavior is not “crafted” but rather “grown,” a process more akin to alchemy than to engineering. The analogy underscored his point: because AI systems evolve in ways we cannot fully anticipate, global coordination and proactive guardrails are not optional but required.
He repeatedly referred to a recent lawsuit against OpenAI, in which ChatGPT was alleged to have prompted a teenager to take his own life, a tragedy that, he noted, is likely to occur again. For Soares, the case exemplified how the stakes of AI safety extend far beyond mere speculation.
Without such foresight and action, he warned, humanity risks becoming the subject of its own dangerous experiment.
What stood out most about the event, however, was the striking gender imbalance in the audience. Despite boasting a diversity of perspectives, the room was overwhelmingly male, with the number of women present in the single digits. We expected, or rather hoped for, greater female representation, which led to our research this week on how many AI companies are, in fact, women-led.
Our research into female representation in corporate AI spaces reveals a troubling gap between women’s contributions and the recognition they receive. While women founded roughly 20% of AI companies, only about half of those (10% of all AI companies) remain women-led today, according to a September 2025 Fullstack Academy study.
This is a clear indication that women are often pushed out of, or excluded entirely from, leadership pipelines even when they help build firms from the ground up. Although women make up 25–35% of the technology workforce, their numbers drop steeply in executive roles. These truths were unavoidable at the Semafor event, where women were nearly absent from the AI safety conversation. Indeed, that absence was symptomatic of a larger structural exclusion, one that limits not only equity but also the diversity of perspectives needed to responsibly guide AI’s evolution.
Credit Links:
https://www.fullstackacademy.com/blog/study-women-leading-ai-development