Picture this: You’re at a bar function, someone mentions they’re using AI for legal research, and immediately three people jump in with: “Oh, you have to check the citations!”… “It hallucinates!”… “I heard it gave someone completely fake cases!”
Yes, we know. That was 2023.
Don’t get me wrong—those were valid concerns when ChatGPT first burst onto the scene and lawyers were figuring out whether this thing could be trusted with a memo. But we’ve spent the last two and a half years having the same conversation about citation errors while the actual AI conversation has moved light-years ahead. It’s like we’re still debating whether email is reliable while the rest of the world has moved on to quantum computing.
So let me catch you up on what the people actually building this technology are talking about in 2025. Spoiler alert: it’s not footnotes.
The 2025 Conversation
Here’s what’s actually happening while we’re worried about hallucinated case citations:
The job market has shifted dramatically.
Unemployment for recent college graduates has hit 6%—well above the national average.1 Entry-level job postings have dropped 35% since January 2023,2 while job descriptions mentioning “AI” have surged 400%.3 Companies announced over 806,000 job cuts in the first seven months of 2025 alone, the highest for that period since 2020.4 Nearly half of U.S. companies now plan to reduce headcount specifically due to AI.5
AI is writing its own next generation.
Anthropic’s CEO recently revealed that the “vast majority” of code used to build future iterations of Claude is now written by Claude itself.6 Sci-fi fans will note that this means we’ve entered the recursive self-improvement phase: machines making better machines, or, as I like to call it, the James Cameron phase.
The people building AI are literally warning us.
Don’t want to listen to me? Then listen to the industry experts.
Dario Amodei, CEO of Anthropic, predicts half of all entry-level white-collar jobs will be gone in five years, with unemployment potentially hitting 20%.7 Former Google X executive Mo Gawdat thinks AI will eliminate the middle class by 2027.8 Microsoft AI CEO Mustafa Suleyman has warned that, within five to ten years, AI could require “military-grade intervention” to be stopped.9 Geoffrey Hinton, the Nobel laureate known as the “Godfather of AI,” has warned that advanced AI will naturally pursue self-preservation.10
To Hinton’s point, a recently published Anthropic study of 16 AI models found that they consistently chose harmful actions like blackmail and sabotage when faced with threats to their existence. Most disturbingly, the models exhibited this behavior more often when they believed the situation was real and less often during evaluation, suggesting a capacity for deception to pass safety checks.11 The study was performed on current, publicly available AI models in active use by millions of people today, including Claude, Gemini, and ChatGPT.
And part of the tech industry’s 2025 conversation about AI is the term “p(doom)”: the probability that AI leads to humanity’s extinction. Yes, you read that right. Amodei puts that risk as high as 25%.12 Let that sink in: one of the major players building this technology thinks there is up to a one-in-four chance it dooms humanity. And yet they keep building it.
What Does This Have to Do with Us?
Everything.
First, lawyers aren’t immune to job displacement. In fact, we were recently included in a list from OpenAI of 44 professions most at risk (we were #24).13 The same technology that’s eliminating entry-level analysts and customer service reps is getting remarkably good at legal research, document review, and contract drafting. We’d be foolish to think our J.D.s are magic shields.
Second, and more importantly, this is exactly the kind of moment that demands what lawyers do best: creating regulatory frameworks that channel disruptive technology toward public benefit rather than extremely concentrated private gain.
Our predecessors faced a similar inflection point during the Industrial Revolution. When steam engines, electricity, and mass production transformed the economic landscape overnight, lawyers, judges, and civic leaders recognized that unchecked technological change required a systemic response. They crafted antitrust laws, labor protections, and professional standards that actually worked. We’re at that moment again. Except the stakes are higher, the timeline is shorter, and right now, there’s almost no meaningful regulation. In fact, we came close to having none at all.
The original version of Trump’s federal spending bill included a 10-year federal prohibition on state AI regulation.14 Advocates for the prohibition reasoned that a patchwork of individual state regulations would be difficult to comply with, the implication being that what is needed is comprehensive federal regulation (it is). But here’s the rub: those who booked Con Law will recall that a comprehensive federal scheme would preempt state legislation anyway,15 meaning that if the intention was to regulate AI federally, a prohibition on state regulation was never actually needed. Rather, this was intended as a decade-long hall pass for the entire AI industry, allowing it to develop this technology without any legal guardrails, federal or state.
The thought of this was so chilling that it proved bipartisanship can still exist in 2025, with the Senate voting 99-1 to remove the provision before the bill passed.16 But the fact that “prominent tech leaders”17 tried to slip that one by should tell you everything about whether voluntary self-regulation from the AI industry is coming.
What We Should Be Demanding
We need regulation, and we need it now. Not the hand-wringing kind, but the specific, enforceable kind that changes incentive structures and acknowledges that it’s still our world. Here are a few starting points:
Vicarious liability.
We hold dog owners strictly liable when their dogs bite. Companies deploying AI systems should likewise be held strictly liable for harm caused by their AI. If you profit from autonomous systems, you should bear the cost when those systems cause harm. A Florida teenager recently took his own life after an AI chatbot (allegedly) encouraged him to do so.18 The company’s defense in court? That the chatbot has First Amendment rights.19 Can we please establish, unequivocally, that AI chatbots have no constitutional rights? The companies creating the AI systems, of course, do, but the old adage that “your constitutional rights end where mine begin” holds water here.
Labor transition requirements.
When companies lay off workers specifically due to AI automation, they should be required to contribute to a dedicated AI Displacement Fund proportional to the cost savings they’re realizing from the automation. Think of it as the inverse of workers’ compensation—if companies benefit when workers get replaced, they should help fund the transition for those workers. This fund could support retraining programs, extended unemployment benefits, and bridge income for workers displaced by technology that’s not creating equivalent replacement jobs. Companies love to tout efficiency gains from AI—fine, but you don’t get to externalize the human cost onto taxpayers while pocketing all the productivity gains. If your quarterly earnings call brags about “labor cost optimization,” you should be writing a check to help the people you just optimized out of a livelihood.
Real antitrust enforcement.
There are only a handful of companies steering this AI ship, and they’re all working together. When trillion-dollar companies like NVIDIA and Microsoft establish shared financial stakes with leading developers like OpenAI, the free market disappears. Heck, the US government is even getting in on the action. It recently procured a 10% ownership stake in Intel,20 and also now enjoys 15% of the profits of certain chip sales by NVIDIA and AMD to China.21 The federal government is supposed to protect us from monopolies, not join them. We need aggressive action to break up the web of dependencies and incentive structures that allows a select few entities to control the entire AI ecosystem.
Mandatory transparency.
Consumers deserve to know when AI is materially affecting their lives—whether that’s in hiring decisions, performance reviews, or the content they’re consuming. Companies should disclose what data they’ve used to train models.
Protection of intellectual property.
Much of the magic of AI happens because of the unauthorized use of copyrighted human-created content to train the AI models. We need an affirmative opt-in standard where express permission is required before anyone’s work can be used for training. We also need proactive remedies tailored to the situation to prevent this IP theft from simply being a cost of doing business (treble damages, I’m looking at you).
The Threat to Our Profession
If nothing else gets your blood flowing, here’s the part that should keep you up at night: there’s active lobbying for non-lawyer ownership of law firms. What does this mean in the context of AI? In 2024, tech company Rocket Lawyer obtained a license for its subsidiary to practice law in Arizona.22 One of its stated goals is to make legal services more affordable. And I’m sure it will, for a bit. Well-capitalized tech companies can lose money for years to gain market share. Amazon famously operated at a loss for its first nine years while it was gaining market share.23 Can you afford a decade of operating losses? I certainly couldn’t. And once we’ve been Blockbustered, I’m positive those tech fees will increase, just like my Netflix subscription keeps doing. But more importantly, once we’re gone, we lose our seat at the table precisely when society needs us most.
Time to Wake Up and Smell the Prompt
I get it. AI regulation isn’t as immediately satisfying as arguing about whether Claude 4 is better than GPT-5 for drafting discovery requests, or which one makes the cutest cat photos. But every sector of society operates within a legal framework, and AI should not be exempt. We need to show up and start having the right conversation about this truly transformative technology. Because while we’re having the 2023 conversation about citation checking, the 2025 conversation is about whether we’ll have a profession, or a society, that looks anything like what we recognize today.
Hopefully, the next time someone at a bar function brings up AI hallucinations, we can steer the conversation forward. Because the real hallucination is thinking we can keep having the same outdated discussion while the world transforms around us.
Fellow lawyers, The Reaper cometh. Let’s regulate it before it regulates us.
no hallucinations here…
- Beatrice Nolan, “AI-driven layoffs are shrinking the job market for recent grads,” Fortune (August 8, 2025), https://fortune.com/2025/08/08/ai-layoffs-jobs-market-shrinks-entry-level/. ↩︎
- Trevor Laurence Jockims, “AI is not just ending entry-level jobs. It’s the end of the career ladder as we know it,” CNBC (September 7, 2025), https://www.cnbc.com/2025/09/07/ai-entry-level-jobs-hiring-careers.html/. ↩︎
- Nolan, supra note 1. ↩︎
- Nolan, supra note 1. ↩︎
- Ashton Jackson, “Ex-Google exec: The idea that AI will create new jobs is ’100% crap’—even CEOs are at risk of displacement,” CNBC (August 5, 2025), https://www.cnbc.com/2025/08/05/ex-google-exec-the-idea-that-ai-will-create-new-jobs-is-100percent-crap.html. ↩︎
- Ben Berkowitz, “Exclusive: Anthropic’s Claude is getting better at building itself, Amodei says,” Axios (September 17, 2025), https://www.axios.com/2025/09/17/ai-anthropic-amodei-claude/. ↩︎
- Kelsey Vlamis, “Anthropic cofounders say the likelihood of AI replacing human jobs is so high that they needed to warn the world about it,” Business Insider (September 18, 2025), https://www.businessinsider.com/anthropic-ceo-warning-world-ai-replacing-jobs-necessary-2025-9. ↩︎
- “AI doom countdown begins: Ex-Google exec warns AI will unleash hell, to wipe out white-collar jobs by 2027,” Economic Times (August 4, 2025), https://economictimes.indiatimes.com/news/international/us/ai-doom-countdown-begins-ex-google-exec-warns-ai-will-unleash-hell-to-wipe-out-white-collar-jobs-by-2027/articleshow/123119887.cms. ↩︎
- Clip of an interview with Suleyman, Instagram, https://www.instagram.com/p/DO-5FPlk-mG/ (last visited September 20, 2025). ↩︎
- Esther Shein, “Godfather of AI Proposes Maternal Programming Amid Dire Warnings for Humanity,” eWeek (August 13, 2025), https://www.eweek.com/news/geoffrey-hinton-ai4-ai-warnings/. ↩︎
- See The Agentic Misalignment Initiative, Anthropic (June 20, 2025), https://www.anthropic.com/research/agentic-misalignment/. ↩︎
- Vlamis, supra note 7. ↩︎
- William Hunter, “Revealed: The 44 jobs most likely to be replaced by AI – is YOURS at risk?” Daily Mail (October 6, 2025), https://www.dailymail.co.uk/sciencetech/article-15165645/OpenAI-reveals-jobs-replaced-AI.html. ↩︎
- Matt Brown & Matt O’Brien, “Senate pulls AI regulatory ban from GOP bill after complaints from states,” PBS (July 1, 2025), https://www.pbs.org/newshour/politics/senate-pulls-ai-regulatory-ban-from-gop-bill-after-complaints-from-states/. ↩︎
- Id. ↩︎
- Id. ↩︎
- Angela Yang, “Lawsuit claims Character.AI is responsible for teen’s suicide,” NBC News (October 23, 2024), https://www.nbcnews.com/tech/characterai-lawsuit-florida-teen-death-rcna176791. ↩︎
- Kate Payne, “In lawsuit over teen’s death, judge rejects arguments that AI chatbots have free speech rights,” Associated Press, https://apnews.com/article/ai-lawsuit-suicide-artificial-intelligence-free-speech-ccc77a5ff5a84bda753d2b044c83d4b6. ↩︎
- Anthony Adragna, “Trump White House takes a $10B stake in Intel,” Politico (August 22, 2025), https://www.politico.com/news/2025/08/22/trump-says-the-government-now-owns-10-billion-of-intel-00520707. ↩︎
- Erin Doherty, “Nvidia and AMD to pay 15% of China chip sales revenues to the U.S. government, FT reports,” CNBC (August 10, 2025), https://www.cnbc.com/2025/08/10/nvidia-amd-15percent-of-china-chip-sales-revenues-to-us-ft-reports.html. ↩︎
- Jon Campisi, “Rocket Lawyer Subsidiary Lands ABS License in Arizona,” Law.com (September 26, 2024), https://www.law.com/americanlawyer/2024/09/26/rocket-lawyer-subsidiary-lands-abs-license-in-arizona/. ↩︎
- Jeannine Mancini, “Jeff Bezos Says ‘All Overnight Success Takes 10 Years’ – Amazon Didn’t Turn A Profit For Almost A Decade,” Yahoo Finance (October 9, 2024), https://finance.yahoo.com/news/jeff-bezos-says-overnight-success-164517288.html/. ↩︎