When words begin to sound synthetic, imagination itself feels endangered.
Los Angeles, October 2025.
Michael Connelly has written about justice, corruption and the moral weight of truth for three decades, but his newest anxiety comes not from the courtroom or the street; it comes from the machine. The acclaimed creator of Harry Bosch and The Lincoln Lawyer says artificial intelligence is eroding the boundary between human creativity and algorithmic mimicry, placing “every kind of creative discipline in danger.”
In a recent interview, Connelly described his unease with the accelerating ability of generative systems to imitate artistic voice and narrative depth. His upcoming novel The Proving Ground turns that fear into fiction: a lawyer confronts an AI corporation after its chatbot indirectly provokes violence. For Connelly, the story is not speculation but a mirror. “The real issue isn’t whether machines can write,” he explained. “It’s whether people will stop valuing the difference.”
The novelist’s concern aligns with a wider alarm among authors, musicians and filmmakers. Across Europe and North America, lawsuits accuse major tech companies of using copyrighted works to train large-scale language models without consent. Connelly has joined those voices, emphasizing that the question is not only legal but moral. If creativity is learned from unlicensed human labor, who owns the result — and what remains of authorship itself?
Analysts at the World Intellectual Property Organization have noted that the explosion of synthetic media has outpaced regulatory frameworks. While international treaties protect creative output, they offer no clear definition of “derivative intelligence.” That absence leaves open vast grey zones where artistic style becomes data and narrative becomes code. The ambiguity, Connelly argues, risks reducing art to information — something that can be replicated infinitely but experienced less profoundly.
The University of Southern California’s Annenberg Innovation Lab recently reported that nearly 40 percent of entertainment scripts submitted to studios in 2025 contained partial AI-generated dialogue. For Connelly, that statistic illustrates a quiet normalization of substitution. “You can’t see the erosion at first,” he said, “but once the industry accepts a mechanical draft as ‘good enough,’ the human voice starts fading.”

In Europe, the European Audiovisual Observatory echoes his fears, warning that creative workers face an “industrial paradox”: algorithms trained on their output increasingly compete against them for employment. In Asia, scholars at the Tokyo Institute of Technology frame it as a cultural dilemma — a tension between efficiency and authenticity that may reshape global storytelling standards. Together, these perspectives suggest that Connelly’s warning transcends literature; it touches every field where expression becomes data.
For the author, the rise of AI feels reminiscent of another pivotal moment: 1997, when the chess program Deep Blue defeated Garry Kasparov. That event, he says, marked the psychological threshold when humans realized they could be outperformed by their own creations. “Now the contest isn’t logic,” he insists. “It’s imagination — and that was supposed to be ours alone.”
Hollywood adds urgency to his view. Synthetic voices, digital doubles and deepfake actors have become cost-cutting tools for studios facing strikes and streaming losses. The Screen Actors Guild has already demanded contractual guarantees against unapproved AI replicas, while unions of writers and illustrators push for watermarking and transparency laws. Connelly regards these measures as necessary first steps toward what he calls “creative due process.”
Despite his apprehension, Connelly refuses nostalgia. He admits that technology, used ethically, can amplify creativity — editing faster, researching deeper, reaching broader audiences. His criticism targets substitution without accountability, not innovation itself. The novelist continues to write by hand, then dictates revisions using digital transcription, drawing a line between assistance and authorship. “It’s fine when it helps,” he says. “It’s dangerous when it decides.”
Psychologists at Cambridge University suggest that audiences still detect emotional asymmetry in AI-generated art: subtle irregularities in rhythm, silence or imperfection that distinguish the human touch. Connelly hopes that perception endures. “If readers stop caring who wrote it, then language stops being a relationship,” he reflects. That sentiment underscores a deeper fear — not of technology replacing artists, but of society forgetting why artists matter.
The publishing industry is already recalibrating. Major houses now employ AI-review filters to flag potential plagiarism, yet some quietly experiment with automated outlines. Market analysts estimate that the temptation of cost savings could reshape the global literary economy within five years. Connelly predicts that the real disruption will not be economic but existential: “When everything can be produced instantly, meaning itself becomes the scarce resource.”
From a policy standpoint, think tanks in Washington, D.C., and Brussels advocate a “human in the loop” mandate for creative sectors, requiring verified human oversight in cultural production. Supporters argue that such rules preserve authenticity; opponents warn they could stifle experimentation. Connelly’s position is simpler: transparency first, regulation second, but never indifference.
He sees his current work as both warning and testament. By embedding ethical anxiety inside fiction, he turns storytelling into resistance — an act of defending humanity through narrative. The Proving Ground thus operates on two levels: courtroom drama and cultural allegory. The plot may end with a verdict, but the underlying question lingers beyond the final page.
Asked whether he believes art can survive the algorithm, Connelly pauses. “Yes,” he answers finally, “if people still choose to care.”
The truth is structure, not noise.