When a convincing deepfake video of Tom Cruise performing magic tricks surfaced on TikTok in 2021, a curious thing happened. The actor himself never posted it, but millions of viewers were captivated by the illusion. This moment wasn't just a viral stunt; it was a public preview of a profound question now echoing through Hollywood's soundstages and executive offices. What happens when a tool can perfectly mimic a human performance, resurrect a star from the past, or create an entirely synthetic actor from scratch?
The film industry has always raced to adopt new technology, from the first "talkies" to modern CGI. But artificial intelligence represents something different. It's not just a better camera or a more realistic dragon. It's a technology that challenges the very definition of performance, ownership, and authenticity. As we stand at this crossroads, the central dilemma is no longer whether we can, but whether we should.
The Double-Edged Sword: Capability and Concern
On the surface, the creative possibilities seem boundless. Directors can now de-age an actor to play their younger self, as Martin Scorsese did with Robert De Niro in The Irishman. A performance can be completed or recreated after an actor has passed away, as seen with Carrie Fisher’s Princess Leia in Star Wars: The Rise of Skywalker. These uses, often called “digital humans” or “synthetic media,” are becoming a standard part of the visual effects toolkit.
However, the same underlying deepfake technology has a much darker portfolio outside the studio lot. It's the engine behind non-consensual explicit content, political disinformation campaigns, and sophisticated financial scams. A stark example occurred during the Ukraine conflict, when a fabricated video of President Volodymyr Zelensky appeared to show him ordering soldiers to surrender. This blurring of reality creates a "liar's dividend," where the mere existence of convincing fakes allows people to dismiss genuine evidence as fraudulent.
The table below outlines the key applications of this technology in entertainment and their associated ethical concerns:
| AI Application in Film | Common Use Case | Primary Ethical Concern |
| --- | --- | --- |
| Digital De-Aging/Re-creation | Allowing actors to play younger versions of themselves or completing performances (e.g., Carrie Fisher in Star Wars). | Consent, posthumous rights, and the integrity of an artist’s legacy. |
| Synthetic Performers | Creating entirely digital actors (e.g., the controversial “AI actress” Tilly Norwood). | Job displacement for human actors and the opaque use of training data from real people. |
| Voice Cloning & Replication | Replicating an actor’s voice for dubbing, archival work, or new dialogue. | Informed consent and fair compensation for the use of a performer’s unique vocal identity. |
| Background & Crowd Generation | Populating scenes with AI-generated background actors instead of hiring extras. | Erosion of entry-level jobs and the commodification of human likeness. |
The Human Cost: Protests, Pickets, and the “Invisible Middle”
The theoretical ethical debate became concrete during the 2023 Hollywood strikes. A core demand from SAG-AFTRA was protection against the uncontrolled use of AI and actors’ digital likenesses. Reports emerged that studios were scanning background actors’ faces for a single day’s pay, seeking the right to own and reuse their digital replicas indefinitely.
This highlighted a critical class divide. While A-list stars may have the leverage to negotiate terms for their digital selves, the industry's "invisible middle" of background actors, voice-over artists, and journeyman performers is far more vulnerable. As one voice actor put it, "It's not just about stars; it's about the everyday working actors who'll vanish first."
The backlash isn’t limited to live-action. In 2025, SAG-AFTRA secured a landmark deal with major video game studios, mandating written consent and compensation for the use of AI voice replicas. These conflicts show that for creatives, AI isn’t an abstract future threat; it’s a present-day issue of livelihood and autonomy.
Beyond Consent: The Deeper Ethical Frameworks
The conversation often starts with consent, but philosophers and engineers argue it must go further. Two ethical frameworks help dissect the issue:
- Consequentialism asks us to judge an action by its outcomes. From this view, many uses of deepfakes are unethical because of their high potential for societal harm—eroding trust, spreading disinformation, and enabling fraud. Even if a specific fake video causes no immediate damage, it contributes to a polluted media environment where truth becomes subjective.
- Deontology, on the other hand, focuses on the act itself and the duties involved. Rooted in the philosophy of Immanuel Kant, this framework holds that deception is inherently wrong because it violates the duty to respect others as rational beings. Therefore, any AI-generated media intended to deceive is morally suspect, regardless of how "harmless" the intent.
These frameworks challenge the common argument that “technology is neutral.” An axe can cut wood or commit murder, but its design is simple. AI systems, however, are built on vast datasets and complex algorithms created by engineers with their own biases and blind spots. As one analysis of engineering ethics notes, claiming total neutrality allows creators to avoid accountability for the predictable harms their tools enable.
The Path Forward: Transparency, Labels, and Guardrails
So, is there a way to harness this powerful tool without causing irreparable harm? Many experts believe the answer lies in robust ethical guidelines and proactive governance, not in halting progress.
- Radical Transparency: The most straightforward solution is clear labeling. If audiences know they are watching a digitally de-aged performance, a posthumous cameo, or a fully synthetic actor, the deception is removed. This is a cornerstone of the EU’s pioneering AI Act. Projects like the “DeepTomCruise” TikTok should have an unmistakable watermark.
- Consent and Compensation as Standard: The protections fought for in the SAG-AFTRA strikes must become industry-wide baseline standards. This means informed consent agreements that are specific, limited, and financially fair, applying to all performers, not just leads.
- Auditing the Training Data: AI doesn't create from nothing. It learns from millions of images, videos, and performances. There is a growing push for developers to be transparent about their training datasets and to allow creators to opt out or receive royalties when their work is used. This speaks directly to the central critique of the AI actress Tilly Norwood, built from the likenesses of countless real women who were neither credited nor paid.
- Legal Safeguards: Technology outpaces law, but legislation is catching up. Proposals like the NO FAKES Act aim to legally protect individuals from unauthorized digital replicas, creating a necessary deterrent against the worst abuses.
The Final Take: Protecting the Spark
The dazzling AI-enhanced revival of The Wizard of Oz at the Las Vegas Sphere offered a glimpse of a potential future: a spectacle powered by algorithms, where classic stories are endlessly remixed by machines. Yet, the backlash it received speaks to a deep-seated public intuition. The value of a story isn’t just in its plot, but in the fragile, human spark that brings it to life—the fleeting emotion in an actor’s eye, the improvisation that wasn’t in the script, the shared vulnerability between performer and audience.
AI can mimic this spark, but it cannot live it. The great task for Hollywood in this algorithmic age is not to reject the new tool, but to build the ethical guardrails that ensure technology remains in service to human creativity, not a replacement for it. The goal is a future where we can marvel at the technical achievement of a de-aged star while never forgetting to honor, protect, and hire the very real human artist behind the illusion.