AI clones: the good, the bad, and the ugly

At a glance:

  • AI digital twins are being used for both legitimate political campaigning and highly unethical non-consensual impersonations.
  • New software like Colleague Skill allows employees to clone the professional personas of bosses and coworkers using chat histories and emails.
  • The emergence of "Ex-Partner Skill" and "deathbots" presents profound ethical dilemmas regarding grief, consent, and digital resurrection.

The rise of the digital twin

Artificial intelligence has reached a level of sophistication where it can convincingly mimic a real person's voice, appearance, and personality. While the technical capability is well-established, the ethical landscape is shifting from clear-cut applications to increasingly murky territory. In the most positive use cases, digital twins serve as scalable extensions of high-profile individuals, allowing them to interact with large audiences simultaneously.

Silicon Valley leaders are already exploring this frontier. Meta’s Mark Zuckerberg and LinkedIn co-founder Reid Hoffman have reportedly worked on or already possess digital twins of themselves. Beyond the tech elite, politicians are leveraging these tools to maintain presence and reach. For example, Pakistan’s Imran Khan utilized an authorized voice clone to continue his political campaigning while imprisoned. Similarly, New York City Mayor Eric Adams has employed voice-cloned robocalls to communicate with constituents in multiple languages, including Mandarin and Yiddish.

These applications are generally considered ethical provided there is full transparency. As long as the interacting parties are aware they are engaging with a digital clone rather than a human being, the technology serves as a powerful tool for accessibility and communication efficiency.

The dark side of non-consensual cloning

The flip side of this technological leap is the rise of non-consensual cloning, which is used for fraud, extortion, and harassment. These cases are not theoretical; they have already resulted in massive financial losses and psychological trauma. The ability to replicate a trusted voice or face has turned social engineering into a high-tech weapon.

Several high-profile incidents illustrate the severity of this threat:

  • In 2019, scammers used AI voice-cloning software to mimic the German-accented voice of a parent-company executive, tricking the CEO of a UK energy firm into transferring €220,000 to a fraudulent account.
  • In 2023, an Arizona mother named Jennifer DeStefano was targeted by extortionists who used an AI clone of her 15-year-old daughter’s voice to demand a $1 million ransom.
  • In 2024, a finance worker at a multinational firm in Hong Kong was defrauded of $25 million after attending a video conference call where the CFO and several other colleagues were actually deepfake recreations.

Beyond financial crime, the technology is frequently used to create non-consensual deepfake pornography, in which celebrity faces are superimposed onto the bodies of adult performers. In these cases, the ethical lines are unambiguous: using a person's likeness for such purposes without their permission is fundamentally wrong.

The emergence of workplace clones

While fraud is unambiguously harmful, a subtler trend emerging from China challenges traditional workplace ethics: specialized software used to build digital versions of colleagues and superiors. The most prominent driver of this trend is a project called Colleague Skill, released in late March by Zhou Tianyi, a 24-year-old Shanghai-based engineer.

Colleague Skill and its various open-source forks allow users to upload internal data to create a functional persona that mimics a specific individual's professional expertise and communication style. The technical stack behind these tools is sophisticated, utilizing a combination of:

  • Claude
  • Kimi
  • ChatGPT
  • DeepSeek API
  • OCR (Tesseract)
  • Sentiment analysis modules

Originally intended as a satirical commentary on AI-driven layoffs, the tool has been adopted by employees for more earnest, albeit controversial, reasons. Some use it to retain institutional knowledge or to have an instant sounding board for ideas. Others use it to clone their bosses, attempting to predict how a manager might react to specific work or proposals. Because these clones are often created without the subject's knowledge, they represent a significant breach of professional privacy.
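To make the mechanics concrete, here is a minimal, hypothetical sketch of the persona-building step such tools perform. The article reports that Colleague Skill combines LLM APIs (Claude, Kimi, ChatGPT, DeepSeek), Tesseract OCR, and sentiment analysis; the function below is not its actual code, only an illustration of how surface style features (message length, recurring phrases) might be extracted from uploaded chat logs and packed into a system prompt for a language model. All names and the approach are assumptions.

```python
import re
from collections import Counter

def build_persona_prompt(name: str, messages: list[str], top_n: int = 5) -> str:
    """Hypothetical sketch: assemble a persona prompt from chat messages.

    Real tools reportedly layer LLM APIs, OCR, and sentiment analysis on
    top of this kind of feature extraction; here we only capture two
    crude style signals: average message length and frequent phrases.
    """
    # Average message length as a rough verbosity signal.
    avg_len = sum(len(m.split()) for m in messages) / max(len(messages), 1)

    # Count two-word phrases to surface characteristic expressions.
    bigrams = Counter()
    for m in messages:
        words = re.findall(r"[\w']+", m.lower())
        bigrams.update(zip(words, words[1:]))
    catchphrases = [" ".join(pair) for pair, _ in bigrams.most_common(top_n)]

    return (
        f"You are role-playing {name}, a colleague. "
        f"Typical message length: about {avg_len:.0f} words. "
        f"Frequently used phrases: {', '.join(catchphrases)}. "
        "Match this tone and phrasing when replying."
    )

# Toy example: three messages stand in for an uploaded chat history.
logs = [
    "Sounds good, let's circle back tomorrow.",
    "Sounds good, I'll ping the design team.",
    "Let's circle back after standup.",
]
print(build_persona_prompt("A. Manager", logs))
```

The resulting string would be sent as the system prompt to whichever model backs the clone; the ethical problem the article describes is that the person whose "circle back" habits are being harvested never consented to any of this.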

Digital resurrection and the personal frontier

The technology has moved from the office to the most intimate spheres of human life. Zhou Tianyi later forked his project into "Ex-Partner Skill," a tool designed to recreate a former romantic partner through AI. By uploading photos, social media posts, and chat logs, users can create a chatbot that mimics the tone, catchphrases, and linguistic nuances of their former significant other.

This has led to the rise of "deathbots"—simulations used to interact with the digital likeness of deceased loved ones. While some argue these tools provide therapeutic closure, allowing people to say things they never got to say, others question the psychological impact of maintaining an obsessive, one-sided relationship with a non-existent entity. The technology exists on a spectrum that ranges from a tool for emotional healing to a medium for digital harassment.

Even in the West, these trends are surfacing. Users of platforms like Character.AI have attempted to create bots based on ex-partners, prompting the company to update its Terms of Service to explicitly ban the creation of bots using the likenesses of private individuals without permission. As these tools continue to circulate in private developer circles, society faces a growing challenge to redefine consent and boundaries in an age of digital immortality.

Editorial note: SiliconFeed is an automated feed; facts are checked against sources, and copy is normalized and lightly edited for readers.

FAQ

What is Colleague Skill and how does it work?
Colleague Skill is a software project created by engineer Zhou Tianyi that allows users to build digital personas of coworkers. It functions by analyzing uploaded chat histories, emails, and internal documents using a tech stack that includes Claude, ChatGPT, DeepSeek API, and Tesseract OCR to mimic a person's specific professional expertise and speech patterns.
Have there been real-world examples of AI voice cloning fraud?
Yes, several significant cases have been documented. In 2019, a UK energy firm lost €220,000 to a scammer mimicking an executive's voice. In 2023, an Arizona mother faced a $1 million extortion demand using a clone of her daughter's voice, and in 2024, a Hong Kong worker lost $25 million after a video call featuring deepfake versions of his colleagues.
Is it legal to create AI clones of people without their permission?
While specific laws vary by region, the ethical and platform-based consensus is shifting toward strict prohibition. For example, Character.AI has updated its Terms of Service to explicitly ban creating bots using the likeness of private individuals without permission, reflecting a growing movement to protect personal identity from non-consensual cloning.
