Why Someone Already Owns Your Digital Twin

Mar 19, 2026

It May Already Be Out There Making Bad Decisions in Your Name. Untrained and Eager to Speak.

My teenage son recently asked if he could grill our ribeye steaks for dinner.

For those without teens, here's what this really means: "I want to run a science experiment on a $47 piece of meat. I need your approval before the smoke alarm goes off."

My brain ran the full calculation in about four seconds.

Option A: Say no, crush his spirit, and eat a ribeye cooked perfectly by someone who knows what they're doing - a.k.a. me.

Option B: Say yes, hand a 15-year-old a pair of tongs, and see what happens to a cow that already had a rough day.

I said yes. Obviously. Because I am a father. I am biologically incapable of saying no to a teenager holding something that could start a fire.

This is relevant because a nearly identical psychological event happened when my client Julie showed me her new AI clone. She wanted to do what 23-year-old influencer Keri did: build a digital clone of herself and charge her audience a dollar a minute to interact with it. Keri made $72,000 in a single week.

Cloning is huge right now. Matt Gray's clone takes any task from zero to 80%. That includes structuring frameworks, synthesizing data, and drafting baselines. It kills the long ramp-up time. You start at 80, so the human's hours are fully dedicated to the high-judgment final 20%.

And God forbid something ever happens to you. But what if it does? Who is going to do the work then? Hereafter AI and similar platforms are already building interactive avatars of the deceased. A clear, legally protected clone of a founder's thinking can guide successors. It can even generate revenue after the founder is gone.

I’m a man whose German ancestors didn’t travel four thousand miles for their descendants to be vague about ribeye steaks. So, I paused for 90 seconds and told Julie, "Kill half of it. For now." If Julie's eyes were daggers, they'd kill half of me instead. Her mind must have been spinning about how automation could handle getting rid of me.

Her calibrated response was: "What I hate about you Germans is that you think you are always right. What I really hate about Germans is that they ARE always right. So tell me. What am I not seeing?"

Your voice is already for sale. You just haven't found the listing yet.

I told her about Paul Lehrman, a voice actor Lovo hired via Fiverr to record audio. He was driving somewhere unremarkable when he heard himself on a podcast.

Not his words. Not a clip from something he recorded. His voice. His cadence. His exact vocal fingerprint, narrating an MIT episode he never agreed to, never recorded, and was never paid for.

He pulled over.

Because what do you do when your identity is already in the market without you? The wrong choice would be to keep driving. Paul was doing fine until the universe made him aware of his hidden career in audio media.

Here is what had happened. Paul had recorded some voice work through Fiverr. The brief said "internal research." In the AI industry, that currently means approximately what "I'll respect you in the morning" meant in a different era. He signed. He recorded. He moved on with his life. He had no idea that a company called Lovo had taken his audio, fed it into a cloning model, renamed him Kyle Snow, and sold him to subscribers worldwide as a product.

By the time he found out, Kyle Snow had a career. Paul Lehrman did not consent to any of it.

Perhaps you are thinking: "This is a very unfortunate story about a voice actor. He is a professional seller of vocal cords. I am a serious firm owner who would never fall for something so obvious." To which I say: put down your coffee.

Because you record Zoom calls with clients.

You paste the transcripts into ChatGPT, and the summary writes itself. You share your thoughts on LinkedIn five days a week, because your content strategist said consistency builds authority, and you trusted that advice. It is about to read like a cautionary tale.

Every time you do those things, you're sharing your biometric identity, your frameworks, and your proprietary judgment with systems whose terms of service allow them to train on everything you upload.

Paul Lehrman at least knew he was selling his voice.

You are giving yours away free with every upload, paste, and post. Your version of Kyle Snow or Kylie Snow already has a queue number. You just haven't heard them introduce themselves yet.

The two firms that launched on the same Tuesday.

The same week that 23-year-old influencer Keri launched a digital clone of herself and made $72,000 in 7 days, something else happened. A fake AI-generated image of a Pentagon explosion wiped $500 billion from the stock market in minutes.

To put that in perspective, the stock market lost $500 billion because a computer mistook a fake picture for a real one. Keri, meanwhile, was presumably doing something else entirely. Getting a facial. Filming content. Sleeping. The clone handled the customers.

Same technology. Same week. One system had no guardrails, believed everything it saw, and acted at machine speed before a single human could say, "Wait, did anyone verify this?" The other had a fence, a job description, and clear rules about what it could and couldn’t do.

The trading algorithms had no guardrails. Nobody drew a line around what they were permitted to believe or react to. This is the exact parenting style that leads to teens who see a $47 ribeye and tongs as a fun science experiment. No hypothesis, no control group, and no adult supervision.

A fake image entered the system. The algorithms treated it as real data and acted on it at machine speed, before any human could intervene. The damage was done in minutes because there was no wall between input and execution.

Keri's clone had hard walls. It knew its job. It stayed in its lane. Anything outside its lane went directly to a human. Because Keri, whatever else you might say about her, understood something that the architects of a $500 billion algorithm apparently did not.

The fence is not a limitation on the tool. The fence is the entire point.

Most firms install the tool first and think about the fence later. The firms that are compounding right now built the fence first. Then they opened the gate.

That paragraph is everything.

And also, apparently, $72,000 a week.

The rule that the elite operators don't talk about publicly.

Matt Gray didn't build a generic AI assistant. That's what you get when you open ChatGPT and type "help me be more productive." It responds with five bullet points that sound like they were written by a motivational poster that went to business school. Matt Gray was not interested in that.

Instead, he built what he calls a 10X Profit Clone. It’s based on 39 unique documents. These include internal P&L statements, a 70,000-word unreleased book, his COO interview system, and brand positioning architecture. Not his output, but how he thinks.

This is an important distinction. Most people build AI assistants that sound like them. Matt Gray built one that reasons like him. This is considerably harder and also considerably more useful.

Then he drew a hard line at 80%.

Here is what that means in plain English.

Before you can do anything smart, you have to do a lot of work that is not smart.

You have to gather information, organize it, build a structure, and write a first draft that is probably wrong. But at least it gives you something to argue with. This is known as the blank page problem. It causes about 60% of unnecessary suffering in the knowledge economy. The other 40% comes from reply-all emails and conference calls that should have been Looms.

The clone handles all of that. You hand it the raw inputs. It returns a structured draft, an organized framework, a synthesized baseline. Not the finished answer. Just everything you need so your brain can stop spinning its wheels and start actually driving.

Instead of spending 90 minutes building the ramp, you spend two minutes reading what the clone made and start making real decisions right away.

You skip the ramp completely. Your attention starts where your expertise actually matters. Then, at 80%, the clone stops.

Not because it runs out of capability. It's banned from continuing.

This should be on a poster in every company that has let a machine make a decision it shouldn't have.

The final 20% is where you step in. And that 20% is not the cleanup crew. It is the consequence layer. The contextual judgment. The call that carries your name. The part where, if it goes wrong, a real human being answers for it in a room with other real human beings, some of whom may be lawyers.

You cannot sue a clone. You can't call it at 11 PM when the deal falls through and the client is using language that would shock a longshoreman. It has no professional liability insurance. It has no reputation to protect. It has no stake in the relationship you took four years to build with a client, the one that could evaporate in one bad paragraph.

The final 20% is not the end of the process.

It is the entire reason anyone wrote you a check in the first place.
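If you want to see the rule in something more durable than prose, it fits in a few lines of code. Here is a minimal sketch, with every name in it hypothetical: the clone builds the draft, and nothing ships until a named human signs it.

```python
# A minimal sketch of the 80% rule. Every name here is hypothetical.
# The clone may draft. Nothing ships without an explicit human sign-off.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Draft:
    body: str
    approved_by: Optional[str] = None  # stays None until a human signs it

def clone_draft(raw_inputs: str) -> Draft:
    # Stands in for whatever model call builds the first 80%.
    return Draft(body=f"[structured baseline built from: {raw_inputs}]")

def ship(draft: Draft) -> str:
    # The hard wall: the machine is banned from continuing past this line.
    if draft.approved_by is None:
        raise PermissionError("No human sign-off. The final 20% is yours.")
    return f"Delivered under the name of {draft.approved_by}"

draft = clone_draft("meeting notes, P&L, prior memos")
draft.body += " [edited: context, judgment, the call a human answers for]"
draft.approved_by = "A. Partner"  # the consequence layer, on the record
print(ship(draft))
```

The interesting line is the PermissionError. It is not a capability limit. It is a policy.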

The study your competitors haven't read.

Duke University funded a controlled study involving 4,400 participants. The question was straightforward: "Does knowing AI wrote content change how much you trust it?"

The output was identical. Same quality. Same accuracy. Same result. A human and a machine produced work that was, by every measurable standard, indistinguishable.

It did not matter.

Once participants realized or suspected that AI created the main work, their view of the operator's skill fell apart like a folding table at a buffet. Trust evaporated. The premium burned with it. The work was the same. The invoice was suddenly a different conversation.

This is called the social penalty. It's like discovering that your 2-carat diamond ring is actually a lab-made stone. On paper, the swap makes sense. Mined diamonds lose up to 75% of their retail value the moment you swipe your credit card, so why not buy a lab-created diamond that is chemically and physically identical but costs up to 80% less? Technically still sparkling. Somehow deeply upsetting.

Mark Schaefer learned this the hard way. Mark is a highly respected marketing consultant who built a bot trained on decades of his own blog posts, books, and podcast transcripts. He fed his whole professional identity into a machine and asked it to be him. It seemed like a good idea. It became a cautionary tale.

Technically, the bot was 90% accurate.

It also stripped out his humor, his personal proof lines, and his specific stories. The researchers might say it removed his human spark. Mark would likely call that the main reason anyone read him at all.

His voice was there. He wasn't.

His clients noticed right away. It's costly to learn that 90% accurate and 100% accurate are not the same thing.

Here is the principle worth tattooing somewhere professional.

You suffer the social penalty when you use a machine to fake humanity. The buyer's brain detects the absence of effort the way a dog detects the absence of its owner. It cannot explain the feeling. It knows something is missing and acts on it. For a dog, this means whining by the door. For a premium client, it means discreetly updating their vendor list.

You suffer no penalty when you use a machine to execute machine work with precision. Nobody feels betrayed when software analyzes a contract faster than a human can. Nobody questions your competence when an algorithm finds a pattern in ten thousand data points in four seconds. That is just a very fast machine doing machine things, and everyone is fine with it.

The winning firms are not using AI less. They have simply stopped confusing the two categories.

Which, given everything, turns out to be the most expensive confusion in the market right now.

The nightmare scenario is playing out in boutique firms as we speak.

You finish a Zoom call with a client. Sensitive financial restructuring. Maybe some employee health data is in the mix. Standard consulting work.

You paste the transcript into your free ChatGPT account to generate an executive summary. Efficient. Reasonable. Obvious.

And the damage is serious, even if it does not look dramatic at the moment. Nothing explodes right away. No alarm goes off. No one calls. You paste a transcript, upload a file, train a tool, and move on.

Easy now, expensive later.

Most open model terms of service allow user inputs to be ingested for future training. You just handed a third-party tech conglomerate your client's most confidential data. Under the California Consumer Privacy Act, that is not a minor administrative error. Under the Confidentiality of Medical Information Act, it is a potentially firm-ending liability.

You don't have to violate an AI-specific law to destroy your business. You just have to trip over the one that was already there.

The federal government is not coming to save you. The SEC and the DOJ are running regulation by enforcement: they wait for a firm to make a mistake, apply a massive fine after the fact, and call it policy. While they deliberate, California has already deployed AB 2013, AB 2602, and AB 2655. More states follow every quarter.

Elite operators don't treat regulation as the ceiling of their defense. They treat it as the floor.

What the fortress actually looks like.

The entrepreneurs pulling ahead right now are not using the same consumer AI the rest of America uses to write awkward emails and generate photos of themselves as Christmas Peppermint Fairies. They are using private systems trained on approved information, tightly fenced in, with a human making the final decision whenever the stakes rise above “mildly embarrassing.”

Private, grounded systems with approved data, hard boundaries, and human judgment at the consequence layer.

An open model is what most people are using. You ask a question. The model draws on everything it absorbed from the internet, makes its best guess, and gives an answer. It sounds confident, like someone who knows nothing but believes fully in their response. This is fine for writing a thank-you note. It's not ideal when a client asks about financial restructuring options and the model fills the gaps with creative guesses.

The firms winning right now are running closed-loop infrastructure.

This sounds technical because it is, but the concept is straightforward. Platforms like Delphi use retrieval-augmented generation (RAG). The clone gets its answers only from documents you've uploaded and authorized. Your frameworks. Your methodologies. Your proprietary material. Nothing else.

If a client asks something not in the approved docs, the clone won't guess. It doesn't fake an impressive answer while secretly hoping no one checks the math. It doesn't act like a junior employee who, when unsure, speaks with more confidence and less accuracy until someone more senior walks in.

It stops.

It says: "I don't have a verified framework for that. Let me escalate to the human operator."

In the history of artificial intelligence, this is one of the most useful sentences ever programmed. That is the architecture of a firm that compounds. The machine knows exactly where its lane ends. The human covers everything past that point.
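For the technically curious, the fence itself is almost boring to build. Below is a toy sketch of the refuse-and-escalate pattern. It assumes a crude keyword-overlap retriever in place of the embedding search a platform like Delphi would actually run, and the documents, threshold, and scoring are my illustrations, not theirs. The shape is the point: below the confidence floor, the system stops and hands off.

```python
# A toy sketch of refuse-and-escalate over approved documents only.
# The docs, threshold, and scoring are illustrative assumptions.

import re

APPROVED_DOCS = {
    "pricing_framework": "Value based pricing anchors on outcome, not hours billed",
    "onboarding_method": "Week one runs a diagnostic interview and a data request list",
}

CONFIDENCE_FLOOR = 0.25  # hypothetical; tune against real queries

def tokens(text: str) -> set[str]:
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(question: str) -> tuple[str, float]:
    # Crude stand-in for embedding search: share of question words found in a doc.
    q = tokens(question)
    best_doc, best_score = "", 0.0
    for text in APPROVED_DOCS.values():
        score = len(q & tokens(text)) / max(len(q), 1)
        if score > best_score:
            best_doc, best_score = text, score
    return best_doc, best_score

def answer(question: str) -> str:
    doc, score = retrieve(question)
    if score < CONFIDENCE_FLOOR:
        # The lane ends here. No creative guessing.
        return "I don't have a verified framework for that. Escalating to the human operator."
    return f"Grounded answer, drawn only from approved material: {doc}"

print(answer("How do we anchor pricing on outcome for this engagement?"))  # grounded
print(answer("What restructuring options exist under Chapter 11?"))        # escalates
```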

Lock it down with a contract. Architecture without paperwork is just an expensive suggestion.

Your Master Services Agreement must clearly state that clients cannot use your frameworks, memos, or recorded sessions for AI training. The Lovo case, in which Paul Lehrman learned he had a parallel career as Kyle Snow, survived its early legal challenges largely on contract law: Lovo violated the terms of the original Fiverr agreement. That precedent cuts both ways. Your contract can defend your boundaries or hand others the keys, depending on what you write into it.

Tennessee has already done the hard work for you. The ELVIS Act, the Ensuring Likeness, Voice, and Image Security Act, treats unauthorized AI voice cloning as a misuse of protected identity. It's the gold standard for protection in the U.S.

Don't wait for your state legislature to catch up. Put that language in your client agreements now. Boundaries enforced by contract move at the speed of a breach notice. Federal processes move as slowly as a glacier on a lazy day.

Your contract is your fortress wall.

Everything else is just hoping for the best.

The question most founders are not ready for.

Here is where this stops being about productivity and starts being about legacy.

Companies like Hereafter AI are building interactive digital avatars of deceased individuals. Families hold ongoing conversations with people who have passed. A paper in the American University Law Review, titled "Don't Fear the Reaper," explores the legal and ethical gaps in AI communication after death. These gaps remain unaddressed.

The question it forces is one most founders refuse to face.

What happens to your proprietary judgment when you die?

Who owns the founder's digital twin? What happens to your firm if your voice is scattered across platforms, lightly protected and poorly contracted?

Consider what you could build instead.

Every framework you have developed over twenty years of client work. Every decision rule shaped by hard experience, including the three engagements that went wrong and still cross your mind at 3 AM. Every method you struggled to perfect and never wrote down, because you thought it was safe in your head. It isn't.

Documented precisely. Legally protected. Designed to stop theft and scraping. No one can take it, rename it something like Brad Wisdom or Kylie Snow, and sell it to subscribers around the globe. You won't find yourself driving somewhere ordinary, discovering that your professional identity has started a parallel career without you.

A system that advises your successor when you retire. That guides your partners when you are unavailable. That answers client questions at 2 AM without waking you. That generates revenue after you are gone.

Not a legacy your coworkers toast at your retirement party and then try to piece together from old emails and hazy memories. An operating asset.

The difference between the two is about fifteen minutes and a blank document.

Which is exactly what we will talk about next. The greatest legacy may not be the capital in your account. It may be the protected operational data of your original judgment.

Are you currently protecting it like an asset of that magnitude?

The 15-minute document that starts your fortress today.

Not inspiration. Action.

Open a blank document right now. Set a timer for 15 minutes.

Write down the key constraints of the most expensive recurring task in your firm. The one that occurs most often, drains the most hours, and slows your team down the most. The one that eats two hours a week and produces a deliverable any trained system could draft to the 80% mark.

Detail the exact data inputs required. Specify the format of the output. Write down, in plain language, what the system is explicitly not allowed to do or assume.
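If a concrete picture helps, here is a hypothetical version of what those fifteen minutes might produce, written as a plain spec. Every field and value below is illustrative, not a prescription.

```python
# A hypothetical fifteen-minute spec. Every field and value is illustrative.

TASK_SPEC = {
    "task": "Draft the post-call executive summary for client engagements",
    "frequency": "3-4 times per week, roughly two hours each",
    "required_inputs": [
        "Call transcript, with client identifiers redacted before upload",
        "Engagement scope memo",
        "Our one-page summary template",
    ],
    "output_format": "One page: decisions made, open risks, next actions with owners",
    "stops_at": "An 80% draft. A named partner edits and signs before anything leaves the firm",
    "never_allowed": [
        "Guess at numbers that are not in the inputs",
        "State legal, tax, or medical conclusions",
        "Retain, share, or train on any client material",
        "Send anything to a client directly",
    ],
}
```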

This document is not a memo. It is the core intellectual property of your firm's first internal digital tool.

It is also the first page of your fortress.

Kyle Snow is already in the market. The question is whether your version is protected, constrained, and legally yours. Or whether it's just available.

Your 15 minutes start now.

About Andrew Lawless

Andrew Lawless is an investor, AI strategist, and keynote speaker. He advises CEOs and owners of U.S. professional services firms on building AI systems that use approved data, clear rules, and human oversight. His core argument is simple. Scale your capability. Protect your judgment. Never confuse the two. His work helps firms protect pricing power, strengthen decisions, and turn original thought into durable business leverage.

When the stakes are high and generic AI is not good enough, contact Andrew to build systems that are useful, defensible, and built for the real world.