Have you noticed how AI in 2025 trained us for speed? More reports. More analysis. More social media posts. But has it also trained us to smell the use of AI right away? Perhaps you have noticed how many people sound confident yet have no clue what they are talking about.
Well, the latter was a problem before AI. As John Cleese stated so eloquently eleven years ago: "You see, if you’re very, very stupid, how can you possibly realize that you’re very, very stupid? You’d have to be relatively intelligent to realize how stupid you are…"
So here we are. It’s the New Year. We copy and paste AI content like madmen. Maybe you, too, have once or twice responded to a LinkedIn comment by asking GPT for a response without even reading or understanding it. I see it every day. The em dash is the easiest tell. But there are many others. We suspect AI-generated content even when it's not there. Trust is down the drain. When someone invites us to connect on LinkedIn, we immediately expect to get spammed.
My question is: Why can two people use the same AI tool and the same prompts and still end up with totally different outputs? One sounds authentic and provides accurate information; the other sounds smooth but makes up stuff. Is the latter just dumb and the former relatively intelligent?
Perhaps an answer can be found in a legal case that made AI headlines last year.
In Alabama, a prisoner-assault lawsuit, Johnson v. Dunn, was defended by lawyers from the firm Butler Snow. They represented former Alabama Department of Corrections commissioner Jeff Dunn. Their defense motions included fabricated authorities and cited five non-existent cases.
One attorney, Matthew Reeves, was at the center of it all. He admitted to using ChatGPT for citations but didn't check if the cases were real. Two other attorneys, William J. Cranford and William R. Lunsford, signed the filings.
Judge Anna Manasco did not treat this as a small AI mistake. She called the conduct “tantamount to bad faith” and described it as extreme recklessness. Her remedy was not a fine. She disqualified all three attorneys from the case. Then, she referred the matter to the Alabama State Bar for possible discipline.
She added a disclosure-style consequence. The sanctions order must be shared with clients, opposing counsel, and judges in other cases involving the sanctioned lawyers. That’s a purposeful reputational blast radius.
If we're honest, we've all done that to some extent. We delivered something half-baked and hoped our client wouldn't notice or mind. And we were relieved that we got away with it. Until we didn't. Even before AI. This is called being human.
In 2025, you could still get away with some of it. But we increasingly learned to manage AI output. The top AI question last year was “Can you produce more and faster with fewer resources?” In 2026, it’ll be “Can you be trusted? Is it you or AI I am reading? Can you show your work without letting AI replace your judgment?”
In other words, the AI-First mindset shifts from boosting volume to increasing credibility. Winners use AI to cut out extra steps and boost clarity. They keep the final decision, the final check, and the final accountability human.
There are three kinds of mental effort at play.
The first is intrinsic load: the effort required by the task itself. Some work is hard even when everything around it is clear. Landing a plane with a pressurization failure is hard. Coordinating a fleet-wide software update is hard. Selling a high-trust service to a stranger is hard. AI does not erase this. It can only help you stage, chunk, and execute the work in the right order.
The second is extraneous load: the effort created by clutter, confusion, and unnecessary steps. Bad interfaces, unclear instructions, too many options, too many drafts, too much “maybe.” This is the effort that burns attention and produces nothing. AI can reduce it by turning chaos into a clean sequence. AI can also raise it when it floods you with output and choices.
The third is germane load: the effort you spend building a better model in your head. It’s the sense-making. It’s the part that makes you more capable next time. This is the effort you want to protect. AI becomes a liability if it takes over this work. When that happens, you stop using your judgment. You stop having original ideas and instead amplify crowdthink without owning it.
I know this from coaching CEOs and business owners. The worst compliment they can give me after a coaching session is telling me I did well and that it was a great session. The best compliment is, "Andrew, this was amazing. I had a major breakthrough." That's when I know they own the idea of creating a better way forward rather than just liking it.
It delights me to see these clients mold my teachings and coaching into their own systems, diagrams, and concepts. That's when I know they got it. When you leave that work to AI, you'll never get there. You and your organization never learn. You may look smart on the outside for a while, but then you face long-term consequences and exposure, as in the Johnson v. Dunn case.
On December 20, 2025, a Beechcraft King Air B200 departed Aspen and climbed into thinner air. At an altitude of 23,000 feet, the cabin pressurization failed without warning. Pressurization is the invisible life support system on a plane. When it fails at 23,000 feet, the airplane can still fly. But the people inside can start losing the ability to think and make good choices.
Oxygen starvation in the brain causes foggy thoughts, slow reactions, and odd confidence. It also leads to messy checklist use and missed steps. It can progress to a loss of consciousness. The scary part is the false sense of control. People can feel “fine” while getting worse.
The pilots knew they had little time to troubleshoot and communicate with air traffic control. They also had to manage passengers and make quick decisions during an emergency descent. They understood that impairment could cost them control of the airplane, raising the risk of an uncontrolled descent, especially in mountainous terrain, and of a botched landing.
In short, a pressurization failure changes a normal flight into a quick-thinking test. Your brain struggles due to chemical effects. That is why emergency automation and strict procedures exist. They reduce steps and keep the sequence alive when humans are least reliable.
So pilots needed to make quick decisions. Masks on now or “one more minute?” Descend immediately or stay high to troubleshoot? Fly it manually or allow the emergency automation to take control? Assume they are fine or assume they are already impaired?
The crew put on oxygen masks and activated Garmin's emergency descent and Autoland capabilities. The system declared an emergency over the radio and used wording that included “pilot incapacitation.” Autoland chose Rocky Mountain Metropolitan Airport near Denver. It set up the approach, flew in for landing, slowed down the aircraft, and shut down the engines. The eerie part is the calm. Just a machine executing a sequence built for moments when humans have the least mental bandwidth.
In the King Air event, intrinsic effort spiked instantly. The airplane could still fly, and the humans were at risk of losing the ability to think clearly. That is a brutal kind of complexity because it attacks the very tool you need to solve it. The win came from reducing steps. Oxygen on. Emergency systems engaged. Autoland ran a sequence built for low-bandwidth moments. AI-like automation did not make the situation less serious. It made the actions simpler and more reliable while the human mind was under threat.
A month before the King Air event, airlines were dealing with the opposite problem. This one started as a line of code and ended as a worldwide scheduling headache.
On November 28, 2025, Airbus told airlines it had found a risk in A320-family flight control software. Its recent analysis found that intense solar radiation can corrupt data critical to the flight controls. Regulators moved fast. The fix had to be applied before affected aircraft could return to normal service.
That's when the story shifted from a single software issue to thousands of airplanes.
Every airline had to answer a simple question in real time. Which tail numbers are impacted, and where are they right now? Some were at the gates turning in 40 minutes. Some were mid-rotation, two legs away from a maintenance base. Some were on the far side of the network, where you do not keep extra technicians on standby. So operations teams started rerouting. Dispatch, maintenance control, crew scheduling, gate operations, and customer service all had to move as one organism.
The update itself was not a weeks-long engineering rebuild. It was a straightforward software update. The hard part was throughput. They needed an aircraft on the ground. It had to be in the right place, with the right people ready. It also needed enough time to complete the work and sign it off.
The timing poured kerosene on the problem. It hit during the Thanksgiving travel window in the US, when fleets are already stretched, and spare planes are scarce. That is when “just swap planes” stops being a real option and starts being wishful thinking.
Some carriers worked through the night to clear the backlog. American Airlines confirmed that 209 jets were affected. They planned to finish most updates that day and overnight, leaving just a few for the next day. In Japan, All Nippon Airways canceled 65 flights on Saturday while working through the required fix. In Australia, Jetstar canceled 90 flights and grounded some Airbus planes. Each aircraft took hours to update and return to service. Airlines in Europe and beyond experienced short delays but quickly returned to schedule after overnight work.
Airbus apologized for the disruption and framed the action as safety-first. That part is important. In aviation, a fix you can explain beats a risk you cannot defend.
The vivid point is this. That A320 situation looked technical on the surface. It turned into a scramble to coordinate thousands of moving parts. The industry had to find affected planes and move them. They also needed to staff the work and provide passengers with clear information about the issues. All this happened while the network was at peak load.
That's the extraneous cognitive load, the extra thinking you burn that doesn't complete the actual task. Here, the actual task was simple. Put the software update on the airplane, verify it, and sign it off. The extra load came from coordination and constant change. Planes kept moving. Gates changed. Crews ran into duty limits. Weather and delays shifted turn times. A plan that was correct at 9:00 was wrong at 9:20. Teams had to replan again and again, and each replan required handoffs across dispatch, maintenance control, crew scheduling, airport ops, and customer teams. That switching is a mental drain.
No one has publicly said “AI fixed this” in the reporting I can see. It still likely helped in the background, or could have helped, by reducing the “find and replan” work. Airlines already run large optimization and forecasting systems for crew pairing, aircraft routing, and disruption recovery. Those systems are often described as “AI,” even when they are classic operations research plus rules and heuristics.
I envision an AI-driven operations system that can maintain a live list of affected tails, their current locations, their software status, and the time window during which each can be updated. It can suggest the optimal order for updating aircraft based on real constraints such as ground time, station capacity, technician availability, and network impact. It can keep recomputing the plan as conditions change, so humans don't have to rebuild it from scratch all day.
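To make that concrete, here is a minimal sketch in Python of the replanning idea. Every field name, number, and rule in it is hypothetical; real airline recovery systems are far more sophisticated. The point is only that the recomputation gets automated while humans keep the judgment calls.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical record for one affected aircraft ("tail").
@dataclass
class Tail:
    tail_number: str
    station: str                 # where the aircraft is, or will next be, on the ground
    ground_start: datetime       # when it becomes available for maintenance
    ground_minutes: int          # planned ground time at that station
    technicians_available: bool  # does the station have qualified staff right now?
    network_impact: int          # rough cost of holding this tail (1 = low, 5 = high)

UPDATE_MINUTES = 90  # assumed time to load, verify, and sign off the software

def can_update(tail: Tail) -> bool:
    """A tail is a candidate only if staff and sufficient ground time both exist."""
    return tail.technicians_available and tail.ground_minutes >= UPDATE_MINUTES

def priority(tail: Tail) -> tuple:
    """Earliest opportunity first; ties broken by network impact, highest first."""
    return (tail.ground_start, -tail.network_impact)

def replan(tails: list[Tail]) -> list[Tail]:
    """Recompute the work order from the current snapshot of the fleet."""
    return sorted((t for t in tails if can_update(t)), key=priority)

# Each time a gate, crew, or delay changes the snapshot, call replan() again
# instead of asking humans to rebuild the sequence by hand.
```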
AI could also help with passenger updates by drafting consistent messages from approved wording, plus current facts. That reduces confusion and the need for repeated explanations at gates and call centers.
But AI only reduces extraneous cognitive load when it uses trusted data and shows what it used. If it guesses or hallucinates, it adds verification work for humans in the middle of a crisis.
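Here is a minimal sketch of that principle, with a template and field names I invented for illustration. The wording is pre-approved, the facts come from a trusted operations feed, and anything the feed cannot confirm blocks the draft instead of being guessed.

```python
# Pre-approved wording; only verified facts get filled in.
APPROVED_TEMPLATE = (
    "Flight {flight} is delayed while we complete a required software update. "
    "Current estimated departure: {new_departure}. "
    "Source: operations system, as of {as_of}."
)

def draft_update(facts: dict) -> str:
    required = ("flight", "new_departure", "as_of")
    missing = [key for key in required if not facts.get(key)]
    if missing:
        # Refuse to draft rather than invent a departure time.
        raise ValueError(f"Cannot draft message; unverified fields: {missing}")
    return APPROVED_TEMPLATE.format(**facts)

# Example with placeholder facts from the (hypothetical) operations feed.
print(draft_update({
    "flight": "XX123",
    "new_departure": "18:40 local",
    "as_of": "16:05 local",
}))
```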
Our friends at Deloitte always make for good case studies and learning experiences. Last year, they wrote a report for Australia’s Department of Employment and Workplace Relations on a welfare compliance framework and the IT system behind it.
Chris Rudge, a researcher in health and welfare law at the University of Sydney, opened the report. He did the opposite of what most people would do. He didn't start by reading the conclusions. He started with the footnotes.
The report looked reassuring at first glance. It was long, dense, and citation-heavy. It had the polished tone that tells a busy reader, “Relax, this is handled.” Rudge read it like a skeptic anyway, because citations are the proof chain.
One reference piqued Rudge's interest. It credited a Sydney Law School professor with a book that Rudge had never seen and a title that felt off. He performed the simplest check. He searched for the book, but nothing came up. He tried again with different keywords. Still nothing. In that moment, the report's validity fell into question. A missing source is rarely a one-off. It is usually a symptom.
Rudge kept pulling threads. Another reference did not exist. Then another. He moved from reading to testing. He checked a quotation attributed to a federal court judgment and could not find it in the judgment. He later described the experience bluntly. “I instantaneously knew it was either hallucinated by AI or the world’s best-kept secret.”
Rudge told the Associated Press he found up to 20 problems. He said the report used respected names as “tokens of legitimacy.” He also claimed it cited sources without proof that they had been read. He also said, “They’ve totally misquoted a court case and then made up a quotation from a judge.”
That turns into a governance failure fast. The report was meant to guide policy and compliance decisions. If the evidence is unreliable, the conclusions become a confidence trick. The reader is forced to trust claims with no proof. Rudge called it out. The report was “full of fabricated references.”
After the story broke, things moved quickly. The department republished a revised version. Deloitte and the department acknowledged that some footnotes and references were wrong. Coverage noted that Deloitte had used generative AI in the project. Deloitte also agreed to repay the final installment of its contract.
This is germane cognitive load in the real world. One person carried the burden of understanding. He checked the chain. He tested reality. He forced the system to tighten its constraints.
AI can produce clean sentences that feel like finished thinking. That fluency increases extraneous load because it creates a false sense of certainty, and then you spend time cleaning up mistakes. It also reduces germane load by stopping you from building your own model.
Use AI as a drafting engine, and then you carry the burden of proof. Verify quotes against originals. Verify citations exist. State what you know, what you infer, and what you do not know. Then publish.
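As a first-pass screen, even a small script can flag citations worth distrusting before you publish. This is a sketch, assuming your sources live at reachable URLs; the entries below are placeholders, and passing this check is not verification. A human still opens and reads the source.

```python
import requests  # third-party HTTP library

# Hypothetical citation list: each entry pairs a claimed source with the exact
# quote attributed to it.
citations = [
    {"url": "https://example.com/judgment.html",
     "quote": "the exact sentence you attribute to the court"},
]

def first_pass_check(citation: dict) -> str:
    try:
        response = requests.get(citation["url"], timeout=10)
    except requests.RequestException:
        return "UNREACHABLE: verify by hand"
    if response.status_code != 200:
        return f"MISSING: source returned {response.status_code}"
    if citation["quote"].lower() not in response.text.lower():
        return "QUOTE NOT FOUND: read the source before publishing"
    return "FOUND: still read it in context"

for citation in citations:
    print(citation["url"], "->", first_pass_check(citation))
```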
Few places make this more important than selling your services on LinkedIn.
Just this morning, I was on a call with a fractional HR consultant. She had engaged a company to help her get leads. But when she checked the links in the lead list, the profiles did not exist. Others pointed to profiles that weren't a good match, hadn't been updated in years, were not active on LinkedIn, or had fewer than 100 connections. She lost thousands of dollars on nothing.
What the company actually did is something like this. They define a niche, such as “Roofing owners in Texas” or “HR directors in healthcare.” They open a scraping tool like PhantomBuster or a data broker such as ZoomInfo, Apollo, Cognism, Clearbit, or Lusha.
They set filters like job title, company size, location, and keywords. Then they hit export. Out comes a spreadsheet with names, profile links, company names, sometimes emails, and a few guessed fields like seniority or tech stack.
That's how far this fractional HR consultant got, and she is now fighting for a refund. If you are lucky and get past that stage...
They dump the list into Go High Level or another sequencer. They paste in a message set that looks personal on the surface. It uses tiny tokens. First name. Company name. City. Maybe a fake compliment about a “recent post” the sender never read. The copy is written to sound warm and specific, but it is actually generic. It's a costume.
Hundreds of connection requests go out in waves. The system drips follow-ups every day, whether the person replied or not. If the target accepts, they get a “thanks for connecting” message within seconds. If they do not respond, they get a bump two days later, then a longer pitch four days later, then a “closing the loop” nudge. The timing is engineered. The intent is not conversation. The intent is extraction.
On the sender’s side, the dashboard looks productive. New connections. Messages sent. Response rate charts. The system calls it a “campaign.” The person running it calls it “leveraging AI.”
On the receiver’s side, it feels like being processed.
The message arrives with a too-smooth compliment and a hard turn into an offer. The words fit the template better than they fit the recipient. Then the follow-ups keep coming, even after the first one is ignored. The recipient realizes the sender is not reading; the sender is just running a sequence. That moment is the whole problem. Trust collapses.
LinkedIn starts throttling. Connection limits bite. Accounts get restricted. Deliverability drops. The operator responds by adding more accounts, rotating domains, changing copy, and buying new lists. The system evolves like spam. The work becomes a cat-and-mouse game with platform rules and human attention.
Just go on Facebook, look for LinkedIn Groups, and join them. You'll be shocked by how big the market is for renting, buying, or selling LinkedIn accounts.
This kind of automation saves effort up front and creates debt later. At worst, you'll get blocked from LinkedIn for life, and you should be. At best, you burn your reputation. You also lose the thinking muscle, because the system rewards volume, not understanding. That's the germane load cost. Your market model is replaced by a dashboard.
Intrinsic load is the real work. It's understanding your offer, your buyer, the problem you solve, and the constraints you must obey. AI can help you prepare, but it cannot do this work for you. Use AI to outline the decision, map objections, and draft options. Then you choose the claim you can defend. You set the target. You set the boundary. You decide what “good” looks like before you press send.
Extraneous load is the junk tax. It’s chasing bad lists, cleaning up hallucinated facts, arguing with dashboards, and doing ten tool handoffs because nothing is connected. AI should reduce this. Use it to summarize threads, track follow-ups, write first drafts, and keep your CRM notes clean. Use it to standardize checklists, templates, and SOPs so your team stops reinventing basic steps. Never use AI to manufacture specificity. Never use it to pretend you read a post you didn’t read. That is how extraneous load becomes reputation damage.
Germane load is the model building. It’s the part that makes you dangerous in a good way. It’s you forming a clear view of the market and testing it against reality. You own this. AI can assist by stress testing your reasoning. It can play the skeptic. It can ask “what would disprove this” and “what evidence would you need.” Then you do the verification. Open the source. Read it. Confirm the quote. Confirm the profile exists. Confirm the person is active. Confirm the fit. If you cannot prove it, don’t claim it.
Here's the operating rule: if you cannot prove it, don't claim it. If you want that rule to show up on LinkedIn, run a simple discipline.
Before you publish, make one claim. Attach one proof. Write one constraint. State one next step.
Before you outreach, pick a small list. Read the profiles. Send fewer messages that prove you paid attention. Track replies. Improve the message from real responses, not from wishful dashboards.
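If it helps to see the publish half of that discipline as a hard gate, here is a minimal sketch. The field names and sample content are mine, made up for illustration; nothing ships unless all four parts exist.

```python
from dataclasses import dataclass

@dataclass
class Post:
    claim: str       # the one thing you are willing to stand behind
    proof: str       # the evidence you can show, not just assert
    constraint: str  # where this does NOT apply
    next_step: str   # what you want the reader to do

def ready_to_publish(post: Post) -> bool:
    """All four fields must be non-empty before anything goes out."""
    return all(value.strip() for value in vars(post).values())

# Placeholder example, invented for illustration only.
draft = Post(
    claim="Our onboarding playbook shortens time-to-first-hire.",
    proof="Before/after timelines from three 2025 client projects, shared on request.",
    constraint="Only tested with teams of 20 to 200 employees.",
    next_step="Reply if you want the before/after numbers.",
)
assert ready_to_publish(draft), "Something is missing; do not press send."
```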
This is how you keep speed without losing trust. This is how you use AI to lower noise, protect your name, and build a model that compounds.
In 2026, the winners use AI like aviation uses automation. They use it to reduce unnecessary steps and needless decisions. They protect the human work that earns trust. They deliver proof, exercise judgment, hone their voice, and practice radical integrity and accountability.
That is the new standard. Your content does not need to sound finished. It needs to be true, specific, and unmistakably yours.
Most people think the risk of using ChatGPT to pump out content is that it doesn't sound like them. But that’s a surface issue. Most people use AI not to sound like them, but to sound better. That's certainly true for me as a non-native English speaker.
The real risk is losing trust by sharing wrong and unchecked information. So we treat LinkedIn like pilots treat safety.
Start with intrinsic effort. Selling a service on LinkedIn has real complexity. You need a clear audience, an active need, a clear promise, a believable mechanism, and proof. AI will not invent that for you. AI can help you organize it, sharpen it, and test it. You still own the model. If you don’t own the model, you will stumble from message to message, and people will feel it.
Now handle extraneous effort. This is where most creators and sellers get wrecked in 2026. They let AI generate 10 versions, 10 angles, 10 hooks, 10 CTAs, and 10 persona styles. They keep “improving” and never deciding. Output goes up, clarity goes down. I see this with my clients all the time. Once they go down the AI-prompt rabbit hole, it's time to pull them out and coach them on clarity.
Have you noticed that AI never stops improving? It reminds me of one of my translators at the World Bank, whom I once caught criticizing her own translation because she thought it had been produced by a colleague she did not like. AI isn't vicious. It wants to please you. So it's totally willing to critique and rewrite its own content. It just never stops. Ask GPT to improve your website's conversion rate. Then implement the updates and ask again, and again, and again.
In my work on a landing page, GPT advised adding a consistent call to action to every button. After I did that and asked it to revise and further improve conversion, it suggested the opposite. It was not trolling me. It was caught between two schools of thought. One school says to repeat the same CTA everywhere to reduce choices and friction, especially when the page has one job and traffic is warm. The other says to match each button to the visitor's stage: use specific micro-commitments early and save the hard CTA for when trust and intent are earned.
Asking GPT to improve continuously can trap it in this loop. Likewise, when prospects get mixed signals, their brains start to feel like the A320 update at peak travel: the fix is clear, but everything around it becomes a scramble.
The fix is to limit choices on purpose. Ask AI for two options, not ten. Ask for one recommended path and one alternative, not a buffet. Then decide and ship. Your job is not to explore every possibility. Your job is to pick a lane and run it long enough to learn. Correct course often and early.
Now protect germane effort. This is the part that keeps your work unmistakably yours. It’s meaningful work. The judgment. The ability to say, “Here’s what matters and here’s why.” This is also where trust is built, because trust comes from coherence. Coherence comes from a mind that is present.
AI can gather context. Summarize a prospect’s last ten posts. Pull themes. Spot repeated pain points. Draft a clean recap of their situation.
AI can draft language. A post structure, a DM outline, a follow-up that stays polite and clear, a subject line, and a tighter paragraph.
AI can challenge your thinking. Ask it to look for weak claims, missing steps, unclear proof, and places where you sound generic.
What stays yours? The claim you are willing to stand behind. The proof you can explain without looking away. The opinion you will defend. The next step you choose. The final check before it goes out.
That’s the difference between the King Air landing and the fake citations. Automation can run a sequence. It cannot carry responsibility. You own the truth.
Andrew Lawless is an AI-First growth strategist and business coach for expert service providers. He helps them generate LinkedIn demand without scraping, spam, or daily posting by building permission-based outbound systems that protect trust.
He brings two decades of experience coaching and training leaders across high-stakes environments, including work with the FBI, Special Forces, Fortune 500 teams, and the World Bank. His focus is practical execution under pressure, where credibility matters and mistakes carry real cost.
His core lens is cognitive load. He teaches clients to handle intrinsic complexity, cut extraneous clutter, and carry the germane load that builds sound judgment. He uses AI as a tool for drafting, summarizing, and systemizing workflows, while keeping verification and accountability human.
His work is built for 2026 realities. Fluent content is cheap. Trust is a scarce asset. He helps clients ship clear, provable claims, run clean follow-ups, and scale what works without sacrificing standards.
Meet him here: https://www.teamlawless.com/momentum-call-schedule