Steph’s Digital Ambassador (00:00):
Welcome to the Deep Dive. So we're living in this world now, right? Completely, well, seemingly, reorganized by artificial intelligence.
Andrew’s Mindmate (00:08):
Yeah, it's everywhere now. The promise is huge. Efficiency, streamlining, cost savings. Companies are pouring billions into it.
Steph’s Digital Ambassador (00:16):
Billions. But here's the thing that seems off. If AI is the silver bullet, why are so many organizations struggling, struggling to actually see real market returns from all that investment?
Andrew’s Mindmate (00:28):
That really is the core question, isn't it? And the sources we've looked at today, they offer a really fascinating, maybe even slightly alarming perspective on this. Okay, we're digging into a concept that's kind of new, but it feels fundamental. It's about how we value people, human capital, in this age of machines. It's called original intelligence, OI.
Steph’s Digital Ambassador (00:46):
Original intelligence. OI. Okay, so our mission today is really specific. We need to unpack why, when generative AI can just churn out ideas nonstop, does actual human originality suddenly become this measurable edge,
Andrew’s Mindmate (01:00):
A competitive advantage?
Steph’s Digital Ambassador (01:02):
And we're looking closely at one company, Hubside, because they're claiming they've actually figured out how to quantify it, how to put a number on this skill.
Andrew’s Mindmate (01:11):
But first we really need to frame the problem itself. I mean, think about it. If AI is built to do the hard thinking for us, the quick pattern matching, the statistical analysis to find the most likely answer, what's happening to our own ability to think differently, to deviate?
Steph’s Digital Ambassador (01:29):
Right? Are we losing something?
Andrew’s Mindmate (01:31):
Well, the sources suggest, yes, there's a warning here about over-reliance, potentially leading to decreased cognitive function and this very real risk of homogenized thinking. It could be behind a lot of these AI transformation failures.
Steph’s Digital Ambassador (01:45):
Okay, let's definitely unpack that. If AI gives us the statistically best answer, why would human original intelligence be the thing that makes the difference? The thing separating the leaders from, well, the failures.
Andrew’s Mindmate (01:57):
That's exactly where we need to start. Let's look at what the sources are calling the AI transformation crisis.
Steph’s Digital Ambassador (02:01):
Yeah, because when you look past the hype, the actual financial results of mass AI adoption, well, they don't look great for most companies.
Andrew’s Mindmate (02:08):
No, they really don't. The numbers are pretty stark, actually. Research looking into this whole AI transformation crisis points out that adoption just isn't delivering like people expected.
Steph’s Digital Ambassador (02:18):
How stark are we talking?
Andrew’s Mindmate (02:19):
We're talking something like only 5%, 5% of companies finding significant ROI from these massive AI efforts.
Steph’s Digital Ambassador (02:27):
Wow. Okay. 5% success means 95% are essentially pouring money into tools that aren't giving them a real competitive advantage. Why? What's going wrong? The sources mentioned AI readiness, but that sounds a bit fuzzy.
Andrew’s Mindmate (02:43):
Well, it's less fuzzy when you look at how it's failing, and it seems baked into AI's design in a way. AI is fundamentally a feedback loop based on existing data. It's incredibly good, better than humans, at sifting through enormous data sets, finding patterns, and spitting out the most statistically probable answer. It's inherently conservative, you could say risk-averse.
Steph’s Digital Ambassador (03:03):
So if I ask an AI model, what's the best strategy for launching Product X? It gives me the answer that history says works most often. The polished, predictable, safe bet.
Andrew’s Mindmate (03:13):
Exactly. And that's the homogenization trap. If every single company is asking similar strategic questions to the same few foundation models,
Steph’s Digital Ambassador (03:21):
They're all getting variations of the same statistically likely answers.
Andrew’s Mindmate (03:24):
Precisely, which leads to this massive market-wide groupthink. Companies are spending fortunes essentially to get the same kind of output as their competitors. So where's the differentiation? Where's the edge?
Steph’s Digital Ambassador (03:37):
Right? So it's not that the AI is bad, it's that relying on it too much levels the playing field in the wrong way. It makes everyone sound the same.
Andrew’s Mindmate (03:44):
Yeah, uniform sameness,
Steph’s Digital Ambassador (03:46):
Which means the real solution, the way to get ahead must be finding the people who can push beyond that sameness. The humans who don't just take the AI's first answer.
Andrew’s Mindmate (03:56):
And that's why the old ways of finding talent don't quite cut it anymore, relying just on resumes or standard performance reviews. Hubside talks about this. They call it rewarding polish over possibility.
Steph’s Digital Ambassador (04:06):
Polish over possibility, meaning we value people who execute well within the known system.
Andrew’s Mindmate (04:11):
Exactly. People good at optimizing, documenting, working within the established framework. But the people who think really differently, the ones who might propose something radical that could genuinely future-proof the company,
Steph’s Digital Ambassador (04:25):
They often get overlooked or maybe even penalized because their ideas don't fit the neat AI-approved box. They seem too risky, too out there.
Andrew’s Mindmate (04:34):
They're too far outside what's called the idea space.
Steph’s Digital Ambassador (04:37):
Okay, that term again, idea space. Let's transition to that. This brings us right to the core concept, original intelligence. If AI flattens ideas, OI is the human counterweight. What exactly is it, based on the sources?
Andrew’s Mindmate (04:51):
They define it pretty clearly. OI is the human knack for going beyond the ideas everyone else has already thought of, whether everyone else is human consensus or an AI spitting out probabilities.
Steph’s Digital Ambassador (05:00):
So imagine all known concepts, all existing solutions, all the statistically likely answers. That's the idea space.
Andrew’s Mindmate (05:07):
Yep. Think of it like a giant library of everything known or predictable.
Steph’s Digital Ambassador (05:11):
And original intelligence is the ability to write a completely new book for that library. Something genuinely novel, not just a remix of the old ones.
Andrew’s Mindmate (05:17):
That's a great way to put it. It's about pattern breaking, connecting dots nobody else saw, solving problems in ways that aren't just incremental improvements. It's that capacity to push the known boundaries.
Steph’s Digital Ambassador (05:28):
Okay.
Andrew’s Mindmate (05:28):
And the sources stress, it's not just vague creativity. OI seems to have two specific qualities that are really crucial for actually succeeding with AI, not just despite it.
Steph’s Digital Ambassador (05:38):
Two qualities. Okay. I'm interested because creativity can feel so nebulous. What are they?
Andrew’s Mindmate (05:45):
Right. So the first is cognitive. It's the raw ability to generate something truly novel, something statistically unlikely to pop out of current AI models or common human thinking.
Steph’s Digital Ambassador (05:55):
Makes sense. Novelty. What's the second?
Andrew’s Mindmate (05:56):
The second is more behavioral, and maybe overlooked. It's the willingness, the actual drive to think for yourself, to generate your own ideas, even when the AI has already handed you a perfectly decent, easy answer. It almost taps into conscientiousness.
Steph’s Digital Ambassador (06:11):
Wait a second. So it's not just about being creative, it's about having the, I don't know, the discipline to actually do the original thinking to maybe ignore the easy AI answer long enough to see if you can come up with something better yourself.
Andrew’s Mindmate (06:26):
That's a huge part of it. Yeah. It's behavioral, not just cognitive horsepower.
Steph’s Digital Ambassador (06:30):
That feels really important. It's like acknowledging the path of least resistance isn't always the winning path.
Andrew’s Mindmate (06:35):
Exactly. The most valuable people in the AI era might be those who force themselves to do the hard work of original thought, even when the machine is offering a shortcut.
Steph’s Digital Ambassador (06:45):
But here's where I always get a bit skeptical. Defining OI is one thing. Measuring it? How on earth do you objectively measure something so abstract? Isn't an OIQ score just another attempt to quantify creativity, which we know is notoriously tricky and subjective?
Andrew’s Mindmate (07:01):
And that's the big claim from Hubside with their tool, Hub Checker. They argue they've cracked it, that they're objectively measuring originality, moving past those subjective judgments.
Steph’s Digital Ambassador (07:10):
How? What are they saying they do?
Andrew’s Mindmate (07:13):
They talk about using, well, rigorous science: their own proprietary assessments and cognitive mapping techniques. The output is this OIQ score, plus an archetype mapping and insights for teams,
Steph’s Digital Ambassador (07:26):
An objective OIQ score. If they can really do that, it sounds like it could be the missing piece for companies trying to get that AI ROI. But this brings us to the philosophy behind what they're measuring. They seem to draw a very sharp line.
Andrew’s Mindmate (07:38):
They do, and this is where it gets really interesting. I think Hub Checker deliberately focuses only on measuring novelty.
Steph’s Digital Ambassador (07:45):
Just novelty.
Andrew’s Mindmate (07:45):
Just novelty. The ability to generate something statistically new, something outside the current human and AI idea space. They explicitly separate that from usefulness.
Steph’s Digital Ambassador (07:56):
Why split those? Isn't an idea only valuable if it's actually useful?
Andrew’s Mindmate (08:01):
Well, think about traditional creativity assessments. They usually try to judge both, right? Novelty and usefulness, often based on subjective expert opinions.
Steph’s Digital Ambassador (08:08):
Yeah, that sounds familiar,
Andrew’s Mindmate (08:09):
But Hubside argues that in today's world, with AI churning out billions of ideas, subjective human judgments about what's novel are basically obsolete.
Steph’s Digital Ambassador (08:18):
Obsolete? Why obsolete now? What changed?
Andrew’s Mindmate (08:21):
The scale. Before these massive AI models, if you or your team came up with something that felt new to you, you'd call it novel. But now, how do you know if your brilliant new idea isn't actually statistically similar to thousands of other outputs the AI generated for other people asking similar questions?
Steph’s Digital Ambassador (08:41):
Okay. Our human brains just can't grasp the sheer volume of AI-generated possibilities. So what feels original to me might actually be a really common AI suggestion, if you looked at the whole picture.
Andrew’s Mindmate (08:52):
That's the argument. You need an objective, analytical way of identifying true statistical novelty, the raw potential, and that's why Hub Checker stops there. It deliberately doesn't try to judge usefulness.
Steph’s Digital Ambassador (09:03):
So the tool identifies the explorers, the people who can find new land.
Andrew’s Mindmate (09:07):
Yeah, something like that.
Steph’s Digital Ambassador (09:08):
But it doesn't tell you if that land is fertile or barren. That decision, is this novel idea actually useful for our goals, stays with the humans?
Andrew’s Mindmate (09:15):
Exactly. It should always be up to human leaders, using their critical thinking and strategic judgment, to decide if a novel idea actually works for the mission. It separates the potential for breakthrough from the practical application.
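(Editor's note: the sources don't explain how Hub Checker actually computes statistical novelty; it's described only as proprietary assessments and cognitive mapping. Purely to illustrate the "distance from the idea space" concept the hosts are discussing, here is a minimal sketch that assumes ideas have already been turned into embedding vectors and that novelty can be approximated as distance from the nearest known idea. The function name, the toy vectors, and the scoring rule are illustrative assumptions, not Hubside's method.)

```python
# Illustrative sketch only: novelty as distance from an existing "idea space".
# This is NOT Hubside's or Hub Checker's method; the embeddings are toy vectors.
import numpy as np

def novelty_score(candidate: np.ndarray, idea_space: np.ndarray) -> float:
    """Return 1 minus the max cosine similarity to any known idea (higher = more novel)."""
    candidate = candidate / np.linalg.norm(candidate)
    known = idea_space / np.linalg.norm(idea_space, axis=1, keepdims=True)
    similarities = known @ candidate           # cosine similarity to each known idea
    return float(1.0 - similarities.max())     # far from everything -> score near 1

# Toy idea space: each row stands in for an embedding of an existing
# (human- or AI-generated) idea on some strategic question.
known_ideas = np.array([
    [0.9, 0.1, 0.0],
    [0.8, 0.2, 0.1],
    [0.7, 0.3, 0.0],
])

derivative_idea = np.array([0.85, 0.15, 0.05])  # sits inside the existing cluster
unusual_idea    = np.array([0.05, 0.10, 0.95])  # sits well outside it

print(round(novelty_score(derivative_idea, known_ideas), 2))  # ~0.0: statistically familiar
print(round(novelty_score(unusual_idea, known_ideas), 2))     # ~0.8: outside the idea space
```

Note that the score says nothing about usefulness, which is exactly the separation the hosts describe: a sketch like this could flag what is statistically far from the known pool, and humans would still judge whether it's worth pursuing.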
Steph’s Digital Ambassador (09:29):
That distinction makes sense. So moving to strategy, OI is being framed as this new currency. If AI is the great equalizer, maybe even the great flattener,
Andrew’s Mindmate (09:39):
Then measuring and nurturing OI is how you get your edge back. It's how you finally unlock that ROI everyone's chasing with their AI investments.
Steph’s Digital Ambassador (09:48):
And they're specific about who should be measured, right? It's not for everyone.
Andrew’s Mindmate (09:52):
No. Hub Checker is designed for teams doing strategic, creative, innovative work. Think high-level strategy units, R&D, product development, certain marketing roles, leadership teams,
Steph’s Digital Ambassador (10:02):
The places where breakthrough thinking actually moves the needle,
Andrew’s Mindmate (10:05):
Right? The goal is to give managers visibility into the OI of their teams so they can better leverage both their people and their AI tools.
Steph’s Digital Ambassador (10:12):
The sources also lay out a really clear definition of what AI-ready talent looks like. Four parts, I think.
Andrew’s Mindmate (10:18):
Yeah, four key elements. First, it's someone who can use AI to produce output that's better than what either the AI or the person could do alone. It's about synergy,
Steph’s Digital Ambassador (10:28):
Not just using AI as a crutch, but as a partner to elevate the outcome.
Andrew’s Mindmate (10:32):
Okay. Second, they need to be adaptable, able to roll with the punches in a constantly changing AI environment. Third, and this ties back to OI, they must bring original thinking to their role. They can't just be good prompt writers.
Steph’s Digital Ambassador (10:46):
They need to add value beyond just operating the machine.
Andrew’s Mindmate (10:48):
Exactly. And fourth, crucial point: they need good judgment. They need to see clearly where AI is helpful and, just as importantly, where it's not helpful, where human insight is still essential.
Steph’s Digital Ambassador (11:00):
That last one feels key. Knowing when to switch it off, essentially, or when to question its output. And I can see how mapping OI across a team would be powerful too. Understanding how different originality types could complement each other. That could really boost collaboration, especially in creative fields.
Andrew’s Mindmate (11:15):
Yeah, aligning the right kind of thinkers with the right challenge. But it's important to mention a caveat the source includes,
Steph’s Digital Ambassador (11:22):
Okay. What's that?
Andrew’s Mindmate (11:23):
Hubside is apparently very clear that the Hub Checker tool is not meant to be a major factor in hiring or firing decisions.
Steph’s Digital Ambassador (11:31):
So not for performance management or selection on its own,
Andrew’s Mindmate (11:33):
Right. It's positioned as an additional data point, something to use alongside interviews, work samples, existing performance data. More for strategic team design and understanding potential, not for making definitive employment judgments based solely on the score.
Steph’s Digital Ambassador (11:50):
Got it. So bringing this all together, what's the big takeaway for you, the listener, the professional, navigating this AI wave? It seems clear. Original intelligence is emerging as this vital human element. It's what ensures those huge AI investments actually deliver value.
Andrew’s Mindmate (12:07):
It's the differentiator in a world where technology is making many things homogeneous. Your ability to push beyond that known idea space is what sets you and your organization apart.
Steph’s Digital Ambassador (12:17):
So whether you're adopting new AI tools or just tackling complex problems day to day,
Andrew’s Mindmate (12:21):
That willingness, that ability to do the hard work of thinking originally, expanding beyond what the machine or the crowd suggests, that's becoming a measurable, crucial skill. Your value isn't just in getting the polished AI answer, it's in finding the answer the AI couldn't even conceive of,
Steph’s Digital Ambassador (12:38):
Which leaves us with a final thought, something for you to really mull over. We know AI is designed to make thinking feel easy, almost frictionless, and the sources warn that relying too heavily on that ease can dull our own cognitive abilities and lead to groupthink.
Andrew’s Mindmate (12:53):
Yeah, a real double-edged sword.
Steph’s Digital Ambassador (12:55):
So how often should we deliberately make it harder for ourselves? How often should we and our teams actively choose to ignore the AI suggestion, at least for a while, just to force our own original intelligence muscle to get a workout?
Andrew’s Mindmate (13:08):
If the machine always presents a perfect looking answer,
Steph’s Digital Ambassador (13:11):
Is the truly original thinker the one disciplined enough to switch that perfection off and grapple with the messy, uncertain process of creating something genuinely new?