Innovation Heroes: Is AI dramatically underhyped? The math says yes
"We'll never move this slow again," Intel's Stacey Shulman tells us.


Reading Time: 4 minutes

Moore’s Law — the observation that computing power doubles roughly every two years — has governed technology strategy for decades. IT leaders have built entire careers around planning for that predictable pace of change.

AI just broke the rules.

“AI capabilities are doubling every six months,” Stacey Shulman, Intel’s Vice President of Health, Education, and Consumer Industries, explains. “And if you think AI is moving fast now, wait ’til next year. It’ll never move this slow again.”

That’s a doubling cadence four times faster than Moore’s Law. And she’s not talking about abstract benchmarks. She’s measuring three concrete things:

  1. The length of tasks AI can complete. What used to require stringing together multiple short automations now happens in single, complex workflows.
  2. The overall intelligence of models. Their ability to reason, understand context, and produce nuanced outputs.
  3. The practical usefulness. The shrinking gap between what’s technically possible and what’s actually deployable in production.
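The “four times faster” framing understates what compounding does over a planning horizon. A quick back-of-the-envelope sketch, using only the doubling periods quoted above (24 months for Moore’s Law, 6 months for Shulman’s AI estimate), shows the difference:

```python
# Illustrative arithmetic only: compares two doubling cadences
# over a standard two-year planning horizon.

def growth_factor(months_elapsed: float, doubling_period_months: float) -> float:
    """Multiplier after a given time, for a fixed doubling period."""
    return 2 ** (months_elapsed / doubling_period_months)

# Over 24 months:
moore = growth_factor(24, 24)  # Moore's Law: doubles once -> 2x
ai = growth_factor(24, 6)      # 6-month doubling: doubles four times -> 16x

print(f"Moore's Law over 24 months: {moore:.0f}x")
print(f"6-month doubling over 24 months: {ai:.0f}x")
```

In other words, a cadence four times faster doesn’t mean four times the capability: over two years it compounds to 16x growth versus the 2x that two-year planning cycles were built around.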

“When things are coming straight at us, you can’t always tell the speed,” Shulman notes. “But let’s say a car was coming at us and it moved, it doubled its pace every hour. Well, for the first hour, it would still seem really slow. And then the fourth hour, it would still seem really slow. And then all of a sudden, you would have to get out of the way.”

The $5 billion signal everyone missed

On September 18, 2025, Intel and Nvidia announced a historic partnership. The headlines focused on the $5 billion investment and joint chip development. But Shulman sees something more fundamental at play.

“I believe wholeheartedly that you can’t have parallel universes with AI,” Shulman explains. “You can’t have your AI infrastructure and your AI stuff and then your operational stuff. Those things must converge. Your operational things must have AI infused into them. So everything, every bit of compute that you have, in my opinion, it has to have AI embedded into it.”

This is where most organizations are getting it wrong. They’re building “AI infrastructure” — separate systems, separate teams, separate budgets. What they should be building is resilient infrastructure that can integrate AI capabilities as they evolve, without requiring a rip-and-replace approach every six months.

The question nobody wants to answer

About halfway through the conversation, Innovation Heroes host Ed McNamara asks the question every responsible IT leader is thinking: How do you balance “move faster” with “do it responsibly”?

Shulman doesn’t dodge it. But her answer might surprise you.

“I remember back when I was a CIO, my team was all complaining that, look, we’re exhausted. Things are moving so fast right now. And I’m like, yeah, it’ll never move this slow again. What are we going to do to adapt? If you can’t move fast and safe, you balance to safety, right? You bias to safety. It depends on your risk profile and what kind of sensitive data you’re sitting on. So I don’t have a one-size-fits-all answer to this. All I have is empathy for the people who have to wrestle with it right now.”

But here’s the critical reframe: moving fast and moving safely aren’t opposites when you have the right infrastructure. This is where that “resilient infrastructure” concept becomes crucial. It’s not about recklessly deploying unvetted AI. It’s about building systems flexible enough to evolve without creating security gaps or compliance nightmares.

What’s coming

Shulman predicts that within 12-24 months, AI agents will act as virtual teammates — joining chats and meetings to offer expertise, challenge ideas, and provide real-time analysis. These “virtual interns,” working for the cost of electricity, will need onboarding, training, and performance reviews just like human employees. As Shulman puts it: “Loop in your HR department and say, if we were going to onboard 1,000 new employees, what would we need them to know? What would we teach them? What training? What manuals would we give them? And I think that answer needs to go into your next training model.”

Monday morning action steps

If you’re convinced by Shulman’s argument — or at least worried enough to act — here’s where to start:

  1. Audit your infrastructure strategy: Are you building “AI infrastructure” in a parallel universe, or resilient infrastructure that can integrate AI as it evolves?
  2. Talk to HR: Seriously. Ask them what new employee onboarding looks like. That’s your AI agent training playbook.
  3. Review every business improvement initiative: For each strategic goal, explicitly ask: “Where does AI fit in this?” Don’t default to “nowhere” or “everywhere.”
  4. Identify your exponential trends: What in your industry is doubling every 6-12 months? You can’t afford to miss those curves.
  5. Test before you commit: Whether through internal labs or partnerships, like SHI’s AI & Cyber Labs, prototype AI applications before massive infrastructure investments.
  6. Assign AI agent managers: Start thinking about who will be responsible for training, evaluating, and improving your AI agents’ performance.
  7. Look for “time back” opportunities: Where are your highly skilled people spending time on tasks AI could handle? Think about doctors typing notes, teachers grading papers, or analysts building repetitive reports.

Next steps

Listen to the full conversation here to discover how SHI’s AI & Cyber Labs can help your organization prototype and test AI solutions before committing to massive infrastructure changes.

You can also find episodes of the Innovation Heroes podcast on SHI’s Resource Hub, Spotify, and other major podcast platforms, as well as on YouTube in video format.
