EXCLUSIVE: Former Nike Tech and AI Leader Examines Risks & Benefits of AI Development

December 28th, 2025 6:24 AM

How do we navigate a world in which the explosion of artificial intelligence is rapidly changing the way commerce is done? MRC caught up with the former AI lead at global footwear company Nike to get some answers.

Elaine Barsoom, former Global Head of Tech and AI Innovation Partnerships at Nike and now an AI strategic advisor, spoke with MRC Business following her appearance on a panel at the 2025 JBiz Expo in Atlantic City, New Jersey, organized by the Orthodox Jewish Chamber of Commerce.

The informative discussion focused on the rapid acceleration of artificial intelligence across global business — and the growing tension between technological capability and human judgment. As warnings from industry leaders like Elon Musk raise concerns about AI’s long-term risks, Barsoom offered a measured perspective. The challenge, she said, is not whether AI is powerful, but how much decision-making humans are willing to hand over to it.

“With any technology — just like social media — we eventually have to put limits and governance in place,” Barsoom explained. “Right now, the real question is humans and AI in the loop. How much reliance should we place on AI for decisions that require human reasoning?”

AI as an Accelerator — Not a Replacement

Barsoom emphasized that AI delivers real benefits when used as an augmenting tool, not as a substitute for human thinking.

“You can’t depend on AI completely and assume it’s always accurate,” she told MRC Business. “Whether at an enterprise level or a personal level, people need to validate what’s being created. This is not something to replace people.”

Instead, Barsoom framed AI as a force multiplier:

It’s a tool — a technology to augment human creativity and judgment. Humans are still responsible for reasoning and decision-making at the end of the day.

MRC Business also examined recent trends in AI adoption and human reliance on these tools. A 2025 YouGov survey found that:  

  • 56% of U.S. adults now use AI tools, with nearly one-third using them weekly.
  • Among adults under 30, usage jumps to 76%, with half engaging weekly.

“That kind of adoption happened almost overnight,” Barsoom noted. “We haven’t historically given ourselves time to think through the long-term cognitive and behavioral effects of relying on a system to do our thinking for us.”

Where AI Helps — and Where It Can Harm

Barsoom pointed to education and everyday productivity as examples of AI used well.

“Before, if I wanted to learn something new, I had to enroll in a class or a university,” she explained. “Now I can say, ‘Teach me how to start a garden,’ and AI can walk me through the steps. That’s incredibly empowering.”

She added that automation can return time to people — especially parents and working professionals. “Simple things that used to take hours can now be done in minutes. That gives people back time for their families and the things they love.”

But the risk emerges when convenience replaces cognition.

“The biggest danger is overreliance,” Barsoom said. “When we substitute our creativity and judgment entirely with AI, we stop thinking. We stop reasoning. That’s when we become reactive instead of intentional.”

Data, Security, and the ‘Wild West’ Phase of AI

Beyond cognitive dependency, Barsoom flagged data security as one of the most underappreciated risks in AI’s current development cycle.

“Any industry that handles personal or sensitive data is at risk — healthcare, financial services, government, consumer businesses,” she told MRC Business. “We’re in the Wild West right now. There is data leakage.”

Industry data reinforces that concern. 

Multinational technology company IBM warned in a blog on AI threats that “lack of security threatens to expose data and AI models to breaches, the global average cost of which is a whopping USD 4.88 million in 2024.” In addition, IBM cautioned, “the data that helps train [large language models] is usually sourced by web crawlers scraping and collecting information from websites. This data is often obtained without users’ consent and might contain personally identifiable information (PII).” Barsoom confirmed to MRC Business that “any industry that uses PII or your personal identification is at risk.”

Software company Zylo noted in a company blog that its 2025 SaaS Management Index found 77.6 percent of IT leaders increased investment in SaaS tools for their AI capabilities, yet 51 percent of applications in the average enterprise portfolio still carry a “Poor” or “Low” risk score. As Zylo concluded, “These gaps put sensitive data and company operations at risk.”

“I can’t say with 100% confidence that we’re in a world where data is fully protected,” Barsoom acknowledged.

She encouraged users and organizations to take basic precautions — auditing data, adjusting AI training settings, and limiting exposure — but warned that visibility remains imperfect.

“What we don’t know,” she concluded, “is what we don’t know.”

The Bottom Line

For Barsoom, the future of AI is not about fear — it is about discipline.

“AI can elevate human potential,” she said. “But only if we stay accountable for judgment, ethics, and responsibility. Once we give that away, we don’t just risk bad outcomes — we risk losing what makes us human.”