GUEST OPINION: We should be living in the golden age of intelligence in enterprises.
Not only do we have the world’s knowledge at our fingertips, but we have AI to forecast demand, spot anomalies, resolve issues, and optimise resources faster than we ever dreamed. Organisations should be packed with humans freely exploring bold ideas, while machines handle the grind. But the opposite is happening.
The more instant answers we’re receiving, the fewer questions we’re asking.
In the age of prescriptive AI, where systems don’t just analyse data but tell us exactly what to do, discovery is quietly slipping away. Decision-making is becoming effortless, while curiosity is now expendable.
AI becomes the authority
Prescriptive AI is no longer just a recommendation engine. It’s fast becoming a proxy for authority. In complex enterprise environments, its outputs are increasingly treated as truth, not starting points.
A recent enterprise survey found 44 per cent of executives would overturn their own decision after seeing AI‑driven insights, and 74 per cent said they trust AI more than advice from friends and family when making business decisions.
This trust, unchecked, creates a subtle yet seismic shift, from augmentation to abdication. The consequences are most visible in environments like IT operations and cybersecurity, where real-time decision-making is vital. AI-driven monitoring tools surface incidents, assign priorities, and even trigger auto-remediation.
While prescriptive AI promises automation and speed, a report revealed 97 per cent of organisations that experienced AI-related breaches lacked proper AI access controls, and many had no governance framework in place.
Moreover, studies showed that while security AI and automation can reduce breach costs by up to $1.9 million, overdependence on machine outputs without human oversight can inadvertently create blind spots.
The issue isn’t with AI’s capability; it’s with our increasing reluctance to question it.
This shift toward machine-led certainty is creating a new kind of fragility in enterprise systems. As teams grow accustomed to AI explanations and preapproved solutions, the deeper instincts of engineering, such as troubleshooting, theorising, and experimenting, begin to atrophy. There is a measurable drop in critical decision-making and inquiry, exactly the kind of cognitive passivity that enterprise technologists must resist.
This is a glaring concern, because the most dangerous systems aren’t the ones that fail loudly; they’re the ones that seem to work while quietly narrowing our critical thinking.
A descent into complacency
Most of the current landscape of enterprise AI thrives on optimisation. However, optimisation is inherently conservative. This is why overreliance on prescriptive systems can unintentionally institutionalise complacency and mediocrity, with “good enough” silently becoming the ceiling.
Trained on 10 years of historical hiring data, Amazon’s AI recruiting tool penalised resumes that mentioned “women” and favoured male-coded language. It wasn’t malicious; it simply learned from the past. It took human curiosity and intervention to realise that this optimisation was reinforcing systemic bias and to shut it down.
In a world obsessed with acceleration, curiosity often feels inefficient. But efficiency without reflection is not real progress; it’s just going through the motions.
The deeper cost of all this is cultural. As AI systems handle more of the decision-making load, organisations begin to conflate speed with certainty, automation with intelligence, and compliance with alignment.
In such environments, curiosity can slowly start to look like a liability. But history tells us that the most important breakthroughs – the zero-trust model, blockchain, even generative AI itself – came from people who looked at working systems and still asked, “Why not something else?”
The responsibility to safeguard curiosity doesn’t rest with frontline engineers; it rests with leadership. If AI is to amplify human intelligence rather than replace it, leaders must build cultures where questions are valued as much as answers. Make it culturally safe to say “I don’t know”, because that’s where discovery starts.
Prescriptive AI will get faster, smarter, and more context-aware, but it will never match human curiosity. If we let curiosity fade, innovation won’t stop – it’ll just loop endlessly, optimising the past instead of inventing the future.
In a world of instant answers, the edge isn’t speed, it’s depth. The leaders of the next decade won’t be defined by how much they automate, but by how deeply they keep exploring.
