How to Approach AI in 2025
For nearly seven decades, the field of Artificial Intelligence has alternated between heady breakthroughs and bitter disappointments. Twice, the momentum of early progress collapsed into so-called “AI winters,” when expectations outpaced technical reality and investment dried up. Today, in the wake of unprecedented advances in generative AI, it is legitimate to ask: are we on the verge of another chilling correction?
In recent months, the tone around AI has shifted. Reports highlight the massive energy demands of model training, with data centres straining local grids. Analysts warn of shortages of high-quality training data, raising the spectre of “model collapse” if synthetic content feeds back into models unchecked. Others point to the sobering fact that the overwhelming majority of corporate AI pilots never reach measurable return on investment. Commentators even whisper the familiar phrase: AI winter.
And yet, this moment is not one for abstention. It is not the time to stand aside. Rather, it is a time for realism, a time to weigh up the risks against the opportunities. If the last two years have been a rush into the “peak of inflated expectations” on Gartner’s famous hype cycle, then we are now sliding toward the “trough of disillusionment.” This is not an end, but an invitation: to discover what genuinely works, to separate signal from noise, and to embed AI where it creates lasting value.
What does that mean in practice? First, it means adopting a disciplined portfolio approach to AI investment. Not every pilot deserves to survive; kill projects that do not prove their worth within a few months, and double down on those that deliver measurable benefits. Second, it means addressing the new constraints head-on: treating energy and compute as scarce resources, securing licensed, human-origin data, and designing architectures that allow model substitution if costs or regulations shift. Third, it means governance. The EU AI Act, phasing in from 2025, will not be optional. Risk management, transparency, and accountability must be integral to AI initiatives, not afterthoughts.
Above all, it means keeping faith in progress without succumbing to hype. Setbacks don't have to be disasters. Sometimes a good approach was simply pursued too early. Maintain your positive curiosity. The history of AI teaches us that cycles are inevitable, but they need not be destructive. Each winter has been followed by renewed spring, with stronger foundations. By embracing a sober, risk-aware, and value-driven approach, corporations can continue to harness AI’s power even as public mood cools.
The call to action, then, is simple: do not step back—step forward wisely. The AI Risk & Readiness Checklist below may serve as a guidance. Experiment, but with discipline. Invest, but with governance. Build, but with an eye to resilience. If another winter comes, let it be one in which your organisation is not frozen, but already equipped with the tools, the practices, and the maturity to emerge stronger.
Many companies have already embarked on the long journey. But if you haven’t started yet, here is a 90-day plan for the practical next steps:
- Week 0–2: Create an AI risk register (compute/energy, data provenance, legal, security), take a broad view, map use-cases to EU AI Act tiers. (It’s about knowing the risks: digital-strategy.ec.europa.eu)
- Week 2–6: Prepare for pilot implementation(s). Stand up evaluation & observability (hallucination, latency, cost per task), and data provenance filters. (Here is a helpful deep dive: aclanthology.org)
- Week 6–12: Run two pilots with kill/scale gates (one productivity, one customer-facing with human-in-the-loop). Lock energy/compute forecasts and vendor exit options in contracts. (An in-depth survey on the status of AI: McKinsey & Company)
In any case, sitting still and doing nothing is not an option in these highly exciting times.
More References
- International Energy Agency. (2024). Electricity 2024: Analysis and forecast to 2026. Paris: IEA.
- Highlights the projected doubling of data centre electricity consumption by 2030, with AI workloads as a primary driver.
- Altman, S. (2024, January). Remarks at the World Economic Forum. Davos, Switzerland.
- OpenAI’s CEO warns that AI progress depends on breakthroughs in affordable, abundant energy (advanced nuclear and renewables).
- Villalobos, P., Chin, M., Jones, A., & Kaplan, J. (2023). Will we run out of data? Epoch AI.
- Forecasts exhaustion of high-quality human training data between 2026 and 2032; discusses risks of over-reliance on synthetic datasets.
- Shumailov, I., Cotton, C., & Andreeva, D. (2023). The curse of recursion: Training on generated data makes models forget. arXiv:2305.17493.
- Introduces the concept of “model collapse,” showing how recursive training on AI outputs degrades performance.
- Ransbotham, S., & Khodabandeh, S. (2023). Seizing the value of generative AI. MIT Sloan Management Review, 65(1), 1–10.
- Reports that ~95% of generative AI pilots do not achieve measurable ROI, highlighting the need for disciplined use-case selection.
- Gartner. (2024). Hype cycle for artificial intelligence, 2024. Stamford, CT: Gartner Research.
- Places generative AI at the “trough of disillusionment,” reflecting overinflated expectations and uneven results.
- European Union. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council on artificial intelligence (AI Act). Official Journal of the EU.
- Establishes phased obligations from 2025–2026, including governance, transparency, and accountability requirements.
A one-page AI Risk & Readiness Checklist (2025)
Here’s a one-page AI Risk & Readiness Checklist (aligned to EU AI Act, energy/compute realities, and enterprise ROI learnings).
1 Strategic Fit
- Clear problem statement & KPIs defined for each use-case
- Kill/scale gates in place (stop non-performing pilots quickly)
- Use-cases mapped to business impact & regulatory risk tiers
2 Compliance & Governance
- Use-case mapped to EU AI Act risk category (prohibited / high / limited / GPAI / minimal)
- Model register (internal inventory of all models in use) maintained
- Transparency & documentation (intended purpose, data sources, known risks) created
- Accountable executive owner assigned (budget + oversight)
- Red-teaming & bias/robustness testing scheduled regularly
3 Data Readiness
- Data provenance controls: human-origin data distinguished from AI-generated
- Eval sets insulated from training data; drift tracked
- Data retention & consent checks (GDPR/industry-specific) completed
- Synthetic data flagged & controlled to avoid model collapse
4 Compute & Energy
- GPU/compute TCO model built (training + inference + cooling + energy)
- Energy supply secured (PPAs, regional grid feasibility checked)
- Vendor exit options in contracts (ability to switch infra/models)
- Model architecture is right-sized (don’t overtrain for low-value tasks)
5 Delivery & Change
- Cross-functional team (business + data + compliance + IT) assigned
- Process redesign & change management funded, not just the model work
- Human-in-the-loop checkpoints where failure is costly (finance, claims, underwriting)
- Continuous observability: cost per task, hallucination/error rates, latency tracked
6 Security & Risk
- AI-specific security testing (prompt injection, data leakage, model poisoning) included
- Access & identity controls for AI services (integration with IAM/IGA)
- Incident logging/reporting pipeline connected to risk & compliance functions
- Regular audit trail of model use for regulators
7 Next 90 Days (Quick Wins)
- Map all pilots to EU AI Act categories
- Stand up evaluation dashboard (KPIs, cost, error rate, hallucinations)
- Launch two controlled pilots (1 productivity, 1 customer-facing with human oversight)
- Set up AI risk register under enterprise risk management
8 Additional advice
- Rule of thumb: If a pilot doesn’t prove value and show compliance-readiness within 3–6 months ? kill it.
- Focus on best-of-breed + governance instead of chasing hype.
No comments:
Post a Comment