• HPE chief AI officer Bob Friday says agentic AI is changing how networks are run and maintained.
• Agentic AI moves tasks from scripted automation to autonomous decision‑making, speeding remediation but raising new risks.
• Developers are shifting from writing procedures to building guardrails, observability hooks, and feedback loops.
• Organizations must balance efficiency gains with safety, governance, and upskilling to avoid costly mistakes.

What Bob Friday means by “agentic AI”

Bob Friday, HPE’s chief AI officer, has highlighted a clear distinction between traditional automation and what he calls agentic AI. Where automation executes predefined scripts, agentic AI refers to systems or agents that can set goals, make decisions and act across systems with less direct human orchestration. This shift matters because it changes who (or what) makes operational choices inside the network.

How network operations are changing

Agentic approaches can dramatically speed routine work: diagnosing outages, applying fixes, reconfiguring paths or scaling resources without waiting for manual approval. That creates clear efficiency and availability benefits for operations teams. At the same time, handing more autonomy to agents raises new risks — unintended actions, cascading changes, and harder‑to‑explain decision paths — that traditional automation did not present.

Early adopters are already rethinking runbooks, moving from static playbooks to dynamic policies that agents can interpret. Observability and real‑time telemetry become far more important; operators need richer, higher‑fidelity signals to verify agent behavior and to roll back or intervene quickly when something goes wrong.
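
One way to picture a dynamic policy that an agent interprets, as opposed to a fixed script: express the constraints as data, check a proposed action against them before acting, and verify telemetry afterward with a rollback path. The sketch below is purely illustrative; the names (ChangePolicy, change_allowed, the error-rate threshold) are hypothetical and not part of any HPE or vendor API.

```python
from dataclasses import dataclass
from datetime import datetime, time


@dataclass
class ChangePolicy:
    """A dynamic policy the agent interprets, instead of a fixed runbook step."""
    maintenance_start: time        # changes allowed only inside this window
    maintenance_end: time
    max_devices_per_change: int    # blast-radius limit
    max_error_rate_after: float    # telemetry threshold that triggers rollback


def change_allowed(policy: ChangePolicy, device_count: int, now: datetime) -> bool:
    """Check a proposed agent action against the policy before executing it."""
    in_window = policy.maintenance_start <= now.time() <= policy.maintenance_end
    within_blast_radius = device_count <= policy.max_devices_per_change
    return in_window and within_blast_radius


def verify_or_rollback(policy: ChangePolicy, post_change_error_rate: float,
                       rollback) -> bool:
    """After the change, verify telemetry; roll back if the signal degrades."""
    if post_change_error_rate > policy.max_error_rate_after:
        rollback()
        return False
    return True


# Example: a reroute touching 3 devices at 02:30, verified against live telemetry.
policy = ChangePolicy(time(1, 0), time(5, 0), max_devices_per_change=5,
                      max_error_rate_after=0.01)
if change_allowed(policy, device_count=3, now=datetime(2025, 1, 1, 2, 30)):
    # the actual change would be applied here, then checked against telemetry
    verify_or_rollback(policy, post_change_error_rate=0.002,
                       rollback=lambda: print("rolling back"))
```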

How developers are adapting

Developers working on network and ops tooling are changing their focus. Instead of only coding task automation, they are now:

  • Designing intent‑driven APIs that let agents express goals rather than low‑level steps.
  • Building robust guardrails, constraint engines and safety checks to prevent harmful agent actions.
  • Instrumenting systems with better telemetry and traceability so agent decisions can be audited.
  • Creating feedback loops and human‑in‑the‑loop workflows so operators can validate and correct agent behavior (see the sketch after this list).
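
A rough sketch of how these pieces might fit together, assuming hypothetical names (Intent, evaluate_intent, AUDIT_LOG) rather than any real product API: an agent submits a goal instead of low-level steps, guardrails screen it, every decision is written to an audit trail, and high-risk intents are routed to a human.

```python
import json
from dataclasses import dataclass, field
from datetime import datetime, timezone

AUDIT_LOG = []  # in practice this would be durable, append-only storage


@dataclass
class Intent:
    """A goal-level request from an agent, not a list of low-level steps."""
    goal: str                  # e.g. "restore throughput on edge-site-12"
    risk: str                  # "low" | "medium" | "high", set by a scoring step
    proposed_actions: list = field(default_factory=list)


def evaluate_intent(intent: Intent) -> str:
    """Apply guardrails, record an audit entry, and decide the execution path."""
    if intent.risk == "high":
        decision = "needs_human_approval"      # human-in-the-loop gate
    elif any("delete" in action for action in intent.proposed_actions):
        decision = "blocked_by_guardrail"      # simple constraint check
    else:
        decision = "auto_execute"

    AUDIT_LOG.append(json.dumps({              # traceability for every decision
        "ts": datetime.now(timezone.utc).isoformat(),
        "goal": intent.goal,
        "risk": intent.risk,
        "decision": decision,
    }))
    return decision


# Example: a medium-risk intent runs automatically; a high-risk one waits for an operator.
print(evaluate_intent(Intent("restore throughput on edge-site-12", "medium",
                             ["reroute traffic", "scale replicas"])))
print(evaluate_intent(Intent("rebuild core routing table", "high",
                             ["push new config to all core routers"])))
```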

This shift requires new skills — more emphasis on model‑aware engineering, observability design, policy enforcement and incident simulation — rather than just scripting and integration.

Risks, governance and practical next steps

The upside is faster remediation and fewer manual errors. The downside is a higher potential for systemic errors if agent behaviors aren’t tightly governed. Friday’s message implies organizations should treat agentic AI as a platform change: introduce strong governance, run progressive rollouts, and invest in retraining teams.

Practical steps include testing agents in controlled environments, instrumenting every action with explainability metadata, enforcing rollback and rate limits, and adopting clear policies for escalation. For operators and developers, the race is not just to automate more, but to build safe, auditable systems where agents improve outcomes without replacing human oversight.
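
To illustrate the rate-limit and escalation idea (again with hypothetical names and assumed values, not a specific tool): each agent action carries explainability metadata, changes are capped per sliding window, and anything over the budget is handed to an operator instead of being applied.

```python
import time
from collections import deque

CHANGE_BUDGET = 3          # max agent-initiated changes per window (assumed value)
WINDOW_SECONDS = 600       # 10-minute sliding window (assumed value)
_recent_changes = deque()  # timestamps of recent changes


def execute_with_governance(action_name: str, reason: str, apply_fn, escalate_fn):
    """Attach an explanation, enforce the rate limit, escalate when it is exceeded."""
    now = time.time()
    # Drop timestamps that have aged out of the sliding window.
    while _recent_changes and now - _recent_changes[0] > WINDOW_SECONDS:
        _recent_changes.popleft()

    record = {"action": action_name, "reason": reason, "ts": now}  # explainability metadata

    if len(_recent_changes) >= CHANGE_BUDGET:
        escalate_fn(record)      # hand off to a human instead of acting
        return "escalated"

    _recent_changes.append(now)
    apply_fn(record)             # the actual network change would happen here
    return "applied"


# Example: the fourth change inside the window goes to an operator, not the network.
for i in range(4):
    execute_with_governance(
        f"reroute-{i}", "link utilization above 90%",
        apply_fn=lambda r: print("applied:", r["action"]),
        escalate_fn=lambda r: print("escalated to operator:", r["action"]),
    )
```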

The core takeaway: agentic AI is not just an efficiency play — it’s a structural change in how network operations are designed and staffed. Companies that adapt tools, processes and skills now will gain the most; those that treat it as incremental automation risk being blindsided by faster, more autonomous competitors.

Image reference: https://www.informationweek.com/machine-learning-ai/hpe-chief-ai-officer-on-the-line-between-ai-and-automation