The first two articles in this series established the foundation: intent-based networking as a discipline and the three-layer data model that makes it practical — business requirements, design intents and a structured Source of Truth. We used ACME Investments, a financial services firm, as a worked example, showing how a business requirement like “trading systems must be isolated from corporate users” travels through design intent into annotated YAML and eventually into device configuration generated by Ansible and verified by Batfish.

In this final article, I want to explore what becomes possible once that foundation is in place. Specifically: automated design verification — the ability to assert, on every proposed change, that the entire network still satisfies every stated intent — and intent-driven network generation — the ability to add a new site to the network by declaring what you want, not by writing configuration. Together, these capabilities point toward a network that can provision itself, verify itself and ultimately heal itself.

This is not a distant aspiration. With the tooling available today, it is an achievable near-term state. The organisations that get there first will have a structural operational advantage that compounds over time.

Repository for this article: https://github.com/ppklau/network_automation/tree/main/learning/intent_based_networking


The verification problem at scale

Consider what happens when ACME’s network grows. More leaf pairs. More branch offices. More security zones. More routing policies. The number of design intents stays roughly constant — they reflect architectural decisions, not device count — but the number of instances of each intent grows linearly with the infrastructure.

With a manual approach, this creates a scaling problem. Every change must be reviewed against every relevant intent. An engineer making a routing change must simultaneously reason about reachability, security policy, management plane access and logging configuration. The cognitive load grows with the network. Review processes get longer. Change windows get larger. Risk accumulates.

With an intent-based approach, this problem inverts. The verification work is done by machines, running on every proposed change, faster than any human review process. The engineer’s cognitive load stays constant — they declare what they want and the system tells them whether it is consistent with everything else.

The verification layer has two components, which are complementary rather than redundant.


Layer 1: SoT intent verification

The first verification layer runs directly on the Source of Truth, before any configuration is rendered. It is a Python script — verify_intents.py — that reads design_intents.yml and nodes.yml and asserts that the data model satisfies every intent.

This is fast — it runs in under a second — and it catches structural violations that would otherwise surface much later in the pipeline, or worse, in production. Here is what the output looks like when run against ACME’s full node inventory:

Loaded 10 node(s) from nodes.yml
Running 12 intent checks...

  [PASS] INTENT-TOPO-01: Spine-leaf fabric exists
  [PASS] INTENT-TOPO-02: MLAG on all leaf pairs
  [PASS] INTENT-TOPO-03: VXLAN VNI=VLAN on all leaves
  [PASS] INTENT-RTG-01: eBGP underlay, unique ASNs
  [PASS] INTENT-RTG-02: eBGP EVPN enabled on all nodes
  [PASS] INTENT-RTG-03: OSPF area 0 at branch sites
  [PASS] INTENT-SEG-01: VRF per zone, no cross-zone leakage
  [PASS] INTENT-SEG-02: ACLs: deny-default, comments, no any
  [PASS] INTENT-SEG-03: DMZ VLANs only in DMZ VRF
  [PASS] INTENT-MGMT-01: OOB management VRF on all nodes
  [PASS] INTENT-MGMT-02: Syslog x2 + SNMPv3 on all nodes
  [PASS] INTENT-IP-01: All IPs within declared zone prefix

Results: 12 passed, 0 failed out of 12

The checks are not superficial. INTENT-SEG-02 inspects every ACL entry on every device and asserts that no permit-any rule exists, that every entry has a comment containing a requirement ID and that the default action is deny. INTENT-RTG-01 checks that every fabric node has a unique BGP ASN — a duplicate ASN is a routing failure waiting to happen and this check catches it before the configuration is ever rendered. INTENT-IP-01 verifies that every interface address falls within the declared zone prefix for its site — ensuring that IP addressing is consistent with the design intent regardless of how many sites have been added.
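A check of this shape is only a few lines of Python. The sketch below is illustrative rather than the actual verify_intents.py, and the node and ACL field names are assumptions about the SoT schema, not the repository's real structure:

```python
import re

def check_intent_seg_02(nodes):
    """INTENT-SEG-02 (sketch): every ACL entry carries a comment with a
    requirement ID, no entry permits 'any', and every ACL ends in an
    explicit deny. Returns a list of failure messages (empty = pass)."""
    failures = []
    req_id = re.compile(r"REQ-[A-Z]+-\d+")
    for name, node in nodes.items():
        for acl in node.get("acls", []):
            entries = acl.get("entries", [])
            for e in entries:
                if e.get("action") == "permit" and e.get("source") == "any":
                    failures.append(f"{name}/{acl['name']}: permit-any entry")
                if not req_id.search(e.get("comment", "")):
                    failures.append(f"{name}/{acl['name']}: entry missing requirement ID")
            if not entries or entries[-1].get("action") != "deny":
                failures.append(f"{name}/{acl['name']}: no default-deny")
    return failures
```

Because the check returns structured failures rather than raising on the first one, a single run reports every violation on every device at once.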

The output is also emitted as JUnit XML, which GitLab CI consumes natively. Every intent check appears as a named test case in the pipeline UI. A failed check blocks the merge request — just as a failing unit test blocks a software release.
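Emitting that XML needs nothing beyond the standard library. A minimal sketch of the idea (not the article's actual emitter), where `results` maps an intent ID to its list of failure messages:

```python
import xml.etree.ElementTree as ET

def to_junit_xml(results):
    """Render intent-check results as JUnit XML for GitLab CI.
    An empty failure list means the check passed."""
    suite = ET.Element("testsuite", name="intent-verification",
                       tests=str(len(results)),
                       failures=str(sum(1 for f in results.values() if f)))
    for intent_id, failures in results.items():
        case = ET.SubElement(suite, "testcase", classname="sot", name=intent_id)
        if failures:
            fail = ET.SubElement(case, "failure",
                                 message=f"{len(failures)} violation(s)")
            fail.text = "\n".join(failures)
    return ET.tostring(suite, encoding="unicode")
```

GitLab picks the file up via a `junit` report artifact, so each intent ID appears as a named test case in the merge request UI.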

The checks themselves are written in plain Python and are straightforward to extend. Adding a new design intent means adding a new check function and registering it in the intent registry. The schema and the tests grow together.
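The registry can be as simple as a decorator that collects check functions for the runner. This is a sketch of the pattern, with an assumed node schema, not the script's actual internals:

```python
INTENT_CHECKS = []

def intent_check(intent_id, description):
    """Register a check function so the runner discovers it automatically."""
    def register(fn):
        INTENT_CHECKS.append((intent_id, description, fn))
        return fn
    return register

@intent_check("INTENT-RTG-01", "eBGP underlay, unique ASNs")
def check_unique_asns(nodes):
    """Fail if two fabric nodes share a BGP ASN."""
    seen, failures = {}, []
    for name, node in nodes.items():
        asn = node.get("bgp", {}).get("asn")
        if asn is None:
            continue
        if asn in seen:
            failures.append(f"{name}: ASN {asn} already used by {seen[asn]}")
        else:
            seen[asn] = name
    return failures

def run_all(nodes):
    """Run every registered check; print one line per intent and
    return (passed, failed) counts."""
    passed = failed = 0
    for intent_id, desc, fn in INTENT_CHECKS:
        failures = fn(nodes)
        print(f"  [{'PASS' if not failures else 'FAIL'}] {intent_id}: {desc}")
        passed += not failures
        failed += bool(failures)
    return passed, failed
```

Adding a new intent is then one decorated function; the runner, the console output and the JUnit report all pick it up without further wiring.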


Layer 2: Batfish behavioural validation

The second verification layer runs on the rendered configurations, after the Ansible pipeline has generated them but before they are deployed. Batfish models the entire network — every device, every protocol, every policy — and answers questions about network behaviour that cannot be answered from the data model alone.

Where the SoT verification checks structural consistency, Batfish checks behavioural correctness. Some examples of what this means in practice for ACME:

Reachability assertions: Can a host in the trading zone reach a host in the DMZ zone without traversing a firewall? The answer should always be no. Batfish can assert this deterministically across every possible path in the modelled network.

Routing correctness: Does every leaf node have a valid path to every other leaf node via the spine layer? Are there any routing black holes introduced by the proposed change? Are all eBGP sessions correctly configured to reach Established state?

Policy compliance: Does any ACL on any device permit traffic that should be blocked by INTENT-SEG-02? Are all management-plane interfaces bound to VRF MGMT?

Change impact analysis: Given the proposed configuration change, what is the set of forwarding paths that will change? This is the blast radius analysis that makes automated change safe.
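In practice these assertions come down to a few pybatfish calls plus some filtering of the answer tables. The sketch below separates the pure filtering, which runs anywhere, from the session setup, which needs pybatfish and a running Batfish service; the host, network name and snapshot path are illustrative assumptions, not the article's exact CI job:

```python
def failed_bgp_sessions(rows):
    """Given bgpSessionStatus rows as plain dicts (e.g. from
    answer().frame().to_dict('records')), return every session that
    did not reach Established state."""
    return [r for r in rows if r.get("Established_Status") != "ESTABLISHED"]

def validate_candidate(snapshot_dir="rendered_configs/"):
    """Load the rendered configs into Batfish and assert the eBGP underlay
    comes up. Requires a running Batfish service (e.g. the batfish/allinone
    container); the values below are assumptions for illustration."""
    from pybatfish.client.session import Session
    bf = Session(host="localhost")
    bf.set_network("acme")
    bf.init_snapshot(snapshot_dir, name="candidate", overwrite=True)
    rows = bf.q.bgpSessionStatus().answer().frame().to_dict("records")
    bad = failed_bgp_sessions(rows)
    assert not bad, f"{len(bad)} BGP session(s) not Established"
```

Reachability and policy assertions follow the same shape: ask a question, flatten the answer to rows, assert a property over every row.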

These checks run in a Docker container as part of the GitLab CI pipeline, emitting JUnit XML that integrates with the same pipeline UI as the SoT verification. The result is a pipeline that provides, on every merge request, a complete behavioural validation of the proposed network state.

The combination of the two layers — SoT verification catching structural issues in under a second, Batfish catching behavioural issues in a CI stage — means that by the time a human reviewer looks at a merge request, the automated system has already done the vast majority of the verification work. Human review focuses on what humans are genuinely good at: architectural judgement, business context and the questions that the automated checks cannot yet ask.


Intent-driven generation: adding a New York branch

Verification is one half of the automation story. The other half is generation — and this is where the model shifts from “we validate changes” to “we generate changes that are guaranteed to be valid.”

ACME wants to open a New York office. In a traditional workflow, a network engineer would design the site, allocate IPs, write the configuration, review it, test it and deploy it. The quality of the result depends entirely on the engineer’s familiarity with ACME’s design standards, security policies and management configuration. If the engineer is new, or under time pressure, or working late, corners get cut. Policies get inconsistently applied.

In an intent-based workflow, the engineer runs a single command:

python generate_branch.py \
  --site-id     nyc-branch1 \
  --location    "New York, US" \
  --prefix      10.2.0.0/16 \
  --router-ip   10.2.20.1 \
  --router-lo   10.2.254.1 \
  --switch-ip   10.2.0.11 \
  --switch-lo   10.2.254.11 \
  --verify

The generator produces two new nodes in nodes.yml — a WAN router and an access switch — and immediately runs the full intent verification suite against the updated file. The output:

Generating branch site: nyc-branch1 (New York, US)
  Site prefix : 10.2.0.0/16
  WAN router  : nyc-branch1-rtr01  lo=10.2.254.1
  Access sw   : nyc-branch1-sw01   lo=10.2.254.11
  Intents     : INTENT-RTG-03, INTENT-SEG-01,
                INTENT-MGMT-01, INTENT-MGMT-02, INTENT-IP-01

Written 10 nodes to nodes.yml
  Added: nyc-branch1-rtr01, nyc-branch1-sw01

Results: 12 passed, 0 failed out of 12

The generated nodes have OSPF Area 0 configured — because INTENT-RTG-03 mandates it. They have dual syslog servers pointing to the central collectors — because INTENT-MGMT-02 mandates it. They have SNMPv3 with SHA auth and AES128 privacy — because INTENT-MGMT-02 mandates it. They have deny-default ACLs with requirement-traced comments — because INTENT-SEG-02 mandates it. They have dual WAN uplinks — because REQ-NET-05 mandates it.

None of these are decisions the engineer has to remember to make. They are encoded in the generator, which derives them from the design intents. The engineer provides only the inputs that are genuinely site-specific: the hostname prefix, the IP prefix and the loopback addresses. Everything else follows from the intent.
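The core of such a generator is a merge of site-specific inputs with intent-derived defaults. A minimal sketch of that idea, in which the field names are an assumed SoT schema and the syslog addresses are placeholders rather than ACME's real collectors:

```python
import copy

# Defaults derived from the design intents; every generated node gets them.
INTENT_DEFAULTS = {
    "ospf": {"area": 0},                                  # INTENT-RTG-03
    "mgmt_vrf": "MGMT",                                   # INTENT-MGMT-01
    "syslog_servers": ["192.0.2.11", "192.0.2.12"],       # INTENT-MGMT-02
    "snmp": {"version": "v3", "auth": "sha", "priv": "aes128"},  # INTENT-MGMT-02
}

def make_branch_router(site_id, location, lo_ip, wan_ip):
    """Build a WAN-router entry for nodes.yml from the site-specific
    inputs; everything else is merged in from the intent defaults."""
    node = {
        "hostname": f"{site_id}-rtr01",
        "role": "wan-router",
        "location": location,
        "loopback": lo_ip,
        "wan_ip": wan_ip,
    }
    # Deep-copy so generated nodes never share mutable default structures.
    node.update(copy.deepcopy(INTENT_DEFAULTS))
    return node
```

The deep copy matters: without it, every generated node would share the same syslog and SNMP dictionaries, and a later edit to one node would silently change all of them.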

The generator also enforces constraints that prevent mistakes. If the engineer specifies a prefix that overlaps with an existing site’s management addresses, the generator exits with an error before writing anything. If the site ID already exists in the SoT, same result. These are the guardrails that make automation safe — not just fast.
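The overlap guardrail needs nothing beyond the standard library's ipaddress module. A sketch, assuming each existing node records its site prefix in a `site_prefix` field:

```python
import ipaddress

def check_new_site(site_id, prefix, nodes):
    """Guardrails run before anything is written: reject a duplicate
    site ID or a prefix that overlaps any existing site's prefix."""
    new_net = ipaddress.ip_network(prefix)
    for name, node in nodes.items():
        if name.startswith(site_id):
            raise ValueError(f"site ID {site_id!r} already exists ({name})")
        existing = node.get("site_prefix")
        if existing and new_net.overlaps(ipaddress.ip_network(existing)):
            raise ValueError(f"{prefix} overlaps {existing} ({name})")
```

Because the check raises before the first write, a failed run leaves nodes.yml untouched, so the SoT can never end up half-updated.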


What this means for the pipeline

With both verification and generation in place, the GitLab CI pipeline becomes the governance layer for the entire network. The flow for any change — whether it is a human editing the SoT directly or a generator creating new nodes — is identical:

[MR opened against main]
        │
        ▼
[yamllint: SoT syntax]          ← catches malformed YAML immediately
        │
        ▼
[verify_intents.py]             ← structural intent compliance, < 1 second
        │
        ▼
[Ansible: render configs]       ← generates device configs as artefacts
        │
        ▼
[Batfish: behavioural tests]    ← reachability, routing, policy, < 2 minutes
        │
        ▼
[hier-config: diff]             ← shows exactly what changes on each device
        │
        ▼
[MR approval gate]              ← human review of a pre-validated change
        │
        ▼
[NAPALM: deploy]                ← pushes only the diff, not a full replace
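Expressed as a GitLab CI skeleton, the flow above might look like the following. This is an illustrative sketch: the image names, script paths and artifact locations are assumptions, not the repository's actual pipeline file.

```yaml
stages: [lint, verify, render, validate, diff]

yamllint:
  stage: lint
  image: python:3.12-slim
  script: [pip install yamllint, yamllint sot/]

verify-intents:
  stage: verify
  image: python:3.12-slim
  script: [python verify_intents.py --junit report.xml]
  artifacts:
    reports: {junit: report.xml}

render-configs:
  stage: render
  image: python:3.12-slim
  script: [pip install ansible, ansible-playbook render.yml]
  artifacts:
    paths: [rendered_configs/]

batfish-validate:
  stage: validate
  image: batfish/allinone
  script: [python batfish_tests.py --snapshot rendered_configs/ --junit bf.xml]
  artifacts:
    reports: {junit: bf.xml}

config-diff:
  stage: diff
  image: python:3.12-slim
  script: [pip install hier-config, python config_diff.py]
```

The deploy stage sits behind the merge itself, gated by the approval, which is what makes the approval a genuine control point rather than a formality.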

The human reviewer at the approval gate is not being asked to verify correctness — the pipeline has done that. They are being asked to exercise architectural and business judgement. Is this the right change? Is this the right time? Does this align with the business direction? That is an appropriate use of human attention. Checking whether ACLs have a default-deny action is not.


The road to self-healing

Verification and generation are the foundation. Self-healing is the next layer and it follows naturally from the same principles.

Self-healing requires three things: the ability to observe actual network state, the ability to compare it against intended state and the ability to act when they diverge. In an intent-based model, the intended state is always available — it is the Source of Truth. The observation layer is Oxidized (for configuration backup and change detection), streaming telemetry via gNMI/gRPC, or simple SNMP polling. The action layer is the same Ansible pipeline that handles normal changes, triggered automatically rather than by a human.

When a device configuration drifts from the SoT — because someone made a manual change, because a device was replaced and not fully restored, because a bug caused an unexpected state — the system detects the drift, raises an alert and optionally generates a remediation change that restores conformance with the intent. The intent is the source of truth. The running configuration is not.
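The compare step is mechanically simple once both sides exist as text. A sketch using difflib; in the real loop the running config would come from Oxidized or a NAPALM getter rather than a string:

```python
import difflib

def config_drift(intended, running, hostname):
    """Return a unified diff between the SoT-rendered config and the
    running config, or an empty string when the device conforms."""
    diff = difflib.unified_diff(
        intended.splitlines(keepends=True),
        running.splitlines(keepends=True),
        fromfile=f"{hostname} (intended)",
        tofile=f"{hostname} (running)",
    )
    return "".join(diff)

def remediation_needed(intended, running, hostname):
    """Drift handler: alert on any divergence. The remediation change is
    simply 'restore intended', because the SoT is authoritative."""
    drift = config_drift(intended, running, hostname)
    if drift:
        print(f"DRIFT on {hostname}:\n{drift}")
    return bool(drift)
```

Note which direction the remediation runs: the running configuration is never reverse-imported into the SoT; conformance is always restored toward the intent.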

This is not a hypothetical capability. It is achievable with the open-source tooling described in this series, assembled into a closed loop. The maturity required is primarily organisational: the discipline to treat the SoT as authoritative, to resist the temptation of making “quick” CLI changes and to trust the automation.


Where AI enters the picture

There is a layer above self-healing that is just beginning to become practical: AI-driven closed loops. This is where the investment in structured, machine-readable intent pays its most significant dividends.

Consider what an AI system with access to the intent layer can do that was previously impossible:

Natural language to intent. A business stakeholder says “we need to connect our new algorithmic trading platform to the market data feeds with the lowest possible latency and strict isolation from other traffic.” An AI system that understands the intent schema can translate this into a draft INTENT-TOPO-* and INTENT-SEG-* entry, which an architect reviews and approves. The gap between business language and technical design closes.

Anomaly-driven intent refinement. The network is performing below its latency KPI. An AI system that can correlate telemetry data with the intent model can identify that the actual traffic path is three hops rather than the two mandated by INTENT-TOPO-01, diagnose the cause and propose a configuration change that restores conformance. This is reasoning at the intent layer, not the configuration layer.

Predictive impact analysis. Before a change is applied, an AI system can reason about the blast radius — not just whether the change passes the existing verification checks, but whether it introduces risk that the checks were not designed to catch. “This BGP route-map change is syntactically valid and passes all assertions, but it changes the preferred path for 47% of trading zone traffic. Are you sure?”

Self-generating intent. As the network grows and the business evolves, new patterns emerge that the original design intents did not anticipate. An AI system that analyses the gap between current intent coverage and observed network behaviour can propose new intents — and flag the gaps where the network is operating without any stated intent at all. Unintended behaviour is the most dangerous kind.

None of this is science fiction. The prerequisite is exactly what this series has described: a network whose intent is encoded in machine-readable, structured, version-controlled data. Without that foundation, AI has nothing meaningful to reason about. With it, the possibilities expand significantly.


The compounding advantage

There is an economic argument for this approach that is worth stating explicitly. The investment in intent-based networking is front-loaded: designing the intent schema, building the verification checks, establishing the CI/CD pipeline, training the team to work in the new model. These are real costs and they are incurred before any benefit is realised.

But the benefits compound. Every new site added via the generator costs a fraction of what a manually configured site costs. Every change validated by the pipeline carries a fraction of the risk of a manually reviewed change. Every compliance audit supported by the intent model requires a fraction of the engineering time of a retrospective documentation exercise. Every incident that the self-healing loop catches before it reaches production avoids costs that are genuinely hard to quantify — but not hard to imagine.

The organisations that adopt this model early do not just reduce their operational costs. They change the nature of their operational risk. Manual configuration has a risk profile that grows with network complexity. Intent-based configuration has a risk profile that is bounded by the quality of the intent model — and that quality improves over time, as intents are refined and verification checks are extended.


A practical path forward

For organisations considering this journey, the practical path is incremental. You do not need to re-architect your entire network to begin capturing value from intent-based practices.

Start with the intent layer. Write requirements.yml and design_intents.yml for your most critical infrastructure, even if you do not yet have the SoT or the pipeline to consume them. The act of writing structured intents surfaces assumptions, resolves ambiguities and creates a shared vocabulary for the team.

Add the Source of Truth for one domain — a single datacenter fabric, a set of branch offices. Build the Jinja2 templates for that domain. Run the configuration generation in dry-run mode alongside your existing process and compare the output with what you would have written manually.

Build the verification checks for the intents you have defined. Run them in your existing CI platform. Observe what they catch.

Expand incrementally. Each iteration extends the coverage of the intent model and deepens the automation. The pipeline becomes more capable. The team builds confidence in the automated verification. The organisation develops the discipline to treat the SoT as authoritative.

This is not a two-year transformation programme. With the right team and clear scope, meaningful capability can be demonstrated in weeks. The ACME example in this series — requirements, intents, SoT, verification and branch generation — represents a realistic proof of concept that can be built, demonstrated to leadership and used as the foundation for a broader adoption programme.


Closing: the network as a governed system

The vision that intent-based networking is moving toward is not a fully autonomous network that operates without human involvement. It is a network that operates as a governed system — where human decisions are made at the intent level, where machines handle everything below that level and where the boundary between human and machine responsibility is explicit, enforced and auditable.

In this model, network engineers are not diminished. Their expertise is elevated — from writing CLI commands to defining the architectural principles that govern a network that configures, verifies and heals itself. The craft does not disappear; it moves to a higher level of abstraction where it has more leverage.

The networks that will serve financial services, healthcare, critical infrastructure and the hyperscale platforms of the next decade cannot be operated by humans writing configuration line by line. The complexity is too great, the pace of change too fast, the cost of error too high. Intent-based networking is not one possible response to this reality. It is the necessary one.

The tools exist. The workflow is proven. The path is clear. The question is when, not whether.