Network & Security engineering is not immune to what AI is doing to every technical discipline. That point is still treated as background noise in a lot of industry conversations, framed as something that affects other roles, not ours. It's a mistake. The role is changing now (to be frank, it has been for some time), and the teams that have started adapting are already working differently to those that haven't.

This post covers the daily workflow changes, the staffing reality, custom tooling, the security picture, and where encryption standards are heading. Some of it applies to how you run a team. Some of it applies to how you manage your career. All of it is happening now.

THE DAILY WORKFLOW IS ALREADY DIFFERENT

AI is cutting the time that repeatable tasks consume. Configuration drafting, change documentation, troubleshooting triage, log analysis: all faster. It produces a usable first pass that an engineer can validate, correct, and act on rather than starting from blank.

The more significant shift is in diagnostics. A client calls to say users at their London office are dropping off the wireless network intermittently. The traditional process: log into the wireless controller or cloud dashboard, pull event logs, check RF metrics, review client roaming data, check the upstream switch port, check DHCP lease behaviour, and build a picture manually. That process depends on knowing where to look, and it takes time even when you do.

With AI layered over network APIs, you put the question differently. Describe the symptom, name the site, give a time window. The tooling queries the relevant data sources, correlates across the stack, and returns a structured response: roaming failure counts, signal levels at the point of drop, AP load, adjacent AP signal comparisons, client event patterns. The AI doesn't always identify the correct cause, but it surfaces a shortlist of likely candidates faster than manual triage, and it considers the full environment rather than the single log file you happened to open first.

That changes the shape of an investigation. Instead of spending the first twenty minutes working out where to look, you're spending twenty minutes validating a hypothesis.

THE STAFFING TRUTH: THIS ONE HURTS

AI and automation are reducing the number of engineers needed to run the same network. This is not a coming risk. It's happening now, and teams that ignore it are making a planning error.

We saw one networking team go from eight engineers to five over two years. No redundancies. Three roles weren't backfilled when engineers moved on. The five remaining engineers, upskilled in automation tooling and AI-assisted workflows, are running the same estate. The tasks that had previously required eight people were largely repeatable: pulling reports, executing standard changes, reviewing dashboards, producing documentation. The remaining engineers are spending their time on architecture decisions, complex fault investigation, and work that still requires human judgement.

For IT managers, that's a real opportunity. Leaner teams delivering the same output is achievable if those teams have the right tools and time to learn them properly. For engineers: identify which parts of your current role are automatable and which are not. The automatable parts are under pressure. The rest is where your value sits, and it's worth being deliberate about developing it.

BUILDING YOUR OWN INTELLIGENCE LAYER

Off-the-shelf vendor AI tools are useful. The engineers getting the most out of this are also building their own integrations, and the threshold for doing that has dropped considerably.

The enabling technology is MCP: Model Context Protocol, a standard that lets AI assistants connect directly to external APIs and data sources. Wrap a vendor API in a custom MCP server and you can query your network infrastructure in plain language without logging into dashboards or knowing in advance which report to run.

We've been building this with Juniper Mist.

Mist has a well-documented REST API. We built a lightweight MCP server around it that allows an AI assistant to query client event data, AP performance metrics, roaming statistics, and site health in a single exchange. A client reports users dropping at a specific site. Instead of pulling five separate views in the Mist dashboard, we describe the symptom and ask the question. The assistant draws on roaming event logs, signal data, AP health, and client behaviour patterns together and returns a correlated picture.
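The correlation step can be sketched in plain Python. This is an illustrative stand-in, not the real Mist event schema: the field names, event types, and the -72 dBm threshold are assumptions chosen for the example, so check the actual API payloads before reusing any of it.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class ClientEvent:
    """One wireless client event, e.g. from a roaming/disconnect log.
    Field names here are illustrative, not the vendor's schema."""
    client_mac: str
    ap_name: str
    event_type: str   # e.g. "roam_fail", "disconnect", "assoc"
    rssi_dbm: int     # client signal level when the event fired

def summarise_drops(events, rssi_floor=-72):
    """Correlate drop events into the kind of shortlist the assistant
    returns: failure counts per AP, plus how many drops happened at
    weak signal (suggesting coverage rather than config)."""
    drops = [e for e in events if e.event_type in ("roam_fail", "disconnect")]
    per_ap = Counter(e.ap_name for e in drops)
    weak_signal = sum(1 for e in drops if e.rssi_dbm < rssi_floor)
    return {
        "drops_per_ap": dict(per_ap.most_common()),
        "weak_signal_drops": weak_signal,
        "total_drops": len(drops),
    }
```

The point of the shape: one function takes raw events from several API calls and returns a single correlated summary, which is exactly what you want an MCP tool to hand back to the assistant.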

The value isn't just speed. It's being able to start from a vague symptom and get something structured back. "Clients are dropping on the second floor" is a vague description. A manual investigation requires you to already know which metrics to pull and in which order. Starting from the symptom and working backwards is a different kind of diagnostic, and it often surfaces correlations that manual triage would find much later or miss entirely.

The AI isn't always right. That matters: treat the output as a hypothesis, not a conclusion. But a shortlist of likely causes with supporting data is a much better starting point than a blank screen.

Building this isn't as complex as it sounds. If a vendor has a REST API and you have an engineer who can write Python, you can build a useful MCP server in a day. Most major wireless, switching, and firewall platforms have documented APIs. The tooling exists. The question is whether teams give themselves time to build with it.
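The "day's work" is mostly thin wrappers like the one below. The base URL, endpoint path, and auth header here are hypothetical placeholders, not any vendor's real API; the `fetch` parameter is injectable so the logic can be exercised without a live network. Functions of this shape are what you'd then register as tools with an MCP server framework.

```python
import json
import urllib.request

# Hypothetical endpoint shape — substitute the real base URL and auth
# scheme from your vendor's API documentation.
BASE_URL = "https://api.example-vendor.com/v1"

def get_json(path, token, fetch=None):
    """GET a vendor API path and return parsed JSON. `fetch` is
    injectable so the parsing logic is testable without a live API."""
    url = f"{BASE_URL}{path}"
    if fetch is None:
        req = urllib.request.Request(
            url, headers={"Authorization": f"Token {token}"})
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)
    return json.loads(fetch(url))

def site_health(site_id, token, fetch=None):
    """The kind of function you would expose as an MCP tool: one
    plain-language question backed by one API call."""
    data = get_json(f"/sites/{site_id}/health", token, fetch=fetch)
    return {"site": site_id, "status": data.get("status", "unknown")}
```

Each tool stays small and single-purpose; the assistant does the orchestration across them, which is where the correlation value comes from.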

SECURITY AND THE THREAT LANDSCAPE

AI has simultaneously made attacks easier to execute and made defensive tooling more capable. Both sides of that equation are moving fast.

[Illustration: AI-powered cyber attacks (left) versus AI-assisted threat detection and defence (right)]

On the attack side: the barrier to convincing social engineering has dropped sharply. AI-generated phishing campaigns are more targeted, contextually accurate, and grammatically clean than anything that was operationally viable two years ago. Vulnerability exploitation is faster. The window between public disclosure and active exploitation of network perimeter devices has compressed in a way that standard patching cycles don't accommodate. VPN concentrators, firewalls, and edge routers are high-value targets, and they're being probed continuously.

The "perimeter is dead, everything is cloud-managed" argument was always overstated. The organisations that bought it fully have had some difficult conversations. Zero trust architecture is the right direction: strict access control, microsegmentation, logging that gets reviewed. But it doesn't replace well-configured perimeter controls. It layers on top of them. Network engineers who understand both the perimeter and the zero trust model are in a better position than those who know only one side.

On the defensive side: AI-assisted threat detection is catching anomalies that rule-based SIEM configurations miss. Unusual lateral movement patterns, unexpected device-to-device communications, authentication sequences that deviate from baseline: these signals correlate across log sources faster with AI analysis than with a human working the SIEM manually. If your organisation isn't using some form of AI-assisted log analysis, the volume of data you're not reviewing is a real gap.
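The core idea behind baseline deviation can be shown in a few lines. This is a deliberately crude stand-in for what production AI-assisted analysis does across many correlated sources: flag values that sit far outside a metric's own history, here using a simple z-score on hourly counts.

```python
import statistics

def flag_anomalies(counts, threshold=3.0):
    """Flag indices where a metric (e.g. failed logins per hour)
    deviates from its own baseline by more than `threshold` standard
    deviations. Real tooling correlates many such signals at once."""
    mean = statistics.fmean(counts)
    stdev = statistics.pstdev(counts)
    if stdev == 0:
        return []  # a flat baseline has nothing to flag
    return [i for i, c in enumerate(counts)
            if abs(c - mean) / stdev > threshold]
```

A rule-based SIEM alert fires on a fixed number; this fires on deviation from observed behaviour, which is why it catches things a static threshold misses.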

If this interests you, we strongly recommend the talk below by Nicholas Carlini of Anthropic, which goes deeper into what we've just covered.

ENCRYPTION STANDARDS ARE MOVING

TLS 1.0 and 1.1 are gone from any environment that takes security seriously. TLS 1.2 is under pressure. TLS 1.3 should be your current baseline, and if you haven't audited what's still running older cipher suites on your network, that audit is overdue. It's not a difficult audit to run; it just gets deprioritised until something forces it.
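One small place the audit can start is your own tooling: Python's `ssl` module lets you both enforce a TLS 1.3 floor on outbound connections and probe what your endpoints actually negotiate. The probe function below opens a real connection, so point it only at hosts you own.

```python
import socket
import ssl

# Enforce TLS 1.3 as the floor for connections made from this tooling.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3

def negotiated_version(host, port=443, timeout=5):
    """Connect and report the TLS version a server actually negotiates.
    Raises ssl.SSLError if the server can't meet the TLS 1.3 floor —
    which is exactly the finding an audit sweep is looking for."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()
```

Looping `negotiated_version` over an inventory of your public endpoints turns the overdue audit into an afternoon's scripting.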

Post-quantum cryptography has moved from long-horizon concern to near-term planning requirement. NIST finalised its first post-quantum cryptographic standards in August 2024: ML-KEM (formerly CRYSTALS-Kyber), ML-DSA (formerly CRYSTALS-Dilithium), and SLH-DSA. Major vendors are building support into product roadmaps, and UK and US government agencies have issued migration guidance. This isn't speculative.

The reason it matters now rather than when quantum computing matures: an adversary can intercept and store encrypted traffic today, then decrypt it once the capability exists. Any organisation carrying data with a long sensitivity lifecycle (healthcare, legal, financial, government) needs to account for this in its current encryption strategy. Waiting until the threat is demonstrated is too late.
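This harvest-now-decrypt-later risk is often framed as Mosca's inequality: if the years your data must stay confidential plus the years a migration takes exceed the years until a cryptographically relevant quantum computer exists, traffic captured today is already at risk. The numbers in the usage below are illustrative placeholders, not predictions.

```python
def harvest_risk(shelf_life_years, migration_years, quantum_eta_years):
    """Mosca's inequality: data harvested now is at risk if the time it
    must stay secret plus the time needed to migrate exceeds the time
    until a cryptographically relevant quantum computer exists."""
    return shelf_life_years + migration_years > quantum_eta_years
```

Plug in your own estimates: records with a 25-year confidentiality requirement fail the test under almost any plausible quantum timeline, which is why the migration planning starts now.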

WHAT THIS MEANS IF YOU MANAGE A TEAM

AI tooling gives you two choices: run leaner with the same output, or produce more with your existing headcount. Both are legitimate depending on what your organisation needs. The mistake is treating AI adoption as something your engineers figure out on their own time.

Teams making progress are doing it deliberately: blocking time for experimentation, evaluating vendor AI features against specific use cases, and building or adapting tooling for their own environment. Vendor AI features are worth evaluating on practical grounds. Juniper's Marvis engine, Cisco's AI-driven analytics, Aruba AI Insights: the question isn't whether they're impressive but whether they reduce the time your team spends on a specific task and whether the time saving justifies the licensing cost. Run a real evaluation with a defined test case.

On headcount: decisions to reduce team size on the basis of AI productivity gains need to come from evidence, not expectation. Build the tooling, prove the output over time, then make staffing decisions from what you've observed. Teams that skip that sequence tend to create gaps they then fill expensively.

WHAT THIS MEANS IF YOU ARE AN ENGINEER

The engineers who are well-positioned are doing things AI can't do: making judgement calls in ambiguous situations, managing client relationships, designing architecture for a specific environment, and recognising when a tool's output is wrong.

The engineers who are under pressure are those whose primary output is tasks that are now automatable: generating configurations from templates, pulling standard reports, writing change requests from scratch, summarising dashboard views. It's not a comfortable point, but it's accurate.

The practical move: learn to code at a functional level. Enough to script, automate, and build integrations with vendor APIs. Python is the most transferable choice in this space. Engineers who can write a working API integration, parse its output, and build something useful from it are using AI as a force multiplier. Engineers who can't are in a more exposed position as the tools improve.
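The bar being described is roughly this: call an API, parse its output, return something useful. The payload shape below is hypothetical, invented for the example, but the pattern transfers to any vendor API that returns JSON.

```python
import json

def top_talkers(payload, n=3):
    """Parse a (hypothetical) traffic-report JSON payload and return
    the n clients moving the most data — the kind of small integration
    that turns a vendor API into a daily tool."""
    clients = json.loads(payload)["clients"]
    ranked = sorted(clients, key=lambda c: c["bytes"], reverse=True)
    return [c["mac"] for c in ranked[:n]]
```

An engineer who can write and trust a function like this can hand it to an AI assistant as a building block; one who can't is limited to whatever the vendor dashboard chooses to show.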

You don't need to be exceptional at it, just competent enough to build the things that make your work faster and to understand what's technically possible. That bar is lower than most engineers assume, and the return comes quickly.

HOW WE THINK ABOUT IT AT RUPE NETWORKS

We're a relatively small network and security team doing project and subcontract work for MSPs and direct clients across the UK. AI isn't a strategic initiative for us. It's part of how we work day-to-day, because the alternative is working slower and charging more for the same output.

The Juniper Mist MCP integration is a concrete example. Wireless troubleshooting has always involved a lot of manual correlation: pulling separate views of client events, AP performance, and RF data to build a picture of what's actually happening. Wrapping the Mist API in a custom MCP server and querying it with an AI assistant gets us to a working hypothesis faster. We treat the output critically: it's a starting point, not an answer. Arriving at three likely candidates with supporting evidence is a better place to begin than a blank dashboard.

We use AI for documentation, configuration drafting, and working through vendor-specific behaviour that used to mean an hour on a knowledge base or reading through firmware release notes. We build custom tooling where vendor tools don't cover what we need.

The result is less time on tasks that don't require engineering judgement and more time on the ones that do. Smaller teams, better tooling, covering more ground with more confidence: that's what modern engineering looks like in practice, and it's the direction this discipline is moving whether individual teams are ready for it or not.

If you want to talk through how any of this applies to your environment or team structure, get in touch.