Most FortiGate issues that appear in production don't emerge out of nowhere. They trace back to something that was skipped, assumed, or left over from the initial staging process. An open policy left from testing. A logging destination that was never configured. An HA pair that's showing as synced in the dashboard but hasn't actually been failover-tested. These are the kinds of problems that surface at 11pm on a Tuesday when someone is trying to work out why half the users can't reach the internet.

This checklist is for anyone deploying a FortiGate firewall — whether it's a single-site FortiGate 60F or a multi-site SD-WAN rollout with 40-series and 100-series devices across distributed locations. Run through it before you hand over and you'll catch the common problems before they become incidents.

NETWORK SEGMENTATION AND FIREWALL POLICY

The firewall policy is where most deployment errors live. FortiGate policies are defined between interfaces or zones, and all traffic is denied by default by the implicit deny policy at the bottom of the table unless an explicit policy permits it. That's the right approach, but it also means the policy table needs deliberate design before you go live.

Start with your zone layout. Map each physical or VLAN interface to a zone and confirm the zone design reflects your intended security architecture. A flat network where everything lands in the same zone is not segmentation — it's just a firewall with a single trust boundary. At minimum, you want separate zones for LAN, WAN, DMZ, and management. In most enterprise or multi-VLAN deployments, you'll add server VLANs, guest wireless, IoT, and voice as separate zones.
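As a sketch, the zone layout above might look like this in the CLI. Zone and interface names here are placeholders — substitute your own:

```
config system zone
    edit "LAN"
        set interface "vlan10" "vlan20"
        set intrazone deny    # traffic between member interfaces still needs a policy
    next
    edit "DMZ"
        set interface "port3"
        set intrazone deny
    next
end
```

Setting intrazone to deny is worth considering even within a zone: it keeps the trust boundary explicit rather than implicit in the zone membership.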

FortiGate processes policies top-down, first-match. This matters: a broad "allow all" policy sitting near the top of the table will silently swallow traffic that should be hitting a more specific policy lower down. Before go-live, review the full policy table in order. Look for:

  • Policies left from staging — "allow all" from LAN to WAN with no source/destination restriction is a common leftover from initial testing. Remove it or replace it with specific rules
  • Missing logging on deny rules — FortiGate deny policies don't log by default. Enable logging on implicit deny and any explicit deny rules — you cannot diagnose blocked traffic if you're not capturing it
  • Overly broad address objects — "all" as a source address is fine for outbound internet traffic, but review every rule where "all" appears on the destination side
  • Service objects — avoid using "ALL" as the service/port unless you have a specific reason. Use predefined service objects or create named custom services for anything non-standard
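Putting those points together, a least-privilege outbound policy with logging, plus implicit-deny logging, might look like the following. The policy ID, zone names, address object, and service list are illustrative:

```
config firewall policy
    edit 10
        set name "LAN-to-WAN-Web"
        set srcintf "LAN"
        set dstintf "WAN"
        set srcaddr "LAN-subnets"      # named object, not "all"
        set dstaddr "all"              # acceptable on the destination side for outbound internet
        set service "HTTP" "HTTPS" "DNS"
        set action accept
        set schedule "always"
        set logtraffic all             # log allowed traffic, not just security events
    next
end

config log setting
    set fwpolicy-implicit-log enable   # log hits on the implicit deny policy
end
```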

VPN AND SD-WAN CONFIGURATION

VPN and SD-WAN are frequently the most complex parts of a FortiGate deployment, and the most likely to carry subtle misconfigurations that don't fail immediately but fail badly in edge cases.

For IPsec tunnels, the critical checklist items are phase 1 and phase 2 proposal alignment with the remote peer, Dead Peer Detection (DPD) settings, and the choice between route-based and policy-based tunnels. DPD is often left at the default or disabled — confirm it's configured correctly for your underlay, since a flapping tunnel with DPD disabled can show as up in the dashboard while passing no traffic. Route-based tunnels are strongly preferred in FortiOS for any site-to-site configuration; policy-based VPN is legacy and creates problems with SD-WAN integration.
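As a minimal sketch, DPD on a route-based phase 1 looks like this (the tunnel name is a placeholder, and the timer values are examples to tune for your underlay, not recommendations):

```
config vpn ipsec phase1-interface
    edit "to-branch-01"
        set dpd on-idle            # probe the peer when the tunnel is idle
        set dpd-retryinterval 5    # seconds between probes
        set dpd-retrycount 3       # failures before the tunnel is declared down
    next
end
```

After go-live, "diagnose vpn tunnel list name to-branch-01" shows the live tunnel state, which is more trustworthy than the dashboard widget.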

For SD-WAN, the FortiOS SD-WAN rule framework is a significant departure from the older WAN load balancing model. If you've migrated from an older FortiOS version or are deploying alongside a legacy configuration, confirm that SD-WAN rules are actually in effect and not being overridden by static routes with lower admin distances. The key pre-go-live checks for SD-WAN are:

  • SLA health check targets configured — each SD-WAN member interface needs a health check (ping or HTTP probe) with realistic SLA thresholds. Defaults are often too permissive
  • Failover thresholds match your SLA commitments — if you're promising sub-second failover, verify the health check interval and failure threshold math actually delivers that
  • Application-based SD-WAN rules tested — if you're using application steering to route Microsoft 365 or VoIP traffic over a preferred interface, verify the application signature is matching correctly before go-live
  • Asymmetric routing ruled out — with multiple WAN interfaces, confirm reply traffic returns on the same interface it came in on, or that the FortiGate session table is handling the asymmetry correctly
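A health check with SLA thresholds, covering the first two points above, might be sketched like this on a current FortiOS build. The server, member IDs, and thresholds are placeholders for your own values, and note that the interval unit has changed across FortiOS versions (milliseconds on recent builds) — verify against your version's CLI reference:

```
config system sdwan
    config health-check
        edit "dns-ping"
            set server "8.8.8.8"          # probe target; use something your SLA is measured against
            set protocol ping
            set interval 500               # probe interval (ms on recent FortiOS)
            set failtime 3                 # probes missed before the member is marked down
            set recoverytime 5             # probes passed before it's marked up again
            set members 1 2                # SD-WAN member interface IDs
            config sla
                edit 1
                    set latency-threshold 250
                    set jitter-threshold 50
                    set packetloss-threshold 2
                next
            end
        next
    end
end
```

The failover maths in the second bullet falls straight out of these values: roughly interval × failtime before a dead link is declared, so check that product against what you've promised.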

HIGH AVAILABILITY AND FIRMWARE

FortiGate HA is either FGCP (FortiGate Clustering Protocol) for active-passive or active-active pairs, or FGSP (FortiGate Session Life Support Protocol) for environments that need independent routing on each unit but still want session synchronisation between them. FGCP is correct for the vast majority of deployments; FGSP is for specific architectures where each FortiGate maintains its own routing adjacencies, typically standalone units in an ECMP-routed path.
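A basic FGCP active-passive configuration covers most of what the checklist below verifies. Group name, heartbeat interfaces, and monitored ports here are placeholders for your own:

```
config system ha
    set group-name "FW-CLUSTER"
    set mode a-p                       # active-passive
    set password "<cluster-secret>"
    set hbdev "ha1" 50 "ha2" 50        # dedicated heartbeat links with priorities
    set session-pickup enable          # required for session sync to the standby
    set override disable               # avoid automatic failback flapping
    set monitor "port1" "port2"        # interfaces whose failure triggers failover
end
```

Note that session-pickup is disabled by default — without it, the "session sync verified" check below will fail by design.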

Whatever your HA mode, the pre-go-live checklist for HA is:

  • Config checksum match — both units should show identical configuration checksums in the HA dashboard. A mismatch means a sync failure somewhere
  • Session sync verified — active sessions should appear on the standby unit. Test by establishing a session, then confirming it appears in the session table on the standby
  • Firmware version match — both units in a pair must run identical FortiOS versions. Outside of a supported in-cluster upgrade, a version mismatch will typically stop the cluster from forming or leave the units out of sync
  • Failover test completed — this is non-negotiable. Before handover, physically fail the primary unit (disconnect the power or use the CLI failover command) and confirm the standby takes over within your expected timeframe, and that traffic recovers. Log the actual failover time
  • Management access on standby — confirm you can reach the standby unit on its dedicated management interface or HA management IP, not just through the cluster IP
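The checksum, session sync, and failover checks above can all be driven from the CLI. The failover command shown is available on recent FortiOS builds (older builds use "diagnose sys ha reset-uptime" instead) — confirm against your version before relying on it:

```
diagnose sys ha checksum cluster    # checksums must match across every unit
diagnose sys session list           # run on the standby: synced sessions should be present
execute ha failover set 1           # force a controlled failover (requires override disabled)
execute ha failover unset 1         # restore normal operation once you've timed the recovery
```

Record the measured failover time from this test in the handover documentation, not an assumed figure.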

LOGGING, MONITORING, AND BACKUP

A FortiGate with no logging destination configured is a firewall you can't operate. Logging is not optional — it's the mechanism that tells you what's being blocked, what's been allowed that shouldn't be, and what happened in the minutes before an incident.

Pre-go-live logging checklist:

  • FortiAnalyzer or syslog destination — if you have FortiAnalyzer, confirm the FortiGate is registered and logs are flowing before handover. If using syslog, confirm the destination IP, port, and facility are correct and the receiving system is seeing events
  • Local disk logging — if there's no external log collector, enable disk logging on the FortiGate as a minimum. Confirm disk logging is set to log all traffic, not just security events
  • SNMP community strings changed — default community strings ("public", "private") must be replaced. If SNMP monitoring isn't in scope, disable SNMP entirely on all interfaces that don't require it
  • Automated config backup schedule — configure FortiGate's built-in scheduled backup or confirm your external backup tool is polling the config. A firewall with no config backup is a project-ending risk if the hardware fails
  • Alert thresholds configured — CPU, memory, and interface utilisation alerts should be set before handover. Defaults in FortiOS are usually adequate for this, but verify they're sending to the right destination
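For the syslog and SNMP items above, a minimal sketch looks like this. Server IPs, the facility, and the community string are placeholders; the host entry restricts queries to your monitoring server only:

```
config log syslogd setting
    set status enable
    set server "10.10.20.50"       # your syslog collector
    set port 514
    set facility local7
end

config system snmp community
    edit 1
        set name "<your-ro-string>"          # never "public" or "private"
        config hosts
            edit 1
                set ip 10.10.20.60 255.255.255.255   # monitoring server only
            next
        end
    next
end
```

After applying the syslog config, generate a test event (a blocked connection is enough) and confirm it arrives at the collector — "configured" and "receiving" are different checklist items.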

THE PRE-GO-LIVE CHECKLIST

Run through this before signing off any FortiGate deployment:

  • Zone design reviewed — each interface assigned to a zone, zones reflect intended security architecture
  • Policy table audited — no leftover test policies, no "allow all" with broad source/destination, least-privilege rules only
  • Logging enabled on deny rules — implicit deny and all explicit deny policies log to the configured destination
  • VPN phase 1/2 proposals confirmed — matching the remote peer, DPD configured, route-based tunnels used
  • SD-WAN SLA health checks configured — realistic thresholds, failover tested end-to-end
  • HA firmware versions matched — both units on the same FortiOS build
  • HA config sync verified — matching checksums, session sync confirmed
  • HA failover tested — primary unit failed, standby took over, traffic recovered, failover time recorded
  • Log destination configured and tested — FortiAnalyzer or syslog receiving events
  • SNMP community strings changed — or SNMP disabled if not required
  • Config backup scheduled — automated, tested, destination confirmed
  • Management access documented — cluster IP, standby management IP, admin credentials in password manager

NEED HELP WITH A FORTIGATE DEPLOYMENT?

We deploy and configure FortiGate firewalls for UK businesses, MSPs, and partners — from single-site installs to multi-site SD-WAN rollouts. See our Fortinet firewall and SD-WAN services or get in touch directly.