By Matt Sherif

A madman's thoughts: Azure IP/Routing weirdness

Cloud, what an interesting place. It enables us to deploy workloads faster than ever, and it has also expanded the "attack surface" by a factor of 10 (if not higher). For those of us who have been in the "game" for 10+ years, the cloud is in essence - to quote my friend Manny - "Black Magic Networking".

Such a simple cloud, right? NOPE!

It's not that the concepts are all that dissimilar - in fact they're near identical, with a few very big differences. This entry chronicles one of those differences.


It started when I asked Jenna T., a colleague of mine, if she'd ever deployed a FortiManager in Azure, as I had just deployed one and couldn't for the life of me figure out how to get the darn thing to talk to the internet. Turns out, I didn't have a user-defined route table assigned to my "Azure LAN" subnet, and despite configuring the default gateway on the FortiManager VM, it would not ping outside the LAN subnet. That's another write-up, for another time.
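For context, assigning a user-defined route table to a subnet is its own set of steps. A rough sketch with the Azure CLI looks like this (the resource names and next-hop IP below are placeholders, not the actual values from my deployment):

# Create a route table and point the default route at the FortiGate's LAN IP
az network route-table create --resource-group MyRG --name lan-rt
az network route-table route create --resource-group MyRG --route-table-name lan-rt --name default-via-fgt --address-prefix 0.0.0.0/0 --next-hop-type VirtualAppliance --next-hop-ip-address 10.0.255.4

# Associate the route table with the "Azure LAN" subnet
az network vnet subnet update --resource-group MyRG --vnet-name MyVNet --name AzureLAN --route-table lan-rt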


I shared my findings with Jenna, who took interest because a problem she'd been helping someone else with was exhibiting similar behavior. With my findings fresh in mind, I offered to take a look.


The topology we were working with looked like this (in short: an on-prem FG-201E with a 172.25.25.0/24 LAN, connected to a FortiGate VM in Azure fronting a 10.0.255.0/24 "Azure LAN" subnet):


Despite the user-defined routes being present, the issue could be summed up as follows:


  1. The LAN interface of the FG-201E could ping/reach the LAN interface of the Azure FGT.

  2. The LAN interface of the FG-201E could ping/reach any device in the Azure LAN.

  3. All devices on the 172.25.25.0/24 subnet could ping/reach the LAN interface of the Azure FGT.

  4. Devices on the 172.25.25.0/24 subnet, with the exception of the FG-201E's LAN interface, could not reach any devices on the 10.0.255.0/24 subnet, with the exception of the Azure LAN interface.

  5. The reverse was true as well: the LAN interface of the Azure FGT could ping/reach the LAN interface of the FG-201E.

  6. The LAN interface of the Azure FGT could ping/reach any devices on the 172.25.25.0/24 subnet.

  7. Devices on the 10.0.255.0/24 subnet could ping the LAN interface of the FG-201E.

  8. Devices on the 10.0.255.0/24 subnet, with the exception of the Azure FGT's LAN interface, could not ping/reach any devices on the 172.25.25.0/24 subnet, with the exception of the FG-201E's LAN interface.

At this point I think we'd ruled out routing: diag sniffer showed one-way traffic, and diag debug flow showed:

id=20085 trace_id=17 func=print_pkt_detail line=5665 msg="vd-root:0 received a packet(proto=6, 172.25.25.2:52828->10.0.255.5:3389) from ToHub. flag [S], seq 2386492846, ack 0, win 64240"
id=20085 trace_id=17 func=resolve_ip_tuple_fast line=5746 msg="Find an existing session, id-00002426, original direction"
id=20085 trace_id=17 func=npu_handle_session44 line=1160 msg="Trying to offloading session from ToHub to port2, skb.npu_flag=00000000 ses.state=00200204 ses.npu_state=0x02040000"
id=20085 trace_id=17 func=fw_forward_dirty_handler line=396 msg="state=00200204, state2=00000001, npu_state=02040000"
id=20085 trace_id=17 func=ipd_post_route_handler line=490 msg="out port2 vwl_zone_id 0, state2 0x1, quality 0."
id=20085 trace_id=18 func=print_pkt_detail line=5665 msg="vd-root:0 received a packet(proto=6, 172.25.25.2:52828->10.0.255.5:3389) from ToHub. flag [S], seq 2386492846, ack 0, win 64240"
id=20085 trace_id=18 func=resolve_ip_tuple_fast line=5746 msg="Find an existing session, id-00002426, original direction"
id=20085 trace_id=18 func=npu_handle_session44 line=1160 msg="Trying to offloading session from ToHub to port2, skb.npu_flag=00000000 ses.state=00200204 ses.npu_state=0x02040000"
id=20085 trace_id=18 func=fw_forward_dirty_handler line=396 msg="state=00200204, state2=00000001, npu_state=02040000"
id=20085 trace_id=18 func=ipd_post_route_handler line=490 msg="out port2 vwl_zone_id 0, state2 0x1, quality 0."

In essence, the traffic was being allowed. Yet there was no response; we couldn't reach the server in Azure, despite the server having internet access.
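For reference, the flow trace above is standard FortiOS debugging; roughly these commands produce that kind of output (the filter values here are taken from the session above):

# Confirm the traffic is one-way with the packet sniffer
diagnose sniffer packet any 'host 172.25.25.2 and host 10.0.255.5' 4

# Trace the forwarding decision for the same flow
diagnose debug flow filter addr 10.0.255.5
diagnose debug flow show function-name enable
diagnose debug enable
diagnose debug flow trace start 10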


Frustrated yet determined, I asked our cloud experts internally for help, and one of them - we'll call her M - suggested we check that IP forwarding was enabled on the LAN NIC of the Azure FortiGate.

We checked the LAN interface on the Azure FGT, and it turned out IP forwarding was disabled. We enabled it and everything was reachable!
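If you'd rather check and fix this from the CLI than click through the portal, it looks roughly like this (the NIC and resource group names are placeholders):

# Check whether IP forwarding is enabled on the FortiGate's LAN NIC
az network nic show --resource-group MyRG --name fgt-lan-nic --query enableIpForwarding

# Enable it on the NIC
az network nic update --resource-group MyRG --name fgt-lan-nic --ip-forwarding true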


This gave me pause and made me wonder why that could be. So I did what any one of us does when we don't understand something - I Googled it! Here's what I found.



"Any network interface attached to a virtual machine that forwards traffic to an address other than its own must have the Azure enable IP forwarding option enabled for it."


Many of you reading this may think "Well, duh...", but this is where my earlier point comes in: cloud concepts are near identical to on-prem ones, yet the few differences are big. It turns out - I knew this bit, but didn't realize the depth of its impact - that there's no real "L2" in cloud environments; ARP is a kind of "smoke and mirrors" handled by the cloud platform. In this case, for a packet to be forwarded to a destination outside the "Azure LAN", we need to enable "IP forwarding", which tells Azure to stop dropping packets whose source or destination IP doesn't match an address assigned to that network interface.


Traffic going to the internet must be proxied or NATed by a network interface that has forwarding enabled - which explains why these devices could still reach the internet: the WAN interface of the Azure FGT had it enabled.


Now, the takeaway here isn't that deploying an Azure FortiGate is hard - it's not - but we do need to be aware of a few things. When deploying an Azure FortiGate you'll be presented with 3 options:

  • FortiGateNGFW - Single VM with ARM Template

  • Fortinet FortiGate Next-Generation Firewall

  • FortiGate NGFW for Azure LB HA with ARM template

Looking at these options, since I didn't know what an ARM template was, I figured I didn't need it, and since I didn't need load balancing, the third option wasn't it either - so the second option it is. The person Jenna was helping thought the same. It turns out the second option deploys with 2 NICs, but only one of them is attached (the WAN); you need to attach the LAN NIC yourself and enable IP forwarding on it.
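If you've already deployed that way, the second NIC can be attached from the CLI; a rough sketch (the VM and NIC names are placeholders), keeping in mind the VM has to be deallocated before a NIC can be added:

# Stop/deallocate the VM, attach the LAN NIC, then start it back up
az vm deallocate --resource-group MyRG --name fgt-vm
az vm nic add --resource-group MyRG --vm-name fgt-vm --nics fgt-lan-nic
az vm start --resource-group MyRG --name fgt-vm

# Don't forget IP forwarding on the newly attached LAN NIC
az network nic update --resource-group MyRG --name fgt-lan-nic --ip-forwarding true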


The first option is actually what you want. After reading up more on Azure Resource Manager (ARM) and templates, it turns out you can deploy an entire FortiGate - WAN and LAN subnets, the requisite NICs (with forwarding enabled), and the requisite route tables - in one shot, so it's a much more complete deployment.
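Feeding an ARM template to Azure is a single deployment command; roughly like this (the template and parameter file names here are generic examples, not the exact Fortinet ones):

# Deploy an ARM template into an existing resource group
az deployment group create --resource-group MyRG --template-file azuredeploy.json --parameters @azuredeploy.parameters.json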


Long story short, if you run into the issues described here, check IP forwarding on your FortiGate interfaces in Azure. If you haven't yet deployed and would like to, consider either of the ARM template versions, and avoid the trouble above.


Hope this helps. Madman out.
