"Network Connectivity for Clones" guide doesn't result in connectivity from the clone working #4720

Unanswered
kanpov asked this question in Q&A

I followed the guide and got SSH to 192.168.0.3 from the default netns working, but pinging 1.1.1.1 from the guest fails.

`ip a` in the default netns:

```
...
91: veth1@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether de:30:77:dd:fc:c8 brd ff:ff:ff:ff:ff:ff link-netns fcnet
    inet 10.0.0.1/24 brd 10.0.0.255 scope global veth1
       valid_lft forever preferred_lft forever
    inet6 fe80::dc30:77ff:fedd:fcc8/64 scope link proto kernel_ll
       valid_lft forever preferred_lft forever
```

`ip r` in the default netns:

```
...
192.168.0.3 via 10.0.0.2 dev veth1
```

`ip a` in the vm netns:

```
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: tap0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 22:ae:d0:45:a0:55 brd ff:ff:ff:ff:ff:ff
    inet 172.16.0.1/24 brd 172.16.0.255 scope global tap0
       valid_lft forever preferred_lft forever
    inet6 fe80::20ae:d0ff:fe45:a055/64 scope link proto kernel_ll
       valid_lft forever preferred_lft forever
4: veth2@if91: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ca:3c:cc:57:3a:69 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.0.0.2/24 brd 10.0.0.255 scope global veth2
       valid_lft forever preferred_lft forever
    inet6 fe80::c83c:ccff:fe57:3a69/64 scope link proto kernel_ll
       valid_lft forever preferred_lft forever
```

`ip r` in the vm netns:

```
default via 10.0.0.1 dev veth2 proto static
10.0.0.0/24 dev veth2 proto kernel scope link src 10.0.0.2
172.16.0.0/24 dev tap0 proto kernel scope link src 172.16.0.1
```

`iptables -t nat -vnL` in the vm netns:

```
Chain PREROUTING (policy ACCEPT 17 packets, 2006 bytes)
 pkts bytes target     prot opt in     out     source               destination
    2   120 DNAT       0    --  veth2  *       0.0.0.0/0            192.168.0.3          to:172.16.0.2

Chain INPUT (policy ACCEPT 2 packets, 168 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain POSTROUTING (policy ACCEPT 2 packets, 120 bytes)
 pkts bytes target     prot opt in     out     source               destination
    1    84 SNAT       0    --  *      veth2   172.16.0.2           0.0.0.0/0            to:192.168.0.3
```

Using the default ubuntu-22.04.ext4 rootfs with its guest MAC setup and `ip r a default via 172.16.0.1` in the guest.
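For context, the topology shown in the dumps above can be reproduced with roughly the following commands (a sketch, assuming the netns is named `fcnet` as in the `link-netns` output; `tap0` itself is created separately by the usual in-netns tap setup):

```shell
# Sketch of the veth topology from the dumps above (assumed names: netns
# "fcnet", host-side veth1, netns-side veth2; tap0 is created separately
# by the usual in-netns tap setup).
sudo ip netns add fcnet
sudo ip link add veth1 type veth peer name veth2
sudo ip link set veth2 netns fcnet

# Host side: 10.0.0.1/24 on veth1
sudo ip addr add 10.0.0.1/24 dev veth1
sudo ip link set veth1 up

# Netns side: 10.0.0.2/24 on veth2, default route back through the host
sudo ip netns exec fcnet ip addr add 10.0.0.2/24 dev veth2
sudo ip netns exec fcnet ip link set veth2 up
sudo ip netns exec fcnet ip route add default via 10.0.0.1

# Host route steering the clone IP into the netns
sudo ip route add 192.168.0.3 via 10.0.0.2 dev veth1
```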

I think there might be an error in the guide: connecting to the guest makes sense (192.168.0.3 through the veth, to the tap, to the guest), but how would NATing the packets to 192.168.0.3 result in them going out through my actual network interface (wlp1s0)? There aren't any routes or iptables rules being set up in the guide that would point 192.168.0.3 at the correct network interface, or am I missing something?
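For what it's worth, a minimal sketch of the host-side pieces that would be needed for outbound traffic to reach wlp1s0 — none of these appear in the guide, and the interface names are taken from the dumps above:

```shell
# Hedged sketch: host-side rules the guide does not set up, assuming
# veth1/wlp1s0 from the dumps above. Without something like this, packets
# SNATed to 192.168.0.3 have nowhere to go once they leave the netns.

# Forward between the veth pair's host end and the physical interface
sudo iptables -A FORWARD -i veth1 -o wlp1s0 -j ACCEPT
sudo iptables -A FORWARD -i wlp1s0 -o veth1 -j ACCEPT

# Translate the clone address to the host's own when leaving via wlp1s0
sudo iptables -t nat -A POSTROUTING -s 192.168.0.3 -o wlp1s0 -j MASQUERADE

# IP forwarding must also be enabled on the host
sudo sysctl -w net.ipv4.ip_forward=1
```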


Replies: 5 comments 7 replies


I made my own setup like this:

Netns with the veth end outside and the vpeer end inside, and connectivity on both ends:

```
iptables -t nat -A POSTROUTING -s ${VPEER_ADDR}/24 -o ${IFACE} -j MASQUERADE
iptables -A FORWARD -i ${IFACE} -o ${VETH} -j ACCEPT
iptables -A FORWARD -o ${IFACE} -i ${VETH} -j ACCEPT
```

And forwarding from tap to vpeer according to the network setup guide, with iptables inside the netns.

While this works, it requires 6 iptables rules (7 if also making a clone address accessible from outside the netns with DNAT) and can probably be simplified somehow. Plus, why does the original guide not work, as I described?
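The in-netns half of that setup (forwarding from tap to vpeer, mirroring the mainline network setup guide) would look roughly like this — the names (`fcnet`, `tap0`, `vpeer`) are assumptions, not taken verbatim from the comment:

```shell
# Sketch of the tap -> vpeer forwarding inside the netns, mirroring the
# mainline network setup guide. Assumed names: netns "fcnet", tap0 with
# 172.16.0.0/24, vpeer as the netns end of the veth pair.
sudo ip netns exec fcnet iptables -t nat -A POSTROUTING -o vpeer -j MASQUERADE
sudo ip netns exec fcnet iptables -A FORWARD -i tap0 -o vpeer -j ACCEPT
sudo ip netns exec fcnet iptables -A FORWARD -i vpeer -o tap0 -j ACCEPT
```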

0 replies

After some more networking pain I reduced the number of necessary rules from 7 to 5, so I'm actually gonna stick with my solution instead of the broken one in the docs:

```
[src/main.rs:147:5] &iptables_cmd = "-t nat -A POSTROUTING -o veth0 -s 172.16.0.2 -j SNAT --to 10.0.0.2"
[src/main.rs:147:5] &iptables_cmd = "-t nat -A PREROUTING -i veth0 -d 192.168.0.3 -j DNAT --to 172.16.0.2"
[src/main.rs:147:5] &iptables_cmd = "-t nat -A POSTROUTING -s 10.0.0.2/24 -o wlp1s0 -j MASQUERADE"
[src/main.rs:147:5] &iptables_cmd = "-A FORWARD -i wlp1s0 -o veth1 -j ACCEPT"
[src/main.rs:147:5] &iptables_cmd = "-A FORWARD -o wlp1s0 -i veth1 -j ACCEPT"
```

1 (inside netns): mark connections from the guest as connections from inside the netns
2 (inside netns): mark connections to the forwarded guest IP as connections inside the netns to the actual guest IP
3-5 (outside netns): proper internet connectivity inside the netns, allowing 1 and 2 to work
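Spelled out as plain iptables invocations (the first two run inside the netns, the rest on the host; the netns name `fcnet` is an assumption, the interfaces and addresses are the ones from the debug output above):

```shell
# The five rules above as plain commands. Names/addresses (fcnet, veth0
# inside the netns, veth1 outside, wlp1s0, 172.16.0.2, 10.0.0.2,
# 192.168.0.3) are taken from the debug output; adjust to your topology.

# 1 (inside netns): guest traffic leaves the netns with the veth address
sudo ip netns exec fcnet iptables -t nat -A POSTROUTING -o veth0 -s 172.16.0.2 -j SNAT --to 10.0.0.2
# 2 (inside netns): traffic to the clone IP is rewritten to the guest IP
sudo ip netns exec fcnet iptables -t nat -A PREROUTING -i veth0 -d 192.168.0.3 -j DNAT --to 172.16.0.2

# 3-5 (host): internet connectivity for the netns subnet
sudo iptables -t nat -A POSTROUTING -s 10.0.0.2/24 -o wlp1s0 -j MASQUERADE
sudo iptables -A FORWARD -i wlp1s0 -o veth1 -j ACCEPT
sudo iptables -A FORWARD -o wlp1s0 -i veth1 -j ACCEPT
```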

3 replies
pb8o (Maintainer), Aug 19, 2024:

Hi @kanpov, it may be that the host configuration was not fully captured back when this guide was written. As you said, we could update it to use `MASQUERADE`, but that would invalidate the results below unless we repeat them. We will look into it, but I am initially more inclined to just point to the working example we have in our integration tests.

@kanpov:

> Hi @kanpov, it may be that the host configuration was not fully captured back when this guide was written. As you said, we could update it to use `MASQUERADE`, but that would invalidate the results below unless we repeat them. We will look into it, but I am initially more inclined to just point to the working example we have in our integration tests.

Funnily enough, the working example you link doesn't use a single `SNAT` or `DNAT` rule and instead does forwarding similarly to what I pointed out, so even it isn't what's described on that documentation page. So I think I'm correct in assuming that the configuration presented in the docs is not functional, and yet (as you said) fixing the documentation page would require rerunning the tests, which I can't do.

@kanpov:

[screenshot of the integration tests' network setup]
This is exactly the double-forward setup (from tap to vpeer, and from vpeer to the host iface) I described in my comment, which uses 6 rules (7 with guest IP forwarding). So if the tests were run using it, it's actually possible that the latest setup I showed, with 1 forwarding chain and 2 NAT rules inside the netns, is faster than the one currently used by CI (2 forwarding chains).

pb8o (Maintainer), Sep 3, 2024:

Hi, I actually tried the guide myself and ran into the same issues; it is not functional. I thought it could be fixed with `iptables -P FORWARD ACCEPT`, but no, I cannot make it work with this guide.

@kanpov, what is the rule you call "7, with guest IP forwarding"? Is that so we can make the guest routable from the host? Could you share it?

3 replies
@kanpov:

The last rule is a DNAT inside the netns that rewrites the globally routable IP to the one representing the guest inside the netns.
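Concretely, that rule (plus the host route that makes the clone IP reach the netns at all) might look like this — a guess based on the addresses used earlier in the thread, not the exact rule from the comment:

```shell
# Hedged guess at the "7th" rule: DNAT inside the netns from the
# globally-routable clone IP (192.168.0.3) to the guest IP (172.16.0.2),
# with a host route steering the clone IP into the netns. Names assumed.
sudo ip netns exec fcnet iptables -t nat -A PREROUTING -d 192.168.0.3 -j DNAT --to 172.16.0.2
sudo ip route add 192.168.0.3 via 10.0.0.2 dev veth1
```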

@kanpov:

For the full code, you can check out the fcnet crate I made, which implements both simple and netns networking. For netns, it uses 1 forwarding chain and 2 rewrites instead of 2 forwarding chains like in CI.

pb8o (Maintainer), Sep 13, 2024:

I guess that is https://github.com/kanpov/fcnet/blob/master/src/netns.rs? Thanks, I will take a look at it.

pb8o (Maintainer), Oct 14, 2024:

Provided an updated guide in #4848. I think that at least works with what we have in the integration tests, even if it's not optimal. Also removed the performance section, since we don't have a way of rerunning those tests.

1 reply
@kanpov:

> Provided an updated guide in #4848. I think that at least works with what we have in the integration tests, even if it's not optimal. Also removed the performance section, since we don't have a way of rerunning those tests.

Thanks for the effort! As for the "optimal" claim, I don't actually have concrete benchmarks proving that FORWARD, MASQUERADE + DNAT for connectivity to the guest and SNAT + FORWARD, MASQUERADE for connectivity from the guest are faster than FORWARD, MASQUERADE + FORWARD, MASQUERADE on both; I made that assumption based on the reduction in iptables rules. So I suppose it'd be interesting to actually run that benchmark, but doing it "properly" in the context of Firecracker would entail recovering those tests or creating new ones.


Update on this: after #4877 gets merged with major improvements to the mainline network setup doc, I'm going to work on de-crusting the network-for-clones setup as well, with nftables support and fewer rules, as described above.

0 replies
Category: Q&A
Labels: Type: Documentation (indicates a need for improvements or additions to documentation), Type: Fix (indicates a fix to existing code)
Participants: @kanpov, @pb8o
