Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind bug
Description
When a container is connected to multiple networks, only one of the networks' DNS servers can resolve containers by FQDN.
Steps to reproduce the issue:
- create two networks
podman network create --subnet 192.168.55.0/24 network1
podman network create --subnet 192.168.56.0/24 network2
- start a container attached only to network1 (this is the container whose name we will resolve) and have it idle
podman run --detach --rm -ti --name container1 --network network1 alpine sleep 9000
- create a container attached to both networks and run dig against both networks' DNS servers from resolv.conf to resolve the FQDN of the first container
podman run --rm -ti --name container2 --network network1,network2 alpine sh -c "cat /etc/resolv.conf; apk add bind-tools > /dev/null; echo '<<<<<<<<<<< network1 dns test'; dig container1.dns.podman @192.168.55.1; echo '<<<<<<<<<<< network2 dns test'; dig container1.dns.podman @192.168.56.1"
- repeat the previous step using the short name
podman run --rm -ti --name container2 --network network1,network2 alpine sh -c "cat /etc/resolv.conf; apk add bind-tools > /dev/null; echo '<<<<<<<<<<< network1 dns test'; dig container1 @192.168.55.1; echo '<<<<<<<<<<< network2 dns test'; dig container1 @192.168.56.1"
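For convenience, the steps above can be collected into one script. This is only a sketch: it assumes podman is installed and that the network and container names (network1, network2, container1, container2) are not already in use.

```shell
#!/bin/sh
# Sketch of the reproduction steps above; skips gracefully when podman is absent.
if ! command -v podman >/dev/null 2>&1; then
  echo "podman not found; skipping reproduction"
  status=skipped
else
  set -e
  # 1. Create the two networks.
  podman network create --subnet 192.168.55.0/24 network1
  podman network create --subnet 192.168.56.0/24 network2

  # 2. Start the target container, attached to network1 only.
  podman run --detach --rm --name container1 --network network1 alpine sleep 9000

  # 3 + 4. From a container on both networks, query each network's DNS server
  # for the FQDN and the short name of container1.
  podman run --rm --name container2 --network network1,network2 alpine sh -c '
    apk add bind-tools > /dev/null
    for server in 192.168.55.1 192.168.56.1; do
      echo "<<<<<<<<<<< $server"
      dig +short container1.dns.podman @"$server"
      dig +short container1 @"$server"
    done'

  # Clean up.
  podman stop container1
  podman network rm network1 network2
  status=ran
fi
```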
Describe the results you received:
When resolving the FQDN of container1, only one name server responds with an answer.
search dns.podman dns.podman
nameserver 192.168.55.1
nameserver 192.168.56.1
nameserver 192.168.121.1
<<<<<<<<<<< network1 dns test
... (clipped for clarity)
;; QUESTION SECTION:
;container1.dns.podman. IN A
;; ANSWER SECTION:
container1.dns.podman. 86400 IN A 192.168.55.2
;; Query time: 1 msec
;; SERVER: 192.168.55.1#53(192.168.55.1)
;; WHEN: Tue May 24 14:37:03 UTC 2022
;; MSG SIZE rcvd: 78
<<<<<<<<<<< network2 dns test
... (clipped for clarity)
;; QUESTION SECTION:
;container1.dns.podman. IN A
;; Query time: 3 msec
;; SERVER: 192.168.56.1#53(192.168.56.1)
;; WHEN: Tue May 24 14:37:03 UTC 2022
;; MSG SIZE rcvd: 62
When resolving the short name of the container, both name servers respond correctly.
search dns.podman dns.podman
nameserver 192.168.56.1
nameserver 192.168.55.1
nameserver 192.168.121.1
<<<<<<<<<<< network1 dns test
... (clipped for clarity)
;; QUESTION SECTION:
;container1. IN A
;; ANSWER SECTION:
container1. 86400 IN A 192.168.55.2
;; Query time: 2 msec
;; SERVER: 192.168.55.1#53(192.168.55.1)
;; WHEN: Tue May 24 14:38:01 UTC 2022
;; MSG SIZE rcvd: 67
<<<<<<<<<<< network2 dns test
... (clipped for clarity)
;; QUESTION SECTION:
;container1. IN A
;; ANSWER SECTION:
container1. 86400 IN A 192.168.55.2
;; Query time: 3 msec
;; SERVER: 192.168.56.1#53(192.168.56.1)
;; WHEN: Tue May 24 14:38:01 UTC 2022
;; MSG SIZE rcvd: 67
Describe the results you expected:
Both name servers should respond to both the short-name and the FQDN queries.
Additional information you deem important (e.g. issue happens only occasionally):
- The /etc/resolv.conf nameserver entries show up in varying order, so a user who references containers by FQDN will see intermittent resolution failures in applications. (See: /etc/resolv.conf nameserver order varies when starting a container with multiple networks, podman#14262.)
- It is also notable that /etc/resolv.conf lists the search suffix dns.podman twice; this probably does no harm, but podman should deduplicate it.
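On the second point, the deduplication is simple to express; a minimal sketch of cleaning the search line, where the sample input merely mirrors the duplicated resolv.conf output shown above:

```shell
# Deduplicate the domains on the "search" line of resolv.conf-style input.
# The sample mirrors the duplicated "search dns.podman dns.podman" seen above.
resolv='search dns.podman dns.podman
nameserver 192.168.55.1
nameserver 192.168.56.1'

deduped=$(printf '%s\n' "$resolv" | awk '
  $1 == "search" {
    out = $1
    # Keep only the first occurrence of each search domain.
    for (i = 2; i <= NF; i++) if (!seen[$i]++) out = out " " $i
    print out
    next
  }
  { print }
')
printf '%s\n' "$deduped"
# Prints:
# search dns.podman
# nameserver 192.168.55.1
# nameserver 192.168.56.1
```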
Output of podman version:
# podman version
Client: Podman Engine
Version: 4.1.0
API Version: 4.1.0
Go Version: go1.18
Built: Fri May 6 16:15:54 2022
OS/Arch: linux/amd64
Output of podman info --debug:
# podman info --debug
host:
arch: amd64
buildahVersion: 1.26.1
cgroupControllers:
- cpuset
- cpu
- io
- memory
- hugetlb
- pids
- misc
cgroupManager: systemd
cgroupVersion: v2
conmon:
package: conmon-2.1.0-2.fc36.x86_64
path: /usr/bin/conmon
version: 'conmon version 2.1.0, commit: '
cpuUtilization:
idlePercent: 97.24
systemPercent: 0.93
userPercent: 1.83
cpus: 2
distribution:
distribution: fedora
variant: cloud
version: "36"
eventLogger: journald
hostname: container.redacted
idMappings:
gidmap: null
uidmap: null
kernel: 5.17.5-300.fc36.x86_64
linkmode: dynamic
logDriver: journald
memFree: 148381696
memTotal: 6217089024
networkBackend: netavark
ociRuntime:
name: crun
package: crun-1.4.4-1.fc36.x86_64
path: /usr/bin/crun
version: |-
crun version 1.4.4
commit: 6521fcc5806f20f6187eb933f9f45130c86da230
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
os: linux
remoteSocket:
path: /run/podman/podman.sock
security:
apparmorEnabled: false
capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
rootless: false
seccompEnabled: true
seccompProfilePath: /usr/share/containers/seccomp.json
selinuxEnabled: true
serviceIsRemote: false
slirp4netns:
executable: /usr/bin/slirp4netns
package: slirp4netns-1.2.0-0.2.beta.0.fc36.x86_64
version: |-
slirp4netns version 1.2.0-beta.0
commit: 477db14a24ff1a3de3a705e51ca2c4c1fe3dda64
libslirp: 4.6.1
SLIRP_CONFIG_VERSION_MAX: 3
libseccomp: 2.5.3
swapFree: 5851836416
swapTotal: 6217003008
uptime: 15h 16m 15.62s (Approximately 0.62 days)
plugins:
log:
- k8s-file
- none
- passthrough
- journald
network:
- bridge
- macvlan
volume:
- local
registries:
search:
- docker.io
store:
configFile: /usr/share/containers/storage.conf
containerStore:
number: 19
paused: 0
running: 19
stopped: 0
graphDriverName: overlay
graphOptions:
overlay.mountopt: nodev,metacopy=on
graphRoot: /var/lib/containers/storage
graphRootAllocated: 41788899328
graphRootUsed: 9318744064
graphStatus:
Backing Filesystem: btrfs
Native Overlay Diff: "false"
Supports d_type: "true"
Using metacopy: "true"
imageCopyTmpDir: /var/tmp
imageStore:
number: 67
runRoot: /run/containers/storage
volumePath: /var/lib/containers/storage/volumes
version:
APIVersion: 4.1.0
Built: 1651853754
BuiltTime: Fri May 6 16:15:54 2022
GitCommit: ""
GoVersion: go1.18
Os: linux
OsArch: linux/amd64
Version: 4.1.0
Package info (e.g. output of rpm -q podman or apt list podman):
# rpm -q netavark podman
netavark-1.0.3-3.fc36.x86_64
podman-4.1.0-1.fc36.x86_64
Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/main/troubleshooting.md)
Yes
Additional environment details (AWS, VirtualBox, physical, etc.):
Libvirt VM