Harden VNC access security in OpenStack

1. Problem discovery

Many students do not have the habit of logging out of the noVNC terminal when they are done; they usually just close the browser window. This is dangerous: a script kiddie on the intranet can scan out the VNC ports of our virtual machines at any time. Let's use nmap to check which ports are open on a compute node in the development environment.

ubuntu@ubuntu:~$ nmap 192.168.23.12
Starting Nmap 7.70 ( https://nmap.org ) at 2019-06-13 19:22 CST
Nmap scan report for 192.168.23.12
Host is up (0.0099s latency).
Not shown: 992 closed ports
PORT     STATE SERVICE
22/tcp   open  ssh
4000/tcp open  remoteanything
5900/tcp open  vnc
5901/tcp open  vnc-1
5902/tcp open  vnc-2
5903/tcp open  vnc-3
5904/tcp open  unknown
8000/tcp open  cvd

Nmap done: 1 IP address (1 host up) scanned in 0.31 seconds
ubuntu@ubuntu:~$

If a student does not log out of the noVNC session after using it, anyone who knows the host's IP address and the VNC port can log in to that VM.

2. Process analysis

OpenStack VNC Proxy process analysis

OpenStack uses the VNC proxy to isolate the management network from the service network: clients reach the VNC consoles of VMs through port 6080 on the management network, and a token is used to verify that each access is valid. On a compute node, the VNC proxy in OpenStack works as follows:

  1. A user opens the VNC client for a VM from the browser.
  2. The browser sends a request to nova-api asking for a URL to access the VNC console.
  3. nova-api calls nova-compute's get_vnc_console method to obtain the VNC connection information.
  4. nova-compute calls libvirt's get_vnc_console function.
  5. libvirt obtains the VNC server information for the VM by parsing /etc/libvirt/qemu/instance-0000000c.xml.
  6. libvirt returns the host, port and related information to nova-compute in JSON format.
  7. nova-compute randomly generates a UUID to serve as a token.
  8. nova-compute combines the information returned by libvirt with the information in its configuration file into connect_info and returns it to nova-api.
  9. nova-api passes the token and connect_info on to nova-consoleauth.
  10. nova-consoleauth caches the instance -> token and token -> connect_info mappings.
  11. nova-api returns a URL such as http://172.24.1.1:6080/vnc_auto.html?token=7efaee3f-eada-4731-a87c-e173cbd25e98&title=helloworld%289169fdb2-5b74-46b1-9803-60d2926bd97c%29 to the browser.
  12. The browser opens this link.
  13. The request behind the link reaches nova-novncproxy.
  14. nova-novncproxy calls nova-consoleauth's check_token function.
  15. nova-consoleauth validates the token and returns the connect_info of the instance.
  16. nova-novncproxy connects to the VNC server on the compute node using the host and port in connect_info, and the proxying begins.
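For reference, the same request can also be made programmatically. Below is a minimal sketch, assuming python-novaclient is installed and the usual OS_* credentials are exported, that asks nova-api for a noVNC console URL (steps 2 and 11 above); the server name "helloworld" is only an example taken from the URL shown in step 11:

# Sketch: request a noVNC console URL from nova-api.
# Assumes python-novaclient is installed and OS_* credentials are exported.
import os

from keystoneauth1 import loading, session
from novaclient import client

loader = loading.get_plugin_loader("password")
auth = loader.load_from_options(
    auth_url=os.environ["OS_AUTH_URL"],
    username=os.environ["OS_USERNAME"],
    password=os.environ["OS_PASSWORD"],
    project_name=os.environ["OS_PROJECT_NAME"],
    user_domain_name=os.environ.get("OS_USER_DOMAIN_NAME", "Default"),
    project_domain_name=os.environ.get("OS_PROJECT_DOMAIN_NAME", "Default"),
)
nova = client.Client("2", session=session.Session(auth=auth))

server = nova.servers.find(name="helloworld")   # example VM name
console = server.get_vnc_console("novnc")       # triggers the nova-api -> nova-compute path
print(console["console"]["url"])                # http://...:6080/vnc_auto.html?token=...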

3. Potential consequences

An attacker who knows the IP address can scan for the port number and connect directly to the VM over VNC.

4. Solutions

  • ① Control IP access with ACLs on switches and firewalls
  • ② Set vncserver_listen in each compute node's nova.conf to an internal IP address, so that newly created VMs listen for VNC only on the internal network
  • ③ For existing and new VMs, modify the VNC listen address and port in the libvirt XML, so that a VM does not expose the port to the public network after it restarts
  • ④ Modify the iptables rules to block access to ports 5900 to 5999 from network segments other than the intranet
  • ⑤ Add a VNC access password

Methods ④ and ⑤ are described in detail below.

④ Configure iptables

According to step 16 of the OpenStack VNC proxy workflow above, nova-novncproxy connects to host:vncport to provide the VNC access service. In other words, the VNC ports on a compute node only need to be reachable by the nova-novncproxy service.

Add the following rules to the INPUT chain of iptables on all compute nodes:

$ iptables -A INPUT -s {{ CONTROLLER_NODE_IP }}/32 -p tcp -m multiport --dports 5900:5999 -m comment --comment "ACCEPT VNC Port only by Controller Node" -j ACCEPT

$ iptables -A INPUT -p tcp -m multiport --dports 5900:5999 -j REJECT --reject-with icmp-port-unreachable

Now only the controller node is allowed to access ports 5900-5999 on the host. When you scan again with nmap, the VNC ports are no longer visible.

$ nmap 192.168.23.12
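As a quick cross-check from any machine other than the controller, a small Python sketch can confirm that the VNC ports are now rejected (192.168.23.12 is the compute node scanned above):

# Sketch: probe the VNC port range of the compute node from a
# non-controller host; the REJECT rule should refuse every connection.
import socket

COMPUTE_NODE = "192.168.23.12"

for port in range(5900, 5905):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(2)
    status = sock.connect_ex((COMPUTE_NODE, port))  # 0 means the connection was accepted
    print("port %d: %s" % (port, "open" if status == 0 else "rejected/filtered"))
    sock.close()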

⑤ Add the VNC access password

The file to modify is nova/virt/libvirt/config.py.

libvirtd supports setting a VNC access password through the passwd attribute of the <graphics> element in the domain XML:

...
<graphics type='vnc' port='-1' autoport='yes' listen='192.168.23.59' passwd='YOUR-PASSWORD-HERE' keymap='en-us'/>
...
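To see what a given guest actually has configured, the sketch below uses the libvirt-python bindings (assumed to be installed) to print the <graphics> line of the example domain instance-0000000c; note that libvirt omits the passwd attribute unless the XML is requested with the secure flag:

# Sketch: print the <graphics> element of a guest, including the VNC password.
# Requires the libvirt-python bindings; instance-0000000c is the example
# domain name used earlier in this article.
import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("instance-0000000c")

# VIR_DOMAIN_XML_SECURE is required, otherwise passwd is stripped from the XML.
xml = dom.XMLDesc(libvirt.VIR_DOMAIN_XML_SECURE)
for line in xml.splitlines():
    if "<graphics" in line:
        print(line.strip())
conn.close()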

Nova generates this <graphics> element when it builds the guest configuration for a new VM, in the LibvirtConfigGuestGraphics class shown below. The simplest change is to set a passwd attribute on the dev element that the class returns; its value is the VNC access password.

class LibvirtConfigGuestGraphics(LibvirtConfigGuestDevice):

    def __init__(self, **kwargs):
        super(LibvirtConfigGuestGraphics, self).__init__(root_name="graphics",
                                                         **kwargs)

        self.type = "vnc"
        self.autoport = True
        self.keymap = None
        self.listen = None

    def format_dom(self):
        dev = super(LibvirtConfigGuestGraphics, self).format_dom()

        dev.set("type", self.type)
        if self.autoport:
            dev.set("autoport", "yes")
        else:
            dev.set("autoport", "no")
        if self.keymap:
            dev.set("keymap", self.keymap)
        if self.listen:
            dev.set("listen", self.listen)
        dev.set("passwd", "123456")  # newly added line: the VNC access password
        return dev

dev.set("passwd", "123456") is the newly added line; if you do not want VNC access to require a password, simply comment it out. The following walk-through applies the change by hand once. Because the development environment is deployed in containers, the file path is rather long. First, locate the file:

root@controller1:~# find /var/lib/docker/aufs/diff -name config.py | grep nova
/var/lib/docker/aufs/diff/pr0XDEZwLDflwwzUPc0mNVYwf6b3wJ4wxEwxNBRlmKMD7qRurdlBck41J8hAkjd3/usr/lib/python2.7/dist-packages/nova/config.py
/var/lib/docker/aufs/diff/pr0XDEZwLDflwwzUPc0mNVYwf6b3wJ4wxEwxNBRlmKMD7qRurdlBck41J8hAkjd3/usr/lib/python2.7/dist-packages/nova/virt/libvirt/config.py
/var/lib/docker/aufs/diff/pr0XDEZwLDflwwzUPc0mNVYwf6b3wJ4wxEwxNBRlmKMD7qRurdlBck41J8hAkjd3/usr/lib/python2.7/dist-packages/nova/common/config.py

Then change into the directory that contains the file:

root@controller01:~# cd /var/lib/docker/aufs/diff/pr0XDEZwLDflwwzUPc0mNVYwf6b3wJ4wxEwxNBRlmKMD7qRurdlBck41J8hAkjd3/usr/lib/python2.7/dist-packages/nova/virt/libvirt/
root@controller01:/var/lib/docker/aufs/diff/pr0XDEZwLDflwwzUPc0mNVYwf6b3wJ4wxEwxNBRlmKMD7qRurdlBck41J8hAkjd3/usr/lib/python2.7/dist-packages/nova/virt/libvirt# ls
blockinfo.py   compat.py   config.py          config.pyc   designer.pyc  driver.pyc   firewall.pyc  guest.pyc  host.pyc         imagebackend.pyc  imagecache.pyc  __init__.pyc           instancejobtracker.pyc  migration.pyc  utils.py   vif.py   volume
blockinfo.pyc  compat.pyc  config.py.bak.ori  designer.py  driver.py     firewall.py  guest.py      host.py    imagebackend.py  imagecache.py     __init__.py     instancejobtracker.py  migration.py            storage        utils.pyc  vif.pyc

Edit the file:

root@controller1:/var/lib/docker/aufs/diff/pr0XDEZwLDflwwzUPc0mNVYwf6b3wJ4wxEwxNBRlmKMD7qRurdlBck41J8hAkjd3/usr/lib/python2.7/dist-packages/nova/virt/libvirt# vim config.py

After the modification, restart the nova-compute service; the change takes effect for VMs created from that point on. The result: when you open the noVNC console of such a VM, you are prompted for a password. Enter the password set in virt/libvirt/config.py (123456 in this example) and you are connected to the virtual machine.
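If hard-coding the password in format_dom feels too crude, a slightly more flexible variant (just a sketch, not what was deployed above) is to expose passwd as an attribute of LibvirtConfigGuestGraphics and emit it only when it is set, so the value can later be supplied from configuration rather than edited in source:

# Sketch: make the VNC password an attribute instead of a hard-coded value.
class LibvirtConfigGuestGraphics(LibvirtConfigGuestDevice):

    def __init__(self, **kwargs):
        super(LibvirtConfigGuestGraphics, self).__init__(root_name="graphics",
                                                         **kwargs)
        self.type = "vnc"
        self.autoport = True
        self.keymap = None
        self.listen = None
        self.passwd = None          # new attribute: VNC access password

    def format_dom(self):
        dev = super(LibvirtConfigGuestGraphics, self).format_dom()
        dev.set("type", self.type)
        dev.set("autoport", "yes" if self.autoport else "no")
        if self.keymap:
            dev.set("keymap", self.keymap)
        if self.listen:
            dev.set("listen", self.listen)
        if self.passwd:             # only emit passwd when one is configured
            dev.set("passwd", self.passwd)
        return dev

The code that builds the guest configuration would then assign graphics.passwd from wherever the password is kept, instead of editing format_dom every time the password changes.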

5. Read more

Minsheng Bank's exploration and practice of OpenStack security hardening

VNC security of OpenStack
Configure VNC security of OpenStack