Quick Start Guide: VMware-based OpenNebula Cloud

This guide helps you get an OpenNebula cloud up and running quickly. It is useful for setting up pilot clouds, for quickly testing new features, and as a base deployment on which to build a larger infrastructure.

{{INLINETOC}}

====== 1. Infrastructure Set-up ======

The infrastructure needs to be set up in a similar fashion as the one depicted in the figure.

:!: ESX version 5.0 was used to create this guide. This guide may be useful for other versions of ESX, although the configuration (and therefore your mileage) may vary.

{{ :QuickStart-Vmware.png?350 }}

In this guide it is assumed that at least two physical servers are available: one to host the OpenNebula front-end and one to be used as an ESX virtualization node (this is the one you need to configure in the following section). The figure depicts one more ESX host, to show that the pilot cloud is ready to grow just by adding more virtualization nodes.

**Front-End**

  * **Operating System**: CentOS 6.4
  * **Required extra repository**: EPEL
  * **Required packages**: NFS, libvirt

  $ sudo rpm -Uvh http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-7.noarch.rpm
  $ sudo yum install nfs-utils nfs-utils-lib libvirt

**Virtualization node**

  * **Operating System**: ESX 5.0

:!: The ESX host needs to be configured. To achieve this, you will need access to a Windows machine with the Virtual Infrastructure Client (vSphere client) installed. The VI client can be downloaded from the ESX node itself, by pointing a browser to its IP.

:!: The ESX host needs to be properly licensed, with write access to the exported API (as the Evaluation license does). More information on valid licenses [[http://www.virtuallyghetto.com/2011/06/dreaded-faultrestrictedversionsummary.html|here]].

====== 2. OpenNebula Front-end Set-up ======

**2.1 OpenNebula installation**

The first step is to install OpenNebula on the front-end. This guide is meant to be used with OpenNebula 4.4 (4.4.0 at the time this guide was updated). Please download OpenNebula 4.4 from [[http://opennebula.org/software:software|here]], choosing the CentOS package. Once it is downloaded to the front-end, you need to untar it (the exact file name depends on the release you downloaded):

  $ tar xvzf CentOS-6-opennebula-4.4.0.tar.gz

And then install all the needed packages:

  $ sudo yum localinstall opennebula-4.4.0/*.rpm

Let's install noVNC to gain access to the VMs:

  $ sudo /usr/share/one/install_novnc.sh

Find out the uid and gid of oneadmin; we will need them in the next section:

  $ id oneadmin
  uid=499(oneadmin) gid=498(oneadmin)

In order to avoid problems, we recommend disabling SELinux on the pilot cloud front-end (sometimes it is the root of all evil). Follow [[http://www.ehowstuff.com/how-to-check-and-disable-selinux-on-centos-6-3/|these instructions]]:

  $ sudo vi /etc/sysconfig/selinux
  # This file controls the state of SELinux on the system.
  # SELINUX= can take one of these three values:
  #     enforcing - SELinux security policy is enforced.
  #     permissive - SELinux prints warnings instead of enforcing.
  #     disabled - No SELinux policy is loaded.
  SELINUX=disabled
  # SELINUXTYPE= can take one of these two values:
  #     targeted - Targeted processes are protected,
  #     mls - Multi Level Security protection.
  SELINUXTYPE=targeted

  $ sudo setenforce 0
  $ sudo getenforce
  Permissive
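Before moving on, it does not hurt to verify the installation. A minimal sanity check, assuming the package names of the 4.4 CentOS bundle (the exact list depends on the packages you installed):

  $ rpm -qa | grep -i opennebula    # should list the opennebula-* packages just installed
  $ getent passwd oneadmin          # confirms the packages created the oneadmin account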
**2.2 NFS configuration**

The front-end needs to export two datastores via NFS (the system datastore and the images datastore), so we can use the shared transfer manager drivers and achieve a pilot cloud with very short VM deployment times. See the [[documentation:documentation:sm|Storage Overview]] for more details.

Let's configure the NFS server. You will need to allow incoming connections; here we will simply stop iptables:

  $ sudo su - oneadmin

  $ sudo vi /etc/exports
  /var/lib/one/datastores/0 *(rw,sync,no_subtree_check,root_squash,anonuid=499,anongid=498)
  /var/lib/one/datastores/1 *(rw,sync,no_subtree_check,root_squash,anonuid=499,anongid=498)

  $ sudo service iptables stop
  $ sudo service nfs start

  $ sudo exportfs -a

:!: Make sure **anonuid** and **anongid** are set to the oneadmin uid and gid found in the previous section.

**2.3 Networking**

There must be network connectivity between the front-end and the ESX node. This can be tested with the ping command:

  $ ping <ESX-node-IP>

====== 3. VMware Virtualization Node Set-up ======

This is probably the step that involves the most work to get the pilot cloud up and running, but it is crucial to ensure its correct functioning. The ESX host that is going to be used as a worker node needs the following steps:

**3.1 Creation of a oneadmin user**

With the VI client connected to the ESX host, go to "Local Users & Groups" and add a new user as shown in the figure (**the UID is important: it needs to match the one on the front-end**). Make sure that you select the "Grant shell access to this user" checkbox, and write down the password you enter.

{{ cloud:Sandbox:usercreation.png }}

Afterwards, go to the "Permissions" tab and assign the "Administrator" role to oneadmin (right click -> Add Permission...).

{{ cloud:Sandbox:userrole.png }}

**3.2 Grant ssh access**

Again in the VI client, go to Configuration -> Security Profile -> Services Properties (upper right). Click on the SSH label, select the "Options" button, and then "Start". You can set it to start and stop with the host, as seen in the picture.

{{ documentation:qsguides:sshaccess-1.png }}

Then the following needs to be done:

  * Connect via ssh to the OpenNebula front-end as the oneadmin user. Copy the output of the following command to the clipboard:

  $ ssh-keygen
  //Enter an empty passphrase//

  $ cat .ssh/id_rsa.pub

  * Connect via ssh to the ESX worker node (as oneadmin). Run the following from the front-end:

  $ ssh <ESX-node-IP>
  //Enter the password you set in step 3.1//

  $ su

  # mkdir /etc/ssh/keys-oneadmin
  # chmod 755 /etc/ssh/keys-oneadmin
  # vi /etc/ssh/keys-oneadmin/authorized_keys
  //paste here the contents of oneadmin's id_rsa.pub and exit vi//
  # chown oneadmin /etc/ssh/keys-oneadmin/authorized_keys
  # chmod 600 /etc/ssh/keys-oneadmin/authorized_keys
  # chmod +s /sbin/vmkfstools /bin/vim-cmd # This is needed to create volatile disks

  * Now oneadmin should be able to ssh in without being prompted for a password:

  $ ssh <ESX-node-IP>

**3.3 Mount datastores**

We now need to mount the two datastores exported by default by the OpenNebula front-end. First, you need to make sure that the firewall will allow the NFS client to connect to the front-end. Go to Configuration -> Software -> Security Profile, and enable the NFS Client row:

{{ cloud:Sandbox:firewall.png }}

Again in the VI client, go to Configuration -> Storage -> Add Storage (upper right). We need to add the two datastores (**0** and **1**). The picture shows the details for datastore **1**; to add datastore **0**, simply change the reference from 1 to 0 in the Folder and Datastore Name textboxes. Please note that the server IP displayed may not correspond with your value, which has to be the IP your front-end uses to connect to the ESX.

{{ cloud:Sandbox:adddatastore-1.png?700 }}

The paths to be used as input:

  /var/lib/one/datastores/0
  /var/lib/one/datastores/1

More info on [[documentation:documentation:vmware_ds|datastores]] and the different possible configurations.
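As an alternative to the VI client, the same two NFS datastores can be mounted from an ssh session on the ESX host. A sketch using the stock ''esxcfg-nas'' tool; ''<front-end-IP>'' is a placeholder for the address your front-end uses on the storage network:

  # Mount the system (0) and images (1) datastores exported by the front-end
  # esxcfg-nas -a -o <front-end-IP> -s /var/lib/one/datastores/0 0
  # esxcfg-nas -a -o <front-end-IP> -s /var/lib/one/datastores/1 1

  # List the NFS datastores to verify that both are mounted
  # esxcfg-nas -l

Either way, both datastores should end up visible under Configuration -> Storage in the VI client.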
**3.4 Configure VNC**

Open an ssh connection to the ESX host as root, and:

  # cd /etc/vmware
  # chown -R root firewall/
  # chmod 7777 firewall/
  # cd firewall/
  # chmod 7777 service.xml

Add the following to /etc/vmware/firewall/service.xml:

  # vi /etc/vmware/firewall/service.xml

:!: The service id must be the last service id + 1. It will depend on your firewall configuration.

  <!-- the service id below is an example; use the last existing service id + 1 -->
  <service id="0033">
    <id>VNC</id>
    <rule id="0000">
      <direction>outbound</direction>
      <protocol>tcp</protocol>
      <porttype>dst</porttype>
      <port>
        <begin>5800</begin>
        <end>5999</end>
      </port>
    </rule>
    <rule id="0001">
      <direction>inbound</direction>
      <protocol>tcp</protocol>
      <porttype>dst</porttype>
      <port>
        <begin>5800</begin>
        <end>5999</end>
      </port>
    </rule>
    <enabled>true</enabled>
    <required>false</required>
  </service>

Refresh the firewall:

  # /sbin/esxcli network firewall refresh
  # /sbin/esxcli network firewall ruleset list

====== 4. OpenNebula Configuration ======

Let's configure OpenNebula on the front-end to allow it to use the ESX hypervisor. The following must be run under the "oneadmin" account.

**4.1 Configure oned and Sunstone**

Edit ''/etc/one/oned.conf'' with "sudo" and uncomment the following:

  $ sudo vi /etc/one/oned.conf

  #*******************************************************************************
  # DataStore Configuration
  #*******************************************************************************
  #  DATASTORE_LOCATION: *Default* Path for Datastores in the hosts. It IS the
  #  same for all the hosts in the cluster. DATASTORE_LOCATION IS ONLY FOR THE
  #  HOSTS AND *NOT* THE FRONT-END. It defaults to /var/lib/one/datastores (or
  #  $ONE_LOCATION/var/datastores in self-contained mode)
  #*******************************************************************************

  DATASTORE_LOCATION = /vmfs/volumes

  #-------------------------------------------------------------------------------
  #  VMware Information Driver Manager Configuration
  #-------------------------------------------------------------------------------
  IM_MAD = [
      name       = "im_vmware",
      executable = "one_im_sh",
      arguments  = "-c -t 15 -r 0 vmware" ]

  #-------------------------------------------------------------------------------
  #  VMware Virtualization Driver Manager Configuration
  #-------------------------------------------------------------------------------
  VM_MAD = [
      name       = "vmm_vmware",
      executable = "one_vmm_sh",
      arguments  = "-t 15 -r 0 vmware -s sh",
      default    = "vmm_exec/vmm_exec_vmware.conf",
      type       = "vmware" ]

Edit ''/etc/one/sunstone-server.conf'' with "sudo" and allow incoming connections from any IP:

  $ sudo vi /etc/one/sunstone-server.conf

  # Server Configuration
  #
  :host: 0.0.0.0
  :port: 9869

**4.2 Add the ESX credentials**

  $ sudo vi /etc/one/vmwarerc

  # Username and password of the VMware hypervisor
  :username: "oneadmin"
  :password: "password"

Use the password you set for the oneadmin user on the ESX host in step 3.1.

:!: Do not edit '':libvirt_uri:''; the HOST placeholder is needed by the drivers.

**4.3 Start OpenNebula**

Start OpenNebula and Sunstone **as oneadmin**:

  $ one start
  $ sunstone-server start

If no error message is shown, then everything went smoothly!

**4.4 Add the physical resources**

Let's add one datastore:

  $ vi datastore.template
  NAME   = VMwareImages
  TM_MAD = shared
  DS_MAD = vmware

  $ onedatastore create datastore.template
  ID: 100

  $ onedatastore chmod 100 644

And the ESX host:

  $ onehost create <ESX-node-IP> -i im_vmware -v vmm_vmware -n dummy

**4.5 Create a regular cloud user**

  $ oneuser create oneuser <password>
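Before moving on to Sunstone, you can verify from the command line that the new resources were registered. A quick check as oneadmin (IDs and states may differ slightly on your deployment):

  $ onedatastore list   # VMwareImages should show up next to the default datastores
  $ onehost list        # after a monitoring cycle the ESX host should reach the "on" state
  $ onevm list          # empty for now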
====== 5. Using the Cloud through Sunstone ======

OK, now that everything is in place, let's start using your brand new OpenNebula cloud! Use your browser to access Sunstone. The URL would be ''http://@IP-of-the-front-end@:9869''

Once you introduce the credentials for the "oneuser" user (with the password chosen in the previous section) you will see the Sunstone dashboard. You can also log in as "oneadmin"; you will notice access to more functionality (basically, the administration and physical infrastructure management tasks).

{{ documentation:qsguides:dashboard-1.png }}

It is time to launch our first VM. Let's use one of the pre-created appliances found in the [[http://marketplace.c12g.com/|marketplace]].

Log in as "oneuser", go to the Marketplace tab in Sunstone (in the left menu), and select the "ttylinux-VMware" row. Click on the "Import to local infrastructure" button in the upper right, give the new image a name (use "ttylinux - VMware") and place it in the "VMwareImages" datastore. If you go to the Virtual Resources/Images tab, you will see that the new image will eventually change its status from ''LOCKED'' to ''READY''.

Now we need to create a template that uses this image. Go to the Virtual Resources/Templates tab, click on "+New" and follow the wizard, or use the "Advanced mode" tab of the wizard to paste the following:

  NAME     = "ttylinux"
  CPU      = "1"
  MEMORY   = "512"

  DISK     = [ IMAGE = "ttylinux - VMware", IMAGE_UNAME = "oneuser" ]

  GRAPHICS = [ TYPE = "vnc", LISTEN = "0.0.0.0" ]

Select the newly created template and click on the Instantiate button. You can now proceed to the "Virtual Machines" tab. Once the VM is in the RUNNING state you can click on the VNC icon and you should see the ttylinux login (root/password).

{{ documentation:qsguides:vmware-tty-vnc.png?700 }}

Please note that the minimal ttylinux VM does not come with the VMware Tools and cannot be gracefully shut down. Use the "Cancel" action instead.

And that's it! You now have a fully functional pilot cloud. You can create your own virtual machines or import other appliances from the marketplace, like [[http://marketplace.c12g.com/appliance/4ff2ce348fb81d4406000003|CentOS 6.2]]. Enjoy!

====== 6. Next Steps ======

  * Follow the [[:documentation:documentation:evmwareg|VMware Virtualization Driver Guide]] for the complete installation and tuning reference, including how to enable the disk attach/detach functionality and vMotion live migration.
  * OpenNebula can use [[:documentation:documentation:vmwarenet|VMware native networks]] to provide network isolation through VLAN tagging.
  * This guide assumes a shared storage model, but there are several other possibilities, explained in this [[:documentation:documentation:vmware_ds|Overview Guide]].

\\
\\

:!: Did we miss something? Please [[contact@opennebula.org?subject=Feedback-on-OpenNebula-VMware-Sandbox|let us know]]!