Virtual Machine For Different Configuration
Aim:
To find the procedure to run virtual machines of different configurations, and to check
how many virtual machines can be used at a particular time.
Procedure:
Oracle VM VirtualBox is open-source virtualization software that you can install on
various x86 systems.
You can install Oracle VM VirtualBox on top of Windows, Linux, Mac, or Solaris.
Once you install VirtualBox, you can create virtual machines that can be used to run
guest operating systems such as Windows, Linux, Solaris, etc.
The following are the basic terms you should be aware of before we go further:
Host – The physical machine where you are going to install VirtualBox
Guest – The virtual machines created using VirtualBox.
Guest Additions – A set of software components that comes with VirtualBox to
improve guest performance and to provide some additional features.
1. Installing VirtualBox
This section explains how to install VirtualBox on Ubuntu.
First, go to the Workspace, type "Ubuntu Software", and click the Ubuntu Software Center
icon.
Next, once the Ubuntu Software Center opens, type "virtualbox" in the search field.
Figure 2: Ubuntu Software Centre | VirtualBox
Select VirtualBox (Run several virtual systems on a single computer) and then click
Install. You may need to enter your administrator password.
In the toolbar, click the New button. The New Virtual Machine Wizard is displayed in a
new window, as shown in the figure below.
In the Name field, enter a description that best describes your virtual machine and guest
operating system. If you specify what your guest operating system actually is, the Type and
Version fields alter to match automatically. If they do not, ensure that you select the correct
type and version before proceeding and clicking Next.
Figure 7: VirtualBox | Configuration for Ubuntu
Move the slider to select the amount of memory (RAM) you want to allocate to your
guest operating system. The more RAM you make available the better, but be careful not to
starve your host operating system of memory.
Next, you must specify a virtual hard drive for your VM.
Figure 9: Create a Virtual Hard Drive Now | VirtualBox
A dynamically allocated file will only grow in size when the guest actually stores data
on its virtual hard disk. It will therefore initially be small on the host hard drive and only later
grow to the size specified as it is filled with data.
A fixed-size file will immediately occupy the file specified, even if only a fraction of the
virtual hard disk space is actually in use. While occupying much more space, a fixed-size file
incurs less overhead and is therefore slightly faster than a dynamically allocated file.
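The difference is easy to see with an ordinary sparse file, which behaves like a freshly created dynamically allocated disk image (a minimal sketch using GNU coreutils on the host; the file name is arbitrary):

```shell
# Create a file with a large apparent size but no blocks allocated yet,
# analogous to a new dynamically allocated virtual disk:
truncate -s 1G sparse.img

ls -lh sparse.img   # apparent size: the full 1G
du -h sparse.img    # actual space used on the host: (close to) 0
```

A fixed-size image, by contrast, is created with all of its blocks allocated up front, so `ls` and `du` would report roughly the same figure.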
Choose a reasonable amount of storage space for your guest operating system. Take into
account the space needed for the operating system itself and the size of the programs you
wish to install on it. Since we have opted for Dynamically allocated disk space, the size of the
virtual hard drive will grow as more data is written to it, up to the maximum specified here.
This maximum cannot be altered later, so choose wisely.
Figure 12: VirtualBox Hard Drive Size
Click the Create button to complete the main configuration of your virtual drive.
Your new virtual machine will be shown in the VirtualBox Manager. Right-click the new
machine and select Settings…
Figure 13: Right Click New Virtual Drive and Select Settings
Select Storage and the Empty CD icon beneath the Storage Tree. Under Attributes, select
the CD/DVD drive icon and select Choose a virtual CD/DVD disk file…
Navigate to your Ubuntu installation ISO file and Open it. The ISO file will then be
shown underneath the Controller: IDE. Click OK. Effectively, this step inserts your Ubuntu
installation disk into your virtual machine. Your virtual machine will boot from this when it
is first switched on.
Click OK to apply the storage settings. The Settings window is closed. If you connected
the virtual machine's CD/DVD drive to the host's physical CD/DVD drive, insert the
installation media in the host's CD/DVD drive now. You are now ready to start your virtual
guest machine for the first time. Click Start.
After you click the Start button, a new window is displayed showing the virtual machine
booting up. Depending on the operating system and the configuration of the virtual machine,
VirtualBox might display some warnings first. It is safe to ignore these warnings. The virtual
machine should boot from the installation media.
Note:
In a similar way, we can install different operating systems (Windows, Fedora, Mac, and so
on) on VirtualBox.
The number of VMs you'll be able to support will depend on several factors – the
capacity of the server hardware, the efficiency of the hypervisor, and the requirements of the
guest operating systems. Server hardware can support up to four 12-core processors, 256GB
of RAM, and four or more quad-port Gigabit Ethernet or dual 10G Ethernet adapters, along
with enough high-speed storage for dozens of VMs per server.
Efficiency relates to the ability of the hypervisor to recover unused resources when a VM
is not in use. CPU resources and memory can be allocated only when needed. Storage space
can also be thin-provisioned, which means that even though a VM has an 80GB virtual drive,
only the 10GB or so actually in use to store files will occupy space on the storage system,
rather than the full 80GB.
Of course, guest operating systems will vary widely in their requirements for CPU power,
memory and storage. For example a Linux server might only need half a CPU core, 512MB
of RAM and an 8GB virtual disk, while running Windows 7 optimally will require at least
one CPU core, 2GB of RAM (preferably 4GB), and 20GB or more of virtual disk. Not all
Windows guests will require lots of resources – Windows Web Server 2008 R2 can operate
with minimal resources. In general though, Linux will require less in the way of resources
than other OSes.
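As a rough memory-only sizing sketch, the upper bound on concurrent VMs can be estimated with shell arithmetic (all figures below are illustrative assumptions, not measurements):

```shell
host_ram_mb=262144   # assumed host with 256GB of RAM
reserve_mb=8192      # assume 8GB held back for the hypervisor and host OS
vm_ram_mb=2048       # assume 2GB per guest, e.g. a Windows 7 VM

# Integer division gives the memory-bound VM count:
echo $(( (host_ram_mb - reserve_mb) / vm_ram_mb ))   # prints 124
```

In practice, CPU, storage throughput, and network capacity usually cap the count well below this memory-only estimate.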
TO ATTACH VIRTUAL BLOCK TO THE VIRTUAL MACHINE
Aim:
To find the procedure to attach a virtual block device to a virtual machine, and to check
whether it holds its data even after the release of the VM.
Procedure:
I will show you how to attach a virtual block device to your virtual machine. If your first
hard drive is slowly filling up, this is a very easy way to expand your storage.
I will use an Ubuntu 12.04 virtual machine (make sure it is powered off first!).
Go to Settings and click on Storage. Locate the SATA controller (as shown below)
and click on the Add hard disk icon (shown below). You will then get the message below.
If you already have a disk set up, click Choose Existing Disk, but for the purpose of this
tutorial click Create New Disk.
This will now start the Virtual Disk Creation Wizard. On the first page make sure you
choose the VHD format for your hard drive and click next.
The next page of the wizard is Virtual Disk Storage Details. You can either choose a
dynamic disk or a fixed size disk. Dynamic disks slowly grow over time to the maximum
value you set whereas fixed is just that – a fixed size. Fixed are faster to use but take longer
to create. Let’s create a fixed-size disk of 20 GB.
Next we have to decide where to store this new virtual disk. Personally, I don’t store any
virtual disks on the same hard drive as the host operating system. This means that in the event
of the host disk dying my virtual machines are kept separate. I can then quickly retrieve them
and get them back up with little time lost.
Type the name of the new virtual hard drive file into the box below, or click on the folder
icon to select a different folder to create the file in. Select the size of the virtual hard drive
in megabytes.
Then click the Create button.
Your virtual block device has now been successfully added to the virtual machine.
Check whether the virtual block device holds its data even after the release of the VM:
No. After the virtual block device is released from the VM, the virtual machine no longer
contains the device, although the disk file itself remains on the host until it is deleted.
students@CSL-L2:~$ fdisk -l
students@CSL-L2:~$ sudo fdisk -l
[sudo] password for students:
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
students@CSL-L2:~$ sudo fdisk -l
Then click the Remove icon to remove the hard disk from the VM.
After releasing the hard disk, check whether it is still present in the VM:
sudo fdisk -l
AIM:
To install a C compiler and execute a sample program in the virtual machine.
PROCEDURE:
Step 1: Log in to the guest OS in KVM.
Step 2: Write and execute your own C program using the gcc compiler. Install the C
compiler using the following command:
$ sudo apt-get install gcc
VIRTUAL MACHINE MIGRATION
Aim:
To show the Virtual Machine migration from one node to other node.
Procedure:
Step 2: Start the cloning process. Select the virtual machine you want to clone in the left
pane of the VirtualBox main window. Click the Snapshots tab (Figure A) and then click the
small sheep icon.
Figure A
An Ubuntu 12.04 virtual machine with no snapshots taken.
The wizard that opens is a simple two-step process. The first step requires you to give the
clone a name; you must give this clone a different name than the source virtual machine. By
default, VirtualBox will append "clone" at the end of the name of the source virtual machine.
Figure B
The Full Clone — A full clone is an independent copy of a virtual machine that
shares nothing with the parent virtual machine after the cloning operation.
Ongoing operation of a full clone is entirely separate from the parent virtual
machine.
The Linked Clone — A linked clone is a copy of a virtual machine that shares
virtual disks with the parent virtual machine in an ongoing manner. This
conserves disk space, and allows multiple virtual machines to use the same
software installation.
You can keep this default name, and it should work fine. The second step of the wizard asks
which type of clone you want to create (Figure B). You will want to create a full clone, since
our goal is to move this virtual machine to a new host. Figure C
After you click the Clone button, the Cloning Machine progress window is displayed, as
shown in Figure D.
Step 3: Locate and move the clone
You will be looking for a .vdi file. The location of this file will depend upon the host
platform. On my Linux host, the file will be found in ~/VirtualBox VMs. Within that
directory, you will find sub-directories of all your virtual machines. Within the virtual
machine directory in question, you will find the .vdi file of the cloned virtual machines —
that is, the file that must be moved to the new host. Copy that file to an external or shared
drive and then copy it onto the new host (the location doesn't matter).
Step 4: Create a new virtual machine
The process of creating the new virtual machine will be the same as if you were creating a
standard virtual machine until you get to the Virtual
Hard Disk creation screen (Figure C). You will select Use Existing Hard Disk, click the
folder icon, navigate to the newly copied .vdi file, select the file in question, and then click
Next. Figure E
AIM:
To find procedure to install storage controller and interact with it.
PROCEDURE:
OpenStack Block Storage
The OpenStack Block Storage service (cinder) adds persistent
storage to a virtual machine. Block Storage provides an infrastructure for managing volumes,
and interacts with OpenStack Compute to provide volumes for instances. The service also
enables management of volume snapshots, and volume types.
The Block Storage service consists of the following components:
cinder-api
Accepts API requests, and routes them to the cinder-volume for action.
cinder-volume
Interacts directly with the Block Storage service, and processes such as the cinder-scheduler.
It also interacts with these processes through a message queue. The cinder-volume
service responds to read and write requests sent to the Block Storage service to maintain
state. It can interact with a variety of storage providers through a driver architecture.
cinder-scheduler daemon
Selects the optimal storage provider node on which to create the volume. A similar
component to the nova-scheduler.
Messaging queue
Routes information between the Block Storage processes.
Install and configure controller node
This section describes how to install and configure the Block Storage service, code-named
cinder, on the controller node. This service requires at least one additional storage
node that provides volumes to instances.
To configure prerequisites
Before you install and configure the Block Storage service, you must create a database,
service credentials, and API endpoints.
1. To create the database, complete these steps:
a. Use the database access client to connect to the database server as the root user:
$ mysql -u root -p
b. Create the cinder database:
CREATE DATABASE cinder;
c. Grant proper access to the cinder database:
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
IDENTIFIED BY 'CINDER_DBPASS';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
IDENTIFIED BY 'CINDER_DBPASS';
Replace CINDER_DBPASS with a suitable password.
d. Exit the database access client.
2. Source the admin credentials to gain access to admin-only CLI commands:
$ source admin-openrc.sh
3. To create the service credentials, complete these steps:
a. Create a cinder user:
$ keystone user-create --name cinder --pass CINDER_PASS
+----------+----------------------------------+
| Property | Value                            |
+----------+----------------------------------+
| email    |                                  |
| enabled  | True                             |
| id       | 881ab2de4f7941e79504a759a83308be |
| name     | cinder                           |
| username | cinder                           |
+----------+----------------------------------+
Replace CINDER_PASS with a suitable password.
b. Add the admin role to the cinder user:
$ keystone user-role-add --user cinder --tenant service --role admin
c. Create the cinder service entities:
$ keystone service-create --name cinder --type volume \
  --description "OpenStack Block Storage"
+-------------+----------------------------------+
| Property    | Value                            |
+-------------+----------------------------------+
| description | OpenStack Block Storage          |
| enabled     | True                             |
| id          | 1e494c3e22a24baaafcaf777d4d467eb |
| name        | cinder                           |
| type        | volume                           |
+-------------+----------------------------------+
$ keystone service-create --name cinderv2 --type volumev2 \
  --description "OpenStack Block Storage"
+-------------+----------------------------------+
| Property    | Value                            |
+-------------+----------------------------------+
| description | OpenStack Block Storage          |
| enabled     | True                             |
| id          | 16e038e449c94b40868277f1d801edb5 |
| name        | cinderv2                         |
| type        | volumev2                         |
+-------------+----------------------------------+
4. Create the Block Storage service API endpoints:
$ keystone endpoint-create \
--service-id $(keystone service-list | awk '/ volume / {print $2}') \
--publicurl http://controller:8776/v1/%\(tenant_id\)s \
--internalurl http://controller:8776/v1/%\(tenant_id\)s \
--adminurl http://controller:8776/v1/%\(tenant_id\)s \
--region regionOne
+-------------+-----------------------------------------+
| Property    | Value                                   |
+-------------+-----------------------------------------+
| adminurl    | http://controller:8776/v1/%(tenant_id)s |
| id          | d1b7291a2d794e26963b322c7f2a55a4        |
| internalurl | http://controller:8776/v1/%(tenant_id)s |
| publicurl   | http://controller:8776/v1/%(tenant_id)s |
| region      | regionOne                               |
| service_id  | 1e494c3e22a24baaafcaf777d4d467eb        |
+-------------+-----------------------------------------+
$ keystone endpoint-create \
  --service-id $(keystone service-list | awk '/ volumev2 / {print $2}') \
--publicurl http://controller:8776/v2/%\(tenant_id\)s \
--internalurl http://controller:8776/v2/%\(tenant_id\)s \
--adminurl http://controller:8776/v2/%\(tenant_id\)s \
--region regionOne
+-------------+-----------------------------------------+
| Property    | Value                                   |
+-------------+-----------------------------------------+
| adminurl    | http://controller:8776/v2/%(tenant_id)s |
| id          | 097b4a6fc8ba44b4b10d4822d2d9e076        |
| internalurl | http://controller:8776/v2/%(tenant_id)s |
| publicurl   | http://controller:8776/v2/%(tenant_id)s |
| region      | regionOne                               |
| service_id  | 16e038e449c94b40868277f1d801edb5        |
+-------------+-----------------------------------------+
To install and configure Block Storage controller components
1. Install the packages:
# apt-get install cinder-api cinder-scheduler python-cinderclient
2. Edit the /etc/cinder/cinder.conf file and complete the following actions:
a. In the [database] section, configure database access:
[database]
...
connection = mysql://cinder:CINDER_DBPASS@controller/cinder
Replace CINDER_DBPASS with the password you chose for the Block Storage database.
b. In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:
[DEFAULT]
...
auth_strategy = keystone
[keystone_authtoken]
...
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = cinder
admin_password = CINDER_PASS
Replace CINDER_PASS with the password you chose for the cinder user in the Identity
service.
d. In the [DEFAULT] section, configure the my_ip option to use the management interface
IP address of the controller node:
[DEFAULT]
...
my_ip = 10.0.0.11
e. (Optional) To assist with troubleshooting, enable verbose logging in the [DEFAULT]
section:
[DEFAULT]
...
verbose = True
3. Populate the Block Storage database:
# su -s /bin/sh -c "cinder-manage db sync" cinder
To finalize installation
1. Restart the Block Storage services:
# service cinder-scheduler restart
# service cinder-api restart
2. By default, the Ubuntu packages create an SQLite database.
Because this configuration uses a SQL database server, you can remove the SQLite
database file:
# rm -f /var/lib/cinder/cinder.sqlite
To configure prerequisites
You must configure the storage node before you install and configure the volume service
on it. Similar to the controller node, the storage node contains one network interface on the
management network. The storage node also needs an empty block storage device of suitable
size for your environment.
1. Configure the management interface:
IP address: 10.0.0.41
Network mask: 255.255.255.0 (or /24)
Default gateway: 10.0.0.1
2. Set the hostname of the node to block1.
3. Copy the contents of the /etc/hosts file from the controller node to the storage node and
add the following to it:
# block1
10.0.0.41 block1
Also add this content to the /etc/hosts file on all other nodes in your environment.
4. Install and configure NTP using the instructions in the section called “Other nodes”.
5. Install the LVM packages:
# apt-get install lvm2
6. Create the LVM physical volume /dev/sdb1:
# pvcreate /dev/sdb1
Physical volume "/dev/sdb1" successfully created
7. Create the LVM volume group cinder-volumes:
# vgcreate cinder-volumes /dev/sdb1
Volume group "cinder-volumes" successfully created
The Block Storage service creates logical volumes in this volume group.
Only instances can access Block Storage volumes. However, the underlying
operating system manages the devices associated with the volumes. By default, the LVM
volume scanning tool scans the /dev directory for block storage devices that contain volumes.
If tenants use LVM on their volumes, the scanning tool detects these volumes and attempts to
cache them which can cause a variety of problems with both the underlying operating system
and tenant volumes. You must reconfigure LVM to scan only the devices that contain the
cinder-volume volume group. Edit the /etc/lvm/lvm.conf file and complete the following
actions:
a. In the devices section, add a filter that accepts the /dev/sdb device and rejects all
other devices:
devices {
...
filter = [ "a/sdb/", "r/.*/" ]
}
Each item in the filter array begins with a for accept or r for reject and includes a regular
expression for the device name. The array must end with r/.*/ to reject any remaining devices.
You can use the vgs -vvvv command to test filters.
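The first-match-wins behaviour of the filter can be sketched in plain shell, with no LVM required (the device names below are hypothetical):

```shell
# Mimic filter = [ "a/sdb/", "r/.*/" ]: the first pattern that matches wins.
classify() {
  case "$1" in
    *sdb*) echo "accept $1" ;;   # matched by a/sdb/
    *)     echo "reject $1" ;;   # falls through to r/.*/
  esac
}

classify /dev/sda   # reject /dev/sda
classify /dev/sdb   # accept /dev/sdb
classify /dev/sdc   # reject /dev/sdc
```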
On the storage node, edit the /etc/cinder/cinder.conf file. In the [DEFAULT] and
[keystone_authtoken] sections, configure Identity service access:
[DEFAULT]
...
auth_strategy = keystone
[keystone_authtoken]
...
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = cinder
admin_password = CINDER_PASS
Replace CINDER_PASS with the password you chose for the cinder user in the Identity
service.
d. In the [DEFAULT] section, configure the my_ip option:
[DEFAULT]
...
my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
Replace MANAGEMENT_INTERFACE_IP_ADDRESS with the IP address of the
management network interface on your storage node, typically 10.0.0.41 for the first node in
the example architecture.
e. In the [DEFAULT] section, configure the location of the Image Service:
[DEFAULT]
...
glance_host = controller
f. (Optional) To assist with troubleshooting, enable verbose logging in the [DEFAULT]
section:
[DEFAULT]
...
verbose = True
To finalize installation
1. Restart the Block Storage volume service, including its dependencies:
# service tgt restart
# service cinder-volume restart
Verify operation
This section describes how to verify operation of the Block Storage service by creating a
volume.
1. Source the admin credentials to gain access to admin-only CLI commands:
$ source admin-openrc.sh
2. List service components to verify successful launch of each process:
$ cinder service-list
+------------------+------------+------+---------+-------+----------------------------+-----------------+
| Binary           | Host       | Zone | Status  | State | Updated_at                 | Disabled Reason |
+------------------+------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | controller | nova | enabled | up    | 2014-10-18T01:30:54.000000 | None            |
| cinder-volume    | block1     | nova | enabled | up    | 2014-10-18T01:30:57.000000 | None            |
+------------------+------------+------+---------+-------+----------------------------+-----------------+
3. Source the demo tenant credentials to perform the following steps as a non-administrative
tenant:
$ source demo-openrc.sh
4. Create a 1 GB volume:
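The section breaks off here. In the installation guide this step follows, the volume is typically created with a command along these lines (the volume name demo-volume1 is illustrative, and the trailing 1 is the size in GB):

```shell
$ cinder create --display-name demo-volume1 1
```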