

This document describes the installation of the Google App Engine Software Development Kit (SDK) on Microsoft Windows and running a simple "helloworld" application.

The App Engine SDK allows you to run Google App Engine applications on your local computer. It simulates the run-time environment of the Google App Engine infrastructure.

Pre-Requisites: Python 2.5.4

If you don't already have Python 2.5.4 installed on your computer, download and install Python 2.5.4 from: http://www.python.org/download/releases/2.5.4/

Download and Install

You can download the Google App Engine SDK by going to:

http://code.google.com/appengine/downloads.html

and downloading the appropriate install package.

Download the Windows installer – the simplest thing is to download it to your Desktop or another folder that you remember.

Making your First Application

Now you need to create a simple application. We could use the "+" option to have the launcher make us an application – but instead we will do it by hand to get a better sense of what is going on.

Make a folder for your Google App Engine applications. I am going to make the folder on my Desktop called "apps" – the path to this folder is:

C:\Documents and Settings\csev\Desktop\apps

And then make a sub-folder within apps called "ae-01-trivial" – the path to this folder would be:

C:\Documents and Settings\csev\Desktop\apps\ae-01-trivial

Using a text editor such as JEdit (www.jedit.org), create a file called app.yaml in the ae-01-trivial folder with the following contents:

application: ae-01-trivial
version: 1
runtime: python
api_version: 1

handlers:
- url: /.*
  script: index.py

Note: Please do not copy and paste these lines into your text editor – you might end up with strange characters – simply type them into your editor.

Then create a file in the ae-01-trivial folder called index.py with three lines in it:

print 'Content-Type: text/plain'
print ''
print 'Hello there Chuck'
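These three lines follow the CGI convention this SDK version uses: a Content-Type header, a blank line, and then the body. As a hedged illustration (this variant is not part of the original handout), the same pattern can emit dynamic content:

import time
print 'Content-Type: text/plain'
print ''
print 'The server time is', time.asctime()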

Then start the Google App Engine Launcher program that can be found under Applications. Use the File -> Add Existing Application command and navigate into the apps directory and select the ae-01-trivial folder. Once you have added the application, select it so that you can control the application using the launcher.


If you make a mistake in a file like index.py, you can simply fix the file and press Refresh in your browser – there is no need to restart the server.

Shutting Down the Server

To shut down the server, use the Launcher, select your application and press the Stop button.

This material is Copyright All Rights Reserved – Charles Severance. Comments and questions to [email protected] (www.dr-chuck.com).

How to use CloudSim in Eclipse

CloudSim is written in Java. The knowledge you need to use CloudSim is basic Java programming and some basics about cloud computing. Knowledge of programming IDEs such as Eclipse or NetBeans is also helpful. CloudSim is a library and, hence, does not have to be installed. Normally, you can unpack the downloaded package in any directory, add it to the Java classpath and it is ready to be used. Please verify whether Java is available on your system.

To use CloudSim in Eclipse:

1. Download the CloudSim installable files from https://code.google.com/p/cloudsim/downloads/list and unzip them.

2. Open Eclipse.

3. Create a new Java Project: File -> New.

4. Import the unpacked CloudSim project into the new Java Project.

5. The first step is to initialise the CloudSim package by initialising the CloudSim library, as follows:

CloudSim.init(num_user, calendar, trace_flag)
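As a sketch of what those three arguments usually are (assuming java.util.Calendar and org.cloudbus.cloudsim.core.CloudSim are imported; the values are illustrative):

int num_user = 1;                           // number of cloud users in the simulation
Calendar calendar = Calendar.getInstance(); // simulation start time
boolean trace_flag = false;                 // true enables event tracing
CloudSim.init(num_user, calendar, trace_flag);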

6. Data centres are the resource providers in CloudSim; hence, creation of data centres is the second step. To create a Datacenter, you need the DatacenterCharacteristics object that stores the properties of a data centre such as architecture, OS, list of machines, allocation policy that covers the time or space shared, the time zone and its price:

Datacenter datacenter9883 = new Datacenter(name, characteristics, new VmAllocationPolicySimple(hostList), storageList, 0);

7. The third step is to create a broker:

DatacenterBroker broker = createBroker();

8. The fourth step is to create one virtual machine with: a unique ID of the VM, userId (the ID of the VM's owner), mips, numberOfPes (amount of CPUs), amount of RAM, amount of bandwidth, amount of storage, the virtual machine monitor, and the cloudletScheduler policy for cloudlets:

Vm vm = new Vm(vmid, brokerId, mips, pesNumber, ram, bw, size, vmm, new CloudletSchedulerTimeShared());

9. Submit the VM list to the broker:

broker.submitVmList(vmList)

10. Create a cloudlet with length, file size, output size, and utilization model:

Cloudlet cloudlet = new Cloudlet(id, length, pesNumber, fileSize, outputSize, utilizationModel, utilizationModel, utilizationModel);

11. Submit the cloudlet list to the broker:

broker.submitCloudletList(cloudletList)

12. Start the simulation:

CloudSim.startSimulation()
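Putting steps 5-12 together, here is a minimal sketch modelled on the bundled CloudSimExample1 (assuming the CloudSim 3.0 jar is on the classpath; the host capacities and cloudlet length are illustrative values, not taken from this manual):

import java.util.ArrayList;
import java.util.Calendar;
import java.util.Collections;
import java.util.LinkedList;
import java.util.List;
import org.cloudbus.cloudsim.*;
import org.cloudbus.cloudsim.core.CloudSim;
import org.cloudbus.cloudsim.provisioners.BwProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.PeProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.RamProvisionerSimple;

public class MinimalCloudSimExample {
    public static void main(String[] args) throws Exception {
        // Step 5: initialise the library
        CloudSim.init(1, Calendar.getInstance(), false);

        // Step 6: one host inside one datacenter
        List<Pe> peList = new ArrayList<Pe>();
        peList.add(new Pe(0, new PeProvisionerSimple(1000)));   // one 1000-MIPS core
        List<Host> hostList = new ArrayList<Host>();
        hostList.add(new Host(0, new RamProvisionerSimple(2048),
                new BwProvisionerSimple(10000), 1000000, peList,
                new VmSchedulerTimeShared(peList)));
        DatacenterCharacteristics characteristics = new DatacenterCharacteristics(
                "x86", "Linux", "Xen", hostList, 10.0, 3.0, 0.05, 0.001, 0.0);
        new Datacenter("Datacenter_0", characteristics,
                new VmAllocationPolicySimple(hostList), new LinkedList<Storage>(), 0);

        // Step 7: broker
        DatacenterBroker broker = new DatacenterBroker("Broker");
        int brokerId = broker.getId();

        // Steps 8-9: one VM, submitted to the broker
        Vm vm = new Vm(0, brokerId, 1000, 1, 512, 1000, 10000, "Xen",
                new CloudletSchedulerTimeShared());
        broker.submitVmList(Collections.singletonList(vm));

        // Steps 10-11: one cloudlet, submitted to the broker
        UtilizationModel full = new UtilizationModelFull();
        Cloudlet cloudlet = new Cloudlet(0, 400000, 1, 300, 300, full, full, full);
        cloudlet.setUserId(brokerId);
        broker.submitCloudletList(Collections.singletonList(cloudlet));

        // Step 12: run the simulation and report
        CloudSim.startSimulation();
        CloudSim.stopSimulation();
        Log.printLine("Cloudlets received: " + broker.getCloudletReceivedList().size());
    }
}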

Sample Output from the Existing Example:

Starting CloudSimExample1...
Initialising...
Starting CloudSim version 3.0
Datacenter_0 is starting...
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>null
Broker is starting...
Entities started.
0.0: Broker: Cloud Resource List received with 1 resource(s)
0.0: Broker: Trying to Create VM #0 in Datacenter_0
0.1: Broker: VM #0 has been created in Datacenter #2, Host #0
0.1: Broker: Sending cloudlet 0 to VM #0
400.1: Broker: Cloudlet 0 received
400.1: Broker: All Cloudlets executed. Finishing...
400.1: Broker: Destroying VM #0
Broker is shutting down...
Simulation: No more future events
CloudInformationService: Notify all CloudSim entities for shutting down.
Datacenter_0 is shutting down...
Broker is shutting down...
Simulation completed.
Simulation completed.

========== OUTPUT ==========
Cloudlet ID   STATUS    Data center ID   VM ID   Time   Start Time   Finish Time
0             SUCCESS   2                0       400    0.1          400.1

*****Datacenter: Datacenter_0*****
User id   Debt
3         35.6

CloudSimExample1 finished!

1. You can copy a few (or more) lines with the copy & paste mechanism. For this you need to share the clipboard between the host OS and the guest OS, by installing Guest Additions on both virtual machines (probably setting the clipboard to bidirectional and restarting them).

You copy from the first guest OS into the clipboard that is shared with the host OS. Then you paste from the host OS into the second guest OS.

2. You can enable drag and drop too with the same method (click on the machine, Settings, General, Advanced, Drag and Drop: set to Bidirectional).

3. You can have common Shared Folders on both virtual machines and use one of the shared directories as a buffer to copy through.

Installing Guest Additions also gives you the possibility to set up Shared Folders. As soon as you put a file into a shared folder from the host OS or from a guest OS, it is immediately visible to the other. (Keep in mind that some problems can arise with the date/time of the files when there are different clock settings on the different virtual machines.)

If you use the same folder shared on more machines you can exchange files directly by copying them into this folder.

4. You can use the usual methods to copy files between 2 different computers with a client-server application (e.g. scp with sshd active for Linux, WinSCP... you can get some info about SSH servers e.g. here).

You need an active server (sshd) on the receiving machine and a client on the sending machine. Of course you need to have the authorization set up (via password or, better, via an automatic authentication method).

Note: many Linux/Ubuntu distributions install sshd by default: you can see if it is running with pgrep sshd from a shell. You can install it with sudo apt-get install openssh-server.
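For example, to copy a file to a machine running sshd (a hedged sketch: the user name, IP address and paths are placeholders, not values from this manual):

$ scp report.txt hduser@192.168.1.10:/home/hduser/
$ scp hduser@192.168.1.10:/home/hduser/report.txt .    (copy in the other direction)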

5. You can mount part of the file system of a virtual machine via NFS or SSHFS on the other, or you can share files and directories with Samba.

You may find interesting the article Sharing files between guest and host without VirtualBox shared folders, with detailed step by step instructions.
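A hedged sketch of the SSHFS route (sshfs must be installed first, e.g. with sudo apt-get install sshfs; the host and paths are placeholders):

$ mkdir ~/remote-home
$ sshfs hduser@192.168.1.10:/home/hduser ~/remote-home
$ ls ~/remote-home
$ fusermount -u ~/remote-home    (unmount when done)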

You should remember that you are dealing with a little network of machines with different operating systems, and in particular:

Each virtual machine has its own operating system running on it and acts as a physical machine.

Each virtual machine is an instance of a program owned by a user in the hosting operating system and is subject to the restrictions of that user in the hosting OS.

E.g. let us say that Hastur and Meow are users of the hosting machine, but they did not allow each other to see their directories (no read/write/execute authorization). When each of them runs a virtual machine, for the hosting OS those virtual machines are two normal programs owned by Hastur and Meow and cannot see the private directory of the other user.

This is a restriction due to the hosting OS. It's easy to overcome it: it's enough to give authorization to read/write/execute to a directory, or to choose a different directory in which both users can read/write/execute.
Windows likes the mouse and Linux the fingers. :-)

I mean, I suggest you enable Drag & drop to be cosy with the Windows machines, and the Shared folders to be cosy with Linux.

When you need to be fast with Linux you will feel the need of ssh-keygen, to generate SSH keys once and copy files to/from a remote machine without typing a password anymore. In this way bash auto-completion works remotely too!
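A minimal sketch of that key setup (the remote host is a placeholder):

$ ssh-keygen -t rsa                     (accept the defaults, empty passphrase)
$ ssh-copy-id hduser@192.168.1.10       (installs your public key on the remote machine)
$ scp file.txt hduser@192.168.1.10:~/   (now copies without asking for a password)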



STEP:1

CS117@user:~$ cd ~

# Update the source list
CS117@user:~$ sudo apt-get update

# The OpenJDK project is the default version of Java
# that is provided from a supported Ubuntu repository.
CS117@user:~$ sudo apt-get install default-jdk

CS117@user:~$ java -version
java version "1.7.0_65"
OpenJDK Runtime Environment (IcedTea 2.5.3) (7u71-2.5.3-0ubuntu0.14.04.1)
OpenJDK 64-Bit Server VM (build 24.65-b04, mixed mode)

STEP:2

Adding a dedicated Hadoop user

CS117@user:~$ sudo addgroup hadoop
Adding group `hadoop' (GID 1002) ...
Done.

CS117@user:~$ sudo adduser --ingroup hadoop hduser
Adding user `hduser' ...
Adding new user `hduser' (1001) with group `hadoop' ...
Creating home directory `/home/hduser' ...
Copying files from `/etc/skel' ...
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
Changing the user information for hduser
Enter the new value, or press ENTER for the default
Full Name []:
Room Number []:
Work Phone []:
Home Phone []:
Other []:
Is the information correct? [Y/n] Y

STEP:3

Installing SSH

ssh has two main components:

1. ssh: The command we use to connect to remote machines - the client.

2. sshd: The daemon that is running on the server and allows clients to connect to the server.

The ssh client is pre-enabled on Linux, but in order to start the sshd daemon, we need to install ssh first. Use this command to do that:

CS117@user:~$ sudo apt-get install ssh

This will install ssh on our machine. If we get something similar to the following, we can think it is set up properly:

CS117@user:~$ which ssh
/usr/bin/ssh

CS117@user:~$ which sshd
/usr/sbin/sshd

Create and Setup SSH Certificates

CS117@user:~$ su hduser
Password:

CS117@user:~$ ssh-keygen -t rsa -P ""
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hduser/.ssh/id_rsa):
Created directory '/home/hduser/.ssh'.
Your identification has been saved in /home/hduser/.ssh/id_rsa.
Your public key has been saved in /home/hduser/.ssh/id_rsa.pub.
The key fingerprint is:
50:6b:f3:fc:0f:32:bf:30:79:c2:41:71:26:cc:7d:e3 hduser@laptop
The key's randomart image is:



+--[ RSA 2048]----+
| (randomart image)|
+-----------------+

hduser@laptop:/home/k$ cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys

Check if ssh works:

hduser@laptop:/home/k$ ssh localhost
The authenticity of host 'localhost (127.0.0.1)' can't be established.
ECDSA key fingerprint is e1:8b:a0:a5:75:ef:f4:b4:5e:a9:ed:be:64:be:5c:2f.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 14.04.1 LTS (GNU/Linux 3.13.0-40-generic x86_64)
...

STEP:4

Install Hadoop

hduser@laptop:~$ wget http://mirrors.sonic.net/apache/hadoop/common/hadoop-2.6.0/hadoop-2.6.0.tar.gz
hduser@laptop:~$ tar xvzf hadoop-2.6.0.tar.gz

hduser is not a sudoer yet, so we switch to a user with sudo rights and add hduser to the sudo group:

hduser@laptop:~/hadoop-2.6.0$ su k
Password:

CS117@user:/home/hduser$ sudo adduser hduser sudo
[sudo] password for k:
Adding user `hduser' to group `sudo' ...
Adding user hduser to group sudo
Done.

CS117@user:/home/hduser$ sudo su hduser

hduser@laptop:~/hadoop-2.6.0$ sudo mv * /usr/local/hadoop
hduser@laptop:~/hadoop-2.6.0$ sudo chown -R hduser:hadoop /usr/local/hadoop

STEP:5

Setup Configuration Files

The following files will have to be modified to complete the Hadoop setup:

1. ~/.bashrc

2. /usr/local/hadoop/etc/hadoop/hadoop-env.sh

3. /usr/local/hadoop/etc/hadoop/core-site.xml

4. /usr/local/hadoop/etc/hadoop/mapred-site.xml.template

5. /usr/local/hadoop/etc/hadoop/hdfs-site.xml

1. ~/.bashrc:

Before editing .bashrc, find the path where Java has been installed:

hduser@laptop:~$ update-alternatives --config java
There is only one alternative in link group java (providing /usr/bin/java):
/usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java
Nothing to configure.

Now append the following to the end of ~/.bashrc:

hduser@laptop:~$ vi ~/.bashrc

#HADOOP VARIABLES START
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
export HADOOP_INSTALL=/usr/local/hadoop
export PATH=$PATH:$HADOOP_INSTALL/bin
export PATH=$PATH:$HADOOP_INSTALL/sbin
export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_HOME=$HADOOP_INSTALL
export HADOOP_HDFS_HOME=$HADOOP_INSTALL
export YARN_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_INSTALL/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_INSTALL/lib"
#HADOOP VARIABLES END

hduser@laptop:~$ source ~/.bashrc

hduser@ubuntu-VirtualBox:~$ javac -version
javac 1.7.0_75

hduser@ubuntu-VirtualBox:~$ which javac
/usr/bin/javac

hduser@ubuntu-VirtualBox:~$ readlink -f /usr/bin/javac
/usr/lib/jvm/java-7-openjdk-amd64/bin/javac

2. /usr/local/hadoop/etc/hadoop/hadoop-env.sh

We need to set JAVA_HOME by modifying the hadoop-env.sh file:

hduser@laptop:~$ vi /usr/local/hadoop/etc/hadoop/hadoop-env.sh

export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64

3. /usr/local/hadoop/etc/hadoop/core-site.xml:

hduser@laptop:~$ sudo mkdir -p /app/hadoop/tmp
hduser@laptop:~$ sudo chown hduser:hadoop /app/hadoop/tmp
hduser@laptop:~$ vi /usr/local/hadoop/etc/hadoop/core-site.xml

<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/app/hadoop/tmp</value>
<description>A base for other temporary directories.</description>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:54310</value>
<description>The name of the default file system. A URI whose scheme and authority determine the FileSystem implementation. The uri's scheme determines the config property (fs.SCHEME.impl) naming the FileSystem implementation class. The uri's authority is used to determine the host, port, etc. for a file system.</description>
</property>
</configuration>

4. /usr/local/hadoop/etc/hadoop/mapred-site.xml

hduser@laptop:~$ cp /usr/local/hadoop/etc/hadoop/mapred-site.xml.template /usr/local/hadoop/etc/hadoop/mapred-site.xml

<configuration>
<property>
<name>mapred.job.tracker</name>
<value>localhost:54311</value>
<description>The host and port that the MapReduce job tracker runs at. If "local", then jobs are run in-process as a single map and reduce task.</description>
</property>
</configuration>

5. /usr/local/hadoop/etc/hadoop/hdfs-site.xml

hduser@laptop:~$ sudo mkdir -p /usr/local/hadoop_store/hdfs/namenode
hduser@laptop:~$ sudo mkdir -p /usr/local/hadoop_store/hdfs/datanode
hduser@laptop:~$ sudo chown -R hduser:hadoop /usr/local/hadoop_store
hduser@laptop:~$ vi /usr/local/hadoop/etc/hadoop/hdfs-site.xml

<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
<description>Default block replication. The actual number of replications can be specified when the file is created. The default is used if replication is not specified at create time.</description>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/usr/local/hadoop_store/hdfs/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/usr/local/hadoop_store/hdfs/datanode</value>
</property>
</configuration>

STEP:6

Format the New Hadoop Filesystem

hduser@laptop:~$ hadoop namenode -format

DEPRECATED: Use of this script to execute hdfs command is deprecated. Instead use the hdfs command for it.

15/04/18 14:43:03 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = laptop/192.168.1.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.6.0
STARTUP_MSG:   classpath = /usr/local/hadoop/etc/hadoop
...
STARTUP_MSG:   java = 1.7.0_65
************************************************************/
15/04/18 14:43:03 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
15/04/18 14:43:03 INFO namenode.NameNode: createNameNode [-format]
15/04/18 14:43:07 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Formatting using clusterid: CID-e2f515ac-33da-45bc-8466-5b1100a2bf7f
15/04/18 14:43:09 INFO namenode.FSNamesystem: No KeyProvider found.
15/04/18 14:43:09 INFO namenode.FSNamesystem: fsLock is fair:true
15/04/18 14:43:10 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
15/04/18 14:43:10 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
15/04/18 14:43:10 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
15/04/18 14:43:10 INFO blockmanagement.BlockManager: The block deletion will start around 2015 Apr 18 14:43:10
15/04/18 14:43:10 INFO util.GSet: Computing capacity for map BlocksMap
15/04/18 14:43:10 INFO util.GSet: VM type       = 64-bit
15/04/18 14:43:10 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
15/04/18 14:43:10 INFO util.GSet: capacity      = 2^21 = 2097152 entries
15/04/18 14:43:10 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
15/04/18 14:43:10 INFO blockmanagement.BlockManager: defaultReplication         = 1
15/04/18 14:43:10 INFO blockmanagement.BlockManager: maxReplication             = 512
15/04/18 14:43:10 INFO blockmanagement.BlockManager: minReplication             = 1
15/04/18 14:43:10 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
15/04/18 14:43:10 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
15/04/18 14:43:10 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
15/04/18 14:43:10 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
15/04/18 14:43:10 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
15/04/18 14:43:10 INFO namenode.FSNamesystem: fsOwner             = hduser (auth:SIMPLE)
15/04/18 14:43:10 INFO namenode.FSNamesystem: supergroup          = supergroup
15/04/18 14:43:10 INFO namenode.FSNamesystem: isPermissionEnabled = true
15/04/18 14:43:10 INFO namenode.FSNamesystem: HA Enabled: false
15/04/18 14:43:10 INFO namenode.FSNamesystem: Append Enabled: true
15/04/18 14:43:11 INFO util.GSet: Computing capacity for map INodeMap
15/04/18 14:43:11 INFO util.GSet: VM type       = 64-bit
15/04/18 14:43:11 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
15/04/18 14:43:11 INFO util.GSet: capacity      = 2^20 = 1048576 entries
15/04/18 14:43:11 INFO namenode.NameNode: Caching file names occurring more than 10 times
15/04/18 14:43:11 INFO util.GSet: Computing capacity for map cachedBlocks
15/04/18 14:43:11 INFO util.GSet: VM type       = 64-bit
15/04/18 14:43:11 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
15/04/18 14:43:11 INFO util.GSet: capacity      = 2^18 = 262144 entries
15/04/18 14:43:11 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
15/04/18 14:43:11 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
15/04/18 14:43:11 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
15/04/18 14:43:11 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
15/04/18 14:43:11 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
15/04/18 14:43:11 INFO util.GSet: Computing capacity for map NameNodeRetryCache
15/04/18 14:43:11 INFO util.GSet: VM type       = 64-bit
15/04/18 14:43:11 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
15/04/18 14:43:11 INFO util.GSet: capacity      = 2^15 = 32768 entries
15/04/18 14:43:11 INFO namenode.NNConf: ACLs enabled? false
15/04/18 14:43:11 INFO namenode.NNConf: XAttrs enabled? true
15/04/18 14:43:11 INFO namenode.NNConf: Maximum size of an xattr: 16384
15/04/18 14:43:12 INFO namenode.FSImage: Allocated new BlockPoolId: BP-130729900-192.168.1.1-1429393391595
15/04/18 14:43:12 INFO common.Storage: Storage directory /usr/local/hadoop_store/hdfs/namenode has been successfully formatted.
15/04/18 14:43:12 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
15/04/18 14:43:12 INFO util.ExitUtil: Exiting with status 0
15/04/18 14:43:12 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at laptop/192.168.1.1
************************************************************/

STEP:7

Starting Hadoop

@laptop:~$ cd /usr/local/hadoop/sbin

CS117@user:/usr/local/hadoop/sbin$ ls
distribute-exclude.sh    start-all.cmd        stop-balancer.sh
hadoop-daemon.sh         start-all.sh         stop-dfs.cmd
hadoop-daemons.sh        start-balancer.sh    stop-dfs.sh
hdfs-config.cmd          start-dfs.cmd        stop-secure-dns.sh
hdfs-config.sh           start-dfs.sh         stop-yarn.cmd
httpfs.sh                start-secure-dns.sh  stop-yarn.sh
kms.sh                   start-yarn.cmd       yarn-daemon.sh
mr-jobhistory-daemon.sh  start-yarn.sh        yarn-daemons.sh
refresh-namenodes.sh     slaves.sh            stop-all.cmd
stop-all.sh

CS117@user:/usr/local/hadoop/sbin$ sudo su hduser

hduser@laptop:/usr/local/hadoop/sbin$ start-all.sh
hduser@laptop:~$ start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
15/04/18 16:43:13 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hduser-namenode-laptop.out
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hduser-datanode-laptop.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hduser-secondarynamenode-laptop.out
15/04/18 16:43:58 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hduser-resourcemanager-laptop.out
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hduser-nodemanager-laptop.out

STEP:8

We can check whether the daemons are running with jps:

hduser@laptop:/usr/local/hadoop/sbin$ jps
9026 NodeManager
7348 NameNode
9766 Jps
8887 ResourceManager
7507 DataNode

Stopping Hadoop

$ pwd
/usr/local/hadoop/sbin

$ ls
distribute-exclude.sh  httpfs.sh                start-all.sh         start-yarn.cmd    stop-dfs.cmd        yarn-daemon.sh
hadoop-daemon.sh       mr-jobhistory-daemon.sh  start-balancer.sh    start-yarn.sh     stop-dfs.sh         yarn-daemons.sh
hadoop-daemons.sh      refresh-namenodes.sh     start-dfs.cmd        stop-all.cmd      stop-secure-dns.sh
hdfs-config.cmd        slaves.sh                start-dfs.sh         stop-all.sh       stop-yarn.cmd
hdfs-config.sh         start-all.cmd            start-secure-dns.sh  stop-balancer.sh  stop-yarn.sh

To stop all the daemons:

$ stop-all.sh

Output:

Procedure:

/home/hduser/HadoopFScat.java:

import java.io.InputStream;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

// Reads a file from HDFS and copies its contents to standard output,
// similar to "hadoop fs -cat".
public class HadoopFScat {
    public static void main(String[] args) throws Exception {
        String uri = args[0];  // e.g. /user/input/file.txt
        Configuration conf = new Configuration();
        FileSystem fileSystem = FileSystem.get(URI.create(uri), conf);
        InputStream inputStream = null;
        try {
            inputStream = fileSystem.open(new Path(uri));
            IOUtils.copyBytes(inputStream, System.out, 4096, false);
        } finally {
            IOUtils.closeStream(inputStream);
        }
    }
}

Download the jar file:

Download hadoop-core-1.2.1.jar, which is used to compile and execute the MapReduce program. Visit the following link http://mvnrepository.com/artifact/org.apache.hadoop/hadoop-core/1.2.1 to download the jar. Let us assume the download folder is /home/hduser/.

Creating a directory to collect class files:

hduser@nspublin:/usr/local/hadoop/sbin$ mkdir /home/hduser/fscat

Compiling the java file HadoopFScat.java:

hduser@nspublin:/usr/local/hadoop/sbin$ sudo /usr/lib/jvm/java-8-oracle/bin/javac -classpath /home/hduser/hadoop-core-1.2.1.jar -d /home/hduser/fscat /home/hduser/HadoopFScat.java

hduser@nspublin:/usr/local/hadoop/sbin$ ls /home/hduser/fscat
HadoopFScat.class

Creating a jar file for HadoopFScat.java:

hduser@nspublin:/usr/local/hadoop/sbin$ jar -cvf /home/hduser/fscat.jar -C /home/hduser/fscat/ .
added manifest
adding: HadoopFScat.class(in = 1224) (out= 667)(deflated 45%)
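If /user/input/file.txt does not exist in HDFS yet, it can be created first with the HDFS shell (a hedged example; the local file name is illustrative):

hduser@nspublin:/usr/local/hadoop/sbin$ hadoop fs -mkdir -p /user/input
hduser@nspublin:/usr/local/hadoop/sbin$ hadoop fs -put ~/file.txt /user/input/file.txt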

OUTPUT:

Executing the jar file for HadoopFScat.java:

hduser@nspublin:/usr/local/hadoop/sbin$ hadoop jar /home/hduser/fscat.jar HadoopFScat /user/input/file.txt
16/06/08 15:29:03 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

Alzheimer's virtual reality app simulates dementia

2 June 2016, last updated at 19:13 BST

A virtual reality app has been launched to provide a sense of what it is like to live with different forms of dementia.

A Walk Through Dementia was created by the charity Alzheimer's Research UK. It has been welcomed by other experts in the field.

"We will increasingly be asked for help by people with dementia, and having had some insight into what may be happening for them will improve how we can help," said Tula Brannelly from the University of Southampton.

A woman living with the condition and her husband told the Today programme why they supported the Android app's creation.

Visitors to St Pancras International station in London can try out the app until 1700 on Saturday 4 June.

Prerequisites

Install Docker on your machine.

For Ubuntu:

First, update your packages:

$ sudo apt update

Next, install docker with apt:

$ sudo apt install docker.io

Finally, verify that Docker is installed correctly:

$ sudo docker run hello-world

1. Create the project

In order to create your first Docker application, create a folder on your computer. It must contain the following two files:

A 'main.py' file (the Python file that will contain the code to be executed).

A 'Dockerfile' file (the Docker file that will contain the necessary instructions to create the environment).

Normally the folder architecture is:

Dockerfile
main.py

0 directories, 2 files

2. Edit the Python file

You can add the following code to the 'main.py' file:

#!/usr/bin/env python3
print("Docker is magic!")

Nothing exceptional, but once you see "Docker is magic!" displayed in your terminal you will know that your Docker is working.

3. Edit the Dockerfile

The first step to take when you create a Dockerfile is to access the Docker Hub website. This site contains many pre-designed images to save your time (for example: all images for Linux or code languages).

# A Dockerfile must always start by importing the base image.
# We use the keyword 'FROM' to do that.
# In our example, we want to import the python image.
# So we write 'python' for the image name and 'latest' for the version.
FROM python:latest

# In order to launch our python code, we must import it into our image.
# We use the keyword 'COPY' to do that.
# The first parameter 'main.py' is the name of the file on the host.
# The second parameter '/' is the path where to put the file on the image.
# Here we put the file at the image root folder.
COPY main.py /

# We need to define the command to launch when we are going to run the image.
# We use the keyword 'CMD' to do that.
# The following command will execute "python ./main.py".
CMD [ "python", "./main.py" ]

4. Create the Docker image

Once your code is ready and the Dockerfile is written, all you have to do is create your image to contain your application:

$ docker build -t python-test .

The '-t' option allows you to define the name of your image. In our case we have chosen 'python-test' but you can put what you want.
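To check that the image was actually created, you can list the local images; 'python-test' should appear in the list:

$ docker images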

5. Run the Docker image

Once the image is created, your code is ready to be launched:

$ docker run python-test

You need to put the name of your image after 'docker run'. "Docker is magic!" should now be displayed in your terminal.

PROCEDURE:

Step-1:

Verify the Docker version and also login to Docker Hub:

docker version
docker login

Step-2:

Pull the image from Docker Hub:

docker pull stacksimplify/dockerintro-springboot-helloworld-rest-api:1.0.0-RELEASE

Step-3:

Run the downloaded Docker image and access the application. Copy the docker image name from Docker Hub:

docker run --name app1 -p 80:8080 -d stacksimplify/dockerintro-springboot-helloworld-rest-api:1.0.0-RELEASE

Then access the application at http://localhost/hello

Step-4:

List running containers:

docker ps
docker ps -a
docker ps -a -q

Step-5:

Connect to the container terminal:

docker exec -it <container-name> /bin/sh
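For example, with the container started as app1 in Step-3 (type exit to leave the container shell):

docker exec -it app1 /bin/sh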

Step-6:

Container stop and start:

docker stop <container-name>
docker start <container-name>

Step-7:

Remove the container:

docker stop <container-name>
docker rm <container-name>

Step-8:

Remove the image:

docker images
docker rmi <image-id>