Program 1 & 2 DS
Aim:
To install and configure a Hadoop single node cluster using the Hortonworks Data Platform (HDP) Sandbox in Oracle VirtualBox.
Algorithm:
Step 2: Download the Hortonworks Data Platform from the link below:
https://drive.google.com/file/d/15ok9qXPcbFsj_WkFqytXoA8shAg6y?usp=share_link
Hortonworks Data Platform (HDP) Product Download (cloudera.com)
Step 6: Choose the path of the .ova file (HDP) and click OK.
Step 7: Select the Hortonworks Docker Sandbox from the list of virtual machines and press
the Start button.
Step 8: Wait until the OS finishes loading and indicates that the sandbox can be accessed from a browser.
Step 10: When prompted for credentials, enter raj_ops as both the username and the password.
Step 11: Open the SSH client for running terminal commands inside the browser using the
link: (http://127.0.0.1:4200)
Step 12: Type the username as root and the password as hadoop to enter the shell.
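As an alternative to the browser shell, the sandbox can also be reached over SSH directly. Port 2222 is the port-forwarding rule commonly configured for HDP sandbox VMs; this is an assumption, so verify the actual mapping in the VM's VirtualBox network settings.

```shell
# Connect to the sandbox over SSH (port 2222 is the assumed VirtualBox
# port-forward for the guest's port 22 -- check the VM's network settings).
ssh root@127.0.0.1 -p 2222
# When prompted, enter the root password (hadoop by default; the sandbox
# may ask you to set a new password on first login).
```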
Step 13: Type Hadoop/HDFS/Pig commands in the terminal window in the browser.
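A few basic commands of the kind Step 13 refers to, usable to verify that the cluster is working. This is a sketch: the directory and file names are illustrative, not part of the original procedure.

```shell
# Basic HDFS commands to verify the cluster from the sandbox shell.
hdfs dfs -mkdir -p /user/root/input        # create a directory in HDFS
hdfs dfs -put /etc/hosts /user/root/input  # copy a local file into HDFS
hdfs dfs -ls /user/root/input              # list the directory contents
hdfs dfs -cat /user/root/input/hosts       # print the uploaded file

# Run a one-off Pig command in local mode.
pig -x local -e "fs -ls /"
```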
Result:
Thus the installation of a Hadoop single node cluster using the HDP Sandbox in VirtualBox has been
implemented successfully.
Ex. No: 2
Date: AMBARI SERVER
Aim:
To monitor and manage Hadoop resources and process using Ambari Server.
Algorithm:
Step 1: After starting the Hadoop HDP sandbox in VirtualBox, open any web browser
(Firefox, Chrome, Edge).
Step 5: Enter raj_ops as both the username and the password, and browse through the
services and components in the UI.
Step 6: The services need not be changed unless a problem occurs. If a component has an
issue, choose Services -> (the affected service) and manage it from
Service Actions -> Turn On | Turn Maintenance | Turn Off. Turn on the service
and turn off maintenance mode.
Step 7: To view and manage files in HDFS through the web dashboard, go to Views -> Files
View.
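The listing shown by Files View is backed by HDFS, so the same information can also be fetched over the WebHDFS REST API. Port 50070 is the usual NameNode HTTP port on HDP 2.x; this is an assumption, so confirm it in the HDFS configuration in Ambari.

```shell
# List the contents of /user over WebHDFS (equivalent to browsing
# /user in the Files View). Port 50070 is the assumed NameNode HTTP port.
curl "http://localhost:50070/webhdfs/v1/user?op=LISTSTATUS"
```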
Step 8: If any problem occurs and persists while working, use Start All Components or
Restart All Components under Services.
Step 9: For using the Web Client Terminal for running all commands, navigate to
http://localhost:4200/.
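From that web terminal (or an SSH session), the state of the Ambari server and agent can also be checked from the command line, which is useful when the dashboard itself is unreachable:

```shell
# Check whether the Ambari server and the local agent are running.
ambari-server status
ambari-agent status
```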
Step 10: The Ambari Server is ready to orchestrate and monitor the Hadoop Cluster.
Result:
Thus the Hadoop resources and processes are monitored and managed using Ambari Server.