

VISVESVARAYA TECHNOLOGICAL UNIVERSITY


“Jnana Sangama”, Belagavi-590 018, Karnataka

“Big Data Lab (22SCSL26)”


Master of Technology
in
Computer Science and Engineering


Program No.1

Implement the following file management tasks in Hadoop:


i. Adding files and directories
ii. Retrieving files
iii. Deleting files
Hint: A typical Hadoop workflow creates data files (such as log files) elsewhere and copies them into HDFS using one of the command-line utilities shown below.

DESCRIPTION:
HDFS is a scalable distributed filesystem designed to scale to petabytes of data while running on
top of the underlying filesystem of the operating system. HDFS keeps track of where the data
resides in a network by associating the name of its rack (or network switch) with the dataset. This
allows Hadoop to efficiently schedule tasks to those nodes that contain data, or which are nearest
to it, optimizing bandwidth utilization. Hadoop provides a set of command line utilities that work
similarly to the Linux file commands and serve as your primary interface with HDFS. We're going to interact with HDFS from the command line and look at the most common file management tasks in Hadoop, which include:

 Adding files and directories to HDFS


 Retrieving files from HDFS to local filesystem
 Deleting files from HDFS
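As a quick end-to-end illustration (a minimal sketch; example.txt and the user name chuck are placeholders used throughout this program), the three tasks correspond to the following commands:

hadoop fs -mkdir -p /user/chuck
hadoop fs -put example.txt /user/chuck
hadoop fs -get /user/chuck/example.txt .
hadoop fs -rm /user/chuck/example.txt

The individual steps are described in detail below.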

ALGORITHM: SYNTAX AND COMMANDS TO ADD, RETRIEVE AND DELETE DATA FROM HDFS

Step-1

Adding Files and Directories to HDFS

Before you can run Hadoop programs on data stored in HDFS, you'll need to put the data into HDFS first. Let's create a directory and put a file in it. HDFS has a default working directory of /user/$USER, where $USER is your login user name. This directory isn't automatically created for you, though, so let's create it with the mkdir command. For the purpose of illustration, we use the user name chuck. You should substitute your own user name in the example commands.

hadoop fs -mkdir /user/chuck


hadoop fs -put example.txt /user/chuck

Step-2

Retrieving Files from HDFS

The Hadoop command get copies files from HDFS back to the local filesystem, while cat prints a file's contents to the screen. To retrieve example.txt, we can run the following commands:

hadoop fs -get example.txt .

hadoop fs -cat example.txt

Step-3

Deleting Files from HDFS

hadoop fs -rm example.txt

 Command for creating a directory in HDFS: "hdfs dfs -mkdir /lendicse"

 Command for adding a local directory to HDFS: "hdfs dfs -put lendi_english /"

Step-4

Copying Data from NFS to HDFS

 Command for copying a local file into HDFS: "hdfs dfs -copyFromLocal /home/lendi/Desktop/shakes/glossary /lendicse/"

 Command for viewing a file: "hdfs dfs -cat /lendi_english/glossary"

 Command for listing items in HDFS: "hdfs dfs -ls hdfs://localhost:9000/"

 Command for deleting files and directories recursively: "hdfs dfs -rm -r /kartheek"


Program-2:

Run a basic Word Count MapReduce program to understand the MapReduce paradigm

DESCRIPTION:

MapReduce is the heart of Hadoop. It is this programming paradigm that allows for massive
scalability across hundreds or thousands of servers in a Hadoop cluster. The MapReduce concept
is fairly simple to understand for those who are familiar with clustered scale-out data processing
solutions. The term MapReduce actually refers to two separate and distinct tasks that Hadoop
programs perform. The first is the map job, which takes a set of data and converts it into another
set of data, where individual elements are broken down into tuples (key/value pairs). The reduce
job takes the output from a map as input and combines those data tuples into a smaller set of tuples.
As the sequence of the name MapReduce implies, the reduce job is always performed after the
map job.

ALGORITHM

MAPREDUCE PROGRAM

WordCount is a simple program which counts the number of occurrences of each word in a given
text input data set. WordCount fits very well with the MapReduce programming model making it
a great example to understand the Hadoop Map/Reduce programming style.

Our implementation consists of three main parts:

1. Mapper

2. Reducer

3. Driver

Step-1. Write a Mapper

A Mapper overrides the "map" function of the class org.apache.hadoop.mapreduce.Mapper, which provides <key, value> pairs as the input. A Mapper implementation may output <key, value> pairs using the provided Context.


The input value of the WordCount map task will be a line of text from the input data file, and the key will be the line number (the position of the line in the file). The map task outputs a <word, 1> pair for each word in the line of text.

Pseudo-code

void Map (key, value)

for each word x in value:

output.collect(x,1);
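The pseudo-code above maps directly onto the Hadoop Java API. A minimal mapper sketch follows (the class and variable names here are illustrative, not part of the original manual):

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        // Split the line into words and emit <word, 1> for each one
        StringTokenizer tokens = new StringTokenizer(value.toString());
        while (tokens.hasMoreTokens()) {
            word.set(tokens.nextToken());
            context.write(word, ONE);
        }
    }
}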

Step-2. Write a Reducer

A Reducer collects the intermediate output from multiple map tasks and assembles a single result. Here, the WordCount program will sum up the occurrences of each word and output <word, count> pairs.

Pseudo-code

void Reduce (keyword, <list of counts>)

for each x in <list of counts>:

sum += x;

final_output.collect(keyword, sum);
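A matching reducer sketch, under the same illustrative naming assumptions:

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
        // Sum the counts emitted for this word by all map tasks
        int sum = 0;
        for (IntWritable count : values) {
            sum += count.get();
        }
        context.write(key, new IntWritable(sum));
    }
}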

Step-3. Write Driver

The Driver program configures and runs the MapReduce job. We use the main program to perform basic configurations such as the following (a minimal driver sketch is given after this list):

 Job Name : name of this Job


 Executable (Jar) Class: the main executable class. For here, WordCount.

 Mapper Class: class which overrides the "map" function. For here, Map.

 Reducer: class which override the "reduce" function. For here , Reduce.

 Output Key: type of output key. For here, Text.

 Output Value: type of output value. For here, IntWritable.

 File Input Path

 File Output Path
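Putting the configurations above together, a minimal driver sketch (again using the illustrative class names WordCount, WordCountMapper and WordCountReducer from the sketches above):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "WordCount");             // Job Name
        job.setJarByClass(WordCount.class);                        // Executable (Jar) Class
        job.setMapperClass(WordCountMapper.class);                 // Mapper Class
        job.setReducerClass(WordCountReducer.class);               // Reducer Class
        job.setOutputKeyClass(Text.class);                         // Output Key type
        job.setOutputValueClass(IntWritable.class);                // Output Value type
        FileInputFormat.addInputPath(job, new Path(args[0]));      // File Input Path
        FileOutputFormat.setOutputPath(job, new Path(args[1]));    // File Output Path
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}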

INPUT

Data set related to Shakespeare: comedies, glossary, and poems

Output


Program No.3

Write a MapReduce program that mines weather data. Hint: Weather sensors collecting data every hour at many locations across the globe gather a large volume of log data, which is a good candidate for analysis with MapReduce, since the data is semi-structured and record-oriented.

Tool used/ system requirement:

Software:

1. Operating System: compatible with Windows OS, mac OS or Linux OS


2. VMware
3. Cloudera

Hardware:

1. Memory = Minimum 8 GB RAM


2. CPU = multicore processor (minimum 2 cores)
3. Storage = minimum of 30 GB free disk space is required to install VM

Step 1

Fig1: Create a new java project


Step 2

Fig 2: Name the Java project as MaxTemp

Step 3

Fig 3: Once the project is created, go to Libraries and add external jars


Step 4

Fig 4: Adding external jar files of Hadoop

Step 5

Fig 5: Adding external jar files of the Hadoop client


Step 6

Fig 6: Once all the jar files are added, click Finish

Step 7

Fig 7: Now create 3 Java classes in the Java project (MaxTemp)


Step 8

Fig 8: Name them MaxTempMapper, MaxTempReducer, and MaxTemp

Step 9

Fig 9: Add the code to the corresponding Java class; the snapshot above is for MaxTempMapper


import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.mapreduce.Mapper;
public class MaxTempMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    @Override
    public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
        String line = value.toString();
        // The year is stored at character positions 15-18 of the NCDC record
        String year = line.substring(15, 19);
        int airtemp;
        if (line.charAt(87) == '+') {
            // Skip the leading '+' sign before parsing the temperature
            airtemp = Integer.parseInt(line.substring(88, 92));
        } else {
            airtemp = Integer.parseInt(line.substring(87, 92));
        }
        // Quality code: emit only valid readings (9999 means a missing value)
        String q = line.substring(92, 93);
        if (airtemp != 9999 && q.matches("[01459]")) {
            context.write(new Text(year), new IntWritable(airtemp));
        }
    }
}


Step 10

Fig 10: Add the code to the corresponding Java class; the snapshot above is for MaxTempReducer

import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.mapreduce.Reducer;

public class MaxTempReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
        // Track the highest temperature seen for this year
        int maxvalue = Integer.MIN_VALUE;
        for (IntWritable value : values) {
            maxvalue = Math.max(maxvalue, value.get());
        }
        context.write(key, new IntWritable(maxvalue));
    }
}


Step 11

Fig 11: Add the code to the corresponding Java class; the snapshot above is for MaxTemp

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class MaxTemp {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "MaxTemp");
        // Use the driver class itself to locate the jar
        job.setJarByClass(MaxTemp.class);
        job.setMapperClass(MaxTempMapper.class);
        job.setReducerClass(MaxTempReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        // Exit with a non-zero status if the job fails
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Step 12

Fig 12: Once all the code is in place, create a jar file: right-click the Java project (MaxTemp) and click Export


Step 13

Fig 13: After clicking Export, select Java and then JAR file

Step 14

Fig 14: Now click Browse to set the path


Step 15

Fig 15: Name it MaxTemp.jar, choose Desktop as the destination, then click OK

Step 16

Fig 16: Check the path and click Finish


Step 17

Fig 17: The jar file is created on the Desktop

Step 18

Fig 18: Now open the terminal; first we have to create the input directory in HDFS

Command: hadoop fs -mkdir /maxinput


Step 19

Fig 19: Now go to the Desktop to locate the data set (circled in the figure); the data set can be downloaded from Kaggle or other internet sources

Command: cd Desktop/

Step 20

Fig 20: Copy the input file into the HDFS input directory


Command: hadoop fs -copyFromLocal tempinput.txt /maxinput

Step 21

Fig 21: Now run the jar file

Command: hadoop jar MaxTemp.jar MaxTemp /maxinput /maxoutput

Step 22

Fig 22: The jar file runs successfully


OUTPUT

Step 23

Fig 23: Checking the output of maximum temperature

Command: hadoop fs -cat /maxoutput/part-r-00000

We can also check this in the browser at localhost:50070: browse to the maxoutput directory and the file will display the maximum temperature for each year.


Program-4:

Implement Matrix multiplication with Hadoop Map Reduce

Tool used/ system requirement:

Software:

1. Operating System: compatible with Windows OS, mac OS or Linux OS


2. VMware
3. Cloudera

Hardware:

1. Memory = Minimum 8 GB RAM


2. CPU = multicore processor (minimum 2 cores)
3. Storage = minimum of 30 GB free disk space is required to install VM

Commands used to run program:

Step 1:

1a. Create two files M1 and M2 and put the matrix values in them (separate columns with spaces and rows with a line break).

1b. Put the above files into HDFS at the location /user/cloudera/matrices/
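For example, the two steps can be done with the following commands (a sketch; the file names M1 and M2 are from step 1a and the HDFS path from step 1b):

hadoop fs -mkdir -p /user/cloudera/matrices
hadoop fs -put M1 M2 /user/cloudera/matrices/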


Step 2:

2a. We need to create two programs: Mapper.py and Reducer.py.

Mapper.py

First, define the dimensions of the matrices (m, n).

2b. Read each line (i.e., a row) from stdin and split it into separate elements. Map int over the elements, since everything read from stdin arrives as a string.

2c. The mapper will first read the first matrix and then the second. To differentiate them, we keep a count i of the line number being read; the first m_r lines belong to the first matrix.


2d. Now comes the crucial part, printing the key value. We need to think of a key which will
group elements that need to be multiplied, elements that need to be summed and elements that
belong to the same row.

{0} {1} {2} are the parts of the key and {3} is the value.

{0} {1} {2} represent the position of an element of A or B within A*B:

 {0} is the row position of the element

 {1} is the column position of the element

 {2} is the position of the element in the addition (in the picture, 1 and 6 are at position 0 in the addition and 2 and 5 are at position 1)

2e. For each element in matrix A:

The element remains in the same row, therefore {0}=i

The element is duplicated and distributed to each column, therefore the column position in A*B is the duplication order of the element, i.e. {1}=k

As you can see in the picture, the position of the element in the addition is the same as its column number, therefore {2}=j

2f. For each element in matrix B:

The element remains in the same column, therefore {1}=j

The element is duplicated and distributed to each row, therefore the row position in A*B is the duplication order of the element, i.e. {0}=k

As you can see in the picture, the position of the element in the addition is the same as its row position, therefore {2}=i-m_r
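As a small worked example (these numbers are illustrative and not from the original figure): let A = [[1, 2], [3, 4]] and B = [[5, 6], [7, 8]], so m_r = 2 and A*B = [[19, 22], [43, 50]]. The mapper then emits, among others:

for A[0][0] = 1 (i=0, j=0): keys (0,0,0) and (0,1,0), each with value 1

for A[0][1] = 2 (i=0, j=1): keys (0,0,1) and (0,1,1), each with value 2

for B[0][0] = 5 (row 0, column 0): keys (0,0,0) and (1,0,0), each with value 5

for B[1][0] = 7 (row 1, column 0): keys (0,0,1) and (1,0,1), each with value 7

Grouping on the first two key parts, cell (0,0) of A*B receives 1 and 5 at addition position 0 and 2 and 7 at addition position 1; the reducer multiplies within each position and sums: 1*5 + 2*7 = 19, which is exactly (A*B)[0][0].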

Step 3: After the mapper produces its output, Hadoop will sort it by key and provide it to Reducer.py.

3a. Reducer.py

Our reducer program will get the sorted mapper result, which will look like this:


If you look closely at the output and the image of the matrix multiplication, you will realize that:

 Every 2 numbers need to be multiplied

 Every m_c multiplied results need to be summed

 Every n_c summed results belong to the same row

 There will be m_r rows in total


Step 4: Running the MapReduce Job on Hadoop

You can run the MapReduce job and view the result with the commands below (assuming you have already put the input files in HDFS).

This will take some time as Hadoop does its mapping and reducing work. After the job completes successfully, view the output.
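For instance, using Hadoop Streaming (a sketch only; the streaming jar location and the directory names /user/cloudera/matrices and /user/cloudera/matrixoutput are assumptions for this example, not paths fixed by the manual):

Command: hadoop jar /usr/lib/hadoop-mapreduce/hadoop-streaming.jar -input /user/cloudera/matrices -output /user/cloudera/matrixoutput -mapper mapper.py -reducer reducer.py -file mapper.py -file reducer.py

Command: hadoop fs -cat /user/cloudera/matrixoutput/part-00000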


Output:


Program-5:

PIG Latin scripts to find the word count

Tool used/ system requirement:

Software:

1. Operating System: compatible with Windows OS, mac OS or Linux OS


2. VMware
3. Cloudera

Hardware:

 Memory = Minimum 8 GB RAM


 CPU = multicore processor (minimum 2 cores)
 Storage = minimum of 30 GB free disk space is required to install VM

Commands used to run program:

Step 1: 1a. Open Hue

1b. Go to the Hue file browser

1c. Create a directory 'pig'


1d. Create the file ‘hadooptext.txt’ within the directory

The file path would be:/user/cloudera/pig/hadooptext.txt

Copy the following content to ‘hadooptext.txt’ by clicking the file and edit option

I am learning Pig Using Tom white

I am learning Spark Using Eric Sammer

I am learning Java Using Eric Sammer

I am learning Hadoop Using Tom white

Step 2: Start Grunt shell.

Open terminal and type pig

Step 3: Now load the file stored in HDFS (space-separated file)

input1 = LOAD '/user/cloudera/pig/hadooptext.txt' AS (f1:chararray);

DUMP input1;


Step 4: flatten the words in each line

wordsInEachLine = FOREACH input1 GENERATE flatten(TOKENIZE(f1)) as word;

DUMP wordsInEachLine;

Step 5: Group the same words

groupedWords = group wordsInEachLine by word;

dump groupedWords;

describe groupedWords;

Step 6: Now do the wordcount.

countedWords = foreach groupedWords generate group, COUNT(wordsInEachLine);

dump countedWords;

Output:
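With the four input lines above, the counted words should come out roughly as follows (a sketch of the expected DUMP output; the ordering of tuples may differ):

(I,4)
(am,4)
(Pig,1)
(Tom,2)
(Java,1)
(Eric,2)
(Spark,1)
(Using,4)
(white,2)
(Hadoop,1)
(Sammer,2)
(learning,4)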


Program 6

Run a Pig Latin script to find the maximum temperature for each year.

Step 1:

1a. Open Hue

1b. Go to the Hue file browser

1c. Create a directory 'pig'

1d. Create the file 'temp' and fill it with the temperature records shown below

The file path used by the LOAD statement in this example is /home/cloudera/temp

The Pig Latin MAX() function is used to calculate the highest value of a column (numeric values or chararrays) in a single-column bag. While calculating the maximum value, the MAX() function ignores NULL values.

Note

 To get the global maximum value, we need to perform a Group All operation, and
calculate the maximum value using the MAX() function.
 To get the maximum value of a group, we need to group it using the Group By operator
and proceed with the maximum function.

Syntax

Given below is the syntax of the MAX() function.


grunt> MAX(expression)

 Copy the following content to the 'temp' file by clicking the file and the edit option

Pune,2007,31.5

Pune,2007,30.5

Pune,2008,34.5

Blre,2009,13.0

Blre,2009,10.5

Commands

grunt> A = LOAD '/home/cloudera/temp' USING PigStorage(',') AS (city:chararray, year:int, temp:double);

grunt> B = GROUP A BY year;

grunt> C = FOREACH B GENERATE group, MAX(A.temp);

grunt> DUMP C;

Output

(2007,31.5)
(2008,34.5)
(2009,13.0)


Program No.7

Use Hive to create, alter, and drop databases, tables, views, functions, and indexes.

Tool used/ system requirement:

Software:

 Operating System: compatible with Windows OS, mac OS or Linux OS


 VMware
 Cloudera

Hardware:

 Memory = Minimum 8 GB RAM


 CPU = multicore processor (minimum 2 cores)
 Storage = minimum of 30 GB free disk space is required to install VM

Step 1

Fig 1: Start with Cloudera QuickStart


Step 2

Query -> Editor -> Hive

In the default database, run: show databases;


Step 3

Step 4


Step 5

[cloudera@quickstart ~]$ hive

Logging initialized using configuration in file:/etc/hive/conf.dist/hive-log4j.properties

We will see a warning message:

WARNING: Hive CLI is deprecated and migration to Beeline is recommended.

hive> show databases;

OK

default

Time taken: 0.712 seconds, Fetched: 1 row(s)

hive> create database office;

OK

Time taken: 0.7273 seconds

hive> show databases;

OK

default

office

Time taken: 0.021 seconds, Fetched: 2 row(s)

hive>

Step6

[cloudera@quickstart ~]$ cd Documents

[cloudera@quickstart Documents]$ ls


[cloudera@quickstart Documents]$ cat Employee.csv

Id,Name,Dept,Yoj,Salary

1,Rose,IT,2012,26000

2,Sam,Sales,2012,22000

3,Quke,HR,2013,30000

4,Nick,SC,2013,20000

[cloudera@quickstart Documents]$ pwd

/home/cloudera/Documents

[cloudera@quickstart Documents]$ gedit Employee.csv

[cloudera@quickstart Documents]$

Step 7: We need to save this employee data in the Employee.csv file:

Id,Name,Dept,Yoj,Salary

1,Rose,IT,2012,26000

2,Sam,Sales,2012,22000

3,Quke,HR,2013,30000

4,Nick,SC,2013,20000

Step 8

hive> show databases;


OK

default

office

Time taken: 0.012 seconds, Fetched: 2 row(s)



hive> create table employee

> (Id INT, Name STRING, Dept STRING, Yoj INT, salary INT)

> row format delimited fields terminated by ','

> tblproperties ("skip.header.line.count"="1");

OK

Time taken: 0.271 seconds


hive> show tables;

OK

employee

Time taken: 0.056 seconds, Fetched: 1 row(s)

hive> describe employee;


hive> select * from employee;

OK

hive> LOAD DATA LOCAL INPATH '/home/cloudera/Documents/Employee.csv'

> INTO TABLE employee;

Loading data to table office.employee [numFiles=1, totalSize=116]

OK

hive> select * from employee;

Queries: MySQL vs. HiveQL

Function | MySQL | HiveQL
Retrieving information | SELECT from_columns FROM table WHERE conditions; | SELECT from_columns FROM table WHERE conditions;
All values | SELECT * FROM table; | SELECT * FROM table;
Some values | SELECT * FROM table WHERE rec_name = "value"; | SELECT * FROM table WHERE rec_name = "value";
Multiple criteria | SELECT * FROM table WHERE rec1 = "value1" AND rec2 = "value2"; | SELECT * FROM table WHERE rec1 = "value1" AND rec2 = "value2";
Selecting specific columns | SELECT column_name FROM table; | SELECT column_name FROM table;
Retrieving unique output records | SELECT DISTINCT column_name FROM table; | SELECT DISTINCT column_name FROM table;
Sorting | SELECT col1, col2 FROM table ORDER BY col2; | SELECT col1, col2 FROM table ORDER BY col2;
Sorting backward | SELECT col1, col2 FROM table ORDER BY col2 DESC; | SELECT col1, col2 FROM table ORDER BY col2 DESC;
Counting rows | SELECT COUNT(*) FROM table; | SELECT COUNT(*) FROM table;
Grouping with counting | SELECT owner, COUNT(*) FROM table GROUP BY owner; | SELECT owner, COUNT(*) FROM table GROUP BY owner;
Maximum value | SELECT MAX(col_name) AS label FROM table; | SELECT MAX(col_name) AS label FROM table;
Selecting from multiple tables (join same table using alias w/ AS) | SELECT pet.name, comment FROM pet, event WHERE pet.name = event.name; | SELECT pet.name, comment FROM pet JOIN event ON (pet.name = event.name);
