Advanced Database Technology Lab Record

EX No: NOSQL EXERCISES

MongoDB – CRUD Operations, Indexing, Sharding, Deployment


Date :

AIM:
To execute the queries to perform CRUD operations, Indexing, Sharding, Deployment in MongoDB.

PROCEDURE:

Step 1: Start the mongod daemon and run it in the background.


Step 2: Start the mongo client
Step 3: Create the database
Step 4: Perform basic queries to perform CRUD (Create, Read, Update, Delete) operations.
Step 5: Perform basic queries for Indexing, Sharding and Deployment.

QUERIES:

//to start the mongod daemon


C:/> mongod

//to start mongo client


C:/> mongo

//to list out database names


> show dbs

CRUD Queries:

//to create database


> use db1

//to check in which database I am working


> db

//to drop database in which I am working


> db.dropDatabase()

//To create collection


> db.createCollection('stud')

//to list out collection names


> show collections

//create collection by inserting document


> db.emp.insert({rno:1,name:'Bhavana'})

//Every row/document can differ from the others
> db.emp.insert({name:'Amit',rno:2})
> db.emp.insert({rno:3, email_id:'[email protected]'})

// To display data from collection


> db.emp.find()
{ "_id" : ObjectId("5d7d3daf315728b4998f522e"), "rno" : 1, "name" : "Bhavana" }
{ "_id" : ObjectId("5d7d3f28315728b4998f522f"), "name" : "Amit", "rno" : 2 }
{ "_id" : ObjectId("5d7d3f56315728b4998f5230"), "rno" : 3, "email_id" : "[email protected]" }

//insert data by providing _id value


> db.emp.insert({_id:1,rno:4,name:"Akash"})

> db.emp.find()
{ "_id" : ObjectId("5d7d3daf315728b4998f522e"), "rno" : 1, "name" : "Bhavana" }
{ "_id" : ObjectId("5d7d3f28315728b4998f522f"), "name" : "Amit", "rno" : 2 }
{ "_id" : ObjectId("5d7d3f56315728b4998f5230"), "rno" : 3, "email_id" : "[email protected]" }
{ "_id" : 1, "rno" : 4, "name" : "Akash" }

// trying to insert a document with a duplicate _id fails, since _id is the primary key field
> db.emp.insert({_id:1,rno:5,name:"Reena"})
E11000 duplicate key error index: db1.emp.$_id_ dup key: { : 1.0 }

// reinsert with a new, unused _id so the document is accepted
> db.emp.insert({_id:2,rno:5,name:"Reena"})

//Insert multiple documents at once


> db.emp.insert([{rno:7,name:'a'},{rno:8,name:'b'},{rno:8,name:'c'}])

> db.emp.find()
{ "_id" : ObjectId("5d7d3daf315728b4998f522e"), "rno" : 1, "name" : "Bhavana" }
{ "_id" : ObjectId("5d7d3f28315728b4998f522f"), "name" : "Amit", "rno" : 2 }
{ "_id" : ObjectId("5d7d3f56315728b4998f5230"), "rno" : 3, "email_id" : "[email protected]" }
{ "_id" : 1, "rno" : 4, "name" : "Akash" }
{ "_id" : 2, "rno" : 5, "name" : "Reena" }
{ "_id" : ObjectId("5d7d4244315728b4998f5231"), "rno" : 7, "name" : "a" }
{ "_id" : ObjectId("5d7d4244315728b4998f5232"), "rno" : 8, "name" : "b" }
{ "_id" : ObjectId("5d7d4244315728b4998f5233"), "rno" : 8, "name" : "c" }

// to insert multiple values for one key using []


> db.emp.insert({rno:10,name:'Ankit',hobbies:['singing','cricket','swimming'],age:21})

> db.emp.find()
{ "_id" : ObjectId("5d7d3daf315728b4998f522e"), "rno" : 1, "name" : "Bhavana" }
{ "_id" : ObjectId("5d7d3f28315728b4998f522f"), "name" : "Amit", "rno" : 2 }
{ "_id" : ObjectId("5d7d3f56315728b4998f5230"), "rno" : 3, "email_id" : "[email protected]" }
{ "_id" : 1, "rno" : 4, "name" : "Akash" }
{ "_id" : 2, "rno" : 5, "name" : "Reena" }
{ "_id" : ObjectId("5d7d4244315728b4998f5231"), "rno" : 7, "name" : "a" }
{ "_id" : ObjectId("5d7d4244315728b4998f5232"), "rno" : 8, "name" : "b" }
{ "_id" : ObjectId("5d7d4244315728b4998f5233"), "rno" : 8, "name" : "c" }
{ "_id" : ObjectId("5d7d433a315728b4998f5234"), "rno" : 10, "name" : "Ankit", "hobbies" :
[ "singing", "cricket", "swimming" ], "age" : 21 }

// Embedded document example


> db.emp.insert({rno:11, Name: {Fname:"Bhavana", Mname:"Amit", Lname:"Khivsara"}})
> db.emp.insert({rno:12, Name: "Janvi", Address:{Flat:501, Building:"Sai Appart", area:"Tidke
colony", city: "Nashik", state:"MH", pin:423101}, age:22})

// To insert date use ISODate function


> db.emp.insert({rno:15, name:'Ravina', dob: ISODate("2019-09-14")})

> db.emp.find()
{ "_id" : ObjectId("5d7d3daf315728b4998f522e"), "rno" : 1, "name" : "Bhavana" }
{ "_id" : ObjectId("5d7d3f28315728b4998f522f"), "name" : "Amit", "rno" : 2 }
{ "_id" : ObjectId("5d7d3f56315728b4998f5230"), "rno" : 3, "email_id" : "[email protected]" }
{ "_id" : 1, "rno" : 4, "name" : "Akash" }
{ "_id" : 2, "rno" : 5, "name" : "Reena" }
{ "_id" : ObjectId("5d7d4244315728b4998f5231"), "rno" : 7, "name" : "a" }
{ "_id" : ObjectId("5d7d4244315728b4998f5232"), "rno" : 8, "name" : "b" }
{ "_id" : ObjectId("5d7d4244315728b4998f5233"), "rno" : 8, "name" : "c" }
{ "_id" : ObjectId("5d7d433a315728b4998f5234"), "rno" : 10, "name" : "Ankit", "hobbies" :
[ "singing", "cricket", "swimming" ], "age" : 21 }
{ "_id" : ObjectId("5d7d4462315728b4998f5235"), "rno" : 11, "Name" : { "Fname" : "Bhavana",
"Mname" : "Amit", "Lname" : "Khivsara" } }
{ "_id" : ObjectId("5d7d4574315728b4998f5236"), "rno" : 12, "Name" : "Janvi", "Address" :
{ "Flat" : 501, "Building" : "Sai Appart", "area" : "Tidke colony", "city" : "Nashik", "state" : "MH",
"pin" : 423101 }, "age" : 22 }
{ "_id" : ObjectId("5d7d465d315728b4998f5237"), "rno" : 15, "name" : "Ravina", "dob" :
ISODate("2019-09-14T00:00:00Z") }
>

// Document with an array of embedded documents and the Date() function


> db.emp.insert({rno:17, name:"Ashika",date:Date(), awards:[{name:"Best C-designer",
year:2010, prize:"winner"},{name:"Web site competition",year:2012,prize:"Runner-up"},
{name:"Fashion show", year:2015,prize:"winner"}], city:"Nashik"})

// output using the pretty() command


> db.emp.find().pretty()
{
"_id" : ObjectId("5d7d3daf315728b4998f522e"),
"rno" : 1,
"name" : "Bhavana"
}
{ "_id" : ObjectId("5d7d3f28315728b4998f522f"), "name" : "Amit", "rno" : 2 }
{
"_id" : ObjectId("5d7d3f56315728b4998f5230"),
"rno" : 3,
"email_id" : "[email protected]"
}
{ "_id" : 1, "rno" : 4, "name" : "Akash" }
{ "_id" : 2, "rno" : 5, "name" : "Reena" }
{ "_id" : ObjectId("5d7d4244315728b4998f5231"), "rno" : 7, "name" : "a" }
{ "_id" : ObjectId("5d7d4244315728b4998f5232"), "rno" : 8, "name" : "b" }
{ "_id" : ObjectId("5d7d4244315728b4998f5233"), "rno" : 8, "name" : "c" }
{
"_id" : ObjectId("5d7d433a315728b4998f5234"),
"rno" : 10,
"name" : "Ankit",
"hobbies" : [
"singing",
"cricket",
"swimming"
],
"age" : 21
}
{
"_id" : ObjectId("5d7d4462315728b4998f5235"),
"rno" : 11,
"Name" : {
"Fname" : "Bhavana",
"Mname" : "Amit",
"Lname" : "Khivsara"
}
}
{
"_id" : ObjectId("5d7d4574315728b4998f5236"),
"rno" : 12,
"Name" : "Janvi",
"Address" : {
"Flat" : 501,
"Building" : "Sai Appart",
"area" : "Tidke colony",
"city" : "Nashik",
"state" : "MH",
"pin" : 423101
},
"age" : 22
}
{
"_id" : ObjectId("5d7d465d315728b4998f5237"),
"rno" : 15,
"name" : "Ravina",
"dob" : ISODate("2019-09-14T00:00:00Z")
}
{
"_id" : ObjectId("5d7d4aa7315728b4998f5238"),
"rno" : 17,
"name" : "Ashika",
"date" : "Sat Sep 14 2019 16:16:39 GMT-0400 (EDT)",
"awards" : [
{
"name" : "Best C-designer",
"year" : 2010,
"prize" : "winner"
},
{
"name" : "Wen site competition",
"year" : 2012,
"prize" : "Runner-up"
},
{
"name" : "Fashion show",
"year" : 2015,
"prize" : "winner"
}
],
"city" : "Nashik"
}

// New collection for Find operation


> db.stud.insert([{rno:1, name:'Ashiti'}, {rno:2,name:'Savita'}, {rno:3,name:'Sagar'},
{rno:4,name:'Reena'},{rno:5,name:'Jivan'}])

//Simple Find Command


> db.stud.find()
{ "_id" : ObjectId("5d83af5aa44331f62bcd8369"), "rno" : 1, "name" : "Ashiti" }
{ "_id" : ObjectId("5d83af5aa44331f62bcd836a"), "rno" : 2, "name" : "Savita" }
{ "_id" : ObjectId("5d83af5aa44331f62bcd836b"), "rno" : 3, "name" : "Sagar" }
{ "_id" : ObjectId("5d83af5aa44331f62bcd836c"), "rno" : 4, "name" : "Reena" }
{ "_id" : ObjectId("5d83af5aa44331f62bcd836d"), "rno" : 5, "name" : "Jivan" }

//Find command with Condition


> db.stud.find({rno:5})
{ "_id" : ObjectId("5d83af5aa44331f62bcd836d"), "rno" : 5, "name" : "Jivan" }

//Find command with condition with giving name field only to show
> db.stud.find({rno:5},{name:1})
{ "_id" : ObjectId("5d83af5aa44331f62bcd836d"), "name" : "Jivan" }

//Find command with condition with giving name field only to show and _id to hide
> db.stud.find({rno:5},{name:1,_id:0})
{ "name" : "Jivan" }

// Find command to show only names without condition


> db.stud.find({},{name:1,_id:0})
{ "name" : "Ashiti" }
{ "name" : "Savita" }
{ "name" : "Sagar" }
{ "name" : "Reena" }
{ "name" : "Jivan" }

// To display data whose rno is greater than 2


> db.stud.find({rno:{$gt:2}})
{ "_id" : ObjectId("5d83af5aa44331f62bcd836b"), "rno" : 3, "name" : "Sagar" }
{ "_id" : ObjectId("5d83af5aa44331f62bcd836c"), "rno" : 4, "name" : "Reena" }
{ "_id" : ObjectId("5d83af5aa44331f62bcd836d"), "rno" : 5, "name" : "Jivan" }

// To display data whose rno is less than equal to 2


> db.stud.find({rno:{$lte:2}})
{ "_id" : ObjectId("5d83af5aa44331f62bcd8369"), "rno" : 1, "name" : "Ashiti" }
{ "_id" : ObjectId("5d83af5aa44331f62bcd836a"), "rno" : 2, "name" : "Savita" }

// To display data whose rno is less than 2


> db.stud.find({rno:{$lt:2}})
{ "_id" : ObjectId("5d83af5aa44331f62bcd8369"), "rno" : 1, "name" : "Ashiti" }

// To display data whose rno is not equal to 2


> db.stud.find({rno:{$ne:2}})
{ "_id" : ObjectId("5d83af5aa44331f62bcd8369"), "rno" : 1, "name" : "Ashiti" }
{ "_id" : ObjectId("5d83af5aa44331f62bcd836b"), "rno" : 3, "name" : "Sagar" }
{ "_id" : ObjectId("5d83af5aa44331f62bcd836c"), "rno" : 4, "name" : "Reena" }
{ "_id" : ObjectId("5d83af5aa44331f62bcd836d"), "rno" : 5, "name" : "Jivan" }

// To display data whose rno is either 1 or 3 or 5 using in operator


> db.stud.find({rno:{$in:[1,3,5]}})
{ "_id" : ObjectId("5d83af5aa44331f62bcd8369"), "rno" : 1, "name" : "Ashiti" }
{ "_id" : ObjectId("5d83af5aa44331f62bcd836b"), "rno" : 3, "name" : "Sagar" }
{ "_id" : ObjectId("5d83af5aa44331f62bcd836d"), "rno" : 5, "name" : "Jivan" }

// To display data whose rno is either 1 or 3 or 5 or 7 or 9 using in operator


> db.stud.find({rno:{$in:[1,3,5,7,9]}})
{ "_id" : ObjectId("5d83af5aa44331f62bcd8369"), "rno" : 1, "name" : "Ashiti" }
{ "_id" : ObjectId("5d83af5aa44331f62bcd836b"), "rno" : 3, "name" : "Sagar" }
{ "_id" : ObjectId("5d83af5aa44331f62bcd836d"), "rno" : 5, "name" : "Jivan" }

//Sorting Command -1 is for Descending


> db.stud.find().sort({rno:-1})
{ "_id" : ObjectId("5d83af5aa44331f62bcd836d"), "rno" : 5, "name" : "Jivan" }
{ "_id" : ObjectId("5d83af5aa44331f62bcd836c"), "rno" : 4, "name" : "Reena" }
{ "_id" : ObjectId("5d83af5aa44331f62bcd836b"), "rno" : 3, "name" : "Sagar" }
{ "_id" : ObjectId("5d83af5aa44331f62bcd836a"), "rno" : 2, "name" : "Savita" }
{ "_id" : ObjectId("5d83af5aa44331f62bcd8369"), "rno" : 1, "name" : "Ashiti" }

//Sorting Command 1 is for Ascending


> db.stud.find().sort({name:1})
{ "_id" : ObjectId("5d83af5aa44331f62bcd8369"), "rno" : 1, "name" : "Ashiti" }
{ "_id" : ObjectId("5d83af5aa44331f62bcd836d"), "rno" : 5, "name" : "Jivan" }
{ "_id" : ObjectId("5d83af5aa44331f62bcd836c"), "rno" : 4, "name" : "Reena" }
{ "_id" : ObjectId("5d83af5aa44331f62bcd836b"), "rno" : 3, "name" : "Sagar" }
{ "_id" : ObjectId("5d83af5aa44331f62bcd836a"), "rno" : 2, "name" : "Savita" }

//Display rno & name whose rno is greater than 2, in descending order by rno
> db.stud.find({rno:{$gt:2}},{_id:0}).sort({rno:-1})
{ "rno" : 5, "name" : "Jivan" }
{ "rno" : 4, "name" : "Reena" }
{ "rno" : 3, "name" : "Sagar" }

//Collection with 3 and 5 rollno as duplicate values


> db.stud.find()
{ "_id" : ObjectId("5d83af5aa44331f62bcd8369"), "rno" : 1, "name" : "Ashiti" }
{ "_id" : ObjectId("5d83af5aa44331f62bcd836a"), "rno" : 2, "name" : "Savita" }
{ "_id" : ObjectId("5d83af5aa44331f62bcd836b"), "rno" : 3, "name" : "Sagar" }
{ "_id" : ObjectId("5d83af5aa44331f62bcd836c"), "rno" : 4, "name" : "Reena" }
{ "_id" : ObjectId("5d83af5aa44331f62bcd836d"), "rno" : 5, "name" : "Jivan" }
{ "_id" : ObjectId("5d83b8d9a44331f62bcd836e"), "rno" : 5, "name" : "Radhika" }
{ "_id" : ObjectId("5d83b8eba44331f62bcd836f"), "rno" : 3, "name" : "Manioj" }

//Distinct command to show only unique values for roll no


> db.stud.distinct("rno")
[ 1, 2, 3, 4, 5 ]

// limit shows only the first n records - the following command shows only the first 2 records from the collection

> db.stud.find().limit(2)
{ "_id" : ObjectId("5d83af5aa44331f62bcd8369"), "rno" : 1, "name" : "Ashiti" }
{ "_id" : ObjectId("5d83af5aa44331f62bcd836a"), "rno" : 2, "name" : "Savita" }

// skip shows all records after skipping the first n - the following command shows all records after the first 2 records of the collection
> db.stud.find().skip(2)
{ "_id" : ObjectId("5d83af5aa44331f62bcd836b"), "rno" : 3, "name" : "Sagar" }
{ "_id" : ObjectId("5d83af5aa44331f62bcd836c"), "rno" : 4, "name" : "Reena" }
{ "_id" : ObjectId("5d83af5aa44331f62bcd836d"), "rno" : 5, "name" : "Jivan" }
{ "_id" : ObjectId("5d83b8d9a44331f62bcd836e"), "rno" : 5, "name" : "Radhika" }
{ "_id" : ObjectId("5d83b8eba44331f62bcd836f"), "rno" : 3, "name" : "Manioj" }

// Shows documents where name starting with A


> db.stud.find({name:/^A/})
{ "_id" : ObjectId("5d83af5aa44331f62bcd8369"), "rno" : 1, "name" : "Ashiti" }

// Shows documents where name ending with i


> db.stud.find({name:/i$/})
{ "_id" : ObjectId("5d83af5aa44331f62bcd8369"), "rno" : 1, "name" : "Ashiti" }

// Shows documents where name having letter a anywhere


> db.stud.find({name:/a/})
{ "_id" : ObjectId("5d83af5aa44331f62bcd836a"), "rno" : 2, "name" : "Savita" }
{ "_id" : ObjectId("5d83af5aa44331f62bcd836b"), "rno" : 3, "name" : "Sagar" }
{ "_id" : ObjectId("5d83af5aa44331f62bcd836c"), "rno" : 4, "name" : "Reena" }
{ "_id" : ObjectId("5d83af5aa44331f62bcd836d"), "rno" : 5, "name" : "Jivan" }
{ "_id" : ObjectId("5d83b8d9a44331f62bcd836e"), "rno" : 5, "name" : "Radhika" }
{ "_id" : ObjectId("5d83b8eba44331f62bcd836f"), "rno" : 3, "name" : "Manioj" }
>
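The shell patterns above (/^A/, /i$/, /a/) are ordinary regular expressions; the same anchors can be demonstrated with Python's re module:

```python
import re

# /^A/ = starts with A, /i$/ = ends with i, /a/ = contains a (case-sensitive,
# matching the mongo shell behaviour shown above).
names = ["Ashiti", "Savita", "Sagar", "Reena", "Jivan", "Radhika", "Manioj"]

starts_A = [n for n in names if re.search(r"^A", n)]
ends_i   = [n for n in names if re.search(r"i$", n)]
has_a    = [n for n in names if re.search(r"a", n)]

print(starts_A)  # ['Ashiti']
print(ends_i)    # ['Ashiti']
print(has_a)
```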

//findOne to show only first record


> db.stud.findOne()
{
"_id" : ObjectId("5d83af5aa44331f62bcd8369"),
"rno" : 1,
"name" : "Ashiti"
}

// count to show number of documents in collection


> db.stud.find().count()
7

> db.stud.find({rno:{$gt:2}}).count()
5

//Insert one embedded document( for address)


> db.stud.insert({rno:8,address:{area:"College Road",city:"Nashik",state:"MH"},name:"Arya"})
//To find documents having city Nashik (city is a key inside address, so specify "address.city")
> db.stud.find({"address.city":"Nashik"})
{ "_id" : ObjectId("5d83c04aa44331f62bcd8370"), "rno" : 8, "address" : { "area" : "College
Road", "city" : "Nashik", "state" : "MH" }, "name" : "Arya" }

//Insert one document with multiple values(eg hobbies)


> db.stud.insert({rno:9,hobbies:['singing','dancing','cricket']})

//To use find command on multi values attribute(eg hobbies)


> db.stud.find({hobbies:'dancing'})
{ "_id" : ObjectId("5d83c165a44331f62bcd8371"), "rno" : 9, "hobbies" : [ "singing", "dancing",
"cricket" ] }

//$unset will remove the field rno from the document matching the given condition
> db.stud.update({rno:1},{$unset:{rno:1}})

//$set to update the value of rno


>db.stud.update({rno:2},{$set:{rno:22}})

//upsert updates the document if the condition matches; otherwise it inserts a new document with the updated values.
> db.stud.update({rno:50},{$set:{rno:55}},{upsert:true})
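The upsert behaviour can be sketched in plain Python: update a matching document if one exists, otherwise insert a new one. The `update` helper below is hypothetical, an illustration of the semantics only.

```python
# Sketch of update(filter, change, upsert) semantics: modify a matching
# document if one exists, otherwise insert a new one built from the update.
def update(docs, match_rno, new_rno, upsert=False):
    for doc in docs:
        if doc.get("rno") == match_rno:     # condition found: update in place
            doc["rno"] = new_rno
            return "updated"
    if upsert:                              # no match: insert instead
        docs.append({"rno": new_rno})
        return "upserted"
    return "no-op"

stud = [{"rno": 1}, {"rno": 2}]
print(update(stud, 2, 22))                 # updated
print(update(stud, 50, 55, upsert=True))   # upserted -> {"rno": 55} added
```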

//multi:true used to update all matching documents


> db.stud.update({rno:5},{$set:{rno:15}},{multi:true})

//It will remove record having rno as 4


> db.stud.remove({rno:4})

//It will remove only one record having rno as 4


> db.stud.remove({rno:4},1)

//It will remove all records


> db.stud.remove({})

Indexing:

//To create index on rno in ascending order(1)- //Single field Index example

>db.stud.createIndex({rno:1})

//To show the list of indexes: v is the version, key is the field on which the index is created
//ns - namespace (database name.collection name), name - index name assigned by MongoDB

>db.stud.getIndexes()
[
{
"v" : 1,
"key" : {
"_id" : 1
},
"ns" : "db1.stud",
"name" : "_id_"
},
{
"v" : 1,
"key" : {
"rno" : 1
},
"ns" : "db1.stud",
"name" : "rno_1"
}
]

//Compound Index Example (-1 is descending & 1 is ascending)


>db.stud.createIndex({rno:-1,name:1})

>db.stud.getIndexes()
[
{
"v" : 1,
"key" : {
"_id" : 1
},
"ns" : "db1.stud",
"name" : "_id_"
},
{
"v" : 1,
"key" : {
"rno" : 1
},
"ns" : "db1.stud",
"name" : "rno_1"
},
{
"v" : 1,
"key" : {
"rno" : -1,
"name" : 1
},
"ns" : "db1.stud",
"name" : "rno_-1_name_1"
}
]
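A compound index such as {rno:-1, name:1} keeps its entries sorted by rno descending, then by name ascending. The key ordering can be sketched in plain Python (illustration only, not how MongoDB stores index keys):

```python
# Compound index key order: rno descending, then name ascending.
# Negating the numeric component gives the mixed-direction sort.
entries = [{"rno": 8, "name": "b"}, {"rno": 8, "name": "c"},
           {"rno": 7, "name": "a"}, {"rno": 8, "name": "a"}]

ordered = sorted(entries, key=lambda e: (-e["rno"], e["name"]))
print(ordered)  # rno 8: a, b, c first, then rno 7: a
```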

// To drop single index


>db.stud.dropIndex({rno:1})
{ "nIndexesWas" : 3, "ok" : 1 }
// To drop all indexes at a time
>db.stud.dropIndexes()
{
"nIndexesWas" : 2,
"msg" : "non-_id indexes dropped for collection",
"ok" : 1
}

RESULT:
The queries to perform CRUD operations, indexing, sharding and deployment were executed successfully.

EX No: NOSQL EXERCISES
Cassandra: Table Operations, CRUD Operations, CQL Types.
Date :

AIM:
To execute the queries to perform Table Operations, CRUD operations, CQL Types in Cassandra.

PROCEDURE:

Step 1: Start the Cassandra server and run it in the background.


Step 2: Start the CQL client shell.
Step 3: Create the database
Step 4: Perform basic queries to perform Table operations.
Step 5: Perform basic queries to perform CRUD (Create, Read, Update, Delete) operations.
Step 6: Perform basic queries for CQL Types.

QUERIES:

1. Create Keyspace:

cqlsh> CREATE KEYSPACE stud WITH replication = {'class':'SimpleStrategy', 'replication_factor' : 3};

cqlsh> CREATE KEYSPACE test WITH REPLICATION = { 'class' : 'NetworkTopologyStrategy', 'datacenter1' : 3 } AND DURABLE_WRITES = false;

cqlsh> USE test;


cqlsh:test>

2. Table Operations

cqlsh:test> CREATE TABLE emp( emp_id int PRIMARY KEY, emp_name text, emp_city text,
emp_sal varint, emp_phone varint);

cqlsh:test> ALTER TABLE emp ADD emp_email text;

cqlsh:test> ALTER TABLE emp DROP emp_email;

cqlsh:test> DROP TABLE emp;

cqlsh:test> TRUNCATE student;

3. CRUD Operations
1. Create data
cqlsh:test> INSERT INTO emp (emp_id, emp_name, emp_city, emp_phone, emp_sal)
VALUES (1, 'ram', 'Hyderabad', 9848022338, 50000);

cqlsh:test> INSERT INTO emp (emp_id, emp_name, emp_city, emp_phone, emp_sal)
VALUES (2, 'robin', 'Hyderabad', 9848022339, 40000);

cqlsh:test> INSERT INTO emp (emp_id, emp_name, emp_city, emp_phone, emp_sal)
VALUES (3, 'rahman', 'Chennai', 9848022330, 45000);

2. To read all data from table
cqlsh:test> SELECT * FROM emp;

emp_id | emp_city | emp_name | emp_phone | emp_sal


--------+-----------+----------+------------+---------
1 | Hyderabad | ram | 9848022338 | 50000
2 | Hyderabad | robin | 9848022339 | 40000
3 | Chennai | rahman | 9848022330 | 45000
(3 rows)

3. To update data in a table


cqlsh:test> UPDATE emp SET emp_city='Delhi',emp_sal=50000 WHERE emp_id=2;
cqlsh:test> select * from emp;

emp_id | emp_city | emp_name | emp_phone | emp_sal


--------+-----------+----------+------------+---------
1 | Hyderabad | ram | 9848022338 | 50000
2 | Delhi | robin | 9848022339 | 50000
3 | Chennai | rahman | 9848022330 | 45000
(3 rows)

cqlsh:test> SELECT emp_name, emp_sal FROM emp;

 emp_name | emp_sal
----------+---------
      ram |   50000
    robin |   50000
   rahman |   45000
(3 rows)

4. To create index
cqlsh:test> CREATE INDEX sal ON emp(emp_sal);
cqlsh:test> SELECT * FROM emp WHERE emp_sal=50000;

 emp_id | emp_city  | emp_name | emp_phone  | emp_sal
--------+-----------+----------+------------+---------
      1 | Hyderabad |      ram | 9848022338 |   50000
      2 |     Delhi |    robin | 9848022339 |   50000
(2 rows)

5. To drop index
cqlsh:test> drop index sal;

6. To delete data from a table


cqlsh:test> DELETE emp_sal FROM emp WHERE emp_id=3;
cqlsh:test> select * from emp;

emp_id | emp_city | emp_name | emp_phone | emp_sal


--------+-----------+----------+------------+---------
1 | Hyderabad | ram | 9848022338 | 50000
2 | Delhi | robin | 9848022339 | 50000
3 | Chennai | rahman | 9848022330 | null
(3 rows)
To delete a complete row from a table
cqlsh:test> DELETE FROM emp WHERE emp_id=3;
cqlsh:test> select * from emp;

emp_id | emp_city | emp_name | emp_phone | emp_sal


--------+-----------+----------+------------+---------
1 | Hyderabad | ram | 9848022338 | 50000
2 | Delhi | robin | 9848022339 | 50000
(2 rows)

4. CQL Types
CQL provides a rich set of built-in data types, including collection types. Along with these data
types, users can also create their own custom data types. The following table provides a list of
built-in data types available in CQL.
Data Type   Constants           Description

ascii       strings             Represents ASCII character string
bigint      bigints             Represents 64-bit signed long
blob        blobs               Represents arbitrary bytes
boolean     booleans            Represents true or false
counter     integers            Represents counter column
decimal     integers, floats    Represents variable-precision decimal
double      integers            Represents 64-bit IEEE-754 floating point
float       integers, floats    Represents 32-bit IEEE-754 floating point
inet        strings             Represents an IP address, IPv4 or IPv6
int         integers            Represents 32-bit signed int
text        strings             Represents UTF-8 encoded string
timestamp   integers, strings   Represents a timestamp
timeuuid    uuids               Represents type 1 UUID
uuid        uuids               Represents type 1 or type 4 UUID
varchar     strings             Represents UTF-8 encoded string
varint      integers            Represents arbitrary-precision integer

Collection Types
Cassandra Query Language also provides collection data types. The following table provides a
list of collections available in CQL.
Collection Description

List A list is a collection of one or more ordered elements.

Map A map is a collection of key-value pairs.

Set A set is a collection of one or more elements.

User-defined datatypes
CQL allows users to create their own data types. Given below are the commands
used while dealing with user-defined data types.
 CREATE TYPE − Creates a user-defined data type.
 ALTER TYPE − Modifies a user-defined data type.
 DROP TYPE − Drops a user-defined data type.
 DESCRIBE TYPE − Describes a user-defined data type.
 DESCRIBE TYPES − Describes user-defined data types.

1. Create a user-defined type named address.


CREATE TYPE mykeyspace.address (
street text,
city text,
zip_code int,
phones set<text> );

2. Create a user-defined type for the name of a user.


CREATE TYPE mykeyspace.fullname (
firstname text,
lastname text);

3. Create a table for storing user data in columns of type fullname and address. Use
the frozen keyword in the definition of the user-defined type column.
CREATE TABLE mykeyspace.users (
id uuid PRIMARY KEY,
name frozen <fullname>,
direct_reports set<frozen <fullname>>,     // a collection set
addresses map<text, frozen <address>>      // a collection map
);
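Conceptually, a frozen UDT is stored as a single immutable value, which is why it can appear inside a set or as a map value. A hypothetical Python sketch of this idea using frozen dataclasses (the class and field names mirror the UDTs above; this is an analogy, not Cassandra's storage format):

```python
from dataclasses import dataclass

# Hypothetical sketch: a frozen Cassandra UDT behaves like an immutable
# record treated as one value, so it can sit inside sets and map values.
@dataclass(frozen=True)
class FullName:
    firstname: str
    lastname: str

@dataclass(frozen=True)
class Address:
    street: str
    city: str
    zip_code: int
    phones: frozenset

user = {
    "id": "62c36092-82a1-3a00-93d1-46196ee77204",
    "name": FullName("Marie-Claude", "Josset"),
    "direct_reports": {FullName("Naoko", "Murai"), FullName("Sompom", "Peh")},
    "addresses": {"home": Address("191 Rue St. Charles", "Paris", 75015,
                                  frozenset({"33 6 78 90 12 34"}))},
}
print(user["name"].lastname)  # Josset
```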

4. Insert a user's name into the fullname column.


INSERT INTO mykeyspace.users (id, name) VALUES (62c36092-82a1-3a00-93d1-46196ee77204, {firstname: 'Marie-Claude', lastname: 'Josset'});

5. Insert an address labeled home into the table.


UPDATE mykeyspace.users SET addresses = addresses + {'home': {street: '191 Rue St. Charles', city: 'Paris', zip_code: 75015, phones: {'33 6 78 90 12 34'}}} WHERE id=62c36092-82a1-3a00-93d1-46196ee77204;

6. Retrieve the full name of a user.
SELECT name FROM mykeyspace.users WHERE id=62c36092-82a1-3a00-93d1-46196ee77204;
name
-------------------------------------------------
{firstname: 'Marie-Claude', lastname: 'Josset'}

7. Using dot notation, you can retrieve a component of the user-defined type
column.
SELECT name.lastname FROM mykeyspace.users WHERE id=62c36092-82a1-3a00-93d1-46196ee77204;
name.lastname
---------------
Josset

8. To create index
CREATE INDEX on mykeyspace.users (name);
SELECT id FROM mykeyspace.users WHERE name = {firstname: 'Marie-Claude', lastname: 'Josset'};

id
--------------------------------------
62c36092-82a1-3a00-93d1-46196ee77204

9. To update a complete UDT


UPDATE mykeyspace.users SET direct_reports = { ('Naoko', 'Murai'), ('Sompom', 'Peh') } WHERE id=62c36092-82a1-3a00-93d1-46196ee77204;

INSERT INTO mykeyspace.users (id, direct_reports) VALUES (7db1a490-5878-11e2-bcfd-0800200c9a66, { ('Jeiranan', 'Thongnopneua') });

SELECT direct_reports FROM mykeyspace.users;

direct_reports
-----------------------------------------------------------------------------------
{{firstname: 'Jeiranan', lastname: 'Thongnopneua'}}
{{firstname: 'Naoko', lastname: 'Murai'}, {firstname: 'Sompom', lastname: 'Peh'}}

RESULT:
The queries to create keyspace, create table and CQL types have been executed successfully.

EX No: NOSQL EXERCISES
HIVE: Data types, Database Operations, Partitioning – HiveQL
Date :

AIM:
To create a database, perform database operations and partitioning, and execute HiveQL queries in Hive.

PROCEDURE:
Step 1: Start Hadoop and run it in the background.
Step 2: Start the derby server and let it run in background.
Step 3: Start yarn/ MapReduce
Step 4: Start network server (use 0.0.0.0 as host address)
Step 5: Start Hive. Create the database. Perform basic queries for database operations.
Step 6: Perform basic queries for partitioning and HiveQL.

THEORY and QUERY:


1. Data types
All the data types in Hive are classified into four types, given as follows:
 Column Types
 Literals
 Null Values
 Complex Types

Column Types
Column types are used as column data types of Hive. They are as follows:

Integral Types
Integer type data can be specified using integral data types, INT. When the data range exceeds the
range of INT, you need to use BIGINT and if the data range is smaller than the INT, you use
SMALLINT. TINYINT is smaller than SMALLINT.

The following table depicts various INT data types:


Type Postfix Example
TINYINT Y 10Y
SMALLINT S 10S
INT - 10
BIGINT L 10L

String Types
String data types can be specified using single quotes (' ') or double quotes (" "). Hive provides two
string data types: VARCHAR and CHAR. Hive follows C-style escape characters.

The following table depicts various CHAR data types:


Data Type Length
VARCHAR 1 to 65535
CHAR 255

Timestamp
It supports traditional UNIX timestamp with optional nanosecond precision. It supports
java.sql.Timestamp format “YYYY-MM-DD HH:MM:SS.fffffffff” and format “yyyy-mm-dd
hh:mm:ss.ffffffffff”.

Dates
DATE values are described in year/month/day format in the form YYYY-MM-DD.

Decimals
The DECIMAL type in Hive is the same as Java's BigDecimal format. It is used for representing
immutable arbitrary-precision values. The syntax and an example are as follows:

DECIMAL (precision, scale)


Decimal (10,0)

Union Types
Union is a collection of heterogeneous data types. You can create an instance using create union. The
syntax and example is as follows:

UNIONTYPE<int, double, array<string>, struct<a:int,b:string>>


{0:1}
{1:2.0}
{2:["three","four"]}
{3:{"a":5,"b":"five"}}
{2:["six","seven"]}
{3:{"a":8,"b":"eight"}}
{0:9}
{1:10.0}
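Each UNIONTYPE value shown above is a pair of a tag (the position of the active member type) and a payload of that member's type. A hypothetical Python sketch of this tagged-union idea (the `make_union` helper is illustrative, not Hive's representation):

```python
# Hypothetical sketch of Hive's UNIONTYPE<int, double, array<string>, struct>:
# each value carries a tag (member position) plus a payload of that member's type.
members = (int, float, list, dict)   # mirrors the four declared member types

def make_union(tag, value):
    assert isinstance(value, members[tag]), "payload must match tagged member"
    return {tag: value}

print(make_union(0, 1))                      # {0: 1}
print(make_union(2, ["three", "four"]))      # {2: ['three', 'four']}
print(make_union(3, {"a": 5, "b": "five"}))  # {3: {'a': 5, 'b': 'five'}}
```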

Literals
The following literals are used in Hive:

Floating Point Types


Floating point types are nothing but numbers with decimal points. Generally, this type of data is
composed of DOUBLE data type.

Decimal Type
Decimal type data is a floating point value with a higher range than the DOUBLE data type. The
range of the decimal type is approximately -10^308 to 10^308.
Null Value
Missing values are represented by the special value NULL.

Complex Types
The Hive complex data types are as follows:

Arrays
Arrays in Hive are used the same way they are used in Java.
Syntax: ARRAY<data_type>

Maps
Maps in Hive are similar to Java Maps.
Syntax: MAP<primitive_type, data_type>

Structs
Structs in Hive group named fields of different types together, each field optionally carrying a comment.
Syntax: STRUCT<col_name : data_type [COMMENT col_comment], ...>

2. Database Operations and partitions


To create Database:
hive> CREATE DATABASE financials;

hive> CREATE DATABASE IF NOT EXISTS financials;

To list the Database:

hive> SHOW DATABASES;


default
financials

hive> CREATE DATABASE human_resources;

hive> SHOW DATABASES LIKE 'h.*';


human_resources

To create database by specifying its location to store


hive> CREATE DATABASE financials
> LOCATION '/my/preferred/directory';

To create database with a comment

hive> CREATE DATABASE financials
    > COMMENT 'Holds all financial tables';

hive> DESCRIBE DATABASE financials;
financials Holds all financial tables
hdfs://master-server/user/hive/warehouse/financials.db

To create database with additional properties


hive> CREATE DATABASE financials
> WITH DBPROPERTIES ('creator' = 'Mark Moneybags', 'date' = '2021-01-02');

To describe the database design


hive> DESCRIBE DATABASE financials;
financials hdfs://master-server/user/hive/warehouse/financials.db

hive> DESCRIBE DATABASE EXTENDED financials;


financials hdfs://master-server/user/hive/warehouse/financials.db
{date=2021-01-02, creator=Mark Moneybags);

To set a database
hive> USE financials;

To delete a database
hive> DROP DATABASE IF EXISTS financials;

To drop a table inside the database before deleting the database


hive> DROP DATABASE IF EXISTS financials CASCADE;

To alter the database properties


hive> ALTER DATABASE financials SET DBPROPERTIES ('edited-by' = 'Joe Dba');

To create table
CREATE TABLE IF NOT EXISTS mydb.employees (
name STRING COMMENT 'Employee name',
salary FLOAT COMMENT 'Employee salary',
subordinates ARRAY<STRING> COMMENT 'Names of subordinates',
deductions MAP<STRING, FLOAT>
COMMENT 'Keys are deductions names, values are percentages',
address STRUCT<street:STRING, city:STRING, state:STRING, zip:INT>
COMMENT 'Home address')
COMMENT 'Description of the table'
TBLPROPERTIES ('creator'='me', 'created_at'='2021-01-02 10:00:00', ...)
LOCATION '/user/hive/warehouse/mydb.db/employees';

To copy a schema
CREATE TABLE IF NOT EXISTS mydb.employees2
LIKE mydb.employees;

To list out the tables


hive> USE mydb;

hive> SHOW TABLES;


employees
table1
table2

hive> USE default;

hive> SHOW TABLES IN mydb;


employees
table1
table2

To describe the table schema


hive> DESCRIBE EXTENDED mydb.employees;
name string Employee name
salary float Employee salary
subordinates array<string> Names of subordinates
deductions map<string,float> Keys are deductions names, values are percentages
address struct<street:string,city:string,state:string,zip:int> Home address

Detailed Table Information Table(tableName:employees, dbName:mydb, owner:me,


...
location:hdfs://master-server/user/hive/warehouse/mydb.db/employees,
parameters:{creator=me, created_at='2021-01-02 10:00:00',
last_modified_user=me, last_modified_time=1337544510,
comment:Description of the table, ...}, ...)

To describe the schema of a particular column


hive> DESCRIBE mydb.employees.salary;
salary float Employee salary

To partition the data first by country and then by state:


CREATE TABLE employees (
name STRING,
salary FLOAT,
subordinates ARRAY<STRING>,
deductions MAP<STRING, FLOAT>,
address STRUCT<street:STRING, city:STRING, state:STRING, zip:INT>
)
PARTITIONED BY (country STRING, state STRING);

To show the partitions


hive> SHOW PARTITIONS employees;
...
country=CA/state=AB
country=CA/state=BC
...
country=US/state=AL
country=US/state=AK
...
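Each partition listed above maps to a nested directory under the table's warehouse location; the path layout can be sketched as follows (the base path is illustrative, matching the location used for mydb.employees earlier):

```python
# Sketch of the HDFS directory layout produced by
# PARTITIONED BY (country STRING, state STRING); the base path is illustrative.
BASE = "/user/hive/warehouse/mydb.db/employees"

def partition_path(country, state):
    return f"{BASE}/country={country}/state={state}"

print(partition_path("US", "AL"))
# /user/hive/warehouse/mydb.db/employees/country=US/state=AL
```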

To describe the partitioned extended table


hive> DESCRIBE EXTENDED employees;
name string,
salary float,
...
address struct<...>,
country string,
state string

Detailed Table Information...


partitionKeys:[FieldSchema(name:country, type:string, comment:null),
FieldSchema(name:state, type:string, comment:null)],
...

To delete a table
DROP TABLE IF EXISTS employees;

RESULT:
The database operations, partitioning and HiveQL queries have been executed successfully in Hive.

EX No: NOSQL EXERCISES
OrientDB Graph database – OrientDB Features
Date :

AIM:
To study about the OrientDB Graph database and its features.

THEORY:
Introduction
OrientDB is an Open Source NoSQL Database Management System, which contains the features of
traditional DBMS along with the new features of both Document and Graph DBMS. It is written in
Java and is amazingly fast. It can store 220,000 records per second on commodity hardware.
OrientDB is one of the best open-source, multi-model, next-generation NoSQL products.

OrientDB is an Open Source NoSQL Database Management System. NoSQL Database provides a
mechanism for storing and retrieving NO-relation or NON-relational data that refers to data other
than tabular data such as document data or graph data. NoSQL databases are increasingly used in Big
Data and real-time web applications. NoSQL systems are also sometimes called "Not Only SQL" to
emphasize that they may support SQL-like query languages.

OrientDB also belongs to the NoSQL family. OrientDB is a second-generation Distributed Graph
Database with the flexibility of Documents in one product, released as open source under the Apache 2 license.

Features: MongoDB vs. OrientDB

Relationships
  MongoDB:  Uses RDBMS-style JOINs to create relationships between entities; these have a high
            runtime cost and do not scale as the database grows.
  OrientDB: Embeds and connects documents like a relational database, using direct, super-fast
            links taken from the graph database world.

Fetch plan
  MongoDB:  Requires costly JOIN operations.
  OrientDB: Easily returns a complete graph of interconnected documents.

Transactions
  MongoDB:  Does not support ACID transactions, but it supports atomic operations.
  OrientDB: Supports ACID transactions as well as atomic operations.

Query language
  MongoDB:  Has its own language based on JSON.
  OrientDB: Query language is built on SQL.

Indexes
  MongoDB:  Uses the B-tree algorithm for all indexes.
  OrientDB: Supports three different indexing algorithms so that the user can achieve the best
            performance.

Storage engine
  MongoDB:  Uses the memory-mapping technique.
  OrientDB: Uses the storage engines named LOCAL and PLOCAL.

OrientDB is the first Multi-Model open source NoSQL DBMS that brings together the power of
graphs and flexibility of documents into a scalable high-performance operational database.

The main feature of OrientDB is to support multi-model objects, i.e. it supports different models like
Document, Graph, Key/Value and Real Object. It contains a separate API to support all these four
models.

Document Model
The terminology Document model belongs to NoSQL database. It means the data is stored in the
Documents and the group of Documents are called as Collection. Technically, document means a set
of key/value pairs or also referred to as fields or properties.
OrientDB uses the concepts such as classes, clusters, and link for storing, grouping, and analyzing
the documents.

The following table illustrates the comparison between relational model, document model, and
OrientDB document model

Relational Model Document Model OrientDB Document Model

Table Collection Class or Cluster

Row Document Document

Column Key/value pair Document field

Relationship Not available Link

Graph Model
A graph data structure is a data model that can store data in the form of Vertices (Nodes)
interconnected by Edges (Arcs). The idea of OrientDB graph database came from property graph.
The vertex and edge are the main artifacts of the Graph model. They contain the properties, which
can make these appear similar to documents.

The following table shows a comparison between graph model, relational data model, and OrientDB
graph model.

Relational Model   Graph Model             OrientDB Graph Model

Table              Vertex and Edge class   Class that extends "V" (for Vertex) or "E" (for Edge)

Row                Vertex                  Vertex

Column             Vertex and Edge         Vertex and Edge property
                   property

Relationship       Edge                    Edge

The Key/Value Model


The Key/Value model means that data can be stored in the form of key/value pair where the values
can be of simple and complex types. It can support documents and graph elements as values.

The following table illustrates the comparison between relational model, key/value model, and
OrientDB key/value model.

Relational Model Key/Value Model OrientDB Key/Value Model

Table Bucket Class or Cluster

Row Key/Value pair Document

Column Not available Document field or Vertex/Edge property

Relationship Not available Link

The Object Model


This model has been inherited by Object Oriented programming and supports Inheritance between
types (sub-types extends the super-types), Polymorphism when you refer to a base class and Direct
binding from/to Objects used in programming languages.

The following table illustrates the comparison between relational model, Object model, and
OrientDB Object model.

Relational Model Object Model OrientDB Object Model

Table Class Class or Cluster

Row Object Document or Vertex

Column Object property Document field or Vertex/Edge property

Relationship Pointer Link

Following are some of the important terminologies in OrientDB.


Record
The smallest unit that you can load from and store in the database. Records can be stored in four
types.
 Document
 Record Bytes
 Vertex
 Edge

Record ID
When OrientDB generates a record, the database server automatically assigns a unit identifier to the
record, called RecordID (RID). The RID looks like #<cluster>:<position>. <cluster> means cluster
identification number and the <position> means absolute position of the record in the cluster.

Documents
The Document is the most flexible record type available in OrientDB. Documents are softly typed
and are defined by schema classes with defined constraint, but you can also insert the document
without any schema, i.e. it supports schema-less mode too.
Documents can be easily handled by export and import in JSON format. For example, take a look at
the following JSON sample document. It defines the document details.

{
"id" : "1201",
"name" : "Jay",
"job" : "Developer",
"creations" : [
{
"name" : "Amiga",
"company" : "Commodore Inc."
},

{
"name" : "Amiga 500",
"company" : "Commodore Inc."
}
]
}

RecordBytes
Record Type is the same as BLOB type in RDBMS. OrientDB can load and store document Record
type along with binary data.

Vertex
OrientDB database is not only a Document database but also a Graph database. The new concepts
such as Vertex and Edge are used to store the data in the form of graph. In graph databases, the most
basic unit of data is node, which in OrientDB is called a vertex. The Vertex stores information for the
database.

Edge
There is a separate record type called the Edge that connects one vertex to another. Edges are
bidirectional and can only connect two vertices. There are two types of edges in OrientDB, one is
regular and another one lightweight.

Class
The class is a type of data model and the concept drawn from the Object-oriented programming
paradigm. Based on the traditional document database model, data is stored in the form of collection,
while in the relational database model data is stored in tables. OrientDB follows the Document API
along with the OOP paradigm. As a concept, the class in OrientDB has the closest relationship with
the table in relational databases, but (unlike tables) classes can be schema-less, schema-full or mixed.
Classes can inherit from other classes, creating trees of classes. Each class has its own cluster or
clusters, (created by default, if none are defined).

Cluster
Cluster is an important concept which is used to store records, documents, or vertices. In simple
words, Cluster is a place where a group of records are stored. By default, OrientDB will create one
cluster per class. All the records of a class are stored in the same cluster having the same name as the
class. You can create up to 32,767(2^15-1) clusters in a database.
The CREATE CLUSTER command is used to create a cluster with a specific name. Once the cluster is
created, you can use it to save records by specifying the name during the creation of any data
model.
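The class and cluster concepts above can be sketched in OrientDB SQL (the class and cluster names here are illustrative, not from the original exercise):

```sql
-- Create a document class and attach an explicitly named cluster to it.
CREATE CLASS Employee;
CREATE CLUSTER employee_2021;
ALTER CLASS Employee ADDCLUSTER employee_2021;

-- Insert a record directly into the specific cluster.
INSERT INTO CLUSTER:employee_2021 SET name = 'Jay', job = 'Developer';
```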

Relationships
OrientDB supports two kinds of relationships: referenced and embedded. Referenced
relationships means it stores direct link to the target objects of the relationships. Embedded
relationships means it stores the relationship within the record that embeds it. This relationship is
stronger than the reference relationship.

Database
The database is an interface to access the real storage. It understands high-level concepts such as
queries, schemas, metadata, indices, and so on. OrientDB also provides multiple database types. For
more information on these types, see Database Types.

RESULT:
The OrientDB Graph database concepts are examined along with its features successfully.

EX No: MySQL Database Creation, Table Creation, Query.

Date :

AIM:
To execute the basic queries like database creation, table creation and perform basic queries on tables
in MYSQL.

PROCEDURE:
1. Create database.
2. Create the needed Tables
A. Consider the following schema for a LibraryDatabase:
1. BOOK (Book_id, Title, Publisher_Name, Pub_Year)
2. BOOK_AUTHORS (Book_id, Author_Name)
3. PUBLISHER (Name, Address, Phone)
4. BOOK_COPIES(Book_id, Branch_id, No-of_Copies)
5. BOOK_LENDING (Book_id, Branch_id, Card_No, Date_Out, Due_Date)
6. LIBRARY_BRANCH (Branch_id, Branch_Name, Address)
3. Insert needed number of values (tuples) into the tables
4. Perform the various queries given below:
1. Retrieve details of all books in the library – id, title, name of publisher, authors, number
of copies in each branch, etc.
2. Get the particulars of borrowers who have borrowed more than 3 books, but from Jan
2017 to Jun2017
3. Delete a book in BOOK table. Update the contents of other tables to reflect this data
manipulation operation.
4. Partition the BOOK table based on year of publication. Demonstrate its working with a
simple query.
5. Create a view of all books and its number of copies that are currently available in the
Library.

QUERIES - SYNTAX:

1. Database Creation

CREATE DATABASE LIBRARYDATABASE;

USE LIBRARYDATABASE;

2. Table Creation

CREATE TABLE PUBLISHER (NAME VARCHAR (20) PRIMARY KEY, PHONE BIGINT,
ADDRESS VARCHAR (20));

CREATE TABLE BOOK (BOOK_ID INTEGER PRIMARY KEY, TITLE VARCHAR (20),
PUBLISHER_NAME VARCHAR(20), PUB_YEAR VARCHAR (20), FOREIGN KEY
(PUBLISHER_NAME) REFERENCES PUBLISHER (NAME) ON DELETE CASCADE);

CREATE TABLE BOOK_AUTHORS (BOOK_ID INTEGER, AUTHOR_NAME VARCHAR


(20),FOREIGN KEY(BOOK_ID) REFERENCES BOOK (BOOK_ID) ON DELETE CASCADE,
PRIMARY KEY (BOOK_ID, AUTHOR_NAME));

CREATE TABLE LIBRARY_BRANCH (BRANCH_ID INTEGER PRIMARY KEY,
BRANCH_NAME VARCHAR (50), ADDRESS VARCHAR (50));
CREATE TABLE BOOK_COPIES (NO_OF_COPIES INTEGER, BOOK_ID INTEGER,
BRANCH_ID INTEGER, FOREIGN KEY (BOOK_ID) REFERENCES BOOK (BOOK_ID) ON
DELETE CASCADE, FOREIGN KEY(BRANCH_ID) REFERENCES LIBRARY_BRANCH
(BRANCH_ID) ON DELETE CASCADE, PRIMARY KEY (BOOK_ID, BRANCH_ID));

CREATE TABLE CARD (CARD_NO INTEGER PRIMARY KEY);

CREATE TABLE BOOK_LENDING (DATE_OUT DATE, DUE_DATE DATE, BOOK_ID


INTEGER, BRANCH_ID INTEGER, CARD_NO INTEGER, FOREIGN KEY (BOOK_ID)
REFERENCES BOOK (BOOK_ID) ON DELETE CASCADE, FOREIGN KEY (BRANCH_ID)
REFERENCES LIBRARY_BRANCH (BRANCH_ID) ON DELETE CASCADE, FOREIGN KEY
(CARD_NO) REFERENCES CARD (CARD_NO) ON DELETE CASCADE, PRIMARY KEY
(BOOK_ID, BRANCH_ID, CARD_NO));

3. Insertion of Values to Tables


INSERT INTO PUBLISHER VALUES ('MCGRAW-HILL', 9989076587, 'BANGALORE');

INSERT INTO PUBLISHER VALUES ('PEARSON', 9889076565, 'NEWDELHI');

INSERT INTO PUBLISHER VALUES ('RANDOM HOUSE', 7455679345, 'HYDERABAD');

INSERT INTO PUBLISHER VALUES ('HACHETTE LIVRE', 8970862340, 'CHENNAI');

INSERT INTO PUBLISHER VALUES ('GRUPO PLANETA', 7756120238, 'BANGALORE');

INSERT INTO BOOK VALUES (1, 'DBMS', 'MCGRAW-HILL', 'JAN-2017');

INSERT INTO BOOK VALUES (2, 'ADBMS', 'MCGRAW-HILL', 'JUN-2016');

INSERT INTO BOOK VALUES (3, 'CN', 'PEARSON', 'SEP-2016');

INSERT INTO BOOK VALUES (4, 'CG', 'GRUPO PLANETA', 'SEP-2015');

INSERT INTO BOOK VALUES (5, 'OS', 'PEARSON', 'MAY-2016');

INSERT INTO BOOK_AUTHORS VALUES (1, 'NAVATHE');

INSERT INTO BOOK_AUTHORS VALUES (2, 'NAVATHE');

INSERT INTO BOOK_AUTHORS VALUES (3, 'TANENBAUM');

INSERT INTO BOOK_AUTHORS VALUES (4, 'EDWARD ANGEL');

INSERT INTO BOOK_AUTHORS VALUES (5, 'GALVIN');

INSERT INTO LIBRARY_BRANCH VALUES (10, 'RR NAGAR', 'BANGALORE');

INSERT INTO LIBRARY_BRANCH VALUES (11, 'RNSIT', 'BANGALORE');

INSERT INTO LIBRARY_BRANCH VALUES (12, 'RAJAJI NAGAR', 'BANGALORE');

INSERT INTO LIBRARY_BRANCH VALUES (13, 'NITTE', 'MANGALORE');

INSERT INTO LIBRARY_BRANCH VALUES (14, 'MANIPAL', 'UDUPI');

INSERT INTO BOOK_COPIES VALUES (10, 1, 10);

INSERT INTO BOOK_COPIES VALUES (5, 1,11);

INSERT INTO BOOK_COPIES VALUES (2, 2,12);

INSERT INTO BOOK_COPIES VALUES (5, 2,13);

INSERT INTO BOOK_COPIES VALUES (7, 3,14);

INSERT INTO BOOK_COPIES VALUES (1, 5,10);

INSERT INTO BOOK_COPIES VALUES (3, 4,11);

INSERT INTO CARD VALUES (100);

INSERT INTO CARD VALUES (101);

INSERT INTO CARD VALUES (102);

INSERT INTO CARD VALUES (103);

INSERT INTO CARD VALUES (104);

INSERT INTO BOOK_LENDING VALUES ('2017-01-07', '2017-06-01', 1, 10, 101);

INSERT INTO BOOK_LENDING VALUES ('2017-01-11', '2017-03-11', 3, 14, 101);

INSERT INTO BOOK_LENDING VALUES ('2017-02-21', '2017-04-21', 2, 13, 101);

INSERT INTO BOOK_LENDING VALUES ('2017-03-15', '2017-07-15', 4, 11, 101);

INSERT INTO BOOK_LENDING VALUES ('2017-04-12', '2017-05-12', 1, 11, 104);

4. Basic Queries

1. Query to Retrieve details of all books in the library – id, title, name of publisher, authors,
number of copies in each branch, etc.

SELECT B.BOOK_ID, B.TITLE, B.PUBLISHER_NAME, A.AUTHOR_NAME, C.NO_OF_COPIES, L.BRANCH_ID
FROM BOOK B, BOOK_AUTHORS A, BOOK_COPIES C, LIBRARY_BRANCH L
WHERE B.BOOK_ID = A.BOOK_ID AND B.BOOK_ID = C.BOOK_ID AND L.BRANCH_ID = C.BRANCH_ID;

2. Query to Get the particulars of borrowers who have borrowed more than 3 books, but from
Jan 2017 to Jun2017.

SELECT CARD_NO FROM BOOK_LENDING WHERE DATE_OUT BETWEEN '2017-01-01'
AND '2017-07-01' GROUP BY CARD_NO HAVING COUNT(*) > 3;

3. Query to Delete a book in BOOK table. Update the contents of other tables to reflect this data
manipulation operation.

DELETE FROM BOOK WHERE BOOK_ID=3;

4. Query to Partition the BOOK table based on year of publication. Demonstrate its working
with a simple query.

CREATE VIEW V_PUBLICATION AS SELECT PUB_YEAR FROM BOOK;
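The view above only projects the publication year; MySQL also supports native table partitioning. A sketch, under the assumptions that the year is stored as an integer column (rather than the 'JAN-2017' strings used above) and is included in the primary key, as MySQL's partitioning rules require:

```sql
CREATE TABLE BOOK_PART (
  BOOK_ID  INTEGER,
  TITLE    VARCHAR(20),
  PUB_YEAR INTEGER,
  PRIMARY KEY (BOOK_ID, PUB_YEAR)
)
PARTITION BY RANGE (PUB_YEAR) (
  PARTITION p_old VALUES LESS THAN (2016),
  PARTITION p2016 VALUES LESS THAN (2017),
  PARTITION p2017 VALUES LESS THAN MAXVALUE
);

-- Only the p2017 partition is scanned for this query;
-- EXPLAIN on the query shows which partitions are used.
SELECT * FROM BOOK_PART WHERE PUB_YEAR = 2017;
```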

5. Query to Create a view of all books and its number of copies that are currently available in
the Library.

CREATE VIEW V_BOOKS AS SELECT B.BOOK_ID, B.TITLE, C.NO_OF_COPIES FROM
BOOK B, BOOK_COPIES C, LIBRARY_BRANCH L WHERE B.BOOK_ID = C.BOOK_ID AND
C.BRANCH_ID = L.BRANCH_ID;
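Once defined, the view can be queried like an ordinary table, for example:

```sql
SELECT * FROM V_BOOKS;

SELECT TITLE, NO_OF_COPIES FROM V_BOOKS WHERE NO_OF_COPIES > 2;
```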

RESULT :
The queries to create database, create table and query table have been executed successfully.

EX No: MySQL Replication – Distributed Databases

Date :

AIM:
To implement Replication in distributed database using MYSQL.

THEORY :
MYSQL - Replication
MySQL supports replication capabilities that allow the databases on one server to be made available
on another server. Replication is used for many purposes. For example, by replicating your
databases, you have multiple copies available in case a server crashes or goes offline. Clients can use
a different server if the one that they normally use becomes unavailable. Replication also can be used
to distribute client load. Rather than having a single server to which all clients connect, you can set
up multiple servers that each handle a fraction of the client load.

MySQL replication uses a master/slave architecture:


o The server that manages the original databases is the master.
o Any server that manages a copy of the original databases is a slave.
o A given master server can have many slaves, but a slave can have only a single master. (If
done with care, it is possible to set up two-way or circular replication, but this study guide
does not describe how.)

A replication slave is set up initially by transferring an exact copy of the to-be-replicated databases
from the master server to the slave server. Thereafter, each replicated database is kept synchronized
to the original database. When the master server makes modifications to its databases, it sends those
changes to each slave server, which makes the changes to its copy of the replicated databases.

PROCEDURE:
Setting Up Replication
To set up replication, each slave requires the following:
o A backup copy of the master's databases. This is the replication "baseline" that sets the slave
to a known initial state of the master.
o The filename and position within the master's binary log that corresponds to the time of the
backup. The values are called the "replication coordinates." They are needed so that the slave
can tell the master that it wants all updates made from that point on.
o An account on the master server that the slave can use for connecting to the master and
requesting updates. The account must have the global REPLICATION SLAVE privilege. For
example, you can set up an account for a slave by issuing these statements on the master
server, where slave_user and slave_pass are the username and password for the account,
and slave_host is the host from which the slave server will connect:

mysql> CREATE USER 'slave_user'@'slave_host' IDENTIFIED BY 'slave_pass';


mysql> GRANT REPLICATION SLAVE ON *.* TO 'slave_user'@'slave_host';

Also, you must assign a unique ID value to each server that will participate in your replication setup.
ID values are positive integers in the range from 1 to 2^32 - 1. The easiest way to assign these ID
values is by placing a server-id option in each server's option file:

[mysqld]
server-id=id_value

It's common, though not required, to use an ID of 1 for the master server and values greater than 1
for the slaves. The following procedure describes the general process for setting up replication.

1. Ensure that binary logging is enabled on the master server. If it is not, stop the server, enable
logging, and restart the server.
2. On the master server, make a backup of all databases to be replicated. One way to do this is by
using mysqldump:

shell> mysqldump --all-databases --master-data=2 > dump_file

Assuming that binary logging is enabled, the --master-data=2 option causes the dump file to
include a comment containing a CHANGE MASTER statement that indicates the replication
coordinates as of the time of the backup. These coordinates can be used later when you tell the
slave where to begin replicating in the master's binary log.

3. Copy the dump file to the replication slave host and load it into the MySQL server on that
machine:

shell> mysql < dump_file

4. Tell the slave what master to connect to and the position in the master's binary log at which to
begin replicating. To do this, connect to the slave server and issue a CHANGE
MASTER statement:

mysql> CHANGE MASTER TO
    ->   MASTER_HOST = 'master_host_name',
    ->   MASTER_USER = 'slave_user',
    ->   MASTER_PASSWORD = 'slave_pass',
    ->   MASTER_LOG_FILE = 'master_log_file',
    ->   MASTER_LOG_POS = master_log_pos;

The hostname is the host where the master server is running. The username and password are
those for the slave account that you set up on the master. The log file and position are the
replication coordinates in the master's binary log. (You can get these from the CHANGE
MASTER statement near the beginning of the dump file.)

After you perform the preceding procedure, issue a START SLAVE statement. The slave should
connect to the master and begin replicating updates that the master sends to it. The slave also creates
a master.info file in its data directory and records the values from the CHANGE MASTER statement
in the file. As the slave reads updates from the master, it changes the replication coordinates in
the master.info file accordingly. Also, when the slave restarts in the future, it looks in this file to
determine which master to use.
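The start and monitoring statements issued on the slave are sketched below (the status output itself is lengthy and varies by server version, so it is not reproduced here):

```sql
-- Begin replicating from the configured master.
mysql> START SLAVE;

-- Inspect the I/O and SQL threads and the current replication coordinates.
mysql> SHOW SLAVE STATUS\G
```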

By default, the master server logs updates for all databases, and the slave server replicates all updates
that it receives from the master. For more fine-grained control, it's possible to tell a master which
databases to log updates for, and to tell a slave which of those updates that it receives from the
master to apply. You can either name databases to be replicated (in which case those not named are
ignored), or you can name databases to ignore (in which case those not named are replicated). The
master host options are --binlog-do-db and --binlog-ignore-db. The slave host options are
--replicate-do-db and --replicate-ignore-db.

The following example illustrates how this works, using the options that enable replication for
specific databases. Suppose that a master server has three databases named a, b, and c. You can elect
to replicate only databases a and b when you start the master server by placing these options in an
option file read by that server:

[mysqld]
binlog-do-db = a
binlog-do-db = b


With those options, the master server will log updates only for the named databases to the binary log.
Thus, any slave server that connects to the master will receive information only for
databases a and b.

Enabling binary logging only for certain databases has an unfortunate side effect: Data recovery
operations require both your backup files and your binary logs, so for any database not logged in the
binary log, full recovery cannot be performed. For this reason, you might prefer to have the master
log changes for all databases to the binary log, and instead filter updates on the slave side.
A slave that takes no filtering action will replicate all events that it receives. If a slave should
replicate events only for certain databases, such as databases a and c, you can start it with these lines
in an option file:

[mysqld]
replicate-do-db = a
replicate-do-db = c

RESULT:
The MYSQL replication was executed successfully.

EX No: Spatial data storage and retrieval in MySQL
Date :

AIM:
To create a spatial data storage and retrieve data in mysql.

PROCEDURE:

Step 1: Start the MYSQL server.


Step 2: Create a database and set that database.
Step 3: Create table with spatial column and insert the data.
Step 4: Use select statement to retrieve and view the content.

QUERIES:

Creating Spatial Columns:


Use the CREATE TABLE statement to create a table with a spatial column:
CREATE TABLE geom (g GEOMETRY);

Use the ALTER TABLE statement to add or drop a spatial column to or from an existing table
ALTER TABLE geom ADD pt POINT;
ALTER TABLE geom DROP pt;

Populating Spatial Columns


After you have created spatial columns, you can populate them with spatial data. Values should be
stored in internal geometry format, but you can convert them to that format from either Well-Known
Text (WKT) or Well-Known Binary (WKB) format.
The following examples demonstrate how to insert geometry values into a table by converting WKT
values to internal geometry format:

Perform the conversion directly in the INSERT statement:


INSERT INTO geom VALUES (ST_GeomFromText('POINT(1 1)'));

SET @g = 'POINT(1 1)';


INSERT INTO geom VALUES (ST_GeomFromText(@g));

Perform the conversion prior to the INSERT:


SET @g = ST_GeomFromText('POINT(1 1)');
INSERT INTO geom VALUES (@g);

To insert more complex geometries into the table:


SET @g = 'LINESTRING(0 0,1 1,2 2)';
INSERT INTO geom VALUES (ST_GeomFromText(@g));

SET @g = 'POLYGON((0 0,10 0,10 10,0 10,0 0),(5 5,7 5,7 7,5 7, 5 5))';
INSERT INTO geom VALUES (ST_GeomFromText(@g));

SET @g ='GEOMETRYCOLLECTION(POINT(1 1),LINESTRING(0 0,1 1,2 2,3 3,4 4))';


INSERT INTO geom VALUES (ST_GeomFromText(@g));

The examples above use the ST_GeomFromText() function to create geometry values. We can also use
type-specific functions:

SET @g = 'POINT(1 1)';


INSERT INTO geom VALUES (ST_PointFromText(@g));

SET @g = 'LINESTRING(0 0,1 1,2 2)';


INSERT INTO geom VALUES (ST_LineStringFromText(@g));

SET @g = 'POLYGON((0 0,10 0,10 10,0 10,0 0),(5 5,7 5,7 7,5 7, 5 5))';
INSERT INTO geom VALUES (ST_PolygonFromText(@g));

SET @g ='GEOMETRYCOLLECTION(POINT(1 1),LINESTRING(0 0,1 1,2 2,3 3,4 4))';


INSERT INTO geom VALUES (ST_GeomCollFromText(@g));

Inserting a POINT(1 1) value with hex literal syntax:


INSERT INTO geom VALUES
(ST_GeomFromWKB(X'0101000000000000000000F03F000000000000F03F'));

An ODBC application can send a WKB representation, binding it to a placeholder using an


argument of BLOB type:
INSERT INTO geom VALUES (ST_GeomFromWKB(?))

Fetching Spatial Data:


Geometry values stored in a table can be fetched in internal format. You can also convert them to
WKT or WKB format.

Fetching spatial data in internal format: Fetching geometry values using internal format can be
useful in table-to-table transfers:

CREATE TABLE geom2 (g GEOMETRY) SELECT g FROM geom;

Fetching spatial data in WKT format: The ST_AsText() function converts a geometry from
internal format to a WKT string.
SELECT ST_AsText(g) FROM geom;

Fetching spatial data in WKB format: The ST_AsBinary() function converts a geometry from
internal format to a BLOB containing the WKB value.
SELECT ST_AsBinary(g) FROM geom;
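Spatial functions can also appear in select lists and expressions. A brief sketch, assuming a POINT column like the pt column added earlier in this exercise (function names as in MySQL 5.7 and later):

```sql
-- Extract the X and Y coordinates of stored points.
SELECT ST_X(pt), ST_Y(pt) FROM geom WHERE pt IS NOT NULL;

-- Compute the distance between two geometries given as WKT.
SELECT ST_Distance(ST_GeomFromText('POINT(0 0)'),
                   ST_GeomFromText('POINT(3 4)'));
```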

RESULT:
The spatial data storage creation and retrieval of data in MySQL have been executed successfully.

EX No:
Temporal data storage and retrieval in MySQL
Date :

AIM:
To create a Temporal data storage and retrieval in MySQL.

PROCEDURE:

Step 1: Start the MYSQL server.


Step 2: Create a database and set that database.
Step 3: Create table with temporal column and insert the data.
Step 4: Use select statement to retrieve and view the content.

THEORY :
TEMPORAL DATATYPE
MySQL provides data types for storing different kinds of temporal information. In the following
descriptions, the terms YYYY, MM, DD, hh, mm, and ss stand for a year, month, day of month,
hour, minute, and second value, respectively.
The following table summarizes the storage requirements and ranges for the date and time data types.

Type        Storage Required   Range

DATE        3 bytes            '1000-01-01' to '9999-12-31'
TIME        3 bytes            '-838:59:59' to '838:59:59'
DATETIME    8 bytes            '1000-01-01 00:00:00' to '9999-12-31 23:59:59'
TIMESTAMP   4 bytes            '1970-01-01 00:00:00' to mid-year 2037
YEAR        1 byte             1901 to 2155 (for YEAR(4)), 1970 to 2069 (for YEAR(2))
QUERIES:

To Create table with temporal data


mysql> CREATE TABLE ts_test1 (
    ->   ts1 TIMESTAMP,
    ->   ts2 TIMESTAMP,
    ->   data CHAR(30)
    -> );
Query OK, 0 rows affected (0.00 sec)

To describe the table schema


mysql> DESCRIBE ts_test1;
+-------+-----------+------+-----+---------------------+-------+
| Field | Type      | Null | Key | Default             | Extra |
+-------+-----------+------+-----+---------------------+-------+
| ts1   | timestamp | YES  |     | CURRENT_TIMESTAMP   |       |
| ts2   | timestamp | YES  |     | 0000-00-00 00:00:00 |       |
| data  | char(30)  | YES  |     | NULL                |       |
+-------+-----------+------+-----+---------------------+-------+
3 rows in set (0.01 sec)

To insert
mysql> INSERT INTO ts_test1 (data) VALUES ('original_value');
Query OK, 1 row affected (0.00 sec)

mysql> SELECT * FROM ts_test1;
+---------------------+---------------------+----------------+
| ts1                 | ts2                 | data           |
+---------------------+---------------------+----------------+
| 2005-01-04 14:45:51 | 0000-00-00 00:00:00 | original_value |
+---------------------+---------------------+----------------+
1 row in set (0.00 sec)

To update
mysql> UPDATE ts_test1 SET data='updated_value';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1 Changed: 1 Warnings: 0

To retrieve
mysql> SELECT * FROM ts_test1;
+---------------------+---------------------+---------------+
| ts1                 | ts2                 | data          |
+---------------------+---------------------+---------------+
| 2005-01-04 14:46:17 | 0000-00-00 00:00:00 | updated_value |
+---------------------+---------------------+---------------+
1 row in set (0.00 sec)

mysql> CREATE TABLE ts_test2 (
    ->   created_time TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    ->   data CHAR(30)
    -> );
Query OK, 0 rows affected (0.00 sec)

mysql> INSERT INTO ts_test2 (data) VALUES ('original_value');
Query OK, 1 row affected (0.01 sec)

mysql> SELECT * FROM ts_test2;
+---------------------+----------------+
| created_time        | data           |
+---------------------+----------------+
| 2005-01-04 14:46:39 | original_value |
+---------------------+----------------+
1 row in set (0.00 sec)

mysql> UPDATE ts_test2 SET data='updated_value';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0

mysql> SELECT * FROM ts_test2;
+---------------------+---------------+
| created_time        | data          |
+---------------------+---------------+
| 2005-01-04 14:46:39 | updated_value |
+---------------------+---------------+
1 row in set (0.00 sec)

mysql> CREATE TABLE ts_test3 (
    ->   updated_time TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
    ->   data CHAR(30)
    -> );
Query OK, 0 rows affected (0.01 sec)

mysql> INSERT INTO ts_test3 (data) VALUES ('original_value'); Query OK, 1 row affected (0.00
sec)

mysql> SELECT * FROM ts_test3;
+---------------------+----------------+
| updated_time        | data           |
+---------------------+----------------+
| 0000-00-00 00:00:00 | original_value |
+---------------------+----------------+
1 row in set (0.00 sec)

mysql> UPDATE ts_test3 SET data='updated_value';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0

mysql> SELECT * FROM ts_test3;
+---------------------+---------------+
| updated_time        | data          |
+---------------------+---------------+
| 2005-01-04 14:47:10 | updated_value |
+---------------------+---------------+
1 row in set (0.00 sec)

mysql> CREATE TABLE ts_test4 (
    ->   created TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    ->   updated TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
    ->   data CHAR(30)
    -> );
ERROR 1293 (HY000): Incorrect table definition; there can be only one TIMESTAMP column with
CURRENT_TIMESTAMP in DEFAULT or ON UPDATE clause

mysql> CREATE TABLE ts_test5 (
    ->   created TIMESTAMP DEFAULT 0,
    ->   updated TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
    ->   data CHAR(30)
    -> );
Query OK, 0 rows affected (0.01 sec)

mysql> INSERT INTO ts_test5 (created, data) -> VALUES (NULL, 'original_value');
Query OK, 1 row affected (0.00 sec)

mysql> SELECT * FROM ts_test5;
+---------------------+---------------------+----------------+
| created             | updated             | data           |
+---------------------+---------------------+----------------+
| 2005-01-04 14:47:39 | 0000-00-00 00:00:00 | original_value |
+---------------------+---------------------+----------------+
1 row in set (0.00 sec)

mysql> UPDATE ts_test5 SET data='updated_value';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0

mysql> SELECT * FROM ts_test5;
+---------------------+---------------------+---------------+
| created             | updated             | data          |
+---------------------+---------------------+---------------+
| 2005-01-04 14:47:39 | 2005-01-04 14:47:52 | updated_value |
+---------------------+---------------------+---------------+
1 row in set (0.00 sec)

mysql> CREATE TABLE ts_null (ts TIMESTAMP NULL);


Query OK, 0 rows affected (0.04 sec)

mysql> DESCRIBE ts_null;
+-------+-----------+------+-----+---------+-------+
| Field | Type      | Null | Key | Default | Extra |
+-------+-----------+------+-----+---------+-------+
| ts    | timestamp | YES  |     | NULL    |       |
+-------+-----------+------+-----+---------+-------+
1 row in set (0.10 sec)

mysql> SELECT @@global.time_zone, @@session.time_zone;
+--------------------+---------------------+
| @@global.time_zone | @@session.time_zone |
+--------------------+---------------------+
| SYSTEM             | SYSTEM              |
+--------------------+---------------------+
1 row in set (0.00 sec)

mysql> SET time_zone = '+00:00';
Query OK, 0 rows affected (0.00 sec)

mysql> SELECT @@session.time_zone;
+---------------------+
| @@session.time_zone |
+---------------------+
| +00:00              |
+---------------------+
1 row in set (0.00 sec)

mysql> CREATE TABLE ts_test (ts TIMESTAMP);
Query OK, 0 rows affected (0.01 sec)

mysql> INSERT INTO ts_test (ts) VALUES (NULL);
Query OK, 1 row affected (0.00 sec)

mysql> SELECT * FROM ts_test;
+---------------------+
| ts                  |
+---------------------+
| 2005-01-04 20:50:18 |
+---------------------+
1 row in set (0.00 sec)

mysql> SET time_zone = '+02:00';


Query OK, 0 rows affected (0.00 sec)

mysql> SELECT * FROM ts_test;


+---------------------+
| ts                  |
+---------------------+
| 2005-01-04 22:50:18 |
+---------------------+
1 row in set (0.00 sec)

mysql> SET time_zone = '-05:00';


Query OK, 0 rows affected (0.00 sec)
mysql> SELECT * FROM ts_test;
+---------------------+
| ts                  |
+---------------------+
| 2005-01-04 15:50:18 |
+---------------------+
1 row in set (0.00 sec)

mysql> SELECT CONVERT_TZ('2005-01-27 13:30:00', '+01:00', '+03:00');


+-------------------------------------------------------+
| CONVERT_TZ('2005-01-27 13:30:00', '+01:00', '+03:00') |
+-------------------------------------------------------+
| 2005-01-27 15:30:00                                   |
+-------------------------------------------------------+
1 row in set (0.00 sec)
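`CONVERT_TZ` reinterprets a timestamp from one offset to another. The arithmetic it performs can be sketched in plain Python (the function name `convert_tz` is our own, not a MySQL API):

```python
from datetime import datetime, timedelta, timezone

# Mimic CONVERT_TZ for fixed '+HH:MM' offsets: attach the source offset
# to the naive timestamp, then view it in the target offset.
def convert_tz(ts: str, from_off: str, to_off: str) -> str:
    def offset(s: str) -> timezone:
        sign = 1 if s[0] == '+' else -1
        h, m = s[1:].split(':')
        return timezone(sign * timedelta(hours=int(h), minutes=int(m)))
    naive = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S")
    shifted = naive.replace(tzinfo=offset(from_off)).astimezone(offset(to_off))
    return shifted.strftime("%Y-%m-%d %H:%M:%S")

print(convert_tz('2005-01-27 13:30:00', '+01:00', '+03:00'))
# → 2005-01-27 15:30:00, matching the query output above
```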

RESULT:
The temporal data storage and retrieval queries in MySQL were executed successfully.

EX No: Object storage and retrieval

Date :

AIM:
To create and execute Object data storage and retrieval.

PROCEDURE:

Step 1: Start the Oracle server.


Step 2: Connect to the server through the client.
Step 3: Create a database and set that database.
Step 4: Create table and insert the data.
Step 5: Use select statement to retrieve and view the content.

QUERY:

To create an object type:


-- Incomplete (forward) type declarations; the full definitions follow
CREATE TYPE StockItem_objtyp;
CREATE TYPE LineItem_objtyp;
CREATE TYPE PurchaseOrder_objtyp;

CREATE TYPE PhoneList_vartyp AS VARRAY(10) OF VARCHAR2(20);

CREATE TYPE Address_objtyp AS OBJECT (


Street VARCHAR2(200),
City VARCHAR2(200),
State CHAR(2),
Zip VARCHAR2(20)
) ;

CREATE TYPE Customer_objtyp AS OBJECT (


CustNo NUMBER,
CustName VARCHAR2(200),
Address_obj Address_objtyp,
PhoneList_var PhoneList_vartyp,

ORDER MEMBER FUNCTION


compareCustOrders(x IN Customer_objtyp) RETURN INTEGER
) ;

CREATE TYPE LineItem_objtyp AS OBJECT (


LineItemNo NUMBER,
Stock_ref REF StockItem_objtyp,
Quantity NUMBER,
Discount NUMBER
) ;

To create a table
CREATE TABLE Customer_objtab OF Customer_objtyp (CustNo PRIMARY KEY)
OBJECT ID PRIMARY KEY ;

CREATE TABLE Stock_objtab OF StockItem_objtyp (StockNo PRIMARY KEY) OBJECT ID
PRIMARY KEY ;

CREATE TABLE PurchaseOrder_objtab OF PurchaseOrder_objtyp ( /* Line 1 */


PRIMARY KEY (PONo), /* Line 2 */
FOREIGN KEY (Cust_ref) REFERENCES Customer_objtab) /* Line 3 */
OBJECT ID PRIMARY KEY /* Line 4 */
NESTED TABLE LineItemList_ntab STORE AS PoLine_ntab ( /* Line 5 */
(PRIMARY KEY(NESTED_TABLE_ID, LineItemNo)) /* Line 6 */
ORGANIZATION INDEX COMPRESS) /* Line 7 */
RETURN AS LOCATOR ; /* Line 8 */

To alter table
ALTER TABLE PoLine_ntab
ADD (SCOPE FOR (Stock_ref) IS stock_objtab) ;

CREATE OR REPLACE TYPE BODY PurchaseOrder_objtyp AS


MAP MEMBER FUNCTION getPONo RETURN NUMBER is
BEGIN
RETURN PONo;
END;
MEMBER FUNCTION sumLineItems RETURN NUMBER IS
i INTEGER;
StockVal StockItem_objtyp;
Total NUMBER := 0;
BEGIN
IF (UTL_COLL.IS_LOCATOR(LineItemList_ntab)) -- check for locator
THEN
SELECT SUM(L.Quantity * L.Stock_ref.Price) INTO Total
FROM TABLE(CAST(LineItemList_ntab AS LineItemList_ntabtyp)) L;
ELSE
FOR i in 1..SELF.LineItemList_ntab.COUNT LOOP
UTL_REF.SELECT_OBJECT(LineItemList_ntab(i).Stock_ref,StockVal);
Total := Total + SELF.LineItemList_ntab(i).Quantity *
StockVal.Price;
END LOOP;
END IF;
RETURN Total;
END;
END;
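The `sumLineItems` member function above dereferences each line item's stock reference and accumulates `Quantity * Price`. A plain-Python sketch of that computation (class and field names are ours, standing in for the Oracle object types):

```python
from dataclasses import dataclass, field

@dataclass
class StockItem:            # stands in for StockItem_objtyp
    stock_no: int
    price: float

@dataclass
class LineItem:             # stands in for LineItem_objtyp
    line_no: int
    stock: StockItem        # stands in for REF StockItem_objtyp
    quantity: int
    discount: int

@dataclass
class PurchaseOrder:        # stands in for PurchaseOrder_objtyp
    po_no: int
    line_items: list = field(default_factory=list)

    def sum_line_items(self) -> float:
        # Same loop as the ELSE branch of sumLineItems: deref each
        # stock item and total quantity * price.
        return sum(li.quantity * li.stock.price for li in self.line_items)

po = PurchaseOrder(1001, [LineItem(2, StockItem(1535, 3456.23), 10, 10)])
print(round(po.sum_line_items(), 2))  # 34562.3
```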

To insert
INSERT INTO Stock_objtab VALUES(1004, 6750.00, 2) ;
INSERT INTO Stock_objtab VALUES(1011, 4500.23, 2) ;
INSERT INTO Stock_objtab VALUES(1534, 2234.00, 2) ;
INSERT INTO Stock_objtab VALUES(1535, 3456.23, 2) ;

INSERT INTO Customer_objtab


VALUES (
1, 'Jean Nance',
Address_objtyp('2 Avocet Drive', 'Redwood Shores', 'CA', '95054'),
PhoneList_vartyp('415-555-1212')
);

INSERT INTO Customer_objtab


VALUES (
2, 'John Nike',
Address_objtyp('323 College Drive', 'Edison', 'NJ', '08820'),
PhoneList_vartyp('609-555-1212','201-555-1212')
);

INSERT INTO PurchaseOrder_objtab


SELECT 1001, REF(C),
SYSDATE, '10-MAY-1999',
LineItemList_ntabtyp(),
NULL
FROM Customer_objtab C
WHERE C.CustNo = 1 ;

INSERT INTO PurchaseOrder_objtab


SELECT 2001, REF(C),
SYSDATE, '20-MAY-1997',
LineItemList_ntabtyp(),
Address_objtyp('55 Madison Ave','Madison','WI','53715')
FROM Customer_objtab C
WHERE C.CustNo = 2 ;

INSERT INTO TABLE (
SELECT P.LineItemList_ntab
FROM PurchaseOrder_objtab P
WHERE P.PONo = 1001
)
SELECT 02, REF(S), 10, 10
FROM Stock_objtab S
WHERE S.StockNo = 1535 ;

INSERT INTO TABLE (


SELECT P.LineItemList_ntab
FROM PurchaseOrder_objtab P
WHERE P.PONo = 2001
)
SELECT 10, REF(S), 1, 0
FROM Stock_objtab S
WHERE S.StockNo = 1004 ;

INSERT INTO TABLE (


SELECT P.LineItemList_ntab
FROM PurchaseOrder_objtab P
WHERE P.PONo = 2001
)
VALUES(11, (SELECT REF(S)
FROM Stock_objtab S
WHERE S.StockNo = 1011), 2, 1) ;

SELECT p.PONo
FROM PurchaseOrder_objtab p
ORDER BY VALUE(p) ;
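`ORDER BY VALUE(p)` works because the MAP member function `getPONo` defined earlier maps each object to a scalar sort key. A minimal sketch of that idea (the dict layout is ours):

```python
# A MAP member function reduces object comparison to comparing one
# scalar — here, the PO number plays the role of getPONo's return value.
orders = [{"po_no": 2001}, {"po_no": 1001}]
ordered = sorted(orders, key=lambda o: o["po_no"])  # getPONo as sort key
print([o["po_no"] for o in ordered])  # → [1001, 2001]
```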

Customer and Line Item Data for Purchase Order 1001


SELECT DEREF(p.Cust_ref), p.ShipToAddr_obj, p.PONo,
p.OrderDate, LineItemList_ntab
FROM PurchaseOrder_objtab p
WHERE p.PONo = 1001 ;

Total Value of Each Purchase Order


SELECT p.PONo, p.sumLineItems()
FROM PurchaseOrder_objtab p ;
Purchase Order and Line Item Data Involving Stock Item 1004
SELECT po.PONo, po.Cust_ref.CustNo,
CURSOR (
SELECT *
FROM TABLE (po.LineItemList_ntab) L
WHERE L.Stock_ref.StockNo = 1004
)
FROM PurchaseOrder_objtab po ;

SELECT po.PONo, po.Cust_ref.CustNo, L.*


FROM PurchaseOrder_objtab po, TABLE (po.LineItemList_ntab) L
WHERE L.Stock_ref.StockNo = 1004 ;

SELECT po.PONo, po.Cust_ref.CustNo, L.*


FROM PurchaseOrder_objtab po, TABLE (po.LineItemList_ntab) (+) L
WHERE L.Stock_ref.StockNo = 1004 ;

SELECT AVG(L.DISCOUNT)
FROM PurchaseOrder_objtab po, TABLE (po.LineItemList_ntab) L ;

To delete
DELETE
FROM PurchaseOrder_objtab
WHERE PONo = 1001 ;

RESULT:
The object data storage and retrieval queries were created and executed successfully.

EX No: XML Databases, XML table creation, XQuery FLWOR expression

Date :

AIM:
To create XML databases and XML tables, and to execute XQuery FLWOR expressions.

PROCEDURE:

Step 1: Start the Oracle server.


Step 2: Connect to the server through the client.
Step 3: Create a database and set that database.
Step 4: Create table and insert the data.
Step 5: Use select statement (FLWOR) to retrieve and view the content.

QUERIES:

To create table
CREATE TABLE mytable1 (key_column VARCHAR2(10) PRIMARY KEY, xml_column
XMLType);

Table created.

CREATE TABLE mytable2 OF XMLType;

Table created.

To insert values:
INSERT INTO mytable2 VALUES (XMLType(bfilename('XMLDIR', 'purchaseOrder.xml'),
nls_charset_id('AL32UTF8')));

To retrieve using XQuery


SELECT XMLQuery('for $i in /PurchaseOrder
where $i/CostCenter eq "A10"
and $i/User eq "SMCCAIN"
return <A10po pono="{$i/Reference}"/>'
PASSING OBJECT_VALUE
RETURNING CONTENT)
FROM purchaseorder;

XMLQUERY('FOR$IIN/PURCHASEORDERWHERE$I/COSTCENTEREQ"A10"AND$I/USEREQ"SMCCAIN"RET
--------------------------------------------------------------------------------
<A10po pono="SMCCAIN-20021009123336151PDT"></A10po>
<A10po pono="SMCCAIN-20021009123336341PDT"></A10po>
<A10po pono="SMCCAIN-20021009123337173PDT"></A10po>
<A10po pono="SMCCAIN-20021009123335681PDT"></A10po>
<A10po pono="SMCCAIN-20021009123335470PDT"></A10po>
<A10po pono="SMCCAIN-20021009123336972PDT"></A10po>
<A10po pono="SMCCAIN-20021009123336842PDT"></A10po>
<A10po pono="SMCCAIN-20021009123336512PDT"></A10po>
<A10po pono="SMCCAIN-2002100912333894PDT"></A10po>

<A10po pono="SMCCAIN-20021009123337403PDT"></A10po>

XML File:
<PurchaseOrder>
<Reference>SBELL-2002100912333601PDT</Reference>
<Actions>
<Action>
<User>SVOLLMAN</User>
</Action>
</Actions>
...
</PurchaseOrder>

<PurchaseOrder>
<Reference>ABEL-20021127121040897PST</Reference>
<Actions>
<Action>
<User>ZLOTKEY</User>
</Action>
<Action>
<User>KING</User>
</Action>
</Actions>
...
</PurchaseOrder>
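The FLWOR expression above has three working parts: `for` iterates over `PurchaseOrder` elements, `where` filters on child element values, and `return` constructs a new `<A10po>` element per match. The same shape can be sketched in Python with the standard `xml.etree.ElementTree` module (the sample document below is ours, trimmed to the fields the query touches):

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<PurchaseOrders>
  <PurchaseOrder>
    <Reference>SMCCAIN-20021009123336151PDT</Reference>
    <CostCenter>A10</CostCenter>
    <User>SMCCAIN</User>
  </PurchaseOrder>
  <PurchaseOrder>
    <Reference>SBELL-2002100912333601PDT</Reference>
    <CostCenter>S30</CostCenter>
    <User>SBELL</User>
  </PurchaseOrder>
</PurchaseOrders>
""")

results = [
    ET.Element("A10po", pono=po.findtext("Reference"))   # return clause
    for po in doc.iter("PurchaseOrder")                  # for clause
    if po.findtext("CostCenter") == "A10"                # where clause
    and po.findtext("User") == "SMCCAIN"
]
for el in results:
    print(ET.tostring(el, encoding="unicode"))
# → <A10po pono="SMCCAIN-20021009123336151PDT" />
```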

RESULT:
The XML database and XML table creation queries and the XQuery FLWOR expressions were executed successfully.

