Ex.No.: 1
QUERY OPTIMIZER
Write SQL queries to retrieve employee information and use metadata to optimize the
queries
PROBLEM STATEMENT:
Database Setup: Ensure you have a relational database with a table named Employees with
columns like EmployeeID, FirstName, LastName, Department, Position, Salary, and
HireDate.
Use the EXPLAIN command to analyze the performance of the queries you wrote in
Task 1.
Based on the metadata information and query performance analysis, modify your
queries or indexes to improve performance. For example:
1. Create an index on the Department column if it’s frequently queried.
2. Rewrite queries to utilize indexes more effectively.
Re-run the EXPLAIN command on the optimized queries and compare the
performance with the initial queries. Document the improvements.
INTRODUCTION: "Query with optimizer by accessing the metadata" refers to the process
in which a database management system (DBMS) uses an internal component called a query
optimizer to determine the most efficient way to execute a database query. The optimizer
makes decisions based on metadata, which is data that provides information about the
structure and characteristics of the database.
1. Query:
o A request to retrieve or manipulate data stored in a database. For example, a
SQL query might request all records from a table where a certain condition is
met.
2. Optimizer:
o A software component within the DBMS that analyzes different ways to
execute a query. It considers various strategies or "execution plans" to find the
one that will complete the task most efficiently, often measured in terms of
time and resource usage (like CPU, memory, and I/O operations).
3. Metadata:
o Data about the data stored in the database. This includes information like:
The size of tables.
The number of rows in a table.
The distribution of values within columns.
The presence and type of indexes.
Data types and constraints on columns.
o Metadata helps the optimizer understand the characteristics of the data, which
is crucial for making informed decisions about the best way to execute a
query.
OBJECTIVES:
To create a relational database table named Employees, insert values, and display
employee details using various SQL queries. Additionally, access metadata, analyze
query performance, optimize queries, and evaluate the performance improvements.
APPLICATIONS:
STEP-BY-STEP PROCESS:
Create the Employees Table and Insert Values
AIM:
The aim of this experiment is to write SQL queries to retrieve employee information and
to use metadata and EXPLAIN analysis to optimize those queries.
ALGORITHMS:
Create the table and insert values into it.
Task 1: Write Basic Queries
Get the Structure of the Employees Table and View Existing Indexes on the Table
For example, to analyze the performance of the query to retrieve all employee details:
Task 4: Optimize Queries
IMPLEMENTATION:
DESCRIBE Employees;
or
SHOW COLUMNS FROM Employees;
Ensure that the queries are designed to take advantage of indexes. For example, the index on
Department will be used if the query filters on this column:
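As a concrete illustration, the before/after comparison can be reproduced with SQLite's EXPLAIN QUERY PLAN standing in for MySQL's EXPLAIN (a minimal sketch; the table rows are sample data, not from the exercise):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE Employees (
    EmployeeID INTEGER PRIMARY KEY, FirstName TEXT, LastName TEXT,
    Department TEXT, Position TEXT, Salary REAL, HireDate TEXT)""")
conn.executemany("INSERT INTO Employees VALUES (?,?,?,?,?,?,?)",
                 [(1, 'Asha', 'Rao', 'Sales', 'Rep', 50000, '2020-01-01'),
                  (2, 'Vik', 'Nair', 'HR', 'Manager', 60000, '2019-05-10')])

query = "SELECT * FROM Employees WHERE Department = 'Sales'"

# Before indexing, the optimizer falls back to a full table scan
before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()[0][3]

# Create an index on the frequently filtered column, then re-check the plan
conn.execute("CREATE INDEX idx_department ON Employees(Department)")
after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()[0][3]

print(before)  # e.g. "SCAN Employees"
print(after)   # e.g. "SEARCH Employees USING INDEX idx_department (Department=?)"
```

Re-running the plan after the index is created shows it switching from a scan to an index search, which is exactly the improvement Task 4 asks you to document.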
OUTPUT:
DESCRIBE Employees:
RESULT: Thus, writing SQL queries to retrieve employee information and optimizing them
using metadata and EXPLAIN analysis has been successfully executed.
Ex.No.: 2
DISTRIBUTED DATABASE
Create a distributed database and run various queries Use stored procedures
PROBLEM STATEMENT:
INTRODUCTION:
OBJECTIVES:
Manage tables across distributed databases, perform data insertion, and run various queries to
combine and aggregate data.
APPLICATIONS:
E-commerce Platforms
Financial Services
Healthcare Systems
Social Media Networks
STEP-BY-STEP PROCESS:
To combine data from the Employees table in both databases, you need to use the UNION
operator. Assuming you have connected to both databases and can query them together:
If you want to include duplicate records, replace UNION with UNION ALL:
Now, let's create a view that aggregates employee information, showing total salary and
average salary by department:
Table Creation:
Data Insertion:
Write a SQL query to combine data from Employees tables in both databases using
UNION.
Create a view that aggregates employee data, such as total and average salaries by
department.
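The tasks above can be sketched end-to-end with SQLite's ATTACH mechanism standing in for two distributed databases (a minimal sketch; a Salary column is added beyond the schema shown below so the aggregate has something to sum, and the sample rows are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")                 # plays the role of Database1
conn.execute("ATTACH DATABASE ':memory:' AS db2")  # plays the role of Database2

for schema in ("main", "db2"):
    conn.execute(f"""CREATE TABLE {schema}.Employees (
        EmployeeID INT PRIMARY KEY, FirstName TEXT, LastName TEXT,
        DepartmentID INT, Salary REAL)""")

conn.execute("INSERT INTO main.Employees VALUES (1, 'Alice', 'A', 10, 50000)")
conn.execute("INSERT INTO db2.Employees VALUES (1, 'Alice', 'A', 10, 50000)")
conn.execute("INSERT INTO db2.Employees VALUES (2, 'Bob', 'B', 20, 60000)")

# UNION removes the duplicated Alice row; UNION ALL would keep it
combined = conn.execute("""SELECT * FROM main.Employees
                           UNION
                           SELECT * FROM db2.Employees""").fetchall()
print(len(combined))  # 2

# Aggregation per department (inside one database this same SELECT
# would be the body of a CREATE VIEW ... AS statement)
agg = conn.execute("""SELECT DepartmentID, COUNT(*), SUM(Salary), AVG(Salary)
                      FROM (SELECT * FROM main.Employees
                            UNION SELECT * FROM db2.Employees)
                      GROUP BY DepartmentID""").fetchall()
print(sorted(agg))
```

The design point to notice is that UNION deduplicates rows that exist in both databases, so the aggregate is computed over the unified, duplicate-free data set.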
IMPLEMENTATION:
Database1
Database2
CREATE TABLE Employees (
EmployeeID INT PRIMARY KEY,
FirstName VARCHAR(50),
LastName VARCHAR(50),
DepartmentID INT
);
Step 1.2: Create the Departments Table in Both Databases
Database1
Database2
Database1
Database2
Database1
Database2
Database1
Database2
Database1
Database2
To combine data from the Employees table in both databases, use the UNION operator. To
include duplicate records, replace UNION with UNION ALL:
-- Combining Employees data from both Database1 and Database2, including duplicates
SELECT EmployeeID, FirstName, LastName, DepartmentID FROM Database1.Employees
UNION ALL
SELECT EmployeeID, FirstName, LastName, DepartmentID FROM Database2.Employees;
-- Querying the view to display department name, employee count, total salary, and average salary
OUTPUT:
Combined Data from Employees Table (UNION ALL)
Ex.No.: 3
OBJECT-ORIENTED DATABASE
Create OQL Queries to access the data from Object Oriented Database.
PROBLEM STATEMENT:
You are working with an object-oriented database designed to manage employee details for a
company. The database contains information about employees, departments, and projects.
You are required to write Object Query Language (OQL) queries to perform various data
retrieval tasks based on this database schema.
1. Employee
o employeeID: Integer
o firstName: String
o lastName: String
o email: String
o jobTitle: String
o department: Department (Reference to a Department object)
o salary: Float
o hireDate: Date
o projects: List<Project> (Collection of Project objects)
2. Department
o departmentID: Integer
o departmentName: String
o manager: Employee (Reference to an Employee object)
3. Project
o projectID: Integer
o projectName: String
o startDate: Date
o endDate: Date
o teamMembers: List<Employee> (Collection of Employee objects)
Write an OQL query to retrieve all details of employees, including their ID, name, email,
job title, and salary.
Write an OQL query to list all employees who belong to a specific department, such as
"Engineering". Display their ID, name, and department name.
Write an OQL query to find all employees with a salary above $70,000. Display their ID,
name, and salary.
Write an OQL query to find all employees who are managed by a specific manager, say
the one with employeeID = 1002. Display their ID, name, and the manager's name.
Write an OQL query to find all employees involved in a project named "Project X".
Display their ID, name, and project name.
Write an OQL query to count the number of employees in each department. Display the
department name and the count of employees.
Write an OQL query to retrieve all employees who were hired after January 1, 2021.
Display their ID, name, and hire date.
Write an OQL query to list all projects and their respective team members. For each
project, display the project name and the names of all team members.
OBJECTIVES:
To write OQL queries that retrieve employee, department, and project data from an
object-oriented database.
APPLICATIONS:
STEP-BY-STEP EXPLANATION:
AIM: To write OQL queries that retrieve various data from an object-oriented database
containing information about employees, departments, and projects.
ALGORITHMS:
1. Retrieve All Details of Employees : Select all attributes from the Employee class.
5. Find Employees Involved in a Project Named "Project X": Join Employee with Project
and filter by project name.
6. Count the Number of Employees in Each Department: Group by department name and
count employees.
7. Retrieve Employees Hired After January 1, 2021: Filter employees by hire date.
8. List Projects and Their Respective Team Members: Join Project with Employee to get
team members.
9. Find Employees Not Assigned to Any Project: Filter employees with no associated
projects.
10. List Departments with Their Managers: Fetch department details and join with
manager information.
IMPLEMENTATION:
SELECT e.employeeID, e.firstName, e.lastName, e.email, e.jobTitle, e.salary
FROM Employee e
SELECT d.departmentName, d.manager.firstName, d.manager.lastName
FROM Department d
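Because OQL queries range over collections of objects and follow object references directly (e.g. e.department.departmentName), their semantics can be checked with Python dataclasses and comprehensions (a sketch only, not OQL itself; the objects below are hypothetical sample data):

```python
from dataclasses import dataclass

@dataclass
class Department:
    departmentID: int
    departmentName: str

@dataclass
class Employee:
    employeeID: int
    firstName: str
    lastName: str
    salary: float
    department: Department

eng = Department(1, "Engineering")
hr = Department(2, "HR")
employees = [
    Employee(1001, "Alice", "Lee", 80000.0, eng),
    Employee(1002, "Bob", "Ray", 65000.0, hr),
    Employee(1003, "Cara", "Kim", 72000.0, eng),
]

# OQL: SELECT e.employeeID, e.firstName FROM Employee e WHERE e.salary > 70000
high_paid = [(e.employeeID, e.firstName) for e in employees if e.salary > 70000]
print(high_paid)  # [(1001, 'Alice'), (1003, 'Cara')]

# OQL: SELECT e FROM Employee e
#      WHERE e.department.departmentName = "Engineering"
in_eng = [e.employeeID for e in employees
          if e.department.departmentName == "Engineering"]
print(in_eng)  # [1001, 1003]
```

Note how the path expression e.department.departmentName navigates an object reference with no join, which is the key difference between OQL and relational SQL.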
OUTPUT:
1. Retrieve All Details of Employees :
RESULT: Thus, writing OQL queries that retrieve various data from an object-oriented
database containing information about employees, departments, and projects has been
successfully executed.
Ex.No.: 4
PARALLEL DATABASE
PROBLEM STATEMENT:
1. Install and configure a parallel database system and Java development environment.
(Ex. Apache HBase)
2. Write Java code to establish a connection to the parallel database.
3. Execute basic SQL queries against the parallel database
o Write Java code to execute INSERT, SELECT, UPDATE, and DELETE queries.
o Use PreparedStatement for executing parameterized queries.
o Display the results of SELECT queries in the console.
INTRODUCTION:
Parallel databases are designed to handle large volumes of data by distributing the workload
across multiple processors or servers. This architecture allows for faster query processing,
improved scalability, and better fault tolerance compared to traditional single-node databases.
1. Data Partitioning: Data is split across multiple disks or nodes. Common partitioning
methods include:
o Horizontal Partitioning: Dividing rows across different nodes.
o Vertical Partitioning: Dividing columns across different nodes.
o Hash Partitioning: Distributing data based on a hash function.
o Range Partitioning: Dividing data based on a range of values.
2. Parallel Query Execution: Queries are processed simultaneously by different
processors or nodes. Techniques include:
o Intra-query parallelism: Breaking a single query into sub-tasks that run in
parallel.
o Inter-query parallelism: Running multiple queries simultaneously across
different processors.
3. Load Balancing: Ensuring that data and query processing is evenly distributed across
nodes to prevent bottlenecks.
4. Fault Tolerance: If one node fails, the system can continue processing using the
remaining nodes, often with data replication to ensure no data is lost.
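The concepts above can be made concrete with a toy sketch of intra-query parallelism: one aggregate query over a horizontally partitioned table is split into per-partition sub-tasks that run concurrently and are then merged (pure standard library; the data is synthetic):

```python
from concurrent.futures import ThreadPoolExecutor

# Horizontal partitions of one table column, as they might live on four nodes
partitions = [list(range(0, 25)), list(range(25, 50)),
              list(range(50, 75)), list(range(75, 100))]

def scan_partition(rows):
    # Each worker computes a partial aggregate over its own partition
    return sum(rows)

# Intra-query parallelism: the sub-tasks run concurrently...
with ThreadPoolExecutor(max_workers=4) as pool:
    partial_sums = list(pool.map(scan_partition, partitions))

# ...and a final merge step combines the partial results
total = sum(partial_sums)
print(total)  # 4950
```

The same split/scan/merge shape underlies real parallel SUM, COUNT, and GROUP BY execution; only the partition scan is distributed, while the merge remains a small central step.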
OBJECTIVES:
To develop an application in Java that can efficiently interact with a parallel database system
to perform various data operations, including querying, updating, and managing data across
distributed nodes.
APPLICATION:
Data Warehousing
Big Data Analytics
Scientific Research
Financial Services
Telecommunications
Health Care
STEP-BY-STEP EXPLANATION:
1. Schema Design
3. Parallel Setup
Sharding: Distribute the tables across different nodes based on UnitID or other criteria
to balance the load.
Replication: Implement replication to ensure fault tolerance and high availability.
4. Basic Queries
Retrieve all personnel information, including their unit names and roles:
Find all missions conducted by a specific unit, including details about the personnel
assigned to those missions and the equipment used:
Calculate the total number of missions conducted by each unit and the average duration
of these missions:
Generate a report listing the top 5 units with the highest number of missions,
including the total number of missions and average mission duration for each unit:
AIM:
To design and implement a parallel database system to manage defense-related
information, including personnel, missions, and equipment, using Python and Apache
HBase.
ALGORITHM:
Setup and Configuration: Install Apache HBase and configure it with Python
environment.
Connection Establishment: Write Python code to connect to HBase.
CRUD Operations: Implement Python code to perform INSERT, SELECT,
UPDATE, and DELETE operations.
Parallel Operations: Use concurrency utilities to perform batch operations and
measure performance.
Query Optimization: Apply optimization techniques to enhance query performance.
IMPLEMENTATIONS:
Install and Configure Apache HBase and Python Development Environment
pip install happybase
1. Establish Connection:
import happybase
# Connect to HBase through its Thrift server (start it with `hbase thrift start`)
connection = happybase.Connection('localhost')
table_name = 'my_table'
table = connection.table(table_name)
# Insert data
table.put(b'row1', {b'cf1:col1': b'value1'})
print("Data inserted.")
o Select Data:
# Fetch data
row = table.row(b'row1')
print("Retrieved value:", row[b'cf1:col1'].decode('utf-8'))
# Update data
table.put(b'row1', {b'cf1:col1': b'new_value'})
print("Data updated.")
o Delete Data:
# Delete data
table.delete(b'row1')
print("Data deleted.")
# Results are displayed using print statements in the above code.
1. Batch Inserts:
from concurrent.futures import ThreadPoolExecutor
import happybase

connection = happybase.Connection('localhost')
table = connection.table('my_table')

def insert_row(row_id):
    # Write a single row; the executor below issues these calls concurrently
    table.put(f'row{row_id}'.encode(), {b'cf1:col1': f'value{row_id}'.encode()})
2. Parallel Queries:
from concurrent.futures import ThreadPoolExecutor
import happybase

connection = happybase.Connection('localhost')
table = connection.table('my_table')

def query_row(row_id):
    row = table.row(f'row{row_id}'.encode())
    return row.get(b'cf1:col1', b'No data').decode('utf-8')
import time

# Sequential operation
start_time = time.time()
for i in range(100):
    insert_row(i)
sequential_time = time.time() - start_time
print(f"Sequential execution time: {sequential_time:.2f} seconds")

# Parallel operation
start_time = time.time()
with ThreadPoolExecutor(max_workers=4) as executor:
    executor.map(insert_row, range(100))
parallel_time = time.time() - start_time
print(f"Parallel execution time: {parallel_time:.2f} seconds")
1. Use Efficient Row Keys: Design row keys to ensure even distribution of data.
2. Optimize Column Families: Minimize the number of column families and use them
effectively.
3. Tune HBase Configuration: Adjust settings for block cache, memstore, and other
parameters.
4. Monitor Performance: Utilize HBase metrics and monitoring tools to analyze
performance.
OUTPUT:
RESULT: Thus, the design and implementation of a parallel database system to manage
defense-related information, including personnel, missions, and equipment, has been
successfully executed.
Ex.No.: 5
ACTIVE DATABASES
Create an active database with facts and extract data using rules.
PROBLEM STATEMENT:
To create an active database that stores facts about a domain (e.g., a company’s employees,
departments, and projects) and to define rules that derive new facts or retrieve specific
information based on stored data.
i. Create and test the facts.
ii. Create and test the rules.
iii. Create and test the complex rules.
iv. Insert and delete facts dynamically and test the dynamic facts.
INTRODUCTION:
An active database is not just a static repository of data; it also includes a set of rules that
automatically trigger actions when certain conditions are met. This type of database is highly
interactive, allowing the data to "work" for you by deriving new facts, enforcing constraints,
or automating tasks based on predefined rules.
What Are Facts and Rules?
Facts represent the fundamental units of knowledge within the database. They are
assertions about the world that the database knows to be true. For example, in a
company database, facts might include information like "John is a manager" or "The
Sales department is located in New York." Facts are the data points from which more
complex queries and inferences can be drawn.
Rules are logical statements that define how new information can be derived from the
existing facts. They encapsulate the logic of your domain, enabling the database to
infer new knowledge or respond to queries dynamically. For example, a rule might
state, "If someone is a manager, they are eligible for a promotion." When you query
the database, it will use this rule to determine which employees are eligible for
promotions based on the current facts.
Why Use an Active Database?
Active databases, particularly those implemented in logic programming languages like
Prolog, offer several advantages:
1. Inferred Knowledge: They can infer new knowledge from existing data, which
allows for more powerful queries and deeper insights.
2. Declarative Logic: The use of declarative rules makes it easier to express complex
relationships and business logic compared to traditional procedural code.
3. Dynamic Updates: The database can automatically respond to changes in the data,
dynamically updating the derived information without requiring manual intervention.
4. Simplified Querying: Complex queries that would be difficult to write in SQL can
often be expressed more naturally and succinctly using rules.
APPLICATIONS:
STEP-BY-STEP EXPLANATION:
AIM:
To create an active database that stores facts about a company's domain (e.g., employees,
departments, and projects) and to define rules that derive new facts or retrieve specific
information based on stored data.
ALGORITHMS:
Define the schema for the active database, including tables for employees,
departments, and projects.
Identify key attributes and relationships among the tables.
Populate the database with initial facts about employees, departments, and projects.
Implement mechanisms to dynamically insert and delete facts.
Define Rules:
Create rules to derive new facts or perform specific operations when certain
conditions are met.
Ensure that rules are automatically triggered when related facts are inserted, updated,
or deleted.
IMPLEMENTATIONS:
1. Schema Design:
o Create tables: Employees, Departments, Projects.
o Example schema:
Employees (EmpID, Name, DeptID, ProjectID, Salary)
Departments (DeptID, DeptName, ManagerID)
Projects (ProjectID, ProjectName, Budget)
2. Insert Initial Facts:
o Insert sample data into the tables.
Example:
BEGIN
UPDATE Employees
END;
Update a project’s budget and verify that the manager’s salary is updated.
Example:
BEGIN
UPDATE Employees
Update a project’s budget and verify that the employees’ salaries are reduced
appropriately.
Example
Insert and Delete Facts Dynamically and Test the Dynamic Facts:
Example
Delete employees or projects and verify that rules handle these deletions correctly.
Example
Insert or delete facts, then query the database to see the derived facts or changes.
Ensure that rule evaluations are consistent with the current state of the database.
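The budget-update rule described above can be sketched with a SQLite trigger driven from Python (a minimal sketch; the 10% salary reduction policy and the sample rows are illustrative, not from the exercise):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Projects (ProjectID INT PRIMARY KEY, ProjectName TEXT, Budget REAL);
CREATE TABLE Employees (EmpID INT PRIMARY KEY, Name TEXT, ProjectID INT, Salary REAL);

-- Active rule: when a project's budget is cut, reduce the salaries of its
-- employees by 10% (the 10% figure is an illustrative policy)
CREATE TRIGGER budget_cut AFTER UPDATE OF Budget ON Projects
WHEN NEW.Budget < OLD.Budget
BEGIN
    UPDATE Employees SET Salary = Salary * 0.9
    WHERE ProjectID = NEW.ProjectID;
END;
""")
conn.execute("INSERT INTO Projects VALUES (1, 'Apollo', 100000)")
conn.execute("INSERT INTO Employees VALUES (1, 'John', 1, 50000)")

# Cutting the budget fires the trigger automatically; no extra code runs
conn.execute("UPDATE Projects SET Budget = 80000 WHERE ProjectID = 1")
salary = conn.execute("SELECT Salary FROM Employees WHERE EmpID = 1").fetchone()[0]
print(salary)  # 45000.0
```

This is the defining property of an active database: the UPDATE on Projects is an ordinary statement, yet the derived change to Employees happens without any application-level intervention.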
OUTPUT:
Initial Insertion:
After Rule Trigger (Budget Update):
Ex.No.: 6
DEDUCTIVE DATABASE
Create a knowledge database with facts and extract data using rules
PROBLEM STATEMENT:
Design and implement a deductive database to store and retrieve
information about humans and their characteristics. The database should support storing
personal information, characteristics, and relationships, and allow for inferencing to derive
new facts based on predefined rules.
Tasks:
Recursive Rule: Infer if two people are indirectly related through a common friend
INTRODUCTION:
STEP-BY-STEP PROCESS:
Facts: Basic assertions about the world, similar to tuples in a relational database.
Rules: Logical statements that define relationships between facts, allowing the
inference of new facts.
In Prolog, the knowledge base is a file that contains facts and rules. Create a file
named, for example, humans.pl:
APPLICATIONS:
1. Expert Systems
Medical Diagnosis:
Legal Reasoning:
2. Data Integration
Fraud Detection:
Compliance and Policy Enforcement:
Semantic Parsing:
Information Extraction:
Reasoning in AI:
Explanation Generation:
6. Semantic Web
7. Cognitive Computing
AIM:
To create a deductive database that stores and retrieves information about humans, their
characteristics, and their relationships, and to support inference to derive new facts from
predefined rules.
ALGORITHMS:
IMPLEMENTATION:
Assuming you are using a Prolog-like logic programming language for the deductive
database, the implementation might look like this:
% Sample Data
human(1, 'Alice', 30).
human(2, 'Bob', 25).
human(3, 'Charlie', 35).
human(4, 'Diana', 28).
human(5, 'Eve', 40).
characteristic(1, 'Experienced').
characteristic(2, 'Beginner').
characteristic(3, 'Experienced').
characteristic(4, 'Experienced').
characteristic(5, 'Senior').
relationship(1, 2, 'Colleague').
relationship(2, 3, 'Friend').
relationship(3, 4, 'Colleague').
relationship(4, 5, 'Friend').
1. Rule 1: Infer if two humans are related if one is a colleague of the other.
related(X, Y) :- relationship(X, Y, 'Colleague').
related(X, Y) :- relationship(Y, X, 'Colleague').
Queries
?- related(ID1, ID2).
% Recursive Rule: Infer if two people are indirectly related through a common friend
friend(X, Y) :- relationship(X, Y, 'Friend').
friend(X, Y) :- relationship(Y, X, 'Friend').
indirectly_related(X, Y) :- friend(X, Z), friend(Z, Y), X \= Y.
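The recursive inference can also be checked with a small bottom-up (Datalog-style) evaluation in Python over the sample facts, here ignoring the relationship type and treating every link as symmetric (both simplifying assumptions):

```python
# Direct links between person IDs, taken from the sample relationship facts
edges = {(1, 2), (2, 3), (3, 4), (4, 5)}
edges |= {(b, a) for (a, b) in edges}   # assume relationships are symmetric

# Bottom-up evaluation of:
#   related(X, Y) :- edge(X, Y).
#   related(X, Y) :- edge(X, Z), related(Z, Y), X \= Y.
related = set(edges)
changed = True
while changed:
    new = {(x, y) for (x, z) in edges for (z2, y) in related
           if z == z2 and x != y} - related
    changed = bool(new)
    related |= new

print((1, 5) in related)  # True: Alice (1) and Eve (5) are indirectly related
```

The loop repeats the rule until no new facts are derivable (a fixpoint), which is exactly how Datalog engines evaluate recursive rules.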
OUTPUT:
RESULT: Thus, the creation and querying of a deductive database to manage and infer
information about humans, their characteristics, and their relationships has been
successfully executed.
Ex.No.: 7
ETL TOOL
PROBLEM STATEMENT:
Enhance the ETL process to include data aggregation and enrichment. The goal is to extract,
transform, and load data, while also performing aggregations and adding additional
information to enrich the data set.
Tasks:
INTRODUCTION:
ETL (Extract, Transform, Load) tools are software applications used to manage the process
of extracting data from various sources, transforming it into a suitable format, and loading it
into a target database or data warehouse. ETL processes are fundamental in data integration,
allowing businesses to consolidate data from different sources for analysis, reporting, and
decision-making
Step-by-Step Guide to Enhance the ETL Process with Data Aggregation and
Enrichment
Let's assume you're using a common ETL tool like Talend, Apache NiFi, or Pentaho Data
Integration (PDI). The steps will be similar across tools, with differences mainly in the user
interface.
4. Validation
APPLICATIONS:
1. Data Warehousing
3. Data Migration
4. Data Integration
AIM: Enhance the ETL (Extract, Transform, Load) process to not only load data from
multiple sources but also perform aggregations and enrich the dataset with additional
information.
ALGORITHMS:
Extract:
o Import data from the CSV file and SQL database.
Transform:
o Aggregation:
Aggregate sales data from the SQL database by month and product to
calculate TotalQuantitySold and TotalSaleAmount.
o Enrichment:
Enrich the production data by adding a ProductCategory based on
predefined rules or external data.
Load:
o Insert the transformed and aggregated data into the Production, Sales, and
MonthlySalesSummary tables.
Validation:
IMPLEMENTATION:
Extract Data:
CSV Extraction:
import pandas as pd
# Load CSV data into DataFrame
production_df = pd.read_csv('production_data.csv')
SQL Extraction
import pandas as pd
import sqlalchemy
# Connect to SQL database
engine = sqlalchemy.create_engine('mysql+pymysql://user:password@host/dbname')
sales_df = pd.read_sql('SELECT * FROM sales_data', engine)
Transform Data:
Aggregation:
Load Data:
Validation:
Check Aggregation Results:
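A pandas-free sketch of the aggregation step, using only the standard library (the sales rows below are hypothetical examples of what the extract step might produce):

```python
import csv
import io
from collections import defaultdict

# Hypothetical sales rows as they might come out of the extract step
sales_csv = io.StringIO(
    "SaleDate,ProductID,Quantity,SaleAmount\n"
    "2024-01-05,P1,3,30.0\n"
    "2024-01-20,P1,2,20.0\n"
    "2024-02-02,P2,1,15.0\n"
)

# Aggregate by (month, product): TotalQuantitySold and TotalSaleAmount
totals = defaultdict(lambda: [0, 0.0])
for row in csv.DictReader(sales_csv):
    key = (row["SaleDate"][:7], row["ProductID"])  # (YYYY-MM, product)
    totals[key][0] += int(row["Quantity"])
    totals[key][1] += float(row["SaleAmount"])

summary = {k: tuple(v) for k, v in totals.items()}
print(summary[("2024-01", "P1")])  # (5, 50.0)
```

With pandas the same step is a one-line groupby; the dictionary version just makes the (month, product) grouping key and the two running sums explicit.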
OUTPUT:
Sample Output
1. Production Table:
2. Sales Table:
3. MonthlySalesSummary Table:
RESULT: Thus, the enhancement of the ETL (Extract, Transform, Load) process to not only
load data from multiple sources but also perform aggregations and enrich the dataset
with additional information has been successfully executed.
Ex.No.: 8
ORACLE DATABASE
PROBLEM STATEMENT:
This lab simulates managing a large dataset of employee information in a company where
records are maintained separately for different departments (employee_department1 and
employee_department2). Due to organizational requirements, it is necessary to combine these
records and retrieve specific information based on various criteria, such as age or salary. The
goal is to demonstrate efficient data management and retrieval techniques using Oracle DB
when working with voluminous data.
INTRODUCTION:
Oracle Database is a robust and widely used relational database management system
(RDBMS) known for its scalability, performance, and security features. It supports a wide
range of data management tasks, including storing, retrieving, and manipulating large
volumes of data. Oracle's SQL language and advanced features like PL/SQL, partitioning,
and indexing make it a popular choice for enterprise applications.
Ensure you have access to an Oracle Database instance. You can use Oracle SQL Developer
or SQL*Plus for executing SQL queries.
You will create two tables to simulate storing employee records separately for different
departments.
The UNION operation will combine the records from both tables. The UNION operator
removes duplicates by default, but you can use UNION ALL if duplicates are not a concern.
You can apply various SQL queries to retrieve specific information from the combined
dataset.
APPLICATIONS:
3. Data Warehousing
4. E-commerce
6. Healthcare
8. Telecommunications
AIM:
To efficiently manage and retrieve a large dataset of employee information stored in
separate tables for different departments using Oracle DB.
ALGORITHM:
Create Tables:
Create two tables, employee_department1 and employee_department2, to store
employee data separately for different departments.
Insert Data:
Insert a large amount of sample employee data into these tables to simulate a real-
world scenario with voluminous data.
Retrieve and display a subset of the data from both tables to verify that the data has
been correctly inserted.
Combine Data:
Use the SQL UNION operation to combine the data from both tables. This operation
eliminates duplicate records and provides a unified view of all employee data.
Retrieve and display the combined data, applying filters to demonstrate data retrieval
based on specific criteria such as age or salary.
Retrieve and display data for employees aged between 30 and 40.
Retrieve and display data for employees with a salary greater than 60,000.
IMPLEMENTATION:
SELECT * FROM (
SELECT * FROM employee_department1
UNION
SELECT * FROM employee_department2
) WHERE age BETWEEN 30 AND 40;
SELECT * FROM (
SELECT * FROM employee_department1
UNION
SELECT * FROM employee_department2
) WHERE salary > 60000;
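The same combine-then-filter pattern can be verified with SQLite standing in for Oracle (a minimal sketch; the column names and sample rows are hypothetical, since the exercise does not list the schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
for t in ("employee_department1", "employee_department2"):
    conn.execute(f"CREATE TABLE {t} (emp_id INT, name TEXT, age INT, salary REAL)")

conn.executemany("INSERT INTO employee_department1 VALUES (?,?,?,?)",
                 [(1, 'Ravi', 35, 55000), (2, 'Mala', 28, 45000)])
conn.executemany("INSERT INTO employee_department2 VALUES (?,?,?,?)",
                 [(3, 'Kumar', 38, 70000), (4, 'Anu', 45, 65000)])

# Combine both tables with UNION, then filter the unified result by age
rows = conn.execute("""
    SELECT * FROM (
        SELECT * FROM employee_department1
        UNION
        SELECT * FROM employee_department2
    ) WHERE age BETWEEN 30 AND 40""").fetchall()
print(sorted(r[1] for r in rows))  # ['Kumar', 'Ravi']
```

Wrapping the UNION in a subquery lets one WHERE clause filter the combined data set, instead of repeating the condition against each department table.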
OUTPUT:
All unique employee records from both departments are displayed after performing
the UNION operation. Example output:
A filtered set of combined data is displayed based on the specified criteria (e.g., age
between 30 and 40, or salary greater than 60,000). Example output:
Output for Employees Aged Between 30 and 40:
RESULT: Thus, the efficient management and retrieval of a large dataset of employee
information stored in separate tables for different departments using Oracle DB has
been successfully demonstrated.
Ex.No.: 9
NO SQL
PROBLEM STATEMENT:
You are the database administrator for a manufacturing company that produces various
electronic components. The company wants to adopt a NoSQL database to handle its
dynamic and complex data requirements more efficiently, including inventory management,
employee information, and production tracking. Your task is to design a MongoDB database
that meets these requirements and perform various CRUD operations to demonstrate its
capabilities.
INTRODUCTION:
NoSQL (Not Only SQL) databases are designed to handle large volumes of unstructured,
semi-structured, or structured data with high scalability and flexibility. Unlike traditional
relational databases that use SQL, NoSQL databases support various data models like key-
value, document, column-family, and graph. They are particularly useful in scenarios where
data needs to be distributed across many servers, or when the data model changes frequently,
such as in real-time analytics, content management, and Internet of Things (IoT) applications.
Before you begin, ensure you have access to an Oracle NoSQL Database instance. You can
run Oracle NoSQL on-premises, or you can use Oracle NoSQL Cloud Service.
You can insert records into these tables using Oracle NoSQL’s API. Below is an example in
Python:
Oracle Database supports integration with Oracle NoSQL Database, allowing you to expose
NoSQL data as relational tables. This can be achieved through Oracle SQL Access for
NoSQL.
Once the external tables are created, you can run SQL queries on the NoSQL data as if it
were in Oracle Database.
APPLICATIONS:
3. E-commerce
6. Mobile Applications
AIM:
To leverage MongoDB to efficiently manage and analyze data related to inventory, employee
information, and production tracking.
ALGORITHM:
Database Design:
CRUD Operations:
Aggregation Pipelines:
Use aggregation to analyze and summarize data (e.g., total inventory value, employee
performance metrics).
IMPLEMENTATION:
Schema Design:
Products Collection:
{
"_id": ObjectId,
"productName": "string",
"category": "string",
"price": "number",
"stockQuantity": "number"
}
Employees Collection:
{
"_id": ObjectId,
"employeeName": "string",
"position": "string",
"hireDate": "ISODate",
"salary": "number"
}
ProductionRecords Collection:
{
"_id": ObjectId,
"productId": ObjectId,
"quantityProduced": "number",
"productionDate": "ISODate"
}
2. CRUD Operations
Create:
// Connect to MongoDB
const db = connect('mongodb://localhost:27017/manufacturing');
// Insert a product (the field values are illustrative)
db.Products.insertOne({ productName: "Smartphone", category: "Electronics", price: 299, stockQuantity: 50 });
// Find all products
db.Products.find({}).toArray();
// Delete a product
db.Products.deleteOne({ productName: "Smartphone" });
// Delete an employee
db.Employees.deleteOne({ employeeName: "Alice Smith" });
3. Aggregation Pipelines
db.ProductionRecords.aggregate([
{
$lookup: {
from: "Products",
localField: "productId",
foreignField: "_id",
as: "productDetails"
}
},
{ $unwind: "$productDetails" },
{
$group: {
_id: "$productDetails.category",
totalQuantityProduced: { $sum: "$quantityProduced" }
}
}
]).toArray();
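The pipeline's logic can be traced in pure Python, which makes each stage's job explicit (the sample documents are hypothetical, with _id values simplified to integers):

```python
# Pure-Python rendering of the $lookup + $unwind + $group pipeline above
products = [
    {"_id": 1, "productName": "Resistor", "category": "Passive"},
    {"_id": 2, "productName": "CPU", "category": "Active"},
]
production_records = [
    {"productId": 1, "quantityProduced": 100},
    {"productId": 1, "quantityProduced": 50},
    {"productId": 2, "quantityProduced": 30},
]

by_id = {p["_id"]: p for p in products}  # what $lookup joins against
totals = {}
for rec in production_records:
    # $lookup + $unwind: attach the matching product document to each record
    category = by_id[rec["productId"]]["category"]
    # $group: accumulate quantityProduced per product category
    totals[category] = totals.get(category, 0) + rec["quantityProduced"]

print(totals)  # {'Passive': 150, 'Active': 30}
```

In MongoDB the server performs the same join and group server-side, which avoids shipping every production record to the client.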
4. Indexing
Create Indexes:
// Index frequently queried fields (the field choices are illustrative)
db.Products.createIndex({ productName: 1 });
db.Employees.createIndex({ employeeName: 1 });
OUTPUT:
Ex.No.: 10
Build web applications using the Java Servlet API for storing data in databases that can
be queried using a variant of SQL.
PROBLEM STATEMENT:
You are hired to develop a web application for a company to manage its employee
information. The application should allow the administrative staff to add, view, update, and
delete employee records. This web application should be built using PHP for server-side
processing and MySQL for storing employee data.
INTRODUCTION:
In the context of web applications, Java Servlet API is commonly used to build dynamic web
applications that interact with databases. Java Servlets handle HTTP requests and responses,
while JDBC (Java Database Connectivity) allows Java applications to interact with relational
databases.
STEP-BY-STEP PROCESS:
Before you start, ensure you have the following tools and software installed:
2. Create a Database
First, create a database that your web application will interact with.
1. Open Eclipse and go to File > New > Dynamic Web Project.
2. Enter the project name (e.g., UserManagementApp), select the target runtime (e.g.,
Apache Tomcat), and click Finish.
1. Download the JDBC driver for your database (e.g., MySQL Connector/J for MySQL).
2. Right-click on your project, go to Build Path > Configure Build Path, and add the
JDBC driver JAR file to the project’s classpath.
1. Deploy the Web Application: Right-click on the project and select Run As > Run on
Server.
2. Access the Application: Open a web browser and go to
http://localhost:8080/UserManagementApp/register.jsp to register a user, or
http://localhost:8080/UserManagementApp/login.jsp to log in.
APPLICATIONS:
1. E-commerce Platforms
5. Educational Platforms
AIM:
To develop a web application that allows administrative staff to add, view, update, and
delete employee records, using PHP for server-side processing and MySQL for storage.
ALGORITHM:
Setup Environment
Install a local server environment like XAMPP or WAMP that includes PHP and
MySQL.
Create a MySQL database and table for storing employee information.
Establish a connection to the MySQL database using PHP’s mysqli or PDO extension.
Create functions for CRUD operations.
Design HTML forms for adding, viewing, updating, and deleting employee records.
Use PHP to handle form submissions and display data.
IMPLEMENTATION:
1. Install XAMPP/WAMP:
o Download and install XAMPP (Windows, Linux) or WAMP (Windows) from
their official sites.
o Start Apache and MySQL services from the control panel.
2. Create Database and Table:
CREATE DATABASE company_db;
USE company_db;
CREATE TABLE employees (
    id INT AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(100) NOT NULL,
    position VARCHAR(100),
    department VARCHAR(100),
    email VARCHAR(100) NOT NULL
);
db_connect.php
<?php
$servername = "localhost";
$username = "root";
$password = "";
$database = "company_db";
// Create connection
$conn = new mysqli($servername, $username, $password, $database);
// Check connection
if ($conn->connect_error) {
die("Connection failed: " . $conn->connect_error);
}
?>
index.php
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Employee Management</title>
</head>
<body>
<h1>Employee Management System</h1>
<h2>Add Employee</h2>
<form action="add_employee.php" method="post">
Name: <input type="text" name="name" required><br>
Position: <input type="text" name="position"><br>
Department: <input type="text" name="department"><br>
Email: <input type="email" name="email" required><br>
<input type="submit" value="Add Employee">
</form>
<h2>View Employees</h2>
<?php
include 'db_connect.php';
$sql = "SELECT * FROM employees";
$result = $conn->query($sql);
if ($result->num_rows > 0) {
    echo "<table border='1'><tr><th>ID</th><th>Name</th><th>Position</th><th>Department</th><th>Email</th><th>Actions</th></tr>";
    while ($row = $result->fetch_assoc()) {
        echo "<tr><td>{$row['id']}</td><td>{$row['name']}</td><td>{$row['position']}</td><td>{$row['department']}</td><td>{$row['email']}</td>";
        echo "<td><a href='update_employee.php?id={$row['id']}'>Update</a> | <a href='delete_employee.php?id={$row['id']}'>Delete</a></td></tr>";
    }
    echo "</table>";
} else {
    echo "No employees found.";
}
?>
</body>
</html>
add_employee.php
<?php
include 'db_connect.php';
$name = $conn->real_escape_string($_POST['name']);
$position = $conn->real_escape_string($_POST['position']);
$department = $conn->real_escape_string($_POST['department']);
$email = $conn->real_escape_string($_POST['email']);
$sql = "INSERT INTO employees (name, position, department, email) VALUES ('$name',
'$position', '$department', '$email')";
if ($conn->query($sql) === TRUE) {
    echo "Employee added successfully.";
} else {
    echo "Error: " . $conn->error;
}
$conn->close();
?>
update_employee.php
<?php
include 'db_connect.php';
if ($_SERVER["REQUEST_METHOD"] == "POST") {
    $id = (int)$_POST['id'];
    $name = $conn->real_escape_string($_POST['name']);
    $position = $conn->real_escape_string($_POST['position']);
    $department = $conn->real_escape_string($_POST['department']);
    $email = $conn->real_escape_string($_POST['email']);
    $sql = "UPDATE employees SET name='$name', position='$position', department='$department', email='$email' WHERE id=$id";
    $conn->query($sql);
    $conn->close();
} else {
$id = (int)$_GET['id'];
$sql = "SELECT * FROM employees WHERE id=$id";
$result = $conn->query($sql);
$employee = $result->fetch_assoc();
}
?>
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Update Employee</title>
</head>
<body>
<h1>Update Employee</h1>
<form action="update_employee.php" method="post">
<input type="hidden" name="id" value="<?php echo $employee['id']; ?>">
Name: <input type="text" name="name" value="<?php echo $employee['name']; ?>"
required><br>
Position: <input type="text" name="position" value="<?php echo $employee['position'];
?>"><br>
Department: <input type="text" name="department" value="<?php echo
$employee['department']; ?>"><br>
Email: <input type="email" name="email" value="<?php echo $employee['email']; ?>"
required><br>
<input type="submit" value="Update Employee">
</form>
</body>
</html>
delete_employee.php
<?php
include 'db_connect.php';
$id = (int)$_GET['id'];
$sql = "DELETE FROM employees WHERE id=$id";
$conn->query($sql);
$conn->close();
?>
OUTPUT:
Adding an Employee
After filling out the form and submitting, you would see:
Viewing Employees
You would see a table with employee details and action links:
Updating an Employee
Deleting an Employee
Security Measures
SQL Injection Prevention: Use prepared statements to prevent SQL injection. For
instance, modify add_employee.php as follows:
$stmt = $conn->prepare("INSERT INTO employees (name, position, department, email) VALUES (?, ?, ?, ?)");
$stmt->bind_param("ssss", $_POST['name'], $_POST['position'], $_POST['department'], $_POST['email']);
$stmt->execute();
$stmt->close();
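The same injection-prevention idea can be demonstrated with Python's DB-API, where ? placeholders bind user input as data rather than splicing it into the SQL text (a minimal stdlib sketch; the table and input are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")

# The malicious input is bound as data, never executed as SQL,
# so it is stored verbatim and the table is untouched
evil = "x'); DROP TABLE employees; --"
conn.execute("INSERT INTO employees (name, email) VALUES (?, ?)", (evil, "a@b.c"))

count = conn.execute("SELECT COUNT(*) FROM employees").fetchone()[0]
name = conn.execute("SELECT name FROM employees").fetchone()[0]
print(count)        # 1 -- the table survived, with one row
print(name == evil) # True -- the payload was stored as plain text
```

Whether via mysqli's bind_param or DB-API placeholders, the principle is identical: the query text and the user-supplied values travel to the database separately.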