Integrated CA OOC DB Report
DB (20%)
Late Submission Penalty: Late submissions will be accepted up to 5 calendar days after the deadline. All late submissions are subject to a penalty of 10% of the mark awarded. Submissions received more than 5 calendar days after the deadline will not be accepted and a mark of 0% will be awarded.
Method of Submission: This assignment is submitted via Moodle.
Feedback Method: Results posted in the Moodle gradebook.
CONTENT
1. Introduction
2. Object-Oriented Design: Overview
3. Database Interaction and MySQL Interaction: Overview
4. CA1 Updated Part
5. References
all tables, values, and data were stored in the database in real time while the system was running.
Then we had to ensure that user inputs were valid, followed the stated rules, and were stored correctly in the database. We also had to ensure that only admins with special authority could access all user data, to preserve integrity. Another challenge was handling errors during data entry and making sure users entered the right data. Further challenges included creating the right schema and tables, normalising the data, and making the right design decisions.
Strategies Used to Overcome the Challenges
Various strategies were used to overcome the challenges.
▪ Researching and using available sources to form an idea of what to create.
▪ Creating a schema and using iterative design, and using GitHub to pool the code, develop ideas, and complete the system's program and design.
▪ Using OOP principles in Java to meet the requirements of the system.
▪ Implementing role-based access for different types of users.
▪ Using MySQL queries to check and test table creation and data storage.
▪ Splitting different functions across multiple Java classes, making the system easier to design.
▪ Using error-handling mechanisms to catch errors and faulty user inputs.
▪ Keeping the data secure and accessible only to admins by applying the encapsulation principle of OOP.
▪ Writing a separate class just to check whether the database connection was made successfully; it reported whether the connection succeeded or whether there was an error with the connection.
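The separate connection-check class described in the last bullet could be sketched as follows. This is a minimal illustration, not the report's actual code: the class and method names, URL, and credentials are assumptions.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

// Hypothetical sketch of a class whose only job is to report
// whether a JDBC connection to the MySQL database can be opened.
public class DatabaseConnection {

    // Attempts to open a connection and reports success or failure.
    public static boolean testConnection(String url, String user, String password) {
        try (Connection conn = DriverManager.getConnection(url, user, password)) {
            System.out.println("Connection was made successfully.");
            return true;
        } catch (SQLException e) {
            System.out.println("Error with the connection: " + e.getMessage());
            return false;
        }
    }

    public static void main(String[] args) {
        // Placeholder URL and credentials for the user_management schema.
        testConnection("jdbc:mysql://localhost:3306/user_management", "root", "password");
    }
}
```

Isolating the check in one class keeps connection handling out of the data-access code, so a failed connection is reported once, in one place.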
calculations. The tables are expandable, and new functions can be added as needed.
Logical Design
The User Management system interacts with the MySQL database to store and manage the data. A database schema was created that stores information about users, including their personal details, user types, passwords, and operation logs. The connection between Java and MySQL is handled by the DatabaseConnection class, while the interactions and the workload are handled by the UserDAO class.
Three main tables were created: the user_types table for the types of users, the users table for the users themselves, and the operations_log table for recording the operations.
➢ user_types Table – defines the types of users in the system: Admin and Regular. They are identified by type_id and type_name, and these values determine whether the user signs in as an admin or as a regular user. In this table, type_id is the Primary Key.
➢ users Table – stores essential information about each user, including the username, password, email, gross_income, etc. The type_id column links the user to a user type in the user_types table as a foreign key. In this table, user_id is the Primary Key.
This is the SQL code to create the table:
CREATE TABLE users (
user_id INT PRIMARY KEY AUTO_INCREMENT,
username VARCHAR(50) UNIQUE NOT NULL,
password VARCHAR(255) NOT NULL,
first_name VARCHAR(50),
last_name VARCHAR(50),
email VARCHAR(100),
type_id INT,
gross_income DECIMAL(10,2) DEFAULT 0.00,
tax_credits DECIMAL(10,2) DEFAULT 0.00,
FOREIGN KEY (type_id) REFERENCES user_types(type_id)
);
This is the users table:
Column Name        Data Type       Constraints
user_id (PK)       INT             PRIMARY KEY, AUTO_INCREMENT
username           VARCHAR(50)     UNIQUE, NOT NULL
password           VARCHAR(255)    NOT NULL
first_name         VARCHAR(50)
last_name          VARCHAR(50)
email              VARCHAR(100)
type_id (FK)       INT             FOREIGN KEY REFERENCES user_types(type_id)
gross_income       DECIMAL(10,2)   DEFAULT 0.00
tax_credits        DECIMAL(10,2)   DEFAULT 0.00
➢ operations_log Table – this table logs the actions performed by users, such as tax calculations and profile modifications. In this table, operation_id is the Primary Key. Each log entry includes an operation_type, operation_details, and an operation_timestamp, and the table references users through user_id.
operation_details      TEXT
operation_timestamp    TIMESTAMP DEFAULT CURRENT_TIMESTAMP
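Based on the columns described above, the operations_log table can be sketched in SQL as follows. This is a reconstruction, not the report's exact listing; in particular, the VARCHAR length for operation_type is an assumption.

```sql
-- Hypothetical reconstruction of the operations_log table described above.
CREATE TABLE operations_log (
    operation_id INT PRIMARY KEY AUTO_INCREMENT,
    user_id INT,
    operation_type VARCHAR(50),   -- e.g. 'Login', 'Profile Update'; length assumed
    operation_details TEXT,
    operation_timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    FOREIGN KEY (user_id) REFERENCES users(user_id)
);
```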
NORMALISATION
Normalisation is the process of organising a database to reduce dependency and redundancy. It ensures data integrity as well as efficiency, since redundancy may cause insertion, deletion, and update anomalies (Microsoft, 2024). The normal forms are a series of guidelines that help ensure the data is organised efficiently. The schema we created will be examined from Unnormalised Form (UNF) through First, Second, and Third Normal Form (1NF, 2NF, 3NF).
The schema we created has three different entities. They are ‘user’, ‘user_types’ and
‘operations_log’.
Table: user_types
Table: user
user_id (PK) 1
username CCT
password Dublin
first_name Admin
last_name User
email [email protected]
type_id (FK) 1
gross_income 0.00
tax_credits 0.00
Table: operations_log
operation_id (PK)   user_id (FK)   operation_type   operation_details   operation_timestamp
1                   1              Login            Successful login    2024-12-19 00:00:00
2                   1              Profile Update   Changed email       2024-12-19 01:00:00
These schema tables are the entities, and they contain the dummy values passed to them. We now check them against the normal forms.
➢ 1st Normal Form – All the tables are in First Normal Form, as every column contains atomic values, there are no repeating groups, and each row is uniquely identifiable. Therefore, they are in 1NF.
➢ 2nd Normal Form – They are already in 2NF as there are no partial key
dependencies. All the non-key attributes in all the tables completely depend on
the Primary Key. Therefore, they are in 2NF.
➢ 3rd Normal Form – All the entities are already in 3NF as well, because for 3NF every non-key attribute must depend only on the Primary Key and not on other non-key attributes; that is, there must be no transitive dependencies. All the entities follow this rule, so they are in 3NF.
Is Normalisation Necessary?
In this case, further normalisation is not necessary, as all the entities are already in 3NF and follow all the rules; the normalisation process is complete.
CHEN NOTATION
In this Chen diagram, the entities are related to each other as follows.
• User Types to Users – It has a one-to-many relation as a user type can have
multiple users.
• Users to Operations Log – It has a one-to-many relation as a user can have
multiple operation logs.
SQL STATEMENTS
The system connects to the MySQL database and stores all the data and operations. The following SQL queries demonstrate how the system works and stores the data in the database.
Then this will create the user_types table in the database just created. The table and its contents are shown on the left side under the schema.
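The creation statement for user_types might look like this. It is a sketch based on the columns named in the text (type_id and type_name); the VARCHAR length is an assumption.

```sql
-- Hypothetical sketch of the user_types table.
CREATE TABLE user_types (
    type_id INT PRIMARY KEY AUTO_INCREMENT,
    type_name VARCHAR(20) NOT NULL   -- e.g. 'Admin' or 'Regular'; length assumed
);
```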
Next, this query creates a table for users to store the users' data. The table is also shown on the left side under the schema.
This query is to create the Operations Log table. This is also shown on the left side under
schema.
This query will insert dummy data values into the user_types table and the users table.
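The insert statements could be sketched as follows, using the dummy values shown in the sample tables above (the redacted email placeholder is kept as it appears in the report):

```sql
-- Hypothetical dummy data matching the sample rows shown earlier.
INSERT INTO user_types (type_name) VALUES ('Admin'), ('Regular');

INSERT INTO users (username, password, first_name, last_name, email, type_id)
VALUES ('CCT', 'Dublin', 'Admin', 'User', '[email protected]', 1);
```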
This live view shows the values entered into the user_types table: the values passed through the code are inserted into the table, as we can see.
This live view shows the values entered into the users table, which now contains the values passed through the query code.
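The live views described above correspond to simple select queries, along the lines of:

```sql
-- Illustrative queries to view the inserted rows.
SELECT * FROM user_types;
SELECT * FROM users;
```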
The next two queries verify whether the users table was created and drop the foreign key from the operations_log table.
This query adds a new Foreign Key to the operations_log table, adding user_id as the key.
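These verification and foreign-key changes could look like the following sketch. The constraint name is an assumption (MySQL auto-generates names such as `operations_log_ibfk_1` when none is given), since the original screenshots are not reproduced here.

```sql
-- Verify the users table exists.
SHOW TABLES LIKE 'users';

-- Drop the existing foreign key from operations_log (constraint name assumed).
ALTER TABLE operations_log DROP FOREIGN KEY operations_log_ibfk_1;

-- Add user_id back as a foreign key referencing users.
ALTER TABLE operations_log
    ADD FOREIGN KEY (user_id) REFERENCES users(user_id);
```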
The next queries serve several purposes. They create a trigger that fires before a row in the users table is deleted. The next part of the query deletes all log operations from operations_log where the user_id matches the user that is about to be deleted.
The last five lines are optional queries which can be executed if needed: they copy the logs of the user that is about to be deleted into an archive table, and then delete the original logs of that user from the operations_log table.
The END line marks the end of the trigger logic.
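The trigger logic described above could be sketched as follows. This is a reconstruction from the prose, not the report's exact listing; the trigger name and the archive table name are assumptions.

```sql
-- Hypothetical sketch of the BEFORE DELETE trigger described above.
DELIMITER //
CREATE TRIGGER before_user_delete
BEFORE DELETE ON users
FOR EACH ROW
BEGIN
    -- Optional: copy the user's logs into an archive table (name assumed).
    INSERT INTO operations_log_archive
        SELECT * FROM operations_log WHERE user_id = OLD.user_id;
    -- Delete the original logs of the user being removed.
    DELETE FROM operations_log WHERE user_id = OLD.user_id;
END //
DELIMITER ;
```

Deleting the child rows inside the trigger means the parent row in users can then be removed without violating the foreign key.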
Database File
CREATE DATABASE IF NOT EXISTS user_management;
USE user_management;
DELIMITER //
1. Schema Design
Table 1: Employee Table
Column Name Data Type Constraints
employee_ID          INT             PRIMARY KEY, AUTO_INCREMENT
first_name           VARCHAR(50)     NOT NULL
last_name            VARCHAR(50)     NOT NULL
date_of_birth        DATE            NOT NULL
gender               CHAR(1)         NOT NULL
email                VARCHAR(100)    UNIQUE, NOT NULL
phone_number         VARCHAR(15)     UNIQUE, NOT NULL
employment_status    VARCHAR(20)     NOT NULL
start_date           DATE            NOT NULL
end_date             DATE            NULLABLE
manager_ID           INT             FOREIGN KEY REFERENCES employee_ID
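A CREATE TABLE statement matching Table 1 might look like this. It is a sketch derived directly from the column list above, including the recursive manager_ID foreign key:

```sql
-- Sketch of the Employee table from Table 1.
CREATE TABLE Employee (
    employee_ID INT PRIMARY KEY AUTO_INCREMENT,
    first_name VARCHAR(50) NOT NULL,
    last_name VARCHAR(50) NOT NULL,
    date_of_birth DATE NOT NULL,
    gender CHAR(1) NOT NULL,
    email VARCHAR(100) UNIQUE NOT NULL,
    phone_number VARCHAR(15) UNIQUE NOT NULL,
    employment_status VARCHAR(20) NOT NULL,
    start_date DATE NOT NULL,
    end_date DATE NULL,
    manager_ID INT,
    -- Recursive relationship: a manager is also an employee.
    FOREIGN KEY (manager_ID) REFERENCES Employee(employee_ID)
);
```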
The schema's data structure consists of four entities: Employee, Address, Job, and Language. It supports one-to-many and recursive relationships and employs primary and foreign keys to provide referential integrity (Dohmen, et al., 2024). This structure is scalable, free of redundancy, and based on relational database concepts, making it ideal for effective and accurate data administration.
The ERD graphically depicts the database structure, outlining the entities, their properties, and the relationships between Language, Job, Address, and Employee (Han & Wang, 2024). The Employee entity has a one-to-many relationship with the Address entity, since any employee might have multiple addresses. Since an employee may hold various job records over time, the Employee-Job connection is also one-to-many. Employees fluent in more than one language are accommodated through a one-to-many relationship with the Language entity. In addition, the manager_ID property of the Employee entity represents a hierarchical link between employees and managers in the form of a recursive one-to-many relationship. These linkages make flexible and accurate data representation possible, reflecting real-world organisational hierarchies and preserving referential integrity.
3. Normalisation
Table 5: 1NF
Table Name Column Names
Employee employee_ID (PK), first_name, last_name,
date_of_birth, gender, email, phone_number,
language
Address address_ID (PK), employee_ID (FK), address, city,
country
Job job_ID (PK), employee_ID (FK), job_title,
department, salary
Table 6: 2NF
Table Name Column Names
Employee employee_ID (PK), first_name, last_name,
date_of_birth, gender, email, phone_number
Language language_ID (PK), employee_ID (FK),
language
Table 7: 3NF
Table Name Column Names
Employee employee_ID (PK), first_name, last_name,
date_of_birth, gender, email, phone_number,
manager_ID (FK)
Language language_ID (PK), employee_ID (FK), language
Normalisation is key for effective data organisation, redundancy elimination, and logical dependency assurance in relational databases (Khattabi, et al., 2024). We removed transitive dependencies, ensured atomicity, and removed partial dependencies by moving through 1NF, 2NF, and 3NF. This procedure improved query performance, reduced data duplication, and strengthened data integrity. When the data is partitioned into related tables such as Employee, Address, Job, and Language, the database becomes more adaptable and easier to maintain, enabling consistent and scalable data management.
References
Döhmen, T., Geacu, R., Hulsebos, M. & Schelter, S., 2024. SchemaPile: A Large Collection of Relational Database Schemas. Proceedings of the ACM on Management of Data, Volume 172, pp. 1-25.
Han, Z. & Wang, J., 2024. Knowledge enhanced graph inference network-based entity-relation extraction and knowledge graph construction for industrial domain. Frontiers of Engineering Management, Volume 11, pp. 143-158.
Khattabi, M.-Z.E. et al., 2024. Understanding the Interplay Between Metrics, Normalization Forms, and Data Distribution in K-Means Clustering: A Comparative Simulation Study. Arabian Journal for Science and Engineering, Volume 49, pp. 2987-3007.
GeeksForGeeks, 2017. Inheritance in Java. [online] GeeksforGeeks. Available at: https://www.geeksforgeeks.org/inheritance-in-java/.
Janssen, T., 2023. OOP Concept for Beginners: What is Encapsulation. [online] Stackify. Available at: https://stackify.com/oop-concept-for-beginners-what-is-encapsulation/.
Taylor, T., 2023. Understanding the role of polymorphism in OOP. [online] TechTarget. Available at: https://www.techtarget.com/searchapparchitecture/tip/Understanding-the-role-of-polymorphism-in-OOP.
Microsoft, 2024. Database normalization description. [online] learn.microsoft.com. Available at: https://learn.microsoft.com/en-us/office/troubleshoot/access/database-normalization-description.
Simplilearn, n.d. What is Normalization in SQL? 1NF, 2NF, 3NF and BCNF. [online] Available at: https://www.simplilearn.com/tutorials/sql-tutorial/what-is-normalization-in-sql.
Studytonight, 2019. 1NF, 2NF, 3NF and BCNF in Database Normalization. [online] Studytonight.com. Available at: https://www.studytonight.com/dbms/database-normalization.php.