Case Study - History of DBMS
Before computers, everything was done the old-fashioned way—by hand. If you wanted
to know which books were borrowed or who borrowed them, you’d have to dig through a
pile of paper records. It was slow, full of errors, and if you lost a book register, all your
data was gone.
The problem: Searching through piles of paper was tough and time-consuming. Plus, if
you made a mistake, it wasn’t easy to fix. It was clear that a better system was needed.
In the 1960s, the first computers were used to manage data. People started using flat files, which were basically just lists of records stored in plain text files. Imagine storing all the library records in an Excel sheet: each row is a record, and each column is a piece of information (book title, author, availability, etc.).
The issue: While flat files were a step up from paper, they still had their flaws. For one,
they could get really messy as the data grew. Searching for a specific book or member
was still pretty slow. Plus, you had to repeat the same data over and over, like listing a
book's title and author each time a new transaction happened.
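To make that concrete, here is a minimal sketch of a flat-file "database" in Python (the file name and fields are illustrative, not from any real system). Notice how the book's title and author are repeated in every transaction, and how finding anything means scanning the whole file:

```python
import csv

# A flat-file "database": every transaction repeats the book's title and
# author, because there is nowhere else to keep them.
rows = [
    ["The Hobbit", "J.R.R. Tolkien", "alice", "2024-01-03"],
    ["The Hobbit", "J.R.R. Tolkien", "bob",   "2024-02-11"],
    ["Dune",       "Frank Herbert",  "alice", "2024-02-20"],
]

with open("transactions.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)

# Finding one book means scanning every line from the top: a linear
# search that only gets slower as the file grows.
with open("transactions.csv", newline="") as f:
    matches = [row for row in csv.reader(f) if row[0] == "The Hobbit"]

print(matches)  # the title and author are duplicated in every matching row
```

If an author's name were misspelled, you would have to fix it in every single row that mentions it, which is exactly the redundancy problem described above.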
In the 1970s, a major change happened. Edgar Codd at IBM introduced the relational
model. Instead of stuffing all data into one big file, it organized data into separate tables.
Think of a table like a simple list that stores related information. For example, one table
could store book details, another for members, and a third for transactions.
● Why it worked: This new system was cleaner and made it much easier to
manage. Each table had keys that linked them together. You could now look up a
book’s details in one table, and then link it to a transaction in another table.
Searching and updating data became faster and more reliable. The introduction of SQL (Structured Query Language) made managing this data easier through simple commands, as the sketch below shows.
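Here is a minimal sketch of that idea using Python's built-in sqlite3 module (the table and column names are illustrative): each kind of fact lives in its own table, and keys tie the tables together.

```python
import sqlite3

# Three separate tables, one per kind of fact, linked by keys.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE books        (book_id INTEGER PRIMARY KEY, title TEXT, author TEXT);
    CREATE TABLE members      (member_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE transactions (txn_id INTEGER PRIMARY KEY,
                               book_id   INTEGER REFERENCES books(book_id),
                               member_id INTEGER REFERENCES members(member_id),
                               borrowed_on TEXT);
""")
con.execute("INSERT INTO books VALUES (1, 'The Hobbit', 'J.R.R. Tolkien')")
con.execute("INSERT INTO members VALUES (1, 'Alice')")
con.execute("INSERT INTO transactions VALUES (1, 1, 1, '2024-01-03')")

# Keys let SQL join the tables back together on demand; the book's title
# is stored once, no matter how many times the book is borrowed.
for row in con.execute("""
    SELECT b.title, m.name, t.borrowed_on
    FROM transactions t
    JOIN books   b ON b.book_id   = t.book_id
    JOIN members m ON m.member_id = t.member_id
"""):
    print(row)  # ('The Hobbit', 'Alice', '2024-01-03')
```

Because the title lives in one place, correcting it means updating a single row in books rather than every transaction that mentions it.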
By the 1980s, SQL had become the standard language for working with databases (ANSI standardized it in 1986). Companies like Oracle and IBM (with DB2) developed commercial database systems that businesses used to handle their growing data needs; open-source relational systems such as MySQL followed in the 1990s. These systems were much more efficient and scalable.
By the 1990s, things were getting more complex, and so were the databases. The Object-Oriented Database (OODB) model came into the picture. Instead of just storing data in tables, you could now store it as objects (just like in object-oriented programming). So, instead of having just a list of book titles and authors, you could have a book object that includes the title, the author, and even functions to check if the book is available or to borrow it, as the sketch further below shows.
● What was different: This made the database much better at handling complex
data like multimedia files, videos, and other large objects. But for most business
applications, relational databases were still the go-to because they were simpler
and more efficient for handling large amounts of structured data.
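A rough in-memory sketch of that object idea (the class and method names here are hypothetical, not a real OODB API): the data and the behavior that belongs to it live together in one object.

```python
# A book object that carries both its data and its behavior.
class Book:
    def __init__(self, title, author):
        self.title = title
        self.author = author
        self.borrowed_by = None  # None means the book is on the shelf

    def is_available(self):
        return self.borrowed_by is None

    def borrow(self, member):
        if not self.is_available():
            raise ValueError(f"{self.title!r} is already borrowed")
        self.borrowed_by = member


book = Book("The Hobbit", "J.R.R. Tolkien")
book.borrow("Alice")
print(book.is_available())  # False: the object itself tracks its own state
```

An OODB would persist objects like this one directly, instead of flattening them into rows and rebuilding them in application code.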
By the 2000s, the internet exploded, and we started dealing with huge amounts of data
—far more than traditional databases could handle. Enter NoSQL databases. Unlike the
relational model, NoSQL doesn’t store data in tables. Instead, it uses flexible data
structures like key-value pairs, documents, or graphs. These databases could scale
across many servers and handle massive amounts of unstructured data—think social
media posts, user activity logs, or product catalogs.
● Why it worked: NoSQL made it easier to store and access huge amounts of unstructured data quickly. For example, a NoSQL database might store an entire book record as a single JSON document, as sketched below.
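A toy sketch of the document idea, using a plain Python dict as a stand-in for a key-value store (the keys and field names are illustrative):

```python
import json

# A toy key-value/document store: each record is a self-contained JSON
# document, and records need not share a fixed schema.
store = {}

book_doc = {
    "title": "The Hobbit",
    "author": "J.R.R. Tolkien",
    "tags": ["fantasy", "classic"],
    "borrow_history": [
        {"member": "Alice", "borrowed_on": "2024-01-03"},
    ],
}

# Store and fetch the whole record by key; a real NoSQL system would
# distribute keys like this across many servers.
store["book:1"] = json.dumps(book_doc)
record = json.loads(store["book:1"])
print(record["title"], record["borrow_history"][0]["member"])
```

Everything about the book, including its borrowing history, travels as one document, which is what makes this model easy to spread across servers.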
More recently, AI has started to find its way into database systems.
● What's new: Imagine a database that can automatically adjust queries to make them faster or predict which books might be most popular based on previous borrowing patterns. AI is helping make these systems even more efficient and user-friendly.