Top SQL Questions for Data Science Interviews
Last Updated: 23 Jul, 2025
In the field of data science, SQL knowledge is often tested through a range of interview questions designed to assess both fundamental and advanced skills. These questions cover various aspects of SQL, including basic queries, data manipulation, aggregation functions, subqueries, joins, and performance optimization. Understanding these topics not only helps in tackling interview questions but also in applying SQL to real-world data challenges.
This comprehensive list of SQL interview questions and answers aims to provide data science professionals with a solid foundation in SQL, ensuring they are well-prepared to handle a variety of scenarios encountered in both interviews and practical applications. Whether you are a novice looking to build your SQL skills or an experienced practitioner seeking to refresh your knowledge, this guide offers valuable insights and examples to enhance your SQL proficiency.
Basic SQL Queries
1. What is SQL?
SQL (Structured Query Language) is a standard language used for managing and manipulating relational databases. It allows users to perform tasks such as querying data, updating records, and managing schemas.
2. Explain the difference between SQL and NoSQL databases.
SQL databases are relational and use structured query language for defining and manipulating data. They are suitable for complex queries and transactions. NoSQL databases are non-relational and are used for unstructured data, offering flexibility in data storage and scalability.
3. What are the main SQL data types?
Common SQL data types include:
- INT: Integer
- VARCHAR(length): Variable-length string
- DATE: Date
- FLOAT: Floating-point number
- BOOLEAN: True or False value
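For example, a table definition might combine several of these types. The following employees table is only a hypothetical sketch to illustrate the declarations; the extra columns and the VARCHAR length are assumptions. Example: "CREATE TABLE employees (employee_id INT, name VARCHAR(100), hire_date DATE, salary FLOAT, is_active BOOLEAN);"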
4. Write a query to select all columns from a table named employees.
Example: "SELECT * FROM employees;"
5. How do you filter records in SQL? Provide an example.
Use the WHERE clause to filter records. Example: "SELECT * FROM employees WHERE department = 'Sales';"
6. What is the purpose of the GROUP BY clause? Give an example.
GROUP BY is used to group rows that have the same values into summary rows. Example: "SELECT department, COUNT(*) FROM employees GROUP BY department;"
7. Explain the difference between INNER JOIN and LEFT JOIN.
INNER JOIN returns only rows with matching values in both tables. LEFT JOIN returns all rows from the left table and matched rows from the right table; unmatched rows from the right table are filled with NULLs.
INNER JOIN Example: "SELECT employees.name, departments.department FROM employees INNER JOIN departments ON employees.dept_id = departments.dept_id;"
LEFT JOIN Example: "SELECT employees.name, departments.department FROM employees LEFT JOIN departments ON employees.dept_id = departments.dept_id;"
8. How do you sort the results of a query?
Use the ORDER BY clause. Example: "SELECT * FROM employees ORDER BY salary DESC;"
9. What is the purpose of the HAVING clause? How does it differ from WHERE?
HAVING is used to filter records after GROUP BY has been applied, while WHERE filters records before grouping. Example using HAVING: "SELECT department, AVG(salary) FROM employees GROUP BY department HAVING AVG(salary) > 50000;"
Aggregation and Functions
10. Write a query to find the top 5 highest salaries from a table named salaries.
Example: "SELECT salary FROM salaries ORDER BY salary DESC LIMIT 5;"
11. How do you calculate the average of a column?
Use the AVG() function. Example: "SELECT AVG(salary) FROM employees;"
12. What is the difference between COUNT(*) and COUNT(column_name)?
COUNT(*) counts all rows, including those with NULLs. COUNT(column_name) counts only rows where the specified column is not NULL.
- Example with COUNT(*): "SELECT COUNT(*) FROM employees;"
- Example with COUNT(column_name): "SELECT COUNT(salary) FROM employees;"
13. How do you find the maximum and minimum values in a column?
Use the MAX() and MIN() functions.
- Example with MAX(): "SELECT MAX(salary) FROM employees;"
- Example with MIN(): "SELECT MIN(salary) FROM employees;"
14. Explain how the CASE statement works in SQL.
The CASE statement allows for conditional logic in SQL queries. Example: "SELECT name, CASE WHEN salary > 50000 THEN 'High' ELSE 'Low' END AS salary_status FROM employees;"
15. Write a query to find the number of employees in each department.
Example: "SELECT department, COUNT(*) FROM employees GROUP BY department;"
16. What is a window function? Provide an example.
Window functions perform calculations across a set of table rows related to the current row. They are used for running totals, rankings, etc. Example: "SELECT name, salary, RANK() OVER (ORDER BY salary DESC) AS salary_rank FROM employees;"
17. Explain the use of the RANK() function with an example.
The RANK() function assigns a rank to each row within a partition of the result set, with gaps in rank values when there are ties. Example: "SELECT name, salary, RANK() OVER (ORDER BY salary DESC) AS salary_rank FROM employees;"
18. How do you add or subtract time intervals from a date?
Use date functions and operators to add or subtract time intervals. Example: "SELECT CURRENT_DATE - INTERVAL '30' DAY AS past_date;"
Subqueries and Joins
19. What is a CROSS JOIN and when would you use it?
CROSS JOIN returns the Cartesian product of two tables, meaning every row of the first table is combined with every row of the second table. It is rarely needed but useful when you want every possible pairing of rows, such as generating all combinations. Example: "SELECT * FROM employees CROSS JOIN departments;"
20. Write a query to calculate the running total of sales by month.
Example: "SELECT month, SUM(sales) OVER (ORDER BY month ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS running_total FROM sales_data;"
21. What is a subquery? Provide an example.
A subquery is a query nested inside another query; its result is used by the outer query. Example: "SELECT name FROM employees WHERE department_id IN (SELECT department_id FROM departments WHERE department_name = 'Sales');"
22. How do you use a subquery in the WHERE clause?
Subqueries in the WHERE clause filter results based on the output of another query. Example: "SELECT name FROM employees WHERE salary > (SELECT AVG(salary) FROM employees);"
23. What is a correlated subquery?
A correlated subquery is a subquery that references columns from the outer query. It is evaluated once for each row processed by the outer query. Example: "SELECT name FROM employees e1 WHERE salary > (SELECT AVG(salary) FROM employees e2 WHERE e1.department_id = e2.department_id);"
24. Explain the difference between UNION and UNION ALL.
UNION combines the results of two queries and removes duplicates. UNION ALL combines results without removing duplicates.
UNION Example: "SELECT name FROM employees UNION SELECT name FROM managers;"
UNION ALL Example: "SELECT name FROM employees UNION ALL SELECT name FROM managers;"
25. Write a query to find employees who have a higher salary than the average salary.
Example: "SELECT name, salary FROM employees WHERE salary > (SELECT AVG(salary) FROM employees);"
26. How do you join three tables? Provide a query example.
You can join three tables by chaining multiple JOIN operations. Example: "SELECT e.name, d.department_name, p.project_name FROM employees e INNER JOIN departments d ON e.department_id = d.department_id INNER JOIN projects p ON e.project_id = p.project_id;"
27. What is a self-join and when would you use it?
A self-join is a join where a table is joined with itself. It is useful for hierarchical data or comparing rows within the same table. Example: "SELECT e1.name AS Employee, e2.name AS Manager FROM employees e1 LEFT JOIN employees e2 ON e1.manager_id = e2.employee_id;"
28. How do you handle duplicate records in a query result?
Use the DISTINCT keyword to remove duplicate rows. Example: "SELECT DISTINCT department FROM employees;"
29. What is the purpose of the EXISTS clause?
The EXISTS clause checks whether a subquery returns any rows. It returns TRUE if the subquery returns one or more rows. Example: "SELECT name FROM employees WHERE EXISTS (SELECT * FROM departments WHERE departments.department_id = employees.department_id);"
30. Write a query to find customers who have made more than 3 orders.
Example: "SELECT customer_id FROM orders GROUP BY customer_id HAVING COUNT(order_id) > 3;"
Data Manipulation and Transactions
31. How do you insert a new record into a table?
Use the INSERT INTO statement. Example: "INSERT INTO employees (name, department, salary) VALUES ('John Doe', 'Marketing', 60000);"
32. What is the purpose of the UPDATE statement?
The UPDATE statement modifies existing records in a table. Example: "UPDATE employees SET salary = 70000 WHERE name = 'John Doe';"
33. How do you delete records from a table?
Use the DELETE FROM statement. Example: "DELETE FROM employees WHERE name = 'John Doe';"
34. Explain the concept of transactions in SQL.
Transactions are sequences of operations performed as a single unit of work. They ensure that a series of operations either all succeed or all fail, maintaining database integrity.
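As a minimal sketch (the accounts table and its columns are assumptions, and the statement that starts a transaction varies by database, e.g. BEGIN or START TRANSACTION), a funds transfer wraps two updates so that both succeed or neither does. Example: "BEGIN; UPDATE accounts SET balance = balance - 100 WHERE account_id = 1; UPDATE accounts SET balance = balance + 100 WHERE account_id = 2; COMMIT;"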
35. What are COMMIT and ROLLBACK?
COMMIT makes all changes made in the transaction permanent. ROLLBACK undoes all changes made in the transaction.
- Example with COMMIT: "COMMIT;"
- Example with ROLLBACK: "ROLLBACK;"
36. How do you handle errors during transactions?
Use ROLLBACK to revert changes if an error occurs, ensuring the database remains consistent.
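A sketch using standard savepoints (reusing the hypothetical accounts table assumed above) lets you undo only part of a transaction. Example: "BEGIN; SAVEPOINT before_update; UPDATE accounts SET balance = balance - 100 WHERE account_id = 1; COMMIT;" If the update fails, issuing "ROLLBACK TO SAVEPOINT before_update;" undoes just that statement; in practice the error check is done by the host language or a stored-procedure error handler.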
37. What are the ACID properties in the context of SQL?
ACID stands for Atomicity, Consistency, Isolation, and Durability, the four properties that guarantee reliable transactions in a database system.
38. How can you prevent SQL injection attacks?
Use parameterized queries or prepared statements to avoid SQL injection by ensuring that user inputs are properly escaped and treated as data rather than executable code.
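As an illustration in MySQL's prepared-statement syntax (the employees table is from earlier examples; the @emp_name session variable is an assumption for this sketch), the user-supplied value is bound as data rather than concatenated into the SQL text. Example: "PREPARE stmt FROM 'SELECT * FROM employees WHERE name = ?'; SET @emp_name = 'John Doe'; EXECUTE stmt USING @emp_name; DEALLOCATE PREPARE stmt;" In application code, the same effect is achieved through the parameter-binding feature of the database driver.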
39. Write a query to update the salary of employees in a specific department.
Example: "UPDATE employees SET salary = salary * 1.1 WHERE department = 'Sales';"
40. How do you insert multiple records into a table at once?
Use the INSERT INTO ... VALUES statement with multiple value sets, or use the LOAD DATA command for large datasets.
- Example with multiple values: "INSERT INTO employees (name, department, salary) VALUES ('Alice', 'HR', 55000), ('Bob', 'IT', 60000);"
- Example with LOAD DATA (MySQL): "LOAD DATA INFILE 'file_path.csv' INTO TABLE employees FIELDS TERMINATED BY ',';"
41. What is an index? Why is it important?
An index is a database object that improves the speed of data retrieval operations on a table. It is crucial for optimizing query performance, especially with large datasets.
42. How do you create an index on a table?
Use the CREATE INDEX statement. Example: "CREATE INDEX idx_salary ON employees(salary);"
43. What are some common strategies for optimizing SQL queries?
Common strategies include:
- Using indexes appropriately
- Writing efficient queries
- Avoiding unnecessary columns in SELECT statements
- Using EXISTS instead of IN for subqueries (see the example below)
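For instance, the last point can be illustrated with the employees and departments tables used earlier (the 'Sales' filter value is just an example). Example: "SELECT e.name FROM employees e WHERE EXISTS (SELECT 1 FROM departments d WHERE d.department_id = e.department_id AND d.department_name = 'Sales');"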
44. Explain the difference between clustered and non-clustered indexes.
A clustered index determines the physical order of data in the table and there can be only one per table. A non-clustered index creates a logical order of data but does not affect the physical storage, and multiple non-clustered indexes can exist per table.
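As a sketch in SQL Server syntax (clustered-index support and syntax vary by database; the index names and the employee_id column are assumptions):
- Clustered example: "CREATE CLUSTERED INDEX idx_emp_id ON employees(employee_id);"
- Non-clustered example: "CREATE NONCLUSTERED INDEX idx_emp_salary ON employees(salary);"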
45. What is a query execution plan and how can it help in optimization?
A query execution plan is a detailed description of how the database engine executes a query. It helps identify performance bottlenecks and optimize queries by showing how data retrieval is performed.
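For example, in MySQL and PostgreSQL you can prefix a query with EXPLAIN to display its plan (the output format differs between databases). Example: "EXPLAIN SELECT * FROM employees WHERE department = 'Sales';"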
46. How do you manage and query large datasets efficiently?
Strategies include:
- Using indexing to speed up searches
- Partitioning tables to manage large datasets (see the sketch after this list)
- Optimizing queries to reduce resource consumption
- Implementing data compression techniques
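As one possible partitioning sketch, using MySQL's range-partitioning syntax (the sales table, its columns, and the year boundaries are assumptions for illustration):
"CREATE TABLE sales (sale_id INT, sale_date DATE, amount DECIMAL(10,2))
PARTITION BY RANGE (YEAR(sale_date)) (
PARTITION p2023 VALUES LESS THAN (2024),
PARTITION p2024 VALUES LESS THAN (2025),
PARTITION pmax VALUES LESS THAN MAXVALUE);"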
47. What is normalization and why is it important?
Normalization is the process of organizing data to reduce redundancy and improve data integrity. It is important to ensure data consistency and efficient data management.
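For instance, rather than repeating the department name on every employee row, a normalized design splits the data into two related tables; the exact columns below are a hypothetical sketch. Example: "CREATE TABLE departments (department_id INT PRIMARY KEY, department_name VARCHAR(100)); CREATE TABLE employees (employee_id INT PRIMARY KEY, name VARCHAR(100), department_id INT REFERENCES departments(department_id));"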
48. Explain the concept of denormalization with an example.
Denormalization involves merging tables or introducing redundancy to improve read performance. For example, storing aggregated data in a summary table to speed up queries. Example: "CREATE TABLE sales_summary AS SELECT department, SUM(sales) FROM sales GROUP BY department;"
49. What are the potential drawbacks of over-normalization?
Over-normalization can lead to excessive joins, which may degrade performance. It can also make the database schema complex and harder to manage.
50. How do you identify and optimize slow-running queries?
Use tools like EXPLAIN or EXPLAIN ANALYZE to review query execution plans and identify slow-running queries. Analyze indexes, examine query patterns, and optimize based on the findings.
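For example, PostgreSQL's EXPLAIN ANALYZE actually executes the query and reports real row counts and timings (the query here reuses an earlier example). Example: "EXPLAIN ANALYZE SELECT department, AVG(salary) FROM employees GROUP BY department;"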