Top SQL Queries for Data Scientists
SQL (Structured Query Language) is one of the core tools for data manipulation and analysis. A solid command of SQL lets data scientists efficiently select, transform, and analyze large datasets, and well-written queries directly improve both the speed and the quality of the insights drawn from the data.
This article walks through the top SQL queries every data scientist should be comfortable with, from filtering, aggregation, and joins to window functions, CTEs, and data modification.
Basic SQL Queries
Retrieving Data with SELECT
The SELECT statement is the fundamental way to retrieve data from a database. For example, to retrieve all columns from a table named employees:
SELECT * FROM employees;
Filtering Data with WHERE
The WHERE clause allows you to filter data based on specific conditions. To find employees in the 'Sales' department:
SELECT * FROM employees WHERE department = 'Sales';
Sorting Data with ORDER BY
The ORDER BY clause sorts the result set. To sort employees by their salary in descending order:
SELECT * FROM employees ORDER BY salary DESC;
Limiting Results with LIMIT
The LIMIT clause restricts the number of rows returned. To get the top 5 highest-paid employees:
SELECT * FROM employees ORDER BY salary DESC LIMIT 5;
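LIMIT itself is not universal. As a sketch of the common dialect alternatives for the same query, SQL Server uses TOP, while the SQL standard (also supported by Oracle 12c+ and PostgreSQL) uses FETCH FIRST:
-- SQL Server
SELECT TOP 5 * FROM employees ORDER BY salary DESC;
-- Standard SQL
SELECT * FROM employees ORDER BY salary DESC FETCH FIRST 5 ROWS ONLY;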
Aggregation and Grouping
Using Aggregate Functions
Aggregate functions perform calculations on multiple rows. For example, to get the total salary expense:
SELECT SUM(salary) FROM employees;
Grouping Data with GROUP BY
The GROUP BY clause groups rows that have the same values. To find the average salary by department:
SELECT department, AVG(salary) FROM employees GROUP BY department;
Filtering Groups with HAVING
The HAVING clause filters groups based on aggregate conditions. To find departments with an average salary above 50,000:
SELECT department, AVG(salary) FROM employees GROUP BY department HAVING AVG(salary) > 50000;
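WHERE and HAVING can be combined in a single query: WHERE filters individual rows before grouping, while HAVING filters the resulting groups. A minimal sketch using only the columns already shown (the 20,000 cutoff is an arbitrary illustration):
SELECT department, AVG(salary) AS avg_salary
FROM employees
WHERE salary > 20000           -- row-level filter, applied before grouping
GROUP BY department
HAVING AVG(salary) > 50000;    -- group-level filter, applied after aggregation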
Advanced Filtering Techniques
Using Subqueries in WHERE Clause
Subqueries can be used within a WHERE clause to filter data. To find employees who earn more than the average salary:
SELECT * FROM employees WHERE salary > (SELECT AVG(salary) FROM employees);
A correlated subquery references the outer query and is re-evaluated for each outer row. To find employees who have the highest salary in their department:
SELECT * FROM employees e1 WHERE salary = (SELECT MAX(salary) FROM employees e2 WHERE e1.department = e2.department);
Using CASE Statements for Conditional Logic
The CASE statement allows for conditional logic. To categorize employees based on their salary:
SELECT name, salary,
CASE
WHEN salary > 70000 THEN 'High'
WHEN salary BETWEEN 50000 AND 70000 THEN 'Medium'
ELSE 'Low'
END AS salary_category
FROM employees;
Joins and Unions
Understanding Different Types of Joins
Joins combine rows from two or more tables. An INNER JOIN returns only matching rows:
SELECT e.name, d.department_name
FROM employees e
INNER JOIN departments d ON e.department_id = d.id;
A LEFT JOIN returns all rows from the left table, and matching rows from the right table:
SELECT e.name, d.department_name
FROM employees e
LEFT JOIN departments d ON e.department_id = d.id;
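A RIGHT JOIN mirrors a LEFT JOIN by keeping every row from the right table, and a FULL OUTER JOIN keeps unmatched rows from both sides. A sketch on the same tables (note that MySQL does not support FULL OUTER JOIN directly):
-- All departments, with employee names where a match exists
SELECT e.name, d.department_name
FROM employees e
RIGHT JOIN departments d ON e.department_id = d.id;
-- All employees and all departments, matched where possible
SELECT e.name, d.department_name
FROM employees e
FULL OUTER JOIN departments d ON e.department_id = d.id;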
Combining Results with UNION and UNION ALL
The UNION operator combines the result sets of two queries, removing duplicates:
SELECT name FROM employees
UNION
SELECT name FROM contractors;
The UNION ALL operator includes duplicates:
SELECT name FROM employees
UNION ALL
SELECT name FROM contractors;
Handling NULL Values in Joins
NULL values can affect join results: left-table rows with no match carry NULLs in the right table's columns. To drop those unmatched rows after a LEFT JOIN (which makes it behave like an INNER JOIN):
SELECT e.name, d.department_name
FROM employees e
LEFT JOIN departments d ON e.department_id = d.id
WHERE d.department_name IS NOT NULL;
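Another common pattern is to keep the unmatched rows and replace the NULLs with a default label using COALESCE; the 'No Department' value below is purely illustrative:
SELECT e.name,
       COALESCE(d.department_name, 'No Department') AS department_name
FROM employees e
LEFT JOIN departments d ON e.department_id = d.id;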
Advanced SQL Functions
String Functions
String functions manipulate text data. For example, to concatenate first and last names:
SELECT CONCAT(first_name, ' ', last_name) AS full_name FROM employees;
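A few other widely supported string functions, sketched on the same name columns (exact function names vary slightly by database, e.g. LEN instead of LENGTH in SQL Server):
SELECT UPPER(last_name) AS last_name_upper,      -- convert to upper case
       LENGTH(first_name) AS first_name_length,  -- number of characters
       SUBSTRING(first_name, 1, 1) AS initial    -- first character
FROM employees;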
Date and Time Functions
Date functions handle date and time data. To get the current date and time:
SELECT NOW();
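Date handling differs more between databases than most SQL features. As a sketch, assuming a hire_date column that is not part of the earlier examples, EXTRACT (standard SQL, PostgreSQL, MySQL) pulls out date parts; SQL Server would use YEAR() or DATEPART() instead:
-- Number of employees hired in each year
SELECT EXTRACT(YEAR FROM hire_date) AS hire_year, COUNT(*) AS hires
FROM employees
GROUP BY EXTRACT(YEAR FROM hire_date);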
Numeric Functions
Numeric functions perform operations on numbers. To round salaries to the nearest thousand:
SELECT ROUND(salary, -3) FROM employees;
Window Functions
Window functions perform calculations across a set of table rows. To assign a row number to each employee:
SELECT name, ROW_NUMBER() OVER (ORDER BY salary DESC) AS row_num FROM employees;
Using ROW_NUMBER, RANK, and DENSE_RANK
These functions assign ranks to rows. ROW_NUMBER assigns a unique, sequential number even to tied values:
SELECT name, ROW_NUMBER() OVER (ORDER BY salary DESC) AS row_num FROM employees;
RANK gives tied rows the same rank and skips the following values (salary_rank is used as the alias since RANK itself is a reserved word in some databases):
SELECT name, RANK() OVER (ORDER BY salary DESC) AS salary_rank FROM employees;
DENSE_RANK also gives tied rows the same rank but leaves no gaps in the rank values:
SELECT name, DENSE_RANK() OVER (ORDER BY salary DESC) AS salary_rank FROM employees;
Aggregating Data with OVER Clause
The OVER clause defines the window for aggregate functions. To calculate a running total of salaries:
SELECT name, salary, SUM(salary) OVER (ORDER BY salary) AS running_total FROM employees;
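Adding PARTITION BY restarts the window calculation for each group. A minimal sketch that computes the running total separately within each department:
SELECT name, department, salary,
       SUM(salary) OVER (PARTITION BY department ORDER BY salary) AS dept_running_total
FROM employees;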
Common Table Expressions (CTEs)
Basics of CTEs
CTEs define temporary result sets. To define and use a CTE:
WITH HighSalaryEmployees AS (
SELECT * FROM employees WHERE salary > 70000
)
SELECT * FROM HighSalaryEmployees;
Recursive CTEs for Hierarchical Data
Recursive CTEs handle hierarchical data. To list an employee hierarchy:
WITH RECURSIVE EmployeeHierarchy AS (
SELECT id, name, manager_id FROM employees WHERE manager_id IS NULL
UNION ALL
SELECT e.id, e.name, e.manager_id FROM employees e
INNER JOIN EmployeeHierarchy eh ON e.manager_id = eh.id
)
SELECT * FROM EmployeeHierarchy;
Using CTEs for Complex Queries
CTEs simplify complex queries by naming intermediate results. To calculate the total and average salary for each department:
WITH DepartmentSalaries AS (
SELECT department, SUM(salary) AS total_salary, AVG(salary) AS avg_salary
FROM employees
GROUP BY department
)
SELECT * FROM DepartmentSalaries;
Data Modification Queries
Inserting Data with INSERT
The INSERT statement adds new rows to a table. To insert a new employee:
INSERT INTO employees (name, department, salary) VALUES ('John Doe', 'Sales', 60000);
Updating Data with UPDATE
The UPDATE statement modifies existing data. To give all employees in 'Sales' a 10% raise:
UPDATE employees SET salary = salary * 1.10 WHERE department = 'Sales';
Deleting Data with DELETE
The DELETE statement removes rows from a table. To delete employees with a salary below 30000:
DELETE FROM employees WHERE salary < 30000;
Merging Data with MERGE (Upserts)
The MERGE statement combines insert and update operations. To insert or update employee records:
MERGE INTO employees AS target
USING new_employees AS source
ON target.id = source.id
WHEN MATCHED THEN
UPDATE SET target.name = source.name, target.salary = source.salary
WHEN NOT MATCHED THEN
INSERT (id, name, salary) VALUES (source.id, source.name, source.salary);
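MERGE is not available in every database (MySQL, for example, does not support it). As a sketch of equivalent upserts in other systems, assuming id is the primary key and using purely illustrative values:
-- MySQL
INSERT INTO employees (id, name, salary)
VALUES (101, 'John Doe', 60000)
ON DUPLICATE KEY UPDATE name = VALUES(name), salary = VALUES(salary);
-- PostgreSQL / SQLite
INSERT INTO employees (id, name, salary)
VALUES (101, 'John Doe', 60000)
ON CONFLICT (id) DO UPDATE SET name = EXCLUDED.name, salary = EXCLUDED.salary;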
Conclusion
SQL is an essential part of a data scientist's toolkit because it allows for efficient data extraction, manipulation, and analysis. A data scientist needs both basic and advanced SQL to handle varied datasets and pull useful information out of them. SELECT, WHERE, and JOIN cover everyday data retrieval, while window functions, CTEs, and MERGE-style upserts support more involved calculations and reporting. Applied well, these queries make working with complex data faster and more accurate and support sound decision-making across domains.