Difference between Frequency Array and Frequency Distribution

The number of times a specific value appears in a distribution is known as its frequency. For instance, suppose there are 30 students in a class: fifteen of them have received 80 points, ten have received 90 points, and five have received 100 points. The frequencies are then 15, 10, and 5. A table in which the values of a variable and their corresponding frequencies are written side by side is called a frequency distribution. A frequency distribution is classified as "Discrete" or "Continuous" according to whether the variable is discrete or continuous.

What is Frequency Array (Discrete Series)?

A Discrete Series is a series in which the individual values differ from each other by a definite amount. In this kind of series, the different values of the variable are shown with their respective frequencies. The classification of data for a discrete variable is known as a Frequency Array. A discrete variable does not take fractional values; it takes only definite integral values.

Discrete Frequency Distribution Construction (using Tally Marks)

The method of Tally Bars or Tally Marks is used by the investigator to count the number of observations, i.e., the frequency of each value of the variable. The steps involved in the construction of a discrete frequency distribution using tally marks are as follows:

1. Prepare a table with three columns headed Variable, Tally Marks, and Frequency.
2. In the first column, list each possible value of the variable.
3. In the second column, record a tally mark (|) for every observation against its corresponding value. When a value occurs for the fifth time, cross the four existing tally bars (||||) and count the crossing line as the fifth mark. For the sixth observation, start a new tally bar after leaving some space.
4. In the third column, write the total of the tally marks against each value of the variable.

Example: Represent the marks of 22 students of a class in the form of a frequency array.

Solution: (The raw list of marks is not reproduced here; the resulting frequency array, reconstructed from the counts described below, is:)

Marks | Number of Students (Frequency)
10    | 6
15    | 4
18    | 2
20    | 5
22    | 5
Total | 22

The data is presented in the form of a frequency array in the table above. According to the table, out of 22 students, 6 students got 10 marks, 4 students got 15 marks, 2 students got 18 marks, 5 students got 20 marks, and 5 students got 22 marks. The items in such a series are arranged in ascending (or descending) order, and the frequency placed against each item indicates the number of times that item appears in the series. In this table, the marks obtained are shown in ascending order, with the frequency of each mark, that is, the number of students who obtained it, placed beside it.
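To make the procedure concrete, here is a minimal Python sketch that builds the frequency array above. The list of marks is illustrative (the original raw data is not shown in the article); it is constructed only so that its counts match the table, and the tally renderer is a hypothetical helper, with each "||||" group standing for five observations (on paper, the fifth stroke is drawn across the first four).

```python
from collections import Counter

# Illustrative marks for 22 students; any ordering with these values
# produces the same frequency array as the table above.
marks = [10, 20, 15, 22, 10, 18, 20, 10, 22, 15, 10,
         20, 22, 10, 15, 18, 20, 22, 10, 15, 20, 22]

def tally(n):
    """Render a count as tally marks: one '||||' group per five
    observations (fifth stroke crossed on paper), then single bars."""
    groups, rest = divmod(n, 5)
    return ("|||| " * groups + "|" * rest).strip()

freq = Counter(marks)          # value -> frequency
for value in sorted(freq):     # ascending order, as in the table
    print(value, tally(freq[value]), freq[value])
```

Running this prints 10, 15, 18, 20, and 22 with frequencies 6, 4, 2, 5, and 5, matching the frequency array in the example.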
What is Frequency Distribution (Continuous Series)?

A discrete series cannot take just any value within an interval; therefore, where it is necessary to represent a continuous variable with a range of values across the different items of a given data set, a Continuous Series is used. In this series, the measurements are only approximations, and these approximations are expressed in the form of class intervals. The classes are formed from beginning to end, without any breaks. Other names for a Continuous Series are Frequency Distribution, Grouped Frequency Distribution, Series with Class Intervals, and Series of Grouped Data.

Example: From the marks given below of 30 students, prepare a frequency distribution table with classes 0-10, 10-20, and so on. (The raw list of marks is not reproduced here.)

Solution: The steps to construct the frequency distribution table are as follows (a short code sketch at the end of this article illustrates the grouping step):

1. The first column of the table will contain the class intervals, starting from 0-10 and going up to 60-70, since the value of the highest observation is 69.
2. The second column of the table will contain the tally marks.
3. The third column of the table will contain the frequency of each class, obtained by counting the tally marks against the corresponding class interval.

Difference between Frequency Array and Frequency Distribution

Both a frequency array and a frequency distribution show an arrangement of data in which frequencies correspond to different values of an X-variable. The key difference is that in a frequency array the X-variable must be discrete, whereas in a frequency distribution the X-variable is grouped into different class intervals rather than taking discrete values.

In a nutshell, frequency arrays are discrete series that show frequencies corresponding to discrete values, whereas frequency distributions are series that show frequencies corresponding to different class intervals.
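As a sketch of the grouping step described in the solution above, the following Python snippet bins marks into the classes 0-10, 10-20, ..., 60-70. The list of 30 marks is hypothetical (the original data is not shown in the article); only the highest value, 69, is taken from the example.

```python
# Hypothetical marks for 30 students; the highest value, 69, matches
# the example above, but the rest of the data is illustrative only.
marks = [5, 12, 17, 23, 28, 31, 36, 38, 42, 45,
         47, 51, 54, 56, 58, 61, 63, 65, 67, 69,
         8, 14, 19, 26, 33, 39, 44, 49, 55, 60]

# Class intervals 0-10, 10-20, ..., 60-70. By the usual exclusive-limit
# convention, a mark equal to an upper limit falls in the next class.
counts = {(low, low + 10): 0 for low in range(0, 70, 10)}
for m in marks:
    low = (m // 10) * 10       # lower limit of the class containing m
    counts[(low, low + 10)] += 1

for (low, high), f in counts.items():
    print(f"{low}-{high}: {f}")
```

The classes are formed without breaks, as the article notes, so every mark falls into exactly one interval and the frequencies sum to the number of students.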