Lesson - 3 Data Types in C#
Now let us understand the different data types available in the .NET Framework and the scenarios in which each of them is suitable. I am going to keep the focus on this because, most of the time, .NET developers use only a limited set of data types. As .NET developers, we tend to reach for int, bool, double, string, and DateTime; these five data types cover most of our code. Because of this limited use of data types, we lose out in terms of optimization and performance. So, by the end of this session, you will understand the different data types available in the .NET Framework and the scenarios in which to use each of them.
In order to represent the basic unit of computer storage, i.e. the byte, .NET provides us with the Byte data type.
Note: If it is a signed data type, then what will be the maximum and minimum values? Remember, when a data type is signed, it can hold both positive and negative values. In that case, the 256 possible values are split in two, i.e. 256/2 which is 128. So, it will store 128 non-negative values and 128 negative values. In this case, the non-negative numbers run from 0 to 127 and the negative numbers run from -1 to -128.
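The split described above can be verified directly with sbyte, C#'s 1-byte signed integer (a minimal sketch using the built-in MinValue/MaxValue constants):

```csharp
using System;

namespace DataTypesDemo
{
    class Program
    {
        static void Main(string[] args)
        {
            // sbyte is the 1-byte signed counterpart of byte:
            // 256 values split into 0..127 and -1..-128.
            Console.WriteLine($"sbyte Min Value: {sbyte.MinValue}"); // -128
            Console.WriteLine($"sbyte Max Value: {sbyte.MaxValue}"); // 127
            Console.WriteLine($"sbyte Size: {sizeof(sbyte)} Byte");  // 1
        }
    }
}
```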
ASCII Code:
To understand the byte data type in detail, we need to understand something called ASCII code.
Please visit the following site to understand the ASCII Codes. ASCII stands for American Standard
Code for Information Interchange.
https://www.cs.cmu.edu/~pattis/15-1XX/common/handouts/ascii.html
When you visit the above site, you will get the following table which shows the Decimal Number and
its equivalent character or symbol.
We have already discussed how to convert decimal numbers to binary numbers. Now, suppose you want to store the decimal number 66, whose binary representation is 1000010. And you can see in the above table that the capital letter B is the character equivalent of 66. So, for the decimal number 66, its ASCII value is the capital letter B.
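As a quick check of this mapping, here is a small sketch; Convert.ToString with base 2 renders the number in binary, and a char cast yields the ASCII character:

```csharp
using System;

namespace DataTypesDemo
{
    class Program
    {
        static void Main(string[] args)
        {
            byte b = 66;
            // Render the value in base 2 (binary).
            Console.WriteLine(Convert.ToString(b, 2)); // 1000010
            // Cast the value to char to get its ASCII character.
            Console.WriteLine((char)b);                // B
        }
    }
}
```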
using System;
namespace DataTypesDemo
{
class Program
{
static void Main(string[] args)
{
byte b = 66;
Console.WriteLine($"Decimal: {b}");
Console.WriteLine($"Equivalent Character: {(char)b}");
Console.WriteLine($"byte Min Value:{byte.MinValue} and Max Value:{byte.MaxValue}");
Console.WriteLine($"byte Size:{sizeof(byte)} Byte");
Console.ReadKey();
}
}
}
Output:
Note: The most important point that you need to remember is that if you want to represent a 1-byte unsigned integer, then you need to use the Byte data type in C#. In other words, if you want to store numbers from 0 to a maximum of 255, or the ASCII values of those numbers, then you need to go for the byte data type in the .NET Framework.
The char data type is an unsigned data type, which means it can store only non-negative values. If you go to the definition of the char data type, you will see the Maximum and Minimum values as follows.
Here, the Unicode escape ‘\uffff’ represents 65535 and ‘\0’ represents 0. As char is 2 bytes in length, it can hold 2^16 = 65536 distinct values. So, the minimum number is 0 and the maximum number is 65535.
For a better understanding, please have a look at the below example.
using System;
namespace DataTypesDemo
{
class Program
{
static void Main(string[] args)
{
char ch = 'B';
Console.WriteLine($"Char: {ch}");
Console.WriteLine($"Equivalent Number: {(byte)ch}");
Console.WriteLine($"Char Minimum: {(int)char.MinValue} and Maximum: {(int)char.MaxValue}");
Console.WriteLine($"Char Size: {sizeof(char)} Byte");
Console.ReadKey();
}
}
}
Output:
Now, you might have one question. Here, we are representing the letter B using the char data type, which takes 2 bytes. We can also represent this letter B using the byte data type, which takes only 1 byte. Now, if byte and char are doing the same thing, then why do we need the char data type, which takes an extra byte of memory?
The answer is that byte is only good for ASCII representation, which covers English letters, digits, and a handful of symbols. But if you are developing a multilingual application, then you need to use the Char data type. A multilingual application means an application that supports multiple languages like Hindi, Chinese, English, Spanish, etc.
Now, you may have a counterargument: why not always use the char data type instead of the byte data type, since char is 2 bytes and can store all the symbols available in the world? Why should I ever use the byte data type? Remember, char is used to represent Unicode characters, and when we read char data, it internally performs some kind of transformation (encoding or decoding). There are scenarios where you do not want that kind of transformation or encoding. For example, say you have a raw image file; the raw image file has nothing to do with those transformations. In scenarios like this, we can use the Byte data type; there is something called a byte array that you can use in situations like this.
So, the byte data type is good when you are reading raw or binary data, i.e. data that should not go through any kind of transformation or encoding. And the char data type is good when you want to represent or show multilingual (Unicode) data to the end user.
To see the list of Unicode characters, please visit the following site.
https://en.wikipedia.org/wiki/List_of_Unicode_characters
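To make the byte-versus-char distinction concrete, here is a small sketch (the sample byte values are arbitrary placeholders) showing a byte array used for raw binary data, while char handles a non-English character:

```csharp
using System;
using System.Text;

namespace DataTypesDemo
{
    class Program
    {
        static void Main(string[] args)
        {
            // Raw binary data: bytes are stored as-is, no encoding applied.
            byte[] rawData = { 0x42, 0xFF, 0x00, 0x7D };
            Console.WriteLine($"Raw bytes: {BitConverter.ToString(rawData)}"); // 42-FF-00-7D

            // Multilingual text: each char is a 2-byte Unicode code unit.
            char hindiLetter = '\u0939'; // Devanagari letter HA
            Console.WriteLine($"Char: {hindiLetter}");

            // The same character takes 3 bytes once UTF-8 encoded.
            byte[] encoded = Encoding.UTF8.GetBytes(hindiLetter.ToString());
            Console.WriteLine($"UTF-8 byte count: {encoded.Length}"); // 3
        }
    }
}
```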
In C#, the string is a reference type. If you go to the definition of the string data type, you will see that the type is a class, as shown in the below image, and a class is nothing but a reference type in C#.
Note: If you want to know the maximum and minimum values of a numeric data type, then you need to use the MaxValue and MinValue field constants. If you want to know the size of a data type in bytes, then you can use the sizeof operator, to which we need to pass the data type (a value type, not a reference type).
One more important point that you need to remember is that these three data types have alias names as well. For example, Int16 can be written as the short data type, Int32 as the int data type, and Int64 as the long data type.
So, in our application, if we are using the short data type, it means we are using Int16, i.e. a 16-bit signed integer. We can use Int16 or short in our code, and both are exactly the same type. Similarly, if we are using the int data type, we are using Int32, i.e. a 32-bit signed integer, so Int32 and int are interchangeable. And finally, if we are using long, we are using Int64, i.e. a 64-bit signed integer, so Int64 and long are the same.
For better understanding, please have a look at the below example.
using System;
namespace DataTypesDemo
{
class Program
{
static void Main(string[] args)
{
//Int16 num1 = 123;
short num1 = 123;
//Int32 num2 = 456;
int num2 = 456;
// Int64 num3 = 789;
long num3 = 789;
Console.ReadKey();
}
}
}
Output:
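The alias relationship can be verified at runtime (a minimal sketch; typeof compares the underlying framework types):

```csharp
using System;

namespace DataTypesDemo
{
    class Program
    {
        static void Main(string[] args)
        {
            // short, int, and long are compile-time aliases for the
            // System.Int16, System.Int32, and System.Int64 structs.
            Console.WriteLine(typeof(short) == typeof(Int16)); // True
            Console.WriteLine(typeof(int) == typeof(Int32));   // True
            Console.WriteLine(typeof(long) == typeof(Int64));  // True
            Console.WriteLine($"Sizes: {sizeof(short)}, {sizeof(int)}, {sizeof(long)}"); // Sizes: 2, 4, 8
        }
    }
}
```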
Now, what if you want to store only positive numbers? The .NET Framework also provides unsigned versions of each of these signed numeric data types. For example, for Int16 there is UInt16, for Int32 there is UInt32, and for Int64 there is UInt64. Similarly, for short we have ushort, for int we have uint, and for long we have ulong. These unsigned data types store only non-negative values. The size of each unsigned data type is the same as that of its signed counterpart. For a better understanding, please have a look at the following example.
using System;
namespace DataTypesDemo
{
class Program
{
static void Main(string[] args)
{
//UInt16 num1 = 123;
ushort num1 = 123;
Console.ReadKey();
}
}
}
Output:
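The ranges of all three unsigned types can be printed directly (a small sketch using the built-in MinValue/MaxValue constants):

```csharp
using System;

namespace DataTypesDemo
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine($"ushort: {ushort.MinValue} to {ushort.MaxValue}"); // 0 to 65535
            Console.WriteLine($"uint  : {uint.MinValue} to {uint.MaxValue}");     // 0 to 4294967295
            Console.WriteLine($"ulong : {ulong.MinValue} to {ulong.MaxValue}");   // 0 to 18446744073709551615
        }
    }
}
```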
As you can see in the above output, the minimum value of all these unsigned data types is 0, which means they store only non-negative whole numbers. Notice that with an unsigned data type, the range is not split between negative and positive values, as it is with a signed numeric data type.
When to use a signed and when to use an unsigned data type in C#?
See, if you want to store only positive numbers, then it is recommended to use an unsigned data type. Why? Because with the signed short data type, the maximum positive number you can store is 32767, but with the unsigned ushort data type, the maximum positive number you can store is 65535. So, using the same 2 bytes of memory, ushort lets us store a bigger positive number than short does, and the same applies to int and uint, and to long and ulong. If you want to store both positive and negative numbers, then you need to use a signed data type.
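This trade-off is easy to confirm (a minimal sketch; both types occupy the same 2 bytes):

```csharp
using System;

namespace DataTypesDemo
{
    class Program
    {
        static void Main(string[] args)
        {
            // Same size in memory, different usable ranges.
            Console.WriteLine($"short : {sizeof(short)} bytes, max {short.MaxValue}");   // 2 bytes, max 32767
            Console.WriteLine($"ushort: {sizeof(ushort)} bytes, max {ushort.MaxValue}"); // 2 bytes, max 65535
        }
    }
}
```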
The Single data type takes 4 bytes, Double takes 8 bytes, and Decimal takes 16 bytes of memory. For a better understanding, please have a look at the below example. In order to create a Single value, we need to add the suffix f at the end of the number; similarly, if you want to create a Decimal value, you need to suffix the value with m (capital or small does not matter). If you do not add any suffix, then the value is treated as a double by default.
using System;
namespace DataTypesDemo
{
class Program
{
static void Main(string[] args)
{
Single a = 1.123f;
Double b = 1.456;
Decimal c = 1.789M;
Console.ReadKey();
}
}
}
Output:
Instead of Single, Double, and Decimal, you can also use the short-hand name of these data types
such as float for Single, double for Double, and decimal for Decimal. The following example uses the
short-hand names for the above Single, Double, and Decimal data types using C# Language.
using System;
namespace DataTypesDemo
{
class Program
{
static void Main(string[] args)
{
float a = 1.123f;
double b = 1.456;
decimal c = 1.789m;
Console.ReadKey();
}
}
}
Output:
Range:
1. The float value ranges from approximately -3.402823E+38 to 3.402823E+38.
2. The double value ranges from approximately -1.79769313486232E+308 to 1.79769313486232E+308.
3. The decimal value ranges from exactly -79228162514264337593543950335 to 79228162514264337593543950335.
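These ranges come straight from the MinValue/MaxValue constants, which you can print yourself (a small sketch):

```csharp
using System;

namespace DataTypesDemo
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine($"float  : {float.MinValue} to {float.MaxValue}");
            Console.WriteLine($"double : {double.MinValue} to {double.MaxValue}");
            Console.WriteLine($"decimal: {decimal.MinValue} to {decimal.MaxValue}");
        }
    }
}
```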
Precision:
1. Float represents data as a single-precision floating-point number.
2. Double represents data as a double-precision floating-point number.
3. Decimal represents data as a decimal floating-point number.
Accuracy:
1. Float is less accurate than Double and Decimal.
2. Double is more accurate than Float but less accurate than Decimal.
3. Decimal is more accurate than Float and Double.
using System;
namespace DataTypesDemo
{
class Program
{
static void Main(string[] args)
{
float a = 1.123f;
double b = 1.456;
decimal c = 1.789m;
Console.WriteLine(a);
Console.WriteLine(b);
Console.WriteLine(c);
Console.ReadKey();
}
}
}
Output:
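The accuracy differences listed above can be seen with a classic check (a small sketch; 0.1 has no exact binary representation, but the decimal type stores it exactly):

```csharp
using System;

namespace DataTypesDemo
{
    class Program
    {
        static void Main(string[] args)
        {
            // double accumulates a tiny binary rounding error.
            double d = 0.1 + 0.2;
            Console.WriteLine(d == 0.3);  // False

            // decimal represents 0.1 and 0.2 exactly.
            decimal m = 0.1m + 0.2m;
            Console.WriteLine(m == 0.3m); // True
        }
    }
}
```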
Let us see an example to understand how data types impact application performance in the C# language. Please have a look at the below example. Here, I am creating two loops, each of which executes 10,000,000 times. In the first for loop, I am using the short data type to create and initialize three variables with the number 100. In the second for loop, I am using the decimal data type to create and initialize three variables with the number 100. Further, I am using Stopwatch to measure the time taken by each loop.
using System;
using System.Diagnostics;
namespace DataTypesDemo
{
class Program
{
static void Main(string[] args)
{
Stopwatch stopwatch1 = new Stopwatch();
stopwatch1.Start();
for (int i = 0; i <= 10000000; i++)
{
short s1 = 100;
short s2 = 100;
short s3 = 100;
}
stopwatch1.Stop();
Console.WriteLine($"short took : {stopwatch1.ElapsedMilliseconds} MS");

Stopwatch stopwatch2 = new Stopwatch();
stopwatch2.Start();
for (int i = 0; i <= 10000000; i++)
{
decimal d1 = 100;
decimal d2 = 100;
decimal d3 = 100;
}
stopwatch2.Stop();
Console.WriteLine($"decimal took : {stopwatch2.ElapsedMilliseconds} MS");
Console.ReadKey();
}
}
}
Output:
So, you can see, short took 30 MS compared with 73 MS for decimal. So, it matters that you choose the right data type in your application development in order to get better performance.