
24mcs1025-ex1-part-a

February 24, 2025

MCSE603P: Deep Learning Lab
EXERCISE 1: Introduction to TensorFlow

Submitted By:
Keerthana R (24MCS1025)
M.Tech CSE
SCOPE/VIT Chennai

Submitted To:
Dr. Rajalakshmi R
Associate Professor
SCOPE/VIT Chennai

[2]: import tensorflow as tf

1 1. Create a vector, scalar, matrix, and tensor with values of your choice using tf.constant().

1. Creating a scalar
[3]: scalar = tf.constant(25)
scalar

[3]: <tf.Tensor: shape=(), dtype=int32, numpy=25>

2. Creating a vector
[4]: vector = tf.constant([23,24,25,26])
vector

[4]: <tf.Tensor: shape=(4,), dtype=int32, numpy=array([23, 24, 25, 26], dtype=int32)>

3. Creating a matrix
[5]: matrix = tf.constant([[1,2,3],[4,5,6],[7,8,9]])
matrix

[5]: <tf.Tensor: shape=(3, 3), dtype=int32, numpy=
array([[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]], dtype=int32)>

4. Creating a tensor
[6]: tensor = tf.constant([[[67,68],[69,70]],[[71,72],[73,74]]])
tensor

[6]: <tf.Tensor: shape=(2, 2, 2), dtype=int32, numpy=
array([[[67, 68],
        [69, 70]],

       [[71, 72],
        [73, 74]]], dtype=int32)>

2 2. Find the shape, rank and size of the tensors (which you created for Q1).

1. Printing shape of all tensors
[7]: print("Shape of scalar",scalar.shape)
print("Shape of vector",vector.shape)
print("Shape of matrix",matrix.shape)
print("Shape of tensor",tensor.shape)

Shape of scalar ()
Shape of vector (4,)
Shape of matrix (3, 3)
Shape of tensor (2, 2, 2)
2. Printing rank of all tensors
[8]: print("Rank of scalar",tf.rank(scalar))
print("Rank of vector",tf.rank(vector))
print("Rank of matrix",tf.rank(matrix))
print("Rank of tensor",tf.rank(tensor))

Rank of scalar tf.Tensor(0, shape=(), dtype=int32)
Rank of vector tf.Tensor(1, shape=(), dtype=int32)
Rank of matrix tf.Tensor(2, shape=(), dtype=int32)
Rank of tensor tf.Tensor(3, shape=(), dtype=int32)
3. Printing size of all tensors
[9]: print("Size of scalar",tf.size(scalar))
print("Size of vector",tf.size(vector))
print("Size of matrix",tf.size(matrix))
print("Size of tensor",tf.size(tensor))

Size of scalar tf.Tensor(1, shape=(), dtype=int32)
Size of vector tf.Tensor(4, shape=(), dtype=int32)
Size of matrix tf.Tensor(9, shape=(), dtype=int32)
Size of tensor tf.Tensor(8, shape=(), dtype=int32)
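The three quantities above are related: rank is the number of axes in the shape, and size is the product of the shape's dimensions. A minimal sketch checking this for the matrix from Q1:

```python
import tensorflow as tf

matrix = tf.constant([[1, 2, 3], [4, 5, 6], [7, 8, 9]])

# Rank is the number of axes; size is the product of the shape's dimensions.
rank_from_shape = len(matrix.shape)                   # 2 axes
size_from_shape = int(tf.reduce_prod(matrix.shape))   # 3 * 3 = 9

print(rank_from_shape, size_from_shape)  # 2 9
```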

3 3. Create two tensors containing random values between 0 and 1 with shape [5, 300].

[10]: t1 = tf.random.uniform(shape=[5,300], minval=0, maxval=1)
t2 = tf.random.uniform(shape=[5,300], minval=0, maxval=1)
print(t1)
print(t2)

tf.Tensor(
[[0.8091574 0.594347 0.67483544 … 0.95423675 0.73473 0.8098073 ]
[0.60397184 0.5028323 0.07199478 … 0.97781396 0.4862994 0.7226087 ]
[0.25994086 0.23808265 0.33961916 … 0.29676974 0.28038502 0.36464262]
[0.6383574 0.68120027 0.24551308 … 0.7787988 0.8700501 0.25336266]
[0.24806392 0.05760801 0.12655997 … 0.8647369 0.03902519 0.2348597 ]],
shape=(5, 300), dtype=float32)
tf.Tensor(
[[0.29074728 0.01891863 0.8829454 … 0.7341484 0.9688134 0.12316489]
[0.42282426 0.909804 0.15008497 … 0.01267278 0.12384367 0.7326139 ]
[0.35857046 0.21476388 0.9160445 … 0.0170027 0.17271769 0.40384483]
[0.18319094 0.63475573 0.5678283 … 0.30026555 0.7924441 0.6353425 ]
[0.80496764 0.6398188 0.42550123 … 0.32466578 0.7163081 0.9910978 ]],
shape=(5, 300), dtype=float32)
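The uniform draws above change on every run. If reproducible values are needed, for instance to compare results across runs, the global seed can be fixed with tf.random.set_seed(); a minimal sketch:

```python
import tensorflow as tf

# Fixing the global seed makes the uniform draws reproducible across runs.
tf.random.set_seed(42)
a = tf.random.uniform(shape=[5, 300], minval=0, maxval=1)

tf.random.set_seed(42)  # reset, so the next draw repeats the first one
b = tf.random.uniform(shape=[5, 300], minval=0, maxval=1)

print(bool(tf.reduce_all(a == b)))  # True
```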

4 4. Multiply the two tensors (created in Q3) using matrix multiplication.

[13]: result = tf.matmul(t1, tf.transpose(t2))
print(result)

tf.Tensor(
[[82.89256 72.2052 81.85942 73.42679 76.32473 ]
[82.59998 74.5827 82.203445 74.39873 75.46951 ]
[68.58944 63.571373 68.09952 63.0169 62.47265 ]
[81.72216 69.42413 76.112305 72.12494 73.24727 ]
[80.05264 70.77235 76.11891 72.066605 69.63198 ]], shape=(5, 5),
dtype=float32)
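matmul requires the inner dimensions to agree, which is why t2 is transposed: (5, 300) times (300, 5) gives a (5, 5) result. As an alternative to transposing manually, tf.matmul can transpose its second operand itself via transpose_b=True; a sketch (on freshly seeded tensors, not the ones above):

```python
import tensorflow as tf

tf.random.set_seed(0)
t1 = tf.random.uniform(shape=[5, 300])
t2 = tf.random.uniform(shape=[5, 300])

# (5, 300) @ (300, 5) -> (5, 5); transpose_b lets matmul do the transpose itself.
r1 = tf.matmul(t1, tf.transpose(t2))
r2 = tf.matmul(t1, t2, transpose_b=True)

print(r1.shape)                                      # (5, 5)
print(bool(tf.reduce_all(tf.abs(r1 - r2) < 1e-3)))   # True
```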

5 5. Multiply the two tensors (which you created for Question 3) using dot product.
[14]: tf.tensordot(t1,tf.transpose(t2),axes=1)

[14]: <tf.Tensor: shape=(5, 5), dtype=float32, numpy=
array([[82.89256 , 72.2052  , 81.85942 , 73.42679 , 76.32473 ],
       [82.59998 , 74.5827  , 82.203445, 74.39873 , 75.46951 ],
       [68.58944 , 63.571373, 68.09952 , 63.0169  , 62.47265 ],
       [81.72216 , 69.42413 , 76.112305, 72.12494 , 73.24727 ],
       [80.05264 , 70.77235 , 76.11891 , 72.066605, 69.63198 ]],
      dtype=float32)>

6 6. Create a tensor with random values between 0 and 1 with shape [224, 224, 3].

[15]: t3 = tf.random.uniform(shape=[224,224,3], minval=0, maxval=1)
t3

[15]: <tf.Tensor: shape=(224, 224, 3), dtype=float32, numpy=
array([[[0.3971026 , 0.15089345, 0.20154393],
        [0.52001476, 0.35992956, 0.09549856],
        [0.82970285, 0.27970028, 0.44870698],
        …,
        [0.6135253 , 0.69084966, 0.7921655 ],
        [0.43341458, 0.6835841 , 0.3474034 ],
        [0.2023915 , 0.9600184 , 0.63368857]],

       [[0.47929418, 0.14116824, 0.7293595 ],
        [0.45509613, 0.03958046, 0.2668196 ],
        [0.6047485 , 0.9215952 , 0.26235235],
        …,
        [0.907285  , 0.186064  , 0.36659396],
        [0.05686045, 0.19612086, 0.28441453],
        [0.72225046, 0.9526434 , 0.3641889 ]],

       [[0.8135387 , 0.38065398, 0.5177479 ],
        [0.5029644 , 0.2858851 , 0.99594665],
        [0.74883556, 0.21931434, 0.65266323],
        …,
        [0.9111185 , 0.52097785, 0.76133   ],
        [0.3817302 , 0.9534242 , 0.46045423],
        [0.06659079, 0.1632328 , 0.9772552 ]],

       …,

       [[0.63778865, 0.6490238 , 0.03876233],
        [0.59320354, 0.34076023, 0.09987032],
        [0.15262663, 0.20540273, 0.9321383 ],
        …,
        [0.31028306, 0.09064078, 0.80564225],
        [0.13211608, 0.5236708 , 0.8006624 ],
        [0.79365826, 0.16971159, 0.82635   ]],

       [[0.49921978, 0.35491586, 0.27386785],
        [0.11761665, 0.8625643 , 0.99682283],
        [0.10821104, 0.3957559 , 0.27220285],
        …,
        [0.9920306 , 0.00627947, 0.94670475],
        [0.03027987, 0.5870342 , 0.9069879 ],
        [0.35614824, 0.17092192, 0.9549676 ]],

       [[0.77095795, 0.351871  , 0.4344796 ],
        [0.00567389, 0.82145   , 0.7225357 ],
        [0.26773167, 0.36528718, 0.34645987],
        …,
        [0.24428678, 0.08318615, 0.7691251 ],
        [0.7592329 , 0.03250706, 0.78985417],
        [0.9716971 , 0.9415436 , 0.6393144 ]]], dtype=float32)>

7 7. Find the min and max values of the tensor (created in Q6).

1. Minimum value
[18]: min_value = tf.reduce_min(t3)   # avoid shadowing the built-in min
min_value

[18]: <tf.Tensor: shape=(), dtype=float32, numpy=3.5762787e-07>

2. Maximum value
[21]: max_value = tf.reduce_max(t3)   # avoid shadowing the built-in max
max_value

[21]: <tf.Tensor: shape=(), dtype=float32, numpy=0.9999994>
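reduce_min and reduce_max return only the values. To also locate where the maximum occurs, one can flatten the tensor, take argmax of the flat view, and unravel the flat index back into 3-D coordinates; a sketch on a freshly seeded tensor (not the t3 above):

```python
import tensorflow as tf

tf.random.set_seed(2)
t3 = tf.random.uniform(shape=[224, 224, 3], minval=0, maxval=1)

# argmax works on a flat view; unravel_index converts the flat position
# back into (row, column, channel) coordinates.
flat = tf.reshape(t3, [-1])
flat_idx = tf.argmax(flat, output_type=tf.int32)
coords = tf.unravel_index(flat_idx, dims=tf.shape(t3))

# Gather the element at those coordinates; it equals reduce_max.
max_val = tf.gather_nd(t3, tf.expand_dims(coords, axis=0))[0]
print(coords.numpy(), float(max_val))
```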

8 8. Create a tensor with random values of shape [1, 224, 224, 3], then squeeze it to change the shape to [224, 224, 3].

1. Creating a tensor of shape [1, 224, 224, 3]

[23]: t4 = tf.random.uniform(shape=[1, 224, 224, 3], minval=0, maxval=1)
t4, t4.shape

[23]: (<tf.Tensor: shape=(1, 224, 224, 3), dtype=float32, numpy=
array([[[[0.34612274, 0.53801477, 0.7270776 ],
         [0.459715  , 0.35359693, 0.9739468 ],
         [0.09276116, 0.4201125 , 0.4245696 ],
         …,
         [0.6824478 , 0.29208183, 0.7307304 ],
         [0.03076339, 0.8200798 , 0.88963723],
         [0.70424104, 0.06080949, 0.8636968 ]],

        [[0.80728316, 0.8245249 , 0.8298775 ],
         [0.3621285 , 0.42166018, 0.18143165],
         [0.72141457, 0.52103865, 0.89956653],
         …,
         [0.01590216, 0.8474145 , 0.9894401 ],
         [0.05067122, 0.48287857, 0.6156907 ],
         [0.7875259 , 0.12014532, 0.8414848 ]],

        [[0.00523186, 0.35781407, 0.42784023],
         [0.5796423 , 0.27566838, 0.5066861 ],
         [0.40714395, 0.78580403, 0.35708177],
         …,
         [0.83406353, 0.8448372 , 0.7226907 ],
         [0.240924  , 0.4439063 , 0.5718895 ],
         [0.8832259 , 0.8942605 , 0.03558481]],

        …,

        [[0.35500908, 0.9354501 , 0.94322133],
         [0.22420979, 0.02713633, 0.08418977],
         [0.6469008 , 0.5946264 , 0.36269617],
         …,
         [0.9090775 , 0.47371078, 0.36991358],
         [0.33465064, 0.11599791, 0.37037385],
         [0.0961262 , 0.42473996, 0.59555554]],

        [[0.8945197 , 0.8394604 , 0.55770063],
         [0.30466652, 0.3937807 , 0.99953973],
         [0.94918406, 0.8456913 , 0.06104887],
         …,
         [0.8713335 , 0.06324041, 0.2139194 ],
         [0.57146466, 0.5657935 , 0.7970512 ],
         [0.70135677, 0.42073858, 0.82387316]],

        [[0.1749593 , 0.70448613, 0.3120656 ],
         [0.80708146, 0.5467967 , 0.33809483],
         [0.9515544 , 0.5343975 , 0.33860803],
         …,
         [0.4060731 , 0.7671851 , 0.72042096],
         [0.95961964, 0.8141395 , 0.11800551],
         [0.3762436 , 0.51210093, 0.0970763 ]]]], dtype=float32)>,
 TensorShape([1, 224, 224, 3]))

2. Squeezing
[24]: t4_squeezed = tf.squeeze(t4)
t4_squeezed, t4_squeezed.shape

[24]: (<tf.Tensor: shape=(224, 224, 3), dtype=float32, numpy=
array([[[0.34612274, 0.53801477, 0.7270776 ],
        [0.459715  , 0.35359693, 0.9739468 ],
        [0.09276116, 0.4201125 , 0.4245696 ],
        …,
        [0.6824478 , 0.29208183, 0.7307304 ],
        [0.03076339, 0.8200798 , 0.88963723],
        [0.70424104, 0.06080949, 0.8636968 ]],

       [[0.80728316, 0.8245249 , 0.8298775 ],
        [0.3621285 , 0.42166018, 0.18143165],
        [0.72141457, 0.52103865, 0.89956653],
        …,
        [0.01590216, 0.8474145 , 0.9894401 ],
        [0.05067122, 0.48287857, 0.6156907 ],
        [0.7875259 , 0.12014532, 0.8414848 ]],

       [[0.00523186, 0.35781407, 0.42784023],
        [0.5796423 , 0.27566838, 0.5066861 ],
        [0.40714395, 0.78580403, 0.35708177],
        …,
        [0.83406353, 0.8448372 , 0.7226907 ],
        [0.240924  , 0.4439063 , 0.5718895 ],
        [0.8832259 , 0.8942605 , 0.03558481]],

       …,

       [[0.35500908, 0.9354501 , 0.94322133],
        [0.22420979, 0.02713633, 0.08418977],
        [0.6469008 , 0.5946264 , 0.36269617],
        …,
        [0.9090775 , 0.47371078, 0.36991358],
        [0.33465064, 0.11599791, 0.37037385],
        [0.0961262 , 0.42473996, 0.59555554]],

       [[0.8945197 , 0.8394604 , 0.55770063],
        [0.30466652, 0.3937807 , 0.99953973],
        [0.94918406, 0.8456913 , 0.06104887],
        …,
        [0.8713335 , 0.06324041, 0.2139194 ],
        [0.57146466, 0.5657935 , 0.7970512 ],
        [0.70135677, 0.42073858, 0.82387316]],

       [[0.1749593 , 0.70448613, 0.3120656 ],
        [0.80708146, 0.5467967 , 0.33809483],
        [0.9515544 , 0.5343975 , 0.33860803],
        …,
        [0.4060731 , 0.7671851 , 0.72042096],
        [0.95961964, 0.8141395 , 0.11800551],
        [0.3762436 , 0.51210093, 0.0970763 ]]], dtype=float32)>,
 TensorShape([224, 224, 3]))
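tf.squeeze removes axes of size 1, and tf.expand_dims adds one back, so the two operations are inverses for a leading batch dimension; a minimal sketch:

```python
import tensorflow as tf

t = tf.random.uniform(shape=[1, 224, 224, 3])

# squeeze drops the size-1 batch axis; expand_dims puts it back.
squeezed = tf.squeeze(t)
restored = tf.expand_dims(squeezed, axis=0)

print(squeezed.shape)   # (224, 224, 3)
print(restored.shape)   # (1, 224, 224, 3)
print(bool(tf.reduce_all(t == restored)))  # True
```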

9 9. Create a tensor with shape [10] using your own choice of values, then find the index which has the maximum value.

[34]: t5 = tf.random.uniform(shape=[10], minval=0, maxval=5, dtype=tf.int32)
t5

[34]: <tf.Tensor: shape=(10,), dtype=int32, numpy=array([3, 1, 4, 3, 2, 4, 0, 2, 2, 0], dtype=int32)>

[35]: tf.argmax(t5)

[35]: <tf.Tensor: shape=(), dtype=int64, numpy=2>

10 10. One-hot encode the tensor you created in Q9.

[37]: tf.one_hot(t5, depth=10)

[37]: <tf.Tensor: shape=(10, 10), dtype=float32, numpy=
array([[0., 0., 0., 1., 0., 0., 0., 0., 0., 0.],
       [0., 1., 0., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 1., 0., 0., 0., 0., 0.],
       [0., 0., 0., 1., 0., 0., 0., 0., 0., 0.],
       [0., 0., 1., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 1., 0., 0., 0., 0., 0.],
       [1., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 1., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 1., 0., 0., 0., 0., 0., 0., 0.],
       [1., 0., 0., 0., 0., 0., 0., 0., 0., 0.]], dtype=float32)>
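Each row of the one-hot output has a single 1 at the position given by the original value, so tf.argmax along axis=1 recovers t5. Note that depth only needs to exceed the largest value (here 4); depth=10 simply leaves the trailing columns all zero. A minimal sketch of the round trip, using the values from the run above:

```python
import tensorflow as tf

t5 = tf.constant([3, 1, 4, 3, 2, 4, 0, 2, 2, 0])

# Each one-hot row has a single 1 at the index equal to the original value,
# so argmax along axis=1 inverts the encoding.
encoded = tf.one_hot(t5, depth=10)
decoded = tf.argmax(encoded, axis=1)

print(decoded.numpy())  # [3 1 4 3 2 4 0 2 2 0]
```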

