Foundation of Computer Systems - Live Session - 21-1-2024 - English (Generated)

So just to, uh, first, before I begin this course, let me briefly introduce myself. So this is my background.

Uh, I am actually an assistant professor at the Indian Institute of Technology Bhilai. I joined this institute in August 2020, so close to three and a half years ago. And then prior to this, I did my PhD from IIT Kharagpur.

And then after my PhD, I went for a postdoctoral position at the University of Texas at Austin.

My bachelor's is also from an NIT. I have some industrial experience: I worked for Oracle for about two years after my bachelor's. And I also have some research internship experience at IBM Research India.

So my research work mainly lies in the high performance computing domain: for example, graphics processing units (GPUs), how to use them, and how to do machine learning and graphs, et cetera. So, graph analytics, mainly focused towards high performance computing, right?

So this is basically the background about myself. And in today's lecture, basically, I want to give an introduction to... somebody has raised a hand. So do you have any questions?


Please feel free to talk.

Uh, no, sir. It was just, uh, for the yes and no.

Oh, okay. Fine. Okay.

So again, in between, during the course sessions, you may feel free to stop me at any point in time. You can ask me any questions. If I'm going very fast, then also you can tell me, so we'll try to go according to your pace.

Okay. So today, first, I'll be talking about the overview of computer architecture, and then I'll give an overview of the other major components of this course, right?

So if you are learning anything, any technology related to computing, right, first of all you should have some basic concepts clear, right? So whenever you're writing some programs, you should know what actually happens inside when the program is getting executed, right? That, basically, you'll come to know with the help of computer architecture, right?

And then, when there are a lot of programs you are dealing with, I mean, executing multiple programs, how the processes are worked out, et cetera. So all these things one should understand, basically, if they're doing any courses related to computing technology, whether it's cloud computing or maybe big data and blockchain. So you should have some fundamental concepts about this; the fundamentals of computer systems should be clear. So this course is mainly targeted at that.

And I'll focus, first of all, on giving the details about computer architecture. I'll talk about that, and then I'll give some details about the operating system from the next lecture. So I'll cover these two topics majorly as a part of this course. But in this lecture, I will give some overview of these topics: basically, an overview of computer architecture and then an overview of operating systems.

Basically, in computer architecture itself, we will talk about what the computer organization is, how the instructions will be executed, how you can represent the machine instructions and how the instructions get executed, and what the different levels of the memory hierarchy are. And also, in the operating system, we are going to talk about how the different programs get executed, what processes and threads are, and what happens if two processes try to execute simultaneously, what kind of problems you may have. And then we are going to talk about how the memory, as well as the files, are managed within the operating system. Right? So these are some of the subtopics we are going to cover in detail throughout this course.

Right? So let's begin with a basic thing. So I would like to ask: what is a computer? Can anyone tell me? Just in layman terms also you can tell; that's not a problem.

Yeah, it's a machine. It's a machine, uh, taking the instruction and giving the result, uh, to the processor. Yeah, like, we give the instruction, and it'll take the instruction and process the instruction and give the result in the form of whatever we want.

Okay. So Pargo also has raised a hand; would you like to answer?

Yeah, I agree with the previous person who just answered. Oh, okay. Uh, yeah. The computer is a machine that takes data as input, processes the data, and shows the output in the format we want.

Okay. Any other, uh, different answers?

Uh, yeah. So it is a machine which consists of processors: there is a CPU, memory, input/output devices. With the help of those, it performs arithmetic or logical operations on the input automatically.

Oh, okay. Thank you. So, anybody else? It is a machine which processes data?

Mm-hmm. It is a machine which will process only mathematical data, mathematical operations.

Okay. I think most of you have...

It's a... yes, tell me. Sorry. No problem.

Yeah, it's a machine which simplifies our day-to-day activity; by giving instructions to the system, we get the desired output.

Okay, good. Uh, so, so...

An electronic device which works on arithmetic and logical data; we can say it's an advanced-level calculator with much more functionality, which takes input, processes it, and gives output according to our instructions.

Good. And somebody was trying to answer. Yes?

So a computer, basically, is a programmable electronic device that accepts raw data as input and processes it with a set of instructions to produce a result.

Uh, I think most of you have given the correct answers, but in slightly different forms. So it's basically a machine that can be instructed to perform certain tasks. Basically, instructed means it can be programmed: we can tell the machine to do certain tasks, and then it'll try to perform those tasks and give the results to us. I think most of you have captured the intuition.

So now, what kind of tasks can it perform? So this machine can perform certain tasks like addition, subtraction, multiplication; there are a lot of instructions that it can perform, right? So this machine will understand these kinds of tasks; it can do these kinds of operations. So for example, you have these operations: let's say you have to do ADD A, B, and then you want to put the result in C. Again, for subtraction, it can take A and B as operands and give the result C as A minus B; and then multiplication, C equal to A into B, right?

So these operations are executed by the central processing unit, right? So the CPU basically can execute these particular operations, whatever they are; it has a well-defined set of operations it can perform in a well-defined fashion, and this central processing unit basically executes these instructions, right?
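Just to make those operations concrete, here is a minimal C sketch; the names a, b, c simply mirror the A, B, C used above and are only illustrative. Each statement corresponds to one operation the CPU can execute.

    /* A minimal C sketch of the three operations just discussed. */
    #include <stdio.h>

    int main(void) {
        int a = 7, b = 3, c;

        c = a + b;   /* ADD  A, B -> C */
        printf("add: c = %d\n", c);

        c = a - b;   /* SUB  A, B -> C */
        printf("sub: c = %d\n", c);

        c = a * b;   /* MUL  A, B -> C */
        printf("mul: c = %d\n", c);

        return 0;
    }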

But we have to put these instructions somewhere, right? So where do we put these instructions, basically, so that it can do these tasks? Any idea where we put them?


Memory? Memory, right? Registers.

So, memory... uh, registers also, actually, that's fine. But from a programmer's point of view, these instructions are actually put in memory, right?

Right. And then, after that, now you supply all this information, right? A, B, C, whatever instruction you put, and the corresponding data also you may want to put in memory. But let's say now you take this ADD instruction, right? C = A + B, right? So this A and B are the inputs, right?

So how do you give these inputs?

Input... input device.

Yes, so we give this input from the input devices, and whatever data we enter should go to the memory location, right? So we have a lot of input devices that you can use: some people may use touch devices, some people may use a mouse, some people may use the keyboard as the input, et cetera. You have a lot of input devices that basically connect to the machine, and they give the data, which will be stored into the memory.

Now, the CPU performs these particular operations, whatever operations it can perform, and the results are stored in the memory. But the user does not just want it in the memory; the user actually wants to see the output, right? So where do you see the output, specifically?

Output devices, like a monitor.

Monitor, yes. So you have a lot of output devices where you can visualize it. So, a monitor: basically, when you perform the addition, you want the result to come back, to be seen on the screen. So you will just write the output to the output device.

So this is basically the high-level overview of what the computer organization will look like: you have the central processing unit, and then you have memory, which holds the data as well as the instructions. The data is fed from the input devices and stored in memory, and once the results are computed into the memory, you can display them on any kind of output device you're interested in, right? So this is basically the high-level view, but if you see it in a little more detail, it is the same picture, basically, broken down further.

So you have, on the left side, the central processing unit. That central processing unit, basically, is responsible mainly for executing the operations, whatever instructions the user has provided. Okay? So, for executing these instructions, you also require some kind of hardware unit to perform the computations, like, for example, add, subtract, whatever arithmetic operations you are interested in, right? You require some hardware circuit to perform them; that is called the ALU, the arithmetic logic unit. And you also have a set of registers. These are fast-accessible storage units that help you to perform the computations very fast, right? So you bring the instructions, and you bring the data, from the main memory to the registers before you start performing the operations on that data. So registers are very few in number, and they are basically very fast, flexible storage units, right?

And then you have something called the control unit. Basically, the control unit generates signals; it tells you, if you have to execute an add instruction, what signals have to be generated, and if you have to execute a multiplication instruction, what signals have to be generated so that that particular instruction gets executed, right? So this is all about the central processing unit.


And then you have this, uh,

after center processing unit, you have its main memory also.

So this main memory is basically stored the instructions

as well as the data that is written by the programmers,

whatever programmer that wants to execute it.

And then you also have, uh, input and output devices.

You can see input devices like keyboard

and mouse, various input devices.

You can connect it, so the bus as well as this thing.

And then you have display as, uh, the output device, like

for example, displays, display units,

and then printers, et cetera.

And, uh, these are like whatever I discussed, uh, here.

And then you also have this IO store,

these devices like secondary store, these devices,

which can also store, uh, very large amount of data, right?

So this is how the, uh, overall computer organization,

the computer, will look like intermittently when you try

to wind down to the multiple pieces.

Any questions here before we proceed? Anything further?

Yeah. Mm-hmm? Tell me.

Uh, sir, could you please elaborate on what the fundamental difference between memory and storage is?

Okay, so this main memory, right, is volatile storage, which means that if the power goes off, then the contents of the main memory are lost. But secondary storage, like the hard disk, is not volatile; it is durable, so the data that you write, unless there are failures, will be there. Even if the power is lost, the contents are not lost.

Yeah. It's basically RAM and ROM?

No, ROM is, again, different. ROM is read-only memory, so you only write it once, basically. So secondary...

Is it the same, like, say, RAM and the hard disk drive, right?

Yes, yes. Main memory is basically RAM.

And I wanted to talk about secondary storage, the hard disk drive.

Yes. So, even in memory, if we deep dive, is there some persistent memory as well, which stores some data? Because, for example, when the machine starts, the BIOS is loaded, so it is stored somewhere in the memory.

That is stored in the ROM, basically; that is stored in the ROM. That is called read-only memory.

That is also a kind of memory, right?

Yes, that is a kind of memory. So the contents of that memory are not lost even if the power is lost.

So the moment a process starts, sir, is that storage allocated? Or is it dynamic? Because only once the process gets completed, that's when it'll decide, right, how much...

A process? You mean to say when the computer starts, or...?

Correct. Yeah. Any computational task: how does it decide how much memory I need to allocate, at the start only? Let's suppose I start a task and the power goes off. So in that situation, how does it dynamically decide that, okay, I need so much memory?

Okay, so basically what happens is that whenever you write any program, you have to compile it, right?

Correct. Yeah.

So the compiler, depending on the program that you write, will try to tell how many variables are there, how many of each are there, so it can calculate how much size it requires. So this kind of static memory, whatever is needed, it can try to allocate up front, right? So it knows that. But there is something called dynamic memory, where it doesn't know how much the size will grow. So it'll give you a pointer, the starting address of it, and then, as and when the demand is increasing, the size of that area, which can grow, can also increase.
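A minimal C sketch of that difference (the sizes here are arbitrary, purely for illustration): the static array's size is fixed when the program is compiled, while the dynamic buffer starts from just a pointer and can grow with realloc as demand increases.

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        int fixed[10];                        /* static: size known at compile time */

        size_t n = 4;
        int *dyn = malloc(n * sizeof *dyn);   /* dynamic: just a starting address   */
        if (dyn == NULL) return 1;

        n = 16;                               /* demand grows at run time...        */
        int *bigger = realloc(dyn, n * sizeof *bigger);
        if (bigger == NULL) { free(dyn); return 1; }
        dyn = bigger;                         /* ...so the region is grown          */

        printf("static: %zu ints, dynamic now: %zu ints\n",
               sizeof fixed / sizeof fixed[0], n);
        free(dyn);
        return 0;
    }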

Okay. Uh, there are some areas where they're not allowing data centers to be, uh, I mean, to be built, because of the combustion, right? So, in the central processing unit, which one do you think will generate more combustion? I mean to say, more burden?

I think your point was not clear, so can you elaborate a little bit more?

See, basically, the data centers: within the world, if you look at it, some of the countries are not allowing new data centers to be built because of the combustion. Combustion means, uh, CO2 or something; I'm not into the details of that, right? Okay.

So when we have large data centers like that, and we are really referring to the CPU unit here, which will create more combustion, I mean, which will create more of that CO2 kind of thing, which the government does not allow? I just wanted to understand which area of the central processing unit will need more energy, will release more.

Okay, so there, actually, you are saying that those kinds of hardware units which require more power, which generate a lot of heat as well as consume energy, they're not allowed?

Yeah, exactly, exactly. That is the question. So there we are talking mainly about the data center.

Yeah. Yeah. So basically, in the data center, you have a lot of high-performance computing units, where there are a lot of processing units; each processing unit will also have memory. So, for example...

Sorry, your voice... a little bit closer, please.

Hello? Better?

Yeah, before it was better, but now it has gone down.

Better now? Is this better?

Uh, now it is really good.

Okay. Okay.

So it depends, basically. Nowadays you have these multiple architectures which do not have only one single processing unit; you have multi-chip processing units, and then you also have many-core accelerators, for example, like GPUs, et cetera, where you have thousands of processing cores. So whichever one has these processing core units, which are drawing a lot of energy, maybe they do not want to allow that.

Okay, fine. No, sir. Any other questions?

Uh, sir? Uh, sorry, but I still did not understand; to bring you back to the question. So we talked about how persistence is the main, fundamental difference between memory and storage. But what I'm trying to understand is that there is some persistence in memory as well. Like, we can change the BIOS configuration also; we can make changes to the BIOS, so it is taking those changes and keeping those changes even after the power is cut off. So that doesn't make it the fundamental difference between memory and storage, right?

Oh, no, no. There are different technologies, in the way they actually developed, right? So, what is the target for each? In main memory, you have RAM; that is volatile storage. Again, there are various other kinds of memory technologies also: like, earlier you had floppies, right? So, floppy disks; you had CD-ROMs; you had secondary storage also. So, some of them are actually... even if we talk about the main memory alone, you have different kinds of memory units also, like, for example, registers and caches. There are different choices, basically, depending on whether you want to make it fast, or whether you want to make it cost-effective, or you want more storage. So there are different parameters that distinguish which memory you are trying to target in the design, even at the time of implementation. Also, these registers use a different technology, main memory uses a different technology for its design, and secondary storage also uses a different technology for its design. Depending on the access requirements as well as the durability requirements and the cost requirements, different memory units are designed.

So, like, what is the fundamental difference which differentiates it, whatever the different kinds of memories are, from storage? Like, if someone asks me what the difference between memory and storage is, in one line, what could I tell the person?

Yes, that's what will differentiate every kind of memory from the storage devices you can choose from: typically, the storage devices are meant to be durable devices, but the main memory units are not meant to be durable; main memory is volatile storage, meant, typically, to make executions faster. I think we will have one to two detailed lectures on memory technologies; then I think it may be a little more clear.

So we'll talk about registers and RAM, and then we will also talk about caches; we'll also talk about storage units. So all these things we are going to talk about in detail; then maybe it will be more clear.

Okay. Okay, sir. Thank you.

Sir, one quick question. So, sir, can we say that, in a way, any data which I want to persist will always use secondary memory, and that main memory has no role to play there, right?

Yes, you can, yes. If you want a durable storage unit, durable data, then you can use the storage.

And data manipulation also, if we do it, that's again related to secondary memory?

Uh, I mean, whatever manipulated data you want to be persistent, that also you can put on the storage. But for the intermediate computation, right, whatever modifications or manipulations you have done: first you bring the data from secondary memory to main memory, and then you do the manipulation, and if you want the results to be persistent, you put them onto the secondary storage afterwards.
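A minimal C sketch of that flow (the file name result.txt is just an illustrative choice): the manipulation happens on ordinary variables in main memory, and the value only becomes durable once it is written out to a file on secondary storage.

    #include <stdio.h>

    int main(void) {
        int data = 10;
        data = data * 2 + 5;              /* intermediate manipulation in main memory */

        FILE *f = fopen("result.txt", "w");
        if (f == NULL) return 1;
        fprintf(f, "%d\n", data);         /* persisted: survives a power-off          */
        fclose(f);
        return 0;
    }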

So main memory is basically just for the processing; that's what it's allocated for?

It is faster, yes. Faster storage, actually. Okay.

So there is a gap of orders of magnitude in terms of speed: main memory is very fast, and compared to main memory, the registers and cache are even faster. So that's why...

This is secondary memory only, right, sir? When you say cache, when you spoke of cache?

No, no. Cache is not secondary storage. Cache also sits in between the central processing unit and the main memory, and it is faster than even the main memory. And it is smaller in size, basically.

And, uh, cache is allocated from where? Hardware-wise, if we see, what exactly happens: is main memory split so that it can store the cache? Is it?

Yes... cache actually sits in between the processor and the main memory.

And who allocates it? If the system allocates it, how is that memory allocation done?

Basically, there are different technologies in the cache. Also, there are different policies for when to bring the data in, when to evict the data, et cetera. So all these things we are going to discuss in one or two complete hours.

Okay.

Question, sir: the registers are, uh, like, permanent memory, or...?

No, they're also not permanent memory. Once the power is lost, the contents of the registers are also lost. So the registers are very fast accessible storage units, very small in quantity, unlike the main memory, which can go up to GBs, right? Registers are very few in number.

So, the central processing unit, the main memory, and the secondary memory: what is the distance we can think of between them? My question is, do the CPU unit and memory really have to be very close to serve their purpose, or can we create some distance between these two areas? Like, if I have the memory in a separate data center which is a hundred kilometers away, and the CPU unit a hundred kilometers away, does that work?

Uh, basically, you would like to keep the data near the processing units; the main memory units you would like to keep as near as possible, so that there is not much latency for accessing the data. In fact, some technologies are also coming up, like in-memory computing technologies, right, where you bring the processing units closer to the memory. The farther away you keep it, the longer the latency you are going to suffer. So typically you would like to have it on the same chip, or, you can see, the RAM is also inserted on the same motherboard; for the CPU unit and the main memory units, you have dedicated slots.

Oh, okay. Okay. Uh, sir, one question I have. So, generally, we do the programming, and that program will go to the secondary memory, right?

The program that you write: let's say you write the file, and then you open it and save it; that will go into the secondary storage.

And what about the main memory? So actually, I have a question regarding the RAM. So we have some software, like, we install the software, right? And the execution will be slow because the RAM is slow and we are trying to run a heavy program. So in that case we increase the RAM or something, and then the system becomes faster. So why did the slowness happen? Did it happen because of the main memory, because the execution is not happening, the software is not loading, at the right RAM speed?

Okay, so I'll tell you. So what happens is that the moment you click any executable, right, the moment the program starts, the program gets loaded onto the main memory, right? And then, from there onwards, let's say it's not that you are just executing one program; as you can see in your console, there are many programs, like some web browser and the text editors; you will have some other applications that are actually already loaded. So all of them have to be fitted into the same main memory, right? And the execution happens when the program is in main memory, and then it starts executing. So, let's say, when there are so many programs that you are executing simultaneously, there is something called an operating system that has to manage all these programs so that the memory is utilized effectively. For example, if there is not sufficient memory for a particular program, maybe it tries to evict some parts of a program and then execute it. And whenever it needs to access some part of a program which is not in memory, then it has to bring it from the disk and then load it. So during that process, it may become very slow. So this whole process is called main memory management, and that we are going to talk about in detail, for around two lectures or a few lectures, in the operating system part.

Okay? Okay.

So when you talk of a process, all the process IDs are in main memory, right? The moment the process...

Yeah. Yes, yes.

Okay.

Okay. So, but basically, you understood this, the overall picture of the computer, right? So whatever questions you have, I think they require detailed answers, and only through the course, throughout the course, will I be able to answer all of these in detail, because there are very minute details for each and every topic. Even for the central processing unit and the main memory themselves, I will have two or three lectures: what the different technologies are, how the program is loaded, how the instructions get executed. All these things are very fundamental units that I think we should try to understand. So this course is mainly for making your fundamental concepts very clear. So, starting from, first of all, how the machine instructions, whatever you write, are executed internally, how they're stored, and then, when you run different programs, how they get executed. All these things are going to be covered in this course in a fully detailed manner, basically.

Yeah. I hope you'll also refer to the cloud, I mean, the cloud setup, when you are detailing and explaining the terms here, right? I mean to say, not a kind of local system; your examples almost all refer to the area of cloud computing, right? I mean to say, this particular area.

Uh, so actually, whatever topics we cover, first of all, they are almost all related to cloud computing, right? Because if you don't know the basics, even if you're learning something higher, if you're unable to connect it, then it doesn't make sense.

For example, you have to do addition, right? So addition and multiplication you can just use, but what goes on behind the addition? What kind of hardware is there? How do you do the addition, right? So if you can understand those things fundamentally, then you can kind of appreciate what is going on. So it's not just about using the technologies; it is about how they design the technology, how things work out. We should have these fundamentals also very clear.

Okay? Got it. So our focus is towards...

So, for example, if you have to teach a child to do a calculation, you don't give her the calculator to do the computation; you'll ask them, make them understand what goes on, how to do it. So this course is about understanding all the fundamentals of computer systems. Of course, they're all connected to cloud computing, or any kind of latest technology we talk about. If you don't understand the fundamentals, maybe you'll not be able to appreciate it so much. So this basically makes the foundation very clear in terms of computing technologies.

Yeah, yeah, yeah. So our focus is towards cloud computing, by addressing the latest technologies around the cloud setup, right? I mean to say, from the basics?

Yes, yes, yes. From the basics. Yeah.

Yeah. Thanks a lot. Yeah, thanks.

Okay. Feel free to ask any other questions. It is not that I have to rush through it, so I'll try to go slowly; even if I cannot cover this today, then maybe I'll carry it into the next lecture. That's okay. But I'll try to cover the whole syllabus somehow.

Okay. Great. Shall I proceed further?

Yeah, one last question I have, with the question being raised about the CPU and also the distance between it and the memory. Yesterday we were in a session where some of the data centers were not being allowed to be created because of the heavy energy conversion, the burnout, right? So that is the reason I have raised this question, because the central processing unit is such an area that it'll create more combustion, I mean, burning, right? Energy, correct. So the idea behind this: if that is the case, then I can place the CPU unit in one country and the memory unit in other countries, wherever the countries will allow creating the data centers. So that is the whole idea I am trying to come out with. So is that research still going on, or is it already set up across the board now?

No, no, research is going on in that area, right?

Uh, we have those devices. Those are called, if you go ahead and search, Cisco Edge devices. So the storage is in a separate area, and the processing, I mean, the other hardware, is in a different area.

Yes. Yes. Okay. Cisco, Cisco. But there are latency issues with those.

Yeah, Cisco Edge, uh, edge devices, you mean to say, right?

Yes, sir. So those are for the CPUs you're referring to?

Yes, sir. And the backend storage is in a different data center altogether.

Oh, great. Yeah, that is what I was just looking at. Okay, thanks a lot. Yeah.

Of course.

Actually, sir, I work in the storage industry only. Yeah.

Yeah. No, no, of course. See, even I learn things from you whenever you talk about it, because you're all working professionals, right? So, yeah.

I am into the cloud technologies. Yeah.

So, if you have some points, yeah, yeah, maybe, whatever you know, you can also share some information about the technologies you're working on, so all of us can learn together.

Yeah, yeah, yeah. Service improvement is part of our life now. Yes.

Yes. Because, as an instructor, I usually teach according to the student's point of view, but the way I have to teach is slightly different for you. So if I have to... yeah.

It's really very good, the way you're presenting it, right? It's really very good.

Thank you. Okay. So if there are no questions, then I'll try to go ahead.

Okay. So now we have understood that we have these central processing units. These are very powerful; they can do a lot of computations very fast; millions of operations can be done in seconds. But to achieve this, you also have a lot of hardware circuitry: even to do the addition, subtraction, everything, somebody has to build a circuit corresponding to it. So it has complex circuitry to achieve all of these things, right? But the thing is, this machine can only understand ones and zeros. So everything that you want to do, it can only understand with ones and zeros; it cannot understand anything other than that. Even if you have to do addition, multiplication, everything, you have to specify the data, whatever it is, everything, in ones and zeros, and this is called binary. So all these things it does, but only in machine language. So this is called the binary language, okay?

So, for example, let me tell you: let's say you have to do a simple addition, all right? So, addition of two variables, A and B, and then you want to put the result in C. Let's say you also have to do a subtraction. So you have to do everything in binary itself. So for this, basically, we try to give some kind of encoding, because it doesn't understand anything other than zeros and ones, right? So we have to represent everything in the zeros-and-ones system. Let's say the processor will understand the addition operation; it'll represent it with 0. And let's say subtraction, if it has to do it, it represents with 1, right? And then the data: the variable A, let's say you represent it with 00, and then the variable B you represent with 01, right?

Let's say now, if you have to perform the addition, let's say the addition of A and B, and you want to put the result in C, right? So this we understand, but the machine will not understand it, right? So for the machine, we have to state it in a way that it can understand. So you have to represent all of this in ones and zeros. So basically, the first part is the ADD: since it is an add, you have to put 0 first; then, the add of what? Basically A and B, and where the result goes, right? So A is represented with 00, then B is represented with 01, and then C is represented with 10, right? So this is how you have to specify it. So everything, whatever you want to specify, you have to specify in ones and zeros.


So similarly, subtraction, right? So for subtract, you basically have to put the code meaning subtract, which is 1, and then A is 00, B is again 01, and C is 10, right? So if you have to express your task, you have to write everything in the machine's form. So the machine can only understand this language; it cannot understand what the programmer actually writes it in, right? So this is how the programs become something it can understand. Of course, this is a simple example; the detailed instruction sets are there. So basically, everything, whatever we have to write, we have to write in machine language, so that the machine can execute these instructions. That is fundamental, right?
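A small C sketch of the toy encoding just described; this exact bit layout is made up purely for illustration: one opcode bit, 0 for ADD and 1 for SUB, followed by three two-bit operand codes, A = 00, B = 01, C = 10.

    #include <stdio.h>

    enum { OP_ADD = 0, OP_SUB = 1 };          /* 1-bit opcode        */
    enum { VAR_A = 0, VAR_B = 1, VAR_C = 2 }; /* 2-bit operand codes */

    /* Pack "op src1, src2 -> dst" into the low 7 bits of one byte. */
    unsigned char encode(unsigned op, unsigned s1, unsigned s2, unsigned dst) {
        return (unsigned char)((op << 6) | (s1 << 4) | (s2 << 2) | dst);
    }

    int main(void) {
        unsigned char add_abc = encode(OP_ADD, VAR_A, VAR_B, VAR_C);
        unsigned char sub_abc = encode(OP_SUB, VAR_A, VAR_B, VAR_C);
        printf("ADD A, B -> C encodes as 0x%02X\n", (unsigned)add_abc); /* bits 0 00 01 10 */
        printf("SUB A, B -> C encodes as 0x%02X\n", (unsigned)sub_abc); /* bits 1 00 01 10 */
        return 0;
    }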

So, writing all these things... yes, yes?

Uh, I have a doubt. I'm really sorry, whether it makes sense or not, but it's bothering me.

So, yes, no problem.

Yes. So, like, we have been learning for decades now that a computer only understands binary language, which is one and zero, still on and off, right? Because it's an electronic device. So why is it that every technology has become so advanced and sophisticated, with various layers coming on top, whether at different levels, let's not go into that, but the base language which the computer understands, the fundamental language, has not advanced or been made more sophisticated? Like, it's still one and zero only.

Yeah. So that comes with the hardware, actually, right? So with these hardware signals, with a digital signal, you understand zero as being at the lower level and one at the higher level.

Yes, on and off, right? Right.

On and off kind of thing, right? Yeah. So that is the kind of technology it was developed with.

But if you can design another circuitry which can actually fluctuate between various modes, and if you can exploit that, then that may be another revolution.

We are having quantum computing; I think the technology is going to change with that. Mm-hmm.

So, basically, all these things come from the physical devices, from how these physical devices operate, even these transistors, right? So that's basically the nature of these devices, what they're actually meant for. So, accordingly, the computation has been shaped: the devices behave like this, and according to that, we have tried to use these devices to do our computation tasks.

Do we cover anything related to quantum computing?

No, no. We do not cover that.

Can someone please tell me in one word, I mean, one sentence, what is meant by quantum computing here?

So, like, if you want to see how it works: it still works in the form of zero and one, but it is, we can say, a probability of zero and a probability of one, basically.

Uh-huh.

So it is like, from my view, I'm zero and I am one at the same time. Like, if you see... I'm also new to quantum computing; I'm working on PQC, which is post-quantum cryptography. So in that, what happens... how am I going to explain it now? I'll do one thing: I'll explain it in the group. I have that diagram which I have drawn, so I'll share it in the group, okay?

Okay, great. Yeah. Yeah. Thank you. Thank you.

So, this binary language is basically tied up with the physical device, with the properties and how the electronic devices behave. So, according to that, it has this on-and-off nature, and accordingly we try to encode the information as 0 and 1. And because everything it does is in terms of zero and one, we also express everything in zeros and ones.

Basically an on and off condition, right?


Mm-hmm. When we look at zeros and ones, correct? Yeah. And whatever advancement we have observed till today is on the base of these zeros and ones, which is called machine language, and so on. And after that, we have actually created many layers on top of it, like machine language, assembly language, the C layer, and then the C++ layer, to ease our life.

Yes. But it'll internally convert to on and off.

Right? Correct. Yeah, that's perfect.

Okay. So somebody was trying to write... okay, okay, that's why. Yeah, yeah, no problem. So now, these are some simple examples, right? ADD and SUBTRACT I've taken, but in general, whatever operations it can do, right: each machine can support certain types of instructions. So some machine will support add, subtract, and so on, and some other machine can also support something a little more advanced, basically, depending on how the circuitry is designed. So the set of instructions that a machine can support is basically what the instruction set architecture is.

So it tells you how you can understand the instructions, what kinds of operations you can perform, how you have to specify the operations, and how you have to specify the addressing modes, et cetera. For example, you see, right, here, what we have done: we have represented the operation first, and then the first variable, then the second variable, and then the third variable, right? So it is also possible that you can design another machine which represents the same operation in a different way: for example, first you may put the result, then you put the two variables, and then you put the operation last. So the meaning of the program may change if you go from one instruction set architecture to another instruction set architecture. It basically tells you, the instruction set architecture tells you, what kinds of instructions there are, how you can represent these instructions, and how you can specify them, right? So basically, each machine has certain things it can do, and the way it can do them, the way you specify everything, is defined by the instruction set architecture, right? So each machine has its own instruction set architecture, right?
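A tiny C sketch of that point; both operand orderings here are hypothetical, invented only for illustration. The same three fields mean different things depending on which instruction set architecture's convention you read them with.

    #include <stdio.h>

    int main(void) {
        unsigned field[3] = {2, 0, 1};   /* three operand codes from an instruction */

        /* Hypothetical ISA 1 reads them as: op src1, src2, dst */
        printf("ISA 1: src1=%u src2=%u dst=%u\n", field[0], field[1], field[2]);

        /* Hypothetical ISA 2 reads them as: op dst, src1, src2 */
        printf("ISA 2: dst=%u src1=%u src2=%u\n", field[0], field[1], field[2]);
        return 0;
    }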

So now, where are these instructions stored? For example... somebody was trying to speak... okay. So now, where are these instructions stored? Basically, they are stored in main memory. So whenever the program has to be executed, whatever the programmer has written ultimately gets boiled down to the machine instructions, the machine language, and this, along with the data, is stored in main memory. And you have a memory hierarchy also: there is the main memory, and then you have caches as well; main memory is typically slower compared to caches, but caches are limited in size compared to main memory. And then the instructions can also be loaded onto the registers, which are even faster accessible storage units, typically smaller in quantity than the cache. So you have this memory hierarchy. So this we are going to talk about in detail in the upcoming lectures.
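A rough, hedged C sketch of why that hierarchy matters in practice: walking the same large array sequentially (cache-friendly) versus with a large stride (cache-unfriendly) usually shows a visible timing gap, though the exact numbers depend entirely on the machine.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (1L << 24)          /* 16M ints, larger than typical caches */
    #define STRIDE 4096L

    int main(void) {
        int *a = malloc((size_t)N * sizeof *a);
        if (a == NULL) return 1;
        for (long i = 0; i < N; i++) a[i] = 1;

        long sum = 0;
        clock_t t0 = clock();
        for (long i = 0; i < N; i++) sum += a[i];             /* sequential walk */
        clock_t t1 = clock();
        for (long s = 0; s < STRIDE; s++)                     /* strided walk,   */
            for (long i = s; i < N; i += STRIDE) sum += a[i]; /* same total work */
        clock_t t2 = clock();

        printf("sequential: %.3f s, strided: %.3f s (sum=%ld)\n",
               (double)(t1 - t0) / CLOCKS_PER_SEC,
               (double)(t2 - t1) / CLOCKS_PER_SEC, sum);
        free(a);
        return 0;
    }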

Right. So we have understood that, so far, we have certain things that we want to perform, and we have to write them in a way that the machine can understand. So, for the machine, you write them in the form of instructions, in the way the machine can understand, and then you put them into the storage, and then the instructions have to be executed.


So before we proceed further, any questions?

Sir, going forward, are we also expected to write assembly-level programs in the course?

In the course? Actually, we are going to have some minor exercises, not in very great detail, but I am going to talk about assembly language also.

Okay, sir. Thank you. Yes, sir.

Yes. Yes, sir, in the previous slide you showed that, you know, the instruction set has, you know, operands and opcodes and all those sorts of things. So can you just, you know, briefly elaborate on those things, I mean, with an example, so it can be understood easily?

Okay? Okay. So the opcode is nothing but an indication of which of the different operations supported by the machine you want, right? So here, in this example, I have taken ADD and SUB, right? These are two different operations, right? So, if there are only two operations, then one bit is sufficient to represent them: ADD you can represent with zero, and subtraction you can represent with one. Now let's say you have hundreds of operations, right? Then, let's say you have 64 operations: you can represent all these 64 operations in ones and zeros using six bits, right?

So that is basically called the opcode. The opcode tells you what instruction, what kind of operation, you want to perform, so that you can distinguish it. And the operands are nothing but the inputs to that particular operation, right? So, for example, here, ADD requires two operands, two input elements, that is A and B, and you want to put the result in C, right? So basically that has to be specified using the operands, right? And the addressing mode basically tells you what the different ways are in which you can refer to the operands of this operation, right? So you can refer to an operand directly: this A and B can either be registers, or they can be in main memory, or you can bring them in through a kind of pointer representation. So what kinds of different representations are allowed is basically specified by the addressing mode.

So this is at a high level, but in the next lecture I am going to talk about this instruction set architecture in detail.

Uh, thank you. Sorry, one question, maybe a wrong one: when you're referring to 64 operations, can I say 64-bit? Is something related to that?


No, no, no: 64 operations. See, for two operations, you are able to do it using one bit, right?

Okay. Yeah, yeah.

So let's say four operations, sir; there, how many bits are sufficient?

Uh, one?

No, no, for four. Four operations: add, subtract, multiplication, division. Okay, four. Then how many bits are needed?

Two bits.

Two bits are sufficient. Okay. Like that, if you have 64 operations, then you can use log of 64 to base two, that is nothing but six. So six bits are sufficient to represent all of the operations.

Okay? Okay.

So the value within that opcode, the value that you mention in those six bits, will tell you which instruction, which operation, you're trying to refer to.

Oh, okay. Got it.
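A quick C sketch of the rule just used (the operation counts are only examples): the number of opcode bits needed for n distinct operations is the smallest b with 2 to the power b at least n, i.e. the ceiling of log2(n).

    #include <stdio.h>

    /* Smallest number of bits b such that 2^b >= n_ops. */
    unsigned bits_needed(unsigned n_ops) {
        unsigned bits = 0;
        while ((1u << bits) < n_ops)
            bits++;
        return bits;
    }

    int main(void) {
        printf("2 operations  -> %u bit(s)\n", bits_needed(2));   /* 1 */
        printf("4 operations  -> %u bit(s)\n", bits_needed(4));   /* 2 */
        printf("64 operations -> %u bit(s)\n", bits_needed(64));  /* 6 */
        return 0;
    }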

Okay. So this instruction... yes, tell me, someone has a question?

Sir, sir, I just wanted to ask you: like, when we go for machine operations, or when we'll be studying the assembly language part, will we go in depth, to the level of how many machine cycles a particular piece of code which we have written will take, and how to optimize that? So will we be studying to that level, or will it be only up to the assembly language part?

We'll just try to do the assembly part, because, ideally, I should go to that level, but the only thing is the number of lecture hours, which is a total of 40, including the operating system, computer architecture, and networks. So I'll be mainly talking about computer architecture and the operating system; each one gets about thirteen hours of lectures. So I think I will not have enough time to go to that level, but I'll go to the level of assembly.

Okay? And one more thing: would you like to go into what new research is going on in this particular area, and what sort of things we can...

Yes, I will try to highlight whatever is going on in the computer architecture domain.

That is, give some idea about what sort of research is going on currently.

Yes, yes, yes, yes.

Currently we have only three or four operating systems, and what sort of new things are going on there, for example?

Correct, correct, correct. I'll try to bring up the latest research that is going on around this, so that you get to map whatever you are working on; maybe whatever we discuss you can correlate.

And, sir, at what level can we contribute something towards that research, or is it very high-end research that we cannot go for, like that?

So, as far as the target of this course is concerned, as far as the syllabus is concerned, I will try to make the fundamentals very clear, and also I will try to bring up the latest technologies going around in this computer architecture domain as well as the operating system domain. But of course, I will not have enough time to go into all these latest technologies in great detail; I have to restrict that part. But I can give a high-level overview of these latest technologies; those things you can understand a little bit more when you spend a lot more lectures on them. But as far as the course curriculum is concerned, I have to restrict myself to whatever is planned. Of course, I could go into the latest architectures completely, in detail; that's not a problem, but that itself takes one course.

Actually, sir, can you please repeat addressing mode?

Addressing mode, basically, tells you what the different ways are in which the same operation can be expressed. For example, you have this A and B, right? So, ADD A, B, C. So the thing is, some architectures may specify that A and B can be registers only; some of them can specify that A can be from the main memory and B can be from a register, or B can be an indirect memory reference. So the kinds of ways in which the programmer can express it, the different ways in which the instruction's operands can be represented,

that is specified using the addressing mode.
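A loose C analogy for those addressing modes (this is only an analogy in C, not actual machine syntax): an operand can be an immediate constant, a value taken directly from a variable, or a value reached indirectly through a pointer.

    #include <stdio.h>

    int main(void) {
        int a = 5, b = 7, c;
        int *pb = &b;           /* an address through which b can be reached */

        c = a + 10;             /* immediate operand: the constant 10        */
        c = a + b;              /* direct operand: the value in b itself     */
        c = a + *pb;            /* indirect operand: follow the pointer to b */

        printf("c = %d\n", c);
        return 0;
    }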

Hi, Professor,

I'm always confused between the x86 and x64 architectures. Could you explain, please?

Yeah, so they basically have different sets of instructions, different kinds of ways in which you can address and represent those instructions. So that's the main difference. Maybe, in terms of the manuals, if you go through them, it'll be laid out for you.

Thank you.
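One practical difference you can observe from C, assuming you build the same file once as 32-bit (x86) and once as 64-bit (x86-64): addresses grow from 32 to 64 bits, so the size of a pointer typically changes from 4 to 8 bytes.

    #include <stdio.h>

    int main(void) {
        /* Typically prints 4 on a 32-bit (x86) build and 8 on a 64-bit (x86-64) build. */
        printf("pointer size on this build: %zu bytes\n", sizeof(void *));
        return 0;
    }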

Uh,

Sir, I have a doubt regarding the next slide, in which you mentioned the three different kinds of memory in the memory hierarchy. So earlier you also, uh, elaborated on the speed of these memories.

Mm-hmm, correct.

These kinds of memories, like you said: registers are very fast, cache is somewhere in between, then main memory.

So, what makes the difference between the speeds of these different kinds of memories? Is it the type of material they're made of, or the technology which is used to make these different kinds of memories?

Yes, yes, yes: it is physically how they are actually made, what kind of technology is used, the material used.

Yeah, so could you please elaborate on, like, between any two kinds of memories, what kinds of things are used in making them? Like, is it semiconductors, or is it capacitors, or what exactly are these?

Yes, yes. Main memory units are actually made using capacitors also, and the registers are made fully using transistors. So we will have one detailed session on this, and then it'll be more clear how these different main memory units are actually made, how they're designed, and with what kind of technology.

Hello? Is this clear? Hello?

Yes, sir. Oh, yes. Yeah, yeah.

So we'll talk about this also in more detail.

Okay? So now we have understood that the computer can do some operations, and all it can understand is ones and zeros. And if someone has to use this machine, he has to write the tasks as a sequence of ones and zeros, in the way it can understand, because the machine can understand things only in some specific way. So whatever way it can understand, we have to write according to that, right? And to store all of this, you basically have to use the main memory. But now, let's try to understand: I mean, this is only the specification, right, the specification of the instructions, of the tasks you want to perform; but how does this actually get executed? How do these instructions get executed?

Maybe I would like to explain with some real-world examples, some of these examples, so that you can appreciate it. So, for example, let's say you have a long raw rod; for example, in a steel plant, you have big mills, right? So these big mills, let's say, have to do some computation, right? So the computation is this: you have very thick rods on the left-hand side, and then these thick rods have to go through a certain series of transformations; finally, you want the end output to be small, thin rods, right?

So this is the task we have to do. Say, for example, here, in our sense, basically, you have to do the addition instruction. So now, how does the instruction get executed? It's basically like asking how you transform from here to here, what the different steps involved in it are, and how things will go on; we'll try to understand that with this example, right?

So let's say this is the overall task we have to do. And of course, with all of this, you don't suddenly get the thinner material from the thicker material. So can someone give an intuition about how this can be performed?
A multiplication can be... no, no.

A continuous addition?

No, not in this example. In this example, basically... oh, sorry, the rods, basically.

Step by step, it'll keep on reducing the diameter, so that the length will be increased.

By heating, basically? Half of it happens by heating, right?

Yeah. I think it should be, you know, heated first. I mean, after heating, it'll be, you know, ductile enough to, you know, stretch it and, you know, make it a fixed diameter size.
Yes, yes. So that's the main thing, right? So you have different steps involved in it. So, first, it will not all happen in one go. The first thing is, you may have to check the billet, the thick rod; that's called the billet. So, first of all, you have to check whether the endpoints are uniform or not. Maybe it is not uniform and you have to cut it, right, so that you have a rod which is as per the specification. And then you have to heat the rod; without heating the rod, you cannot work it into the proper shape, you cannot thin it. So you have to heat the rod. And then, after heating the rod, you have to roll it so that it becomes thinner, right? So, roll it to make it thinner. Once the rod comes to the size which you're expecting, after making it thinner, then you basically cut it to length; only when it is heated, only at that time, can you cut it. And then, finally, once the shape and the length are according to your target, then, after you cut it, you have to cool it, so that it becomes the way you want it, right?

So there are different kinds of steps involved in each of these things. So each of these steps maybe will take some time, right? So, for example, let's say each of these steps takes one minute, right? So, cutting the rod at the endpoints to make it uniform: one minute; heating the rod: one minute; rolling it: one minute; cutting it again: one minute; and cooling also takes one minute. Each of the steps takes one minute, right?

So now, just let me ask you: okay, so in total, how much time is it taking for one rod?

Five minutes.

Five minutes, because it's the summation of all of them. Okay, so now let me ask you one more question. Let's say there are four rods; how much time should it take?

Twenty.

Twenty, that's correct. Now let me ask one more question: can you do better than 20?

Yes, yes, yes. If parallel processing is supported, we can do them simultaneously. So it all can be done in five minutes. Yeah. Yeah.

Parallel processing... no, only one is done at a time, only one; the second...

What if only one unit can do the heating of it? Let's say you have only one unit of this cutting device, the device which can cut is only one unit, and one unit for heating the rod.

Yeah, suppose so.

In that case, optimization of the devices may work: the first rod, after being cut, goes to get heated, and then you can put the second rod into the cutting point, into the first machine.

Okay. So that's correct. So now, according to that, can you just calculate how much time it'll take for four rods, with that logic?

Uh, five minutes. 5, 3, 8,

Okay, eight is one answer, but who said five minutes? Can someone try to explain? Someone has answered five minutes, so I just want to know the logic behind that.

The total process will be completed, right? Even if we work it from the start, the whole process gets completed once we...

I think it is nine minutes.

Nine minutes, yeah. I think if someone said five, that could also be correct if there is parallel processing running, but...

No, no, that's correct. But the thing is, I'm telling you that you have only one unit of device for each of these steps.

Sorry. No, no: suppose, let's say, there are five units of each of these devices, then you can use all five in parallel. But the thing is, I'm telling you there is only one unit.

Okay, then it'll be 20.

No, no, no, it is eight minutes. Final.

Final. Okay. Any other answer?

I'm not good enough at math, but what I can see is: once we finish cutting one rod, we can send it to the heating part, then put another rod into the cutting part, so they can work simultaneously.

Simultaneously, yeah. The way you came up with that, I appreciate it, but by that logic how long will it take?

It'll take five minutes.

Five minutes it would take, but only if you have an unlimited number of units.

Yeah, actually you have a thick rod and you want to draw it into thin rods now, right? Basically, one thick rod goes through all these five steps and converts into, say, four thin rods.

Yes. Each step takes one unit of time, so five units it would take for one rod. But I'm asking you: let's say there are four rods you have to do this for, right? If you assume you are doing everything in parallel, then you need an unlimited number of these devices. But I'm not giving you unlimited; you have only one device for each step.

It's very interesting. It's simple.

Each step will take only one minute, for any number of rods.

No, no, no. I'm saying that each unit can take only one rod at a time. It cannot take two rods at the same time.

Then it is simple for all the rods: it won't be 20 minutes, just one extra minute per rod. So five, then six, then seven, then eight minutes, eight in total.

Total, okay? So we have different answers, but one of the logics people have given is: first do the cut, then move the rod to the heating, and while the heating is going on, you can cut another one, right? This kind of technique is called the pipeline technique.

So this is how it goes on. Let's say I represent the task as five steps, 1 to 5, and the rods as S1, S2, S3, S4. You start with rod S1 first. For S1, the first step is cutting the rod. Once the cutting part is done, you can heat that rod, and in the second time step, while S1 is being heated, you can bring another rod, S2, and start cutting it; now S1 and S2 are both being worked on. For a given rod, though, the steps have to be done sequentially, right? You cannot cool it before heating it; it has to be heated, then rolled, one step at a time, in sequence, otherwise you cannot get it done correctly. But while the cutting of one rod is going on and the heating of another is going on, for the third one I can start the cutting, okay? So one rod can be in the rolling stage, one rod can be in the heating stage, and another rod can be in the cutting stage, right? Similarly you can keep this going for S1, S2, S3, S4, and S2, S3, S4 follow along behind. So you can see that in a total of five plus three, that is eight units of time, you're able to get it all done.
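To make that arithmetic concrete, here is a minimal C sketch (added here for illustration, not part of the lecture; the variable names are made up) that compares the sequential time with the pipelined time for n rods going through k one-minute stages:

    #include <stdio.h>

    int main(void) {
        int k = 5;   /* stages: cut ends, heat, roll, cut to length, cool */
        int n = 4;   /* number of rods                                     */
        int t = 1;   /* minutes per stage                                  */

        /* one rod at a time: every rod pays for all k stages */
        int sequential = n * k * t;            /* 4 * 5 * 1 = 20 minutes  */

        /* pipelined: the first rod takes k stages, and each later rod
           finishes one stage-time after the previous one */
        int pipelined = (k + (n - 1)) * t;     /* (5 + 3) * 1 = 8 minutes */

        printf("sequential: %d minutes\n", sequential);
        printf("pipelined : %d minutes\n", pipelined);
        return 0;
    }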

Yeah, we can relate it to space complexity

and time complexity, right?

Mm-hmm, something like that.

Time complexity basically tells you how much time you will be taking; space complexity tells you how much memory is required. But here we are not talking about memory; it's basically about how much time it will take.

So the thing I want to say is: while cutting is going on, the rolling unit is free. Instead of letting it sit idle, let's bring another rod to roll. And while cutting and rolling are going on, if the heating unit is free, then you can use the heating unit too, right? So the parallelism is across the different units at the same time; it's not that rolling can be done for all the rods at once. Rolling can be done for only one rod at any point in time: each device processes only one rod at a time, it cannot process two. Now, if you assume there are as many units as rods, then definitely you can distribute them and finish the four rods in five minutes. But I was telling you there is only one unit of each device, so you have to do it this way.

And this concept is more like queuing, you know: as soon as the unit is free, you execute the next one, right?

Yes. So the same idea applies here; this concept is known as the pipelining concept. You pipeline the things.

For example, you go to the visa center or maybe the passport center, right? You don't do everything at once. You go to the ticket counter, you get the ticket, and then you go to the next step. Maybe they do the document processing, and after the document processing they ask you to go to the specialist who verifies your documents and enters everything into the system. And after that, they give you the okay. So there are three different stages each applicant has to go through, right? If at each step, let's assume, only one person can do one processing at a time, then definitely you have to go sequentially and pipeline the work. It's not that there are infinite units at each and every step so that you can do everything in parallel; otherwise, you go in this pipeline. And the same thing happens in the case of instructions.

Is the intuition clear?

Yes, sir. Okay.

So basically, the same thing happens with the programs that you have written, right? Whatever programs we have are basically a sequence of instructions, and ultimately they get boiled down to machine instructions, zeros and ones. So you have a series of instructions, and these instructions have to be executed by the CPU; it's like several pieces of work that have to be done by the CPU. The same technique applies here also: instructions go in one at a time, there are several units in between, each unit can do a certain task, and while one thing is being done, the next one is made ready. You just keep on executing, right? So something very similar goes on with respect to the pipelining concept.

So in order to define this pipeline, you have to understand what stages are there within this pipeline architecture, right? Here, for the rod, we boiled it down to these five steps. In the same way, for the machine also, executing one complete instruction is not done in just one shot; there are different steps involved. So we can decompose it into steps, and these steps are called the five stages of the pipeline. The first stage is basically called the fetch phase, okay? Before you start executing, all the instructions are in the main memory. So the first phase is that you fetch the instruction: whatever instruction you want to execute, you fetch that particular instruction from memory. That is fundamental, right?

Uh, sir, I have a question. Are pipelining and scheduling different, or the same?

Different, different. The scheduling concept is different. In the pipeline, the entire task of executing an instruction is divided into steps, right? So the first step is basically to read the instruction from the main memory.

So once you have read this particular instruction, it's basically some ones and zeros. The machine has to understand what this sequence of ones and zeros means; even though the instruction is just zeros and ones, you have to know what the instruction actually is, right? That understanding of the instruction by the processor is done in the decode stage: you decode the instruction. Is it an add instruction, is it a subtract instruction, what operands are there, and where do you have to put the result? All of this is worked out in the decode stage. So you fetch the instruction and then you decode it.

Once you understand the meaning, let's say it is a subtraction instruction, then I have to do the subtraction; the actual execution is done in the third stage, which is called the execute stage. While executing the instruction, you may also sometimes need to read data from the main memory, or write to some memory location; that is another step, called the memory stage. And the final stage is that after you perform the computation, you have to write the result wherever it is supposed to go; that is called the write-back stage.

So the execution of an entire instruction does not happen all at once; it happens as a sequence of steps, and this sequence of steps is what is followed for execution in the pipeline architecture.
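A rough C sketch of the same idea (my own illustration, not from the slides): instruction i enters stage s in cycle i + s, so with four instructions and five stages the last one finishes in cycle 8, just like the four rods.

    #include <stdio.h>

    /* the five stages described above */
    enum { FETCH, DECODE, EXECUTE, MEMORY, WRITEBACK, NSTAGES };
    static const char *stage_name[NSTAGES] = {
        "Fetch", "Decode", "Execute", "Memory", "WriteBack"
    };

    int main(void) {
        int n = 4;  /* number of instructions flowing through the pipeline */
        for (int cycle = 0; cycle < n + NSTAGES - 1; cycle++) {
            printf("cycle %d:", cycle + 1);
            for (int i = 0; i < n; i++) {
                int s = cycle - i;       /* stage occupied by instruction i */
                if (s >= 0 && s < NSTAGES)
                    printf("  I%d->%s", i + 1, stage_name[s]);
            }
            printf("\n");
        }
        return 0;
    }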

Any doubt here?

So is it an old, traditional method, or are we still following the same pipeline architecture in the latest technologies?

It's still there. Even if you take the Intel Core processors, these pipeline stages are there. The only things that change are how fast each of these stages can be done and, if you have multi-core processing systems, how you can still get better performance: you try to optimize each stage so that you get better performance. That is basically the direction the research goes in. So the pipeline architecture is still there.

Okay, any other questions?

Okay, let's move on then.

So just a quick summary so far: we have tried to get an overview of computer organization, what different elements are there in a computer. We have also understood instructions: the different kinds of instructions that can be supported, and how you can represent them in a way the computer can understand. And we briefly understood the different stages in instruction execution. Before I proceed to the next topic, I want to give around a five-minute break. Right now it is 10:16 on my side, so we'll assemble at 10:21.

Is that okay? Okay.

Yeah. Thank you.

Thank you, sir.

Would everybody go on mute, please?

Yeah, yeah. Hello? I think someone is unmuted. Please mute yourself.

Could someone tell me, in the previous slide, what was the memory step for in pipelining?

Ah, which one? All these steps, you want to know?

No, only the fourth one, memory.

Okay. First was fetch, second was decoding, the third was execution, and then memory, and then write-back, where we write the result.

What was the memory step?

I couldn't note it down, and I don't remember. We'll ask sir when he comes back.

So please tell me what you remember. Write-back I remember: you have to write the result back. But memory is what I missed.

I missed it too, sorry. I think the memory step is for loading the results back into the memory, and then from memory it goes to the output where it has to be written.

Okay, okay. So to and from memory: first reading from memory, and this step is like the counterpart of it, writing it back to the memory.

Great, great. Okay. Thank you.

Thank you.

So is the session completed, or do we have a break?

Oh, it's a break till 10:21.

Sir, before we move forward, can you explain the memory step again? There were some doubts earlier, so just to clarify.

This one? The fourth step?

Yeah. Can you please elaborate on that?

So basically, at the time of... yes, can you hear me now?

Yeah, yes.

So at the time of executing the instruction: let's say you have to execute add A, B, C, that is, add A and B and put the result in C. The value of A can be found either in a register or in a main memory location, right? If it is in a register, it can be read directly; it doesn't take any extra cycles, because registers are fast, so if it is in the register it's straightforward. But if it is in main memory, it takes a lot of cycles to read it. That's why you have a dedicated step for the main memory read operation, or maybe a write operation. That step is done in the memory stage.
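A small illustrative fragment (my own sketch; the instruction names in the comment are generic RISC-style pseudo-assembly, not any particular ISA) of why this separate memory stage exists:

    int main(void) {
        int a = 2, b = 3, c;
        c = a + b;   /* the source-level statement being executed */
        /* One possible translation (illustrative only):
             LOAD  R1, a       ; memory stage: read a from main memory
             LOAD  R2, b       ; memory stage: read b
             ADD   R3, R1, R2  ; execute stage: operands already in registers
             STORE R3, c       ; memory stage: write the result back to memory
           If a and b were already held in registers, the LOADs and their
           extra memory-stage cycles would not be needed at all.            */
        return c;
    }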

So is it happening after the instruction execution, like...?

No, no. It is a dedicated stage of its own. But only when all these stages are done is the full instruction complete.

Sir, I have a doubt. If the pipeline architecture is still used even today, then what is the major difference between the AMD and Intel architectures? Because some applications are not compatible. I'm a bit confused there.

No, no. It's basically that they have their own specifications, like AMD and Intel each do. You can also come up with something different; you can design your own processor with a different instruction set architecture, and the way they want you to specify the instructions is mainly what differs between processor architectures. So when you say that you write an application and it may not execute on one but can execute on the other: basically, you have to compile it for the suitable architecture, and only then can it run. If you don't compile it for that architecture, it will not run the way it has to, because you have to represent it in a way that that machine can understand.

Mm-Hmm. Okay.

So now let's try to understand some more details at a higher level: how to talk to computers, right? This machine that we have understood so far can basically understand only ones and zeros. So if you have to perform some task, you have to write everything in ones and zeros, in the way it can understand. There are a lot of disadvantages to writing everything in ones and zeros. For example, it takes a lot of time to write everything that way, and it's quite inefficient. It can also lead to a lot of errors, right? If there is some small mistake, say instead of a one you have written a zero, first of all you don't even know whether there is an error, and it is difficult to trace; you cannot easily detect it. And even if you do detect it, you may get completely different results just because one bit changed, right? So it is not an efficient way of writing programs. The machine can do a lot of computation, but it understands everything only in ones and zeros, and that's not what we need; it's not sufficient from a programmer's point of view.

So then what do we have to do? Any idea?

We make a language: assembly language, for that. It uses mnemonics and is closer to the English that we use. So it acts as a middle layer between humans and the machine-level language.

Correct, correct. So assembly is one thing, and somebody also talked about something else: programming languages, correct? Writing at this low level is maybe good for the machine, but for users and programmers it's not convenient, right? We have to go to a higher level of abstraction. So a number of different programming languages have been developed, like C, C++, Java, and assembly is also one way, but assembly is at a slightly lower level compared to the higher-level programming languages we typically write in. So a number of programming languages have come into the picture so that the user can write in a better way, so that it's easy from the programmer's point of view to express the task, right? So now we have, on one side, a user who understands these higher-level languages, and on the other side the machine, which can only understand machine code, basically zeros and ones, right?

So now we have a problem, right?

So there is some something like a gap in between.

So how do you fix this gap?

So with the help of translators, we have interpreters


and compilers, okay?

Interpreter translates the program one sentence at a time.

So one by one, mm-Hmm. And compiler translates the

entire code in one go.

Okay? Okay, fine.

So now this is a job of done by the translators itself.

So, so the job of translators, basically,

there are two types of translator.

One is a compilers, whatever she described.

So basically you take the entire source program

and then you translate it so that, uh, you, uh,

get the program that is the, according to the machine,

whatever machine you are trying to want to execute it, uh,

the, in a, in a language that machine can understand

that it will try to translate.

So you have the program, which is the user can understand.

This compiler will try to translate it to the program

that the machine can understand.

So you have to tell which machine

you want to generate the code.

So it'll generate the code according to that.

So this compiler will try to do that.

So there is another thing that is called

as, uh, interpreters.

Interpreters, uh, will not just the com translate it,

but it also execute the program.

So it takes the source program, it'll take instruction

by instruction, and then it also, uh, not just translate it,

but also take the input, corresponding input,


and it executes the results also on that machine,

whatever you want to execute it.

So translate compilers will only generate the executable.

For example, you write in the Linux C program, if you try

to compile it, you get something called as a a,

which is an executable.

If you do it on windows, you try to compile it,

you get the EXC file that is an executable,

that is basically the target program.

And, uh, the interpreter will not just translate it,

it'll not generate intermediate,

but it'll directly execute it.

So can someone tell me examples of interpreters?

Basic shell programming.

Shell programming, yes.

Python. Has anybody used Python? Yeah, Python.

Python, that is also interpreted. And some examples of compilers?

Rust, C++.

All of those first produce an executable, and that executable you then have to run, right? But interpreters execute the program along the way, together with the translation.

Just a bit of compiler history, if you look at it: the first practical compiler was described as part of a PhD thesis in 1951, by Corrado Böhm. That was the first practical compiler, but the first commercial compiler was developed for FORTRAN by John W. Backus's team at IBM in 1957. It took almost 18 person-years to develop that commercial compiler, which could translate a program written in the language into machine code.

So to do the translation, you can have different compilers. But what are the good characteristics of a compiler? Let's say there are two compilers; how do you tell which compiler is good?

Compiles fast.

Compiles fast, yes. Anything else you expect from the compiler?

It should support the major architectures.

Architectures, yes. Platform independence, taking the input in as simple a form as we can use.

Okay, so I got some of the answers. Use less resources, that is also a very good point. And take less time to compile, yeah, correct. Execution: execution should also be fast, not just the compilation itself; the generated code should also be fast. And debugging and error detection.

Yeah, that is one of the most important things in compilers.


It may not matter as much for other kinds of tools, but for compilers you should generate messages such that the programmer can easily understand the error. For example, if you missed out a semicolon in a program, then the compiler has to clearly tell the programmer: at this line number, at this position, you missed a semicolon. If you can give the user error messages in a very friendly manner, then that is a very good quality.
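As a small hedged illustration (mine, not from the slides): in the C fragment below, if the marked semicolon were removed, a typical compiler would point at that line number and suggest the missing ';'; the exact wording differs from one compiler to another.

    int main(void) {
        /* If the next line were written as "int x = 1" without the ';',
           a compiler would normally report this line number with a message
           along the lines of "expected ';'", which is exactly the kind of
           friendly, precise error reporting being described here.          */
        int x = 1;
        return x;
    }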

So these are some of the important things we have to worry about when designing compilers. And the first and foremost, before all of this, is preserving the semantics, right? Suppose you are writing a program for sorting n numbers, and the compiled program does not actually sort but does something else; that is something you don't want, right? Whatever program you have translated, the translation has to be correct; it should preserve the semantics. If the source is doing an addition operation, the compiler should generate the program for the addition operation only; it should not generate the program for a subtraction operation. So first and foremost, whatever you have generated should be correct, right?

After that, it should take less time for compilation, and the generated code should also take less time, that is, fast execution. The third point is error handling, which is a key component of compiler design: you have to design it so that the error messages you throw can be easily understood by the user, who can then go and modify the source accordingly, right? And of course there are other things: it has to consume fewer resources, and you can also think about power; it should consume less power and take less memory to generate the code, et cetera. There are other desirable things as well, right?

So let me see how much time I have. Now, let me give you an overview of how compilers work, and then I will go into the next topic.

How does the compiler work? I'll tell you. Let's say you have this sentence: "I eat banana", right? Before you finally translate it, first of all you read the characters from left to right. This "I" is nothing but a subject, "eat" is nothing but a verb, and "banana" is something called an object, right? These things are called tokens. So you have a subject, then a verb, followed by an object. If you get a subject followed by a verb followed by an object, all of them in that sequence, then you say that this sentence is correct, the sequence is correct; otherwise you say that the sentence is wrong.

Let's say you write "I banana eat" instead, right? That is not a correct sentence according to the English language, because English grammar says you have to have a subject followed by a verb and then an object. Just as in English you define this grammar, in a programming language also, whatever language it is, C or C++ or anything else, you have to define the grammar. And then you check whether the particular program you are given satisfies that grammar you have written down. If it is written according to the grammar, then you say there is no error in the program; otherwise, you say there is an error within the program. That's how you can throw the error messages, right?
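A toy C sketch of this grammar check (my own illustration; the word list and token names are invented): a sentence is accepted only when the tokens arrive in subject-verb-object order, so "I eat banana" passes and "I banana eat" fails.

    #include <stdio.h>
    #include <string.h>

    typedef enum { SUBJECT, VERB, OBJECT, UNKNOWN } Token;

    /* toy lexical step: map a word to its token category */
    static Token classify(const char *word) {
        if (strcmp(word, "I") == 0)      return SUBJECT;
        if (strcmp(word, "eat") == 0)    return VERB;
        if (strcmp(word, "banana") == 0) return OBJECT;
        return UNKNOWN;                  /* misspelt word: lexical error */
    }

    /* toy syntax step: the grammar is  sentence -> SUBJECT VERB OBJECT */
    static int sentence_ok(const char *w1, const char *w2, const char *w3) {
        return classify(w1) == SUBJECT &&
               classify(w2) == VERB    &&
               classify(w3) == OBJECT;
    }

    int main(void) {
        printf("%d\n", sentence_ok("I", "eat", "banana"));  /* 1: accepted      */
        printf("%d\n", sentence_ok("I", "banana", "eat"));  /* 0: grammar error */
        return 0;
    }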

So there are various steps also in the design

of compilers itself.

I will not go into the details

because that itself is another complete subject

and it'll take around 30, 40 hours to cover this thing.

So I will just give an overview of what basically goes on in the compiler, the different steps only.

There are different steps; you don't do the entire translation in one go. The goal of the compiler is to take the source program and produce the target program, and you don't do everything at once; you do it in certain steps. The first step is called lexical analysis. It reads the program character by character, checks whether each word in it is valid, and gives each word a corresponding token. For example, you say that "I" is the subject; then, once you read the next word, you say it is a verb. Suppose you write "banana", B-A-N-A-N-A, and the spelling is incorrect; right there you can say, okay, this word is not correct, and you can throw an error. If the word is correct, then you categorize what this word is, whether it's a subject, a verb, or an object, whatever it is. So you read character by character, recognize the words, and label each word. That is done in the lexical analyzer phase.

The next phase is the syntax analyzer. It tells you whether this sequence of tokens, the subject, verb, and object you have generated, is in the correct order. If there is any mismatch in the sequence, it tells you that this is not according to the specified grammar, and you throw the corresponding error. That is done by the syntax analyzer.

The next phase is called the semantic analyzer, which basically deals with the meaning of the sentence. Let's say you have to translate this "I eat banana" into some other language, maybe Hindi or some other local language: first you have to understand its meaning, and only then can you translate it, right? That is done in the semantic analyzer phase, together with the intermediate code generation phase. These first four phases are called the front end of the compilation process.

Once this is done, the next part is the back end. In the back end, you translate into the final language you want to generate; for example, you translate this sentence into the target language. You have checked that the words are correct and the sentence is correct, and you have understood its meaning; then you work out how to generate it in the other language, right? That goes into the back-end part of the compiler. There are further steps there, like the machine-independent optimizer, the code optimizer, and finally code generation; these phases generate the code corresponding to the target machine.

In our context, we are trying to generate, from a higher-level language such as C, the final machine language, right? The front end does all kinds of checking and tries to understand the program; the back end then tries to generate the program in a way that the machine can understand, basically the final language of ones and zeros.

Also. Sorry,

yes. Also known as analysis

and synthesis phases, right? Yes, yes,

Yes, yes, yes.

I have a question here. Understanding the meaning of the sentence, "I ate banana" for example: is that specific to the compiler?

So, if you are building a translator from English into some other language, the meaning stays the same, but the way you represent it in the target language can be different, right? Let me put it in the context of our subject. Say you write C = A + B; that is what the programmer writes, let's say in the C language. But internally, what does this mean? It means: do the addition of two variables, right? You have to understand that meaning, and then you have to represent the same thing in the way the machine can understand. The machine has a specific format: let's say first you have to specify "add", and then you have to specify A, then B, and then C, right? So once you understand the meaning of this, then you can write it in the way the other machine can understand.

Yep. Got it. Okay.

So before we go to the next topic, any questions? Okay. So all of this is done by the compiler. Basically, you are given one program which the user can understand, in a higher-level language such as C, C++, or Java, and the job of the compiler is to generate the code in the way the machine can understand it, whatever machine it is, x86 or AMD or anything. You specify: I want to generate code for this target machine, and here is my program; the compiler will then generate it. And there are various compilers available for the various programming languages, okay?

So we have understood so far how to talk to computers. Basically, you don't talk in ones and zeros; you write in a way you can understand, and then the translator does the translation so that the machine can perform the task. So far we have understood only how we specify a program and how it gets executed; that's the high-level abstraction we have, right? But this is only with respect to one program. In general it's not like that: you have multiple programs running at the same time. Somebody may open a browser, somebody opens text editors, plays games, video songs, et cetera. You have different kinds of programs that you want to be running at the same time, right? So then how do you do this? That is the next topic, and it is handled using the operating system. Before we proceed, any questions so far?

Sir, I have a doubt regarding the last slide. Can you please go back one step, one more step? The lifecycle there: are these steps standardized for all kinds of compilers, or only the more recent ones?

Most compilers, even the latest compilers, also follow these particular steps.

Okay, so these are the standard process for all compilers?

Yes.

So do we need to understand all these phases in detail, or...?

No, no, because this is not part of this course. It's just to give a high-level abstraction.

One question, sir. Does it depend on different localizations as well? Because operating systems are also built with different localizations. So in every case, are these steps all the same?

Yeah. For any kind of translation, these are the typical steps. Even the latest compilers follow these kinds of steps; it has become fairly standardized.

Because operating systems are also available in, say, German or French, and they have defined different ways of presenting things, like Microsoft does for the user interface. So I'm not sure whether this is the same in every case.

You mean to say the OS is designed for their language?

Yes, the OS is also built in German, in French, and so on, like Microsoft does for the user interface. So even the programs written for that language...

So the programmer writes in a way that they can understand, and the interface can also be written that way. But ultimately it all has to be translated to the machine language; the interface is also a program, and that also gets translated. So that also comes into the picture in the same way.

So today we have the concept of generative AI and other AI concepts; do those concepts also work along the same lines?

No, no. The technology there is different, I'll tell you. Here, the main intention is that you have to get everything exactly right: if it is an add, it has to do an add; there is no room for deviation, it has to be a hundred percent correct. But generative AI and those language translators, Google Translate and so on, can have inaccuracies. Here we cannot have any inaccuracy; it has to be a hundred percent correct. Generative AI doesn't use this technology. Of course, I'm not saying one is good and the other is bad, but the objective here is that we want everything a hundred percent correct, whereas with generative AI and those technologies the output can also be wrong; even at 95 percent, some tolerance of errors is acceptable for those machine learning models, those transformers.

I mean to say: is the basic concept behind them a bit common, or not?

No, they're different. Generative AI and machine learning technologies learn in a different way, so that technology is different. What I described is how compilers typically work; they, on the other hand, use machine learning models, neural network models, so the technology is different, and there the hundred-percent target is not there. Of course they want to achieve it, but if they don't, it can be tolerated; here you cannot tolerate errors. For example, tell me: if you do an addition on a calculator and it gives a wrong result, will you accept it? No, right? But when you're translating, say with Google Translate, you type in a sentence, and even if it's not fully correct you may still accept it. Here we don't accept it. So the technology is completely different.

Thank you, sir. Thank you.

Okay. Any other questions?

Sir, in this whole translation process we are seeing, there are a lot of steps: analyzing, then generating the code, then optimizing. For single instructions or small programs it might not be very significant, but when we are processing huge chunks of data or large programs, the compilation cost matters. So is there a way to optimize the whole process in a better way?
Yes, yes. There, there are different steps, right?

There are seven steps. So first thing is called the, the,

in this 50 step is called the machine

independent code optimizer.

So that is basically the steps.

We'll try to do a lot of optimizations itself.

And then you also have the last step that is

for the machine dependent optimizer.

There itself, you do a lot of optimization so that if a, uh,

newer processor is coming,

then if it's better than the existing one,

maybe you can further optimize it

or maybe the code

that you have written itself can you generate optimal code,

uh, so that you get a better, uh, translated code.

So that is also done in the part of, as a part

of compiler design itself.

Okay. It's just not that the translation,

it also optimize it code also.

And if suppose the user wants to, uh,

maybe generate a better optimized code, uh,


I mean from the user point of view,

maybe they has create a better program.

But if the, the, uh, if the thing has

to be done automatically, then it has to have,

you should have a knowledge of compilers,

whereas you can optimize it, et cetera.

So, sir, some of these steps, like lexical analysis and syntax analysis, can be done while writing the code or the program as well, using various IDEs. There are technologies like IntelliSense which let you know, while writing the code, whether the syntax is right or not, rather than waiting until execution. So can some of those layers be moved there? Is it possible to make it more efficient that way?

Yes, yes. What you're saying is that before you even go for the compilation, at the time of writing itself, whatever analysis the tool is doing can give you hints that something is not correct. So you can offload this work to before the compilation; that is the point of the latest tools.

Will it improve the efficiency of the whole compilation process?

Yes, definitely, because you are trying to detect and resolve the errors beforehand. And if your program does have an error, the syntax analyzer itself can catch it right there; you don't need to go all the way to the machine-independent optimizer. At the syntax analyzer itself, you know it does not conform to the syntax and you can throw the error there. But of course, as you said, the tools can now be smarter, so that some of these checks are done at the tool level itself before you even start the compilation.
Okay. So any other questions?

Okay, so now let me go to the next topic. So far we talked about only one program: how it is translated, how the program is turned into zeros and ones, and how it can be executed. But now the setting is different: you have multiple programs, and you have to execute all of them.
So in this context, yes, somebody was talking.

Yeah. So what I understood is that, as of now, the compilers and the latest technologies are also built on top of the older, traditional base, right? But we are really looking at quantum technology, and at the AI that you're referring to. If those come up, do you think all these languages, which sit on top of a base from the 1970s and 80s, applying zeros and ones, all of that traditional method will go away?

Actually, what is happening is that even in compiler design, people are asking how you can use this AI technology to make the compiler even better; research is going on in this direction. For example, when you throw error messages: instead of just saying there is a semicolon error, can you give some hints, can you offer the semicolon fix as an option right at the interface? These are the kinds of AI technologies being added. The standard steps are still there, but you make things very easy for the programmer, right at the front-end level. While writing the program, at the editor level itself, can you generate hints for the programmer? You can use AI technology to make it better.

Okay. So the AI layer will help; it will help the compiler not be burdened as much by grammatical mistakes, right?

The job of the compiler is to make the translation easy and efficient, and to let the programmer write easily as well. For all these things you can use AI technologies, and of course a lot of research is going on in this direction also.

So it means that the core working will not go away?

It'll not go, it'll not go.

As for the architecture: basically the zeros-and-ones execution is there and will stay, unless, let's say, some revolution comes along where you can express things not only in zeros and ones but in multiple states; then you would have a different way of computing. Quantum computing is said to be based on the spin of the electron, basically which way it is spinning; if some physical property changes at that level, things will change there too. Now, about architecture and machine learning: people are working on that as well. Designing an architecture is also not straightforward; how can you incorporate machine learning technologies into the design of the architecture itself, so you can make the units faster? Because there are a lot of complexities when you have to design a chip, right? First of all, your logic should be correct, the physical layer has to be correct, everything; there are many other complexities. So people are also asking how to use machine learning technologies at the time of architecture design, so that architectures are designed in a faster and more efficient manner, things like that.

Yeah. So in those terms, it's not completely replaceable; this is assistive technology. Even quantum computing, with its probabilities over zeros and ones, even if it comes up in the near future, will that not override the current compiler technology?

As of now, I don't foresee it, but of course we never know.

I see. What I am actually concerned about is that there are a lot of applications built on these compilers, layered down into C, C++, Java and so on. If something like quantum computing comes up, will all of this go?

No. Basically, you would be using a quantum compiler; it would generate programs that the quantum machine itself can understand.

Then compatibility issues will come into play.

But the thing is, the methods of translation are not going to be lost. They will still be there; only the target changes, that is, which machine you want to generate code for. If it's a quantum machine you want to generate for, then the code will be in a form that the quantum machine can understand.
Oh, okay. Okay. And now I got it. Yeah, thank you.

And of course, whatever new technologies come up, the fundamentals are going to keep assisting in all these ways. Nowadays you have, let's say, high-performance supercomputers, and you have GPUs. People are doing research there too: a GPU has a different kind of architecture compared to the standard Intel and AMD architectures, so how do you generate programs corresponding to this new architecture? These things are there, and the research keeps going on; everybody keeps working, because everybody also wants to survive. That's life.
Yeah, that's true. Okay,

so now you have different programs. I will try to take only five minutes, and if something is left over, maybe I'll continue next class. You have multiple programs; you compile each of these programs and generate the executables. For example, if you are on Windows, you have several executables, and you click on an executable and the program starts, right? So it's not that you have only one program; you have multiple programs on one side, but on the other side you have the hardware resources: CPUs, main memory, storage, input and output devices. You have limited resources, right? A certain program may require reading data; at some point it can be in the stage of reading input, something else may be displaying output, something else may be running on the processor. So you will have different programs, and you want them all to be running. So basically you have to manage all these resources, right?

For a single program it was clear, but now you have multiple programs and a fixed set of hardware, and there should be someone managing all these resources, right? That is the job of the operating system. The translation has been done, the programs are ready for execution, and how each program executes is also clear. But there are many shared resources: if two programs want the same resource, how do you handle it? Suppose two programs want to use the same keyboard, or the same display; how do you do that? You have multiple programs, and you want these resources to be used efficiently, right? That is handled by the operating system: it knows the various resources and decides which portion has to be used by what, and at what time, et cetera.

For example, take a supermarket: you have stock coming in, and there are some people who organize it: in which place should I put which items, if something runs out, which stock should I bring out, should I arrange it in a systematic manner, and so on. These are managerial decisions, right? The operating system is like such a manager: it manages all the hardware resources and the demands coming from the different programs, and works out how to utilize everything better so that you get better efficiency out of the computer organization that has been designed. That is basically the goal of the operating system.

It sits in between the user and the computer hardware, the whole computer hardware and whatever programs you have written, and it makes sure that you get efficient resource utilization across all of these, right? It executes the programs, makes solving the user's problem easier, and uses the computer hardware efficiently. So the goal of the operating system is to allocate the resources, monitor the activities, manage the files, et cetera.

For example, you write some C program that deals with files; but who actually allocates the file? The disk has hundreds of GB, but where exactly do you put the file, where do you read the file from, how is it stored, sequentially or not, and how do you update the file, right? How is all of this done? It is all basically done by the operating system; it manages all of that.
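A minimal C sketch of that division of labour (my own illustration, not from the lecture): the program only names the file and writes to it; where those bytes actually land on the disk, and how free space is found, is decided by the operating system's file management.

    #include <stdio.h>

    int main(void) {
        /* the program asks for a file by name; the OS decides where on the
           storage device the data is actually placed and how it is laid out */
        FILE *f = fopen("notes.txt", "w");
        if (f == NULL)
            return 1;          /* the OS may refuse, e.g. no permission or space */
        fputs("hello, file management\n", f);
        fclose(f);             /* the OS makes sure the data reaches the disk */
        return 0;
    }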

And, for example, if two programs have to execute and both require main memory, and suppose there is not sufficient memory, how do you resolve that, right? All these things are handled completely by the operating system itself.

So the goal of the operating system is to do all this management, and there are different kinds of management to be done. Management has to be done at the process level; a process is nothing but a program that is being executed. You also have to manage memory: you have the main memory and also storage devices, and when two different programs require certain memory, how do you make sure there is no collision, that it is accessed systematically, and that it stays fast? That is done by main memory management. You also have file management: how to store files, how to retrieve files, how to modify files, what the storage organization is; all of that is done by file management. And then you also have something called synchronization, which deals with the situation where two processes have to access the same memory location, the same variable: how do you ensure that the programs are still correct, right? All of that is handled under synchronization.

For example, two people are trying to book the same ticket, right? Two people are trying to book the same train and want to reserve the same seat. Then how do you synchronize that? All of this is also handled at the operating system level. So this is basically the job of the operating system: to do all the management of all the resources, that is, process management, memory management, file management, synchronization. All these things go on behind the operating system.
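A minimal sketch of that seat-booking situation in C (my own illustration using POSIX threads; the names are invented): two threads try to reserve the same seat, and a mutex, the kind of primitive the operating system provides for synchronization, ensures only one of them succeeds.

    #include <stdio.h>
    #include <pthread.h>

    static int seat_free = 1;   /* the single seat both passengers want */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *book(void *name) {
        pthread_mutex_lock(&lock);        /* only one booking proceeds at a time */
        if (seat_free) {
            seat_free = 0;
            printf("%s got the seat\n", (char *)name);
        } else {
            printf("%s: seat already taken\n", (char *)name);
        }
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, book, "passenger 1");
        pthread_create(&t2, NULL, book, "passenger 2");
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }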

So I think for maybe two or three minutes I'll talk about course logistics, and then I'll wind up. These are some of the standard textbooks for this course: the first one is for computer organization and the second one is for operating systems. Compilers are not in the syllabus, so I'll not talk about that, but these two are part of the syllabus, so you can refer to these textbooks.

Alright, course material: I'll provide it, and you can access it from the LMS; I think you have that platform. I'll also keep sending the lecture notes and references, et cetera.

Sir, whatever you send on the LMS, and the notes there: are they enough for our course and for the exam?

Yes, it should be enough, but if certain things are not clear... it's not possible that the notes alone can answer all of your questions.

No, sir. Actually, the thing is that we are already working; it's 11 to 12 hours in the office. So the syllabus slides, whatever is there: if we can go through those, then that should be good enough for this syllabus. Thank you.

So you have assessments like assignments, mid-terms, and end-terms; I think those details are already shared with you, so it will go according to that.

So if you have any questions, uh,


you can also drop me an email.

Of course I take during the live sessions also,

but if there is still something that is not clear,

you can also send me an email

and, uh, my email ID is, uh,

so that's all I have for today.

Uh, thank you for your patience for the long session

and uh, if you have any questions,

I can just stop by for one or two minutes.

Thank you, sir. Thank you very much. Sir.

Thank you, sir. Thank you so much.

Thank you so much, sir. Thank you, sir.

Okay. Thank you. Thank you. Thank you, sir.

Bye. Uh, sir, I have one question: in which month would these mid-terms and end-terms happen?

Yeah, I think I'm not the right person for that; the coordinator can tell you all the details, because I only teach the lessons, and once the schedule is up, I'll prepare the questions. Okay, I hope I was able to give an overview of the entire computer system. Then that's all from my side. Bye.

Thank you, sir. Bye. Thank you. Thank, thank

Bye.

Thank you, sir. Thank you.
