
TEJAS VOLUME 1 EDITION 1

2022

AIML
DEPARTMENT
EXPLORING A NEW WORLD
Artificial Intelligence and Machine
Learning Department

Vision

“To become a department of international relevance in the field of Artificial Intelligence and Machine Learning.”

Mission

"To nurture students with sound engineering


knowledge in the field of AIML through effective
use of modern tools with a focus on imbibing
professionalism, leadership qualities, ethical
attitude, lifelong learning and social sensitivity."
Program Outcomes (POs)

PO 1: ENGINEERING KNOWLEDGE: Apply knowledge of mathematics, science, engineering fundamentals and an engineering specialization to the solution of complex engineering problems.

PO 2: PROBLEM ANALYSIS: Identify, formulate, research literature and analyze complex engineering problems, reaching substantiated conclusions using first principles of mathematics, natural sciences and engineering sciences.

PO 3: DESIGN / DEVELOPMENT OF SOLUTIONS: Design solutions for complex engineering problems and design system components or processes that meet specified needs with appropriate consideration for public health and safety, and cultural, societal and environmental considerations.

PO 4: CONDUCT INVESTIGATIONS OF COMPLEX PROBLEMS: Use research-based knowledge and research methods, including design of experiments, analysis and interpretation of data, and synthesis of information, to provide valid conclusions.
Program Outcomes (POs)

PO 5: MODERN TOOL USAGE: Create, select and apply appropriate techniques, resources and modern engineering and IT tools, including prediction and modelling, to complex engineering activities with an understanding of the limitations.

PO 6: THE ENGINEER AND SOCIETY: Apply reasoning informed by contextual knowledge to assess societal, health, safety, legal and cultural issues and the consequent responsibilities relevant to professional engineering practice.

PO 7: ENVIRONMENT AND SUSTAINABILITY: Understand the impact of professional engineering solutions in societal and environmental contexts and demonstrate knowledge of and need for sustainable development.

PO 8: ETHICS: Apply ethical principles and commit to professional ethics and responsibilities and norms of engineering practice.

PO 9: INDIVIDUAL AND TEAM WORK: Function effectively as an individual, and as a member or leader in diverse teams and in multidisciplinary settings.

PO 10: COMMUNICATION: Communicate effectively on complex engineering activities with the engineering community and with society at large, such as being able to comprehend and write effective reports and design documentation, make effective presentations, and give and receive clear instructions.

PO 11: LIFE-LONG LEARNING: Recognize the need for, and have the preparation and ability to engage in, independent and lifelong learning in the broadest context of technological change.

PO 12: PROJECT MANAGEMENT & FINANCE: Demonstrate knowledge and understanding of engineering and management principles and apply these to one's own work, as a member and leader in a team, to manage projects in multidisciplinary environments.
Programme Educational
Objectives (PEOs)

PEO 1: Ability to contribute to problem identification, analysis, design, and development of systems using principles and concepts of Artificial Intelligence and Machine Learning.

PEO 2: Ability to apply the concepts, principles and practices of Artificial Intelligence and Machine Learning and critically evaluate the results with proper arguments, selection of tools and techniques when subjected to loosely defined scenarios.

PEO 3: Use Artificial Intelligence and Machine Learning models on data for enabling better decision making.
CONTENTS
1 Messages

2 FACULTY ARTICLES

3 STUDENT ARTICLES

4 INTERVIEW

5 ACHIEVEMENTS

6 ACKNOWLEDGEMENT
MESSAGES
Dr. Megharani Patil
Incharge Head of
Department - AI&ML

Education is the ability to inculcate discipline, build trust and enhance the
growth of individuals at various levels. With hard work and punctiliousness
laced with knowledge and interaction, one can achieve the great success
that one desires. The vision of our magazine is to impart quality education
in all core disciplines of knowledge by focusing on the empowerment of our
students with overall development.

With our first AI&ML department magazine, TEJAS serves as a wonderful


platform for students to turn their ideas and research into knowledge. The enthusiastic contribution of students in the form of articles not only boosts their linguistic, semantic and technical expertise, but also provides readers with beguiling and interesting information.

The topics covered in the magazine not only cover the various domains
being studied but serve as a beacon of inspiration for students to aim for
greater heights. Thus, through TEJAS, we have tried to inculcate the value of lifelong learning and to make our own little contribution towards the betterment of our society. We hope that the readers grasp all that we wish to convey through this issue and acknowledge the hard work behind it.

Lastly, we would like to congratulate and thank the committee, the students, and the faculty for their exemplary contribution, valuable time and
effort.
Mrs. Shilpa Mathur
Faculty Incharge

It gives me immense pleasure to present the very first issue of our technical
magazine “TEJAS”. The technical magazine is one of the best platforms for our
students to put forward innovative ideas. This magazine intends to bring out
the creativity and flamboyance of the minds of the students at TCET.
It is the talent and outcome of our students which is reflected through this. In
its very first edition, we have tried to keep it informative, inspiring, and fun.
I applaud all the students for making our department one of its kind and
unique by participating in different national and state-level Hackathons,
Coding Competitions, and many other activities.
I heartily congratulate all the editorial members and faculty members for
helping and working together to publish this magazine. Thank you all for your
precious time and noteworthy efforts.
We all wish that TEJAS stands tall for all the future editions to come.
Mr Anand A. Maha
Faculty Incharge

First of all, I would like to congratulate the editorial team for their effort in
bringing out the first issue of departmental technical magazine Tejas.
Tejas is a cloud of information which provides an opportunity to the
students and staff to express their original thoughts on technical topics.
The magazine plays an instrumental role in providing a technical platform
to the students to express their innovative ideas in Artificial Intelligence and Machine Learning. It also instils a professional attitude, leadership qualities, and ethical and social sensitivity among students.
This first issue of Tejas has come up with topics like technical papers which
is the first step towards research and development. On a concluding note,
I would like to wish you all the best for more such initiatives and future
endeavors.
TEAM

Kushal Singh (TT, Chief Editor)
Yoshit Verma (TT, Art Designer)
Mangesh Pal (ST, Designer)
Afzal Asar (ST, Editor)
Bipin Madheshiya (ST, Designer & Editor)
FACULTY
ARTICLE
Model Selection in Machine Learning

The central issue in all of Machine Learning is "how do we extrapolate what has been learnt from a finite amount of data to all possible inputs 'of the same kind'?" We build models from some training data. However, the training data is always finite. On the other hand, the model is expected to have learnt 'enough' about the entire domain from where the data points can possibly come. Clearly, in almost all realistic scenarios the domain is infinitely large. How do we ensure our model is as good as we think it is, based on its performance on the training data, even when we apply it to the infinitely many data points that the model has never 'seen' (been trained on)?

A predictive model has to be as simple as possible, but no simpler. Often referred to as Occam's Razor, this is not just a convenience but a fundamental tenet of all of machine learning.

To measure simplicity, we often use its complementary notion — the complexity of a model. The more complex the model, the less simple it is. There is no universal definition for the complexity of a model used in machine learning. However, here are a few typical ways of looking at the complexity of a model:

• Number of parameters required to specify the model completely. For example, in a simple linear regression for the response attribute y on the explanatory attributes x1, x2, x3, the model y = ax1 + bx2 is 'simpler' than the model y = ax1 + bx2 + cx3 — the latter requires 3 parameters compared to the 2 required for the first model.
• The degree of the function, if it is a polynomial. Considering regression again, the model y = ax1² + bx2³ would be a more complex model because it is a polynomial of degree 3.
• Size of the best possible representation of the model, for instance the number of bits in a binary encoding of the model. The more complex (messy, with too many bits of precision, large numbers, etc.) the coefficients in the model, the more complex it is. For example, the expression (0.552984567·x² + 932.4710001276) could be considered more 'complex' than, say, (2x + 3x² + 1), even though the latter has more terms in it.
• The depth or size of a decision tree.

Intuitively, the more complex the model, the more 'assumptions' it entails. Occam's Razor is therefore a simple thumb rule — given two models that show similar 'performance' on the finite training or test data, we should pick the one that makes fewer assumptions about the data that is yet to be seen. That essentially means we need to pick the 'simpler' of the two models.
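To make this concrete, here is a minimal sketch (an editorial illustration, not part of the original article) that contrasts a simple and a needlessly complex model on the same data using NumPy; the data set and the polynomial degrees chosen are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a noisy linear relationship (illustrative assumption).
x = np.linspace(0, 1, 30)
y = 2.0 * x + 1.0 + rng.normal(scale=0.1, size=x.size)

x_train, y_train = x[::2], y[::2]      # half for training
x_test, y_test = x[1::2], y[1::2]      # half held out

for degree in (1, 10):                 # simple vs. needlessly complex model
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: train MSE={train_err:.4f}, test MSE={test_err:.4f}")
```

With comparable training error, the thumb rule above favours the degree-1 fit; the degree-10 fit usually shows the larger train/test gap that the following discussion attributes to high variance.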
In general, among the 'best performing' models on the available data, we pick the one that makes the fewest assumptions, equivalently the simplest among them. There is a rather deep relationship between the complexity of a model and its usefulness in a learning context. We elaborate on this relationship below.

• Simpler models are usually more 'generic' and are more widely applicable (are generalizable). Explanation: one who understands a few basic principles of a subject (a simple model) well is better equipped to solve a new, unfamiliar problem than someone who has memorized an entire 'guidebook' with a number of solved examples (a complex model). The latter student may be able to solve any problem extremely quickly as long as it looks similar to one of the solved problems in the guidebook. However, given a new, unfamiliar problem that doesn't fall neatly into any of the 'templates' in the guidebook, the second student would be harder pressed to solve it than the one who understands the basic concepts well and is able to work his/her way up from first principles. A model that is able to accurately 'predict' …

• Simpler models require fewer training samples for effective training than the more complex ones and are consequently easier to train. In machine learning jargon, the sample complexity is lower for simpler models.

• Simpler models are more robust — they are not as sensitive to the specifics of the training data set as their more complex counterparts are. Clearly, we are learning a 'concept' using a model and not really the training data itself. So ideally the model must be immune to the specifics of the training data provided and rather somehow pick out the essential characteristics of the phenomenon that are invariant across any training data set for the problem. So it is generally better for a model not to be too sensitive to the specifics of the data set on which it has been trained. Complex models tend to change wildly with changes in the training data set. Again, in machine learning jargon, simple models have low variance and high bias, and complex models have low bias and high variance. Here 'variance' refers to the variance in the model and 'bias' is the deviation from the expected, ideal behaviour. This phenomenon is often referred to as the bias-variance tradeoff.

• Simpler models make more errors on the training set — that's the price one pays for greater predictability. Complex models lead to overfitting — they work very well for the training samples but fail miserably when applied to other test samples.

The validity of Occam's razor has long been debated. Critics of the principle argue that it prioritizes simplicity over accuracy and that, since one cannot absolutely define "simplicity," it cannot serve as a sure basis of comparison.

-Mrs Shilpa Mathur
Assistant Professor-AIML
STUDENTS
ARTICLES
VIRTUAL ASSISTANT USING AI


INTRODUCTION

A virtual assistant is a technology based on artificial intelligence. The software uses a device's microphone to receive voice requests, while the voice output takes place at the speaker. But the most exciting thing happens between these two actions. It is a combination of several different technologies: voice recognition, voice analysis and language processing. Voice recognition is a complex process using advanced concepts like neural networks and machine learning. The auditory input is processed and a neural network with vectors for every letter and syllable is created. This is often called the data set. When an individual speaks, the device compares the speech to these vectors and pulls out the syllables with which it has the highest correspondence.

2. WORKING

The working of a virtual assistant uses the following components (a small illustrative sketch follows after the Future Trends list below):

a) Natural Language Processing: Natural Language Processing (NLP) refers to the AI method of communicating with an intelligent system using a natural language such as English. NLP is required when you want an intelligent system like a robot to perform as per your instructions, when you want to hear a decision from a dialogue-based clinical expert system, and so on. The five steps in Natural Language Processing are:
• Lexical Analysis
• Syntactic Analysis
• Semantic Analysis
• Discourse Integration
• Pragmatic Analysis

b) Automatic Speech Recognition: To recognise the command from the user's spoken input.

c) Artificial Intelligence: To learn about the user and to store information about the user's behaviour and relations. This is the power of a system to calculate, reason, perceive relationships and analogies, learn from experience, store and retrieve information from memory, solve problems, comprehend complex ideas, use natural language fluently, classify, generalize, and adapt to new situations.

d) Inter-Process Communication: To get important information from other software applications.

3. ADVANTAGES
• A VA makes your life easier
• A VA saves you time and money
• A VA gives you more free time for your personal life
• A VA can be an expert in the field

4. APPLICATIONS
• Alexa
• Siri
• Google Assistant
• Cortana
• Bixby

5. FUTURE TRENDS
• Voice bots are going mainstream.
• AI-based bots will get more human-like.
• Deep customer insights will empower virtual assistant behaviour.
• Messaging platforms will act as a growth driver for virtual assistants.
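As a toy illustration of the command-matching flow sketched in Section 2, here is a minimal, hypothetical example; the keyword rules and handler names are invented for illustration and are not part of any real assistant.

```python
# Toy keyword-based assistant: maps recognised text to actions.
# Purely illustrative; real assistants use full NLP pipelines.
import datetime

def play_music(_):
    return "Playing your playlist."

def tell_time(_):
    return f"It is {datetime.datetime.now():%H:%M}."

# Rule table: keyword -> handler (hypothetical rules).
COMMANDS = {
    "play": play_music,
    "time": tell_time,
}

def handle(utterance: str) -> str:
    """Pretend this string came from a speech-recognition step."""
    text = utterance.lower()
    for keyword, action in COMMANDS.items():
        if keyword in text:
            return action(text)
    return "Sorry, I did not understand that."

print(handle("Assistant, what time is it?"))
print(handle("Please play some music"))
```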

6. REAL WORLD EXAMPLE

Alexa

Alexa, Amazon's virtual assistant, is built into the Amazon Echo line of smart speakers. You'll also find it on some third-party speakers from brands like Sony. You can ask the Echo questions like, "Alexa, what's the star cast of the movie 'Sholay'?" You can also ask it to play a song, make a call, or control your smart home devices. It has a feature called "multi-room music," which allows you to play the same tunes on each of your Echo speakers.
Alexa recognizes a couple of wake words, including "Alexa," "Amazon," "Computer," "Echo," and "Ziggy."
You can also configure the Amazon Echo with third-party apps, so you can use it to call an Uber, pull up a recipe, or lead you through a workout.

-Vaibhav Yadav
TT AIML
STUDENT

1. Introduction

Self-driving cars are cars which can drive on the road without any human interaction. This is done with the help of sensors, cameras, radar and artificial intelligence. The sensors and radar give the condition of the surroundings and accordingly the steering moves. There are numerous things which the car detects while driving, such as traffic lights, road lanes, people walking on the road, people crossing the road, etc.

The Society of Automotive Engineers (SAE) has defined 6 levels of automation in cars. At Level 0 the car is completely manual and there is no automation in the vehicle. At Level 1, one or two functions are automated. At Level 2, the car performs the driving by controlling the brakes and steering, but it needs supervision by a human being. At Level 3, the car analyses the environment and can perform many tasks like parking, but a human still has to supervise. At Level 4, cars can perform, under specific circumstances, any task that a normal car does manually; geofencing is needed and humans can override the commands. At the last level, Level 5, cars can perform any task without human monitoring and interaction.

Working

AI technologies power self-driving car systems. Developers of self-driving cars use vast quantities of data from image recognition systems, along with machine learning and neural networks, to make systems that can drive autonomously.

The neural networks identify patterns in the data, which is fed to the machine learning algorithms. That data includes images from cameras on self-driving cars, from which the neural network learns to identify traffic lights, trees, kerbs, pedestrians, road signs and other parts of any given driving environment. For example, Google's self-driving car project, called Waymo, uses a mix of sensors, lidar (light detection and ranging, a technology analogous to radar) and cameras, and combines all of the data those systems generate to identify everything around the vehicle and predict what those objects might do next. This happens in fractions of a second. Maturity is important for these systems: the more the system drives, the more data it can incorporate into its deep learning algorithms, enabling it to make more nuanced driving choices.

The following outlines how Google Waymo vehicles work:
* The driver (or passenger) sets a destination. The car's software calculates a route.
* A rotating, roof-mounted lidar sensor monitors a 60-metre range around the car and creates a dynamic three-dimensional (3D) map of the car's current environment.
* A sensor on the left rear wheel observes sideways movement to detect the car's position relative to the 3D map.
* Radar systems in the front and rear bumpers calculate distances to obstacles.
* AI software in the car is connected to all the sensors and collects input from Google Street View and video cameras inside the car.
* The AI simulates human perceptual and decision-making processes using deep learning and controls actions in driver control systems, such as steering and brakes.
* The car's software consults Google Maps for advance notice of things like landmarks, traffic signs and lights.
* An override function is available to enable a human to take control of the vehicle.

Advantages
* Reduces the cost of transportation
* Creates a new stream of job opportunities
* Offers greater mobility independence for people with disabilities
* Eco-friendly

Applications

Autonomous trucks and vans: Companies such as Otto and Starsky Robotics have concentrated on autonomous trucks. Automation of trucks is important, not only due to the added safety aspects of these very heavy vehicles, but also due to the possibility of energy savings through platooning. Autonomous vans are being developed for use by online grocers such as Ocado. Research has also indicated that goods distribution at the macro level (urban distribution) and micro level (last-mile delivery) could be made more efficient with the use of autonomous vehicles, thanks to the possibility of smaller vehicle sizes.

Transport systems: In Europe, cities in Belgium, France, Italy and the UK are planning to operate transport systems for automated cars, and Germany, the Netherlands, and Spain have allowed public testing in traffic. In 2015, the UK launched public trials of the LUTZ Pathfinder automated pod in Milton Keynes. Beginning in summer 2015, the French government allowed PSA Peugeot-Citroen to run trials in real conditions in the Paris area. The trials were planned to be extended to other cities such as Bordeaux and Strasbourg by 2016. The alliance between the French companies THALES and Valeo (provider of the first self-parking car system that equips Audi and Mercedes premium cars) is testing its own system. New Zealand is planning to use automated vehicles for public transport in Tauranga and Christchurch.

Example

Tesla Autopilot is a full suite of driver-assistance features that range from lane keeping to self-parking. This system requires minimal driver intervention. Features such as those found in this system are likely to be present in future self-driving vehicles.

Roborace is an autonomous race car that made its debut between 2016 and 2018. These vehicles operate in a competitive environment. Still, their teams' capabilities in developing artificial intelligence and real-time algorithms show promise for these cars' capabilities as a whole.

-Sarvesh Kumar Yadav
TT AIML
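Purely as an editorial illustration of the sense-plan-act cycle behind the Waymo outline above (and not actual Waymo code), here is a toy Python loop; the sensor reading, thresholds and commands are all invented for the example.

```python
import random

def read_sensors():
    """Stand-in for lidar/radar/camera fusion: returns distance to the
    nearest obstacle in metres (randomised here for illustration)."""
    return {"obstacle_distance_m": random.uniform(0, 60)}

def plan(perception):
    """Toy planner: brake hard when close, slow down when near, else cruise."""
    d = perception["obstacle_distance_m"]
    if d < 10:
        return {"throttle": 0.0, "brake": 1.0}
    if d < 25:
        return {"throttle": 0.2, "brake": 0.3}
    return {"throttle": 0.6, "brake": 0.0}

def act(command):
    print(f"throttle={command['throttle']:.1f}, brake={command['brake']:.1f}")

# A few iterations of the sense -> plan -> act loop.
for _ in range(5):
    act(plan(read_sensors()))
```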
STUDENT

Introduction

Artificial intelligence (AI) and similar technologies are becoming more and more common in business and society, and they are starting to be used in healthcare too. Many facets of patient care could be changed by this technology, as well as internal administrative procedures at payer, provider, and pharmaceutical organizations. In the future, AI will be used more and more in the healthcare industry as a result of the complexity and growth of data in the sector. Payers, care providers, and life sciences organizations currently use a variety of AI technologies. The main application categories include recommendations for diagnosis and treatment, patient engagement and adherence, and administrative tasks. Despite the fact that there are many situations in which AI can execute healthcare duties just as well as or better than humans, implementation issues will prevent the widespread automation of healthcare professional positions for a substantial amount of time.

Types of AI used in the healthcare sector

1. Machine Learning - Neural Networks and Deep Learning: The neural network is a more advanced type of machine learning. This technology, which has been around since the 1960s and has been widely employed in medical research for several decades, is used for categorization applications like predicting whether a patient will contract a specific disease. It approaches issues in terms of variables, weights, or "features," that link inputs and outputs. It has been compared to how neurons interpret signals; however, the comparison to how the brain works is not very strong.

2. Natural Language Processing: The generation, comprehension, and classification of clinical documentation and published research are the primary applications of NLP in the field of healthcare. NLP systems are able to conduct conversational AI, create reports (for example, on radiological examinations), analyze unstructured clinical notes on patients, and record patient interactions.

3. Physical Robots: Surgical robots give surgeons "superpowers," enhancing their vision and their capacity to make precise, minimally invasive incisions, close wounds, and perform other surgical procedures. However, important choices are still made by human surgeons. Gynecologic surgery, prostate surgery, and head and neck surgery are among the common surgical procedures performed with robotic surgery.

4. Robotic Process Automation: These systems are employed in the healthcare industry for routine duties like billing, prior authorization, and patient record updates. They can be used to extract data from, say, faxed images and feed it into transactional systems when paired with other technologies like image recognition.

5. Rule-Based Expert Systems: In the 1980s and succeeding decades, expert systems built on databases of "if-then" rules dominated the field of artificial intelligence. Over the past two decades, they have been extensively used in the healthcare industry for "clinical decision support" purposes, and they are still frequently used today. Today, a lot of suppliers of electronic health records (EHRs) provide a set of guidelines with their systems.

Current Trends in Healthcare

Current use of AI and Machine Learning shows a future of possibilities. Today, a number of significant businesses and start-ups, such as Enlitic, MedAware, and Google, have started large-scale projects aimed at advancing AI and ML and integrating them into the healthcare domain, for example Google's DeepMind Health project and IBM's Avicenna software. Additionally, the Cleveland Clinic and Atrius Health are working together with IBM's Watson Health to integrate cognitive computing into their healthcare systems, which experts anticipate will lead to a decrease in physician burnout. Recently, k-nearest neighbours, naive and semi-naive Bayes, lookahead feature building, backpropagation neural networks, and other ML methods have been evaluated and developed.

Future of AI in healthcare

The biggest hurdle for AI in various healthcare sectors is not determining whether the technologies will be capable enough to be beneficial, but rather guaranteeing their acceptance in routine clinical practice. In order for AI systems to be widely adopted, they must be accepted by regulators, integrated with EHR systems, sufficiently standardised so that similar products function in a similar fashion, and taught to physicians.

Conclusion

The future of machine learning lies in complementing human experience and knowledge with machine learning technologies in order to maximise decision-making for patients with serious injuries.

-Samhita K.R.
TT AIML
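As an editorial aside to the rule-based expert systems described above, here is a toy "if-then" clinical-decision-support sketch; the thresholds, field names and advice strings are invented for illustration and are not medical guidance.

```python
# Toy "if-then" rule engine in the spirit of clinical decision support.
# Thresholds and messages are illustrative only, not medical advice.

def evaluate(patient: dict) -> list[str]:
    alerts = []
    if patient.get("temperature_c", 0) >= 38.0:
        alerts.append("Fever detected: consider further assessment.")
    if patient.get("systolic_bp", 120) < 90:
        alerts.append("Low blood pressure: flag for clinician review.")
    if "penicillin" in patient.get("allergies", []) and \
       patient.get("prescription") == "amoxicillin":
        alerts.append("Allergy conflict: amoxicillin vs. penicillin allergy.")
    return alerts

example = {
    "temperature_c": 38.6,
    "systolic_bp": 85,
    "allergies": ["penicillin"],
    "prescription": "amoxicillin",
}
for alert in evaluate(example):
    print(alert)
```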
STUDENT

DEEP LEARNING

Introduction

Deep learning is a sub-field of machine learning dealing with algorithms inspired by the structure and function of the brain, called artificial neural networks. In other words, it mirrors the functioning of our brains. Deep learning algorithms are structured similarly to the nervous system, where each neuron is connected to the others and passes information.
Deep learning models work in layers, and a typical model has at least three layers. Each layer accepts information from the previous one and passes it on to the next.
Deep learning models tend to keep improving with larger amounts of data, whereas older machine learning models stop improving after a saturation point.

How does Deep Learning work?

• Deep Learning uses a Neural Network to imitate animal intelligence.
• There are three types of layers of neurons in a neural network: the Input Layer, the Hidden Layer(s), and the Output Layer.
• Connections between neurons are associated with a weight, dictating the importance of the input value.
• Neurons apply an Activation Function to the data to "standardize" the output coming out of the neuron.
• To train a Neural Network, you need a large data set.
• Iterating through the data set and comparing the outputs will produce a Cost Function, indicating how far off the AI is from the real outputs.
• After every iteration through the data set, the weights between neurons are adjusted using Gradient Descent to reduce the cost function.

Advantages

Cost Effectiveness: While training deep learning models can be cost-intensive, once trained, they can help businesses cut down on unnecessary expenditure.

Advanced Analytics: Deep learning, when applied to data science, can offer better and more effective processing models. Its ability to learn unsupervised drives continuous improvement in accuracy and outcomes.

Scalability: Deep learning is highly scalable due to its ability to process massive amounts of data and perform a lot of computations in a cost- and time-effective manner.

Self-Learning Capabilities: The multiple layers in deep neural networks allow models to become more efficient at learning complex features and performing more intensive computational tasks.

APPLICATIONS

Self-driving Cars
Sentiment Analysis
Virtual Assistant
Social Media
Healthcare
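As a hedged illustration of the weight, cost-function and gradient-descent loop described above, here is a minimal NumPy sketch that trains a single "neuron" (one weight, no hidden layers); the data, learning rate and iteration count are arbitrary choices for the example.

```python
import numpy as np

# Toy data: learn y = 2x with a single weight (no bias, no hidden layer).
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 4.0, 6.0, 8.0])

w = 0.0            # initial weight
lr = 0.01          # learning rate (arbitrary for the example)

for step in range(200):
    y_pred = w * x                        # forward pass
    cost = np.mean((y_pred - y) ** 2)     # cost function (mean squared error)
    grad = np.mean(2 * (y_pred - y) * x)  # gradient of the cost w.r.t. w
    w -= lr * grad                        # gradient descent update
    if step % 50 == 0:
        print(f"step {step:3d}: w={w:.3f}, cost={cost:.4f}")

print("learned weight:", round(w, 3))  # should approach 2.0
```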

Real-World Applications and Examples

AGRICULTURE: Agriculture will remain a key


source of food production in the coming
years, so people have found ways to make the
process more efficient with deep learning and
AI tools. In fact, a 2021 Forbes article
revealed that the agriculture industry is
expected to invest $4 billion in AI solutions by
2026. Farmers have already found various
uses for the technology, wielding AI to detect
intrusive wild animals, forecast crop yields
and power self-driving machinery.
Blue River Technology has explored the
possibilities of self-driven farm products by
combining machine learning, computer vision
and robotics.

-Aman Gupta
TT AIML
STUDENT

Computer Vision in Sports


For developing a sportsman's efficiency

In today's era, Artificial Intelligence and Deep Learning have been part of the sports field for not less than five years. Computer Vision is making its way into various industries and now even into sports. It is used to enhance the broadcasting experience and to help any sport or club be more competitive and achieve success. The sports industry has substantially increased its adoption of new technology in a very short time. In major sports it is observed that a sportsperson moves so fast that it becomes difficult for the coach or the analyst to track the details. The data insights obtained from this footage require manual input and numerous hours spent manually noting details over multiple replays. Computer vision can play a major role in filling these gaps between sport and analytical insight by giving valuable and accurate analysis via automated systems which locate segments and follow them throughout the footage.

In a sport, footage can be acquired through various cameras installed at specific positions where the event is held (i.e. goal post, midline, boundaries, etc.). The positioning of a camera, its angle and the hardware requirements may vary from sport to sport and event to event. With the help of this footage, the precise position of a player can be detected, and its movement and direction can be recorded, which can be difficult for a normal person to capture and track. Computer vision has partially solved these limitations with its application of image processing and the differentiation of ground, players and objects.

Player Tracking:

It is one of the key aims of applying computer vision in sports, i.e. to keep track of players at a particular moment. It allows coaches to instantly analyse the performance of their team or of any individual player moving on the field, or to track the formation of the team. An automated segmentation technique is used in the application of computer vision in sports to pinpoint the regions that correspond to players. The result obtained from the computer vision system is expanded by applying ML and data mining techniques to the raw player tracking data. Semantic information can be generated once the information is gathered from the video frame, in order to create context on what action the player is performing (i.e. pass, run, dribble, defend, etc.). These techniques are labelled as semantic events. Such data can be used for analysis of a player or team. Suggestions can be constructed based on the optimal position of the player on the pitch, the kicking angle of the ball, the accuracy of the ball hitting the best spot, etc. These statistics can be displayed to the coach, who can compare them and make the desired changes which will improve the efficiency of the team or a specific player. This technology of tracking players, analysing and suggesting has the potential to revolutionise training and improve the efficiency of a team.

The above example is not limited to one specific sport, football; instead it can be applied to various other sports like Basketball, Badminton, Tennis, Table Tennis, Cricket, etc.

A great example of using computer vision can be related to a major tournament in sports: Wimbledon 2017, the tennis tournament partnered by IBM. In the tournament, highlights were created by automating the key moments in the match using data gathered from the audience and the player. A small device designed by Grégoire Gentil called in and out in a tennis match; it gathered data on the speed and placement of a shot, which determined whether the ball was out of bounds. In FIFA, a 7-camera computer vision system was developed by Hawk-Eye which used a goal detection system with multiple-view high-speed cameras. These cameras covered each goal area and detected moving objects by sorting objects which resembled a playing ball based on their area, colour and shape, with an accuracy error rate of 1.5 cm and a detection speed of 1 s. It helped the referees take decisions on whether the ball had crossed the goal line.

Challenges of Computer Vision:

Even though Computer Vision has so many plus points, there are still critical areas that need to be overcome before it can be fully exploited in sports and the field of analysis. A major challenge encountered is that optical tracking systems cannot yet cope with the varying body posture of a human while exercising. Tracking a player is quite challenging due to the swift motion, the similar appearance of players in team sports and the close interaction between players. Nevertheless, in the field of AI and Computer Vision, rapid advancements in computing power, growing datasets and newly developed techniques are getting closer to human capabilities. Eventually these advancements will overcome the challenges and make their way into sports as teams aim to leverage modern technologies to improve their performance and become more competitive. It goes without doubt that the expansion of Computer Vision will transform the key areas of performance analysis in sport.

-Hardik Chemburkar
TT AIML
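As an editorial footnote to the player-tracking discussion above, here is a minimal OpenCV sketch of the kind of automated segmentation it describes (assuming OpenCV is installed and a local file named match.mp4 exists; both are illustrative assumptions, not part of the original article).

```python
import cv2

# Background subtraction as a crude stand-in for player segmentation.
cap = cv2.VideoCapture("match.mp4")          # hypothetical input video
subtractor = cv2.createBackgroundSubtractorMOG2()

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)           # moving regions (players, ball)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 500:         # ignore small noise blobs
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) == 27:                 # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```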
STUDENT

Ambient Intelligence

Introduction

With the help of ambient intelligence (AmI), which is a developing field, our everyday environments become intelligent and user-sensitive. It stands for a network of covert, intelligent interfaces that can detect users' presence and change their surroundings to suit our current needs. AmI environments could be extremely varied, such as your house, vehicle, workplace, or even a museum you're visiting. These environments contain AmI systems that take in data, communicate with users, engage in complex reasoning, and direct actions on the environment. Information is collected through sensing, either manually by humans using their senses or automatically by machines like ultrasonic devices, cameras, and microphones. Human decisions and actions, as well as those of artificial intelligence systems like robots and agents, are used to affect these environments.

Evolution

AI was initially implemented in hardware, such as the SNARC (Stochastic Neural Analog Reinforcement Computer) developed by Dean Edmonds and Marvin Minsky. One of the technologies used in such systems was neural networks. The Mycin expert system is a good illustration of AI's second phase, when AI was computer-focused. The Authorizer's Assistant by American Express served as a ground-breaking application during the third phase, which was networks-focused. Several search engines and recommender systems using intelligent agents and, more recently, ontologies were developed during the 1990s Web boom.

Correlation with AI

In AmI environments and scenarios, AI methods and techniques can help accomplish the following important tasks required for an ergonomic environment.

1. Speech Recognition:
An electric signal is obtained from a microphone for speech recognition. Signal processing and pattern recognition are used in the first step, which is to identify the phonemes in this signal. The following step involves connecting phonemes and locating words. Depending on how the user speaks, different speech recognition systems are available and can be more or less successful.

2. Natural Language Processing (NLP):
The output of a speech recognition system, input from a keyboard, or even a written document is referred to as natural language input. The goal of natural language processing is to comprehend this data. Semantic analysis comes after syntax analysis as the next step. In NLP, knowledge representation is significant. One of the most researched areas of NLP, using statistical and knowledge-based methods, is automatic translation systems.
3. Computer Vision:
Humans' most sophisticated sensory input is vision. Therefore, the capacity to automate vision is crucial. In essence, computer vision is a problem of geometric reasoning. The field of computer vision covers a wide range of topics, including image acquisition, processing, object recognition in two and three dimensions, scene analysis, and image flow analysis. In AmI, computer vision can be applied to a variety of scenarios. It can be used, for instance, by intelligent transportation systems to detect traffic issues, traffic patterns, or approaching vehicles. Computer vision can also recognise human gestures for controlling machinery or facial expressions for reading emotions.

Future Scope

Artificial intelligence is a necessary ingredient for achieving ambient intelligence. The AI community's next exciting challenge is presented by AmI environments. Machine learning is used frequently these days, so AmI will probably need to manage this technology as well. Learning through user observation is one of AmI's requirements. Many systems can understand user commands, but they lack the intelligence to refrain from acting in ways that the user would not prefer. AmI systems will be able to learn from users by using fundamental machine learning techniques, which will increase users' acceptance of these systems.

References

F. Boekhorst, "Ambient intelligence, the next paradigm for consumer electronics: how will it affect silicon?," 2002 IEEE International Solid-State Circuits Conference, Digest of Technical Papers (Cat. No.02CH37315), 2002, pp. 28-31, vol. 1, doi: 10.1109/ISSCC.2002.992922.

Gams, Matjaz et al., "Artificial Intelligence and Ambient Intelligence," 1 Jan. 2019: 71-86.

P. Remagnino and G. L. Foresti, "Ambient Intelligence: A New Multidisciplinary Paradigm," in IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans, vol. 35, no. 1, pp. 1-6, Jan. 2005, doi: 10.1109/TSMCA.2004.838456.

Wikipedia contributors, "Ambient intelligence," Wikipedia, The Free Encyclopedia, 29 Sep. 2022. Web. 17 Oct. 2022.

-Shreyash Salunke
TT AIML
STUDENT

Agri-TECH

Introduction

Agriculture is an essential part of human life. It's how we produce the food that we eat, and it's been a cornerstone of civilization for thousands of years. But as our population has grown, so too has the demand for food, leading to a need for more efficient ways to produce it. Thankfully, over the years, agricultural technology has evolved to meet this challenge, and today there are many devices and machines which help farmers do their job more efficiently and with fewer resources. In this article, we'll explore seven such advancements in agricultural technology.

1. Machines for Harvesting Crops

One of the most important aspects of agriculture is automation. With the help of machines, it's possible to produce more food with fewer resources. One example of this is the many types of devices that are used for harvesting crops. There are machines that can pick fruits and vegetables, as well as grain harvesters which can quickly harvest large fields of wheat or corn. This helps farmers to reduce the amount of time and manpower needed to harvest their crops, which in turn reduces both costs and environmental impact.

2. Devices for Irrigation

Another important aspect of agricultural technology is irrigation. To maximize crop yield, it's essential to ensure that plants have enough water. This can be a challenge in areas where rainfall is scarce or unpredictable. However, there are now many types of devices that can help with irrigation. From simple hand-held pumps to large-scale systems that can cover entire fields, these devices make it possible to provide crops with the water they need when they need it.

3. Soil Preparation Devices

Another important part of agriculture is soil preparation. Before crops can be planted, the soil needs to be properly prepared. This can involve a number of different tasks, such as tilling, plowing, and applying fertilizer. In the past, all of these tasks were done by hand or with simple tools. However, there are now many types of machines that can automate these tasks, making it possible to prepare large areas of land in a short amount of time.

4. Machines for Planting Crops

Planting crops is another essential part of agriculture. In the past, this was a task which was done by hand: seeds were simply planted into the ground one at a time. However, there are now many types of machines that can plant crops much more quickly and efficiently. From small hand-held devices to large tractor-mounted machines, these devices can plant hundreds or even thousands of seeds in a single day, reducing both the time and labor needed to get crops in the ground.
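Before moving on to crop monitoring, here is a toy sketch tying the irrigation devices above to the sensor-driven decisions described in the sections that follow; the zone names, readings and threshold are invented for illustration.

```python
# Toy precision-irrigation rule: water only the zones whose soil
# moisture falls below a threshold. All numbers are illustrative.
MOISTURE_THRESHOLD = 0.30   # volumetric water content (fraction)

# Pretend these readings came from in-field soil sensors.
zone_readings = {
    "north_field": 0.42,
    "south_field": 0.22,
    "east_field": 0.28,
}

def irrigation_plan(readings: dict, threshold: float) -> list[str]:
    """Return the zones that should be irrigated this cycle."""
    return [zone for zone, moisture in readings.items() if moisture < threshold]

for zone in irrigation_plan(zone_readings, MOISTURE_THRESHOLD):
    print(f"Irrigate {zone}: below {MOISTURE_THRESHOLD:.0%} soil moisture")
```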
5. Machines for Crop Monitoring

Finally, agricultural technology can also be used for crop monitoring. In order to produce a high-quality product, farmers need to be able to track the condition of their crops throughout the growing season. This can involve tasks such as measuring soil moisture levels, checking for pests or diseases, and assessing the nutritional needs of plants. There are now many types of machines that can automate these tasks, making it possible to monitor large areas of land in a short amount of time.
For instance, drones in agriculture are becoming increasingly popular. Drones can be used to quickly and easily assess the condition of crops over a large area, providing farmers with valuable information that can be used to improve yield and quality.

6. Precision Agriculture

Precision agriculture is a relatively new field that uses technology to help farmers optimize their production. This involves using sensors and data analytics to track the condition of crops, soil, and water in order to make better decisions about when and where to apply fertilizer, pesticide, or irrigation. By using precision agriculture techniques, farmers can reduce inputs while maintaining or even increasing yields.

7. The Internet of Things

The internet of things is a term that refers to the way in which everyday objects are being connected to the internet. This includes everything from home appliances to cars and even livestock. In agriculture, the internet of things is being used to connect various devices, such as irrigation systems, soil preparation machines, and crop monitoring sensors, to create a network that can be used to optimize agricultural production. By using this technology, farmers can get real-time information about the status of their crops, allowing them to make better decisions about how to manage their land and resources.

Conclusion

In conclusion, agricultural technology has come a long way over the years. With the help of automation and precision agriculture techniques, it's now possible for farmers to produce more food with fewer resources. In the years to come, we can expect to see even more advancements in this field as farmers continue to find new and innovative ways to increase their production.

-Jwala Chourasiya
TT AIML
STUDENTS

QUANTUM COMPUTERS & THEIR APPLICATIONS

Quantum computing has bits, just like any


computer. But instead of ones and zeros,
its quantum bits, or qubits, can represent a
one, a zero, or both at once, which is a
phenomenon known as superposition.
However, the superposition that occurs in
a quantum computer is very different than
any conventional computer – it allows two
or more qubits to behave in a coordinated
way that cannot be explained by
supposing each is doing its own thing. This
is called entanglement, and it’s what gives
a quantum computer its uncanny power. Quantum thinking will allow us to re-imagine computing so that it can solve some problems too hard for conventional computers, and do things with information no one thought possible.

Since the 20th century, the term "quantum computers" has really been a buzzword. Why? It was found that quantum theory applies not only to atoms and molecules but also to bits and logic operations in a computer. So how do these QCs (Quantum Computers) help us? There are problems that even the most powerful classical computers are unable to solve because of their scale or complexity. Quantum computers may be uniquely suited to solve some of these problems because of their inherently quantum properties.

Last year at Multicon PPC, in my technical paper, I talked about some of the fundamentals of quantum mechanics involved within a QC, some of the terminologies specific to a QC, the difference between classical computers and quantum computers, the different algorithms that will be used in quantum computers, and the effect quantum computers will have on blockchain technology. Let's dive deeper into QCs.

Applications of Quantum Computers

There is an immense range of applications of QCs that will help us solve real-world problems. These include:
❖ Artificial Intelligence
❖ Drug Development
❖ Financial Modelling
❖ Complex Manufacturing
❖ Weather forecasting and climate change

Lastly, I would also like to express my special gratitude to the Department of AI&ML for giving me the opportunity to express myself and my research in my technical paper.
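As a small, hedged illustration of the superposition and entanglement described above, here is a sketch using Qiskit (assuming the qiskit package is installed); it only builds and prints a two-qubit Bell-state circuit, so no simulator is needed.

```python
from qiskit import QuantumCircuit

# Two qubits: put the first in superposition, then entangle them.
qc = QuantumCircuit(2, 2)
qc.h(0)          # Hadamard gate: qubit 0 becomes an equal superposition of 0 and 1
qc.cx(0, 1)      # CNOT: entangles qubit 1 with qubit 0 (Bell state)
qc.measure([0, 1], [0, 1])  # measurements should give '00' or '11' together

print(qc.draw())
```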

-SANSKAR MISHRA
ST AI&ML
STUDENTS

Vyommitra: The AI Space Robot!

With the ethos of "the sky is not the limit", India's premier space agency, the Indian Space Research Organisation (ISRO), is gearing up for the country's maiden human space-flight mission, 'Gaganyaan'. Sharing details about the landmark endeavour, Union Minister of State for the Ministry of Science and Technology Jitendra Singh recently said that the first trial of the human space mission will be done by the end of 2023 or at the beginning of 2024.

What is worth mentioning here is that the Minister spoke about 'Vyommitra', a half-humanoid being developed by ISRO, which will eventually fly to space after the first flight. It aims to lay the ground for ISRO's manned mission.

"The first trial for Gaganyaan will be done by the end of 2023 or the beginning of 2024. The first test flight will be followed by sending a female-looking humanoid robot – 'Vyommitra'. We have to ensure that the final Gaganyaan flight should be normal and uneventful," Union Minister of State Jitendra Singh informed.

"Next year one or two human beings of Indian origin will go to space. The preparations for our Gaganyaan have been done. Before that, two trials will be conducted by the end of this year. The first trial will be empty and in the second a female robot (astronaut) will be sent, whose name is Vyommitra," Jitendra Singh added.

The aim of the Gaganyaan mission is to demonstrate the capability of sending humans to Low Earth Orbit (LEO) in the short term, which will lay the foundation for a sustained Indian human space exploration programme in the long run.

ISRO's Vyommitra (vyoma means space, mitra means friend) is called a half-humanoid since she has a head, two hands and a torso, and doesn't have lower limbs. Coming to the functioning of Vyommitra, she can switch panel operations, perform ECLSS (environment control and life support systems) functions, act as a companion, and converse with the astronauts by recognising them and responding to their queries.

The human spaceflight programme has both tangible and intangible benefits for the nation, which include progress towards a sustained and affordable human and robotic programme to explore the solar system and beyond.

Last month, in August, the Indian Space Agency achieved a major milestone in the Ga…

-Anusha Yadav
ST AI&ML
STUDENTS

TECHNOLOGY IN HEALTHCARE: EARLY CARE

The healthcare industry is an emerging field of digital innovation and evolution. The increasing adoption of digital technology has the potential to improve the healthcare partner and patient experience and also to renovate fully safe care delivery. While these advancements have created efficiencies which benefit both patient and health provider, the methods in which these technologies are employed can increase the scale of their impact.

Patients' journeys through the healthcare system vary widely; thus, there is no single approach to improving the experience. Patients access the health system for different types of medical care. They have different communication preferences based on digital literacy and/or access to digital devices. While many patients who live on their computers day to day enjoy the speed and convenience of digital medicine, a large portion of patients find these tech-enabled interactions to be cold and impersonal.

Despite the inherent challenges associated with introducing digital technology into current care operations, communicating the "why" can help with change management. For example, creating an experience that includes robotic process automation and artificial intelligence offers benefits like expedited completion of documentation, simplified content confirmation and streamlined bill payment.

Digital technology can also improve the provider experience and patient care. Simplified interactions through recording, automated patient outreach, and the use of artificial intelligence to detect disease more quickly in diagnostic testing are just a few tools helping to lighten the load and prevent burnout of our healthcare professionals.

COVID-19 created an optimal case study that illustrated the opportunity technology brought to light. Organizations saw automation, chatbots, conversational AI and application programming interfaces (APIs) leveraged on a coordinated, large scale for testing, vaccination, and communication processes. COVID-19 needed massive scalability with limited human resources, which only technology could sustain across the world. Learnings were derived from each organization's approach and adaptability, as well as automated prescription renewals, to name a few. These technologies have a twofold benefit of making the experience simpler for the patient, and more cost-effective for the health system, and they were needed as demand ebbed and flowed. One of the greatest examples of this principle was the finding that virtual visits between patients and providers can be an effective and efficient way to deliver health care.

Artificial intelligence is set to revolutionize the healthcare industry completely. It has the ability to mine medical records, and AI algorithms can design treatment plans, develop drugs quicker than any current doctor, and distinguish cancerous and non-cancerous tissue samples. Virtual reality is changing the lives of patients and physicians alike. Looking into the future, you could travel to Spain or home while you are in a hospital bed, or you may watch operations as if you were holding the scalpel. Pain management is one area that has benefited from virtual reality. For example, during labor pain, women are being equipped with VR headsets that allow them to visualize a soothing landscape. Patients diagnosed with cardiac, neurological, gastrointestinal, and post-surgical pain have shown a decrease in their pain levels when using VR as a stimulus.

In order to understand the advantages of digital technology in care, the design approach is crucial. It is necessary to engage patients and providers regularly throughout the process. Understanding crucial aspects of our patient and provider needs can lead to successful solutions. For patients, it is imperative to understand social health as well as physical health. Access to devices, broadband and digital literacy can all impact the experience. It will be necessary to design digital methods with these factors in mind. Not including these factors in the design can cause a divide among our most vulnerable patients who need help accessing care. For providers, it is necessary to build solutions that really improve their workflow and ability to deliver better outcomes. Digital solutions that add additional work or are tough to use will ultimately fail. The manner in which we deploy digital technology will determine its utility.

Technology is a part of a solution to a problem, but it is not the solution. In order to be successful, it will be necessary to keep the patient at the center of the design. Understanding essential complexities will allow us to truly transform the patient and provider experience and redesign care delivery, all while lowering the total cost of care. These are all laudable goals that have been elusive, on a larger scale, until now.

-Janhavi Chaubey
ST AI&ML
STUDENTS

AV1: HISTORY & FUTURE

What is AV1 ? Not only that, but YouTubeon desktop also


supports AV1, and you can enableit in your
AV1 is a codec developed by the Alliance for account settings so long as you’re using a
Open Media, a conglomerate of a ton of compatible browser. In fact, the companyhas
differentcompanies in the technology designed its own silicon for the encoding of
space.Its main benefits are that it’s royalty- AV1 video that will be used in data centers for
free (so, companies can implement it in their YouTube.The chip, code-named“Argos”, is a
software for free), and it has some immense second-gen Video (trans) Coding Unit (VCU)
savings over the likes of VP9 and H264. that converts videos uploaded to the platform
Facebook Engineering conductedtests in to various compression formats and optimizes
2018, concluding that the AV1 reference them for differentscreen sizes. Google claims
encoder achieved 34%, 46.2%, and 50.3% that its new Argos VCU can handle videos 20-
higher data compression than Iibvpx-vp9, 33 times more efficiently than conventional
x264 High profile, and x264 Main profile, servers.
respectively. This means that for those on
slower connections, you may be able to enjoy
a quality higher than what you’re used to, and The History of AV1
for those on faster connections, you’ll be able
to get an even higher bitrate on the same
connection speed.

The first smartphone chipset to support AV1 decode was the MediaTek Dimensity 1000, which supported up to 4K 60 FPS. The Nvidia GeForce 3000 series supported decoding, the new Nvidia GeForce 4000 series supports both encoding and decoding, and Samsung's Exynos 2100/2200 both support AV1 decode as well. Support is slowly growing in the industry, and the chipset in the Chromecast HD also supports AV1 decode, too. We reached out to Google for comment and were told that the Chromecast with Google TV (HD) supports AV1.

The History of AV1

The context behind AV1 and why it was created is important as well. VP9 is a royalty-free codec developed by Google that anyone can use, and because it's royalty-free, it could be implemented on any platform or service that wanted it. YouTube made use of the codec on any device that could support it (as that meant big savings for Google thanks to reduced bandwidth), and it has even been adopted by video-on-demand services such as Netflix, Twitch, and Vimeo.

However, because Google has a vested interest in adopting better compression algorithms to reduce the bandwidth usage of its data centers, it began to work on VP10 — the successor to VP9. A tiny increase in video compression per video can result in huge cost savings and a major improvement in user experience when you're accounting for billions of video minutes. Google announced that it planned to release VP10 in 2016 and would then release an update every 18 months to ensure a steady progression. It got to the point where Google even started to release code for VP10, but the company then announced the cancellation of VP10 and formed the Alliance for Open Media (AOMedia) instead.

The Alliance for Open Media includes everyone from processor designers (AMD, Arm, Broadcom, Chips&Media, Intel, Nvidia) to browser developers (Google, Microsoft, and Mozilla), to streaming and videoconferencing services (Adobe, Amazon, BBC R&D, Cisco, Netflix, YouTube). All of these companies have been offering up some form of support to AV1, be it through hardware decoders introduced in chipsets, the implementation of decoders in browsers, or the use of the codec on streaming services.
AV1 versus HEVC/H.265

The biggest difference between AV1 and HEVC (High-Efficiency Video Coding), also known as H.265, is in the licensing. In order to ship a product with HEVC support, you need to acquire licenses from at least four patent pools (MPEG LA, HEVC Advance, Technicolor, and Velos Media) as well as numerous other companies, many of which do not offer standard licensing terms — instead requiring you to negotiate terms.

These steep royalties were already problematic for products like Google Chrome, Opera, Netflix, Amazon Video, Cisco WebEx Connect, Skype, and others, and they completely exclude HEVC as an option for projects like Mozilla Firefox. This is because it goes against multiple core values of the Firefox project: Firefox needs to be royalty-free in order to ship in many FOSS projects, which HEVC usage would prevent it from being; and Mozilla believes in a free and open web, which isn't possible if you promote patent-encumbered standards. Even ignoring those two problems, Mozilla simply cannot afford to waste hundreds of millions of dollars on royalties and all that time negotiating the necessary licensing agreements.

A fun fact as well: these same problems are what prevented Firefox (and Chromium) from even including native H.264 playback on many platforms until a couple of years ago... and it still requires a plugin on Linux. It's unlikely that Firefox will even be able to support HEVC before its patents expire in the 2030s (or possibly even later). Even to this day, Firefox only supports H.264 natively thanks to Cisco offering to pay all of the licensing costs for Mozilla through OpenH264, in order to standardize H.264 for streaming across the market until the next-generation codec was ready. On the Mozilla video codec guide, the company says that "Mozilla will not support HEVC while it is encumbered by patents." To this day, only Edge and Internet Explorer support native HEVC playback, and only on specific hardware that supports decoding.

In efficiency terms, both codecs go toe-to-toe against each other. Their efficiency is generally on par with each other (though tests have shown AV1 to edge slightly ahead), but there's a catch — AV1 typically takes significantly longer to encode, thanks to the lack of hardware encoding capabilities. The University of Waterloo found in 2020 that while AV1 offered a bitrate saving of 9.5% when compared to HEVC in encoding a 4K video, AV1 videos also took 590 times longer to encode than AVC. In contrast, HEVC took only 4.2 times longer. These tests were obviously run quite early on in AV1's lifespan, when hardware support wasn't really available.
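Readers who want to get a feel for this trade-off on their own clips can drive ffmpeg from Python, roughly as sketched below. This assumes an ffmpeg build that includes the libaom-av1 and libx265 software encoders; the input file name, CRF values and -cpu-used preset are illustrative assumptions, not settings taken from the tests mentioned above.

```python
# Minimal sketch: encode the same source with AV1 (libaom-av1) and HEVC (libx265),
# then compare encode time and output size. Assumes ffmpeg is on the PATH and was
# built with both encoders; quality settings are illustrative only.
import subprocess
import time
from pathlib import Path

SOURCE = "input.mp4"  # hypothetical source clip

ENCODES = [
    # -crf 30 -b:v 0 puts libaom-av1 in constant-quality mode;
    # -cpu-used trades encoding speed against compression efficiency.
    ("av1.mkv",  ["-c:v", "libaom-av1", "-crf", "30", "-b:v", "0", "-cpu-used", "4"]),
    ("hevc.mkv", ["-c:v", "libx265", "-crf", "28"]),
]

for out_name, codec_args in ENCODES:
    cmd = ["ffmpeg", "-y", "-i", SOURCE, *codec_args, out_name]
    start = time.perf_counter()
    subprocess.run(cmd, check=True)
    elapsed = time.perf_counter() - start
    size_mb = Path(out_name).stat().st_size / 1e6
    print(f"{out_name}: {elapsed:.1f} s, {size_mb:.1f} MB")
```

Comparing the printed encode times and file sizes of the two outputs gives a rough feel for the efficiency-versus-speed trade-off described above.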

The Future of AV1

It's looking likely that AV1 will blaze the trail for high-quality compressed video playback, as more and more devices support hardware decoding. Given that HEVC is only supported by one browser on desktop (now that Internet Explorer is dead, anyway), AV1 is clearly the go-to codec for the future as a VP9 successor. With support only expected to grow, more and more devices are going to end up using it. There are already some experiment flags referring to AV2 on the AOM repository and a "starting anchor for AV2 research" that was committed to the repository last year, which suggests that we'll see further iterations in the future as well.

-Vibhanshu Pandey
ST AI&ML
STUDENTS

Artificial Intelligence-
Revolutionizing the
Car Designing and
Manufacturing
Industry.

Since cars were introduced in the 19th century, humans have always been a part of their manufacturing process. From the initial design until the product's final launch, the human workforce carried out all the processes in between. As the years passed, advanced technology assisted workers in performing these processes. And now, in the 21st century, manual labor is slowly being replaced by intelligent machines that hardly require any human assistance. But one aspect of car manufacturing that hasn't completely been taken over by machines and is still human-dependent is Design.

Designing is a task that has always been done by hand – be it sketching, moulding clay, or digital rendering. The automotive design process consists of developing the appearance, both interior and exterior, along with the mechanical design, which includes designing the parts and components of the vehicle. While designing a vehicle, a designer must keep in mind factors such as the basic geometry, the dimensional requirements, and the industry standards, to name a few.

Even though designers nowadays are assisted by machines for purposes like digital rendering and prototyping, and have access to more tools, it has long been assumed that humans cannot be completely replaced by machines, as the creativity and emotions present in man-made designs cannot be mimicked by machines. But recent advancements in the Artificial Intelligence and Machine Learning sector are gradually proving otherwise.

Design teams only have a limited amount of time, money, and resources at their disposal and hence cannot prototype more than a few designs. But there is a chance that some of the designs that don't make it to the prototyping stage have the potential to produce a lighter, cheaper, and better product. This is where the concept of Machine Learning comes into the picture.

Machine Learning is a branch of Artificial Intelligence (A.I.) that focuses on building and understanding methods that leverage data to improve performance on a specific set of tasks. Machine Learning 'algorithms' work by building a model based on sample data, known as 'training' data, that 'trains' the model to make predictions and decisions without it being specifically programmed to do so.
The Chassis of the 21C designed by the Czinger A.I. (a chassis is the base frame of the car).

In the Automotive Design scenario, the process works by feeding a very large number of designs (the training data) into the model. The model interprets and analyzes these designs, thereby learning more information. The end product is a unique design created by the model.
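As a rough, hypothetical illustration of that training loop, the Python sketch below fits a scikit-learn model on an invented set of past "designs" and asks it to predict a property of a new candidate. The feature names, data, and target (a drag coefficient) are all made up for the example and are not drawn from the article.

```python
# Hypothetical sketch of "training on sample designs, then predicting for new ones".
# The designs, feature names and target values below are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Each row is a past design: [length_m, width_m, height_m, frontal_area_m2].
X = rng.uniform([3.5, 1.6, 1.2, 1.8], [5.2, 2.0, 1.6, 2.6], size=(500, 4))
# Synthetic target: a drag coefficient loosely tied to the features, plus noise.
y = 0.25 + 0.05 * X[:, 3] - 0.02 * X[:, 0] + rng.normal(0, 0.01, 500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)  # the "training" step on sample designs
print("R^2 on held-out designs:", round(model.score(X_test, y_test), 3))

candidate = np.array([[4.6, 1.9, 1.3, 2.1]])  # a new candidate design
print("Predicted drag coefficient:", round(float(model.predict(candidate)[0]), 3))
```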

Another major part of AI used to design cars is decision-making. The programmers will input certain parameters and rules, set by the engineers and designers, that are to be followed by the computer. This is to ensure that the final design is of the required dimensions and proportions.
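That rule-based step can be pictured as a simple validation pass. The sketch below checks a candidate design against some dimensional limits; the parameter names and ranges are hypothetical, invented for illustration rather than taken from any real design brief.

```python
# Hypothetical "parameters and rules" check: engineers define allowable ranges,
# and every generated design is validated against them before it goes further.
DESIGN_RULES = {
    "length_m":           (3.5, 5.2),
    "width_m":            (1.6, 2.0),
    "ground_clearance_m": (0.10, 0.20),
    "mass_kg":            (900, 1600),
}

def rule_violations(design: dict) -> list:
    """Return a list of violated rules; an empty list means the design passes."""
    violations = []
    for name, (low, high) in DESIGN_RULES.items():
        value = design.get(name)
        if value is None or not (low <= value <= high):
            violations.append(f"{name}={value} outside [{low}, {high}]")
    return violations

candidate = {"length_m": 4.6, "width_m": 1.9, "ground_clearance_m": 0.12, "mass_kg": 1450}
print(rule_violations(candidate) or "design satisfies all rules")
```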
A real-life example of this new-age technology is the American-made hypercar, the Czinger 21C. The car has been designed using A.I. and built using 3D printing. Its A.I. learns millions of mechanical principles to produce the most cost-effective design, while accounting for external factors and natural phenomena such as wind resistance and gravity. This precision makes the car look exactly as if it were designed by a human. The additive manufacturing method also decreases the number of resources used and is far cheaper than conventional manufacturing methods.

The Engine Bay of the Czinger 21C showcasing its 3D printed components.

Artificial Intelligence and Machine Learning, when implemented with Computational Engineering, is one of the most efficient ways to design a car. It allows engineers to optimize their designs and increase their efficiency via machine learning systems that can automate a number of tedious design tasks.

While products created by human labour are of more value and are time-intensive to manufacture, manufacturing a product using automated machines within a short time is more efficient and suited to the masses. Automated machines and processes need little to no input from people or other processes, but assessing whether a design is of value will still require a human. Hence, we can say that machines won't completely replace us – at least not in the near future.

The Czinger 21C – the first ever production car to be completely designed by an A.I.

-Akshay Vennikkal
ST AI&ML
STUDENT

Introduction

Before we jump into Web 3.0, let's learn about the Web and its history. In a nutshell, the Web is the common name for the WWW (World Wide Web), and it is a subset of the Internet. Before Web 3.0 became the dream of thousands of technologists, we had Web 1.0 and Web 2.0 – yes, Web 3.0 is simply the next phase in the evolution of the Web. In short, Web 1.0 was the earliest version of the Internet, developed by Berners-Lee in 1990 at CERN. According to its founder, Web 1.0 was a read-only web: it allowed us to search for information and read it, with very little or no interaction with the web.

Web 2.0 was the second stage in the development of the Internet. In the 10-20 years after its launch, the bland web pages of Web 1.0 were transformed into the interactive, socially connected, user-generated content of Web 2.0. Web 2.0 allowed users to create their own content that could be viewed by millions of people around the globe. Its exponential growth has been driven by key innovations such as mobile internet access and social networks. These innovations enabled the dominance of apps that expanded users' online interactivity, through platforms like Meta, Twitter, WhatsApp and Google, to name a few. We can say that Web 2.0 is the read-write web.

The term Web 3.0 was coined by Ethereum co-founder Gavin Wood, and it will be the third phase in the evolution of the Web. The core idea of this new technology is that it is:

Decentralized
Instead of large swathes of the internet being controlled and owned by centralized entities, ownership gets distributed amongst its builders and users.

Permissionless
Everyone has equal access to participate in Web3, and no one gets excluded.

Native payments
It uses cryptocurrency for spending and sending money online instead of relying on the outdated infrastructure of banks and payment processors.

Trustless
It operates using incentives and economic mechanisms instead of relying on trusted third parties.

And many more. Apart from these ideas, there are some key features of Web 3.0.
Key Features of Web 3.0:-

Blockchain-based.
Blockchain is the enabler for the creation of decentralized applications and services. With blockchain, data and connections across services are distributed in an approach that is different from a centralized database infrastructure (a toy sketch of this idea appears after this list).

Autonomous and artificially intelligent.
More automation overall is a critical feature of Web 3.0, and that automation will largely be powered by AI.

Decentralized.
As opposed to the first two generations of the web, where governance and applications were largely centralized, Web 3.0 will be decentralized. Applications and services will be enabled in a distributed approach, where there isn't a central authority.

Cryptocurrency-enabled.
Cryptocurrency usage is a key feature of Web 3.0 services and largely replaces the use of fiat currency.
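As a toy illustration of the blockchain idea mentioned in the list above (and not a real blockchain implementation), the Python sketch below chains records together with hashes, so that tampering with an old record breaks the verification of every later block.

```python
# Toy hash chain: each block stores the hash of the previous block, so editing
# old data invalidates the whole chain. For illustration only.
import hashlib
import json

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain: list, data: dict) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "prev_hash": prev, "data": data})

def is_valid(chain: list) -> bool:
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1]) for i in range(1, len(chain)))

chain = []
add_block(chain, {"owner": "alice", "asset": "in-game sword"})
add_block(chain, {"owner": "bob", "asset": "concert ticket"})
print(is_valid(chain))                 # True

chain[0]["data"]["owner"] = "mallory"  # tamper with an old record
print(is_valid(chain))                 # False: the chain no longer verifies
```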
So far we have learned many things about Web 3.0 and its features. But one question remains: why is Web 3.0 important?

Importance of Web 3.0:-

Ownership
Web3 gives you ownership of your digital assets in an unprecedented way. For example, say you're playing a Web 2.0 game and you purchase an in-game item. If the game creators delete your account, you will lose these items. Web3 allows for direct ownership through non-fungible tokens (NFTs), and no one has the power to take away your ownership.

Censorship resistance
The power dynamic between platforms and content creators is massively imbalanced. Web 2.0 requires content creators to trust that platforms will not censor them. On Web3, your data lives on the blockchain, so when you decide to leave a platform, you can take your reputation with you.

Decentralized autonomous organizations (DAOs)

Challenges of Web 3.0:-

Security
Since blockchain technology is trustless by default, Web 3.0 remains vulnerable to certain types of attacks: from hard forks and 51% attacks to DDoS, DNS hijacking and sniping bots. Regular scams, including targeted ads, may also work in the new environment. These security risks generally rely on the human factor, but targeted hacker attacks could also lead to massive financial losses.

Development complexity
Web 3.0 applications, namely Decentralized Apps or DApps, are inherently complex because of the consensus approach. They often require knowledge of new programming languages and additional frameworks, and a single bug in the code could lead to the loss of millions of dollars in cryptocurrency.

Scalability
The scalability element becomes evident once a large blockchain-based app gains popularity. A rapid increase in the number of users leads to a sharp increase in gas (transaction fees). The solution could be the introduction of a layer 2 blockchain offloading the transaction element; however, a quick deployment of these solutions jeopardizes the decentralization component.

Reliance on crypto
Crypto assets – cryptocurrencies and tokens – are critical to Web 3.0. Since they represent a form of payment or storage of value on the blockchain, their failure could entail the failure of the entire ecosystem.

- Moin Syed
ST AIML
NON TECHNICAL
ARTICLE
STUDENT

Social Benefits of Plastics & Applications

This article reviews the history, from 1600 BC to 2008, of the substances which are nowadays termed 'plastics'. It includes manufacturing volumes and contemporary consumption patterns of five predominant commodity plastics: polypropylene, polyethylene, polyvinyl chloride, polystyrene and polyethylene terephthalate. The use of additives to adjust the properties of these plastics, and any related in-use safety issues for the resulting polymeric materials, are described. A comparison is made with the thermal and barrier properties of other materials to demonstrate the versatility of plastics. Societal benefits for health, safety, energy saving and material conservation are described, and the unique advantages of plastics in society are discussed. Concerns relating to littering and trends in the recycling of plastics are also outlined. Finally, we provide predictions for a number of the potential applications of plastics over the next twenty years.

Humans have benefited from the use of polymers since about 1600 BC, when the ancient Mesoamericans first processed natural rubber into balls, figurines and bands (Hosler et al. 1999). In the intervening years, man has relied increasingly on plastics and rubber, first experimenting with natural polymers, horn, waxes, natural rubber and resins, until the 19th century, when the development of modern-day thermoplastics began. Although a great many plastic materials are commercially available, only a handful of these qualify as commodity thermoplastics in terms of their high volume and comparatively low cost. These plastics and their fractional consumption on an international basis are shown below. Low-density polyethylene (LDPE), high-density PE (HDPE), polypropylene (PP), PVC, PS and polyethylene terephthalate (PET) account for about ninety per cent of the total demand.

1. COMMODITY PLASTICS

This group consists of PP and PE. Polypropylene was discovered in 1954 by Giulio Natta, and commercial manufacturing of the resin commenced in 1957. It is the single most extensively used thermoplastic globally. It is a versatile, cost-effective polymer and can be injection moulded, blow-moulded, thermoformed, blown-film extruded or extruded into a variety of products. Examples of these include flexible barrier film pouches (such as the biaxially oriented packaging film used for crisps and nuts); stackable crates for shipping and storage, caps and closures for containers, blow-moulded bottles and thin-walled containers (e.g. margarine tubs, yoghurt cups, food trays) used within the food industry; and tree shelters, soil sieves, fork handles, mulch films, glass substitutes, windows or doors, and water or sewage pipes and geomembranes used in building applications. Polypropylenes are also used in household items including bowls, kettles and cat litter trays; personal items such as combs, hair dryers and film wrap for apparel; and in other packaged items.

About a half of the 35 million tonnes of PE resin produced is used to make plastic film, followed by 13–14% in injection-moulded and blow-moulded products. North American, western European and Asian markets each consume about 25–30% of the PE film produced globally. Typical applications of PE are in blow-moulded containers with volumes ranging from a few millilitres, such as detergent bottles (200–500 cm3) and milk jugs (0.5–4 l), to hundreds of litres, including water and chemical barrels. Film applications include carrier bags, sandwich bags, freezer bags and cling wrap, and horticultural uses consist of irrigation pipes, glass alternatives and field liners. Polyethylene is likewise broadly used as a dielectric insulator in electrical cables.

2. PLASTICS ADDITIVES

Plastics are hardly ever used by themselves; typically, the resins are mixed with other substances known as 'additives' to enhance overall performance. These can include inorganic fillers (e.g. carbon or silica) to reinforce the plastic material, thermal stabilizers to permit the plastics to be processed at high temperatures, plasticizers to render the material pliable and flexible, fire retardants to deter ignition and burning, and UV stabilizers to prevent degradation when exposed to sunlight. Colorants, matting agents, opacifiers and lustre additives can also be used to enhance the appearance of a plastic product. Additives are often the most expensive component of a formulation, and the minimum amount needed to achieve a given level of performance is usually used. The additives are intimately mixed with the polymer, or 'compounded', into a formulation that is processed into the shape of the final product.

Plasticizers are a particular group of additives that has raised concerns; however, there are numerous kinds of plasticizer (e.g. adipates, polymerics, trimellitates, 1,2-cyclohexanedicarboxylic acid diisononyl ester, citrates, phthalates, and so on) used in plastics. Of those, about eight different types are in common use. It is not feasible to conduct a generalized risk assessment on phthalates as a category of compounds used as plasticizers. Some phthalates, e.g. diisononyl phthalate and diisodecyl phthalate, have been through full European Risk Assessments and have a completely clean bill of health in all applications, while for other phthalates, such as dibutyl phthalate and diethylhexyl phthalate, risk-reduction measures are required (described in the ECB published Risk Assessments in the online ORATS (2008) database available from The Phthalates Information Centre Europe) to ensure that safe use has been identified.

As suggested by the futurist Hammond (2007) in his recent book 'The World in 2030', the speed of technological development is accelerating exponentially and, for this reason, by the year 2030 it will seem as if a complete century's worth of progress has taken place within the first three decades of the twenty-first century. In many ways, life in 2030 may be unrecognizable in comparison with life today. During this time, plastics will play a vastly expanded role in our lives.

- Banti Pathak
TT AIML
INTERVIEW
Shalu Mishra
Software Developer at
Persistent Systems

Q1 How did your fascination with machine learning and deep learning begin?

ANS More than a specific event or experience, my journey has been a gradual
progression of experiences that strengthened my fascination with machine learning
and deep learning. My first academic degree was in mathematics, which continues
to interest me to this day. My Master’s thesis in image processing was perhaps my
first foray into the broader field that enthused me into this research area.

My PhD thesis was in an interdisciplinary group that applied machine learning to


real-world problems, which showed me the applied side of research. Through all
these years, my natural ideation process always gravitated towards machine
learning algorithms, which was perhaps an ideal middle ground between math on
the one hand and real-world application on the other. This aspect of machine
learning and deep learning continues to interest me, and I still try to work on
solutions for real-world challenges. I do believe there is a long way to go.

Q2 What were your initial challenges, and how did you address them?

ANS Different challenges are encountered at different stages of career. As a research


student, my initial challenges were similar to many others – understanding what
kind of a research problem I want to work on; knowing when to read and when to
experiment; what it takes to publish your research; how to write a paper; and so on.

To a large extent, these questions were answered through the mentorship I


received from different people, discussing with peers, and sometimes just trial-
and-error, or what one can call experience.

Q3 Did you encounter any data science problems during your research?

ANS The two biggest problems I have had to deal with are dataset creation and finding the
problem one can solve given a certain kind of dataset. Creating a dataset is a skill
by itself and often a longitudinal effort with multiple players, especially when it
pertains to a domain that requires expertise, such as healthcare or aerospace or
agriculture. Knowing what kind of data collection is required to solve a problem is
non-trivial. Discussions with multiple stakeholders and thinking through every detail
of data collection and annotation is paramount. Ethical and privacy implications
must also be kept in mind, and suitable consent and permissions must be taken.
Q4 What are your thoughts on the scope of AI research in India compared to the global
scenario?
ANS
The scope of AI in India across different application domains is immense. Plenty of
opportunities to learn AI/ML – especially certificate programs – have emerged over
the last couple of years. Governments, both at central and state levels, have shown
great leadership and enthusiasm in adopting AI, and even AI-based startups have
emerged in diverse application domains. However, considering our population,
there is always scope for more. One area where we lag is innovation in AI.
However, AI as a field is still growing, and making new technological innovations at
the heart of AI itself – be it fundamental or applied – is something we need more of.

There are two aspects to the research scope of AI in India: to use and leverage AI
to develop cutting-edge products and services in various application sectors, or to
make fundamental advancements to the field of AI itself, be it theoretical or applied.
The former is important not merely as an indicator of technological advancement
but more fundamentally for improving the quality of life of every citizen in the
country and helping scale the efforts of enablers to reach out to every corner of the
country.

Q5 What’s IITs’ approach towards AI education and research?

ANS At IIT-Hyderabad, we have an AI department that offers Bachelors, Masters and


PhD programs in AI. We also offer a certificate program for anyone interested in AI,
even if they are not registered as an IIT-H student. The department consists of
faculty members in AI with different backgrounds: computer science, electrical
engineering, mathematics, mechanical engineering, design, liberal arts, physics,
etc.

Q6 How can the government and corporations play a role in encouraging more
students to enter AI fields?

ANS There is always a need to do more. India has talented students across almost every
institution; however, there is a dearth of mentorship opportunities to take their
learning to the next level. Many faculty, though sincere and dedicated, have not
had access to state-of-the-art topics and practices.
ACHIEVEMENTS
Achievements
AI&ML DEPARTMENT

Jwala Chorasiya
& Rohit Gupta
Winner at SITS,
Pune Hackathon

Prabhat Shukla
TCET TSDW:
Champion Debate-e-
darbaar
Rotaract 2021-22:
Best Debater

Shivam kumar
Chaurasia
INNERVE HACKS '22 WINNER
(National level Hackathon).
4 STAR CODER AT
CODECHEF.

Nidhi Worah
2nd Runner up in
computer vision project
exhibition

Tanmay & Mayani


Won Innerve Hacks, A
Pan India Hackathon

Ankur yadav
Secured 1st place in
“OFF D’ CUFF 2022” -
Sorcerers of Intellect.

Bhavya Jain &


Aavya Upadhyay
First prize in Debate-e-
Darbar (Institute level
debate competition)

Sanskar Mishra
Multicon PPC Second
prize &
SIH participant
ACKNOWLEDGEMENT

"Coming together is the beginning, Keeping together is progress and Working


together is success."

For the first issue of the AI&ML Department Magazine, Team TEJAS has
worked hard to provide you with the finest magazine ever. We would like
to extend our sincere gratitude to our management for their constant
support. Also, we would like to thank our Principal, Dr. B. K. Mishra, our
Vice-Principal, Dr. Kamal Shah, and our Deans, Dr. Sheetal Rathi (Academic),
Dr. Vinitkumar Jayaprakash Dongre (R&D), and Dr. Lochan Jolly
(Student & Staff Welfare), for their encouragement. We would also like
to thank our In-charge Head of Department, Dr. Megharani Patil, for the
innovative ideas and additions made to our magazine, and the
Faculty In-charges for this issue, Mrs. Shilpa Mathur and Mr. Anand
Maha, for shaping "TEJAS", the first edition of the magazine of the AIML
department. Lastly, we would like to convey our heartfelt gratitude to
all the faculty members, students, and all stakeholders for their
valuable input.
-The Editorial Team
TEJAS
