Artificial Intelligence & Machine Learning Unit 6: Applications Question Bank and Its Solution
Unit 6: Applications
Third Year Bachelor of Engineering (Choice Based Credit System)
Mechanical Engineering (2019 Course)
Board of Studies – Mechanical and Automobile Engineering, SPPU, Pune
(With Effect from Academic Year 2021-22)
Unit 6: Applications
Syllabus (Theory Content):
Human Machine Interaction
Predictive Maintenance and Health Management
Fault Detection
Dynamic System Order Reduction
Image based part classification
Process Optimization
Material Inspection
Tuning of control algorithms
HMI is all about how people and automated systems interact and communicate with
each other. It has long ceased to be confined to traditional machines in industry
and now also covers computers, digital systems and devices for the Internet of Things
(IoT).
More and more devices are connected and automatically carry out tasks. Operating all of
these machines, systems and devices needs to be intuitive and must not place excessive
demands on users.
Human-machine interaction is all about how people and automated systems interact
with each other.
HMI now plays a major role in industry and everyday life: More and more devices are
connected and automatically carry out tasks.
A user interface that is as intuitive as possible is therefore needed to enable smooth
operation of these machines. That can take very different forms.
Smooth communication between people and machines requires interfaces: The place
where or action by which a user engages with the machine.
Simple examples are light switches or the pedals and steering wheel in a car: An action is
triggered when you flick a switch, turn the steering wheel or step on a pedal.
However, a system can also be controlled by text being keyed in, a mouse, touch screens,
voice or gestures.
The devices are either controlled directly: Users touch the smartphone’s screen or issue a
verbal command. Or the systems automatically identify what people want: Traffic lights
change color on their own when a vehicle drives over the inductive loop in the road’s
surface.
Other technologies are not so much there to control devices, but rather to complement
our sensory organs. One example of that is a virtual reality glass.
There are also digital assistants: Chatbots, for instance, reply automatically to requests
from customers and keep on learning.
User interfaces in HMI are the places where or actions by which the user engages with
the machine.
A system can be operated by means of buttons, a mouse, touch screens, voice or
gesture, for instance.
One simple example is a light switch – the interface between the machine "light" and a
human being.
It is also possible to differentiate further between direct control, such as tapping a touch
screen, and automatic control.
In the latter case, the system itself identifies what people want.
Think of traffic lights which change color as soon as a vehicle drives over the inductive
loop in the road’s surface.
For a long time, machines were mainly controlled by switches, levers, steering wheels or
buttons; these were joined later by the keyboard and mouse.
Now we are in the age of the touch screen. Body sensors in wearables that automatically
collect data are also modern interfaces.
Voice control is also making rapid advances: Users can already control digital assistants,
such as Amazon Alexa or Google Assistant, by voice.
That entails far less effort. Chatbots are also used in such systems and their ability to
communicate with people is improving more and more thanks to artificial intelligence.
Gesture control is at least as intuitive as voice control. That means robovacs, for example,
could be stopped by a simple hand signal in the future.
Google and Infineon have already developed a new type of gesture control by the name
of "Soli":
Devices can also be operated in the dark or remotely with the aid of radar technology.
Technologies that augment reality now also act as an interface. Virtual reality glasses
immerse users in an artificially created 3D world, while augmented reality glasses
superimpose virtual elements in the real environment.
Mixed reality glasses combine both technologies, thus enabling scenarios to be
presented realistically thanks to their high resolution.
Modern HMI helps people to use even very complex systems with ease. Machines also
keep on getting better at interpreting signals – and that is important in particular in
autonomous driving.
Human needs are identified even more accurately, which means robots can be used in
caring for people, for instance. One potential risk is the fact that hackers might obtain
information on users via the machines’ sensors.
Last but not least, security is vital in human-machine interaction. Some critics also fear
that self-learning machines may become a risk by taking actions autonomously.
It is also necessary to clarify the question of who is liable for accidents caused by HMI.
Whether through voice and gesture control or virtual, augmented and mixed reality,
human-machine interaction is far from reaching the end of the line.
In future, data from different sensors will also increasingly be combined to capture and
control complex processes optimally.
The human senses will be replicated better and better with the aid of, for example, gas
sensors, 3D cameras and pressure sensors, thus expanding the devices’ capabilities.
In contrast, there will be fewer of the input devices that are customary at present, such as
remote controllers.
Even complex systems will become easier to use thanks to modern human-machine
interaction. To enable that, machines will adapt more and more to human habits and
needs. Virtual reality, augmented reality and mixed reality will also allow them to be
controlled remotely. As a result, humans expand their realm of experience and field of
action.
Machines will also keep on getting better at interpreting signals in future – and that’s
also necessary: The fully autonomous car must respond correctly to hand signals from a
police officer at an intersection. Robots used in care must likewise be able to "assess" the
needs of people who are unable to express these themselves.
The more complex the contribution made by machines is, the more important it is to
have efficient communication between them and users. Does the technology also
understand the command as it was meant? If not, there’s the risk of misunderstandings –
and the system won’t work as it should. The upshot: A machine produces parts that don’t
fit, for example, or the connected car strays off the road.
People, with their abilities and limitations, must always be taken into account in the
development of interfaces and sensors. Operating a machine must not be overly complex
or require too much familiarization. Smooth communication between human and
machine also needs the shortest possible response time between command and action,
otherwise users won’t perceive the interaction as being natural.
One potential risk arises from the fact that machines are highly dependent on sensors to
be controlled or respond automatically. If hackers have access to the data, they obtain
details of the user’s actions and interests. Some critics also fear that even learning
machines might act autonomously and subjugate people. One question that has also not
been clarified so far is who is liable for accidents caused by errors in human-machine
interaction, and who is responsible for them.
Reference: https://www.infineon.com/cms/en/discoveries/human-machine-interaction/
8. Make a list of maintenance types and explain each in brief. Discuss the scope of AI/ML in maintenance.
9. Explain fault diagnosis (of any suitable machine element) using ML.
Refer to the following article and explain the procedure the authors have adopted.
Sakthivel, N. R., Sugumaran, V., & Babudevasenapati, S. (2010). Vibration based fault
diagnosis of monoblock centrifugal pump using decision tree. Expert Systems with
Applications, 37(6), 4040-4049.
https://www.sciencedirect.com/science/article/pii/S0957417409008689
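As a rough illustration only (not the authors' actual code), the sketch below shows the general shape of such a procedure in Python: statistical features are extracted from vibration signals and a decision tree classifier from scikit-learn is trained to label the pump condition. The feature set, file names and class labels here are hypothetical assumptions.
# Hedged sketch of vibration-based fault classification with a decision tree.
# Signal shapes, feature choices and class labels are assumptions, not taken
# from the cited paper.
import numpy as np
from scipy.stats import kurtosis, skew
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

def extract_features(signal):
    """Compute simple statistical features from one vibration signal."""
    return [signal.mean(), signal.std(), signal.min(), signal.max(),
            kurtosis(signal), skew(signal),
            np.sqrt(np.mean(signal ** 2))]  # RMS value

# Hypothetical inputs: array of vibration signals and their condition labels
signals = np.load('pump_vibration_signals.npy')   # assumed file, shape (n, points)
labels = np.load('pump_condition_labels.npy')     # e.g. 'good', 'bearing fault', ...

X = np.array([extract_features(s) for s in signals])
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.2)

clf = DecisionTreeClassifier()       # classifier family used in the cited paper
clf.fit(X_train, y_train)
print('Test accuracy:', accuracy_score(y_test, clf.predict(X_test)))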
10. Explain an intelligent approach for classification of Nuts, Bolts, Washers and
Locating Pins.
An intelligent approach to classify Nuts, Bolts, Washers and Locating Pins, treated here
the way Cats and Dogs are treated in the classic image-classification example, is explained here.
Figure: Flowchart of a machine learning algorithm trained on images of nuts and bolts using a neural network model.
Dataset
We downloaded 238 parts for each of the 4 classes (a total of 238 x 4 = 952 parts) from various
part libraries available on the internet. We then took 8 different isometric images of each part.
This was done to augment the available data, as only 238 images per class would not be
enough to train a good neural network. A single class therefore has 1904 images (8 isometric
images of 238 parts), giving a total of 7616 images. Each image is 224 x 224 pixels.
Figure: Images of the 4 classes; each part has 8 images. Each image is treated as a single
data point. The labels are the integers 0, 1, 2 and 3, where each number corresponds to a
particular class:
#Integers and their corresponding classes
{0: 'locatingpin', 1: 'washer', 2: 'bolt', 3: 'nut'}
After training on the above images we will see how well our model predicts a random image it has not seen.
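The article does not show the loading code; purely as an illustration, the snippet below is one way to build the image array X and the integer label array y with this 0-3 class mapping. The folder layout and file handling are assumptions.
# Hedged sketch: build X (images) and y (integer labels) for the 4 classes.
# Folder layout and file naming are assumptions for illustration.
import os
import numpy as np
from PIL import Image

CLASSES = {0: 'locatingpin', 1: 'washer', 2: 'bolt', 3: 'nut'}

X, y = [], []
for label, name in CLASSES.items():
    folder = os.path.join('dataset', name)           # e.g. dataset/bolt/*.png
    for fname in os.listdir(folder):
        img = Image.open(os.path.join(folder, fname)).convert('L')   # grayscale
        X.append(np.array(img.resize((224, 224))) / 255.0)           # normalize
        y.append(label)

X = np.array(X).reshape(-1, 224, 224, 1)   # 7616 images of 224 x 224 x 1
y = np.array(y)                            # 7616 integer labels in {0, 1, 2, 3}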
Methodology
The process took place in 7 steps; the details follow later. In brief:
1. Data Collection: The data for each class was collected from various standard part
libraries on the internet.
2. Data Preparation: 8 isometric-view screenshots were taken of each part and
reduced to 224 x 224 pixels.
3. Model Selection: A sequential CNN model was selected as it is simple and well
suited to image classification.
4. Train the Model: The model was trained on our data of 7616 images with an 80/20
train-test split.
5. Evaluate the Model: The results of the model were evaluated: how well did it predict
the classes?
6. Hyperparameter Tuning: The hyperparameters are tuned to obtain better results;
we had already tuned our model in this case.
7. Make Predictions: Check how well the model predicts real-world data.
Data Collection
We downloaded the part data for various nuts, bolts, washers and locating pins from different
part libraries on the internet. These websites host numerous 3D models of standard parts from
various makers in different file formats. Since we would be using the FreeCAD API to extract
the images, we downloaded the files in a neutral format (STEP).
Data Preparation
We then ran a program using the FreeCAD API that automatically took 8 isometric screenshots
of 224 x 224 pixels of each part. FreeCAD is a free and open-source general-purpose
parametric 3D computer-aided design modeler written in Python.
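The article does not list this script; a minimal sketch of the idea, run from within FreeCAD's GUI Python console, might look roughly like the following. The file paths and the rotation scheme are assumptions, and exact FreeCAD API details can differ between versions.
# Hedged sketch: capture 8 isometric 224 x 224 screenshots of a STEP part
# using FreeCAD's scripting interface (run inside the FreeCAD GUI).
import FreeCAD, FreeCADGui, Part

doc = FreeCAD.newDocument("capture")
shape = Part.Shape()
shape.read("parts/bolt_001.step")                # assumed input file
obj = doc.addObject("Part::Feature", "part")
obj.Shape = shape
doc.recompute()

view = FreeCADGui.ActiveDocument.ActiveView
view.viewIsometric()
view.fitAll()

for i in range(8):                               # 8 views, 45 degrees apart
    obj.Placement.Rotation = FreeCAD.Rotation(FreeCAD.Vector(0, 0, 1), i * 45)
    doc.recompute()
    view.saveImage(f"images/bolt_001_view{i}.png", 224, 224, "White")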
Figure: A convolutional neural network: a basic visualization of how our algorithm works.
The following listing shows what our CNN looks like. Don't worry if you don't understand it.
The idea is that the 224 x 224 pixels of each image pass through this network, which outputs
an answer. The model adjusts its weights accordingly and, after many iterations, is able to
predict a random image's class.
#Model description
Model: "sequential_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv2d_1 (Conv2D)            (None, 222, 222, 128)     1280
_________________________________________________________________
activation_1 (Activation)    (None, 222, 222, 128)     0
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 111, 111, 128)     0
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 109, 109, 128)     147584
_________________________________________________________________
activation_2 (Activation)    (None, 109, 109, 128)     0
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 54, 54, 128)       0
_________________________________________________________________
flatten_1 (Flatten)          (None, 373248)            0
_________________________________________________________________
dense_1 (Dense)              (None, 64)                23887936
_________________________________________________________________
dense_2 (Dense)              (None, 4)                 260
_________________________________________________________________
activation_3 (Activation)    (None, 4)                 0
=================================================================
Total params: 24,037,060
Trainable params: 24,037,060
Non-trainable params: 0
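The article shows only this printed summary. As a hedged reconstruction (not the author's actual code), the Keras sketch below produces exactly these output shapes and parameter counts, assuming 224 x 224 grayscale inputs; the activation functions, optimizer and loss are assumptions.
# Hedged sketch: a Sequential CNN matching the summary above.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, Activation, MaxPooling2D, Flatten, Dense

model = Sequential([
    Conv2D(128, (3, 3), input_shape=(224, 224, 1)),  # -> (222, 222, 128), 1,280 params
    Activation('relu'),
    MaxPooling2D((2, 2)),                            # -> (111, 111, 128)
    Conv2D(128, (3, 3)),                             # -> (109, 109, 128), 147,584 params
    Activation('relu'),
    MaxPooling2D((2, 2)),                            # -> (54, 54, 128)
    Flatten(),                                       # -> 373,248 features
    Dense(64),                                       # 23,887,936 params
    Dense(4),                                        # 260 params
    Activation('softmax'),                           # class probabilities for the 4 classes
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',   # integer labels 0-3
              metrics=['accuracy'])
model.summary()   # prints a table like the one shown above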
Model Training
Now, finally, the time has come to train the model using our dataset of 7616 images. Our
[X] is a 3D array of 7616 x 224 x 224 and the [y] label set is a 7616 x 1 array. For training
purposes the data must be split into at least two parts: a training set and a validation (test)
set (the terms test and validation are used interchangeably when only two sets are involved).
The validation data usually comes from the same distribution as the training set and is data
the model has not seen. After the model has trained on the training set, it tries to predict the
data of the validation set. How accurately it predicts this is our validation accuracy. This is
more important than the training accuracy, as it shows how well the model generalizes. In
real-life applications it is common to split the data into three parts: train, validation and test.
For our case we split it only into a training and a test set, using an 80-20 split: 80% of the
images are used for training and 20% for testing. That is, we train on 6092 samples and test
on 1524 samples out of the total 7616.
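A hedged sketch of this split and training step, using scikit-learn's train_test_split together with the model defined above (the variable names and fit arguments are assumptions):
# Hedged sketch: 80/20 train-test split and model training.
# X (7616 images) and y (7616 labels) come from the data-loading step;
# 'model' is the CNN defined earlier.
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)   # 6092 training / 1524 test samples

history = model.fit(X_train, y_train,
                    validation_data=(X_test, y_test),
                    epochs=15, batch_size=32)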
If the algorithm predicts incorrectly the cost increases; if it predicts correctly the cost
decreases.
After training for 15 epochs we can see the following graphs of loss and accuracy (cost and
loss are used interchangeably in our case).
Figure: Training and validation loss for our model (graph generated with matplotlib).
The loss decreased as the model trained; it became better at classifying the images with
each epoch. However, the model was not able to improve its performance much on the
validation set.
Figure: Training and validation accuracy for our model (graph generated with matplotlib).
The accuracy increased as the model trained for each epoch; it became better at classifying
the images. The accuracy for the validation set is lower than for the training set, as the model
has not trained on it directly. The final value is 97.64%, which is not bad.
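Graphs like the two above can be produced from the history object returned by model.fit; a minimal sketch (the metric key names are assumptions and differ slightly between Keras versions):
# Hedged sketch: plot training vs. validation loss and accuracy.
# Newer Keras uses 'accuracy'/'val_accuracy'; older versions use 'acc'/'val_acc'.
import matplotlib.pyplot as plt

plt.figure()
plt.plot(history.history['loss'], label='training loss')
plt.plot(history.history['val_loss'], label='validation loss')
plt.xlabel('epoch'); plt.ylabel('loss'); plt.legend()

plt.figure()
plt.plot(history.history['accuracy'], label='training accuracy')
plt.plot(history.history['val_accuracy'], label='validation accuracy')
plt.xlabel('epoch'); plt.ylabel('accuracy'); plt.legend()
plt.show()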
Hyperparameter Tuning
The next step would be to change the hyperparameters (the learning rate, number of epochs,
dataset size, etc.) to improve our model. In machine learning, a hyperparameter is a parameter
whose value is used to control the learning process; by contrast, the values of other
parameters (typically node weights) are derived via training. For our purposes we had already
modified these parameters before this article was written so as to obtain optimum
performance for display here. We increased the dataset size and the number of epochs to
improve the accuracy.
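As a simple hand-rolled illustration (not the authors' actual procedure), one could loop over a few candidate learning rates and epoch counts and keep whichever combination gives the best validation accuracy. The helper build_model() is hypothetical and is assumed to rebuild the CNN defined earlier.
# Hedged sketch: manual hyperparameter search over learning rate and epochs.
from tensorflow.keras.optimizers import Adam

best = (None, 0.0)
for lr in [1e-2, 1e-3, 1e-4]:
    for epochs in [10, 15, 20]:
        m = build_model()                 # hypothetical helper returning a fresh CNN
        m.compile(optimizer=Adam(learning_rate=lr),
                  loss='sparse_categorical_crossentropy', metrics=['accuracy'])
        h = m.fit(X_train, y_train, validation_data=(X_test, y_test),
                  epochs=epochs, batch_size=32, verbose=0)
        val_acc = h.history['val_accuracy'][-1]
        if val_acc > best[1]:
            best = ((lr, epochs), val_acc)

print('Best (learning rate, epochs):', best[0], 'validation accuracy:', best[1])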
catalogue. There are serial codes to remember as a change in a single digit or alphabet
might mean a different type of part.
Reference: Five High-Impact Research Areas in Machine Learning for Materials Science by
Bryce Meredig. https://pubs.acs.org/doi/10.1021/acs.chemmater.9b04078
Over the past several years, the field of materials informatics has grown dramatically. (1)
Applications of machine learning (ML) and artificial intelligence (AI) to materials science are
now commonplace. As materials informatics has matured from a niche area of research into
an established discipline, distinct frontiers of this discipline have come into focus, and best
practices for applying ML to materials are emerging. (2) The purpose of this editorial is to
outline five broad categories of research that, in my view, represent particularly high-impact
opportunities in materials informatics today:
Validation by experiment or physics-based simulation. One of the most common
applications of ML in materials science involves training models to predict materials
properties, typically with the goal of discovering new materials. With the availability of
user-friendly, open-source ML packages such as scikit-learn, (3) keras, (4) and pytorch, (5)
the process of training a model on a materials data set—which requires only a few lines
of python code—has become completely commoditized. Thus, standard practice in
designing materials with ML should include some form of validation, ideally by
experiment (6−8) or, in some cases, by physics-based simulation. (9,10) Of particular
interest are cases in which researchers use ML to identify materials whose properties are
superior to those of any material in the initial training set; (11) such extraordinary results
remain scarce.
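To make the "few lines of python code" point concrete, a hedged sketch of such a commoditized baseline workflow with scikit-learn is shown below; the CSV file and its columns are hypothetical placeholders, not data from the editorial.
# Hedged sketch: a property-prediction baseline in a few lines of scikit-learn.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

df = pd.read_csv('featurized_materials.csv')   # hypothetical descriptors + target
X = df.drop(columns=['band_gap_eV'])           # assumed target column name
y = df['band_gap_eV']

model = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring='r2')
print('5-fold CV R^2:', scores.mean())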
ML approaches tailored for materials data and applications. This category
encapsulates a diverse set of method development activities that make ML more
applicable to and effective for a wider range of materials problems. Materials science as a
field is characterized by small, sparse, noisy, multiscale, and heterogeneous
multidimensional (e.g., a blend of scalar property estimates, curves, images, time series,
etc.) data sets. At the same time, we are often interested in exploring very large, high-
dimensional chemistry and processing design spaces. Some method development
examples to address these challenges include new approaches for uncertainty
quantification (UQ), (12) extrapolation detection, (13) multiproperty optimization, (14)
descriptor development (i.e., the design of new materials representations for ML),
(15−17) materials-specific cross-validation, (18,19) ML-oriented data standards, (20,21)
and generative models for materials design. (22)
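As one small, hedged illustration of uncertainty quantification (only one of the method-development directions listed above), a Gaussian process regressor in scikit-learn returns a standard deviation with each prediction, which can flag candidates the model is unsure about; the data below are synthetic placeholders.
# Hedged sketch: prediction with uncertainty via a Gaussian process regressor.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X_train = rng.uniform(size=(50, 3))              # hypothetical material descriptors
y_train = X_train @ np.array([1.0, -2.0, 0.5])   # hypothetical property values
X_candidates = rng.uniform(size=(20, 3))         # unmeasured candidate materials

kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(X_train, y_train)

mean, std = gp.predict(X_candidates, return_std=True)
# Large std flags data-poor regions: natural targets for the next experiment.
most_uncertain = np.argsort(std)[::-1][:5]
print(most_uncertain, std[most_uncertain])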
High-throughput data acquisition capabilities. ML is notoriously data-hungry. Given
the typically very high cost of acquiring materials data, both in terms of time and money,
the materials informatics field is well-served by research that accelerates and
democratizes our ability to synthesize, characterize, and simulate materials. Examples
include high-throughput density functional theory calculations of materials properties,
(23−25) applications of robotics, automation, and operations research to materials
science, (26−30) and natural language processing (NLP) to extract materials data from
text corpora. (31,32)
ML that makes us better scientists. A popular refrain in the materials informatics
community is that "ML will not replace scientists, but scientists who use ML will replace
those who do not." This bon mot suggests that ML has the potential to make scientists
more effective and enable them to do more interesting and impactful work. We are still
in the nascent stages of creating true ML-based copilots for scientists, but research areas
such as ML model explainability and interpretability (33,34) represent a valuable early
step. Another example is the application of ML to accelerate or simplify materials
characterization. Researchers have used deep learning to efficiently post-process and
understand images generated via existing characterization methods such as scanning
transmission electron microscopy (STEM) (35) and position averaged convergent beam
electron diffraction (PACBED). (36)
Integration of physics within ML, and ML with physics-based simulations. The
paucity of data in many materials applications is a strong motivator for formally
integrating known physics into ML models. One approach to embedding physics within
ML is to develop methods that guarantee certain desirable properties by construction,
such as respecting the invariances present in a physical system. (37) Another strategy is
to use ML to model the difference between simulation outputs and experimental results.
For example, Google and collaborators created TossingBot, a robotic system that learned
to throw objects into bins with the aid of a ballistics simulation. (38) The researchers
found that a physics-aware ML approach, wherein ML learned and corrected for the
discrepancy between the simulations and real-world observations, dramatically
outperformed a pure trial-and-error ML training strategy. In a similar vein, ML can enable
us to derive more value from existing physics-based simulations. For example, ML-based
interatomic potentials (39−41) represent a means of capturing some of the physics of
first-principles simulations in a much more computationally efficient model that can
simulate orders of magnitude more atoms. ML can also serve as "glue" to link physics-
based models operating at various fidelities and length scales. (42)
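A hedged sketch of the "learn the difference" idea described above (not TossingBot's actual code): a model is trained on the residual between experimental measurements and simulation outputs, and its prediction is added back to the simulator's output. The simulator and data below are toy placeholders.
# Hedged sketch of delta learning: ML models the residual between a
# physics-based simulation and experimental measurements.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def simulate(x):
    """Placeholder physics-based model (stand-in for a real simulator)."""
    return 2.0 * x[0] + 0.5 * x[1]

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 2))                        # hypothetical input conditions
y_exp = np.array([simulate(x) for x in X]) + 0.3 * X[:, 0] ** 2   # "measurements" with unmodeled physics
y_sim = np.array([simulate(x) for x in X])            # simulator predictions

residual_model = GradientBoostingRegressor()
residual_model.fit(X, y_exp - y_sim)                  # learn only the discrepancy

def corrected_prediction(x_new):
    """Physics prediction plus the learned ML correction."""
    return simulate(x_new) + residual_model.predict(np.atleast_2d(x_new))[0]

print(corrected_prediction(np.array([0.4, 0.7])))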
As ML becomes more widely used in materials research, I expect that efforts addressing one
or more of these five themes will have an outsized impact on both the materials informatics
discipline and materials science more broadly.
Reference: Vasudevan, R., Pilania, G., & Balachandran, P. V. (2021). Machine learning for
materials design and discovery. Journal of Applied Physics, 129(7), 070401.
https://doi.org/10.1063/5.0043300
Liu, Y., Zhao, T., Ju, W., & Shi, S. (2017). Materials discovery and design using machine
learning. Journal of Materiomics, 3(3), 159-177.
https://www.sciencedirect.com/science/article/pii/S2352847817300515
Figure: The fundamental framework for the application of machine learning in material property prediction.
*********************