AI Course Interview V1
Answer: AI is the simulation of human intelligence in machines programmed to think and learn. It
enables machines to perform tasks typically requiring human intelligence, like visual perception,
speech recognition, and decision-making.
2. What is Machine Learning (ML)?
Answer: ML is a subset of AI that enables systems to learn and improve from experience
automatically, without being explicitly programmed. It uses algorithms to identify patterns in data.
Answer: Neural networks are computational models inspired by the human brain. They consist of
interconnected nodes (neurons) that process data in layers, enabling deep learning and other
complex AI functions.
Answer: In supervised learning, the model is trained on labeled data, where each input has a known
output. In unsupervised learning, the model is trained on data without labeled responses, aiming to
find patterns or groupings.
Answer: Activation functions determine whether a neuron should be activated. They introduce non-
linear properties to the network, enabling it to learn complex data patterns.
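As a minimal sketch (not part of the original answer), two common activation functions written in NumPy:

import numpy as np

def relu(x):
    # ReLU passes positive values through and zeroes out negatives
    return np.maximum(0, x)

def sigmoid(x):
    # Sigmoid squashes any real value into the range (0, 1)
    return 1 / (1 + np.exp(-x))

print(relu(np.array([-2.0, 0.5])))   # [0.  0.5]
print(sigmoid(np.array([0.0])))      # [0.5]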
Answer: It’s a type of ML where an agent learns by interacting with its environment, making
decisions, and receiving feedback (rewards or penalties) to maximize a goal.
Answer: Transfer learning involves reusing a pre-trained model on a new, similar problem, saving
time and resources, especially when data for the new problem is limited.
Answer: Overfitting occurs when a model learns the training data too well, including noise, and
performs poorly on new data. Underfitting happens when a model is too simple, failing to capture
the data’s patterns, resulting in poor performance.
Answer: Regularization involves adding a penalty term to the loss function to prevent overfitting.
Common techniques include L1 (Lasso) and L2 (Ridge) regularization.
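A minimal illustrative sketch of L1 and L2 regularization with scikit-learn (the data here is synthetic and hypothetical):

import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)
X = rng.random((100, 5))
y = X @ np.array([3.0, 0.0, 0.0, 1.5, 0.0]) + 0.1 * rng.standard_normal(100)

ridge = Ridge(alpha=1.0).fit(X, y)   # L2: shrinks all coefficients toward zero
lasso = Lasso(alpha=0.1).fit(X, y)   # L1: can drive some coefficients exactly to zero

print(ridge.coef_)
print(lasso.coef_)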
11. Describe a decision tree.
Answer: A decision tree is a model that splits data into branches based on feature values, leading to
decision nodes, and ultimately a prediction outcome. It’s easy to interpret but can be prone to
overfitting.
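A brief sketch using scikit-learn's built-in Iris dataset; limiting max_depth is one common way to counter the overfitting mentioned above:

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
# A shallow tree is easier to interpret and less prone to overfitting
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(clf.predict(X[:5]))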
Answer: Hyperparameters are configuration settings external to the model that need to be set
before training (like learning rate and number of epochs). They differ from parameters learned by
the model.
Answer: Gradient descent is an optimization algorithm that minimizes the cost function by
iteratively adjusting parameters in the direction that reduces error.
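A worked one-parameter sketch (the function and learning rate are illustrative, not from the original text):

# Minimize f(w) = (w - 3)^2; its gradient is 2 * (w - 3)
w = 0.0
learning_rate = 0.1
for _ in range(100):
    grad = 2 * (w - 3)
    w -= learning_rate * grad   # step in the direction that reduces the error
print(w)   # converges close to 3.0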
Answer: Backpropagation is an algorithm used to train neural networks. It calculates the gradient of
the error with respect to weights, adjusting them to minimize the error.
Answer: CNNs are specialized neural networks for image data, using convolutional layers to
automatically detect patterns in images.
Answer: Proposed by Alan Turing, the Turing Test evaluates if a machine's behavior is
indistinguishable from a human’s.
Answer: AI Winter refers to periods when AI research and funding stagnated due to unmet
expectations.
Answer: It’s a computer system that mimics human expertise in specific fields, using a knowledge
base and inference rules.
Answer: Milestones include the creation of neural networks, the Turing Test, IBM’s Deep Blue beating a
chess grandmaster, and recent advances in deep learning.
21. What is logistic regression?
Answer: Logistic regression is a classification algorithm used to model binary outcomes, predicting
probabilities between 0 and 1.
Answer: k-NN is a non-parametric algorithm that classifies data points based on the closest training
examples in the feature space.
Answer: SVM is a classification algorithm that finds the hyperplane that best separates classes in
high-dimensional space.
Answer: Naïve Bayes is a probabilistic classifier based on Bayes’ theorem, assuming independence
between features.
Answer: K-Means is an unsupervised algorithm that partitions data into k clusters, with each point
assigned to the nearest cluster center.
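A minimal sketch with two synthetic, well-separated blobs (hypothetical data), using scikit-learn:

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.cluster_centers_)   # one center near (0, 0), one near (5, 5)
print(kmeans.labels_[:5])        # cluster assignment of the first five points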
Answer: Random Forest is an ensemble learning method that builds multiple decision trees and
averages their predictions for improved accuracy.
Answer: PCA is a dimensionality reduction technique that transforms data into principal
components, reducing feature space.
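A short sketch reducing the four Iris features to two principal components with scikit-learn:

from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X, _ = load_iris(return_X_y=True)
pca = PCA(n_components=2)                # keep the 2 directions of highest variance
X_reduced = pca.fit_transform(X)
print(X_reduced.shape)                   # (150, 2)
print(pca.explained_variance_ratio_)     # share of variance kept by each component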
Answer: Ensemble methods, like boosting and bagging, combine multiple algorithms to improve
predictive performance and robustness.
Answer: GBM is an ensemble method that builds models sequentially, focusing on correcting errors
of previous models to improve accuracy.
Answer: XGBoost is an optimized gradient boosting algorithm for efficiency, accuracy, and speed,
popular in machine learning competitions.
Answer: Transformers are neural network architectures based on self-attention mechanisms, widely
used in natural language processing tasks.
46. What is Mean Absolute Error (MAE) and Mean Squared Error (MSE)?
Answer: MAE measures average absolute errors, while MSE measures the average squared
differences, with MSE penalizing larger errors more heavily.
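A worked example with small made-up vectors, showing how MSE penalizes the larger errors more heavily than MAE:

import numpy as np

y_true = np.array([3.0, 5.0, 2.5, 7.0])
y_pred = np.array([2.5, 5.0, 4.0, 8.0])

mae = np.mean(np.abs(y_true - y_pred))   # average absolute error -> 0.75
mse = np.mean((y_true - y_pred) ** 2)    # average squared error  -> 0.875
print(mae, mse)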
Answer: The learning rate determines the step size in gradient descent, affecting model
convergence speed and accuracy.
Answer: The bias-variance tradeoff is the balance between a model’s accuracy on training data (low
bias) and its ability to generalize to new data (low variance).
Answer: AI improves crop monitoring, pest detection, yield prediction, and resource management,
increasing agricultural productivity.
Answer: Image classification involves categorizing images into predefined classes, typically by
training models to recognize patterns in labeled data.
Answer: Image segmentation is the process of dividing an image into segments, or regions, to
simplify analysis. Types include semantic, instance, and panoptic segmentation.
101. What are some common datasets for computer vision tasks?
Answer: Popular datasets include ImageNet for classification, COCO for object detection and
segmentation, and CIFAR-10/CIFAR-100 for small image classification tasks.
102. What is YOLO (You Only Look Once)?
Answer: YOLO is a real-time object detection model that divides an image into regions,
classifying and localizing objects in a single pass, making it highly efficient.
103. Explain the concept of transfer learning in computer vision.
Answer: Transfer learning in computer vision uses pre-trained models (e.g., VGG, ResNet) on large
datasets to adapt to new, similar tasks with fewer data.
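A hedged sketch of the idea using Keras (the input size and the five-class head are assumptions for illustration, not from the original answer):

import tensorflow as tf

# Load ResNet50 pre-trained on ImageNet, without its classification head
base = tf.keras.applications.ResNet50(weights="imagenet", include_top=False,
                                      input_shape=(224, 224, 3))
base.trainable = False   # freeze the pre-trained feature extractor

# Add a small new head for the target task (here: 5 hypothetical classes)
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")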
Answer: OCR is a computer vision technique that converts images of text into machine-readable
text, commonly used in document scanning and text recognition.
Answer: Face recognition identifies individuals in images or video by analyzing facial features,
commonly used in security and user authentication.
Answer: Feature extraction identifies relevant attributes or patterns in images, often using
convolutional layers in CNNs to create meaningful representations.
107. What is augmented reality (AR) and how does AI play a role?
Answer: AR overlays digital information on the real world. AI enables accurate tracking, object
recognition, and interactions, enhancing AR experiences.
Answer: Anomaly detection in computer vision identifies deviations from expected patterns, often
used in quality control and defect detection in manufacturing.
Answer: Edge computing processes data closer to its source (the "edge") instead of sending it to
centralized cloud servers, reducing latency and bandwidth needs.
Answer: Edge computing allows AI applications to run closer to users or sensors, enabling real-time
analysis, enhancing privacy, and reducing dependency on internet connectivity.
Answer: Applications include smart surveillance, industrial IoT, autonomous vehicles, remote health
monitoring, and augmented reality.
Answer: Challenges include limited computational power, storage constraints, model optimization
for low power consumption, and security considerations.
Answer: Model quantization reduces model size and computational load by converting floating-
point numbers to lower precision, making AI models more efficient on edge devices.
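A minimal sketch of post-training dynamic quantization in PyTorch (the toy model is hypothetical):

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

# Convert the Linear layers' float32 weights to int8 for inference
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
print(quantized)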
115. How does pruning work in model optimization for edge computing?
Answer: Pruning removes less significant parameters or neurons from a model, reducing size and
computation, which is essential for resource-limited edge devices.
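A small sketch of magnitude-based pruning on a single layer with PyTorch's pruning utilities (the layer sizes are arbitrary):

import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(100, 50)
# Zero out the 30% of weights with the smallest L1 magnitude
prune.l1_unstructured(layer, name="weight", amount=0.3)
print(float((layer.weight == 0).float().mean()))   # roughly 0.3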
Answer: AI ethics encompasses the moral implications of AI, focusing on responsible, fair,
transparent, and accountable AI use in society.
Answer: Transparency helps users understand AI systems’ decisions, fostering trust and
accountability, especially in sensitive applications.
Answer: The black box problem refers to the lack of transparency in complex AI models, making it
hard to interpret how decisions are made.
Answer: Techniques include collecting diverse data, ensuring balanced representation, using fairness
constraints, and monitoring for biases throughout the AI lifecycle.
Answer: Human oversight ensures that AI systems are aligned with human values, can handle edge
cases, and prevent harmful decisions in critical scenarios.
Answer: Considerations include consent, transparency in data usage, anonymization, and fairness to
avoid exploitation or harm to data subjects.
128. How would you approach building an AI system for predictive maintenance in a factory?
Answer: Collect sensor data, apply feature engineering, use time-series models or deep learning for
predictions, and integrate with a real-time monitoring system.
129. If you had to build an AI-powered medical diagnostic tool, what ethical concerns would
you address?
Answer: Address data privacy, model transparency, ensuring fairness across demographics, and the
reliability of predictions to avoid harmful misdiagnoses.
130. How would you design an AI system that ensures fairness across demographic groups?
Answer: Use diverse datasets, monitor fairness metrics, adjust for demographic imbalances, and
continually validate results for fairness post-deployment.
131. If tasked with optimizing a model for low-power devices, what strategies would you
use?
Answer: Consider model quantization, pruning, knowledge distillation, using lightweight
architectures, and tuning for efficient computation.
132. What steps would you take to troubleshoot a computer vision model with low accuracy?
Answer: Analyze data quality, check for data augmentation, evaluate model architecture, tune
hyperparameters, and validate using cross-validation.
133. If tasked with building a recommendation system for a new e-commerce platform, what
approach would you take?
Answer: Start with collaborative filtering and content-based filtering, use user interactions and
feedback, and consider a hybrid approach for more personalized recommendations.
134. How would you assess if an AI model’s predictions are biased?
Answer: Use fairness metrics like demographic parity and equal opportunity, analyze predictions
across groups, and assess feature impact on outcomes.
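A toy sketch of the demographic parity check (the predictions and group labels are made up):

import numpy as np

y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])                  # model's binary predictions
group = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])    # protected attribute

rate_a = y_pred[group == "A"].mean()
rate_b = y_pred[group == "B"].mean()
# Demographic parity compares positive-prediction rates across groups
print(rate_a, rate_b, abs(rate_a - rate_b))   # 0.75 0.25 0.5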
135. What would you do if a deployed AI model performed poorly in production?
Answer: Diagnose data drift, evaluate if the training and production data match, monitor model
performance, and consider retraining or fine-tuning with recent data.
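One simple way to check for drift on a single feature, sketched with a SciPy two-sample Kolmogorov-Smirnov test (the feature values are simulated):

import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_feature = rng.normal(0.0, 1.0, 1000)   # distribution seen at training time
prod_feature = rng.normal(0.5, 1.0, 1000)    # shifted distribution seen in production

stat, p_value = ks_2samp(train_feature, prod_feature)
print(stat, p_value)   # a very small p-value suggests the feature has drifted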
136. How is AI applied in medical imaging?
Answer: AI assists in detecting anomalies, segmenting tissues, and diagnosing conditions from
medical images like X-rays, MRIs, and CT scans, enhancing accuracy and speed.
137. What is predictive policing, and what ethical issues does it raise?
Answer: Predictive policing uses AI to forecast crime hotspots, raising ethical concerns around
fairness, bias, and potential civil rights violations.
138. How does AI support climate change research?
Answer: AI models analyze environmental data, simulate climate impacts, optimize renewable
energy sources, and monitor ecosystems for conservation.
139. What are the uses of AI in agriculture?
Answer: AI aids in crop monitoring, pest control, soil analysis, yield prediction, and resource
management, improving productivity and sustainability.
140. How is AI used in supply chain optimization?
Answer: AI predicts demand, optimizes inventory, and improves logistics, helping reduce waste, cut
costs, and ensure timely deliveries.
Answer: It mandates that AI systems should not harm individuals or society, emphasizing safety,
fairness, and respect for human rights.
Answer: Autonomous weapons raise ethical and safety concerns, including lack of accountability,
potential for misuse, and accidental harm.
154. How do ethical considerations differ between private sector and public sector AI
applications?
Answer: Public sector applications may prioritize fairness and transparency, while private sector AI often balances profit motives with ethical responsibilities.
Answer: Challenges include overfitting, vanishing gradients, requiring large amounts of data, and
computational costs, as well as the need for careful hyperparameter tuning.
Artificial Intelligence is an area of computer science that emphasizes the creation of intelligent machines that work and react like humans.
3) What are the various areas where AI (Artificial Intelligence) can be used?
Artificial Intelligence can be used in many areas such as computing, speech recognition, bioinformatics, humanoid robots, computer software, and space and aeronautics.
Strong AI makes the strong claim that computers can be made to think on a level equal to humans, while weak AI simply holds that some features resembling human intelligence can be incorporated into computers to make them more useful tools.
Statistical AI is more concerned with “inductive” thought: given a set of patterns, induce the trend. Classical AI, on the other hand, is more concerned with “deductive” thought: given a set of constraints, deduce a conclusion.
Artificial Key: If no obvious key, either standalone or compound, is available, the last resort is to simply create a key by assigning a number to each record or occurrence. This is known as an artificial key.
Compound Key: When no single data element uniquely defines an occurrence within a construct, combining multiple elements to create a unique identifier for the construct is known as a compound key.
Natural Key: A natural key is one of the data elements stored within a construct that is used as the primary key.
Anything that perceives its environment through sensors and acts upon that environment through effectors is known as an agent. Agents include robots, programs, and humans.
16) What are the two different kinds of steps that we can take in constructing a plan?
a) Add an operator (action)
21) What is the function of the third component of the planning system?
In a planning system, the function of the third component is to detect when a solution to the problem has been found.
A top-down parser begins by hypothesizing a sentence and successively predicting lower level
constituents until individual pre-terminal symbols are written.
24) Mention the difference between breadth first search and best first search in artificial
intelligence?
These two strategies are quite similar. In best-first search, nodes are expanded in accordance with an evaluation function, while in breadth-first search nodes are expanded in order of their depth, with the shallowest nodes expanded first.
Frames are a variant of semantic networks, which are one of the popular ways of representing non-procedural knowledge in an expert system. A frame, which is an artificial data structure, is used to divide knowledge into substructures by representing “stereotyped situations”. Scripts are similar to frames, except that the values filling the slots must be ordered. Scripts are used in natural language understanding systems to organize a knowledge base in terms of the situations that the system should understand.
26) What does FOPL stand for, and what is its role in Artificial Intelligence?
FOPL stands for First Order Predicate Logic. Predicate Logic provides:
b) An inference system, a deductive apparatus whereby we may draw conclusions from such assertions
b) A set of variables
29) Which search algorithm will use a limited amount of memory in online search?
RBFS and SMA* can solve problems that A* cannot, by using only a limited amount of memory.
30) Where can you use the Bayes rule in ‘Artificial Intelligence’?
In Artificial Intelligence, Bayes' rule can be used to answer probabilistic queries conditioned on one piece of evidence.
31) For building a Bayes model, how many terms are required?
For building a Bayes model in AI, three terms are required: one conditional probability and two unconditional probabilities.
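A worked example with made-up numbers, showing how the one conditional and two unconditional probabilities combine in Bayes' rule:

# Bayes' rule: P(A|B) = P(B|A) * P(A) / P(B)
p_b_given_a = 0.9    # conditional probability
p_a = 0.01           # unconditional (prior) probability of A
p_b = 0.05           # unconditional probability of the evidence B

p_a_given_b = p_b_given_a * p_a / p_b
print(p_a_given_b)   # 0.18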
32) While creating a Bayesian Network, what is the relationship between a node and its predecessors?
While creating a Bayesian Network, the relationship between a node and its predecessors is that a node can be conditionally independent of its predecessors.
33) How can a Bayesian Network be used to answer any query?
If a Bayesian Network is a representation of the joint distribution, then it can answer any query by summing out all the relevant joint entries.
34) What combines inductive methods with the power of first order representations?
Inductive logic programming combines inductive methods with the power of first order
representations.
36) In top-down inductive learning methods, how many literals are available? What are they?
There are three literals available in top-down inductive learning methods; they are:
a) Predicates
c) Arithmetic Literals
37) Which algorithm inverts a complete resolution strategy?
‘Inverse Resolution’ inverts a complete resolution strategy, as it is a complete algorithm for learning first-order theories.
39) In speech recognition, which model gives the probability of each word following each other word?
The bigram model gives the probability of each word following each other word in speech recognition.
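A tiny sketch of bigram probabilities estimated from a made-up word sequence:

from collections import Counter

words = "the cat sat on the mat the cat ran".split()
bigrams = Counter(zip(words, words[1:]))
unigrams = Counter(words[:-1])

# P(next_word | word) = count(word, next_word) / count(word)
p_cat_given_the = bigrams[("the", "cat")] / unigrams["the"]
print(p_cat_given_the)   # 2/3, i.e. about 0.67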
To solve temporal probabilistic reasoning problems, the HMM (Hidden Markov Model) is used, independently of the transition and sensor models.
42) In a Hidden Markov Model, how is the state of the process described?
The state of the process in an HMM is described by a single discrete random variable.
The process of determining the meaning of P*Q from P, Q, and * is known as Compositional Semantics.
b) Validity
c) Satisfiability
49) Which algorithm in ‘Unification and Lifting’ takes two sentences and returns a
unifier?
In ‘Unification and Lifting’, the algorithm that takes two sentences and returns a unifier is the ‘Unify’ algorithm.
50) Which is the most straightforward approach for a planning algorithm?
State space search is the most straightforward approach for a planning algorithm because it takes everything into account when finding a solution.
Advantages: Neural networks (specifically deep NNs) have led to performance breakthroughs on unstructured data such as images, audio, and video. Their flexibility allows them to learn patterns that other ML algorithms struggle to capture.
Disadvantages: However, they require a large amount of training data to converge, it is difficult to pick the right architecture, and the internal "hidden" layers are hard to interpret.