ML Mod-1
Ever since computers were invented, we have wondered whether they might be made to
learn. If we could understand how to program them to learn, that is, to improve automatically
with experience, the impact would be dramatic.
• Imagine computers learning from medical records which treatments are most
effective for new diseases
• Houses learning from experience to optimize energy costs based on the particular
usage patterns of their occupants.
• Personal software assistants learning the evolving interests of their users in
order to highlight especially relevant stories from the online morning
newspaper
A successful understanding of how to make computers learn would open up many new
uses of computers and new levels of competence and customization.
• Some tasks cannot be defined well, except by examples (e.g., recognizing people).
• Relationships and correlations can be hidden within large amounts of data.
Machine Learning/Data Mining may be able to find these relationships.
• Human designers often produce machines that do not work as well as desired
in the environments in which they are used.
• The amount of knowledge available about certain tasks might be too large for
explicit encoding by humans (e.g., medical diagnosis).
• Environments change over time.
New knowledge about tasks is constantly being discovered by humans. It may be difficult to
continuously re-design systems “by hand”.
Examples
1. Checkers game: A computer program that learns to play checkers might improve
its performance as measured by its ability to win at the class of tasks involving
playing checkers games, through experience obtained by playing games against
itself.
The history of Artificial Intelligence is not entirely linear. Throughout the years, there have
been significant discoveries, but also the so-called "AI winters."
In 1623, Wilhelm Schickard invented a device that allowed him to perform arithmetic
operations completely mechanically; he called it the calculating clock.
Its operation was based on rods and gears that mechanized the functions that were previously
performed manually.
In 1822, Babbage developed and partially built the Difference Engine, a mechanical calculator
capable of computing tables of numerical functions by the method of differences. He later
designed the Analytical Engine, intended to run tabulation or computation programs.
Later, Babbage worked with Ada Lovelace, who translated an Italian paper on the Analytical
Engine into English and added extensive notes of her own. Their collaboration would help to
cement the principles of what would become artificial intelligence.
Another important contribution of Lovelace was the concept of a general-purpose machine.
She envisioned a device that, in theory, could be programmed and reprogrammed to perform a
variety of tasks not limited to mathematical calculation, such as processing symbols, words and
even music.
The company's beginnings in the business would lead it to become a leader in software
solutions, hardware, and services that have marked the technological advancement of this era.
The company has managed to adapt to the technological changes in the market to create
innovative solutions over the years.
The Turing machine is a computational model that can simulate the logic of any algorithm.
Its formulation demonstrated that such a machine could carry out any mathematical
computation, provided the computation was representable as an algorithm.
The Turing test is a conversation in which a person exchanges messages with both a computer
and another human without knowing which of the two conversation partners is the machine.
The person asks questions of both, and if they cannot distinguish the human from the machine,
the computer is said to have passed the Turing test.
This important event was the starting point of Artificial Intelligence. McCarthy coined the term
Artificial Intelligence for the first time during this event. Attendees also predicted that within
the next 25 years computers would be doing all the work humans did at that time. In addition,
the Logic Theorist, a program able to solve heuristic search problems, was presented there and
is considered the first Artificial Intelligence program.
This period began after the first attempts to create machine translation systems, which were
used during the Cold War, and ended with the introduction of expert systems, which were
adopted by hundreds of organizations around the world.
Machine translation systems began to be designed in the early 1960s to translate Russian into
English for the Americans. Still, they did not produce the expected results until 1980, when
different algorithms and computational technologies were applied to provide a better experience.
But in 1987, the market collapsed with the dawn of the PC era as this technology overshadowed
the expensive LISP machines. Now Apple and IBM devices could perform more actions than
their predecessors, making them the best choice in the industry.
The creation and research of intelligent agents began in the 1990s. These systems are able to
interpret and process the information they receive from their environment and act on the data
they collect and analyse, and they came to be used in news services, website navigation, online
shopping and more.
The most popular virtual assistant is undoubtedly Siri, created by Apple in 2011. Starting with
the iPhone 4s, this technology was integrated into the devices. It understood what you said and
responded with an action to help you, whether it was searching for something on the internet,
setting an alarm, a reminder or even telling you the weather.
2016: Sophia
Sophia was created in 2016 by David Hanson. This android can hold simple conversations like
virtual assistants, but unlike them, Sophia makes gestures like people and generates knowledge
every time it interacts with a person, subsequently mimicking their actions.
2018: BERT by Google
BERT, designed by Google in 2018, is a Machine Learning technique applied to natural
language processing that aims to better understand the language we use every day. It analyses
all the words used in a search to understand the entire context and return more relevant results
for the user.
It is a system that uses transformers, a neural network architecture that analyses all possible
relationships between words within a sentence.
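As an illustration of that idea (not Google's actual BERT code), here is a minimal NumPy sketch of scaled dot-product self-attention, the core transformer operation that scores every word against every other word in a sentence; the embedding size and random weight matrices are arbitrary stand-ins.

import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                      # toy embeddings for a 4-word sentence
W_q, W_k, W_v = (rng.normal(size=(8, 8)) for _ in range(3))
Q, K, V = X @ W_q, X @ W_k, X @ W_v              # queries, keys, values
scores = Q @ K.T / np.sqrt(K.shape[-1])          # every word scored against every other word
weights = softmax(scores)                        # attention weights (each row sums to 1)
context = weights @ V                            # context-aware representation of each word
print(weights.round(2))

Each row of the weight matrix shows how strongly one word attends to every other word, which is the "all possible relationships" behaviour described above.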
2020: Autonomous AI
The North American firm, Algotive, develops Autonomous Artificial Intelligence algorithms
that enhance video surveillance systems in critical industries.
Its algorithms rely on Machine Learning, the Internet of Things (IoT), and unique video
analytics algorithms to perform specific actions depending on the situation and the
organization's requirements.
Vehicle DRX, its solution for public safety, is an example of how the organization's algorithms
are applied in the field.
2022: Gato by DeepMind
The new AI system created by DeepMind is a single model able to perform more than 600
different tasks, from writing image descriptions to controlling a robotic arm.
It acts as a generalist vision and language model that has been trained to carry out tasks across
different modalities. The system is expected to handle an even larger number of tasks in the
future and to pave the way towards Artificial General Intelligence.
Arthur Samuel is one of the pioneers of computer games and Artificial Intelligence.
In 1952 he began writing the first computer program based on Machine Learning, in which he
was able to give an early demonstration of the fundamental concepts of Artificial Intelligence.
The software was a program that played checkers and could improve its game with each match.
It was able to compete with middle-level players, and Samuel continued to refine the program
until it could compete with high-level players.
The Perceptron was the first machine built specifically for creating neural networks. It was
initially implemented on one of IBM's computers, which was able to execute 40,000
instructions per second.
MENACE was a mechanical computer made of 304 matchboxes, designed and built by Michie
because he did not have access to a computer.
Michie built one of the first programs with the ability to learn to play Tic-Tac-Toe. He named
it the Machine Educable Noughts And Crosses Engine (MENACE).
The machine learned with every game it played, reinforcing the moves that led to wins and
discarding the moves that lost against the human player.
Also known as k-NN, it is one of the most basic and essential classification algorithms in
Machine Learning.
It is a supervised learning classifier that uses the proximity of an individual data point to its
labelled neighbours to classify it, and it is widely used in pattern recognition, data mining, and
intrusion detection.
It solves various problems such as recommender systems, semantic search, and anomaly
detection.
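As a minimal sketch of the idea (using scikit-learn and its bundled Iris dataset purely for illustration), the classifier below labels each test point according to the majority class among its 5 nearest training points:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

knn = KNeighborsClassifier(n_neighbors=5)   # k = 5 nearest neighbours
knn.fit(X_train, y_train)                   # "training" simply stores the labelled points
print("Test accuracy:", knn.score(X_test, y_test))

The choice of k controls the trade-off between noisy decisions (small k) and overly smooth ones (large k).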
Linnainmaa published the reverse mode of automatic differentiation in 1970. This method
later became known as backpropagation and is used to train artificial neural networks.
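A minimal NumPy sketch of the idea (not Linnainmaa's original formulation): gradients of a tiny two-layer network are obtained by applying the chain rule backwards from the loss, which is what backpropagation does. The weights and inputs are arbitrary illustrative values.

import numpy as np

x = np.array([0.5, -1.0])                    # input example
y = 1.0                                      # target output
W1 = np.array([[0.1, 0.4], [-0.3, 0.2]])     # first-layer weights
W2 = np.array([0.7, -0.5])                   # second-layer weights

# forward pass
h = np.tanh(W1 @ x)
y_hat = W2 @ h
loss = 0.5 * (y_hat - y) ** 2

# backward pass: chain rule applied in reverse (backpropagation)
d_y_hat = y_hat - y                          # dL/dy_hat
d_W2 = d_y_hat * h                           # dL/dW2
d_h = d_y_hat * W2                           # dL/dh
d_pre = d_h * (1 - h ** 2)                   # back through the tanh nonlinearity
d_W1 = np.outer(d_pre, x)                    # dL/dW1

# one gradient-descent step using the computed gradients
W1 -= 0.1 * d_W1
W2 -= 0.1 * d_W2
print("loss before update:", float(loss))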
Moravec built the Stanford Cart in 1979. It was a wheeled platform carrying a television
camera that could slide from side to side, letting it capture several viewpoints without the cart
itself having to move.
The Stanford Cart was the first computer-controlled autonomous vehicle capable of avoiding
obstacles in a controlled environment. That year, the vehicle successfully crossed a room full
of chairs in five hours without any human intervention.
DeJong introduced the "Explanation-Based Learning" (EBL) concept in 1981, a Machine
Learning method that forms generalizations or concepts from training examples, allowing it to
discard less important data or data that does not affect the problem at hand.
NETtalk is an artificial neural network created by Terry Sejnowski in 1986. This software
learns to pronounce words in the same way a child would. NETtalk's goal was to build
simplified models of the complexity of learning cognitive tasks at the human level.
The program learns to pronounce written English text by comparing its output with matching
phonetic transcriptions.
Boosting is a Machine Learning meta-algorithm that reduces bias and variance in supervised
learning by converting a set of weak classifiers into a robust classifier.
It combines many models produced by a method with low predictive capability in order to
boost overall performance.
The idea posed by Valiant and Kearns was not satisfactorily solved until 1996, when Freund
and Schapire presented the AdaBoost algorithm, which was a success.
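A hedged sketch of boosting using scikit-learn on synthetic data (the dataset and parameters are illustrative, not taken from the original papers): AdaBoost repeatedly fits weak learners, by default shallow decision stumps, and re-weights the training points that earlier learners misclassified.

from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier

# synthetic stand-in data; each boosting round focuses on previously misclassified points
X, y = make_classification(n_samples=500, n_features=10, random_state=1)

boosted = AdaBoostClassifier(n_estimators=50, random_state=1)  # 50 weak learners combined
boosted.fit(X, y)
print("Training accuracy:", boosted.score(X, y))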
Long Short-Term Memory (LSTM) is a Deep Learning technique based on recurrent neural
network models that can retain information about inputs seen earlier in a sequence, allowing
them to learn from previously processed data.
These models can take in data such as images, words, and sounds, which the algorithms
interpret and store in order to perform actions.
It is a technique that, with its evolution, we have come to use daily in applications and devices
such as Amazon's Alexa, Apple's Siri, Google Translate, and more.
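As an illustrative sketch (using PyTorch; the sizes are arbitrary), an LSTM layer processes a sequence step by step while carrying a hidden state and a cell state that retain information from earlier steps:

import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=8, hidden_size=16, batch_first=True)
x = torch.randn(4, 10, 8)            # 4 sequences, 10 time steps, 8 features each
output, (h_n, c_n) = lstm(x)         # h_n and c_n carry information across the sequence
print(output.shape)                  # torch.Size([4, 10, 16])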
Torch was one of the fastest and most flexible frameworks for Machine and Deep Learning,
adopted by companies such as Facebook, Google, Twitter, NVIDIA, Intel and more.
It was discontinued in 2017 but is still used in existing projects and lives on through its
successor, PyTorch.
Facial recognition was evaluated through 3D facial analysis and high-resolution images.
Several experiments were carried out to recognize individuals and to identify their expressions
and gender using relevance analysis; even identical twins could be told apart thanks to this
analysis.
Netflix created this prize, in which participants had to build Machine Learning algorithms that
recommended content and predicted user ratings for movies, series and documentaries as
accurately as possible.
The winner would receive one million dollars if they could improve the organization's
recommendation algorithm, called Cinematch, by 10%.
Fei-Fei Li created ImageNet, which enabled major advances in Deep Learning and image
recognition, with a database of more than 14 million images.
It is now a quintessential dataset for evaluating image classification, localization and
recognition algorithms.
ImageNet has now created its own competition, ILSVRC, designed to foster the development
and benchmarking of state-of-the-art algorithms.
This platform, Kaggle, has more than 540 thousand active members in 194 countries, and users
can find on it important resources and tools to carry out Data Science projects.
2011: IBM and its Watson system
Watson is a system based on Artificial Intelligence that answers questions formulated in natural
language, developed by IBM.
This tool has a database built from numerous sources such as encyclopedias, articles,
dictionaries, literary works and more, and also consults external sources to increase its response
capacity.
This system beat champions Brad Rutter and Ken Jennings on the TV show Jeopardy!
In 2014, Facebook developed DeepFace, a software algorithm that recognizes individuals in
photos at the same level humans do.
This tool allowed Facebook to identify with 97.25% accuracy the people appearing in each
image, almost matching the functionality of the human eye.
The social network decided to activate face recognition as a way to speed up and facilitate the
tagging of friends in the photos uploaded by its users.
Their work helped to describe the cerebellum's functions and to demonstrate the computational
power that connected elements in a neural network could have. This laid the theoretical
foundation for the artificial neural networks used today.
This model can adjust the weights of a neural network based on the error rate obtained from
previous attempts.
Machine learning (ML) is a type of artificial intelligence (AI) that allows software applications
to become more accurate at predicting outcomes without being explicitly programmed to do
so. Machine learning algorithms use historical data as input to predict new output values.
Recommendation engines are a common use case for machine learning. Other popular uses
include fraud detection, spam filtering, malware threat detection, business process
automation (BPA) and predictive maintenance.
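A minimal sketch of that idea (synthetic data standing in for historical records, scikit-learn for the model; nothing here is tied to a specific product): the model is fit on past, labelled examples and then used to predict output values for new inputs.

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# synthetic stand-in for historical records with known outcomes
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_hist, X_new, y_hist, y_new = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_hist, y_hist)                       # learn from historical data
predictions = model.predict(X_new)              # predict output values for new data
print("Accuracy on held-out data:", model.score(X_new, y_new))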
Reinforcement learning, in particular, is often used in areas such as the following, illustrated
with a small sketch after the list:
• Robotics: Robots can learn to perform tasks in the physical world using this
technique.
• Video gameplay: Reinforcement learning has been used to teach bots to play a
number of video games.
• Resource management: Given finite resources and a defined goal, reinforcement
learning can help enterprises plan out how to allocate resources.
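To make the reinforcement-learning idea concrete, here is a minimal tabular Q-learning sketch on an invented five-state corridor (the environment, rewards, and hyperparameters are purely illustrative): the agent learns by trial and error which action to take in each state to reach the goal.

import random

n_states, actions = 5, [-1, +1]                 # states 0..4, goal at state 4; move left or right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.1           # learning rate, discount, exploration rate

for episode in range(500):
    s = 0
    while s != n_states - 1:
        if random.random() < epsilon:
            a = random.choice(actions)                          # explore
        else:
            a = max(actions, key=lambda act: Q[(s, act)])       # exploit current knowledge
        s_next = min(max(s + a, 0), n_states - 1)
        reward = 1.0 if s_next == n_states - 1 else 0.0
        # Q-learning update: nudge Q towards reward plus discounted best future value
        best_next = max(Q[(s_next, act)] for act in actions)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s_next

# learned policy: the best action in each non-goal state
print({s: max(actions, key=lambda act: Q[(s, act)]) for s in range(n_states - 1)})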
HOW MACHINE LEARNING WORKS
Machine learning is like statistics on steroids.
Who's using machine learning and what's it used for?
Today, machine learning is used in a wide range of applications. Perhaps one of the most well-
known examples of machine learning in action is the recommendation engine that powers
Facebook's news feed.
Facebook uses machine learning to personalize how each member's feed is delivered. If a
member frequently stops to read a particular group's posts, the recommendation engine will
start to show more of that group's activity earlier in the feed.
Behind the scenes, the engine is attempting to reinforce known patterns in the member's online
behaviour. Should the member change patterns and fail to read posts from that group in the
coming weeks, the news feed will adjust accordingly.
In addition to recommendation engines, other uses for machine learning include the following:
• Customer relationship management. CRM software can use machine learning
models to analyse email and prompt sales team members to respond to the most
important messages first. More advanced systems can even recommend potentially
effective responses.
• Business intelligence. BI and analytics vendors use machine learning in their
software to identify potentially important data points, patterns of data points and
anomalies.
• Human resource information systems. HRIS systems can use machine learning
models to filter through applications and identify the best candidates for an open
position.
• Self-driving cars. Machine learning algorithms can even make it possible for
a semi-autonomous car to recognize a partially visible object and alert the driver.
• Virtual assistants. Smart assistants typically combine supervised and
unsupervised machine learning models to interpret natural speech and supply
context.
What are the advantages and disadvantages of machine learning?
Machine learning has seen use cases ranging from predicting customer behavior to forming the
operating system for self-driving cars.
When it comes to advantages, machine learning can help enterprises understand their customers
at a deeper level. By collecting customer data and correlating it with behaviors over time,
machine learning algorithms can learn associations and help teams tailor product development
and marketing initiatives to customer demand.
Some companies use machine learning as a primary driver in their business models. Uber, for
example, uses algorithms to match drivers with riders. Google uses machine learning to surface
the ride advertisements in searches.
But machine learning comes with disadvantages. First and foremost, it can be expensive.
Machine learning projects are typically driven by data scientists, who command high salaries.
These projects also require software infrastructure that can be expensive.
There is also the problem of machine learning bias. Algorithms trained on data sets that exclude
certain populations or contain errors can lead to inaccurate models of the world that, at best,
fail and, at worst, are discriminatory. When an enterprise bases core business processes on
biased models it can run into regulatory and reputational harm.
Step 2: Collect data, format it and label the data if necessary. This step is typically led by data
scientists, with help from data wranglers.
Step 3: Choose which algorithm(s) to use and test them to see how well they perform. This step is
usually carried out by data scientists.
Step 4: Continue to fine-tune outputs until they reach an acceptable level of accuracy. This step
is usually carried out by data scientists with feedback from experts who have a deep
understanding of the problem.
Complex models can produce accurate predictions, but explaining to a lay person how an
output was determined can be difficult.
Machine learning platforms are among enterprise technology's most competitive realms, with
most major vendors, including Amazon, Google, Microsoft, IBM and others, racing to sign
customers up for platform services that cover the spectrum of machine learning activities,
including data collection, data preparation, data classification, model building, training and
application deployment.
Continued research into deep learning and AI is increasingly focused on developing more
general applications. Today's AI models require extensive training in order to produce an
algorithm that is highly optimized to perform one task. But some researchers are exploring
ways to make models more flexible and are seeking techniques that allow a machine to apply
context learned from one task to future, different tasks.
Deep learning works very differently from traditional machine learning.
How has machine learning evolved?
1642 - Blaise Pascal invents a mechanical machine that can add, subtract, multiply and divide.
1834 - Charles Babbage conceives the idea for a general all-purpose device that could be
programmed with punched cards.
1842 - Ada Lovelace describes a sequence of operations for solving mathematical problems
using Charles Babbage's theoretical punch-card machine and becomes the first programmer.
1847 - George Boole creates Boolean logic, a form of algebra in which all values can be
reduced to the binary values of true or false.
1936 - English logician and cryptanalyst Alan Turing proposes a universal machine that could
decipher and execute a set of instructions. His published proof is considered the basis of
computer science.
1952 - Arthur Samuel creates a program to help an IBM computer get better at checkers the
more it plays.
1959 - MADALINE becomes the first artificial neural network applied to a real-world
problem: removing echoes from phone lines.
1985 - Terry Sejnowski's and Charles Rosenberg's artificial neural network taught itself how
to correctly pronounce 20,000 words in one week.
1999 - A CAD prototype intelligent workstation reviewed 22,000 mammograms and detected
cancer 52% more accurately than radiologists did.
2006 - Computer scientist Geoffrey Hinton invents the term deep learning to describe neural
net research.
2014 - A chatbot passes the Turing Test by convincing 33% of human judges that it was a
Ukrainian teen named Eugene Goostman.
2016 - Google DeepMind's AlphaGo defeats the human champion in Go, one of the most
difficult board games in the world.
2016 - LipNet, DeepMind's artificial intelligence system, identifies lip-read words in video
with an accuracy of 93.4%.
2019 - Amazon controls 70% of the market share for virtual assistants in the U.S.
The goal of ML, in simpler words, is to understand the nature of (human and other forms of)
learning and to build learning capability into computers. More specifically, the goals of ML
have three aspects.
Choosing the right hardware to train and operate machine learning programs will greatly impact
the performance and quality of a machine learning model. Most modern companies have
transitioned data storage and compute workloads to cloud services. Many companies operate
hybrid cloud environments, combining cloud and on-premise infrastructure. Others continue
to operate entirely on-premise, usually driven by regulatory requirements.
The processor is a critical consideration in machine learning operations. The processor operates
the computer program to execute arithmetic, logic, and input and output commands. This is the
central nervous system that carries out machine learning model training and predictions. A
faster processor will reduce the time it takes to train a machine learning model and to generate
predictions by as much as 100-fold or more.
There are two primary processors used as part of most AI/ML tasks: central processing units
(CPUs) and graphics processing units (GPUs). CPUs are suitable to train most traditional
machine learning models and are designed to execute complex calculations sequentially. GPUs
are suitable to train deep learning models and visual image-based tasks. These processors
handle multiple, simple calculations in parallel. In general, GPUs are more expensive than
CPUs, so it is worthwhile to evaluate carefully which type of processor is appropriate for a
given machine learning task.
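As a small hedged example (PyTorch shown; other frameworks expose similar checks), a training script can select a GPU when one is available and fall back to the CPU otherwise; the layer and batch sizes are arbitrary.

import torch

# use a GPU if one is available, otherwise fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(128, 10).to(device)       # move the model's parameters to the device
batch = torch.randn(64, 128, device=device)       # allocate the data on the same device
output = model(batch)
print("Running on:", device)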
Other specialized hardware increasingly is used to accelerate training and inference times for
complex, deep learning algorithms, including Google’s tensor processing units (TPUs) and
field-programmable gate arrays (FPGAs).
In addition to processor requirements, memory and storage are other key considerations for the
AI/ML pipeline.
To train or operate a machine learning model, programs require data and code to be stored in
local memory to be executed by the processor. Some models, like deep neural networks, may
require more fast local memory because the underlying models are larger. Others, like decision
trees, may be trained with less memory because the models are smaller.
As it relates to disk storage, cloud storage in a distributed file system typically removes any
storage limitations that were imposed historically by local hard disk size. However, AI/ML
pipelines operating in the cloud still need careful design of both data and model stores.
Many real-world AI/ML use cases involve complex, multi-step pipelines. Each step may
require different libraries and runtimes and may need to execute on specialized hardware
profiles. It is therefore critical to factor in management of libraries, runtimes, and hardware
profiles during algorithm development and ongoing maintenance activities. Design choices can
have a significant impact on both costs and algorithm performance.
Overcoming AI fantasies:
The first is the existing skills gap. The current skills gap is enormous, with far greater demand
than supply for essential AI capabilities. Companies struggle to fill key positions, and even
when they do, new hires require significant training before they are up to speed, especially in
large, complex organizations.
A corollary of this challenge is the need to democratize AI capabilities so they can be accessed
by everyone, not only the elite few with a STEM degree. We need to build an economy powered
by enterprises that give people from all backgrounds direct access to employ and interact with
the technology.
What can you do to help meet this shortfall? Start upskilling and reskilling your top talent.
Locating the right people externally is extremely difficult, so be prepared to grow your talent
from within. Look for personality traits more than specific competencies. Some traits you might
look for include a growth mindset, an interdisciplinary perspective and an eye for both technical
details and big-picture questions. In all likelihood, there are many people in your organization
who already exhibit these characteristics — make use of them.
The second issue is the polarization of skill sets. Unless we bridge this gap, there is a risk that
we are building an economy composed of people with technical skills and those without, and
a huge gap between these two categories. This will make collaboration difficult and limit our
capacity to unlock the benefits of AI.
The solution is to tighten the relationship between technical and nontechnical team members.
Technical people must engage with larger business objectives, while nontechnical people must
gain at least basic AI literacy. The best way to make this happen is to deliberately build an
organizational structure that fosters communication and collaboration between these two
different types of people. Additionally, there is a place for data translators, with a specific brief
to act as intermediaries between these two groups.
The third primary obstacle to AI adoption is the gap in executive knowledge. Few CEOs and
other top executives truly understand the potential or workings of AI. This is a problem because
AI adoption is a top-down initiative. The impetus must come from top management. This
applies not only to funding but also culturally. Executives must prioritize effective business
processes and change management. Without strong leadership and support from the top, AI
adoption will not be successful.
You can lead the way in this area by demonstrating that you use data to make decisions. Get in
the habit of asking for the data, and let the people around you see that you use it to make better
decisions — not as a replacement for your own discernment, but to inform your judgment.
Additionally, take an integrated approach to your company’s data strategy. As data plays a
bigger role in our daily lives, we face new questions about data availability and acquisition,
security and governance. Ensure that you are up to date and engaged on these issues.
Finally, keep learning. You should be broadly familiar with the current capabilities of AI. You
should also have a sense of where short-term advances are likely to occur, and a perspective
on what to expect in the medium-to-long term.
There is no doubt that the fourth Industrial Revolution will only make the adoption and use of
AI more important. The most successful companies in the world already make heavy use of
this technology, to great effect. If you don't want to risk trailing in the wake of a digital world,
it’s important to engage proactively with AI now.
The relationship between AI and ML is one of interconnection rather than one versus the other.
While they are not the same, machine learning is considered a subset of AI. They work together
to make computers smarter and more effective at producing solutions.
AI uses machine learning in addition to other techniques. Additionally, machine learning
studies patterns in data which data scientists later use to improve AI. The combination of AI
and ML includes benefits such as obtaining more sources of data input, increased operational
efficiency, and better, faster decision-making.
AI and ML are beneficial to a vast array of companies in many industries. Retail, banking and
finance, healthcare, sales and marketing, cybersecurity, customer service, transportation, and
manufacturing use artificial intelligence and machine learning to improve profitability, work
processes, and customer satisfaction. Additionally, ML can help predict many natural disasters,
like hurricanes, earthquakes, and flash floods, as well as human-made disasters, including oil
spills.
Real-world AI and ML Application
How do we take machine learning and AI and use them to help save Earth—particularly nature?
That is where science and creativity come into play. Humans must be able to interpret the
collected data from AI and ML to make decisions and find solutions to world problems. Here
are some steps we can take:
1. Comprehend data
In a hyper-connected world, we are surrounded by data. As it gets harder every day to
understand the information we are receiving, our first step is learning to gather relevant data
and—more importantly—to understand it. Being able to comprehend data collected by AI and
ML is crucial to reducing environmental impacts.
2. Make predictions
You can make predictions through supervised learning and data classification. Neural networks
in machine learning—or a series of algorithms that endeavors to recognize underlying
relationships in a set of data— facilitate this process. Making educated guesses using collected
data can contribute to a more sustainable planet.
3. Make informed decisions
You can make effective decisions by eliminating spaces of uncertainty and arbitrariness
through data analysis derived from AI and ML. Making informed decisions is different from
making guesses. The decision is backed by trusted data from AI.
4. Deploy causal inference
You can infer relevant conclusions to drive strategy by correctly applying and evaluating
observed experiences using machine learning.
AI and ML are essential to not only combatting climate change, but also to helping most
industries achieve their goals and obtain success.
Depending on the size of your organization, the type(s) of product, your development lifecycle
and your organization’s practices, you will have your own way of defining product
specifications. Whichever mechanisms you follow, here are some key adjustments and insights
to include on your specifications to build effective machine learning and AI products:
Stakeholders
It takes a village
To build effective machine learning products, most often you will have several stakeholders
from multiple disciplines, including data scientists (of various kinds), engineers, designers,
domain experts, product marketing and more. Ensure all of them are involved from the
beginning of the product specification and are partnering with you for the success of your
product.
Goals/Features
The tech industry has been going through phases of evolution rapidly in the past few decades
and every few years there is a technology that is revolutionizing how we think about solving
various problems. Within the last decade, machine learning and artificial intelligence have seen
their all-time high with almost every product either claiming or wanting to move in the direction
of using ML/AI. This has pushed various product managers to almost feel compelled to design
for ‘machine learning’ or ‘AI’ solutions. As product managers, it is almost our duty to
remember: technology will evolve and change but solving the user problem will never go out of
fashion. That is the key objective to optimize for while the solution could be anything for that
matter. Keeping in mind that the product is only successful if it solves the user problem, not
whether it uses ML or AI will help ensure your product goals/features are aligned to the right
objectives.
Product managers, especially those new to the space of machine learning and artificial
intelligence, are enthused about various new algorithms and techniques potentially being used in their
product. Most of these new techniques are part of day to day conversations and it can be hard
for PMs to draw the line between product vs technical specification when it comes to machine
learning or AI techniques. In your specifications, ensure that the user problem and intended
product solution are clear, avoiding references to particular machine learning or AI techniques.
This will give freedom for your technical teams to experiment and find the best solutions without
being constrained. This will also allow your specifications to remain relevant as techniques
evolve, which they do pretty quickly!
Product Targets/KPIs
Most product specifications include setting certain targets that the product aims to achieve. If
your product is based on ML/AI, it is important to establish alignment in your product and
underlying model targets. This will ensure that the technical implementations are optimizing for
the same KPIs and there is a way to evaluate the product end to end. In my previous article, I
wrote about how you can set targets for your models.
If your product is unique and new in a space where it might not be obvious to set particular
numeric targets at the start, it is okay to begin by defining what the success criteria will be and
to let the data from your product drive your targets afterward. It is better to be data-driven and
late than early and completely off the mark.
A lot of times, products based on machine learning models do not consider the model evaluation
in real-world scenarios from the get-go. Not thinking through these scenarios in the beginning
can hurt you later because:
• Even if you don't need an evaluation from the start, your product might be
discarding data points that are required to do evaluation later, so thinking through
these scenarios in the beginning will help you avoid this.
• Until you have a mechanism to do evaluation, you are flying blind and this can
cause problems very quickly especially in AI based products which learn and adapt
in real time
• In many high-impact industries, your users will look for evaluation in order to
trust your product and consider it usable. So while your product's outputs themselves
can get you user activation, their validation and evaluation will help you with user
adoption and engagement.
Constraints
The most common mistake when specifying a new feature or functionality of a machine learning
product is failing to evaluate and dive deep into the challenges of the data your product has to
work with. Identifying data limitations upfront helps you paint a realistic path for your product
and avoid several iterations of your product spec later.