Beginning with Deep Learning Using TensorFlow: A Beginners Guide to TensorFlow and Keras for Practicing Deep Learning Principles and Applications
About this ebook
TensorFlow is the industry-standard library for deep learning, and it is covered extensively here in both versions, 1.x and 2.x. As neural networks are the heart of deep learning, the book explains them in great detail and in a systematic fashion, beginning with a single neuron and progressing through deep multilayer neural networks. The emphasis of this book is on the practical application of key concepts rather than exhaustive theory.
After establishing a firm basis in TensorFlow and Neural Networks, the book explains the concepts of image recognition using Convolutional Neural Networks (CNN), followed by speech recognition, and natural language processing (NLP). Additionally, this book discusses Transformers, the most recent advancement in NLP.
Beginning with Deep Learning Using TensorFlow - Mohan Kumar Silaparasetty
CHAPTER 1
Introduction to Artificial Intelligence
In this first chapter, we will introduce artificial intelligence, deep learning, and TensorFlow. Although readers of this book are expected to have some prior knowledge and experience, this chapter lays the groundwork for a smooth transition into the upcoming chapters. It also helps ensure that the reader and the author are on the same page, especially with respect to terminology, coding conventions, and so on.
Structure
This chapter will cover the following topics:
Brief history of AI
Why now?
Applications of AI
Industry examples of AI applications
Objective
By the end of this chapter, you will have learned about artificial intelligence and its applications in various industries. You will also gain an understanding of the evolution of AI and the relationship between artificial intelligence (AI), machine learning (ML), and deep learning (DL).
Brief history of artificial intelligence
Today, we are on the verge of a huge technological revolution. Before we talk about AI, let us take a look at the multitude of emerging technologies (Figure 1.1). The convergence of these technologies along with AI will completely change the world in the next few decades.
Figure 1.1: Emerging technologies
According to leading analysts, the following technologies will mature over the next 5–10 years:
5G
Blockchain
3D printing
IoT
These technologies, along with AI, are bound to have an immense impact on humankind: the way we work, live, and even fight. Yes, warfare will also be completely different and unimaginable. AI will, of course, play the biggest role in all of it.
So, what is AI, and how is it going to change our lives? AI is the intelligence demonstrated by machines. A formal definition of AI, as per Wikipedia:
In computer science, AI, sometimes called machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans.
Like it or not, AI is already here, although at a very basic level. We are all using AI, knowingly or unknowingly. Here are some examples:
Chatbots
Robots
Smart speakers (Alexa)
Virtual assistants
Recommendation engines
Drones
Self-driving cars or autonomous vehicles
Figure 1.2: AI is here (Source: Google)
For all the buzz around AI over the past few years, this is not a new concept. The term Artificial Intelligence was first coined by Prof. John McCarthy at a conference at Dartmouth College in 1956.
And even before that, there was a semblance of a humanoid robot in Greek mythology. Talos was a giant automaton made of bronze to protect the mythological character Europa, after whom Europe was named. Talos circled the island's shores three times daily to protect Europa from pirates and invaders.
And then, there was the Turing test developed by Alan Turing in 1950.
According to Wikipedia, "Turing Test is a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human."
This is essentially a test of natural language processing, as it considers only conversation, in text-only mode.
In 1966, Joseph Weizenbaum, a professor at MIT, created a program called ELIZA, which can be considered the first chatbot. ELIZA appeared to pass the Turing test. The program worked by identifying keywords: if a keyword was found, a rule that transformed the user's input was applied, and the resulting sentence was returned.
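A minimal sketch of this keyword-and-rule idea in Python (a toy illustration, not ELIZA's actual script; the keywords and response templates here are made up):

```python
import re

# Each rule pairs a keyword pattern with a response template.
# If a keyword is found, the rule transforms the user's input
# and the resulting sentence is returned.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Tell me more about feeling {0}."),
    (re.compile(r"\bmother\b", re.IGNORECASE), "Tell me more about your family."),
]

def respond(user_input: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            # Fill the template with the captured text, if any
            return template.format(*match.groups())
    return "Please go on."  # default when no keyword matches

print(respond("I am sad"))  # Why do you say you are sad?
print(respond("hello"))     # Please go on.
```

The conversational effect comes entirely from pattern matching; there is no understanding anywhere in the loop.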
A few years later, in 1972, another program named PARRY was created by Kenneth Colby, a pioneer in the study of cognitive processes. It was like an enhanced version of ELIZA: it could simulate a person with paranoid schizophrenia.
Back to the Dartmouth conference: there was a lot of excitement after it, and the US government even granted funds for research on AI, but the field could not take off. After some years of research, the work was put on the back burner, a period referred to as the AI winter.
One of the reasons was the lack of computing power. Interest in AI was renewed in the 90s. With the advent of cloud computing and cheap hardware, the availability of computing power increased tremendously. In addition, the availability of large amounts of data, and the ability to handle them with technologies like Hadoop, further strengthened AI research. It was as if all the pieces of the puzzle were falling into place.
One of the highly publicized events of the resurrection of AI was IBM Watson. In 2011, IBM Watson was pitted against the reigning human champions of the game show Jeopardy!, and it won. However, the term AI had not yet regained the popularity it enjoys today, so Watson was simply called a supercomputer. But the natural language processing technique it used is a significant component of artificial intelligence. And today, Watson is, of course, called AI.
It has to be noted that this was not the first time a machine beat a human being at a game. Way back in 1997, a supercomputer named Deep Blue defeated the reigning world chess champion, Garry Kasparov. Deep Blue, too, was developed by IBM.
Figure 1.3: IBM Deep Blue (Source: www.stream-live-tvchannel.top/ProductDetail.aspx?iid=88929267&pr=40.88)
However, the difference is that Deep Blue was just a supercomputer with the ability to evaluate 200 million positions per second. It was brute force rather than learning. Hence, it was not AI.
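This kind of brute-force game search can be sketched with a minimal minimax example in Python (a toy two-ply game tree; nothing like Deep Blue's actual chess engine, which added enormous search depth and hand-tuned evaluation):

```python
def minimax(node, maximizing):
    """Exhaustively evaluate a game tree: leaves are position
    scores, internal nodes are lists of child subtrees."""
    if isinstance(node, int):  # leaf: a position's evaluation
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# A tiny two-ply game tree: the maximizing player chooses a branch,
# then the minimizing opponent replies with the worst option for us.
tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(tree, maximizing=True))  # 3
```

Every position is evaluated, none are learned, which is exactly why Deep Blue's feat was computation rather than intelligence.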
Then, in March 2016, AlphaGo, an AI developed by the startup DeepMind, defeated Lee Sedol, one of the world's strongest Go players, 4 games to 1. This was truly a watershed moment in AI, and even the AI experts were shocked. Not because they did not believe AI could achieve this feat, but because it had been done so fast.
DeepMind was founded in 2010, and experts believed it would take a good decade, if not more, for a machine to master Go. They were shocked when AlphaGo achieved the feat far sooner. Google had, in fact, acquired DeepMind back in 2014:
Figure 1.4: DeepMind's AlphaGo (Source: https://247newsupdate.com/2016/03/12/computer-program-beats-human-master-in-go-game/)
This was a significant achievement at multiple levels. For one, Go, unlike chess, is a game of intuition rather than rules. And the number of legal board positions in Go is astronomically large, as shown in Figure 1.5:
Figure 1.5: Number of legal positions in Go
And to put this in context, this number (roughly 2 × 10^170) is more than the number of atoms in the observable universe. Yes, the universe, not just the Earth.
So, there is no way anyone can write an exhaustive program for this. Then how did DeepMind achieve it? They used a technique called reinforcement learning, combined with deep neural networks. In this technique, the system learns by playing the game over and over, perhaps tens of millions of times, each time learning from its mistakes and improving itself.
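To make the learning-by-repeated-play idea concrete, here is a minimal Q-learning sketch in Python on a toy "walk right to win" game (purely illustrative; AlphaGo's actual training combined deep networks with self-play at vastly greater scale):

```python
import random

random.seed(0)

# Toy game: states 0..4 on a line, start at 2. Reaching state 4 wins
# (reward 1); reaching state 0 ends the episode with no reward.
N_STATES, GOAL = 5, 4
Q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action]; 0=left, 1=right
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(500):                 # play the game many times
    state = 2
    while state not in (0, GOAL):
        # epsilon-greedy: mostly exploit current knowledge, sometimes explore
        if random.random() < epsilon:
            action = random.randrange(2)
        else:
            action = 0 if Q[state][0] > Q[state][1] else 1
        next_state = state - 1 if action == 0 else state + 1
        reward = 1.0 if next_state == GOAL else 0.0
        # Q-learning update: learn from each observed outcome
        Q[state][action] += alpha * (
            reward + gamma * max(Q[next_state]) - Q[state][action]
        )
        state = next_state

# After training, the greedy policy moves right from every interior state
print([0 if Q[s][0] > Q[s][1] else 1 for s in range(1, GOAL)])  # [1, 1, 1]
```

Nobody programmed the rule "always go right"; the agent discovered it by repeatedly playing and updating its value estimates, which is the essence of reinforcement learning.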
This was probably the first time a machine had, in a way, shown signs of intuition!
Another significant incident in the world of AI took place in October 2017: Saudi Arabia conferred citizenship on a humanoid robot called Sophia, developed by Hanson Robotics. Experts are divided over Sophia's capabilities, with some even calling it fake; this just shows the amount of interest in the field of AI.
Figure 1.6: Sophia (https://simple.wikipedia.org/wiki/Sophia_(robot))
Classification of AI
AI is classified into three categories, as shown in Figure 1.7.
Figure 1.7: Types of AI
Artificial Narrow Intelligence (ANI), also referred to as an expert system, is a system that can perform a single task, but much better than human beings. Today's AI, such as AlphaGo and autonomous vehicles, falls into this category.
Artificial Super Intelligence (ASI), often associated with the idea of the singularity, is a system that can perform multiple tasks, all of them significantly better than human beings. This is the holy grail of AI and is probably at least 30–40 years away, if not more.
Artificial General Intelligence (AGI) is an intermediate stage, where the AI can perform multiple tasks and is better than humans at one or two of them. In spite of all the buzz around AI today, we are still at the ANI stage. We have barely scratched the surface.
However, the move toward ASI, or the singularity, will be relatively fast. Even if it takes 50 years from now, that is remarkably fast when viewed within the time frame of human civilization. This is often depicted in the form of the following diagram:
Figure 1.8: Singularity
This concept is explained brilliantly by Tim Urban in his blog post titled The AI Revolution: The Road to Superintelligence, available at the following link:
https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html
Let us take a closer look at AI. AI is aimed at developing intelligence in machines along the lines of the human brain. Human intelligence, in a simplified view, can be described as decisions taken based on the inputs received from our five senses: sight, hearing, touch, smell, and taste.
The actions taken are in the form of speech or movement. Today's AI is capable of sight, hearing, and speech. Two main branches of AI today are image recognition, which corresponds to sight, and speech recognition, which corresponds to hearing and speech.
These will be discussed in great detail in the upcoming chapters. AI is a broad concept, and the underlying techniques are known as machine learning and deep learning. That is why these terms are very often used together or interchangeably, as in AI-ML or AI-DL.
The following figure depicts the relation between AI, ML, and DL:
Figure 1.9: AI, ML and DL
The secret sauce of AI is neural networks, which are used in deep learning. And deep learning is a subset of machine learning.
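As a small preview of what later chapters build up from, a single artificial neuron can be sketched in a few lines of plain Python (illustrative weights and inputs; the book's actual examples use TensorFlow):

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: the weighted sum of inputs plus a
    bias, passed through a sigmoid activation function."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid squashes z into (0, 1)

print(neuron([1.0, 0.5], [0.4, -0.2], 0.1))  # ~0.5987
```

Deep learning stacks many such units into layers and learns the weights from data rather than setting them by hand.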
We will take a look at machine learning in the next chapter.
How did we reach here?
Today, AI is part of the field called data science. Data science spans the entire process from data capture all the way to taking action based on the data.
Figure 1.10: Data Science
It all began with the information technology revolution, which started in the 1990s, when we started creating and storing data. Initially, software was used for office automation, such as word processing and spreadsheets, where the data was stored in the form of files. Then came the enterprise automation era, where ERP applications were developed to capture orders, manage production, sales, distribution, and so on. This was the beginning of generating larger amounts of data. Around this time, the concept of the RDBMS was born.

These are transactional systems, known as online transaction processing (OLTP) systems. They are very good at capturing data. However, as the data grows, performance deteriorates, and that is what was happening with most OLTP systems after a few years. To maintain the performance of these systems, older data was backed up from the live databases and stored on tapes for later retrieval. This made it difficult to run an analysis on the entire data set. This gave birth to data warehouses, also known as online analytical processing (OLAP) systems.
Figure 1.11: Business Intelligence (BI)
These systems had the ability to store large amounts of data from different sources, and they are much faster for querying. Data is transferred from the OLTP systems to the OLAP systems periodically, typically at the end of the day. This kept the OLTP systems lean while allowing you to report on and analyze the historical data in the OLAP system. This was the beginning of business analytics.
Data kept growing tremendously over the next few years. This was primarily structured data. With the rise of search engines like Yahoo and Google in the late 90s, and then social media, there was an explosion in unstructured data: text, video, and audio.
Figure 1.12: Internet minute
This was the advent of Big Data. Traditional databases and warehouses were unable to handle it, which led to the Big Data technology Hadoop.
Figure 1.13: Hadoop ecosystem (Source: Internet)
Hadoop was developed in the early 2000s and has matured over the years, enabling us to handle data at terabyte scale with ease, including unstructured data. Meanwhile, data kept growing at an exponential rate, and the availability of large amounts of data helped revive machine learning, deep learning, and AI.
This is like all the pieces of the AI puzzle falling into place. Availability of