
REVERSE VENDING FOR SORTING PET BOTTLES

ABSTRACT
A reverse vending machine is a concept that inculcates the habit of recycling waste materials: it accepts recyclable waste and returns a useful item as a token of appreciation. The aim of this project is to design and fabricate a reverse vending machine which takes recyclable waste and displays a token of appreciation. The machine can accept plastic bottles of up to 90 mm diameter (without caps) as well as tin cans, which are crushed and stored; it has a capacity of 50 plastic bottles and 50 tin cans. The machine basically has two parts, a mechanical part and an electronics part. The mechanical part crushes the recyclable waste placed in the machine so that more plastic and cans can be recycled and stored. The electronics part, which consists of a sensor and a microcontroller, validates the input, segregates the waste into its respective category, and shows a token of appreciation on an LCD display. The whole system is automated with the help of electronics, and combining both parts gives the reverse vending machine. With limited resources in the world, we need to start preserving them and put an end to wastage, and a rewards system encourages people to recycle. To encourage the recycling process, we are designing and manufacturing a reverse vending machine.

This project uses a regulated 5V, 500mA power supply, and an unregulated 12V DC supply for the relay. A 7805 three-terminal voltage regulator is used for voltage regulation, and a bridge-type full-wave rectifier rectifies the AC output of the secondary of a 230/12V step-down transformer.
CHAPTER 1

INTRODUCTION TO EMBEDDED SYSTEMS

1.1 INTRODUCTION:

Microcontrollers are widely used in embedded system products.


An embedded product uses a microprocessor (or microcontroller) to do one task and one task only. A printer is an example of an embedded system, since the processor inside it performs only one task, namely getting the data and printing it. Although the microcontroller is the preferred choice for many embedded systems, there are times when a microcontroller is inadequate for the task. For this reason, in recent years many manufacturers of general-purpose microprocessors, such as Intel, Motorola, AMD and Cyrix, have targeted their microprocessors at the high end of the embedded market. One of the most critical needs of an embedded system is to decrease power consumption and space. This can be achieved by integrating more functions into the CPU chip. All embedded processors have low power consumption in addition to some form of I/O and ROM, all on a single chip. In higher-performance embedded systems, the trend is to integrate more and more functions on the CPU chip and let the designer decide which features he or she wants to use.

1.2 EMBEDDED SYSTEM:

Physically, embedded systems range from portable devices, such as digital watches and MP3 players, to large stationary installations like traffic lights, factory controllers, or the systems controlling nuclear power plants. Complexity varies from low, with a single microcontroller chip, to very high, with multiple units, peripherals and networks mounted inside a large chassis or enclosure.

In general, "embedded system" is not an exactly defined term, as


many systems have some element of programmability. For example,
Handheld computers share some elements with embedded systems such as
the operating systems and microprocessors which power them but are not
truly embedded systems, because they allow different applications to be
loaded and peripherals to be connected. Embedded systems span all aspects
of modern life and there are many examples of their use.
Telecommunications systems employ numerous embedded systems from
telephone switches for the network to mobile phones at the end-user.
Computer networking uses dedicated routers and network bridges to route
data.

EXAMPLES OF EMBEDDED SYSTEM:

 Automated teller machines (ATMs).

 Integrated systems in aircraft and missiles.

 Cellular telephones and telephone switches.

 Computer network equipment, including routers, time servers and firewalls.

 Computer printers and copiers.

 Disk drives (floppy disk drives and hard disk drives).

 Engine controllers and antilock brake controllers for automobiles.

 Home automation products like thermostats, air conditioners, sprinklers and security monitoring systems.

 Household appliances, including microwave ovens, washing machines, TV sets and DVD players/recorders.

 Medical equipment.

 Measurement equipment such as digital storage oscilloscopes, logic analyzers and spectrum analyzers.

 Multimedia appliances: internet radio receivers, TV set-top boxes.

 Small handheld computers with PIM (personal information management) and other applications.

 Programmable logic controllers (PLCs) for industrial automation and monitoring.

 Stationary video game consoles.

1.3 CHARACTERISTICS:

Embedded systems are designed to do some specific tasks, rather


than be a general-purpose computer for multiple tasks. Some also have
real-time performance constraints that must be met, for reasons such as
safety and usability; others may have low or no performance requirements,
allowing the system hardware to be simplified to reduce costs.

Embedded systems are not always standalone devices. Many


embedded systems consist of small, computerized parts within a larger
device that serves a more general purpose. For example, the Gibson Robot
Guitar features an embedded system for tuning the strings, but the overall
purpose of the Robot Guitar is, of course, to play music. Similarly, an
embedded system in an automobile provides a specific function as a
subsystem of the car itself.

The software written for embedded systems is often called firmware, and is usually stored in read-only memory or flash memory chips rather than on a disk drive. It often runs with limited computer hardware resources: little memory, and a small or non-existent keyboard and screen.

1.4 MICROPROCESSOR (MP):

A microprocessor is a general-purpose digital computer central processing unit (CPU). Although popularly known as a "computer on a chip", it is in no sense a complete digital computer. The block diagram of a microprocessor CPU is shown below; it contains an arithmetic and logic unit (ALU), a program counter (PC), a stack pointer (SP), some working registers, a clock timing circuit, and interrupt circuits.

Fig 1.1 Block diagram of a microprocessor

1.5 MICROCONTROLLER (MC):

Figure 1.2 shows the block diagram of a typical microcontroller. The design incorporates all of the features found in a microprocessor CPU: ALU, PC, SP, and registers. It also adds the other features needed to make a complete computer: ROM, RAM, parallel I/O, serial I/O, counters, and a clock circuit.

Fig 1.2 Microcontroller


1.6 COMPARISON BETWEEN MICROPROCESSOR AND MICROCONTROLLER

The microprocessor must have many additional parts to be operational as a computer, whereas the microcontroller requires no additional external digital parts.

1. The prime use of a microprocessor is to read data, perform extensive calculations on that data, and store the results in a mass storage device or display them. The prime functions of a microcontroller are to read data, perform limited calculations on it, and control its environment based on these data. Thus, the microprocessor is said to be a general-purpose digital computer, whereas the microcontroller is intended to be a special-purpose digital controller.

2. A microprocessor needs many opcodes for moving data from external memory to the CPU, whereas a microcontroller may require just one or two; also, a microprocessor may have only one or two types of bit-handling instructions, whereas microcontrollers have many.

PERIPHERALS:

Embedded systems talk with the outside world via peripherals, such as:

 Serial communication interfaces (SCI): RS-232, RS-422, RS-485, etc.

 Synchronous serial communication interfaces: I2C, JTAG, SPI, SSC and ESSI

 Universal Serial Bus (USB)

 Networks: Ethernet, Controller Area Network (CAN), LAN, etc.

 Timers: PLL(s), capture/compare and time processing units

 Discrete I/O: also known as General Purpose Input/Output (GPIO)

 Analog to Digital / Digital to Analog converters (ADC/DAC)

TOOLS:

As for other software, embedded system designers use compilers,


assemblers, and debuggers to develop embedded system software.
However, they may also use some more specific tools:

 Utilities to add a checksum or CRC to a program, so the embedded system can check if the program is valid (a short Python sketch of this idea follows this list).
 For systems using digital signal processing, developers may use a
math workbench such as MATLAB, Simulink, Mathcad, or
Mathematica to simulate the mathematics. They might also use
libraries for both the host and target which eliminates developing
DSP routines as done in DSP nano RTOS and Unison Operating
System.
 Custom compilers and linkers may be used to improve optimization
for the particular hardware.
 An embedded system may have its own special language or design
tool, or add enhancements to an existing language such as Forth or
Basic.
 Another alternative is to add a Real-time operating system or
Embedded operating system, which may have DSP capabilities like
DSP Nano RTOS.
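As a small illustration of the checksum idea mentioned in the first item above, the following hedged Python sketch appends a CRC-32 to a firmware image and verifies it before use. The payload bytes and the trailing 4-byte little-endian layout are illustrative assumptions, not the convention of any particular toolchain.

import struct
import zlib

def append_crc(image: bytes) -> bytes:
    """Append a little-endian CRC-32 of the image to its end."""
    crc = zlib.crc32(image) & 0xFFFFFFFF
    return image + struct.pack("<I", crc)

def verify_crc(image_with_crc: bytes) -> bool:
    """Recompute the CRC over the payload and compare it with the stored value."""
    payload, stored = image_with_crc[:-4], image_with_crc[-4:]
    return zlib.crc32(payload) & 0xFFFFFFFF == struct.unpack("<I", stored)[0]

if __name__ == "__main__":
    # Hypothetical firmware payload; a real build would read e.g. a .bin file.
    firmware = b"\x01\x02\x03\x04embedded-program"
    protected = append_crc(firmware)
    print("Valid image:", verify_crc(protected))                   # True
    print("Corrupted image:", verify_crc(b"\x00" + protected[1:])) # False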
CHAPTER 2

OVERVIEW OF THE PROJECT

2.1 INTRODUCTION

This chapter presents the preliminary study, which constitutes the first stage of our project, titled Reverse Vending Machine. First, we establish the business objectives that we aim to fulfil by capturing the project's goals. Next, we introduce the project's context, the identification and description of the data sources, and the system architecture.

In this thesis, a machine vision system based on multiple cameras has been
developed for a reverse vending machine prototype. The multi-camera
system enables high return speed and simplifies the mechanical structure of
the reverse vending machine. With the camera-based system, various
additional visual features, such as deposit and security markings, can be
extracted from the captured images for verification unlike with traditional
laser-based barcode scanners. Furthermore, with no moving parts, the system is virtually maintenance-free. The machine vision system developed in this thesis has been part of a larger Tekes-funded New Knowledge and Business from Research Ideas [10] project. The project focused on developing a fast, low-cost, easily maintainable and reliable reverse vending machine. The developed system consists of six Raspberry Pi-based cameras placed on a perimeter around the beverage container return chute; they image the outer surface of a beverage container as it slides past the cameras, and the barcode is extracted from the images. Ordinary PC hardware is used for the image processing, together with the software developed as a part of this study.

With the developed multi-camera system, the beverage container barcode


can be extracted from the camera images without rotating the container.
Such a solution simplifies the mechanics of the reverse vending machine by
removing the rotating mechanism, thus increasing the reliability and
maintainability. With the camera-based system, the beverage containers can
also be fed into the reverse vending machine either top or bottom first, and
since the barcode can be extracted without rotating the container, the
returning process is less time consuming for the user than with conventional
reverse vending machines. In the developed system, the beverage container
identification relies on checking whether the extracted barcode exists in a
database of refundable barcodes. In addition, the database contains the
requisite information about the beverage containers, such as the refund, the
material of the container, the maximum allowed weight and the dimensions
of the container. In the future versions, an additional seventh camera will be
used for capturing an image of the whole container to verify its shape and
dimensions to minimize the possibility of tricking the reverse vending
machine, e.g., with invalid objects that have a valid barcode attached onto
them.

2.2 EXISTING SYSTEM

In order to respond to the problem of waste management, the idea of creating an intelligent trash bin was conceived. This project offers a credible and hopeful solution to today's sustainable development goals. The main problem was the lack of sorting, which means that the waste is either burned or piled up in a landfill, so the main task of this machine is to capture the waste and classify it according to its type: plastic, glass, etc. Sorting waste and recycling it are crucial issues of our century, so it is important that sorting becomes a daily gesture. Within our smart reverse vending machines, we use a smart camera which recognizes the material. When an object is put in the receiver unit of the machine, the electronic recognition system evaluates the object and compares it with models in the system database; if it is classified as recyclable, the machine accepts the object and stores it separately depending on the material. By compressing plastic and aluminium waste, the machine provides more space for the waste material.

2.3 PROPOSED SYSTEM

This project proposes the development of a bottle-recycling machine with a reward system. The machine gives an instant reward to the user and provides a receipt with the user's matric number and the quantity of recycled empty bottles. It is also intended to boost recycling activities among students. In addition, the project has several objectives, as follows:

 Leasing machines to organizations: this creates a great buzz every month and is an efficient way for local companies to avoid investing directly. By offering rentals, a machine can pay for itself within a couple of months, and in the long term this is more profitable than a resale model.

 A managed-program model could be established by the distributor within various business scenarios; running self-owned machines has great potential for continuous profit generation.

 The CO-OP model mainly aims to work with recycling companies and waste collection companies. Distributors can establish an operating model and partnership with them. The CO-OP model assists the distributor in buying machines at lower cost by receiving investment from these companies, and the companies can supply materials continuously.

 Geo-localization can be very useful to companies and to the project, because it can provide additional information about the consumption of products in each region. This way, product advertisements can be better targeted, and we can gather additional information about the habits of each type of client according to his or her location.
2.3.1 BLOCK DIAGRAM OF PROPOSED SYSTEM

[Block diagram: a regulated power supply (RPS) powers the Raspberry Pi, which reads an IR sensor and drives two servo motors, an LCD and a buzzer.]

Fig 2.1 Block Diagram
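As a rough sketch of how these blocks could be tied together in software, the following Python fragment uses the RPi.GPIO library available on Raspbian to read the IR sensor, sound the buzzer and move one sorting servo. The BCM pin numbers and duty-cycle values are illustrative assumptions, not the project's actual wiring, and the LCD driver is omitted.

import time
import RPi.GPIO as GPIO

# Hypothetical BCM pin assignments -- adjust to the actual wiring.
IR_PIN = 17      # IR sensor output (assumed to go LOW when an object is detected)
BUZZER_PIN = 27  # active buzzer
SERVO_PIN = 18   # sorting servo signal

GPIO.setmode(GPIO.BCM)
GPIO.setup(IR_PIN, GPIO.IN)
GPIO.setup(BUZZER_PIN, GPIO.OUT)
GPIO.setup(SERVO_PIN, GPIO.OUT)

servo = GPIO.PWM(SERVO_PIN, 50)  # 50 Hz PWM for a standard hobby servo
servo.start(7.5)                 # roughly the centre position

try:
    while True:
        if GPIO.input(IR_PIN) == GPIO.LOW:       # object present at the chute
            GPIO.output(BUZZER_PIN, GPIO.HIGH)   # acknowledge the deposit
            servo.ChangeDutyCycle(10.0)          # swing the flap towards one bin
            time.sleep(1.0)
            servo.ChangeDutyCycle(7.5)           # return to centre
            GPIO.output(BUZZER_PIN, GPIO.LOW)
        time.sleep(0.1)
finally:
    servo.stop()
    GPIO.cleanup()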

2.3.2 HARDWARE COMPONENTS

 Power Supply
 Raspberry Pi
 IR Sensor
 Servo Motor
 LCD
 Buzzer

2.3.3 SOFTWARE COMPONENTS


 Python IDE
 Proteus

2.3.4 TECHNOLOGY USED


 Image Processing and Deep Learning

CHAPTER 3

TECHNOLOGY USED

3.1 IMAGE PROCESSING

Image processing is a method to convert an image into digital form and perform some operations on it, in order to get an enhanced image or to extract some useful information from it. It is a type of signal processing in which the input is an image, such as a video frame or photograph, and the output may be an image or characteristics associated with that image. Usually, an image processing system treats images as two-dimensional signals and applies established signal-processing methods to them. It is among the rapidly growing technologies today, with applications in various aspects of business. Image processing also forms a core research area within the engineering and computer science disciplines.

Image processing basically includes the following three steps:

 Importing the image via image acquisition tools.


 Analysing and manipulating the image.
 Output in which result can be altered image or report that is based
on image analysis.
Low-level image processing algorithms include:

 Edge detection.
 Segmentation.
 Classification.
 Feature detection and matching.
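As a minimal illustration of these steps and of the low-level operations listed above, the sketch below loads an image, detects edges and produces a rough segmentation with OpenCV. The file name and the threshold values are placeholders.

import cv2

# Importing: read an image from disk (a camera frame would work the same way).
img = cv2.imread("bottle.jpg")             # placeholder file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Analysing: edge detection and a simple threshold-based segmentation.
edges = cv2.Canny(gray, 100, 200)          # low/high hysteresis thresholds
_, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Output: save the altered images, i.e. the result of the pipeline.
cv2.imwrite("edges.png", edges)
cv2.imwrite("segmented.png", mask)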

DIGITAL IMAGE PROCESSING

A digital image is a representation of a real image as a set of numbers that


can be stored and handled by a digital computer. In order to translate the
image into numbers, it is divided into small areas called pixels (picture
elements). For each pixel, the imaging device records a number, or a small
set of numbers, that describe some property of this pixel, such as its
brightness (the intensity of the light) or its color. The numbers are arranged
in an array of rows and columns that correspond to the vertical and
horizontal positions of the pixels in the image. Digital images have several
basic characteristics. One is the type of the image. For example, a black and
white image records only the intensity of the light falling on the pixels. A
color image can have three colors, normally RGB (Red, Green, Blue) or
four colors, CMYK (Cyan, Magenta, Yellow, black). RGB images are
usually used in computer monitors and scanners, while CMYK images are
used in color printers. There are also non-optical images such as ultrasound
or X-ray in which the intensity of sound or X-rays is recorded. In range
images, the distance of the pixel from the observer is recorded. Resolution
is expressed in the number of pixels per inch (ppi). A higher resolution
gives a more detailed image. A computer monitor typically has a resolution
of 100 ppi, while a printer has a resolution ranging from 300 ppi to more
than 1440 ppi. This is why an image looks much better in print than on a
monitor.

3.2 OPEN CV

OpenCV (Open Source Computer Vision Library) is an open-source library of programming functions for real-time computer vision and image processing. It provides optimized implementations of common operations such as reading and writing images, colour-space conversion, filtering, edge detection, feature detection and object detection, and it offers bindings for Python, C++ and Java. In this project, OpenCV can be used on the Raspberry Pi to capture frames from the camera module and to pre-process them before they are passed to the deep learning model.
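The short sketch below shows how a digital image is handled with OpenCV from Python: it is loaded as a NumPy array of rows, columns and colour channels, its resolution can be read from the array shape, and it can be converted and resized with single calls. The file name is again a placeholder.

import cv2

img = cv2.imread("can.jpg")                      # placeholder file name
height, width, channels = img.shape              # rows, columns, colour channels
print(f"{width}x{height} pixels, {channels} channels (OpenCV stores BGR)")

# Individual pixel values are just numbers in the array.
print("Top-left pixel (B, G, R):", img[0, 0])

# Colour-space conversion and resizing are single calls.
rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
small = cv2.resize(img, (224, 224))              # e.g. the input size of a CNN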

3.3 DEEP LEARNING

INTRODUCTION
Whether it is medical diagnosis, self-driving vehicles, camera
monitoring, or smart filters, many applications in the field of computer
vision are closely related to our current and future lives. In recent years,
deep learning has been the transformative power for advancing the
performance of computer vision systems. It can be said that the most
advanced computer vision applications are almost inseparable from deep
learning. In view of this, this chapter will focus on the field of computer
vision, and investigate methods and applications that have recently been
influential in academia and industry.

Earlier, we studied various convolutional neural networks that are commonly used in computer vision and applied them to simple image classification tasks. At the beginning of this chapter, we describe two methods that may improve model generalization, namely image augmentation and fine-tuning, and apply them to image classification. Since deep neural networks can effectively represent images at multiple levels, such layer-wise representations have been successfully used in various computer vision tasks such as object detection, semantic segmentation, and style transfer. Following the key idea of leveraging layer-wise representations in computer vision, we begin with the major components and techniques for object detection. Next, we show how to use fully convolutional networks for semantic segmentation of images, and then explain how style transfer techniques can be used to generate new images. Finally, we conclude the chapter by applying this material, and that of several previous chapters, to two popular computer vision benchmark datasets.

IMAGE AUGMENTATION

As mentioned earlier, large datasets are a prerequisite for the success of deep neural networks in various applications. Image augmentation generates similar but distinct training examples after a series of random changes to the training images, thereby expanding the size of the training set. Alternatively, image augmentation can be motivated by the fact that random tweaks of training examples allow models to rely less on certain attributes, thereby improving their generalization ability. For example, we can crop an image in different ways to make the object of interest appear in different positions, thereby reducing the dependence of a model on the position of the object. We can also adjust factors such as brightness and colour to reduce a model's sensitivity to colour. It is probably true that image augmentation was indispensable for the success of AlexNet at that time. In this section we will discuss this widely used technique in computer vision.

COMMON IMAGE AUGMENTATION METHODS

In our investigation of common image augmentation methods, we will use the following 400×500 image as an example. Most image augmentation methods have a certain degree of randomness. To make it easier to observe the effect of image augmentation, we next define an auxiliary function apply. This function runs the image augmentation method aug multiple times on the input image img and shows all the results.
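A possible implementation of such an auxiliary function is sketched below with torchvision and matplotlib (the text itself follows the Gluon API; the torchvision names are assumed equivalents used here only for illustration, and the image file name is a placeholder).

import matplotlib.pyplot as plt
from PIL import Image
from torchvision import transforms

img = Image.open("cat.jpg")   # placeholder example image

def apply(img, aug, num_rows=2, num_cols=4, scale=1.5):
    """Run the random augmentation `aug` several times and show every result."""
    fig, axes = plt.subplots(num_rows, num_cols,
                             figsize=(num_cols * scale, num_rows * scale))
    for ax in axes.flatten():
        ax.imshow(aug(img))   # each call draws new random parameters
        ax.axis("off")
    plt.show()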

FLIPPING AND CROPPING

Flipping the image left and right usually does not change the category of the object. This is one of the earliest and most widely used methods of image augmentation. Next, we use the transforms module to create a RandomFlipLeftRight instance, which flips an image left and right with a 50% chance.

Flipping up and down is not as common as flipping left and right. But at least for this example image, flipping up and down does not hinder recognition. Next, we create a RandomFlipTopBottom instance to flip an image up and down with a 50% chance.
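With torchvision (used here as an assumed equivalent of the Gluon transforms named above, and reusing the img and apply helper from the previous sketch), the two flips look as follows.

from torchvision import transforms

# Flip left-right with probability 0.5 (RandomFlipLeftRight in Gluon).
apply(img, transforms.RandomHorizontalFlip(p=0.5))

# Flip up-down with probability 0.5 (RandomFlipTopBottom in Gluon).
apply(img, transforms.RandomVerticalFlip(p=0.5))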

In the example image we used, the cat is in the middle of the


image, but this may not be the case in general. In Section 6.5, we explained
that the pooling layer can reduce the sensitivity of a convolutional layer to
the target position. In addition, we can also randomly crop the image to
make objects appear in different positions in the image at different scales,
which can also reduce the sensitivity of a model to the target position. In the
code below, we randomly crop an area with an area of 10% ∼ 100% of the
original area each time, and the ratio of width to height of this area is
randomly selected from 0.5 ∼ 2. Then, the width and height of the region
are both scaled to 200 pixels. Unless otherwise specified, the random
number between a and b in this section refers to a continuous value
obtained by random and uniform sampling from the interval [a, b].
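The random cropping just described corresponds to torchvision's RandomResizedCrop (again an assumed equivalent of the Gluon transform): an area covering 10%–100% of the original image, with an aspect ratio between 0.5 and 2, is cropped and then resized to 200×200 pixels.

from torchvision import transforms

shape_aug = transforms.RandomResizedCrop(
    size=(200, 200),       # output height and width
    scale=(0.1, 1.0),      # crop 10% - 100% of the original area
    ratio=(0.5, 2.0),      # width/height ratio drawn from [0.5, 2]
)
apply(img, shape_aug)      # reuses the apply helper defined earlier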
CHANGING COLOURS

Another augmentation method is changing colours. We can change four aspects of the image colour: brightness, contrast, saturation, and hue. In the example below, we randomly change the brightness of the image to a value between 50% (1 − 0.5) and 150% (1 + 0.5) of the original image.

Similarly, we can randomly change the hue of the image.

We can also create a RandomColorJitter instance and set how to randomly change the brightness, contrast, saturation, and hue of the image at the same time.
COMBINING MULTIPLE IMAGE AUGMENTATION METHODS

In practice, we will combine multiple image augmentation methods. For example, we can combine the different image augmentation methods defined above and apply them to each image via a Compose instance.
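A hedged torchvision version of the colour jitter and of combining several augmentations with a Compose instance is sketched below; it reuses the apply helper, img and shape_aug from the earlier sketches.

from torchvision import transforms

# Randomly change brightness, contrast, saturation and hue at the same time.
color_aug = transforms.ColorJitter(
    brightness=0.5, contrast=0.5, saturation=0.5, hue=0.5)

# Chain several augmentations; they are applied in order to each image.
augs = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    color_aug,
    shape_aug,               # the random crop defined in the previous sketch
])
apply(img, augs)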

TRAINING WITH IMAGE AUGMENTATION

Let us train a model with image augmentation. Here we use the


CIFAR-10 dataset instead of the Fashion-MNIST dataset that we
used before. This is because the position and size of the objects in
the Fashion-MNIST dataset have been normalized, while the color
and size of the objects in the CIFAR-10 dataset have more significant
differences. The first 32 training images in the CIFAR-10 dataset are
shown below.

In order to obtain definitive results during prediction, we usually only apply image augmentation to training examples, and do not use image augmentation with random operations during prediction. Here we only use the simplest method, random left-right flipping. In addition, we use a ToTensor instance to convert a minibatch of images into the format required by the deep learning framework, i.e., 32-bit floating point numbers between 0 and 1 with the shape (batch size, number of channels, height, width).

Next, we define an auxiliary function to facilitate reading the images and applying image augmentation. The transform_first function provided by Gluon's datasets applies image augmentation to the first element of each training example (image and label), i.e., the image. For a detailed introduction to DataLoader, please refer to Section 3.5.
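A minimal PyTorch sketch of this setup (train-time flipping plus ToTensor, test-time ToTensor only, and data loaders that apply the transform to the image part of each example) is given below; the batch size, path and worker count are arbitrary choices.

import torch
import torchvision
from torchvision import transforms

train_augs = transforms.Compose([
    transforms.RandomHorizontalFlip(),   # the only random augmentation used here
    transforms.ToTensor(),               # float32 in [0, 1], shape (C, H, W)
])
test_augs = transforms.Compose([transforms.ToTensor()])

def load_cifar10(is_train, augs, batch_size=256):
    dataset = torchvision.datasets.CIFAR10(
        root="./data", train=is_train, transform=augs, download=True)
    return torch.utils.data.DataLoader(
        dataset, batch_size=batch_size, shuffle=is_train, num_workers=2)

train_iter = load_cifar10(True, train_augs)
test_iter = load_cifar10(False, test_augs)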

MULTI-GPU TRAINING

We train the ResNet-18 model from Section 7.6 on the CIFAR-10


dataset. Recall the introduction to multi-GPU training in Section
12.6. In the following, we define a function to train and evaluate the
model using multiple GPUs.

Now we can define the train_with_data_aug function to train the model with image augmentation. This function gets all available GPUs, uses Adam as the optimization algorithm, applies image augmentation to the training dataset, and finally calls the train_ch13 function just defined to train and evaluate the model.

Let us train the model using image augmentation based on random


left-right flipping.
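The following compressed PyTorch sketch shows one way such a routine could be written; it is not the text's own train_ch13, and the epoch count and learning rate are arbitrary. It wraps the network with DataParallel when several GPUs are available and uses Adam, as described above.

import torch
from torch import nn

def train_with_data_aug(net, train_iter, num_epochs=10, lr=0.001):
    """Train `net` with Adam on all available GPUs (falls back to CPU)."""
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    if torch.cuda.device_count() > 1:
        net = nn.DataParallel(net)       # replicate across all visible GPUs
    net = net.to(device)
    loss_fn = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(net.parameters(), lr=lr)
    for epoch in range(num_epochs):
        net.train()
        for X, y in train_iter:
            X, y = X.to(device), y.to(device)
            optimizer.zero_grad()
            loss = loss_fn(net(X), y)
            loss.backward()
            optimizer.step()

# Example: train ResNet-18 with the flipped CIFAR-10 loader from the previous sketch.
# net = torchvision.models.resnet18(num_classes=10)
# train_with_data_aug(net, train_iter)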

SUMMARY
 Image augmentation generates random images based on
existing training data to improve the generalization ability of
models.

 In order to obtain definitive results during prediction, we


usually only apply image augmentation to training examples,
and do not use image augmentation with random
operations during prediction.

 Deep learning frameworks provide many different image


augmentation methods, which can be applied
simultaneously.

3.4 FINE-TUNING

In earlier chapters, we discussed how to train models on the Fashion-


MNIST training dataset with only 60000 images. We also described
ImageNet, the most widely used large-scale image dataset in
academia, which has more than 10 million images and 1000 objects.
However, the size of the dataset that we usually encounter is
between those of the two datasets.

Suppose that we want to recognize different types of chairs from


images, and then recommend purchase links to users. One possible
method is to first identify 100 common chairs, take 1000 images of
different angles for each chair, and then train a classification model
on the collected image dataset. Although this chair dataset may be
larger than the Fashion-MNIST dataset, the number of examples is
still less than one-tenth of that in ImageNet. This may lead to
overfitting of complicated models that are suitable for ImageNet on
this chair dataset. Besides, due to the limited amount of training
examples, the accuracy of the trained model may not meet practical
requirements.

In order to address the above problems, an obvious solution is to


collect more data. However, collecting and labeling data can take a
lot of time and money. For example, in order to collect the ImageNet
dataset, researchers have spent millions of dollars from research
funding. Although the current data collection cost has been
significantly reduced, this cost still cannot be ignored.

Another solution is to apply transfer learning to transfer the


knowledge learned from the source dataset to the target dataset. For
example, although most of the images in the ImageNet dataset have
nothing to do with chairs, the model trained on this dataset may
extract more general image features, which can help identify edges,
textures, shapes, and object composition. These similar features may
also be effective for recognizing chairs.

STEPS

In this section, we will introduce a common technique in transfer learning: fine-tuning. As shown in Fig 3.1, fine-tuning consists of the following four steps:

1. Pretrain a neural network model, i.e., the source model, on a source dataset (e.g., the ImageNet dataset).

2. Create a new neural network model, i.e., the target model.


This copies all model designs and their parameters on the
source model except the output layer. We assume that these
model parameters contain the knowledge learned from the
source dataset and this knowledge will also be applicable to
the target dataset. We also assume that the output layer of the
source model is closely related to the labels of the source
dataset; thus it is not used in the target model.

3. Add an output layer to the target model, whose number of


outputs is the number of categories in the target dataset.
Then randomly initialize the model parameters of this layer.

4. Train the target model on the target dataset, such as a chair


dataset. The output layer will be trained from scratch, while
the parameters of all the other layers are fine-tuned based on
the parameters of the source model.

Fig 3.1 Fine tuning.

When target datasets are much smaller than source datasets, fine-
tuning helps to improve model generalization ability.
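The four steps above can be sketched in PyTorch roughly as follows (the text's own examples use the Gluon API; this is an assumed equivalent): a pretrained source model is loaded, its output layer is replaced by a freshly initialized layer sized for the target dataset (two classes here, anticipating the hot dog example below), and only then is the network trained on the target data.

import torch
from torch import nn
import torchvision

# Step 1: source model pretrained on ImageNet.
pretrained_net = torchvision.models.resnet18(pretrained=True)

# Steps 2-3: copy the design and parameters, then replace the output layer
# with a new, randomly initialized one sized for the target dataset.
finetune_net = torchvision.models.resnet18(pretrained=True)
finetune_net.fc = nn.Linear(finetune_net.fc.in_features, 2)
nn.init.xavier_uniform_(finetune_net.fc.weight)

# Step 4: train on the target dataset (training loop omitted here); the copied
# layers use a small learning rate, while the new layer uses a larger one.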

HOT DOG RECOGNITION

Let us demonstrate fine-tuning via a concrete case: hot dog recognition. We will fine-tune a ResNet model, pretrained on the ImageNet dataset, on a small dataset consisting of thousands of images with and without hot dogs. We will use the fine-tuned model to recognize hot dogs in images.

READING THE DATASET

The hot dog dataset we use was taken from online images. It consists of 1400 positive-class images containing hot dogs, and as many negative-class images containing other foods. 1000 images of each class are used for training and the rest for testing.

After unzipping the downloaded dataset, we obtain two folders


hotdog/train and hotdog/test. Both folders have hotdog and not-hotdog
subfolders, either of which contains images of the corresponding class.

We create two instances to read all the image files in the training and
testing datasets, respectively.
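With the folder layout described above, the two dataset instances can be created with torchvision's ImageFolder (an assumed equivalent of the Gluon dataset class used in the original text); the data_dir path is a placeholder.

import os
import torchvision

data_dir = "./hotdog"   # placeholder path to the unzipped dataset

# Each subfolder (hotdog, not-hotdog) becomes one class label.
train_imgs = torchvision.datasets.ImageFolder(os.path.join(data_dir, "train"))
test_imgs = torchvision.datasets.ImageFolder(os.path.join(data_dir, "test"))

print(train_imgs.classes)              # ['hotdog', 'not-hotdog']
print(len(train_imgs), len(test_imgs)) # number of images in each split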

The first 8 positive examples and the last 8 negative images are
shown below. As you can see, the images vary in size and aspect
ratio.

During training, we first crop a random area of random size and random aspect ratio from the image, and then scale this area to a 224×224 input image. During testing, we scale both the height and width of an image to 256 pixels, and then crop a central 224×224 area as input. In addition, for the three RGB (red, green, and blue) color channels we standardize their values channel by channel. Concretely, the mean value of a channel is subtracted from each value of that channel and then the result is divided by the standard deviation of that channel.
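Expressed with torchvision transforms (used here as assumed equivalents of the Gluon ones that the text refers to), the training and testing pipelines described above could look as follows; the normalization constants are the commonly used ImageNet channel means and standard deviations.

from torchvision import transforms

normalize = transforms.Normalize(
    mean=[0.485, 0.456, 0.406],   # ImageNet channel means (R, G, B)
    std=[0.229, 0.224, 0.225])    # ImageNet channel standard deviations

train_augs = transforms.Compose([
    transforms.RandomResizedCrop(224),   # random area and aspect ratio, then 224x224
    transforms.ToTensor(),
    normalize,
])

test_augs = transforms.Compose([
    transforms.Resize([256, 256]),       # scale both height and width to 256 pixels
    transforms.CenterCrop(224),          # central 224x224 crop
    transforms.ToTensor(),
    normalize,
])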

DEFINING AND INITIALIZING THE MODEL

We use ResNet-18, which was pretrained on the ImageNet dataset,


as the source model. Here, we specify pretrained=True to
automatically download the pretrained model parameters. If this
model is used for the first time, Internet connection is required for
download.

The pretrained source model instance contains two member


variables: features and output. The former contains all layers of the
model except the output layer, and the latter is the output layer of the
model. The main purpose of this division is to facilitate the fine-
tuning of model parameters of all layers but the output layer. The
member variable output of source model is shown below.

As a fully-connected layer, it transforms ResNet's final global average pooling outputs into the 1000 class outputs of the ImageNet dataset. We then construct a new neural network as the target model. It is defined in the same way as the pretrained source model, except that its number of outputs in the final layer is set to the number of classes in the target dataset (rather than 1000).

In the following code, the model parameters in the member variable features of the target model instance finetune_net are initialized to the model parameters of the corresponding layers of the source model. Since the model parameters in features were pretrained on the ImageNet dataset and are good enough, generally only a small learning rate is needed to fine-tune these parameters.

The model parameters in the member variable output are initialized randomly and generally require a larger learning rate to be trained from scratch. In contrast, since the parameters before the output layer were obtained via pretraining on ImageNet, they are effective, and only a small learning rate is needed to fine-tune them. Assuming that the base learning rate in the Trainer instance is η, we set the learning rate of the model parameters in the member variable output to 10η in the iteration.
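In PyTorch, this two-speed learning-rate scheme can be sketched with parameter groups: all pretrained parameters use the base learning rate η, while the freshly initialized output layer uses 10η. Here finetune_net refers to the target model constructed in the earlier sketch, and the value of η is an arbitrary example.

import torch

lr = 5e-5   # base learning rate (eta); an arbitrary example value

# Every parameter except the new output layer keeps the small base rate.
base_params = [p for name, p in finetune_net.named_parameters()
               if not name.startswith("fc")]

optimizer = torch.optim.SGD(
    [{"params": base_params},                                   # lr = eta
     {"params": finetune_net.fc.parameters(), "lr": lr * 10}],  # lr = 10 * eta
    lr=lr, weight_decay=0.001)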

FINE-TUNING THE MODEL

First, we define a training function train_fine_tuning that uses fine-tuning, so that it can be called multiple times.

We set the base learning rate to a small value in order to fine-tune


the model parameters obtained via pretraining. Based on the
previous settings, we will train the output layer parameters of the
target model from scratch using a learning rate ten times greater.


For comparison, we define an identical model, but initialize all of its


model parameters to random values. Since the entire model needs to
be trained from scratch, we can use a larger learning rate.

SUMMARY

 Transfer learning transfers knowledge learned from the source


dataset to the target dataset. Fine-tuning is a common
technique for transfer learning.

 The target model copies all model designs with their


parameters from the source model except the output layer,
and fine-tunes these parameters based on the target dataset.
In contrast, the output layer of the target model needs to be
trained from scratch.

 Generally, fine-tuning parameters uses a smaller learning


rate, while training the output layer from scratch can use a
larger learning rate.

3.5 OBJECT DETECTION AND BOUNDING BOXES

In earlier sections, we introduced various models for image


classification. In image classification tasks, we assume that there is
only one major object in the image and we only focus on how to
recognize its category. However, there are often multiple objects in
the image of interest. We not only want to know their categories, but
also their specific positions in the image. In computer vision, we refer
to such tasks as object detection (or object recognition).

Object detection has been widely applied in many fields. For


example, self-driving needs to plan traveling routes by detecting
the positions of vehicles, pedestrians, roads, and obstacles in the
captured video images. Besides, robots may use this technique to
detect and localize objects of interest throughout its navigation of
an environment. Moreover, security systems may need to detect
abnormal objects, such as intruders or bombs.

In the next few sections, we will introduce several deep learning


methods for object detection. We will begin with an introduction to
positions (or locations) of objects.

We will load the sample image to be used in this section. We can see
that there is a dog on the left side of the image and a cat on the right.
They are the two major objects in this image.

BOUNDING BOXES

In object detection, we usually use a bounding box to describe the spatial location of an object. The bounding box is rectangular and is determined by the x and y coordinates of the upper-left corner of the rectangle and the corresponding coordinates of the lower-right corner. Another commonly used bounding box representation is the (x, y)-axis coordinates of the bounding box center together with the width and height of the box.

Here we define functions to convert between these two representations: box_corner_to_center converts from the two-corner representation to the center-width-height representation, and box_center_to_corner does the opposite. The input argument boxes should be a two-dimensional tensor of shape (n, 4), where n is the number of bounding boxes.
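A possible implementation of the two conversion functions, written with PyTorch tensors of shape (n, 4) as described above, is sketched below; the example coordinates are arbitrary.

import torch

def box_corner_to_center(boxes):
    """Convert (x1, y1, x2, y2) boxes to (center_x, center_y, width, height)."""
    x1, y1, x2, y2 = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
    cx = (x1 + x2) / 2
    cy = (y1 + y2) / 2
    w = x2 - x1
    h = y2 - y1
    return torch.stack((cx, cy, w, h), dim=-1)

def box_center_to_corner(boxes):
    """Convert (center_x, center_y, width, height) boxes to (x1, y1, x2, y2)."""
    cx, cy, w, h = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
    x1 = cx - 0.5 * w
    y1 = cy - 0.5 * h
    x2 = cx + 0.5 * w
    y2 = cy + 0.5 * h
    return torch.stack((x1, y1, x2, y2), dim=-1)

# Round-trip check on two example boxes (values are arbitrary):
boxes = torch.tensor([[60.0, 45.0, 378.0, 516.0], [400.0, 112.0, 655.0, 493.0]])
assert torch.allclose(box_center_to_corner(box_corner_to_center(boxes)), boxes)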

We will define the bounding boxes of the dog and the cat in the
image based on the coordinate information. The origin of the
coordinates in the image is the upper-left corner of the image, and to
the right and down are the positive directions of the x and y axes,
respectively.
We can verify the correctness of the two bounding box conversion
functions by converting twice.

Let us draw the bounding boxes in the image to check if they are
accurate. Before drawing, we will define a helper function. It
represents the bounding box in the bounding box format of the
matplotlib package.
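Such a helper can be written as follows: it converts a corner-format bounding box into a matplotlib Rectangle that can be added to the current axes. The image file name and the box coordinates are placeholders.

import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle

def bbox_to_rect(bbox, color):
    """Convert an (x1, y1, x2, y2) bounding box to a matplotlib Rectangle."""
    x1, y1, x2, y2 = bbox
    return Rectangle(xy=(x1, y1), width=x2 - x1, height=y2 - y1,
                     fill=False, edgecolor=color, linewidth=2)

# Example: draw two (arbitrary) boxes on an image.
fig = plt.imshow(plt.imread("catdog.jpg"))          # placeholder file name
fig.axes.add_patch(bbox_to_rect([60, 45, 378, 516], "blue"))
fig.axes.add_patch(bbox_to_rect([400, 112, 655, 493], "red"))
plt.show()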

After adding the bounding boxes on the image, we can see that the
main outline of the two objects are basically inside the two boxes.

SUMMARY

 Object detection not only recognizes all the objects of


interest in the image, but also their positions. The position is
generally represented by a rectangular bounding box.
 We can convert between two commonly used bounding box
representations.
CHAPTER 4

HARDWARE IMPLEMENTATION

HARDWARE COMPONENTS ARE


 Power Supply
 Raspberry PI
 IR Sensor
 Servo Motor
 LCD
 Buzzer

4.1 POWER SUPPLY

In this project, the power supply provides +5V and −5V rails; normally +5V is enough for the whole circuit, while the −5V supply is used for the op-amp circuit.

The transformer primary is connected to the 230V/50Hz AC mains, and the secondary winding steps the voltage down to 12V/50Hz. This voltage is rectified using two full-wave rectifiers, and the rectified output is passed to a filter circuit to remove the unwanted AC component from the signal. The filtered output is then applied to an LM7805 regulator (to provide +5V), while an LM7905 regulator provides −5V regulation. A +12V rail, built with an LM7812 regulator using the same process as above, is used for the stepper motors, fan and relay.

Fig 4.1 RPS

TRANSFORMER

Transformers are used to convert electricity from one voltage to


another with minimal loss of power. They only work with AC (alternating
current) because they require a changing magnetic field to be created in
their core. Transformers can increase voltage (step-up) as well as reduce
voltage (step-down).

Alternating current flowing in the primary (input) coil creates a continually changing magnetic field in the iron core. This field also passes through the secondary (output) coil, and the changing strength of the magnetic field induces an alternating voltage in the secondary coil. If the secondary coil is connected to a load, the induced voltage will make an induced current flow. The correct term for the induced voltage is 'induced electromotive force', which is usually abbreviated to induced e.m.f.

RECTIFIERS

The purpose of a rectifier is to convert an AC waveform into a DC waveform; in other words, a rectifier converts AC current or voltage into DC current or voltage. There are two different rectification circuits, known as 'half-wave' and 'full-wave' rectifiers. Both use components called diodes to convert AC into DC.

FILTERS

A filter circuit is a device which removes the ac component of


rectifier output but allows the dc component to the load. The most
commonly used filter circuits are capacitor filter, choke input filter and
capacitor input filter or pi-filter. We used capacitor filter here.

The capacitor filter circuit is extremely popular because of its low


cost, small size, little weight and good characteristics. For small load
currents this type of filter is preferred. It is commonly used in transistor
radio battery eliminators.
Fig 4.2 Capacitor filter (rectifier output applied across the capacitor C and the load RL)

4.2 RASPBERRY PI
In this modern age when computers are sleek, Raspberry Pi seems alien
with tiny codes printed all over its circuit board. That’s a big part of
Raspberry Pi’s appeal. Let us have a look at what we can do with this
appealing circuit board.

Fig 4.3 Raspberry PI Pin Description

USES
Like a desktop computer, you can do almost anything with the Raspberry
Pi. You can start and manage programs with its graphical windows desktop.
It also has the shell for accepting text commands.
We can use the Raspberry Pi computer for the following:

 Playing games

 Browsing the internet

 Word processing

 Spreadsheets

 Editing photos

 Paying bills online

 Managing your accounts.

The best use of Raspberry Pi is to learn how a computer works. You can
also learn how to make electronic projects or programs with it.

It comes with two programming languages, Scratch and Python. Through


GPIO (general- purpose input output) pins, Raspberry Pi can be connected
to other circuits, so that you can control the other devices of your choice.

REQUIREMENTS
To use your Raspberry Pi board, you need to buy a few other bits and
pieces. Following is the checklist of what else we might need:

Monitor

The Raspberry Pi uses a high-definition multimedia interface (HDMI)


connection for video feed, and you can connect your monitor directly with
this interface connection, if your monitor has an HDMI socket.

Television

In the similar way, if you have High-Definition Television (HD TV), you
can also connect it to your Raspberry Pi using an HDMI socket. It will give
you a crisper picture.
USB hub

Depending on the model, the Raspberry Pi has 1, 2, or 4 Universal Serial Bus (USB) sockets. You should consider using a powered USB hub if you want to connect several other devices to your Raspberry Pi at the same time.

Keyboard and Mouse

The Raspberry Pi only supports USB keyboards and mice. If you are using a keyboard and mouse with PS/2 connectors, you need to replace them with USB ones to use them with the Raspberry Pi.

SD or MicroSD card

As we know that the Raspberry Pi does not have a hard drive, so we need to
use SD cards or MicroSD cards (depending on the model) for storage.

USB Wi-Fi adapter

If you are going to use model A and A+ then, you need to buy a USB Wi-Fi
adapter for connecting to the internet. This should be done because these
Raspberry models do not have an Ethernet socket.

External hard drive

If you want to share your collection of music and movies, you need to use
an external hard drive with your Raspberry Pi model. You can connect the
same by using a powered USB cable.

Raspberry Pi Camera Module

The Raspberry Pi camera module originated at Raspberry Pi foundation. It


is an 8MP (megapixel) fixed focus camera that can be used to shoot high-
definition video and take still photos. For wildlife photography at night, it
provides another version without an infrared filter.
Speakers
The Raspberry Pi has a standard audio out socket. This socket is compatible
with headphones and speakers that use a 3.5mm audio jack. We can plug
headphones directly to it.

Power supply
For power supply, it uses a Micro USB connector. Hence theoretically, it is
compatible with a mobile phone and tablet charger.

Cables
Following are some of the cables, which you need for the connections to the
Raspberry Pi computer:
 HDMI cable
 HDMI-to-DVI adapter, if you are using a Digital Visual Interface
(DVI) monitor.
 RCA cable, if you want to connect to an older television.
 Audio cable
 Ethernet cable

COMPATIBLE AND INCOMPATIBLE DEVICES


To minimize the cost, the Raspberry Pi models are designed to be used with
whatever accessories we have. But, as we know that in practice, not all the
devices can be compatible.
You need to check for compatible and incompatible devices as
incompatible USB, keyboards and mouse can cause problems.
You can find the list of compatible and incompatible devices at
https://elinux.org/RPi_VerifiedPeripherals.

Before you get started with your Raspberry Pi board, you need to provide it with an OS (operating system). Linux is the most frequently used OS on the Raspberry Pi.

For using an OS, we need to create a Secure Digital (SD) or MicroSD card
with an OS on it. The prerequisite for setting up the SD or MicroSD is a
computer having an internet connection and the ability to write to SD or
MicroSD cards.
4.2.1 OPERATING SYSTEM
NOOBS SOFTWARE

NOOBS means new-out-of-box software and it is the easiest way to get


started with the Raspberry Pi. It is easy to copy NOOBS to your SD or
MicroSD card. Once copied, it provides us with a simple menu for
installing various operating systems.

There is an option to buy a card with NOOBS already installed on it, but it
is always useful to know how to create your own NOOBS cards.

DOWNLOAD NOOBS

Follow the below given steps to download NOOBS:

Step 1: Go to the website www.raspberrypi.org/downloads/noobs

Step 2: Select from the two versions of NOOBS available. Version 1 is the
main version and includes Raspbian. This is the officially supported OS,
which you can use even without any network connection.

Another option is to choose the OS from the menu. You can download and
install the OS from the menu, if you have a network connection. It is always
recommended to download NOOBS for your first OS.

MICROSD CARD FORMATTING

Before downloading and installing OS, we first need to format our SD or


MicroSD card. We can use an application program, called SD card
Formatter, from SD Association. The latest version is SD Memory Card
Formatter 5.0.1.

For Windows and Mac, it can be downloaded from the link https://www.sdcard.org/downloads/formatter/.

Let us see how we can format the SD card by using windows, Mac OS, and
Linux.
USING WINDOWS

Step 1: Download and install the SD formatter application. It will be as


follows:

Step 2: Next, we need to select the drive that contains our SDHC/SDXC (SD High Capacity / eXtended Capacity) card. Once selected, click on the format button to format it.

The following screen will appear:


Step 3: The program will ask for the confirmation. You need to click yes
to confirm the format process.

Step 4: Once the format process is completed, your SD card will be


formatted completely.

USING MAC OS

The process of formatting is similar as we did in windows. You just need to


download and install the Mac version of SD card formatter.

USING LINUX

We will be using the GParted application program, which is an open-source partition manager for Linux.

Use the steps given below to format an SD card in Ubuntu:

Step 1: Download and install the GParted application by using the


terminal as follows:

sudo apt-get install gparted

Step 2: Once installation is completed, you need to insert the SD card.


Next, by using Unity dash, launch the GParted application.

Step 3: You will get the screen as below, which shows the partitions of the
removable disk. But before starting the formatting, we need to unmount the
disk by right-clicking on the partition as shown below:

Step 4: After unmounting, we need to right click on it, which will show us
the Format to option. Now from the list, you can choose whatever type of
file system you want on the disk.

After selecting the drive to format, you need to click on the Tick sign as
shown below:
Step 5: It will show you a couple of warnings and the format procedure will
be started.

INSTALL NOOBS TO MEMORY CARD


Now, you have a formatted card and the .zip file that was downloaded from
the Raspberry website. Hence, you can install NOOBS on your card.

On windows PC, you can simply double click the .zip file. It will open the
file. Once opened, you can select all the files and copy them to your
formatted card.

Similarly, on a Mac OS, you can see the folder that contains all the files by
double clicking on the NOOBS .zip file. Now, click on the Edit menu and
select all. Drag all the files onto your SD card.

In the same way, on Linux we can use the desktop environment to copy the
NOOBS .zip files to our SD card.

FLASHING A MICROSD CARD

Some operating systems (OS) may not be available through NOOBS. One
of them is the Reduced Instruction Set Computer (RISC) OS.
For creating a card for such an OS, we need to first download the OS as an
image file. Once an image file is downloaded, we need to use the process
called flashing your card. Later on, the single file can be converted into all
the files which we need on our card (SD or MicroSD).

To download the OS images, we can find the links at the website https://www.raspberrypi.org/software/.

Now, to flash the card (that is, to burn an image to the card), we can use an OS image flasher such as Etcher. It is available for Windows, Mac OS and Linux at https://www.balena.io/etcher/.

4.2.2 CONNECTING RASPBERRY PI


It is quite easy to connect Raspberry Pi. Let us understand about the same in
detail in this chapter.

PORTS AND SOCKETS


You should make sure that you have to face your Raspberry Pi in the right
way. Most components and sockets, with the help of which you connect it,
are sticking out at the top side whereas the back side is relatively flat. The
spiky GPIO (general-purpose input output) pins should be at the top left.

Let us have a look at the diagrams below representing the location of


connectors and main integrated circuits (ICs) on the Raspberry Pi boards.

The source of the diagrams is https://core-electronics.com.au

Diagram 1

Following is the diagram for Raspberry Pi Model B:

Diagram 2
Following is the diagram for Raspberry Pi Model A:
Diagram 3
Following is the diagram for Raspberry Pi Zero:

INSERT SD OR MICRO SD CARD

As we have discussed, you need an SD or MicroSD card with OS to get


started with Raspberry Pi. We have also discussed how you can create one,
in the previous chapter. Now, it is time to insert that card and get started.

If you are using model 2, 3, A+, or B+ then, you need to turn your
Raspberry Pi circuit board, so that the underside is at your side and you can
see that.

You can see, there would be a metal MicroSD card slot on the left side of
the board. Slide your card into this slot.

On the other hand, if you are using Model A or Model B, you need an SD
card and you need to flip your Raspberry Pi over. Now, slide the SD card
while facing the label side above. After that you need to gently press the
card home.

And we know that the models Pi Zero and Zero W have the MicroSD card
slot mounted on the top surface of the board. To insert the card, you need to
put the label side facing you.
CAMERA MODULE

Camera module, an official module from the Raspberry Pi board, is a small


circuit board with a strip of ribbon cable. It plugs directly into the board.

You can see the diagram below:

From the above diagram, you can see that for protection, the lens has a
plastic film over it. You need to pull the green plastic tab to remove the
film.

ON RASPBERRY PI ZERO

The Raspberry Pi model camera socket uses a different width of cable and
you can buy that cable separately. You can also get that cable with the
official Raspberry Pi Zero case. You can check the board and the camera
have similar sockets for the cable.

To open the connector, you just need to gently press the connector between
your finger and thumb. The camera connector is on the right of the
Raspberry Pi board.

To connect the cable with the camera, insert the cable with the shiny
contacts facing the camera front. And on the Pi Zero board, insert the cable
with the shiny contacts facing the flat side of the board i.e., the bottom
side.
ON OTHER RASPBERRY PI MODELS

To connect the camera on other boards, you need to hold the ends in
between your finger and thumb. Then, gently lift the board and it will move
apart to make a gap. This is the place, where you will insert the cable of the
camera.

At the end of the camera’s cable, you can see there are silver connectors on
one side. Now hold the cable in such a way that this side faces to the left.

Once done, insert the cable into the connector on your Raspberry Pi board.
Press it gently and then press the socket back together again and your board
is ready with the camera.

CONNECT RASPBERRY PI TO DEVICES

The respective processes to connect your Raspberry Pi board to different


devices is explained below in detail. Let us begin by understanding how to
connect a display device to your Pi board.

DISPLAY DEVICE

Depending on the screen type, you have two ways to connect the display
device to your Pi board. In these two ways, we are assuming that you are
going to use either monitor or
television. Apart from these two ways, there is an official Pi touchscreen
that connects using the display socket. Let us check how we can connect an
HDMI display and television, as explained below.

HDMI OR DVI DISPLAY

The HDMI connector is on the top surface of your Raspberry Pi board. But for
the Raspberry Pi Zero model, you need to use an adapter that converts Mini
HDMI to an HDMI socket. For connecting, insert one end of the HDMI
cable in the board or Pi ZERO connector and the other end into your
monitor.

On the other hand, if you are using a DVI display, an adapter should be
used.
TELEVISION

If the TV you are using has an HDMI socket, you can use that for optimal results. But if your TV does not have an HDMI socket, you need to use the composite video socket.

On the Raspberry Pi Model A and Model B, the composite video socket is placed on the top edge of the board. It is a round, yellow-and-silver socket.

On other models, Raspberry Pi 3, Pi 2, and Model B+, the same socket as


the audio output can be used as a composite video socket. It is placed on the
bottom of the board.

One thing you should note is that you will need to use a special RCA cable
for this socket. Connect one end of the RCA cable to the audio output
socket and the other end to Video in socket of the TV.

If you are using Pi Zero or Zero W boards then, you need to solder your
own connector to the board, where it is labelled TV. This should be done
because, both these boards do not have composite video socket.

KEYBOARD AND MOUSE

On the Raspberry Pi Model B+, Pi 2, and Pi 3, the keyboard and mouse can be connected directly and should work fine. For earlier models of the Raspberry Pi, you should use an external USB hub to connect the keyboard and mouse.

This prevents the devices from drawing too much power from the Pi board and reduces the risk of heat and other problems caused by the devices.

On the other hand, for Raspberry Pi Zero, Model A, and Model A+, we
must use a USB hub, since these boards have only one USB socket.

AUDIO DEVICES

The Raspberry Pi's audio socket is a small black or blue 3.5mm socket. On Model A and Model B it sits along the top edge of the board, whereas on Model B+, Pi 2, and Pi 3 it sits along the bottom edge.

If you have connected an HDMI TV, then you do not need to connect a
separate audio cable, as the sound is routed through your HDMI cable.

On the other hand, if you have earphones or headphones with a 3.5mm jack,
you can directly plug them into the audio socket.
Alternatively, you can use a suitable adapter cable, as shown in the figure below, with the plug for the Pi's 3.5mm jack on one end and the stereo input plugs that feed into most stereos on the other.

INTERNET ROUTER

All the Raspberry Pi models other than Model A, A+, and Zero have an Ethernet socket. You can find the socket on the right edge of the Raspberry Pi board. To connect to the internet, plug a standard Ethernet cable into this socket.

If you are using a router with DHCP (Dynamic Host Configuration Protocol) support, your Raspberry Pi will automatically connect to the internet.

On the other hand, if you have a Wi-Fi adapter then, you can plug into a
USB socket of Raspberry Pi and it will be ready to use whenever you turn
on your board.
POWER

Once you are done with connecting all the necessary and required devices,
it is time to connect your Raspberry Pi to power and turn it on. For this, you
need to use the Micro USB power socket.

To safeguard your board from damage, you need to provide a steady 5V supply. Keep in mind that the Raspberry Pi board has no on/off switch: whenever you connect it to power, it starts working.

If you want to turn it off, you just need to disconnect the power. To avoid losing data, however, you should proceed with caution and shut down the Raspberry Pi's operating system first.

TURN ON RASPBERRY PI

Connect the power to turn on your Raspberry Pi board. A rainbow of colours appears on the screen, after which the board starts running the NOOBS software on the memory card and you will get a choice of OS to install.

Below are the OS choices in NOOBS:

RASPBIAN
Raspbian, a version of a Linux distribution called Debian, is the distribution
that is recommended by the Raspberry Pi foundation. It has been optimized
for the Raspberry Pi board.

Most of the Raspberry Pi users start with Raspbian and it includes:

 Graphical Desktop software.

 Web browser.

 Development and programming tools like Scratch, Python etc.

It has two versions: one with the PIXEL desktop and another, termed Raspbian Lite, with a more minimal installation.
LIBREELEC AND OSMC

Both are versions of the Kodi media center. They are mainly used for playing music and video.

RISC OS

It is an alternative to the Linux-based operating systems that most people use on the Raspberry Pi. It has a GUI (Graphical User Interface). It was created by Acorn Computers in 1987 and is nowadays maintained and managed by RISC OS Open Limited.

DATA PARTITION

If you use the Data Partition option, it creates a partition where you can store data. The stored data can then be accessed by the various Linux distributions installed on the card.

LAKKA

It is a retro gaming system that includes emulators for a range of vintage home computers such as the Commodore 64, Amiga, Amstrad CPC, ZX Spectrum, and various Atari machines.

It also includes emulators for a range of game consoles such as Nintendo machines and the Sony PlayStation. Bomberman clones and the game 2048 are included, but if you want other games in Lakka, you need to get them separately.

Plug your USB with games files and you will be ready to get games into
Lakka.

RECALBOX

It is another game system. It also includes emulators for the Super Nintendo Entertainment System (SNES), Nintendo Entertainment System (NES), Game Boy Advance, PC Engine, and Sega Master System. The shareware version of a famous game called Doom is also included in the Recalbox game system.
SCREENLY OSE

As the name implies, it is a digital signage system. It enables users to use a Raspberry Pi with a connected HD screen as a digital sign. Here, OSE refers to Open Source Edition.

It enables the following to be displayed on the screen:


 Videos

 Images

 Web pages

Screenly OSE is also suitable for displaying advertisements and information in public areas like shops, schools, offices, shopping malls, railway stations, etc.

WINDOWS 10 IOT CORE

As the name implies, it is the version of Windows designed to support IoT (Internet of Things) devices. It is quite different from the Windows desktop experience we are familiar with.

Once installed, it will give us the following two versions:

 RTM version: the release-to-manufacturing (RTM) version. It is the recommended choice because it is more stable than the Pre-release version.
 Pre-release version: a pre-release build, which is less stable than the RTM version.

TLXOS

This is ThinLinX's thin-client software. It is a trial version that enables the Raspberry Pi to work as a thin client for virtual desktops. Using ThinLinX, we can also manage one or more Raspberry Pis centrally.
4.2.3 CONFIGURATION
In this section, we will learn about configuring the Raspberry Pi. Let us begin by understanding how to configure the Raspberry Pi board in Raspbian.

RASPBIAN CONFIGURATION
For configuring the Raspberry Pi in Raspbian, we are using Raspbian with the PIXEL desktop; it is one of the best ways to get started with Raspbian on the Raspberry Pi. Once booting finishes, we will be in the PIXEL desktop environment.

Now to open the menu, you need to click the button that has the Raspberry
Pi logo on it. This button will be in the top left. After clicking the button,
choose Raspberry Pi configuration from the preferences.

CONFIGURATION TOOL

Following is the configuration tool in PIXEL desktop:


By default, the configuration tool opens to its system tab which has the
following options:

 Change Password: The default password is raspberry. You can change it by clicking the change password button.
 Change the hostname: The default name is raspberrypi. You can change it to the name you want the board to use on the network.
 Boot: You can choose from the two options and control whether the Raspberry Pi boots into the desktop or the CLI, i.e., the command line interface.
 Auto Login: With the help of this option, you can set whether the user should automatically log in or not.
 Network at Boot: By choosing this option, you can make the boot process wait until a network connection is available.
 Splash screen: You can enable or disable it. On enabling, it will
display the graphical splash screen that shows when Raspberry Pi is
booting.
 Resolution: With the help of this option, you can configure the
resolution of your screen.
 Underscan: There are two options, enable or disable. It is used to change the size of the displayed image to optimally fill the screen. If you see a black border around the screen, you should disable underscan; if your desktop does not fit your screen, you should enable it.

There are three other tabs, namely Interfaces, Performance, and Localization. The job of the Interfaces tab is to enable or disable various connection options on your Raspberry Pi.

You can enable the Pi camera from the interface tab. You can also set up a
secure connection between computers by using SSH (short for Secure
Shell) option.

If you want to remote access your Pi with a graphical interface then, you
can enable Real VNC software from this tab. SPI, I2C, Serial, 1-wire, and
Remote GPIO are some other interfaces you can use.

There is another tab called Performance, which will give you access to the
options for overclocking and changing the GPU memory.

The Localization tab, as the name implies, enables us to set:

 The character set used in our language.

 Our time zone.

 The keyboard setup as per our choice.

 Our Wi-Fi country.

CONFIGURE Wi-Fi

At the top right of the screen, there are icons for Bluetooth and Wi-Fi; the fan-shaped icon is the Wi-Fi icon. To configure your Wi-Fi, click on that icon. Once clicked, it opens a menu showing the available networks, along with an option to turn off your Wi-Fi.

Among those available networks, you need to select a network. After selecting, it will prompt for entering the Wi-Fi password, i.e., the Pre-Shared Key.
If you see a red cross on the icon, it means your connection has failed or dropped. To test whether your Wi-Fi is working correctly, open a web browser and visit a web page.

CONFIGURE BLUETOOTH DEVICES


We can use wireless Bluetooth devices such as a keyboard and/or mouse with the Pi 3 and Pi Zero W because these models are Bluetooth-enabled. In the PIXEL desktop, you can set up your Bluetooth devices easily.

Following are the steps to configure the Bluetooth devices:

 First, make your device discoverable for pairing.

 Now, you need to click on the Bluetooth menu at the top right of the screen, next to the Wi-Fi icon.
 Now, choose the Add Device option.

 The Raspberry will start searching for the devices and when it
finds your device, click it and click the pair button.

DATA PARTITION SETUP

As we know, the data partition is an area on your memory card (SD or MicroSD) that can be shared by various distributions. One of the best examples of the use of a data partition is transferring files between distributions.

The data partition has the label data.

You can use this label to make a directory point to the partition, as follows:

Step 1: First, you need to boot the Raspberry Pi into Raspbian.

Step 2: Now, click the Terminal icon to get to the command line.
Step 3: Next, type the command mkdir shared. It will create a directory
named shared.

Step 4: Write the command sudo mount -L data shared. This command
will point the directory to the shared partition.

Step 5: Write the command sudo chown $USER: shared. It will set the
permission for writing in this shared folder.

Step 6: Now, to go to this shared folder, you need to type the command cd
shared.

Once all the files are created in this shared folder, they will be available to
all the distributions that have the permission to access the data partition.
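As a quick illustration, once the partition is mounted at the shared directory, a file written there from Raspbian becomes visible to any other installed distribution that mounts the same data partition. The path and filename below are only examples, assuming the shared directory created in the steps above lives in the pi user's home folder.

# Write a note into the shared data partition (hypothetical path and filename)
with open('/home/pi/shared/note.txt', 'w') as note:
    note.write('Hello from Raspbian\n')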

4.3 IR SENSOR

An infrared sensor is an electronic device that emits infrared radiation in order to sense some aspect of the surroundings. An IR sensor can measure the heat of an object as well as detect motion. A sensor that only measures infrared radiation, rather than emitting it, is called a passive IR sensor. Usually, all objects radiate some form of thermal radiation in the infrared spectrum.

Fig 4.5 IR Sensor


These types of radiations are invisible to our eyes, which can be detected by
an infrared sensor. The emitter is simply an IR LED (Light Emitting Diode)
and the detector is simply an IR photodiode that is sensitive to IR light of
the same wavelength as that emitted by the IR LED. When IR light falls on
the photodiode, the resistances and the output voltages will change in
proportion to the magnitude of the IR light received.
Working Principle

The working principle of an infrared sensor is similar to that of an object-detection sensor. The sensor includes an IR LED and an IR photodiode; combining the two forms a photo-coupler, otherwise called an optocoupler. The physical laws used in this sensor are Planck's radiation law, the Stefan-Boltzmann law, and Wien's displacement law.

The IR LED is a kind of transmitter that emits IR radiation. This LED looks similar to a standard LED, but the radiation it generates is not visible to the human eye. Infrared receivers mainly detect the radiation from an infrared transmitter and are available in photodiode form. IR photodiodes differ from ordinary photodiodes in that they detect only IR radiation. Different kinds of infrared receivers exist depending on the voltage, wavelength, package, etc.

When an IR transmitter and receiver are used as a pair, the receiver's wavelength must match the transmitter's. Here, the transmitter is the IR LED and the receiver is the IR photodiode. The infrared photodiode is responsive to the infrared light generated by the infrared LED: its resistance and output voltage change in proportion to the infrared light received. This is the IR sensor's fundamental working principle.

When the infrared transmitter emits radiation, it reaches the object and some of the emission reflects back toward the infrared receiver. The sensor output is then determined by the IR receiver depending on
the intensity of the response.
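In this project's context, such a module is read on the Raspberry Pi as a simple digital input. The sketch below is a minimal illustration, not the final project code: it assumes an IR obstacle-sensor module whose OUT pin is wired to BCM pin 17 (an arbitrary choice) and that the module pulls its output low when reflected IR is detected, which is common but should be checked against the module's datasheet.

import time
import RPi.GPIO as GPIO

IR_PIN = 17                      # BCM pin wired to the sensor's OUT pin (assumed)

GPIO.setmode(GPIO.BCM)
GPIO.setup(IR_PIN, GPIO.IN)

try:
    while True:
        # Many modules drive OUT low when an object reflects IR back
        if GPIO.input(IR_PIN) == GPIO.LOW:
            print("Object detected")
        else:
            print("No object")
        time.sleep(0.5)
finally:
    GPIO.cleanup()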

Types of Infrared Sensor

Infrared sensors are classified into two types like active IR sensor and
passive IR sensor.

Active IR Sensor

This active infrared sensor includes both a transmitter and a receiver. In most applications, a light-emitting diode is used as the source: an LED is used in non-imaging infrared sensors, whereas a laser diode is used in imaging infrared sensors.

These sensors work by radiating energy, which is received and detected by the receiver. The signal can then be processed by a signal processor to extract the necessary information. The best examples of active infrared sensors are reflectance and break-beam sensors.

Passive IR Sensor

A passive infrared sensor includes only a detector; it does not include a transmitter. These sensors use the object itself as the IR source: the object emits energy that is detected by the infrared receiver. A signal processor is then used to interpret the signal and obtain the required information.

The best examples of this sensor are the pyroelectric detector, bolometer, thermocouple-thermopile, etc. These sensors are classified into two types: the thermal IR sensor and the quantum IR sensor. The thermal IR sensor does not depend on wavelength; its energy source is heat, and thermal detectors are slow in response and detection time. The quantum IR sensor depends on wavelength, offers fast response and detection times, and needs regular cooling for precise measurements.

4.4 SERVO MOTOR

The servo motor is an assembly of four things: a normal DC motor, a gear reduction unit, a position-sensing device, and a control circuit. The DC
motor is connected with a gear mechanism that provides feedback to a
position sensor which is mostly a potentiometer. From the gearbox, the
output of the motor is delivered via servo spline to the servo arm. For
standard servo motors, the gear is normally made up of plastic whereas, for
high power servos, the gear is made up of metal.

Fig 4.8 Servo Motor

A servo motor consists of three wires: a black wire connected to ground, a white/yellow wire connected to the control unit, and a red wire connected to the power supply.

The function of the servo motor is to receive a control signal that represents a desired output position of the servo shaft and apply
power to its DC motor until its shaft turns to that position.
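In practice, this control signal is a PWM waveform. The following sketch shows one possible way to drive a hobby servo from the Raspberry Pi with the RPi.GPIO library; the pin number (BCM 18) and duty-cycle values are illustrative assumptions for a standard 50 Hz servo with roughly 1-2 ms pulses, not the exact settings used in the machine.

import time
import RPi.GPIO as GPIO

SERVO_PIN = 18                   # BCM pin carrying the control signal (assumed)

GPIO.setmode(GPIO.BCM)
GPIO.setup(SERVO_PIN, GPIO.OUT)

pwm = GPIO.PWM(SERVO_PIN, 50)    # 50 Hz control signal
pwm.start(7.5)                   # roughly the centre position
time.sleep(1)

try:
    pwm.ChangeDutyCycle(5.0)     # swing towards one end of travel
    time.sleep(1)
    pwm.ChangeDutyCycle(10.0)    # swing towards the other end
    time.sleep(1)
finally:
    pwm.stop()
    GPIO.cleanup()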
ADVANTAGES

 High efficiency.
 High output power relative to their size.
 More constant torque at higher speed.
 Closed-loop control.
 Quiet operation.
 Highly reliable.
 High ratio of torque to inertia.
 High acceleration.
4.5 LCD

LCD (liquid crystal display) is the technology used for displays in notebooks and other smaller computers. Like light-emitting diode (LED) and gas-plasma technologies, LCDs allow displays to be much thinner than cathode ray tube (CRT) technology. LCDs consume much less power than LED and gas-plasma displays because they work on the principle of blocking light rather than emitting it.

Fig 4.10 Liquid Crystal Display

FEATURES

 E-blocks compatible
 Low cost
 Compatible with most I/O ports in the E-Block range (requires 5 I/O
lines via 9-way D-type connector)
 Easy to develop programming code using Flowcode icons

Fundamentals of Liquid Crystal Displays


The term liquid crystal is used to describe a substance in a state
between liquid and solid but which exhibits the properties of both.
Molecules in liquid crystals tend to arrange themselves until they all point
in the same specific direction. This arrangement of molecules enables the
medium to flow as a liquid. Depending on the temperature and particular
nature of a substance, liquid crystals can exist in one of several distinct
phases. Liquid crystals in a nematic phase, in which there is no spatial
ordering of the molecules, for example, are used in LCD technology.

One important feature of liquid crystals is the fact that an electrical current affects them. A particular sort of nematic liquid crystal, called
twisted nematics (TN), is naturally twisted. Applying an electric current to
these liquid crystals will untwist them to varying degrees, depending on the
current's voltage. LCDs use these liquid crystals because they react
predictably to electric current in such a way as to control the passage of
light.
The working of a simple LCD is shown in Figure 1. It has a mirror (A)
in back, which makes it reflective. There is a piece of glass (B) with a
polarizing film on the bottom side, and a common electrode plane (C) made
of indium-tin oxide on top. A common electrode plane covers the entire
area of the LCD. Above that is the layer of liquid crystal substance (D).
Next comes another piece of glass (E) with an electrode in the shape of the
rectangle on the bottom and, on top, another polarizing film (F), at a right
angle to the first one.
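A token or "thank you" message can be written to such a display with a few lines of Python. The sketch below is a hedged example only: it assumes a 16x2 HD44780-compatible LCD driven through a PCF8574 I2C backpack at address 0x27 and the third-party RPLCD library; the wiring, address, and message text are not taken from this project's exact hardware.

from time import sleep
from RPLCD.i2c import CharLCD

# I2C expander type, address, and display size are illustrative assumptions
lcd = CharLCD('PCF8574', 0x27, cols=16, rows=2)

lcd.clear()
lcd.write_string('Thank you!')       # first line
lcd.cursor_pos = (1, 0)              # move to the second line
lcd.write_string('Token: 1 point')   # example reward message
sleep(5)
lcd.clear()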

4.9 BUZZERS

A buzzer works like an alarm. Whenever the switch button is pressed, it gives an alarm-like sound output and the machine is activated. The buzzer has two pins: the negative end is connected to a data pin of the microcontroller and the positive end is connected to Vcc.

Fig 4.11 Buzzer
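With the wiring described above (positive lead to Vcc, negative lead to a GPIO data pin), the buzzer sounds when the pin is driven low. The short sketch below illustrates a single beep; the pin number (BCM 27) is an assumption for the example, not the project's fixed wiring.

import time
import RPi.GPIO as GPIO

BUZZER_PIN = 27                                       # data pin on the negative lead (assumed)

GPIO.setmode(GPIO.BCM)
GPIO.setup(BUZZER_PIN, GPIO.OUT, initial=GPIO.HIGH)   # HIGH keeps the buzzer silent

try:
    GPIO.output(BUZZER_PIN, GPIO.LOW)                 # pull low: buzzer on
    time.sleep(0.5)
    GPIO.output(BUZZER_PIN, GPIO.HIGH)                # back high: buzzer off
finally:
    GPIO.cleanup()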


CHAPTER 5

SOFTWARE COMPONENTS

5.1 PYTHON IDE

SETTING UP PYTHON

This section explains how to set up Python. You could read through it without ever touching a keyboard, but you would miss out on the fun part: coding!
To get the most out of it, you need a computer with Python installed on it and a way to create, edit, and save Python code files.
IN THIS SECTION, YOU'LL LEARN HOW TO:
 Install the latest version of Python 3 on your computer

 Open IDLE, Python's built-in Integrated Development and Learning Environment

Let's get started!

5.1.1 A Note on Python Versions

Many operating systems, including macOS and Linux, come with Python
preinstalled. The version of Python that comes with your operating system
is called the system Python.

The system Python is used by your operating system and is usually out of
date. It’s essential that you have the most recent version of Python so that
you can successfully follow along with the examples in this section.

Important

Do not attempt to uninstall the system Python!


You can have multiple versions of Python installed on your computer. In
this chapter, you’ll install the latest version of Python 3 alongside any
system Python that may already exist on your machine.

Note
Even if you already have Python 3.9 installed, it's still a good idea to skim this section to double-check that your environment is set up for following along with the examples here.

This section is split into three parts: Windows, macOS, and Ubuntu Linux. Find the part for your operating system and follow the steps to get set up.

If you have a different operating system, then check out Real Python’s
“Python 3 Installation & Setup Guide” to see if your OS is covered.
Readers on tablets and mobile devices can refer to the “Online Python
Interpreters” section for some browser-based options.

WINDOWS

Follow these steps to install Python 3 and open IDLE on Windows.

Important

The code examples here are tested only against Python installed as described in this section.

Be aware that if you have installed Python through some other means, such as Anaconda Python, you may encounter problems when running some of the code examples.

INSTALL PYTHON

Windows doesn't typically come with a system Python. Fortunately, installation involves little more than downloading and running the Python installer from the Python.org website.
Step 1: Download the Python 3 Installer

Open a web browser and navigate to the following URL:


https://www.python.org/downloads/windows/
Click Latest Python 3 Release - Python 3.x.x located beneath the “Python
Releases for Windows” heading near the top of the page. As of this writing,
the latest version was Python 3.9.

Then scroll to the bottom and click Windows x86-64 executable installer
to start the download.

Note
If your system has a 32-bit processor, then you should choose the 32-
bit installer. If you aren’t sure if your computer is 32-bit or 64-bit,
stick with the 64-bit installer mentioned above.

Step 2: Run the Installer

Open your Downloads folder in Windows Explorer and double-click the file to
run the installer. A dialog that looks like the following one will appear:
It's okay if the Python version you see is greater than 3.9.0, as long as it is not less than 3.9.
Important

Make sure you select the box that says Add Python 3.x to PATH. If you
install Python without selecting this box, then you can run the
installer again and select it.

Click Install Now to install Python 3. Wait for the installation to finish,
then continue to open IDLE.

OPEN IDLE

You can open IDLE in two steps:

1. Click the Start menu and locate the Python 3.9 folder.

2. Open the folder and select IDLE (Python 3.9).

IDLE opens a Python shell in a new window. The Python shell is an interactive environment that allows you to type in Python code and execute it immediately. It's a great way to get started with Python!

Note
While you're free to use a code editor other than IDLE if you prefer, the examples in this section assume IDLE.

The Python shell window looks like this:


At the top of the window, you can see the version of Python that is running
and some information about the operating system. If you see a version less
than 3.9, then you may need to revisit the installation instructions in the
previous section.

The >>> symbol that you see is called a prompt. Whenever you see this, it
means that Python is waiting for you to give it some instructions.
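For example, typing a single line at the prompt and pressing Enter runs it immediately (a trivial illustration):

>>> print("Hello, Raspberry Pi")
Hello, Raspberry Pi
>>> 2 + 3
5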


Now that you have Python installed, let's get straight into writing your first Python program!

macOS

Follow these steps to install Python 3 and open IDLE on macOS.

Important

The code examples here are tested only against Python installed as described in this section.

Be aware that if you have installed Python through some other means, such as Anaconda Python, you may encounter problems when running some of the code examples.

INSTALL PYTHON

To install the latest version of Python 3 on macOS, download and run the
official installer from the Python.org website.
Step 1: Download the Python 3 Installer

Open a web browser and navigate to the following URL:


https://www.python.org/downloads/mac-osx/

Click Latest Python 3 Release - Python 3.x.x located beneath the “Python
Releases for Mac OS X” heading near the top of the page. As of this
writing, the latest version was Python 3.9.

Then scroll to the bottom of the page and click macOS 64-bit installer

to start the download.

Step 2: Run the Installer

Open Finder and double-click the downloaded file to run the installer. A
dialog box that looks like the following will appear:

Press Continue a few times until you are asked to agree to the software
license agreement. Then click Agree.
You’ll be shown a window that tells you where Python will be installed and
how much space it will take. You most likely don’t want to change the
default location, so go ahead and click Install to start the installation.

When the installer is finished copying files, click Close to close the
installer window.

OPEN IDLE

You can open IDLE in three steps:

1. Open Finder and click Applications.

2. Double-click the Python 3.9 folder.

3. Double-click the IDLE icon.

IDLE opens a Python shell in a new window. The Python shell is an interactive environment that allows you to type in Python code and execute it immediately. It's a great way to get started with Python!

Note
While you're free to use a code editor other than IDLE if you prefer, the examples in this section assume IDLE.

The Python shell window looks like this:


At the top of the window, you can see the version of Python that is running
and some information about the operating system. If you see a version less
than 3.9, then you may need to revisit the installation instructions in the
previous section.

The >>> symbol that you see is called a prompt. Whenever you see this, it
means that Python is waiting for you to give it some instructions.


Now that you have Python installed, let's get straight into writing your first Python program!

UBUNTU LINUX

Follow these steps to install Python 3 and open IDLE on Ubuntu Linux.

Important

The code examples here are tested only against Python installed as described in this section.

Be aware that if you have installed Python through some other means, such as Anaconda Python, you may encounter problems when running some of the code examples.

INSTALL PYTHON

There’s a good chance that your Ubuntu distribution already has Python
installed, but it probably won’t be the latest version, and it may be Python 2
instead of Python 3.

To find out what version(s) you have, open a terminal window and try the
following commands:

$ python --version

$ python3 --version

One or more of these commands should respond with a version, as below:

$ python3 --version
Python 3.9.0

Your version number may vary. If the version shown is Python 2.x or a
version of Python 3 that is less than 3.9, then you want to install the
latest version. How you install Python on Ubuntu depends on which
version of Ubuntu you’re running. You can determine your local Ubuntu
version by running the following command:

$ lsb_release -a

No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 18.04.1 LTS
Release:        18.04
Codename:       bionic

Look at the version number next to Release in the console output, and
follow the corresponding instructions below.

Ubuntu 18.04 or Greater

Ubuntu version 18.04 does not come with Python 3.9 by default, but it is in the Universe repository. You can install it with the following commands in the Terminal application:

$ sudo apt-get update
$ sudo apt-get install python3.9 idle-python3.9 python3-pip

Note that because the Universe repository is usually behind the Python release schedule, you may not get the latest version of Python 3.9. However, any version of Python 3.9 will work for the examples here.

Ubuntu 17 and Lower

For Ubuntu versions 17 and lower, Python 3.9 is not in the Universe
repository. You need to get it from a Personal Package Archive (PPA). To
install Python from the deadsnakes PPA, run the following commands in
the Terminal application:

$ sudo add-apt-repository ppa:deadsnakes/ppa

$ sudo apt-get update

$ sudo apt-get install python3.9 idle-python3.9 python3-pip

You can check that the correct version of Python was installed by running
python3 --version. If you see a version number less than 3.9, then you may
need to type python3.9 --version. Now you can open IDLE and get ready
to write your first Python program.

OPEN IDLE
You can open IDLE from the command line by typing the following:

$ idle-python3.9

On some Linux installations, you can open IDLE with the following
shortened command:

$ idle3
IDLE opens a Python shell in a new window. The Python shell is an
interactive environment that allows you to type in Python code and execute
it immediately. It’s a great way to get started with Python!

Note
While you're free to use a code editor other than IDLE if you prefer, the examples in this section assume IDLE.

The Python shell window looks like this:

At the top of the window, you can see the version of Python that is running
and some information about the operating system. If you see a version less
than 3.9, then you may need to revisit the installation instructions in the
previous section.

Important

If you open IDLE with the idle3 command and see a version less
than 3.9 displayed in the Python shell window, then you’ll need to
open IDLE with the idle-python3.9 command.

The >>> symbol that you see in the IDLE window is called a prompt.
Whenever you see this, it means that Python is waiting for you to give it
some instructions.

Now that you have Python installed, let's get straight into writing your first Python program!

5.2 PROTEUS

5.2.1 INTRODUCTION:

We often hear the words PCB, PCB layout, PCB designing, and so on. But what is a PCB, and why do we use it? As electronics engineers, we need to know about these things. PCB means Printed Circuit Board: a circuit board with printed copper layout connections. PCBs are of two types: one is the dotted PCB and the other is the layout PCB. Examples of the two are shown below.

Fig 4.1 Dotted PCB and Layout PCB

What is the main difference between the dotted PCB and layout PCB?

A dotted PCB has only a grid of holes. According to our requirement, we insert the components into those holes and attach them with wires and solder. On a dotted PCB we can build any circuit we wish, but it is very hard to design: there are many difficulties, such as connecting the proper pins and avoiding short circuits. The layout PCB, in contrast, is simple to design. First we select our circuit, design its layout using PCB design software, prepare the copper layout of the circuit by the etching process, and then solder the components in the correct places. It is simple to design, takes less time, avoids shorts, and looks neat and tidy.

So far we have discussed the types of PCBs and the differences between them. Now we can discuss PCB design software. There are many PCB design packages available, such as Express PCB, Eagle PCB, PCB Elegance, Free PCB, Open Circuit Design, Zenith PCB, and Proteus. Among these, Proteus stands apart: it is a design suite and PCB layout package in which we can design any circuit, simulate it, and make a PCB layout for it.

5.2.2 Introduction to Proteus:

Proteus Professional is a software suite combining the ISIS schematic capture program and the ARES PCB layout program. It is a powerful, integrated development environment. The tools in this suite are very easy to use and are very useful in education and professional PCB design.

It is professional PCB design software with an integrated shape-based auto-router. To wire up a circuit in the schematic editor, place the cursor at a component pin end and draw the connections with the pen symbol. Connect all the components according to the circuit; the completed design is shown in the image below.

To modify a component, place the mouse pointer on it and click the right button; an options window will open, as shown in the figure below.

After completing the design, save it with a name and debug it. This is a virtual simulation: without building the circuit, we can see the result virtually through this software, and we can also design the PCB layout for the required circuit.
CHAPTER 6

RESULT AND DISCUSSION

PLACE THE KIT IMAGE AND OUTPUT


CHAPTER 7

ADVANTAGES AND APPLICATIONS

7.1 ADVANTAGES

7.2 APPLICATIONS
CHAPTER 8

CONCLUSION AND FUTURE SCOPE

Usage of Reverse Vending Machines in cities is a perfect way to solve various problems and gain substantial benefits. The common point is that
improving the ecology is a primary goal for the society today. The recycling
of waste contributes greatly to the cause, and, therefore, implementing
effective garbage collection systems can make an important positive impact
on the ecological situation in the cities. Apart from the overall cleanness of
the city, it is also important to point out the reward-based system for the
citizens and the public image benefits of implementing the systems. Last, but not least, there is the economic viability of these activities: the income created by selling the collected empty beverage containers for further recycling is capable of covering all the costs of implementation and creating a sustainable profit flow.

While the developed system currently has lower recognition accuracy than
the tested commercial reverse vending machines due to lighting issues, the
return speed is already competitive. The results confirm the superiority of
the camera-based system over traditional recognition methods, typically
based on a beverage container rotation combined with a single
omnidirectional laser-based barcode reader. In addition, the user experience
is improved, as the beverage containers can be returned at a high rate, either top or bottom first, unlike with the existing rotation-based systems. Furthermore, the developed recognition unit design simplifies the mechanics of the reverse vending machine. The camera-based system and the chute have no moving parts, making the system virtually maintenance-free.
CHAPTER 9

REFERENCE

1) https://www.acorecycling.com/b-1-smart-reverse-vending-machine
2) https://www.snapmunk.com/reverse-vending-machine-recycling
3) https://www.raspberrypi.org/documentation/configuration/raspi-config.md
4) https://towardsdatascience.com/r-cnn-fast-r-cnn-faster-r-cnn-yolo-object-detection-algorithms-36d53571365e
5) https://docs.fast.ai/
6) https://ch.mathworks.com/help/deeplearning/ref/resnet18.html
7) https://en.wikipedia.org/wiki/AlexNet
8) REXAM, "Annual report 2013" (cited Feb. 7th 2015). Available: http://www.rexam.com/files/reports/2013ar/assets/downloads/Rexam_Annual_Report_2013.pdf
9) The Bottle Bill Resource Guide, "Beverage Container Deposit Laws Worldwide" (cited Feb. 10th 2015). Available: http://www.bottlebill.org/legislation/world.htm
10) PALPA, "Kaikki kiertää – Palpan vuosijulkaisu 2013" (cited Feb. 10th 2015). Available: http://www.digipaper.fi/palautuspakkaus/120034/
11) Infinitum, "Deposit facts of 2013" (cited Feb. 10th 2015). Available: http://infinitum.no/english/deposit-facts-of-2013
12) K. Holmen, J. S. Rognhaug, and J. Rype, "Device for handling liquid container," U.S. Patent 6,678,578, January 13, 2004.
13) A. Nordbryhn, "Device for generating, detecting and recognizing a contour image of a liquid container," U.S. Patent 5,898,169, April 27, 1999.
