Credit Card OCR with OpenCV and Python
Today’s blog post is a continuation of our recent series on Optical Character Recognition (OCR) and computer vision.
In a previous blog post, we learned how to install the Tesseract binary and use it for OCR. We then learned how to cleanup images using basic
image processing techniques to improve the output of Tesseract OCR.
However, as I’ve mentioned multiple times in these previous posts, Tesseract should not be considered a general, off-the-shelf solution for
Optical Character Recognition capable of obtaining high accuracy.
In some cases, it will work great — and in others, it will fail miserably.
A great example of such a use case is credit card recognition, where given an input image, we wish to:
1. Detect the location of the credit card in the image.
2. Localize the four groupings of four digits that make up the sixteen-digit card number.
3. Apply OCR to recognize each of the sixteen digits.
4. Recognize the type of credit card (i.e., Visa, MasterCard, American Express, etc.).
In these cases, the Tesseract library is unable to correctly identify the digits (this is likely due to Tesseract not being trained on credit card
example fonts). Therefore, we need to devise our own custom solution to OCR credit cards.
In today’s blog post I’ll be demonstrating how we can use template matching as a form of OCR to help us create a solution to automatically
recognize credit cards and extract the associated credit card digits from images.
To learn more about using template matching for OCR with OpenCV and Python, just keep reading.
Finally, we’ll look at some examples of applying our credit card OCR algorithm to actual images.
The OCR-A font was designed in the late 1960s such that both (1) the OCR algorithms of that era and (2) humans could easily recognize the
characters. The font is backed by standards organizations, including ANSI and ISO, among others.
In fact, there are quite a few fonts designed specifically for OCR including OCR-B and MICR E-13B.
While you might not write a paper check too often these days, the next time you do, you’ll see the MICR E-13B font used at the bottom
containing your routing and account numbers. MICR stands for Magnetic Ink Character Recognition code. Magnetic sensors, cameras, and
scanners all read your checks regularly.
Each of the above fonts has one thing in common — they are designed for easy OCR.
For this tutorial, we will make a template matching system for the OCR-A font, commonly found on the front of credit/debit cards.
In order to accomplish this, we’ll need to apply a number of image processing operations, including thresholding, computing gradient magnitude
representations, morphological operations, and contour extraction. These techniques have been used in other blog posts to detect barcodes in
images and recognize machine-readable zones in passport images.
Since there will be many image processing operations applied to help us detect and extract the credit card digits, I’ve included numerous
intermediate screenshots of the input image as it passes through our image processing pipeline.
These additional screenshots will give you extra insight as to how we are able to chain together basic image processing techniques to build a
solution to a computer vision project.
Lines 1-6 handle importing the packages for this script. You will need to install OpenCV and imutils if you don't already have them installed on your
machine. Template matching has been around a while in OpenCV, so your version (v2.4, v3.*, etc.) will likely work.
Note: If you are using Python virtual environments (as all of my OpenCV install tutorials do), make sure you activate your virtual environment first
(e.g., via the workon command) and then install/upgrade the packages inside it.
Now that we’ve installed and imported packages, we can parse our command line arguments:
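A sketch of the imports, argument parsing, and card type lookup table might look like the following (variable names such as args and FIRST_NUMBER are illustrative, and the line numbers cited in the text refer to the full script rather than to these short snippets):

```python
# import the necessary packages
from imutils import contours
import numpy as np
import argparse
import imutils
import cv2

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True,
    help="path to input image")
ap.add_argument("-r", "--reference", required=True,
    help="path to reference OCR-A image")
args = vars(ap.parse_args())

# map the first digit of the card number to the credit card type
FIRST_NUMBER = {
    "3": "American Express",
    "4": "Visa",
    "5": "MasterCard",
    "6": "Discover Card"
}
```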
On Lines 8-14 we establish an argument parser, add two arguments, and parse them, storing the parsed arguments in a variable for later use.
Credit card types, such as American Express, Visa, etc., can be identified by examining the first digit in the 16 digit credit card number.
On Lines 16-23 we define a dictionary which maps that first digit to the corresponding credit card type.
Let’s start our image processing pipeline by loading the reference OCR-A image:
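A sketch of this step (the threshold value and variable names are illustrative; the OpenCV 2.4/3 difference mentioned below is handled here by imutils.grab_contours):

```python
# load the reference OCR-A image, convert it to grayscale, and threshold
# it so the digits appear as white on a black background
ref = cv2.imread(args["reference"])
ref = cv2.cvtColor(ref, cv2.COLOR_BGR2GRAY)
ref = cv2.threshold(ref, 10, 255, cv2.THRESH_BINARY_INV)[1]

# find the contours of the ten digits, normalize the return value across
# OpenCV versions, and sort the contours from left to right
refCnts = cv2.findContours(ref.copy(), cv2.RETR_EXTERNAL,
    cv2.CHAIN_APPROX_SIMPLE)
refCnts = imutils.grab_contours(refCnts)
refCnts = contours.sort_contours(refCnts, method="left-to-right")[0]

# dictionary mapping each digit name (0-9) to its template ROI
digits = {}
```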
First, we load the OCR-A image (Line 29), convert it to grayscale (Line 30), and threshold + invert it (Line 31).
In each of these operations we store or overwrite the same variable, our reference image.
On Lines 36 and 37 we find the contours present in the image. Then, because OpenCV 2.4 and OpenCV 3 store the returned contour
information differently, we check the version and make the appropriate adjustment to the contour list on Line 38.
Next, we sort the contours from left to right and initialize a dictionary that maps each digit name to its region of interest (Lines
39 and 40).
At this point, we should loop through the contours, extract, and associate ROIs with their corresponding digits:
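A sketch of that loop, continuing with the refCnts and digits variables from the previous snippet:

```python
# loop over the sorted reference contours and build the digit templates
for (i, c) in enumerate(refCnts):
    # compute the bounding box for the digit and extract its ROI
    (x, y, w, h) = cv2.boundingRect(c)
    roi = ref[y:y + h, x:x + w]

    # resize every template to a fixed 57x88 pixels so template matching
    # can compare like with like later on
    roi = cv2.resize(roi, (57, 88))

    # the contours are sorted left-to-right, so index i is the digit name
    digits[i] = roi
```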
On Line 43 we loop through the reference image contours; in the loop we track both the digit name/number and the corresponding contour.
We compute a bounding box around each contour (Line 46), storing the (x, y)-coordinates and width/height of the rectangle.
On Line 47 we extract the ROI from the reference image using the bounding rectangle parameters. This ROI contains the digit. We
resize each ROI on Line 48 to a fixed size of 57×88 pixels. We need to ensure every digit is resized to a fixed size in order to apply template
matching for digit recognition later in this tutorial.
We associate each digit 0-9 (the dictionary keys) with its ROI image (the dictionary values) on Line 51.
At this point, we are done extracting the digits from our reference image and associating them with their corresponding digit name.
Our next goal is to isolate the 16-digit credit card number in the input image. We need to find and isolate the numbers before we can initiate
template matching to identify each of the digits. These image processing steps are quite interesting and insightful, especially if you have never
developed an image processing pipeline before, so be sure to pay close attention.
You can think of a kernel as a small matrix that we slide across the image to perform (convolution) operations such as blurring, sharpening, edge
detection, or other image processing operations.
On Lines 55 and 56 we construct two such kernels — one rectangular and one square. We will use the rectangular one for a Top-hat
morphological operator and the square one for a closing operation. We’ll see these in action shortly.
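A sketch of the kernel construction, along with loading, resizing, and grayscaling the input image shown in Figures 5 and 6 (the kernel sizes and the resize width are illustrative choices):

```python
# build a rectangular kernel (for the Top-hat) and a square kernel
# (for a closing operation)
rectKernel = cv2.getStructuringElement(cv2.MORPH_RECT, (9, 3))
sqKernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))

# load the input image, resize it to a consistent width, and grayscale it
image = cv2.imread(args["image"])
image = imutils.resize(image, width=300)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
```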
Figure 5: The example input credit card image that we will be OCR’ing in the rest of this tutorial.
Figure 6: Converting the image to grayscale is a requirement prior to applying the rest of our
image processing pipeline.
Now that our image is grayscaled and the size is consistent, let’s perform a morphological operation:
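A sketch of the Top-hat step, continuing with the gray image and rectangular kernel from above:

```python
# a Top-hat (white hat) operation reveals light regions (the digits)
# against the dark card background
tophat = cv2.morphologyEx(gray, cv2.MORPH_TOPHAT, rectKernel)
```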
Using our rectangular kernel and our grayscale image, we perform a Top-hat morphological operation, storing the result as our tophat image (Line 65).
The Top-hat operation reveals light regions against a dark background (i.e. the credit card numbers) as you can see in the resulting image
below:
Figure 7: Applying a tophat operation reveals light regions (i.e., the credit card digits) against a
dark background.
Given our tophat image, let's compute the gradient along the x-direction:
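A sketch of the gradient computation and min/max normalization described below (passing ksize=-1 to cv2.Sobel uses the 3×3 Scharr kernel):

```python
# compute the Scharr gradient of the tophat image along the x-axis
gradX = cv2.Sobel(tophat, ddepth=cv2.CV_32F, dx=1, dy=0, ksize=-1)
gradX = np.absolute(gradX)

# min/max normalize the floating point gradient back into [0, 255]
# and convert it to an unsigned 8-bit integer image
(minVal, maxVal) = (np.min(gradX), np.max(gradX))
gradX = 255 * ((gradX - minVal) / (maxVal - minVal))
gradX = gradX.astype("uint8")
```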
The next step in our effort to isolate the digits is to compute a Scharr gradient of the tophat image in the x-direction. We complete this
computation on Lines 69 and 70, storing the result as our gradient image.
After computing the absolute value of each element in the array, we take some steps to scale the values into the range [0, 255] (as the
image is currently a floating-point data type). To do this we compute the minimum and maximum of the gradient (Line 72), followed by the scaling
equation shown on Line 73 (i.e., min/max normalization). The last step is to convert the gradient to an unsigned 8-bit integer image, which has a range of [0, 255] (Line
74).
Figure 8: Computing the Scharr gradient magnitude representation of the image reveals vertical
changes in the gradient.
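The pipeline moves from the gradient image in Figure 8 to the thresholded image in Figure 9 by closing the gaps between neighboring digits and then binarizing the result. A sketch of that bridging step (which kernel is used for each closing is my assumption):

```python
# close the gaps between digits so each group of four becomes one blob,
# threshold with Otsu's method, then close again with the square kernel
gradX = cv2.morphologyEx(gradX, cv2.MORPH_CLOSE, rectKernel)
thresh = cv2.threshold(gradX, 0, 255,
    cv2.THRESH_BINARY | cv2.THRESH_OTSU)[1]
thresh = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, sqKernel)
```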
Figure 9: Thresholding our gradient magnitude representation reveals candidate regions for the
credit card numbers we are going to OCR.
Next, let's find the contours and initialize the list of digit grouping locations:
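A sketch of this step (the locs name is illustrative):

```python
# find contours in the thresholded image and initialize the list that
# will hold the (x, y, w, h) location of each four-digit group
cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL,
    cv2.CHAIN_APPROX_SIMPLE)
cnts = imutils.grab_contours(cnts)
locs = []
```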
On Lines 89-91 we find the contours and store them in a list. Then, we initialize a list to hold the digit group locations on Line 92.
Now let’s loop through the contours while filtering based on the aspect ratio of each, allowing us to prune the digit group locations from other,
irrelevant areas of the credit card:
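A sketch of the filtering loop using the aspect ratio and size ranges given below:

```python
# loop over the candidate contours and keep only those whose shape
# looks like a group of four digits
for c in cnts:
    # compute the bounding box and its aspect ratio (width / height)
    (x, y, w, h) = cv2.boundingRect(c)
    ar = w / float(h)

    # digit groups are wider than they are tall; the pixel ranges were
    # tuned for cards resized to a width of 300 pixels
    if ar > 2.5 and ar < 4.0:
        if (w > 40 and w < 55) and (h > 10 and h < 20):
            locs.append((x, y, w, h))
```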
On Line 95 we loop through the contours the same way we did for the reference image. After computing the bounding rectangle for each
contour (Line 98), we calculate the aspect ratio by dividing the width by the height (Line 99).
Using the aspect ratio, we analyze the shape of each contour. If the aspect ratio is between 2.5 and 4.0 (wider than it is tall), and the width is between 40
and 55 pixels and the height is between 10 and 20 pixels, we append the bounding rectangle parameters as a convenient tuple to our list of group locations (Lines 101-
110).
Note: These values for the aspect ratio and the width and height ranges were found experimentally on my set of input credit card images.
You may need to change these values for your own applications.
The following image shows the groupings that we have found — for demonstration purposes, I had OpenCV draw a bounding box around each
group:
Figure 10: Highlighting the four groups of four digits (sixteen overall) on a credit card.
Next, we’ll sort the groupings from left to right and initialize a list for the credit card digits:
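A sketch of the sorting and initialization:

```python
# sort the digit group locations from left to right by x-coordinate and
# initialize the list that will collect the full card number
locs = sorted(locs, key=lambda loc: loc[0])
output = []
```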
On Line 114 we sort the groupings according to their x-value so they will be ordered from left to right.
We initialize a list which will hold the image's credit card number on Line 115.
Now that we know where each group of four digits is, let’s loop through the four sorted groupings and determine the digits therein.
This loop is rather long and is broken down into three code blocks — here is the first block:
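A sketch of that first block (the 5-pixel padding and Otsu thresholding follow the description below):

```python
# loop over the four digit groups
for (gX, gY, gW, gH) in locs:
    # initialize the list of digits predicted for this group
    groupOutput = []

    # extract the group ROI with 5 extra pixels of padding on each side,
    # then binarize it with Otsu's method
    group = gray[gY - 5:gY + gH + 5, gX - 5:gX + gW + 5]
    group = cv2.threshold(group, 0, 255,
        cv2.THRESH_BINARY | cv2.THRESH_OTSU)[1]

    # find the contours of the individual digits in the group and sort
    # them from left to right
    digitCnts = cv2.findContours(group.copy(), cv2.RETR_EXTERNAL,
        cv2.CHAIN_APPROX_SIMPLE)
    digitCnts = imutils.grab_contours(digitCnts)
    digitCnts = contours.sort_contours(digitCnts,
        method="left-to-right")[0]
```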
In the first block for this loop, we extract and pad the group by 5 pixels on each side (Line 125), apply thresholding (Lines 126 and 127), and
find and sort contours (Lines 129-135). For the details, be sure to refer to the code.
Let’s continue the loop with a nested loop to do the template matching and similarity score extraction:
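A sketch of the nested matching loop, continuing inside the group loop from the previous snippet:

```python
    # loop over the digit contours within the current group
    for c in digitCnts:
        # extract the digit ROI and resize it to the template size
        (x, y, w, h) = cv2.boundingRect(c)
        roi = group[y:y + h, x:x + w]
        roi = cv2.resize(roi, (57, 88))

        # score the ROI against each of the ten reference templates
        scores = []
        for (digit, digitROI) in digits.items():
            result = cv2.matchTemplate(roi, digitROI, cv2.TM_CCOEFF)
            (_, score, _, _) = cv2.minMaxLoc(result)
            scores.append(score)

        # the template with the largest score is the predicted digit
        # (the templates were stored in order, 0 through 9)
        groupOutput.append(str(np.argmax(scores)))
```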
Using the bounding box of each digit contour, we obtain the parameters necessary to extract an ROI containing each digit (Lines 142 and 143). In order for template
matching to work with some degree of accuracy, we resize this ROI to the same size as our reference OCR-A font digit images (57×88 pixels)
on Line 144.
We initialize a list of scores on Line 147. Think of this as our confidence score — the higher it is, the more likely it is the correct template.
Now, let’s loop (third nested loop) through each reference digit and perform template matching. This is where the heavy lifting is done for
this script.
OpenCV has a handy function called cv2.matchTemplate in which you supply two images: one being the template and the other being the
input image. The goal of applying cv2.matchTemplate to these two images is to determine how similar they are.
In this case we supply the reference digit image and the ROI from the credit card containing a candidate digit. Using these two images
we call the template matching function and store the result (Lines 153 and 154).
Next, we extract the score from the result (Line 155) and append it to our list of scores (Line 156). This completes the inner-most loop.
Using the scores (one for each digit 0-9), we take the maximum score — the maximum score should be our correctly identified digit. We find the
digit with the max score on Line 160, grabbing the index of the largest score. The integer name of that index represents the most likely digit
based on the comparisons to each template (again, keeping in mind that the indexes are already pre-sorted 0-9).
Finally, let’s draw a rectangle around each group and view the credit card number on the image in red text:
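A sketch of the final block of the group loop:

```python
    # draw the (padded) group bounding box and the recognized digits on
    # the original image in red
    cv2.rectangle(image, (gX - 5, gY - 5),
        (gX + gW + 5, gY + gH + 5), (0, 0, 255), 2)
    cv2.putText(image, "".join(groupOutput), (gX, gY - 15),
        cv2.FONT_HERSHEY_SIMPLEX, 0.65, (0, 0, 255), 2)

    # extend (not append) so each digit is added to output individually
    output.extend(groupOutput)
```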
For the third and final block of this loop, we draw a 5-pixel padded rectangle around the group (Lines 163 and 164), followed by drawing the
text on the screen (Lines 165 and 166).
The last step is to append the digits to the output list. The Pythonic way to do this is to use the extend method, which appends each element
of an iterable object (a list in this case) to the end of the list.
To see how well the script performs, let’s output the results to the terminal and display our image on the screen.
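A sketch of the reporting step (the exact print formatting is illustrative):

```python
# display the card type (looked up from the first digit) and the card
# number, then show the annotated image until a key is pressed
print("Credit Card Type: {}".format(FIRST_NUMBER[output[0]]))
print("Credit Card #: {}".format("".join(output)))
cv2.imshow("Image", image)
cv2.waitKey(0)
```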
Line 172 prints the credit card type to the console, followed by printing the credit card number on the subsequent Line 173.
On the last lines, we display the image on the screen and wait for any key to be pressed before exiting the script (Lines 174 and 175).
Take a second to congratulate yourself — you made it to the end. To recap (at a high level), this script localizes the four groups of four digits on the card, extracts each individual digit, recognizes every digit via template matching against the OCR-A reference digits, and uses the first digit to determine the issuing company.
It’s now time to see the script in action and check on our results.
We obviously cannot use real credit card numbers for this example, so I’ve gathered a few example images of credit cards using Google. These
credit cards are obviously fake and for demonstration purposes only.
However, you can apply the same techniques in this blog post to recognize the digits on actual, real credit cards.
To see our credit card OCR system in action, open up a terminal and execute the following command:
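The command will look something like the following (the script and image filenames here are placeholders; use the names from the "Downloads" of this post):

```
$ python ocr_template_match.py --reference ocr_a_reference.png \
    --image images/credit_card_01.png
```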
Figure 12: Applying template matching with OpenCV and Python to OCR the digits on a
credit card.
Notice how we were able to correctly label the credit card as MasterCard, simply by inspecting the first digit in the credit card number.
Once again, we were able to correctly OCR the credit card using template matching.
How about another image, this time from PSECU, a credit union in Pennsylvania:
Figure 14: Our system is correctly able to find the digits on the credit card, then
apply template matching to recognize them.
Our OCR template matching algorithm correctly identifies each of the 16 digits. And given that each of the 16 digits was correctly OCR'd, we can
also label the credit card as a Visa.
Here’s another MasterCard example image, this one from Bed, Bath, & Beyond:
Figure 15: Regardless of credit card design and type, we can still detect the digits
and recognize them using template matching.
Figure 16: A final example of applying OCR with Python and OpenCV.
In each of the examples in this blog post, our template matching OCR script using OpenCV and Python correctly identified each of the 16 digits
100% of the time.
Furthermore, template matching is also a very fast method when comparing digits.
Unfortunately, we were not able to apply our OCR method to real credit card images, so that certainly raises the question of whether this approach would
be reliable on actual, real-world images. Given changes in lighting conditions, viewpoint angle, and other general noise, it's likely that we would
need to take a more machine learning oriented approach.
Regardless, at least for these example images, we were able to successfully apply template matching as a form of OCR.
Summary
In this tutorial we learned how to perform Optical Character Recognition (OCR) using template matching via OpenCV and Python.
Specifically, we applied our template matching OCR approach to recognize the type of a credit card along with the 16 credit card digits. We accomplished this in four steps:
1. Detecting the four groups of four numbers on the credit card via various image processing techniques, including morphological operations,
thresholding, and contour extraction.
2. Extracting each of the individual digits from the four groupings, leading to 16 digits that need to be classified.
3. Applying template matching to each digit by comparing it to the OCR-A font to obtain our digit classification.
4. Examining the first digit of the credit card number to determine the issuing company.
After evaluating our credit card OCR system, we found it to be 100% accurate provided that the issuing credit card company used the OCR-A
font for the digits.
To extend this application, you would want to gather real images of credit cards in the wild and potentially train a machine learning model (either
via standard feature extraction or by training a Convolutional Neural Network) to further improve the accuracy of this system.
I hope you enjoyed this blog post on OCR via template matching using OpenCV and Python.
To be notified when future tutorials are published here on PyImageSearch, be sure to enter your email address in the form below!
Downloads:
If you would like to download the code and images used in this post, please enter your email address in the form below.
Shitian Ni July 17, 2017 at 11:32 am #
I would suggest detecting the four corners of the credit card and applying a perspective transform, like I do in this document scanner
post.
SSteve July 17, 2017 at 1:05 pm #
This has nothing to do with OCR, but MasterCard is now issuing numbers that begin with 2.
https://gravitypayments.com/highlights/mastercard-2series-bin/
Adrian Rosebrock July 18, 2017 at 9:45 am #
There are a number of ways to accomplish recognizing the card holder's name. First, you would need to isolate the card holder name
characters. This can be accomplished in a manner similar to the one used for localizing the digits. From there, threshold the characters and OCR them,
perhaps using Tesseract + Python.
hassan July 18, 2017 at 12:11 pm #
Hello Adrian Rosebrock! Your tutorial is very good. Adrian, I want to detect the digits on the license plates of my country. Can you help me
apply OCR to license plates, just like this? Thanks in advance.
Adrian Rosebrock July 21, 2017 at 9:04 am #
Hi Hassan — I cover automatic license plate recognition inside the PyImageSearch Gurus course. I would suggest you start there.
Gozde July 18, 2017 at 3:08 pm #
Hi Adrian,
Thank you very much for your blog. I have a question: I don't understand how the code can detect the card type (MasterCard or Visa); you only give
the numbers. I also tried this image but could not get a good result:
https://i.resimyukle.xyz/JyxbM.png
I am a bit confused.
Adrian Rosebrock July 21, 2017 at 9:03 am #
Hi Gozde — if you want to detect the Visa or Mastercard logo itself, I would suggest using template matching. You could use
keypoints, local invariant descriptors, and keypoint matching to recognize the logo as well — I provide an example of this technique to
recognize the covers of books inside Practical Python and OpenCV. This would make a good starting point for your project.
YURII CHERNYSHOV July 20, 2017 at 1:58 pm #
One of the most useful tutorials (for those who can read between the lines). I have been following you almost since the beginning.
khosro July 22, 2017 at 6:22 am #
Hi Adrian
very good !!
thanks for the tutorial.
regards
Peter Pan July 24, 2017 at 4:12 am #
Where should I look for OCR-B-Reference images, sir? My card is printed in OCR-B font.
Adrian Rosebrock July 24, 2017 at 3:31 pm #
Please use the “Downloads” section of the post. The Downloads section provides a .zip of the code, reference images, and example
images.
Aadesh Shrestha July 25, 2017 at 3:54 am #
Hi Adrian!
Thanks for this amazing tutorial. I want to do something similar with electric meter system. What things will I have to change from this tutorial to get
that done ?
Aadesh Shrestha July 25, 2017 at 4:08 am #
Template image
https://l7.alamy.com/zooms/4245c7ffaaad4bef937e60c81540031c/an-old-british-electricity-usage-meter-a837mx.jpg
Adrian Rosebrock July 28, 2017 at 10:10 am #
Hi Aadesh — water meter detection is always a great project. The first step is to localize each of the digits. From there, you would
want to quantify each digit. Then train a simple Linear SVM to recognize each of the characters. Jeff Bass, a PyImageSearch Gurus
member built this exact project using the knowledge he learned from the course. I would definitely suggest you take a look at
PyImageSearch Gurus — it would help you solve this exact problem.
chandiran July 29, 2017 at 8:03 am #
The above code works well for the demo images, but I am getting an error when recognizing a blurry image and an image of an SBI card (because it
has a picture in the center). Can anyone help me recognize these cards?
Abhranil August 1, 2017 at 1:37 pm #
I am getting very bad accuracy if I download an image and extract the 16 numbers. Will I have to preprocess the image?
Adrian Rosebrock August 4, 2017 at 7:08 am #
Are you referring to the images in this blog post? Or images from your own collection?
Adrian Rosebrock August 21, 2017 at 3:49 pm #
As I mentioned in the blog post, this method will work best for images captured in controlled lighting conditions. Without seeing
your example images it’s hard to say what the issue might be, but I would recommend looking into training your own custom object
detector.
Omar August 8, 2017 at 11:19 am #
What do you suggest if we have a 4×4 image of Visa cards and we want to extract every card number?
Adrian Rosebrock August 10, 2017 at 8:52 am #
You would detect each Visa card individually (likely using contour approximation) like I do in this post. Then you OCR each of the
cards.
shruthi August 26, 2017 at 5:17 am #
Sir, is there any way to fetch all of the information exactly from the card?
Adrian Rosebrock August 27, 2017 at 10:35 am #
I’m not sure what you mean by “all information exactly”. Please clarify.
shruthi September 1, 2017 at 1:56 am #
Sir, I am getting the full information from the card image. That's okay, but I only want to get the exact name and sex (female or male) from the image
and I don't know how. Can you help me?
Adrian Rosebrock September 1, 2017 at 11:58 am #
I would highly suggest you play around with the Google Vision API and benchmark your results against Tesseract. As far as OCR, the
Google Vision API is (arguably) the best off-the-shelf solution.
edd September 1, 2017 at 4:08 pm #
Hi Adrian,
Your lessons are dope, you are a rock star, man. I cannot thank you enough for how much you have helped me in learning and implementing
projects.
My current project needs to read the name, DOB, sex, and everything else from a passport.
Could you write a blog post explaining how this could be done, or let me know how I can proceed with implementing it?
Thanks in advance.
Adrian Rosebrock September 5, 2017 at 9:40 am #
Hi Edd — I will consider this for a future blog post, but I cannot guarantee when/if I will cover it. I would suggest looking into the
Google Vision API if you need an off-the-shelf OCR system.
Aniruddha October 6, 2017 at 6:25 am #
This is really interesting. For me it is good way to get into image processing.
Thanks!
Rahul Pruthi November 23, 2017 at 9:28 am #
Hi Adrian,
Thanks for this amazing project!
I have a few doubts:
1. What should I do to scan real card images?
2. My cards use the OCR-B font and there is no OCR-B reference image available on the internet or in your downloads section.
3. What should I do when the credit card has a background picture drawn on it? What method should I apply to detect it?
Mihir Rajput January 1, 2018 at 4:23 am #
Hello Adrian, the above code outputs the credit card number as all 0s. Please help.
Adrian Rosebrock January 3, 2018 at 1:15 pm #
Hey Mihir — that is indeed a bit strange. What version of OpenCV and Python are you using?
Syed Hasnain January 19, 2018 at 4:08 am #
Hi Adrian, can you please tell me what software I have to use to do this? Please give me a complete list. I am new to computer vision.
Adrian Rosebrock January 19, 2018 at 6:42 am #
This tutorial uses the Python programming language, OpenCV, NumPy, and imutils. I provide install and configuration tutorials here. If
you are brand new to computer vision I would highly suggest you read through my book, Practical Python and OpenCV, which will teach you the
fundamentals and help you get up to speed.
karan pal sunkariya January 30, 2018 at 9:45 am #
I would suggest taking a look at this post on OCR with passports to help you get started.
Nikolay February 7, 2018 at 4:27 am #
With any little change the script cannot solve the task. Try it with your picture
https://www.pyimagesearch.com/wp-content/uploads/2017/06/ocr_groupings.jpg; it is impossible.
Adrian Rosebrock February 8, 2018 at 8:33 am #
Hey Nikolay — you need to use the “Downloads” section of this blog post to download the raw example images. Then run the code on
them.
Jeremiah February 10, 2018 at 6:39 am #
Good article
Adrian Rosebrock February 10, 2018 at 7:05 am #
Thanks Jeremiah!
claudio February 22, 2018 at 10:13 am #
I followed the tutorial instructions and ran the code with the Raspbian image I bought with the course Practical Python and OpenCV,
but when I test the script with a similar image downloaded from the internet it doesn't work:
http://es.tinypic.com/r/25samph/9
The image has the same format as the images used in the tutorial, so why doesn't it work?
The error displayed in the console is as follows:
http://es.tinypic.com/r/2uek0ic/9
Adrian Rosebrock February 22, 2018 at 11:04 am #
Hey Claudio — thanks for picking up a copy of Practical Python and OpenCV! I hope you are enjoying it so far.
The code used in this post is meant to be an introduction to OCR-based techniques, enabling you to get an idea of how to approach various
image processing problems. It's not meant to be a production quality credit card OCR system.
The error itself is due to the key of the dictionary not being found (i.e., it cannot recognize the digit). You can insert logic to handle when a
digit is unknown, or you can try a more advanced OCR engine, such as the Google Vision API.
Ahmad March 12, 2018 at 9:15 am #
Hi Adrian,
I am not able to run your code because of the following error
Ahmad March 12, 2018 at 9:27 am #
Nevermind, rookie error :/
Adrian Rosebrock March 14, 2018 at 1:03 pm #
For anyone else having this error please refer to this blog post on command line arguments.
neosarchizo March 12, 2018 at 11:02 pm #
Awesome!!
Sandeep Bind April 18, 2018 at 3:34 am #
Hello Adrian, brilliant work on the credit card OCR. Can you suggest some solutions for doing OCR on an Indian PAN card?
Siab Shafique July 7, 2018 at 5:50 am #
Hi Adrian, I am getting all 0's after using one of the images you provided. What can be the issue? (I am using Python 2.)
Adrian Rosebrock July 10, 2018 at 8:45 am #
Did you use the “Downloads” section to the blog post to download the source code + example images? Or did you copy and paste the
code? If you didn’t download the code make sure you do so just in case there was a copy and paste problem.
manzoor hussain July 22, 2018 at 4:28 am #
Hello Adrian, thank you for such a great post. What should I do if I want to detect only the runs and wickets section of the scoreboard (e.g., 123-3)
of a cricket match using OCR and template matching?
Adrian Rosebrock July 25, 2018 at 8:23 am #
You would first want to detect the cricket scoreboard via object detection. Once you have the scoreboard, apply a perspective
transform to obtain a “top down” view of the board. Finally, apply OCR.