Real Python Part 3
Advanced Web Programming with Django
Jeremy Johnson
Contents

Preface
    Thank you
    License . . . 10
    Conventions . . . 11
    Course Repository . . . 13
    Errata . . . 14

2 Introduction . . . 15

3 Software Craftsmanship . . . 23
    Testing Models . . . 48
    Testing Forms . . . 56
    Testing Summary . . . 60

4 Test Driven Development . . . 63
    Conclusion . . . 83
    Exercises . . . 84

    Git Branching . . . 86
    Branching Models . . . 87
    Exercises . . . 98

    Django 1.8 . . . 99
    Upgrading to Python 3 . . . 110
    Conclusion . . . 130
    Exercises

7 Graceful Degradation and Database Transactions with Django 1.8 . . . 131
    SavePoints . . . 152
    Nested Transactions . . . 155
    Conclusion . . . 159
    Exercises . . . 161

    Gravatar Support . . . 217
    Authentication . . . 257
    Conclusion . . . 262
    Exercises . . . 263

12 Django Migrations . . . 264
    Conclusion . . . 306
    Exercises . . . 307

14 Djangular: Integrating Django and Angular . . . 308
    Conclusion . . . 399
    Exercises . . . 401

17 One Admin to Rule Them All . . . 402
    Conclusion . . . 477
    Exercises . . . 478

19 Deploy . . . 479
Chapter 1
Preface
Thank you
I commend you for taking the time out of your busy schedule to improve your craft of software
development. My promise to you is that I will hold nothing back and do my best to impart
the most important and useful things I have learned from my software development career
in hopes that you can benefit from it and grow as a software developer yourself.
Thank you so much for purchasing this course, and a special thanks goes out to all the early
Kickstarter supporters that had faith in the entire Real Python team before a single line of
code had been written.
License
This e-course is copyrighted and licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License. This means that you are welcome to
share this course and use it for any non-commercial purposes so long as the entire course
remains intact and unaltered. That being said, if you have received this copy for free and
have found it helpful, I would very much appreciate if you purchased a copy of your own.
The example Python scripts associated with this course should be considered open content.
This means that anyone is welcome to use any portion of the code for any purpose.
Conventions
NOTE: Since this is the Beta release, we do not have all the conventions in place.
We are working on it. Patience, please.
Formatting
1. Code blocks will be used to present example code.
$ python hello-world.py
SEE ALSO: This is a see also box with more tasty ipsum. Bacon ipsum dolor sit amet t-bone flank sirloin, shankle salami swine drumstick capicola doner porchetta bresaola short loin. Rump ham hock bresaola chuck flank. Prosciutto beef ribs kielbasa pork belly chicken tri-tip pork t-bone hamburger bresaola meatball. Prosciutto pork belly tri-tip pancetta spare ribs salami, porchetta strip steak rump beef filet mignon turducken tail pork chop. Shankle turducken spare ribs jerky ribeye.
Course Repository
Like the other two courses, this course has an accompanying repository. The code is broken up by chapter in the _chapters folder, so you can follow along with each chapter and then check your answers against the repository. Before you start each chapter, please make sure your local code aligns with the code from the previous chapter. For example, before starting chapter 6, make sure your code matches the code in the chp5 folder of the repository.
You can download the course files directly from the repository. Press the Download ZIP button, which is located at the bottom right of the page. This allows you to download the most recent version of the code as a zip archive. Be sure to download the updated code for each release.
That said, it's recommended that you work on one project throughout this course, rather than breaking it up.
NOTE: Since this is the Beta release, we do not have the structure completely
updated in the repository. We will eventually change to a Git tagging model. If
you have any suggestions or feedback, please contact us.
Errata
I welcome ideas, suggestions, feedback, and the occasional rant. Did you find a topic confusing? Did you find an error in the text or code? Did I omit a topic you would love to know more about? Whatever the reason, good or bad, please send in your feedback.
You can find my contact information on the Real Python website. Or submit an issue on the
Real Python official support repository. Thank you!
NOTE: The code found in this course has been tested on Mac OS X v. 10.8.5,
Windows XP, Windows 7, Linux Mint 17, and Ubuntu 14.04 LTS.
Chapter 2
Introduction
Welcome to Advanced Web Programming!
Dear Reader:
Let me first congratulate you for taking the time to advance your skills and become a better developer. It's never easy in today's hectic world to carve out the time necessary to spend on something like improving your Python development skills. So I'm not going to waste your time; let's just jump right in.
OK, let me explain this line by line... no, seriously - this is NOT what we are going to be doing. While this script is a cool example of the power of Python, it's painful to try to understand what's happening with the code and nearly impossible to maintain without pulling your hair out. It's a bit unfair to Ed Felten, the author of this code, because he wrote it intentionally to fit the entire program in 15 lines. It's actually a pretty cool code example.
The point I'm trying to make is this:
There are many ways to write Python code. It can be a thing of beauty; it can be something very compact and powerful; it can even be a little bit scary. The inherent dynamic nature of Python, plus its extreme flexibility, makes it both awesome and dangerous at the same time. That's why this book is not only about writing cool web apps with Django and Python; it's about harnessing the power of Python and Django in a way that not only benefits from the immense power of the two, but in a way that is simple to understand, easy to maintain, and above all fun to work with.
Advanced Django
Although Software Craftsmanship should underlie all that we do, it is of course important to understand the inner workings of the Django framework so you can use it efficiently and effectively. So, we will cover specific Django topics in nearly every chapter of this course. We will cover all the usual suspects - views, models, and templates - as well as more advanced topics such as database transactions, class-based views, mixins, the Django Rest Framework, forms, permissions, custom fields, and so on.
This course is here to teach you all you need to know to develop great web applications that
could actually be used in a production environment, not just simple tutorials that gloss over
all the difficult stuff.
Full-stack Python
In today's world of HTML5 and mobile browsers, JavaScript is everywhere. Python and Django just are not enough. To develop a website from end to end, you must know the JavaScript frameworks, REST-based APIs, Bootstrap and responsive design, NoSQL databases, and all the other technologies that make it possible to compete at the global level. That's why this course does not cover Django exclusively. Instead, it covers all the pieces that you need to know to develop modern web applications from the client to the server - developing, testing, tooling, profiling, and deploying.
By the end of this course you will have developed a REAL web-based application ready to be used to run an online business. Don't worry, we are not going to solely focus on buzzwords and hype. We're going to have fun along the way and build something awesome.
Throughout the entire course we focus on a single application, so you can see how applications evolve and understand what goes into making a REAL application.
Figure 2.2
Chapter 3
Software Craftsmanship
Somewhere along the path to software development mastery every engineer comes to the
realization that there is more to writing code than just producing the required functionality.
Novice engineers will generally stop as soon as the requirements are met, but that leaves a
lot to be desired. Software development is a craft, and like any craft there are many facets to
learn in order to master it. True craftsmen insist on quality throughout all aspects of the product development process; it is never good enough to just deliver requirements. McDonald's delivers requirements, but a charcoal-grilled, juicy burger cooked to perfection on a Sunday afternoon and shared with friends - that's craftsmanship.
In this course we will talk about craftsmanship from the get-go, because it's not something you can add in at the end. It's a way of thinking about software development that values:
Not only working software, but well-crafted software;
Moving beyond responding to change and focusing on steadily adding value; and,
Creating maintainable code that is well-tested and simple to understand.
This chapter is about the value of software craftsmanship, and about writing code that is elegant, easily maintainable, and, dare I say, beautiful. While this may seem like highbrow, flutey-tooty stuff, the path to achieving well-crafted software can be broken down into a number of simple techniques and practices.
We will start off with the most important of these practices and the first that any engineer
looking to improve his or her craft should learn: Automated Unit Testing.
Both:
Automatically load fixtures to load data prior to running tests.
Create a Test Client instance, self.client, which is useful for things like resolving URLs.
Provide a ton of Django-specific asserts.
TransactionTestCase() allows for testing database transactions and automatically resets the database at the end of the test run by truncating all tables. TestCase(), meanwhile, wraps each test case in a transaction, which is then rolled back after each test.
Thus, the major difference is when the data is cleared out: after each test for TestCase() versus after the entire test run for TransactionTestCase().
4. django.test.LiveServerTestCase() - Used mainly for GUI testing with something like Selenium, which:
Does basically the same thing as TransactionTestCase().
Also fires up an actual web server so you can perform GUI tests.
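To make the trade-off concrete, here is a minimal sketch (the class names and the string test are made up for illustration; only the '/' route comes from our project) contrasting a plain unittest.TestCase test, which touches no Django machinery, with a django.test.TestCase test, which uses the database and the test client:

import unittest

from django.test import TestCase


class PureLogicTests(unittest.TestCase):
    # no database, no test client - just plain Python assertions
    def test_string_formatting(self):
        self.assertEqual("{}-{}".format("a", "b"), "a-b")


class FullStackTests(TestCase):
    # Django wraps each test in a transaction and provides self.client
    def test_home_page_responds(self):
        resp = self.client.get('/')
        self.assertEqual(resp.status_code, 200)

The first class can run under the plain unittest runner with nothing else configured; the second needs the full Django test runner and a test database, and is correspondingly slower.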
Which test base class should you use?
Keep in mind that, with each test base class, there's always a trade-off between speed and test complexity. As a general rule, testing becomes easier and less complex the more you rely on Django's internal functionality. However, there is a cost: speed.
So, which test class is best? Unfortunately, there is no easy answer. It all depends on the situation.
From a purely performance-oriented point of view, you should always use unittest.TestCase() and not load any Django functionality. However, that may make it very difficult and cumbersome to test certain parts of your application.
Any time you use one of the django.test.* classes you will be loading all of the Django infrastructure, and thus your test might be doing a lot more than you bargained for. Take the following example:
def test_uses_index_html_template(self):
    index = self.client.get('/')
    self.assertTemplateUsed(index, "index.html")
It's only two lines of code, right? True. But behind the scenes there's a lot going on:
As you can see, there is a lot going on here; we are testing multiple parts of the system. Tests that cover multiple parts of a system and verify the result when all these parts act in concert are generally referred to as Integration Tests.
By contrast, a unit test by definition should test a small, defined area of code (i.e., a unit of code) in isolation, which in most cases translates to a single function.
To rewrite this test as a unit test we could inherit from django.test.SimpleTestCase() and then our test would look like this:
def test_uses_index_html_template_2(self):
    # create a dummy request
    request_factory = RequestFactory()
    request = request_factory.get('/')
    request.session = {}  # make sure it has a session associated
Obviously this test is a bit longer than two lines, but since we are creating a dummy request object and calling our view function directly, we no longer have so much going on behind the scenes. There is no router, there is no middleware; basically, all the Django magic isn't happening. This means we are testing our code in isolation (it's a unit test), which has the following consequences:
Tests will likely run faster, because there is less going on (again, behind the scenes).
Tracking down errors will be significantly easier because only one thing is being tested.
Writing tests can be harder because you may have to create dummy objects or stub out
various parts of the Django system to get your tests to work.
NOTE: A note on assertTemplateUsed: Notice that in the above test we have changed assertTemplateUsed to assertContains. This is because assertTemplateUsed does not work with the RequestFactory(). In fact, if you use the two together, the assertTemplateUsed assertion will always pass, no matter what template name you pass in! So be careful not to use the two together.
Integration vs unit testing
Both integration and unit tests have their merits, and you can use both types together in the same test suite - so it's not all-or-nothing. But it is important to understand the differences between the two approaches so you can choose which option is best for you. There are some other considerations, specifically designing testable code and mocking, which we will touch upon later. For now the important thing to understand is the difference between the two types and the amount of work that the Django testing framework is doing for you behind the scenes.
Out of the box, Django gives us two standard ways to write unit tests: the standard unittest style or the DocTest style.
SEE ALSO: Please consult the Python documentation for more info on both standard unittests and doctests. Jump to this StackOverflow question for more info on when to use unittest vs. doctest.
Both are valid, and the choice is more a matter of preference than anything else. Since the standard unittest style is the most common, we will focus on that.
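If you have never seen the DocTest style, here is a tiny, self-contained illustration (the add() function is invented for this example; it is not part of the course code) showing the same check written both ways:

import doctest
import unittest


def add(a, b):
    """Add two numbers.

    >>> add(1, 1)
    2
    """
    return a + b


class AddTests(unittest.TestCase):
    def test_basic_addition(self):
        self.assertEqual(add(1, 1), 2)


if __name__ == '__main__':
    doctest.testmod()  # runs the >>> example embedded in the docstring
    unittest.main()    # runs the unittest-style test

The doctest doubles as documentation, while the unittest version scales better once tests need setup, fixtures, or mocks - which is a big part of why the unittest style is the more common choice.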
If you're continuing from the previous course, make sure you have your virtualenv activated and ready to go by running:
$ source myenv/bin/activate
$ cd django_project
$ mkvirtualenv chp4
$ pip install -r ./requirements.txt
This will read the requirements.txt file in the root directory of the django_ecommerce
project and use pip to install all the dependencies listed in the requirements.txt file,
which at this point should just be Django and Stripe.
NOTE: Using a requirements.txt file as part of a Python project is a great way to manage dependencies. It's a common convention for any Python project, not just Django. A requirements.txt file is a simple text file that lists your project's dependencies, one per line; pip will read this file, and all the dependencies will then be downloaded and installed automatically.
After you have your virtual environment set up and all your dependencies installed, type pip freeze > requirements.txt to automatically create the requirements.txt file that can later be used to quickly install the project dependencies. Then check the file into git and share it with your team, and everybody will be using the same set of dependencies.
More information about requirements.txt files can be found on the official docs page.
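For reference, a minimal requirements.txt for this project just lists its dependencies, one per line (pip freeze will write exact version pins instead of the bare names shown here; the file contents below are only a sketch):

Django
stripe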
3. OPTIONAL: If for whatever reason the requirements.txt file is not found, then we need to set one up. No worries. It's good practice.
Start by installing the following dependencies:

$ pip install django
$ pip install stripe

Now just create the requirements.txt file in the root directory (repo):

$ pip freeze > requirements.txt

That's it.
4. Take a quick glance at the project structure, paying particular attention to the tests.py files. See anything that stands out? Well, there are no tests, except for the generic 1 + 1 test:

class SimpleTest(TestCase):
    def test_basic_addition(self):
        """
        Tests that 1 + 1 always equals 2.
        """
        self.assertEqual(1 + 1, 2)
So, we need to first add tests to ensure that this current app works as expected without
any bugs, otherwise we cannot confidently implement new features.
5. Once you have all your dependencies in place, let's run a quick set of tests to ensure Django is up and running correctly:

$ cd django_ecommerce
$ python manage.py test
I generally just chmod +x manage.py. Once you do that, you can just run:
$ ./manage.py test
Not a huge difference, but over the long-run that can save you hundreds of keystrokes.
NOTE: You could also add the alias alias p="python" to your ~/.bashrc
file so you can just run p instead of python: p manage.py test.
Nevertheless, after running the tests you should get some output like this:
Notice that with Django 1.5 there will always be one failure - (expected failures=1); that's expected, so just ignore it. If you get output like this, then you know your system is correctly configured for Django. Now it's time to test our own app.
However, if you get an error about ValueError: unknown locale: UTF-8, this is the dreaded Mac UTF-8 hassle. Depending on who you ask, it's anything from a minor annoyance in Mac's default terminal to the worst thing that could ever happen. In truth, though, it's easy to fix. From the terminal simply type the following:

$ export LC_ALL=en_US.UTF-8
$ export LANG=en_US.UTF-8

Feel free to use another language if you prefer. With that out of the way, your tests should pass just fine. Before moving on, though, add these two lines to your .bash_profile file so you don't have to retype them each time you start your terminal.
NOTE: With virtualenvwrapper, to deactivate you still use the deactivate command. When you're ready to reactivate the virtualenv to work on your project again, simply type workon django_mvp_app. You can also just type workon to view your current working virtual environments. Simple.
urlpatterns = patterns('',
    url(r'^admin/', include(admin.site.urls)),
    url(r'^$', 'main.views.index', name='home'),
    url(r'^pages/', include('django.contrib.flatpages.urls')),
    url(r'^contact/', 'contact.views.contact', name='contact'),
    # user registration/authentication
    url(r'^sign_in$', views.sign_in, name='sign_in'),
    url(r'^sign_out$', views.sign_out, name='sign_out'),
    url(r'^register$', views.register, name='register'),
    url(r'^edit$', views.edit, name='edit'),
)
Our first set of tests will ensure that this routing works correctly and that the appropriate views are called for the corresponding URLs. In effect we are answering the question, "Is my Django application wired up correctly?"
This may seem like something too simple to test, but keep in mind that Django's routing functionality is based upon regular expressions, which are notoriously tricky and hard to understand.
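To see how easy that is to get wrong, consider this hypothetical pattern (our actual urls.py does use the trailing $ anchors; this is just an illustration of why they matter):

>>> import re
>>> re.match(r'^sign_in', 'sign_in_and_do_something_else') is not None
True
>>> re.match(r'^sign_in$', 'sign_in_and_do_something_else') is not None
False

A missing $ quietly matches URLs you never intended to handle, which is exactly the kind of wiring mistake these routing tests are meant to catch.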
NOTE: For more on regular expressions, be sure to check out the excellent chapter in the first Real Python course, Introduction to Python.
from django.test import TestCase
from django.core.urlresolvers import resolve
from main.views import index


class MainPageTests(TestCase):

    def test_root_resolves_to_main_view(self):
        main_page = resolve('/')
        self.assertEqual(main_page.func, index)
Here we are using Django's own urlresolver (docs) to test that the / URL resolves, or maps, to the main.views.index function. This way we know that our routing is working correctly.
Want to test this in the shell to see exactly what happens?
$ ./manage.py shell
>>> from django.core.urlresolvers import resolve
>>> main_page = resolve('/')
>>> print main_page.func
<function index at 0x102270848>
This just shows that the / URL does in fact map back to the main.views.index function. You could also test the URL name, which is called home - url(r'^$', 'main.views.index', name='home'):
$ ./manage.py shell
>>> from django.core.urlresolvers import resolve
>>> main_page = resolve('/')
>>> print main_page.url_name
home
Perfect.
Now, run the tests:
...
OK
Great!
You could test this in a slightly different way, using reverse(), which is also a urlresolver:
$ ./manage.py shell
>>> from django.core.urlresolvers import reverse
>>> url = reverse('home')
>>> print url
/
Can you tell what the difference is, though, between resolve() and reverse()? Check out
the Django docs for more info.
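If the difference is not obvious, here is one way to see it, again in the shell: resolve() goes from a URL path to the view (and URL name) that will handle it, while reverse() goes the other way, from a URL name back to the path:

$ ./manage.py shell
>>> from django.core.urlresolvers import resolve, reverse
>>> resolve('/').url_name   # path -> name
'home'
>>> reverse('home')         # name -> path
'/'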
Second test: status code
Now that we know our routing is working correctly, let's ensure that we are returning the appropriate view. We can do this by using the client.get() function from django.test.TestCase (docs). In this scenario, we will send a request from a dummy web browser known as the Test Client, then assert that the returned response is correct. More on the Test Client later.
A basic test for this might look like:
def test_returns_appropriate_html(self):
    index = self.client.get('/')
    self.assertEquals(index.status_code, 200)
NOTE: Note on test structure
In Django 1.5 the default place to store tests is in a file called tests.py inside each application's directory - which is why these tests are in main/tests.py. For the remainder of this chapter we are going to stick to that standard and put all tests in app_name/tests.py files. In a later chapter we will discuss the advantages and disadvantages of this test structure. For now let's just focus on testing.
Test again:
...
OK
def test_uses_index_html_template(self):
    index = self.client.get('/')
    self.assertTemplateUsed(index, "index.html")

So, assertTemplateUsed() is one of those helper functions provided by Django; it just checks whether an HttpResponse object was generated by a specific template. Check out the Django docs for more info.
We can further increase our test coverage to verify that not only are we using the appropriate template but that the HTML being returned is correct as well. In other words, after the
template engine has done its thing, do we get the expected HTML?
The test looks like this:
def test_returns_exact_html(self):
    index = self.client.get("/")
    self.assertEquals(
        index.content,
        render_to_response("index.html").content
    )
In the above test, the render_to_response() shortcut function is used to run our index.html template through the Django template engine and to ensure that the response is the same as if an end user had called the / URL.
Third test: expected HTML redux
The final thing to test for the index view is that it returns the appropriate html for a loggedin user. This is actually a bit trickier than it sounds, because the index() function actually
performs three separate functions:
1. It checks the session to see if a user is logged in.
2. If the user is logged in, it queries the database to get that users information.
3. Finally, it returns the appropriate HTTP Response.
Now is the time that we may want to argue for refactoring the code into multiple functions to
make it easier to test. But for now lets assume we arent going to refactor and we want to test
the functionality as is. This means that really we are executing a Integration Test instead
of a Unit Test, because we are now testing several parts of the application as opposed to just
one individual unit.
NOTE: Notice how during this process we found an issue in our code - e.g., one
function having too many responsibilities. This is a side effect of testing: You
learn to write better code.
Forgetting about the distinction for the time being, let's look at how we might test this functionality:
1. Create a dummy session with the user entry that we need.
2. Ensure there is a user in the database, so our lookup works.
3. Verify that we get the appropriate response back.
Let's work backwards:

def test_index_handles_logged_in_user(self):

    ...snipped code...

    self.assertTemplateUsed(resp, 'user.html')
We have seen this before: See the first test in this section, test_uses_index_html_template().
Above we are just trying to verify that when there is a logged in user, we return the user.html
template instead of the normal index.html template.
def test_index_handles_logged_in_user(self):
    # create the user needed for user lookup from index page
    from payments.models import User
    user = User(
        name='jj',
        email='[email protected]',
    )
    user.save()

    ...snipped code...

    # verify the response returns the page for the logged in user
    self.assertTemplateUsed(resp, 'user.html')
Here we are creating a user and storing that user in the database. This is more or less the
same thing we do when a new user registers.
Take a breather. Before continuing, be sure you understand exactly what happens when you run a Django unit test. Flip back to the last chapter if you need a quick refresher. Do you understand why we need to create a user to test this function correctly if our main.views.index() function is working? Yes? Move on. No? Flip back and read about django.test.TestCase(), focusing specifically on how it handles the database layer. Ready?
Move on. Want the answer? It's simple: it's because the django.test.TestCase class that we are inheriting from will clear out the database prior to each run. Make sense?
Also, you may not have noticed, but we are testing against the actual database. This is problematic, but there is a solution. Sort of. We will get to this later. But first, think about the
request object that is passed into the view. Here is the view code we are testing:
def index(request):
    uid = request.session.get('user')
    if uid is None:
        return render_to_response('index.html')
    else:
        return render_to_response(
            'user.html',
            {'user': User.objects.get(pk=uid)}
        )
As you can see, the view depends upon a request object and a session. Usually these are managed by the web browser and will just be there, but in the context of unit testing they won't just be there. We need a way to make the test think there is a request object with a session attached.
And there is a technique in unit testing for doing this.
Mocks, fakes, test doubles, dummy objects, stubs: There are actually many names for this technique: mocking, faking, using dummy objects, test doubling, stubbing, etc.
There are subtle differences between each of these, but the terminology just adds confusion. For the most part, they are all referring to the same thing, which we will define as: using a temporary, in-memory object to simulate a real object for the purposes of decreasing test complexity and increasing test execution speed.
Visually you can think of it as:
As you can see from the illustration, a mock (or whatever you want to call it) simply takes the place of a real object, allowing you to create tests that don't rely on external dependencies. This is advantageous for two main reasons:
1. Without external dependencies your tests often run much faster; this is important because unit tests should be run frequently (ideally after every code change).
2. Without external dependencies it's much simpler to manage the necessary state for a test, keeping the test simple and easy to understand.
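Here is a tiny, self-contained sketch of the idea. The get_welcome_message() function and its "user service" are invented for this example (they are not part of our app); the point is that the mock stands in for the external dependency, we dictate what it returns, and afterwards we can even ask it how it was called:

import unittest

import mock  # standalone library; on Python 3.3+ also available as unittest.mock


def get_welcome_message(user_service, uid):
    # function under test: depends on some external "user service"
    user = user_service.fetch(uid)
    return "Welcome back, {}!".format(user.name)


class WelcomeMessageTests(unittest.TestCase):

    def test_welcome_message_uses_the_users_name(self):
        fake_user = mock.Mock()
        fake_user.name = "jj"  # Mock() treats the name= kwarg specially, so set it here

        fake_service = mock.Mock()                 # the in-memory stand-in
        fake_service.fetch.return_value = fake_user

        msg = get_welcome_message(fake_service, 42)

        self.assertEqual(msg, "Welcome back, jj!")
        fake_service.fetch.assert_called_once_with(42)  # no real service was hit

No network, no database, no waiting - and the test documents exactly how the dependency is expected to be used.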
Coming back to the request and its state - the user value stored in the session - we can mock out this external dependency by using Django's built-in RequestFactory():

from django.test import RequestFactory

request_factory = RequestFactory()
request = request_factory.get('/')
request.session = {}
By creating the request mock, we can set the state (i.e. our session values) to whatever we want
and simply write and execute unit tests to ensure that our view function correctly responds
to the session state.
Step 1: Verify that we get the appropriate response back. And with that we can complete our earlier unit test:

def test_index_handles_logged_in_user(self):
    # create the user needed for user lookup from index page
    from payments.models import User
    user = User(
        name='jj',
        email='[email protected]',
    )
    user.save()

    ...snipped code...
...
OK
Destroying test database for alias 'default'...

Good.

Let's recap
@classmethod
def setUpClass(cls):
    request_factory = RequestFactory()
    cls.request = request_factory.get('/')
    cls.request.session = {}
Notice that the above method, setUpClass(), is a @classmethod. This means it only gets run once, when the MainPageTests class is initially created, which is exactly what we want for our case. In other words, the mock is created when the class is initialized, and it gets used for each of the tests. See the Python docs for more info.
If we instead wanted the setup code to run before each test, we could write it as:
def setUp(self):
    request_factory = RequestFactory()
    self.request = request_factory.get('/')
    self.request.session = {}
This will run prior to each test running. Use this when you need to isolate each test run.
After we have created our setUp method, we can then update our remaining test cases as
follows:
from django.test import TestCase, RequestFactory
from django.core.urlresolvers import resolve
from django.shortcuts import render_to_response
from main.views import index
from payments.models import User


class MainPageTests(TestCase):

    ###############
    #### Setup ####
    ###############

    @classmethod
    def setUpClass(cls):
        request_factory = RequestFactory()
        cls.request = request_factory.get('/')
        cls.request.session = {}

    ##########################
    ##### Testing routes #####
    ##########################

    def test_root_resolves_to_main_view(self):
        main_page = resolve('/')
        self.assertEqual(main_page.func, index)

    def test_returns_appropriate_html_response_code(self):
        resp = index(self.request)
        self.assertEquals(resp.status_code, 200)

    #####################################
    #### Testing templates and views ####
    #####################################

    def test_returns_exact_html(self):
        resp = index(self.request)
        self.assertEquals(
            resp.content,
            render_to_response("index.html").content
        )

    def test_index_handles_logged_in_user(self):
        # Create the user needed for user lookup from index page
        user = User(
            name='jj',
            email='[email protected]',
        )
        user.save()

        ...snipped code...
Sanity check
Run the test!
...
OK
Destroying test database for alias 'default'...
Refactoring, part 2
We still have one test, though - test_index_handles_logged_in_user() - that is dependent on the database and thus needs to be refactored. Why? As a general rule, you do not want to test the database with a unit test; that would be an integration test. Mocks are perfect for handling calls to the database.
First, install the mock library:

$ pip install mock
import mock
from payments.models import User


def test_index_handles_logged_in_user(self):
    # Create the user needed for user lookup from index page
    # Note that we are not saving to the database
    user = User(
        name='jj',
        email='[email protected]',
    )

    ...snipped code...

    expectedHtml = render_to_response(
        'user.html', {'user': user}).content

    self.assertEquals(resp.content, expectedHtml)
Testing Models
In the previous section we used the Mock library to mock out the Django ORM so we could
test a Model without actually hitting the database. This can make tests faster, but its actually
NOT a good approach to testing models. Lets face it, ORM or not, your models are tied to
a database - and are intended to work with a database. So testing them without a database
can often lead to a false sense of security.
The vast majority of model-related errors happen within the actual database. Generally
database errors fall into one of the four following categories:
All of these issues wont occur if youre mocking out your database, because you are effectively
cutting out the database from your tests where all these issues originate from. Hence, testing
models with mocks provides a false sense of security, letting these issues slip through the
cracks.
This does not mean that you should never use mocks when testing models; just be very careful
youre not convincing yourself that you have fully tested your model when you havent really
tested it at all.
That said, lets look at several techniques for safely testing Django Models.
WARNING: It's worth noting that many developers do not run their tests in development with the same database engine used in their production environment. This can also be a source of errors slipping through the cracks. In a later chapter, we'll look at using Travis CI to automate testing after each commit, including how to run the tests on multiple databases.
class User(AbstractBaseUser):
    name = models.CharField(max_length=255)
    email = models.CharField(max_length=255, unique=True)
    # password field defined in base class
    last_4_digits = models.CharField(max_length=4, blank=True,
                                     null=True)
    stripe_id = models.CharField(max_length=255)
    created_at = models.DateTimeField(auto_now_add=True)
    updated_at = models.DateTimeField(auto_now=True)

    USERNAME_FIELD = 'email'

    def __str__(self):
        return self.email
class UserModelTest(TestCase):

    def test_user_creation(self):
        User(email="[email protected]", name='test user').save()

        users_in_db = User.objects.all()
        self.assertEquals(users_in_db.count(), 1)

        user_from_db = users_in_db[0]
        self.assertEquals(user_from_db.email, "[email protected]")
        self.assertEquals(user_from_db.name, "test user")
Congratulations. You have verified that the Django ORM can indeed store a model correctly. But we already knew that, didn't we? What's the point, then? Not much, actually. Remember: we do not need to test Django's inherent functionality.
def index(request):
    uid = request.session.get('user')
    if uid is None:
        return render_to_response('index.html')
    else:
        return render_to_response(
            'user.html',
            {'user': User.objects.get(pk=uid)}
        )
Isn't that the same thing? Not exactly. In the above code the view would have to know about the implementation details of our model, which breaks encapsulation. In reality, for small projects this doesn't really matter, but in large projects it can make the code difficult to maintain.
Imagine you had 40 different views, and each one used your User model in a slightly different way, calling the ORM directly. Do you see the problem? In that situation it's very difficult to make changes to your User model, because you don't know how the other objects in the system are using it.
Thus, it's a much better practice to encapsulate all that functionality within the User model; as a rule, everything should interact with the User model only through its API, making it much easier to maintain.
Without further ado, here is the change to our payments models.py file:
class User(AbstractBaseUser):
    name = models.CharField(max_length=255)
    email = models.CharField(max_length=255, unique=True)
    # password field defined in base class
    last_4_digits = models.CharField(max_length=4, blank=True,
                                     null=True)
    stripe_id = models.CharField(max_length=255)
    created_at = models.DateTimeField(auto_now_add=True)
    updated_at = models.DateTimeField(auto_now=True)

    USERNAME_FIELD = 'email'

    @classmethod
    def get_by_id(cls, uid):
        return User.objects.get(pk=uid)

    def __str__(self):
        return self.email
Notice the @classmethod at the bottom of the code listing. We use a @classmethod because the method itself is stateless, and when we are calling it we don't have to create a User object; rather, we can just call User.get_by_id.
Test the model
Now we can write some tests for our model. Create a tests.py file in the payments folder
and add the following code:
class UserModelTest(TestCase):

    @classmethod
    def setUpClass(cls):
        cls.test_user = User(email="[email protected]", name='test user')
        cls.test_user.save()

    def test_user_to_string_print_email(self):
        self.assertEquals(str(self.test_user), "[email protected]")

    def test_get_by_id(self):
        self.assertEquals(User.get_by_id(1), self.test_user)
Test!
...
OK
Destroying test database for alias 'default'...
What's happening here? In these tests, we don't use any mocks; we just create the User object in our setUpClass() function and save it to the database. This is our second technique for testing models in action: just create the data as needed.
WARNING: Be careful with setUpClass(). While using it in the above example can speed up our test execution - because we only have to create one user - it does cause us to share state between our tests. In other words, all of our tests are using the same User object, and if one test were to modify the User object, it could cause the other tests to fail in unexpected ways. This can lead to VERY difficult debugging sessions, because tests could seemingly fail for no apparent reason. A safer strategy, albeit a slower one, would be to create the user in the setUp() method, which is run before each test. You may also want to look at the tearDown() method and use it to delete the User model after each test is run if you decide to go the safer route. As with most things in development, it's a trade-off. If in doubt, though, I would go with the safer option and just be a little more patient.
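If you decide to go that safer route, the same test class might look something like this sketch (one fresh user per test, deleted again afterwards):

class UserModelTest(TestCase):

    def setUp(self):
        # runs before *every* test, so each test gets its own user
        self.test_user = User(email="[email protected]", name='test user')
        self.test_user.save()

    def tearDown(self):
        # runs after every test; clean up so no state leaks between tests
        self.test_user.delete()

    def test_user_to_string_print_email(self):
        self.assertEquals(str(self.test_user), "[email protected]")

Slower, because the user is created and destroyed for each test, but every test now starts from a known, untouched state.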
So, these tests provide some reassurance that our database is in working order because we
are round-tripping - writing to and reading from - the database. If we have some silly
configuration issue with our database, this will catch it, in other words.
NOTE: In effect, testing that the database appears sane is a side effect of our
test here - which again means, by definition, these are not unit tests - they are
integration tests. But for testing models the majority of your tests should really
be integration tests.
Update the view and the associated test
Now that we have added the get_by_id() function, update the main.views.index() function accordingly.
def index(request):
    uid = request.session.get('user')
    if uid is None:
        return render_to_response('index.html')
    else:
        return render_to_response(
            'user.html', {'user': User.get_by_id(uid)}
        )
import mock


def test_index_handles_logged_in_user(self):
    # Create a session that appears to have a logged in user
    self.request.session = {"user": "1"}

    ...snipped code...

    self.request.session = {}

    expected_html = render_to_response(
        'user.html', {'user': user_mock.get_by_id(1)}
    )
    self.assertEquals(resp.content, expected_html.content)
This small change actually simplifies our tests. Before, we had to mock out the ORM - i.e., User.objects - and now we just mock the model.
This makes a lot more sense as well, because previously we were mocking the ORM while we were testing a view. That should raise a red flag. We shouldn't have to care about the ORM when we are testing views; after all, that's why we have models.
import sys

# Covers regular testing and django-coverage
if 'test' in sys.argv or 'test_coverage' in sys.argv:
    DATABASES['default']['ENGINE'] = 'django.db.backends.sqlite3'
Now anytime you run Django tests, Django will use a SQLite database, so you shouldn't notice much of a performance hit at all. Awesome!
NOTE: Do keep in mind the earlier warning about testing against a different database engine in development than in production. We'll come back to this and show you how you can have your cake and eat it too! :)
Even though fixtures are difficult to use out of the box, there are libraries that make using fixtures much more straightforward.
Fixture libraries
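One popular option is factory_boy. The sketch below assumes you have run pip install factory_boy; the UserFactory class is illustrative and not part of the course code. A factory describes how to build a model instance once, and your tests then ask for as many instances as they need:

import factory

from payments.models import User


class UserFactory(factory.django.DjangoModelFactory):
    class Meta:
        model = User

    name = 'test user'
    # each user gets a unique email, so unique=True never trips
    email = factory.Sequence(lambda n: 'user{0}@example.com'.format(n))

A test can then simply call UserFactory() to get a saved User, or UserFactory.build() for an unsaved one, instead of hand-writing fixture files.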
Testing Forms
We have a number of forms in our application, so let's go ahead and test those as well. payments.forms has several forms in it. Let's start simple with the SigninForm. To refresh your memory, it looks like this:
class SigninForm(PaymentForm):
    email = forms.EmailField(required=True)
    password = forms.CharField(
        required=True,
        widget=forms.PasswordInput(render_value=False)
    )
When it comes to forms in Django, they generally have the following lifecycle:
1. The form is instantiated, usually bound to the data submitted in the request.
2. The data is validated by calling is_valid().
3. If validation fails, the form is re-rendered with the appropriate error messages.
4. If validation succeeds, the cleaned data is available in cleaned_data for further processing.
We want to test this by first populating data in the form and then ensuring that it validates correctly.
Mixins
Because most form validation is very similar, let's create a mixin to help us out. If you're unfamiliar with mixins, StackOverflow can help.
Add the following code to payments.tests:
class FormTesterMixin():

    def assertFormError(self, form_cls, expected_error_name,
                        expected_error_msg, data):

        test_form = form_cls(data=data)
        # if we get an error then the form should not be valid
        self.assertFalse(test_form.is_valid())

        self.assertEquals(
            test_form.errors[expected_error_name],
            expected_error_msg,
            msg="Expected {} : Actual {} : using data {}".format(
                test_form.errors[expected_error_name],
                expected_error_msg, pformat(data)
            )
        )
This guy makes it super simple to validate a form. Just pass in the class of the form, the name of the field expected to have an error, the expected error message, and the data to initialize the form. Then this little function will do all the appropriate validation and provide a helpful error message that tells you not only the failure but also what data triggered the failure.
It's important to know what data triggered the failure, because we are going to use a lot of data combinations to validate our forms. This way it becomes easier to debug the test failures.
Test

def test_signin_form_data_validation_for_invalid_data(self):
    invalid_data_list = [
        {'data': {'email': '[email protected]'},
         'error': ('password', [u'This field is required.'])},
        {'data': {'password': '1234'},
         'error': ('email', [u'This field is required.'])}
    ]

    for invalid_data in invalid_data_list:
        self.assertFormError(SigninForm,
                             invalid_data['error'][0],
                             invalid_data['error'][1],
                             invalid_data["data"])

...
OK
1. We first use multiple inheritance to inherit from the Python standard unittest.TestCase
as well as the helpful FormTesterMixin.
2. Then in the test_signin_form_data_validation_for_invalid_data() test,
we create an array of test data called invalid_data_list. Each invalid_data
item in invalid_data_list is a dictionary with two items: the data item, which is
the data used to load the form, and the error item, which is a set of field_names
and then the associated error_messages.
With this setup we can quickly loop through many different data combinations and check
that each field in our form is validating correctly.
Form validations are one of those things that developers tend to breeze through or ignore because they are so simple to do. But for many applications they are quite critical (I'm looking at you, banking and insurance industries). QA teams seek out form validation errors like a politician looking for donations.
What's more, since the purpose of validation is to keep corrupt data out of the database, it's especially important to include a number of detailed tests. Think about edge cases.
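For example, you could extend the invalid_data_list from above with a couple of edge cases almost for free. (The extra error strings below are illustrative; the exact messages depend on your Django version and form definitions, so check them against a real form first.)

invalid_data_list = [
    {'data': {'email': '[email protected]'},
     'error': ('password', [u'This field is required.'])},
    {'data': {'password': '1234'},
     'error': ('email', [u'This field is required.'])},
    # edge cases: malformed email address, completely empty payload
    {'data': {'email': 'not-an-email', 'password': '1234'},
     'error': ('email', [u'Enter a valid email address.'])},
    {'data': {},
     'error': ('email', [u'This field is required.'])},
]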
Cleaning
The other thing that forms do is clean data. If you have custom data-cleaning code, you should test that as well. Our UserForm does, so let's test that, starting with the validation and then moving on to cleaning.
Please note: you'll have to inherit from SimpleTestCase to get the assertRaisesMessage function.
Test Form Validation
def test_user_form_passwords_match(self):
    form = UserForm(
        {
            'name': 'jj',
            'email': '[email protected]',
            'password': '1234',
            'ver_password': '1234',
            'last_4_digits': '3333',
            'stripe_token': '1'}
    )
    # Is the data valid? -- if not print out the errors
    self.assertTrue(form.is_valid(), form.errors)


def test_user_form_passwords_dont_match_throws_error(self):
    form = UserForm(
        {
            'name': 'jj',
            'email': '[email protected]',
            'password': '234',
            'ver_password': '1234',  # bad password
            'last_4_digits': '3333',
            'stripe_token': '1'
        }
    )

    self.assertRaisesMessage(forms.ValidationError,
                             "Passwords do not match",
                             form.clean)

...
OK
Destroying test database for alias 'default'...
Testing Summary
This chapter provided a solid overview of unit testing for Django applications.
1. Part 1: Tested our Django URL routing, templates, and views using unit tests.
2. Part 2: Refactored those tests with mocks to simplify and further isolate the tests.
3. Part 3: Tested our models using the Django ORM by creating data on demand.
4. Part 4: Tested our forms using mixins.
Summary
We covered quite a bit in this chapter. The summary table here compares each testing technique - Unit Testing, Integration Testing, GUI Testing, Mocking, and Fixtures - in terms of when to use it, its advantages, and its disadvantages.
This should provide you with the background you need to start unit testing your own applications. While it may seem strange at first to write unit tests alongside the code, keep at it, and before long you should be reaping the benefits of unit testing in terms of fewer defects, easier refactoring, and having confidence in your code. If you're still not sure what to test at this point, I will leave you with this quote:
In God we trust. Everything else we test.
-anonymous
Exercises
Most chapters have exercises at the end. They are here to give you extra practice on the
concepts covered in the chapter. Working through them will improve your mastery of the
subjects. If you get stuck, answers can be found in Appendix A.
1. Our URL routing testing example only tested one route. Write tests to test the other
routes. Where would you put the test to verify the pages/ route? Do you need to do
anything special to test the admin routes?
2. Write a simple test to verify the functionality of the sign_out view. Do you recall how
to handle the session?
3. Write a test for contact.models. What do you really need to test? Do you need to
use the database backend?
4. QA teams are particularly keen on boundary checking. Research what it is, if you are
not familiar with it, then write some unit tests for the CardForm from the payments
app to ensure that boundary checking is working correctly.
Chapter 4
Test Driven Development
Test Driven Development (TDD) is a software development practice that puts a serious emphasis on writing automated tests as, or immediately before, writing code. The tests not only
serve to verify the correctness of the code being written but they also help to evolve the design
and overall architecture. This generally leads to high quality, simpler, easier to understand
code, which are all goals of Software Craftsmanship.
Programmer (noun): An organism that converts caffeine or alcohol into
code.
~Standard Definition
While the above definition from Uncyclopedia is somewhat funny and true, it doesn't tell the whole story. In particular, it's missing the part about half-working, buggy code that takes a bit of hammering to make it work right. In other words, programmers are humans, and we often make mistakes. We don't fully think through the various use cases a problem presents, or we forget some silly detail, which can lead to some strange edge-case bug that only turns up eight years later and costs the company $465 million in trading losses in about 45 minutes. It happened.
Wouldn't it be nice if there were some sort of tool that would alert a programmer when the code isn't going to do what you think it's going to do? Well, there is.
It's called TDD.
How can TDD do that? It all comes down to timing. If you write a test to ensure a certain
functionality is working, then immediately write the code to make the test pass, you have a
pretty good idea that the code does what you think it does. TDD is about rapid feedback. And
that rapid feedback helps developers who are not just machines (e.g., all of us) deliver great
code.
1. Red Light
The above workflow starts with writing a test before you write any code. The test will of course fail, because the code that it's supposed to verify doesn't exist yet. This may seem like a waste of time - writing a test before you even have code to test - but remember, TDD is as much about design as it is about testing, and writing a test before you write any code is a good way to think about how you expect your code to be used.
You could think about this in any number of ways, but a common checklist of simple considerations might look like this:
Should it be a class method or an instance method?
What parameters should it take?
2. Green Light
Once you have a basic test written, write just enough code to make it pass. "Just enough" is important here. You may have heard of the term YAGNI - You Ain't Gonna Need It. Words to live by when practicing TDD.
Since we are writing tests for everything, it's trivial to go back and update or extend some functionality later if we decide that we indeed are going to need it. But often you won't. So don't write anything unless you really need it.
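As a tiny illustration of the red/green rhythm, here is an invented example (slugify_title() is not part of the course project): the test is written first and fails, then we write just enough code to make it pass.

import unittest


class SlugifyTitleTests(unittest.TestCase):
    # Red: written first; it fails because slugify_title() does not exist yet
    def test_spaces_become_dashes_and_case_is_lowered(self):
        self.assertEqual(slugify_title("Hello World"), "hello-world")


# Green: just enough code to make the test pass - and no more (YAGNI)
def slugify_title(title):
    return title.lower().replace(" ", "-")


if __name__ == '__main__':
    unittest.main()

Only when the test is green do we stop and ask whether the code deserves a cleanup pass, which is the third step, below.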
3. Yellow Light
Getting the test to pass isn't enough. After the test passes, we should review our code and look at ways we can improve it. There are entire books devoted to refactoring, so I won't go into too much detail here; just think about ways you can make the code easier to maintain, more reusable, cleaner, and simpler.
Focus on those things and you can't go wrong. So refactor the code, make it beautiful, and then write another test to further verify the functionality or to describe new functionality.
Registration Functionality
For this application, when a user registers, his or her credit card information will be passed to Stripe, a third-party payment processor, for processing. This logic is handled in the payments/views.py file. The register() function basically grabs all the information from the user, checks it against Stripe, and stores it in the database.
Let's look at that function now. It's called payments.views.register:
def register(request):
    user = None
    if request.method == 'POST':
        form = UserForm(request.POST)
        if form.is_valid():

            ...snipped code...

            user = User(
                name=form.cleaned_data['name'],
                email=form.cleaned_data['email'],
                last_4_digits=form.cleaned_data['last_4_digits'],
                stripe_id=customer.id,
            )

            try:
                user.save()
            except IntegrityError:
                form.addError(user.email + ' is already a member')
            else:
                request.session['user'] = user.pk
                return HttpResponseRedirect('/')

    else:
        form = UserForm()

    return render_to_response(
        'register.html',
        {
            'form': form,
            'months': range(1, 12),
            'publishable': settings.STRIPE_PUBLISHABLE,
            'soon': soon(),
            'user': user,
            'years': range(2011, 2036),
        },
        context_instance=RequestContext(request)
    )
It is by no means a horrible function (this application is small enough that there aren't many bad functions), but it is the largest function in the application, and we can probably come up with a few ways to make it simpler, so it will serve as a reasonable example for refactoring.
The first thing to do here is to write some simple test cases to verify (or, in the case of a 600-line function, determine) what the function actually does.
Remember the exercise from the previous chapter: "Our URL routing testing example only tested one route. Write tests to test the other routes. Where would you put the test to verify the pages/ route? Do you need to do anything special to test the admin routes?" In Appendix A we proposed the answer of using a ViewTesterMixin().
If you were to implement that solution here, you would probably come up with a class like RegisterPageTests() that tests that the registration page returns the appropriate HTML/template data.
The code for those tests is shown below:
@classmethod
def setUpClass(cls):
    html = render_to_response(
        'register.html',
        {
            'form': UserForm(),
            'months': range(1, 12),
            'publishable': settings.STRIPE_PUBLISHABLE,
            'soon': soon(),
            'user': None,
            'years': range(2011, 2036),
        }
    )
    ViewTesterMixin.setupViewTester(
        '/register',
        register,
        html.content,
    )

def setUp(self):
    request_factory = RequestFactory()
    self.request = request_factory.get(self.url)
If you don't have that code, you can grab the appropriate tag from git by typing the following from your code directory:
Notice there are a few paths through the view that we need to test:

Condition
1. request.method != POST
2. request.method == POST and not form.is_valid()
3. request.method == POST and user.save() is OK
4. request.method == POST and user.save() throws an IntegrityError
Condition 1
Our existing tests cover the first case, by using the ViewTesterMixin(). Let's write tests for the rest.
Condition 2
def setUp(self):
    request_factory = RequestFactory()
    self.request = request_factory.get(self.url)

def test_invalid_form_returns_registration_page(self):
    with mock.patch('payments.forms.UserForm.is_valid') as user_mock:
        user_mock.return_value = False

        self.request.method = 'POST'
        self.request.POST = None
        resp = register(self.request)
        self.assertEquals(resp.content, self.expected_html)
To be clear, in the above example we are only mocking out the call to is_valid(). Normally we might need to mock out the entire object or more functions of the object, but since it will return False, no other functions of the UserForm will be called.
NOTE: Also take note that we are now re-creating the mock request before each test is executed, in our setUp() method. This is because this test changes properties of the request object, and we want to make sure that each test is independent.
Condition 3
def test_registering_new_user_returns_successfully(self):
    self.request.session = {}
    self.request.method = 'POST'
    self.request.POST = {
        'email': '[email protected]',
        'name': 'pyRock',
        'stripe_token': '4242424242424242',
        'last_4_digits': '4242',
        'password': 'bad_password',
        'ver_password': 'bad_password',
    }

    resp = register(self.request)
    self.assertEquals(resp.status_code, 200)
    self.assertEquals(self.request.session, {})
The problem with this condition is that the test will actually attempt to make a request to the
Stripe server - which will reject the call because the stripe_token is invalid.
Lets mock out the Stripe server to get around that.
def test_registering_new_user_returns_succesfully(self):
    self.request.session = {}
    self.request.method = 'POST'
    self.request.POST = {
        'email': '[email protected]',
        'name': 'pyRock',
        'stripe_token': '...',
        'last_4_digits': '4242',
        'password': 'bad_password',
        'ver_password': 'bad_password',
    }

    resp = register(self.request)
    self.assertEquals(resp.content, "")
    self.assertEquals(resp.status_code, 302)
    self.assertEquals(self.request.session['user'], 1)

So now we don't have to actually call Stripe to run our test. Also, we added a line at the end to verify that our user was indeed stored correctly in the database.
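If you have not used the mock library much before, here is a small, self-contained sketch of one way the Stripe call can be patched with a decorator. The test class name, patch target and fake customer id are illustrative assumptions, not the course's exact code:

import unittest
import mock

class StripeMockSyntaxExample(unittest.TestCase):

    # Patch the Stripe API call so no network request is made; the fake
    # customer object only needs an id attribute for the view to read.
    @mock.patch('stripe.Customer.create', return_value=mock.Mock(id=1234))
    def test_stripe_is_not_really_called(self, stripe_mock):
        import stripe
        customer = stripe.Customer.create(card='tok', plan='gold')
        self.assertEqual(customer.id, 1234)   # we got the fake, not Stripe
        self.assertTrue(stripe_mock.called)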
NOTE: A side note on syntax: the decorator style shown in the sketch above is only one of the ways the mock library lets you configure a return value; you can also configure the mock inside the test body.
Condition 4

def test_registering_user_twice_cause_error_msg(self):
    ...
    user.save()
    ...
    resp = register(self.request)
    ...
As you can see, we are testing pretty much the entire system, and we have to set up a bunch of expected data and mocks so the test works correctly.
This is sometimes required when testing view functions that touch several parts of the system. But it's also a warning sign that the function may be doing too much. One other thing: if you look closely at the above test, you will notice that it doesn't actually verify the HTML being returned.
We can do that by adding this line:

self.assertEquals(resp.content, html.content)

Now run it. BOOM! The test fails. Examining the results we can see the following error:
Expected: <a href="/sign_in">Login</a>
Actual: <a href="/sign_out">Logout</a>
It looks like even though a user didn't register correctly, the system thinks a user is logged into the system. We can quickly verify this with some manual testing. Sure enough, after we try to register the same email a second time this is what we see:
Further, clicking on Logout will give a nasty KeyError as the logout() function tries to remove the value in session["user"], which is of course not there.
There you go: we found a defect. :D
While this one won't cost us $465 million, it's still much better that we find it before our customers do. This is the type of subtle error that easily slips through the cracks, but that unit tests are good at catching. The fix is easy, though. Just set user = None in the except IntegrityError: handler within payments/views.py:
try:
    user.save()
except IntegrityError:
    form.addError(user.email + ' is already a member')
    user = None
else:
    request.session['user'] = user.pk
    return HttpResponseRedirect('/')

Re-run the tests and they pass again:

OK
Destroying test database for alias 'default'...

Perfect.
The next refactoring is to move user creation out of the view and into the model. Right now register() builds and saves the User directly:

user = User(
    name=form.cleaned_data['name'],
    email=form.cleaned_data['email'],
    last_4_digits=form.cleaned_data['last_4_digits'],
    stripe_id=customer.id,
)

try:
    user.save()

We want to pull this code out of the view.
And put it in the User() model class, within payments/models.py so it looks like this:
@classmethod
def create(cls, name, email, password, last_4_digits, stripe_id):
    new_user = cls(name=name, email=email,
                   last_4_digits=last_4_digits, stripe_id=stripe_id)
    new_user.set_password(password)
    new_user.save()
    return new_user
Then update register() in payments/views.py to call the new class method:

cd = form.cleaned_data
try:
    user = User.create(
        cd['name'],
        cd['email'],
        cd['password'],
        cd['last_4_digits'],
        customer.id
    )
except IntegrityError:
    form.addError(cd['email'] + ' is already a member')
    user = None
else:
    request.session['user'] = user.pk
    return HttpResponseRedirect('/')
Updating Tests
Taking a defensive approach, let's update some of our tests before moving on with the refactoring.
The first thing is to move the testing of user creation to the user model and test the create class method. So, in the UserModelTest class we can add the following two methods:
def test_create_user_function_stores_in_database(self):
    user = User.create("test", "[email protected]", "tt", "1234", "22")
    self.assertEquals(User.objects.get(email="[email protected]"), user)

def test_create_user_allready_exists_throws_IntegrityError(self):
    from django.db import IntegrityError
    self.assertRaises(
        IntegrityError,
        User.create,
        "test user",
        "[email protected]",
        "jj",
        "1234",
        "89"
    )
With these tests we know that user creation is working correctly in our model, and therefore if we do get an error about storing the user from our view test, we know it must be something in the view and not the model.
At this point we could actually leave all the existing register view tests as is. Or we could mock out the calls to the database, since we don't strictly need them any more, as we already tested that User.create works as intended.
Let's go ahead and mock out the test_registering_new_user_returns_succesfully() function to show how it works:
@mock.patch('stripe.Customer.create')
@mock.patch.object(User, 'create')
def test_registering_new_user_returns_succesfully(
        self, create_mock, stripe_mock
):
    self.request.session = {}
    self.request.method = 'POST'
    self.request.POST = {
        'email': '[email protected]',
        'name': 'pyRock',
        'stripe_token': '...',
        'last_4_digits': '4242',
        'password': 'bad_password',
        'ver_password': 'bad_password',
    }

    # get the return values of the mocks, for our checks later
    new_user = create_mock.return_value
    new_cust = stripe_mock.return_value

    resp = register(self.request)

    self.assertEquals(resp.content, "")
    self.assertEquals(resp.status_code, 302)
    self.assertEquals(self.request.session['user'], new_user.pk)

    # verify the user was actually stored in the database.
    create_mock.assert_called_with(
        'pyRock', '[email protected]', 'bad_password', '4242',
        new_cust.id
    )
Pay particular attention to these lines in the middle of the test:

# get the return values of the mocks, for our checks later
new_user = create_mock.return_value
new_cust = stripe_mock.return_value

We store those return values so we can use them later, in these two assertions:
self.assertEquals(self.request.session['user'], new_user.pk)

# verify the user was actually stored in the database.
create_mock.assert_called_with(
    'pyRock', '[email protected]', 'bad_password', '4242',
    new_cust.id
)
1. The first assertion just verifies that we are setting the session value to the id of the object returned from our create_mock.
2. The second assertion uses a check that is built into the mock library, allowing us to verify whether a mock was called, and with what parameters. So basically we are asserting that User.create() was called as expected.
Run the tests.

OK
Destroying test database for alias 'default'...
When should you Mock?

- When a function is horribly slow and you don't want to run it each time.
- Similarly, when a function needs to call something not on your development machine (i.e., web APIs).
- When a function requires setup, or state, that is more difficult to manage than setting up the mock (i.e., unique data generation).
- When a function is somebody else's code and you really don't trust it or don't want to be responsible for it (but see "When should you not Mock?" below).
- Time-dependent functionality is good to mock, as it's often difficult to recreate the dependency in a unit test.

When should you not Mock?

- Not mocking should be the default choice; when in doubt, don't mock.
- When it's easier or faster or simpler not to mock.
- If you are going to mock out web APIs or code you don't want to be responsible for, it's a good idea to include some separate integration tests that you can run from time to time to ensure the expected functionality is still working and hasn't changed. Changing APIs can be a huge headache when developing against third-party code.

Some people swear that the speed of your unit tests is paramount to all else. While it is true that mocking will generally speed up the execution time of your unit test suite, unless it's a significant speed increase (like in the case of stripe.Customer.create) my advice is: mocking is a great way to kill brain cycles, and actually improving your code is a much better use of those brain cycles.
Conclusion
The final thing you might want to refactor out of the register() function is the actual Stripe customer creation.
This makes a lot of sense if you have a use case where you may want to choose between subscription vs. one-time billing. You could just write a customer manager function that takes the billing type as an argument and calls Stripe appropriately, returning the customer.id.
We'll leave this one as an exercise for you, dear reader.
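As a rough sketch of the idea (this is not the course's solution; the class name and billing_method argument mirror the small wrapper that appears later in the graceful degradation chapter):

import stripe

class Customer(object):
    """Thin wrapper so the rest of the code never calls Stripe directly."""

    @classmethod
    def create(cls, billing_method="subscription", **kwargs):
        # pick between recurring billing and a one-time charge
        if billing_method == "subscription":
            return stripe.Customer.create(**kwargs)
        elif billing_method == "one_time":
            return stripe.Charge.create(**kwargs)

The register() view would then call Customer.create(...) and pass the returned customer.id through to User.create().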
We have discussed the general approach to TDD and how following the three steps, and making one small change at a time, is very important to getting TDD right.
As a review, the three steps of TDD are:
1. Fail - write a small test for the next piece of functionality and watch it fail.
2. Pass - write just enough code to make the test pass.
3. Refactor - clean up the code (and the tests) without changing the behavior.
Follow those steps and they should serve you well in your path to Elite Software Craftsmanship.
We also touched on applying TDD and unit testing to existing/legacy code, and how to think defensively so you don't break existing functionality while trying to improve the system.
With the two testing chapters now completed we can turn our focus to some of the cool and exciting features offered in Django 1.6, 1.7, and 1.8, but first here are a couple of exercises to cement all this TDD stuff in your brain.
Exercises
1. Try mocking out the test_registering_user_twice_cause_error_msg() test if you really want to get your head around mocks. Start by mocking out the User.create function so that it throws the appropriate errors.
HINT: this documentation is a great resource and the best place to start. In particular, search for side_effect.
Want more? Mock out the UserForm as well. Good luck.
2. As alluded to in the conclusion, remove the customer creation logic from register() and place it into a separate CustomerManager() class. Re-read the first paragraph of the conclusion before you start, and don't forget to update the tests accordingly.
Chapter 5
Git Branching at a Glance
We talked about testing and refactoring in the first few chapters of this book because they are so important to the art of Software Craftsmanship. And we will continue to use them throughout this book. But there are other tools which are important to master in the never-ending quest to become a better craftsman. In this chapter we will briefly introduce Git Branching so we can use it in the next chapter, which is about the art of the upgrade - i.e., upgrading to Django 1.8.
It is assumed that the reader has some basic familiarity with how to push, pull and set up a repo with git prior to reading this chapter. If you don't, you can revisit the Getting Started chapter in the second Real Python course. Or you can go through this quick and to-the-point git-guide.
Git Branching
When developing applications we are often faced with situations where we want to experiment or try new things. It would be nice if we could do this in a way that wouldn't affect our main line of code, that we could easily roll back from if our experiment didn't quite work out, or that we could save off and work on later. This is exactly what Git Branching provides. Often referred to as Git's killer feature, the branching model in Git is incredibly useful.
There are four main reasons that Git's branching model is so much more useful than branching in traditional Version Control Systems (I'm looking at you, SVN):
1. Git Branching is done in the same directory as your main-line code. No need to switch directories. No need to take up a lot of additional disk space. Just do a git checkout <branchname> and Git switches all the files for you automatically.
2. Git Branching is lightning fast. Since it doesn't copy all the files but rather just applies the appropriate diffs, it's really fast.
3. Git Branching is local. Since your entire Git repository is stored locally, there is no need for network operations when creating a branch, allowing branching and merging to be done completely offline (and later pushed back up to the origin server if desired). This is also another reason why Git branching is so fast; in the time it takes SVN to connect to the remote server to perform the branch operation, Git will have already completed.
4. But the most important and useful feature of Git Branching is not branching at all - it's merging. Git is fantastic at merging (in fact, every time you do a git pull you are merging), so developers don't need to be afraid of merging branches. It's more or less automatic, and Git has fantastic tools to clean up those cases where the merge needs some manual intervention.
5. (Yeah, I know I said only 4; count this one as a bonus.) Merging can be undone. Trying to merge and everything goes bad? No problem, just git reset --hard.
The main advantage of all of these features is that it makes it simple - in fact, advisable - to include branching as part of your daily workflow (something you generally wouldn't do in SVN). With SVN we generally branch for big things, like upgrading to Django 1.8 or another new release. With Git we can branch for everything.
One common branching pattern in Git is the feature branch pattern, where a new branch is created for every requirement and only merged back into the mainline branch after the feature is completed.
Branching Models
That's a good segue into talking about the various branching models that are often employed with Git.
A branching model is just an agreed-upon way to manage your branches. It usually describes when and for what reasons a branch is created, when it is merged (and what branch it is merged into) and when the branch is destroyed. It's helpful to follow a branching model so you don't have to keep asking yourself, "Should I create a branch for this?". As developers, we have enough to think about, so following a model to a certain degree takes the thinking out of it - just do what the model says. That way you can focus on writing great software and not focus so much on how your branching model is supposed to work.
Let's look at some of the more common branching models used in Git.
The "I just switched from SVN and I don't really want to learn Git" Branching Model
Otherwise known as the no-branching model. This is most often employed by Git noobs, especially if they have a background in something like CVS or SVN where branching is hard. I admit that when I started using Git this is exactly the model that I followed. For me, and I think for most people who follow this model, it's because branching seems scary and there is code to write, and tests to run, and who has time to learn about the intricacies of branching, yadda, yadda, yadda.
The problem with this model is that there is such a huge amount of useful functionality baked into Git that you simply are not getting access to if you're not branching. How do you deal with production issues? How do you support multiple versions of your code base? What do you do if you have accidentally gone down the wrong track and have screwed up all your code? How do you get back to a known good state? The list goes on.
There are a ton of reasons why this isn't a good way to use Git. So do yourself a favor, take some time and at least learn the basics of git branching. Your future self will thank you. :)
Git-Flow
Vincent Driessen first documented this excellent branching model here. In his article (which
you should really read) entitled A Successful Git Branching Model, he goes through a somewhat complex branching model that handles just about any situation you could think of when
working on a large software project. His model eloquently handles:
87
A quick snapshot of what the branching model looks like can be seen below:
For large software projects, when working with a team, this is the recommended model. Since
Vincent does such a good job of describing it on his website, were not going to go into detail
about it here, but if you are working on a large project this is a good branching model to use.
For smaller projects with less team members you may want to look at the model below. Also
I have reports from a number highly skilled developers use combinations of git-flow and the
Github pull-request model. So choose what fits your style best - or use them all. :)
Also to make git-flow even easier to use, Vincent has created an open source tool called
git-flow that is a collection of scripts that make using this branching model pretty straightforward, which can save a lot of keystrokes. Git-Flow can be found on Github here.
Thank you Vincent!
90
The GitHub Pull Request Model
For this model to work effectively, the Master branch must always be deployable. It must be kept clean at all times, and you should never commit directly to master. The point of the Pull Request is to ensure that somebody else reviews your code before it gets merged into Master, to help keep Master clean. There is even a tool called git-reflow that will enforce an LGTM (Looks Good to Me) for each pull request before it is merged into master.
Write your code and tests on the feature branch
All work should be done on a feature branch, to keep it separate from Master and to allow each change to go through a code review before being merged back into master.
Create the branch by simply typing:
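A typical way to create and switch to the branch in one step (the branch name here is just an example):

$ git checkout -b feature_branch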
Then do whatever work it is you want to do. Keep in mind that commits should be incremental and atomic. Basically, that means each commit should be small and affect only one aspect of the system. This makes things easy to understand when looking through history. Wikipedia has a detailed article on atomic commit conventions.
Once you're done with your commits, but before you issue the pull request, it's important to rebase against Master and tidy things up, with the goal of keeping the history useful.
Rebasing in git is a very powerful tool and you can do all kinds of things with it. Basically, it allows you to take the commits from one branch and replay them on another branch. Not only can you replay the commits, but you can modify the commits as they are replayed as well. For example, you can squash several commits into a single commit. You can completely remove or skip commits, you can change the commit message on commits, or you can even edit what was included in a commit.
Now that is a pretty powerful tool. A great explanation of rebase can be found in the git-scm book. For our purposes, though, we just want to use the -i (interactive) mode so we can clean up our commit history, so our fellow developers don't have to bear witness to all the madness that is our day-to-day development. :)
Start with the following command:
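Interactively rebasing the feature branch looks like this (assuming master is the branch you originally branched from):

$ git rebase -i master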
Issuing the above command will bring up your default text editor (mine is vim) showing a list
of commits with some commands that you can use:
pick 7850c5d tag: Ch3_before_exercises
pick 6999b75 Ch3_ex1
pick 6598edc Ch3_ex1_extra_credit
pick 8779d40 Ch3_ex2

# If you remove a line here THAT COMMIT WILL BE LOST.
#

Above we can see the commits that are to be rebased. Each commit has the word pick in front of it, followed by the SHA ID of the commit and the commit message. You can change the word pick in front of each commit to any of the following commands:
# Commands:
#  p, pick = use commit
#  r, reword = use commit, but edit the commit message
#  e, edit = use commit, but stop for amending
#  s, squash = use commit, but meld into previous commit
#  f, fixup = like "squash", but discard this commit's log message
#  x, exec = run command (the rest of the line) using shell

The commands are all self-explanatory. Note that both squash and fixup come in handy, and edit is also nice. If you were to change the commits to look like the following:
pick 7850c5d Ch3_before_exercises
pick 6999b75 Ch3_ex1
edit 6598edc Ch3_ex1_extra_credit
pick 8779d40 Ch3_ex2
Then, after you save and close the file, the rebase will run and play back from the top of the file: commit 7850c5d is applied, then commit 6999b75, and then the changes for commit 6598edc are applied and rebase drops you back to the shell, allowing you to make any changes you want. Then just git add your changes, and once you tell the rebase to continue (git rebase --continue), the changes that you git add-ed will be replayed as part of the 6598edc commit. Finally, the last commit, 8779d40, will be applied. This is really useful in situations where you forgot to add a file to git or you just need to make some small changes.
Have a play with git rebase; it's a really useful tool and worth spending some time learning.
Once you're done with the rebase and you have your commit history looking just how you want it to look, then it's time to move on to the next step.
Does this make sense? Would you like to see a video depicting an example? If
so, please let us know! [email protected]
Technically you don't have to use GitHub's pull request feature. You could simply push your branch up to origin and have others check it out, like this:
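Pushing the feature branch to the origin remote is just (branch name illustrative):

$ git push origin feature_branch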
But GitHub's pull requests are cool and provide threaded comments, auto-merging and other good stuff as well. If you are going to use a pull request, you don't need to fork the entire repository, as GitHub now allows you to issue a pull request from a branch. So after you have pushed up your branch, you can just create the pull request on it through the GitHub website. Instructions for this are here.
There is also an excellent set of command line tools called hub that makes working with GitHub from the command line a breeze. You can find hub here. If you're on OS X and you use homebrew (which you should), just brew install hub. Then you don't have to push your feature_branch at all; hub can open the pull request for you directly.
This will open a text editor allowing you to put in comments about the pull request. Be aware, though, that this will create a fork of the repo. If you don't want to create the fork (which you probably don't), you can pass hub a slightly longer command that targets the upstream repo directly.
This will create a pull request from your feature branch to the master branch of the repo owned by repo_owner. If you're also tracking issues in GitHub, you can link the pull request to an issue by adding -i ### where ### is the issue to link to. Awesome! :)
Perform the Code Review
This is where the GitHub pull request feature really shines. Your team can review the pull request, make comments, or even check out the pull request and add some commits. Once the review is done, the only thing left to do is merge the pull request into Master.
Merge Pull Request into Master
Process:
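The process amounts to merging the feature branch into master; a typical sequence (branch name illustrative) is:

$ git checkout master
$ git merge --no-ff feature_branch
$ git push origin master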
The --no-ff flag is important for keeping the history clean. Basically this groups all the
commits from your feature_branch and breaks them out into a merge bubble so they are
easy to see in the history. An example would look like this:
While some people really don't like having that extra merge commit - i.e., commit number 5340d4b - around, it's actually quite helpful to have. Let's say, for example, you accidentally merged and you want to undo it. Because you have that merge commit, all you have to do is:
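One way to do that (the merge-commit SHA matches the example mentioned above; treat this as an illustration rather than the original's exact command) is to revert the merge commit, keeping the first parent as the mainline:

$ git revert -m 1 5340d4b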
So the merge bubbles are a good defensive way to cover your ass (CYA) in case you make any
mistakes.
The only other thing you do in this branching model is periodically tag your code when it's ready to release.
Something like:
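An annotated tag is the usual choice here (the tag name and message are illustrative):

$ git tag -a v1.0 -m "first production release"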
This way you can always check out the tag, if for example you have production issues, and work with the code from that specific release.
That's all there is to it.
Exercises
The exercises for this chapter are pretty fun. Enjoy!
1. Peter Cottle has created a great online resource to help out people wanting to really
understand git branching. Your task is to play around with this awesome interactive
tutorial. It can be found here: http://pcottle.github.io/learnGitBranching/. What are
your thoughts? Write them down. Blog about them. Email us about it.
2. There is a great talk about the GitHub Pull Request Model by Zach Holman of GitHub entitled "How GitHub uses GitHub to Build GitHub". It's worth watching the entire video, but the branching part starts here.
Chapter 6
Upgrade, Upgrade, and Upgrade some more
Django 1.8
Up until now we have been working on Django 1.5, but it's worthwhile to look at some of the new features offered by Django 1.6, 1.7, and 1.8 as well. In addition, it will give us a chance to look at the upgrade process and what is required when updating a system. Let's face it: if you're lucky enough to build a system that lasts for a good amount of time, you'll likely have to upgrade. This will also give us a chance to try out our git branching strategy and just have some good clean code-wrangling fun.
The Upgrade
Let's start off by creating a feature branch for our work.
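Based on the branch name that shows up in the git branch output below, that is:

$ git checkout -b django1.8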
Then we also need to create a separate virtualenv and upgrade the version of Django in that virtualenv to 1.8. Before you do this, make sure to deactivate the main virtualenv.
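The exact commands depend on how you manage your environments; a sketch (the environment name is an assumption, and the Django version matches the requirements.txt shown later) looks like:

$ deactivate                      # leave the Django 1.5 virtualenv
$ virtualenv env18                # create a fresh environment
$ source env18/bin/activate
$ pip install Django==1.8.2
$ pip install -r requirements.txt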
$ git branch
* django1.8
master
Now run the test suite under Django 1.8. The first thing you will hit is an error like this:

======================================================================
ERROR: test_signin_form_data_validation_for_invalid_data
(payments.tests.FormTests)

Would you look at that: Django decided to take the name of our function, assertFormError. Guess the Django folks also thought we needed something like that. Okay. Let's just change the name of our function to something like should_have_form_error() so it won't get confused with internal Django stuff.
You should also see the same error thrown from each one of your test cases. It looks like this:
======================================================================
ERROR: tearDownClass (tests.contact.testContactModels.UserModelTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/site-packages/django/test/testcases.py", line 962, in tearDownClass
    cls._rollback_atomics(cls.cls_atomics)
AttributeError: type object 'UserModelTest' has no attribute 'cls_atomics'

What's that? This is actually due to a cool new feature in Django 1.8 called setUpTestData, meant to make it easier to load data for a test case. The documentation is here. To achieve this little bit of awesomeness, django.test.TestCase in 1.8 uses setUpClass to wrap the entire test class in an atomic block (see the migrations chapter). Anyway, the solution is to change our setUpClass function (in all of our test classes) to call the setUpClass function in django.test.TestCase. We do this with the super function. Here is an example for MainPageTests:
class MainPageTests(TestCase):

    @classmethod
    def setUpClass(cls):
        super(MainPageTests, cls).setUpClass()
        ... snipped the rest of the function ...
Basically, the line starting with super can be read as, "Call the setUpClass method on the parent class of MainPageTests and pass in one argument, cls." You then have to make the same change to the setUpClass function of each TestCase class, substituting in the name of that class.
NOTE: Since we will upgrade to Python 3 later in this chapter, it's worth noting that the syntax for calling super is a bit cleaner in Python 3. Instead of super(UserModelTest, cls).setUpClass(), with Python 3 you can just call super().setUpClass() and it will figure out the correct arguments to use!
While it will certainly work to update all of our test cases with the call to super().setUpClass(), Django 1.8 introduces a new test case feature which may be more appropriate for some of our test cases. The new feature, setUpTestData, is a class method that is actually called from setUpClass and is meant to be used for creating test data. From a functional perspective there is no difference between creating data for your TestCase in setUpClass vs setUpTestData. However, because of the latter's name, it is arguably a cleaner way to express intent, making your test cases that much easier to understand. To show an example, let's update our payments.testUserModel.UserModelTest class. To do this we would completely remove the setUpClass function and replace it with this:
@classmethod
def setUpTestData(cls):
    cls.test_user = User(email="[email protected]", name='test user')
    cls.test_user.save()
Doing this will create the test_user only one time, just before all the test functions in the class run. Also note that, because the data is created inside an atomic block, there is no need for a teardown method to delete it.
After updating all your test classes, re-run the tests; now they are all at least passing. Nice. But we still have a few warnings to deal with.
First warning

../Users/michaelherman/Documents/repos/realpython/book3-exercises/_chapters/_chp06
RemovedInDjango18Warning: Creating a ModelForm without either
the 'fields' attribute or the 'exclude' attribute is deprecated
- form ContactView needs updating
  class ContactView(ModelForm):

The fix is to list the fields we want explicitly in the form's Meta class:
class ContactView(ModelForm):
    message = forms.CharField(widget=forms.Textarea)

    class Meta:
        fields = ['name', 'email', 'topic', 'message']
        model = ContactForm
It's only one line of extra work, and it prevents a potential security issue. If you now re-run the unit tests, you should no longer see that warning.
Let's just add one more test, to contact.tests, to make sure that we are displaying the expected fields for our ContactForm:
class ContactViewTests(SimpleTestCase):

    def test_displayed_fields(self):
        expected_fields = ['name', 'email', 'topic', 'message']
        self.assertEquals(ContactView.Meta.fields, expected_fields)
Why build such a simple, seemingly worthless test? The main reason for a test like this is to communicate to other developers on our team that we intentionally set these fields and want only these fields set. This means that if someone decides to add or remove a field, they will be alerted that they have broken the intended functionality, which should hopefully prompt them to at least think twice and discuss with us why they may need to make this change.
Run the tests again. They should still pass. Now let's address the other warning.
Second warning
This is basically telling us there is a new test runner in town, and we should be aware of it and make sure all of our tests are still running. We already know that all of our tests are running, so we are okay there. However, this new django.test.runner.DiscoverRunner() test runner is pretty cool, as it allows us to structure our tests in a much more meaningful way. Before we get into the DiscoverRunner, though, let's get rid of the warning message.
Basically, Django looks at the settings.py file to try to determine if your original project was generated with something previous to 1.6, which means you will have certain default settings like SITE_ID, MANAGERS, TEMPLATE_LOADERS or ADMINS. Django also checks to see if you have any of the Django 1.6 default settings that weren't present in previous versions, and then attempts to guess whether your project was created with 1.6 or something older. If Django determines that your project was generated prior to 1.6, Django warns you about the new DiscoverRunner. Conversely, if Django thinks your project was created with 1.6 or something newer, it assumes you know what you're doing and doesn't give you the warning message.
The simplest way to remove the warning is to specify a TEST_RUNNER. Let's use the new one. :)

TEST_RUNNER = 'django.test.runner.DiscoverRunner'

The DiscoverRunner is the default test runner starting in Django 1.6, and now that we explicitly set it in our settings.py, Django won't give us the warning anymore. Don't believe me? Run the tests again.
OK
Destroying test database for alias 'default'...

Boom.
NOTE: It's also worth noting that the check management command - ./manage.py check - would have highlighted this warning as well. This command checks our configuration and ensures that it is compatible with the installed Django version.
With the warning out of the way, let's talk a bit about the DiscoverRunner.
DiscoverRunner
In Django 1.5 you were strongly encouraged to put your test code in the same folder as your production code, in a module called tests.py. While this is okay for simple projects, it can make it hard to find tests for larger projects, and it breaks down when you want to test global settings or integrations that are outside of Django. For example, it might be nice to have a single file for all routing tests to verify that every route in the project points to the correct view. Starting in Django 1.6 you can do just that, because you can more or less structure your tests any way that you want.
Ultimately the structure of your tests is up to you, but a common and useful structure (in many different languages) is to have your source directory (django_ecommerce, in our case) as well as a top-level test directory. The test directory mirrors the source directory and contains only tests. That way you can easily determine which test applies to which model in the project.
Graphically that structure would look like this:

django_ecommerce
    contact
    django_ecommerce
    main
    payments
tests
    contact
        userModelTests.py
        contactViewTests.py
    django_ecommerce
    main
        mainPageTests.py
    payments
I've omitted a number of the tests, but you get the idea. With the test directory tree separate from, and identical in shape to, the code directory, we get the following benefits:
- Separation of tests and production code.
- Better test code structure (like separate files for helper test functions; for example, the ViewTesterMixin() from earlier could be refactored out into a utility module, like tests/testutils.py, so each utility can be easily shared amongst the various test classes).
- It is simple to see the linkage between the test cases and the code being tested.
- It helps ensure that test code isn't accidentally copied over to a production environment.
NOTE: Users of Nose or other third-party testing tools for Django won't think this is a big deal, because they have had this functionality for quite some time. The difference now is that it's built into Django, so you don't need a third-party tool.
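To run the tests from their new home, point manage.py at the tests directory (the same invocation is used again in the Python 3 section below):

$ cd django_ecommerce
$ ./manage.py test ../tests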
This will find all the tests in the ../tests directory and all sub-directories. Run them now
and make sure everything passes:
OK
Destroying test database for alias 'default'...
Once everything is working, remember we need to issue a pull request so the code can be reviewed and eventually merged back into our mainline. You have been making atomic commits as described in the previous chapter, correct?
To show the commits we have made, we can review the log from the command line and then open the pull request with hub.
NOTE: Keep in mind that hub must also be aliased to git, by adding the line alias git=hub to your .bash_profile, for the hub-based commands to work. Alternatively, you can push the branch and manually create the pull request through the GitHub web UI.
Merge
This will issue the merge request, which can be discussed and reviewed on the GitHub website. If you're unfamiliar with this process, please check out the official GitHub docs. After the merge request has been reviewed and approved, it can be merged from the command line with:
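Mirroring the process from the previous chapter, that is roughly:

$ git checkout master
$ git merge --no-ff django1.8
$ git push origin master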
That will merge everything back into master and leave us a nice little merge bubble (as described in the previous chapter) in the git history.
Summary
That's it for the upgrade to Django 1.8. Pretty straightforward, as we have a small system here. With larger systems it's just a matter of repeating the process for each error/warning, and looking up any warning or error you are not sure about in the Django 1.6 release notes, as well as the Django 1.7 and Django 1.8 release notes. It's also a good idea to check the release notes and make sure there aren't any new changes or gotchas that may cause bugs, or change the way your site functions in unexpected ways. And of course (dare I say it) manual testing shouldn't be skipped either, as automated testing won't catch everything.
Upgrading to Python 3
Django officially started supporting Python 3 in version 1.6. In the Django docs they highly recommend Python 3.3 or greater, so let's go ahead and upgrade to 3.4.1, which is the latest version as of this writing.
NOTE: If you do decide to upgrade to a different version greater than 3.3, the
instructions below should remain the same. Please let us know if this is not the
case.
Armin Ronacher has written one of the most popular anti-Python 3 posts. It's worth a read, but to sum it up, it basically comes down to: porting to Python 3 is hard, and unicode/byte string support is worse than it is in Python 2. To be fair, he isn't wrong and some of the issues still exist, but a good number of the issues he refers to are now fixed in Python 3.3.
More recently, Alex Gaynor wrote a similar post to Armin's, with the main point that upgrading is hard and the Python core devs should make it easier.
While I don't want to discount what Armin and Alex have said, as they are both quite intelligent guys, I don't see any value in keeping 2.x around any longer than necessary. I agree that the upgrade process has been kind of confusing and there haven't seemed to be a ton of incentives to upgrade, but right now we have this scenario where we have to support two non-compatible versions, and that is never good. Look, we should all just switch to Python 3 as fast as possible (we've had 5 years already) and never look back. The sooner we put this behind us, the sooner we can get back to focusing on the joy of Python and writing awesome code.
Of course, the elephant in the room is third-party library support. While support for Python 3 is getting better, there are still a number of third-party libraries that don't support Python 3. So before you upgrade, make sure the libraries that you use support Python 3, otherwise you may be in for some headaches.
With that said, let's look at some of the features in Python 3 that make upgrading worthwhile.
The positive
Here's a short list: Unicode everywhere, virtualenv built-in, absolute import, set literals, new division, function signature objects, the print() function, set comprehensions, dict comprehensions, multiple context managers, C-based IO library, memory views, the numbers module, better exceptions, dict views, type-safe comparisons, standard library cleanup, logging dict config, WSGI 1.0.1, super() with no args, unified integers, a better OS exception hierarchy, metaclass class arg, the lzma module, the ipaddress module, the faulthandler module, the email package rewritten, key-sharing dicts, the nonlocal keyword, extended iterable unpacking, stable ABI, qualified names, yield from, import implemented in Python, __pycache__, fixed import deadlock, namespace packages, keyword-only arguments, function annotations, and much, much more!
While we won't cover each of these in detail, there are two really good places to find information on all the new features:
- The Python docs "What's New" pages.
- Brett Cannon's talk (he's a core Python developer) on why 3.3 is better than 2.7.
There are actually a lot of really cool features, and throughout the rest of this book we'll introduce several of them to you.
So now you know both sides of the story. Let's look at the upgrade process.
If you're writing a reusable Django app that you expect others to use, or some sort of library that you want to give to the community, then supporting both Python 2 and 3 is a valid option and most likely what you want to do. If you're writing your own web app (as we are doing in this course) it makes no sense to support both 2 and 3. Just upgrade to 3 and move on. Still, let's examine how you might support both 2 and 3 at the same time.
1. Drop support for Python 2.5 - while it is possible to support all versions of Python from 2.5 to 3.4 on the same code base, it's probably more trouble than it's worth. Python 2.6 was released in 2008, and given that it's a trivial upgrade from 2.5 to 2.6, it shouldn't take anybody more than 6 years to do the upgrade. So just drop the support for 2.5 and make your life easier. :)
2. Set up a Python 3 virtualenv - assuming you already have an existing Python 2 virtualenv that you've been developing on, set up a new one with Python 3 so you can test your changes against both Python 2 and Python 3 and make sure they work in both cases.
This is what we are going to do for this project. Let's look at it step by step.
1. Create a virtualenv for Python 3:
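With a reasonably recent virtualenv this is a single command; the environment name py3 matches the path that shows up in the test output later in this section:

$ virtualenv -p python3 py3
$ source py3/bin/activate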
3. Run 2to3 - now let's see what we need to change to support Python 3:

$ 2to3 django_ecommerce
It will return a diff listing of what needs to change, which should look like:
RefactoringTool: No changes to django_ecommerce/django_ecommerce/wsgi.py
RefactoringTool: No changes to django_ecommerce/main/views.py
RefactoringTool: Refactored django_ecommerce/payments/forms.py
--- django_ecommerce/payments/forms.py (original)
+++ django_ecommerce/payments/forms.py (refactored)
@@ -29,12 +29,12 @@
     email = forms.EmailField(required=True)
     password = forms.CharField(
         required=True,
-        label=(u'Password'),
+        label=('Password'),
         widget=forms.PasswordInput(render_value=False)
     )
     ver_password = forms.CharField(
         required=True,
-        label=(u' Verify Password'),
+        label=(' Verify Password'),
         widget=forms.PasswordInput(render_value=False)
     )
...
-    print form.non_field_errors()
+    print(form.non_field_errors())
     return render_to_response(
         'sign_in.html',
@@ -92,11 +92,11 @@
         'register.html',
         {
             'form': form,
-            'months': range(1, 12),
+            'months': list(range(1, 12)),
             'publishable': settings.STRIPE_PUBLISHABLE,
             'soon': soon(),
             'user': user,
-            'years': range(2011, 2036),
+            'years': list(range(2011, 2036)),
         },
         context_instance=RequestContext(request)
     )
@@ -133,8 +133,8 @@
             'form': form,
             'publishable': settings.STRIPE_PUBLISHABLE,
             'soon': soon(),
-            'months': range(1, 12),
-            'years': range(2011, 2036)
+            'months': list(range(1, 12)),
+            'years': list(range(2011, 2036))
         },
         context_instance=RequestContext(request)
     )
RefactoringTool: Files that need to be modified:
RefactoringTool: django_ecommerce/manage.py
RefactoringTool: django_ecommerce/contact/forms.py
RefactoringTool: django_ecommerce/contact/models.py
RefactoringTool: django_ecommerce/contact/views.py
RefactoringTool: django_ecommerce/django_ecommerce/settings.py
RefactoringTool: django_ecommerce/django_ecommerce/urls.py
RefactoringTool: django_ecommerce/django_ecommerce/wsgi.py
RefactoringTool: django_ecommerce/main/views.py
RefactoringTool: django_ecommerce/payments/forms.py
RefactoringTool: django_ecommerce/payments/models.py
RefactoringTool: django_ecommerce/payments/views.py
The diff listing shows us a number of things that need to change. We could just run 2to3 -w and let it handle these changes for us, but that wouldn't teach us much. So let's look at the changes and see where we need to make them.
The first hunk 2to3 reports, in payments/forms.py, is this:

    password = forms.CharField(
        required=True,
-       label=(u'Password'),
+       label=('Password'),
        widget=forms.PasswordInput(render_value=False)
    )
    ver_password = forms.CharField(
        required=True,
-       label=(u' Verify Password'),
+       label=(' Verify Password'),
        widget=forms.PasswordInput(render_value=False)
    )

This change only concerns the u prefix on the string literals. Remember, in Python 3 everything is unicode. There are no more plain ASCII strings (there can, however, be byte strings, denoted with a b), so the prefix no longer changes anything. Strictly speaking, in Python 3 the prefix is not required - in fact it's redundant, since strings are always treated as unicode - and it is only still accepted so that a single code base can support both Python 2 and 3 at the same time, which we are not doing. So we can skip this one.
On to the next issues:
- print form.non_field_errors()
+ print(form.non_field_errors())
In Python 3 print is a function, and like any function, you have to surround the arguments
with (). This has been changed just to standardize things. So we need to make that change.
The next couple of issues are the same:

-            'months': range(1, 12),
+            'months': list(range(1, 12)),

and

-            'years': range(2011, 2036),
+            'years': list(range(2011, 2036)),
Above, 2to3 is telling us that we need to convert the return value of range() to a list. This is because in Python 3, range() effectively behaves the same as xrange() did in Python 2. The switch to returning a lazy sequence instead of a list - range() in Python 3 and xrange() in Python 2 - is for efficiency: there is no point in building an entire list if we are not going to use it. In our case we are always going to use all the values, so we just go ahead and convert the result to a list.
That's about it for the code.
You can go ahead and make the changes manually or let 2to3 do it for you:

$ 2to3 django_ecommerce -w
Running 2to3 over the tests directory turns up the same sort of changes:

RefactoringTool: No changes to tests/payments/testUserModel.py
RefactoringTool: Refactored tests/payments/testviews.py
--- tests/payments/testviews.py (original)
+++ tests/payments/testviews.py (refactored)
@@ -80,11 +80,11 @@
             'register.html',
             {
                 'form': UserForm(),
-                'months': range(1, 12),
+                'months': list(range(1, 12)),
                 'publishable': settings.STRIPE_PUBLISHABLE,
                 'soon': soon(),
                 'user': None,
-                'years': range(2011, 2036),
+                'years': list(range(2011, 2036)),
             }
         )
         ViewTesterMixin.setupViewTester(
RefactoringTool: Files that need to be modified:
RefactoringTool: tests/contact/testContactModels.py
RefactoringTool: tests/main/testMainPageView.py
RefactoringTool: tests/payments/testCustomer.py
RefactoringTool: tests/payments/testForms.py
RefactoringTool: tests/payments/testUserModel.py
RefactoringTool: tests/payments/testviews.py
Looking through the output, we can see the same basic changes that we had in our main code:
1. The u prefixes on string literals,
2. Converting the value returned by range() to a list, and
3. Calling print as a function.
Applying fixes similar to those above will take care of these, and then we should be able to run our tests and have them pass. To make sure, after you have completed all your changes, type the following from the terminal:

$ cd django_ecommerce
$ ./manage.py test ../tests
======================================================================
FAIL: test_registering_new_user_returns_succesfully
(tests.payments.testviews.RegisterPageTests)
======================================================================
FAIL: test_returns_correct_html
(tests.payments.testviews.SignOutPageTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "../testviews.py", line 36, in test_returns_correct_html
    self.assertEquals(resp.content, self.expected_html)
AssertionError: b'' != ''

FAILED (failures=3)
Destroying test database for alias 'default'...
As it turns out, 2to3 can't catch everything, so we need to go in and fix the remaining issues. Let's deal with them one at a time.
First error
======================================================================
FAIL: test_contactform_str_returns_email
(tests.contact.testContactModels.UserModelTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/michaelherman/Documents/repos/realpython/book3-exercises/_chapters/ch",
  line 23, in test_contactform_str_returns_email
    self.assertEquals("[email protected]", str(self.firstUser))
AssertionError: '[email protected]' != 'ContactForm object'
- [email protected]
+ ContactForm object
It appears that calling str() on our ContactForm no longer returns the user's email. If we look at ContactForm in contact/models.py, we can see the following function:

def __unicode__(self):
    return self.email

In Python 3 the unicode() built-in has gone away and str() always returns a unicode string. Therefore the __unicode__() function is simply ignored.
Change the name of the function to fix this issue:

def __str__(self):
    return self.email

This and various other Python 3-related errata for Django can be found here.
Run the tests again. There are a few failures left, but lucky for us they all have the same solution.
Unicode vs Bytestring errors
======================================================================
FAIL: test_registering_new_user_returns_succesfully
(tests.payments.testviews.RegisterPageTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/py3/lib/python3.4/site-packages/mock.py", line 1201, in patched
    return func(*args, **keywargs)
  File "../testviews.py", line 137, in test_registering_new_user_returns_succesfully
    self.assertEquals(resp.content, "")
AssertionError: b'' != ''
The fix is to compare against a bytestring. For example, in SignOutPageTests the expected content becomes b"":

@classmethod
def setUpClass(cls):
    ViewTesterMixin.setupViewTester(
        '/sign_out',
        sign_out,
        b"",  # a redirect will return an empty bytestring
        status_code=302,
        session={"user": "dummy"},
    )
Here, we just changed the third argument from "" to b"", because resp.content is now a bytestring, so our expected_html must also be a bytestring. The last error, in tests/payments/testviews.py in the test_registering_new_user_returns_succesfully test, is the same type of bytestring error, with the same type of fix.
Update:

self.assertEquals(resp.content, "")

To:

self.assertEquals(resp.content, b"")
Same story as before: We need to make sure our string types match. And with that, we can
rerun our tests and they all pass.
OK
Destroying test database for alias 'default'...
Success!
So now our upgrade to Python 3 is complete. We can commit our changes, and merge back
into the master branch. We are now ready to move forward with Django 1.8 and Python 3.
Awesome.
$ ls -al django_ecommerce/django_ecommerce/__pycache__
total 24
drwxr-xr-x   5 michaelherman  staff   170 Jul 25 19:12 .
drwxr-xr-x  11 michaelherman  staff   374 Jul 25 19:12 ..
-rw-r--r--   1 michaelherman  staff   208 Jul 25 19:12 __init__.cpython-34.pyc
-rw-r--r--   1 michaelherman  staff  2690 Jul 25 19:12 settings.cpython-34.pyc
-rw-r--r--   1 michaelherman  staff   852 Jul 25 19:12 urls.cpython-34.pyc
Upgrading to PostgreSQL
Since we are on the topic of upgrading, let's see if we can fit one more quick one in.
Up until now we have been using SQLite, which is OK for testing and prototyping purposes, but you probably wouldn't want to use SQLite in production - at least not for a high-traffic site (take a look at SQLite's own "when to use" page for more info). Since we are trying to make this application production-ready, let's go ahead and upgrade to PostgreSQL. The process is pretty straightforward, especially since Postgres fully supports Python 3.
By now it should go without saying that we are going to start working from a new branch... so I won't say it. :) Also, as with all upgrades, it's a good idea to back things up first, so let's pull all the data out of our database like this:
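Django's dumpdata management command does this; the output file name is just an example:

$ ./manage.py dumpdata > db_backup.json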
This is a built-in Django command that exports all the data to JSON. The nice thing about this is that JSON is database-independent, so we can easily load this data into other databases. The downside of this approach is that metadata like primary and foreign keys are not saved. For our example that isn't a problem, but if you are trying to migrate a large database with lots of tables and relationships, this may not be the best approach. Let's first get Postgres set up and running, then we can come back and look at the specifics of the migration.
Windows Users
The easiest way to install Postgres on Windows is with a product called EnterpriseDB. Download it from here; make sure to choose the newest version available for your operating system. Run the installer and then follow these instructions.
Mac OS Users
Brew it, baby. We talked about homebrew before, so now that you have it installed, from the command line just type:
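With homebrew the install is a one-liner (the formula name may vary slightly between homebrew versions):

$ brew install postgresql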
Debian Users
1. Install Postgres:
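On Debian this is typically an apt-get install (treat the package names as the usual defaults rather than the original's exact command):

$ sudo apt-get install postgresql postgresql-contrib

You can then check which version was installed: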
$ psql --version
3. Now that Postgres is installed, you need to set up a database user for Django to use. When Postgres is installed, the system creates a user named postgres. Let's switch to that user so we can create the account:

$ sudo su postgres
$ createuser -P djangousr

Enter the password twice and remember it; you will use it in your settings.py later.
5. Now, using the postgres shell, create a new database for Django to use. (Note: don't type the lines starting with a #; these are comments for your benefit.)

$ psql
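The SQL itself would be something along these lines (the database and user names match the settings used below; the exact statements in the original may have differed):

# create the database and make our django user its owner
CREATE DATABASE django_db OWNER djangousr;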
6. Then we can set up permissions by editing the file /etc/postgresql/9.1/main/pg_hba.conf. Just add the following line to the end of the file:

local   django_db   djangousr   md5

Then save the file and exit the text editor. The above line basically says that the djangousr user can access the django_db database if they are initiating a local connection and using an md5-encrypted password.
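The step after that is restarting the Postgres service so the new pg_hba.conf entry is picked up; on Debian that is usually:

$ sudo service postgresql restart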
This should restart Postgres, and you should now be able to access the database. Check that your newly created user can access the database with the following command:
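A connection check would look something like this (-d database, -U user, -W forces a password prompt):

$ psql -d django_db -U djangousr -W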
This will prompt you for the password; type it in and you should get to the database prompt. You can execute any SQL statements you want from here, but at this point we just want to make sure we can access the database, so just do a \q to exit out of the database shell. You're all set - Postgres is working! You probably want to do a final exit from the command line to get back to the shell of your normal user.
If you do encounter any problems installing PostgreSQL, check the wiki. It has a lot of good
troubleshooting tips.
Now that Postgres is installed, you will want to install the Python bindings.
This requires a valid build environment and is prone to failure, so if it doesn't work you can try one of the several pre-packaged installations. For example, if you're on a Debian-based system, you'll probably need to install a few dependencies first:
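As a sketch, the Debian build dependencies plus the bindings themselves (pinned to the version that appears in requirements.txt below) would be installed roughly like this:

$ sudo apt-get install libpq-dev python3-dev
$ pip install psycopg2==2.5.4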
Since we are adding another Python dependency to our project, we should update the
requirements.txt file in our project root directory to reflect this.
Django==1.8.2
mock==1.0.1
psycopg2==2.5.4
requests==2.3.0
stripe==1.9.2
Next, update the DATABASES setting in settings.py to point at the new database:

DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql_psycopg2',
'NAME': 'django_db',
'USER': 'djangousr',
'PASSWORD': 'your_password_here',
'HOST': 'localhost',
'PORT': '5432',
}
}
Please note that 5432 is the default Postgres port; double-check your system setup if you are unsure which port your server is listening on.
With the settings in place, sync the database:

$ ./manage.py syncdb
NOTE: You will get very different output from running syncdb than you are used to after you upgrade to Django 1.8. This is because of the key new feature of Django 1.8: migrations. We have an entire chapter coming up dedicated to migrations, so for now just ignore the output, and we will come back to syncdb and migrations in an upcoming chapter.
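The JSON dump taken earlier can then be loaded into the new Postgres database with Django's loaddata command (the file name matches the example dump above, which was itself an assumption):

$ ./manage.py loaddata db_backup.json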
Of course this isn't essential if you're following this course strictly, as you won't yet have much, if any, data to load. But this is the general idea of how you might go about migrating data from one database to the next.
Don't forget to make sure all of our tests still run correctly. If you run into problems, check out this StackOverflow answer, then enter the Postgres shell and run the following command:
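A common culprit at this point is that the new database user is not allowed to create the throwaway test database; if that is the problem, the fix from the Postgres shell is along these lines (an assumption, not necessarily the original's exact command):

# allow djangousr to create the test database that the test runner needs
ALTER USER djangousr CREATEDB;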
Conclusion
Whoa - that's a lot of upgrading. Get used to it. Software moves so fast these days that software engineers can't escape the reality of the upgrade. If it hasn't happened to you yet, it will. Hopefully this chapter has provided you with some food for thought on how to approach an upgrade, and maybe even given you a few upgrade tools to keep in your tool-belt for the next time you have to upgrade.
Exercises
There are some really good videos out there on upgrading to Python 3 and Django 1.6/1.7/1.8. So for this chapter's exercises, watch the following videos to gain a better understanding of why you might want to upgrade:
1. "Porting Django apps to Python 3", a talk by the creator of Django, Jacob Kaplan-Moss.
2. "Python 3.3: Trust Me, It's Better than 2.7" by Brett Cannon (core Python developer).
Chapter 7
Graceful Degradation and Database Transactions with Django 1.8
Graceful Degradation used to be a hot topic. Today you don't hear about it too much, at least not in the startup world where everyone is rushing to get their MVP out the door and get sign-ups. We'll fix it later, right? Starting in Django 1.6, database transactions were drastically simplified, so we can actually apply a little bit of Graceful Degradation to our sign-in process without having to spend too much time on it.
User Registration
If Stripe is down, you still want your users to be able to register, right? We just want to hold
their info and then re-verify it when Stripe is back up, otherwise we will probably lose that
user to a competitor if we dont allow them to register until Stripe comes back up. Lets look
at how we can do that.
Following what we learned before, lets first create a branch for this feature:
Now that we are working in a clean environment, let's write a unit test in tests/payments/testViews.py, in the RegisterPageTests() class:
import socket

def test_registering_user_when_stripe_is_down(self):
    ...
    self.assertEquals(len(users), 1)
    self.assertEquals(users[0].stripe_id, '')
That should do it. We have more or less copied the test for test_registering_new_user_returns_succesfully(),
but removed all the database mocks and added a stripe_mock that throws a socket.error
every time it's called. This should simulate what would happen if Stripe goes down.
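Modeled on the test_registering_new_user_returns_succesfully() test shown a little later in this chapter, a minimal sketch of this test might look like the following (the request data, the email address, and the exact mock target are assumptions):

import socket

@mock.patch('payments.views.Customer.create',
            side_effect=socket.error("Can't connect"))
def test_registering_user_when_stripe_is_down(self, stripe_mock):

    self.request.session = {}
    self.request.method = 'POST'
    self.request.POST = {
        'email': 'python@rocks.com',   # assumed address
        'name': 'pyRock',
        'stripe_token': '...',
        'last_4_digits': '4242',
        'password': 'bad_password',
        'ver_password': 'bad_password',
    }

    register(self.request)

    # the user should still be created, just without a stripe_id
    users = User.objects.filter(email='python@rocks.com')
    self.assertEquals(len(users), 1)
    self.assertEquals(users[0].stripe_id, '')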
Of course, running this test is going to fail - OSError: Can't connect to Stripe. But
that's exactly what we want (remember the TDD Fail, Pass, Refactor loop). Don't forget to
commit that change.
Okay. So, how can we get it to work?
Well, we want to hold their info and then re-verify it when Stripe is back up.
The first thing is to update the Customer.create() method in payments/views.py to handle
that pesky socket.error so we don't have to deal with it elsewhere.
import socket


class Customer(object):

    @classmethod
    def create(cls, billing_method="subscription", **kwargs):
        try:
            if billing_method == "subscription":
                return stripe.Customer.create(**kwargs)
            elif billing_method == "one_time":
                return stripe.Charge.create(**kwargs)
        except socket.error:
            return None
This way, when Stripe is down, our call to Customer.create() will just return None. This
design is preferable so we don't have to put try-except blocks everywhere.
Rerun the test. It should still fail:
So now we need to modify the register() function. Basically all we have to do is change
the part that saves the user.
Change:
try:
    user = User.create(
        cd['name'],
        cd['email'],
        cd['password'],
        cd['last_4_digits'],
        customer.id
    )
except IntegrityError:
To:
try:
    user = User.create(
        cd['name'],
        cd['email'],
        cd['password'],
        cd['last_4_digits'],
        stripe_id=''
    )

    if customer:
        user.stripe_id = customer.id
        user.save()

except IntegrityError:
Here, we broke up the single insert on the database into an insert and then an update.
Running the test still fails, though.
Since we are not passing in the stripe_id when we initially create the user, we have to
change the test_registering_new_user_returns_succesfully() test to not expect
that. In fact, let's remove all the database mocks from that test, because in a minute we are
going to start adding some transaction management.
As a general rule, it's best not to use database mocks when testing code that
directly manages or uses transactions.
Why? Because the mocks will effectively ignore all transaction management, and subtle
defects can slide into play with the developer thinking that the code is well tested
and free of defects.
After we take the database mocks out of test_registering_new_user_returns_succesfully()
the test looks like this:
def get_mock_cust():

    class mock_cust():

        @property
        def id(self):
            return 1234

    return mock_cust()


@mock.patch('payments.views.Customer.create',
            return_value=get_mock_cust())
def test_registering_new_user_returns_succesfully(self, stripe_mock):

    self.request.session = {}
    self.request.method = 'POST'
    self.request.POST = {
        'email': '[email protected]',
        'name': 'pyRock',
        'stripe_token': '...',
        'last_4_digits': '4242',
        'password': 'bad_password',
        'ver_password': 'bad_password',
    }

    resp = register(self.request)

    self.assertEqual(resp.content, b"")
    self.assertEqual(resp.status_code, 302)

    users = User.objects.filter(email="[email protected]")
    self.assertEqual(len(users), 1)
    self.assertEqual(users[0].stripe_id, '1234')


def get_MockUserForm(self):

    class MockUserForm(forms.Form):

        def is_valid(self):
            return True

        @property
        def cleaned_data(self):
            return {
                'email': '[email protected]',
                'name': 'pyRock',
                'stripe_token': '...',
                'last_4_digits': '4242',
                'password': 'bad_password',
                'ver_password': 'bad_password',
            }

    return MockUserForm()
In the above test we want to explicitly make sure that the stripe_id IS being set, so we
mocked the Customer.create() function and had it return a dummy class that always
provides 1234 for its id. That way we can assert that the new user in the database has the
stripe_id of 1234.
All good:
...
OK
Destroying test database for alias 'default'...
The proposed solution to this endeavor is to create a table of unpaid users so we can harass
those users until they cough up their credit cards. Or, if you want to be a bit more politically
correct: So our account management system can help the customers with any credit card
billing issues they may have had.
To do that, we will create a new table called unpaid_users with two columns: the user email
and a timestamp used to keep track of when we last contacted this user to update billing
information.
First let's create a new model in payments/models.py:
from django.utils import timezone


class UnpaidUsers(models.Model):
    email = models.CharField(max_length=255, unique=True)
    last_notification = models.DateTimeField(default=timezone.now())
NOTE: We're intentionally leaving off foreign key constraints for now. We may
come back to it later.
Further NOTE: django.utils.timezone.now() functions the same as
datetime.now(), except that it is always timezone aware. This will
prevent Django from complaining about naive datetimes.
Further Further NOTE: After creating the new model, you'll want to run
./manage.py syncdb. This will show some different output than you're used
to, as Django (since 1.7) has migration support. For the time being we will stick with
syncdb; there is an upcoming chapter on Migrations which will explain
Django's migrations.
Now create the test. We want to ensure that the UnpaidUsers() table is populated if/when
Stripe is down. Let's modify our test_registering_user_when_stripe_is_down test in
tests/payments/testViews.py. All we need to do is add a couple of asserts at the end of that test:
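Something along these lines (a sketch; the email address matches whatever address the test registers, which is an assumption here):

unpaid = UnpaidUsers.objects.filter(email='python@rocks.com')
self.assertEquals(len(unpaid), 1)
self.assertIsNotNone(unpaid[0].last_notification)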
This test asserts that we get a new row in the UnpaidUsers() table and that it has a
last_notification timestamp. Run the test and watch it fail.
Now let's fix the code.
Of course we have to populate the table during registration if a user fails to validate their card
through Stripe. So let's adjust payments.views.register() as shown below:
def register(request):
    ...
    cd = form.cleaned_data
    try:
        user = User.create(cd['name'], cd['email'],
                           cd['password'],
                           cd['last_4_digits'],
                           stripe_id='')

        if customer:
            user.stripe_id = customer.id
            user.save()
        else:
            UnpaidUsers(email=cd['email']).save()

    except IntegrityError:
        import traceback
        form.addError(cd['email'] + ' is already a member' +
                      traceback.format_exc())
        user = None
    else:
        request.session['user'] = user.pk
        return HttpResponseRedirect('/')
Now re-run the tests and they should all pass. We're golden.
Right? Not exactly.
For example, there was a plethora of decorators to work with: commit_on_success,
commit_manually, commit_unless_managed, rollback_unless_managed, enter_transaction_management,
and leave_transaction_management, just to name a few. Fortunately, with Django 1.6 (or
greater, of course) that all goes out the door. You only really need to know about a few
functions for now, which we'll get to in just a few seconds. First, we'll address these topics:
What is transaction management?
What's wrong with transaction management prior to Django 1.6?
Before jumping into:
What's right about transaction management in Django 1.6?
And then dealing with a detailed example:
Stripe Example
Transactions
The recommended way
Using a decorator
Transaction per HTTP Request
SavePoints
Nested Transactions
What is a transaction?
According to SQL-92, "An SQL-transaction (sometimes simply called a transaction) is a
sequence of executions of SQL-statements that is atomic with respect to recovery." In other
words, all the SQL statements are executed and committed together. Likewise, when rolled
back, all the statements get rolled back together.
For example:
# START
note1 = Note(title="my first note", text="Yay!")
note2 = Note(title="my second note", text="Whee!")
note1.save()
note2.save()
# COMMIT
A transaction is a single unit of work in a database, and that single unit of work is demarcated
by a start transaction and then a commit or an explicit rollback.
Databases
Every statement in a database has to run in a transaction, even if the transaction includes
only one statement.
Most databases have an AUTOCOMMIT setting, which is usually set to True as a default.
This AUTOCOMMIT wraps every statement in a transaction that is immediately committed if
the statement succeeds. You can also manually call something like START_TRANSACTION
which will temporarily suspend the AUTOCOMMIT until you call COMMIT_TRANSACTION or
ROLLBACK.
However, the takeaway here is that the AUTOCOMMIT setting applies an implicit commit after
each statement.
Client Libraries
Then there are the Python client libraries like sqlite3 and mysqldb, which allow Python
programs to interface with the databases themselves. Such libraries follow a set of standards
for how to access and query databases. That standard, DB API 2.0, is described in PEP 249.
While it may make for some slightly dry reading, an important takeaway is that PEP 249
states that the database AUTOCOMMIT should be OFF by default.
This clearly conflicts with what's happening within the database:
SQL statements always have to run in a transaction, which the database generally
opens for you via AUTOCOMMIT.
However, according to PEP 249, this should not happen.
Client libraries must mirror what happens within the database, but since they are not
allowed to turn AUTOCOMMIT on by default, they simply wrap your SQL statements in
a transaction, just like the database.
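In practice this is why, when you use a DB API 2.0 library directly, nothing is permanent until you call commit() yourself. A quick illustration using the standard library's sqlite3 module (the table and data here are made up):

import sqlite3

conn = sqlite3.connect('example.db')
conn.execute("CREATE TABLE IF NOT EXISTS notes (title TEXT)")

# The library has opened a transaction for us behind the scenes,
# so this INSERT is not visible to other connections yet...
conn.execute("INSERT INTO notes VALUES ('my first note')")

# ...until we explicitly commit (or we could call conn.rollback()
# to throw the change away instead).
conn.commit()
conn.close()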
Okay. Stay with me a little longer
Django
Enter Django. Django also has something to say about transaction management. In Django
1.5 and earlier, Django basically ran with an open transaction and auto-committed that
transaction when you wrote data to the database. So every time you called something like
model.save() or model.update(), Django generated the appropriate SQL statements
and committed the transaction.
Also, in Django 1.5 and earlier, it was recommended that you use the TransactionMiddleware
to bind transactions to HTTP requests. Each request was given a transaction. If the response
returned with no exceptions, Django would commit the transaction, but if your view function
threw an error, ROLLBACK would be called. In effect, this turned off AUTOCOMMIT. If you
wanted standard, database-level autocommit-style transaction management, you had to manage
the transactions yourself - usually by using a transaction decorator on your view function,
such as @transaction.commit_manually or @transaction.commit_on_success.
Take a breath. Or two.
cd = form.cleaned_data
try:
    user = User.create(cd['name'], cd['email'], cd['password'],
                       cd['last_4_digits'], stripe_id="")

    if customer:
        user.stripe_id = customer.id
        user.save()
    else:
        UnpaidUsers(email=cd['email']).save()

except IntegrityError:
    import traceback
    form.addError(cd['email'] + ' is already a member' +
                  traceback.format_exc())
Did you spot the issue? We just want to hold their info and then re-verify it when Stripe is back up.
But what happens if the UnpaidUsers(email=cd['email']).save() line fails?
Well, then you have a user registered in the system who never verified their credit card, while
the system assumes they have. In other words, somebody got in for free. Not good. So this
is the perfect case for when to use a transaction, because we want it all or nothing here. In
other words, we only want one of two outcomes:
1. User is created (in the database) and has a stripe_id.
2. User is created (in the database), doesn't have a stripe_id, and has an associated row
in the UnpaidUsers table with the same email address as the User.
This means we want the two separate database statements to either both commit
or both roll back. A perfect case for the humble transaction. There are many ways we can
achieve this.
First, let's write some tests to verify things behave the way we want them to:
@mock.patch('payments.models.UnpaidUsers.save',
            side_effect=IntegrityError)
def test_registering_user_when_strip_is_down_all_or_nothing(self,
                                                            save_mock):

    ...

    unpaid = UnpaidUsers.objects.filter(email="[email protected]")
    self.assertEquals(len(unpaid), 0)
...
FAILED (failures=1)
Destroying test database for alias 'default'...
Nice. It failed. Seems funny to say that, but it's exactly what we wanted. And the error
message tells us that the User is indeed being stored in the database; we don't want that.
Have no fear, transactions to the rescue...
def register(request):
    ...
    cd = form.cleaned_data
    try:
        with transaction.atomic():
            user = User.create(cd['name'], cd['email'],
                               cd['password'],
                               cd['last_4_digits'],
                               stripe_id="")

            if customer:
                user.stripe_id = customer.id
                user.save()
            else:
                UnpaidUsers(email=cd['email']).save()

    except IntegrityError:
        import traceback
        form.addError(cd['email'] + ' is already a member' +
                      traceback.format_exc())
        user = None
    else:
        request.session['user'] = user.pk
        return HttpResponseRedirect('/')
Note the line with transaction.atomic(). All code inside that block will be executed
inside a transaction. Re-run our tests, and they all pass!
Using a decorator
We can also try adding atomic as a decorator. But if we do, and rerun our tests, they fail
with the same error we had before putting any transactions in at all!
Why is that? Why didn't the transaction roll back correctly? The reason is that transaction.atomic
is looking for some sort of DatabaseError and, well, we caught that error (e.g., the
IntegrityError in our try/except block), so transaction.atomic never saw it and thus
the standard AUTOCOMMIT functionality took over.
But removing the try/except will cause the exception to just be thrown up the call chain and
most likely blow up somewhere else, so we can't do that either.
The trick is to put the atomic context manager inside of the try/except block, which is what
we did in our first solution.
Looking at the correct code again:
try:
    with transaction.atomic():
        user = User.create(cd['name'], cd['email'], cd['password'],
                           cd['last_4_digits'], stripe_id="")

        if customer:
            user.stripe_id = customer.id
            user.save()
        else:
            UnpaidUsers(email=cd['email']).save()

except IntegrityError:
    import traceback
    form.addError(cd['email'] + ' is already a member' +
                  traceback.format_exc())
When UnpaidUsers fires the IntegrityError, the transaction.atomic() context
manager will catch it and perform the rollback. By the time our code executes in
the exception handler (e.g., the form.addError line), the rollback will be complete and
we can safely make database calls if necessary. Also note that any database calls before or after
the transaction.atomic() context manager will be unaffected regardless of the final
outcome of the context manager.
Transaction per HTTP Request
Django (like 1.5 before it) also allows you to operate in a "transaction per request" mode. In
this mode, Django will automatically wrap your view function in a transaction. If the function
throws an exception, Django will roll back the transaction; otherwise it will commit the
transaction.
To get it set up, you have to set ATOMIC_REQUESTS to True in the database configuration for
each database that you want to have this behavior. In our settings.py we make the change
like this:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': os.path.join(SITE_ROOT, 'test.db'),
        'ATOMIC_REQUESTS': True,
    }
}
But in practice this just behaves exactly as if you put the decorator on your view function
yourself, so it doesn't serve our purposes here. It is, however, worthwhile to note that with
both ATOMIC_REQUESTS and the @transaction.atomic decorator it is possible to still
catch and then handle those errors after they are thrown from the view. In order to catch
those errors you would have to implement some custom middleware, or you could override
handler500 in urls.py or make a 500.html template.
SavePoints
We can also further break down transactions into savepoints. Think of savepoints as partial
transactions. If you have a transaction that takes four database statements to complete, you
could create a savepoint after the second statement. Once that savepoint is created, even if
the 3rd or 4th statement fails, you can do a partial rollback, getting rid of the 3rd and 4th
statements but keeping the first two.
It's basically like splitting a transaction into smaller lightweight transactions, allowing you
to do partial rollbacks or commits. But do keep in mind that if the main transaction were to get
rolled back (perhaps because of an IntegrityError that was raised but not caught), then
all savepoints will get rolled back as well.
Let's look at an example of how savepoints work:
@transaction.atomic()
def save_points(self, save=True):

    user = User.create('jj','inception','jj','1234')
    sp1 = transaction.savepoint()

    ...

    if save:
        transaction.savepoint_commit(sp1)
    else:
        transaction.savepoint_rollback(sp1)
Here the entire function runs in a transaction. After creating a new user we create a savepoint
and get a reference to that savepoint. The statements that follow it are not part of the existing
savepoint, so they stand the potential of being part of the next savepoint_rollback or
savepoint_commit. In the case of a savepoint_rollback, the line
user = User.create('jj','inception','jj','1234') will still be committed to the
database even though the rest of the updates won't.
Put another way, the following two tests describe how the savepoints work:
def test_savepoint_rollbacks(self):

    self.save_points(False)

    ...

    # note the values here are from the original create call
    self.assertEquals(users[0].stripe_id, '')
    self.assertEquals(users[0].name, 'jj')


def test_savepoint_commit(self):
    self.save_points(True)

    ...
After we commit or roll back a savepoint, we can continue to do work in the same transaction,
and that work will be unaffected by the outcome of the previous savepoint.
For example, if we update our save_points() function as such:
@transaction.atomic()
def save_points(self, save=True):

    user = User.create('jj','inception','jj','1234')
    sp1 = transaction.savepoint()

    user.stripe_id = 4
    user.save()

    if save:
        transaction.savepoint_commit(sp1)
    else:
        transaction.savepoint_rollback(sp1)

    user.create('limbo','illbehere@forever','mind blown',
                '1111')
Nested Transactions
In addition to manually specifying savepoints with savepoint(), savepoint_commit, and
savepoint_rollback, creating a nested Transaction will automatically create a savepoint
for us, and roll it back if we get an error.
Extending our example a bit further we get:
@transaction.atomic()
def save_points(self, save=True):

    user = User.create('jj','inception','jj','1234')
    sp1 = transaction.savepoint()

    user.stripe_id = 4
    user.save()

    if save:
        transaction.savepoint_commit(sp1)
    else:
        transaction.savepoint_rollback(sp1)

    try:
        with transaction.atomic():
            user.create('limbo','illbehere@forever','mind blown',
                        '1111')
            if not save: raise DatabaseError
    except DatabaseError:
        pass
Here we can see that after we deal with our savepoints, we are using the transaction.atomic
context manager to encase our creation of the limbo user. When that context manager is
entered, it is in effect creating a savepoint (because we are already in a transaction), and that
savepoint will be committed or rolled back upon exiting the context manager.
Thus the following two tests describe the behavior here:
def test_savepoint_rollbacks(self):

    self.save_points(False)

    ...


def test_savepoint_commit(self):
    self.save_points(True)

    ...
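Filled in, the asserts might look something like the following sketch (the field values and lookups here are assumptions based on the save_points() example above):

def test_savepoint_rollbacks(self):
    self.save_points(False)

    # the user from the original create is still there, but the
    # stripe_id update was rolled back along with the savepoint
    users = User.objects.filter(email="inception")
    self.assertEquals(len(users), 1)
    self.assertEquals(users[0].stripe_id, '')

    # the nested atomic block raised DatabaseError, so 'limbo'
    # was rolled back and never made it into the database
    self.assertEquals(
        len(User.objects.filter(email="illbehere@forever")), 0)


def test_savepoint_commit(self):
    self.save_points(True)

    # this time the savepoint was committed...
    users = User.objects.filter(email="inception")
    self.assertEquals(len(users), 1)
    self.assertEquals(users[0].stripe_id, '4')

    # ...and the nested atomic block exited cleanly, so 'limbo' exists
    self.assertEquals(
        len(User.objects.filter(email="illbehere@forever")), 1)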
So in reality you can use either atomic or savepoint to create savepoints inside a transaction,
but with atomic you don't have to worry explicitly about the commit/rollback, whereas
with savepoint you have full control over when that happens.
This will then send the POST request to the payments.views.register() function,
which will in turn call our recently updated Customer.create() function. However, now
we are likely to get a different error - in fact, we will probably get two different errors:
Connection Errors - socket.error or stripe.error.APIConnectionError
class Customer(object):

    @classmethod
    def create(cls, billing_method="subscription", **kwargs):
        try:
            if billing_method == "subscription":
                return stripe.Customer.create(**kwargs)
            elif billing_method == "one_time":
                return stripe.Charge.create(**kwargs)
        except (socket.error, stripe.APIConnectionError,
                stripe.InvalidRequestError):
            return None
Conclusion
If you have had any previous experience with earlier versions of Django, you can see how
much simpler the transaction model is. Also, having auto-commit on by default is a great
example of the sane defaults that Django and Python both pride themselves on delivering. For
many systems you won't need to deal directly with transactions - just let auto-commit do its
work. But when you do, hopefully this chapter has given you the information you need
to manage transactions in Django like a pro.
Further, here is a quick list of reminders to help you remember the important stuff:
AUTOCOMMIT functions at the database level and implicitly commits after each SQL statement; to group statements you demarcate a transaction explicitly:
START TRANSACTION
SELECT * FROM DEV_JOBS WHERE PRIMARY_SKILL = 'PYTHON'
END TRANSACTION
Atomic decorator / context manager
Django >= 1.6 based transaction management has one main API, which is atomic. It can be used as a decorator or, as here, a context manager:

with transaction.atomic():
    user1.save()
    unpaidUser1.save()

ATOMIC_REQUESTS
This causes Django to automatically create a transaction to wrap each view function call. To
activate it, add ATOMIC_REQUESTS to your database config in settings.py:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': os.path.join(SITE_ROOT, 'test.db'),
        'ATOMIC_REQUESTS': True,
    }
}
Savepoints
Savepoints can be thought of as partial transactions. They allow you to save/rollback part of
a transaction instead of the entire transaction. For example:
@transaction.atomic()
def save_points(self, save=True):

    user = User.create('jj','inception','jj','1234')
    sp1 = transaction.savepoint()

    ...

    if save:
        transaction.savepoint_commit(sp1)
    else:
        transaction.savepoint_rollback(sp1)
Exercises
1. For the final example of save_points(), what would happen if you removed the try/except altogether?
For reference, the code we're referring to is below (minus the try/except):
@transaction.atomic()
def save_points(self, save=True):

    user = User.create('jj','inception','jj','1234')
    sp1 = transaction.savepoint()

    user.stripe_id = 4
    user.save()

    if save:
        transaction.savepoint_commit(sp1)
    else:
        transaction.savepoint_rollback(sp1)

    with transaction.atomic():
        user.create('limbo','illbehere@forever','mind blown',
                    '1111')
        if not save: raise DatabaseError
Verify your expectation with a test or two. Can you explain why that happened? Perhaps
these lines from the Django documentation may help you understand it more clearly:
Under the hood, Django's transaction management code:
2. Build your own transaction management system. Just joking. There's no need to reinvent
the wheel. Instead, you could read through the complete Django documentation on the new
transaction management features here if you really, really wanted to.
Chapter 8
Building a Membership Site
Up until now we have covered the nitty gritty of unit testing and how to do TDD; git branching
for teams; upgrading to Django 1.8, Python 3, and PostgreSQL; as well as the latest in Django
database transactions. While these are all necessary tools and techniques that modern-day
web developers should have in their tool belt, you could say we haven't built much yet.
A Membership Site
Let's look at what we have before diving into what we're going to add. Currently we have the
following implemented:
Static homepage
User login/logout (session management)
User Registration
Stripe integration supporting one-time and subscription payments
A Contact Us Page
About Us Page (using Django flatpages)
Not too shabby, but also not something that's going to make the front page of Hacker News
either. Let's see what we can do to flesh this out into something a bit more interesting.
Let's start with an overview of what our MVP is and why we are creating it. To make it
more interesting, let's pretend we are building a membership site for Star Wars lovers. Let's
call our wonderful little membership site Mos Eisley's Cantina, or just MEC for short.
Product Vision
Mos Eisley's Cantina (MEC) aims to be the premiere online membership site for Star Wars
enthusiasts. Being a paid site, MEC will attract only the most loyal Star Wars fans and will
encourage highly entertaining debate and discussion about the entirety of the Star Wars universe.
A unique polling system will allow us to once and for all decide on important questions
such as who is truly the most powerful Jedi, are the separatists good or evil, and seriously,
what is the deal with Jar Jar Binks? In addition, MEC will provide news and videos of current
Star Wars-related happenings and in general be the best place to discuss Star Wars in
the entire galaxy.
That's the vision; we're building a real geek site. It probably won't make a dime, but that's
not the point. The techniques we are using here will be applicable to any membership site.
Now that we know what we are building, let's list out a few user stories so we have something
to focus on.
US2: Registration
After clicking the sign up button, the applicant padwan (a user wishing to sign up but not yet
verified by credit card) will be presented with a screen to collect basic registration information,
including credit card information. Credit card information should be immediately processed
through Stripe, after which the applicant padwan should be upgraded to padwan status
and redirected to the Members Home Page.
This map should provide a graphic view of who is where and allow for zooming in on certain
locations.
We could go on for days, but that's enough to fill a course. In the coming chapters we are
going to look at each of these user stories and try to implement them. US2 (Registration) is
pretty much done, but the others all need to be implemented. We've arranged them in order
of increasing difficulty so we can build on the knowledge learned by implementing each of
the user stories.
Without further ado, the next chapter will cover US1. See you on the next page.
Chapter 9
Bootstrap 3 and Best Effort Design
As we mentioned in the last chapter, traditional web design is usually not expected of most
full-stack developers. If you can design, then more power to you, but for the rest of us artistically
challenged developers there is Bootstrap. This fantastic library, originally released by
the amazing design team at Twitter, makes it so easy to style a website that even somebody
with absolutely zero artistic ability can make a site look presentable. Granted, your site is
probably not going to win any awards for design excellence, but it won't be embarrassing
either. With that in mind, let's look at what we can do to make our site look a little bit better.
In other words, we need something that looks cool and gets the youngling to click on the
signup button (convert).
If we had to do that in straight HTML and CSS, it might take ages. But with Bootstrap we
can do it pretty quickly. The plan, then, is to create a main page with a nice logo, a couple of
pictures and descriptions of what the site is about, and a nice big signup button to grab the
user's (err, youngling's) attention.
Let's get started.
Installing Bootstrap 3
The first thing to do is grab the Bootstrap files. So let's download them locally and chuck them in our static
directory. Before that, go ahead and remove all files from the django_ecommerce/static
directory except for application.js.
Keep in mind that we could serve the two main Bootstrap files, bootstrap.min.css
and bootstrap.min.js, from a Content Delivery Network (CDN). When you are
developing locally, it's best to download the files to your file system, though,
in case you're trying to develop/design and you lose Internet access. There are
various benefits to utilizing a CDN when you deploy your app to a production
server, which we will detail in a later chapter.
1. Download the Bootstrap distribution files from here.
2. Unzip the files and add the css, fonts, and js directories to the django_ecommerce/static
directory. Add the application.js file to the js directory. When it's all said and
done, your directory structure should look like this:
.
css
    bootstrap-theme.css
    bootstrap-theme.css.map
    bootstrap-theme.min.css
    bootstrap.css
    bootstrap.css.map
    bootstrap.min.css
fonts
    glyphicons-halflings-regular.eot
    glyphicons-halflings-regular.svg
    glyphicons-halflings-regular.ttf
    glyphicons-halflings-regular.woff
js
    application.js
    bootstrap.js
    bootstrap.min.js
3. Another critical file that we must not forget to add is jQuery. As of this writing, 1.11.1
is the most recent version. Download the minified version - e.g., jquery-1.11.1.min.js -
from the jQuery download page, and add it to the js folder.
NOTE: On a side note, do keep in mind that if you have a real site that's
currently using a version of jQuery earlier than 1.9, and you'd like to upgrade
to the latest version, you should take the time to look through the release
notes and also look at the jQuery Migrate plugin. In our case we are actually
upgrading from version 1.6.x; however, since our example site makes
very little use of jQuery (at the moment), it's not much of an issue. But in
a production environment ALWAYS make sure you approach the upgrade
carefully and test thoroughly.
4. Say goodbye to the current design. Take one last look at it if you'd like, because we are
going to wipe it out.
5. Grab the Bootstrap basic starter template, found on the same page that you downloaded
Bootstrap from. Take that template and overwrite your base.html file:
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="utf-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <title>Bootstrap 101 Template</title>
    ...
</head>
<body>
    <h1>Hello, world!</h1>
    ...
    <script src="https://js.stripe.com/v2/" type="text/javascript"></script>
    <script type="text/javascript">
    //<![CDATA[
    Stripe.publishableKey = '{{ publishable }}';
    //]]>
    </script>
    <!-- jQuery (necessary for Bootstrap's JavaScript plugins) -->
    <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
    ...
</body>
</html>
If you refresh your page, you will see that there is a change in the font, indicating that
the CSS file is now loading correctly.
Installation complete. Your final base.html file should look like this:
{% load static %}

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="utf-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    ...
</head>
<body>
    <h1>Hello, world!</h1>
    ...
    <script src="https://js.stripe.com/v2/" type="text/javascript"></script>
    <script type="text/javascript">
    //<![CDATA[
    Stripe.publishableKey = '{{ publishable }}';
    //]]>
    </script>
    ...
</body>
</html>
Page Title
We probably want to change the title, since the current title, Bootstrap 101 Template, has
little to do with Star Wars. You can customize this however you want, but we'll be using Mos
Eisley's Cantina for our example.
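In the <head> of base.html that amounts to a single line:

<title>Mos Eisley's Cantina</title>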
Custom Fonts
Let's also use a custom Star Wars font called Star Jedi.
Custom fonts can be a bit tricky on the web because there are so many different formats
you need to support due to cross-browser compatibility issues. Basically, you need four
different formats of the same font. If you have a TrueType font, you can convert it into the
four different formats that you will need with an online font conversion tool. Then you'll have
to create the correct CSS entries.
You can do this conversion on your own if you'd like to practice. Simply download the font
from the above URL and then convert it. Or you can grab the fonts already converted from the
chp08 folder in the exercise repo.
1. Add a new file called mec.css in the static/css directory. Add the following styles to start
with:
@font-face {
    font-family: 'starjedi';
    /* IE9 */
    src: url('../fonts/Starjedi.eot');
    /* Chrome, FF, Opera */
    src: url('../fonts/Starjedi.woff') format('woff'),
    /* android, safari, iOS */
         url('../fonts/Starjedi.ttf') format('truetype'),
    /* legacy safari */
         url('../fonts/Starjedi.svg') format('svg');
    font-weight: normal;
    font-style: normal;
}

body {
    padding-bottom: 40px;
    background-color: #eee;
}

h1 {
    font-family: 'starjedi', sans-serif;
}

.center-text {
    text-align: center;
}

.featurette-divider {
    margin: 80px 0;
}
The main thing going on in the CSS file above is the @font-face directive. This defines
a font-family called starjedi (you can call it whatever you want) and specifies the
various font files to use, based upon the format requested by the browser.
It's important to note that the path to the font is the relative path from the location of
the CSS file. Since this is a CSS file, we don't have our Django template tags, like static,
available to us.
2. Make sure to add the new mec.css to the base.html template:
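One way to do it, placed after the Bootstrap stylesheet link (the static-tag usage follows the same pattern as the rest of base.html):

<link href="{% static 'css/mec.css' %}" rel="stylesheet">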
3. Test! Fire up the server. Check out the changes. Our Hello, world! is now using the
cool starjedi font, so our site can look a bit more authentic.
Layout
Now let's work on a nice layout. This is where we are going:
Navbars are pretty common these days, and they are a great way to provide quick links for
navigation. We already had a navbar on our existing template, but let's put a slightly different
one in for the sake of another example. The basic structure of the navbar is the same in
Bootstrap 3 as it is in Bootstrap 2, but with a few different classes. The structure should look
like this:
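A sketch of that structure, using standard Bootstrap 3 classes (the brand text and the exact links are carried over from our old navbar; treat the details as an example rather than the definitive markup):

<div class="navbar navbar-inverse navbar-fixed-top" role="navigation">
    <div class="container">
        <div class="navbar-header">
            <button type="button" class="navbar-toggle" data-toggle="collapse"
                    data-target=".navbar-collapse">
                <span class="icon-bar"></span>
                <span class="icon-bar"></span>
                <span class="icon-bar"></span>
            </button>
            <a class="navbar-brand" href="#">Mos Eisley's Cantina</a>
        </div>
        <div class="navbar-collapse collapse">
            <ul class="nav navbar-nav">
                <li class="active"><a href="{% url 'home' %}">Home</a></li>
                <!-- ...the rest of the links from the old navbar... -->
            </ul>
        </div>
    </div>
</div>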
Add this to your base.html template just below the opening body tag.
Comparing the above to our old navbar, you can see that we still have the same set of list
items. However, with the new navbar we have a navbar header and some extra divs wrapping
it. Most important are the <div class="container"> elements (aka container divs), because they
are vital for the responsive design in Bootstrap 3.
Since Bootstrap 3 is based on a mobile-first philosophy, the site is responsive from the
start. For example, if you re-size your browser to make the window smaller, you will see the
navbar links disappear and a drop-down button shown instead of all the links
(although we do have to insert the correct JavaScript for this to happen, which we haven't
done yet). Clicking that drop-down button will show all the links. An image of this is shown
below (ignore the colors for now).
You can also tell Django to bind to your network adapter when you start it, so that the development server will
accept connections from other machines. To do this, just type the following:
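For example, to listen on all interfaces on port 8000:

$ ./manage.py runserver 0.0.0.0:8000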
Then, from your mobile phone, put in the IP address of your computer (and port 8000), and
you can see the site on your mobile phone or tablet. On Unix machines or Macs you can run
ifconfig from the terminal to get your IP address. On Windows machines it's ipconfig from the
command prompt. We tested it on 192.168.1.12:8000.
There's always a certain sense of satisfaction that comes out of seeing your websites on your
own phone. Right?
There are also simpler ways to test out the responsiveness of Bootstrap. The simplest is just
resizing your browser and watching Bootstrap automatically adjust. You can also use an
online tool like Viewport Resizer.
Colors
To make the navbar use our cool starjedi font, update mec.css by replacing the current h1
style with:
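A minimal version (the selector list is an assumption based on the description that follows):

h1, .navbar-brand {
    font-family: 'starjedi', sans-serif;
}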
This just applies our font selection to the links in the navbar as well as to the h1 tag. But, as
is often the case with web development, doing this will make the navbar look a bit off when
viewed from an iPhone in portrait mode. To fix that, we have to adjust the sizing of the
.navbar-brand to accommodate the larger size of the starjedi font. This can be done by
adding the following to mec.css:
.navbar-brand {
    height: 40px;
}
Now your navbar should look great on pretty much any device. With the navbar more or less
taken care of, let's add a footer as well, just below <h1>Hello, world!</h1>:
<hr class="featurette-divider">

<footer>
    <p class="pull-right"><a href="/#">Back to top</a></p>
    ...
The actual HTML is more or less the same, right? But you may be saying to yourself, What
is the <footer> tag? Why shouldn't we just use a <div>?
Well, I'm glad you asked, because that brings us to our next topic.
...
                <span class="icon-bar"></span>
                <span class="icon-bar"></span>
            </button>
            <a class="navbar-brand" href="#">Mos Eisley's Cantina</a>
        </div>
        <div class="navbar-collapse collapse">
            <ul class="nav navbar-nav">
                <li class="active"><a href="{% url 'home' %}">Home</a></li>
                <li><a href="/pages/about">About</a></li>
                <li><a href="{% url 'contact' %}">Contact</a></li>
                {% if user %}
                <li><a href="{% url 'sign_out' %}">Logout</a></li>
                {% else %}
                <li><a href="{% url 'sign_in' %}">Login</a></li>
                <li><a href="{% url 'register' %}">Register</a></li>
                {% endif %}
            </ul>
        </div> <!-- end navbar links -->
    </div> <!-- end container -->
</div> <!-- end navbar -->
Now imagine for a second that you are a computer program trying to determine the meaning
of that section of HTML. It's not very hard to tell what is going on there, right? You have a
few clues, like the classes that are used, but they are not standard across the web, and the best
you could do is guess.
NOTE: The astute reader may point out the role attribute of the third div from
the top, which is actually part of the HTML5 semantic web specification. It's used
to provide a means for assistive technology to better understand the interface. So
it's really just a high-level overview. What we're after is more of a granular look
at the HTML itself.
The problem is that HTML is really a language to describe structure; it tells us how this data
should look, not what this data is. HTML5 section tags are the first baby steps used to start
to make data more accessible. An HTML5 section tag can be used in place of a div to provide
some meaning as to what type of data is contained in the div.
Inheritance
Before moving on, let's update the parent template, base.html, to add template inheritance.
Remove:

<h1>Hello, world!</h1>

And in its place add:

<div class="container">

    {% block content %}
    {% endblock %}

    <hr class="featurette-divider">

    <footer>
        <p class="pull-right"><a href="/#">Back to top</a></p>
        <p class="pull-left">© 2014 <a
            href="http://realpython.com">The Jedi Council</a></p>
        <p class="text-center"> · Powered by about 37 AT-AT
            Walkers, Python 3 and Django 1.8 · </p>
    </footer>

</div>
4. header - Not to be confused with the head tag (which is the head of the document),
the header is the header of a section. It is common for headers to have h1 tags. But,
like all the tags in this section, header describes the data, so it's not the same as
h1, which describes the appearance. Also note that there can be several headers in an
HTML document, but only one per section tag.
5. nav - Used mainly for site navigation, like our navbar above.
6. footer - Despite the name, a footer doesn't need to be at the bottom of the page.
Footer elements contain information about the section they are in. Oftentimes (like we
did above) they contain copyright info or information about the author / company. They
can of course also contain footnotes. Also, like headers, there can be multiple footer
tags per HTML document, but one per section tag.
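Applied to our navbar, the semantic version might look something like this sketch (only the outer elements change; the toggle button, brand link, and list items stay exactly as they were):

<div class="navbar navbar-inverse navbar-fixed-top" role="navigation">
    <nav class="container">
        <header class="navbar-header">
            <!-- ...toggle button and navbar-brand link as before... -->
        </header>
        <div class="navbar-collapse collapse">
            <ul class="nav navbar-nav">
                <!-- ...the same list items as before... -->
            </ul>
        </div>
    </nav>
</div>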
Notice the difference? Basically we replaced two of the divs with more meaningful section
elements.
1. The second line and the second-to-last line define the nav, or navigation section, which
lets a program know: This is the main navigation section for the site.
2. Inside the nav element there is a header element, which encompasses the brand part
of the navbar, which is intended to show the name of the site with a link back to the
main page.
Those two changes may not look like much, but depending on the scenario, they can have a
huge impact. Let's look at a few examples real quick.
1. Web scraping: Pretend you are trying to scrape the website. Before, you would have
to follow every link on the page, make sure the destination is still in the same domain,
detect circular links, and finally jump through a whole bunch of hoops and special cases
just to visit all the pages on the site. Now you can basically achieve the same thing in
just a few steps. Take a look at the following pseudo-code:
access site
...
2. Specialized interfaces: Imagine trying to read a web page on something tiny like Google
Glass or a smart watch. Lots of pinching and zooming, and pretty tough to do. Now
imagine if that smart watch could determine the content and respond to voice commands
like Show navigation, Show Main Content, or Next Section. This could happen across all
sites today if they used semantic markup. But there is no way to make
that work if your site is just a bunch of unstructured divs.
{% extends 'base.html' %}

{% block content %}

{% load static %}

<center>
<section id="myCarousel" class="carousel slide"
         data-ride="carousel" style="max-width: 960px;">
    <!-- Indicators -->
    <ol class="carousel-indicators">
        <li data-target="#myCarousel" data-slide-to="0"
            class="active"></li>
        <li data-target="#myCarousel" data-slide-to="1"></li>
        <li data-target="#myCarousel" data-slide-to="2"></li>
    </ol>
    <div class="carousel-inner">
        <figure class="item active">
            <img src="{% static 'img/darth.jpg' %}" alt="Join the
                 Dark Side" style="max-height: 540px;">
            <figcaption class="carousel-caption">
                <h1>Join the Dark Side</h1>
                <p>Or the light side. Doesn't matter. If you're into Star
                   Wars then this is the place for you.</p>
                <p><a class="btn btn-lg btn-primary" href="{% url
                   'register' %}" role="button">Sign up today</a></p>
            </figcaption>
        </figure>
        <figure class="item">
            ...
        </figure>
        ...
    </div>
</section>
</center>

{% endblock %}
That's a fair amount of code there, so let's break it down into sections.
1. The first line is the HTML5 section tag, which just separates the carousel out as a
separate section of the document. The attribute data-ride="carousel" starts the
carousel animation on page load.
2. The ordered list ol displays the three dots near the bottom of the carousel that indicate
which page of the carousel is being displayed. Clicking on the list will advance to the
associated image in the carousel.
The next section has three items that each correspond to an item in the carousel. They all
behave the same, so let's just describe the first one; the same will apply to all three.
1. figure is another HTML5 element that we haven't talked about yet. Like the section
elements, it is intended to provide some semantic meaning to the page:
NOTE: The figure element represents a unit of content, optionally with a
caption, which is self-contained, that is typically referenced as a single unit
from the main flow of the document, and that can be moved away from the
main flow of the document without affecting the document's meaning.
2. Just like with the HTML5 section tags, we are replacing the overused div element with
an element - figure - that provides some meaning.
3. Also, as a sub-element of the figure we have the <figcaption> element, which represents
a caption for the figure. In our case we put the join now message and a link to
the registration page in our caption.
The final part of the carousel is the two links on the left and right of the carousel that look like
> and <. These advance to the next or previous slide in the carousel, and the code for these
links looks like:
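The standard Bootstrap 3 markup for those controls is roughly the following (the #myCarousel target matches the id used above; the glyphicon classes are an assumption):

<a class="left carousel-control" href="#myCarousel" role="button" data-slide="prev">
    <span class="glyphicon glyphicon-chevron-left"></span>
</a>
<a class="right carousel-control" href="#myCarousel" role="button" data-slide="next">
    <span class="glyphicon glyphicon-chevron-right"></span>
</a>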
Finally, create an img folder within the static folder, and then grab the star-wars-battle.jpg,
star-wars-cast.jpg, and darth.jpg images from the chp08 folder in the exercise
repo and add them to that newly created img folder.
And that's it. You now have a cool carousel telling visiting younglings about the benefits of
joining the site.
Be sure to grab the images - yoda.jpg, clone_army.jpg, leia.jpg - from the repo again.
This gives us three sections, each with an image in a circular border (because circles are hot
these days, right?), some text, and a view details button. This is all pretty straightforward
Bootstrap stuff; Bootstrap uses a grid system, which breaks up the page into columns and
rows.
By putting content into the same row (that is to say, all elements that are a child of the
same <div class="row center-text">), it will appear lined up side by side. And as
you might expect, a second row will appear underneath the previous row. Likewise with
columns: add content and a column will appear to the right of the previous column (i.e. <div
class="column">) or to the left of the subsequent column.
For our marketing info, we create one row div with three child divs <div class="col-lg-4">.
col-lg-4 is interesting. With Bootstrap, each row has a total of 12 columns. You can
break up the page how you want, as long as the sum of all the columns is 12. In other
words, <div class='col-6'> will take up half of the width of the page, whereas <div
class='col-4'> will take up a third of the width of the page. For a more in-depth
discussion on this, take a look at this blog post.
Did you notice the other identifier? lg. This stands for large. Along with lg, there's xs, sm,
and md for extra-small, small, and medium, respectively.
Bootstrap 3, being responsive by default, has the notion of extra-small, small, medium, and
large screen sizes. So by using the lg identifier, you are saying, I want columns only if
the screen size is large. Likewise, you can do the same with the other sizes.
You can see this by looking at our Marketing Items using a large screen size (where it will
have three columns, because we are using the col-lg-4 class):
<div class="col-lg-4">
    <img class="img-circle" src="{% static 'img/yoda.jpg' %}"
         width="140" height="140" alt="Generic placeholder image">
    <h2>Hone your Jedi Skills</h2>
    <p>All members have access to our unique training and
       achievements ladders. Progress through the levels and show
       everyone who the top Jedi Master is! </p>
    <p><a class="btn btn-default" href="#" role="button">View details
       »</a></p>
</div><!-- /.col-lg-4 -->
All of that markup could easily be re-factored into a template tag so you wouldn't have to type so much. Let's
do it.
Directory structure
First, create a directory in your main app called templatetags and add the following files to
it:
.
__init__.py
models.py
templatetags
    __init__.py
    marketing.py
views.py
In marketing.py add the following code to define the custom template tag:

from django import template

register = template.Library()


@register.inclusion_tag('circle_item.html')
def circle_header_item(img_name="yoda.jpg", heading="yoda",
                       caption="yoda",
                       button_link="register",
                       button_title="View details"):
    return {
        'img': img_name,
        'heading': heading,
        'caption': caption,
        'button_link': button_link,
        'button_title': button_title
    }
The first two lines just register circle_header_item() as a template tag so you can use it
in a template using the following syntax:
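For example (the argument values here are just illustrative):

{% load marketing %}
{% circle_header_item img_name='yoda.jpg' heading='Hone your Jedi Skills' button_link='register' %}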
Just as if you were calling a regular Python function, when calling the template tag you can
use keyword or positional arguments - but not both. Arguments not specified will take on
their default values.
The decorated function simply creates a context for the template to use. It does this by
creating a dictionary of variable names mapped to values. Each of the variable names in the
dictionary will become available as a template variable.
The decorator line -
@register.inclusion_tag('circle_item.html')
- declares an HTML fragment that is rendered by the template tag. Django will look for the
HTML file everywhere that is specified in the TEMPLATE_LOADERS list, which is specified in
the settings.py file. In our case, this is under the template subdirectory.
HTML Fragment
{% load staticfiles %}
<div class="col-lg-4">
    <img class="img-circle" src="{% static 'img/'|add:img %}"
         width="140" height="140" alt="{{img}}">
    <h2>{{ heading }}</h2>
    <p>{{ caption }}</p>
    <p><a class="btn btn-default" href="{% url button_link %}"
       role="button">{{button_title}}</a></p>
</div>
Looking at the src attribute, you can see it is using the standard {% static %} tag. However,
there is a funny-looking bit - 'img/'|add:img - which is used for concatenation.
You can use it for addition:
{{ num|add:'1' }}
If you passed 5 in for num, then this would render 6 in your HTML.
add can also concatenate two lists as well as strings (like in our case). It's a pretty handy filter
to have.
Adding the tag
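With the tag registered, the hard-coded column markup in index.html can be replaced by loading the tag library and calling the tag once per column, something along these lines (a sketch; the headings and captions are whatever marketing copy you used above):

{% load marketing %}

<div class="row center-text">
    {% circle_header_item img_name='yoda.jpg' heading='Hone your Jedi Skills' caption='...' %}
    {% circle_header_item img_name='clone_army.jpg' heading='...' caption='...' %}
    {% circle_header_item img_name='leia.jpg' heading='...' caption='...' %}
</div>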
A Note on Structure
Pairing Bootstrap with Django templates can not only make design a breeze, but also greatly
reduce the amount of code you have to write.
If you do start to use a number of custom template tags and/or use multiple templates to
render a page, it can become more difficult to debug issues within a template. The hardest
part of the process is often determining which file is responsible for the markup you see on
the screen. This becomes more prevalent the larger an application gets. In order to combat
this issue you should choose a meaningful structure to organize your templates and stick to it
religiously. It will make debugging and maintenance much easier.
Currently, in our code, we are just throwing everything in the templates directory. This
approach doesn't scale; it makes it very difficult to know which templates belong to which
views and how the templates relate to one another. Let's fix that.
Restructure
First, let's organize the templates by subfolder, where the name of each subfolder corresponds
to the app that is using the template. Reorganizing our template folder as such produces:
.
base.html
contact
    contact.html
flatpages
    default.html
main
    index.html
    templatetags
        circle_item.html
    user.html
payments
    cardform.html
    edit.html
    errors.html
    field.html
    register.html
    sign_in.html
With this setup we can easily see which templates belong to which application. This should
make it easier to find the appropriate templates. We do have to change the way in which we
refer to templates in both our views and our templates, by appending the app name before the
template name when calling it, so that Django can find the template correctly. This is just a minor
change and it also adds readability.
For example, in a template you change this:

{% include "cardform.html" %}

To:

{% include "payments/_cardform.html" %}
The only thing we are doing differently here is including the folder where the template
is stored. And since the folder has the same name as the app that uses the template, we
also know where to find the corresponding views.py file, which in this case would be
payments/views.py.
In a views.py file it's the same sort of update:
return render_to_response(
    'payments/sign_in.html',
    {
        'form': form,
        'user': user
    },
    context_instance=RequestContext(request)
)
Again, just like in the template, we add the name of the directory so that Django can locate
the template correctly.
Don't make these changes just yet.
This is a good first step to organizing our templates, but we can take it a bit further and
communicate the purpose of the template by adhering to a naming convention.
Naming Convention
Basically you have three types of templates:
Base templates that other templates extend, named with a double-underscore prefix, e.g. __base.html
Partial templates that are meant to be included from other templates, named with a single-underscore prefix, e.g. payments/_cardform.html
Template tag fragment templates, kept under a templatetags subfolder and referenced from the tag itself, e.g. @register.inclusion_tag('main/templatetags/circle_item.html')
Doing this will let us quickly identify what the purpose of each of our templates is. This will
make things easier, especially when your application grows and you start including a number
of different types of templates and referencing them from all over your application.
Go ahead and make the restructuring and renaming changes now. Once done, run the server
to make sure you caught everything.
Your templates directory should now look like this:
.
__base.html
contact
    contact.html
flatpages
    default.html
main
    index.html
    templatetags
        circle_item.html
    user.html
payments
    _cardform.html
    _field.html
    edit.html
    errors.html
    register.html
    sign_in.html
As you can see, the basic structure stays the same, but now if you look at the payments folder,
for example, you can quickly tell that _cardform.html and _field.html are templates that
are meant to be included in other templates, while the other templates each represent a page
in the application. And we know all of that without even looking at their code.
{% load marketing %}
The first question you should have after reading that is, Well, where are the marketing template
tags, and what's in there?
Let's first rename marketing.py to main_marketing.py so that at a glance we can tell where
the template tags are located (within the main/templatetags folder). Also, in
the main_marketing.py file we have a tag called circle_header_item. While this may
describe what it is, it doesn't tell us where it came from. Larger Django projects could have
templates that include ten other template tag libraries. In such a case, it's pretty difficult to
tell which tags belong to which library. The solution is to name the tag after the library.
One convention is to use taglibname__tagname. In this case circle_header_item
becomes marketing__circle_item. This way, if we find it in a template, we know it
comes from a library called marketing, and if we just jump to the top of the HTML file,
we'll see the {% load main_marketing %} tag and thus we'll know to look for the code:
main.templatetags.main_marketing.marketing__circle_item.
This may not seem like much, but it's a life saver when the application becomes large and/or
you are revisiting the app six months after you wrote it. So take the time and add a little
structure to your templates. You'll thank yourself later.
Make all the changes and run your tests. You'll probably pick up anything you missed just by
running the test suite, since moving around templates will cause your tests to fail. Still having
problems? Check the project in the chp09 folder in the exercise repo.
Conclusion
We have talked about several important topics in this chapter with regards to Bootstrap:
1. First, we went through the basics of Bootstrap 3, its grid system and some of the cool
tools it has for us to use.
2. Then, we talked about using custom fonts, putting in our own styling and imagery, and
making Bootstrap and Django templates play nicely together.
This all represents the core of what you need to know to successfully work with Bootstrap and
Django. That being said, Bootstrap is a giant library, and we barely scratched the surface.
Read more about Bootstrap on the official website. There are a number of things that you
can do with Bootstrap, and the more you use it, the better your Bootstrap skills will become.
With that in mind, the exercises will go through a number of examples designed to give you
more practice with Bootstrap.
Finally, if you need additional help with Bootstrap, check out this blog post, which touches
on a number of features and components of the framework, outside the context of Django which may be easier to understand.
We also talked about two important concepts that go hand and hand with Bootstrap and front
end work:
1. HTML5 Semantic Tags - these help make your web site more accessible programmatically, while also making your HTML easier to understand and maintain for fellow web developers. The web is about data, and making the data of your website more accessible is generally a good thing.
2. Template Tags - we talked a lot about custom template tags and how they can reduce your typing and make your templates much easier to use. We will explore custom tags further in the exercises with a look at how to data-drive your website using custom template tags.
A lot of ground was covered, so be sure to go through the exercises to help ensure you understand everything that we covered in this chapter.
Exercises
1. Bootstrap is a front-end framework, and although we didn't touch much on it in this chapter, it uses a number of CSS classes to insert things on the page, make it look nice, and provide the responsive nature of the page. It does this by providing a large number of classes that can be attached to any HTML element to help with placement. All of these capabilities are described on Bootstrap's CSS page. Have a look through it, and then let's put some of those classes to use.
In the main carousel, the text "Join the Dark Side" on the Darth Vader image blocks the image of Darth himself. Using the Bootstrap / carousel CSS, can you move the text and sign up button to the left of the image so as to not cover Lord Vader?
If we do the above change, everything looks fine until we view things on a phone (or make our browser really small). Once we do that, the text covers up Darth Vader completely. Can you make it so that on small screens the text is in the normal position (centered / lower portion of the image) and on larger screens it's on the left?
2. In this chapter we updated the Home Page, but we haven't done anything about the Contact Page, the Login Page, or the Register Page. Bootstrapify them. Try to make them look awesome. The Bootstrap examples page is a good place to go to get some simple ideas to implement. Remember: try to make the pages semantic, reuse the Django templates that you already wrote where possible, and most of all have fun.
3. Previously in the chapter we introduced the marketing__circle_item template tag. The one issue we had with it was that it required a whole lot of data to be passed into it. Let's see if we can fix that. Inclusion tags don't have to have data passed in. Instead, they can inherit context from the parent template. This is done by passing takes_context=True to the inclusion tag decorator like so:

@register.inclusion_tag('main/templatetags/circle_item.html',
                        takes_context=True)

If we did this for our marketing__circle_item tag, we wouldn't have to pass in all that data; we could just read it from the context. Go ahead and make that change, then you will need to update the main.views.index function to add the appropriate data to the context when you call render_to_response. Once that is all done, you can stop hard-coding all the data in the HTML template and instead pass it to the template from the view function (a sketch of the idea follows after these exercises).
For bonus points, create a marketing_info model. Read all the necessary data from
the model in the index view function and pass it into the template.
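To make exercise 3 concrete, here is a minimal sketch of what a context-reading version of the tag could look like. The context key marketing_items mirrors what main.views.index puts into the context later in the book, but treat the names as assumptions rather than the official solution:

@register.inclusion_tag('main/templatetags/circle_item.html',
                        takes_context=True)
def marketing__circle_item(context):
    # With takes_context=True, the parent template's context is handed in,
    # so the tag can read its data instead of receiving it as arguments.
    return {'marketing_items': context.get('marketing_items')}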
Chapter 10
Building the Members Page
Now that we have finished getting the public facing aspects of our site looking nice and purdy, it's time to turn our attention to our paying customers and give them something fun to use that will keep them coming back for more (at least in theory).
User Story
As always, let's start with the user story we defined in Chapter 7.
And that is exactly what we are going to do. To give you an idea of where we should end up by the end of this chapter, have a look at the screenshot below:
{% extends "__base.html" %}
{% load staticfiles %}
{% block content %}
<div class="row member-page">
<div class="col-sm-8">
<div class="row">
{% include "main/_statusupdate.html" %}
{% include "main/_lateststatus.html" %}
</div>
</div>
<div class="col-sm-4">
<div class="row">
{% include "main/_jedibadge.html" %}
</div>
</div>
</div>
{% endblock %}
Looks pretty simple, right? Here we added the basic Bootstrap scaffolding. Remember that Bootstrap uses a grid system, and we can access that grid system by using CSS classes with the row and col syntax. In this case, we have used special column classes col-sm-8 and
col-sm-4. Since the total columns available is 12, this tells Bootstrap that our first column
should be 2/3 of the screen (8 of 12 columns) and our second column should be 1/3 of the
screen width.
The sm part of the class denotes that these columns should appear on tablets and larger devices. On anything smaller than a tablet there will only be one column. You have four options
for column size with Bootstrap:
class name    width in pixels    device type
.col-xs-      < 768px            Phones
.col-sm-      >= 768px           Tablets
.col-md-      >= 992px           Desktops
.col-lg-      >= 1200px          Large Desktops
Keep in mind that choosing a column size will ensure that the column is available for that
size and all sizes larger (i.e., specifying .col-md will show the column for desktops and large
desktops but not phones or tablets).
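For instance, sizes can be combined on the same element so the layout changes per breakpoint. The snippet below is a hypothetical illustration, not one of the project's templates:

<div class="row">
    <!-- Stacks full-width on phones; becomes a 2/3 + 1/3 split from tablets up -->
    <div class="col-xs-12 col-sm-8">Main content</div>
    <div class="col-xs-12 col-sm-4">Sidebar</div>
</div>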
After setting up the grid, we used three includes:
{% include "main/_statusupdate.html" %}
{% include "main/_lateststatus.html" %}
{% include "main/_jedibadge.html" %}
After the last chapter you should be familiar with includes; they just let us include a separate template in our current template. Each of these represents an info box - e.g., the Report Back to Base box - which is a separate, reusable piece of logic kept in a separate template file. This ensures that your templates stay relatively small and readable, and it makes it easier to understand things.
Let's start with the jedi badge. Create main/_jedibadge.html:
<!-- The jedi badge info box, shows user info -->
{% load staticfiles %}
<section class="info-box" id="user_info">
<h1>Jedi Badge</h1>
<img class="img-circle" src="{% static 'img/yoda.jpg' %}"
width="140" height="140" alt="user.avatar"/>
<ul>
<li>Rank: {{user.rank}}</li>
<li>Name: {{user.name}}</li>
<li>Email: {{user.email}}</li>
<li><a id="show-achieve" href="#">Show Achievements</a></li>
</ul>
<p>Click <a href="{% url 'edit' %}">here</a> to make changes to
your credit card.</p>
</section>
We start with a section tag that wraps the whole section and gets its styling from a class called
info-box. Add the following CSS styles to mec.css:
/* Member Page */
.info-box {
    border: 2px solid #000000;
    margin-bottom: 20px;
    padding-left: 10px;
    padding-right: 10px;
    padding-bottom: 5px;
    background-color: #eee;
}

#user_info {
    text-align: center;
    margin-left: 20px;
}

#user_info ul {
    list-style-type: none;
    text-align: left;
}

.member-page {
    padding-top: 20px;
    padding-bottom: 40px;
    background-color: white;
    margin-left: 0px;
    margin-right: 0px;
    margin-top: -20px;
}
Mainly we are just setting some spacing and background colors. Nothing too fancy here.
Coming back to the jedi badge info box, there are two things we have changed about the user:
1. The user now has an avatar (which for the time being we are hard-coding as the
yoda image) - <img class="img-circle" src="{% static 'img/yoda.jpg'
{{user.rank}}</li>
We default the value to Padwan because all registered users start with that value, and unregistered users, who technically have a rank of youngling, wont see the rank displayed
anywhere on the system.
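The model change itself isn't shown in this excerpt. A minimal sketch of the addition to the User model in payments/models.py, assuming a plain CharField (the max_length is a guess; the default matches the text above), looks like:

class User(models.Model):          # the project's existing custom user model
    # ... existing fields (name, email, password, stripe_id, etc.) ...
    rank = models.CharField(max_length=50, default="Padwan")   # assumed field options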
Since we are making changes to the User model, we need to re-sync our database to add the new rank column to the payments_user table. However, if you run ./manage.py syncdb, Django will essentially do nothing, because it cannot add a column to a table that has users in it. It is worth noting that in Django versions prior to 1.7 you would see an error instead. To resolve this, you need to first remove all the users, and then the syncdb can continue error free.
For right now that is okay: We can delete the users. But if you are updating a system where you need to keep the data (say, a system that is in production), this is not a feasible option. Thankfully, in Django 1.7 the concept of migrations has been introduced. Migrations actually come from a popular third-party Django package called South, which allows you to update a table/schema without losing data. An upcoming chapter will cover migrations in more detail. For now we will just drop the database, and then re-sync the tables.
Before you do that, let's talk quickly about users. Since we are developing the membership page (which requires a login), we will need registered users so we can log into the system. Having to delete all the users anytime we make a change to the User model can be a bit annoying, but we can get around this issue by using fixtures.
Fixtures
We talked about fixtures in the Software Craftsmanship chapter as a means of loading data for unit testing. Well, you can also use fixtures to load initial data into a database. This can make things harder to debug, but in this case, since we are doing a lot of work on how the system responds to registered users, we can save ourselves a lot of time by pre-registering users.
Since users are in the payments application, we should put our fixture there. Let's create a fixture that runs each time ./manage.py syncdb is run (which also runs every time our unit tests are run):
1. Create a directory called fixtures in the payments directory.
2. Make sure you have some registered users in your database. Manually add some if necessary.
3. Then run ./manage.py dumpdata payments.User > payments/fixtures/initial_data.json.
manage.py dumpdata spits out JSON for all the data in the database or, as in the case above, for a particular table. Then we just redirect that output to the file payments/fixtures/initial_data.json.
Now when you run ./manage.py syncdb, it will search all of the applications registered in settings.py for a fixtures/initial_data.json file and load the data stored in that file into the database.
Here is an example of what the data might look like, formatted to make it more human-readable:
[
    {
        "pk": 1,
        "fields": {
            "last_login": "2014-03-11T08:58:20.136",
            "rank": "Padwan",
            "name": "jj",
            "password": "pbkdf2_sha256$12000$c8TnAstAXuo4$agxS589FflHZf+C14EHpzr5+EzFtS1V1t...",
            "email": "[email protected]",
            "stripe_id": "cus_3e8fBA8rIUEg5X",
            "last_4_digits": "4242",
            "updated_at": "2014-03-11T08:58:20.239",
            "created_at": "2014-03-11T08:58:20.235"
        },
        "model": "payments.user"
    },
    {
        "pk": 2,
        "fields": {
            "last_login": "2014-03-11T08:59:19.464",
            "rank": "Jedi Knight",
            "name": "kk",
            "password": "pbkdf2_sha256$12000$bEnyOYJkIYWS$jqwLJ4iijmVgPHu9na/Jncli5nJnxbl47...",
            "email": "[email protected]",
            "stripe_id": "cus_3e8gyBJlWAu8u6",
            "last_4_digits": "4242",
            "updated_at": "2014-03-11T08:59:19.579",
            "created_at": "2014-03-11T08:59:19.577"
        },
        "model": "payments.user"
    },
    {
        "pk": 3,
        "fields": {
            "last_login": "2014-03-11T09:12:09.802",
            "rank": "Jedi Master",
            "name": "ll",
            "password": "pbkdf2_sha256$12000$QE2hn0nj0IWm$Ea+IoZMzv6KYV2ycpe+g7afFWi2wPSSya...",
            "email": "[email protected]",
            "stripe_id": "cus_3e8tB7EaspoOiJ",
            "last_4_digits": "4242",
            "updated_at": "2014-03-11T09:12:10.033",
            "created_at": "2014-03-11T09:12:10.029"
        },
        "model": "payments.user"
    }
]
With that, you won't have to worry about re-registering users every time you run a unit test or re-sync your database. But if you do use the above data exactly, it will break your unit tests, because our unit tests assume there is no data in the database. In particular, the test test_get_by_id in tests.payments.testUserModel.UserModelTest should now be failing (among others). Let's fix it real quick:
def test_get_by_id(self):
    self.assertEqual(User.get_by_id(self.test_user.id),
                     self.test_user)
Previously we hard-coded the id to 1, which is okay if you know what the state of the database is, but it's still hard-coding, and it has come back to bite us in the rear. Never again! Now we just use the id of the test_user (that we created in the setUpClass method), so it doesn't matter how much data we have in the database; this test should continue to pass, time after time.
Update Database
With the fixtures set up, let's update the database.
1. First drop (and re-create) the database from the Postgres shell.
2. Then re-sync the tables:

$ ./manage.py syncdb
Make sure to run your tests. Right now you should have three errors since we have not created
the *main/_statusupdate.html* template yet.
Gravatar Support
Most users are going to want to be able to pick their own avatar as opposed to everybody being Yoda. We could give the user a way to upload an image and store a reference to it in the user table, then just look up the image and display it in the jedi badge info box. But somebody has already done that for us.
Gravatar, or Globally Recognized Avatars, is a site that stores an avatar for a user based upon their email address and provides APIs for all of us developers to access that avatar so we can display it on our site. This is a nice way to go because it keeps you from having to reinvent the wheel, and it keeps the user from having to upload yet another image to yet another service.
To use it, let's create a custom tag that will do the work of looking up the gravatar for us. Once the tag is created, it will be trivial to insert the gravatar into our *main/_jedibadge.html* template.
Create main/templatetags/main_gravatar.py and fill it with the following code:
import hashlib

from django import template

register = template.Library()


@register.simple_tag
def gravatar_img(email, size=140):
    url = get_url(email, size)
    return '''<img class="img-circle" src="%s" height="%s"
               width="%s"
               alt="user.avatar" />''' % (url, size, size)


def get_url(email, size=140):
    default = ('http://upload.wikimedia.org/wikipedia/en/9/9b/'
               'Yoda_Empire_Strikes_Back.png')
    return ('http://www.gravatar.com/avatar/' +
            hashlib.md5(email.lower().encode('utf-8')).hexdigest() +
            '?' + query_params)
The base URL is http://www.gravatar.com/avatar/. To that we add the user's email address hashed with md5 (which is required by gravatar), along with the query_params. We pass in two query parameters:
s: the size of the image to return
d: a default image that is returned if the email address doesn't have a gravatar account
The code to do that is here:
default = ('http://upload.wikimedia.org/wikipedia/en/9/9b/'
           'Yoda_Empire_Strikes_Back.png')
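The line that actually builds query_params isn't legible in this listing. A sketch of it, assuming the standard library's urlencode is used to build the query string, would be:

from urllib.parse import urlencode   # assumption: stdlib helper for the query string

# inside get_url(), before the return statement:
query_params = urlencode([('s', str(size)), ('d', default)])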
With the tag in place, update main/_jedibadge.html to use it:

 1  <!-- The jedi badge info box, shows user info -->
 2  {% load staticfiles %}
 3  {% load main_gravatar %}
 4  <section class="info-box" id="user_info">
 5      <h1>Jedi Badge</h1>
 6      {% gravatar_img user.email %}
 7      <ul>
 8          <li>Rank: {{user.rank}}</li>
 9          <li>Name: {{user.name }}</li>
10          <li>Email: {{user.email }}</li>
11          <li><a id="show-achieve" href="#">Show Achievements</a></li>
12      </ul>
13      <p>Click <a href="{% url 'edit' %}">here</a> to make changes to
        your credit card.</p>
14  </section>
Notice we have changed line 3 and line 6 from our earlier template: Line 3 loads our custom
tag library and line 6 calls it, passing in user.email. Now we have gravatar support - and it
only took a handful of lines!
NOTE: There are a number of gravatar plugins available on GitHub, most of which are basically the same thing we just implemented. While you shouldn't reinvent the wheel, there's not much point in utilizing an external dependency for something that is this straightforward. However, if/when we need more than basic gravatar support, it may be worth looking into some of the pre-existing packages.
To finalize the gravatar support, we had better re-run our tests and make sure nothing fails (aside from the three previous errors, of course), as well as add some new tests for the gravatar tag. Since this is review, add these on your own - it's always good to get some extra practice.
It's just a simple form that POSTs to the report URL; be sure to add the corresponding entry to urls.py.
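The entry itself isn't legible in this copy; it presumably looks something like the following, where the regex and the name are guesses rather than the book's exact pattern:

url(r'^report$', 'main.views.report', name='report'),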
To fully understand the view function, main.views.report, we first need to have a look at the model that the view function relies on. The listing for the StatusReport model is below:
class StatusReport(models.Model):
    user = models.ForeignKey('payments.User')
    when = models.DateTimeField(auto_now_add=True)
    status = models.CharField(max_length=200)

And here is the start of the report view:

def report(request):
    if request.method == "POST":
        status = request.POST.get("status", "")
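The rest of the view is incomplete here. A plausible completion, given that the page simply reloads and re-queries the reports afterwards (the import paths and helper usage are assumptions), is:

from django.http import HttpResponseRedirect

from main.models import StatusReport
from payments.models import User


def report(request):
    if request.method == "POST":
        status = request.POST.get("status", "")
        user = User.get_by_id(request.session.get('user'))
        if status:
            # Store the new status report for the logged-in user.
            StatusReport(user=user, status=status).save()
    # Send the browser back to the members page, which re-queries the reports.
    return HttpResponseRedirect('/')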
The last line will in effect cause the Recent Status Reports info box to update with the newly
posted status, with the following updates to the main.views.index function:
def index(request):
    uid = request.session.get('user')
    if uid is None:
        # main landing page
        market_items = MarketingItem.objects.all()
        return render_to_response(
            'main/index.html',
            {'marketing_items': market_items}
        )
    else:
        # membership page
        status = StatusReport.objects.all().order_by('-when')[:20]
        return render_to_response(
            'main/user.html',
            {'user': User.get_by_id(uid), 'reports': status},
            context_instance=RequestContext(request),
        )
The main difference from our previous version of this function is this line:

status = StatusReport.objects.all().order_by('-when')[:20]
This line grabs a list of twenty status reports ordered by posted date in reverse order. Please keep in mind that even though we are calling objects.all(), we are never actually retrieving all records from the database for this table. Django's ORM by default uses lazy loading for its query sets, meaning Django won't actually query the database until you access the data from the query set - which, in this case, is at the point of slicing, [:20]. Thus we will only pull at most 20 rows from the database.
If you print the query from the shell, the exact SQL that Django executes is output for you to read, and it does include a LIMIT 20 at the end. In general, dropping down into the shell and outputting the exact query is a good sanity check to quickly verify that the ORM is executing what you think it is executing.
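A quick way to run that check yourself from ./manage.py shell (the import path is an assumption):

from main.models import StatusReport

qs = StatusReport.objects.all().order_by('-when')[:20]
print(qs.query)   # the generated SQL ends with LIMIT 20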
Django 1.7 also lets us capture this query in a custom QuerySet:

class StatusReportQuerySet(models.QuerySet):
    def latest(self):
        return self.all().order_by('-when')[:20]
Then we hook it up to our StatusReport model by adding one line to our StatusReport
class:
objects = StatusReportQuerySet.as_manager()
Once that is set up, we can use our new query set in our main.views.index function by changing this line:

status = StatusReport.objects.all().order_by('-when')[:20]

to:

status = StatusReport.objects.latest()

Prior to Django 1.7 you would have had to go through the query set explicitly, with something like status = StatusReport.objects.get_query_set().latest().
At the very least, Django 1.7 saves us a few keystrokes. In general, though, there is some value in using custom query sets because they can make the intent of the code clearer. It's pretty easy to guess that latest() returns the latest StatusReports. This is especially advantageous when you have to write some custom SQL, or when you have a complex query whose purpose isn't easy to understand. By wrapping it in a query set you're creating a kind of self-documenting code that makes it easier to maintain, and now in 1.7 it takes fewer keystrokes as well.
Don't forget we can also verify that our new query is doing what we expect by using the same technique we used previously: run ./manage.py shell, import the model, and then run:

print(StatusReport.objects.latest().query)
And this should give the same output as our original query.
For reference, here is the full index function again:

 1  def index(request):
 2      uid = request.session.get('user')
 3      if uid is None:
 4          # main landing page
 5          market_items = MarketingItem.objects.all()
 6          return render_to_response(
 7              'main/index.html',
 8              {'marketing_items': market_items}
 9          )
10      else:
11          # membership page
12          status = StatusReport.objects.all().order_by('-when')[:20]
13          return render_to_response(
14              'main/user.html',
15              {'user': User.get_by_id(uid), 'reports': status},
16              context_instance=RequestContext(request),
17          )
Lines 13-17:

return render_to_response(
    'main/user.html',
    {'user': User.get_by_id(uid), 'reports': status},
    context_instance=RequestContext(request),
)

For the membership page we render main/user.html, passing in the logged-in user and the reports query set. The main/_lateststatus.html include then displays those reports:
<!-- list of latest status messages sent out by all users of the site -->
{% load staticfiles %}
{% load main_gravatar %}
<section class="info-box" id="latest_happenings">
    <h1>Recent Status Reports</h1>
    {% for report in reports %}
        <div class="media">
            <div class="media-object pull-left">
                <img src="{% gravatar_url report.user.email 32 %}"
                     width="32" height="32"
                     alt="{{ report.user.name }}"/>
            </div>
            <div class="media-body">
                <p>{{ report.status }}</p>
            </div>
        </div>
    {% endfor %}
</section>
To support the {% gravatar_url %} tag used in that template, main/templatetags/main_gravatar.py is reworked so that building the URL is itself a registered tag, which gravatar_img then reuses:

import hashlib

from django import template

register = template.Library()


@register.simple_tag
def gravatar_img(email, size=140):
    url = gravatar_url(email, size)
    return '''<img class="img-circle" src="%s" height="%s"
               width="%s"
               alt="user.avatar" />''' % (url, size, size)


@register.simple_tag
def gravatar_url(email, size=140):
    default = ('http://upload.wikimedia.org/wikipedia/en/9/9b/'
               'Yoda_Empire_Strikes_Back.png')
    return ('http://www.gravatar.com/avatar/' +
            hashlib.md5(email.lower().encode('utf-8')).hexdigest() +
            '?' + query_params)
That should give us the basic functionality for our members page. You'll be adding some more functionality in the exercises to give you a bit of practice, and then in the next chapter we'll look at switching to a REST-based architecture.
Run your automated tests:
.....
.
.....
----------------------------------------------------------------------
Ran 31 tests in 1.434s

OK
Destroying test database for alias 'default'...
Exercises
1. Our user story US3 Main Page says that the members page is a place for announcements and to list current happenings. We have implemented user announcements in the form of status reports, but we should also have a section for system announcements / current events. Using the architecture described in this chapter, create an Announcements info box to display system-wide announcements.
2. You may have noticed that in the Jedi Badge box there is a list achievements link. What if the user could get achievements for posting status reports, attending events, and any other arbitrary action that we create in the future? This may be a nice way to increase participation, because everybody likes badges, right? Go ahead and implement this achievements feature. You'll need a model to represent the badges and a link between each user and the badges they own (maybe a user_badges table). Then you'll want your template to loop through and display all the badges that the given user has.
Chapter 11
REST
Remember the status update feature that we implemented in the last chapter? It works, but we can do better.
The main issue is that when you submit a status, the entire page must reload before you see your updated status. This is so web 1.0. We can't have that, can we? The way to improve this and remove the screen refresh is by using AJAX, which is a client-side technology for making asynchronous requests that don't cause an entire page refresh. Often AJAX is coupled with a server-side API to make it much easier to get the data you need from JavaScript.
One of the most popular server-side API styles in modern web programming is REST. REST stands for Representational State Transfer, which to most people means absolutely nothing. Let's hazard a definition: REST is a stateless architectural style, generally run over HTTP, that relies on consistent URL names and HTTP verbs (GET, POST, DELETE, etc.) to make it easy for various client programs to simply and consistently access and manipulate resources from a server in a standard way.
REST doesn't actually specify what format should be used for data exchange, but most new REST APIs are implemented with JSON. This is great for us since JSON is extremely simple to work with in Python, as a Python dictionary is basically JSON out of the box. So we will also use JSON in our examples here.
When implementing a REST API, there are a number of ways you could choose to implement it, and a lot of debate about which is the best way. We'll focus on a standard method; otherwise we might never finish this chapter!
For example, a collection URL, a member URL, and the same collection namespaced under an api prefix look like:

http://<site-name>/status_reports/
http://<site-name>/status_reports/2
http://<site-name>/api/status_reports/

This helps to differentiate between the REST API and URLs that just return HTML.
Finally, to be good web citizens, it's a good idea to put the version of your API into your URL structure so you can change it in future versions without breaking everybody's code. Doing so would have your URLs looking like:

http://<site-name>/api/v1/status_reports/
Against the member URL (e.g., api/v1/status_reports/2), the verbs map as follows:

HTTP Verb    Typical Use
GET          Returns the member and any related data - the status report with an id of 2
PUT          Replaces the addressed member with the one passed in
POST         USUALLY NOT USED: POST to the Collections URI instead
DELETE       Deletes the member with the corresponding id
A couple of notes are worth mentioning. PUT is meant to be idempotent, which means you can expect the same result every time you call it. That is why it implements an update-or-insert: it either updates an existing member or inserts a new one if the member does not exist; the end result is that the appropriate member will exist and have data equal to the data passed in.
POST, on the other hand, is not idempotent and is thus used to create things.
There is also an HTTP verb called PATCH which allows for partial updates; there is a lot of debate about whether it should be used and how to use it. Many (err, most) developers ignore it since you can create a new item with POST and update with PUT. We'll ignore it as well since it does over-complicate things.
Conversely, oftentimes when thinking through how to design a REST API, developers are stuck with the idea that they need more verbs to provide the access they want to provide. While this is sometimes true, it can usually be solved by exposing more resources. The canonical example of this is login. Rather than implementing something like:

GET api/v1/users/1/login

you could expose a session (or login) resource and POST to it to log a user in. This sticks more strictly to the REST definition. Of course, with REST there are only suggestions/conventions, and nothing will stop you from implementing whatever URL structure you wish. This can get ugly if you don't stick with the conventions outlined. Only deviate from them if you have a truly compelling reason to do so.
For our status reports we will implement the following:

Collection
GET - api/v1/status_reports - returns ALL status reports
POST - api/v1/status_reports - creates a status update and returns the id

Member
GET - api/v1/status_reports/<id> - returns a particular status report by id
That's all we need for now. You can see that we could simply add a number of URL query strings - i.e., user, date, etc. - to provide further query functionality, and maybe a DELETE as well if you wanted to add additional functionality, but we don't need those yet.
A naive first pass at returning JSON might look like this:

from django.core import serializers
from django.http import HttpResponse


def report(request):
    if request.method == "GET":
        status = StatusReport.objects.all().order_by('-when')[:20]

        return HttpResponse(
            serializers.serialize("json", status),
            content_type='application/json'
        )
There you go: you now have a simple and extremely naive REST API with one method. Of course, that isn't going to get you very far. We could abstract the JSON functionality into a mixin class, and then, by using class-based views, make it simple to use JSON on all of our views. This technique is actually described in the Django documentation.
class JSONResponseMixin(object):
    """
    A mixin that can be used to render a JSON response.
    """
    def render_to_json_response(self, context, **response_kwargs):
        """
        Returns a JSON response, transforming 'context' to make the
        payload.
        """
        return HttpResponse(
            serializers.serialize("json", context),
            content_type='application/json',
            **response_kwargs
        )
While this is a bit better, it still isn't going to help a whole lot with authentication, API discovery, handling POST parameters, etc. To that end, we are going to use a framework to help us implement our API. The two most popular ones as of writing are Django REST Framework and Tastypie. Which should you use? They really are about the same in terms of functionality, so it's up to you.
It's worth noting (or maybe not) that we chose Django REST Framework since it has a cooler logo.
If you're unsure (and not sold by the logo argument), you can look at the popularity of the package, how active it is, and its last release date to help you decide. There are two great places to find this information: Django Packages and GitHub. Look for the number of watchers, stars, forks, and contributors. Check out the issues page as well to see if there are any reported show-stopper issues.
Using this information, especially when you are unfamiliar with the project, can greatly aid in the decision-making process of which project to use.
That said, let's jump right into the meat and potatoes.
Installation
The first thing to do is install DRF:

$ pip install djangorestframework==3.1.1
Also, let's update our requirements.txt file to include our new dependency:

Django==1.8.2
django-embed-video==0.11
djangorestframework==3.1.1
mock==1.0.1
psycopg2==2.5.3
requests==2.3.0
stripe==1.9.2
Serializers
DRF provides a number of tools you can use. Let's start with the most fundamental: the serializer.
Serializers provide the capability to translate a Model or QuerySet to/from JSON. This is what we just saw with django.core.serializers above. DRF serializers do more-or-less the same thing, but they also hook directly into all the other DRF goodness.
Let's create a serializer for the StatusReport object. Create a new file called main/serializers.py with the following content:
from rest_framework import serializers


class StatusReportSerializer(serializers.Serializer):
    id = serializers.ReadOnlyField()
    user = serializers.StringRelatedField()
    when = serializers.DateTimeField()
    status = serializers.CharField(max_length=200)
If you've worked with Django Forms before, the StatusReportSerializer should look somewhat familiar. It functions like a Django Form. You first declare the fields to include in the serializer, just like you do in a form (or a model, for that matter). The fields you declare here are the fields that will be included when serializing/deserializing the object.
Two quick notes about the fields we declared for our serializer:
1. We declared an id (primary key) field of type serializers.ReadOnlyField(). This is a read-only field that cannot be changed, but it needs to be there since it maps to our id field, which we will need on the client side for updates.
2. Our user field is of type serializers.StringRelatedField(). This represents a many-to-one relationship and says that we should serialize the user object by using its __str__ function. In our case this is the user's email address.
The serializer also defines two functions, create() and update(). They do pretty much exactly what you would think: create a new model instance from serialized data, or update an existing model instance. Any custom creation logic you may need can be put in these functions. By creation logic we're not referring to the code that would normally be put into your __init__ function, because that is already going to be called. Creation logic is simply any logic you may need to include when creating the object from a deserialized JSON string (i.e., a Python dictionary). For a single object serializer like the one shown above, there isn't likely to be much extra logic, but there is nothing preventing you from writing a serializer that works on an entire object chain. This is sometimes helpful when dealing with nested or related objects.
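The bodies of create() and update() aren't reproduced above; the standard pattern for a plain DRF Serializer looks roughly like the following sketch (treat it as illustrative rather than the book's exact code):

def create(self, validated_data):
    # Build and persist a new StatusReport from the deserialized data.
    return StatusReport.objects.create(**validated_data)

def update(self, instance, validated_data):
    # Copy the writable fields onto the existing instance and save it.
    instance.status = validated_data.get('status', instance.status)
    instance.save()
    return instance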
Tests
Let's write some tests for the serializer to ensure it does what we want it to do. You do remember the Test Driven Development chapter, right?
Implementing a new framework/reusable app is another great example of where Test Driven Development shines. It gives you an easy way to get at the underpinnings of a new framework, find out how it works, and ensure that it does what you need it to. It also gives you an easy way to try out different techniques in the framework and quickly see the results (as we will see throughout the rest of this chapter).
Serialization
The DRF serializers actually work in two steps: they first convert the object into a Python dictionary and then convert the dictionary into JSON. Let's test that those two steps work as we expect.
Create a file called ../tests/main/testSerializers.py:
from collections import OrderedDict

from django.test import TestCase
from django.utils.six import BytesIO
from rest_framework.renderers import JSONRenderer
from rest_framework.parsers import JSONParser

from main.models import StatusReport
from main.serializers import StatusReportSerializer
from payments.models import User
# (project import paths assumed)


class StatusReportSerializer_Tests(TestCase):

    @classmethod
    def setUpTestData(cls):
        cls.u = User(name="test", email="[email protected]")
        cls.u.save()

        cls.new_status = StatusReport(user=cls.u, status="hello world")
        cls.new_status.save()

        cls.expected_dict = OrderedDict([
            ('id', cls.new_status.id),
            ('user', cls.u.email),
            ('when', cls.new_status.when.isoformat()),
            ('status', 'hello world'),
        ])

    def test_model_to_dictionary(self):
        serializer = StatusReportSerializer(self.new_status)
        self.assertEquals(self.expected_dict, serializer.data)
There is a fair amount of code here, so let's take it a piece at a time. The first part of the file handles the setup:
from collections import OrderedDict

from django.test import TestCase
from django.utils.six import BytesIO
from rest_framework.renderers import JSONRenderer
from rest_framework.parsers import JSONParser

from main.models import StatusReport
from main.serializers import StatusReportSerializer
from payments.models import User


class StatusReportSerializer_Tests(TestCase):

    @classmethod
    def setUpTestData(cls):
        cls.u = User(name="test", email="[email protected]")
        cls.u.save()

        cls.new_status = StatusReport(user=cls.u, status="hello world")
        cls.new_status.save()

        cls.expected_dict = OrderedDict([
            ('id', cls.new_status.id),
            ('user', cls.u.email),
            ('when', cls.new_status.when.isoformat()),
            ('status', 'hello world'),
        ])
This part is responsible for all the necessary imports and for setting up the user/status report that we will be working with in our tests.
The first test:

def test_model_to_dictionary(self):
    serializer = StatusReportSerializer(self.new_status)
    self.assertEquals(self.expected_dict, serializer.data)
This test verifies that we can take our newly created object self.new_status and serialize it to a dictionary. That is what our serializer class does. We just create our serializer by passing in the object to serialize and then call serializer.data, and out comes the dictionary we want.
Run the test: boom! Hashtag FAIL. If you look at the results you should see an error about the two dictionaries not being equal. You should also see a warning about a naive datetime.
A Brief Aside about Timezone support
JSON, by convention, should use a datetime format called ISO 8601. This is a universal datetime format that includes timezone information. In Python you can output this format by calling isoformat on your datetime. The specification will output the date, then the time, then the timezone, but if the timezone is UTC it will output Z instead of the timezone, so it looks something like 2013-01-29T12:34:56.123Z. However, Python's isoformat will output 2013-01-29T12:34:56.123+00:00 for the same date. This makes our test fail if we are using UTC, which is the default timezone. For a quick fix, let's just change our setUpTestData method to use the properly formatted timezone in the expected_dict, so the function will now look like:
 1  @classmethod
 2  def setUpTestData(cls):
 3      cls.u = User(name="test", email="[email protected]")
 4      cls.u.save()
 5
 6      cls.new_status = StatusReport(user=cls.u, status="hello world")
 7      cls.new_status.save()
 8
 9      when = cls.new_status.when.isoformat()
10      if when.endswith('+00:00'):
11          when = when[:-6] + 'Z'
12
13      cls.expected_dict = OrderedDict([
14          ('id', cls.new_status.id),
15          ('user', cls.u.email),
16          ('when', when),
17          ('status', 'hello world'),
18      ])
Notice in lines 9-11 that we grab the time (when) and convert it to the format that DRF expects.
Then we use that value to populate our expected_dict. After making this change our test
should pass.
Dictionary to JSON
The next step in the object to JSON conversion process is converting the dictionary to JSON:
def test_dictionary_to_json(self):
    serializer = StatusReportSerializer(self.new_status)
    content = JSONRenderer().render(serializer.data)
    expected_json = JSONRenderer().render(self.expected_dict)
    self.assertEquals(expected_json, content)
To convert to JSON you must first call the serializer to convert to the dictionary, and then
call JSONRenderer().render(serializer.data). This instantiates the JSONRenderer
object and passes it a dictionary to render as JSON. The render function calls json.dumps
and ensures the output is in the proper unicode format. Now we have an option of how we
want to verify the results. We could build the expected JSON string and compare the two
strings.
One drawback here is that you often have to play around with formatting the string exactly
right, especially when dealing with date formats that get converted to the JavaScript date
format.
Another option (which we used) is to create the dict that we should get from the serializer
(and we know what the dict is because we just ran that test), then convert that dict to JSON
and ensure the results are the same as converting our serializer.data to JSON. This also
has its issues, as the order in which the attributes are placed in the resulting JSON string is important, and dictionaries don't guarantee order. So we have to use OrderedDict, which will ensure our dictionary preserves the order in which the keys were inserted. After all that, we can verify that we are indeed converting to JSON correctly.
Run the tests and make sure they pass. Next, let's go the other direction and deserialize JSON back into a StatusReport:

def test_json_to_StatusReport(self):
    json = JSONRenderer().render(self.expected_dict)
    stream = BytesIO(json)
    data = JSONParser().parse(stream)

    serializer = StatusReportSerializer(self.new_status, data=data)
    self.assertTrue(serializer.is_valid())

    status = serializer.save()
    self.assertEqual(self.new_status.id, status.id)
    self.assertEqual(self.new_status.status, status.status)
    self.assertEqual(self.new_status.when, status.when)
    self.assertEqual(self.new_status.user, status.user)

Strictly speaking, those last asserts could have been replaced with a single self.assertEqual(self.new_status, status). But I just wanted to be explicit and show that each field was in fact being deserialized correctly.
Now what about creating a new StatusReport instance from JSON? Let's write another test:
def test_json_to_new_StatusReport(self):
    json = JSONRenderer().render(self.expected_dict)
    stream = BytesIO(json)
    data = JSONParser().parse(stream)

    serializer = StatusReportSerializer(data=data)
    self.assertTrue(serializer.is_valid())

    status = serializer.save()
    self.assertEqual(self.new_status.status, status.status)
    self.assertIsNotNone(status.when)
    self.assertEqual(self.new_status.user, status.user)
You probably already guessed it, but this test is going to fail.
The important part of the error is RelatedObjectDoesNotExist; that's the error you get when you try to look up a model object from the db and it doesn't exist. Why don't we have a user object associated with our StatusReport? Remember that in our serializer we used this line:

user = serializers.StringRelatedField()

This means that we serialize the user field by calling its __str__ function, which just returns an email. Then when we deserialize the object, our create function is called, but since StringRelatedField is read-only, the user won't get passed in. Nor will the id, for that matter (because it's set to be a ReadOnlyField). We'll come back to the solution for this in just a second.
First, let's talk about ModelSerializers.
ModelSerializers
Our initial StatusReportSerializer contains a ton of boilerplate code. We're really just copying the fields from our model. Fortunately, there is a better way. Enter ModelSerializers.
If we rewrite our StatusReportSerializer using DRF's ModelSerializer, it looks like this:
class StatusReportSerializer(serializers.ModelSerializer):

    class Meta:
        model = StatusReport
        fields = ('id', 'user', 'when', 'status')
Wow! That's a lot less code. Just like Django gives you a Form and a ModelForm, DRF gives you a Serializer and a ModelSerializer. And just like Django's ModelForm, the ModelSerializer will get all the information it needs from the model. You just have to point it to the model and tell it what fields you want to use.
The only difference between these four lines of code in the ModelSerializer and the twelve lines of code in our Serializer is that the ModelSerializer serializes our user field using the id instead of the email address. This is not exactly what we want, but it does mean that when we deserialize the object from JSON, we get our user relationship back! To verify that, and to update our tests to account for the user being serialized by id instead of email, we only have to change our cls.expected_dict to look like this in testSerializers.py:
cls.expected_dict = OrderedDict([
    ('id', cls.new_status.id),
    ('user', cls.u.id),
    ('when', cls.new_status.when),
    ('status', 'hello world'),
])
Almost there. If you recall, our templates use the user's email address so we can do the gravatar lookup. How do we get that email address? We create a custom relationship field in serializers.py:
class RelatedUserField(serializers.RelatedField):

    read_only = False
Full update:
class RelatedUserField(serializers.RelatedField):

    read_only = False


class StatusReportSerializer(serializers.ModelSerializer):
    user = RelatedUserField(queryset=User.objects.all())

    class Meta:
        model = StatusReport
        fields = ('id', 'user', 'when', 'status')
Take note of the user field declaration on StatusReportSerializer: when declaring a RelatedField in our serializer, queryset is a required parameter. The intent is to make it explicit where the data is coming from for this field.
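The method bodies of RelatedUserField aren't legible in this copy. The usual way to implement such a field in DRF 3 - rendering the related User as its email and resolving an email back to a User on input - is sketched below; treat the exact bodies as assumptions:

class RelatedUserField(serializers.RelatedField):

    read_only = False

    def to_representation(self, value):
        # Serialize the related User as its email address.
        return value.email

    def to_internal_value(self, data):
        # Resolve an incoming email address back to a User instance,
        # using the queryset declared on the serializer field.
        return self.get_queryset().get(email=data)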
Update the cls.expected_dict again, back to using the user's email:
cls.expected_dict = OrderedDict([
    ('id', cls.new_status.id),
    ('user', cls.u.email),
    ('when', cls.new_status.when),
    ('status', 'hello world'),
])
And A View
Our new view function in main/json_views.py should look like:
 1  from rest_framework import status
 2  from rest_framework.decorators import api_view
 3  from rest_framework.response import Response
 4  from main.models import StatusReport
 5  from main.serializers import StatusReportSerializer
 6
 7
 8  @api_view(['GET', 'POST'])
 9  def status_collection(request):
10      """Get the collection of all status_reports
11      or create a new one"""
12
13      if request.method == 'GET':
14          status_report = StatusReport.objects.all()
15          serializer = StatusReportSerializer(status_report, many=True)
16          return Response(serializer.data)
17      elif request.method == 'POST':
18          serializer = StatusReportSerializer(data=request.DATA)
19          if serializer.is_valid():
20              serializer.save()
21              return Response(serializer.data, status=status.HTTP_201_CREATED)
22          return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)
Line 1-5: Import what we need.
Line 8: @api_view is a decorator provided by DRF, which:
- checks that an appropriate request is passed into the view function
- adds context to the Response so we can deal with stuff like CSRF tokens
- provides authentication functionality (which we will discuss later)
- handles ParseErrors
Line 8: The arguments to @api_view are a list of the HTTP verbs to support.
Line 13-16: A GET request on the collection view should return the whole list. Grab the list, serialize it to JSON, and return it.
Line 16: DRF also provides the Response object, which inherits from django.template.response.SimpleTemplateResponse. It takes in unrendered content (for example, JSON) and renders it based upon the Content-Type specified in the request header.
Line 17-20: For POST requests, just create a new object based upon the passed-in data.
Line 18: Notice the use of request.DATA. DRF provides a Request class that extends Django's HttpRequest and provides a few enhancements: request.DATA works similarly to HttpRequest.POST but handles POST, PUT and PATCH methods.
Line 21: On successfully saving, return a response with an HTTP return code of 201 (created). Notice the use of status.HTTP_201_CREATED. You could simply put in 201, but using the DRF status identifiers makes it more explicit as to what code you're returning, so that people reading your code don't have to remember all the HTTP return codes.
Line 22: If the deserialization process didn't work (i.e., serializer.is_valid() returns False), then return HTTP_400_BAD_REQUEST. This basically means "don't call me again with the same data, because it doesn't work."
That's a lot of functionality and not very much code. Also, if you recall from the section on Structuring a REST API, this produces a REST API that uses the correct HTTP verbs and returns the appropriate response codes. If you further recall from that discussion, resources have a collection URL and a member URL. To finish the example, we need to flesh out the member URL by updating main/json_views.py:
@api_view(['GET', 'PUT', 'DELETE'])
def status_member(request, id):
    try:
        status_report = StatusReport.objects.get(id=id)
    except StatusReport.DoesNotExist:
        return Response(status=status.HTTP_404_NOT_FOUND)

    if request.method == 'GET':
        serializer = StatusReportSerializer(status_report)
        return Response(serializer.data)
    elif request.method == 'PUT':
        serializer = StatusReportSerializer(status_report,
                                            data=request.DATA)
        if serializer.is_valid():
            serializer.save()
            return Response(serializer.data)
        return Response(serializer.errors,
                        status=status.HTTP_400_BAD_REQUEST)
    elif request.method == 'DELETE':
        status_report.delete()
        return Response(status=status.HTTP_204_NO_CONTENT)
This is nearly the same as the collection view, but here we support different HTTP verbs and are dealing with one object instead of an entire collection of objects. With that, we now have the entire API for our StatusReport resource.
NOTE: In the code above, the PUT request is not idempotent. Do you know why?
What happens if we call a PUT request with an id that is not in the database? For
extra credit go ahead and implement the fix now or just read on; we will fix it
later.
Test
Create a new file called ../tests/main/testJSONViews.py:
First the GET functionality:
from django.test import TestCase

from main.json_views import status_collection
from main.models import StatusReport
from main.serializers import StatusReportSerializer
# (project import paths assumed)


class dummyRequest(object):
    # NOTE: the attributes set here are assumptions - a minimal stand-in
    # for the request information DRF expects.

    def __init__(self, method):
        self.method = method
        self.META = {}
        self.GET = {}
        self.POST = {}


class JsonViewTests(TestCase):

    def test_get_collection(self):
        status = StatusReport.objects.all()
        expected_json = StatusReportSerializer(status,
                                               many=True).data
        response = status_collection(dummyRequest('GET'))

        self.assertEqual(expected_json, response.data)
Above we create a dummyRequest that has the information that DRF expects.
NOTE: We can't use the RequestFactory yet because we haven't set up the URLs.
Then in our JsonViewTests we call our status_collection function, passing in the GET parameter.
This should return all the StatusReport objects as JSON. We manually query all the StatusReports, convert them to JSON, and then compare that to the return value from our view call. Notice that on the returned response we call response.data, as opposed to the response.content we are used to, because this response hasn't actually been rendered yet.
Otherwise the test is the same as any other view test. To be complete, we should check the case where we have data and where there is no data to return, and we should also test the POST with and without valid data. We'll leave that as an exercise for you, dear reader.
Don't forget to run the test.
Now that we have tested our view, let's go ahead and wire up the URLs. We are going to create a separate urls file specifically for the JSON URLs in our main application, as opposed to using our default django_ecommerce/urls.py file. This creates better separation of concerns and allows our REST API to be more independent.
Let's create a main/urls.py file that contains:
from django.conf.urls import patterns, url

urlpatterns = patterns(
    'main.json_views',
    url(r'^status_reports/$', 'status_collection'),
    url(r'^status_reports/(?P<id>[0-9]+)$', 'status_member'),
)
We need our django_ecommerce/urls.py to point to this new urls.py. So add the following
URL entry to the end of the list:
url(r'^api/v1/', include('main.urls')),
Don't forget to actually add rest_framework to the list of INSTALLED_APPS in your settings.py. This should now look like:

INSTALLED_APPS = (
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.sites',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'main',
    'django.contrib.admin',
    'django.contrib.flatpages',
    'contact',
    'payments',
    'embed_video',
    'rest_framework',
)
Of course, when you call your API from your program you don't want to see that page; you just want the JSON. Don't worry: DRF has you covered. By default the @api_view wrapper, which gives us the cool browsable API amongst other things, listens to the Accept header to determine how to render the template (remember that rest_framework.response.Response is just a TemplateResponse object).
Try this from the command line (with the development server running):

$ curl http://127.0.0.1:8000/api/v1/status_reports/

NOTE: Windows users: sorry, you most likely don't have curl installed by default like the rest of us do. You can download it here. Just scroll ALL the way down to the bottom and select the appropriate download for your system.

And you will get back raw JSON (as long as you have a status update in the table, of course). Thus returning JSON is the default action the DRF Response will take. However, by default your browser will set the Accept header to text/html, which you can also do from curl like this:

$ curl -H "Accept: text/html" http://127.0.0.1:8000/api/v1/status_reports/

And then you'll get back a whole mess of HTML. Hats off to the DRF folks. Very nicely done.
By using some of the mixins that DRF provides, we can do more with a lot less code. Update
json_views.py:
 1  from rest_framework import mixins
 2  from rest_framework import generics
 3  from main.models import StatusReport
 4  from main.serializers import StatusReportSerializer
 5
 6
 7  class StatusCollection(mixins.ListModelMixin,
 8                         mixins.CreateModelMixin,
 9                         generics.GenericAPIView):
10      queryset = StatusReport.objects.all()
11      serializer_class = StatusReportSerializer
12
13      def get(self, request, *args, **kwargs):
14          return self.list(request, *args, **kwargs)
15
16      def post(self, request, *args, **kwargs):
17          return self.create(request, *args, **kwargs)
Where did all the code go? That's exactly what the mixins are for:
Line 7: mixins.ListModelMixin provides the list(request) function that allows you to serialize a collection to JSON and return it.
Line 8: mixins.CreateModelMixin provides the create(request) function that allows for the POST method call - e.g., creating a new object of the collection type.
Line 9: generics.GenericAPIView provides the core functionality plus the browsable API we talked about in the previous section.
Line 10: defining a class-level queryset member is required so the ListModelMixin can work its magic.
Line 11: defining a class-level serializer_class member is also required for all the mixins to work.
Remaining lines: we implement GET and POST by passing the call to the respective mixin.
Using the class-based view in this way with the DRF mixins saves a lot of boilerplate code while still keeping things pretty easy to understand. Also, we can clearly see what happens with a GET vs a POST request without needing a number of if statements, so there is better separation of concerns. DRF also ships pre-combined generic views; generics.ListCreateAPIView, for example, bundles the same list and create behavior into a single base class.
NOTE: It would help even more if the mixin had been called something like generics.GetPostCollectionAPIView, so that you know it's for GET and POST on a collection, as opposed to having to learn DRF. ListCreateAPIView doesn't really tell us anything about the REST API that this view function is creating unless we are already familiar with DRF. In general, the folks at Real Python like to be a bit more explicit even if it means just a bit more code. Fortunately, there is nothing preventing you from putting in a nice docstring to explain what the function does - which is a good compromise. Ultimately it's up to you to decide which one you prefer.
To complete the example, here is the status_member function after being refactored into a
class view:
class StatusMember(mixins.RetrieveModelMixin,
                   mixins.UpdateModelMixin,
                   mixins.DestroyModelMixin,
                   generics.GenericAPIView):

    queryset = StatusReport.objects.all()
    serializer_class = StatusReportSerializer

    # Standard DRF pattern: hand each HTTP verb to the corresponding mixin.
    def get(self, request, *args, **kwargs):
        return self.retrieve(request, *args, **kwargs)

    def put(self, request, *args, **kwargs):
        return self.update(request, *args, **kwargs)

    def delete(self, request, *args, **kwargs):
        return self.destroy(request, *args, **kwargs)
Or, even more concisely, you can use the combined generic view:

class StatusMember(generics.RetrieveUpdateDestroyAPIView):

This is just a combination of the four mixins we inherited from above. The choice is yours.
We also need to change our main/urls.py file slightly to account for the class-based views:

from main import json_views

urlpatterns = patterns(
    'main.json_views',
    url(r'^status_reports/$',
        json_views.StatusCollection.as_view()),
    url(r'^status_reports/(?P<pk>[0-9]+)/$',
        json_views.StatusMember.as_view())
)
We also need to tweak our test so it calls the class-based view:

1  def test_get_collection(self):
2      status = StatusReport.objects.all()
3      expected_json = StatusReportSerializer(status, many=True).data
4
5      response = StatusCollection.as_view()(dummyRequest("GET"))
6
7      self.assertEqual(expected_json, response.data)
Notice in line 5 that we need to call the as_view() function of our StatusCollection class
just like we do in main/urls.py. We can't just call StatusCollection().get(dummyRequest("GET"))
directly. Why? Because as_view() is magic. It sets up several instance variables such as
self.request, self.args, and self.kwargs; without these member variables set up,
your test will fail.
Make sure to run your tests before moving on.
Authentication
There is one final topic that needs to be covered so that a complete REST API can be implemented: Authentication.
Since we are charging a membership fee for MEC, we don't want unpaid users to have access to our members-only data. In this section we will look at how to use authentication so that only authorized users can access your REST API. In particular, we want to enforce the following constraints:
Unauthenticated requests should be denied access
Only authenticated users can post status reports
Only the creator of a status report can update and/or delete that status report
To enforce the first two constraints, we add a permission_classes attribute to our view classes:

permission_classes = (permissions.IsAuthenticated,)

Pay attention to that comma at the end of the line! We need to pass a tuple, not a single item. Also, don't forget the proper import:

from rest_framework import permissions
Testing Authentication
We can verify that this is working by running our unit tests, which should now fail, as they try to check whether the user is authenticated.
We need an authenticated user. Again, DRF comes through with some helpful test utilities. Previously we were using a dummyRequest class to provide the functionality that we needed. Let's drop that and use DRF's APIRequestFactory:
from rest_framework.test import APIRequestFactory
# (other imports as before)


class JsonViewTests(TestCase):

    @classmethod
    def setUpClass(cls):
        super().setUpClass()
        cls.factory = APIRequestFactory()

    @classmethod
    def setUpTestData(cls):
        cls.test_user = User(id=2222, email="[email protected]")
        cls.test_user.save()
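The body of the request-building helper that the rest of the tests rely on isn't fully legible in this copy. A sketch of such a factory method, using APIRequestFactory together with force_authenticate, with its signature inferred from how it is called below, looks like this:

from rest_framework.test import force_authenticate   # assumed import

def get_request(self, method='GET', authed=True):
    # Build a request for the given HTTP verb via the APIRequestFactory.
    request = getattr(self.factory, method.lower())('')
    if authed:
        # Attach the test user so DRF treats the request as authenticated.
        force_authenticate(request, user=self.test_user)
    return request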
In a nutshell, we have coded a way to create a request for any type of HTTP verb. Further, we can decide whether that request is authenticated or not. This gives us the flexibility to test as many permutations as we might need so we can properly exercise our REST API. For now we put the code in JsonViewTests because that's all we need. However, you might consider creating your own DRFTestCase, perhaps putting it in ../tests/base_test_case.py, for example. Then you could easily share it amongst whatever tests you create that need the functionality.
Now update test_get_collection to use the new helper:

1  def test_get_collection(self):
2      status = StatusReport.objects.all()
3      expected_json = StatusReportSerializer(status, many=True).data
4
5      response = StatusCollection.as_view()(self.get_request())
6
7      self.assertEqual(expected_json, response.data)
Line 5 is the only line that changed, as it now calls our newly created self.get_request() factory method.
Let's add one more test, test_get_collection_requires_logged_in_user, to verify that our authentication is working correctly:
1  from rest_framework import status
2
3  def test_get_collection_requires_logged_in_user(self):
4
5      response = StatusCollection.as_view()(self.get_request(authed=False))
6
7      self.assertEqual(response.status_code,
8                       status.HTTP_403_FORBIDDEN)
Line 1: we use DRF's statuses in our test as they are more descriptive, so we need to add this import line to the top of the module.
Line 5: we pass in authed=False to our get_request factory method, which sets up the request as unauthenticated.
To enforce the last constraint (only the creator of a status report can update or delete it), we create a custom permission class:

class IsOwnerOrReadOnly(permissions.BasePermission):
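The body of the class isn't legible in this copy of the listing. The usual DRF pattern for this kind of object-level permission is sketched below; the obj.user attribute matches the StatusReport model's user field, but treat this as a sketch rather than the book's exact code:

    def has_object_permission(self, request, view, obj):
        # Reads (GET, HEAD, OPTIONS) are allowed for any request that got this far.
        if request.method in permissions.SAFE_METHODS:
            return True
        # Writes are only allowed for the user who created the status report.
        return obj.user == request.user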
Once the class is created, we need to update our StatusMember class in json_views.py as
well:
class StatusMember(mixins.RetrieveModelMixin,
                   mixins.UpdateModelMixin,
                   mixins.DestroyModelMixin,
                   generics.GenericAPIView):

    queryset = StatusReport.objects.all()
    serializer_class = StatusReportSerializer
    permission_classes = (permissions.IsAuthenticated,
                          IsOwnerOrReadOnly)

    def get(self, request, *args, **kwargs):
        return self.retrieve(request, *args, **kwargs)

    def put(self, request, *args, **kwargs):
        return self.update(request, *args, **kwargs)

    def delete(self, request, *args, **kwargs):
        return self.destroy(request, *args, **kwargs)
And that's it. Now only owners can update / delete their status reports.
It is important to note that all of this authentication is using the default Authentication classes
which are SessionAuthentication and BasicAuthentication. If your main client is
going to be an AJAX web-based application then the default authentication classes will work
fine, but DRF does provide several other authentication classes if you need something like
OAuthAuthentication, TokenAuthentication or something custom.
The official DRF documentation does a pretty good job of going over these if you want more
info.
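For example, switching to token authentication is mostly a settings change; a rough sketch (assuming rest_framework.authtoken has been added to INSTALLED_APPS) would be:

# settings.py - a sketch, not part of the MEC project as written
REST_FRAMEWORK = {
    'DEFAULT_AUTHENTICATION_CLASSES': (
        'rest_framework.authentication.TokenAuthentication',
    ),
}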
Conclusion
We started this chapter talking about the desire to not refresh the page when a user submitted a status report. And, well, we didn't even get to that solution yet. Think of it as a cliffhanger to be continued in the next chapter.
We did, however, go over one of the key ingredients to making that no-refresh happen: a good REST API. REST is increasingly popular because its resources are simple to consume and because it can be accessed from any client that can talk to the web.
Django REST Framework makes implementing the REST API relatively straightforward and helps to ensure that we follow good conventions. We learned how to serialize and deserialize our data and structure our views appropriately, along with the browsable API and some of the important features of DRF. There are even more features in DRF that are worth exploring, such as ViewSets and Routers. While these powerful classes can greatly reduce the code you have to write, you sacrifice readability. But that doesn't mean you shouldn't check them out and use them if you like.
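To give you a taste, the status report endpoints rewritten in the ViewSet/Router style would look roughly like this (a sketch only; we are not switching to this approach here):

from rest_framework import routers, viewsets


class StatusReportViewSet(viewsets.ModelViewSet):
    # one class replaces both StatusCollection and StatusMember
    queryset = StatusReport.objects.all()
    serializer_class = StatusReportSerializer


router = routers.DefaultRouter()
router.register(r'status_reports', StatusReportViewSet)
# router.urls can then be included from urls.py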
In fact, it's worth going through the DRF site and browsing through the API Guide. We've covered the most common uses when creating REST APIs, but with all the different use-cases out there, some readers will surely need some other part of the framework that isn't covered here.
Either way: REST is everywhere on the web today. If you're going to do much web development, you will surely have to work with REST APIs - so make sure you understand the concepts presented in this chapter.
Exercises
1. Flesh out the unit tests. In the JsonViewTests, check the case where there is no data
to return at all, and test a POST request with and without valid data.
2. Extend the REST API to cover the user.models.Badge.
3. Did you know that the browsable API uses Bootstrap for the look and feel? Since we
just learned Bootstrap, update the browsable API Template to fit with our overall site
template.
4. We don't have permissions on the browsable API. Add them in.
Chapter 12
Django Migrations
What's new in Django 1.7? Basically, migrations. While there are some other nice features, the new migrations system is the big one.
In the past you probably used South to handle database changes. However, in Django 1.7,
migrations are now integrated into the Django Core thanks to Andrew Godwin, who ran this
Kickstarter. He is also the original creator of South.
Let's begin.
Here, for example, is the kind of initial migration that makemigrations generates for the contact app:

from django.db import models, migrations


class Migration(migrations.Migration):

    dependencies = [
    ]

    operations = [
        migrations.CreateModel(
            name='ContactForm',
            fields=[
                ('id', models.AutoField(
                    verbose_name='ID', primary_key=True,
                    serialize=False, auto_created=True)),
                ('name', models.CharField(max_length=150)),
                ('email', models.EmailField(max_length=250)),
                ('topic', models.CharField(max_length=200)),
                ('message', models.CharField(max_length=1000)),
                ('timestamp',
                    models.DateTimeField(auto_now_add=True)),
            ],
            options={
                'ordering': ['-timestamp'],
            },
            bases=(models.Model,),
        ),
    ]
For a migration to work, the migration file must contain a class called Migration() that
inherits from django.db.migrations.Migration. This is the class that the migration
framework looks for and executes when you ask it to run migrations - which we will do later.
The Migration() class contains two main lists, dependencies and operations.
Migration dependencies
dependencies is a list of migrations that must be run before the current migration. In the case above, nothing has to run first, so there are no dependencies. But if you have foreign key relationships, then you will have to ensure a model is created before you can add a foreign key to it.
To see this, let's create migrations for our main app:

dependencies = [
    ('payments', '__first__'),
]
The dependency above says that the migrations for the payments app must be run before the current migration. You might be wondering, "How does Django know I have a dependency on payments when I only ran makemigrations for main?"
Short answer: magic.
Slightly longer answer: makemigrations looks at things like ForeignKey fields to determine dependencies (more on this later).
You can also have a dependency on a specific file.
To see an example, let's initialize another migration:

dependencies = [
    ('main', '0001_initial'),
]
This essentially means that it depends on the 0001_initial.py file in the main app running first. This functionality provides a lot of flexibility, as you can accommodate foreign keys that depend upon models from different apps.
Dependencies can also be combined (it's just a list, after all), so you can have multiple dependencies - which means that the numbering of the migrations (usually 0001, 0002, 0003, ...) doesn't strictly have to be the order in which they are applied. You can add any dependency you want and thus control the order without having to re-number all the migrations.
Migration operations
The second list in the Migration() class is the operations list. This is a list of operations
to be applied as part of the migration. Generally the operations fall under one of the following
types:
CreateModel: you guessed it: this creates a new model. See the migration above for an example.
DeleteModel: removes a table from the database; just pass in the name of the model.
RenameModel: given the old_name and new_name, this renames the model.
AlterModelTable: changes the name of the table associated with a model. Same as the db_table option.
AlterUniqueTogether: changes unique constraints.
AlterIndexTogether: changes the set of custom indexes for the model.
AddField: just like it sounds. Here is an example (and a preview of things to come... dun dun dun dun):
migrations.AddField(
    model_name='user',
    name='badges',
    field=models.ManyToManyField(to='main.Badge')
),
$ ./manage.py migrate
Models
Our payments.models.User looks like this:
class User(AbstractBaseUser):
    name = models.CharField(max_length=255)
    email = models.CharField(max_length=255, unique=True)
    # password field defined in base class
    last_4_digits = models.CharField(max_length=4, blank=True,
                                     null=True)
    stripe_id = models.CharField(max_length=255)
    created_at = models.DateTimeField(auto_now_add=True)
    updated_at = models.DateTimeField(auto_now=True)
    rank = models.CharField(max_length=50, default="Padwan")
    badges = models.ManyToManyField(Badge)
Notice the ManyToManyField called badges at the end, which references main.models.Badge:
class Badge(models.Model):
    img = models.CharField(max_length=255)
    name = models.CharField(max_length=100)
    desc = models.TextField()
Okay. So far there are no issues, but we have another model to deal with:
class StatusReport(models.Model):
    user = models.ForeignKey('payments.User')
    when = models.DateTimeField(blank=True)
    status = models.CharField(max_length=200)
Oops! We now have payments.models depending on main.models and main.models depending on payments.models. That's a problem. In the code, we solved this already by not importing payments.models and instead using the line:

user = models.ForeignKey('payments.User')
While that trick works at the application level, it doesn't work when we try to apply migrations to the database.
Migrations
How about the migration files? Again, take a look at the dependencies:
main.migrations.0001_initial:

dependencies = [
    ('payments', '__first__'),
]
payments.migrations.0001_initial:

dependencies = [
    ('main', '0001_initial'),
]
So the latter migration depends upon the main migration running first, and thus we have a circular reference. Remember how we talked about makemigrations looking at ForeignKey fields to create dependencies? That's exactly what happened to us here.
The fix
When I was an intern in college (my first real development job) my dev manager said something to me which I will never forget: "You can't really understand code unless you can write it yourself."
This was after a copy and paste job I did crashed the system.
So let's write a migration from scratch.
from django.db import models, migrations


class Migration(migrations.Migration):

    dependencies = [
        ('payments', '__first__'),
        ('main', '0001_initial'),
    ]

    operations = [
        migrations.CreateModel(
            name='StatusReport',
            fields=[
                ('id', models.AutoField(
                    primary_key=True, auto_created=True,
                    verbose_name='ID', serialize=False)),
                ('when', models.DateTimeField(blank=True)),
                ('status', models.CharField(max_length=200)),
                ('user', models.ForeignKey(to='payments.User')),
            ],
            options={
            },
            bases=(models.Model,),
        ),
    ]
Notice how this migration depends upon both main.0001_initial and payments.__first__. This means that the payments.User model will already be created before this migration runs, and thus the user foreign key will be created successfully.
Don't forget to:

$ ./manage.py migrate
You should now have your database in sync with your migration files!
Timezone support
If you have timezone support enabled in your settings.py file, you will likely get an error when running migrations that says something to the effect of: received a naive datetime while time zone support is active. The fix is to use django.utils.timezone instead of datetime.
You'll have to update contact.models.ContactForm and payments.models.UnpaidUsers. Just replace datetime.datetime with django.utils.timezone. You should update both the models and the migration files.
More info on timezone support can be found in the Django docs.
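As a rough illustration of the swap (the model and field names below are placeholders, not the project's actual fields):

from django.db import models
from django.utils import timezone


class ExampleModel(models.Model):
    # before (naive, triggers the warning):
    #   created_at = models.DateTimeField(default=datetime.datetime.now)
    # after (timezone-aware):
    created_at = models.DateTimeField(default=timezone.now)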
You may also see the following warning:
Your models have changes that are not yet reflected in a migration, and so won't be applied.
This has to do with the same datetime fields we just discussed. Since we are defaulting the value of the datetime field to now(), Django will see the default value as always being different and thus it will give you the warning above. While this is supposed to be a helpful warning message to remind you to keep your migrations up-to-date with your models.py, in this case it is a minor annoyance. There is nothing to be done here; just ignore the warning.
Data Migrations
Migrations are mainly for keeping the data model of your database up-to-date, but a database is more than just a data model. Most notably, it's also a large collection of data. So any discussion of database migrations wouldn't be complete without also talking about data migrations.
Data migrations are used in a number of scenarios. Two very popular ones are:
1. Loading system data: When you would like to load system data that your application depends upon being present to operate successfully.
2. Migrating existing data: When a change to a data model forces the need to change
the existing data.
Do note that loading dummy data for testing is not in the above list. You could use migrations to do that, but migrations are often run on production servers, so you probably don't want to be creating a bunch of dummy test data on your production server. (More on this later.)
Let's look at an example of each.
class Migration(migrations.Migration):

    dependencies = [
        ('payments', '0001_initial'),
    ]

    operations = [
        migrations.RunPython(create_default_user)
    ]
Like any other migration, we create a class called Migration(), set its dependency, and then
define the operation.
For Data Migrations we can use the RunPython() operation, which just accepts a callable
and calls that function. So, we need to add in the create_default_user() function:
from django.contrib.auth.hashers import make_password
from django.db import migrations


def create_default_user(apps, schema_editor):
    # grab the historical version of the User model from the app registry
    new_user = apps.get_model("payments", "User")

    u = new_user(
        name='vader', email="[email protected]",
        last_4_digits="1234", password=make_password("darkside")
    ).save()


class Migration(migrations.Migration):

    dependencies = [
        ('payments', '0001_initial'),
    ]

    operations = [
        migrations.RunPython(create_default_user)
    ]
This function just adds the new user. A couple of things are worth noting:
1. We don't use the User object directly but rather grab the model from the app registry. By doing this, migrations will return us the historic version of the model. This is important as the fields in the model may have changed. Grabbing the model from the app registry will ensure we get the correct version of the model.
2. Since we are grabbing from a historic app registry, it is likely we don't have access to our custom-defined functions such as user.create. So we saved the user without using the user.create function.
3. There are cases where you may want to rerun all your migrations, or perhaps there is existing data in the database before you run migrations, so we've added a check to clear out any conflicting data before we create the new user:
try:
    vader = new_user.objects.get(email="[email protected]")
    vader.delete()
except new_user.DoesNotExist:
    pass
This will prevent the annoying duplicate primary key error that you would get if you somehow ran this migration twice.
Go ahead and apply the migration:

$ ./manage.py migrate
Operations to perform:
  Synchronize unmigrated apps: contact, rest_framework
  Apply all migrations: admin, sessions, payments, sites,
    flatpages, contenttypes, auth, main
Synchronizing apps without migrations:
  Creating tables...
  Installing custom SQL...
  Installing indexes...
Running migrations:
  Applying payments.0003_initial_data... OK
While both use cases can be accomplished with migrations, loading test data should be thought of as something separate from migrations. You can continue to use fixtures for loading test data or, better yet, just create the data you need in the test case itself.
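For example, a test that needs a Badge record could simply create one in setUpTestData instead of relying on a fixture or a data migration; a small sketch (the specific field values are made up):

from django.test import TestCase

from main.models import Badge


class BadgeTests(TestCase):

    @classmethod
    def setUpTestData(cls):
        # create exactly the data this test needs - no fixture required
        cls.badge = Badge.objects.create(
            img='img/jedi.png', name='Jedi Knight', desc='Completed training')

    def test_badge_was_created(self):
        self.assertEqual(Badge.objects.count(), 1)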
Django records which migrations have been applied in the django_migrations table:

 id |   app    |        name        |          date applied
----+----------+--------------------+--------------------------------
  1 | main     | 0001_initial       | 2014-09-20 23:51:38.499414-05
  2 | payments | 0001_initial       | 2014-09-20 23:51:38.600185-05
  4 | main     | 0002_statusreport  | 2014-09-20 23:52:33.808006-05
  5 | payments | 0003_initial_data  | 2014-09-21 11:36:12.702975-05
The next time migrations are run, it will skip the migration files listed in the database here. This means that even if you change a migration file manually, it will be skipped if there is an entry for it in the database. This makes sense, as you generally don't want to run migrations twice. But if for whatever reason you do, one way to get it to run again is to first delete the corresponding row from the database. In the case of schema migrations, though, Django will first check the database structure, and if it is the same as the migration (i.e. the migration doesn't apply any new changes), then the migration will be "faked", meaning not actually run.
Conversely, if you want to undo all the migrations for a particular app, you can migrate to a special migration called zero. For example, if you type:

$ ./manage.py migrate payments zero

it will undo (reverse) all the migrations for the payments app. You don't have to use zero; you can use any arbitrary migration (like ./manage.py migrate payments 0001_initial), and if that migration is in the past then the database will be rolled back to the state of that migration, or rolled forward if the migration hasn't yet been run. Pretty powerful stuff!
Note: This doesn't apply to data migrations.
from django.db import migrations


def combine_names(apps, schema_editor):
    # the model name 'Person' is illustrative; use the model that holds
    # the first_name/last_name fields you are combining
    Person = apps.get_model('yourappname', 'Person')
    for person in Person.objects.all():
        person.name = '%s %s' % (person.first_name, person.last_name)
        person.save()


class Migration(migrations.Migration):

    dependencies = [
        ('yourappname', '0001_initial'),
    ]

    operations = [
        migrations.RunPython(combine_names),
    ]
When you create a Python function to be called by the RunPython migration, it must accept two arguments. The first is apps, which is of type django.apps.registry.Apps and gives you access to the historical models/migrations. In other words, this is a model that has the state as defined in the previous migration (which could be vastly different from the current state of the model). By state we are mainly referring to the fields associated with the model.
The second argument is the schema_editor for changing the schema, which should not be necessary very often when migrating data, because you're not changing the schema, just the data.
In the example we call apps.get_model, which gives us that historical model. Then we loop through all rows in the model, combine the first_name and last_name into a single name, and save the row. That's it - our migration is done. It's actually pretty straightforward to write a migration once you get the hang of it.
The hardest part is remembering the structure of a migrations file, but Django has that covered! From the command line, if you run:

$ ./manage.py makemigrations --empty yourappname

this will create an empty migration file in the appropriate app. Django will also suggest a name for the migration (which you are free to change), and it will add your dependencies automatically, so you can just start writing your operations.
Or:
This will load the data into your database, so you can call it as needed.
Conclusion
We've covered the most common scenarios you'll encounter when using migrations. There are plenty more, and if you're curious and really want to dive into migrations, the best place to go (other than the code itself) is the official docs. It's the most up-to-date and does a pretty good job of explaining how things work.
Remember that in the general case, you are dealing with either:
1. Schema Migrations - a change to the structure of the database or tables with no
change to the data. This is the most common type, and Django can generally create
these migrations for you automatically.
2. Data Migrations - a change to the data, or loading new data. Django cannot generate
these for you. They must be created manually using the RunPython migration.
So pick the migration that is correct for you, run makemigrations, and then just be sure to update your migration files every time you update your model - and that's more or less it.
That will allow you to keep your migrations stored with your code in git and ensure that you
can update your database structure without having to lose data.
Happy migrating!
NOTE: We will be building on the migrations that we created in this chapter, and problems will arise if your migration files do not match exactly (including the names of the files) with the migration files from the repo. Compare your code/migration files with the code/migration files from the repo. Fix any discrepancies. Thanks!
Exercises
1. At this point if you drop your database, run migrations and then run the tests you will
have a failing test because there are no MarketItems in the database. For testing you
have two options:
Load the data in the test (or use a fixture).
Load the data by using a data migration.
The preferred option for this case is to create a data migration to load the MarketingItems. Can you explain why? Create the migration.
NOTE: For some fun (and a somewhat ugly hack) we can add a line to create
the data to test.main.testMainPageView.setUpClass. See if you can
figure out what this line does and why adding it will fix your test:
2. We have a new requirement for two-factor authentication. Add a new field to the user model called second_factor. Run ./manage.py makemigrations payments. What did it create? Can you explain what is going on in each line of the migration? Now run ./manage.py migrate and check the database to see the change that has been made. What do you see in the database? Now assume management comes back and says two-factor is too complex for users; we don't want to add it after all. List two different ways you can remove the newly added field using migrations.
3. Let's pretend that MEC has been bought by a big corporation - we'll call it BIGCO. BIGCO loves making things complicated. They say that all users must have a bigCoID, and that ID has to follow a certain formula. The ID should look like this:
<first_two_digits_in_name><1-digit-Rank_code><sign-up-date><runningNumber>
1-digit-Rank_code = Y for youngling, P for padwan, J for Jedi
sign-up-date is in the format mmddyyyy
Now create the new field and a migration for the field, then manually write a data
migration to populate the new field with the data from the pre-existing users.
Chapter 13
AngularJS Primer
same conceptual architecture in mind when working on either the front-end with Angular or
the back-end with Django.
In this chapter we will get familiar with the following Angular concepts:
Directives
Angular Models
Data/Expression Bindings
Angular Controllers
Let's start off with user story 4 from the Membership Site chapter:
US4: User Polls
Determining the truth of the galaxy and balancing the force are both very important goals at MEC. As such, MEC should provide the functionality to poll padwans and correlate the results in order to best determine or predict the answers to important topics of the Star Wars galaxy. This includes questions like "Kit Fisto vs Aayla Secura: who has the best dreadlocks?" or "Who would win in a fight, C3PO or R2-D2?" Results should also be displayed to the padwans so that all shall know the truth.
Again, let's implement this first in Angular before integrating it into our Django app.
SEE ALSO: Since we cannot possibly cover all of the fundamentals in this chapter, please review this introductory Angular tutorial before moving on.
index.html
<!doctype html>
<html lang="en" ng-app=''>
<head>
    <meta charset="UTF-8">
    <title>{{ msg }} Universe</title>
    <!-- styles -->
    <link href="http://netdna.bootstrapcdn.com/bootswatch/3.1.1/yeti/bootstrap.min.css" rel="stylesheet" media="screen">
</head>
<body>
    <div class="container">
        <br><br>
        <p>What say you, padwan: <input type="text" ng-model="msg" ng-init="msg='Hello'"></p>
        <p>{{ msg }} Universe!</p>
    </div>
    <!-- scripts -->
    <script src="http://code.jquery.com/jquery-1.11.0.min.js"></script>
    <script src="http://netdna.bootstrapcdn.com/bootstrap/3.1.1/js/bootstrap.min.js"></script>
    <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.2.16/angular.min.js" type="text/javascript"></script>
</body>
</html>
Save this file as index.html in a new directory called angular_test outside of the Django
Project.
If you open the above web page in your browser then you will see that whatever you type in the text box is displayed in the paragraph below it. This is an example of data binding in Angular.
Before we look at the code in greater detail, let's look at Angular directives, which power Angular.
Angular Directives
When an HTML page is rendered, Angular searches through the DOM looking for directives and then executes the associated JavaScript. In general, all of these directives start with ng (which is supposed to stand for Angular). You can also prefix them with data- if you are concerned about strict W3C validation. Use data-ng- in place of ng-, in other words.
There are a number of built-in directives, like:
ng-click handles user actions, specifying a JavaScript function to call when a user clicks something.
ng-hide controls when an HTML element is visible or hidden, based on a conditional.
These are just a few examples. As we go through this chapter we'll see many uses of directives.
Angular Models
A model in Angular plays a similar role to a context variable in a Django template. If you think about a simple Django template that just displays {{ msg }}, Django allows you to pass in the value of msg from your view function. Assuming the above template was called index.html, your view function might look like this:
def index(request):
    return render_to_response('index.html', {"msg": "Hello"})
This will create the index template with a context variable msg that has the value of "Hello",
which is exactly what Angular is doing directly in the template, on the client-side. The
difference is that with Django (in most cases) you can only change the value of your context
variable by requesting a new value from the server, whereas with Angular you can change
the value of the model anytime on the client-side by executing some JavaScript code. This is
ultimately how we are going to be able to update the page without a refresh (our main goal),
because we can just update the model directly on the client-side.
(Comparison table: Angular vs. Django syntax - expressions such as {{ msg }}, an Angular filter such as {{ val | currency }}, a Django template tag such as {% marketing__circle_item %}, and context/model data such as {'msg': 'hello'}.)
Hopefully this table will serve as a quick guide to remember what the various functionality in
Angular does.
There are some rather sizable differences between the two frameworks as well. The main
difference is the client-side nature of Angular versus the server-side nature of Django. The
client-side nature of Angular leads to one of its most prominent features, which is marketed
as two-way data binding.
return render_to_response('index.html',
                          {'marketing_items': market_items})
Here, we pass in two arguments to the render_to_response function - the first being the template name. Meanwhile, the second is technically the template context; however, since it generally consists of a model or set of models, we'll refer to it as a model for this discussion. Django's template processor parses the template, substituting the data binding expressions and template tags with the appropriate data, then produces a view, which is finally returned to the user to see.
At that point, the data binding process is complete. If the user changes data on the web page (e.g., the view), those changes won't be reflected in the model unless the change is sent back to the server (usually with a POST request) and the data binding process starts all over again. Since data is bound only once (in the response), this technique is referred to as one-way data binding, which works well for a number of use cases.
and from the server in order to keep the model and view in sync. This results in a site that
appears much more responsive and functional to the end user.
Angular's two-way data binding is depicted in the image above. The process starts with a user action (rather than a request) - i.e., typing a value into a text box - then the JavaScript event handler updates the associated model and triggers Angular's template compiler to rebuild the view with the newly updated data from the model.
In a similar fashion, if you write JavaScript code that changes the model directly, Angular will again fire off the template compiler to ensure the view stays up-to-date with the model. Regardless of whether you make a change to the view or the model, Angular ensures the two are kept in sync.
This is two-way data binding in action, which results in keeping everything (model and view) in sync. Since it's all done on the client-side, it's also done very quickly, without requiring multiple trips to the server or a page refresh.
To be completely fair, you don't need Angular to keep the model and view up-to-date. Since Angular uses vanilla JavaScript behind the scenes, you could use AJAX to achieve similar results. Angular just makes the process much, much easier, allowing you to do a lot with little code.
Quick start
We'll start with just the poll in a standalone HTML file. Again, save this file in the angular_test directory as polls.html.
First, let's create some basic markup for the poll:
<!doctype html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Poll: Who's the most powerful Jedi?</title>
    <!-- styles -->
    <link href="http://netdna.bootstrapcdn.com/bootswatch/3.1.1/yeti/bootstrap.min.css" rel="stylesheet" media="screen">
</head>
<body>
    <div class="container">
        <div class="row">
            <div class="col-md-8">
                <h1>Poll: Who's the most powerful Jedi?</h1>
                <br>
                <span class="glyphicon glyphicon-plus"></span>
                <strong>Yoda</strong>
                <span class="pull-right">40%</span>
                <div class="progress">
                    <div class="progress-bar progress-bar-danger" role="progressbar" aria-value="40" aria-valuemin="0" aria-valuemax="100" style="width: 40%;">
                    </div>
                </div>
                <span class="glyphicon glyphicon-plus"></span>
                <strong>Qui-Gon Jinn</strong>
                <span class="pull-right">30%</span>
                <div class="progress">
                    <div class="progress-bar progress-bar-info" role="progressbar" aria-value="30" aria-valuemin="0" aria-valuemax="100" style="width: 30%;">
                    </div>
                </div>
                <span class="glyphicon glyphicon-plus"></span>
                <strong>Obi-Wan Kenobi</strong>
                <span class="pull-right">10%</span>
                <div class="progress">
                    <div class="progress-bar progress-bar-warning" role="progressbar" aria-value="10" aria-valuemin="0" aria-valuemax="100" style="width: 10%;">
                    </div>
                </div>
                <span class="glyphicon glyphicon-plus"></span>
                <strong>Luke Sykwalker</strong>
                <span class="pull-right">5%</span>
                <div class="progress">
                    <div class="progress-bar progress-bar-success" role="progressbar" aria-value="5" aria-valuemin="0" aria-valuemax="100" style="width: 5%;">
                    </div>
                </div>
                <span class="glyphicon glyphicon-plus"></span>
                <strong>Me... of course</strong>
                <span class="pull-right">15%</span>
                <div class="progress progress-striped active">
                    <div class="progress-bar" role="progressbar" aria-valuenow="15" aria-valuemin="0" aria-valuemax="100" style="width: 15%;">
                        <span class="sr-only">15% Complete</span>
                    </div>
                </div>
            </div>
        </div>
    </div>
    <!-- scripts -->
    <script src="http://code.jquery.com/jquery-1.11.0.min.js"></script>
    <script src="http://netdna.bootstrapcdn.com/bootstrap/3.1.1/js/bootstrap.min.js"></script>
    <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.2.16/angular.min.js" type="text/javascript"></script>
</body>
</html>
This snippet above relies on Bootstrap to create a simple list of choices, each with a + next to it and a progress bar below. We've put in some default values just so you can see what it might look like after people have voted. The screenshot is below.
[Screenshot: the Bootstrap-styled poll, before any Angular has been added]
How about we start with Yoda? Update the top of the file like so:
<!doctype html>
<html lang="en" ng-app=''>
<head>
    <meta charset="UTF-8">
    <title>Poll: Who's the most powerful Jedi?</title>
    <!-- styles -->
    <link href="http://netdna.bootstrapcdn.com/bootswatch/3.1.1/yeti/bootstrap.min.css" rel="stylesheet" media="screen">
</head>
<body>
    <div class="container">
        <div class="row">
            <div class="col-md-8">
                <h1>Poll: Who's the most powerful Jedi?</h1>
                <br>
                <span ng-click='votes_for_yoda = votes_for_yoda + 1' ng-init="votes_for_yoda=0" class="glyphicon glyphicon-plus"></span>
                <strong>Yoda</strong>
                <span class="pull-right">{{ votes_for_yoda }}</span>
                <div class="progress">
                    <div class="progress-bar progress-bar-danger" role="progressbar" aria-value="{{ votes_for_yoda }}" aria-valuemin="0" aria-valuemax="100" style="width: {{ votes_for_yoda }}%;">
                    </div>
                </div>
ng-app (on the html tag) bootstraps Angular and gets everything working.
ng-click='votes_for_yoda = votes_for_yoda + 1': this Angular directive creates a click handler that increments the votes_for_yoda model by 1. It will be called each time the user clicks on the + span.
ng-init="votes_for_yoda=0": this Angular directive initializes the value of votes_for_yoda to 0.
The expression {{ votes_for_yoda }} (used for the label and for the progress bar) uses two-way data binding to keep the displayed values in sync with the votes_for_yoda model. Since these values control the progress bar, the bar will now grow each time we click on the plus.
Once this is working, update the code for all the remaining Jedis.
<!doctype html>
<html lang="en" ng-app=''>
<head>
    <meta charset="UTF-8">
    <title>Poll: Who's the most powerful Jedi?</title>
    <!-- styles -->
    <link href="http://netdna.bootstrapcdn.com/bootswatch/3.1.1/yeti/bootstrap.min.css" rel="stylesheet" media="screen">
</head>
<body>
    <div class="container">
        <div class="row">
            <div class="col-md-8">
                <h1>Poll: Who's the most powerful Jedi?</h1>
                <br>
                <span ng-click='votes_for_yoda = votes_for_yoda + 1' ng-init="votes_for_yoda=0" class="glyphicon glyphicon-plus"></span>
                <strong>Yoda</strong>
                <span class="pull-right">{{ votes_for_yoda }}</span>
                <div class="progress">
                    <div class="progress-bar progress-bar-danger" role="progressbar" aria-value="{{ votes_for_yoda }}" aria-valuemin="0" aria-valuemax="100" style="width: {{ votes_for_yoda }}%;">
                    </div>
                </div>
                <span ng-click='votes_for_qui = votes_for_qui + 1' ng-init="votes_for_qui=0" class="glyphicon glyphicon-plus"></span>
                <strong>Qui-Gon Jinn</strong>
                <span class="pull-right">{{ votes_for_qui }}</span>
                <div class="progress">
                    <div class="progress-bar progress-bar-info" role="progressbar" aria-value="{{ votes_for_qui }}" aria-valuemin="0" aria-valuemax="100" style="width: {{ votes_for_qui }}%;">
                    </div>
                </div>
                <!-- ...the Obi-Wan, Luke, and "Me" blocks follow the same pattern... -->
            </div>
        </div>
    </div>
    <!-- scripts -->
    <script src="http://code.jquery.com/jquery-1.11.0.min.js"></script>
    <script src="http://netdna.bootstrapcdn.com/bootstrap/3.1.1/js/bootstrap.min.js"></script>
    <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.2.16/angular.min.js" type="text/javascript"></script>
</body>
</html>
With that, you have a simple user poll that uses progress bars to show which option has the most votes. This will update automatically without making any calls to the back-end. This, however, puts a lot of logic into our actual HTML and can thus make maintenance a bit difficult. Ideally we would like a separate place to keep our logic.
We can tackle this problem by using an Angular controller.
Angular Controller
We haven't talked about controllers in Angular yet, but for now you can think of an Angular controller as your views.py file that handles the HTTP requests from the browser. Of course, with Angular the requests are coming from individual client actions, but the two more or less serve the same purpose.
If we modify the above code to use a controller, it might look like this:
<!doctype html>
<html lang="en" ng-app="mecApp">
<head>
    <meta charset="UTF-8">
    <title>Poll: Who's the most powerful Jedi?</title>
    <!-- styles -->
    <link href="http://netdna.bootstrapcdn.com/bootswatch/3.1.1/yeti/bootstrap.min.css" rel="stylesheet" media="screen">
</head>
<body>
    <div class="container" ng-controller="UserPollCtrl">
        <div class="row">
            <div class="col-md-8">
                <h1>Poll: Who's the most powerful Jedi?</h1>
                <br>
                <!-- ...the poll markup for each Jedi, now calling vote('votes_for_yoda'), etc.... -->
            </div>
        </div>
    </div>
    <!-- scripts: jQuery, Bootstrap, and Angular as before, plus the inline script below -->
    <script type="text/javascript">
        var mecApp = angular.module('mecApp', []);

        mecApp.controller('UserPollCtrl', function($scope) {
            $scope.votes_for_yoda = 80;
            $scope.votes_for_qui = 30;
            $scope.votes_for_obi = 20;
            $scope.votes_for_luke = 10;
            $scope.votes_for_me = 30;

            $scope.vote = function(voteModel) {
                $scope[voteModel] = $scope[voteModel] + 1;
            };
        });
    </script>
</body>
</html>
<script type="text/javascript">
    var mecApp = angular.module('mecApp', []);

    mecApp.controller('UserPollCtrl', function($scope) {
        $scope.votes_for_yoda = 80;
        $scope.votes_for_qui = 30;
        $scope.votes_for_obi = 20;
        $scope.votes_for_luke = 10;
        $scope.votes_for_me = 30;

        $scope.vote = function(voteModel) {
            $scope[voteModel] = $scope[voteModel] + 1;
        };
    });
</script>
First we declare an Angular module called mecApp, which is what we named the module in our ng-app directive in the earlier example. If we don't declare a module with the name used in ng-app then things won't work. We created our module with two arguments - the first is the name of the module and the second is an array of the names of all the modules this module depends on. Since we don't depend on any other modules we just pass an empty array.
Then we create our controller by calling the controller method on our module. We pass in two arguments - the first is the name of the controller and the second is a constructor function which will be called when Angular parses the DOM and finds the ng-controller directive. The constructor function is always passed the $scope variable, which Angular uses as context when evaluating expressions - i.e., {{ votes_for_yoda }} - or for propagating events.
For example, here is a trivial controller and the template that uses it:

<div ng-controller='DaysCtrl'>
    <p>Today is {{ day_of_week }} </p>
</div>

app.controller('DaysCtrl', function($scope) {
    $scope.day_of_week = "Monday";
});
<div ng-controller='DaysCtrl'>
    <input id="day-of-week" ng-model="day_of_week" placeholder="what day is it?">
    <p>Today is {{ day_of_week }} </p>
</div>
When the user types in a value in the day-of-week input then that value is stored in
$scope.day_of_week and can be accessed from the controller as well as in the next line of
the html - <p>Today is {{ day_of_week }} </p>.
For our purposes that is really all you need to know about scope.
would silently fail and give no indication the function wasn't defined.
When voteModel is a string (in our case, something like the string votes_for_yoda), the expression $scope[voteModel] is just looking up the value by name in the $scope data structure. This works much like a dictionary in Python. By using this syntax, we can use the same vote() function for whatever we want to vote for; we just pass in the name of the model we want to increment, and it will be incremented.
That's it. We probably want to set the default values to 0 for everybody to be fair; we set non-zero values in the example above to show that the default values would actually be reflected in the view.
Conclusion
That certainly isn't all there is to Angular. Again, Angular is a massive framework with a lot to offer, but this is enough to get us started building some features in Angular. As we go through the process of developing these user stories, we will most surely stumble upon some more features of Angular, but for now this should get your feet wet. The key here is to understand the similarities between Angular and Django (e.g., a templating language that is very similar to Django's templating language) and their differences (e.g., client-side vs server-side, two-way data binding vs one-way data binding). If you can grasp these core Angular concepts, you will be well on your way to developing some cool features with Angular.
In the next chapter we will look at integrating Angular with Django, talk about how to structure things so the two frameworks play nice together, and also talk about how to use Angular
to update the back-end.
Exercises
1. Our User Poll example uses progress bars, which show a percentage of votes. But our vote function just tracks the raw number of votes, so the two don't actually match up. Can you update the vote() function to return the current percentage of votes? (HINT: You will also need to keep track of the total number of votes.)
2. Spend a bit more time familiarizing yourself with Angular. Here are a few excellent
resources to go through:
The Angular PhoneCat Tutorial - https://docs.angularjs.org/tutorial.
The Angular team is very good at releasing educational talks and videos about
Angular; check those out on the Angular JS YouTube Channel.
There is a really good beginner to expert series from the ng-newsletter, which is
highly recommended.
Chapter 14
Djangular: Integrating Django and
Angular
In the last chapter we went through a brief introduction to Angular and how to use it to deliver
the User Polls functionality from our User Stories. However, there are a few issues with that
implementation:
1. We implemented it stand-alone in a separate HTML page, so we still need to integrate
it into our Django Project.
2. We hard coded a number of values, so it will only work for a single type of poll; we need
a more general solution.
3. As implemented, the user poll will not share poll data between users (because the
model only exists on the client side); we need to send the votes back to the server so all
user votes can be counted and shared.
This chapter will focus on how to accomplish those three things. We will look at the Django backend as well as the Django and Angular templates. Let's start first with the Django backend.
With the new app created, we can update the djangular_polls/models.py file and add
our new model:
class Poll(models.Model):
    title = models.CharField(max_length=255)
    publish_date = models.DateTimeField(auto_now=True)

    def poll_items(self):
        return self.items.all()


class PollItem(models.Model):
    poll = models.ForeignKey(Poll, related_name='items')
    name = models.CharField(max_length=30)
    text = models.CharField(max_length=300)
    votes = models.IntegerField(default=0)
    percentage = models.DecimalField(
        max_digits=5, decimal_places=2, default=0.0)

    class Meta:
        ordering = ['-text']
1. Poll(): represents the overall poll - i.e., who is the best Jedi.
2. PollItem(): the individual items to vote on in the poll.
Notice within PollItem() we also added a default ordering so that the returned JSON will always be in the same order. (This will become important later.) Don't forget to add the app to INSTALLED_APPS in the settings.py file and migrate the changes to your database:

$ ./manage.py makemigrations djangular_polls
$ ./manage.py migrate
Now you have a nicely updated database and are ready to go.
Serializer
First create the serializers:
class PollItemSerializer(serializers.ModelSerializer):

    class Meta:
        model = PollItem
        fields = ('id', 'name', 'text', 'votes', 'percentage')


class PollSerializer(serializers.ModelSerializer):
    items = PollItemSerializer(many=True)

    class Meta:
        model = Poll
        fields = ('title', 'publish_date', 'items')
With the nested serializer in place, serializing a poll produces data like this:

{
    'title': 'Who is the best jedi',
    'publish_date': datetime.datetime(2014, 5, 20, 5, 47, 56, 630638),
    'items': [
        {'id': 4, 'name': 'Yoda', 'text': 'Yoda', 'percentage': Decimal('0.00'), 'votes': 0},
        {'id': 5, 'name': 'Vader', 'text': 'Vader', 'percentage': Decimal('0.00'), 'votes': 0},
        {'id': 6, 'name': 'Luke', 'text': 'Luke', 'percentage': Decimal('0.00'), 'votes': 0}
    ]
}
This makes it easy to get a Poll() and its associated PollItem() objects in one JSON call. To do this, we gave the relationship between a poll and its items a name in the PollItem() model:

poll = models.ForeignKey(Poll, related_name='items')

The related_name provides a name to use for the reverse relationship from Poll.
Next, in the PollSerializer() class we included the items property:

items = PollItemSerializer(many=True)

Why? Well, without it the serializer will just return the primary keys.
Then we also have to include the reverse relationship in the fields list:

fields = ('title', 'publish_date', 'items')

Those three things will give us the JSON that we are looking for. This would be a good thing to write some unit tests for, to make sure it's working appropriately. Yes, try it on your own.
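If you want a starting point, a test along these lines (the assertions and values are just illustrative) could look like:

from django.test import TestCase

from djangular_polls.models import Poll, PollItem
from djangular_polls.serializers import PollSerializer


class PollSerializerTests(TestCase):

    def test_poll_serializer_nests_its_items(self):
        poll = Poll.objects.create(title='Who is the best jedi')
        PollItem.objects.create(poll=poll, name='Yoda', text='Yoda')

        data = PollSerializer(poll).data

        self.assertEqual(data['title'], 'Who is the best jedi')
        self.assertEqual(len(data['items']), 1)
        self.assertEqual(data['items'][0]['name'], 'Yoda')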
Endpoints
Next, we need to create the REST endpoints in djangular_polls/json_views.py.
class PollCollection(mixins.ListModelMixin,
                     mixins.CreateModelMixin,
                     generics.GenericAPIView):

    queryset = Poll.objects.all()
    serializer_class = PollSerializer

    # ... get/post methods as in the DRF chapter ...


class PollMember(mixins.RetrieveModelMixin,
                 mixins.UpdateModelMixin,
                 mixins.DestroyModelMixin,
                 generics.GenericAPIView):

    queryset = Poll.objects.all()
    serializer_class = PollSerializer

    # ... get/put/delete methods as in the DRF chapter ...


class PollItemCollection(mixins.ListModelMixin,
                         mixins.CreateModelMixin,
                         generics.GenericAPIView):

    queryset = PollItem.objects.all()
    serializer_class = PollItemSerializer

    # ... get/post methods as in the DRF chapter ...


class PollItemMember(mixins.RetrieveModelMixin,
                     mixins.UpdateModelMixin,
                     mixins.DestroyModelMixin,
                     generics.GenericAPIView):

    queryset = PollItem.objects.all()
    serializer_class = PollItemSerializer

    # ... get/put/delete methods as in the DRF chapter ...
Again, this should all be review from the Django REST Framework chapter. Basically we are just creating the endpoints for Poll() and the associated PollItem() with basic CRUD functionality.
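If the elided method bodies above are fuzzy, they follow the same mixin-delegation pattern from the DRF chapter; a sketch for PollCollection, for example:

class PollCollection(mixins.ListModelMixin,
                     mixins.CreateModelMixin,
                     generics.GenericAPIView):

    queryset = Poll.objects.all()
    serializer_class = PollSerializer

    def get(self, request, *args, **kwargs):
        # delegate to ListModelMixin
        return self.list(request, *args, **kwargs)

    def post(self, request, *args, **kwargs):
        # delegate to CreateModelMixin
        return self.create(request, *args, **kwargs)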
Routing
Don't forget to update our Django routing in djangular_polls/urls.py:

urlpatterns = patterns(
    'djangular_polls.main_views',
    url(r'^polls/$', json_views.PollCollection.as_view(),
        name='polls_collection'),
    url(r'^polls/(?P<pk>[0-9]+)$', json_views.PollMember.as_view()),
    url(r'^poll_items/$', json_views.PollItemCollection.as_view(),
        name="poll_items_collection"),
    url(r'^poll_items/(?P<pk>[0-9]+)$',
        json_views.PollItemMember.as_view()),
)
main_json_urls.extend(djangular_polls_json_urls)
Above, we are combining the json_urls from our main application with the json_urls
from our djangular_polls application. Then they are added to the overall django_ecommerce.urls
with the same line we had before:
url(r'^api/v1/', include(main_json_urls)),
This way our entire REST API stays in one place, even if it spans multiple Django apps. Now
we should have two new URLs in our REST API:
/api/v1/polls/
/api/v1/poll_items/
Structural Concerns
Now that we have the backend all sorted, we have some choices to make about how to best
integrate our frontend framework, Angular, with our backend framework, Django. There are
many ways you can do this, and it really depends on what your design goals are and, as always,
personal preference. Let's discuss a couple of integration options.
or even an inclusion tag, that uses Angular to accomplish that task and use standard Django
views for the remainder of the application.
Those three choices should basically cover every possibility, and the choice really just comes down to how much Angular you want to include. Thus, there is no wrong or right choice; just do what best suits your needs.
user.html template
First, update our main/user.html template to include our djangular_polls/_polls.html
template in a new info box:
{% extends "__base.html" %}
{% load staticfiles %}
{% block content %}
<div id="achievements" class="row member-page hide">
    {% include "main/_badges.html" %}
</div>
<div class="row member-page">
    <div class="col-sm-8">
        <div class="row">
            {% include "main/_announcements.html" %}
            {% include "main/_lateststatus.html" %}
        </div>
    </div>
    <div class="col-sm-4">
        <div class="row">
            {% include "main/_jedibadge.html" %}
            {% include "main/_statusupdate.html" %}
            {% include "djangular_polls/_polls.html" %}
        </div>
    </div>
</div>
{% endblock %}
{% block extrajs %}
<script src="{% static "js/angular.min.js" %}" type="text/javascript"></script>
<script src="{% static "js/userPollCtrl.js" %}" type="text/javascript"></script>
{% endblock %}
Then add a new file to the same directory (static/js) called userPollCtrl.js and add the following code:
pollsApp.controller('UserPollCtrl', function($scope) {
    $scope.total_votes = 0;
    $scope.vote_data = {};

    $scope.vote = function(voteModel) {
        if (!$scope.vote_data.hasOwnProperty(voteModel)) {
            $scope.vote_data[voteModel] = {"votes": 0, "percent": 0};
            $scope[voteModel] = $scope.vote_data[voteModel];
        }
        $scope.vote_data[voteModel]["votes"] =
            $scope.vote_data[voteModel]["votes"] + 1;
        $scope.total_votes = $scope.total_votes + 1;
        for (var key in $scope.vote_data) {
            var item = $scope.vote_data[key];
            item["percent"] = item["votes"] / $scope.total_votes * 100;
        }
    };
});
_polls.html template
Now let's add the _polls.html template within templates/djangular_polls:
<!-- the opening <section>/<div> and the Yoda and Qui-Gon Jinn entries follow the same pattern as the entries below -->
        <strong>Obi-Wan Kenobi</strong>
        <span class="pull-right">[[ votes_for_obi.percent | number:0 ]]%</span>
        <div class="progress">
            <div class="progress-bar progress-bar-warning" role="progressbar" aria-value="[[ votes_for_obi.percent ]]" aria-valuemin="0" aria-valuemax="100" style="width: [[ votes_for_obi.percent ]]%;">
            </div>
        </div>
        <span class="glyphicon glyphicon-plus" ng-click="vote('votes_for_luke')"></span>
        <strong>Luke Sykwalker</strong>
        <span class="pull-right">[[ votes_for_luke.percent | number:0 ]]%</span>
        <div class="progress">
            <div class="progress-bar progress-bar-success" role="progressbar" aria-value="[[ votes_for_luke.percent ]]" aria-valuemin="0" aria-valuemax="100" style="width: [[ votes_for_luke.percent ]]%;">
            </div>
        </div>
        <span class="glyphicon glyphicon-plus" ng-click="vote('votes_for_me')"></span>
        <strong>Me... of course</strong>
        <span class="pull-right">[[ votes_for_me.percent | number:0 ]]%</span>
        <div class="progress">
            <div class="progress-bar" role="progressbar" aria-value="[[ votes_for_me.percent ]]" aria-valuemin="0" aria-valuemax="100" style="width: [[ votes_for_me.percent ]]%;">
            </div>
        </div>
    </div>
</section>
This should look familiar, as it's pretty much the same code we had in the last chapter, except now we are shoving it all into one of our info-boxes, which we introduced way back in the Bootstrap chapter.
There is one important difference here, though; can you spot it?
Notice that we are using different template tags - e.g., [[ ]] instead of {{ }}. Why? Because Django and Angular both use {{ }} by default. Therefore we will instruct Angular to use the delimiters [[ ]] and let Django continue to use the delimiters {{ }} - so we can be clear about who is doing the interpolation. Luckily, in Angular this is very easy to do. Just add the following lines to the top of the userPollCtrl.js file:
pollsApp.config(function($interpolateProvider){
    $interpolateProvider.startSymbol('[[')
        .endSymbol(']]');
});
pollsApp.config(function($interpolateProvider){
    $interpolateProvider.startSymbol('[[')
        .endSymbol(']]');
});

pollsApp.controller('UserPollCtrl', function($scope) {
    $scope.total_votes = 0;
    $scope.vote_data = {};

    $scope.vote = function(voteModel) {
        if (!$scope.vote_data.hasOwnProperty(voteModel)) {
            $scope.vote_data[voteModel] = {"votes": 0, "percent": 0};
            $scope[voteModel] = $scope.vote_data[voteModel];
        }
        $scope.vote_data[voteModel]["votes"] =
            $scope.vote_data[voteModel]["votes"] + 1;
        $scope.total_votes = $scope.total_votes + 1;
        for (var key in $scope.vote_data) {
            var item = $scope.vote_data[key];
            item["percent"] = item["votes"] / $scope.total_votes * 100;
        }
    };
});
So, the config function of an Angular module allows you to take control of how Angular will behave. In our case, we are using the config function to tell Angular to use [[ ]] for demarcation, which Django ignores.
With that, you should have a functioning User Poll info-box on the main user's page. Make sure to manually test this out. Once it's working, then add unit tests.
This app still isn't talking to the backend. Let's rectify that.
$scope.poll = "";

$http.get('/api/v1/polls/1').
    success(function(data){
        $scope.poll = data;
    }).
    error(function(data, status) {
        console.log("calling /api/v1/polls/1 returned status " + status);
    });
$scope.poll = "" initializes the poll scope variable as an empty string to avoid undefined errors later.
$http.get makes a GET request to the specified URL.
The return value from $http.get is a promise. Promises allow us to register callback methods that receive the response object. Here we are registering a method, success, to be called on a success. It receives the data object - e.g., the data returned in the response - and we set our $scope.poll variable to this, which should be the JSON representation of our poll, where id is 1.
Property      Data Type          Explanation
data          String or Object   Response body
status        Number             HTTP status code - 200, 404, 500, etc.
headers       Function           Header getter function to be called to get a header item
config        Object             Config object used to generate the request
statusText    String             HTTP status text of the response
NOTE: In JavaScript you can always just ignore extra parameters that you do not need, like we have done in our success function above.
So that little bit of code allows us to retrieve the information about a Poll to be displayed from our Django backend. Make sure to inject the $http service into the Angular controller:

pollsApp.controller('UserPollCtrl', function($scope, $http) {
ng-repeat functions much like {% for item in poll.items %} would in a Django template. The main difference is that ng-repeat is generally used as an attribute of an HTML tag, and it repeats that tag.
In our case, the <div> is repeated, which includes all of the HTML inside the <div>. We only have to create the HTML for one of the voting items and use ng-repeat to generate a new item for every poll item returned from our $http.get('/api/v1/polls/1') call.
With ng-repeat we specify item in poll.items; this exposes item as a variable that we can use in Angular expressions:
$scope.total_votes = 0;

$scope.vote = function(item) {
    item.votes = item.votes + 1;
    $scope.total_votes = $scope.total_votes + 1;

    for (var i in $scope.poll.items) {
        var temp_item = $scope.poll.items[i];
        temp_item.percentage = temp_item.votes / $scope.total_votes * 100;
    }
};
For reference, the JSON returned from /api/v1/polls/1 looks like this:

{
    "title": "Who is the best Jedi?",
    "publish_date": "2014-10-21T04:05:24.107Z",
    "items": [
        {
            "id": 1,
            "name": "Yoda",
            "text": "Yoda",
            "votes": 0,
            "percentage": "0.00"
        },
        {
            "id": 2,
            "name": "Vader",
            "text": "Vader",
            "votes": 0,
            "percentage": "0.00"
        },
        {
            "id": 3,
            "name": "Luke",
            "text": "Luke",
            "votes": 0,
            "percentage": "0.00"
        }
    ]
}
Be sure to test it out! Navigate to http://localhost:8000/. After you log in, make sure the Poll is at the bottom right of the page. Test the functionality.
So, we added in [[ barcolor($index) ]] as one of the classes. Now when Angular comes across this, it calls $scope.barcolor() from the controller and passes in the current iteration number of ng-repeat.
Let's update the controller as well in static/js/userPollCtrl.js:
$scope.barcolor = function(i) {
    var colors = ['progress-bar-success', 'progress-bar-info',
                  'progress-bar-warning', 'progress-bar-danger', ''];
    var idx = i % colors.length;
    return colors[idx];
};
This function uses a list of various Bootstrap classes to control the color of the progress bar.
Using % (modulus) ensures that no matter how many vote items we get, we will always get an
index that is in the bounds of our colors list.
Now all we need to do is make this thing work for multiple users.
@property
def total_votes(self):
    return self.poll_items().aggregate(
        models.Sum('votes')).get('votes__sum', 0)
Before we jump into the explanation of what this code does, let's talk about using a @property for just a second.
1. What is a @property? It's a function that you can call without (). So in our case, normally we would call model.total_votes(), but by making it a property with @property we can just call it like this: model.total_votes (see the short sketch below).
2. What is it good for? Other than saving a couple of keystrokes, properties are best used for computed model attributes. You can think of it as a field within the model (sort of), except that it needs some logic to run on the get, set, or delete.
3. Is it the same thing as a model field? Absolutely not. Django, for the most part, just ignores a property - i.e., you can't use it in a queryset, it won't trigger a migration, etc. You can think of a property like a field that you don't want Django to know about.
4. Why not just use a function? Really, @property is just syntactic sugar, so if you prefer you could use a function all you want. It's a design choice more than anything else.
Take a look at the code again. This uses Djangos aggregate() function to compute the sum
of all the related PollItem() for the poll. aggregate returns a dictionary that in this case
looks like {'votes__sum': <sum_of_votes> }. But we only want the actual number, so
we just call a get on the returned dictionary and default the value to 0 in case its empty.
With this method, we calculate total_votes every time the property is called; that way we
dont have to keep the various database tables in sync.
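For instance, in the Django shell the intermediate result looks something like this (the vote counts are made up):

>>> poll.poll_items().aggregate(models.Sum('votes'))
{'votes__sum': 7}
>>> poll.poll_items().aggregate(models.Sum('votes')).get('votes__sum', 0)
7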
Because this is a property and not a true field, Django REST Framework won't pick it up by default, so we need to modify our serializer as well:
class PollSerializer(serializers.ModelSerializer):
    items = PollItemSerializer(many=True, required=False)
    total_votes = serializers.Field(source='total_votes')

    class Meta:
        model = Poll
        fields = ('title', 'publish_date', 'items', 'total_votes')
We instruct the serializer to include a field called total_votes which points to our
total_votes property. With that, our /api/v1/polls will return an additional property total_votes.
Finally, we need to calculate the percentage as well within djangular_polls/models.py.
Let's use a similar technique to calculate the percentage on the fly:
@property
def percentage(self):
    total = self.poll.total_votes
    if total:
        return self.votes / total * 100
    return 0
We use the newly created total_votes property of our parent poll to calculate the percentage. We do need to be careful to avoid a divide-by-zero error. Then we make a similar change to the PollItemSerializer as we did to the PollSerializer. It now looks like this:
class PollItemSerializer(serializers.ModelSerializer):
    percentage = serializers.Field(source='percentage')

    class Meta:
        model = PollItem
        fields = ('id', 'poll', 'name', 'text', 'votes',
                  'percentage')
Your models.py and the serializers.py should now look like this:
djangular_polls/models.py
class Poll(models.Model):
    title = models.CharField(max_length=255)
    publish_date = models.DateTimeField(auto_now=True)

    @property
    def total_votes(self):
        return self.poll_items().aggregate(
            Sum('votes')).get('votes__sum', 0)

    def poll_items(self):
        return self.items.all()


class PollItem(models.Model):

    # ... the existing PollItem fields (poll, name, text, votes) stay as they were ...

    @property
    def percentage(self):
        total = self.poll.total_votes
        if total:
            return self.votes / total * 100
        return 0

    class Meta:
        ordering = ['-text']
djangular_polls/serializers.py
class PollItemSerializer(serializers.ModelSerializer):
    percentage = serializers.Field(source='percentage')

    class Meta:
        model = PollItem
        fields = ('id', 'poll', 'name', 'text', 'votes',
                  'percentage')


class PollSerializer(serializers.ModelSerializer):
    items = PollItemSerializer(many=True, required=False)
    total_votes = serializers.Field(source='total_votes')

    class Meta:
        model = Poll
        fields = ('id', 'title', 'publish_date', 'items',
                  'total_votes')
With these changes we can now update our Angular controller to push the calculations to the Django backend so we can share the results amongst multiple users. But first we should update our database to reflect the changed table structure for the Poll / PollItem models, so don't forget to run makemigrations and migrate.
Once that is done we can switch back to our Angular controller. Update the vote function within static/js/userPollCtrl.js like so:
 1  $scope.vote = function(item) {
 2      item.votes += 1;
 3      $http.put('/api/v1/poll_items/' + item.id, item).
 4          success(function(data) {
 5              $http.get('/api/v1/polls/1').success(function(data) {
 6                  $scope.poll = data;
 7              }).
 8              error(function(data, status) {
 9                  console.log("calling /api/v1/polls/1 returned status " + status);
10              });
11          }).
12          error(function(data, status) {
13              console.log("calling PUT /api/v1/poll_items returned status " + status);
14          });
15  };
Line 3 - Using a PUT request, send the item back to the backend (which will update the vote count). Note the second parameter of the PUT call is the item itself, which will be converted to JSON and sent back to the server.
Line 5 - If the PUT request is successful, then re-fetch all the Poll data, which will also give us updated total_votes and percentages for each item. It's important to only call this in the success handler so as to ensure the previous PUT request has completed; otherwise we might not get the most up-to-date data.
This works, but to keep the controller tidy let's pull these REST calls out into an Angular factory:
 1  pollsApp.factory('pollFactory', function($http) {
 2
 3      var pollUrl = '/api/v1/polls/';
 4      var pollItemsUrl =
 5              '/api/v1/poll_items/';
 6
 7      var pollFactory = {};
 8
 9      pollFactory.getPoll = function(id) {
10          return $http.get(pollUrl + id);
11      };
12
13      pollFactory.vote_for_item = function(poll_item) {
14          poll_item.votes += 1;
15          return $http.put(pollItemsUrl + poll_item.id, poll_item);
16      }
17
18      return pollFactory;
19  });
Line 1 - Here we just declare the factory. Note that we inject the $http service, since we will need it to make our REST calls.
Lines 3-5 - Define the URLs needed for the REST calls.
Line 7 - This is the factory object that we will eventually return, but first we need to add some functions to it.
Lines 9-11 - The first function we add is the getPoll() function, which will grab the poll from our Django REST API. Note that we are returning a promise (we talked about them previously) and not the actual data. The advantage of returning the promise is that it allows us to do promise chaining (one call after another), which we will make use of later in the controller.
Lines 13-16 - Our second function, vote_for_item, takes the poll item as an input, increments the vote counter, then updates the poll item through the REST API. Again we are returning a promise which wraps the actual result of the call.
Line 18 - Now that we have finished creating our pollFactory object and given it all the functionality we need, let's return it, so we can use it in our controller.
Next, let's look at how we use our newly created factory in our controller.
All we have to do is ask Angular to inject the factory into our controller, which we can do with this initial line:

pollsApp.controller('UserPollCtrl',
    function($scope, pollFactory) {
This is the line used to create our controller. Notice that we added pollFactory to the list
of dependencies, so Angular will inject the factory for us. Also note that we no longer require
$http, as that is all handled by the factory now. With the declaration above, we can use the
factory anywhere in our controller (or application, for that matter) by simply calling it like
this:
pollFactory.getPoll(1);

Let's see how the complete controller looks after we add in the factory:
pollsApp.controller('UserPollCtrl', function($scope, pollFactory) {

    function setPoll(poll) {
        $scope.poll = poll.data;
    }

    function getPoll() {
        return pollFactory.getPoll(1);
    }

    $scope.barcolor = function(i) {
        var colors = ['progress-bar-success', 'progress-bar-info',
                      'progress-bar-warning', 'progress-bar-danger', ''];
        var idx = i % colors.length;
        return colors[idx];
    }

    getPoll().then(setPoll);

    $scope.vote = function(item) {
        pollFactory.vote_for_item(item)
            .then(getPoll)
            .then(setPoll);
    }

});
You'll notice near the top of the controller we created two functions: getPoll, which asks the pollFactory to get the poll, and setPoll, which takes the result of a promise and uses it to set the value of $scope.poll. These functions will come in handy later on.
The next function, $scope.barcolor, remains unchanged.
getPoll().then(setPoll) - This is a simple example of promise chaining. First we call getPoll, which in turn calls pollFactory.getPoll and returns a promise. Next we use a function of the promise called then, which will be triggered once the promise has completed. We pass in our setPoll() function to then, which will receive the result of the promise returned by getPoll and use the data on it to set our $scope.poll variable.
$scope.vote - In our $scope.vote function we have another example of promise chaining. Here we first call pollFactory.vote_for_item, which issues a PUT request to our poll_items API and returns a promise containing the result of that PUT request. When the promise resolves, we call our getPoll() function with .then(getPoll). This function then gets the newer version of our Poll object, with total_votes and percentages recalculated, and returns a promise. Finally we call .then(setPoll), which uses that returned promise's data to update our $scope.poll variable.
As you can see, by structuring our factory methods to return promises, we can chain together several function calls and have them execute one after another, in sequence. As an added advantage, if any of the calls fail, the subsequent calls in the chain will receive the failed result and can react accordingly.
Conclusion
With that, we have some pretty well-factored Angular code that interacts with our RESTful API to provide a multi-user poll feature. While the Django Rest Framework part of this chapter should have been mostly a review, we did learn how to ensure that our URL structure plays nicely with Angular and how to serialize model properties that aren't stored in the database.
On the Angular side of things, you should now have a better understanding of controllers and how they are used to manage scope, factories and how they are used to provide access to data, the $http service and how it is used to make AJAX calls, and of course promises and how they can be chained together to achieve some pretty cool functionality.
Also, let's not forget about the ng-repeat directive and how it can be used to achieve the same functionality as {% for %} in Django templates.
All in all, while this is only a small portion of what Angular offers, it should give you some good ideas about how it can be used to build well-factored, solid client-side functionality.
Exercises
1. For the first exercise let's explore storing state in our factory. Change the pollFactory.getPoll function to take no parameters and have it return the single most recent poll. Then cache that poll's id, so next time getPoll() is called you have the id of the poll to retrieve.
2. In this chapter we talk about factories, but Angular also has a very similar concept called services. Have a read through Michael Herman's excellent blog post on services so you know when to use each.
3. Currently our application is a bit of a mishmash: we are using jQuery in some parts - i.e., showing achievements - and now Angular on the user polls. For practice, convert the achievement-showing functionality, from application.js, into Angular code.
Chapter 15
Angular Forms
The final piece of our application that needs to be converted over to Angular is the registration form. This chapter will discuss how to handle forms in Angular when dealing with a Django back-end. We will address submitting and validating forms (client- vs. server-side), dealing with CSRF, and basically making Django and Angular forms work well together.
When creating a form with Django and Angular, you basically have two choices:
1. You can create the form with Angular, along with a REST API for it to call on the Django backend.
2. You can also create the form from the Django side, using Django Forms, much like we have already done for the registration form. This allows you to build the form in the back-end and follow a more traditional Django development style.
We could argue at length as to which method is better. However, since we have talked about REST APIs and integrating those with Angular before, in this chapter let's cover using Django Forms (the latter method). If nothing else, at least you'll have an idea of how to implement both approaches and you can see which one works best for you.
Field Validation
Regardless of which choice we make for creating the form, we need some sort of form validation. Django Forms offer server-side validation out of the box.
NOTE: Server-side validation refers to validating the user input of a form on the server - e.g., via the Python code, in our case. Form submission and validation usually happen at the same time. If server-side validation fails, a list of errors is generally sent back to the client. Meanwhile, client-side validation refers to validating user inputs directly in the web browser - e.g., via Angular, in our case. Validation is often in response to a key press or an input field losing focus. This allows for fields to be validated one at a time, without a trip to the server, and generally provides a more real-time feel for the user, meaning that they are notified as soon as they get an error, instead of after the entire form is submitted.
Since we already know how to do server-side validation with Django Forms, via form.is_valid(), let's look at client-side validation with Angular. Before adding any validations, our registration form in templates/payments/register.html looks like this:
{# ... form tag and the {% for field in form.visible_fields %} loop ... #}
        {% endfor %}
        <div id="change-card" class="clearfix"{% if not form.last_4_digits.value %} style="display: none"{% endif %}>
            Card
            <div class="input">
                Using card ending with {{ form.last_4_digits.value }}
                (<a href="#">change</a>)
            </div>
        </div>
        {% include "payments/_cardform.html" %}
</form>
Each field in the loop is rendered through the templates/payments/_field.html include, which looks like this:

<div class="clearfix">
    {{ field.label_tag }}
    <div class="input">
        {{ field }}
    </div>
</div>
For the name field, for example, that renders out as:

<div class="clearfix">
    <label for="id_name">Name:</label>
    <div class="input">
        <input id="id_name" name="name" type="text">
    </div>
</div>
In order to do Angular validation, we need to first add an ng-model to each field in our form. In our template we are using {{ field }} to render the input fields. Let's think about what is happening with that.
field is a property of the form. If you recall, we called {% for field in form.visible_fields %}. This will loop through all the fields in our UserForm and display them. But how exactly does it know what to display? Looking at our payments.forms.UserForm class, we see:
class UserForm(CardForm):
    name = forms.CharField(required=True)
    email = forms.EmailField(required=True)
    password = forms.CharField(
        required=True,
        label=('Password'),
        widget=forms.PasswordInput(render_value=False)
    )
    ver_password = forms.CharField(
        required=True,
        label=('Verify Password'),
        widget=forms.PasswordInput(render_value=False)
    )
This is a list of all the fields. Pay attention to the password and ver_password fields. They both have an optional argument, widget. In Django, a widget controls how a field is rendered to HTML. Every field gets a default widget based upon the field type. So CharField will have a default TextInput widget, EmailField will have a default EmailInput widget, and so on. For our password fields, we have overridden the default widget to be a PasswordInput widget so that the password isn't displayed as the user types. This is handled by simply adding the HTML attribute type='password' to the input when it is rendered, and the widget does that.
Since the widget ultimately controls how the field is rendered, and we want all of our fields to be rendered with an ng-model attribute, we can change the widget and tell it to include the ng-model attribute for us.
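For a single field you could do this by hand through the widget's attrs argument - a quick, purely illustrative sketch (the __init__ approach below does the same thing for every field automatically):

name = forms.CharField(
    required=True,
    widget=forms.TextInput(attrs={'ng-model': 'userform.name'})
)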
The easiest way to do that is to modify the __init__ function of our UserForm. Behold:
 1  class UserForm(CardForm):
 2
 3      ng_scope_prefix = "userform"
 4
 5      def __init__(self, *args, **kwargs):
 6          super(UserForm, self).__init__(*args, **kwargs)
 7          for field_name, field in self.fields.items():
 8              attrs = {'ng-model':
 9                       self.ng_scope_prefix + "." + field_name}
10              field.widget.attrs.update(attrs)
Line 3 - In Angular, it's simpler to nest all of our models in one object. In other words, rather than having models named email, name, etc., it's easier if we have userform.email and userform.name. This way, when it comes time to pass that information around in Angular (i.e., when we are POSTing our form), we can just pass userform and not have to reference each individual field. This also has the added advantage of allowing us to change the names and even the number of fields in the form without having to make corresponding changes in Angular. So grouping together all of the $scope for a form is a good thing in Angular. And that is what the ng_scope_prefix field is for.
Line 5 - Our __init__ function.
Line 6 - Always a good idea to call the __init__ of our parent class in case something is happening there.
Line 7 - Loop through each field in UserForm.
Line 8 - Each widget uses a dictionary called attrs that stores the HTML attributes that will be rendered when the widget is rendered. Here we are creating a dictionary with one entry that has the key of ng-model and the value of userform.<fieldName>.
Line 10 - Add our dictionary to the attrs the widget will render.
Putting it all together, UserForm now looks like:
class UserForm(CardForm):

    name = forms.CharField(required=True)
    email = forms.EmailField(required=True)
    password = forms.CharField(
        required=True,
        label=('Password'),
        widget=forms.PasswordInput(render_value=False)
    )
    ver_password = forms.CharField(
        required=True,
        label=('Verify Password'),
        widget=forms.PasswordInput(render_value=False)
    )

    ng_scope_prefix = "userform"

    def __init__(self, *args, **kwargs):
        super(UserForm, self).__init__(*args, **kwargs)
        for field_name, field in self.fields.items():
            attrs = {'ng-model':
                     self.ng_scope_prefix + "." + field_name}
            field.widget.attrs.update(attrs)

    def clean(self):
        cleaned_data = self.cleaned_data
        password = cleaned_data.get('password')
        ver_password = cleaned_data.get('ver_password')
        if password != ver_password:
            raise forms.ValidationError('Passwords do not match')
        return cleaned_data
With this change, if we refresh the browser and look at the fields rendered on the registration page (http://localhost:8000/register), they should look like:
<div class="clearfix">
    <label for="id_name">Name:</label>
    <div class="input">
        <input id="id_name" name="name" ng-model="userform.name"
               type="text" class="ng-pristine ng-valid">
    </div>
</div>
There you go. We have our nice ng-model="userform.name" added to our input. Pretty cool, huh? But wait, there is also something else added: class="ng-pristine ng-valid". We didn't tell it to do that, did we?
Well no, not directly. Angular does that for us: any time you add an ng-model to an input element, Angular will add those classes, and that's a good thing because we need those classes for data/field validation. Let's look at the classes Angular sets for us to help with the validation. The four we care about are:
Class          Activated when
ng-valid       the field's contents pass validation
ng-invalid     the field's contents fail validation
ng-pristine    the user has not yet interacted with the field
ng-dirty       the user has changed the field's value
Notice that by default the field is assigned ng-pristine, meaning it hasn't been touched yet, and ng-valid, meaning it passes validation.
Try this: Open your JavaScript console and highlight the name input field. Then click on the name field and type something. Watch the classes change to:
ng-valid ng-dirty
These signify that text has been entered into the field and it passes validation. Let's look at the email field as an example. Type a valid email, then check out the JavaScript console: you'll notice it has an extra class, ng-valid-email, because a field can have multiple validators. This class tells us that the email validator is currently passing. But how does Angular know this is an email field? Angular uses the HTML5 attributes to determine which validators a field needs and applies them automatically to the field (as long as it has the ng-model directive attached). In this case, because the field has the attribute type="email", Angular applies the email validator. Angular respects the HTML5 type values as well as the required attribute for form validation. You can also use Angular's own validation directives, such as ng-required, ng-minlength, ng-maxlength, and ng-pattern.
With these you have quite a bit of flexibility and can validate almost anything you want. (It is also possible to use custom directives to do custom validation.)
The only issue we have now is that in our UserForm we have also specified some additional validators, like:

name = forms.CharField(required=True)

And we would like those validations to also be on the front-end. So we can go back to our payments.forms.UserForm.__init__ function and add the appropriate attributes. As an example, if we wanted to add the required HTML attribute for any field that had required=True, we could add the following code:
if field.required:
    attrs.update({"required": True})
if field.min_length:
    attrs.update({"ng-minlength": field.min_length})
Since we don't have any min_length validations, let's go ahead and change our name field to require a minimum of three letters:

name = forms.CharField(required=True, min_length=3)
You could of course add as many parameters as you wanted. You have total control over what
is rendered.
The __init__ method should now look like:
def __init__(self, *args, **kwargs):
    super(UserForm, self).__init__(*args, **kwargs)
    for field_name, field in self.fields.items():
        attrs = {'ng-model':
                 self.ng_scope_prefix + "." + field_name}
        if field.required:
            attrs.update({"required": True})
        if field.min_length:
            attrs.update({"ng-minlength": field.min_length})
        field.widget.attrs.update(attrs)
To drive the visual feedback, we also need a bit of CSS keyed off those classes - for example, a green border for inputs that are .ng-dirty.ng-valid and a red border for inputs that are .ng-dirty.ng-invalid. This will color the input green if it is valid and it's been changed, and likewise will color the input red if it's invalid and has been changed. Go ahead and try it out. Navigate to http://localhost:8000/register. If you type "py" in the email field and then move to a new field, it should become outlined in red, because that's not a valid email. Try changing it to [email protected]; it should now be green.
This isn't super helpful to the user though, because they don't know why the field is invalid. So let's tell them. Angular to the rescue again!
Angular keeps track of validation errors in an $error object. You can access it through the form, with the only requirement being that the form has a name attribute. Currently our registration form has no name, so assuming we add the name user_form, you could check for errors with this code:

user_form.name.$error.required

In general the pattern is:

<<form_name>>.<<field_name>>.$error.<<validator-name>>

This value will be set to true if the field is failing the particular validator. Now you could create your own custom error and show it only when the validator is true. For example:

<span ng-show="user_form.name.$error.required">value is required.</span>

Here ng-show only displays the span if the name field is failing the required validation. Of course we don't want to have to write this kind of stuff over and over for each field, so let's revisit our templates/payments/_field.html template and modify it to include some custom errors for each field:
<div class="clearfix">
    {{ field.label_tag }}
    <div class="input">
        {{ field }}
    </div>
    <div class="custom-error"
         ng-show="{{form.form_name}}.{{field.name}}.$dirty &&
                  {{form.form_name}}.{{field.name}}.$invalid">
        {{field.label}} is invalid:
        <span ng-show="{{form.form_name}}.{{field.name}}.$error.required">value is required.</span>
        <span ng-show="{{form.form_name}}.{{field.name}}.$error.email">Input a valid email address.</span>
    </div>
</div>
You can see that below each field we are adding a div with the custom-error class. That div will show only when the field is in error - e.g., when the field is $dirty (which corresponds to the ng-dirty class) and $invalid (which corresponds to the ng-invalid class). Then inside of the div we check each of the error types - required and email - and show an error message for each of those errors.
You'll notice that we are using the template variable {{ form.form_name }}; we need to add this to our UserForm in payments.forms:
class UserForm(CardForm):

    name = forms.CharField(required=True, min_length=3)
    email = forms.EmailField(required=True)
    password = forms.CharField(
        required=True,
        label=('Password'),
        widget=forms.PasswordInput(render_value=False)
    )
    ver_password = forms.CharField(
        required=True,
        label=('Verify Password'),
        widget=forms.PasswordInput(render_value=False)
    )

    form_name = 'user_form'
    ng_scope_prefix = "userform"

    def __init__(self, *args, **kwargs):
        super(UserForm, self).__init__(*args, **kwargs)
        for field_name, field in self.fields.items():
            attrs = {'ng-model':
                     self.ng_scope_prefix + "." + field_name}
            if field.required:
                attrs.update({"required": True})
            if field.min_length:
                attrs.update({"ng-minlength": field.min_length})
            field.widget.attrs.update(attrs)

    def clean(self):
        cleaned_data = self.cleaned_data
        password = cleaned_data.get('password')
        ver_password = cleaned_data.get('ver_password')
        if password != ver_password:
            raise forms.ValidationError('Passwords do not match')
        return cleaned_data
And also in templates/payments/register.html, let's give the form the same name by adding name="user_form" to the <form> tag, along with one more attribute: novalidate. The novalidate attribute turns off the native HTML5 validation, because we are going to have Angular handle the validation for us. Finally, let's add a CSS class to style the error message a bit in static/css/mec.css:
.custom-error {
    color: #FF0000;
    font-family: sans-serif;
}
Now try it out. You'll notice that as you type into the field, an error message is displayed until you type the correct info to pass the validation; once you do, the error message disappears and the field turns green. How's that for rapid feedback!
Next we need to apply validation to the credit card fields in templates/payments/_cardform.html, which we'll leave as an exercise for you.
As a final touch you may want to disable the register button until all the fields pass validation. We talked about how Angular assigns classes such as ng-valid and ng-invalid to fields depending on whether they pass validation. It also assigns those same classes to the form itself, so you can quickly tell if the entire form is valid or invalid. Using that, we can modify our register button (which is in templates/payments/_cardform.html) to look like this:
<input class="btn btn-lg btn-primary" id="user_submit"
       name="commit" type="submit" value="Register"
       ng-disabled="{{form.form_name}}.$pristine ||
                    {{form.form_name}}.$invalid">

So, ng-disabled will ensure the button is disabled if the form is $pristine (meaning no data has been input on the form yet) or if the form is $invalid (meaning one of the field validators is failing). Once all data is input correctly, the button will become enabled and you can submit the form.
If you do submit the form, the register view function will pick it up and perform server-side validation. If there is an error, such as the email already being in use, it will send back the error, which will be displayed on top of the form.
Back in templates/payments/register.html, the <form> tag needs a few more changes. Here we first declare a controller called RegisterCtrl (we'll add a file called static/js/registerCtrl.js for that). Then notice how we have removed the id attribute (id="{{form.form_name}}") that was being used for the jQuery function that calls Stripe on submit; we are going to convert that to Angular, so we don't need the id anymore. Finally, ng-submit="register()" is added to handle the form submission, so we will need to create the register() function in our controller. Also note that we have removed the action attribute. If we leave the action attribute in place, the form will submit as it normally would, skipping the Angular ng-submit handling of form submission.
The rest of the template remains unchanged. Now let's look at our register() function to see what happens when the form is submitted. First off, let's remove the jQuery call to Stripe in our static/js/application.js file and implement it in Angular. Before we get to the JavaScript portion, though, let's update our templates/payments/_cardform.html template with some Angular goodness so it will work better with our Angular call:
<div id="credit-card">

    <!-- the last_4_digits and stripe_token hidden inputs are now commented out -->

    <div id="credit-card-errors" ng-show="stripe_errormsg">
        <div class="custom-error" id="stripe-error-message">
            [[ stripe_errormsg ]]</div>
    </div>

    <div class="clearfix">
        <label for="credit_card_number">Credit card number</label>
        <div class="input">
            <input class="field" id="credit_card_number" type="text"
                   ng-model="card.number" required>
        </div>
    </div>
    <div class="clearfix">
        <label for="cvc">Security code (CVC)</label>
        <div class="input">
            <input class="small" id="cvc" type="text" ng-model="card.cvc"
                   required min=3>
        </div>
    </div>
    <div class="clearfix">
        <label for="expiry_date">Expiration date</label>
        <div class="input">
            <select class="small" id="expiry_month"
                    ng-model="card.expMonth">
                {% for month in months %}
                <option value="{{ month }}"{% if soon.month == month %} selected{% endif %}>{{ month }}</option>
                {% endfor %}
            </select>
            <select class="small" id="expiry_year" ng-model="card.expYear">
                {% for year in years %}
                <option value="{{ year }}"{% if soon.year == year %} selected{% endif %}>{{ year }}</option>
                {% endfor %}
            </select>
        </div>
    </div>
    <br/>
</div>
<div class="actions">
    <input class="btn btn-lg btn-primary" id="user_submit"
           name="commit" type="submit" value="Register"
           ng-disabled="{{form.form_name}}.$pristine ||
                        {{form.form_name}}.$invalid">
</div>
Mainly we have just added the ng-model directive to the various form fields. Also, in the credit-card-errors div near the top we are using the model stripe_errormsg to determine if there is an error and to display any error messages returned from Stripe.
Also notice the first two hidden fields, which are now commented out. We do not need those anymore because we can just store the values in an Angular model without passing them to the template.
Now let's go back to the register function in our controller. A straightforward attempt might look like this:
mecApp.controller('RegisterCtrl', function($scope) {

    $scope.register = function() {
        $scope.stripe_errormsg = "";
        approve_cc();
    };

    approve_cc = function() {
        Stripe.createToken($scope.card, function(status, response) {
            if (response.error) {
                $scope.$apply(function() {
                    $scope.stripe_errormsg = response.error.message;
                });
            } else {
                $scope.userform.last_4_digits = response.card.last4;
                $scope.userform.stripe_token = response.id;
            }
        });
    };
});
Save this in a new file called static/js/registerCtrl.js, and make sure to update the base template to load the new script file (just like we did for our other controllers).
Note the $scope.$apply wrapping the error assignment: when we call Stripe.createToken and then use a callback to handle the return value (as we are doing in this example), we are outside of Angular's digest loop, which means we need to tell Angular that we have made changes to the model. This is what $scope.$apply does. It lets Angular know that we have made some changes so Angular can apply them. Most of the time this isn't necessary, because Angular will automatically wrap our calls in a $scope.$apply for us (as is the case when we use $http), but because Angular isn't aware of Stripe, we need this extra step to ensure our changes are applied.
While the above code works, it uses jQuery-style callbacks as opposed to Angular-style promises. This will make it difficult, or at least a little bit messy, for us to chain together the functionality that we will need, which is:
call Stripe -> post data to Django -> log user in -> redirect to members page -> handle errors if they occur
Let's rework the Stripe function call to use promises. Also, let's factor it out into a factory to keep our controller nice and tidy. The factory will look like this:

mecApp.factory('StripeFactory', function($q) {
    var factory = {}
    factory.createToken = function(card) {
        var deferred = $q.defer();

        Stripe.createToken(card, function(status, response) {
            if (response.error) {
                deferred.reject(response.error);
            } else {
                deferred.resolve(response);
            }
        });

        return deferred.promise;
    }
    return factory;
});
Notice how the factory uses the $q API to wrap the Stripe.createToken call in a promise: we create a deferred with $q.defer(), hand its promise back to the caller, and then resolve or reject that deferred from inside Stripe's callback depending on whether Stripe reported an error. The controller can now treat the Stripe call just like any other promise:
mecApp.controller('RegisterCtrl', function($scope, StripeFactory) {

    setToken = function(response) {
        $scope.userform.last_4_digits = response.card.last4;
        $scope.userform.stripe_token = response.id;
        return $scope.userform;
    }

    logStripeErrors = function(error) {
        $scope.stripe_errormsg = error.message;
    }

    $scope.register = function() {
        $scope.stripe_errormsg = "";
        StripeFactory.createToken($scope.card)
            .then(setToken, logStripeErrors);
    };

});
This code should now start to look similar to our UserPollCtrl, because we are relying on promises and an external factory to get the work done. With that we have now replaced our jQuery Stripe call with an Angular one. In doing so we have changed our form to an Angular-controlled form, so let's now look at submitting forms with Angular.
CSRF Protection
The first item that needs to be addressed when submitting forms from Angular to Django is the CSRF protection. If we were to just submit our form as it is, Django's CSRF protection would complain. In order to rectify that, we can add a configuration setting to our Angular module in static/js/application.js:
mecApp.config(function($interpolateProvider, $httpProvider) {
    $interpolateProvider.startSymbol('[[')
        .endSymbol(']]');
    $httpProvider.defaults.headers.common['X-CSRFToken'] =
        $('input[name=csrfmiddlewaretoken]').val();
});
The $httpProvider line is the newly added piece here, and it says: add the X-CSRFToken value to the headers of all requests that we send, and set that value to the value of the input named csrfmiddlewaretoken. Luckily for us, that is also the name of the input that is created by Django's {% csrf_token %} tag.
CSRF - sorted.
Next up is making the actual POST request from our controller, which would look like:

$http.post("/register", $scope.userform)

And on the Django side, the register view function needs to learn to handle JSON:
 1  import json
 2
 3  if request.method == 'POST':
 4      # We only talk AJAX posts now
 5      if not request.is_ajax():
 6          return HttpResponseBadRequest("I only speak AJAX nowadays")
 7
 8      data = json.loads(request.body.decode())
 9      form = UserForm(data)
10
11      if form.is_valid():
12          # carry on as usual
Line 1 - We are going to need JSON; import it at the top of the file.
Line 3 - We are going to leave the GET request unchanged, but for POST:
Line 5 - This line checks the header to see if the request looks like it's an AJAX request. In other words, if the header has X-Requested-With: XMLHttpRequest, this check is going to pass.
Line 6 - Insert attitude (err, an error).
Line 8 - JSON data will be sent in the body. Remember, though, we are dealing with Python 3 now, and so request.body will come back as a byte string. json.loads wants a str, so we use .decode() for the conversion (see the quick example after this list).
Line 9 - From here we are off to the races; we can load up our form with the incoming data and use the Django form the same way we always have.
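A quick, purely illustrative shell session showing that bytes-to-str step:

>>> import json
>>> body = b'{"name": "pyRock"}'      # request.body arrives as bytes under Python 3
>>> json.loads(body.decode())         # decode to str, then parse
{'name': 'pyRock'}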
The full register view now looks like this:
def register(request):
    user = None
    if request.method == 'POST':
        # We only talk AJAX posts now
        if not request.is_ajax():
            return HttpResponseBadRequest("I only speak AJAX nowadays")

        data = json.loads(request.body.decode())
        form = UserForm(data)

        if form.is_valid():
            try:
                customer = Customer.create(
                    "subscription",
                    email=form.cleaned_data['email'],
                    description=form.cleaned_data['name'],
                    card=form.cleaned_data['stripe_token'],
                    plan="gold",
                )
            except Exception as exp:
                form.addError(exp)

            cd = form.cleaned_data
            try:
                with transaction.atomic():
                    user = User.create(cd['name'], cd['email'],
                                       cd['password'],
                                       cd['last_4_digits'],
                                       stripe_id="")

                    if customer:
                        user.stripe_id = customer.id
                        user.save()
                    else:
                        UnpaidUsers(email=cd['email']).save()

            except IntegrityError:
                resp = json.dumps({"status": "fail", "errors":
                                   form.errors})
            else:
                request.session['user'] = user.pk
                resp = json.dumps({"status": "ok", "url": '/'})

            return HttpResponse(resp,
                                content_type="application/json")
        else:  # form not valid
            resp = json.dumps({"status": "form-invalid", "errors":
                               form.errors})
            return HttpResponse(resp,
                                content_type="application/json")

    else:
        form = UserForm()

    return render_to_response(
        'payments/register.html',
        {
            'form': form,
            'months': list(range(1, 12)),
            'publishable': settings.STRIPE_PUBLISHABLE,
            'soon': soon(),
            'user': user,
            'years': list(range(2011, 2036)),
        },
        context_instance=RequestContext(request)
    )
The top part of the function is the same as what we covered earlier in the chapter. In fact the only difference in the function is what is returned for a POST request:
On success (the else branch after the user is created) - rather than redirecting the user, we now return some JSON saying the status is OK and providing a URL to redirect to.
On an IntegrityError (such as the email address already existing) - we return some JSON saying the status is "fail", along with the associated errors.
On failed form validation - we return the failed result as JSON. You might think: why do we still need to validate on the back-end if we have done it on the front-end? Since we are decoupled from the front-end now, we can't be 100% sure the front-end validation is working correctly or even ran, so it's best to check on the back-end as well, just to be safe.
After updating the register view function to return some JSON, let's update the front-end to take advantage of that. First thing, let's create another factory to interact with our Django back-end.
Fixing Tests
But first, let's keep our unit tests working. Since we switched to returning JSON responses, we will need to update several of the tests in tests/payments/testViews.py. Let's look at the RegisterPageTests class.
Our register view function now handles both GET and POST requests, so the first thing to do is change our setUp function to create the two requests that we need. Doing so will make RegisterPageTests.setUp look like:
def setUp(self):
    self.request_factory = RequestFactory()
    data = json.dumps({
        'email': '[email protected]',
        'name': 'pyRock',
        'stripe_token': '...',
        'last_4_digits': '4242',
        'password': 'bad_password',
        'ver_password': 'bad_password',
    })
    self.post_request = self.request_factory.post(
        self.url,
        data=data,
        content_type="application/json",
        HTTP_X_REQUESTED_WITH='XMLHttpRequest')
    self.post_request.session = {}

    self.request = self.request_factory.get(self.url)
Above we have created two requests:
1. The standard GET request, which is self.request. That remains unchanged.
2. A new POST request called post_request that issues a POST with the properly JSON-encoded data.
Also note that post_request sets the extra parameter HTTP_X_REQUESTED_WITH='XMLHttpRequest'. This will make request.is_ajax() return True.
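A minimal sketch of why that header matters, using nothing but Django's RequestFactory:

from django.test import RequestFactory

factory = RequestFactory()
plain = factory.get('/register')
ajax = factory.get('/register', HTTP_X_REQUESTED_WITH='XMLHttpRequest')

plain.is_ajax()   # False - no X-Requested-With header
ajax.is_ajax()    # True - header present, so our view will accept the request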
From there it's just a matter of changing most of the tests to use the post_request and to ensure the data returned is the appropriate JSON response. For example, RegisterPageTests.test_invalid_form_returns_registration_page will now look like:
def test_invalid_form_returns_registration_page(self):

    with mock.patch('payments.forms.UserForm.is_valid') as user_mock:

        user_mock.return_value = False
        self.post_request._data = b'{}'
        resp = register(self.post_request)

        # ... assert that the response is the expected "form-invalid" JSON ...
Breaking Chains
Add the following to static/js/registerCtrl.js:

mecApp.factory("UserFactory", function($http) {
    var factory = {}
    factory.register = function(user_data) {
        return $http.post("/register", user_data).then(function(response) {
            return response.data;
        });
    }
    return factory;
});
Here we are taking care of POSTing the data to our back-end and returning the response data. Then we just have our controller call the factory, assuming of course our Stripe call passed. We do this with a promise chain.
Let's first have a look at the chain:
$scope.register = function() {
    $scope.stripe_errormsg = "";
    $scope.register_errors = "";

    StripeFactory.createToken($scope.card)
        .then(setToken, logStripeErrors)
        .then(UserFactory.register)
        .then(redirect_to_user_page)
        .then(null, logRegisterErrors);
};
Above is our register function (which, if you recall, is tied to our form submission with ng-submit). The first thing it does is clear out any errors that we may have. Then it kicks off the promise chain starting with StripeFactory.createToken. Let's go through each line in the promise chain.
StripeFactory.createToken($scope.card) - Same as before: call Stripe, passing in the credit card info.
.then(setToken, logStripeErrors) - This line is called after the createToken call completes. If createToken completes successfully then setToken is called; if it fails then logStripeErrors is called. setToken is listed below:
setToken = function(response) {
    $scope.userform.last_4_digits = response.card.last4;
    $scope.userform.stripe_token = response.id;
    return $scope.userform;
}

Same as before: we set the appropriate scope data from Stripe's response. The final step in this function is to return $scope.userform. Why? Because we are going to need it in the next step, so we are just passing the data on to the next promise in the chain.
Moving on to the logStripeErrors function:

logStripeErrors = function(error) {
    $scope.stripe_errormsg = error.message;
    throw error;
}

The error message is stored in stripe_errormsg (same as before). But now we are also throwing an error! This has to do with how promise chaining works. If an error is thrown in a promise chain, it will be converted to a rejected promise and passed on to the next step in the chain. If the next step doesn't have an onRejected handler then that step will be skipped and the promise will be re-thrown, again and again, until there is no more chain or until there is an onRejected handler. So looking at our promise chain again:
StripeFactory.createToken($scope.card)
    .then(setToken, logStripeErrors)
    .then(UserFactory.register)
    .then(redirect_to_user_page)
    .then(null, logRegisterErrors);
This makes it possible for us either to break out of the chain, by throwing an error and never catching it, or to recover from an error and continue execution along the chain.
The next step in the chain is the good ol' UserFactory.register, which does the POST and returns the response data, which is then handled by redirect_to_user_page:

redirect_to_user_page = function(data) {
    if (data.errors) {
        throw data.errors;
    }
    window.location = data.url;
}

Here we receive the JSON response, and if it has an errors key, we throw the errors (so they will be handled by the next onRejected handler). If there are no errors, then we redirect (with window.location, old-school style) to the URL returned by our POST call.
On a successful registration, this will be the end of our promise chain. But if we have an error, then there is one more onRejected handler that will get triggered: logRegisterErrors.

logRegisterErrors = function(errors) {
    $scope.register_errors = errors;
}

Just set them to the appropriate scope value. These errors will be displayed at the top of our form. If we have a look at /templates/payments/register.html we can see our old error-displaying functionality, which needs to be updated so that it displays the Angular register_errors model instead.
And there you have it! We can now submit our form with Angular (and do a whole bunch of other cool stuff along the way). That was quite a bit of work, but we made it. For convenience, here is the entire static/js/registerCtrl.js:
mecApp.factory('StripeFactory', function($q) {
    var factory = {}
    factory.createToken = function(card) {
        var deferred = $q.defer();

        Stripe.createToken(card, function(status, response) {
            if (response.error) {
                deferred.reject(response.error);
            } else {
                deferred.resolve(response);
            }
        });

        return deferred.promise;
    }
    return factory;
});

mecApp.factory("UserFactory", function($http) {
    var factory = {}
    factory.register = function(user_data) {
        return $http.post("/register", user_data).then(function(response) {
            return response.data;
        });
    }
    return factory;
});

mecApp.controller('RegisterCtrl', function($scope, $http,
        StripeFactory, UserFactory) {

    setToken = function(response) {
        $scope.userform.last_4_digits = response.card.last4;
        $scope.userform.stripe_token = response.id;
        return $scope.userform;
    }

    logStripeErrors = function(error) {
        $scope.stripe_errormsg = error.message;
        throw error;
    }

    redirect_to_user_page = function(data) {
        if (data.errors) {
            throw data.errors;
        }
        window.location = data.url;
    }

    logRegisterErrors = function(errors) {
        $scope.register_errors = errors;
    }

    $scope.register = function() {
        $scope.stripe_errormsg = "";
        $scope.register_errors = "";

        StripeFactory.createToken($scope.card)
            .then(setToken, logStripeErrors)
            .then(UserFactory.register)
            .then(redirect_to_user_page)
            .then(null, logRegisterErrors);
    };

});
Conclusion
Okay. So, we got our form all working in Angular. We learned about promises, using them to wrap third-party APIs, and about breaking out of promise chains. We also talked a good deal about validation, displaying error messages to the user, and how to keep your Django forms in sync with your Angular validation. And of course, how to get Angular and Django to play nicely together. These last three chapters should give you enough Angular background to tackle the most common issues you'll face in developing a Django app with an Angular front-end.
In other words, you know enough now to be dangerous! That being said, Angular is a large framework and we have really just scratched the surface. If you're looking for more on Angular, there are several resources on the web that you can check out. Below are some good ones to start you off:
Egghead videos
A GitHub repo kept up to date with a long list of Angular resources
A very well done blog with up-to-date Angular articles
Exercises
1. We are not quite done yet with our conversion to Angular, as that register view function is begging for a refactor. A good way to organize things would be to have the current register view function just handle the GET requests and return register.html as it does now. As for the POST requests, I would create a new users resource and add it to our existing REST API. So rather than posting to /register, our Angular front-end would post to /api/v1/users. This will allow us to separate the concerns nicely and keep the code a bit cleaner. So, have a go at that.
2. As I said earlier in the chapter, I'm leaving the form validation for _cardform.html to you. True to my word, here it is as an exercise: put in some validation for the credit card and CVC fields.
3. While the code we added to templates/payments/_field.html is great for our register page, it also affects our sign-in page, which is now constantly displaying errors. Fix it!
Chapter 16
MongoDB Time!
You've probably heard something about MongoDB or the more general NoSQL database craze. In this chapter we are going to explore some of MongoDB's features and how you can use them within the context of Django.
There is a much longer discussion about when to use MongoDB versus when to use a relational database, but I'm going to mainly sidestep that for the time being. (Maybe I'll write more on this later.) In the meantime I'll point you to a couple of articles that address that exact point:
When not to use MongoDB
When to use MongoDB
For our purposes we are going to look at using MongoDB, and specifically the geospatial capabilities of MongoDB. This will allow us to complete User Story #5 - Galactic Map:
A map displaying the location of each of the registered padwans. Useful for physical meetups, for the purpose of real-life re-enactments of course. By "Galactic" here we mean global. This map should provide a graphic view of who is where and allow for zooming in on certain locations.
Don't forget to add the new usermap app to the INSTALLED_APPS tuple in settings.py as well.
Next, we need to pull a few JavaScript libraries into the __base.html template. By now the scripts portion of your __base.html should look like:
<!-- ... existing script includes ... -->
<script src="//maps.googleapis.com/maps/api/js?sensor=false"
        type="text/javascript"></script>
<script src="{% static "js/lodash.underscore.min.js" %}"
        type="text/javascript"></script>
<script src="{% static "js/angular-google-maps.min.js" %}"
        type="text/javascript"></script>
<script src="{% static "js/application.js" %}"
        type="text/javascript"></script>
<script src="{% static "js/userPollCtrl.js" %}"
        type="text/javascript"></script>
<script src="{% static "js/loggedInCtrl.js" %}"
        type="text/javascript"></script>
<script src="{% static "js/registerCtrl.js" %}"
        type="text/javascript"></script>
{% block extrajs %}{% endblock %}
You can see that the first three script tags - the Google Maps API itself, lodash, and angular-google-maps - are the three JS files we need to start working with Google Maps.
While we're in the __base.html, let's change the menu and replace the about page (since we are not really using it) with our new user maps page:
            <!-- ... existing menu items, plus a new <li> linking to {% url 'usermap' %} ... -->
            {% else %}
                <li><a href="{% url 'sign_in' %}">Login</a></li>
                <li><a href="{% url 'register' %}">Register</a></li>
            {% endif %}
        </ul>
    </div>
</nav>
The key addition is the new navigation item that points to our usermap page. Of course, that {% url 'usermap' %} tag is going to cause the Django templating engine to blow up unless we add an appropriately named URL to django_ecommerce/urls.py. So, just add another item to the urlpatterns tuple:
url(r'^usermap/', 'usermap.views.usermap', name='usermap'),

The view behind that URL can be as simple as:

def usermap(request):
    return render_to_response('usermap/usermap.html')
And the template itself, usermap/usermap.html, looks like:

 1  {% extends '__base.html' %}
 2
 3  {% load staticfiles %}
 4
 5  {% block content %}
 6
 7  <div ng-controller="UserMapCtrl">
 8      <google-map center="map.center" zoom="map.zoom"
            options="map.options"></google-map>
 9  </div>
10
11  {% endblock content %}
12
13  {% block extrajs %}
14  <script src='{% static "js/usermapCtrl.js" %}'
        type="text/javascript"></script>
15  {% endblock %}
Line 7 - As with our earlier chapters on Angular, we define a controller just for this page. And as usual we have come up with a highly creative and original name for our controller: UserMapCtrl.
Line 8 - A nice directive courtesy of angular-google-maps. As you have probably already guessed, it will insert a Google map onto the page. We can also pass in a number of attributes to control how the map is displayed - center, zoom, and options. Check out the API documentation for more details.
Line 14 - This line just loads up our controller, where we will configure the map options.
Before we add the controller we need to be sure to inject the google-maps service into our Angular app within static/js/application.js by modifying the first line to look like:

var mecApp = angular.module('mecApp', ['google-maps']);

The second argument there, which if you recall is our list/array of dependencies, just lists google-maps as a dependency, and then Angular will work some dependency injection magic for us so that we can use the service anywhere in our Angular application.
Finally, we can create our UserMapCtrl controller to actually display the map, in static/js/usermapCtrl.js:
mecApp.controller('UserMapCtrl', function($scope) {

    $scope.map = {
        center: {
            latitude: 38.062056,
            longitude: -122.643380
        },
        zoom: 14,
        options: {
            mapTypeId: google.maps.MapTypeId.HYBRID,
        }
    };

});
This is a very simple controller that just initializes our google-map object.
Remember this bit of CSS?

.angular-google-map-container {
    height: 400px;
}

With that in place you will actually be able to see the map on your web page. And that's it for the basic map.
To complete the usermap feature we are going to place a bunch of markers all over the map showing the locations of our users, based on the user location data in our MongoDB backend (which we still need to create).
MongoDB vs SQL
To provide the functionality for User Story 5 we need a representation of each user's location. Let's dive right into some terminology. MongoDB uses different terminology than your standard SQL database, so to avoid confusion let's compare SQL with MongoDB:

Standard SQL        MongoDB
database            database
table               collection
row                 document
column              field
index               index
table joins         embedded documents or linking
primary key         primary key
Documents
Documents define aggregates of data - e.g., data that is combined together to form a total quantity or thing.
For example:
An advertising flyer is an aggregate of a main message and several supporting paragraphs, possibly also consisting of other elements such as images, testimonials, footers and whatnot. The point here is that a flyer is an aggregate of multiple pieces.
An article is an aggregate of an author, an abstract, and several sections of text. In the case of a blog post, an article may include comments, likes and/or social mentions. Again we are modeling multiple things as a larger aggregate of those things.
With a relational database you might model an Article with an Author table, an Article table, a Comments table, etc., and then you would create the necessary joins (relationships) to tie all these tables together so they could function as your data model. Using an aggregate data model, however, you would create an Article document that contains all the data necessary (author, article body, comments, etc.) to display the article.
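As a rough sketch (the field names are made up purely for illustration), such an Article document might look like one single nested structure:

article = {
    "title": "Why Aggregates?",
    "author": {"name": "Pete", "email": "[email protected]"},
    "abstract": "A short summary...",
    "sections": ["First section of text...", "Second section..."],
    "comments": [
        {"who": "Fred", "text": "Nice article!"},
    ],
}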
Trade offs
There are a few trade-offs to modeling things as a large aggregate versus a relational model.
1. Querying efficiency - using a join to query across several tables is generally slower than reading a single aggregate, so oftentimes read speeds will increase by using an aggregate data model.
2. Data normalization - in relational data models you are encouraged not to duplicate data; this reduces storage costs and can also make it easier to maintain data integrity, as each piece of data only lives in one place. In aggregate data models, however, you are encouraged to duplicate data. This reduces the need for joins and makes the data model simpler, at the cost of increased storage requirements.
3. Cascading updates - Imagine the Article relational data model. Imagine we have an author - let's call him Pete - who wrote 20 articles. Now Pete wants to change his name to Fred. We just update the Authors table, and since all the Articles only hold a relationship to the Authors table, all the articles will be updated as well. Conversely, in the aggregate data model, where author information is duplicated in every Article, we would first have to find all articles with an author name of Pete and then update each of them individually.
4. Horizontal scalability - Because aggregate data models store all related information together, it is relatively simple to scale the database horizontally by adding more nodes and sharding the database. With relational databases this can be much more difficult, as creating joins across servers can be painful to say the least. So in general relational data models are easier to scale vertically (adding more horsepower to the machine), whereas aggregate data models scale horizontally (adding more machines) with relative ease.
These trade-offs are by no means set in stone; they are just to get you thinking about the effects of using one data model vs. another. As always, the right choice will depend upon the particular problem that you are trying to solve. Use the trade-offs listed here to help get you thinking about where a particular data model would fit best.
Now that we have a better understanding of Mongo, let's start using it.
Installing Mongo
We aren't going to get very far with MongoDB until we install it. Luckily it's way easier than Postgres. Just head on over to the MongoDB downloads page and download it. There are also a number of supported package managers - brew install mongo, anyone?
Next up is to get the Python packages we need to support MongoDB. There are two that we are going to use:
pymongo - the MongoDB database driver for Python
mongoengine - a Django-friendly ODM (Object-Document Mapper) for MongoDB
Let's install those with pip install mongoengine; pymongo is a dependency of mongoengine, so you don't need to install it explicitly. After that finishes, update your requirements.txt file to include the new libraries.
Cool. Now we are ready to configure Django to start working with MongoDB.
Connecting from Django is a one-liner: mongoengine.connect('test'). That's it. That will create the database named test if it doesn't already exist. Of course there are more complicated forms. Say you want to connect to a MongoDB instance running on the machine 192.168.1.15, to a database called golaith that requires a username and password. Then your connection might look something like the sketch below.
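A sketch only - the host, database name, and credentials are the made-up values from the example above:

import mongoengine

mongoengine.connect(
    'golaith',                    # database name
    host='192.168.1.15',
    username='a_db_user',         # placeholder
    password='a_db_password',     # placeholder
)

For our project, the relevant bits of settings.py end up looking like this: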
# DATABASES
mongoengine.connect("mec-geodata")

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'django_db',
        'USER': 'djangousr',
        'PASSWORD': 'djangousr',
        'HOST': 'localhost',
        'PORT': '5432',
    }
}
Alternatively, if you want to push as much as possible into MongoDB and keep only a dummy relational backend around, the setup looks like this:

import mongoengine

...snip...

mongoengine.connect("my-mongo-database")

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': 'dummy_db',
    }
}
There are many tutorials out there that will tell you otherwise, but in Django 1.8 you are not going to get very far if you don't have at least a minimal DATABASES setting defined. With this setup you will basically be storing the tables for the Django admin and user sessions in the sqlite backend and everything else in MongoDB. If you want to also store the session information in MongoDB, add the two session settings sketched below to your settings.py file.
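A sketch only - this assumes mongoengine's bundled Django session backend; check the mongoengine documentation for the exact paths supported by your version:

SESSION_ENGINE = 'mongoengine.django.sessions'
SESSION_SERIALIZER = 'mongoengine.django.sessions.BSONSerializer'

With the connection sorted, the next step is the document that will hold each user's location: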
class UserLocation(Document):
    email = EmailField(required=True, unique=True, max_length=200)
    name = StringField(required=True, max_length=200)
    location = PointField(required=True)
    mappoint_id = SequenceField(required=True)
Creating a mongoengine.Document is really similar to creating any other model you would normally create in Django. There are just a couple of variations to point out:
The class itself - our MongoDB documents all need to inherit from mongoengine.Document.
email and name - declaring fields for your document with mongoengine is much the same as with the standard Django ORM, so these two fields should feel very familiar to you.
location - this field uses a type you may not be familiar with: the PointField type. This is one of MongoDB's geospatial fields, which the mongoengine ODM has full support for.
mappoint_id - this is used because Google Maps requires a unique numeric id for all map points in order to improve caching/performance.
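As a quick, purely illustrative sketch of what storing one of these documents looks like (the names and coordinates are made up; PointField takes GeoJSON-style [longitude, latitude]):

loc = UserLocation(
    email='[email protected]',
    name='Obi-Wan',
    location=[-122.643380, 38.062056],
)
loc.save()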
Nice! So with that we have our database set up. Now let's talk about storing user locations - i.e., geodata - in MongoDB. In fact, let's just give you a crash course in storing geospatial information in a database. Then we'll come back to our models and finish coding them up.
A Geospatial primer
This primer is not intended to get you up to speed with the enormous GIS (Geographical Information Systems) industry. Instead, the point here is just to describe the bare minimum of terms so you have an idea what is going on, and so you can talk GIS at a party without sounding like, well, an idiot.
Calculating Distance
In the geospatial world, indexes are used to calculate distance. More specifically they are used to calculate geometry, but distance is just a function of geometry. The point is that if you want to calculate distance between various GeoJSON objects, you have to index them. And there are several different types of indexes that you can use. Let's look at two of them to give you a feel for how they work.
Generally, when you calculate distance on a graph you use good old Euclidean geometry:

distance = sqrt((x2 - x1)^2 + (y2 - y1)^2)

This is great for flat surfaces. Thus, if you want to calculate distance against a flat surface in MongoDB you would use a 2d index.
But the earth is round, so if we want to deal with distances across a globe - aka spherical distance - we need to be sure we are calculating with the correct distance formula. In MongoDB we are looking for an index called 2dsphere.
That's all we need to know for now. We are of course glossing over huge amounts of information and ignoring a lot of the finer details, but for now, YAGNI.
Remember:
There are 3 GeoJSON object types (Points, Polygons, and LineStrings), and
there are 2 surface/index types used to calculate geometries (2d and 2dsphere).
For our purposes we will be using 2dsphere indexes.
mongoengine provides a field type for each combination of GeoJSON object and index type:

GeoJSON type    2d index             2dsphere index
Point           GeoPointField        PointField
Polygon         GeoPolygonField      PolygonField
LineString      GeoLineStringField   LineStringField

Looking back at our UserLocation document:

class UserLocation(Document):
    email = EmailField(required=True, unique=True, max_length=200)
    name = StringField(required=True, max_length=200)
    location = PointField(required=True)
    mappoint_id = SequenceField(required=True)
You can see that our location field has a type of PointField - which means it is set up to use a 2dsphere index, so we will be able to use it to calculate distances across the globe. And that's exactly what we want.
MongoDB speaks JSON. And we want to send back JSON, so we could simply just return the UserLocation collection as JSON:

UserLocation.objects().to_json()
The to_json function exists on mongoengine's QuerySet and Document classes, and with only 9 characters of code all of the documents in the UserLocation collection will be exported to JSON.
With to_json and the corresponding from_json you can now easily (de)serialize to/from JSON - which means you don't really need a serializer class at all. Let's leave out the serializer class and let MongoDB do the heavy lifting for us this time. to_json all the way!
Restful Endpoint for Mongo
Creating a REST endpoint for MongoDB is similar to creating an endpoint for anything else. With a few minor caveats, of course. We'll stick with Django Rest Framework; however, since we are not using a DRF serializer, things will be slightly different than what we did in the previous DRF chapter.
Diving right in, let's create a usermap/json_views.py file, and then let's add a function called user_locations_list to handle the GET and POST requests for our UserLocations collection.
GET:
@api_view(['GET', 'POST'])
def user_locations_list(request):
    if request.method == 'GET':
        locations = json.loads(UserLocation.objects().to_json())
        return Response(locations)
What's happening?
The first two lines - Notice here we are using a function and not a class-based view. If you recall from the earlier DRF chapter, we used a class-based view that inherited from ListModelMixin and others. However, because mongoengine implements to_json and from_json for us, we don't need DRF's serializer-backed mixins, so a plain function-based view keeps things simple.
The next part of the REST API for UserLocations is to allow the user to create a new UserLocation. We do that with a POST request:
from rest_framework import status

@api_view(['GET', 'POST'])
def user_locations_list(request):
    if request.method == 'GET':
        locations = json.loads(UserLocation.objects().to_json())
        return Response(locations)

    if request.method == 'POST':
        locations = UserLocation().from_json(json.dumps(request.DATA))
        locations.save()
        return Response(
            json.loads(locations.to_json()),
            status=status.HTTP_201_CREATED
        )
The from rest_framework import status import is needed at the top of our module so we can return the 201 status code.
As said before, the second if block is where we handle POST requests.
There we use the mongoengine from_json to create an object from the JSON passed to us by the client (request.DATA). Again we jump through a few hoops to get the JSON into the appropriate format, but once we do that we can create our new UserLocation with no problem. Also note this is really an upsert, meaning if the UserLocation JSON data passed in from the client includes a MongoDB ObjectID then from_json will update the existing document; otherwise it will create a new one.
Finally we return the JSON form of the object that was just upserted with the appropriate status of 201.
There you have it - that's a REST API that works with MongoDB and provides you the standard DRF functionality.
One last thing to do is add the URL routes in usermap/urls.py:
urlpatterns = patterns(
    'usermap.json_views',
    url(r'^user_locations$', 'user_locations_list'),
)
And then of course we need to reference that from our master URL file, django_ecommerce/urls.py:
admin.autodiscover()

main_json_urls.extend(djangular_polls_json_urls)
main_json_urls.extend(payments_json_urls)
main_json_urls.extend(map_json_urls)

urlpatterns = patterns(
    '',
    url(r'^admin/', include(admin.site.urls)),
    url(r'^$', 'main.views.index', name='home'),
    url(r'^pages/', include('django.contrib.flatpages.urls')),
    url(r'^contact/', 'contact.views.contact', name='contact'),
    url(r'^report$', 'main.views.report', name="report"),
    url(r'^usermap/', 'usermap.views.usermap', name='usermap'),

    # user registration/authentication
    url(r'^sign_in$', views.sign_in, name='sign_in'),
    url(r'^sign_out$', views.sign_out, name='sign_out'),
    url(r'^register$', views.register, name='register'),
    url(r'^edit$', views.edit, name='edit'),

    # api
    url(r'^api/v1/', include('main.urls')),
)
Over on the Angular side, the controller sets up the map and pulls the locations from our new endpoint, with a small locations factory doing the HTTP work:

mecApp.controller('UserMapCtrl', function($scope, locations) {

    $scope.map = {
        center: {
            latitude: 38.062056,
            longitude: -122.643380
        },
        zoom: 14,
        options: {
            mapTypeId: google.maps.MapTypeId.HYBRID,
        }
    };

    var cache = function(data) {
        $scope.locs = data;
    };

    locations.getAll().then(cache);
});


var locationUrls = '/api/v1/user_locations';

mecApp.factory('locations', function($http) {

    return {
        getAll: function() {
            return $http.get(locationUrls).then(function(response) {
                console.log(response);
                return response.data;
            });
        },
    };
});
From the Angular controller we can see that we are basically just adding the functionality to talk to the JSON API we just created. Now we need to update the Google map Directive in the template (templates/usermap/usermap.html):
<div ng-controller="UserMapCtrl">
    <google-map center="map.center" zoom="map.zoom"
                options="map.options">
        <markers models="locs" coords="'location'"
                 idKey="'mappoint_id'"></markers>
    </google-map>
</div>
Notice that we added a markers Directive as a sub-Directive of our google-map Directive. The markers Directive is an easy way to display a series of markers from a JSON data source. To set up the markers Directive we passed the following attribute values:
models="locs" - this corresponds to the $scope.locs that we populated in our controller.
coords="'location'" - this is telling the directive that each object in the model has a location property that stores the coordinates.
idKey="'mappoint_id'" - recalling from earlier when we created our UserLocation model with a SequenceField, we are now supplying said field to Google Maps, making the mapping gods happy.
This is the minimal setup we need to display some markers. So we're good! Test it out. Fire up the app and make sure the user maps work as expected.
Next, in registerCtrl(), we capture the user's current position and stash it on the scope:
$scope.geoloc = "";
if (navigator.geolocation) {
    navigator.geolocation.getCurrentPosition(function(position){
        $scope.$apply(function(){
            $scope.geoloc = position;
        });
    });
}
We also add a saveUserLoc function to our UserFactory that POSTs the location data to the API:

factory.saveUserLoc = function(coords) {
    return $http.post("/api/v1/user_locations", coords)
        .then(function(response) {
            return response.data;
        });
};
saveUsrLoc = function() {
    var data = {'name' : $scope.userform.name,
                'email' : $scope.userform.email,
                'location' : [$scope.geoloc.coords.longitude,
                              $scope.geoloc.coords.latitude]};

    UserFactory.saveUserLoc(data);
    return $scope.userform;
};
The main thing to remember here is you need to store the data as longitude, latitude, as opposed to the more obvious latitude / longitude. Also note that we are returning $scope.userform; this is so the next call in the user story (e.g., to register the user) gets the userform passed in.
So now with this new functionality, let's update our RegisterCtrl() promise chain:
StripeFactory.createToken($scope.card)
.then(setToken, logStripeErrors)
.then(saveUsrLoc)
.then(UserFactory.register)
.then(redirect_to_user_page)
.then(null,logRegisterErrors);
So, the only difference is that we added the line .then(saveUsrLoc), which is now called right before calling UserFactory.register.
There you go. You now have a usermap with a MongoDB backend.
Conclusion
Two things happened in this chapter that are very important. Perhaps you noticed them.
1. We upgraded our usage of Angular in subtle yet important ways.
2. We had to make design decisions where there wasn't a clean right way to do it.
Let's talk about each of these in turn.
Naming Conventions
Remember this promise chain?

locations.getAll().then(cache);

This chains together the process of retrieving location data from our server, and it does exactly what it says it does.
Compare that to the promise chain we wrote previously in the Djangular chapter.

pollFactory.vote_for_item(item).then(getPoll).then(setPoll);

While the latter is focused on individual parts of the process - e.g., the pollFactory, the poll - the former is entirely based on the function - getAll().then(cache).
Put another way, the former promise chain uses verbs for names, which better describe their function, while the latter primarily focuses on the nouns, which better describe the thing they are. The seemingly innocuous change of name represents the start of a paradigm shift from nouns to verbs, from objects to functions. The more we work with Angular and JavaScript, the more we start to identify with the functional nature of the tools, and that identification is communicated back through the code, largely by the way we choose to name things.
SEE ALSO: Now would be a great time to take a quick dive into the philosophy
of naming things, and who better to guide you than the amazing author Steve
Yegge. Check out the wonderful blog post Execution in the Kingdom of Nouns.
We are starting to move beyond just making stuff work with the language and we are moving
into a stage of deeper appreciation of the design of the language - and how it can be used to express elegant solutions. This is the point where computer science and software development
really start to get interesting. This is where you move beyond being functional in a language
to becoming a student of the language. This is the gateway to mastery but the road becomes
far more individualized from here.
Exercises
1. Remember how we said that Django doesn't officially have a django.db.backends for Mongo? Well, it also doesn't have any unit testing support for Mongo either. Remember: Django provides a DjangoTestCase that will automatically clear out your database and migrate the data models. Write a MongoTestCase that clears out a MongoDB database prior to test execution/between test runs.
Need help? Focus on these parts:
django.test.runner.DiscoverRunner
pymongo.Connection
mongoengine.connection.disconnect()
NOTE: This is a difficult exercise, which not everybody will be able to solve. Take a shot at it though. May the force be with you.
Chapter 17
One Admin to Rule Them All
Now that we have finished all of the user stories from Building a Membership Site, it's time to turn our attention to more of the backend, the non-user-facing functionality. For most membership sites we will need to update the site data from time to time, and there are several use cases as to why we might need to do that.
It sure would be nice if there was some sort of way to provide backend access to our MEC app so we could do all this directly from the browser, via the Django admin site. In this chapter, we are going to look at the admin site in detail - how to use it and how to add custom functionality.
Basic Admin
I was once asked to sum up Django's basic admin site in one sentence: "It ain't pretty, but it works."
With just a few lines of code you can enable a functioning admin site that will allow you to modify your models from a web interface. It's not the most beautiful interface, but - hey - for two or three lines of code, we can't really complain. Let's start off by taking a look at the admin site.
If you haven't already, you will need to create a user who can access the admin site:
$ ./manage.py createsuperuser
Username: admin
Email address: [email protected]
Password:
Password (again):
Superuser created successfully.
Once done, fire up the server and navigate to the admin site - http://localhost:8000/admin/
NOTE: Starting in Django 1.8 the admin site is enabled by default.
Log in with the user you just created, and you should see the basic admin site.
Click around a bit. It's pretty intuitive; just click on the Add or Change button to perform the subsequent action. Did you notice how none of the models we created are actually shown yet? Let's fix that.
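A minimal sketch of registering our models in main/admin.py - the exact model class names here are assumptions based on the models this chapter works with:

```python
from django.contrib import admin
from main.models import Badge, Announcement, MarketingItem, StatusReport  # names assumed

# The simplest possible registration: Django builds default add/change forms for each model.
admin.site.register(Badge)
admin.site.register(Announcement)
admin.site.register(MarketingItem)
admin.site.register(StatusReport)
```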
When we register a model to be displayed in the Admin view, by default Django will do its best to come up with a standard form allowing the user to add and edit the model. Fire up the admin view and have a look at what this produces; you should now see all the above models listed under the main section. Try to edit one of the models; it should show you a list of existing instances, and you can choose one to edit.
NOTE: If you're not seeing any data in the admin view, you can load the system data with the following command:
```sh
$ ./manage.py loaddata system_data.json
```
Coming back to our auto-generated admin views - for simple cases, what Django creates is all you need. However, you can create your own ModelAdmin to control how a model is displayed and/or edited. Let's do that with the Badge model.
First, the obligatory before shot:
Not horribly exciting. Let's add a few columns:
@admin.register(Badge)
class BadgeAdmin(admin.ModelAdmin):
    list_display = ('img', 'name', 'desc', )
This will add more information to the list view display so that we can tell what each item is at a glance. Once the above changes are in place, http://localhost:8000/admin/main/badge/ should look like:
ImageField is a Django field that allows for uploading files. Note that it doesn't actually store the image in the database; it stores the image in the file system under the settings.MEDIA_ROOT directory. Only the metadata about the image is stored in the database.
To get an ImageField to behave properly, we first have to do a bit of setup:
1. Install the Pillow imaging library.
2. Define where uploaded images are saved (MEDIA_ROOT).
3. Define the URL from which images are served (MEDIA_URL).
4. Add a route so the development server can serve the uploaded images.
Install Pillow
Because an ImageField needs to load / save the image to disk, it requires the Pillow library. We can install that with pip:

$ pip install Pillow
$ pip freeze > requirements.txt

This will install the library and update our requirements.txt file.
Define where images are saved
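A minimal sketch of the settings.py change being described - it assumes the project defines BASE_DIR the way Django's default settings do, so that the result is the django_ecommerce/media directory:

```python
import os

# settings.py - where uploaded ImageField files end up on disk (path assumed)
MEDIA_ROOT = os.path.join(BASE_DIR, 'media')
```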
With this setting in place, when you upload an ImageField it will now be stored under
django_ecommerce/media. You can specify a subdirectory with the upload_to parameter
passed to your ImageField constructor. (We will see an example of this in just a minute.)
Define the URL from which images are served
We could do this directly in our urls.py, but let's stick to Django's recommendation here and update settings.MEDIA_URL like so:
MEDIA_URL = '/media/'
Normally the Django development server won't give two hoots about your MEDIA_URL. We can add a special route that will make it available on the development server. Update django_ecommerce/urls.py by adding the following:
from django.conf import settings
from django.conf.urls.static import static

urlpatterns = patterns(
    '',
    # ... all of the existing routes stay exactly as they were ...
) + static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
That last line will allow the Django development server to find the MEDIA_ROOT and serve
it under the MEDIA_URL setting. It is worth noting that this is considered not viable for production, and so if you set DEBUG=False in your settings.py file it will turn off this staticfiles
view.
With all of that in place, we can update the Badge model to use an ImageField and add a thumbnail helper:

class Badge(models.Model):
    img = models.ImageField(upload_to="images/")
    name = models.CharField(max_length=100)
    desc = models.TextField()

    def thumbnail(self):
        if self.img:
            return u'<img src="%s" width="100" height="100" />' % (self.img.url)
        else:
            return "no image"

    thumbnail.allow_tags = True

    class Meta:
        ordering = ('name',)
@admin.register(Badge)
class BadgeAdmin(admin.ModelAdmin):
    list_display = ('thumbnail', 'name', 'desc', )

That's it.
Now look at the admin view and, well, we don't have any images. Huh? That's because we previously stored all of our images in the static/img directory. So we need to move them to /media/images/ and update the fields in the database. But, hey - we just configured the admin view to allow us to upload images for badges, so just use the admin view to move the images by clicking on each badge, and then using the choose file button.
Once done, your list view will now look like:
Looking good!
In our site templates, notice that we are no longer using the {% static %} directive; rather we are just using {{ media_url }}{{ bdg.img.url }}. This is the combination of settings.MEDIA_URL and the relative URL of our uploaded file provided by the ImageField.
And that about wraps things up for images. Before moving on to the next admin topic, let's look at one more example of customizing the list_display.
@admin.register(Badge)
class BadgeAdmin(admin.ModelAdmin):
    list_display = ('thumbnail', 'name', 'desc', 'users_with_badge', )

    ...

    users_with_badge.short_description = 'Users with badge'
Here we are setting a property of the users_with_badge function called short_description to the name we want to be displayed in the column heading. If it seems strange to set a property of a function, remind yourself that everything in Python is an object. A function is an object just like a class is an object, and so we can add properties or even other functions to a function if we want.
From Django's point of view, it will ask each item in the list_display if it has a short_description property. If so, it will use that for the column heading.
short_description isn't the only property that we can use to control how things are displayed. Let's say we wanted to format the cells displayed in the users_with_badge column. If we updated our function to return a little HTML, then in the cells for the users_with_badge column we would actually see the raw HTML code. Not great. So we can add the property allow_tags to take care of that.
Editing Stuff
We've seen a bunch of ways to control how existing data is displayed, but what about editing it? Of course, from the list view of our badges we can just click on any item in the list, and Django will take us to the change form where we can edit to our heart's content. By default it looks like this:
@admin.register(Poll)
class PollAdmin(admin.ModelAdmin):
    ...
We are customizing the list view as we did in the previous section. Here is a screenshot of
what the list view will now look like:
1. First, create an inline class for the PollItem model:

class PollItemInline(admin.TabularInline):
    model = PollItem
2. Next up is to reference the above inline from our PollAdmin. Add this bit of code to
the top of the class:
inlines = [PollItemInline,]
We should also point out that an InlineModelAdmin can be customized in much the same way that a regular ModelAdmin can. For example, if we don't want our pesky administrators to modify the number of votes for a PollItem, we could modify the PollItemInline class to look like:
class PollItemInline(admin.TabularInline):
    model = PollItem

    readonly_fields = ('votes',)
    ordering = ('-votes',)
Note we also decided to change the ordering to show the items with the most votes at the top.
The complete list of options for InlineModelAdmins is here.
Next all we have to do is add it to our INSTALLED_APPS list somewhere above the
django.contrib.admin app. The two apps to add are:
'django_admin_bootstrapped.bootstrap3',
'django_admin_bootstrapped',
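For reference, a sketch of how that ordering might look in settings.py - only the first three entries matter here, the rest of the project's apps stay as they are:

```python
INSTALLED_APPS = (
    'django_admin_bootstrapped.bootstrap3',
    'django_admin_bootstrapped',
    'django.contrib.admin',
    # ... the rest of the project's apps, unchanged ...
)
```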
@admin.register(User)
class UserAdmin(admin.ModelAdmin):
    list_display = ('name', 'email', 'rank', 'last_4_digits', 'stripe_id', )
We've seen this before. It just controls what values are displayed in the list view of all our payments / users. Now we can further customize the change form, which is the form used to edit an individual user. Let's do that by using fieldsets. fieldsets are a way to break up the form into several different / unique sections. To better understand, have a look at the following form, which creates a set of three <fieldset>s called User Info, Billing, and Badges.
As you can see, the form is broken up into three clearly marked sections, each with a title and some fields underneath. We can achieve this by adding the fieldsets attribute to our UserAdmin class, like so:
@admin.register(User)
class UserAdmin(admin.ModelAdmin):
    list_display = ('name', 'email', 'rank', 'last_4_digits', 'stripe_id', )

    fieldsets = (
        ('User Info', {'fields': ('name', 'email', 'rank',)}),
        ('Billing', {'fields': ('stripe_id',)}),
        ('Badges', {'fields': ('badges',)}),
    )
Notice that fieldsets is a tuple of two-tuples. Each two-tuple contains the title of the section and a dictionary of attributes for that section. In our case the only attribute we used was the fields attribute, which lists the fields to be displayed in the fieldset. More information about fieldsets can be found here.
With that, we now have a decent looking interface to administrate our users. However, we can make the admin interface even more useful. Let's look at exactly how to do that next.
Resetting passwords
Someone is bound to call / email and say, "Your password reset isn't working, and I can't get in the site." This usually means they can't remember the password they put in when they reset their password 30 seconds ago. What we want to do is reset the password for them. So let's add that capability to the admin view.
Now if we were using Django's default User model, password resets are included out of the box. You can navigate to http://127.0.0.1:8000/admin/auth/user/, select a user, and you will see a change password link. Pretty easy.
However, we are using a custom user model - the one in payments.models.User. And the password reset form doesn't work for custom user models. If you want a quick and dirty hack to get it to work, check out this Stack Overflow question - but let's do something a bit more fun and learn more about customizing the admin interface along the way.
So what are we going to do? Simple: We are going to modify the change form for payments.models.User to use Angular to make a REST call to reset a user's password. In order to accomplish this amazing feat, we need to do the following:
1. Customize the change_form template for payments / users to add a change password link.
2. Create a REST API to reset the password.
3. Integrate angular.js into the change_form so we can call our REST API.
4. Provide some nice status messages.
Let's go through each of these topics in a bit more detail.
Extend the change_form.html for the entire admin site. (Useful if you want to add something to every single change form for all your models.) To do this, add a templates/admin/change_form.html.
Extend the change_form.html for a particular Django app. If we wanted certain editing functionality for every model in our djangular_polls app, we would choose this option. To do this, add a templates/admin/djangular_polls/change_form.html.
Extend the change_form.html for a particular model - i.e., for payments.models.User. This is exactly what we'll do. To do this we just need to create an HTML template named change_form.html and put it in templates/admin/payments/user/change_form.html. This way we will only override the change form for the User model.
{% extends "admin/change_form.html" %}
{% load staticfiles %}
{% load i18n admin_urls %}

{% block object-tools-items %}
  <li class="dropdown" id="menuReset" is-open="isopen">
    <a class="dropdown-toggle" href
       id="navLogin">Reset Password</a>
    <div class="dropdown-menu" style="padding:17px;">
      <form class="form" id="resetpwd" name="resetpwd" method="post"
            ng-submit="resetpass('{{original.id}}')" >
        {% csrf_token %}
        <input name="pass" id="pass" type="password"
               placeholder="New Password" ng-model="pass" required>
        <input name="pass2" id="pass2" type="password"
               placeholder="Repeat Password" ng-model="pass2"
               required><br>
        <button type="submit" id="btnLogin" class="btn">
          Reset Password</button>
      </form>
    </div>
  </li>
  {{ block.super }}
{% endblock %}
Line 1 - Notice that we are extending admin/change_form.html, which is the default change form that ships with Django's admin. (Actually, since we installed django-admin-bootstrapped, the change_form.html we end up extending is that app's Bootstrap-styled version of the default form.)
urlpatterns = patterns(
    'payments.json_views',
    url(r'^users$', 'post_user'),
    url(r'^users/password/(?P<pk>[0-9]+)$',
        json_views.ChangePassword.as_view(),
        name='change_password'),
)
Notice that for the password route we are using the class-based view json_views.ChangePassword, which looks like:
class ChangePassword(generics.GenericAPIView):
    """
    Change password of any user if superadmin.

    * pwd
    * pwd2
    """
    permission_classes = (permissions.IsAdminUser,)
    serializer_class = PasswordSerializer

    def put(self, request, pk, format=None):
        user = self.get_object(pk)
        serializer = PasswordSerializer(user, data=request.DATA)
        if serializer.is_valid():
            serializer.save()
            return Response("Password Changed.")
        return Response(serializer.errors,
                        status=status.HTTP_400_BAD_REQUEST)
class PasswordSerializer(serializers.Serializer):
    """
    Reset password serializer
    """
    password = serializers.CharField(
        max_length=PASSWORD_MAX_LENGTH
    )
    password2 = serializers.CharField(
        max_length=PASSWORD_MAX_LENGTH,
    )

    def validate(self, attrs):
        pwd = attrs['password']
        pwd2 = attrs['password2']
        if pwd2 != pwd:
            raise serializers.ValidationError("Passwords don't match")

        return attrs
And that's it. We now have a fully functioning REST API for changing passwords. You may want to test it out by going to http://127.0.0.1:8000/api/v1/users/password/1 and using the DRF form.
Now back to the front end to add Angular support and call our newly minted API.
{% block footer %}
{{ block.super }}
<script src="{% static "js/angular.min.js" %}"
type="text/javascript"></script>
<script src="{% static "js/ui-bootstrap-tpls-0.11.0.min.js" %}"
type="text/javascript"></script>
<script src="{% static "js/admin.js" %}"
type="text/javascript"></script>
{% endblock %}
Make sure you also add the UI-Bootstrap library to your static/js folder. You can grab it here (make sure you get version 0.11.0) and then save it to the correct folder.
Now we need to add our ng-app and our ng-controller directives to the HTML page. To ensure we can use Angular functionality throughout, we want to wrap the entire page in our ng-app / ng-controller directives. Due to the way the admin/change_form.html template is set up, it's not very straightforward how to do that.
First, we will extend the navbar block and add our div like so:
{% block navbar %}
<div ng-app="adminApp" ng-controller="AdminCtrl">
{{ block.super }}
{% endblock %}
Notice we didn't close the div. We will close it in the footer block. This in effect will create a div that wraps almost the entire page. It's a bit odd to do things this way, but we are bound by the blocks that are exposed in admin/change_form.html. We could also choose not to inherit from admin/change_form.html and instead just create our own template from scratch, but even though that would make for a cleaner way to declare ng-app and ng-controller, that's a bit overkill for our purposes.
Putting it all together, the full templates/admin/payments/user/change_form.html now looks like this:
{% extends "admin/change_form.html" %}
{% load staticfiles %}
{% load i18n admin_urls %}
{% block navbar %}
<div ng-app="adminApp" ng-controller="AdminCtrl">
{{ block.super }}
{% endblock %}
{% block object-tools-items %}
<li class="dropdown" id="menuReset" is-open="isopen">
<a class="dropdown-toggle" href
id="navLogin">Reset Password</a>
<div class="dropdown-menu" style="padding:17px;">
<form class="form" id="resetpwd" name="resetpwd" method="post"
ng-submit="resetpass('{{original.id}}')" >
{% csrf_token %}
<input name="pass" id="pass" type="password"
placeholder="New Password" ng-model="pass" required>
<input name="pass2" id="pass2" type="password"
placeholder="Repeat Password" ng-model="pass2"
required><br>
<button type="submit" id="btnLogin" class="btn">
Reset Password</button>
</form>
</div>
</li>
{{ block.super }}
{% endblock %}
{% block footer %}
</div> <!-- closes the ng-app div -->
{{ block.super }}
<script src="{% static "js/angular.min.js" %}"
type="text/javascript"></script>
<script src="{% static 'js/ui-bootstrap-tpls-0.11.0.min.js' %}"
type='text/javascript'></script>
<script src="{% static "js/admin.js" %}"
type="text/javascript"></script>
{% endblock %}
Fire up the page in the admin view and click on view source or inspect element in your browser. You should see our ng-app directive just under the content div.
Next, in static/js/admin.js, declare the Angular module and its config:

var adminApp = angular.module('adminApp', ['ui.bootstrap']);

adminApp.config(function($interpolateProvider, $httpProvider) {
    $interpolateProvider.startSymbol('[[').endSymbol(']]');
    $httpProvider.defaults.headers.common['X-CSRFToken'] =
        $('input[name=csrfmiddlewaretoken]').val();
});
In the first line we are injecting the ui.bootstrap module into our adminApp (which gives us access to Bootstrap components implemented in Angular). Everything else we have covered previously in the Angular chapters, so it should be a review.
Let's create the resetpass function in our AdminCtrl (in js/admin.js). But first we are going to need an Angular factory to call our REST endpoint. Let's call it AdminUserFactory:
adminApp.factory("AdminUserFactory", function($http) {
    var factory = {};
    factory.resetPassword = function(data) {
        var pwdData = {password : data.pass, password2 : data.pass2};
        return $http.put("/api/v1/users/password/" + data.user, pwdData)
            .then(function(response) {
                return response;
            });
    };

    return factory;
});
We have seen this type of factory before. Basically we are just building the url api/v1/users/password/<user id> and sending a PUT request with the password data.
Now we can hook the factory into our AdminCtrl controller and create the resetpass function:
adminApp.controller('AdminCtrl', function($scope, AdminUserFactory) {

    $scope.resetpass = function(userId) {
        var data = {
            'user': userId,
            'pass' : $scope.pass,
            'pass2': $scope.pass2
        };
        return AdminUserFactory.resetPassword(data);
    };

});
Above we inject the AdminUserFactory and call it with our resetpass functionality. You can test the admin form now, and it should successfully reset the password for the user. However, from the UI you won't receive any success or failure messages, so let's take care of that as well.
WARNING: In case it's not apparent, with this technique we are sending the passwords in clear text back to the server. Thus we should really only do this if we are operating over an HTTPS connection, so we can ensure our passwords are encrypted while sent across the wire.
To display those messages, we extend the form_top block of the change form and add a Bootstrap alert:

{% block form_top %}
<div class="alert" ng-class="alertClass"
ng-show="afterReset">[[ msg ]]</div>
{% endblock %}
Line 2 - Bootstrap alerts use one of four classes for styling - alert-info, alert-warning, alert-success, or alert-danger. Each class simply changes the background color of the alert from white to yellow to green to red, respectively. Thus if we successfully reset the password, we want to use the alert-success class on our alert. We'll use the alert-danger class if we fail to update the password. Somewhere in AdminCtrl we will set the value of alertClass to the appropriate class so things will be displayed with the appropriate background color.
Line 3 - Here we are using a simple ng-show to control if the alert is displayed or not. And the text in the alert will be tied to $scope.msg.
Now let's update the AdminCtrl:
adminApp.controller('AdminCtrl', function($scope, AdminUserFactory) {

    $scope.afterReset = false;

    $scope.resetpass = function(userId) {
        $scope.afterReset = false;
        var data = {
            'user': userId,
            'pass' : $scope.pass,
            'pass2': $scope.pass2
        };
        AdminUserFactory.resetPassword(data)
            .then(showAlert, showAlert);
    };

    var showAlert = function(data) {
        $scope.afterReset = true;
        if (data.status == 200) {
            $scope.alertClass = "alert-success";
            $scope.pass = "";
            $scope.pass2 = "";
        } else {
            $scope.alertClass = "alert-danger";
        }
        $scope.msg = data.data;
    };

});
Okay. There's a fair bit going on there, so let's break it down. The key piece is the showAlert handler: it is used for both the success and the failure case, it flips afterReset so the alert becomes visible, and it picks the alert class based on the response status. One thing to watch out for: when validation fails, DRF will return an array of error messages linked to each field that fails validation / has an error (for example ["Passwords don't match"]). We can add a special case in our showAlert function to handle this.
Replace:

$scope.msg = data.data;

With:
var msg = data.data;
if (typeof msg !== 'string') {
    msg = "";
    angular.forEach(data.data, function(errors) {
        angular.forEach(errors, function(err) {
            msg += err + " ";
        });
    });
}
$scope.msg = msg;
This basically says, "If we get back a string (which is the success case) then just display it, otherwise loop through the list of errors and display them all."
It would be nice to hide the password reset form after we are finished, so let's make that happen. This is where ui-bootstrap really helps. If you look back at our HTML for the menuReset list item, you will see:
<li class="dropdown" id="menuReset" is-open="isopen">
The class dropdown is actually an Angular directive defined in ui-bootstrap that allows us to programmatically control if the dropdown is displayed or hidden. is-open allows you to specify the scope variable that will control this. Since we have set it to isopen, all we have to do is set $scope.isopen to false in our controller to hide the dropdown, or true to show it. Thus we add $scope.isopen = false to the end of our showAlert function and that will close the dropdown menu. Now if we put it all together, our controller should look like this:
adminApp.controller('AdminCtrl', function($scope, AdminUserFactory) {

    $scope.afterReset = false;
    $scope.isopen = false;

    $scope.resetpass = function(userId) {
        $scope.afterReset = false;
        var data = {
            'user': userId,
            'pass' : $scope.pass,
            'pass2': $scope.pass2
        };
        AdminUserFactory.resetPassword(data)
            .then(showAlert, showAlert);
    };

    var showAlert = function(data) {
        $scope.afterReset = true;
        if (data.status == 200) {
            $scope.alertClass = "alert-success";
            $scope.pass = "";
            $scope.pass2 = "";
        } else {
            $scope.alertClass = "alert-danger";
        }

        var msg = data.data;
        if (typeof msg !== 'string') {
            msg = "";
            angular.forEach(data.data, function(errors) {
                angular.forEach(errors, function(err) {
                    msg += err + " ";
                });
            });
        }

        $scope.msg = msg;
        $scope.isopen = false;
    };

});
And there you have it. A fully functioning Angular-enabled password reset function for a custom model in the admin view. That was a lot of work for a password reset, but hopefully you learned a good deal about the various ways you can customize the admin view and extend it to do whatever you need.
Oh, and one more thing - since we are using custom user models, we may want to hide Django's auth models from the admin view. We can do that by simply adding the following lines to payments/admin.py:
from django.contrib.auth.models import Group
from django.contrib.auth.models import User as DjangoUser
from django.contrib.sites.models import Site

admin.site.unregister(DjangoUser)
admin.site.unregister(Group)
admin.site.unregister(Site)
Conclusion
In this chapter we have talked about a number of ways to modify the admin interface. Let's review quickly.
Register a model with the admin site using the @admin.register decorator; with no other options, Django gives you default add / change forms:

@admin.register(User)
class UserAdmin(admin.ModelAdmin):
    pass
class UserAdmin(admin.ModelAdmin):
    list_display = ('name', 'email', 'rank', 'last_4_digits',
                    'stripe_id', )
Remember that the list of fields can be a field or a callable. For example, we can create a
thumbnail view with the following callable:
def thumbnail(self):
    if self.img:
        return u'<img src="%s" width="100" height="100" />' % (self.img.url)
    else:
        return "no image"

thumbnail.allow_tags = True
thumbnail.short_description = "badges"
Control ordering of the list with the ordering attribute, using - for reverse ordering, i.e.:

ordering = ('-created_at', )
Break the change form up into sections with fieldsets:

fieldsets = (
    ('User Info', {'fields': ('name', 'email', 'rank',)}),
    ('Billing', {'fields': ('stripe_id',)}),
    ('Badges', {'fields': ('badges',)}),
)
Edit related models inline on the parent's change form by defining an InlineModelAdmin and hooking it up with inlines:

class PollItemInline(admin.TabularInline):
    model = PollItem


@admin.register(Poll)
class PollAdmin(admin.ModelAdmin):
    inlines = (PollItemInline,)
We can control which fields are read-only and thus not editable with the readonly_fields attribute:

class PollItemInline(admin.TabularInline):
    model = PollItem

    readonly_fields = ('votes',)
Exercises
1. We didn't customize the interfaces for several of our models, as the customization would be very similar to what we have already done in this chapter. For extra practice, customize both the list view and the change form for the following models:
Main.Announcements
Main.Marketing Items
Main.Status reports
Note both Announcements and Marketing Items will benefit from the thumbnail view and ImageField setup that we did for Badges.
2. For our Payments.Users object in the change form you will notice that the badges section isn't very helpful. See if you can change that section to show a list of the actual badges, so it's possible to know what badges you are adding / removing from the user.
Chapter 18
Testing, Testing, and More Testing
Why do we need more testing?
Throughout this book I have been singing the praises of unit testing. It has helped us catch some errors in our application, made it easier to refactor and change our application as our requirements changed - and hopefully it has given us a sense of security about the working nature of our application.
However, this security may be a bit unfounded, as we have only really been testing half of the application - namely the Python half. In the more recent chapters we have added a lot of front-end Angular code, which is becoming increasingly important to the overall functionality of our application. However, at this point we haven't done much testing for that functionality.
In this chapter we are going to cover strategies to test the front-end (and the back-end) of our application as one whole unit. This is commonly referred to as GUI testing or end-to-end (E2E) testing.
                    Number of Tests    Percentage
Unit Tests           32                 68%
Integration Tests    15                 32%
GUI Tests             0                  0%
Basically we have 47 tests in our ../tests folder. 15 of those are integration tests since they deal with the database and are not strictly unit tests. But honestly, the split is a bit arbitrary (as we discussed in the Unit Testing chapter at the beginning of this course).
Looking at the percentages, we can see that we are not too far off from the suggested numbers in the Agile Testing Pyramid. We just need to add the GUI tests, roughly 12 of them. Do note though, it's more important to look at our requirements and make sure we have tests for those requirements than to just say, "we are going to add 12 tests because that will ensure we have 20% GUI tests." Just keep in mind that if we start having significantly fewer or more tests than 12, we should be asking ourselves questions about whether we have the appropriate number of GUI tests.
Install Selenium
First things first, we need to get Selenium (the tool we use for automated GUI testing) installed:

$ pip install selenium

Next, let's reorganize our ../tests directory into unit and GUI tests:
__init__.py
gui
    __init__.py
    testGui.py
unit
    __init__.py
    contact
        __init__.py
        testContactModels.py
    main
        __init__.py
        testJSONViews.py
        testMainPageView.py
        testSerializers.py
    payments
        __init__.py
        testCustomer.py
        testForms.py
        testUserModel.py
        testViews.py
Under our tests directory we created two sub-folders, gui and unit (don't forget that each folder needs an __init__.py). Then we can run the tests we want by typing the following from the django_ecommerce directory:
1. Unit tests: ./manage.py test ../tests/unit
2. GUI tests: ./manage.py test ../tests/gui
3. All tests: ./manage.py test ../tests
With that out of the way, we can write our first test case.
Create the following in tests/gui/testGui.py:

from django.contrib.staticfiles.testing import StaticLiveServerTestCase
from selenium import webdriver


class LoginTests(StaticLiveServerTestCase):

    @classmethod
    def setUpClass(cls):
        cls.browser = webdriver.Firefox()
        super(LoginTests, cls).setUpClass()

    @classmethod
    def tearDownClass(cls):
        cls.browser.quit()
        super(LoginTests, cls).tearDownClass()

    def test_login(self):
        self.browser.get('%s%s' % (self.live_server_url,
                                   '/sign_in'))
The test_login method is our first test. Selenium has a rich API, which we will discuss more in this chapter. The first API call you need to know about is get, which opens a URL in the browser. self.live_server_url is provided by Django's LiveServerTestCase and points to the root URL for the server started by the test case.
Now run the test with the following command from django_ecommerce:

$ ./manage.py test ../tests/gui

You should see the Firefox browser open up and show the sign in page briefly before closing. Be sure that you see the correctly formatted page with the appropriate looking CSS styles. If you see a page that looks like it doesn't have CSS, you probably didn't configure your static files correctly.
browser = webdriver.Firefox()

Once we have a reference called browser from the line above, we can then instruct the webdriver to perform various actions. We can break these actions up into three main categories (there are more, but let's start with these three first):
1. Locating elements
2. Acting on those elements
3. Waiting for things to happen
Jumping back to the previous example:

self.browser.get('%s%s' % (self.live_server_url, '/sign_in'))

This is an example of navigation, as it causes the browser to navigate to the specified URL (in our case the sign_in page). Once we are on that page, we probably want to enter a username and password. To do that, we have to locate the appropriate elements, which brings us to...
Locating Elements
The webdriver in Selenium exposes an API that lets you interact with a webpage's DOM (e.g., HTML structure) to locate elements. As soon as you locate an element, a WebElement is returned that exposes element-specific interactions such as send_keys, clear, click, etc.
In Selenium there are several ways to locate elements, but you typically need to concern yourself with only two or three of these methods.
Locating an element by ID
This should be your default, go-to way to locate an element. Why? Because locating an HTML element by its HTML ID is about the fastest way to get the browser to parse the DOM and actually locate the element.
Here's how you do it:
email_textbox = self.browser.find_element_by_id("id_email")
Likewise, for the password text box we would use this line:
pwd_textbox = self.browser.find_element_by_id("id_password")
The above two examples will tell the browser to search through the DOM for the element with
the ID specified (remember in HTML, IDs should be unique) and return it.
Locating elements that don't have IDs
Sometimes we want to interact with elements that don't have IDs. For example, maybe we want to check to see if the sign-in page has a header that says Sign In - an element with no ID of its own.
Since this element doesn't have an ID, let's look at the other locators provided by Selenium, namely:
find_element_by_name
find_element_by_xpath
find_element_by_link_text
find_element_by_partial_link_text
find_element_by_tag_name
find_element_by_class_name
find_element_by_css_selector
Each one of these will use a different attribute to search through the DOM and find an element. However, as a best practice, there are only two of these locators that I would recommend using:
find_element_by_name
find_element_by_css_selector
All the other selectors have issues; they are either potentially slow (like find_element_by_xpath), generally not specific enough (like find_element_by_tag_name or find_element_by_class_name), or vulnerable to breaking when/if you decide to translate your application (like find_element_by_link_text and find_element_by_partial_link_text).
So that leaves us with:
<label for="id_email">Email:</label>
<div class="input">
    <input id="id_email" name="email" type="email">
</div>
<div class="custom-error ng-hide"
     ng-show="signin_form.email.$dirty &&
              signin_form.email.$invalid">
    Email is invalid: <span ng-show="signin_form.email.$error.required"
        class="ng-hide">value is required.</span>
    <span ng-show="signin_form.email.$error.email"
        class="ng-hide">Input a valid email address.</span>
</div>
</div>
<div class="clearfix">
    <label for="id_password">Password:</label>
    <div class="input">
        <input id="id_password" name="password" type="password">
    </div>

self.browser.find_element_by_css_selector("[type=email][name=email]")
# returns the email <input> element above
self.browser.find_element_by_css_selector("#id_password")  # this is the same as find_element_by_id
self.browser.find_element_by_css_selector(".ng-hide")  # find element with the ng-hide class
Basically anything that you can find/style with your CSS you can locate with Selenium by
using find_element_by_css_selector! Pretty cool.
email_textbox = self.browser.find_element_by_id("id_email")

- you will have a WebElement that you can interact with. In Selenium the WebElement has a rich API, which is described here. We will focus on some of the more common methods in the API.
For our purposes, we want to sign into our application, so we are going to need to input an email address and password in the appropriate text boxes.
We can do that with the following bit of code. Update testGui.py:
We can do that with the following bit of code. Update testGui.py:
def test_login(self):
    self.browser.get('%s%s' % (self.live_server_url, '/sign_in'))
    email_textbox = self.browser.find_element_by_id("id_email")
    pwd_textbox = self.browser.find_element_by_id("id_password")
    email_textbox.send_keys("[email protected]")
    pwd_textbox.send_keys("password")
Notice the last two lines, where we used the send_keys() function to type a value into the appropriate text boxes. send_keys simulates a user typing into the text box, so if you run the test now with the following command:

$ ./manage.py test ../tests/gui

You should actually see the values being typed into the appropriate text boxes.
Next step is to click on the Sign In button. We can do that with the following code, which
first locates the element, and then clicks it:
self.browser.find_element_by_name("commit").click()
Here we have done two things in one line just to show that you can chain functions in Selenium if you so desire. Also, since we are submitting a form, we can choose to use the
submit() function, which will find the form that encompasses the element and call submit
on it directly:
self.browser.find_element_by_name("commit").submit()
Again, update the code. Regardless of which method you choose, if you run the test again,
you should see the following output:
OK
Destroying test database for alias 'default'...
Notice that Selenium outputted to us what gets returned when we submit the form/click the button. This is helpful for debugging, and it shows us that we are getting an error message because the user doesn't exist in the database. This is true, because we are starting with a fresh database where the test user doesn't exist. We could fix that by creating a user in our setUpClass function. But first, let's convert this to a negative test so we can verify that we do indeed get an error message if we input invalid data. This will also show us how to verify information on the screen.
To do that, let's change the test_login() function like so:
def test_failed_login(self):
    self.browser.get('%s%s' % (self.live_server_url, '/sign_in'))
    email_textbox = self.browser.find_element_by_id("id_email")
    pwd_textbox = self.browser.find_element_by_id("id_password")
    email_textbox.send_keys("[email protected]")
    pwd_textbox.send_keys("password")

    # click sign in
    self.browser.find_element_by_name("commit").submit()

    invalid_login = self.browser.find_element_by_css_selector(".errors")
    self.assertEquals(invalid_login.text,
                      "Incorrect email address or password")
1. We renamed the test to test_failed_login, so the name describes what we are actually checking.
2. We added the last three lines, which grab the errors div and check that the text in the div is equal to the string we expected.
We could do a slight variation to check that our email validator works correctly like so:
def test_failed_login_invalid_email(self):
    self.browser.get('%s%s' % (self.live_server_url, '/sign_in'))
    email_textbox = self.browser.find_element_by_id("id_email")
    pwd_textbox = self.browser.find_element_by_id("id_password")
    email_textbox.send_keys("test@")
    pwd_textbox.send_keys("password")

    # click signin
    self.browser.find_element_by_name("commit").submit()

    invalid_login = self.browser.find_element_by_css_selector(".errors")
    self.assertEquals(invalid_login.text,
                      "Email: Enter a valid email address.")
This is basically the same test, but we change the text we send to email_textbox and change the associated check for the error message. You could do the same thing to check for other validations such as password required, email required, etc.
Let's move on to the positive case where login is successful. First, let's create a user in the setUp() function.
def setUp(self):
    self.valid_test_user = User.create(
        "tester", "[email protected]", "test", 1234)

def tearDown(self):
    self.valid_test_user.delete()
Above we create the user, and then clean up the created user. Now to write our login test:
def test_login(self):
    self.browser.get('%s%s' % (self.live_server_url, '/sign_in'))
    email_textbox = self.browser.find_element_by_id("id_email")
    pwd_textbox = self.browser.find_element_by_id("id_password")
    email_textbox.send_keys("[email protected]")
    pwd_textbox.send_keys("test")

    # click sign in
    self.browser.find_element_by_name("commit").submit()

    self.assertTrue(
        self.browser.find_element_by_id("user_info").is_displayed())
Again, very similar to the previous tests, but here on the last line we are trying to find an element with the ID of user_info. Then we call is_displayed() on that element, which will return true if it is visible to the user. Since we know the User Info box only shows up after a successful login, this is a good visual cue we can use to verify that our test is running correctly.
Selenium Best Practice #1: You should add a check for a visual cue after each page transition or AJAX action (that should change the display). This way our tests will start to function in the same way a manual tester would test your application, e.g., do a bit of work, check that the application is responding correctly, do a bit more work, check again, etc.
Implicit Waits
Webdriver provides the function implicitly_wait() that will set up an implicit wait for
each locator call. We can enable this in our setUpClass method by changing it to read as
follows:
@classmethod
def setUpClass(cls):
    cls.browser = webdriver.Firefox()
    cls.browser.implicitly_wait(10)
    super(LoginTests, cls).setUpClass()
Line 4 is the important one here. We are telling the webdriver that if at first it doesn't find an element (when we call any of our find_element functions), then keep retrying to find that element for up to ten seconds. If the webdriver finds the element in that time span, then everything is good and the script will continue; if not, then the script will fail.
In the interest of making our tests less brittle and more reliable, it is a good idea to always include a call to implicitly_wait. However, there is one drawback: if you have an
implicitly_wait call and you want to check that an element is not displayed, you would
write code like this:
self.assertFalse(self.browser.find_element_by_id("user_info").is_displayed())
Keep in mind that in this case, it would take the full 10 seconds for this line to complete
because of the implicitly_wait call we made in our setUpClass.
Explicit Waits
For this reason - waiting - some people don't like to use implicit waits. There are also times when you know a particular operation may take longer than other things on the page (perhaps because you're accessing an external API). In those cases (or if you just want to be explicit, as in The Zen of Python), you can use an explicit wait.
Going back to our test_failed_login(), let's put an explicit wait in when checking for errors. The entire updated test case would look like this:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC


def test_failed_login(self):
    self.browser.get('%s%s' % (self.live_server_url, '/sign_in'))
    email_textbox = self.browser.find_element_by_id("id_email")
    pwd_textbox = self.browser.find_element_by_id("id_password")
    email_textbox.send_keys("[email protected]")
    pwd_textbox.send_keys("password")

    # click sign in
    self.browser.find_element_by_name("commit").submit()

    invalid_login = WebDriverWait(self.browser, 10).until(
        EC.visibility_of_element_located((By.CSS_SELECTOR, ".errors")))
    self.assertEquals(invalid_login.text,
                      "Incorrect email address or password")
Lines 1 - 3 - We need these additional imports; put them at the top of your module.
WebDriverWait(self.browser, 10)
Alone this command will just cause the script to pause for ten seconds - not a great idea (as
we already discussed). However, we can chain the until function to our wait, which will
cause the wait to stop waiting as soon as the until function returns a truthy result. until
takes a callable that accepts a single argument.
Let's look at a quick example:
WebDriverWait(self.browser,10).until(lambda x: True)
This code will immediately exit out of the WebDriverWait since the result is truthy. Then,
when WebDriverWait exits, it will return the value returned by the callable passed to until,
which in this case is True.
Usually, we want to wait until something happens on the screen - like an element becoming present, or visible, or not visible, or something like that. This is where expected_conditions comes into play. expected_conditions is a Selenium-provided module that allows us to do a number of checks on the web page. In our specific example, we are using this bit of code (where EC is the expected_conditions module):
EC.visibility_of_element_located((By.CSS_SELECTOR, ".errors"))
Page Objects
Now that we know how to find elements, wait for them and act on them, we can do most
anything we need to do with Selenium. At this point most people just start writing a lot
of test cases, not giving much thought to maintaining and/or updating the test cases in the
future. Since we have already learned that GUI tests are expensive to maintain, it's worth looking at how we might reduce the cost of maintaining these test cases.
One way to do that is to use the Page Objects pattern.
As the name implies, using Page Objects basically means creating an object for each page
of your application, and encapsulating the GUI testing code for the corresponding page in
that object. For example we could create a login page object and put all the functionality
associated with login in that page object, like filling our username and password, checking
error messages and so on.
This analogy works pretty well with a simple login page but with AJAX-rich applications, the
name Page Object might be a bit misleading, because often a page is too big to encapsulate
in a single object. While our sign_in page maps well to the term page object, our user
page probably doesn't. This is because our user page consists of multiple separate pieces of functionality - i.e., announcements, status updates, badges, user info, polls, etc. Thus, for the user page, we should look to create a page object for each of the separate pieces of functionality (such as user polls, status updates, etc.).
Again, the point is to make things more maintainable and easy to understand. Motivation for
the need to use page objects can be obtained by looking at the GUI tests we have written thus
far in the chapter, like:
def test_failed_login(self):
    self.browser.get('%s%s' % (self.live_server_url, '/sign_in'))
    email_textbox = self.browser.find_element_by_id("id_email")
    pwd_textbox = self.browser.find_element_by_id("id_password")
    email_textbox.send_keys("[email protected]")
    pwd_textbox.send_keys("password")

    # click sign in
    self.browser.find_element_by_name("commit").submit()

    invalid_login = self.browser.find_element_by_css_selector(".errors")
    self.assertEquals(invalid_login.text,
                      "Incorrect email address or password")

def test_failed_login_invalid_email(self):
    self.browser.get('%s%s' % (self.live_server_url, '/sign_in'))
    email_textbox = self.browser.find_element_by_id("id_email")
    pwd_textbox = self.browser.find_element_by_id("id_password")
    email_textbox.send_keys("test@")
    pwd_textbox.send_keys("password")

    # click signin
    self.browser.find_element_by_name("commit").submit()

    invalid_login = self.browser.find_element_by_css_selector(".errors")
    self.assertEquals(invalid_login.text,
                      "Email: Enter a valid email address.")
We can see there is a lot of duplication of code. And just like with our production code, if we remove this duplication and share code, it will make it cheaper to maintain our GUI tests. To do that, we use the Page Object pattern, of course.
Let's get a clearer picture of how this works by creating a sign_in page object.
First, let's create a new folder - tests/gui/pages/. Be sure to put the __init__.py file in there, and then create our first page object class in a new file called testPage.py. Starting off with a base class, let's create the SeleniumPage class:
class SeleniumPage(object):
    '''Place to allow for any site-wide configuration you may want
    for your GUI testing.
    '''

    def __init__(self, driver, base_url, wait_time=10):
        self.driver = driver
        self.base_url = base_url
        self.wait_time = wait_time

Nothing fancy here - the class just allows us to set a few necessary variables. Next, create a base class for the elements on the page:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC


class SeleniumElement(object):

    def __init__(self, locator):
        self.locator = locator

    def __get__(self, obj, owner):
        return WebDriverWait(obj.driver, obj.wait_time).until(
            EC.visibility_of_element_located(self.locator))
class Page(SeleniumPage):
    email = SeleniumElement((By.ID, "id_email"))

    def dummy_method(self):
        self.email
- the last line of the code above, where we call self.email, will call the __get__ method of email and pass in the Page instance for the obj parameter.
Why does this matter? For our purposes, it's quite handy, as it allows us an easy way to pass values (i.e. driver and wait_time) from the containing class (Page) to our element (email). This lets us set explicit wait times once per page object and not have to worry about setting it each time we try to access an element.
class SignInPage(SeleniumPage):

    email = SeleniumElement((By.ID, 'id_email'))
    password = SeleniumElement((By.ID, 'id_password'))
    submit = SeleniumElement((By.NAME, 'commit'))
    sign_in_title = SeleniumElement((By.CSS_SELECTOR, 'legend'))
    error_msg_elem = SeleniumElement((By.CSS_SELECTOR, '.errors'))

    def do_login(self, email, password):
        self.email.send_keys(email)
        self.password.send_keys(password)
        self.submit.submit()

    @property
    def error_msg(self):
        return self.error_msg_elem.text

    @property
    def rel_url(self):
        return '/sign_in'

    def go_to(self):
        self.driver.get('%s%s' % (self.base_url, self.rel_url))
        assert self.sign_in_title.text == "Sign in"
The go_to() method - for page objects that represent an actual page, it's helpful to have a go_to() function that will navigate to the page and verify that we are indeed on the correct page.
That's our page object. For clarification purposes, the complete page-object module (tests/gui/pages/testPage.py) is simply the SeleniumPage, SeleniumElement, and SignInPage classes shown above. Here is the complete test module (tests/gui/testGui.py) that uses it:
class LoginTests(StaticLiveServerTestCase):

    @classmethod
    def setUpClass(cls):
        cls.browser = webdriver.Firefox()
        cls.browser.implicitly_wait(10)
        super(LoginTests, cls).setUpClass()

    @classmethod
    def tearDownClass(cls):
        cls.browser.quit()
        super(LoginTests, cls).tearDownClass()

    def setUp(self):
        self.valid_test_user = User.create(
            "tester", "[email protected]", "test", 1234)
        self.sign_in_page = SignInPage(self.browser,
                                       self.live_server_url)

    def tearDown(self):
        self.valid_test_user.delete()

    def test_login(self):
        self.sign_in_page.go_to()
        self.sign_in_page.do_login("[email protected]", "test")
        self.assertTrue(
            self.browser.find_element_by_id("user_info").is_displayed())

    def test_failed_login(self):
        self.sign_in_page.go_to()
        self.sign_in_page.do_login("[email protected]", "password")
        self.assertEquals(self.sign_in_page.error_msg,
                          "Incorrect email address or password")

    def test_failed_login_invalid_email(self):
        self.sign_in_page.go_to()
        self.sign_in_page.do_login("test@", "password")
        self.assertEquals(self.sign_in_page.error_msg,
                          "Email: Enter a valid email address.")
467
48
49
50
51
52
53
class SeleniumPage(object):
'''Place to allow for any site-wide configuration you may want
for you GUI testing.
'''
54
55
56
57
58
59
60
61
class SeleniumElement(object):
62
63
64
65
66
67
68
69
70
71
72
73
class SignInPage(SeleniumPage):
74
75
76
77
78
79
80
81
82
83
84
85
86
@property
468
87
88
def error_msg(self):
return self.error_msg_elem.text
89
90
91
92
@property
def rel_url(self):
return '/sign_in'
93
94
95
96
def go_to(self):
self.driver.get('%s%s' % (self.base_url, self.rel_url))
assert self.sign_in_title.text == "Sign in"
As you can see, not only have we saved a lot of code, but our tests are pretty clear and readable now as well. Double win!
Believe it or not, that's 90% of what you need to know to successfully test your web page with Selenium. Of course, that remaining 10% can be pretty tricky. In order to help with that, the final section in this chapter will list a few gotchas you may run into when writing tests for our application.
How do you select from drop-downs such as expiration month/year on the registration page?
Keep in mind that Selenium works on the DOM. So when you call one of the driver.find_element_by_*() functions, Selenium will search through the entire page to find the element. But did you know that each WebElement also supports the find_element_by_* functions? Thus, to set a dropdown, find the dropdown element, then use that element to find the option you want, then call click. So if we wanted to select the year 2017 from our expiration year dropdown, the code would look like this:
dd = self.driver.find_element_by_id('expiry_year')
option = dd.find_element_by_css_selector("option[value='2017']")
option.click()
In the second line we are only searching through the HTML elements that are children of the expiry_year drop-down. This ensures we don't click on the wrong dropdown option.
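This isn't what the chapter's code uses, but it's worth knowing that Selenium also ships a Select helper that does the same job in one step; a minimal equivalent of the snippet above would be:

from selenium.webdriver.support.ui import Select

dd = Select(self.driver.find_element_by_id('expiry_year'))
dd.select_by_value('2017')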
@classmethod
def setUpClass(cls):
    profile = FirefoxProfile()
    profile.set_preference('geo.prompt.testing', True)
    profile.set_preference('geo.prompt.testing.allow', True)
    cls.browser = webdriver.Firefox(profile)
    cls.browser.implicitly_wait(10)
    super(RegistrationTests, cls).setUpClass()
FirefoxProfile() is the class that encapsulates all the browser-specific settings for Firefox. In setUpClass() we create the FirefoxProfile(), set the preferences that we need for our test, and then create a new Firefox webdriver, passing in the profile we just created.
Now you can control any of the hundreds of preferences that are available in Firefox.
email_textbox = self.driver.find_element_by_id("id_email")

self.driver.execute_script('''
    angular.element($("#id_email")).scope().geoloc = {
        'coords': {'latitude':'1', 'longitude':'2'}
    };
    angular.element(document.body).injector().get('$rootScope').$apply();''')
As for the second command in our execute_script call, we are getting Angular's $rootScope and calling $apply() (which you should always do when updating scope variables from outside of Angular).
After this bit of JavaScript is executed, we will have $scope.geoloc in our registrationCtrl
set to the value we specified, thus we can do the geolocation for the user with whatever lat /
long we decide to provide.
This is helpful as sometimes you may need to set certain Angular values to get this to work
correctly in your Selenium tests.
@classmethod
def setUpClass(cls):
    cls.browser = webdriver.Chrome()
    cls.browser.implicitly_wait(10)
    super(RegistrationTests, cls).setUpClass()
Likewise, just change the name to the browser that you want. Do note, however, that other browsers may have certain setup requirements. Details about what needs to be configured before using a particular browser can be found here:
IE
Chrome
Opera
Firefox
Safari
PhantomJS
mobile browsers
import os
import sys
import subprocess

from django.test.runner import DiscoverRunner
from django_ecommerce.guitest_settings import SERVER_ADDR


class LiveServerTestRunner(DiscoverRunner):

    # ... (setup_databases / teardown_databases overrides discussed below)

    def spawn_server(self):
        gui_settings = 'django_ecommerce.guitest_settings'
        server_command = ["./manage.py", "runserver",
                          SERVER_ADDR, "--settings=" + gui_settings]
        self.server_p = subprocess.Popen(
            server_command,
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
            close_fds=True,
            preexec_fn=os.setsid
        )
        print("server process started up... continuing with test execution")

    def kill_server(self):
        try:
            print("killing server process...")
            os.killpg(os.getpgid(self.server_p.pid), 15)
            self.server_p.wait()
        except:
            print("exception", sys.exc_info()[0])
In a nutshell, we create a custom test runner, and after we have created the test database in setup_databases, we use subprocess.Popen to launch a Django development server in a different process. Then in teardown_databases, before we delete the database, we kill off the development server.
Timing is important here, because we have used a custom settings file (shown later) that will tell our development server to connect to our test database. So we must be sure the development server starts after the database is created, and before we try to drop the database.
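The overridden setup_databases / teardown_databases methods themselves aren't reproduced in the listing above; a sketch of the wiring that the paragraph describes would look roughly like this (the method bodies are assumptions based on that description, not the book's verbatim code - DiscoverRunner does expose both hooks in Django 1.8):

from django.test.runner import DiscoverRunner


class LiveServerTestRunner(DiscoverRunner):

    def setup_databases(self, **kwargs):
        # create the test database first...
        old_config = super(LiveServerTestRunner, self).setup_databases(**kwargs)
        # ...then start the development server that connects to it
        self.spawn_server()
        return old_config

    def teardown_databases(self, old_config, **kwargs):
        # kill the development server before the database is dropped
        self.kill_server()
        super(LiveServerTestRunner, self).teardown_databases(old_config, **kwargs)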
The actual work of spawning the development server in a child process is housed in the spawn_server function. Here are the important pieces of that function:
We are using a settings file called django_ecommerce.guitest_settings, which houses the database connection definition and the address we want to use for our development server:
from django_ecommerce.settings import *  # start from the existing settings

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'test_django_db',
        'USER': 'djangousr',
        'PASSWORD': 'djangousr',
        'HOST': 'localhost',
        'PORT': '5432',
    }
}

SERVER_ADDR = "127.0.0.1:9001"
Notice that at the start we import the existing settings to make sure we have all the settings we might need. Also notice how we have set the database name to test_django_db. This is the name of the database that our test runner will generate. Since both our tests and our development server are using the same database, it makes it easier to set up test data in our tests. Bonus!
Make sure to create the new settings file as guitest_settings.py.
Coming back to our spawn_server function, the server_command list is the command line that we want to execute. Notice how we grab the SERVER_ADDR from our guitest_settings and also pass in guitest_settings as the custom settings file for our development server.
The subprocess.Popen call is how we spawn the child process. Basically we are setting the command to run in a child process and capturing stdout and stderr so they aren't displayed to the terminal. The last two options are necessary to properly clean up the process when we want to kill it later. For more information on the highly useful subprocess library, check out the official docs.
Finally, after we spawn the server, our tests will run. When we have completed all of our tests, we want to kill the development server, which is done in the kill_server function. Here we are using the special function os.killpg because when we run the initial manage.py runserver ... command, it actually spawns a couple of processes, so os.killpg will kill the whole process tree.
With this test runner, we no longer need to use the LiveServerTestCase because our test runner is firing up the live server for us. All we need to do is use the appropriate URL. We can get this in our tests by importing it directly from our custom settings file. A test using our LiveServerTestRunner could look like this:
class LoginTests(TestCase):

    def setUp(self):
        self.valid_test_user = User.create("tester",
            "[email protected]", "test", 1234)
        self.sign_in_page = SignInPage(self.browser,
            "http://" + SERVER_ADDR)
There are only two differences from our previous test case:
We are inheriting from a regular TestCase and not StaticLiveServerTestCase, and we import the URL for the server from our guitest_settings because our regular TestCase doesn't have an associated live_server_url.
Make sure to update all of the GUI tests using the above format.
With that, you can run your Selenium tests without having to worry about the limitations of the LiveServerTestCase. To run your tests using this test runner, use the following command:
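The exact command isn't reproduced here. Assuming the runner lives in, say, a tests/runner.py module (the module path is a placeholder, not the book's actual location), it would be along these lines, since Django's test command accepts a --testrunner option:

$ ./manage.py test tests/gui --testrunner=tests.runner.LiveServerTestRunner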
Conclusion
As you have seen, Selenium can be a very powerful way to execute GUI tests with Python. It integrates well into Django and makes controlling the browser relatively straightforward. We can easily find elements and act on them, and it has a simple API for waiting for things on the page to happen. While Selenium focuses mainly on basic web functionality, it can also test web 2.0 AJAX functionality without too much trouble.
The difficulties you run into will likely be around Selenium + Angular; while it's not perfect, it is getting better every day. Don't be afraid of the execute_script function, which lets you modify your Angular variables / values as necessary. Take some time to learn the features of Selenium and see if it's the right GUI testing tool for you. Otherwise you can always look at the Angular-specific testing tools, such as Protractor, which are mentioned in the "What about JavaScript testing" section of this chapter.
Exercises
1. Write a Page Object for the Registration Page, and use it to come up with tests for successful and unsuccessful registrations. You'll need to use the LiveServerTestRunner for the tests to execute successfully.
Chapter 19
Deploy
You, dear reader, have come a long way. We started with a simple login and registration page, and built the greatest Star Wars Fan Site the world has ever known. (Greatest being a somewhat subjective term.) Nevertheless, we built something cool. Pat yourself on the back. You did it!
But we're not quite done yet. It's time to share the site with the world. That means sticking it up on a server somewhere, maybe getting a domain name and, well, deploying the site. This chapter will walk you through what you need to know to get a Django site deployed on a server in the cloud.
Where to Deploy
There are literally thousands of different web hosts that you could use to deploy your Django app. But basically all of them fall into three different types:
Managed (Server/VPS)
Basically the same as a Server/VPS (which by contrast is often called Unmanaged Hosting), but the hosting company will provide support for OS upgrades, backups, etc. Because the company needs to support the server, there are generally some limitations as to what software and configurations you can install. Rackspace made a name for itself by providing Managed Servers/VPS.
PaaS
PaaS, or Platform as a Service, is a setup that attempts to remove the complexity of deployment from the user. Using a PaaS, such as Heroku, can make deployment as simple as pushing a commit to GitHub (after the prerequisite setup has been completed). Oftentimes a PaaS will offer additional features such as automatic scaling, backups, deployment, etc. They are a good option if you don't want to worry about deployment, but you generally have very limited flexibility when using a PaaS. Example PaaS providers include Heroku, Amazon's Elastic Beanstalk, and Google App Engine.
It's worth clicking some of the links above and exploring some of the options, but for now, we are going to stick with deployment on a Server/VPS. You will learn the most by going this route, and it's always good to understand how deployment works.
What to Deploy
Since we are going to use a VPS, we will start with a blank OS. To host a Django web app in production we will need three additional pieces of software (aka servers) to ensure the application will run without any issues in the more demanding environment (e.g., multiple users, more requests). The three (actually four) servers that we will need are:
1. Database Server - PostgreSQL and MongoDB; by now you're familiar with the database servers, as you've been using them in development as well.
2. Web Server / Proxy - In development we rely on the Django test server (i.e. manage.py runserver) to handle web requests and return results. The Django test server is great for development, but it's not meant to be deployed into production. So since we are going into production, we are going to use a server designed to handle multiple simultaneous requests, serve static files (i.e., JavaScript, CSS, images, etc.), and generally not fall over when multiple users start making requests. The particular server we are going to use for this is called Nginx. Nginx is a fast, powerful, and most importantly easy-to-configure HTTP server. In our setup it will be the first thing that responds when a user requests a web page, and it will either serve the static files directly to the user, or pass the request to our application server (see below), which will run the Python code and return the results to the user.
3. Application Server - Again, in development manage.py runserver supplies us with both a web server (to handle web requests) and an application server that runs the Django code and returns the results for a particular request. In production we will use a separate application server called Gunicorn. Gunicorn is a WSGI server written in Python that supports Django out of the box. This means we can use Gunicorn to host our Django application (much like the Django test server would). The difference is that Gunicorn is written with a production scenario in mind - it's more secure and will perform better than Django's test server.
When we have all of these servers set up correctly, the flow of a user's web request looks like this: the request hits Nginx first; Nginx either serves any static files directly or hands the request to Gunicorn; and Gunicorn runs our Django code and returns the response back through Nginx to the user.
That's how it all works. Now that we have a better idea of how the pieces fit together, let's jump right into the nitty-gritty of deployment.
DigitalOcean Setup
This will create a Droplet (a VPS) based on the settings you just specified. Setup should take
just a couple of minutes, and then you can access your droplet through SSH; DigitalOcean
will email you the root password and IP address. Once the creation is done, you have a freshly
minted VPS and root access.
You can now do whatever you want.
Configuring the OS
Now how do we install the software we need on the VPS?
First, SSH into your Ubuntu machine as root, then run an upgrade and install the necessary packages:

$ ssh root@<your-machines-ip>
$ apt-get update && apt-get upgrade
$ sudo apt-get install python-virtualenv libpq-dev python-dev \
      postgresql postgresql-contrib nginx git libpython3.4-dev

Here we installed nginx, postgres, git and virtualenv, plus the necessary support packages. This will ensure we have the necessary operating system packages available. Now we need to configure things.
Postgres setup
Let's work back to front, and configure the database first. These are the same instructions covered in the Upgrade chapter when we installed postgres on our dev machine, but we will repeat them here to make things easier to find:
1. Verify the installation is correct:

$ psql --version

2. Now that Postgres is installed, you need to set up a database user and create an account for Django to use. When Postgres is installed, the system will create a user named postgres. Let's switch to that user so we can create an account for Django to use:

$ sudo su postgres
$ createuser -P djangousr

Enter the password twice and remember it; for security reasons it's best not to use the same password you used in development. We will show you how to update your settings.py file accordingly later in the chapter.
4. Now, using the postgres shell, create a new database to use with Django. (Note: don't type the lines starting with a #; these are comments for your benefit.)

$ psql
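The SQL itself isn't reproduced above. Creating a database owned by the djangousr account would look something like this (the database name django_db comes from the pg_hba.conf line in the next step; treat the exact statement as an assumption rather than the book's verbatim listing):

-- create a database for Django and make djangousr its owner
CREATE DATABASE django_db OWNER djangousr;
-- leave the postgres shell when you are done
\q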
5. Then we can set up permissions for Postgres to use by editing the file /etc/postgresql/9.3/main/pg_hba.conf with vim. Just add the following line to the end of the file:

local    django_db    djangousr    md5

Then save the file and exit the text editor. The above line says, "the djangousr user can access the django_db database if they are initiating a local connection and using an md5-encrypted password".
$ exit
$ /etc/init.d/postgresql restart

This should restart postgres, and you should now be able to access the database. Check that your newly created user can access the database with the following command:
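The command isn't reproduced in this copy; given the pg_hba.conf line above, connecting as the new user would look something like this (treat the exact flags as an assumption):

$ psql -U djangousr -W django_db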
This will prompt you for the password; type it in, and you should get to the database prompt. You can execute any SQL statements that you want from here, but at this point we just want to make sure we can access the database, so just do a \q and exit out of the database shell. You're all set; postgres is working! You probably want to do a final exit from the command line to get back to the shell of your normal user.
If you do encounter any problems installing PostgreSQL, check the wiki. It has a lot of good troubleshooting tips.
Set up Mongo
We have two databases now, so let's not forget to install mongo. Start off by installing the packages:
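The install command isn't reproduced here; on the Ubuntu release used in this chapter it would presumably be the stock package:

$ apt-get install mongodb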
That's all the configuration necessary. You can make sure it's working by typing mongo from the command prompt and making sure it drops you into the mongo shell.
Python Setup
We can start by creating a virtual environment. The following line will create one using Python 3:
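The command itself is missing from this copy; given that we installed python-virtualenv above and that the rest of the chapter uses /opt/mec_env, it would be something like this (treat the exact flags as an assumption):

$ virtualenv --python=python3 /opt/mec_env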
Switch to the mec_env directory, then clone our mec app from GitHub:

$ cd /opt/mec_env
$ git clone https://github.com/realpython/book3-exercises.git mec_app

NOTE: Make sure you clone the app from your GitHub repo.
That will fetch the code from GitHub and put it into a directory called mec_app.
Let's activate our virtualenv:

$ source bin/activate

And then install all the necessary dependencies for our app:

$ cd mec_app
$ pip install -r requirements.txt
This will get all of our dependencies set up. However, we are going to need to make some
changes in order for Django to run in our new production environment. We will do that in
a later section called Django Setup.
Configuring Nginx
Nginx configuration just involves setting up a config file to get Nginx to listen on the incoming port and to have it pass requests on to Gunicorn, which is what will be hosting our Django app. By default, after you install Nginx (which we did in the earlier step where we executed apt-get install nginx), it will listen on port 80. So if you open up your browser and navigate to the URL of your DigitalOcean Droplet, you should see the default "Welcome to nginx" page.
Next, create a config file for our site at /etc/nginx/sites-available/mec with the following contents:
upstream app_server_djangoapp {
    server localhost:8001 fail_timeout=0;
}

server {
    listen 80;
    server_name <<put in ipaddress or hostname of your server here>>;

    access_log /var/log/nginx/mec-access.log;
    error_log /var/log/nginx/mec-error.log info;

    keepalive_timeout 5;

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;

        if (!-f $request_filename) {
            proxy_pass http://app_server_djangoapp;
            break;
        }
    }
}
This will tell Nginx to serve static files from the directory /opt/mec_env/static/, which
will be set up shortly. Other requests will be passed on to whatever is running on port 8001
on the server, which should be Gunicorn and our Django application.
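The lines that point Nginx at the static directory are not visible in the listing above. Given the "if (!-f $request_filename)" check, the usual way to express what this paragraph describes is a root directive inside the server block, something like the following (an assumption, not the book's verbatim config):

    # inside the server { ... } block: let Nginx look for files on disk first,
    # and only proxy to Gunicorn when no matching static file exists
    root /opt/mec_env/static/;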
To activate this configuration, do the following:

$ cd /etc/nginx/sites-enabled
$ ln -s ../sites-available/mec

The above establishes a symbolic link in the /etc/nginx/sites-enabled folder to the file we just created in sites-available. This folder is where Nginx looks for configuration, so we need to do this for Nginx to see our configuration file; then things will work properly.
Next we need to deactivate the default Nginx configuration:

$ rm default
Configuring Gunicorn
Gunicorn is the server that will run your Django app. We can install it with pip:

$ cd /opt/mec_env
$ source bin/activate
$ pip install gunicorn

This will install Gunicorn. It takes a number of configuration options to work correctly, so let's create a new bash script that will run Gunicorn for us when we want to. Edit the file /opt/mec_env/mec_app/deploy/gunicorn_start:
#!/bin/bash

NAME="Mec"                                         # Name of the application
DJANGODIR=/opt/mec_env/mec_app/django_ecommerce    # Django project directory
USER=www-data                                      # the user to run as
GROUP=www-data                                     # the group to run as
NUM_WORKERS=3                                      # how many worker processes should Gunicorn spawn
DJANGO_SETTINGS_MODULE=django_ecommerce.settings   # which settings file should Django use
DJANGO_WSGI_MODULE=django_ecommerce.wsgi           # WSGI module name

# ... (sections 2 and 3: activate the virtualenv and start Gunicorn - see below)
Let's go through the script. From the comments we can see that it is broken up into three sections. The first section defines the variables that we will use to execute Gunicorn in the third section.
USER and GROUP are used to set Gunicorn to run as www-data, which is a user created during our installation that is most often used to run web/app servers. This limits access to the system, which is good for security.
In section 2 we activate our virtualenv (since that is where Gunicorn is installed) and export the environment variables needed. It's not uncommon to use this section to export environment variables that influence how Django runs, such as a new database connection for example.
The final section starts Gunicorn using the variables defined in section 1.
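Sections 2 and 3 of the script are not reproduced in the listing above. A typical Gunicorn start script matching that description looks roughly like the following - the exact flags are an assumption, but the bind address of 127.0.0.1:8001 follows from the Nginx upstream we configured earlier:

# Section 2: activate the virtualenv and export the Django settings
cd $DJANGODIR
source /opt/mec_env/bin/activate
export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
export PYTHONPATH=$DJANGODIR:$PYTHONPATH

# Section 3: start Gunicorn with the variables defined in section 1
exec gunicorn ${DJANGO_WSGI_MODULE}:application \
    --name $NAME \
    --workers $NUM_WORKERS \
    --user=$USER --group=$GROUP \
    --bind=127.0.0.1:8001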
We can now test this out by running our gunicorn_start script from the command line. That can be done with the following commands:

$ cd /opt/mec_env/mec_app/deploy
$ chmod +x gunicorn_start    # make sure the script is executable
$ ./gunicorn_start

Note that the second line of the above listing only has to be executed once to ensure we can execute the file. When you run the script, you should see a bunch of output about your configuration, ending with:
Of course the dates and the actual pid numbers will change, but it should have that basic output, and then it will keep printing the same status line repeated over and over again. This is Gunicorn running. You can now go to your browser
and navigate to the server IP, and you should no longer see the "Welcome to nginx" screen. Instead you'll probably see some ugly Django error. But that's a good thing; it means you have Nginx talking to your Django application. We just haven't finished configuring the Django application.
Before we switch to the Django application though, we need to finish up with Gunicorn. Right now if we execute gunicorn_start, that command will stop running as soon as we log out of our SSH session. What we want is for Gunicorn to run like a service, meaning it starts on system boot and keeps running forever. We can do that with Supervisor.
Configuring Supervisor
Supervisor is a Python program that is intended to keep other programs running. From the project's own description: "Supervisor is a client/server system that allows its users to monitor and control a number of processes on UNIX-like operating systems."
In other words, we use Supervisor to make sure other processes (like Gunicorn) keep running. Setup is straightforward. Start by installing it via:
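The install command isn't shown in this copy; on Ubuntu it would presumably be the system package:

$ apt-get install supervisor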
Now create a configuration that will keep Gunicorn running. Edit the file /etc/supervisor/conf.d/mec.conf to include the following:

[program:mec]
directory = /opt/mec_env/mec_app
user = www-data
command = /opt/mec_env/mec_app/deploy/gunicorn_start
stdout_logfile = /opt/mec_env/log/supervisor-logfile.log
stderr_logfile = /opt/mec_env/log/supervisor-error-logfile.log
We will need to create the log directory for the log files:

$ mkdir /opt/mec_env/log
Then restart Supervisor so it picks up the new configuration:

$ /etc/init.d/supervisor restart

It will probably take a few seconds. To see if it started, check the web page and see if you still have the ugly Django error. If so, you're good.
If not, check the main Supervisor log file and our program's own logs:

$ less /var/log/supervisor/supervisord.log
$ tail /opt/mec_env/log/*
Django Setup
There are a few things we need to modify in our django_ecommerce/settings.py file to get things to run on our production server. However, if we make the necessary changes for production in django_ecommerce/settings.py, then things won't work in our development environment.
We need two settings.py files - one for development and one for production. There are two common ways to handle this. One is to actually create two separate settings.py files, and the other is to use environment variables. We will discuss using separate files here as it's probably the easiest when getting started.
Create a new file, deploy/settings_prod.py, containing the production overrides. Among other things it sets where static and media files live on the server:

STATIC_ROOT = '/opt/mec_env/static/'
MEDIA_ROOT = '/opt/mec_env/media/'

This file will be used to overwrite the settings in your development settings.py file. The settings should be self-explanatory; we are configuring things to work on our production server.
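Most of the file is not visible in this copy of the listing. Based on what the rest of the chapter relies on - DEBUG turned off, the djangousr / django_db database created earlier, and the static and media roots shown above - a production settings file along these lines is what is being described; treat the details (particularly ALLOWED_HOSTS and the exact field values) as assumptions rather than the book's exact file:

DEBUG = False

ALLOWED_HOSTS = ['*']  # better: your server's IP address or domain name

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'django_db',
        'USER': 'djangousr',
        'PASSWORD': '<the production password you chose earlier>',
        'HOST': 'localhost',
        'PORT': '5432',
    }
}

STATIC_ROOT = '/opt/mec_env/static/'
MEDIA_ROOT = '/opt/mec_env/media/'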
Create the static and media directories referenced above:

$ mkdir /opt/mec_env/static
$ mkdir /opt/mec_env/media

Then add the following to the very end of django_ecommerce/settings.py:

try:
    from .settings_prod import *
except ImportError:
    pass

What does this do? It tries to import settings_prod from the same directory as our normal settings.py file. Since settings_prod is imported as the last statement in settings.py, whatever values are set in settings_prod.py will overwrite the values already set in settings.py.
Now copy the production settings into place:

$ cd /opt/mec_env/mec_app
$ cp deploy/settings_prod.py django_ecommerce/django_ecommerce/

This will move the file into place, and then when Django starts up it will read our production settings. To get Django to restart, just restart Gunicorn, which is done using Supervisor:

$ /etc/init.d/supervisor restart

How do we know it worked? Reload your web page, and you'll see a 500 error and no more debug error messages, because we have set DEBUG = False. Now to get rid of those pesky 500 errors.
First, run the database migrations against the production database:

$ cd /opt/mec_env/mec_app/django_ecommerce
$ source ../../bin/activate
$ ./manage.py migrate

Now reload the web page, and no more 500 error, yay! But the page looks very ugly, boo! That's because the static files haven't been collected into STATIC_ROOT yet:

$ ./manage.py collectstatic

The command will warn you that it's going to overwrite anything in that directory. Say yes, and then it should copy all the files, finishing up with a message telling you how many static files were copied.
Now reload the web page and voilà (spoken with a strong French accent)! And there you have it. You are deployed into production! Congratulations.
So that's it, right? Wrong - we are just getting started, actually. Now that we have the deployment working, we need to:
1. Keep track of all the config files.
2. Create a deployment script so that we can update production with a single command when the time comes to make updates.
Configuration as Code
A term often used in DevOps circles, configuration as code means keeping track of your configuration files so that deployment can be automated, thus reducing the dreaded "it worked on my machine" issue.
For our setup, this means gathering the configuration files for Nginx, Gunicorn, and Supervisor, plus our settings_prod.py, and storing them in GitHub. We started to do this already, but let's make sure we have everything in the correct place.
We want a directory structure like this:

deploy
    gunicorn_start
    nginx
        sites-available
            mec
    settings_prod.py
    supervisor
        mec.conf
As a review of the deployment process, here are the contents of each of those files:
gunicorn_start

#!/bin/bash

NAME="Mec"                                         # Name of the application
DJANGODIR=/opt/mec_env/mec_app/django_ecommerce    # Django project directory
USER=www-data                                      # the user to run as
GROUP=www-data                                     # the group to run as
NUM_WORKERS=3                                      # how many worker processes should Gunicorn spawn
DJANGO_SETTINGS_MODULE=django_ecommerce.settings   # which settings file should Django use
DJANGO_WSGI_MODULE=django_ecommerce.wsgi           # WSGI module name

# ... (followed by the sections that activate the virtualenv and exec Gunicorn,
#      as described in the Configuring Gunicorn section)
nginx/sites-available/mec

upstream app_server_djangoapp {
    server localhost:8001 fail_timeout=0;
}

server {
    listen 80;
    server_name 128.199.202.178;

    access_log /var/log/nginx/mec-access.log;
    error_log /var/log/nginx/mec-error.log info;

    keepalive_timeout 5;

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;

        if (!-f $request_filename) {
            proxy_pass http://app_server_djangoapp;
            break;
        }
    }
}
supervisor/mec.conf

[program:mec]
directory = /opt/mec_env/mec_app
user = www-data
command = /opt/mec_env/mec_app/deploy/gunicorn_start
stdout_logfile = /opt/mec_env/log/supervisor-logfile.log
stderr_logfile = /opt/mec_env/log/supervisor-error-logfile.log
settings_prod.py

# the production overrides discussed in the Django Setup section
# (DEBUG, the production database connection, etc.) go here, followed by:

STATIC_ROOT = '/opt/mec_env/static/'
MEDIA_ROOT = '/opt/mec_env/media/'
That's it for our configuration. Once you have all those files in the correct place, commit it all and push it up to your GitHub account.
Let's script it
At this point, we have a production environment working, and we are keeping track of all the configuration for our production environment. But next time we need to update the configuration, move to a different server, or simply deploy some new code, there is a good chance we will have forgotten the steps necessary for installation. Plus, it's annoying to have to manually type in all that stuff if we just want to update production.
Let's fix that with an auto-deployment script.
An auto-what script?
It's a simple Python file that will SSH into our server, grab the latest code, make sure everything is configured right, and redeploy the application. Sounds like a lot? Don't worry; it's actually not too much work thanks to a nice Python library called Fabric.
Fabric allows us to script SSH and shell commands using Python. What could be better?
Since we have upgraded to Python 3, we install a Python 3 compatible fork of Fabric by adding the following to our requirements:

-e git+https://github.com/pashinin/fabric.git@p33#egg=Fabric
six==1.9.0

Once you have Fabric installed, we can use it to create a script that will automate our deployment. Let's look at a simple version of the script first (we will expand upon this as we go on). Start by creating deploy/fabfile.py:
from fabric.api import env, cd, run, prefix, lcd, settings, local

# host configuration (env.hosts, env.user, ...) goes here


def update_app():
    with cd("/opt/mec_env/mec_app"):
        run("git pull")
    with cd("/opt/mec_env/mec_app/django_ecommerce"):
        with prefix("source /opt/mec_env/bin/activate"):
            run("pip install -r ../requirements.txt")
            run("./manage.py migrate")
            run("./manage.py collectstatic")
The --noinput argument is provided by Django for just this type of situation, where you always want to answer yes and not be prompted.
In addition, if you don't want to be prompted for a password to log into your server, you can set up an SSH keypair so that no password is required. For DigitalOcean, this how-to describes just how that's done.
Once the script is done, use the fab command to run it. Make sure you're in the deploy directory, then run:

$ fab update_app
Continuous Integration
By automating the deployment of your application with Fabric, you not only save time, but you also help to prevent errors during deployment by ensuring steps aren't missed or forgotten. Taken to the next level, if there were a way to ensure that all of our tests were run and passed before any deployment happened, we would be adding even more quality assurance. If we then performed the step of running all of our tests (plus potentially deploying) each time we made a commit to our codebase, we would be practicing continuous integration.
The term, coined by Grady Booch and popularized back in the Extreme Programming days, and often referred to as simply CI, literally refers to integrating the various branches of code (e.g., each developer's local branch) with the mainline code daily. In git this translates to:
- git commit any local changes
- git pull, fixing any merge conflicts
- git push
But with automated unit and integration tests, we can do better by adding test runs into the mix.
For a single developer, we can actually build a continuous integration system like this with Fabric and a few more lines of code. Let's add a function called integrate to our deploy/fabfile.py:

def integrate():
    with lcd("../django_ecommerce/"):
        local("./manage.py test ../tests/unit")

        with settings(warn_only=True):
            local("git add -p && git commit")

        local("git pull")
        local("./manage.py test ../tests")
        local("git push")
A few things to note:
- The settings(warn_only=True) block: normally if a command fails, Fabric will abort execution; the git add / git commit line will fail if we haven't made any local changes, but maybe we still want to integrate, so this tells Fabric to spew out a warning and continue execution.
- git pull: after we commit, let's get the latest from origin (aka GitHub); this is the integration step.
- The second test run: after we integrate, run the full suite of tests to make sure everything is still working.
- git push: if all of our tests pass, push all changes back to GitHub.
Then to practice continuous integration, instead of running git push / git pull manually, every time you're ready to push a change, just execute:
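The command itself is missing from this copy; given the two functions defined in the fabfile, it is presumably:

$ fab integrate update_app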
That will run the two functions in your fabfile, and you're cooking up some CI goodness.
Conclusion
We covered a lot in this chapter. We went through the complete setup of a virgin VPS, getting it set up with Nginx, Gunicorn, PostgreSQL, MongoDB and Django. Then, as if that wasn't enough, we showed you how to automate the entire process. And we even looked at some suggestions on how to implement continuous integration to ensure your app is always well tested, integrated and behaving nicely. We came a long way in this chapter; you should be proud of yourself for getting through it.
Furthermore, you should now have a better understanding of what goes into deployment, how to configure the various servers and, hopefully, a bit about how to troubleshoot problems when they arise. Now you know enough to be dangerous.
Exercises
1. While a VPS represents the most common case for deploying a Django app, there are several others. Have a read through the following articles to get an understanding of how to deploy using other architectures:
Amazon Elastic Beanstalk - This article details a step-by-step guide on how to deploy a Django app to AWS with Elastic Beanstalk.
Heroku - How to migrate your Django project to Heroku.
Dokku - This is basically Heroku on the cheap. Find out how to get Django set up with Dokku here.
Docker - While mentioned previously in the chapter, this article is an excellent write-up on how to use Docker for deployment + development + CI.
2. Didn't think you were going to get out of this chapter without writing any code, did you? Remember back in the Fabric section when we talked about creating another function to update all of your configuration files? Well, now is the time to do that. Create a function called update_config() that automatically updates your Nginx, Supervisor and Django config / settings files for you.
When the function is complete you should be able to execute:
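The command isn't shown in this copy, but given the function name it would presumably be:

$ fab update_config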
Bonus points if you can get it all to work so the user only has to type:

$ fab ci
Chapter 20
Conclusion
That's about it. You should now have a fully functioning website, deployed to production, with tests and auto-deployment and all kinds of goodness! Hurray, you made it. It was a lot of hard work and dedication, but doesn't it feel great knowing that you have come this far? By now you should have a good understanding of what it takes to make a web site. We covered a ton of stuff - from mixins to transactions, from inclusion tags to form handling, from Mongo to Python 3 to Angular to promises and a whole lot more in between.
While we have tried to cover a great deal, there is so much out there in the world of computer programming that we have barely scratched the surface. But that's okay. You don't need to know everything - nobody does. In fact, my advice, if you intend to continue growing as a software developer, is to go deep, not broad. There are far too many generalists / script kiddies / developers who only know enough to be dangerous. Hopefully we have not only geared you up with a well-rounded knowledge about how to develop web pages but also an understanding of how things work architecturally, what informs the choices that developers make when structuring code or web apps, and the tradeoffs associated with any sizable development effort.
At this point, if you just start thinking more about the structure and layout of your data and code, then I think I've done my job. Keep coding. Keep learning. And always strive for a deeper understanding of why code works the way it does. That curiosity and subsequent exploration is what leads to true software craftsmanship.
Thank you very much for allowing me to share what knowledge I have with you. I do hope it was a fulfilling and pleasurable experience. I'm always eager to hear how things went. Let me know what you build, what you learn, or what problems you encounter along your journey. Until then, all the best, and do enjoy your coding adventures (even when the code doesn't do what you tell it to).
Chapter 21
Appendix A - Solutions to Exercises
Software Craftsmanship
Exercise 1
Question: Our URL routing testing example only tested one route. Write tests to test the other routes. Where would you put the test to verify the pages/ route? Do you need to do anything special to test the admin routes?
There are, of course, many solutions to this. Let's add a ViewTesterMixin() class to payments/tests in order to help with the view tests so we're not repeating ourselves:
class ViewTesterMixin(object):

    @classmethod
    def setupViewTester(cls, url, view_func, expected_html,
                        status_code=200,
                        session={}):
        request_factory = RequestFactory()
        cls.request = request_factory.get(url)
        cls.request.session = session
        cls.status_code = status_code
        cls.url = url
        cls.view_func = staticmethod(view_func)
        cls.expected_html = expected_html

    def test_resolves_to_correct_view(self):
        test_view = resolve(self.url)
        self.assertEquals(test_view.func, self.view_func)

    def test_returns_appropriate_response_code(self):
        resp = self.view_func(self.request)
        self.assertEquals(resp.status_code, self.status_code)

    def test_returns_correct_html(self):
        resp = self.view_func(self.request)
        self.assertEquals(resp.content, self.expected_html)
# used from a view test class, e.g. for the sign_in view:
@classmethod
def setUpClass(cls):
    html = render_to_response(
        'sign_in.html',
        {
            'form': SigninForm(),
            'user': None
        }
    )

    ViewTesterMixin.setupViewTester(
        '/sign_in',
        sign_in,
        html.content
    )
The ViewTesterMixin() creates some default tests that can be run against most standard views.
In our case, we use it for all the view functions in the payments app. These are the same functions that we previously implemented to test our view function, except now they are all placed in a base class.
So, all you have to do is call the setupViewTester() class method, which is meant to be called from the derived setUpClass class method. Once the setup is taken care of, the ViewTesterMixin() will simply perform the basic tests for routing, checking return codes, and making sure you're using the appropriate template.
Once that is set up, you can implement any other tests you like.
The great thing about using mixins for testing is that they can reduce a lot of the boilerplate (e.g., common test routines) that you often run against Django's system.
As for the second part of the question: don't bother testing the admin views, as they are generated by Django, so we can assume they are correct because they are already covered by Django's own unit tests.
For the pages route, that is actually not that easy to do in Django 1.5. If you are sticking to the out-of-the-box test discovery, there isn't a base test file. So we'll leave this out until we get to the chapter on upgrading to Django 1.8, and then you can see how the improved test discovery in Django 1.8 (which was introduced in Django 1.6) will allow you to better structure your tests to model the application under test.
Exercise 2
Question: Write a simple test to verify the functionality of the sign_out view. Do
you recall how to handle the session?
@classmethod
def setUpClass(cls):
    ViewTesterMixin.setupViewTester(
        '/sign_out',
        sign_out,
        "",  # a redirect will return no html
        status_code=302,
        session={"user": "dummy"},
    )

def setUp(self):
    # sign_out clears the session, so let's reset it each time
    self.request.session = {"user": "dummy"}
Recall these two lines from setupViewTester():

    cls.request = request_factory.get(url)
    cls.request.session = session

We can see that it shoves the session into the request returned by our request factory. If you don't recall what the RequestFactory() is all about, have a quick re-read of the "Mocks, fakes, test doubles, dummy objects, stubs" section of the Testing Routes and Views chapter.
Exercise 3
Question: Write a test for the contact/models. What do you really need to test? Do you need to use the database backend?
If you recall from the Testing Models chapter, we said to only test the functionality you write for a model. Thus, for the ContactForm model we only need to test two things:
1. The fact that the model will return the email value as the string representation.
2. The fact that queries for ContactForms are always ordered by the timestamp.
Note that there is a bug in the ContactForm() class; the class Meta needs to be indented so it is a member class of ContactForm. Make sure to update it before running the tests!
Here is the code for the test:
class UserModelTest(TestCase):

    @classmethod
    def setUpClass(cls):
        ContactForm(email="[email protected]", name="test").save()
        ContactForm(email="[email protected]", name="jj").save()
        cls.firstUser = ContactForm(
            email="[email protected]",
            name="first",
            timestamp=datetime.today() + timedelta(days=2)
        )
        cls.firstUser.save()
        # cls.test_user = User(email="[email protected]", name='test user')
        # cls.test_user.save()

    def test_contactform_str_returns_email(self):
        self.assertEquals("[email protected]", str(self.firstUser))

    def test_ordering(self):
        contacts = ContactForm.objects.all()
        self.assertEquals(self.firstUser, contacts[0])
Exercise 4
Question: QA teams are particularly keen on boundary checking. Research what it is, if you are not familiar with it, then write some unit tests for the CardForm from the payments app to ensure that boundary checking is working correctly.
First, what's boundary checking? Check out the Wikipedia article.
To accomplish boundary checking we can make use of our FormTesterMixin() and have it check for validation errors when we pass in values that are too long or too short. Here is what the test might look like:
def test_card_form_data_validation_for_invalid_data(self):
    invalid_data_list = [
        {
            'data': {'last_4_digits': '123'},
            'error': (
                'last_4_digits',
                [u'Ensure this value has at least 4 characters (it has 3).']
            )
        },
        {
            'data': {'last_4_digits': '12345'},
            'error': (
                'last_4_digits',
                [u'Ensure this value has at most 4 characters (it has 5).']
            )
        },
    ]
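The remainder of the test - the loop that feeds each entry to the FormTesterMixin helper - isn't visible in this copy. It presumably iterates over invalid_data_list the same way the chapter's earlier form tests did, roughly like this (the helper's exact name and argument order are assumptions):

    for invalid_data in invalid_data_list:
        self.assertFormError(
            CardForm,
            invalid_data['error'][0],   # the field expected to fail
            invalid_data['error'][1],   # the expected error message(s)
            invalid_data['data'],       # the data that should trigger it
        )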
Before moving on, be sure that all your new tests pass: ./manage.py test.
The first part of this exercise is relatively straightforward. Use the patch decorator with the
side_effect parameter like this:
@mock.patch('payments.models.User.save', side_effect=IntegrityError)
def test_registering_user_twice_cause_error_msg(self, save_mock):

    # create the request used to test the view
    self.request.session = {}
    # ... snipped the rest for brevity ...
The side_effect parameter says, "When this function is called, raise the IntegrityError exception." Also note that since you are manually throwing the error, you don't need to create the data in the database, so you can remove the first couple of lines from the test function as well.
If you run the suite at this point, the test still fails, with output ending in:

FAILED (failures=1)
Destroying test database for alias 'default'...
def get_MockUserForm(self):
    from django import forms

    class MockUserForm(forms.Form):

        def is_valid(self):
            return True

        @property
        def cleaned_data(self):
            return {
                'email': '[email protected]',
                'name': 'pyRock',
                'stripe_token': '...',
                'last_4_digits': '4242',
                'password': 'bad_password',
                'ver_password': 'bad_password',
            }

    return MockUserForm()
@mock.patch('payments.views.UserForm', get_MockUserForm)
@mock.patch('payments.models.User.save',
            side_effect=IntegrityError)
def test_registering_user_twice_cause_error_msg(self,
                                                save_mock):
    # ... (body of the test as before)
Exercise 2
Question: As alluded to in the conclusion, remove the customer creation logic from register() and place it into a separate CustomerManager() class. Re-read the first paragraph of the conclusion before you start, and don't forget to update the tests accordingly.
The solution for this is pretty straightforward: grab all the Stripe logic and wrap it in a class. This example is a bit contrived because at this point in our application it may not make a lot of sense to do this, but the point here is about TDD and how it helps you with refactoring. To make this change, the first thing you would do is create a simple test to help design the new Customer() class:
class CustomerTests(TestCase):

    def test_create_subscription(self):
        with mock.patch('stripe.Customer.create') as create_mock:
            cust_data = {'description': 'test user',
                         'email': '[email protected]',
                         'card': '4242', 'plan': 'gold'}
            Customer.create(**cust_data)

            create_mock.assert_called_with(**cust_data)
This test says, "The Customer.create function is used to call Stripe with the arguments passed in."
You could implement a simple solution to that, placed in payments.views, like this:
class Customer(object):

    @classmethod
    def create(cls, **kwargs):
        return stripe.Customer.create(**kwargs)
This is probably the simplest way to achieve this. (Note: instead of using **kwargs you could enumerate the names.) This will pass the test, but the next requirement is to support both subscription and one_time payments. To do that, let's update the tests:
class CustomerTests(TestCase):

    def test_create_subscription(self):
        with mock.patch('stripe.Customer.create') as create_mock:
            cust_data = {'description': 'test user',
                         'email': '[email protected]',
                         'card': '4242', 'plan': 'gold'}
            Customer.create("subscription", **cust_data)

            create_mock.assert_called_with(**cust_data)

    def test_create_one_time_bill(self):
        with mock.patch('stripe.Charge.create') as charge_mock:
            # ... (one-time charge data set up analogously)

            charge_mock.assert_called_with(**cust_data)
Here, you could have created two separate functions, but we designed it with one function and passed in the type of billing as the first argument. You could make both of these tests pass with a slight modification to the original function:
class Customer(object):

    @classmethod
    def create(cls, billing_method="subscription", **kwargs):
        if billing_method == "subscription":
            return stripe.Customer.create(**kwargs)
        elif billing_method == "one_time":
            return stripe.Charge.create(**kwargs)
Then update the register() view so it calls the new class instead of talking to Stripe directly:

customer = Customer.create(
    email=form.cleaned_data['email'],
    description=form.cleaned_data['name'],
    card=form.cleaned_data['stripe_token'],
    plan="gold",
)
Finally, update our test cases so that we do not reference Stripe at all. Here's an example of an updated test case:

@mock.patch('payments.views.Customer.create')
@mock.patch.object(User, 'create')
def test_registering_new_user_returns_succesfully(self,
                                                  create_mock, stripe_mock):

This way we keep our test ignorant of the fact that we are using Stripe, in case we later want to use something other than Stripe to process payments, or change how we interact with Stripe.
And of course you could continue factoring out the Stripe stuff from the edit() function as well if you wanted, but we'll stop here.
Finally, run all of your tests again to ensure we didn't break any functionality in another app:

$ ./manage.py test

Cheers!
Question:
- In the main carousel, the text "Join the Dark Side" on the Darth Vader image blocks the image of Darth himself. Using the Bootstrap / carousel CSS, can you move the text and sign-up button to the left of the image so as to not cover Lord Vader?
- If we do the above change, everything looks fine until we view things on a phone (or make our browser really small). Once we do that, the text covers up Darth Vader completely. Can you make it so that on small screens the text is in the "normal position" (centered, in the lower portion of the image) and for larger screens it's on the left?
Part 1
For the first part, you could just add the style inline.
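The HTML snippet itself isn't reproduced in this copy, but given the CSS values described just below, the inline version presumably amounted to something like this (hypothetical markup, not the book's verbatim template):

<figcaption class="carousel-caption"
            style="top:0%; left:5%; right:60%; text-align:left;">
  <!-- caption heading, text and sign-up button as before -->
</figcaption>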
Here, we simply changed the style of the figcaption to push the caption to the top and left (top:0%; left:5%) and have the text wrap at 60% from the right edge (right:60%). We then aligned the text left: text-align:left.
Doing this overrides the CSS for the class carousel-caption that is defined both in bootstrap.css and carousel.css. As a best practice, though, we should keep our styling in CSS files and out of our HTML/templates - but we don't want this style to apply to all .carousel-caption items. So let's give this particular caption an id (say id="darth_caption") and then update our mec.css file (where we put all of our custom CSS rules) as follows:

#darth_caption {
    top: 0%;
    left: 5%;
    right: 60%;
    text-align: left;
}
This has the same effect but keeps the styling from cluttering up our HTML.
Part 2
Bootstrap's Responsive Utilities give you the power to create different styling depending upon the size of the screen that is being used. Check out the chart on the Bootstrap page. By using those special classes, you can show or hide items based on the screen size. For example, if you used class="hidden-lg", then the HTML element associated with that class would be hidden on large screens.
In our example, we essentially use two figcaptions: one for large screens, and the other for all other screens.
That should do it! If you make your browser screen smaller and smaller, you will see that
eventually the caption will jump back to the middle lower third. If you make your browser
larger, then the caption will jump to the left of Darth Vader in the image. Congratulations,
you now understand the basics of how responsive websites are built.
Exercise 2
Question: In this chapter, we updated the Home Page but we haven't done anything about the Contact Page, the Login Page, or the Register Page. Bootstrapify them. Try to make them look awesome. The Bootstrap examples page is a good place to go to get some simple ideas to implement. Remember: try to make the pages semantic, reuse the Django templates that you already wrote where possible, and most of all have fun.
There is no right or wrong answer for this section. The point is just to explore Bootstrap and see what you come up with. Below you'll see some examples. Let's start with the login page. Short and sweet.
Sign-in page
{% extends "__base.html" %}
{% load staticfiles %}

{% block extra_css %}
<link href="{% static 'css/signin.css' %}" rel="stylesheet">
{% endblock %}

{% block content %}
<div class="container">
  <form accept-charset="UTF-8" action="{% url 'sign_in' %}"
        class="form-signin" role="form" method="post">
    {% csrf_token %}
    <h1 class="form-signin-heading">Sign in</h1>
    {% if form.is_bound and not form.is_valid %}
      <div class="alert-message block-message error">
        <div class="errors">
          {% for field in form.visible_fields %}
            {% for error in field.errors %}
              <p>{{ field.label }}: {{ error }}</p>
            {% endfor %}
          {% endfor %}
          {% for error in form.non_field_errors %}
            <p>{{ error }}</p>
          {% endfor %}
        </div>
      </div>
    {% endif %}
    {% for field in form %}{% include "payments/_field.html" %}{% endfor %}
    <input class="btn btn-lg btn-primary btn-lg" name="commit"
           type="submit" value="Sign in">
  </form>
</div>
{% endblock %}
This is not much of a change from our previous sign-in template. The main difference is the new stylesheet. Notice that near the top we have the following code:

{% block extra_css %}
<link href="{% static 'css/signin.css' %}" rel="stylesheet">
{% endblock %}
Here we created another block in the __base.html template so that we can easily add extra stylesheets (or other head tags) on a page-by-page basis:

{% block extra_css %}
{% endblock %}

The top of __base.html looks like this:

{% load static %}
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <meta http-equiv="X-UA-Compatible" content="IE=edge">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <title>Mos Eisley's Cantina</title>
  ...
As you can see, just after the existing stylesheet links we added the {% block extra_css %} block. This allows us to add additional CSS or other tags in the header. This is a helpful technique to make your reusable templates more flexible. It's always a balance between making your templates more flexible and making things easy to maintain; don't get carried away with adding blocks everywhere. But a few blocks in the right places in your __base.html can be very useful.
The final piece of the puzzle for the sign-in page is the signin.css that is responsible for the styling. It looks like this:
body {
    padding-bottom: 40px;
    background-color: #eee;
}

.form-signin {
    max-width: 330px;
    padding: 15px;
    margin: 0 auto;
    font-family: 'starjedi', sans-serif;
}

.form-signin .form-signin-heading,
.form-signin .checkbox {
    margin-bottom: 10px;
}

.form-signin .checkbox {
    font-weight: normal;
}

.form-signin .form-control {
    position: relative;
    height: auto;
    -webkit-box-sizing: border-box;
    -moz-box-sizing: border-box;
    box-sizing: border-box;
    padding: 10px;
    font-size: 16px;
}

.form-signin .form-control:focus {
    z-index: 2;
}

.form-signin input[type="email"] {
    margin-bottom: -1px;
    border-bottom-right-radius: 0;
    border-bottom-left-radius: 0;
}

.form-signin input[type="password"] {
    margin-bottom: 10px;
    border-top-left-radius: 0;
    border-top-right-radius: 0;
}
NOTE: This particular stylesheet is relatively short as far as stylesheets go, so we could have included it in mec.css. This would reduce the number of extra files that need to be downloaded and thus improve the response time of the website slightly. However, we've left it separate as a way to demonstrate a good use case for the Django template block directive.
We can apply similar formatting to the contact page so that we have a nice, consistent theme for our site. Grab the file from the chp08 folder in the repo.
Exercise 3
Question: Previously in the chapter we introduced the marketing__circle_item template tag. The one issue we had with it was that it required a whole lot of data to be passed into it. Let's see if we can fix that. Inclusion tags don't have to have data passed in. Instead, they can inherit context from the parent template. This is done by passing takes_context=True to the inclusion tag decorator, like so:

@register.inclusion_tag('main/templatetags/circle_item.html',
                        takes_context=True)

Once that is all done, you can stop hard-coding all the data in the HTML template and instead pass it to the template from the view function.
For bonus points, create a marketing_info model. Read all the necessary data from the model in the index view function and pass it into the template.
First, update the inclusion tag itself so it takes (and simply returns) the parent context:

@register.inclusion_tag(
    'main/templatetags/circle_item.html',
    takes_context=True
)
def marketing__circle_item(context):
    return context

The next thing to do is update our associated view function to pass in the context that we need for our marketing items. Updating main.views to do that will make it look something like this:
class market_item(object):
    # ... (a simple container whose __init__ just stores the fields used below)


market_items = [
    market_item(
        img_name="yoda.jpg",
        # ...
    ),
    market_item(
        img_name="clone_army.jpg",
        heading="Build your Clan",
        caption="Engage in meaningful conversation, or "
                "bloodthirsty battle! If it's related to "
                "Star Wars, in any way, you better believe we do it.",
        button_title="Sign Up Now"
    ),
    market_item(
        img_name="leia.jpg",
        heading="Find Love",
        caption="Everybody knows Star Wars fans are the "
                "best mates for Star Wars fans. Find your "
                "Princess Leia or Han Solo and explore the "
                "stars together.",
        button_title="Sign Up Now"
    ),
]


def index(request):
    uid = request.session.get('user')
    # for now just hard-code all the marketing info stuff
    # to see how this works
    if uid is None:
        return render_to_response(
            'main/index.html',
            {'marketing_items': market_items}
        )
    else:
        return render_to_response(
            'main/user.html',
            {'marketing_items': market_items,
             'user': User.get_by_id(uid)}
        )
1. We created a dummy class called market_item. The class is used to make the variables easier to access in the template, and in fact later, when we make the model class, it is going to end up looking pretty similar to the dummy class.
2. Next we added dummy data. Normally you would read this from the database, but let's start quick and dirty and stick all the data in a list called market_items. Notice how we put the list in the module-level namespace; this will make it easier to access from the unit tests (which are going to break as soon as we implement this).
3. The final thing we did was pass the newly created list of marketing items to the template as the context:
if uid is None:
    return render_to_response(
        'main/index.html',
        {'marketing_items': market_items}
    )
else:
    return render_to_response(
        'main/user.html',
        {'marketing_items': market_items,
         'user': User.get_by_id(uid)}
    )
Simply passing the dictionary, with the key marketing_items set to the market_items list,
to the render_to_response() function will get the context set up so it's accessible to our
templates. Then our inclusion tag, which now has access to the context, can pick it up and
pass it to the template main/templatetags/circle_item.html. First a look at
the marketing__circle_item template tag. It now does more or less nothing:
@register.inclusion_tag(
    'main/templatetags/circle_item.html',
    takes_context=True
)
def marketing__circle_item(context):
    return context
It does have to take the context as the first argument, and whatever it returns will be the
context that the circle_item.html template has access to. Here we simply pass along the entire
context. Finally, our template can now loop through the list of market_items and display the
nicely formatted marketing blurb:
{% load staticfiles %}

{% for m in marketing_items %}
<div class="col-lg-4">
    <img class="img-circle" src="{% static 'img/'|add:m.img %}"
         width="140" height="140" alt="{{ m.img }}">
    <h2>{{ m.heading }}</h2>
    <p>{{ m.caption }}</p>
    <p><a class="btn btn-default" href="{% url m.button_link %}"
          role="button">{{ m.button_title }} »</a></p>
</div>
{% endfor %}
What we are doing here is looping through the marketing_items list (that we passed in from
the main.views.index function) and creating a new circle marketing block for each item. This
has the added advantage that it will allow us to add a variable number of marketing messages
to our page!
Finally, make sure to update the main/index.html file so that it simply calls the inclusion tag
instead of hard-coding the three marketing blurbs.
With all that done, fire up the browser and have a look at your site; it should all look as it did
before.
Don't forget to check your unit tests. You should see a big ugly failure in
tests.main.testMainPageView.MainPageTests. This is because your index page now requires the
marketing_items context variable, and we are not passing it in from our test. Remember earlier
when we said we were putting the market_items list at the module level to aid with our testing?
Well, let's fix tests.main.testMainPageView.MainPageTests:
1. First import market_items into the test, so we can reuse the same data.
2. Then update test_returns_exact_html() so the expected rendering gets the same list:
def test_returns_exact_html(self):
    resp = index(self.request)
    self.assertEqual(
        resp.content,
        render_to_response(
            "main/index.html",
            {"marketing_items": market_items}
        ).content
    )
Now rerun and all your tests should pass! Great work.
Bonus
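The model only needs the fields that the circle_item.html template and the fixture below use.
A minimal sketch of what it could look like (the exact max_length values and the defaults are
assumptions, not the course repo's listing):

from django.db import models


class MarketingItem(models.Model):
    img = models.CharField(max_length=255)
    heading = models.CharField(max_length=300)
    caption = models.TextField()
    # assumed defaults; the prose below only says "a couple of default values"
    button_link = models.CharField(max_length=50, default="register")
    button_title = models.CharField(max_length=20, default="Sign Up Now")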
It's a simple model with a couple of default values. Next we can change our main.views to
read from the model instead of from the hard-coded values, like so:
def index(request):
    uid = request.session.get('user')
    market_items = MarketingItem.objects.all()
    if uid is None:
        return render_to_response(
            'main/index.html',
            {'marketing_items': market_items}
        )
    else:
        return render_to_response(
            'main/user.html',
            {'marketing_items': market_items,
             'user': User.get_by_id(uid)}
        )
The data itself goes into an initial_data.json fixture for the main app:

[
    {
        "model": "main.MarketingItem",
        "pk": 1,
        "fields":
        {
            "img": "yoda.jpg",
            "heading": "Hone your Jedi Skills",
            ...snip...
        }
    },
    {
        "model": "main.MarketingItem",
        "pk": 2,
        "fields":
        {
            "img": "clone_army.jpg",
            "heading": "Build your Clan",
            "caption": "Engage in meaningful conversation, or
                bloodthirsty battle! If it's related to Star Wars, in
                any way, you better believe we do it.",
            "button_title": "Sign Up Now"
        }
    },
    {
        "model": "main.MarketingItem",
        "pk": 3,
        "fields":
        {
            "img": "leia.jpg",
            "heading": "Find Love",
            "caption": "Everybody knows Star Wars fans are the best
                mates for Star Wars fans. Find your Princess Leia or
                Han Solo and explore the stars together.",
            "button_title": "Sign Up Now"
        }
    }
]
Once this is all in place, every time you run syncdb, the data from initial_data.json will
automatically be loaded into the database. Re-run syncdb, fire up the Django development server,
and you should now see your marketing info. Awesome!
Before you pack everything up, make sure to run the tests. They should all pass.
Please note that while syncdb will load the data in your fixtures, if you have created any
migrations for the application this feature won't work. Again, we will
cover migrations in an upcoming chapter. So try not to jump ahead or this stuff
might not work.
This uses the same .info_box class as all our other boxes on the user.html page. It includes a heading, image and some details about the announcement. We need a few CSS rules
to control the size and position of the image.
Update mec.css:
.full-image {
    overflow: hidden;
}

.full-image img {
    position: relative;
    display: block;
    margin: auto;
    min-width: 100%;
    min-height: 100px;
}
This causes the image to center and autoscale to the size of the info-box, minus the
border. Also, if the image gets too large, rather than shrinking it down to a tiny size
(which would make it look pretty bad), it will simply be cropped.
Finally, we can hook the image into our user.html page like so:
{% extends "__base.html" %}
{% load staticfiles %}
{% block content %}
<div class="row member-page">
    <div class="col-sm-8">
        <div class="row">
            {% include "main/_announcements.html" %}
            {% include "main/_lateststatus.html" %}
        </div>
    </div>
    <div class="col-sm-4">
        <div class="row">
            {% include "main/_jedibadge.html" %}
            {% include "main/_statusupdate.html" %}
        </div>
    </div>
</div>
{% endblock %}
It's just another include. We also moved the info-boxes around a bit so the announcements
are the first thing the user will see. With this, you should have a page that looks like so:
[User.html with announcements](images/announcements.png)
The advantages of doing something simple like this are:
1. It doesn't take much time.
2. You are free to use whatever HTML you like for each announcement.
The disadvantages:
1. It's static; you need to update the template for each new announcement.
2. You are limited to one announcement, unless of course you update the template for
more announcements.
Let's address the disadvantages by data-driving the announcements info box from the
database, in the same way we did for main/_lateststatus.html.
Step 1: Create a model.

class Announcement(models.Model):
    when = models.DateTimeField(auto_now=True)
    img = models.CharField(max_length=255, null=True)
    vid = models.URLField(null=True)
    info = models.TextField()
We're allowing either an image or a video as the main part of the content, and then info will
allow us to store arbitrary HTML in the database, so we can put whatever we want in there.
We also timestamp the Announcements, as we don't want to keep them on the site forever;
it doesn't look good to have announcements that are several years old on the site.
Sync the database.
In order to support embedding videos, let's turn to a pretty solid third-party app called
django-embed-video. It's not very customizable, but it's pretty easy to use and does all the
heavy lifting for us.
As always, install it with pip:

$ pip install django-embed-video

Don't forget to add it to the requirements.txt file, as well as embed_video to the INSTALLED_APPS
tuple in django_ecommerce/settings.py. Once done, update the main/_announcements.html
template, and then update the index view so it passes the recent announcements into the context:
def index(request):

    ...snip...

    else:
        #membership page
        status = StatusReport.objects.all().order_by('-when')[:20]
        announce_date = date.today() - timedelta(days=30)
        announce = (Announcement.objects.filter(
            when__gt=announce_date).order_by('-when')
        )

        return render_to_response(
            'main/user.html',
            {
                'user': User.get_by_id(uid),
                'reports': status,
                'announce': announce
            },
            context_instance=RequestContext(request),
        )

You will also need to pull in the pieces this code uses at the top of main/views.py: date and
timedelta from datetime, RequestContext from django.template, and the new Announcement model
from main.models.
Basically, we are grabbing all the announcements from the last thirty days and ordering them,
so the most recent will appear at the top of the page.
Run the tests. Make sure to manually test as well.
Exercise 2
Question: You may have noticed that in the Jedi Badge box there is a list achievements link. What if the user could get achievements for posting status reports,
attending events, and any other arbitrary action that we create in the future?
This may be a nice way to increase participation, because everybody likes badges,
right? Go ahead and implement this achievements feature. You'll need a model
to represent the badges and a link between each user and the badges they own
(maybe a user_badges table). Then you'll want your template to loop through
and display all the badges that the given user has.
There are several ways to do this. We'll look at the most straightforward.
First create the model main.models.Badge:
class Badge(models.Model):
    img = models.CharField(max_length=255)
    name = models.CharField(max_length=100)
    desc = models.TextField()

    class Meta:
        ordering = ('name',)
Each user will have a reference to the badges and many users can get the same badge, so this
creates a many-to-many relationship. Thus we will update payments.models.User adding
the new relationship field:
badges = models.ManyToManyField(Badge)
The default for a ManyToManyField is to create a lookup table. After adding this code,
drop the database, re-create it, and then run syncdb; in your database you will now have a
table called payments_user_badges. The badges field will manage all the relationship
stuff for you. Of course, we have to add from main.models import Badge for this to work.
That causes a problem, though, because we already have from payments.models import
User in main.models (because we are using it in the StatusReport model). This creates a
circular reference and will cause the import to break.
NOTE: Circular references don't always cause imports to break, and there are
ways to make them work, but it is generally considered bad practice to have circular references. You can find a good discussion on circular references here.
We can remove this circular reference by changing our StatusReport model class so it
doesn't have to import payments.User. We do that like so:
class StatusReport(models.Model):
    user = models.ForeignKey('payments.User')
    ...snip...
In the case of the ForeignKey field, Django allows us to reference a model by using its
name as a string. This means that Django will wait until the payments.User model is
created and then link it up with StatusReport. It also means we can remove our from
payments.models import User statement, and then we don't have a circular reference.
Next up is to return the list of badges as part of the request for the user.html page. Updating
our main.views.index() function, we now have:
def index(request):

    ...snip...

    else:
        #membership page
        status = StatusReport.objects.all().order_by('-when')[:20]
        announce_date = date.today() - timedelta(days=30)
        announce = (Announcement.objects.filter(
            when__gt=announce_date).order_by('-when')
        )

        usr = User.get_by_id(uid)
        badges = usr.badges.all()

        return render_to_response(
            'main/user.html',
            {
                'user': usr,
                'badges': badges,
                'reports': status,
                'announce': announce
            },
            context_instance=RequestContext(request),
        )
Right after looking up the current user, we get all the badges linked to them by calling
usr.badges.all(); badges is the ManyToManyField we just created, and all() returns all of the
related badges. We just set that list to the context variable badges and pass it into the
template.
Speaking of which, update the user.html template:
{% extends "__base.html" %}
{% load staticfiles %}
{% block content %}
<div id="achievements" class="row member-page hide">
    {% include "main/_badges.html" %}
</div>
<div class="row member-page">
    <div class="col-sm-8">
        <div class="row">
            {% include "main/_announcements.html" %}
            {% include "main/_lateststatus.html" %}
        </div>
    </div>
    <div class="col-sm-4">
        <div class="row">
            {% include "main/_jedibadge.html" %}
            {% include "main/_statusupdate.html" %}
        </div>
    </div>
</div>
{% endblock %}
Lines 4-6 are the important ones here. Basically, we are creating another info box that will
stretch across the top of the screen to show all the badges. But we are adding the Bootstrap
CSS class hide so the achievements row won't be shown.
The main/_badges.html template:
{% load staticfiles %}
<section class="row info-box text-center" id="badges">
    <h1 id="achieve">Achievements</h1>
    {% for bdg in badges %}
    <div class="col-lg-4">
        <h2>{{ bdg.name }}</h2>
        <img class="img-circle" src="{% static 'img/'|add:bdg.img %}"
             width="100" height="100" alt="{{ bdg.name }}"
             title="{{ bdg.name }} - {{ bdg.desc }}">
        <p>{{ bdg.desc }}</p>
    </div>
    {% endfor %}
</section>
Here, we loop through the badges and show them with a heading and description. Using
col-lg-4 means there will be three columns per row; if there are more than three badges, it
will just wrap and add another row.
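To actually see something in that loop while developing, you can create a badge or two and
attach them to your user from the Django shell. A quick sketch (the badge values here are
made up for illustration):

# ./manage.py shell
from main.models import Badge
from payments.models import User

badge = Badge.objects.create(
    img="first_status.png",          # hypothetical image file
    name="First Status Report",      # hypothetical badge name
    desc="Posted your very first status report.",
)

user = User.objects.get(pk=1)
user.badges.add(badge)   # the ManyToManyField manages the lookup table for us
print(user.badges.all())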
The final thing to do is to make the Show Achievements link work, since by default the
Achievements info-box is hidden. Clicking the Show Achievements link should show them.
And once they are visible, clicking the link again should hide them. The easiest way to do
this is to use some JavaScript. We haven't really talked much about JavaScript in this course
yet, but we will in an upcoming chapter. For now, just have a look at the code, which goes in
application.js:
// show or hide the achievements box
$("#show-achieve").click(function() {
    a = $("#achievements");
    l = $("#show-achieve");
    if (a.hasClass("hide")) {
        a.hide().removeClass('hide').slideDown('slow');
        l.html("Hide Achievements");
    } else {
        a.addClass("hide");
        l.html("Show Achievements");
    }
    return false;
});
This handles the click event that fires when you click on the Show Achievements link: if the
achievements div currently has the hide class, we remove it and slide the div into view,
changing the link text to Hide Achievements; otherwise we add the class back and flip the
label to Show Achievements. Returning false stops the browser from following the link.
You will also need to touch up the existing view test for the logged-in user, since index()
now looks up a real user and their badges:

def test_index_handles_logged_in_user(self):
    #create a session that appears to have a logged in user
    self.request.session = {"user": "1"}

    ...snip...

    u.delete()

And don't forget to run syncdb if you haven't already:

$ ./manage.py syncdb
REST
Exercise 1
Question: Flesh out the unit tests. In the JsonViewTests, check the case where
there is no data to return at all, and test a POST request with and without valid
data.
Expanding the tests, we can write a test_get_member():
def test_get_member(self):
    stat = StatusReport(user=self.test_user, status="testing")
    stat.save()

    status = StatusReport.objects.get(pk=stat.id)
    expected_json = StatusReportSerializer(status).data

    response = StatusMember.as_view()(self.get_request(),
                                      pk=stat.id)

    self.assertEqual(expected_json, response.data)

    stat.delete()
This is very similar to test_get_collection(). The main difference here is that we are
saving a status report for our test user and then passing in the pk to our view. This is
the same as calling the URL /api/v1/status_reports/1.
Likewise, we can test other methods such as DELETE:
def test_delete_member(self):
    stat = StatusReport(user=self.test_user, status="testing")
    stat.save()

    response = StatusMember.as_view()(
        self.get_request(method='DELETE'), pk=stat.pk)

    self.assertEqual(response.status_code,
                     status.HTTP_204_NO_CONTENT)

    stat.delete()
Now, in order to get this to pass, we need to update the setUpClass() method and add a
tearDownClass() method.
@classmethod
def setUpClass(cls):
    cls.factory = APIRequestFactory()
    cls.test_user = User(id=2222, email="[email protected]")
    cls.test_user.save()

@classmethod
def tearDownClass(cls):
    cls.test_user.delete()
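The exercise also asks for a POST request with and without valid data. Below is one way those
two tests might look. Note the assumptions: the serializer accepts user and status fields, and
we authenticate the request with DRF's force_authenticate() (your own get_request() helper may
already handle authentication differently):

from rest_framework.test import force_authenticate


def test_post_collection_with_valid_data(self):
    request = self.factory.post(
        '/api/v1/status_reports/',
        {'user': self.test_user.pk, 'status': 'posted from a test'})
    force_authenticate(request, user=self.test_user)
    response = StatusCollection.as_view()(request)
    self.assertEqual(response.status_code, status.HTTP_201_CREATED)


def test_post_collection_with_invalid_data(self):
    # leaving out the required status field should be rejected
    request = self.factory.post(
        '/api/v1/status_reports/', {'user': self.test_user.pk})
    force_authenticate(request, user=self.test_user)
    response = StatusCollection.as_view()(request)
    self.assertEqual(response.status_code, status.HTTP_400_BAD_REQUEST)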
Exercise 2
Question: Extend the REST API to cover main.models.Badge.
To create a new endpoint you need to update three files:
1. main/serializers.py - create the JSON serializer
2. main/json_views.py - create the DRF views
3. main/urls.py - add the REST URIs (endpoints)
First, create the serializer:
class BadgeSerializer(serializers.ModelSerializer):

    class Meta:
        model = Badge
        fields = ('id', 'img', 'name', 'desc')
Nothing to it. Just use the handy serializers.ModelSerializer. Make sure to update
the imports as well, so the serializer module knows about the Badge model:

from main.models import Badge

Next up are the DRF views in main/json_views.py:
class BadgeCollection(
    mixins.ListModelMixin, mixins.CreateModelMixin,
    generics.GenericAPIView
):

    queryset = Badge.objects.all()
    serializer_class = BadgeSerializer
    permission_classes = (permissions.IsAuthenticated,)

    def get(self, request, *args, **kwargs):
        return self.list(request, *args, **kwargs)

    def post(self, request, *args, **kwargs):
        return self.create(request, *args, **kwargs)


class BadgeMember(
    mixins.RetrieveModelMixin, mixins.UpdateModelMixin,
    mixins.DestroyModelMixin, generics.GenericAPIView
):

    queryset = Badge.objects.all()
    serializer_class = BadgeSerializer
    permission_classes = (permissions.IsAuthenticated,)

    def get(self, request, *args, **kwargs):
        return self.retrieve(request, *args, **kwargs)

    def put(self, request, *args, **kwargs):
        return self.update(request, *args, **kwargs)

    def delete(self, request, *args, **kwargs):
        return self.destroy(request, *args, **kwargs)
It's a bit of copy and paste from the StatusMember and StatusCollection, but it's pretty
clear exactly what is going on here.
Again, update the imports:
from main.models import Badge
from main.serializers import BadgeSerializer

Finally, add the new endpoints to main/urls.py:
urlpatterns = patterns(
    'main.json_views',
    url(r'^$', 'api_root'),
    url(r'^status_reports/$', json_views.StatusCollection.as_view(),
        name='status_reports_collection'),
    url(r'^status_reports/(?P<pk>[0-9]+)/$',
        json_views.StatusMember.as_view()),
    url(r'^badges/$', json_views.BadgeCollection.as_view(),
        name='badges_collection'),
    url(r'^badges/(?P<pk>[0-9]+)/$',
        json_views.BadgeMember.as_view()),
)
Don't forget to add the unit tests as well. This is all you, since you should be an expert now
after doing all the testing from Exercise 1. You'll notice, though, that once you do all your
tests for Badges, they are probably pretty similar to your tests for StatusReport. Can you factor
out a rest_api_test_case? Have a look back at the testing mixins in Chapter 2 and see if you
can do something similar here.
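One possible shape for such a factored-out test case, just to get you started; the class and
attribute names here are illustrative, not from the course repo:

class RESTAPITestMixin(object):
    """Shared tests for a collection/member pair of DRF views."""
    model = None            # e.g. StatusReport or Badge
    serializer = None       # the matching serializer class
    member_view = None      # e.g. StatusMember

    def make_instance(self):
        # subclasses create (and later clean up) one model instance
        raise NotImplementedError

    def test_get_member(self):
        obj = self.make_instance()
        expected = self.serializer(self.model.objects.get(pk=obj.pk)).data
        response = self.member_view.as_view()(self.get_request(), pk=obj.pk)
        self.assertEqual(expected, response.data)
        obj.delete()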
Exercise 3
Question: Did you know that the browsable API uses Bootstrap for the look and
feel? Since we just learned Bootstrap, update the browsable API Template to fit
with our overall site template.
This is actually pretty easy to do once you know how. In case your Googling didn't find it,
the DRF documentation tells you how on this page.
Basically, all you have to do is create a template rest_framework/api.html that extends
the DRF template rest_framework/base.html. The simplest thing we can do is to
change the CSS to use the CSS we are using for the rest of our site. To do that, make a
rest_framework/api.html template look like this:
{% extends "rest_framework/base.html" %}
{% load staticfiles %}
{% block bootstrap_theme %}
    <link href="{% static "css/bootstrap.min.css" %}"
          rel="stylesheet">
    <!-- HTML5 Shim and Respond.js IE8 support of HTML5 elements
         and media queries -->
    <!-- WARNING: Respond.js doesn't work if you view the page via
         file:// -->
    <!--[if lt IE 9]>
    <script
        src="https://oss.maxcdn.com/libs/html5shiv/3.7.0/html5shiv.js"></script>
    <script
        src="https://oss.maxcdn.com/libs/respond.js/1.4.2/respond.min.js"></script>
    <![endif]-->
    <link href="{% static "css/mec.css" %}" rel="stylesheet">
{% endblock %}
Here we added the bootstrap_theme block and set it to use the Bootstrap CSS that we are
already using, plus our custom mec.css stylesheet, mainly to get our cool star_jedi font.
Super simple.
Go ahead. Test it out. Fire up the server, and then navigate to
http://127.0.0.1:8000/api/v1/status_reports/.
Again, the docs have a lot more information about how to customize the look and feel of the
browsable API, so have a look if you're interested.
Exercise 4
Question: We don't have permissions on the browsable API. Add them in.
Right now, since our REST API requires authenticated users, the browsable API doesn't work
very well unless you can log in. So we can use the built-in DRF login and logout forms so
that a user can log in and use the browsable API. To do that, all we need to do is update our
main/urls.py file by adding this line at the end of the urlpatterns member:
url(r'^api-auth/', include(
'rest_framework.urls', namespace='rest_framework')),
While we are in there, let's also give API consumers a starting point by adding an api_root
view to main/json_views.py:

@api_view(('GET',))
def api_root(request):
    return Response({
        'status_reports': reverse(
            'status_reports_collection', request=request),
        'badges': reverse('badges_collection', request=request),
    })
This view function returns a list of the two URIs that we currently support. This way the user
can start here and click through to view the entire API. All we need to do now is add the URL
to our main/urls.py:

url(r'^$', 'api_root'),

Now, as an API consumer, we don't have to guess what the available resources are in our REST
API - just browse to http://localhost:8000/api/v1/ to view them all.
Django Migrations
Exercise 1
Question: At this point, if you drop your database, run migrations, and then run the
tests, you will have a failing test because there are no MarketingItems in the database.
For testing you have two options:
1. Load the data in the test (or use a fixture).
2. Load the data by using a data migration.
The preferred option for this case is to create a data migration to load the
MarketingItems. Can you explain why? Create the migration.
First, create a new migrations file. Use the following command to help out:

$ ./manage.py makemigrations --empty main

This will create a new migration file in main/migrations, which should be called
0003_auto_<some ugly date string>.py. I don't particularly care for that name,
so let's rename it to data_load_marketing_items_0003.py. (Later it will become clear
why we put the number at the end.) Then we can update the contents of the file to create the
marketing items that we need. In total the file should now look like:
# -*- coding: utf-8 -*-
from __future__ import unicode_literals

from django.db import migrations


init_marketing_data = [
    {
        "img": "yoda.jpg",
        "heading": "Hone your Jedi Skills",
        ...snip...
    },
    {
        "img": "clone_army.jpg",
        "heading": "Build your Clan",
        "caption": "Engage in meaningful conversation, or "
                   "bloodthirsty battle! If it's related to "
                   "Star Wars, in any way, you better believe we do it.",
        "button_title": "Sign Up Now"
    },
    {
        "img": "leia.jpg",
        "heading": "Find Love",
        "caption": "Everybody knows Star Wars fans are the "
                   "best mates for Star Wars fans. Find your "
                   "Princess Leia or Han Solo and explore the "
                   "stars together.",
        "button_title": "Sign Up Now"
    },
]


def create_marketing_items(apps, schema_editor):
    MarketingItem = apps.get_model("main", "MarketingItem")
    MarketingItem.objects.bulk_create(
        [MarketingItem(**item) for item in init_marketing_data]
    )


class Migration(migrations.Migration):

    dependencies = [
        ('main', '0002_statusreport'),
    ]

    operations = [
        migrations.RunPython(create_marketing_items)
    ]
This follows the same pattern as the user creation data migration we did in the Data Migrations
section of this chapter. The main differences are as follows:
1. The top part of the file just lists all the data that we are going to load. This makes it
easy to see (and change) exactly what will end up in the database.
2. The create_marketing_items() function uses the app registry that is provided by the
migration framework, so we get the version of the MarketingItem model that corresponds to our
particular migration. This is important in case the MarketingItem model later changes; by using
the app registry we can be sure we are getting the correct version of the model. Once we have
that correct version, we just create the data using a list comprehension.
But why use a funny name for the file?
Remember that we are trying to load the data so that our test
main.testMainPageView.test_returns_exact_html will run correctly. If you recall, the test
itself looks like:

def test_returns_exact_html(self):
    market_items = MarketingItem.objects.all()
    resp = index(self.request)
    self.assertEqual(
        resp.content,
        render_to_response("main/index.html",
                           {"marketing_items": market_items}
        ).content)
So we are comparing the HTML rendered by index.html to the HTML rendered when we pass in
market_items. If you recall from earlier chapters, market_items is a list that contains all
the items that we want to pass into our template to render it. This was left over
from before we switched the view function to read from the database. So this leaves us in a state
where we have duplication. We have two lists of system data:
1. data_load_marketing_items_0003.py - the data we used to load the database
2. main.views.market_items - a leftover from when we refactored the code to load
marketing items from the database.
So to remove that duplication and only have one place to store/load the initial system data,
we can change our test as follows:
def test_returns_exact_html(self):
    data = [MarketingItem(**d) for d in init_marketing_data]
    resp = index(self.request)
    self.assertEqual(
        resp.content,
        render_to_response(
            "main/index.html",
            {"marketing_items": data}
        ).content
    )
NOTE: Don't forget to remove the market_items import at the top of this file.
What we have done here is load the init_marketing_data list from our migration directly
into our test, so we don't need to keep the list in main.views.market_items around anymore,
and we remove the code duplication!
Again, so why the funny name for the migration file?
Because Python doesn't allow you to import a file whose name starts with a number. In other
words:

from main.migrations.0003_data_load_marketing_items import init_marketing_data

-will fail. Of course, as with all things in Python, you can get around it by doing something like:

init_marketing_data = __import__(
    'main.migrations.0003_data_load_marketing_items.init_marketing_data')

But you really should avoid calling anything with __ if you can. So we renamed the file instead.
Exercise 2
Question: We have a new requirement for two-factor authentication. Add a new
field to the user model called second_factor. Run ./manage.py makemigrations
payments. What did it create? Can you explain what is going on in each line of
the migration? Now run ./manage.py migrate and check the database to see
the change that has been made. What do you see in the database? Now assume
management comes back and says two-factor is too complex for users; we don't
want to add it after all. List two different ways you can remove the newly added
field using migrations.
We are not going to show the code for the first part; just try it out and see what happens. For
the question at the end - list two different ways you can remove the newly added field using
migrations - the answers are:
1. You can migrate backwards to the previous migration. So, assuming this was migration
0004 that came right after migration 0003, you could simply run:

$ ./manage.py migrate payments 0003

That will return you to before the field was created. Then just drop the field from your
model and continue.
2. The second way is to just drop the field from the model and then run makemigrations
again, which will then drop the field. At that point, since your migrations are going
around in a circle you may want to look at Squashing Migrations. Do keep in mind
though that squashing migrations is completely optional and not at all required.
Exercise 3
Question: Let's pretend that MEC has been bought by a big corporation - we'll call
it BIGCO. BIGCO loves making things complicated. They say that all users must
have a bigCoID, and that ID has to follow a certain formula. The ID should look
like this: <first_two_digits_in_name><1-digit-Rank_code><sign-up-date>
1-digit-Rank_code = Y for youngling, P for padwan, J for Jedi
sign-up-date = Use whatever date format you like
Since this is an ID field we need to ensure that it's unique.
Now create the new field and a migration for the field, then manually write a data
migration to populate the new field with the data from the pre-existing users.
First, add the new field to payments.models.User:

bigCoID = models.CharField(max_length=50)

Then run ./manage.py makemigrations payments; it will prompt you for a one-off default for the
existing rows (here we just typed 'foo'), and the generated migration will contain:

migrations.AddField(
    model_name='user',
    name='bigCoID',
    field=models.CharField(max_length=50, default='foo'),
    preserve_default=False,
),
There may be another AlterField statement for the last_notification field, which you
can ignore or delete. Now just apply the migration with:

$ ./manage.py migrate
Next, create an empty migration for payments and fill it in with a data migration that
populates bigCoID for every pre-existing user:

from django.db import migrations


def migrate_bigcoid(apps, schema_editor):
    User = apps.get_model("payments", "User")
    for u in User.objects.all():
        bid = ("%s%s%s" % (u.name[:2],
                           u.rank[:1],
                           u.created_at.strftime("%Y%m%d%H%M%S%f"),
                           ))
        u.bigCoID = bid
        u.save()


class Migration(migrations.Migration):

    dependencies = [
        ('payments', '0004_auto_20141001_0546'),
    ]

    operations = [
        migrations.RunPython(migrate_bigcoid)
    ]
You may be thinking that bigCoID should be a unique field. And you're right. The problem
is that if we try to add a unique field to a table, we need to ensure the data is unique, so we
have to do that as a three-step process:
1. Create a new field without the unique constraint (schema migration)
2. Update the data in the field to make it unique (data migration)
3. Add a unique constraint to the field (schema migration)
The only other way to do it would be to create a custom migration using the RunSQL operation.
But that means you have to write the SQL by hand.
Coming back to our specific example: we have already completed steps 1 and 2. So all we
have to do now is edit payments/models.py, add unique=True to bigCoID, then create and apply
the migration as usual. That will apply the unique constraint in the database and you are
good to go.
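In code, step 3 is just a one-line model change plus the auto-generated migration; roughly (your
migration's number and filename will differ):

# payments/models.py
bigCoID = models.CharField(max_length=50, unique=True)

# the migration that ./manage.py makemigrations payments then generates
# will contain an AlterField operation along these lines:
migrations.AlterField(
    model_name='user',
    name='bigCoID',
    field=models.CharField(max_length=50, unique=True),
),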
One last thing we need to handle: creating a new user. If we stop here, it will become difficult
to create new users without getting a unique key constraint error. So let's update the create
function in payments.models.User so that it will generate the bigCoID for us. The create
function will now look like this:
@classmethod
def create(cls, name, email, password, last_4_digits, stripe_id=''):
    new_user = cls(name=name, email=email,
                   last_4_digits=last_4_digits, stripe_id=stripe_id)
    new_user.set_password(password)

    #set bigCoID
    new_user.bigCoID = ("%s%s%s" % (new_user.name[:2],
                                    new_user.rank[:1],
                                    datetime.now().strftime("%Y%m%d%H%M%S%f"),
                                    ))
    new_user.save()
    return new_user
Notice that we are using datetime.now().strftime("%Y%m%d%H%M%S%f"). This is a datetime string
accurate to the microsecond, otherwise known as a poor man's unique ID. Since we are accurate
down to the microsecond, there is an extremely low probability that two users will ever be
created in the same microsecond, so this should basically always produce a unique value. You
could also use a database sequence to ensure uniqueness; take a look at this Django snippet,
for example.
Also, we need to update a few tests in tests.payments.testUserModel to use the create
method (as we didn't strictly require it until now):
@classmethod
def setUpClass(cls):
    cls.test_user = User.create(email="[email protected]", name='test user',
                                password="pass",
                                last_4_digits="1234")

def test_create_user_function_stores_in_database(self):
    self.assertEquals(User.objects.get(email="[email protected]"),
                      self.test_user)
AngularJS Primer
Exercise 1
Question: Our User Poll example uses progress bars, which are showing a percentage of votes. But our vote function just shows the raw number of votes, so
the two don't actually match up. Can you update the vote() function to return
the current percentage of votes? (HINT: You will also need to keep track of the
total number of votes.)
The answer to this question relies on storing a bit more information in the controller. Since
we are updating percentages, we will have to update every item in the list each time a vote is
placed. Here's a look at the controller in its entirety, and then we will break it down line by
line:
mecApp.controller('UserPollCtrl', function($scope) {
    $scope.total_votes = 0;
    $scope.vote_data = {};

    $scope.vote = function(voteModel) {
        if (!$scope.vote_data.hasOwnProperty(voteModel)) {
            $scope.vote_data[voteModel] = {"votes": 0, "percent": 0};
            $scope[voteModel] = $scope.vote_data[voteModel];
        }
        $scope.vote_data[voteModel]["votes"] = $scope.vote_data[voteModel]["votes"] + 1;
        $scope.total_votes = $scope.total_votes + 1;
        for (var key in $scope.vote_data) {
            item = $scope.vote_data[key];
            item["percent"] = item["votes"] / $scope.total_votes * 100;
        }
    };
});
Line 2 and 3 - total_votes keeps a running count of every vote cast, and vote_data holds a
per-Jedi record that ends up looking like this:

{ 'votes_for_yoda':
    { 'votes': 0,
      'percent': 0
    },
  'votes_for_qui':
    { 'votes': 0,
      'percent': 0
    },
}
Of course the structure will continue for each Jedi we add to the list. But the basic idea is that
we have a key with votes_for_ + the Jedi name, and that key returns the number of votes
and the percentage of all votes.
Line 6 and 7 - Since we have not defined any data in our vote_data structure, we need to
dynamically add it each time the vote() function is called. hasOwnProperty checks to see if we
have a key equal to the voteModel string passed in, which should be something like
votes_for_yoda. If we don't, then this is the first time that particular Jedi has been voted for,
so we create a new data structure for the Jedi and initialize everything to 0.
Line 8 - This line is completely optional, and we're adding it to save typing in the view.
Instead of typing something like {{ vote_data.votes_for_yoda.percent }}, we can simply say
{{ votes_for_yoda.percent }}.
Line 10 - Increase the number of votes for the current Jedi by one.
Line 11 - Keep track of the total number of votes.
Lines 12-14 - Loop through the data structure of all Jedis and recalculate their associated
percentage of votes.
That gets us through the controller, which will automatically update all associated vote
percentages. Now we do need to make some updates to the view as well. To save space, let's just
look at Yoda:
Likewise, we repeat the same structure for each vote item, and now you have a set of progress
bars that always total up to 100 and all dynamically update each time you cast a vote!
Curious to see the vote count for each individual Jedi? Update the HTML to also display the raw
vote count for each item.
[
{
"id": 1,
"title": "Who is the best Jedi?",
"publish_date": "2014-10-21T04:05:24.107Z",
"items": [
{
"id": 1,
"poll": 1,
"name": "Yoda",
"text": "Yoda",
"votes": 1,
"percentage": 50
},
{
"id": 2,
"poll": 1,
"name": "Vader",
"text": "Vader",
"votes": 1,
"percentage": 50
},
{
"id": 3,
"poll": 1,
"name": "Luke",
"text": "Luke",
"votes": 0,
"percentage": 0
        }
    ],
    "total_votes": 2
    }
]
So, yes - there is a poll.id, and the serializer in djangular_polls/serializers.py is what
exposes it; open that file and you will see the id field included in the serialized output.
Okay. Now to update the controller. We could change the JSON API to provide a way to
return only the latest poll, but let's learn a bit more about Angular instead. Update the
pollFactory:
pollsApp.factory('pollFactory', function($http, $filter) {
    var baseUrl = '/api/v1/';
    var pollUrl = baseUrl + 'polls/';
    var pollItemsUrl = baseUrl + 'poll_items/';
    var pollId = 0;
    var pollFactory = {};
    pollFactory.getPoll = function() {
        var tempUrl = pollUrl;
        if (pollId != 0) { tempUrl = pollUrl + pollId; }
        return $http.get(tempUrl).then(function(response) {
            var latestPoll =
                $filter('orderBy')(response.data, '-publish_date')[0];
            pollId = latestPoll.id;
            return latestPoll;
        });
    };

    pollFactory.vote_for_item = function(poll_item) {
        poll_item.votes += 1;
        return $http.put(pollItemsUrl + poll_item.id, poll_item);
    };

    return pollFactory;
});
Line 5 - var pollId = 0;: This is our state variable to store the poll id.
Line 8 - Notice we aren't taking in any parameters (before, the function took an id).
Lines 9-10 - If we don't have a pollId cached, then just call /api/v1/polls/,
which returns the complete list of polls. Otherwise pass the id, so we would be calling
/api/v1/polls/<pollId>.
Line 11 - Notice we are calling return on our $http.get().then() function. then()
returns a promise. So we will be returning a promise, but only after we first call
$http.get; then our then callback runs (lines 12-15), and the caller gets whatever that
callback returns.
Line 13 - This may be something new to you. It's the $filter function from Angular,
which is the same function that we use in the Angular templates - i.e., [[ model |
filter ]]. Here we are just calling it from the controller. So this line orders the list for
us, and we take the first element in the list after it is ordered.
Back to the pollFactory code:
Line 14 - cache the id of the poll we just grabbed, and
Line 15 - return the poll itself.
Finally, there is a slight change we need to make in our controller. Because our getPoll
function now resolves to a poll instead of to the raw response, we need to change the setPoll
function. It should now look like this:

function setPoll(promise){
    $scope.poll = promise;
}

And that's it. You will now get the whole list of polls for the first call, and after that only get
the poll you are interested in.
Exercise 3
Question: Currently our application is a bit of a mismatch: we are using jQuery on
some parts - i.e., showing achievements - and now Angular on the user polls. For
practice, convert the show-achievements functionality, from application.js,
into Angular code.
Let's handle this in four parts.
Part 1
The first thing we need to do is initialize our ng-app somewhere more general. I'm a big fan
of DRY, so let's just declare ng-app on the <html> tag in __base.html:

<html ng-app="mecApp">

We'll call it mecApp since that's more fitting for our application. This also means we need to
remove it from __polls.html. Also, let's move the declaration of the angular.module to
application.js (so it will be the first thing created). The top of application.js should
now look like:
var mecApp = angular.module('mecApp', []);

mecApp.config(function($interpolateProvider) {
    $interpolateProvider.startSymbol('[[')
                        .endSymbol(']]');
});
This replaces the old pollsApp version of the same config block:

pollsApp.config(function($interpolateProvider){
    $interpolateProvider.startSymbol('[[')
                        .endSymbol(']]');
});
Then everywhere that we referenced pollsApp we need to replace with mecApp. With that
change, everything should continue to work fine. It will allow us to use Angular throughout
our entire application as opposed to just for the poll app.
Part 2
Next, add a small controller to application.js to own the show/hide state for the
achievements box:
mecApp.controller('LoggedInCtrl', function($scope) {
    $scope.show_badges = false;
    $scope.show_hide_label = "Show";

    $scope.show = function() {
        $scope.show_badges = !$scope.show_badges;
        $scope.show_hide_label = ($scope.show_badges) ? 'Hide' : 'Show';
    }
});
The above is a very simple controller where we define two models, $scope.show_badges
and $scope.show_hide_label. We also define a function called show(), which will be
called when the user clicks on the Show Achievements link. So let's look at that link in
templates/main/_jedibadge.html: replace the old jQuery-driven anchor with one that uses
ng-click="show()" to call the controller and binds its text to show_hide_label. Those are the
two bits of Angular functionality tied into the list item.
{% extends "__base.html" %}
{% load staticfiles %}
{% block content %}
<div ng-controller="LoggedInCtrl">
<div id="achievements" class="row member-page"
ng-class="{hide: show_badges==false}">
{% include "main/_badges.html" %}
</div>
<div class="row member-page">
<div class="col-sm-8">
<div class="row">
{% include "main/_announcements.html" %}
{% include "main/_lateststatus.html" %}
</div>
</div>
<div class="col-sm-4">
<div class="row">
{% include "main/_jedibadge.html" %}
{% include "main/_statusupdate.html" %}
{% include "djangular_polls/_polls.html" %}
</div>
</div>
</div>
</div>
{% endblock %}
{% block extrajs %}
<script src="{% static "js/userPollCtrl.js" %}"
type="text/javascript"></script>
569
The important part is at the top of the content block:

<div ng-controller="LoggedInCtrl">
    <div id="achievements" class="row member-page"
         ng-class="{hide: show_badges==false}">
        {% include "main/_badges.html" %}
    </div>

Line 1 - Here we declare our controller.
Line 3 - ng-class is a core Angular directive used to add/remove classes dynamically. So, if the
value of $scope.show_badges is false, then we add the class hide to our div; otherwise, we
remove the class from our div. This is all we need to do to show/hide the div.
Part 4
Angular Forms
Exercise 1
Question: We are not quite done yet with our conversion to Angular, as that
register view function is begging for a refactor. A good way to organize things
would be to have the current register view function just handle the GET requests and return the register.html as it does now. As for the POST requests, I
would create a new users resource and add it to our existing REST API. So rather
than posting to /register, our Angular front-end will post to /api/v1/users.
This will allow us to separate the concerns nicely and keep the code a bit cleaner.
So, have a go at that.
The most straightforward way to do this is to leverage Django Rest Framework. The first
thing to do is to create a new file, payments/json_views.py. Then add a post_user()
function to that file:
from rest_framework.decorators import api_view
from rest_framework.response import Response

from django.db import IntegrityError, transaction

from payments.forms import UserForm
from payments.models import User, UnpaidUsers
from payments.views import Customer  # or wherever your Customer wrapper lives
@api_view(['POST'])
def post_user(request):
    form = UserForm(request.DATA)

    if form.is_valid():
        try:
            #update based on your billing method (subscription vs one time)
            customer = Customer.create(
                "subscription",
                email=form.cleaned_data['email'],
                description=form.cleaned_data['name'],
                card=form.cleaned_data['stripe_token'],
                plan="gold",
            )
        except Exception as exp:
            form.addError(exp)

        cd = form.cleaned_data
        try:
            with transaction.atomic():
                user = User.create(
                    cd['name'], cd['email'],
                    cd['password'], cd['last_4_digits'])

                if customer:
                    user.stripe_id = customer.id
                    user.save()
                else:
                    UnpaidUsers(email=cd['email']).save()

        except IntegrityError:
            form.addError(cd['email'] + ' is already a member')
        else:
            request.session['user'] = user.pk
            resp = {"status": "ok", "url": '/'}
            return Response(resp, content_type="application/json")

    resp = {"status": "fail", "errors": form.errors}
    return Response(resp, content_type="application/json")
After all the imports, we declare our view function (on Line 10) using the api_view decorator
and passing in the list of HTTP methods we support - just POST, in our case. The
post_user() function is basically the same as it was in the register view, but we can take
a few shortcuts because we are using DRF:
Line 10 - No need to convert the request to JSON; DRF does that for us; we just load
up the UserForm.
Lines 40-41, 43-44, 46-47 - No need to convert the response back to JSON; just use
rest_framework.response.Response and it converts for us.
Now we just need to update our URL config, so let's add a file payments/urls.py:

from django.conf.urls import patterns, url

urlpatterns = patterns(
    'payments.json_views',
    url(r'^users$', 'post_user'),
)
Let's update our main URL file, django_ecommerce/urls.py, the same way we did when we
added djangular_polls.
Add the import:

from payments.urls import urlpatterns as payments_json_urls

And the code:

main_json_urls.extend(payments_json_urls)
So that part of django_ecommerce/urls.py now looks like:

admin.autodiscover()

main_json_urls.extend(djangular_polls_json_urls)
main_json_urls.extend(payments_json_urls)
urlpatterns = patterns(
    '',
    url(r'^admin/', include(admin.site.urls)),
    url(r'^$', 'main.views.index', name='home'),
    url(r'^pages/', include('django.contrib.flatpages.urls')),
    url(r'^contact/', 'contact.views.contact', name='contact'),
    url(r'^report$', 'main.views.report', name="report"),

    # user registration/authentication
    url(r'^sign_in$', views.sign_in, name='sign_in'),
    url(r'^sign_out$', views.sign_out, name='sign_out'),
    url(r'^register$', views.register, name='register'),
    url(r'^edit$', views.edit, name='edit'),

    # api
    url(r'^api/v1/', include('main.urls')),
)
Finally, we have one last tiny change in our static/js/RegisterCtrl.js in order to point
it to the new url:
mecApp.factory("UserFactory", function($http) {
    var factory = {}
    factory.register = function(user_data) {
        return $http.post("/api/v1/users",
            user_data).then(function(response)
        {
            return response.data;
        });
    }
    return factory;
});
Easy, right?
Exercise 2
Question: As I said in the earlier part of the chapter, I'm leaving the form validation for _cardform.html to the user. True to my word, here it is as an exercise.
Put in some validation for the credit card and CVC fields.
Let's look at a simple way to add field validation for our credit card number and CVC fields.
Update django_ecommerce/templates/payments/_cardform.html:
<div class="clearfix">
    <label for="credit_card_number">Credit card number</label>
    <div class="input">
        <input class="field" id="credit_card_number" name="cc_num"
               type="text"
               ng-model="card.number" ng-minlength="16" required>
    </div>
    <div class="custom-error"
         ng-show="user_form.cc_num.$dirty && user_form.cc_num.$invalid"
         style="display: none;">
        Credit card number is invalid:
        <span ng-show="user_form.cc_num.$error.required">value is
            required</span>
        <span ng-show="user_form.cc_num.$error.minlength">min length of
            sixteen</span>
    </div>
</div>

<div class="clearfix">
    <label for="cvv">Security code (CVC)</label>
    <div class="input">
        <input class="small" id="cvc" type="text" name="cvc"
               ng-model="card.cvc" required
               ng-minlength="3">
    </div>
    <div class="custom-error" ng-show="user_form.cvc.$dirty &&
         user_form.cvc.$invalid" style="display: none;">
        CVC is invalid:
        <span ng-show="user_form.cvc.$error.required">value is
            required</span>
        <span ng-show="user_form.cvc.$error.minlength">min length of
            three</span>
    </div>
</div>
This is similar to how we added field validation to our user_form. We add the ng-minlength
attribute and the required attribute to each of our input fields, and then have a custom-error
div for each one that displays an error message based upon the error that was raised
by Angular.
SEE ALSO: If you wanted to validate that an input is a number, you would
think you could use the attribute type=number, but there apparently is a bug
in Angular for those types of validations, so you would have to create your own
custom directive to get around that. There is a good explanation of how to do
that on blakeembrey.com.
This validation is a bit weak, as you might still get an invalid credit card number that is 16
digits long. The Stripe API actually provides validation methods that you can call. Like the link
above, you would need to create a custom directive or filter to do the validation for you. But
there is already an angular-stripe library that does most of that heavy lifting for you. Have a
look at the library on GitHub.
Exercise 3
Question: While the code we added to templates/payments/_field.html is great
for our register page, it also affects our sign in page, which is now constantly
displaying errors. Fix it!
Remember that _field.html uses the template expression:

ng-show="{{form.form_name}}.{{field.name}}.$dirty &&
         {{form.form_name}}.{{field.name}}.$invalid"

-which relies on the form having a form_name attribute, which our SigninForm doesn't have.
So edit payments.forms.SigninForm and add the attribute just like you did previously for
payments.forms.UserForm. Your SigninForm should now look like:

class SigninForm(PaymentForm):
    email = forms.EmailField(required=True)
    password = forms.CharField(
        required=True,
        widget=forms.PasswordInput(render_value=False)
    )

    form_name = 'signin_form'
    ng_scope_prefix = 'signinform'
MongoDB Time!
Exercise 1
Question: Write a MongoTestCase that clears out a MongoDB database prior to
test execution/between test runs.
To accomplish this we just need to extend Django's built-in testing capabilities. Let's use
django.test.runner.DiscoverRunner for that:
from django.conf import settings
from django.test import TestCase
from django.test.runner import DiscoverRunner
from mongoengine import connect
from pymongo import MongoClient


class MongoTestRunner(DiscoverRunner):

    def setup_databases(self, **kwargs):
        ...snip...
        return db_name

    def teardown_databases(self, db_name, **kwargs):
        ...snip...
In the above code example we are overriding the setup_databases() and teardown_databases()
methods from the parent DiscoverRunner.
In the setup_databases() method we use mongoengine to create a new MongoDB
database. The test database has its name derived from the prefix test_ plus the database name
defined in our settings.py file, which in our case is mec-geodata.
Then in the teardown_databases() method we use pymongo to drop all collections in the
database.
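Fleshed out, those two methods might look roughly like this; MONGO_DB_NAME is an assumed
setting name, so substitute whatever your settings.py actually calls the mec-geodata database:

from django.conf import settings
from django.test.runner import DiscoverRunner
from mongoengine import connect
from pymongo import MongoClient


class MongoTestRunner(DiscoverRunner):

    def setup_databases(self, **kwargs):
        # prefix the real name so tests never touch live data
        db_name = 'test_%s' % settings.MONGO_DB_NAME  # assumed setting name
        connect(db_name)
        return db_name

    def teardown_databases(self, db_name, **kwargs):
        # use pymongo directly to wipe every collection in the test db
        db = MongoClient()[db_name]
        for name in db.collection_names():
            if not name.startswith('system.'):
                db.drop_collection(name)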
We can also make a MongoTestCase if we need to do any setup/configuration per test. It
would look like this:
class MongoTestCase(TestCase):

    def _fixture_setup(self):
        pass

    def _fixture_teardown(self):
        pass

Then point Django at the new runner in settings.py:

TEST_RUNNER = 'django_ecommerce.tests.MongoTestRunner'

This will tell Django to use your runner always. Of course, you can use a temporary settings
file for this as well, if you want to switch back and forth between database types:

TEST_RUNNER = 'django_ecommerce.tests.MongoTestRunner'

And that can be called from the command line by passing the --settings option to
./manage.py test.
That's all there is to it. Now you have a way to test MongoDB as well. So get cracking - write
some tests!
Let's go through the models one at a time. All the code below will go into main/admin.py.
First up: main.Announcement. Let's create the ModelAdmin and get the list view working
correctly.

@admin.register(Announcement)
class AnnouncementAdmin(admin.ModelAdmin):

    ...snip...

    info_html.short_description = "Info"
    info_html.allow_tags = True
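A sketch of how those pieces might fit together; the list_display columns here are an
assumption, but the info_html helper is the part that renders the stored HTML in the list view:

@admin.register(Announcement)
class AnnouncementAdmin(admin.ModelAdmin):
    list_display = ('when', 'info_html')   # assumed columns

    def info_html(self, obj):
        # return the raw HTML so the list view renders it
        return obj.info

    info_html.short_description = "Info"
    info_html.allow_tags = True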
Now we are going to fix the img field in the same way we did for badges by making it an
ImageField and adding a thumbnail view. After doing that our model should now look like
this:
class Announcement(models.Model):
    when = models.DateTimeField(auto_now=True)
    img = models.ImageField(null=True)
    vid = models.URLField(null=True)
    info = models.TextField()

    def thumbnail(self):
        if self.img:
            return u'<img src="%s" width="100" height="100" />' % (
                self.img.url)
        else:
            return "no image"

    thumbnail.allow_tags = True
<img src="{{MEDIA_URL}}{{a.img}}"/>
to
2
3
class Announcement(models.Model):
4
5
when = models.DateTimeField(auto_now=True)
581
7
8
This is necessary so we can get the correct widget displayed in our Admin view. Also, this
will require you to make and run migrations again, so go ahead and do that.
Now let's modify our AnnouncementAdmin to use embed_video.admin.AdminVideoMixin.
All we have to change is the class definition:

@admin.register(Announcement)
class AnnouncementAdmin(AdminVideoMixin, admin.ModelAdmin):
Once that mixin is in place, it will take care of finding any fields of type EmbedVideoField
and will create a custom widget that displays the video and the URL for the video in the change
form. So if you now go to http://127.0.0.1:8000/admin/main/announcement/2/ you
will see a playable thumbnail of the video plus a text box to input the URL for the video.
There you go. That should do it for Announcements. Feel free to tweak anything else you like,
but that should get you most of the functionality you need.
main.MarketingItem
Marketing Items will end up being very similar to Announcements. Let's start off with the
model updates. Since we now have three models that will likely use the thumbnail view, it's
time to refactor.
First let's create a ThumbnailMixin:
class ThumbnailMixin(object):
    '''Use this mixin if you want to easily show thumbnails for an
    image field in your admin view.
    '''

    def thumbnail(self):
        if self.img:
            return u'<img src="%s" width="100" height="100" />' % (
                self.img.url)
        else:
            return "no image"

    thumbnail.allow_tags = True
Then we can add the ThumbnailMixin to the MarketingItem, Announcement, and Badge models.
Don't forget to make and run the migrations afterwards:

$ ./manage.py makemigrations
$ ./manage.py migrate
For the Admin view, let's do something a little different. If you remember, back in the Bootstrap
chapter we created a template tag, marketing__circle_item, to display marketing items.
So why not use that tag in the admin view, so administrators can immediately see what the
marketing item will look like on the live site? Below is the admin class that will make that
happen:

@admin.register(MarketingItem)
class MarketingItemAdmin(admin.ModelAdmin):

    ...snip...
The magic is in the live_view function, which simply renders the template associated with
the marketing__circle_item template tag and passes in the MarketingItem mi as the context
for the template.
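A rough sketch of an admin class along those lines; the list_display columns and the context
key are assumptions based on the circle_item.html template shown earlier in the chapter:

from django.template.loader import render_to_string


@admin.register(MarketingItem)
class MarketingItemAdmin(admin.ModelAdmin):
    list_display = ('heading', 'live_view')   # assumed columns

    def live_view(self, mi):
        # render the same template the inclusion tag uses, with the
        # MarketingItem as its context, so admins see the real thing
        return render_to_string('main/templatetags/circle_item.html',
                                {'marketing_items': [mi]})

    live_view.allow_tags = True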
Try it out; you'll see the fully rendered marketing item in the admin list view!
One more to go.
main.StatusReport
This one is even simpler. No need to change the model. Let's just create an Admin class:

@admin.register(StatusReport)
class StatusReportAdmin(admin.ModelAdmin):

    ...snip...

    formfield_overrides = {
        models.CharField: {'widget': Textarea(attrs={'rows': 4,
                                                     'cols': 70})},
    }
The formfield_overrides is the only thing new here. What it says is: change the widget used to
edit any field that is a models.CharField to a Textarea, and pass attrs to
Textarea.__init__(). This will give us a large Textarea to edit the status message. We
want to do this since the max_length for the field is 200 and it's just easier to type all that
in a Textarea.
Okay. That's it for the models. Things should be looking better now.
Exercise 2
Question: For our payments.User object, in the change form you will notice that the
badges section isn't very helpful. See if you can change that section to show a list
of the actual badges, so it's possible to know which badges you are
adding to / removing from the user.
There are a few ways to do this. Probably the simplest is to use filter_horizontal or
filter_vertical, which will give you a nice jQuery select that shows what you have chosen.
To do that, just add a single line to the bottom of your UserAdmin. Here is the full UserAdmin:

@admin.register(User)
class UserAdmin(admin.ModelAdmin):

    ...snip...
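The one line in question is the filter_horizontal option; a minimal sketch (whatever
list_display and other options you already have stay exactly as they are):

@admin.register(User)
class UserAdmin(admin.ModelAdmin):
    # ...your existing list_display / search_fields / etc. ...

    filter_horizontal = ('badges',)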
That will give us a nice select, but all the options will just say "Badge object". We can change
this, like we changed the SelectWidget, by adding a __str__ function to our Badge model.
Such as:

def __str__(self):
    return self.name
class RegisterPage(SeleniumPage):

    ...snip...
Nothing special here; we are just defining our elements like we did with SignInPage.
Next are a few properties:
@property
def error_msg(self):
    '''the errors div has a 'x' to close it
    let's not return that
    '''
    return self.errors_div.text[2:]

@property
def rel_url(self):
    return '/register'
Again, this is similar to SignInPage. Do note that for the error_msg property we trim off the
first two characters, since they are used to show the X a user can click on to hide the errors.
def go_to(self):
    self.driver.get('%s%s' % (self.base_url, self.rel_url))
    assert self.register_title.text == "Register Today!"
Okay. Let's look at that mock_geoloc(). We talked about this in the You need to run
some JavaScript code section of the chapter. Basically, we are setting the value of
$scope.geoloc to whatever the caller passes in. This way we can test that geolocation is
working, because with the default browser you get when you run Selenium, the geolocation
functionality probably won't work correctly.
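The helper itself is just a bit of JavaScript executed through the webdriver. A sketch of how it
could work, assuming the registration page exposes its Angular scope on an element with id
geoloc and stores the coordinates in $scope.geoloc (both assumptions for illustration):

def mock_geoloc(self, lat, lon):
    # the real geolocation prompt is unreliable under Selenium, so push
    # the coordinates straight into the Angular scope instead
    script = (
        "var scope = angular.element("
        "document.getElementById('geoloc')).scope();"
        "scope.$apply(function() {"
        "    scope.geoloc = {lat: %s, lon: %s};"
        "});" % (lat, lon)
    )
    self.driver.execute_script(script)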
1. Setting our drop downs, by clicking on the option tag in the drop down. Note here
that normally we say webdriver.find_element..., but now we are in effect saying
webelement.find_element.... When we do this, our find will be restricted to only
search through the child DOM objects of the web element. This is how we can ensure
that we are finding only the option tags that belong to our drop down. These functions
could be refactored up into the SeleniumElement class - consider that extra credit.
self.name_textbox.send_keys(name)
self.email_textbox.send_keys(email)
self.pwd_textbox.send_keys(pwd)
self.ver_pwd_textbox.send_keys(pwd2)
self.cc_textbox.send_keys(cc)
self.cvc_textbox.send_keys(cvc)
self.set_expiry_month(expiry_month)
self.set_expiry_year(expiry_year)
self.mock_geoloc(lat, lon)
self.register_button.click()
This mimics the doLogin() function we have on the SignInPage. It just fills out all the
necessary fields, and, as a bonus, mocks out the geolocation value as well.
Now onto the actual test.
class RegistrationTests(TestCase):

    @classmethod
    def setUpClass(cls):
        from selenium.webdriver.firefox.firefox_profile import FirefoxProfile
        profile = FirefoxProfile()
        profile.set_preference('geo.prompt.testing', True)
        profile.set_preference('geo.prompt.testing.allow', True)
        cls.browser = webdriver.Firefox(profile)

        cls.browser.implicitly_wait(10)
        super(RegistrationTests, cls).setUpClass()
This is the hardest part of the test, and probably sent you googling for a bit. That's all part
of the programming game. If you ran the test before this change, you may have noticed the
browser kept asking if it was okay to give out your location. (Whether or not the browser asks
you this is controlled by a setting in your Firefox profile.) Luckily, with Selenium we have
access to the Firefox profile and can set whatever values we want. The settings above ensure
that you are not prompted with that annoying popup anymore.
@classmethod
def tearDownClass(cls):
    cls.browser.quit()
    super(RegistrationTests, cls).tearDownClass()

def setUp(self):
    ...snip...
def test_registration(self):
    self.reg.go_to()
    self.reg.do_reg(name="somebodynew", email="[email protected]",
                    pwd="test", pwd2="test", cc="4242424242424242",
                    cvc="123", expiry_month="4", expiry_year="2020")
    self.assertTrue(
        self.browser.find_element_by_id("user_info").is_displayed())
Nothing special here; we are just filling out the form using our PageObject and checking to
make sure the user_info element is displayed (which should only appear on the logged-in members page).
def test_failed_registration(self):
    self.reg.go_to()
    self.reg.do_reg(name="somebodynew", email="[email protected]",
                    pwd="test", pwd2="test2", cc="4242424242424242",
                    cvc="123", expiry_month="4", expiry_year="2020")
    self.assertIn("Passwords do not match", self.reg.error_msg)
And for our final test we fill out all the information (with passwords that don't match) and
check to see that we get the right error message. We could also come up with other combinations of this test to check the various error messages one might encounter.
One final note: even though test_failed_registration() comes later in the file than
test_registration(), it runs first. Why? By default, the test runner runs the tests in a class in
alphabetical order. This tidbit is helpful if you want your tests to run in a certain order.
For example, if you run all your failed registrations before your successful registration, the
quick failure cases finish first and you get feedback on them sooner.
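If you ever want to lean on that behaviour deliberately, a tiny standalone illustration (not part of the course code) makes the ordering easy to see:

import unittest


class OrderingDemo(unittest.TestCase):
    # unittest sorts test methods alphabetically within a class, so
    # test_a_runs_first always executes before test_b_runs_second.

    def test_a_runs_first(self):
        self.assertTrue(True)

    def test_b_runs_second(self):
        self.assertTrue(True)


if __name__ == '__main__':
    unittest.main(verbosity=2)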
While this is the only exercise in this chapter, feel free to continue the testing and see if you
can come up with test cases for the major functionality pieces. It is good practice. Cheers!
Deploy
Exercise 2
Question: Didn't think you were going to get out of this chapter without writing
any code, did you? Remember back in the Fabric section when we talked about
creating another function to update all of your configuration files? Well, now is
the time to do that. Create a function called update_config() that automatically
updates your Nginx, Supervisor, and Django config/settings files for you.
When the function is complete you should be able to execute:
$ fab update_config
Bonus points if you can get it all to work so the user only has to type:
$ fab ci
For the first part - i.e., the update_config() function - our function should look something
like this:
def update_config():
    with cd("/opt/mec_env/mec_app/deploy"):
        run("cp settings_prod.py ../django_ecommerce/django_ecommerce/")
        run("cp supervisor/mec.conf /etc/supervisor/conf.d/")
        run("cp nginx/sites-avaliable/mec /etc/nginx/sites-available/")
        run("/etc/init.d/supervisor restart")
        run("/etc/init.d/nginx restart")
Not a lot is new above. Basically, all I did was start off in the deploy directory, copy the three
files to their necessary places (cp overwrites the destination by default), and then restart
both Supervisor and Nginx to make sure any configuration changes are picked up. With that
you can now run:

$ fab update_config
For the bonus section, there is really nothing to it. Keep in mind that Fabric is just
Python, so you can structure it any way you like. That means you just need to create a function called ci that in turn calls the other three functions.
Below is a listing of the entire fabfile.py including the new ci function.
from fabric.api import env, cd, run, prefix, lcd, settings, local


def ci():
    integrate()
    update_app()
    update_config()


def update_app():
    with cd("/opt/mec_env/mec_app"):
        run("git pull")
    with cd("/opt/mec_env/mec_app/django_ecommerce"):
        with prefix("source /opt/mec_env/bin/activate"):
            run("pip install -r ../requirements.txt")
            run("./manage.py migrate --noinput")
            run("./manage.py collectstatic --noinput")


def update_config():
    with cd("/opt/mec_env/mec_app/deploy"):
        run("cp settings_prod.py ../django_ecommerce/django_ecommerce/")
        run("cp supervisor/mec.conf /etc/supervisor/conf.d/")
        run("cp nginx/sites-avaliable/mec /etc/nginx/sites-available/")
        run("/etc/init.d/supervisor restart")
        run("/etc/init.d/nginx restart")


def integrate():
    with lcd("../django_ecommerce/"):
        local("pwd")
        local("./manage.py test ../tests/unit")

        with settings(warn_only=True):
            local("git add -p && git commit")

        local("git pull")
        local("./manage.py test ../tests")
        local("git push")