Python Peewee
Release 2.4.0
Charles Leifer

Contents:
1.1 Installing and Testing
1.2 Quickstart
1.3 Example app
1.4 Additional Resources
1.5 Managing your Database
1.6 Models and Fields
1.7 Querying
1.8 Query operators
1.9 Foreign Keys
1.10 Performance Techniques
1.11 Transactions
1.12 API Reference
1.13 Playhouse, a collection of addons
Peewee is a simple and small ORM. It has few (but expressive) concepts, making it easy to learn and intuitive to use.
A small, expressive ORM
Written in Python, with support for versions 2.6+ and 3.2+.
Built-in support for SQLite, MySQL and PostgreSQL.
Tons of extensions available in the Playhouse, a collection of addons (postgres hstore/json/arrays, sqlite full-text search, schema migrations, and much more).
CHAPTER 1
Contents:
1.1 Installing and Testing
Note: On some systems you may need to use sudo python setup.py install to install peewee systemwide.
You can test specific features or specific database drivers using the runtests.py script. By default the test suite is
run using SQLite and the playhouse extension tests are not run. To view the available test runner options, use:
python runtests.py --help
1.2 Quickstart
This document presents a brief, high-level overview of Peewee's primary features. This guide will cover:
Model Definition
Storing data
Retrieving Data
Note: If you'd like something a bit more meaty, there is a thorough tutorial on creating a twitter-style web app using peewee and the Flask framework.
I strongly recommend opening an interactive shell session and running the code. That way you can get a feel for
typing in queries.
There are lots of field types suitable for storing various types of data. Peewee handles converting between pythonic values and those used by the database, so you can use Python types in your code without having to worry.
Things get interesting when we set up relationships between models using foreign keys. This is easy to do with
peewee:
class Pet(Model):
    owner = ForeignKeyField(Person, related_name='pets')
    name = CharField()
    animal_type = CharField()

    class Meta:
        database = db # this model uses the "people.db" database
Now that we have our models, let's create the tables in the database that will store our data. This will create the tables with the appropriate columns, indexes, sequences, and foreign key constraints:
>>> db.create_tables([Person, Pet])
Note: When you call save(), the number of rows modified is returned.
You can also add a person by calling the create() method, which returns a model instance:
>>> grandma = Person.create(name='Grandma', birthday=date(1935, 3, 1), is_relative=True)
>>> herb = Person.create(name='Herb', birthday=date(1950, 5, 5), is_relative=False)
To update a row, modify the model instance and call save() to persist the changes. Here we will change Grandma's name and then save the changes in the database:
>>> grandma.name = 'Grandma L.'
>>> grandma.save() # Update grandma's name in the database.
1
Now we have stored 3 people in the database. Let's give them some pets. Grandma doesn't like animals in the house, so she won't have any, but Herb is an animal lover:
>>> bob_kitty = Pet.create(owner=uncle_bob, name='Kitty', animal_type='cat')
>>> herb_fido = Pet.create(owner=herb, name='Fido', animal_type='dog')
>>> herb_mittens = Pet.create(owner=herb, name='Mittens', animal_type='cat')
>>> herb_mittens_jr = Pet.create(owner=herb, name='Mittens Jr', animal_type='cat')
After a long full life, Mittens sickens and dies. We need to remove him from the database:
>>> herb_mittens.delete_instance() # he had a great life
1
Note: The return value of delete_instance() is the number of rows removed from the database.
Uncle Bob decides that too many animals have been dying at Herb's house, so he adopts Fido:
>>> herb_fido.owner = uncle_bob
>>> herb_fido.save()
>>> bob_fido = herb_fido # rename our variable for clarity
Lists of records
Let's list all the people in the database:
>>> for person in Person.select():
...     print person.name, person.is_relative
...
Bob True
Grandma L. True
Herb False
There is a big problem with the previous query: because we are accessing pet.owner.name and we did not select this value in our original query, peewee will have to perform an additional query to retrieve the pet's owner. This behavior is referred to as N+1 and it should generally be avoided.
We can avoid the extra queries by selecting both Pet and Person, and adding a join.
>>> query = (Pet
...          .select(Pet, Person)
...          .join(Person)
...          .where(Pet.animal_type == 'cat'))
>>> for pet in query:
...     print pet.name, pet.owner.name
...
Kitty Bob
Mittens Jr Herb
We can do another cool thing here to get Bob's pets. Since we already have an object to represent Bob, we can do this instead:
>>> for pet in Pet.select().where(Pet.owner == uncle_bob):
...     print pet.name
Let's make sure these are sorted alphabetically by adding an order_by() clause:
>>> for pet in Pet.select().where(Pet.owner == uncle_bob).order_by(Pet.name):
...     print pet.name
...
Fido
Kitty
Now let's list all the people and some info about their pets:
>>> for person in Person.select():
...     print person.name, person.pets.count(), 'pets'
...     for pet in person.pets:
...         print '    ', pet.name, pet.animal_type
...
Bob 2 pets
     Kitty cat
     Fido dog
Grandma L. 0 pets
Herb 1 pets
     Mittens Jr cat
Once again we've run into a classic example of N+1 query behavior. We can avoid this by performing a JOIN and aggregating the records:
>>> subquery = Pet.select(fn.COUNT(Pet.id)).where(Pet.owner == Person.id)
>>> query = (Person
...          .select(Person, Pet, subquery.alias('pet_count'))
...          .join(Pet, JOIN_LEFT_OUTER)
...          .order_by(Person.name))
>>> for person in query.aggregate_rows(): # Note the aggregate_rows() call.
...     print person.name, person.pet_count, 'pets'
...     for pet in person.pets:
...         print '    ', pet.name, pet.animal_type
...
Bob 2 pets
     Kitty cat
     Fido dog
Grandma L. 0 pets
Herb 1 pets
     Mittens Jr cat
Even though we created the subquery separately, only one query is actually executed.
Finally, let's do a complicated one. Let's get all the people whose birthday was either:
before 1940 (grandma)
after 1959 (bob)
>>> d1940 = date(1940, 1, 1)
>>> d1960 = date(1960, 1, 1)
>>> query = (Person
...          .select()
...          .where((Person.birthday < d1940) | (Person.birthday > d1960)))
>>> for person in query:
...     print person.name
...
Bob
Grandma L.
Now let's do the opposite. People whose birthday is between 1940 and 1960:
>>> query = (Person
...          .select()
...          .where((Person.birthday > d1940) & (Person.birthday < d1960)))
>>> for person in query:
...     print person.name
...
Herb
One last query. This will use a SQL function to find all people whose names start with either an upper or lower-case G:
>>> expression = (fn.Lower(fn.Substr(Person.name, 1, 1)) == 'g')
>>> for person in Person.select().where(expression):
...     print person.name
...
Grandma L.
This is just the basics! You can make your queries as complex as you like.
All the other SQL clauses are available as well, such as:
group_by()
having()
limit() and offset()
Check the documentation on Querying for more info.
After ensuring that flask is installed, cd into the twitter example directory and execute the run_example.py script:
python run_example.py
In order to create these models we need to instantiate a SqliteDatabase object. Then we define our model classes,
specifying the columns as Field instances on the class.
# create a peewee database instance -- our models will use this database to
# persist information
database = SqliteDatabase(DATABASE)

# model definitions -- the standard "pattern" is to define a base model class
# that specifies which database to use. then, any subclasses will automatically
# use the correct storage.
class BaseModel(Model):
    class Meta:
        database = database

# the user model specifies its fields (or columns) declaratively, like django
class User(BaseModel):
    username = CharField(unique=True)
    password = CharField()
    email = CharField()
    join_date = DateTimeField()

    class Meta:
        order_by = ('username',)

# this model contains two foreign keys to user -- it essentially allows us to
# model a "many-to-many" relationship between users. by querying and joining
# on different columns we can expose who a user is "related to" and who is
# "related to" a given user
class Relationship(BaseModel):
    from_user = ForeignKeyField(User, related_name='relationships')
    to_user = ForeignKeyField(User, related_name='related_to')

    class Meta:
        indexes = (
            # Specify a unique multi-column index on from/to-user.
            (('from_user', 'to_user'), True),
        )

# a dead simple one-to-many relationship: one user has 0..n messages, exposed by
# the foreign key. because we didn't specify, a user's messages will be accessible
# as a special attribute, User.message_set
class Message(BaseModel):
    user = ForeignKeyField(User)
    content = TextField()
    pub_date = DateTimeField()

    class Meta:
        order_by = ('-pub_date',)
Note: Note that we create a BaseModel class that simply defines what database we would like to use. All other
models then extend this class and will also use the correct database connection.
Peewee supports many different field types which map to different column types commonly supported by database
engines. Conversion between python types and those used in the database is handled transparently, allowing you to
use the following in your application:
Strings (unicode or otherwise)
Open a python shell in the directory alongside the example app and execute the following:
>>> from app import *
>>> create_tables()
Note: If you encounter an ImportError it means that either flask or peewee was not found and may not be installed
correctly. Check the Installing and Testing document for instructions on installing peewee.
Every model has a create_table() classmethod which runs a SQL CREATE TABLE statement in the database. This method will create the table, including all columns, foreign-key constraints, indexes, and sequences. Usually this is something you'll only do once, whenever a new model is added.
Peewee provides a helper method Database.create_tables() which will resolve inter-model dependencies
and call create_table() on each model.
Note: Adding fields after the table has been created will require you to either drop the table and re-create it or manually add the columns using an ALTER TABLE query.
Alternatively, you can use the schema migrations extension to alter your database schema using Python.
Note: You can also write database.create_tables([User, ...], True) and peewee will first check
to see if the table exists before creating it.
Because SQLite likes to have a separate connection per-thread, we will tell flask that during the request/response cycle
we need to create a connection to the database. Flask provides some handy decorators to make this a snap:
@app.before_request
def before_request():
g.db = database
g.db.connect()
@app.after_request
def after_request(response):
g.db.close()
return response
Note: We're storing the db on the magical variable g - that's a flask-ism and can be ignored as an implementation detail. The important takeaway is that we connect to our db every request and close that connection when we return a response.
Making queries
In the User model there are a few instance methods that encapsulate some user-specific functionality:
following(): who is this user following?
followers(): who is following this user?
These methods are similar in their implementation but with an important difference in the SQL JOIN and WHERE
clauses:
def following(self):
# query other users through the "relationship" table
return (User
.select()
.join(Relationship, on=Relationship.to_user)
.where(Relationship.from_user == self))
def followers(self):
return (User
.select()
.join(Relationship, on=Relationship.from_user)
.where(Relationship.to_user == self))
        join_date=datetime.datetime.now()
    )
    # mark the user as being authenticated by setting the session vars
    auth_user(user)
    return redirect(url_for('homepage'))
except IntegrityError:
    flash('That username is already taken')
We will use a similar approach when a user wishes to follow someone. To indicate a following relationship, we create
a row in the Relationship table pointing from one user to another. Due to the unique index on from_user and
to_user, we will be sure not to end up with duplicate rows:
user = get_object_or_404(User, username=username)
try:
    with database.transaction():
        Relationship.create(
            from_user=get_current_user(),
            to_user=user)
except IntegrityError:
    pass
Performing subqueries
If you are logged-in and visit the twitter homepage, you will see tweets from the users that you follow. In order to
implement this cleanly, we can use a subquery:
# python code
messages = Message.select().where(Message.user << user.following())
    kwargs[var_name] = qr.paginate(kwargs['page'])
    return render_template(template_name, **kwargs)
Simple authentication system with a login_required decorator. The first function simply adds user data
into the current session when a user successfully logs in. The decorator login_required can be used to
wrap view functions, checking for whether the session is authenticated and if not redirecting to the login page.
def auth_user(user):
    session['logged_in'] = True
    session['user_id'] = user.id
    session['username'] = user.username
    flash('You are logged in as %s' % (user.username))

def login_required(f):
    @wraps(f)
    def inner(*args, **kwargs):
        if not session.get('logged_in'):
            return redirect(url_for('login'))
        return f(*args, **kwargs)
    return inner
Return a 404 response instead of throwing exceptions when an object is not found in the database.
def get_object_or_404(model, *expressions):
    try:
        return model.get(*expressions)
    except model.DoesNotExist:
        abort(404)
Note: Like these snippets and interested in more? Check out flask-peewee - a flask plugin that provides a django-like
Admin interface, RESTful API, Authentication and more for your peewee models.
To use this database with your models, set the database attribute on an inner Meta class:
class MyModel(Model):
    some_field = CharField()

    class Meta:
        database = database
Best practice: define a base model class that points at the database object you wish to use, and then all your models
will extend it:
database = SqliteDatabase('my_app.db')

class BaseModel(Model):
    class Meta:
        database = database

class User(BaseModel):
    username = CharField()

class Tweet(BaseModel):
    user = ForeignKeyField(User, related_name='tweets')
    message = TextField()
    # etc, etc
Note: Remember to specify a database on your model classes, otherwise peewee will fall back to a default sqlite database named 'peewee.db'.
The Playhouse, a collection of addons contains a Postgresql extension module which provides many postgres-specific
features such as:
Arrays
HStore
JSON
Server-side cursors
And more!
If you would like to use these awesome features, use the PostgresqlExtDatabase from the playhouse.postgres_ext module:
The Playhouse, a collection of addons contains a SQLite extension module which provides many SQLite-specific
features such as:
Full-text search
Support for custom functions, aggregates and collations
Advanced transaction support
And more!
If you would like to use these awesome features, use the SqliteExtDatabase from the playhouse.sqlite_ext module:
Because the outer select query is lazily evaluated, the cursor is held open for the duration of the loop. If the database is in autocommit mode (default behavior), the call to Tweet.create will call commit() on the underlying connection, resetting the outer loop's cursor. As a result, it may happen that the first two users actually receive duplicate tweets.
Here are some ways to work around the issue:
# By running in a transaction, the new tweets will not be committed
# immediately, and the outer SELECT will not be reset.
with database.transaction():
    for user in User.select():
        Tweet.create(user=user, message='hello!')

# By consuming the cursor immediately (by coercing to a list), the
# inner COMMITs will not affect the iteration.
for user in list(User.select()):
    Tweet.create(user=user, message='hello!')
Many, many thanks to @tmoertel for his excellent comment explaining this behavior.
APSW, an Advanced SQLite Driver
Peewee also comes with an alternate SQLite database that uses apsw, an advanced Python SQLite driver. More information on APSW can be obtained on the APSW project website. APSW provides special features like:
Virtual tables, virtual file-systems, Blob I/O, backups and file control.
Connections can be shared across threads without any additional locking.
Transactions are managed explicitly by your code.
Transactions can be nested.
Unicode is handled correctly.
APSW is faster than the standard library sqlite3 module.
If you would like to use APSW, use the APSWDatabase from the apsw_ext module:
postgresql://postgres:my_password@localhost:5432/my_database will create a PostgresqlDatabase instance. A username and password are provided, as well as the host and port to connect to.
mysql:///my_db will create a MySQLDatabase instance for the local MySQL database my_db.
The above code will cause peewee to store the connection state in a thread local; each thread gets its own separate connection.
Alternatively, the Python sqlite3 module can share a connection across different threads, but you have to disable runtime checks to reuse the single connection. This behavior can lead to subtle bugs regarding nested transactions when not used with care, so typically I do not recommend using this option.
database = SqliteDatabase('stats.db', check_same_thread=False)
Note: For web applications or any multi-threaded application (including green threads!), it is recommended that you use threadlocals=True when instantiating your database.
As of version 2.3.3, this is the default behavior when instantiating your database, but for earlier versions you will need to specify this manually.
# Un-initialized database.
database = PostgresqlDatabase(None)

class SomeModel(Model):
    class Meta:
        database = database

If you try to connect or issue any queries while your database is uninitialized you will get an exception:
>>> database.connect()
Exception: Error, database not properly initialized before opening connection
To initialize your database, call the init() method with the database name and any additional keyword arguments:
database_name = raw_input('What is the name of the db? ')
database.init(database_name, host='localhost', user='postgres')
class BaseModel(Model):
    class Meta:
        database = database_proxy

class User(BaseModel):
    username = CharField()

# Based on configuration, use a different database.
if app.config['DEBUG']:
    database = SqliteDatabase('local.db')
elif app.config['TESTING']:
    database = SqliteDatabase(':memory:')
else:
    database = PostgresqlDatabase('mega_production_db')

# Configure our proxy to use the db we specified in config.
database_proxy.initialize(database)
Now when you execute writes (or deletes), they will be run on the master, while all read-only queries will be executed
against one of the replicas. Queries are dispatched among the read slaves in round-robin fashion.
Note: pwiz generally works quite well with even large and complex database schemas, but in some cases it will not be
able to introspect a column. You may need to go through the generated code to add indexes, fix unrecognized column
types, and resolve any circular references that were found.
pskel will generate code to connect to an in-memory SQLite database, as well as blank model definitions for the
model names specified on the command line.
Here is a more complete example, which will use the PostgresqlExtDatabase with query logging enabled:
pskel -l -e postgres_ext -d my_database User Tweet > my_script.py
You can now fill in the model definitions and get to hacking!
class FooDatabase(Database):
    def _connect(self, database, **kwargs):
        return foodb.connect(database, **kwargs)
The Database provides a higher-level API and is responsible for executing queries, creating tables and indexes, and introspecting the database to get lists of tables. The above implementation is the absolute minimum needed, though some features will not work; for best results you will want to additionally add a method for extracting a list of tables and indexes for a table from the database. We'll pretend that FooDB is a lot like MySQL and has special SHOW statements:
class FooDatabase(Database):
    def _connect(self, database, **kwargs):
        return foodb.connect(database, **kwargs)

    def get_tables(self):
        res = self.execute('SHOW TABLES;')
        return [r[0] for r in res.fetchall()]

    def get_indexes_for_table(self, table):
        res = self.execute('SHOW INDEXES IN %s;' % self.quote_name(table))
        rows = sorted([(r[2], r[1] == 0) for r in res.fetchall()])
        return rows
Other things the database handles that are not covered here include:
last_insert_id() and rows_affected()
interpolation and quote_char
op_overrides for mapping operations such as LIKE/ILIKE to their database equivalent
Refer to the Database API reference or the source code for details.
Note: If your driver conforms to the DB-API 2.0 spec, there shouldn't be much work needed to get up and running.
Our new database can be used just like any of the other database subclasses:
from peewee import *
from foodb_ext import FooDatabase

db = FooDatabase('my_database', user='foo', password='secret')

class BaseModel(Model):
    class Meta:
        database = db

class Blog(BaseModel):
    title = CharField()
    contents = TextField()
    pub_date = DateTimeField()
The db object will be used to manage the connections to the Sqlite database. In this example we're using SqliteDatabase, but you could also use one of the other database engines.
2. Create a base model class which specifies our database.
class BaseModel(Model):
    class Meta:
        database = db
It is good practice to define a base model class which establishes the database connection. This makes
your code DRY as you will not have to specify the database for subsequent models.
Model configuration is kept namespaced in a special class called Meta. This convention is borrowed from Django. Meta configuration is passed on to subclasses, so our project's models will all subclass BaseModel. There are many different attributes you can configure using Model.Meta.
3. Define a model class.
class User(BaseModel):
    username = CharField(unique=True)
Model definition uses the declarative style seen in other popular ORMs like SQLAlchemy or Django.
Note that we are extending the BaseModel class so the User model will inherit the database connection.
We have explicitly defined a single username column with a unique constraint. Because we have
not specified a primary key, peewee will automatically add an auto-incrementing integer primary key
field named id.
Note: If you would like to start using peewee with an existing database, you can use pwiz, a model generator to
automatically generate model definitions.
1.6.1 Fields
The Field class is used to describe the mapping of Model attributes to database columns. Each field type has a
corresponding SQL storage class (i.e. varchar, int), and conversion between python data types and underlying storage
is handled transparently.
When creating a Model class, fields are defined as class attributes. This should look familiar to users of the django framework. Here's an example:
class User(Model):
    username = CharField()
    join_date = DateTimeField()
    about_me = TextField()
There is one special type of field, ForeignKeyField, which allows you to represent foreign-key relationships
between models in an intuitive way:
class Message(Model):
    user = ForeignKeyField(User, related_name='messages')
    body = TextField()
    send_date = DateTimeField()
Field Type        Sqlite         Postgresql        MySQL             Special Parameters
CharField         varchar        varchar           varchar           max_length
TextField         text           text              longtext
DateTimeField     datetime       timestamp         datetime          formats
IntegerField      integer        integer           integer
BooleanField      smallint       boolean           bool
FloatField        real           real              real
DoubleField       real           double precision  double precision
BigIntegerField   integer        bigint            bigint
DecimalField      decimal        numeric           numeric           max_digits, decimal_places, auto_round, rounding
PrimaryKeyField   integer        serial            integer
ForeignKeyField   integer        integer           integer           rel_model, related_name, to_field, on_delete, on_update, extra
DateField         date           date              date              formats
TimeField         time           time              time              formats
BlobField         blob           bytea             blob
UUIDField         not supported  uuid              not supported
Note: Both default and choices could be implemented at the database level as DEFAULT and CHECK CONSTRAINT respectively, but any application change would require a schema change. Because of this, default is implemented purely in python and choices are not validated but exist for metadata purposes only.
To add database (server-side) constraints, use the constraints parameter.
We will store the UUIDs in a native UUID column. Since psycopg2 treats the data as a string by default, we will add
two methods to the field to handle:
The data coming out of the database to be used in our application
The data from our python app going into the database
import uuid

class UUIDField(Field):
    db_field = 'uuid'

    def db_value(self, value):
        return str(value) # convert UUID to str

    def python_value(self, value):
        return uuid.UUID(value) # convert str to UUID
Now, we need to let the database know how to map this 'uuid' label to an actual uuid column type in the database. There are 2 ways of doing this:
1. Specify the overrides in the Database constructor:
db = PostgresqlDatabase('my_db', fields={'uuid': 'uuid'})
That is it! Some fields may support exotic operations; for example, the postgresql HStore field acts like a key/value store and has custom operators for things like contains and update. You can specify custom operations as well. For example code, check out the source code for the HStoreField, in playhouse.postgres_ext.
Note: Strictly speaking, it is not necessary to call connect() but it is good practice to be explicit. That way if
something goes wrong, the error occurs at the connect step, rather than some arbitrary time later.
Note: Peewee can determine if your tables already exist, and conditionally create them:
# Only create the tables if they do not exist.
db.create_tables([User, Tweet], safe=True)
After you have created your tables, if you choose to modify your database schema (by adding, removing or otherwise
changing the columns) you will need to either:
Drop the table and re-create it.
Run one or more ALTER TABLE queries. Peewee comes with a schema migration tool which can greatly
simplify this. Check the schema migrations docs for details.
This instructs peewee that whenever a query is executed on Person to use the contacts database.
Note: Take a look at the sample models - you will notice that we created a BaseModel that defined the database, and then extended it. This is the preferred way to define a database and create models.
Once the class is defined, you should not access ModelClass.Meta, but instead use ModelClass._meta:
>>> Person.Meta
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: type object 'Person' has no attribute 'Meta'
>>> Person._meta
<peewee.ModelOptions object at 0x7f51a2f03790>
The ModelOptions class implements several methods which may be of use for retrieving model metadata (such as lists of fields, foreign key relationships, and more).
>>> Person._meta.fields
{'id': <peewee.PrimaryKeyField object at 0x7f51a2e92750>, 'name': <peewee.CharField object at 0x7f51a...
>>> Person._meta.primary_key
<peewee.PrimaryKeyField object at 0x7f51a2e92750>
>>> Person._meta.database
<peewee.SqliteDatabase object at 0x7f519bff6dd0>
There are several options you can specify as Meta attributes. While most options are inheritable, some are table-specific and will not be inherited by subclasses.

Option        Meaning                                        Inheritable?
database      database for model                             yes
db_table      name of the table to store data                no
indexes       a list of fields to index                      yes
order_by      a list of fields to use for default ordering   yes
primary_key   a CompositeKey instance                        yes
table_alias   an alias to use for the table in queries       no
>>> class ModelOne(Model):
...     class Meta:
...         database = db
...         db_table = 'model_one_tbl'
...
>>> class ModelTwo(ModelOne):
...     pass
...
>>> ModelOne._meta.database is ModelTwo._meta.database
True
>>> ModelOne._meta.db_table == ModelTwo._meta.db_table
False
Multi-column indexes are defined as Meta attributes using a nested tuple. Each database index is a 2-tuple, the first
part of which is a tuple of the names of the fields, the second part a boolean indicating whether the index should be
unique.
class Transaction(Model):
    from_acct = CharField()
    to_acct = CharField()
    amount = DecimalField()
    date = DateTimeField()

    class Meta:
        indexes = (
            # create a unique on from/to/date
            (('from_acct', 'to_acct', 'date'), True),
            # create a non-unique on from/to
            (('from_acct', 'to_acct'), False),
        )
Auto-incrementing IDs are, as their name says, automatically generated for you when you insert a new row into the database. When you call save(), peewee determines whether to do an INSERT versus an UPDATE based on the presence of a primary key value. Since, with our uuid example, the database driver won't generate a new ID, we need to specify it manually. When we call save() for the first time, pass in force_insert=True:
# This works because .create() will specify force_insert=True.
obj1 = UUIDModel.create(id=uuid.uuid4())

# This will not work, however. Peewee will attempt to do an update:
obj2 = UUIDModel(id=uuid.uuid4())
obj2.save() # WRONG

obj2.save(force_insert=True) # CORRECT

# Once the object has been created, you can call save() normally.
obj2.save()
Note: Any foreign keys to a model with a non-integer primary key will have a ForeignKeyField use the same
underlying storage type as the primary key they are related to.
If you always want to have control over the primary key, simply do not use the PrimaryKeyField field type, but
use a normal IntegerField (or other column type):
class User(BaseModel):
    id = IntegerField(primary_key=True)
    username = CharField()

>>> u = User.create(id=999, username='somebody')
>>> u.id
999
>>> User.get(User.username == 'somebody').id
999
Here is an example of a self-referential foreign key, a Category model where each category may point to a parent category:

class Category(Model):
    name = CharField()
    parent = ForeignKeyField('self', null=True, related_name='children')

As you can see, the foreign key points upward to the parent object and the back-reference is named children.
Attention: Self-referential foreign-keys should always be null=True.
When querying against a model that contains a self-referential foreign key you may sometimes need to perform a
self-join. In those cases you can use Model.alias() to create a table reference. Here is how you might query the
category and parent model using a self-join:
Parent = Category.alias()
GrandParent = Category.alias()
query = (Category
         .select(Category, Parent)
         .join(Parent, on=(Category.parent == Parent.id))
         .join(GrandParent, on=(Parent.parent == GrandParent.id))
         .where(GrandParent.name == 'some category')
         .order_by(Category.name))
# NameError!!
class Tweet(Model):
    message = TextField()
    user = ForeignKeyField(User, related_name='tweets')
class User(Model):
    username = CharField()
    favorite_tweet_id = IntegerField(null=True)
By using Proxy we can get around the problem and still use a foreign key field:
# Create a proxy object to stand in for our as-yet-undefined Tweet model.
TweetProxy = Proxy()

class User(Model):
    username = CharField()
    # Tweet has not been defined yet so use the proxy.
    favorite_tweet = ForeignKeyField(TweetProxy, null=True)

class Tweet(Model):
    message = TextField()
    user = ForeignKeyField(User, related_name='tweets')

# Now that Tweet is defined, we can initialize the proxy object.
TweetProxy.initialize(Tweet)
After initializing the proxy the foreign key fields are now correctly set up. There is one more quirk to watch out for,
though. When you call create_table we will again encounter the same issue. For this reason peewee will not
automatically create a foreign key constraint for any deferred foreign keys.
Here is how to create the tables:
# Foreign key constraint from User -> Tweet will NOT be created because the
# Tweet table does not exist yet. favorite_tweet will just be a regular
# integer field:
User.create_table()
# Foreign key constraint from Tweet -> User will be created normally.
Tweet.create_table()
# Now that both tables exist, we can create the foreign key from User -> Tweet:
db.create_foreign_key(User, User.favorite_tweet)
1.7 Querying
This section will cover the basic CRUD operations commonly performed on a relational database:
Model.create(), for executing INSERT queries.
Model.save() and Model.update(), for executing UPDATE queries.
Model.delete_instance() and Model.delete(), for executing DELETE queries.
Model.select(), for executing SELECT queries.
This will INSERT a new row into the database. The primary key will automatically be retrieved and stored on the
model instance.
Alternatively, you can build up a model instance programmatically and then call save():
>>> user = User(username='Charlie')
>>> user.save()  # save() returns the number of rows modified.
1
>>> user.id
1
>>> huey = User()
>>> huey.username = 'Huey'
>>> huey.save()
1
>>> huey.id
2
When a model has a foreign key, you can directly assign a model instance to the foreign key field when creating a new
record.
>>> tweet = Tweet.create(user=huey, message='Hello!')
You can also use the value of the related object's primary key:
>>> tweet = Tweet.create(user=2, message='Hello again!')
If you simply wish to insert data and do not need to create a model instance, you can use Model.insert():
>>> User.insert(username='Mickey').execute()
3
After executing the insert query, the primary key of the new row is returned.
Note: There are several ways you can speed up bulk insert operations. Check out the Bulk inserts recipe section for
more information.
3. That's a lot of data (in terms of raw bytes of SQL) you are sending to your database to parse.
4. We are retrieving the last insert id, which causes an additional query to be executed in some cases.
You can get a very significant speedup by simply wrapping this in a transaction().
# This is much faster.
with db.transaction():
    for data_dict in data_source:
        Model.create(**data_dict)
The above code still suffers from points 2, 3 and 4. We can get another big boost by calling insert_many(). This
method accepts a list of dictionaries to insert.
# Fastest.
with db.transaction():
    Model.insert_many(data_source).execute()
Depending on the number of rows in your data source, you may need to break it up into chunks:
# Insert rows 1000 at a time.
with db.transaction():
    for idx in range(0, len(data_source), 1000):
        Model.insert_many(data_source[idx:idx+1000]).execute()
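The slicing loop above can be factored into a small, reusable generator. This is plain Python; the chunked name is our own here, not a peewee API, though a helper along these lines is a common pattern:

```python
def chunked(rows, chunk_size):
    """Yield successive chunk_size-length slices from a list of rows."""
    for i in range(0, len(rows), chunk_size):
        yield rows[i:i + chunk_size]

# With a batch size of 4, ten rows come out as 4 + 4 + 2.
batches = list(chunked(list(range(10)), 4))
# batches == [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

Each batch can then be handed to Model.insert_many(batch).execute() inside a transaction.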
If the data you would like to bulk load is stored in another table, you can also create INSERT queries whose source is
a SELECT query. Use the Model.insert_from() method:
query = (TweetArchive
         .insert_from(
             fields=[Tweet.user, Tweet.message],
             query=Tweet.select(Tweet.user, Tweet.message))
         .execute())
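The SQL produced by insert_from() is an ordinary INSERT ... SELECT: the inserted rows come straight from the inner query, so no data round-trips through Python. A rough sketch of the equivalent statement using the standard-library sqlite3 module (the table and column names here are illustrative):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE tweet (user_id INTEGER, message TEXT)')
conn.execute('CREATE TABLE tweet_archive (user_id INTEGER, message TEXT)')
conn.executemany('INSERT INTO tweet VALUES (?, ?)',
                 [(1, 'hello'), (2, 'world')])

# The INSERT's rows come directly from a SELECT on the source table.
conn.execute('INSERT INTO tweet_archive (user_id, message) '
             'SELECT user_id, message FROM tweet')
count = conn.execute('SELECT COUNT(*) FROM tweet_archive').fetchone()[0]
# count == 2
```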
Once a model instance has a primary key, subsequent calls to save() will result in an UPDATE rather than
another INSERT, and the primary key itself will not change:
>>> user.save()
1
>>> user.id
1
>>> user.save()
1
>>> user.id
1
>>> huey.save()
1
>>> huey.id
2
If you want to update multiple records, issue an UPDATE query. The following example will update all Tweet objects,
marking them as published if they were created before today. Model.update() accepts keyword arguments where
the keys correspond to the model's field names:
>>> today = datetime.today()
>>> query = Tweet.update(is_published=True).where(Tweet.creation_date < today)
>>> query.execute() # Returns the number of rows that were updated.
4
Do not do this! Not only is this slow, but it is also vulnerable to race conditions if multiple processes are updating the
counter at the same time.
Instead, you can update the counters atomically using update():
>>> query = Stat.update(counter=Stat.counter + 1).where(Stat.url == request.url)
>>> query.execute()
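At the SQL level, the atomic version is a single UPDATE in which the increment happens inside the database, so concurrent writers cannot interleave a stale read-modify-write. A minimal sketch with the standard-library sqlite3 module (the schema is illustrative):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE stat (url TEXT, counter INTEGER)')
conn.execute("INSERT INTO stat (url, counter) VALUES ('/page', 0)")

# Three increments, each performed entirely inside the database.
for _ in range(3):
    conn.execute("UPDATE stat SET counter = counter + 1 WHERE url = '/page'")
counter = conn.execute(
    "SELECT counter FROM stat WHERE url = '/page'").fetchone()[0]
# counter == 3
```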
You can make these update statements as complex as you like. Let's give all our employees a bonus equal to their
previous bonus plus 10% of their salary:
>>> query = Employee.update(bonus=(Employee.bonus + (Employee.salary * .1)))
>>> query.execute() # Give everyone a bonus!
We can even use a subquery to update the value of a column. Suppose we had a denormalized column on the User
model that stored the number of tweets a user had made, and we updated this value periodically. Here is how you
might write such a query:
>>> subquery = Tweet.select(fn.COUNT(Tweet.id)).where(Tweet.user == User.id)
>>> update = User.update(num_tweets=subquery)
>>> update.execute()
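The generated SQL places a correlated subquery in the SET clause, so the count is computed per user row. A standalone sketch of the equivalent statement with the standard-library sqlite3 module (illustrative schema and data):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE user (id INTEGER PRIMARY KEY, num_tweets INTEGER)')
conn.execute('CREATE TABLE tweet (user_id INTEGER)')
conn.executemany('INSERT INTO user (id, num_tweets) VALUES (?, ?)',
                 [(1, 0), (2, 0)])
conn.executemany('INSERT INTO tweet (user_id) VALUES (?)', [(1,), (1,), (2,)])

# Correlated subquery in the SET clause: counted once per user row.
conn.execute('UPDATE user SET num_tweets = '
             '(SELECT COUNT(*) FROM tweet WHERE tweet.user_id = user.id)')
counts = dict(conn.execute('SELECT id, num_tweets FROM user'))
# counts == {1: 2, 2: 1}
```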
To delete an arbitrary set of rows, you can issue a DELETE query. The following will delete all Tweet objects that
are over one year old:
For more advanced operations, you can use SelectQuery.get(). The following query retrieves the latest tweet
from the user named charlie:
>>> (Tweet
...  .select()
...  .join(User)
...  .where(User.username == 'charlie')
...  .order_by(Tweet.created_date.desc())
...  .get())
<__main__.Tweet object at 0x2623410>
Let's say we wish to implement registering a new user account using the example User model. The User model has a
unique constraint on the username field, so we will rely on the database's integrity guarantees to ensure we don't end
up with duplicate usernames:
try:
    with db.transaction():
        user = User.create(username=username)
    return 'Success'
except peewee.IntegrityError:
    return 'Failure: %s is already in use' % username
Note: Subsequent iterations of the same query will not hit the database as the results are cached. To disable this
behavior (to reduce memory usage), call SelectQuery.iterator() when iterating.
When iterating over a model that contains a foreign key, be careful with the way you access values on related models.
Accidentally resolving a foreign key or iterating over a back-reference can cause N+1 query behavior.
When you create a foreign key, such as Tweet.user, you can use the related_name to create a back-reference
(User.tweets). Back-references are exposed as SelectQuery instances:
>>> tweet = Tweet.get()
>>> tweet.user # Accessing a foreign key returns the related model.
<tw.User at 0x7f3ceb017f50>
You can iterate over the user.tweets back-reference just like any other SelectQuery:
>>> for tweet in user.tweets:
...     print tweet.message
...
hello world
this is fun
look at this picture of my food
If you want to express a complex query, use parentheses and Python's bitwise or (|) and and (&) operators:
>>> Tweet.select().join(User).where(
...     (User.username == 'Charlie') |
...     (User.username == 'Peewee Herman')
... )
Check out the table of query operations to see what types of queries are possible.
Note: A lot of fun things can go in the where clause of a query, such as:
A field expression, e.g. User.username == 'Charlie'
A function expression, e.g. fn.Lower(fn.Substr(User.username, 1, 1)) == 'a'
A comparison of one column to another, e.g. Employee.salary < (Employee.tenure * 1000) + 40000
You can also nest queries, for example tweets by users whose username starts with a:
# get users whose username starts with "a"
a_users = User.select().where(fn.Lower(fn.Substr(User.username, 1, 1)) == 'a')
# the "<<" operator signifies an "IN" query
a_user_tweets = Tweet.select().where(Tweet.user << a_users)
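Under the hood the nested query compiles to an IN clause containing the inner SELECT. A sketch of the resulting SQL using the standard-library sqlite3 module (the sample data is made up):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE user (id INTEGER PRIMARY KEY, username TEXT)')
conn.execute('CREATE TABLE tweet (user_id INTEGER, message TEXT)')
conn.executemany('INSERT INTO user (id, username) VALUES (?, ?)',
                 [(1, 'alice'), (2, 'bob')])
conn.executemany('INSERT INTO tweet (user_id, message) VALUES (?, ?)',
                 [(1, 'hi'), (2, 'yo')])

# The << operator becomes an IN clause wrapping the inner SELECT.
rows = conn.execute(
    "SELECT message FROM tweet WHERE user_id IN "
    "(SELECT id FROM user WHERE lower(substr(username, 1, 1)) = 'a')"
).fetchall()
# rows == [('hi',)]
```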
Tweet.select().join(User).where(User.username == 'charlie')
You can also order across joins. Assuming you want to order tweets by the username of the author, then by created_date:
>>> qry = Tweet.select().join(User).order_by(User.username, Tweet.created_date.desc())
SELECT t1."id", t1."user_id", t1."message", t1."is_published", t1."created_date"
FROM "tweet" AS t1
INNER JOIN "user" AS t2
ON t1."user_id" = t2."id"
ORDER BY t2."username", t1."created_date" DESC
Attention: Page numbers are 1-based, so the first page of results will be page 1.
>>> for tweet in Tweet.select().order_by(Tweet.id).paginate(2, 10):
...     print tweet.message
...
tweet 10
tweet 11
tweet 12
tweet 13
tweet 14
tweet 15
tweet 16
tweet 17
tweet 18
tweet 19
If you would like more granular control, you can always use limit() and offset().
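paginate(page, paginate_by) is just sugar for LIMIT/OFFSET arithmetic with 1-based page numbers. A sketch of the translation in plain Python (the helper name is ours, not peewee's):

```python
def paginate_bounds(page, paginate_by):
    """Return the (limit, offset) pair for a 1-based page number."""
    return paginate_by, (page - 1) * paginate_by

limit, offset = paginate_bounds(2, 10)
# limit == 10, offset == 10: zero-indexed rows 10..19,
# i.e. "tweet 10" through "tweet 19" in the example above.
```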
In some cases it may be necessary to wrap your query and apply a count to the rows of the inner query (such as
when using DISTINCT or GROUP BY). Peewee will usually do this automatically, but in some cases you may need to
manually call wrapped_count() instead.
The resulting query will return User objects with all their normal attributes plus an additional count attribute
containing the number of tweets for each user. By default it uses an INNER join if the foreign key is not nullable, which
means users without tweets won't appear in the list. To remedy this, manually specify the type of join to include users
with 0 tweets:
query = (User
         .select()
         .join(Tweet, JOIN_LEFT_OUTER)
         .annotate(Tweet))
Let's assume you have a tagging application and want to find tags that have a certain number of related objects. For
this example we'll use some different models in a many-to-many configuration:
class Photo(Model):
    image = CharField()

class Tag(Model):
    name = CharField()

class PhotoTag(Model):
    photo = ForeignKeyField(Photo)
    tag = ForeignKeyField(Tag)
Now say we want to find tags that have more than 5 photos associated with them:
query = (Tag
         .select()
         .join(PhotoTag)
         .join(Photo)
         .group_by(Tag)
         .having(fn.Count(Photo.id) > 5))
Suppose we want to grab the associated count and store it on the tag:
query = (Tag
         .select(Tag, fn.Count(Photo.id).alias('count'))
         .join(PhotoTag)
         .join(Photo)
         .group_by(Tag)
         .having(fn.Count(Photo.id) > 5))
>>> PageView.select(fn.Count(fn.Distinct(PageView.url))).scalar()
100
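scalar() simply returns the first column of the first result row. The same count can be seen with raw SQL via the standard-library sqlite3 module (the sample data is made up):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE pageview (url TEXT)')
conn.executemany('INSERT INTO pageview VALUES (?)',
                 [('/a',), ('/a',), ('/b',)])

# COUNT(DISTINCT ...) collapses duplicate urls before counting.
distinct_urls = conn.execute(
    'SELECT COUNT(DISTINCT url) FROM pageview').fetchone()[0]
# distinct_urls == 2
```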
There are times when you may want to simply pass in some arbitrary SQL. You can do this using the special SQL class.
One use-case is when referencing an alias:
# We'll query the user table and annotate it with a count of tweets for
# the given user.
query = User.select(User, fn.Count(Tweet.id).alias('ct')).join(Tweet).group_by(User)

# Now we will order by the count, which was aliased to "ct".
query = query.order_by(SQL('ct'))
There are two ways to execute hand-crafted SQL statements with peewee:
1. Database.execute_sql() for executing any type of query
2. RawQuery for executing SELECT queries and returning model instances.
Example:
db = SqliteDatabase(':memory:')

class Person(Model):
    name = CharField()

    class Meta:
        database = db

# Let's pretend we want to do an "upsert", something that SQLite can
# do, but peewee cannot.
for name in ('charlie', 'mickey', 'huey'):
    db.execute_sql('REPLACE INTO person (name) VALUES (?)', (name,))

# Now let's iterate over the people using our own query.
for person in Person.raw('SELECT * FROM person'):
    print person.name  # .raw() will return model instances.
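To see what the REPLACE statement actually does, here is the same upsert with the standard-library sqlite3 module. Note that REPLACE relies on a uniqueness constraint to detect conflicts:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE person (name TEXT UNIQUE)')

# The second 'charlie' conflicts on the UNIQUE column, so the existing
# row is replaced rather than a duplicate being inserted.
for name in ('charlie', 'mickey', 'huey', 'charlie'):
    conn.execute('REPLACE INTO person (name) VALUES (?)', (name,))
names = [row[0] for row in
         conn.execute('SELECT name FROM person ORDER BY name')]
# names == ['charlie', 'huey', 'mickey']
```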
For general information on window functions, check out the postgresql docs.
Similarly, you can return the rows from the cursor as dictionaries using SelectQuery.dicts() or
RawQuery.dicts():
stats = Stat.select(Stat.url, fn.Count(Stat.url).alias('ct')).group_by(Stat.url).dicts()

# Iterate over the stats, which are returned as dictionaries.
for stat in stats:
    print stat['url'], stat['ct']
Comparison  Meaning
==          x equals y
<           x is less than y
<=          x is less than or equal to y
>           x is greater than y
>=          x is greater than or equal to y
!=          x is not equal to y
<<          x IN y, where y is a list or query
>>          x IS y, where y is None/NULL
%           x LIKE y where y may contain wildcards
**          x ILIKE y where y may contain wildcards
~           Negation
Because I ran out of operators to override, there are some additional query operations available as methods:
Method               Meaning
.contains(substr)    Wild-card search for substring.
.startswith(prefix)  Search for values beginning with prefix.
.endswith(suffix)    Search for values ending with suffix.
.between(low, high)  Search for values between low and high.
.regexp(exp)         Regular expression match.
.bin_and(value)      Binary AND.
.bin_or(value)       Binary OR.
.in_(value)          IN lookup (identical to <<).
Operator  Meaning               Example
&         AND                   (User.is_active == True) & (User.is_admin == True)
|         OR                    (User.is_admin) | (User.is_superuser)
~         NOT (unary negation)  ~(User.username << ['foo', 'bar', 'baz'])
Here is how you might combine expressions. Comparisons can be arbitrarily complex.
Note: The actual comparisons are wrapped in parentheses; Python's operator precedence makes this necessary.
# Find any users who are active administrators.
User.select().where(
    (User.is_admin == True) &
    (User.is_active == True))

# Find any users who are either administrators or super-users.
User.select().where(
    (User.is_admin == True) |
    (User.is_superuser == True))
# Find any Tweets by users who are not admins (NOT IN).
admins = User.select().where(User.is_admin == True)
non_admin_tweets = Tweet.select().where(
    ~(Tweet.user << admins))

# Find any users who are not my friends (strangers).
friends = User.select().where(
    User.username << ['charlie', 'huey', 'mickey'])
strangers = User.select().where(~(User.id << friends))
Warning: Although you may be tempted to use Python's in, and, or and not operators in your query
expressions, these will not work. The return value of an in expression is always coerced to a boolean value. Similarly,
and, or and not all treat their arguments as boolean values and cannot be overloaded.
So just remember:
Use << instead of in
Use & instead of and
Use | instead of or
Use ~ instead of not
Don't forget to wrap your comparisons in parentheses when using logical operators.
For more examples, see the Expressions section.
Note: LIKE and ILIKE with SQLite
Because SQLite's LIKE operation is case-insensitive by default, peewee will use the SQLite GLOB operation for
case-sensitive searches. The glob operation uses asterisks for wildcards as opposed to the usual percent-sign. If you
are using SQLite and want case-sensitive partial string matching, remember to use asterisks for the wildcard.
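The difference is easy to demonstrate with the standard-library sqlite3 module directly (the sample rows are made up):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE user (username TEXT)')
conn.executemany('INSERT INTO user VALUES (?)', [('Charlie',), ('charlie',)])

# LIKE is case-insensitive for ASCII and uses % as its wildcard;
# GLOB is case-sensitive and uses * instead.
like_rows = conn.execute(
    "SELECT username FROM user WHERE username LIKE 'ch%'").fetchall()
glob_rows = conn.execute(
    "SELECT username FROM user WHERE username GLOB 'ch*'").fetchall()
# like_rows matches both spellings; glob_rows matches only 'charlie'.
```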
Now you can use these custom operators to build richer queries:
# Users with even ids.
User.select().where(mod(User.id, 2) == 0)
For more examples check out the source to the playhouse.postgresql_ext module, as it contains numerous
operators specific to postgresql's hstore.
1.8.2 Expressions
Peewee is designed to provide a simple, expressive, and pythonic way of constructing SQL queries. This section will
provide a quick overview of some common types of expressions.
There are two primary types of objects that can be composed to create expressions:
Field instances
SQL aggregations and functions using fn
We will assume a simple User model with fields for username and other things. It looks like this:
class User(Model):
    username = CharField()
    is_admin = BooleanField()
    is_active = BooleanField()
    last_login = DateTimeField()
    login_count = IntegerField()
    failed_logins = IntegerField()
Comparisons can be combined using bitwise and and or. Operator precedence is controlled by Python and comparisons
can be nested to an arbitrary depth:
# User is both an admin and has logged in today.
(User.is_admin == True) & (User.last_login >= today)

# User's username is either charlie or charles.
(User.username == 'charlie') | (User.username == 'charles')
We can do some fairly interesting things, as expressions can be compared against other expressions. Expressions also
support arithmetic operations:
# Users who entered the incorrect password more than half the time and
# have logged in at least 10 times.
(User.failed_logins > (User.login_count * .5)) & (User.login_count > 10)
Note: Unless the User model was explicitly selected when retrieving the Tweet, an additional query will be required
to load the User data. To learn how to avoid the extra query, see the N+1 query documentation.
The reverse is also true, and we can iterate over the tweets associated with a given User instance:
>>> for tweet in user.tweets:
...     print tweet.message
...
http://www.youtube.com/watch?v=xdhLQCYQ-nQ
Under the hood, the tweets attribute is just a SelectQuery with the WHERE clause pre-populated to point to the
given User instance:
>>> user.tweets
<class twx.Tweet> SELECT t1."id", t1."user_id", t1."message", ...
By default peewee will use an INNER join, but you can use LEFT OUTER or FULL joins as well:
users = (User
         .select(User, fn.Count(Tweet.id).alias('num_tweets'))
         .join(Tweet, JOIN_LEFT_OUTER)
         .group_by(User)
         .order_by(fn.Count(Tweet.id).desc()))

for user in users:
    print user.username, 'has created', user.num_tweets, 'tweet(s).'
class Relationship(BaseModel):
    from_user = ForeignKeyField(User, related_name='relationships')
    to_user = ForeignKeyField(User, related_name='related_to')

    class Meta:
        indexes = (
            # Specify a unique multi-column index on from/to-user.
            (('from_user', 'to_user'), True),
        )
Since there are two foreign keys to User, we should always specify which field we are using in a join.
For example, to determine which users I am following, I would write:
(User
 .select()
 .join(Relationship, on=Relationship.to_user)
 .where(Relationship.from_user == charlie))
On the other hand, if I wanted to determine which users are following me, I would instead join on the from_user
column and filter on the relationship's to_user:
(User
 .select()
 .join(Relationship, on=Relationship.from_user)
 .where(Relationship.to_user == charlie))
Note: By specifying an alias on the join condition, you can control the attribute peewee will assign the joined instance
to. In the previous example, we used the following join:
(User.id == ActivityLog.object_id).alias('log')
Then when iterating over the query, we were able to directly access the joined ActivityLog without incurring an
additional query:
for user in user_log:
    print user.username, user.log.description
This query will result in a join from User to Tweet, and another join from Tweet to Comment.
If you would like to join the same table twice, use the switch() method:
# Join the Artist table on both Album and Genre.
Artist.join(Album).switch(Artist).join(Genre)
To query, let's say we want to find students who are enrolled in the math class:
query = (Student
         .select()
         .join(StudentCourse)
         .join(Course)
         .where(Course.name == 'math'))

for student in query:
    print student.name
To efficiently iterate over a many-to-many relation, i.e., list all students and their respective courses, we will query the
through model StudentCourse and precompute the Student and Course:
query = (StudentCourse
         .select(StudentCourse, Student, Course)
         .join(Course)
         .switch(StudentCourse)
         .join(Student)
         .order_by(Student.name))
To print a list of students and their courses you might do the following:
last = None
for student_course in query:
    student = student_course.student
    if student != last:
        last = student
        print 'Student: %s' % student.name
    print '    - %s' % student_course.course.name
Since we selected all fields from Student and Course in the select clause of the query, these foreign key traversals
are free and we've done the whole iteration with just one query.
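The shape of that single query can be sketched with plain SQL via the standard-library sqlite3 module (the schema and data here are illustrative):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript('''
    CREATE TABLE student (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE course (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE student_course (student_id INTEGER, course_id INTEGER);
''')
conn.executemany('INSERT INTO student (id, name) VALUES (?, ?)',
                 [(1, 'Huey'), (2, 'Mickey')])
conn.executemany('INSERT INTO course (id, name) VALUES (?, ?)',
                 [(1, 'math'), (2, 'art')])
conn.executemany('INSERT INTO student_course VALUES (?, ?)',
                 [(1, 1), (1, 2), (2, 1)])

# One query over the through table covers the whole relation;
# no per-row follow-up queries are needed.
rows = conn.execute('''
    SELECT student.name, course.name
    FROM student_course
    JOIN student ON student.id = student_course.student_id
    JOIN course ON course.id = student_course.course_id
    ORDER BY student.name, course.name
''').fetchall()
# rows == [('Huey', 'art'), ('Huey', 'math'), ('Mickey', 'math')]
```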
1.9.4 Self-joins
Peewee supports several methods for constructing queries containing a self-join.
Using model aliases
To join on the same model (table) twice, it is necessary to create a model alias to represent the second instance of the
table in a query. Consider the following model:
class Category(Model):
    name = CharField()
    parent = ForeignKeyField('self', related_name='children')
What if we wanted to query all categories whose parent category is Electronics? One way would be to perform a
self-join:
Parent = Category.alias()
query = (Category
         .select()
         .join(Parent, on=(Category.parent == Parent.id))
         .where(Parent.name == 'Electronics'))
When performing a join that uses a ModelAlias, it is necessary to specify the join condition using the on keyword
argument. In this case we are joining the category with its parent category.
Using subqueries
Another less common approach involves the use of subqueries. Here is another way we might construct a query to get
all the categories whose parent category is Electronics using a subquery:
To access the id value from the subquery, we use the .c magic lookup which will generate the appropriate SQL
expression:
Category.parent == join_query.c.id
# Becomes: (t1."parent_id" = "jq"."id")
query = (Tweet
         .select(Tweet, User)        # Note that we are selecting both models.
         .join(User)                 # Use an INNER join because every tweet has an author.
         .order_by(Tweet.id.desc())  # Get the most recent tweets.
         .limit(10))

for tweet in query:
    print tweet.user.username, '-', tweet.message
Without the join, accessing tweet.user.username would trigger a query to resolve the foreign key
tweet.user and retrieve the associated user. But since we have selected and joined on User, peewee will automatically resolve the foreign-key for us.
List users and all their tweets
Let's say you want to build a page that shows several users and all of their tweets. The N+1 scenario would be:
1. Fetch some users.
2. For each user, fetch their tweets.
This situation is similar to the previous example, but there is one important difference: when we selected tweets, they
only have a single associated user, so we could directly assign the foreign key. The reverse is not true, however, as one
user may have any number of tweets (or none at all).
Peewee provides two approaches to avoiding O(n) queries in this situation. We can either:
Fetch both users and tweets in a single query. User data will be duplicated, so peewee will de-dupe it and
aggregate the tweets as it iterates through the result set.
Fetch users first, then fetch all the tweets associated with those users. Once peewee has the big list of tweets, it
will assign them out, matching them with the appropriate user.
Each solution has its place and, depending on the size and shape of the data you are querying, one may be more
performant than the other.
Let's look at the first approach, since it is more general and can work with arbitrarily complex queries. We will use
a special flag, aggregate_rows(), when creating our query. This method tells peewee to de-duplicate any rows
that, due to the structure of the JOINs, may be duplicated.
query = (User
         .select(User, Tweet)      # As in the previous example, we select both tables.
         .join(Tweet, JOIN_LEFT_OUTER)
         .order_by(User.username)  # We need to specify an ordering here.
         .aggregate_rows())        # Tell peewee to de-dupe and aggregate results.

for user in query:
    print user.username
    for tweet in user.tweets:
        print '  ', tweet.message
Ordinarily, user.tweets would be a SelectQuery and iterating over it would trigger an additional query. By
using aggregate_rows(), though, user.tweets is a Python list and no additional query occurs.
Note: We used a LEFT OUTER join to ensure that users with zero tweets would also be included in the result set.
Below is an example of how we might fetch several users and any tweets they created within the past week. Because
we are filtering the tweets and the user may not have any tweets, we need our WHERE clause to allow NULL tweet
IDs.
Using prefetch
Besides aggregate_rows(), peewee supports a second approach using sub-queries. This method requires the use
of a special API, prefetch(). Pre-fetch, as its name indicates, will eagerly load the appropriate tweets for the given
users using subqueries. This means instead of O(n) queries for n rows, we will do O(k) queries for k tables.
Here is an example of how we might fetch several users and any tweets they created within the past week.
week_ago = datetime.date.today() - datetime.timedelta(days=7)
users = User.select()
tweets = (Tweet
          .select()
          .where(
              (Tweet.is_published == True) &
              (Tweet.created_date >= week_ago)))

# This will perform two queries.
users_with_tweets = prefetch(users, tweets)

for user in users_with_tweets:
    print user.username
    for tweet in user.tweets_prefetch:
        print '  ', tweet.message
Note: Note that neither the User query, nor the Tweet query contained a JOIN clause. When using prefetch()
you do not need to specify the join.
As with aggregate_rows(), you can use prefetch() to query an arbitrary number of tables. Check the API
documentation for more examples.
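The two-query strategy behind prefetch() can be sketched with the standard-library sqlite3 module: fetch the parent rows, fetch the related rows, then match them up in Python (the schema and data here are illustrative):

```python
import sqlite3
from collections import defaultdict

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE user (id INTEGER PRIMARY KEY, username TEXT)')
conn.execute('CREATE TABLE tweet (user_id INTEGER, message TEXT)')
conn.executemany('INSERT INTO user (id, username) VALUES (?, ?)',
                 [(1, 'huey'), (2, 'mickey')])
conn.executemany('INSERT INTO tweet (user_id, message) VALUES (?, ?)',
                 [(1, 'meow'), (1, 'purr'), (2, 'woof')])

# Query 1: the users.
users = conn.execute('SELECT id, username FROM user').fetchall()

# Query 2: all tweets for those users, matched up in Python afterwards.
tweets_by_user = defaultdict(list)
for user_id, message in conn.execute('SELECT user_id, message FROM tweet'):
    tweets_by_user[user_id].append(message)

result = {username: tweets_by_user[uid] for uid, username in users}
# result == {'huey': ['meow', 'purr'], 'mickey': ['woof']}
```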
# Let's assume we've got 10 million stat objects to dump to a csv file.
stats = Stat.select()

# Our imaginary serializer class.
serializer = CSVSerializer()

# Loop over all the stats and serialize.
for stat in stats.iterator():
    serializer.serialize_object(stat)
For simple queries you can see further speed improvements by using the naive() method. This method speeds up
the construction of peewee model instances from raw cursor data. See the naive() documentation for more details
on this optimization.
for stat in stats.naive().iterator():
    serializer.serialize_object(stat)
You can also see performance improvements by using the dicts() and tuples() methods.
When iterating over a large number of rows that contain columns from multiple tables, peewee will reconstruct the
model graph for each row returned. This operation can be slow for complex graphs. To speed up model creation, you
can:
Call naive(), which will not construct a graph and simply patch all attributes from the row directly onto a
model instance.
Use dicts() or tuples().
1.11 Transactions
Peewee provides several interfaces for working with transactions.
You can use the transaction to perform get or create operations as well:
try:
    with db.transaction():
        user = User.create(username=username)
    return 'Success'
except peewee.IntegrityError:
    return 'Failure: %s is already in use' % username
1.11.2 Decorator
Similar to the context manager, you can decorate functions with the commit_on_success() decorator. This
decorator will commit if the function returns normally, otherwise the transaction will be rolled back and the exception
re-raised.
db = SqliteDatabase(':memory:')

@db.commit_on_success
def delete_user(user):
    user.delete_instance(recursive=True)
If you would like to manually control every transaction, simply turn autocommit off when instantiating your database:
db = SqliteDatabase(':memory:', autocommit=False)

User.create(username='somebody')
db.commit()
Peewee supports nested transactions through the use of savepoints (for more information, see savepoint()).
If you attempt to nest transactions with peewee using the transaction() context manager, only the outer-most
transaction will be used:
classmethod select(*selection)
Parameters selection A list of model classes, field instances, functions or expressions. If no
argument is provided, all columns for the given model will be selected.
Return type a SelectQuery for the given Model.
Examples of selecting all columns (default):
User.select().where(User.active == True).order_by(User.username)
Example of selecting all columns on Tweet and the parent model, User. When the user foreign key is
accessed on a Tweet instance no additional query will be needed (see N+1 for more details):
(Tweet
 .select(Tweet, User)
 .join(User)
 .order_by(Tweet.created_date.desc()))
classmethod update(**update)
Parameters update mapping of field-name to expression
Return type an UpdateQuery for the given Model
Note: When an update query is executed, the number of rows modified will be returned.
classmethod insert(**insert)
Insert a new row into the database. If any fields on the model have default values, these values will be used
if the fields are not explicitly set in the insert dictionary.
Parameters insert mapping of field or field-name to expression.
Return type an InsertQuery for the given Model.
Example showing creation of a new user:
q = User.insert(username='admin', active=True, registration_expired=False)
q.execute() # perform the insert.
If you have a model with a default value on one of the fields, and that field is not specified in the insert
parameter, the default will be used:
class User(Model):
    username = CharField()
    active = BooleanField(default=True)

# This INSERT query will automatically specify active=True:
User.insert(username='charlie')
Note: When an insert query is executed on a table with an auto-incrementing primary key, the primary
key of the new row will be returned.
insert_many(rows)
Insert multiple rows at once. The rows parameter must be an iterable that yields dictionaries. As with
insert(), fields that are not specified in the dictionary will use their default value, if one exists.
Note: Due to the nature of bulk inserts, each row must contain the same fields. The following will not
work:
Person.insert_many([
    {'first_name': 'Peewee', 'last_name': 'Herman'},
    {'first_name': 'Huey'},  # Missing "last_name"!
])
Because the rows parameter can be an arbitrary iterable, you can also use a generator:
def get_usernames():
    for username in ['charlie', 'huey', 'peewee']:
        yield {'username': username}

User.insert_many(get_usernames()).execute()
classmethod delete()
Return type a DeleteQuery for the given Model.
Example showing the deletion of all inactive users:
q = User.delete().where(User.active == False)
q.execute() # remove the rows
Warning: This method performs a delete on the entire table. To delete a single instance, see
Model.delete_instance().
classmethod raw(sql, *params)
Parameters
sql a string SQL expression
params any number of parameters to interpolate
Return type a RawQuery for the given Model
Example selecting rows from the User table:
Note: Generally the use of raw is reserved for those cases where you can significantly optimize a select
query. It is useful for select queries since it will return instances of the model.
classmethod create(**attributes)
Parameters attributes key/value pairs of model attributes
Return type a model instance with the provided attributes
Example showing the creation of a user (a row will be added to the database):
user = User.create(username='admin', password='test')
classmethod get(*query)
Get a single row from the database that matches the given query. Raises DoesNotExist if no rows are
returned.
This method is also exposed via the SelectQuery, though it takes no parameters:
active = User.select().where(User.active == True)
try:
    user = active.where(
        (User.username == username) &
        (User.password == password)
    ).get()
except User.DoesNotExist:
    user = None
Note: The get() method is shorthand for selecting with a limit of 1. It has the added behavior of raising
an exception when no matching row is found. If more than one row is found, the first row returned by the
database cursor will be used.
classmethod alias()
Return type ModelAlias instance
The alias() method is used to create self-joins.
Example:
Parent = Category.alias()
sq = (Category
      .select(Category, Parent)
      .join(Parent, on=(Category.parent == Parent.id))
      .where(Parent.name == 'parent category'))
Note: When using a ModelAlias in a join, you must explicitly specify the join condition.
classmethod create_table([fail_silently=False ])
Parameters fail_silently (bool) If set to True, the method will check for the existence of the
table before attempting to create.
Create the table for the given model.
Example:
database.connect()
SomeModel.create_table()
Example:
some_obj.delete_instance()  # It is gone forever.
dependencies([search_nullable=False ])
Parameters search_nullable (bool) Search models related via a nullable foreign key
Return type Generator expression yielding queries and foreign key fields
Generate a list of queries of dependent models. Yields a 2-tuple containing the query and corresponding
foreign key field. Useful for searching dependencies of a model, i.e. things that would be orphaned in the
event of a delete.
dirty_fields
Return a list of fields that were manually set.
Return type list
Note: If you just want to persist modified fields, you can call model.save(only=model.dirty_fields).
is_dirty()
Return whether any fields were manually set.
Return type bool
prepared()
This method provides a hook for performing model initialization after the row data has been populated.
1.12.2 Fields
kwargs named attributes containing values that may pertain to specific field subclasses,
such as max_length or decimal_places
db_field = <some field type>
Attribute used to map this field to a column type, e.g. string or datetime
_is_bound
Boolean flag indicating if the field is attached to a model class.
model_class
The model the field belongs to. Only applies to bound fields.
name
The name of the field. Only applies to bound fields.
db_value(value)
Parameters value python data type to prep for storage in the database
Return type converted python datatype
python_value(value)
Parameters value data coming from the backend storage
Return type python data type
coerce(value)
This method is a shorthand that is used, by default, by both db_value and python_value. You can
usually get away with just implementing this.
Parameters value arbitrary data from app or backend
Return type python data type
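The relationship between these three hooks can be sketched in plain Python. This is a standalone illustration, not actual peewee source; the class names here are made up:

```python
# A sketch (not peewee source) of how coerce() acts as the default
# implementation for both conversion hooks on a hypothetical field class.
class SketchField(object):
    def coerce(self, value):
        # Subclasses usually override only this method.
        return value

    def db_value(self, value):
        # Prep a Python value for storage; defaults to coerce().
        return self.coerce(value)

    def python_value(self, value):
        # Convert a value coming back from the database; defaults to coerce().
        return self.coerce(value)

class SketchIntegerField(SketchField):
    def coerce(self, value):
        return int(value)

f = SketchIntegerField()
print(f.db_value('7'), f.python_value(7.0))  # both routed through coerce()
```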
class IntegerField
Stores: integers
db_field = int
class BigIntegerField
Stores: big integers
db_field = bigint
class PrimaryKeyField
Stores: auto-incrementing integer fields suitable for use as primary key.
db_field = primary_key
class FloatField
Stores: floating-point numbers
db_field = float
class DoubleField
Stores: double-precision floating-point numbers
db_field = double
class DecimalField
Stores: decimal numbers, using python standard library Decimal objects
Additional attributes and values:
max_digits        10
decimal_places    5
auto_round        False
rounding          decimal.DefaultContext.rounding
db_field = decimal
class CharField
Stores: small strings (0-255 bytes)
Additional attributes and values:
max_length    255
db_field = string
class TextField
Stores: arbitrarily large strings
db_field = text
class DateTimeField
Stores: python datetime.datetime instances
Accepts a special parameter formats, which contains a list of formats the datetime can be encoded with. The
default behavior is:
%Y-%m-%d %H:%M:%S.%f # year-month-day hour-minute-second.microsecond
%Y-%m-%d %H:%M:%S # year-month-day hour-minute-second
%Y-%m-%d # year-month-day
Note: If the incoming value does not match a format, it will be returned as-is
db_field = datetime
year
An expression suitable for extracting the year, for example to retrieve all blog posts from 2013:
Blog.select().where(Blog.pub_date.year == 2013)
month
An expression suitable for extracting the month from a stored date.
day
An expression suitable for extracting the day from a stored date.
hour
An expression suitable for extracting the hour from a stored time.
minute
An expression suitable for extracting the minute from a stored time.
second
An expression suitable for extracting the second from a stored time.
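The format-matching behavior described above can be sketched in plain Python. This is a hypothetical helper mimicking the documented behavior, not the actual peewee implementation:

```python
from datetime import datetime

# Try each documented format in turn; values that match none of them are
# returned unchanged, as the note above describes.
FORMATS = [
    '%Y-%m-%d %H:%M:%S.%f',
    '%Y-%m-%d %H:%M:%S',
    '%Y-%m-%d',
]

def parse_datetime(value, formats=FORMATS):
    for fmt in formats:
        try:
            return datetime.strptime(value, fmt)
        except ValueError:
            pass
    return value  # unmatched values are returned as-is

print(parse_datetime('2013-04-01'))       # datetime(2013, 4, 1, 0, 0)
print(parse_datetime('not a timestamp'))  # returned unchanged
```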
class DateField
Stores: python datetime.date instances
Accepts a special parameter formats, which contains a list of formats the date can be encoded with. The
default behavior is:
%Y-%m-%d # year-month-day
%Y-%m-%d %H:%M:%S # year-month-day hour-minute-second
%Y-%m-%d %H:%M:%S.%f # year-month-day hour-minute-second.microsecond
Note: If the incoming value does not match a format, it will be returned as-is
db_field = date
year
An expression suitable for extracting the year, for example to retrieve all people born in 1983:
Person.select().where(Person.dob.year == 1983)
month
Same as year, except extract month.
day
Same as year, except extract day.
class TimeField
Stores: python datetime.time instances
Accepts a special parameter formats, which contains a list of formats the time can be encoded with. The
default behavior is:
%H:%M:%S.%f # hour:minute:second.microsecond
%H:%M:%S # hour:minute:second
%H:%M # hour:minute
%Y-%m-%d %H:%M:%S.%f # year-month-day hour-minute-second.microsecond
%Y-%m-%d %H:%M:%S # year-month-day hour-minute-second
Note: If the incoming value does not match a format, it will be returned as-is
db_field = time
hour
Extract the hour from a time, for example to retrieve all events occurring in the evening:
Event.select().where(Event.time.hour > 17)
minute
Same as hour, except extract minute.
second
Same as hour, except extract second.
class BooleanField
Stores: True / False
db_field = bool
class BlobField
Store arbitrary binary data.
class UUIDField
Store UUID values. Currently only supported by PostgresqlDatabase.
class ForeignKeyField(rel_model[, related_name=None[, on_delete=None[, on_update=None[, to_field=None[, ... ]]]]])
Stores: relationship to another model
Parameters
rel_model related Model class or the string self if declaring a self-referential foreign
key
related_name (string) attribute to expose on related model
on_delete (string) on delete behavior, e.g. on_delete='CASCADE'.
on_update (string) on update behavior.
to_field the field (or field name) on rel_model the foreign key references. Defaults to
the primary key field for rel_model.
class User(Model):
    name = CharField()

class Tweet(Model):
    user = ForeignKeyField(User, related_name='tweets')
    content = TextField()

# "user" attribute
>>> some_tweet.user
<User: charlie>

# "tweets" related name attribute
>>> for tweet in charlie.tweets:
...     print tweet.content
Some tweet
Another tweet
Yet another tweet
Note: Foreign keys do not have a particular db_field as they will take their field type depending on the type
of primary key on the model they are related to.
Note: If you manually specify a to_field, that field must be either a primary key or have a unique constraint.
class CompositeKey(*fields)
Specify a composite primary key for a model. Unlike the other fields, a composite key is defined in the models
Meta class after the fields have been defined. It takes as parameters the string names of the fields to use as the
primary key:
class BlogTagThrough(Model):
    blog = ForeignKeyField(Blog, related_name='tags')
    tag = ForeignKeyField(Tag, related_name='blogs')

    class Meta:
        primary_key = CompositeKey('blog', 'tag')
Example selecting tweets made by users who are either editors or administrators:
sq = SelectQuery(Tweet).join(User).where(
    (User.is_editor == True) |
    (User.is_admin == True))
Note: where() calls are chainable. Multiple calls will be AND-ed together.
join(model, join_type=None, on=None)
Parameters
model the model to join on. There must be a ForeignKeyField between the current
query context and the model passed in.
join_type allows the type of JOIN used to be specified explicitly, one of JOIN_INNER,
JOIN_LEFT_OUTER, JOIN_FULL
on if multiple foreign keys exist between two models, this parameter is the ForeignKeyField to join on.
Return type a Query instance
Generate a JOIN clause from the current query context to the model passed in, and establish the
model as the new query context.
Example selecting tweets and joining on user in order to restrict to only those tweets made by admin
users:
sq = SelectQuery(Tweet).join(User).where(User.is_admin == True)
Example selecting users and joining on a particular foreign key field. See the example app for a real-life
usage:
sq = SelectQuery(User).join(Relationship, on=Relationship.to_user)
switch(model)
Parameters model model to switch the query context to.
Return type a clone of the query with a new query context
Switches the query context to the given model. Raises an exception if the model has not been selected
or joined on previously. Useful for performing multiple joins from a single table.
The following example selects from blog and joins on both entry and user:
sq = SelectQuery(Blog).join(Entry).switch(Blog).join(User)
alias(alias=None)
Example selecting users and additionally the number of tweets made by the user. The User instances returned
will have an additional attribute, count, that corresponds to the number of tweets made:
sq = (SelectQuery(User, User, fn.Count(Tweet.id).alias('count'))
      .join(Tweet)
      .group_by(User))
select(*selection)
Parameters selection a list of expressions, which can be model classes or fields. if left blank,
will default to all the fields of the given model.
Return type SelectQuery
Note: Usually the selection will be specified when the instance is created. This method simply exists for
the case when you want to modify the SELECT clause independent of instantiating a query.
query = User.select()
query = query.select(User.username)
from_(*args)
Parameters args one or more expressions, for example Model or SelectQuery instance(s). if left blank, will default to the table of the given model.
Return type SelectQuery
# rather than a join, select from both tables and join with where.
query = User.select().from_(User, Blog).where(Blog.user == User.id)
group_by(*clauses)
Parameters clauses a list of expressions, which can be model classes or individual field instances
Return type SelectQuery
Group by one or more columns. If a model class is provided, all the fields on that model class will be used.
Example selecting users, joining on tweets, and grouping by the user so a count of tweets can be calculated
for each user:
sq = (User
      .select(User, fn.Count(Tweet.id).alias('count'))
      .join(Tweet)
      .group_by(User))
having(*expressions)
Parameters expressions a list of one or more expressions
Return type SelectQuery
Here is the above example selecting users and tweet counts, but restricting the results to those users who
have created 100 or more tweets:
sq = (User
      .select(User, fn.Count(Tweet.id).alias('count'))
      .join(Tweet)
      .group_by(User)
      .having(fn.Count(Tweet.id) > 100))
order_by(*clauses)
Parameters clauses a list of fields, calls to field.[asc|desc]() or one or more expressions
Return type SelectQuery
Example of ordering users by username:
User.select().order_by(User.username)
Example of selecting tweets and ordering them first by user, then newest first:
Tweet.select().join(User).order_by(
    User.username, Tweet.created_date.desc())
A more complex example ordering users by the number of tweets made (greatest to least), then ordered by
username in the event of a tie:
tweet_ct = fn.Count(Tweet.id)
sq = (User
      .select(User, tweet_ct.alias('count'))
      .join(Tweet)
      .group_by(User)
      .order_by(tweet_ct.desc(), User.username))
window(*windows)
Parameters windows (Window) One or more Window instances.
Add one or more window definitions to this query.
window = Window(partition_by=[fn.date_trunc('day', PageView.timestamp)])
query = (PageView
         .select(
             PageView.url,
             PageView.timestamp,
             fn.Count(PageView.id).over(window=window))
         .window(window)
         .order_by(PageView.timestamp))
limit(num)
Parameters num (int) limit results to num rows
offset(num)
Parameters num (int) offset results by num rows
paginate(page_num, paginate_by=20)
Parameters
page_num a 1-based page number to use for paginating results
paginate_by number of results to return per-page
Return type SelectQuery
Shorthand for applying a LIMIT and OFFSET to the query.
Page indices are 1-based, so page 1 is the first page.
User.select().order_by(User.username).paginate(3, 20)
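The arithmetic behind paginate() can be sketched as follows. This is an illustration of the LIMIT/OFFSET mapping only; paginate() itself applies the clauses to the query rather than returning SQL text:

```python
# Page numbers are 1-based, so page 1 maps to OFFSET 0.
def paginate_clause(page_num, paginate_by=20):
    offset = (page_num - 1) * paginate_by
    return 'LIMIT %d OFFSET %d' % (paginate_by, offset)

print(paginate_clause(3, 20))  # LIMIT 20 OFFSET 40
print(paginate_clause(1))      # LIMIT 20 OFFSET 0
```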
distinct([is_distinct=True ])
Parameters is_distinct See notes.
Return type SelectQuery
Indicates that this query should only return distinct rows. Results in a SELECT DISTINCT query.
Note: The value for is_distinct should either be a boolean, in which case the query will (or won't)
be DISTINCT, or a list of one or more expressions, in which case a DISTINCT ON query is generated, e.g.
.distinct([Model.col1, Model.col2]).
for_update([for_update=True[, nowait=False ]])
Return type SelectQuery
Indicate that this query should lock rows for update. If nowait is True then the database will raise an
OperationalError if it cannot obtain the lock.
naive()
Return type SelectQuery
Flag this query indicating it should only attempt to reconstruct a single model instance for every row
returned by the cursor. If multiple tables were queried, the columns returned are patched directly onto the
single model instance.
Generally this method is useful for speeding up the time needed to construct model instances given a
database cursor.
Note: this can provide a significant speed improvement when doing simple iteration over a large result
set.
iterator()
Return type iterable
By default peewee will cache rows returned by the cursor. This is to prevent things like multiple iterations,
slicing and indexing from triggering extra queries. When you are iterating over a large number of rows,
however, this cache can take up a lot of memory. Using iterator() will save memory by not storing
all the returned model instances.
# iterate over large number of rows.
for obj in Stats.select().iterator():
    # do something.
    pass
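The caching that iterator() opts out of can be sketched in plain Python. This is an illustrative stand-in for the default result wrapper, not peewee's actual class, simplified to a single sequential consumer:

```python
# A sketch of why the default result wrapper caches rows: a second pass over
# the results must be served from memory, not by re-reading the cursor.
class CachingResults(object):
    def __init__(self, cursor):
        self._cursor = iter(cursor)
        self._cache = []
        self._exhausted = False

    def __iter__(self):
        # Serve cached rows first, then pull the remainder from the cursor.
        for row in self._cache:
            yield row
        if not self._exhausted:
            for row in self._cursor:
                self._cache.append(row)
                yield row
            self._exhausted = True

rows = CachingResults(iter([1, 2, 3]))
print(list(rows), list(rows))  # second pass served entirely from the cache
```

Skipping the cache (as iterator() does) trades repeat iteration for a flat memory footprint.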
tuples()
Return type SelectQuery
Flag this query indicating it should simply return raw tuples from the cursor. This method is useful when
you either do not want or do not need full model instances.
dicts()
Return type SelectQuery
Flag this query indicating it should simply return dictionaries from the cursor. This method is useful when
you either do not want or do not need full model instances.
aggregate_rows()
Return type SelectQuery
This method provides one way to avoid the N+1 query problem.
Consider a webpage where you wish to display a list of users and all of their associated tweets. You could
approach this problem by listing the users, then for each user executing a separate query to retrieve their
tweets. This is the N+1 behavior, because the number of queries varies depending on the number of users.
Conventional wisdom is that it is preferable to execute fewer queries. Peewee provides several ways to
avoid this problem.
You can use the prefetch() helper, which uses IN clauses to retrieve the tweets for the listed users.
Another method is to select both the user and the tweet data in a single query, then de-dupe the users,
aggregating the tweets in the process.
The raw column data might appear like this:
# user.id, user.username, tweet.id, tweet.user_id, tweet.message
[1,        'charlie',     1,        1,             'hello'],
[1,        'charlie',     2,        1,             'goodbye'],
[2,        'no-tweets',   NULL,     NULL,          NULL],
[3,        'huey',        3,        3,             'meow'],
[3,        'huey',        4,        3,             'purr'],
[3,        'huey',        5,        3,             'hiss'],
We can infer from the JOIN clause that the user data will be duplicated, and therefore by de-duping the
users, we can collect their tweets in one go and iterate over the users and tweets transparently.
query = (User
         .select(User, Tweet)
         .join(Tweet, JOIN_LEFT_OUTER)
         .order_by(User.username, Tweet.id)
         .aggregate_rows())  # .aggregate_rows() tells peewee to de-dupe the rows.

for user in query:
    print user.username
    for tweet in user.tweets:
        print '  ', tweet.message

# Producing the following output:
charlie
   hello
   goodbye
huey
   meow
   purr
   hiss
no-tweets
Warning: Be sure that you specify an ORDER BY clause that ensures duplicated data will appear in
consecutive rows.
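The de-duplication step can be sketched with itertools.groupby, which also shows why the ORDER BY requirement above matters: groupby only merges rows that are adjacent. The row data below mirrors the earlier example:

```python
from itertools import groupby

# Rows as (username, message) pairs, already sorted so that rows for the
# same user are adjacent -- the precondition the warning above describes.
rows = [
    ('charlie', 'hello'), ('charlie', 'goodbye'),
    ('huey', 'meow'), ('huey', 'purr'), ('huey', 'hiss'),
    ('no-tweets', None),
]

# Collapse consecutive rows sharing a username, collecting their tweets.
users = {}
for username, group in groupby(rows, key=lambda r: r[0]):
    users[username] = [msg for _, msg in group if msg is not None]

print(users['huey'])       # ['meow', 'purr', 'hiss']
print(users['no-tweets'])  # [] -- the LEFT OUTER row with no tweet
```

If the rows were not ordered, groupby would emit 'charlie' (or any user) multiple times, which is exactly the failure mode the warning describes.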
Note: You can specify arbitrarily complex joins, though for more complex queries it may be more efficient
to use prefetch(). In short, try both and see what works best for your data-set.
Note: For more information, see the Avoiding N+1 queries document.
annotate(related_model, aggregation=None)
Parameters
related_model related Model on which to perform aggregation, must be linked by
ForeignKeyField.
aggregation the type of aggregation to use, e.g. fn.Count(Tweet.id).alias('count')
Note: If the ForeignKeyField is nullable, then a LEFT OUTER join may need to be used:
User.select().join(Tweet, JOIN_LEFT_OUTER).annotate(Tweet)
aggregate(aggregation)
Parameters aggregation a function specifying what aggregation to perform, for example
fn.Max(Tweet.created_date).
Method to look at an aggregate of rows using a given function and return a scalar value, such as the count
of all rows or the average value of a particular column.
count([clear_limit=False ])
Parameters clear_limit (bool) Remove any limit or offset clauses from the query before
counting.
Return type an integer representing the number of rows in the current query
Note: If the query has a GROUP BY, DISTINCT, LIMIT, or OFFSET clause, then the
wrapped_count() method will be used instead.
>>> sq = SelectQuery(Tweet)
>>> sq.count()
45 # number of tweets
>>> deleted_tweets = sq.where(Tweet.status == DELETED)
>>> deleted_tweets.count()
3 # number of tweets that are marked as deleted
wrapped_count([clear_limit=True ])
Parameters clear_limit (bool) Remove any limit or offset clauses from the query before
counting.
Return type an integer representing the number of rows in the current query
Wrap the count query in a subquery. Additional overhead but will give correct counts when performing
DISTINCT queries or those with GROUP BY clauses.
Note: count() will automatically default to wrapped_count() in the event the query is distinct or
has a grouping.
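The reason the wrapping is needed can be shown with a plain SQL sketch using the stdlib sqlite3 module (the table and data are made up for illustration):

```python
import sqlite3

# Why a grouped query needs its COUNT wrapped in a subquery: counting inside
# the grouped query counts rows per group, not the number of groups.
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE tweet (user_id INTEGER)')
conn.executemany('INSERT INTO tweet VALUES (?)', [(1,), (1,), (2,)])

# Naive count on a grouped query returns one count per group...
per_group = conn.execute(
    'SELECT COUNT(*) FROM tweet GROUP BY user_id').fetchall()

# ...while the wrapped form counts the groups themselves.
wrapped = conn.execute(
    'SELECT COUNT(*) FROM (SELECT user_id FROM tweet GROUP BY user_id)'
).fetchone()[0]

print(wrapped)  # 2 -- two distinct users, which is the answer count() wants
```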
exists()
Return type boolean whether the current query will return any rows. Uses an optimized lookup,
so use this rather than get().
sq = User.select().where(User.active == True)
if sq.where(User.username == username, User.password == password).exists():
    authenticated = True
get()
Return type Model instance or raises DoesNotExist exception
Get a single row from the database that matches the given query. Raises <model-class>.DoesNotExist if no rows are returned:
This method is also exposed via the Model api, in which case it accepts arguments that are translated to
the where clause:
user = User.get(User.active == True, User.username == username)
first()
Return type Model instance or None if no results
Fetch the first row from a query. The result will be cached in case the entire query result-set should be
iterated later.
execute()
Return type QueryResultWrapper
Executes the query and returns a QueryResultWrapper for iterating over the result set. The results
are managed internally by the query and whenever a clause is added that would possibly alter the result
set, the query is marked for re-execution.
__iter__()
Executes the query and returns populated model instances:
for user in User.select().where(User.active == True):
    print user.username
__getitem__(value)
Parameters value Either an index or a slice object.
Return the model instance(s) at the requested indices. To get the first model, for instance:
query = User.select().order_by(User.username)
first_user = query[0]
first_five = query[:5]
__or__(rhs)
Parameters rhs Either a SelectQuery or a CompoundSelect
Return type CompoundSelect
Create a UNION query with the right-hand object. The result will contain all values from both the left and
right queries.
customers = Customer.select(Customer.city).where(Customer.state == 'KS')
stores = Store.select(Store.city).where(Store.state == 'KS')

# Get all cities in Kansas where we have either a customer or a store.
all_cities = (customers | stores).order_by(SQL('city'))
__and__(rhs)
Parameters rhs Either a SelectQuery or a CompoundSelect
Return type CompoundSelect
Create an INTERSECT query. The result will contain values that are in both the left and right queries.
__sub__(rhs)
Parameters rhs Either a SelectQuery or a CompoundSelect
Return type CompoundSelect
Create an EXCEPT query. The result will contain values that are in the left-hand query but not in the
right-hand query.
customers = Customer.select(Customer.city).where(Customer.state == 'KS')
stores = Store.select(Store.city).where(Store.state == 'KS')

# Get all cities in Kansas where we have customers but no stores.
cities = (customers - stores).order_by(SQL('city'))
__xor__(rhs)
Parameters rhs Either a SelectQuery or a CompoundSelect
Return type CompoundSelect
Create a symmetric difference query. The result will contain values that are in either the left-hand query
or the right-hand query, but not both.
customers = Customer.select(Customer.city).where(Customer.state == 'KS')
stores = Store.select(Store.city).where(Store.state == 'KS')

# Get all cities in Kansas where we have either customers with no
# store, or a store with no customers.
cities = (customers ^ stores).order_by(SQL('city'))
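The SQL behind the |, & and - operators can be tried directly with the stdlib sqlite3 module. This is a standalone sketch; the tables and city names are made up for illustration:

```python
import sqlite3

# Demonstrate the three compound operators the | & - overloads map to.
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE customer (city TEXT)')
conn.execute('CREATE TABLE store (city TEXT)')
conn.executemany('INSERT INTO customer VALUES (?)',
                 [('Topeka',), ('Wichita',)])
conn.executemany('INSERT INTO store VALUES (?)',
                 [('Wichita',), ('Lawrence',)])

def cities(op):
    sql = ('SELECT city FROM customer %s SELECT city FROM store '
           'ORDER BY city' % op)
    return [row[0] for row in conn.execute(sql)]

print(cities('UNION'))      # ['Lawrence', 'Topeka', 'Wichita']
print(cities('INTERSECT'))  # ['Wichita']
print(cities('EXCEPT'))     # ['Topeka']
```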
execute()
Return type Number of rows updated
Performs the query
execute()
Return type primary key of the new row
Performs the query
upsert([upsert=True ])
Perform an INSERT OR REPLACE query. Currently only Sqlite supports this method.
class DeleteQuery(model_class)
Creates a DELETE query for the given model.
Note: DeleteQuery will not traverse foreign keys or ensure that constraints are obeyed, so use it with care.
Example deleting users whose account is inactive:
dq = DeleteQuery(User).where(User.active == False)
execute()
tuples()
Return type RawQuery
Flag this query indicating it should simply return raw tuples from the cursor. This method is useful when
you either do not want or do not need full model instances.
dicts()
Return type RawQuery
Flag this query indicating it should simply return raw dicts from the cursor. This method is useful when
you either do not want or do not need full model instances.
execute()
Return type a QueryResultWrapper for iterating over the result set. The results are instances of the given model.
Performs the query
class CompoundSelect(model_class, lhs, operator, rhs)
Compound select query.
Parameters
model_class The type of model to return, by default the model class of the lhs query.
lhs Left-hand query, either a SelectQuery or a CompoundQuery.
operator A Node instance used to join the two queries, for example SQL('UNION').
rhs Right query, either a SelectQuery or a CompoundQuery.
prefetch(sq, *subqueries)
Parameters
sq SelectQuery instance
subqueries one or more SelectQuery instances to prefetch for sq. You can also pass
models, but they will be converted into SelectQueries.
Return type SelectQuery with related instances pre-populated
Pre-fetch the appropriate instances from the subqueries and apply them to their corresponding parent row in the
outer query. This function will eagerly load the related instances specified in the subqueries. This is a technique
used to save doing O(n) queries for n rows, and rather is O(k) queries for k subqueries.
For example, consider you have a list of users and want to display all their tweets:
# let's impose some small restrictions on our queries
users = User.select().where(User.active == True)
tweets = Tweet.select().where(Tweet.published == True)
# this will perform 2 queries
users_pf = prefetch(users, tweets)
# now we can:
for user in users_pf:
    print user.username
    for tweet in user.tweets_prefetch:
        print '- ', tweet.content
You can prefetch an arbitrary number of items. For instance, suppose we have a photo site, User -> Photo ->
(Comments, Tags). That is, users can post photos, and these photos can have tags and comments on them. If we
wanted to fetch a list of users, all their photos, and all the comments and tags on the photos:
users = User.select()
published_photos = Photo.select().where(Photo.published == True)
published_comments = Comment.select().where(
    (Comment.is_spam == False) &
    (Comment.num_flags < 3))
# note that we are just passing the Tag model -- it will be converted
# to a query automatically
users_pf = prefetch(users, published_photos, published_comments, Tag)
# now we can iterate users, photos, and comments/tags
for user in users_pf:
    for photo in user.photo_set_prefetch:
        for comment in photo.comment_set_prefetch:
            # ...
        for tag in photo.tag_set_prefetch:
            # ...
Note: Subqueries must be related by foreign key and can be arbitrarily deep
Note: For more information, see the Avoiding N+1 queries document.
Warning: prefetch() can use up lots of RAM when the result set is large, and it will not warn you if
you are doing something dangerous, so it is up to you to know when to use it. Additionally, because of the
semantics of subquerying, there may be some cases where prefetch does not act as you expect (for instance,
when applying a LIMIT to subqueries, though there may be others). Please report anything you think is a bug
to github.
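The O(k)-queries technique prefetch() relies on can be sketched with the stdlib sqlite3 module: one query for the parents, one IN-clause query for all their children, then stitching the children onto their parents in Python. The schema below is made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE user (id INTEGER PRIMARY KEY, username TEXT)')
conn.execute('CREATE TABLE tweet (user_id INTEGER, content TEXT)')
conn.executemany('INSERT INTO user VALUES (?, ?)',
                 [(1, 'charlie'), (2, 'huey')])
conn.executemany('INSERT INTO tweet VALUES (?, ?)',
                 [(1, 'hello'), (2, 'meow'), (2, 'purr')])

# Query 1: the users we care about.
users = conn.execute('SELECT id, username FROM user').fetchall()
user_ids = [uid for uid, _ in users]

# Query 2: every tweet for those users, via a single IN clause.
placeholders = ','.join('?' * len(user_ids))
tweets = conn.execute(
    'SELECT user_id, content FROM tweet WHERE user_id IN (%s)' % placeholders,
    user_ids).fetchall()

# Stitch tweets onto users -- two queries total, regardless of user count.
by_user = {}
for uid, content in tweets:
    by_user.setdefault(uid, []).append(content)
for uid, username in users:
    print(username, by_user.get(uid, []))
```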
Parameters
database the name of the database (or filename if using sqlite)
threadlocals (bool) whether to store connections in a threadlocal
autocommit (bool) automatically commit every query executed by calling execute()
fields (dict) a mapping of db_field to database column type, e.g. 'string' => 'varchar'
ops (dict) a mapping of operations understood by the querycompiler to expressions
autorollback (bool) automatically rollback when an exception occurs while executing a
query.
connect_kwargs any arbitrary parameters to pass to the database driver when connecting
Note: if your database name is not known when the class is declared, you can pass None in as the database
name, which will mark the database as deferred; any attempt to connect while in this state will raise an
exception. To initialize your database, call the Database.init() method with the database name.
A high-level api for working with the supported database engines. Database provides a wrapper around some
of the functions performed by the Adapter, in addition providing support for:
execution of SQL queries
creating and dropping tables and indexes
commit_select = False
Whether to issue a commit after executing a select query. With some engines this can prevent implicit transactions from piling up.
compiler_class = QueryCompiler
A class suitable for compiling queries
field_overrides = {}
A mapping of field types to database column types, e.g. {'primary_key': 'SERIAL'}
foreign_keys = True
Whether the given backend enforces foreign key constraints.
for_update = False
Whether the given backend supports selecting rows for update
interpolation = '?'
The string used by the driver to interpolate query parameters
op_overrides = {}
A mapping of operation codes to string operations, e.g. {OP_LIKE: 'LIKE BINARY'}
quote_char = '"'
The string used by the driver to quote names
reserved_tables = []
Table names that are reserved by the backend; if encountered in the application, a warning will be issued.
sequences = False
Whether the given backend supports sequences
subquery_delete_same_table = True
Whether the given backend supports deleting rows using a subquery that selects from the same table
init(database[, **connect_kwargs ])
If the database was instantiated with database=None, the database is said to be in a deferred state
(see notes); if this is the case, you can initialize it at any time by calling the init() method.
Parameters
database the name of the database (or filename if using sqlite)
connect_kwargs any arbitrary parameters to pass to the database driver when connecting
connect()
Establishes a connection to the database
Note: If you initialized with threadlocals=True, then this will store the connection inside a threadlocal, ensuring that connections are not shared across threads.
close()
Closes the connection to the database (if one is open)
Note: If you initialized with threadlocals=True, only a connection local to the calling thread will
be closed.
get_conn()
Return type a connection to the database, creating one if none exists
get_cursor()
Return type a cursor for executing queries
last_insert_id(cursor, model)
Parameters
cursor the database cursor used to perform the insert query
model the model class that was just created
Return type the primary key of the most recently inserted instance
rows_affected(cursor)
Return type number of rows affected by the last query
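The driver-level values these two methods expose can be seen directly with the stdlib sqlite3 module. This is an illustration of the underlying cursor attributes, not peewee code:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE user (id INTEGER PRIMARY KEY, active INTEGER)')

# The last inserted primary key -- what last_insert_id() surfaces on sqlite.
cursor = conn.execute('INSERT INTO user (active) VALUES (1)')
new_id = cursor.lastrowid
print(new_id)  # 1

# The number of rows touched by the last query -- what rows_affected() surfaces.
conn.execute('INSERT INTO user (active) VALUES (1)')
cursor = conn.execute('UPDATE user SET active = 0')
affected = cursor.rowcount
print(affected)  # 2
```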
compiler()
Return type an instance of QueryCompiler using the field and op overrides specified.
execute_sql(sql[, params=None[, require_commit=True ]])
Parameters
sql a string sql query
params a list or tuple of parameters to interpolate
Note: You can configure whether queries will automatically commit by using the set_autocommit()
and Database.get_autocommit() methods.
begin()
Initiate a new transaction. By default not implemented as this is not part of the DB-API 2.0, but provided
for API compatibility.
commit()
Call commit() on the active connection, committing the current transaction
rollback()
Call rollback() on the active connection, rolling back the current transaction
set_autocommit(autocommit)
Parameters autocommit a boolean value indicating whether to turn on/off autocommit for
the current connection
get_autocommit()
Return type a boolean value indicating whether autocommit is on for the current connection
get_tables()
Return type a list of table names in the database
Warning: Not implemented; implementations exist in subclasses.
get_indexes_for_table(table)
Parameters table the name of table to introspect
Return type a list of (index_name, is_unique) tuples
Warning: Not implemented; implementations exist in subclasses.
sequence_exists(sequence_name)
Return type boolean
Warning: Not implemented; implementations exist in subclasses.
create_table(model_class)
Parameters model_class Model class to create table for
create_index(model_class, fields[, unique=False ])
Parameters
model_class Model table on which to create index
fields field(s) to create index on (either field instances or field names)
unique whether the index should enforce uniqueness
create_foreign_key(model_class, field[, constraint=None ])
Parameters
model_class Model table on which to create foreign key constraint
field Field object
constraint (str) Name to give foreign key constraint.
Manually create a foreign key constraint using an ALTER TABLE query. This is primarily used when
creating a circular foreign key dependency, for example:
PostProxy = Proxy()

class User(Model):
    username = CharField()
    favorite_post = ForeignKeyField(PostProxy, null=True)

class Post(Model):
    title = CharField()
    author = ForeignKeyField(User, related_name='posts')

PostProxy.initialize(Post)

# Create tables. The foreign key from Post -> User will be created
# automatically, but the foreign key from User -> Post must be added
# manually.
User.create_table()
Post.create_table()

# Manually add the foreign key constraint on User, since we could
# not add it until we had created the Post table.
db.create_foreign_key(User, User.favorite_post)
create_sequence(sequence_name)
Parameters sequence_name name of sequence to create
Note: only works with database engines that support sequences
drop_table(model_class[, fail_silently=False[, cascade=False ]])
Parameters
model_class Model table to drop
fail_silently (bool) if True, query will add an IF EXISTS clause
cascade (bool) drop table with CASCADE option.
drop_sequence(sequence_name)
Parameters sequence_name name of sequence to drop
Note: only works with database engines that support sequences
create_tables(models[, safe=False ])
Parameters
models (list) A list of models.
safe (bool) Check first whether the table exists before attempting to create it.
This method should be used for creating tables as it will resolve the model dependency graph and ensure
the tables are created in the correct order.
Usage:
db.create_tables([User, Tweet, Something], safe=True)
transaction()
Return a context manager that executes statements in a transaction. If an error is raised inside the context
manager, the transaction will be rolled back, otherwise statements are committed when exiting.
# delete a blog instance and all its associated entries, but
# do so within a transaction
with database.transaction():
    blog.delete_instance(recursive=True)
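The commit-on-success, rollback-on-error behavior can be sketched with a minimal context manager over a stdlib sqlite3 connection. This is an illustration of the pattern, not peewee's implementation:

```python
import sqlite3
from contextlib import contextmanager

# A minimal transaction() sketch: commit on a clean exit, roll back and
# re-raise if the block raises.
@contextmanager
def transaction(conn):
    try:
        yield conn
        conn.commit()
    except Exception:
        conn.rollback()
        raise

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE blog (id INTEGER PRIMARY KEY)')
conn.commit()

try:
    with transaction(conn):
        conn.execute('INSERT INTO blog DEFAULT VALUES')
        raise ValueError('something went wrong')
except ValueError:
    pass

# The failed block was rolled back, so the table is still empty.
remaining = conn.execute('SELECT COUNT(*) FROM blog').fetchone()[0]
print(remaining)  # 0
```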
commit_on_success(func)
Decorator that wraps the given function in a single transaction, which, upon success will be committed. If
an error is raised inside the function, the transaction will be rolled back and the error will be re-raised.
Nested functions can be wrapped with commit_on_success - the database will keep a stack and only
commit when it reaches the end of the outermost function.
Parameters func function to decorate
@database.commit_on_success
def transfer_money(from_acct, to_acct, amt):
    from_acct.charge(amt)
    to_acct.pay(amt)
    return amt
savepoint([sid=None ])
Return a context manager that executes statements in a savepoint. If an error is raised inside the context
manager, the savepoint will be rolled back, otherwise statements are committed when exiting.
Savepoints can be thought of as nested transactions.
Parameters sid (str) A string identifier for the savepoint.
atomic()
Return a context manager that executes statements in either a transaction or a savepoint. The outer-most
call to atomic will use a transaction, and any subsequent nested calls will use savepoints.
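Under the hood a savepoint is plain SQL, so the transaction-then-savepoint nesting that atomic() provides can be sketched directly against SQLite (the schema here is illustrative):

```python
import sqlite3

# Manage transactions manually (autocommit mode).
db = sqlite3.connect(":memory:", isolation_level=None)
db.execute("CREATE TABLE t (v INTEGER)")

db.execute("BEGIN")                  # outermost call: a real transaction
db.execute("INSERT INTO t VALUES (1)")
db.execute("SAVEPOINT sp1")          # nested call: a savepoint
db.execute("INSERT INTO t VALUES (2)")
db.execute("ROLLBACK TO sp1")        # only the nested work is undone
db.execute("COMMIT")                 # the outer work is committed

rows = [r[0] for r in db.execute("SELECT v FROM t")]
print(rows)  # [1]
```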
classmethod register_fields(fields)
Register a mapping of field overrides for the database class. Used to register custom fields or override the
defaults.
Parameters fields (dict) A mapping of db_field to column type
classmethod register_ops(ops)
Register a mapping of operations understood by the QueryCompiler to their SQL equivalent, e.g.
{OP_EQ: '='}. Used to extend the types of field comparisons.
Parameters ops (dict) A mapping of operation to its SQL equivalent
extract_date(date_part, date_field)
Return an expression suitable for extracting a date part from a date field. For instance, extract the year
from a DateTimeField.
Parameters
date_part (str) The date part attribute to retrieve. Valid options are: year, month,
day, hour, minute and second.
date_field (Field) field instance storing a datetime, date or time.
Return type an expression object.
sql_error_handler(exception, sql, params, require_commit)
This hook is called when an error is raised executing a query, allowing your application to inject custom
error handling behavior. The default implementation simply reraises the exception.
class SqliteDatabaseCustom(SqliteDatabase):
    def sql_error_handler(self, exception, sql, params, require_commit):
        # Perform some custom behavior, for example close the
        # connection to the database.
        self.close()

        # Re-raise the exception.
        raise exception
class SqliteDatabase(Database)
Database subclass that works with the sqlite3 driver. In addition to the default database parameters,
SqliteDatabase also accepts a journal_mode parameter which will configure the journaling mode.
To use write-ahead log and connection-per-thread:
db = SqliteDatabase('my_app.db', threadlocals=True, journal_mode='wal')
class MySQLDatabase(Database)
Database subclass that works with either MySQLdb or pymysql.
class PostgresqlDatabase(Database)
Database subclass that works with the psycopg2 driver.
1.12.5 Misc
class fn
A helper class that will convert arbitrary function calls to SQL function calls.
To express functions in peewee, use the fn object. Anything to the right of the dot operator is treated as a function name; you can pass that function arbitrary parameters, which may themselves be other valid expressions.
For example:
Peewee expression                          Equivalent SQL
fn.Count(Tweet.id).alias('count')          Count(t1."id") AS count
fn.Lower(fn.Substr(User.username, 1, 1))   Lower(Substr(t1."username", 1, 1))
fn.Rand().alias('random')                  Rand() AS random
fn.Stddev(Employee.salary).alias('sdv')    Stddev(t1."salary") AS sdv
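The "anything to the right of the dot becomes a function call" trick can be sketched in a few lines with __getattr__. This is a hypothetical illustration, not peewee's actual implementation (it renders plain strings rather than query nodes):

```python
class FnSketch(object):
    # Any attribute access returns a callable that renders a SQL
    # function call as a string.
    def __getattr__(self, name):
        def call(*args):
            return "%s(%s)" % (name, ", ".join(str(a) for a in args))
        return call

fn = FnSketch()
print(fn.Lower(fn.Substr('"username"', 1, 1)))  # Lower(Substr("username", 1, 1))
```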
class Proxy
Proxy class useful for situations when you wish to defer the initialization of an object. For instance, you want
to define your models but you do not know what database engine you will be using until runtime.
Example:
database_proxy = Proxy()

class BaseModel(Model):
    class Meta:
        database = database_proxy

class User(BaseModel):
    username = CharField()

# Based on configuration, use a different database.
if app.config['DEBUG']:
    database = SqliteDatabase('local.db')
elif app.config['TESTING']:
    database = SqliteDatabase(':memory:')
else:
    database = PostgresqlDatabase('mega_production_db')

# Configure our proxy to use the db we specified in config.
database_proxy.initialize(database)
initialize(obj)
Parameters obj The object to proxy to.
Once initialized, the attributes and methods on obj can be accessed directly via the Proxy instance.
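A simplified sketch of the deferral idea (peewee's actual Proxy has more machinery, such as attach callbacks; the class below is only an illustration):

```python
class ProxySketch(object):
    """Forward attribute access to an object supplied later."""
    def __init__(self):
        self.obj = None

    def initialize(self, obj):
        self.obj = obj

    def __getattr__(self, attr):
        # Called only for attributes not found normally; forward
        # everything else to the wrapped object.
        if self.obj is None:
            raise AttributeError('Cannot use an uninitialized Proxy.')
        return getattr(self.obj, attr)

p = ProxySketch()
p.initialize("hello")  # at "runtime", supply the real object
print(p.upper())  # HELLO
```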
class Node
The Node class is the parent class for all composable parts of a query, and forms the basis of peewee's expression
API. The following classes extend Node:
SelectQuery, UpdateQuery, InsertQuery, DeleteQuery, and RawQuery.
Field
Func (and fn())
SQL
Expression
Param
Window
Clause
Entity
Check
Overridden operators:
Bitwise and- and or- (& and |): combine multiple nodes using the given conjunction.
+, -, *, / and ^ (add, subtract, multiply, divide and exclusive-or).
==, !=, <, <=, >, >=: create a binary expression using the given comparator.
<<: create an IN expression.
>>: create an IS expression.
% and **: LIKE and ILIKE.
contains(rhs)
Create a binary expression using case-insensitive string search.
startswith(rhs)
Create a binary expression using case-insensitive prefix search.
endswith(rhs)
Create a binary expression using case-insensitive suffix search.
between(low, high)
Create an expression that will match values between low and high.
regexp(expression)
Match based on regular expression.
concat(rhs)
Concatenate the current node with the provided rhs.
__invert__()
Negate the node. This translates roughly into NOT (<node>).
alias([name=None ])
Apply an alias to the given node. This translates into <node> AS <name>.
asc()
Apply ascending ordering to the given node. This translates into <node> ASC.
desc()
Apply descending ordering to the given node. This translates into <node> DESC.
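The effect of the overridden operators is to build an expression tree rather than evaluate anything immediately. A minimal, hypothetical sketch (peewee's real nodes carry far more state):

```python
class NodeSketch(object):
    def __init__(self, sql):
        self.sql = sql

    def _bin(self, op, rhs):
        # Wrap both sides in a new node instead of computing a value.
        rhs_sql = rhs.sql if isinstance(rhs, NodeSketch) else repr(rhs)
        return NodeSketch("(%s %s %s)" % (self.sql, op, rhs_sql))

    def __eq__(self, rhs):
        return self._bin("=", rhs)

    def __and__(self, rhs):
        return self._bin("AND", rhs)

username = NodeSketch('"username"')
active = NodeSketch('"active"')

# Parentheses matter: == has *lower* precedence than &, which is why
# peewee expressions must parenthesize each comparison.
expr = (username == 'charlie') & (active == True)
print(expr.sql)  # (("username" = 'charlie') AND ("active" = True))
```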
hstore support
Postgresql hstore is an embedded key/value store. With hstore, you can store arbitrary key/value pairs in your database
alongside structured relational data.
Currently the postgres_ext module supports the following operations:
Store and retrieve arbitrary dictionaries
Filter by key(s) or partial dictionary
Update/add one or more keys to an existing dictionary
Delete one or more keys from an existing dictionary
Select keys, values, or zip keys and values
Retrieve a slice of keys/values
Test for the existence of a key
Test that a key has a non-NULL value
Using hstore
To start with, you will need to import the custom database class and the hstore functions from
playhouse.postgres_ext (see above code snippet). Then, it is as simple as adding an HStoreField to your
model:
class House(BaseExtModel):
    address = CharField()
    features = HStoreField()

f = House.features
House.select().where(f.contains('garage'))  # <-- all houses w/garage key
House.select().where(f.contains(['garage', 'bath']))  # <-- all houses w/garage & bath
House.select().where(f.contains({'garage': '2 cars'}))  # <-- houses w/2-car garage
You can select just keys, just values, or zip the two:
>>> f = House.features
>>> for h in House.select(House.address, f.keys().alias('keys')):
...     print h.address, h.keys
123 Main St [u'bath', u'garage']

>>> for h in House.select(House.address, f.values().alias('vals')):
...     print h.address, h.vals
123 Main St [u'2 bath', u'2 cars']
You can retrieve a slice of data, for example, all the garage data:
>>> f = House.features
>>> for h in House.select(House.address, f.slice('garage').alias('garage_data')):
...     print h.address, h.garage_data
123 Main St {'garage': '2 cars'}
You can check for the existence of a key and filter rows accordingly:
>>> for h in House.select(House.address, f.exists('garage').alias('has_garage')):
...     print h.address, h.has_garage
123 Main St True

>>> for h in House.select().where(f.exists('garage')):
...     print h.address, h.features['garage']  # <-- just houses w/garage data
123 Main St 2 cars
JSON Support
peewee has basic support for the Postgres native JSON data type, in the form of JSONField.
Warning: Postgres supports a JSON data type natively as of 9.2 (full support in 9.3). In order to use this
functionality you must be using the correct version of Postgres with psycopg2 version 2.5 or greater.
Note: You must be sure your database is an instance of PostgresqlExtDatabase in order to use the JSONField.
Here is an example of how you might declare a model with a JSON field:
import json
import urllib2
from playhouse.postgres_ext import *

db = PostgresqlExtDatabase('my_database')

class APIResponse(Model):
    url = CharField()
    response = JSONField()

    class Meta:
        database = db

    @classmethod
    def request(cls, url):
        fh = urllib2.urlopen(url)
        return cls.create(url=url, response=json.loads(fh.read()))
APIResponse.create_table()

# Store a JSON response.
offense = APIResponse.request('http://wtf.charlesleifer.com/api/offense/')
booking = APIResponse.request('http://wtf.charlesleifer.com/api/booking/')

# Query a JSON data structure using a nested key lookup:
offense_responses = APIResponse.select().where(
    APIResponse.response['meta']['model'] == 'offense')

# Select a nested value for each row, aliased so it is accessible on
# the returned rows:
q = (APIResponse
     .select(APIResponse.response['person'].alias('person'))
     .where(APIResponse.response['meta']['model'] == 'booking'))

for result in q:
    print result.person['name'], result.person['dob']
If you would like all SELECT queries to automatically use a server-side cursor, you can specify this when creating
your PostgresqlExtDatabase:
from playhouse.postgres_ext import PostgresqlExtDatabase

ss_db = PostgresqlExtDatabase('my_db', server_side_cursors=True)
Note: Server-side cursors live only as long as the transaction, so for this reason peewee will not automatically call
commit() after executing a SELECT query. If you do not commit after you are done iterating, you will not release
the server-side resources until the connection is closed (or the transaction is committed later). Furthermore, since
peewee will by default cache rows returned by the cursor, you should always call .iterator() when iterating over
a large query.
If you are using the ServerSide() helper, the transaction and call to iterator() will be handled transparently.
Full-text search
Postgresql provides sophisticated full-text search using special data-types (tsvector and tsquery). Documents
should be stored or converted to the tsvector type, and search queries should be converted to tsquery.
For simple cases, you can use the Match() function, which will automatically perform the appropriate conversions and requires no schema changes:
def blog_search(query):
    return Blog.select().where(
        (Blog.status == Blog.STATUS_PUBLISHED) &
        Match(Blog.content, query))
The Match() function will automatically convert the left-hand operand to a tsvector, and the right-hand operand
to a tsquery. For better performance, it is recommended you create a GIN index on the column you plan to search:
CREATE INDEX blog_full_text_search ON blog USING gin(to_tsvector(content));
Alternatively, you can use the TSVectorField to maintain a dedicated column for storing tsvector data:
class Blog(Model):
    content = TextField()
    search_content = TSVectorField()
You will need to explicitly convert the incoming text data to tsvector when inserting or updating the
search_content field:
content = 'Excellent blog post about peewee ORM.'
blog_entry = Blog.create(
    content=content,
    search_content=fn.to_tsvector(content))
Note: If you are using the TSVectorField, it will automatically be created with a GIN index.
Additionally, you can use the __getitem__ API to query values or slices in the database:
# Get the first tag on a given blog post.
first_tag = (BlogPost
             .select(BlogPost.tags[0].alias('first_tag'))
             .where(BlogPost.id == 1)
             .dicts()
             .get())

# first_tag = {'first_tag': 'foo'}
contains(*items)
Parameters items One or more items that must be in the given array field.
# Get all blog posts that are tagged with both "python" and "django".
Blog.select().where(Blog.tags.contains('python', 'django'))
contains_any(*items)
Parameters items One or more items to search for in the given array field.
Like contains(), except will match rows where the array contains any of the given items.
# Get all blog posts that are tagged with "flask" and/or "django".
Blog.select().where(Blog.tags.contains_any('flask', 'django'))
values()
Return the values for a given row.
>>> for h in House.select(House.address, f.values().alias('vals')):
...     print h.address, h.vals
123 Main St [u'2 bath', u'2 cars']
items()
Like Python's dict, return the keys and values in a list-of-lists:
>>> for h in House.select(House.address, f.items().alias('mtx')):
...     print h.address, h.mtx
123 Main St [[u'bath', u'2 bath'], [u'garage', u'2 cars']]
slice(*args)
Return a slice of data given a list of keys.
>>> f = House.features
>>> for h in House.select(House.address, f.slice('garage').alias('garage_data')):
...     print h.address, h.garage_data
123 Main St {'garage': '2 cars'}
exists(key)
Query for whether the given key exists.
>>> for h in House.select(House.address, f.exists('garage').alias('has_garage')):
...     print h.address, h.has_garage
123 Main St True
defined(key)
Query for whether the given key has a value associated with it.
update(**data)
Perform an atomic update to the keys/values for a given row or rows.
>>> query = House.update(features=House.features.update(
...     sqft=2000,
...     year_built=2012))
>>> query.where(House.id == 1).execute()
delete(*keys)
Delete the provided keys for a given row or rows.
Note: We will use an UPDATE query.
contains(value)
Parameters value Either a dict, a list of keys, or a single key.
Query rows for the existence of either:
a partial dictionary.
a list of keys.
a single key.
>>> f = House.features
>>> House.select().where(f.contains('garage'))  # <-- all houses w/garage key
>>> House.select().where(f.contains(['garage', 'bath']))  # <-- all houses w/garage & bath
>>> House.select().where(f.contains({'garage': '2 cars'}))  # <-- houses w/2-car garage
contains_any(*keys)
Parameters keys One or more keys to search for.
Query rows for the existence of any key.
class JSONField(dumps=None, *args, **kwargs)
Field class suitable for storing and querying arbitrary JSON. When using this on a model, set the fields value to
a Python object (either a dict or a list). When you retrieve your value from the database it will be returned as a
Python data structure.
Parameters dumps The function used to serialize data to JSON; defaults to json.dumps(). You can
override this to provide a customized JSON serializer.
Note: You must be using Postgres 9.2 / psycopg2 2.5 or greater.
Example model declaration:
db = PostgresqlExtDatabase('my_db')

class APIResponse(Model):
    url = CharField()
    response = JSONField()

    class Meta:
        database = db
To illustrate the use of the [] operators, imagine we have the following data stored in an APIResponse:
{
"foo": {
"bar": ["i1", "i2", "i3"],
"baz": {
"huey": "mickey",
"peewee": "nugget"
}
}
}
Match(field, query)
Generate a full-text search expression, automatically converting the left-hand operand to a tsvector, and the
right-hand operand to a tsquery.
Example:
def blog_search(query):
    return Blog.select().where(
        (Blog.status == Blog.STATUS_PUBLISHED) &
        Match(Blog.content, query))
class TSVectorField
Field type suitable for storing tsvector data. This field will automatically be created with a GIN index for
improved search performance.
Note:
Data stored in this field will still need to be manually converted to the tsvector type.
Example usage:
class Blog(Model):
    content = TextField()
    search_content = TSVectorField()

content = 'this is a sample blog entry.'
blog_entry = Blog.create(
    content=content,
    search_content=fn.to_tsvector(content))  # Note the use of to_tsvector().
collation([name ])
Function decorator for registering a custom collation.
Parameters name string name to use for this collation.
@db.collation()
def collate_reverse(s1, s2):
    return -cmp(s1, s2)

# To use this collation:
Book.select().order_by(collate_reverse.collation(Book.title))
As you might have noticed, the original collate_reverse function has a special attribute called
collation attached to it. This extra attribute provides a shorthand way to generate the SQL necessary to use our custom collation.
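db.collation() is a thin wrapper over SQLite's user-defined collation hook, which the standard library exposes directly. A sketch using raw sqlite3 (cmp() is Python 2 only, so the comparison is spelled out; table contents are illustrative):

```python
import sqlite3

def collate_reverse(s1, s2):
    # Return negative when s1 should sort first; inverting the natural
    # comparison yields descending order.
    if s1 == s2:
        return 0
    return -1 if s1 > s2 else 1

db = sqlite3.connect(":memory:")
db.create_collation("reverse", collate_reverse)
db.execute("CREATE TABLE book (title TEXT)")
db.executemany("INSERT INTO book VALUES (?)", [("a",), ("c",), ("b",)])

titles = [row[0] for row in
          db.execute("SELECT title FROM book ORDER BY title COLLATE reverse")]
print(titles)  # ['c', 'b', 'a']
```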
func([name[, num_params ]])
Function decorator for registering user-defined functions.
Parameters
name name to use for this function.
num_params number of parameters this function accepts. If not provided, peewee will
introspect the function for you.
@db.func()
def title_case(s):
    return s.title()

# Use in the select clause...
titled_books = Book.select(fn.title_case(Book.title))

@db.func()
def sha1(s):
    return hashlib.sha1(s).hexdigest()

# Use in the where clause...
user = User.select().where(
    (User.username == username) &
    (fn.sha1(User.password) == password_hash)).get()
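db.func() similarly wraps SQLite's user-defined function hook. The sha1 example above can be sketched with raw sqlite3 (note the encode() needed on Python 3, where hashlib wants bytes):

```python
import hashlib
import sqlite3

def sha1(s):
    return hashlib.sha1(s.encode("utf-8")).hexdigest()

db = sqlite3.connect(":memory:")
# Register sha1 as a one-argument SQL function.
db.create_function("sha1", 1, sha1)

digest = db.execute("SELECT sha1(?)", ("peewee",)).fetchone()[0]
print(digest == hashlib.sha1(b"peewee").hexdigest())  # True
```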
granular_transaction([lock_type='deferred'])
With the granular_transaction helper, you can specify the isolation level for an individual transaction. The valid options are:
exclusive
immediate
deferred
Example usage:
with db.granular_transaction('exclusive'):
    # No other readers or writers!
    (Account
     .update(balance=Account.balance - 100)
     .where(Account.id == from_acct)
     .execute())

    (Account
     .update(balance=Account.balance + 100)
     .where(Account.id == to_acct)
     .execute())
class VirtualModel
Subclass of Model that signifies the model operates using a virtual table provided by a sqlite extension.
_extension = name of sqlite extension
class FTSModel
Model class that provides support for Sqlites full-text search extension. Models should be defined normally,
however there are a couple caveats:
Indexes are ignored completely
Sqlite will treat all column types as TextField (although you can store other data types, Sqlite will treat
them as text).
Therefore it usually makes sense to index the content you intend to search and a single link back to the original
document, since all SQL queries except full-text searches and rowid lookups will be slow.
Example:
class Document(FTSModel):
    title = TextField()  # type affinities are ignored by FTS, so use TextField
    content = TextField()
Document.create_table(tokenize='porter')
If you have an existing table and would like to add search for a column on that table, you can specify it using
the content option:
class Blog(Model):
    title = CharField()
    pub_date = DateTimeField()
    content = TextField()  # we want to search this.

class FTSBlog(FTSModel):
    content = TextField()

Blog.create_table()
FTSBlog.create_table(content=Blog.content)

# Now, we can manage content in the FTSBlog. To populate it with
# content:
FTSBlog.rebuild()
The content option accepts either a single Field or a Model and can reduce the amount of storage used.
However, content will need to be manually moved to/from the associated FTSModel.
classmethod rank()
Calculate the rank based on the quality of the match.
query = (Document
         .select(Document, Document.rank().alias('score'))
         .where(Document.match('search phrase'))
         .order_by(SQL('score').desc()))

for search_result in query:
    print search_result.title, search_result.score
match(lhs, rhs)
Generate a SQLite MATCH expression for use in full-text searches.
Document.select().where(match(Document.content, 'search term'))
Rank(model_class)
Calculate the rank of the search results, for use with FTSModel queries using the MATCH operator.
# Search for documents and return results ordered by quality
# of match.
docs = (Document
        .select(Document, Rank(Document).alias('score'))
        .where(Document.match('some search term'))
        .order_by(SQL('score').desc()))
BM25(model_class, field_index)
Calculate the rank of the search results, for use with FTSModel queries using the MATCH operator.
Parameters
model_class (Model) The FTSModel on which the query is being performed.
field_index (int) The 0-based index of the field being queried.
# Assuming the content field has index=2 (0=pk, 1=title, 2=content),
# calculate the BM25 score for each result.
docs = (Document
        .select(Document, BM25(Document, 2).alias('score'))
        .where(Document.match('search term'))
        .order_by(SQL('score').desc()))
Then get a copy of the standard library SQLite driver and build it against BerkeleyDB:
git clone https://github.com/ghaering/pysqlite
cd pysqlite
sed -i "s|#||g" setup.cfg
python setup.py build
sudo python setup.py install
To simplify this process, peewee comes with a script that will automatically build the appropriate libraries for you.
The berkeley_build.sh script can be found in the playhouse directory (or you can view the source online).
You can also find step by step instructions on my blog.
class BerkeleyDatabase(database, **kwargs)
Subclass of the SqliteExtDatabase that supports connecting to BerkeleyDB-backed version of SQLite.
1.13.6 DataSet
The dataset module contains a high-level API for working with databases modeled after the popular project of the
same name. The aims of the dataset module are to provide:
A simplified API for working with relational data, along the lines of working with JSON.
An easy way to export relational data.
A minimal data-loading script might look like this:
from playhouse.dataset import DataSet

db = DataSet('sqlite:///:memory:')
table = db['sometable']
table.insert(name='Huey', age=3)
table.insert(name='Mickey', age=5, gender='male')

huey = table.find_one(name='Huey')
print huey
# {'age': 3, 'gender': None, 'id': 1, 'name': 'Huey'}

for obj in table:
    print obj
# {'age': 3, 'gender': None, 'id': 1, 'name': 'Huey'}
# {'age': 5, 'gender': 'male', 'id': 2, 'name': 'Mickey'}
Getting started
DataSet objects are initialized by passing in a database URL of the format
dialect://user:password@host/dbname. See the Database URL section for examples of connecting to various databases.
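The URL format is a standard one, so the pieces a DataSet-style connector needs can be recovered with the standard library (shown here with Python 3's urllib.parse; the URL itself is illustrative):

```python
from urllib.parse import urlparse

url = urlparse("postgresql://postgres:secret@localhost/my_database")

# dialect://user:password@host/dbname maps onto the parsed fields:
print(url.scheme, url.username, url.hostname, url.path.lstrip("/"))
# postgresql postgres localhost my_database
```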
# Create an in-memory SQLite database.
db = DataSet('sqlite:///:memory:')
Storing data
To store data, we must first obtain a reference to a table. If the table does not exist, it will be created automatically:
# Get a table reference, creating the table if it does not exist.
table = db['users']
We can now insert() new rows into the table. If the columns do not exist, they will be created automatically:
table.insert(name='Huey', age=3)
To update existing entries in the table, pass in a dictionary containing the new values and filter conditions. The list of
columns to use as filters is specified in the columns argument. If no filter columns are specified, then all rows will be
updated.
# Update the gender for "Huey".
table.update(name='Huey', gender='male', columns=['name'])

# Update all records. If the column does not exist, it will be created.
table.update(favorite_orm='peewee')
Using transactions
DataSet supports nesting transactions using a simple context manager.
table = db['users']
with db.transaction() as txn:
    table.insert(name='Charlie')

    with db.transaction() as nested_txn:
        # Set Charlie's favorite ORM to Django.
        table.update(name='Charlie', favorite_orm='django', columns=['name'])

        # jk/lol
        nested_txn.rollback()
Reading data
To retrieve all rows, you can use the all() method:
# Retrieve all the users.
users = db['user'].all()
# We can iterate over all rows without calling .all()
To export data, use the freeze() method, passing in the query you wish to export:
peewee_users = db['user'].find(favorite_orm='peewee')
db.freeze(peewee_users, format='json', filename='peewee_users.json')
API
class DataSet(url)
The DataSet class provides a high-level API for working with relational databases.
Parameters url (str) A database URL. See Database URL for examples.
tables
Return a list of tables stored in the database. This list is computed dynamically each time it is accessed.
__getitem__(table_name)
Provide a Table reference to the specified table. If the table does not exist, it will be created.
query(sql[, params=None[, commit=True ]])
Parameters
sql (str) A SQL query.
params (list) Optional parameters for the query.
commit (bool) Whether the query should be committed upon execution.
Returns A database cursor.
Execute the provided query against the database.
transaction()
Create a context manager representing a new transaction (or savepoint).
freeze(query[, format=csv[, filename=None[, file_obj=None[, **kwargs ]]]])
Parameters
query A SelectQuery, generated using all() or Table.find().
format Output format. By default, csv and json are supported.
filename Filename to write output to.
file_obj File-like object to write output to.
kwargs Arbitrary parameters for export-specific functionality.
connect()
Open a connection to the underlying database. If a connection is not opened explicitly, one will be opened
the first time a query is executed.
close()
Close the connection to the underlying database.
class Table(dataset, name, model_class)
The Table class provides a high-level API for working with rows in a given table.
columns
Return a list of columns in the given table.
model_class
A dynamically-created Model class.
create_index(columns[, unique=False ])
Create an index on the given columns:
# Create a unique index on the username column.
db['users'].create_index(['username'], unique=True)
insert(**data)
Insert the given data dictionary into the table, creating new columns as needed.
update(columns=None, conjunction=None, **data)
Update the table using the provided data. If one or more columns are specified in the columns parameter,
then those columns values in the data dictionary will be used to determine which rows to update.
# Update all rows.
db['users'].update(favorite_orm='peewee')

# Only update Huey's record, setting his age to 3.
db['users'].update(name='Huey', age=3, columns=['name'])
find(**query)
Query the table for rows matching the specified equality conditions. If no query is specified, then all rows
are returned.
peewee_users = db['users'].find(favorite_orm='peewee')
find_one(**query)
Return a single row matching the specified equality conditions. If no matching row is found then None
will be returned.
huey = db['users'].find_one(name='Huey')
all()
Return all rows in the given table.
delete(**query)
Delete all rows matching the given equality conditions. If no query is provided, then all rows will be
deleted.
# Adios, Django!
db['users'].delete(favorite_orm='Django')

# Delete all the secret messages.
db['secret_messages'].delete()
As you can see in the example above, although only User and Group were passed in to translate(), several other
models which are related by foreign key were also created. Additionally, the many-to-many through tables were
created as separate models since peewee does not abstract away these types of relationships.
Using the above models it is possible to construct joins. The following example will get all users who belong to a
group that starts with the letter A:
>>> P = translate(User, Group)
>>> query = P.User.select().join(P.User_groups).join(P.Group).where(
...     fn.Lower(fn.Substr(P.Group.name, 1, 1)) == 'a')
>>> sql, params = query.sql()
>>> print sql # formatted for legibility
SELECT t1."id", t1."password", ...
FROM "auth_user" AS t1
INNER JOIN "auth_user_groups" AS t2 ON (t1."id" = t2."user_id")
INNER JOIN "auth_group" AS t3 ON (t2."group_id" = t3."id")
WHERE (Lower(Substr(t3."name", %s, %s)) = %s)
djpeewee API
translate(*models, **options)
Translate the given Django models into roughly equivalent peewee models suitable for use constructing queries.
Foreign keys and many-to-many relationships will be followed and models generated, although back references
are not traversed.
Parameters
models One or more Django model classes.
options A dictionary of options, see note below.
Returns A dict-like object containing the generated models, but which supports dotted-name style
lookups.
The following are valid options:
recurse: Follow foreign keys and many to many (default: True).
max_depth: Maximum depth to recurse (default: None, unlimited).
backrefs: Follow backrefs (default: False).
exclude: A list of models to exclude.
Like regular ForeignKeys, GFKs support a back-reference via the ReverseGFK descriptor.
How to use GFKs
1. Be sure your model subclasses playhouse.gfk.Model
2. Add a CharField to store the object_type
3. Add a field to store the object_id (usually an IntegerField)
4. Add a GFKField and instantiate it with the names of the object_type and object_id fields.
5. (optional) On any other models, add a ReverseGFK descriptor
Example:
from playhouse.gfk import *

class Tag(Model):
    tag = CharField()
    object_type = CharField(null=True)
    object_id = IntegerField(null=True)
    object = GFKField('object_type', 'object_id')

class Blog(Model):
    tags = ReverseGFK(Tag, 'object_type', 'object_id')

class Photo(Model):
    tags = ReverseGFK(Tag, 'object_type', 'object_id')

b = Blog.create(name='awesome post')
Tag.create(tag='awesome', object=b)

b2 = Blog.create(name='whiny post')
Tag.create(tag='whiny', object=b2)
>>> t.object
<__main__.Blog at 0x268f450>
>>> t.object.name
u'awesome post'
GFK API
class GFKField([model_type_field=object_type[, model_id_field=object_id ]])
Provide a clean API for storing generic foreign keys. Generic foreign keys are comprised of an object type,
which maps to a model class, and an object id, which maps to the primary key of the related model class.
Setting the GFKField on a model will automatically populate the model_type_field and
model_id_field. Similarly, getting the GFKField on a model instance will resolve the two fields, first
looking up the model class, then looking up the instance by ID.
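The two-step resolution (object_type to a model class, then object_id to a row) can be sketched with an in-memory registry. Everything below is hypothetical scaffolding to illustrate the lookup, not the playhouse implementation:

```python
MODEL_REGISTRY = {}   # object_type -> model class
INSTANCES = {}        # (object_type, object_id) -> instance

class FakeModel(object):
    def __init__(self, pk, name):
        self.pk, self.name = pk, name
        type_name = type(self).__name__.lower()
        MODEL_REGISTRY[type_name] = type(self)
        INSTANCES[(type_name, pk)] = self

class Blog(FakeModel):
    pass

def resolve_gfk(object_type, object_id):
    model_class = MODEL_REGISTRY[object_type]       # 1. find the model class
    instance = INSTANCES[(object_type, object_id)]  # 2. find the row by id
    assert isinstance(instance, model_class)
    return instance

b = Blog(1, "awesome post")
print(resolve_gfk("blog", 1).name)  # awesome post
```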
class ReverseGFK(model[, model_type_field=object_type[, model_id_field=object_id ]])
Back-reference support for GFKField.
Note: To store arbitrary python objects, use the PickledKeyStore, which stores values in a pickled BlobField.
Using the KeyStore it is possible to use expressions to retrieve values from the dictionary. For instance, imagine
you want to get all keys which contain a certain substring:
>>> keys_matching_substr = kv[kv.key % '%substr%']
>>> keys_start_with_a = kv[fn.Lower(fn.Substr(kv.key, 1, 1)) == 'a']
KeyStore API
class KeyStore(value_field[, ordered=False[, database=None ]])
Lightweight dictionary interface to a model containing a key and value. Implements common dictionary methods, such as __getitem__, __setitem__, get, pop, items, keys, and values.
Parameters
value_field (Field) Field instance to use as value field, e.g. an instance of TextField.
ordered (boolean) Whether the keys should be returned in sorted order
database (Database) Database class to use for the storage backend. If none is supplied,
an in-memory Sqlite DB will be used.
Example:
>>> kv = KeyStore(TextField())
>>> kv['a'] = 'A'
>>> print kv['a']
A
1.13.10 Shortcuts
This module contains helper functions for expressing things that would otherwise be somewhat verbose or cumbersome using peewee's APIs.
case(predicate, expression_tuples, default=None)
Parameters
predicate A SQL expression, or None.
expression_tuples An iterable containing one or more 2-tuples, each comprised of an expression and a return value.
default The default value if none of the cases match.
Example SQL case statements:
-- case with predicate --
SELECT "username",
  CASE "user_id"
    WHEN 1 THEN "one"
    WHEN 2 THEN "two"
    ELSE "?"
  END
FROM "users";

-- case with no predicate (inline expressions) --
SELECT "username",
  CASE
    WHEN "user_id" = 1 THEN "one"
    WHEN "user_id" = 2 THEN "two"
    ELSE "?"
  END
FROM "users";
You can specify a value for the CASE expression using the alias() method:
User.select(User.username, case(User.user_id, (
    (1, "one"),
    (2, "two")), "?").alias("id_string"))
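Both forms are plain SQL, so the predicate version can be exercised directly against an in-memory SQLite database (the table contents here are illustrative):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (username TEXT, user_id INTEGER)")
db.executemany("INSERT INTO users VALUES (?, ?)",
               [("huey", 1), ("mickey", 2), ("zaizee", 3)])

# A CASE expression with a predicate, matched against user_id.
rows = db.execute("""
    SELECT username,
           CASE user_id WHEN 1 THEN 'one' WHEN 2 THEN 'two' ELSE '?' END
    FROM users""").fetchall()
print(rows)  # [('huey', 'one'), ('mickey', 'two'), ('zaizee', '?')]
```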
>>> model_to_dict(t2)
{'id': 1,
 'message': 'tweet-2',
 'user': {
     'id': 1,
     'username': 'charlie'}}

>>> model_to_dict(t2, recurse=False)
{'id': 1, 'message': 'tweet-2', 'user': 1}
class MyModel(Model):
    data = IntegerField()
@post_save(sender=MyModel)
def on_save_handler(model_class, instance, created):
    put_data_in_cache(instance.data)
pre_delete called when an object is about to be deleted from the database.
post_delete called when an object has been deleted from the database.
All signal handlers accept as their first two arguments sender and instance, where sender is the model class
and instance is the actual model being acted upon.
If you'd like, you can also use a decorator to connect signal handlers. This is functionally equivalent to the above
example:
@post_save(sender=SomeModel)
def post_save_handler(sender, instance, created):
    print '%s was just saved' % instance
Signal API
class Signal
Stores a list of receivers (callbacks) and calls them when the send method is invoked.
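The receiver-list idea can be sketched in a few lines. This is a simplified illustration, not the playhouse API (whose connect() also accepts sender and name arguments):

```python
class SignalSketch(object):
    def __init__(self):
        self._receivers = []

    def connect(self, receiver):
        # Register a callback; returning it lets connect work as a decorator.
        self._receivers.append(receiver)
        return receiver

    def send(self, sender, instance, **kwargs):
        # Invoke every receiver, collecting their return values.
        return [r(sender, instance, **kwargs) for r in self._receivers]

post_save = SignalSketch()

@post_save.connect
def handler(sender, instance, created=False):
    return "%s saved (created=%s)" % (instance, created)

print(post_save.send("SomeModel", "row-1", created=True))
# ['row-1 saved (created=True)']
```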
This will print a bunch of models to standard output. So you can do this:
pwiz.py -e postgresql my_postgres_db > mymodels.py
python # <-- fire up an interactive shell
>>> from mymodels import Blog, Entry, Tag, Whatever
>>> print [blog.name for blog in Blog.select()]
Option   Meaning              Example
-h       show help
-e       database backend     -e mysql
-H       host to connect to   -H remote.db.server
-p       port to connect on   -p 9001
-u       database user        -u postgres
-P       database password    -P secret
-s       postgres schema      -s public
Instantiate a migrator. The SchemaMigrator class is responsible for generating schema altering operations,
which can then be run sequentially by the migrate() helper.

# Postgres example:
my_db = PostgresqlDatabase(...)
migrator = PostgresqlMigrator(my_db)

# SQLite example:
my_db = SqliteDatabase('my_database.db')
migrator = SqliteMigrator(my_db)
Warning: Migrations are not run inside a transaction. If you wish the migration to run in a transaction you will
need to wrap the call to migrate in a transaction block, e.g.
with my_db.transaction():
    migrate(...)
Supported Operations
Add new field(s) to an existing model:
# Create your field instances. For non-null fields you must specify a
# default value.
pubdate_field = DateTimeField(null=True)
comment_field = TextField(default='')

# Run the migration, specifying the database table, field name and field.
migrate(
    migrator.add_column('comment_tbl', 'pub_date', pubdate_field),
    migrator.add_column('comment_tbl', 'comment', comment_field),
)
Renaming a field:
# Specify the table, original name of the column, and its new name.
migrate(
    migrator.rename_column('story', 'pub_date', 'publish_date'),
    migrator.rename_column('story', 'mod_date', 'modified_date'),
)
Dropping a field:
migrate(
    migrator.drop_column('story', 'some_old_field'),
)
Renaming a table:
migrate(
    migrator.rename_table('story', 'stories_tbl'),
)
Adding an index:
# Specify the table, column names, and whether the index should be
# UNIQUE or not.
migrate(
    # Create an index on the pub_date column.
    migrator.add_index('story', ('pub_date',), False),

    # Create a multi-column index on the pub_date and status fields.
    migrator.add_index('story', ('pub_date', 'status'), False),

    # Create a unique index on the category and title fields.
    migrator.add_index('story', ('category_id', 'title'), True),
)
Dropping an index:
# Specify the index name.
migrate(migrator.drop_index('story', 'story_pub_date_status'))
Migrations API
migrate(*operations)
Execute one or more schema altering operations.
Usage:
migrate(
    migrator.add_column('some_table', 'new_column', CharField(default='')),
    migrator.create_index('some_table', ('new_column',)),
)
class SchemaMigrator(database)
Parameters database a Database instance.
The SchemaMigrator is responsible for generating schema-altering statements.
add_column(table, column_name, field)
Parameters
table (str) Name of the table to add column to.
column_name (str) Name of the new column.
field (Field) A Field instance.
Add a new column to the provided table. The field provided will be used to generate the appropriate
column definition.
Note: If the field is not nullable it must specify a default value.
Note: For non-null fields, the field will initially be added as a null field, then an UPDATE statement will
be executed to populate the column with the default value. Finally, the column will be marked as not null.
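The three-step sequence described in the note might be sketched as follows. The statement text is illustrative (Postgres-style DDL), not peewee's exact output, and the helper name is made up:

```python
def add_not_null_column_steps(table, column, definition, default):
    """Return the three-step statement sequence for adding a NOT NULL column."""
    return [
        # 1. Add the column, initially allowing NULLs.
        'ALTER TABLE "%s" ADD COLUMN "%s" %s' % (table, column, definition),
        # 2. Backfill existing rows with the default value.
        'UPDATE "%s" SET "%s" = %r' % (table, column, default),
        # 3. Mark the column NOT NULL now that no NULLs remain.
        'ALTER TABLE "%s" ALTER COLUMN "%s" SET NOT NULL' % (table, column),
    ]
```

Splitting the operation this way is what lets a non-null column be added to a table that already contains rows.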
drop_column(table, column_name[, cascade=True ])
Parameters
table (str) Name of the table to drop column from.
column_name (str) Name of the column to drop.
cascade (bool) Whether the column should be dropped with CASCADE.
1.13.14 Reflection
The reflection module contains helpers for introspecting existing databases. This module is used internally by several
other modules in the playhouse, including DataSet and pwiz, a model generator.
generate_models()
Introspect the database, reading in the tables, columns, and foreign key constraints, then generate a dictionary mapping each database table to a dynamically-generated Model class.
Returns A dictionary mapping table-names to model classes.
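A rough stdlib approximation of the idea, using sqlite3 and namedtuples in place of real Model classes (introspect_models is a made-up name, not the playhouse API):

```python
import sqlite3
from collections import namedtuple

def introspect_models(conn):
    """Map each table name to a class generated from its columns."""
    models = {}
    tables = [row[0] for row in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")]
    for table in tables:
        # PRAGMA table_info yields (cid, name, type, ...) per column.
        columns = [row[1] for row in conn.execute(
            'PRAGMA table_info("%s")' % table)]
        models[table] = namedtuple(table.title().replace('_', ''), columns)
    return models
```

The real generate_models() additionally reads foreign key constraints and produces full peewee Model subclasses rather than namedtuples.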
For more information and examples check out this blog post.
CSV Loader API
load_csv(db_or_model, filename[, fields=None[, field_names=None[, has_header=True[, sample_size=10[, converter=None[, db_table=None[, **reader_kwargs ]]]]]]])
Load a CSV file into the provided database or model class, returning a Model suitable for working with the
CSV data.
Parameters
db_or_model Either a Database instance or a Model class. If a model is not provided,
one will be automatically generated for you.
filename (str) Path of CSV file to load.
fields (list) A list of Field instances mapping to each column in the CSV. This allows
you to manually specify the column types. If not provided, and a model is not provided, the
field types will be determined automatically.
field_names (list) A list of strings to use as field names for each column in the CSV. If not
provided, and a model is not provided, the field names will be determined by looking at the
header row of the file. If no header exists, then the fields will be given generic names.
has_header (bool) Whether the first row is a header.
sample_size (int) Number of rows to look at when introspecting data types. If set to 0,
then a generic field type will be used for all fields.
converter (RowConverter) a RowConverter instance to use for introspecting the CSV.
If not provided, one will be created.
db_table (str) The name of the database table to load data into. If this value is not provided, it will be determined using the filename of the CSV file. If a model is provided, this
value is ignored.
from peewee import *
from playhouse.csv_loader import *

db = SqliteDatabase(':memory:')
Users = load_csv(db, 'users.csv')
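The sample-based type detection that sample_size controls might look roughly like this. This is a sketch with made-up names, not the actual RowConverter logic:

```python
import csv
import io

def sniff_column_types(csv_text, sample_size=10, has_header=True):
    """Guess int/float/str for each column from the first few data rows."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    if has_header:
        rows = rows[1:]
    sample = rows[:sample_size] if sample_size else []
    n_columns = len(rows[0]) if rows else 0
    types = []
    for i in range(n_columns):
        guess = str  # generic fallback, used when sample_size is 0
        if sample:
            values = [row[i] for row in sample]
            for candidate in (int, float):
                try:
                    for value in values:
                        candidate(value)
                except ValueError:
                    continue
                guess = candidate
                break
        types.append(guess)
    return types
```

Setting sample_size to 0 skips introspection entirely and falls back to a generic field type, matching the behavior described above.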
Specifying fields:
Dumping CSV
dump_csv(query, file_or_name[, include_header=True[, close_file=True[, append=True[, csv_writer=None ]]]])
Parameters
query A peewee SelectQuery to dump as CSV.
file_or_name Either a filename or a file-like object.
include_header Whether to generate a CSV header row consisting of the names of the
selected columns.
close_file Whether the file should be closed after writing the query data.
append Whether new data should be appended to the end of the file.
csv_writer A python csv.writer instance to use.
Example usage:
with open('account-export.csv', 'w') as fh:
    query = Account.select().order_by(Account.id)
    dump_csv(query, fh)
The pool can specify a timeout after which connections are recycled, as well as an upper bound on the
number of open connections.
If your application is single-threaded, only one connection will be opened.
If your application is multi-threaded (this includes green threads) and you specify threadlocals=True when instantiating your database, then up to max_connections will be opened. As of version 2.3.3, this is the default behavior.
class PooledDatabase(database[, max_connections=20[, stale_timeout=None[, **kwargs ]]])
Mixin class intended to be used with a subclass of Database.
Parameters
database (str) The name of the database or database file.
max_connections (int) Maximum number of connections. Provide None for unlimited.
stale_timeout (int) Number of seconds to allow connections to be used.
kwargs Arbitrary keyword arguments passed to database class.
Note: Connections will not be closed exactly when they exceed their stale_timeout. Instead, stale connections
are only closed when a new connection is requested.
Note: If the number of open connections exceeds max_connections, a ValueError will be raised.
manual_close()
Close the currently-open connection without returning it to the pool.
_connect(*args, **kwargs)
Request a connection from the pool. If there are no available connections a new one will be opened.
_close(conn[, close_conn=False ])
By default conn will not be closed and instead will be returned to the pool of available connections. If
close_conn=True, then conn will be closed and not be returned to the pool.
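The recycling rules in the notes above can be modeled with a toy pool (MiniPool is illustrative, not peewee's implementation): stale connections are discarded lazily when a new connection is requested, and exceeding max_connections raises ValueError.

```python
import time

class MiniPool(object):
    def __init__(self, factory, max_connections=20, stale_timeout=None):
        self.factory = factory
        self.max_connections = max_connections
        self.stale_timeout = stale_timeout
        self._free = []      # list of (checked_in_at, conn)
        self._in_use = set()

    def _is_stale(self, checked_in_at):
        return (self.stale_timeout is not None and
                time.time() - checked_in_at > self.stale_timeout)

    def connect(self):
        while self._free:
            checked_in_at, conn = self._free.pop()
            if self._is_stale(checked_in_at):
                continue  # stale conns are discarded lazily, on request
            self._in_use.add(conn)
            return conn
        if (self.max_connections is not None and
                len(self._in_use) >= self.max_connections):
            raise ValueError('Exceeded maximum connections.')
        conn = self.factory()
        self._in_use.add(conn)
        return conn

    def release(self, conn):
        self._in_use.discard(conn)
        self._free.append((time.time(), conn))
```

Note how staleness is only checked inside connect(), mirroring the note that stale connections are closed when a new connection is requested rather than on a timer.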
class PooledPostgresqlDatabase
Subclass of PostgresqlDatabase that mixes in the PooledDatabase helper.
class PooledPostgresqlExtDatabase
Subclass of PostgresqlExtDatabase that mixes in the PooledDatabase helper.
The PostgresqlExtDatabase is a part of the Postgresql Extensions module and provides support for many
Postgres-specific features.
class PooledMySQLDatabase
Subclass of MySQLDatabase that mixes in the PooledDatabase helper.
When you execute writes (or deletes), they will be executed against the master database:

User.create(username='Peewee')

When you execute a read query, it will run against one of the replicas:

users = User.select().where(User.username == 'Peewee')

Note: To force a SELECT query against the master database, manually create the SelectQuery.

SelectQuery(User)  # master database.
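The routing rule is simply "writes to the master, reads round-robin over the replicas"; a toy sketch (ReadSlaveRouter is a made-up name, not the playhouse class):

```python
import itertools

class ReadSlaveRouter(object):
    def __init__(self, master, replicas):
        self.master = master
        self._replicas = itertools.cycle(replicas)

    def db_for_write(self):
        # Writes (and deletes) always go to the master.
        return self.master

    def db_for_read(self):
        # Reads rotate through the replicas round-robin.
        return next(self._replicas)
```

This is why manually constructing the SelectQuery is the escape hatch: it bypasses the read-routing step entirely.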
class TestUsersTweets(TestCase):
    def create_test_data(self):
        # ... create a bunch of users and tweets
        for i in range(10):
            User.create(username='user-%d' % i)

    def test_timeline(self):
        with test_database(test_db, (User, Tweet)):
            # This data will be created in test_db.
            self.create_test_data()

            # Perform assertions on test data inside ctx manager.
            self.assertEqual(Tweet.timeline('user-0'), [...])

        # Once we exit the context manager, we're back to using the normal database.
class count_queries([only_select=False ])
Context manager that will count the number of queries executed within the context.
Parameters only_select (bool) Only count SELECT queries.
with count_queries() as counter:
    huey = User.get(User.username == 'huey')
    huey_tweets = [tweet.message for tweet in huey.tweets]

assert counter.count == 2
count
The number of queries executed.
get_queries()
Return a list of 2-tuples consisting of the SQL query and a list of parameters.
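The same idea can be reproduced for a raw sqlite3 connection using its trace callback. This is a sketch with a made-up name; count_queries itself hooks peewee's query-execution path rather than sqlite3:

```python
import sqlite3

class count_sqlite_queries(object):
    """Context manager counting statements executed on a sqlite3 connection."""
    def __init__(self, conn, only_select=False):
        self.conn = conn
        self.only_select = only_select
        self.queries = []

    def __enter__(self):
        self.conn.set_trace_callback(self._trace)
        return self

    def __exit__(self, *exc_info):
        self.conn.set_trace_callback(None)

    def _trace(self, sql):
        if not self.only_select or sql.lstrip().upper().startswith('SELECT'):
            self.queries.append(sql)

    @property
    def count(self):
        return len(self.queries)
```

As with count_queries, the recorded statements remain available after the context exits, so assertions can be made outside the with block.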
assert_query_count(expected[, only_select=False ])
Function or method decorator that will raise an AssertionError if the number of queries executed in the
decorated function does not equal the expected number.
class TestMyApp(unittest.TestCase):
    @assert_query_count(1)
    def test_get_popular_blogs(self):
        popular_blogs = Blog.get_popular()
        self.assertEqual(
            [blog.title for blog in popular_blogs],
            ["Peewee's Playhouse!", "All About Huey", "Mickey's Adventures"])
1.13.20 pskel
I often find myself writing very small scripts with peewee. pskel will generate the boilerplate code for a basic peewee
script.
Usage:
Default    Description
False      Log all queries to stdout.
sqlite     Database driver to use.
:memory:   Database to connect to.
Example:
$ pskel -e postgres -d my_database User Tweet
This will print the following code to stdout (which you can redirect into a file using >):
#!/usr/bin/env python
import logging

from peewee import *
from peewee import create_model_tables

db = PostgresqlDatabase('my_database')

class BaseModel(Model):
    class Meta:
        database = db

class User(BaseModel):
    pass

class Tweet(BaseModel):
    pass

def main():
    create_model_tables([User, Tweet], fail_silently=True)

if __name__ == '__main__':
    main()
CHAPTER 2
Note
If you find any bugs, odd behavior, or have an idea for a new feature, please don't hesitate to open an issue on GitHub
or contact me.