Picamera2 Manual
Raspberry Pi Ltd
The Picamera2 Library
Colophon
© 2022-2024 Raspberry Pi Ltd
This documentation is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International (CC BY-ND).
build-date: 2024-04-29
build-version: 215621a-clean
RPL reserves the right to make any enhancements, improvements, corrections or any other modifications to the
RESOURCES or any products described in them at any time and without further notice.
The RESOURCES are intended for skilled users with suitable levels of design knowledge. Users are solely responsible for
their selection and use of the RESOURCES and any application of the products described in them. User agrees to
indemnify and hold RPL harmless against all liabilities, costs, damages or other losses arising out of their use of the
RESOURCES.
RPL grants users permission to use the RESOURCES solely in conjunction with the Raspberry Pi products. All other use
of the RESOURCES is prohibited. No licence is granted to any other RPL or other third party intellectual property right.
HIGH RISK ACTIVITIES. Raspberry Pi products are not designed, manufactured or intended for use in hazardous
environments requiring fail safe performance, such as in the operation of nuclear facilities, aircraft navigation or communication
systems, air traffic control, weapons systems or safety-critical applications (including life support systems and other
medical devices), in which the failure of the products could lead directly to death, personal injury or severe physical or
environmental damage (“High Risk Activities”). RPL specifically disclaims any express or implied warranty of fitness for
High Risk Activities and accepts no liability for use or inclusions of Raspberry Pi products in High Risk Activities.
Raspberry Pi products are provided subject to RPL’s Standard Terms. RPL’s provision of the RESOURCES does not expand
or otherwise modify RPL’s Standard Terms including but not limited to the disclaimers and warranties expressed in them.
Table of contents
Colophon
Legal disclaimer notice
1. Introduction
2. Getting started
2.1. Requirements
2.2. Installation and updating
2.3. A first example
2.4. Picamera2’s high-level API
2.5. Multiple Cameras
2.6. Additional software
2.6.1. OpenCV
2.6.2. TensorFlow Lite
2.6.3. FFmpeg
2.7. Further examples
3. Preview windows
3.1. Preview window parameters
3.2. Preview window implementations
3.2.1. QtGL preview
3.2.2. DRM/KMS preview
3.2.3. Qt preview
3.2.4. NULL preview
3.3. Starting and stopping previews
3.4. Remote preview windows
3.5. Other Preview Features
3.5.1. Setting the Preview Title Bar
3.5.2. Further Preview Topics
3.6. Further examples
4. Configuring the camera
4.1. Generating and using a camera configuration
4.2. Configurations in more detail
4.2.1. General configuration parameters
4.2.2. Stream configuration parameters
4.2.3. Configurations and runtime camera controls
4.3. Configuration objects
4.4. Configuring a USB Camera
4.5. Further examples
5. Camera controls and properties
5.1. Camera controls
5.1.1. How to set camera controls
5.1.2. Object syntax for camera controls
5.2. Autofocus Controls
5.2.1. Autofocus Modes and State
5.2.2. Continuous Autofocus
5.2.3. Setting the Lens Position Manually
5.2.4. Triggering an Autofocus Cycle
5.2.5. Other Autofocus Controls
5.3. Camera properties
5.4. Further examples
6. Capturing images and requests
6.1. Capturing images
6.1.1. Capturing arrays
6.1.2. Capturing PIL images
6.1.3. Switching camera mode and capturing
1. Introduction
Picamera2 is a Python library that gives convenient access to the camera system of the Raspberry Pi. It is designed for
cameras connected with the flat ribbon cable directly to the connector on the Raspberry Pi itself, and not for other types
of camera, although there is some limited support for USB cameras.
Figure 1. A Raspberry Pi with a supported camera
Picamera2 is built on top of the open source libcamera project, which provides support for complex camera systems in
Linux. Picamera2 directly uses the Python bindings supplied by libcamera, although the Picamera2 API provides access
at a higher level. Most users will find it significantly easier to use for Raspberry Pi applications than libcamera’s own
bindings, and Picamera2 is tuned specifically to address the capabilities of the Raspberry Pi’s built-in camera and
imaging hardware.
Picamera2 is the replacement for the legacy PiCamera Python library. It provides broadly the same facilities, although
many of these capabilities are exposed differently. Picamera2 provides a very direct and more accurate view of the Pi’s
camera system, and makes it easy for Python applications to make use of them.
Those still using the legacy camera stack should continue to use the old PiCamera library. Note that the legacy camera
stack and the old PiCamera library have been deprecated for a number of years and no longer receive any kind of
support.
NOTE
This document assumes general familiarity with Raspberry Pis and Python programming. A working understanding of images and
how they can be represented as a two-dimensional array of pixels will also be highly beneficial. For a deeper understanding of
Picamera2, some basic knowledge of Python’s numpy library will be helpful. For some more advanced use-cases, an awareness of
OpenCV (the Python cv2 module) will also be useful.
Software version
This manual describes Picamera2 version 0.3.16 which is at the time of writing the most up-to-date release.
2. Getting started
2.1. Requirements
Picamera2 is designed for systems running either Raspberry Pi OS or Raspberry Pi OS Lite, using a Bullseye or later
image. Picamera2 is pre-installed in current images obtained using the Raspberry Pi Imager tool. Alternatively the latest
images can also be downloaded from the Raspberry Pi website. We strongly recommend users with older images to
consider updating them or to proceed to the installation instructions.
Picamera2 can operate in a headless manner, not requiring an attached screen or keyboard. When first setting up such a
system we would recommend attaching a keyboard and screen if possible, as it can make trouble-shooting easier.
Raspberry Pi OS Bullseye and later images by default run the libcamera camera stack, which is required for Picamera2.
You can check that libcamera is working by opening a command window and typing:
rpicam-hello
You should see a camera preview window for about five seconds. If you do not, please refer to the Raspberry Pi camera
documentation.
Some lower-powered devices, such as the Raspberry Pi Zero, are generally much slower at running desktop GUI
(Graphical User Interface) software. Correspondingly, performance may be poor trying to run the camera system with a preview
window that has to display images through the GUI’s display stack.
On such devices we would recommend either not displaying the images, or displaying them without the GUI. The Pi can
be configured to boot to the console (avoiding the GUI) using the raspi-config tool, or if you are using the GUI it can
temporarily be suspended by holding the Ctrl+Alt+F1 keys (and use Ctrl+Alt+F7 to return again).
2.2. Installation and updating
As of mid-September 2022, Picamera2 is pre-installed in all Raspberry Pi OS images. You can update it with a full system
update, or via the terminal with:
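sudo apt install -y python3-picamera2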
For Raspberry Pi OS users this will contain all the GUI dependencies, but these will be omitted in Raspberry Pi OS Lite. If
OS Lite users wish to use these features, they should run:
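sudo apt install -y python3-pyqt5 python3-opengl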
WARNING
We strongly recommend installing and updating Picamera2 using apt which will avoid compatibility problems. If you do wish to install
Picamera2 using pip, please read to the end of this section before starting. On Bookworm and later images you will also need to be
familiar with Python virtual environments.
For users needing to do so, Picamera2 can also be installed using pip. However, we do not recommend this because the
underlying libcamera library is not ABI (Application Binary Interface) stable, so you can end up with versions of
Picamera2 and libcamera that do not work together. If you always install using apt then all these dependencies are
managed for you.
If you do wish to proceed, please note that versions of Picamera2 after 0.3.12 are not compatible with Raspberry Pi OS
Bullseye, unless you also wish to compile your own version of libcamera, which is beyond the scope of this manual. At
some point even this may no longer work if the Linux kernel moves on. So in short, the process will be much safer if you
are using the very latest version of Raspberry Pi OS.
If you still wish to proceed, first ensure you have the necessary dependencies:
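A minimal sketch of the process (the exact dependency list may vary with the OS release; the packages shown here are the core ones):
sudo apt install -y python3-libcamera python3-kms++
pip install picamera2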
This should install the latest version of Picamera2. If after all this you find that it does not work, we recommend trying to
uninstall anything that you have installed or built, updating with apt, and as a last resort starting over with a clean
installation of the latest version of Raspberry Pi OS. It goes without saying that any critical data and systems should be
backed up before you start.
Normally no changes should be required to the /boot/config.txt file from the one supplied when the operating system
was installed.
Some users, for some applications, may find themselves needing to allocate more memory to the camera system. In this
case, please consider increasing the amount of available CMA memory.
In the past, legacy camera stack users may have increased the amount of gpu_mem to enable the old camera system to
run. Picamera2 does not use this type of memory, so any such lines should be deleted from your /boot/config.txt file
as they will simply cause system memory to be wasted.
2.3. A first example
The following example starts a camera preview window, waits for two seconds and then captures a JPEG file (still at the preview resolution):
NOTE
Users of Raspberry Pi 3 or earlier devices will need to enable Glamor in order for this example script using X Windows to work. To do
this, run sudo raspi-config in a command window, choose Advanced Options and then enable Glamor graphic acceleration. Finally
reboot your device.
from picamera2 import Picamera2, Preview
import time

picam2 = Picamera2()
camera_config = picam2.create_preview_configuration()
picam2.configure(camera_config)
picam2.start_preview(Preview.QTGL)
picam2.start()
time.sleep(2)
picam2.capture_file("test.jpg")
Non-GUI users should use the same script, but replacing Preview.QTGL by Preview.DRM, so as to use the non-GUI
preview implementation:
from picamera2 import Picamera2, Preview
import time

picam2 = Picamera2()
camera_config = picam2.create_preview_configuration()
picam2.configure(camera_config)
picam2.start_preview(Preview.DRM)
picam2.start()
time.sleep(2)
picam2.capture_file("test.jpg")
2.4. Picamera2’s high-level API
Picamera2’s high-level API can reduce the example above to a single function call:
from picamera2 import Picamera2

picam2 = Picamera2()
picam2.start_and_capture_file("test.jpg")
You can capture multiple images with the start_and_capture_files function. Or, to record a five second video:
picam2 = Picamera2()
picam2.start_and_record_video("test.mp4", duration=5)
We will learn more about these functions later on, for both still images and video.
2.6. Additional software
2.6.1. OpenCV
OpenCV is not a requirement for Picamera2, though a number of examples use it. It can be installed from apt very easily,
avoiding the long build times involved in some other methods:
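sudo apt install -y python3-opencv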
Installing from apt also ensures you do not get a version that is incompatible with the Qt GUI toolkit that is installed with
Picamera2.
2.6.3. FFmpeg
Some features of Picamera2 make use of the FFmpeg library. Normally this should be installed by default on a Raspberry
Pi, but in case it isn’t the following should fetch it:
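sudo apt install -y ffmpeg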
2.7. Further examples
The examples folder in the repository can be found here. There are some additional examples framed as Qt applications
which can be found here.
3. Preview windows
3.1. Preview window parameters
In the previous section we have already seen two different types of preview window. There are in fact four different
versions, which we shall discuss below.
All four preview implementations accept exactly the same parameters so that they are interchangeable:
x, y - the location of the preview window on the display
width, height - the size of the preview window
transform - a transform that allows the camera image to be horizontally and/or vertically flipped on the display
All the parameters are optional, and default values will be chosen if omitted. The following example will place an
800x600 pixel preview window at (100, 200) on the display, and will horizontally mirror the camera preview image:
from picamera2 import Picamera2, Preview
from libcamera import Transform

picam2 = Picamera2()
picam2.start_preview(Preview.QTGL, x=100, y=200, width=800, height=600, transform=Transform(hflip=1))
picam2.start()
The following transform values are supported:
Transform() - the identity transform (this is the default)
Transform(hflip=1) - horizontal flip
Transform(vflip=1) - vertical flip
Transform(hflip=1, vflip=1) - horizontal and vertical flip (equivalent to a 180 degree rotation)
It’s important to realise that the display transform discussed here does not have any effect on the actual images
received from the camera. It only applies the requested transform as it renders the pixels onto the screen. We’ll encounter
camera transforms again when it comes to actually transforming the images as the camera delivers them.
Please also note that in the example above, the start_preview() function must be called before the call to
picam2.start().
Finally, if the camera images have a different aspect ratio to the preview window, they will be letter- or pillar-boxed to fit,
preserving the image’s proper aspect ratio.
3.2.1. QtGL preview
picam2 = Picamera2()
picam2.start_preview(Preview.QTGL)
The QtGL preview window is not recommended when the image needs to be shown on a remote display (not connected
to the Pi). Please refer to the Qt preview window below.
Users of Pi 3 or earlier devices will need to enable Glamor graphic acceleration to use the QtGL preview window.
NOTE
There is a limit to the size of image that the 3D graphics hardware on the Pi can handle. For Raspberry Pi 4 and later devices this limit
is 4096 pixels in either dimension. For Pi 3 and earlier devices this limit is 2048 pixels. If you try to feed a larger image to the QtGL
preview window it will report an error and the program will terminate.
3.2.2. DRM/KMS preview
picam2 = Picamera2()
picam2.start_preview(Preview.DRM)
Because X Windows is not running, it is not possible to move or resize this window with the mouse.
The DRM/KMS preview will be the natural choice for Raspberry Pi OS Lite users. It is also strongly recommended for
lower-powered Raspberry Pis that would find it expensive to pass a preview (for example at 30 frames per second) through
the X Windows display stack.
NOTE
The DRM/KMS preview window is not supported when using the legacy fkms display driver. Please use the recommended kms display
driver (dtoverlay=vc4-kms-v3d in your /boot/config.txt file) instead.
3.2.3. Qt preview
Like the QtGL preview, this window is also implemented using the Qt framework, but this time using software rendering
rather than 3D hardware acceleration. As such, it is computationally costly and should be avoided where possible. Even a
Raspberry Pi 4 will start to struggle once the preview window size increases.
picam2 = Picamera2()
picam2.start_preview(Preview.QT)
The main use case for the Qt preview is displaying the preview window on another networked computer using X
forwarding, or using the VNC remote desktop software. Under these conditions the 3D-hardware-accelerated implementation
either does not work at all, or does not work very well.
Users of Raspberry Pi 3 or earlier devices will need to enable Glamor graphic acceleration to use the Qt preview window.
3.2.4. NULL preview
Sometimes no preview window is required at all, but something must still keep the camera system supplied with buffers. This is exactly what the NULL preview does. It displays nothing; it merely drives the camera system.
The NULL preview is in fact started automatically whenever the camera system is started (picam2.start()) if no
preview is yet running, which is why alternative preview windows must be started earlier. You can start the NULL preview
explicitly like this:
picam2 = Picamera2()
picam2.start_preview(Preview.NULL)
though in fact the call to start_preview is redundant for the NULL preview and can be omitted.
The NULL preview accepts the same parameters as the other preview implementations, but ignores them completely.
3.3. Starting and stopping previews
Among the values that the start_preview function accepts are:
None - No preview of any kind is started. The application would have to supply its own code to drive the camera
system.
True - One of the three other previews is started. Picamera2 will attempt to auto-detect which one it should
start, though this is purely on a "best efforts" basis.
Preview windows can be stopped; an alternative one should then be started. We do not recommend starting and
stopping preview windows because it can be quite expensive to open and close windows, during which time camera frames
are likely to be dropped.
The Picamera2.start function accepts a show_preview parameter which can take on any one of these same values.
This is just a convenient shorthand that allows the amount of boilerplate code to be reduced. Note that stopping the
camera (Picamera2.stop) does not stop the preview window, so the stop_preview function would have to be called
explicitly before starting another.
For example, the following script would start the camera system running, run for a short while, and then attempt to
auto-detect which preview window to use in order actually to start displaying the images:
from picamera2 import Picamera2
import time

picam2 = Picamera2()
config = picam2.create_preview_configuration()
picam2.configure(config)
picam2.start()
time.sleep(2)
picam2.stop_preview()
picam2.start_preview(True)
time.sleep(2)
In this example:
1. The NULL preview will start automatically with the picam2.start() call and will run for 2 seconds
2. It will then be stopped and a preview that displays an actual window will be started
It’s worth noting that nothing particularly bad happens if you stop the preview and then fail to restart another, or do not
start another immediately. All the buffers that are available will be filled with camera images by libcamera. But with no
preview running, nothing will read out these images and recycle the buffers, so libcamera will simply stall. When a
preview is restarted, normal operation will resume, starting with those slightly old camera images that are still queued up
waiting to be read out.
NOTE
Many programmers will be familiar with the notion of an event loop. Each type of preview window implements an event loop to
dequeue frames from the camera, so the NULL preview performs this function when no other event loop (such as the one provided by
Qt) is running.
3.4. Remote preview windows
When the preview window has to be displayed on a remote system (for example over X forwarding):
The QtGL (hardware accelerated) preview will not work and will result in an error message. The Qt preview must
be used instead, though being software rendered (and presumably travelling over a network), framerates can be
expected to be significantly poorer.
The QtGL (hardware accelerated) window works adequately if you also have a display connected directly to your
Raspberry Pi
If you do not have a display connected directly to the Pi, the QtGL preview will work very poorly, and the Qt
preview window should be used instead
If you are not running, or have suspended, the GUI on the Pi but still have a display attached, you can log into the Pi
without X-forwarding and use the DRM/KMS preview implementation. This will appear on the display that is attached directly
to the Pi.
3.5. Other Preview Features
3.5.1. Setting the Preview Title Bar
The preview window title bar can display information from the camera images’ metadata, selected by assigning a list of metadata field names to the title_fields property:
picam2 = Picamera2()
picam2.start(show_preview=True)
picam2.title_fields = ["ExposureTime", "AnalogueGain"]
This will display every image’s exposure time and analogue gain values on the title bar. If any of the given field names are
misspelled or unavailable then the value INVALID is reported for them.
When using the NULL preview or DRM preview, or when Picamera2 is embedded in a larger Qt application, then the
title_fields property has no effect.
The metadata that is available can easily be inspected using the capture_metadata method. Alternatively, more
information on the different forms of metadata is available in the appendices.
3.5.2. Further Preview Topics
Overlays (transparent images overlaid on the live camera feed) are discussed among the Advanced Topics.
For Qt applications, displaying a preview window doesn’t make sense as the Qt framework will run the event
loop. However, the underlying widgets are still useful and are discussed further here.
4. Configuring the camera
4.1. Generating and using a camera configuration
Picamera2 provides a number of configuration-generating methods that can be used to provide suitable configurations
for common use cases:
create_preview_configuration, for displaying preview images
create_still_configuration, for capturing high-resolution still images
create_video_configuration, for recording video files
There is nothing inherently different about any one of these methods over another; they differ only in that they supply
slightly different defaults so that it’s easier to get something appropriate to the use case at hand. But, if you choose the
necessary parameters carefully, you can use a configuration from any of these functions for any use case.
So, for example, to set up the camera to start delivering a stream of preview images you might use:
picam2 = Picamera2()
config = picam2.create_preview_configuration()
picam2.configure(config)
picam2.start()
This is fairly typical, though the configuration-generating methods allow numerous optional parameters to adjust the
resulting configuration precisely for different situations. Additionally, once a configuration object has been created,
applications are free to alter the object’s recommendations before calling picam2.configure.
One thing we shall learn is that configurations are just Python dictionaries, and it’s easy for us to inspect them and see
what they are saying.
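For example, a sketch of inspecting the main stream’s settings in a preview configuration (the exact defaults may vary):
picam2 = Picamera2()
config = picam2.create_preview_configuration()
print(config["main"])    # e.g. {'format': 'XBGR8888', 'size': (640, 480)}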
4.2. Configurations in more detail
The diagram below shows how the camera hardware on the Raspberry Pi works.
Figure 2. The Raspberry Pi’s camera system
1. On the left we have the camera module, which delivers images through the flat ribbon cable to the Pi. The
images delivered by the camera are not human-viewable images, but need lots of work to clean them up and
produce a realistic picture.
2. A hardware block called a CSI-2 Receiver on the Pi transfers the incoming camera image into memory.
3. The Pi has an Image Signal Processor (ISP) which reads this image from memory. It performs all these cleaning
and processing steps on the pixels that were received from the camera.
4. The ISP can produce up to two output images for every input frame from the camera. We designate one of them
as the main image, and it can be in either RGB or YUV formats.
5. The second image is a lower resolution image, referred to often as the "lores" image; it must be no larger than
the main image. On a Pi 4 or earlier device, the lores image must be in a YUV format, whereas on a Pi 5 (or later
device) it can be RGB or YUV, like the main image.
6. Finally, the image data that was received from the sensor and written directly to memory can also be delivered
to applications. This is called the raw image.
Broadly speaking, a camera configuration consists of:
General parameters that apply globally to the Picamera2 system and across the whole of the ISP.
Per-stream configuration within the ISP that determines the output formats and sizes of the main and lores
streams. We note that the main stream is always defined and delivered to the application, using default values
if the application did not explicitly request one.
Some applications need to be able to control the mode (resolution, bit depth and so on) that the sensor is
running in. This can be done using the sensor part of the camera configuration or, if this is absent, it will be inferred
from the specification of the raw stream (if present).
Mostly, a configuration does not include camera settings that can be changed at runtime (such as brightness or
contrast). However, certain use cases do sometimes have particular preferences about certain of these control
values, and they can be stored as part of a configuration so that applying the configuration will apply the runtime
controls automatically too.
4.2.1. General configuration parameters
transform - whether camera images are horizontally or vertically mirrored, or both (giving a 180 degree rotation).
All three streams (if present) share the same transform.
colour_space - the colour space of the output images. The main and lores streams must always share the same
colour space. The raw stream is always in a camera-specific colour space.
buffer_count - the number of sets of buffers to allocate for the camera system. A single set of buffers
represents one buffer for each of the streams that have been requested.
queue - whether the system is allowed to queue up a frame ready for a capture request.
sensor - parameters that allow an application to select a particular mode of operation for the sensor. This is
quite an involved topic, which we shall cover later.
display - this names which (if any) of the streams are to be shown in the preview window. It does not actually
affect the camera images in any way, only what Picamera2 does with them.
encode - this names which (if any) of the streams are to be encoded if a video recording is started. This too
does not affect the camera images in any way, only what Picamera2 does with them. This parameter only
specifies a default value that can be overridden when the encoders are started.
>>> Transform()
<libcamera.Transform 'identity'>
>>> Transform(hflip=True)
<libcamera.Transform 'hflip'>
>>> Transform(vflip=True)
<libcamera.Transform 'vflip'>
>>> Transform(hflip=True, vflip=True)
<libcamera.Transform 'hvflip'>
Transforms can be passed to all the configuration-generating methods using the transform keyword parameter. For
example:
picam2 = Picamera2()
preview_config = picam2.create_preview_configuration(transform=Transform(hflip=True))
Picamera2 only supports the four transforms shown above. Other transforms (involving image transposition) exist but
are not supported. If unspecified, the transform always defaults to the identity transform.
The implementation of colour spaces in libcamera follows that of the Linux V4L2 API quite closely. Specific choices are
provided for each of colour primaries, the transfer function, the YCbCr encoding matrix and the quantisation (or range).
In addition, libcamera provides convenient shorthand forms for commonly used colour spaces:
>>> ColorSpace.Smpte170m()
<libcamera.ColorSpace 'SMPTE170M'>
>>> ColorSpace.Rec709()
<libcamera.ColorSpace 'Rec709'>
These are in fact the only colour spaces supported by the Pi’s camera system. The required choice can be passed to all
the configuration-generating methods using the colour_space keyword parameter:
picam2 = Picamera2()
preview_config = picam2.create_preview_configuration(colour_space=ColorSpace.Sycc())
When omitted, Picamera2 will choose a default according to the use case:
create_preview_configuration and create_still_configuration will use the sYCC colour space by default
(by which we mean sRGB primaries and transfer function and full-range BT.601 YCbCr encoding).
create_video_configuration will choose sYCC if the main stream is requesting an RGB format. For YUV
formats it will choose SMPTE 170M if the resolution is less than 1280x720, otherwise Rec.709.
This number defines how many sets of buffers (one for each requested stream) are allocated for the camera system to
use. Allocating more buffers can mean that the camera will run more smoothly and drop fewer frames, though the
downside is that particularly at high resolutions, there may not be enough memory available.
The configuration-generating methods all choose an appropriate number of buffers for their use cases:
create_preview_configuration requests four sets of buffers, enough to keep the preview running smoothly
create_still_configuration requests just one set of buffers (as these are normally large full resolution
buffers)
create_video_configuration requests six buffers, as the extra work involved in encoding and outputting the
video streams makes it more susceptible to jitter or delays, which is alleviated by the longer queue of buffers.
The number of buffers can be overridden in all the configuration-generating methods using the buffer_count keyword
parameter:
picam2 = Picamera2()
preview_config = picam2.create_still_configuration(buffer_count=2)
By default, Picamera2 keeps hold of the last frame to be received from the camera and, when you make a capture
request, this frame may be returned to you. This can be useful for burst captures, particularly when an application is doing
some processing that can take slightly longer than a frame period. In these cases, the queued frame can be returned
immediately rather than remaining idle until the next camera frame arrives.
But this does mean that the returned frame can come from slightly before the moment of the capture request, by up to a
frame period. If this behaviour is not wanted, please set the queue parameter to False. For example:
picam2 = Picamera2()
preview_config = picam2.create_preview_configuration(queue=False)
Note that, when the buffer_count is set to one, as is the case by default for still capture configurations, then no frames
are ever queued up (because holding on to the only buffer would completely stall the camera pipeline).
These work in conjunction with some of the stream parameters and are discussed just below.
Normally we would display the main stream in the preview window. In some circumstances it may be preferable to
display a lower resolution image (from the lores stream) instead. We could use:
picam2 = Picamera2()
config = picam2.create_still_configuration(lores={"size": (320, 240)}, display="lores")
This would request a full resolution main stream, but then also a QVGA lores stream which would be displayed (recall
that the main stream is always defined even when the application does not explicitly request it).
The display parameter may take the value None which means that no images will be rendered to the preview window. In
fact this is the default choice of the create_still_configuration method.
This is similar to the display parameter, in that it names the stream (main or lores) that will be encoded if a video
recording is started. By default we would normally encode the main stream, but a user might have an application where they
want to record a low resolution video stream instead:
picam2 = Picamera2()
config = picam2.create_video_configuration(main={"size": (2048, 1536)}, lores={"size": (320, 240)}, encode="lores")
This would enable a QVGA stream to be recorded, while allowing 2048x1536 still images to be captured simultaneously.
The encode parameter may also take the value None, which is again the default choice of the
create_still_configuration method.
NOTE
The encode parameter in the configuration is retained principally for backwards compatibility. Recent versions of Picamera2 allow
multiple encoders to run at the same time, using either the same or different streams. For this reason, the Picamera2.start_encoder
and Picamera2.start_recording methods accept a name parameter to define which stream is being recorded. If this is omitted, the
encode stream from the configuration will be used.
4.2.2. Stream configuration parameters
To request one of these streams, a dictionary should be supplied. The dictionary may be an empty dictionary, at which
point that stream will be generated for the application but populated by default values:
picam2 = Picamera2()
config = picam2.create_preview_configuration(lores={})
Here, the main stream will be produced as usual, but a lores stream will be produced as well. By default it will have the
same resolution as the main stream, but using the YUV420 image format.
"size" - a tuple of two values giving the width and height of the image
The configuration-generating functions can make minimal changes to the configuration where they detect something is
invalid. But they will attempt to observe whatever values they are given even where others may be more efficient. The
most obvious case of this is in relation to the image sizes.
Here, hardware restrictions mean that images can be processed more efficiently if they are particular sizes. Other sizes
will have to be copied more frequently in the Python world, but these special image alignment rules are somewhat
arcane. They are covered in detail in the appendices.
If a user wants to request these optimal image sizes, they should use the align_configuration method. For example:
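A minimal sketch, assuming a slightly misaligned request (the exact sizes here are illustrative):
picam2 = Picamera2()
config = picam2.create_preview_configuration(main={"size": (808, 606)})
picam2.align_configuration(config)
picam2.configure(config)
print(config["main"])    # the size will have been adjusted, e.g. to (800, 606)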
At the end we have an 800x606 image, which will result in less data copying. Observe also how, once a configuration has
been applied, Picamera2 knows some extra things: the length of each row of the image in bytes (the stride), and the total
amount of memory every such image will occupy.
A wide variety of image formats are supported by libcamera, as described in the appendices. For our purposes, however,
these are some of the most common ones. For the main stream:
XBGR8888 - every pixel is packed into 32-bits, with a dummy 255 value at the end, so a pixel would look like [R, G,
B, 255] when captured in Python. (These format descriptions can seem counter-intuitive, but the underlying
infrastructure tends to take machine endianness into account, which can mix things up!)
YUV420 - YUV images with a plane of Y values followed by a quarter plane of U values and then a quarter plane
of V values.
For the lores stream, only 'YUV420' is really used on Pi 4 and earlier devices. On Pi 5, the lores stream may specify an
RGB format.
WARNING
Picamera2 takes its pixel format naming from libcamera, which in turn takes them from certain underlying Linux components. The
results are not always the most intuitive. For example, OpenCV users will typically want each pixel to be a (B, G, R) triple for which the
RGB888 format should be chosen, and not BGR888. Similarly, OpenCV users wanting an alpha channel should select XRGB8888.
Image sensors normally have a number of different modes in which they can operate. Modes are distinguished by
producing different output resolutions, some of them may give you a different field of view, and there’s often a trade-off
where lower resolution sensor modes can produce higher framerates. In many applications, it’s important to understand
the available sensor modes, and to be able to choose the one that is the most appropriate.
We can inspect the available sensor modes by querying the sensor_modes property of the Picamera2 object. You should
normally do this as soon as you open the camera, because finding out all the reported information will require the
camera to be stopped, and will reconfigure it multiple times. For the HQ camera, we obtain:
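To list them (the output, not shown here, is a list of dictionaries with the fields described below):
from pprint import pprint
from picamera2 import Picamera2

picam2 = Picamera2()
pprint(picam2.sensor_modes)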
This gives us the exact sensor modes that we can request, with the following information for each mode:
crop_limits - this tells us the exact field of view of this mode within the full resolution sensor output. In the
example above, only the final two modes will give us the full field of view.
exposure_limits - the maximum and minimum exposure values (in microseconds) permitted in this mode.
format - the packed sensor format. This can be passed as the "format" when requesting the raw stream.
size - the resolution of the sensor output. This value can be passed as the "size" when requesting the raw
stream.
unpacked - use this in place of the earlier format in the raw stream request if unpacked raw images are required
(see below). We recommend anyone wanting to access the raw pixel data to ask for the unpacked version of the
format.
In this example there are three 12-bit modes (one at the full resolution) and one 10-bit mode useful for higher framerate
applications (but with a heavily cropped field of view).
NOTE
For a raw stream, the format normally begins with an S, followed by four characters that indicate the Bayer order of the sensor (the
only exception to this is for raw monochrome sensors, which use the single letter R instead). Next is a number, 10 or 12 here, which is
the number of bits in each pixel sample sent by the sensor (some sensors may have eight-bit modes too). Finally there may be the
characters _CSI2P. This would mean that the pixel data will be packed tightly in memory, so that four ten-bit pixel values will be stored
in every five bytes, or two twelve-bit pixel values in every three bytes. When _CSI2P is absent, it means the pixels will each be
unpacked into a 16-bit word (or eight-bit pixels into a single byte). This uses more memory but can be useful for applications that
want easy access to the raw pixel data. Pi 5 uses a compressed raw format in place of the _CSI2P version, and which we shall learn
about later.
WARNING
This discussion of the sensor configuration applies only to Raspberry Pi OS Bookworm or later. Bullseye users should configure the
sensor by specifying the raw stream format, as shown here, except that the 'sensor' field will not be reported back as part of the
applied configuration. Both packed (format ending in _CSI2P) and unpacked formats may be requested. Configuring the raw stream in
this way is also supported in Bookworm, for backwards compatibility with Bullseye.
The best way to ask for a particular sensor mode is to set the sensor parameter when requesting a camera
configuration. The sensor configuration accepts the following parameters:
output_size - the resolution of the sensor mode, which you can find in the size field in the sensor modes list,
and
bit_depth - the bit depth of each pixel sample from the sensor, also available in the sensor modes list.
All other properties are ignored. So for example, to request the fast framerate sensor mode (the first in the list) we could
generate a configuration like this:
mode = picam2.sensor_modes[0]
config = picam2.create_preview_configuration(sensor={'output_size': mode['size'], 'bit_depth': mode['bit_depth']})
picam2.configure(config)
Whatever output size and bit depth you specify, Picamera2 will try to choose the best match for you. In order to be sure
you get the sensor mode that you want, specify the exact size and bit depth for that sensor mode.
On a Pi 4 (or earlier)
The raw stream configuration will always be filled in for you from the sensor configuration where a sensor configuration
was given. For example:
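A sketch, using the HQ camera’s fast 10-bit mode as an illustration (the sizes are specific to that sensor):
config = picam2.create_preview_configuration(
    sensor={'output_size': (1332, 990), 'bit_depth': 10})
picam2.configure(config)
print(config['raw'])    # e.g. {'format': 'SRGGB10_CSI2P', 'size': (1332, 990)}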
Note how the raw stream configuration has been updated to match the sensor configuration even though we never
specified it. By default we get packed raw pixels; had we wanted unpacked pixels it would have been sufficient to request an
unpacked raw format (even though the size, bit depth and Bayer order may get overwritten):
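For example (again illustrative, for the HQ camera):
config = picam2.create_preview_configuration(
    sensor={'output_size': (1332, 990), 'bit_depth': 10},
    raw={'format': 'SRGGB12'})    # any unpacked format will do
picam2.configure(config)
print(config['raw'])    # e.g. {'format': 'SRGGB10', 'size': (1332, 990)}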
Again, the raw stream has been updated and the request for unpacked pixels respected, even though the bit depth in the
raw format has been changed to match the value in the sensor configuration.
For backwards compatibility with earlier versions of Picamera2, when no sensor configuration is given but a raw stream
configuration is supplied, Picamera2 will take the size and bit depth from the raw stream in order to select the correct
sensor configuration. After Picamera2 is configured, the configuration that you supplied will have the updated sensor
configuration filled in for you. For example:
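A sketch of the backwards-compatible form:
config = picam2.create_preview_configuration(
    raw={'size': (1332, 990), 'format': 'SRGGB10_CSI2P'})
picam2.configure(config)
print(config['sensor'])    # e.g. {'output_size': (1332, 990), 'bit_depth': 10}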
On a Pi 5 (or later)
The situation on a Pi 5 is very similar, so familiarity with the Pi 4 discussion above will be assumed.
The sensor configuration is the best way to request a particular sensor mode.
A raw stream with unpacked pixels can be requested by asking for a raw stream with an unpacked format, with
everything else coming from the sensor configuration.
For backwards compatibility, if no sensor configuration is specified, the raw stream (if present) will be used
instead.
However, the Pi 5 is different because it uses different raw stream formats. It supports:
Compressed raw pixels. Here, camera samples are compressed to 8 bits per pixel using a visually lossless com‐
pression scheme. This format is more memory efficient, but not suitable if an application wants access to the
raw pixels (because they would have to be decompressed).
Uncompressed pixels. These are stored as one pixel per 16-bit word. It differs from the Pi 4 unpacked scheme in
that pixels are left-shifted so that the zero padding bits are at the least significant end (the earlier devices will
pad with zeros at the most significant end), and the full 16-bit dynamic range is used. These pixels have never
been through the compression scheme, and so give a bit-exact version of what came out of the sensor (allowing
for the shift that is applied).
Again for backwards compatibility, Picamera2 will translate a request for packed pixels in the raw stream into a request
for compressed pixels on a Pi 5. Requests for unpacked raw stream pixels will be translated into uncompressed (and left-
shifted) pixels.
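For example, a sketch requesting a packed raw format on a Pi 5 (the mode size here is illustrative):
config = picam2.create_preview_configuration(
    raw={'format': 'SBGGR10_CSI2P', 'size': (1536, 864)})
picam2.configure(config)
print(config['raw'])    # e.g. {'format': 'BGGR16_PISP_COMP1', 'size': (1536, 864)}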
Here, the 'SBGGR10_CSI2P' format, which would have remained unchanged on a Pi 4, has become
'BGGR16_PISP_COMP1' or "BGGR order 16-bit Pi ISP Compressed 1" format. And the sensor configuration tells us the sensor mode, as it did
before.
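And a sketch requesting an unpacked raw format on a Pi 5:
config = picam2.create_preview_configuration(
    raw={'format': 'SBGGR10', 'size': (1536, 864)})
picam2.configure(config)
print(config['raw'])    # e.g. {'format': 'SBGGR16', 'size': (1536, 864)}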
The sensor configuration tells us the sensor mode as it did in the previous example, but the 'SBGGR10' format that would
have remained unchanged on a Pi 4 has become 'SBGGR16' (BGGR order 16-bits per pixel, full 16-bit dynamic range). It
should be clear that, in order to determine the bit depth that the sensor is operating in, an application needs to check the
sensor configuration, and that this method works on all Pi devices. Checking the raw stream format for this value will not
work on a Pi 5 or later device.
TIP
After configuring the camera, it’s often helpful to inspect picam2.camera_configuration() to check what you actually got!
4.2.3. Configurations and runtime camera controls
One such example is in video recording. Normally the camera can run at a variety of framerates, and this can be changed
by an application while the camera is running. When recording a video, however, people commonly prefer to record at a
fixed 30 frames per second, even if the camera was set to something else previously.
The configuration-generating methods therefore supply some recommended runtime control values corresponding to
the use case. These can be overridden or changed, but as the optimal or usual values are sometimes a bit technical,
it’s helpful to supply them automatically:
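For example, inspecting the controls recommended by create_video_configuration:
picam2 = Picamera2()
config = picam2.create_video_configuration()
print(config['controls'])
# e.g. {'NoiseReductionMode': <NoiseReductionModeEnum.Fast>, 'FrameDurationLimits': (33333, 33333)}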
We see here that for a video use-case, we’re recommended to set the NoiseReductionMode to Fast (because when
encoding video it’s important to get the frames quickly), and the FrameDurationLimits are set to (33333, 33333). This
means that every camera frame may not take less than the first value (33333 microseconds), and may not take longer
than the second (also 33333 microseconds). Therefore the framerate is fixed to 30 frames per second.
New control values, or ones to override the default ones, can be specified with the controls keyword parameter. If you
wanted a 25 frames-per-second video you could use:
picam2 = Picamera2()
config = picam2.create_video_configuration(controls={"FrameDurationLimits": (40000, 40000)})
When controls have been set into the returned configuration object, we can think of them as being part of that
configuration. If we hold on to the configuration object and apply it again later, then those control values will be restored.
We are of course always free to set runtime controls later using the Picamera2.set_controls method, but these will not
become part of any configuration that we can recover later.
For a full list of all the available runtime camera controls, please refer to the relevant appendix.
4.3. Configuration objects
Camera configurations can be represented by the CameraConfiguration class. This class contains the exact same
things we have seen previously, namely:
the buffer_count
a Transform object
a ColorSpace object
a main stream
an optional lores stream and an optional raw stream
a sensor configuration
a controls object
and the queue, display and encode parameters that we have already met
When a Picamera2 object is created, it contains three embedded configurations, in the following fields:
preview_configuration
still_configuration
video_configuration
Finally, CameraConfiguration objects can also be passed to the configure method. Alternatively, the strings "preview",
"still" and "video" may be passed as shorthand for the three embedded configurations listed above.
picam2 = Picamera2()
picam2.preview_configuration.main.size = (800, 600)
picam2.configure("preview")
Before setting the size or format of the optional streams, they must first be enabled with:
configuration_object.enable_lores()
or
configuration_object.enable_raw()
as appropriate. This would normally be done after the main stream size has been set up so that they can pick up more
appropriate defaults. After that, the size and the format fields can be set in the usual way:
picam2 = Picamera2()
picam2.preview_configuration.enable_lores()
picam2.preview_configuration.lores.size = (320, 240)
picam2.configure("preview")
Setting the format field is optional as defaults will be chosen - "YUV420" for the lores stream. In the case of the raw
stream format, this can be left at its default value (None) and the system will use the sensor’s native format.
The sensor configuration and raw stream behave in the same way as they do when using dictionaries instead of
configuration objects. This means that:
When the sensor object in the camera configuration contains proper values, they will take precedence over any
in the raw stream.
When the sensor object does not contain meaningful values, any values in the raw stream will be used to decide
the sensor mode.
After configuration, the sensor field is updated to reflect the sensor mode being used, whether or not it was
configured previously.
The SensorConfiguration object allows the application to set the output_size and bit_depth, just as was the case
with dictionaries. For example on a Pi 4:
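A sketch using the HQ camera’s fast 10-bit mode again:
picam2 = Picamera2()
picam2.preview_configuration.sensor.output_size = (1332, 990)
picam2.preview_configuration.sensor.bit_depth = 10
picam2.configure("preview")
print(picam2.camera_configuration()['raw'])    # e.g. {'format': 'SRGGB10_CSI2P', ...}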
Here, the raw stream has been configured from the programmed sensor configuration. But if we don’t fill in the sensor
configuration, it will be deduced from the raw stream (for backwards compatibility).
WARNING
This means that, because these configuration objects are persistent, after doing one configuration you can no longer update just the
raw stream and expect a different sensor mode to be chosen. The application should either update the sensor configuration, or, if it
wants to deduce it from the raw stream again, the sensor configuration should be cleared: picam2.preview_configuration.sensor = None
Stream alignment
Just as with the dictionary method, stream sizes are not forced to optimal alignment by default. This can easily be
accomplished using the configuration object’s align method:
picam2 = Picamera2()
picam2.preview_configuration.main.size = (808, 600)
picam2.preview_configuration.main.format = "YUV420"
picam2.preview_configuration.align()
picam2.configure("preview")
We saw earlier how control values can be associated with a configuration. Using the object style of configuration, the
equivalent to that example would be:
picam2 = Picamera2()
picam2.video_configuration.controls.FrameDurationLimits = (40000, 40000)
picam2.configure("video")
For convenience, the "controls" object lets you set the FrameRate instead of the FrameDurationLimits in case this is
easier. You can give it either a range (as FrameDurationLimits) or a single value, where framerate = 1000000 / framedu‐
ration, and frameduration is given in microseconds (as we did above):
picam2 = Picamera2()
picam2.video_configuration.controls.FrameRate = 25.0
picam2.configure("video")
4.4. Configuring a USB Camera
Picamera2 offers some limited support for USB cameras; on systems where both Raspberry Pi cameras and USB cameras are attached, all of them are available. For more information, please refer to the section on multiple cameras.
You can create the Picamera2 object in the usual way, but only the main stream will be available. The supported formats
will depend on the camera, but Picamera2 can in principle deal with both MJPEG and YUYV cameras, and where the
camera supports both you can select by requesting the format "MJPEG" or "YUYV".
USB cameras can only use the software-rendered Qt preview window (Preview.QT). None of the hardware assisted
rendering is supported. MJPEG streams can be rendered directly, but YUYV would require OpenCV to be installed in order to
convert the image into a format that Qt understands. Both cases will use a significant extra amount of CPU.
The capture_buffer method will give you the raw camera data for each frame (a JPEG bitstream from an MJPEG
camera, or an uncompressed YUYV image from a YUYV camera). A simple example:
from picamera2 import Picamera2, Preview

picam2 = Picamera2()
config = picam2.create_preview_configuration({"format": "MJPEG"})
picam2.configure(config)
picam2.start_preview(Preview.QT)
picam2.start()
jpeg_buffer = picam2.capture_buffer()
If you have multiple cameras and need to discover which camera to open, please use the
Picamera2.global_camera_info method.
In general, users should assume that other features, such as video recording, camera controls that are supported on
Raspberry Pi cameras, and so forth, are not available. Hot-plugging of USB cameras is also not supported - Picamera2
should be completely shut down and restarted when cameras are added or removed.
4.5. Further examples
capture_dng_and_jpeg.py - shows how you can configure a still capture for a full resolution main stream and
also obtain the raw image buffer.
capture_motion.py - shows how you can capture both a main and a lores stream.
rotation.py - shows a 180 degree rotation being applied to the camera images. In this example the rotation is
applied after the configuration has been generated, though we could have passed the transform in to the
create_preview_configuration function with transform=Transform(hflip=1, vflip=1) too.
still_capture_with_config.py - shows how to configure a still capture using the configuration object method. In
this example we also request a raw stream.
5. Camera controls and properties
5.1. Camera controls
In Picamera2, all camera controls can be changed at runtime. Anything that cannot be changed at runtime is regarded
not as a control but as configuration. We do, however, allow the camera’s configuration to include controls in case there
are particular standard control values that could be conveniently applied along with the rest of the configuration.
For example, some obvious controls that we might want to set while the camera is delivering images are:
Exposure time
Gain
White balance
Colour saturation
Sharpness
…and there are many more. A complete list of all the available camera controls can be found in the appendices, and also
by inspecting the camera_controls property of the Picamera2 object:
picam2 = Picamera2()
picam2.camera_controls
This returns a dictionary with the control names as keys, and each value being a tuple of (min, max, default) values for
that control. The default value should be interpreted with some caution as in many cases libcamera’s default value will
be overwritten by the camera tuning as soon as the camera is started.
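For example, to look at the allowed range of a single control (the values shown are typical, for illustration only):
picam2 = Picamera2()
min_val, max_val, default = picam2.camera_controls["Brightness"]
# typically (-1.0, 1.0, 0.0)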
One example of a control that might be associated with a configuration is the camera’s framerate (or equivalently,
the frame duration). Normally we might let a camera operate at whatever framerate is appropriate to the exposure time
requested. For video recording, however, it’s quite common to fix the framerate to (for example) 30 frames per second,
and so this might be included by default along with the rest of the video configuration.
Control values can be set at three different stages:
1. Into the camera configuration. These will be stored with the configuration so that they will be re-applied
whenever that configuration is requested. They will be enacted before the camera starts.
2. After configuration but before the camera starts. The controls will again take effect before the camera starts,
but will not be stored with the configuration and so would not be re-applied again automatically.
3. After the camera has started. The camera system will apply the controls as soon as it can, but typically there will
be some number of frames of delay.
5.1.1. How to set camera controls
Camera controls can be set by passing a dictionary of updates to the set_controls method, but there is also an object
style syntax for accomplishing the same thing.
We have seen an example of this when discussing camera configurations. One important feature of this is that the
controls are applied before the camera even starts, meaning that the very first camera frame will have the controls set as
requested.
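For example, a sketch of supplying controls as part of the configuration:
picam2 = Picamera2()
config = picam2.create_preview_configuration(
    controls={"ExposureTime": 10000, "AnalogueGain": 1.0})
picam2.configure(config)
picam2.start()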
This time we set the controls after configuring the camera, but before starting it. For example:
picam2 = Picamera2()
picam2.configure(picam2.create_preview_configuration())
picam2.set_controls({"ExposureTime": 10000, "AnalogueGain": 1.0})
picam2.start()
Here too the controls will have already been applied on the very first frame that we receive from the camera.
This time, there will be a delay of several frames before the controls take effect. This is because there is perhaps quite a
large number of requests for camera frames already in flight, and for some controls (exposure time and analogue gain
specifically), the camera may actually take several frames to apply the updates.
picam2 = Picamera2()
picam2.configure(picam2.create_preview_configuration())
picam2.start()
picam2.set_controls({"ExposureTime": 10000, "AnalogueGain": 1.0})
This time we cannot rely on any specific frame having the value we want, so would have to check the frame’s metadata.
5.1.2. Object syntax for camera controls
There is also an embedded instance of the Controls class inside the Picamera2 object that allows controls to be set
subsequently. For example, to set controls after configuration but before starting the camera:
picam2 = Picamera2()
picam2.configure("preview")
picam2.controls.ExposureTime = 10000
picam2.controls.AnalogueGain = 1.0
picam2.start()
To set these controls after the camera has started we should use:
picam2 = Picamera2()
picam2.configure("preview")
picam2.start()
with picam2.controls as controls:
controls.ExposureTime = 10000
controls.AnalogueGain = 1.0
In this final case we note the use of the with construct. Although you would normally get by without it (just set
picam2.controls directly), that would not absolutely guarantee that both controls would be applied on the same frame.
You could technically find the analogue gain being set on the frame after the exposure time.
In all cases, the same rules apply as to whether the controls take effect immediately or incur several frames of delay.
5.2. Autofocus Controls
Camera modules that do not support autofocus (including earlier Raspberry Pi camera modules and the HQ camera) will
not advertise these options as being available (in the Picamera2.camera_controls property), and attempting to set
them will fail.
5.2.1. Autofocus Modes and State
The "AfMode" control lets you set the autofocus mode to one of:
Manual - The lens will never move spontaneously, but the "LensPosition" control can be used to move the lens
"manually". The units for this control are dioptres (1 / distance in metres), so that zero can be used to denote
"infinity". The "LensPosition" can be monitored in the image metadata too, and will indicate when the lens has
reached the requested location.
Auto - In this mode the "AfTrigger" control can be used to start an autofocus cycle. The "AfState" metadata
that is received with images can be inspected to determine when this finishes and whether it was successful,
though we recommend the use of helper functions that save the user from having to implement this. In this
mode too, the lens will never move spontaneously until it is "triggered" by the application.
Continuous - The autofocus algorithm will run continuously, and refocus spontaneously when necessary.
5.2.2. Continuous Autofocus
To put the camera into continuous autofocus mode, use:
from picamera2 import Picamera2
from libcamera import controls

picam2 = Picamera2()
picam2.start(show_preview=True)
picam2.set_controls({"AfMode": controls.AfModeEnum.Continuous})
5.2.3. Setting the Lens Position Manually
To set the lens position manually:
picam2 = Picamera2()
picam2.start(show_preview=True)
picam2.set_controls({"AfMode": controls.AfModeEnum.Manual, "LensPosition": 0.0})
The lens position control (use picam2.camera_controls['LensPosition']) gives three values which are the minimum,
maximum and default lens positions. The minimum value defines the furthest focal distance, and the maximum
specifies the closest achievable focal distance (by taking its reciprocal). The third value gives a "default" value, which is
normally the hyperfocal position of the lens.
The minimum value for the lens position is most commonly 0.0 (meaning infinity). For the maximum, a value of 10.0
would indicate that the closest focal distance is 1 / 10 metres, or 10cm. Default values might often be around 0.5 to 1.0,
implying a hyperfocal distance of approximately 1 to 2m.
In general, users should expect the distance calibrations to be approximate as it will depend on the accuracy of the
tuning and the degree of variation between the user’s module and the module for which the calibration was performed.
5.2.4. Triggering an Autofocus Cycle
To run an autofocus cycle and wait for it to complete:
picam2 = Picamera2()
picam2.start(show_preview=True)
success = picam2.autofocus_cycle()
The function returns True if the lens focused successfully, otherwise False. Should an application wish to avoid blocking
while the autofocus cycle runs, we recommend replacing the final line (success = picam2.autofocus_cycle()) by
job = picam2.autofocus_cycle(wait=False)
# Now do some other things, and when you finally want to be sure the autofocus
# cycle is finished:
success = picam2.wait(job)
This is in fact the normal method for running requests asynchronously - please see the section on asynchronous
capture for more details.
"AfMetering" and "AfWindows" - lets the user change the area of the image used for focus.
To find out more about these controls, please consult the appendices or the libcamera documentation and search for Af.
Finally, there is also a Qt application that demonstrates the use of the autofocus API.
5.3. Camera properties
Camera properties may be inspected through the camera_properties property of the Picamera2 object:
picam2 = Picamera2()
picam2.camera_properties
Some examples of camera properties include the model of sensor and the size of the pixel array. After configuring the
camera into a particular mode it will also report the field of view from the pixel array that the mode represents, and the
sensitivity of this mode relative to other camera modes.
A complete list and explanation of each property can be found in the appendices.
Further examples
controls.py - shows how to set controls while the camera is running. In this example we query the
ExposureTime, AnalogueGain and ColourGains and then fix them so that they can no longer vary.
opencv_mertens_merge.py - demonstrates how to stop and restart the camera with multiple new exposure
values.
zoom.py - shows how to implement a smooth digital zoom. We use capture_metadata to synchronise the con‐
trol value updates with frames coming from the camera.
controls_3.py - illustrates the use of the Controls class, rather than dictionaries, to update control values.
6. Capturing images and requests
The process of requests being submitted, returned and then sent back to the camera system is transparent to the user.
In fact, the process of sending the images for display (when a preview window has been set up) or forwarding them to a
video encoder (when one is running) is all entirely automatic too, and the application does not have to do anything.
The user application only needs to say when it wants to receive any of these images for its own use, and Picamera2 will
deliver them. The application can request a single image belonging to any of the streams, or it can ask for the entire re‐
quest, giving access to all the images and the associated metadata.
NOTE
In this section we make use of some convenient default behaviour of the start function. If the camera is completely unconfigured, it
will apply the usual default preview configuration before starting the camera.
When capturing images, Picamera2 uses the following nomenclature in its capture functions:
arrays - these are two-dimensional arrays of pixels and are usually the most convenient way to manipulate im‐
ages. They are often three-dimensional numpy arrays because every pixel has several colour components,
adding another dimension.
images - this refers to Python Imaging Library (PIL) images and can be useful when interfacing to other modules
that expect this format.
buffers - by buffer we simply mean the entire block of memory where the image is stored as a one-dimensional
numpy array, but the two- (or three-) dimensional array form is generally more useful.
There are also capture functions for saving images directly to files, and for switching between camera modes so as to
combine fast framerate previews with high resolution captures.
picam2 = Picamera2()
picam2.start()
time.sleep(1)
array = picam2.capture_array("main")
Although we regard this as a two-dimensional image, numpy will often report a third dimension of size three or four de‐
pending on whether every pixel is represented by three channels (RGB) or four channels (RGBA, that is RGB with an alpha
channel). Remember also that numpy lists the height as the first dimension.
shape will report (height, width, 3) for 3 channel RGB type formats
shape will report (height, width, 4) for 4 channel RGBA (alpha) type formats
YUV420 is a slightly special case because the first height rows give the Y channel, the next height/4 rows contain the U
channel and the final height/4 rows contain the V channel. For the other formats, where there is an "alpha" value it will
take the fixed value 255.
picam2 = Picamera2()
picam2.start()
time.sleep(1)
image = picam2.capture_image("main")
picam2 = Picamera2()
capture_config = picam2.create_still_configuration()
picam2.start(show_preview=True)
time.sleep(1)
array = picam2.switch_mode_and_capture_array(capture_config, "main")
This will switch to the high resolution capture mode and return the numpy array, and will then switch automatically back
to the preview mode without any user intervention.
We note that the process of switching camera modes can be performed "manually", if preferred:
picam2 = Picamera2()
preview_config = picam2.create_preview_configuration()
capture_config = picam2.create_still_configuration()
picam2.configure(preview_config)
picam2.start(show_preview=True)
time.sleep(1)
picam2.switch_mode(capture_config)
array = picam2.capture_array("main")
picam2.switch_mode(preview_config)
From Pi 5 onwards, temporal denoise is supported. It can therefore sometimes be beneficial to allow several frames to
go by after a mode switch so that temporal denoise can start to operate. This is enabled in the switch_mode_and_cap‐
ture family of methods by a delay parameter, which indicates how many frames to skip before actually capturing (and
switching back), for example:
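# A sketch: skip 20 frames before capturing (the exact count is illustrative;
# capture_config is as created by create_still_configuration() above)
array = picam2.switch_mode_and_capture_array(capture_config, "main", delay=20)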
This is entirely optional in most circumstances, though we do recommend it for Pi 5 HDR captures.
picam2 = Picamera2()
capture_config = picam2.create_still_configuration()
picam2.start(show_preview=True)
time.sleep(1)
picam2.switch_mode_and_capture_file(capture_config, "image.jpg")
The file format is deduced automatically from the filename. Picamera2 uses PIL to save the images, and so this sup‐
ports JPEG, BMP, PNG and GIF files.
But applications can also capture to file-like objects. A typical example would be memory buffers from Python’s io li‐
brary. In this case there is no filename so the format of the "file" must be given by the format parameter:
picam2 = Picamera2()
picam2.start()
time.sleep(1)
data = io.BytesIO()
picam2.capture_file(data, format='jpeg')
The format parameter may take the values 'jpeg', 'png', 'bmp' or 'gif'.
The image quality can be set globally in the Picamera2 object's options field (though it can also be changed while
Picamera2 is running), as listed below.
quality (default 90) - JPEG quality level, where 0 is the worst quality and 95 is best.
compress_level (default 1) - PNG compression level, where 0 gives no compression, 1 is the fastest that actually does
any compression, and 9 is the slowest.
For example, you can change these default quality parameters as follows:
picam2 = Picamera2()
picam2.options["quality"] = 95
picam2.options["compress_level"] = 2
picam2.start()
time.sleep(1)
picam2.capture_file("test.jpg")
picam2.capture_file("test.png")
picam2 = Picamera2()
picam2.start()
metadata = picam2.capture_metadata()
print(metadata["ExposureTime"], metadata["AnalogueGain"])
Capturing metadata is a good way to synchronise an application with camera frames (if you have no actual need of the
frames). The first call to capture_metadata (or indeed any capture function) will often return immediately because
Picamera2 usually holds on to the last camera image internally. But after that, every capture call will wait for a new frame
to arrive (unless the application has waited so long to make the request that the image is once again already there). For
example:
picam2 = Picamera2()
picam2.start()
for i in range(30):
metadata = picam2.capture_metadata()
print("Frame", i, "has arrived")
The process of obtaining the metadata that belongs to a specific image is explained through the use of requests.
For those who prefer the object-style syntax over Python dictionaries, the metadata can be wrapped in the Metadata
class:
picam2 = Picamera2()
picam2.start()
metadata = Metadata(picam2.capture_metadata())
print(metadata.ExposureTime, metadata.AnalogueGain)
Sometimes it is useful to capture images from several streams at the same moment. For this reason some of the
functions we saw earlier have "pluralised" versions. To list them explicitly:
Picamera2.capture_buffer Picamera2.capture_buffers
Picamera2.switch_mode_and_capture_buffer Picamera2.switch_mode_and_capture_buffers
Picamera2.capture_array Picamera2.capture_arrays
Picamera2.switch_mode_and_capture_array Picamera2.switch_mode_and_capture_arrays
All these functions work in the same way as their single-capture counterparts except that:
Instead of providing a single stream name, a list of stream names must be provided.
The return value is a tuple of two values, the first being the list of arrays (in the order the names were given), and
the second being the image metadata.
picam2 = Picamera2()
config = picam2.create_preview_configuration(lores={})
picam2.configure(config)
picam2.start()
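# Capture an image from each stream, together with the metadata for that capture
(arrays, metadata) = picam2.capture_arrays(["main", "lores"])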
In this case we configure both a main and a lores stream. We then ask to capture an image from each and these are re‐
turned to us along with the metadata for that single capture from the camera.
Finally, to facilitate using these images, Picamera2 has a small Helpers library that can convert arrays to PIL images, save
them to a file, and so on. Among the available functions are:
picam2.helpers.make_array - make a 2D (or 3D, allowing for multiple colour channels) array from a flat 1D array (as
returned by, for example, capture_buffer).
picam2.helpers.make_image - make a PIL image from a flat 1D array (as returned by, for example, capture_buffer).
These helpers can be accessed directly from the Picamera2 object. If we wanted to capture a single buffer and use one
of these helpers to save the file, we could use:
picam2 = Picamera2()
picam2.configure(picam2.create_preview_configuration())
picam2.start()
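# A sketch: capture a flat buffer, turn it into a PIL image using the "main"
# stream's configuration (make_image needs this to interpret the buffer), and save it
buffer = picam2.capture_buffer()
img = picam2.helpers.make_image(buffer, picam2.camera_configuration()["main"])
img.save("test.jpg")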
Normally, when we capture arrays or images, the image data is copied so that the camera system can keep hold of all
the memory buffers it was using and carry on running in just the same manner. When we capture a request, however, we
have only borrowed it and all the memory buffers from the camera system, and nothing has yet been copied. When we
are finished with the request it must be returned back to the camera system using the request’s release method.
IMPORTANT
If an application fails to release captured requests back to the camera system, the camera system will gradually run out of buffers. It
is likely to start dropping ever more camera frames, and eventually the camera system will stall completely.
picam2 = Picamera2()
picam2.start()
request = picam2.capture_request()
request.save("main", "image.jpg")
print(request.get_metadata()) # this is the metadata for this image
request.release()
As soon as we have finished with the request, it is released back to the camera system. This example also shows how
we are able to obtain the exact metadata for the captured image using the request’s get_metadata function.
Notice how we saved the image from the main stream to the file. All Picamera2’s capture methods are implemented
through methods in the CompletedRequest class, and once we have the request we can call them directly. The corre‐
spondence is illustrated in the table below.
Picamera2.capture_buffer CompletedRequest.make_buffer
Picamera2.capture_array CompletedRequest.make_array
Picamera2.capture_image CompletedRequest.make_image
When we capture a request we can access its metadata, which gives us a SensorTimestamp. This is the time, measured
in nanoseconds from when the system booted, that the first pixel was read out of the sensor. So we have to further sub‐
tract the image exposure time to find out when the first pixel started being exposed.
The Picamera2.capture_request method makes this easy for us. By setting the flush parameter to True we can invoke
exactly this behaviour - that the first pixel started being exposed no earlier than the moment we call the function.
Alternatively, we can pass an explicit timestamp in nanoseconds if we have a slightly different instant in time in mind.
So for example
request = picam2.capture_request(flush=True)
is equivalent to
request = picam2.capture_request(flush=time.monotonic_ns())
Just as an example, if we were recording a video and wanted to capture a JPEG simultaneously whilst minimising the
risk of dropping any video frames, then it would be beneficial to move that processing out of the camera loop.
This is easily accomplished simply by capturing a request and calling request.save as we saw above. Camera events
can still be handled in parallel (though this is somewhat at the mercy of Python’s multi-tasking abilities), and the only
downside is that the camera system has to make do with one less set of buffers until that request is finally released.
However, this can in turn always be mitigated by allocating one or more extra sets of buffers via the camera configura‐
tion’s buffer_count parameter.
All the capture and switch_mode_and_capture methods take two additional arguments: wait and signal_function.
Both take the default value None, but can be changed, resulting in the following behaviour:
If wait and signal_function are both None, then the function will block until the operation is complete. This is
what we would probably term the "usual" behaviour.
If wait is None but a signal_function is supplied, then the function will not block, but return immediately even
though the operation is not complete. The caller should use the supplied signal_function to notify the applica‐
tion when the operation is complete.
If a signal_function is supplied, and wait is not None, then the given value of wait determines whether the
function blocks (it blocks if wait is true). The signal_function is still called, however.
You can also set wait to False and not supply a signal_function. In this case the function returns immediate‐
ly, and you can block later for the operation to complete (see below).
When you call a function in the usual blocking manner, the function in question will obviously return its "normal" result.
When called in a non-blocking manner, the function will return a handle to a job which is what you will need if you want to
block later on for the job to complete.
WARNING
The signal_function should not initiate any Picamera2 activity (by calling Picamera2 methods, for example) itself, as this is likely to
result in a deadlock. Instead, it should be setting events or variables for threads to respond to.
After launching an asynchronous operation as described above, you should have recorded the job handle that was re‐
turned to you. An application may then call Picamera2.wait(job) to complete the process, for example:
result = picam2.wait(job)
The wait function returns the result that would have been returned if the operation had blocked initially. You don’t have
to wait for one job to complete before submitting another. They will complete in the order they were submitted.
Here’s a short example. The switch_mode_and_capture_file method captures an image to file and returns the image
metadata. So we can do the following:
picam2 = Picamera2()
still_config = picam2.create_still_configuration()
picam2.configure(picam2.create_preview_configuration())
picam2.start()
time.sleep(1)
job = picam2.switch_mode_and_capture_file(still_config, "test.jpg", wait=False)
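# Do other work here, then block until the capture is finished and collect the
# metadata that the blocking call would have returned
metadata = picam2.wait(job)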
6.6.1. start_and_capture_file
For simple image capture we have the Picamera2.start_and_capture_file method. This function will configure and
start the camera automatically, and return once the capture is complete. It accepts the following parameters, though all
have sensible default values so that the function can be called with no arguments at all.
name (default "image.jpg") - the file name under which to save the captured image.
delay (default 1) - the number of seconds of delay before capturing the image. The value zero (no delay) is valid.
preview_mode (default "preview") - the camera configuration to use for the preview phase of the operation. The
default value indicates to use the configuration in the Picamera2 object’s preview_configuration field, though
any other configuration can be supplied. The capture operation only has a preview phase if the delay is greater
than zero.
capture_mode (default "still") - the camera configuration to use for capturing the image. The default value in‐
dicates to use the configuration in the Picamera2 object’s still_configuration field, though any other configu‐
ration can be supplied.
show_preview (default True) - whether to show a preview window. By default preview images are only displayed
for the preview phase of the operation, unless this behaviour is overridden by the supplied camera configura‐
tions using the display parameter. If subsequent calls are made which change the value of this parameter, we
note that the application should call the Picamera2.stop_preview method in between.
picam2 = Picamera2()
picam2.start_and_capture_file("test.jpg")
All the usual file formats (JPEG, PNG, BMP and GIF) are supported.
6.6.2. start_and_capture_files
This function is very similar to start_and_capture_file, except that it can capture multiple images with a time delay
between them. Again, it can be called with no arguments at all, but it accepts the following optional parameters:
name (default "image{:03d}.jpg") - the file name under which to save the captured image. This should include a
format directive (such as in the default name) that will be replaced by a counter, otherwise the images will sim‐
ply overwrite one another.
initial_delay (default 1) - the number of seconds of delay before capturing the first image. The value zero (no
delay) is valid.
preview_mode (default "preview") - the camera configuration to use for the preview phases of the operation.
The default value indicates to use the configuration in the Picamera2 object’s preview_configuration field,
though any other configuration can be supplied. The capture operation only has a preview phase when the corre‐
sponding delay parameter (delay or initial_delay) is greater than zero.
capture_mode (default "still") - the camera configuration to use for capturing the images. The default value
indicates to use the configuration in the Picamera2 object’s still_configuration field, though any other config‐
uration can be supplied.
delay (default 1) - the number of seconds between captures for all images except the very first (which is gov‐
erned by initial_delay). If this has the value zero, then there is no preview phase between the captures at all.
show_preview (default True) - whether to show a preview window. By default, preview images are only displayed
for the preview phases of the operation, unless this behaviour is overridden by the supplied camera configura‐
tions using the display parameter. If subsequent calls are made which change the value of this parameter, we
note that the application should call the Picamera2.stop_preview method in between.
picam2 = Picamera2()
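# A sketch of the call this text describes; num_files is assumed to be the
# parameter giving the number of captures
picam2.start_and_capture_files("test{:d}.jpg", initial_delay=5, delay=5, num_files=10)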
This will capture ten files named test0.jpg through test9.jpg, with a five-second delay before every capture.
picam2 = Picamera2()
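# A sketch of capturing as quickly as possible: delay=0 removes the preview
# phases between captures entirely (num_files is again assumed)
picam2.start_and_capture_files("test{:d}.jpg", num_files=10, delay=0)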
In practice, the rate of capture will be limited by the time it takes to encode and save the JPEG files. For faster capture, it
might be worth saving a video in MJPEG format instead.
metadata_with_image.py - shows how to capture an image and also the metadata for that image.
still_during_video.py - shows how you might capture a still image while a video recording is in progress.
opencv_mertens_merge.py - takes several captures at different exposures, starting and stopping the camera for
each capture.
capture_dng_and_jpeg_helpers.py - uses the "helpers" to save a JPEG and DNG file without holding the entire
CompletedRequest object.
7. Capturing videos
In Picamera2, the process of capturing and encoding video is largely automatic. The application only has to define what
encoder it wants to use to compress the image data, and how it wants to output this compressed data stream.
The mechanics of taking the camera images that arrive, forwarding them to an encoder, which in turn sends the results
directly to the requested output, is entirely transparent to the user. The encoding and output all happens in a separate
thread from the camera handling to minimise the risk of dropping camera frames.
import time
from picamera2 import Picamera2
from picamera2.encoders import H264Encoder

picam2 = Picamera2()
video_config = picam2.create_video_configuration()
picam2.configure(video_config)
encoder = H264Encoder(bitrate=10000000)
output = "test.h264"
picam2.start_recording(encoder, output)
time.sleep(10)
picam2.stop_recording()
In this example we use the H.264 encoder. For the output object we can just use a string for convenience; this will be in‐
terpreted as a simple output file. For configuring the camera, the create_video_configuration is a good starting point,
as it will use a larger buffer_count to reduce the risk of dropping frames.
We also used the convenient start_recording and stop_recording functions, which start and stop both the encoder
and the camera together. Sometimes it can be useful to separate these two operations, for example you might want to
start and stop a recording multiple times while leaving the camera running throughout. For this reason, start_recording
could have been replaced by:
picam2.start_encoder(encoder, output)
picam2.start()
picam2.stop()
picam2.stop_encoder()
7.1. Encoders
All the video encoders can be constructed with parameters that determine the quality (amount of compression) of the
output, such as the bitrate for the H.264 encoder. For those not so familiar with the details of these encoders, these pa‐
rameters can also be omitted in favour of supplying a quality to the start_encoder or start_recording functions. The
permitted quality parameters are:
Quality.VERY_LOW
Quality.LOW
Quality.MEDIUM - this is the default for both functions if the parameter is not specified
Quality.HIGH
Quality.VERY_HIGH
This quality parameter only has any effect if the encoder was not passed explicit codec-specific parameters. It could be
used like this:
picam2 = Picamera2()
picam2.configure(picam2.create_video_configuration())
encoder = H264Encoder()
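# Start recording, letting the quality parameter choose the encoder settings;
# any of the values listed above may be used (Quality is importable from
# picamera2.encoders)
picam2.start_recording(encoder, "test.h264", quality=Quality.LOW)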
Suitable adjustments will be made by the encoder according to the supplied quality parameter, though this is of a "best
efforts" nature and somewhat subject to interpretation. Applications are recommended to choose explicit parameters for
themselves if the quality parameter is not having the desired effect.
NOTE
On Pi 4 and earlier devices there is dedicated hardware for H264 and MJPEG encoding. On a Pi 5, however, these codecs are
implemented in software using FFmpeg libraries. The performance on a Pi 5 is similar or better, and the images are encoded with
better quality. The JPEG encoder runs in software on all platforms.
7.1.1. H264Encoder
The H264Encoder class implements an H.264 encoder using the Pi’s in-built hardware, accessed through the V4L2 kernel
drivers, supporting up to 1080p30. The constructor accepts the following optional parameters:
bitrate (default None) - the bitrate (in bits per second) to use. The default value None will cause the encoder to
choose an appropriate bitrate according to the Quality when it starts.
repeat (default False) - whether to repeat the stream's sequence headers with every Intra frame (I-frame). This
can sometimes be useful when streaming video over a network, when the client may not receive the start of
the stream where the sequence headers would normally be located.
iperiod (default None) - the number of frames from one I-frame to the next. The value None leaves this at the
discretion of the hardware, which defaults to 60 frames.
This encoder can accept either 3-channel RGB ("RGB888" or "BGR888"), 4-channel RGBA ("XBGR8888" or "XRGB8888") or
YUV420 ("YUV420").
7.1.2. JpegEncoder
The JpegEncoder class implements a multi-threaded software JPEG encoder, which can also be used as a motion JPEG
("MJPEG") encoder. It accepts the following optional parameters:
q (default None) - the JPEG quality number. The default value None will cause the encoder to choose an appropri‐
ate value according to the Quality when it starts.
colour_space (default None) - the software will select the correct "colour space" for the stream being encoded
so this parameter should normally be left blank.
colour_subsampling (default 420) - this is the form of YUV that the encoder will convert the RGB pixels into in‐
ternally before encoding. It therefore determines whether a JPEG decoder will see a YUV420 image or some‐
thing else. Valid values are 444 (YUV444), 422 (YUV422), 440 (YUV440), 420 (YUV420, the default), 411 (YUV411)
or Gray (greyscale).
This encoder can accept either three-channel RGB ("RGB888" or "BGR888") or 4-channel RGBA ("XBGR8888" or
"XRGB8888"). Note that you cannot pass it any YUV formats.
NOTE
The JpegEncoder is derived from the MultiEncoder class which wraps frame-level multi-threading around an existing software
encoder, and some users may find it helpful in creating their own parallelised codec implementations. There will only be a significant
performance boost if Python’s GIL (Global Interpreter Lock) can be released while the encoding is happening - as is the case with the
JpegEncoder as the bulk of the work happens in a call to a C library.
7.1.3. MJPEGEncoder
The MJPEGEncoder class implements an MJPEG encoder using the Raspberry Pi’s in-built hardware, accessed through
the V4L2 kernel drivers. The constructor accepts the following optional parameter:
bitrate (default None) - the bitrate (in bits per second) to use. The default value None will cause the encoder to
choose an appropriate bitrate according to the Quality when it starts.
This encoder can accept either 3-channel RGB ("RGB888" or "BGR888"), 4-channel RGBA ("XBGR8888" or "XRGB8888") or
YUV420 ("YUV420").
picam2 = Picamera2()
config = picam2.create_video_configuration(raw={}, encode="raw")
picam2.configure(config)
encoder = Encoder()
picam2.start_recording(encoder, "test.raw")
time.sleep(5)
picam2.stop_recording()
7.2. Outputs
Output objects receive encoded video frames directly from the encoder and will typically forward them to files or to net‐
work sockets. An output object is often made with its constructor, although a simple string can be passed to the
start_encoder and start_recording functions which will cause a FileOutput object to be made automatically.
The available output objects are described in the sections that follow.
7.2.1. FileOutput
The FileOutput is constructed from a single file parameter, which may be a string (interpreted as a simple file name), a
file-like object (which might be a memory buffer created using io.BytesIO()), or a network socket. A regular file:
output = FileOutput('test.h264')
A memory buffer:
buffer = io.BytesIO()
output = FileOutput(buffer)
7.2.2. FfmpegOutput
The FfmpegOutput class forwards the encoded frames to an FFmpeg process. This opens the door to some quite so‐
phisticated new kinds of output, including MP4 files and even audio, but may require substantial knowledge about
FFmpeg itself (which is well beyond the scope of this document, but FFmpeg’s own documentation is available).
The class constructor has one required parameter, the output file name, and all the others are optional:
output_filename - typically we might pass something like "test.mp4", however, it is used as the output part of
the FFmpeg command line and so could equally well contain "test.ts" (to record an MPEG-2 transport
stream), or even "-f mpegts udp://<ip-addr>:<port>" to forward an MPEG-2 transport stream to a given net‐
work socket.
audio (default False) - if you have an attached microphone, pass True for the audio to be recorded along with
the video feed from the camera. The microphone is assumed to be available through PulseAudio.
audio_device (default "default") - the name by which PulseAudio knows the microphone. Usually "default"
will work.
audio_sync (default -0.3) - the time shift in seconds to apply between the audio and video streams. This may
need tweaking to improve the audio/video synchronisation.
The range of output file names that can be passed to FFmpeg is very wide because it may actually include any of
FFmpeg’s output options, thereby exceeding the scope of what can be documented here or even tested comprehensive‐
ly. We list some more complex examples later on, and conclude with a simple example that records audio and video to
an MP4 file:
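# A sketch of recording video plus audio to an MP4 file (assumes a microphone
# that PulseAudio knows as the default input)
import time
from picamera2 import Picamera2
from picamera2.encoders import H264Encoder
from picamera2.outputs import FfmpegOutput

picam2 = Picamera2()
picam2.configure(picam2.create_video_configuration())
encoder = H264Encoder(10000000)
output = FfmpegOutput("test.mp4", audio=True)
picam2.start_recording(encoder, output)
time.sleep(10)
picam2.stop_recording()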
NOTE
One limitation of the FfmpegOutput class is that there is no easy way to pass the frame timestamps - which Picamera2 knows
precisely - to FFmpeg. As such, we have to get FFmpeg to resample them, meaning they become subject to a relatively high degree of
jitter. Whilst this may matter for some applications, it does not affect most users.
7.2.3. CircularOutput
The CircularOutput class is derived from the FileOutput and adds the ability to start a recording with video frames
that were from several seconds earlier. This is ideal for motion detection and security applications. The CircularOutput
constructor accepts the following optional parameters:
file (default None) - a string (representing a file name) or a file-like object which is used to construct a
FileOutput. This is where output from the circular buffer will get written. The value None means that, when the
circular buffer is created, it will accumulate frames within the circular buffer, but will not be writing them out
anywhere.
buffersize (default 150) - set this to the number of seconds of video you want to be able to access from before
the current time, multiplied by the frame rate. So 150 buffers is enough for five seconds of video at 30fps.
To make the CircularOutput start writing the frames out to a file (for example), an application should:
picam2 = Picamera2()
picam2.configure(picam2.create_video_configuration())
encoder = H264Encoder()
output = CircularOutput()
picam2.start_recording(encoder, output)
# Now when it's time to start recording the output, including the previous 5 seconds:
output.fileoutput = "file.h264"
output.start()
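# And later, when it is time to stop writing frames out to the file:
output.stop()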
For convenience there is also the Picamera2.start_and_record_video function, which configures and starts the camera
and records video automatically (compare the start_and_capture_file function we met earlier). It accepts the following
parameters:
output (required) - the name of the file to record to, or an output object of one of the types described above.
When a string ending in .mp4 is supplied, an FfmpegOutput rather than a FileOutput is created, so that a valid
MP4 file is made.
encoder (default None) - the encoder to use. If left unspecified, the function will make a best effort to choose
(MJPEG if the file name ends in mjpg or mjpeg, otherwise H.264).
config (default None) - the camera configuration to use if not None. If the camera is unconfigured but none was
given, the camera will be configured according to the "video" (Picamera2.video_configuration) configuration.
quality (default Quality.MEDIUM) - the video quality to generate, unless overridden in the encoder object.
show_preview (default False) - whether to show a preview window. If this value is changed, it will have no effect
unless stop_preview is called beforehand.
duration (default 0) - the recording duration. The function will block for this long before stopping the recording.
When the value is zero, the function returns immediately and the application will have to call stop_recording
later.
audio (default False) - whether to record an audio stream. This only works when recording to an MP4 file, and
when a microphone is attached as the default PulseAudio input.
picam2 = Picamera2()
picam2.start_and_record_video("test.mp4", duration=5)
capture_circular_stream.py - is a similar but more complex example that simultaneously sends the stream over
a network connection, using the multiple outputs feature.
mjpeg_server.py - implements a simple web server that can deliver streaming MJPEG video to a web page.
8. Advanced topics
8.1. Display overlays
All the Picamera2 preview windows support overlays - that is, a bitmap with an alpha channel that can be superimposed
over the live camera image. The alpha channel allows the overlay image to be opaque, partially transparent or wholly
transparent, pixel by pixel.
To add an overlay we can use the Picamera2.set_overlay function. It takes a single argument which is a three-dimen‐
sional numpy array. The first two dimensions are the height and then the width, and the final dimension should have the
value 4 as all pixels have R, G, B and A (alpha) values.
We note that:
The overlay width and height do not have to match the camera images being displayed, as the overlay will be re‐
sized to fit exactly over the camera image.
set_overlay should only be called after the camera has been configured, as only at this point does Picamera2
know how large the camera images being displayed will be.
The overlay is always copied by the set_overlay call, so it is safe for an application to overwrite the overlay
afterwards.
Overlays are designed to provide simple effects or GUI elements over a camera image. They are not designed
for sophisticated or fast-moving animations.
Overlays ignore any display transform that was specified when the preview was created.
picam2 = Picamera2()
picam2.configure(picam2.create_preview_configuration())
picam2.start(show_preview=True)
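import numpy as np  # assumed import

# A sketch of an overlay with red, green and blue quadrants; the sizes and
# alpha values here are illustrative
overlay = np.zeros((300, 400, 4), dtype=np.uint8)
overlay[:150, 200:] = (255, 0, 0, 64)  # red quadrant
overlay[150:, :200] = (0, 255, 0, 64)  # green quadrant
overlay[150:, 200:] = (0, 0, 255, 64)  # blue quadrant
picam2.set_overlay(overlay)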
and the result shows the red, green and blue quadrants over the camera image:
Figure 3. A simple overlay
For real applications, more complex overlays can of course be designed with image editing programs and loaded from
file. Remember that, if loading an RGBA image with OpenCV, you need to use the IMREAD_UNCHANGED flag:
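# Load an RGBA overlay from a file, preserving the alpha channel
# (assumes import cv2; the file name here is illustrative)
overlay = cv2.imread("overlay.png", cv2.IMREAD_UNCHANGED)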
Processing in the camera event loop
Sometimes it is useful to be able to apply some processing within the camera event loop, that happens unconditionally
to all frames. For example, an application might want to monitor the image metadata, or annotate images, all without
any involvement from the rest of the application.
We would generally recommend that any such code does not take too long because, being in the middle of the camera
event handling, it could easily cause frames to be dropped. It goes without saying that functions that make asyn‐
chronous requests to Picamera2 (capturing metadata or images, for example) must be avoided as they would almost
certainly lead to instant deadlocks.
There are two places where user processing may be inserted into the event loop:
The pre_callback, where the processing happens before the images are supplied to applications, before they
are passed to any video encoders, and before they are passed to any preview windows.
The post_callback, where the processing happens before the images are passed to any video encoder, before
they are passed to any preview windows, but after images have been supplied to applications.
It is not possible to do processing on frames that will be recorded as video but to avoid doing the same processing on
the frames when they are displayed, or vice versa, as these two processes run in parallel. Though we note that an appli‐
cation could display a different stream from the one it encodes (it might display the "main" stream and encode the "lores"
version), and apply processing only to one of them which would simulate this effect.
The following example uses OpenCV to apply a date and timestamp to every image.
import time
import cv2
from picamera2 import Picamera2, MappedArray

# Drawing parameters for cv2.putText (illustrative values)
colour, origin, font, scale, thickness = (0, 255, 0), (0, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, 2

picam2 = Picamera2()

def apply_timestamp(request):
    timestamp = time.strftime("%Y-%m-%d %X")
    with MappedArray(request, "main") as m:
        cv2.putText(m.array, timestamp, origin, font, scale, colour, thickness)

picam2.pre_callback = apply_timestamp
picam2.start(show_preview=True)
Because we have used the pre_callback, it means that all images will be timestamped, whether the application re‐
quests them through any of the capture methods, whether they are being encoded and recorded as video, and indeed
when they are displayed.
Had we used the post_callback instead, images acquired through the capture methods would not be timestamped.
Finally we draw attention to the MappedArray class. This class is provided as a convenient way to gain in-place access to
the camera buffers - all the capture methods that applications normally use are returning copies.
The MappedArray needs to be given a request and the name of the stream for which we want access to its image buffer.
It then maps that memory into user space and presents it to us as a regular numpy array, just as if we had obtained it
via capture_array. Once we leave the with block, the memory is unmapped and everything is cleaned up.
WARNING
The amount of processing placed into the event loop should always be as limited as possible. It is recommended that any such
processing is restricted to drawing in-place on the camera buffers (as above), or using metadata from the request. Above all, calls to
the camera system should be avoided or handled with extreme caution as they are likely to block waiting for the event loop to
complete some task and can cause a deadlock.
Further examples
opencv_face_detect_3.py shows how you would draw faces on a recorded video, but not on the image used for
face detection.
The idea is that a list of functions can be submitted to the event loop which behaves as follows:
1. Every time a completed request is received from the camera, it calls the first function in the list.
2. The functions must always return a tuple of two values. The first should be a boolean, indicating whether that
function is "finished". In this case, it is popped from the list, otherwise it remains at the front of the list and will
be called again next time. (Note that we never move on and call the next function with the same request.)
3. If the list is now empty, that list of tasks is complete and this is signalled to the caller. The second value from the
tuple is the one that is passed back to the caller as the result of the operation (usually through the wait
method).
Normally when we call Picamera2.switch_mode_and_capture_file(), the camera system switches from the preview to
the capture mode, captures an image, then it switches back to the preview mode and starts the camera running again.
What if we want to stop the camera as soon as possible after the capture? In this case, we’ve spent time restarting the
camera in the preview mode before we can call Picamera2.stop() from our application (and wait again for that to
happen).
picam2 = Picamera2()
capture_config = picam2.create_still_configuration()
picam2.start()
picam2.switch_mode_capture_file_and_stop(capture_config, "test.jpg")
switch_mode_capture_file_and_stop creates a list of two functions that it dispatches to the event loop.
The first of these functions (picam2.switch_mode_) will switch the camera into the capture mode, and then re‐
turn True as its first value, removing it from the list.
When the first frame in the capture mode arrives, the local capture_and_stop_ function will run, capturing the
file and stopping the camera.
This function returns True as its first value as well, so it will be popped off the list. The list is now empty so the
event loop will signal that it is finished.
WARNING
Here too the application must take care what functions it calls from the event loop. For example, most of the usual Picamera2
functions are likely to cause a deadlock. The convention has been adopted that functions that are explicitly safe should end with a _
(an underscore).
Some Pi versions have more memory available than others, and memory pressure may further depend on which sensor (v1, v2 or
HQ) is being used and whether full-resolution images are being processed.
The table below lists the image formats that we would recommend users choose, the size of a single full resolution
12MP image, and whether they work with certain other modules.
Table 1. Recommended image formats

Format      XRGB8888/XBGR8888   RGB888/BGR888   YUV420/YVU420
12MP size   48MB                36MB            18MB
CMA memory
CMA stands for Contiguous Memory Allocator, and on the Raspberry Pi it provides memory that can be used directly by
the camera system and all its hardware devices. All the memory buffers for the main, lores and raw streams are allocat‐
ed here and, being shared with the rest of the Linux operating system, it can come under pressure and be subject to
fragmentation.
When the CMA area runs out of space, this can be identified by a Picamera2 error saying that V4L2 (the Video for Linux
kernel subsystem) has been unable to allocate buffers. Mitigations may include allocating fewer buffers (the buffer_‐
count parameter in the camera configuration), choosing image formats that use less memory, or using lower resolution
images. The following workarounds can also be tried.
The default CMA size on Pi devices is: 256MB if the total system memory is less than or equal to 1GB, otherwise 320MB.
CMA memory is still available to the system when "regular" memory starts to run low, so increasing its size does not nor‐
mally starve the rest of the operating system.
As a rule of thumb, all systems should usually be able to increase the size to 320MB if they are experiencing problems;
1GB systems could probably go to 384MB and 2GB or larger systems could go as far as 512MB. But you may find that
different limits work best on your own systems, and some experimentation may be necessary.
To change the size of the CMA area, you will need to edit the /boot/config.txt file. Find the line that says
dtoverlay=vc4-kms-v3d and replace it with the following (shown here for a 320MB CMA area):
dtoverlay=vc4-kms-v3d,cma-320
Do not add any spaces or change any of the formatting from what is provided above.
NOTE
Anyone using the fkms driver can continue to use it and change the CMA area as described above. Just keep fkms in the driver name.
WARNING
Legacy camera-stack users may at some point in time have increased the amount of gpu_mem available in their system, as this was
used by the legacy camera stack. Picamera2 and libcamera make no use of gpu_mem so we strongly recommend removing any
gpu_mem lines from your /boot/config.txt as its only effect is likely to be to waste memory.
One final suggestion for reducing the size of images in memory is to use the YUV420 format instead. Third-party mod‐
ules often do not support this format very well, so some kind of software conversion to a more familiar RGB format may
be necessary. The benefit of doing this conversion in the application is that the large RGB buffer ends up in user space
where we benefit from virtual memory, and the CMA area only needs space for the smaller YUV420 version.
Fortunately, OpenCV provides a conversion function from YUV420 to RGB. It takes substantially less time to execute
than, for example, the JPEG encoder, so for some applications it may be a good trade-off. The following example shows
how to convert a YUV420 image to RGB:
import cv2
from picamera2 import Picamera2

picam2 = Picamera2()
picam2.configure(picam2.create_preview_configuration({"format": "YUV420"}))
picam2.start()
yuv420 = picam2.capture_array()
rgb = cv2.cvtColor(yuv420, cv2.COLOR_YUV420p2RGB)
This function does not appear to let the application choose the YUV/RGB conversion matrix, however.
Buffer allocations
Picamera2 allocates different numbers of buffers depending on the configuration:
Preview configurations are given four buffers by default. This is normally enough to keep the camera system
running smoothly even when there is moderate additional processing going on.
Still capture configurations are given only one buffer by default. This is because they can be very large and may
pose particular problems for 512MB platforms. But it does mean that, because of the way the readout from the
image sensor is pipelined, we are certain to drop at least every other camera frame. If you have memory avail‐
able and want fast, full-resolution burst captures, you may want to increase this number.
Video configurations allocate six buffers. The system is likely to be more busy while recording video, so the ex‐
tra buffers reduce the chances of dropping frames.
Holding on to requests
We have seen how an application can hold on to requests for its own use, during which time they are not available to the
camera system. If this causes unacceptable frame drops, or even stalls the camera system entirely, then the answer is
simply to allocate more buffers to the camera at configuration time.
As a general rule, if your application will only ever be holding on to one request, then it should be sufficient to allocate
just one extra buffer to restore the status quo ante.
Normally Picamera2 always tries to keep hold of the most recent camera frame to arrive. For example, if an application
does some processing that normally fits within a frame period, but occasionally takes a bit longer (which is always a par‐
ticular risk on a multi-tasking operating system), then it’s less likely to drop frames and fall behind.
The length of this queue within Picamera2 is just a single frame. It does mean that, when you ask to capture a frame or
metadata, the function is likely to return immediately, unless you have already made such a request just before.
The exception to this is when the camera is configured with a single buffer (the default setup for still capture), when it is
not possible to hang on to the previous camera image - because there is no "spare" buffer for the camera to fill to replace
it! In this case, no previous frame is held within Picamera2, and requests to capture a frame will always wait for the next
one from the camera. In turn, this does mean there is some risk of image "tearing" in the preview windows (when the
new image replaces the old one half way down the frame).
This behaviour (where Picamera2 holds on to the last camera frame) is customisable in the camera configuration via the
queue parameter. If you want to guarantee that every capture request returned to you arrived from the camera system af‐
ter the request was made, then this parameter should be passed as False. Please refer to the queue parameter docu‐
mentation for more details.
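For example, a minimal sketch of disabling this queue of one frame:
config = picam2.create_preview_configuration(queue=False)
picam2.configure(config)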
Qt widgets
QGlPicamera2 - this is a Qt widget that renders the camera images using hardware acceleration through the Pi’s
GPU.
QPicamera2 - a software rendered but otherwise equivalent widget. This version is much slower and the
QGlPicamera2 should be preferred in nearly all circumstances except those where it does not work (for example,
the application has to operate with a remote window through X forwarding).
Both widgets have a set_overlay method which implements the overlay functionality of Picamera2's preview windows.
This method accepts a single 3-dimensional numpy array as an RGBA image in exactly the same way, making this fea‐
ture also available to Qt applications.
When the widget is created there is also an optional keep_ar (keep aspect ratio) parameter, defaulting to True. This al‐
lows the application to choose whether camera images should be letter- or pillar-boxed to fit the size of the widget
(keep_ar=True) or stretched to fill it completely, possibly distorting the relative image width and height (keep_ar=False).
Finally, the two Qt widgets both support the transform parameter that allows camera images to be flipped horizontally
and/or vertically as they are drawn to the screen (without having any effect on the camera image itself).
When we’re writing a Picamera2 script that runs in the Python interpreter’s main thread, the event loop that drives the
camera system is supplied by the preview window (which may also be the NULL preview that doesn’t display anything).
In this case, however, the Qt event loop effectively becomes the main thread and drives the camera application too.
from PyQt5.QtWidgets import QApplication  # these Qt examples assume PyQt5
from picamera2.previews.qt import QGlPicamera2
from picamera2 import Picamera2

picam2 = Picamera2()
picam2.configure(picam2.create_preview_configuration())
app = QApplication([])
qpicamera2 = QGlPicamera2(picam2, width=800, height=600, keep_ar=False)
qpicamera2.setWindowTitle("Qt Picamera2 App")
picam2.start()
qpicamera2.show()
app.exec()
In a real application, of course, the qpicamera2 widget will be embedded into the layout of a more complex parent widget
or window.
Camera functions fall broadly into three types for the purposes of this discussion. There are:
1. Functions that return immediately and can safely be called directly,
2. Functions that have to wait for the camera event loop to do something before the operation is complete, so they
must be called in a non-blocking manner, and
3. Functions that should not be called at all once the Qt event loop is running.
The following table lists the public API functions and indicates which category they are in:
Table 2. Public APIs in a Qt application (columns: Function Name, Status)
When we’re running a script in the Python interpreter thread, the camera event loop runs asynchronously - meaning we
can ask it to do things and wait for them to finish.
We accomplish this because these functions have a wait and a signal_function parameter which were discussed earli‐
er. The default values cause the function to block until it is finished, but if we pass a signal_function then the call will
return immediately, and we rely on the signal_function to return control to us.
In the Qt world, the camera thread and the Qt thread are the same, so we simply cannot block for the camera to finish -
because everything will deadlock. Instead, we have to tell these functions that they may not block, and also provide them
with an alternative Qt-friendly way of telling us they’re finished. Therefore, both widgets provide:
a done_signal member - this is a Qt signal to which an application can connect its own callback function, and
a signal_done function - this can be passed as the signal_function to any Picamera2 call; it emits the
done_signal when the operation completes.
The job that was started by the non-blocking call will be passed to the function connected to the done_signal.
The application can call Picamera2.wait with this job to obtain the return value of the original call.
from PyQt5.QtWidgets import QApplication, QPushButton, QVBoxLayout, QWidget

from picamera2.previews.qt import QGlPicamera2
from picamera2 import Picamera2

# A sketch of a Qt capture app; the button and window layout details are illustrative
picam2 = Picamera2()
picam2.configure(picam2.create_preview_configuration())

def on_button_clicked():
    button.setEnabled(False)
    cfg = picam2.create_still_configuration()
    picam2.switch_mode_and_capture_file(cfg, "test.jpg",
                                        signal_function=qpicamera2.signal_done)

def capture_done(job):
    result = picam2.wait(job)
    button.setEnabled(True)

app = QApplication([])
qpicamera2 = QGlPicamera2(picam2, width=800, height=600, keep_ar=False)
button = QPushButton("Click to capture JPEG")
window = QWidget()
qpicamera2.done_signal.connect(capture_done)
button.clicked.connect(on_button_clicked)

layout_v = QVBoxLayout()
layout_v.addWidget(qpicamera2)
layout_v.addWidget(button)
window.setWindowTitle("Qt Picamera2 App")
window.setLayout(layout_v)

picam2.start()
window.show()
app.exec()
Observe that:
When we call switch_mode_and_capture_file we must tell it not to block by supplying a function to call when it
is finished (qpicamera2.signal_done).
This function emits the done_signal, which gives back control in the capture_done function where we can re-
enable the button.
Once the operation is done, we can call the wait(job) method if there is a result that we need.
Figure 4. A Qt app using Picamera2
Further Examples
app_capture2.py lets you capture a JPEG by clicking a button, and then re-enables the button afterwards using
the Qt signal mechanism.
Users familiar with the logging module can assign handlers and set the level in the usual way. Users not familiar with the
logging module, and who simply want to see some debug statements being printed out, can use the
Picamera2.set_logging function, for example:
Picamera2.set_logging(Picamera2.DEBUG)
which would set the logging level to output all DEBUG (or more serious) messages. (Picamera2.DEBUG is a synonym for
logging.DEBUG but saves you the extra import logging.)
Besides the logging level, the set_logging function also allows you to specify:
msg - the logging module format string for the message, defaulting to "%(name)s %(levelname)s: %(message)s".
For more information on these parameters, please consult the logging module documentation.
NOTE
Older versions of Picamera2 had a verbose_console parameter in the Picamera2 class constructor, and which would set up the
logging level. This has been deprecated and no longer does anything. The Picamera2.set_logging function should be used instead.
The logging output of the underlying libcamera library is controlled separately, through two environment variables:
LIBCAMERA_LOG_FILE - the name of a file to which to send the log output. If unspecified, logging messages will
go to stderr.
LIBCAMERA_LOG_LEVELS - specifies which parts of libcamera should output logging, and at what level.
Although it has more sophisticated usages, LIBCAMERA_LOG_LEVELS can simply be set to one of a range of fixed values
(or levels) which then apply to all the modules within libcamera. These levels may be given as numbers or as (easier to
remember) strings. For each logging level, messages at that level or any more serious levels will be emitted. These levels
are:
DEBUG or 0
INFO or 1
WARN or 2
ERROR or 3
FATAL or 4
For example, to run Picamera2 with only warning or error messages, you might start Python like this:
LIBCAMERA_LOG_LEVELS=WARN python
You can of course set LIBCAMERA_LOG_LEVELS permanently in your user profile (for example, your .bashrc file).
The process is essentially identical to using a single camera, except that you create multiple Picamera2 objects and then
configure and start them independently.
Before creating the Picamera2 objects, you can call the Picamera2.global_camera_info() method to find out what
cameras are attached. This returns a list containing one dictionary for each camera, ordered according to the camera num‐
ber you would pass to the Picamera2 constructor to open that device. The dictionary contains:
"Model" - the model name of the camera, as advertised by the camera driver.
"Rotation" - how the camera is rotated for normal operation, as reported by libcamera.
"Id" - an identifier string for the camera, indicating how the camera is connected. You can tell from this value
whether the camera is accessed using I2C or USB.
You should always check this list to discover which camera is which as the order can change when the system boots or
USB cameras are re-connected. However, you can rely on CSI2 cameras (attached to the dedicated camera port) coming
ahead of any USB cameras, as they are probed earlier in the boot process.
The following example would start both cameras and capture a JPEG image from each.
picam2a = Picamera2(0)
picam2b = Picamera2(1)
picam2a.start()
picam2b.start()
picam2a.capture_file("cam0.jpg")
picam2b.capture_file("cam1.jpg")
picam2a.stop()
picam2b.stop()
Because the cameras run independently, there is no form of synchronisation of any kind between them and they may be
of completely different types (for example a Raspberry Pi v2 camera and an HQ camera, or a Raspberry Pi camera and a
USB webcam).
It is also possible to use bridging devices to connect multiple cameras to a single Raspberry Pi camera port, so long as
you have appropriate dtoverlay files from the supplier. In this case, of course, you can only open one camera at a time
and so you cannot drive them simultaneously.
9. Application notes
This section answers a selection of "how to" questions for particular use cases.
There are further examples below showing how to send HLS or MPEG-DASH live streams, and also one where we send
an MPEG-2 transport stream to a socket.
You would also have to start an HTTP server to enable remote clients to access the stream. One simple way to do this is
to open another terminal window in the same folder and enter:
python3 -m http.server
Adding audio=True will add and send an audio stream if a microphone is available.
In this example we stream an MPEG-2 transport stream over the network using the UDP protocol where any client may
connect and view the stream. After five seconds we start the second output and record five seconds' worth of H.264
video to a file. We close this output file, but the network stream continues to play.
picam2 = Picamera2()
video_config = picam2.create_video_configuration()
picam2.configure(video_config)
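# A sketch of the rest of this example: one encoder feeding two outputs - a
# network stream started immediately and a file output started five seconds later.
# (Assumes: from picamera2.encoders import H264Encoder and
#  from picamera2.outputs import FfmpegOutput, FileOutput; the names output1 and
#  output2 are ours.)
encoder = H264Encoder(repeat=True, iperiod=15)
output1 = FfmpegOutput("-f mpegts udp://<ip-addr>:<port>")
output2 = FileOutput()
encoder.output = [output1, output2]
picam2.start_encoder(encoder)
picam2.start()
time.sleep(5)
output2.fileoutput = "test.h264"
output2.start()
time.sleep(5)
output2.stop()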
Here, we imagine that we want to receive a camera image and use OpenCV to perform face detection. Rather than using
OpenCV to display the resulting image (which would be at the framerate we use to perform the face detection), we are
going to use Picamera2’s own preview window, running at 30 frames per second.
Before each frame is displayed, we draw the face rectangles in place on the image. The face rectangles correspond to
the locations returned when the face detector last ran - so while the preview image updates at the full rate, the face box‐
es move only at the face detector’s rate.
import cv2
from picamera2 import Picamera2, MappedArray

face_detector = cv2.CascadeClassifier("/path/to/haarcascade_frontalface_default.xml")

def draw_faces(request):
    with MappedArray(request, "main") as m:
        for f in faces:
            # Scale the face rectangles from lores (w1, h1) to main (w0, h0) coordinates
            (x, y, w, h) = [c * n // d for c, n, d in zip(f, (w0, h0) * 2, (w1, h1) * 2)]
            cv2.rectangle(m.array, (x, y), (x + w, y + h), (0, 255, 0, 0))

picam2 = Picamera2()
config = picam2.create_preview_configuration(main={"size": (640, 480)},
                                             lores={"size": (320, 240), "format": "YUV420"})
picam2.configure(config)
(w0, h0) = picam2.stream_configuration("main")["size"]
(w1, h1) = picam2.stream_configuration("lores")["size"]
faces = []
picam2.post_callback = draw_faces  # draw the boxes on every frame before display
picam2.start(show_preview=True)

while True:
    array = picam2.capture_array("lores")
    grey = array[:h1, :]  # the Y plane of the YUV420 image serves as a greyscale image
    faces = face_detector.detectMultiScale(grey, 1.1, 3)
To enable the HDR mode, please type the following into a terminal window before starting Picamera2:
v4l2-ctl --set-ctrl wide_dynamic_range=1 -d /dev/v4l-subdev0
To disable the HDR mode, please type the following into a terminal window before starting Picamera2:
v4l2-ctl --set-ctrl wide_dynamic_range=0 -d /dev/v4l-subdev0
NOTE
The sensor may not always be v4l-subdev0 in which case you will have to discover the correct sub-device. On a Pi 5 in particular, the
sensor is more likely to be v4l-subdev2, though again this can vary. To find the correct one, you can use (for example) v4l2-ctl -d
/dev/v4l-subdev0 -l to list a device’s controls, and select the one that has a wide_dynamic_range control.
To turn on the HDR mode, use (assuming picam2 is your Picamera2 object)
import libcamera
picam2.set_controls({'HdrMode': libcamera.controls.HdrModeEnum.SingleExposure})
and to turn it off again:
picam2.set_controls({'HdrMode': libcamera.controls.HdrModeEnum.Off})
The ways in which the HDR mode can be configured are described in the Raspberry Pi Camera Tuning Guide, though this
is beyond the scope of this document.
Because this HDR mode relies on the Pi 5’s TDN (Temporal Denoise) function to accumulate images, it’s important to
skip a number of frames after a mode switch to allow this process to happen. When using the switch_mode_and_cap‐
ture family of methods, the delay parameter should therefore be specified for this purpose, for example
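# A sketch: skip 25 frames after the switch so temporal denoise can settle
# (the exact count is illustrative; capture_config is as created by
# create_still_configuration())
array = picam2.switch_mode_and_capture_array(capture_config, "main", delay=25)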
Table 3. Different image formats. For each format the table gives the bits per pixel, the optimal alignment, and the shape of an image as reported by numpy on an array obtained using capture_array (where supported).

XBGR8888 - 32 bits per pixel, optimal alignment 16, shape (height, width, 4). RGB format with an alpha channel. Each pixel is laid out as [R, G, B, A] where the A (or alpha) value is fixed at 255.

XRGB8888 - 32 bits per pixel, optimal alignment 16, shape (height, width, 4). RGB format with an alpha channel. Each pixel is laid out as [B, G, R, A] where the A (or alpha) value is fixed at 255.

BGR888 - 24 bits per pixel, optimal alignment 32, shape (height, width, 3). RGB format. Each pixel is laid out as [R, G, B].

RGB888 - 24 bits per pixel, optimal alignment 32, shape (height, width, 3). RGB format. Each pixel is laid out as [B, G, R].

YUV420 - 12 bits per pixel, optimal alignment 64, shape (height × 3/2, width). YUV 4:2:0 format. There are height rows of Y values, then height/2 rows of half-width U and height/2 rows of half-width V. The array form has two rows of U (or V) values on each row of the matrix.

YVU420 - 12 bits per pixel, optimal alignment 64, shape (height × 3/2, width). YUV 4:2:0 format. There are height rows of Y values, then height/2 rows of half-width V and height/2 rows of half-width U. The array form has two rows of V (or U) values on each row of the matrix.

YUYV - 16 bits per pixel, optimal alignment 32, array form not supported. YUV 4:2:2 format. A plane of height × stride interleaved values in the order Y U Y V for every two pixels.

YVYU - 16 bits per pixel, optimal alignment 32, array form not supported. YUV 4:2:2 format. A plane of height × stride interleaved values in the order Y V Y U for every two pixels.

UYVY - 16 bits per pixel, optimal alignment 32, array form not supported. YUV 4:2:2 format. A plane of height × stride interleaved values in the order U Y V Y for every two pixels.

VYUY - 16 bits per pixel, optimal alignment 32, array form not supported. YUV 4:2:2 format. A plane of height × stride interleaved values in the order V Y U Y for every two pixels.
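As a quick illustration of the array shapes above, a minimal sketch (the stream sizes are illustrative):

from picamera2 import Picamera2

picam2 = Picamera2()
picam2.configure(picam2.create_preview_configuration(
    main={"size": (640, 480), "format": "XBGR8888"},
    lores={"size": (320, 240), "format": "YUV420"}))
picam2.start()

rgb = picam2.capture_array("main")    # shape (480, 640, 4)
yuv = picam2.capture_array("lores")   # shape (360, 320): 240 rows of Y, then the U and V rows
picam2.stop()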
The final table lists the extent of support for each of these formats, both in Picamera2 and in some third party libraries.

Table 4. Support for different image formats.

Capture buffer: supported (Yes) for all of XRGB8888/XBGR8888, RGB888/BGR888, YUV420/YVU420, NV12/NV21, YUYV/UYVY and YVYU/VYUY.
"transform" Transform() The 2D plane transform that is applied to all images from all the configured
Transform(hflip=1) streams. The listed values represent, respectively, the identity transform, a hori‐
Transform(vflip=1) zontal mirror, a vertical flip and a 180 degree rotation. The default is always the
Transform(hflip=1, identity transform.
vflip=1)
"colour_space" Sycc() The colour space to be used for the main and lores streams. The allowed values
Smpte170m() are either the JPEG colour space (meaning sRGB primaries and transfer function
Rec709() and full-range BT.601 YCbCr encoding), the SMPTE 170M colour space or the
Rec.709 colour space.
For any raw stream, the colour space will always implicitly be the image sensor’s
native colour space.
"buffer_count" 1, 2, … The number of sets of buffers to allocate for requests, which becomes the num‐
ber of "request" objects that are available to the camera system. By default we
choose four for preview configurations, one for still capture configurations, and
six for video.
Increasing the number of buffers will tend to lead to fewer frame drops, although
this comes with diminishing returns. The maximum possible number of buffers
depends on the platform, the image resolution, the amount of CMA allocated.
"display" None The name of the stream that will be displayed in the preview window (if one is
"main" running). Normally the main stream will be displayed, though the lores stream
"lores" can be shown instead when it is defined. By default, create_still_configura‐
tion() will use the value None, as the buffers are typically very large and can lead
to memory fragmentation problems in some circumstances if the display stack
is holding on to them.
"encode" None The name of the stream that will be used for video recording. By default cre‐
"main" ate_video_configuration() will set this to the main stream, though the lores
"lores" stream can also be used if it is defined. For preview and still use cases the value
will be set to None. This value can always be overridden when an encoder is
started.
"sensor" Dictionary containing When present, this determines the chosen sensor mode, overriding any raw
"output_size" stream parameters (which will be adjusted to match the chosen sensor mode).
"bit_depth" This parameter is a dictionary containing the sensor "output_size", a (width,
height) tuple, and "bit_depth", which is the number of bits in each pixel sam‐
ple (10 or 12 for most Raspberry Pi cameras).
"controls" Please refer to the camera con‐ With this parameter we can specify a set of runtime controls that can be regard‐
trols section. ed as part of the camera configuration, and applied whenever the configuration is
(re-)applied. Different use cases may also supply some slightly different default
control values.
"size" (width, height) A tuple of two values giving the width and height of the output image. Both num‐
bers should be no less than 64.
For raw streams, the allowed resolutions are listed again by libcamera-hello -
-list-cameras, along with the correct format to use for that resolution. You can
pick different sizes, but the system will simply use whichever of the allowed val‐
ues it deems to be "closest".
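To illustrate several of these parameters together, a minimal sketch (the specific size and buffer count are illustrative choices, not recommendations):

from libcamera import Transform
from picamera2 import Picamera2

picam2 = Picamera2()
config = picam2.create_video_configuration(
    main={"size": (1280, 720)},            # "size" stream parameter
    transform=Transform(hflip=1),          # horizontal mirror
    buffer_count=8,                        # more buffers tend to mean fewer frame drops
    display="main",                        # stream shown in the preview window
    encode="main")                         # stream used for video recording
picam2.configure(config)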
Some camera controls have enum values, that is to say, an explicitly enumerated list of acceptable values. To use such
values, the correct method would be to import controls from libcamera and use code like the following:
from libcamera import controls
from picamera2 import Picamera2

picam2 = Picamera2()
picam2.set_controls({"AeMeteringMode": controls.AeMeteringModeEnum.Spot})
Some controls have limits that change with the camera configuration. In this case the valid range of values can be
queried from the camera_controls property, where each control has a (minimum_value, maximum_value, default_value)
triple. The controls specifically affected are: AnalogueGain, ExposureTime, FrameDurationLimits and ScalerCrop. For
example:
from picamera2 import Picamera2

picam2 = Picamera2()
picam2.configure(picam2.create_preview_configuration())
min_exp, max_exp, default_exp = picam2.camera_controls["ExposureTime"]
Note that controls also double up as values reported in captured image metadata. Some controls only appear in such
metadata, which means they cannot be set and are therefore read-only. We indicate this in the table below.
Table 7. Available camera controls.

"AeConstraintMode"
  Sets the constraint mode of the AEC/AGC algorithm.[1]
  Permitted values: AeConstraintModeEnum followed by one of: Normal (normal metering), Highlight (meter for highlights), Shadows (meter for shadows), Custom (user-defined metering).

"AeEnable"
  Allow the AEC/AGC algorithm to be turned on and off. When it is off, there will be no automatic updates to the camera’s gain or exposure settings.
  Permitted values: False (turn AEC/AGC off), True (turn AEC/AGC on).

"AeExposureMode"
  Sets the exposure mode of the AEC/AGC algorithm.[1]
  Permitted values: AeExposureModeEnum followed by one of: Normal (normal exposures), Short (use shorter exposures), Long (use longer exposures), Custom (use custom exposures).

"AeFlickerMode"
  Sets the flicker avoidance mode of the AEC/AGC algorithm.[1]
  Permitted values: AeFlickerModeEnum followed by one of: FlickerOff (no flicker avoidance), FlickerManual (flicker is avoided with a period set in the AeFlickerPeriod control).

"AeFlickerPeriod"
  Sets the lighting flicker period in microseconds.[1]
  Permitted values: the period of the lighting cycle in microseconds. For example, for 50Hz mains lighting the flicker occurs at 100Hz, so the period would be 10000 microseconds.

"AeMeteringMode"
  Sets the metering mode of the AEC/AGC algorithm.[1]
  Permitted values: AeMeteringModeEnum followed by one of: CentreWeighted (centre weighted metering), Spot (spot metering), Matrix (matrix metering), Custom (custom metering).

"AfPause"
  Pause continuous autofocus. Only has any effect when in continuous autofocus mode.
  Permitted values: AfPauseEnum followed by one of: Deferred (pause continuous autofocus when no longer scanning), Immediate (pause continuous autofocus immediately, even if scanning), Resume (resume continuous autofocus).

"AfTrigger"
  Start an autofocus cycle. Only has any effect when in auto mode.
  Permitted values: AfTriggerEnum followed by one of: Start (start the cycle), Cancel (cancel an in-progress cycle).

"AfWindows"
  Location of the windows in the image to use to measure focus.
  Permitted values: a list of rectangles (tuples of 4 numbers denoting x_offset, y_offset, width and height). The rectangle units refer to the maximum scaler crop window (please refer to the ScalerCropMaximum value in the camera_properties property).

"AnalogueGain"
  Analogue gain applied by the sensor.
  Permitted values: consult the camera_controls property.

"AwbEnable"
  Turn the auto white balance (AWB) algorithm on or off. When it is off, there will be no automatic updates to the colour gains.
  Permitted values: False (turn AWB off), True (turn AWB on).

"AwbMode"
  Sets the mode of the AWB algorithm.[2]
  Permitted values: AwbModeEnum followed by one of: Auto (any illuminant), Tungsten (tungsten lighting), Fluorescent (fluorescent lighting), Indoor (indoor illumination), Daylight (daylight illumination), Cloudy (cloudy illumination), Custom (custom setting).

"Brightness"
  Adjusts the image brightness, where -1.0 is very dark, 1.0 is very bright, and 0.0 is the default "normal" brightness.
  Permitted values: floating point number from -1.0 to 1.0.

"ColourCorrectionMatrix"
  The 3×3 matrix used within the image signal processor (ISP) to convert the raw camera colours to sRGB. This control appears only in captured image metadata and is read-only.
  Permitted values: tuple of nine floating point numbers between -16.0 and 16.0.

"ColourGains"
  Pair of numbers where the first is the red gain (the gain applied to red pixels by the AWB algorithm) and the second is the blue gain. Setting these numbers disables AWB.
  Permitted values: tuple of two floating point numbers between 0.0 and 32.0.

"Contrast"
  Sets the contrast of the image, where zero means "no contrast", 1.0 is the default "normal" contrast, and larger values increase the contrast proportionately.
  Permitted values: floating point number from 0.0 to 32.0.

"DigitalGain"
  The amount of digital gain applied to an image. Digital gain is used automatically when the sensor’s analogue gain control cannot go high enough, and so this value is only reported in captured image metadata. It cannot be set directly - users should set the AnalogueGain instead and digital gain will be used when needed.
  Permitted values: floating point number.

"ExposureTime"
  Exposure time for the sensor to use, measured in microseconds.
  Permitted values: consult the camera_controls property.

"ExposureValue"
  Exposure compensation value in "stops", which adjusts the target of the AEC/AGC algorithm. Positive values increase the target brightness, and negative values decrease it. Zero represents the base or "normal" exposure level.
  Permitted values: floating point number between -8.0 and 8.0.

"FrameDuration"
  The amount of time (in microseconds) since the previous camera frame. This value is only available in captured image metadata and is read-only. To change the camera’s framerate, the "FrameDurationLimits" control should be used.
  Permitted values: integer.

"FrameDurationLimits"
  The maximum and minimum time that the sensor can take to deliver a frame, measured in microseconds. So the reciprocals of these values (after first dividing them by 1000000) will give the minimum and maximum framerates that the sensor can deliver.
  Permitted values: consult the camera_controls property.

"HdrChannel"
  Reports which HDR channel the current frame represents. It is read-only and cannot be set.
  Permitted values: HdrChannelEnum followed by one of: HdrChannelNone (image not used for HDR), HdrChannelShort (a short exposure image for HDR), HdrChannelMedium (a medium exposure image for HDR), HdrChannelLong (a long exposure image for HDR).

"HdrMode"
  Whether to run the camera in an HDR mode (distinct from the in-camera HDR supported by the Camera Module 3). Most of these HDR features work only on Pi 5 or later devices.
  Permitted values: HdrModeEnum followed by one of: Off (disable HDR, the default), SingleExposure (combine multiple short exposure images, this is the recommended mode, Pi 5 only), MultiExposure (combine short and long images, only recommended when a scene is completely static, Pi 5 only), Night (an HDR mode that combines multiple low light images, and can recover some highlights, Pi 5 only), MultiExposureUnmerged (return unmerged distinct short and long exposure images).

"LensPosition"
  Position of the lens. The units are dioptres (reciprocal of the distance in metres).
  Permitted values: consult the camera_controls property.

"NoiseReductionMode"
  Selects a suitable noise reduction mode. Normally Picamera2’s configuration will select an appropriate mode automatically, so it should not normally be necessary to change it. The HighQuality noise reduction mode can be expected to affect the maximum achievable framerate.
  Permitted values: draft.NoiseReductionModeEnum followed by one of: Off (no noise reduction), Fast (fast noise reduction), HighQuality (best noise reduction).

"Saturation"
  Amount of colour saturation, where zero produces greyscale images, 1.0 represents default "normal" saturation, and higher values produce more saturated colours.
  Permitted values: floating point number from 0.0 to 32.0.

"ScalerCrop"
  The scaler crop rectangle determines which part of the image received from the sensor is cropped and then scaled to produce an output image of the correct size. It can be used to implement digital pan and zoom. The coordinates are always given from within the full sensor resolution.
  Permitted values: a libcamera.Rectangle consisting of x_offset, y_offset, width and height.

"SensorTimestamp"
  The time this frame was produced by the sensor, measured in nanoseconds since the system booted. The time is sampled on the camera start of frame interrupt, which occurs as the first pixel of the new frame is written out by the sensor. This control appears only in captured image metadata and is read-only.
  Permitted values: integer.

"SensorBlackLevels"
  The black levels of the raw sensor image. This control appears only in captured image metadata and is read-only. One value is reported for each of the four Bayer channels, scaled up as if the full pixel range were 16 bits (so 4096 represents a black level of 16 in 10-bit raw data).
  Permitted values: tuple of four integers.

"Sharpness"
  Sets the image sharpness, where zero implies no additional sharpening is performed, 1.0 is the default "normal" level of sharpening, and larger values apply proportionately stronger sharpening.
  Permitted values: floating point number from 0.0 to 16.0.
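As an example of setting one of these controls at runtime, a minimal digital zoom sketch using ScalerCrop (the 2× zoom factor is an illustrative choice):

from picamera2 import Picamera2

picam2 = Picamera2()
picam2.configure(picam2.create_preview_configuration())
picam2.start()

# Crop the central quarter of the available sensor area; the ISP scales it
# back up to the output size, giving roughly a 2x digital zoom.
x, y, w, h = picam2.camera_properties["ScalerCropMaximum"]
picam2.set_controls({"ScalerCrop": (x + w // 4, y + h // 4, w // 2, h // 2)})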
"ColourFilterArrange‐ A number representing the native Bayer order of sensor (before any rotation 0 - RGGB
ment" is taken into account). 1 - GRBG
2 - GBRG
3 - BGGR
4 - monochrome
"Location" An integer which specifies where on the device the camera is situated (for Integer
example, front or back). For the Raspberry Pi, the value has no meaning.
"PixelArrayActiveAreas" The active area of the sensor’s pixel array within the entire sensor pixel ar‐ Tuple of four integers
ray. Given as a tuple of (x_offset, y_offset, width, height) values.
"PixelArraySize" The size of the active pixel area as an (x, y) tuple. This is the full available Tuple of two integers
resolution of the sensor.
"Rotation" The rotation of the sensor relative to the camera board. On many Raspberry Integer
Pi devices, the sensor is actually upside down when the camera board is
held with the connector at the bottom, and these will return a value of 180°
here.
"ScalerCropMaximum" This value is updated when a camera mode is configured. It returns the rec‐ Tuple of 4 integers
tangle as a (x_offset, y_offset, width, height) tuple within the pixel
area active area, that is read out by this camera mode.
"SensorSensitivity" This value is updated when a camera mode is configured. It represents a Floating point number
relative sensitivity of this camera mode compared to other camera modes.
Usually, camera modes all have the same sensitivity so that the same expo‐
sure time and gain yield an image of the same brightness. Sometimes cam‐
eras have modes where this is not true, and to get the same brightness you
would have to adjust the total requested exposure by the ratio of these sen‐
sitivities. For most sensors this will always return 1.0.
"UnitCellSize" The physical size of this sensor’s pixels, if known. Given as an (x, y) tuple Tuple of two integers
in units of nanometres.
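These properties can be read from the camera_properties dictionary. A minimal sketch (the printed values will of course depend on the attached camera):

from picamera2 import Picamera2

picam2 = Picamera2()
props = picam2.camera_properties
print(props["PixelArraySize"])    # full sensor resolution as a (width, height) tuple
print(props.get("UnitCellSize"))  # physical pixel size in nanometres, if known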