505 Live Photo Editing and Raw Processing With Core Image
© 2016 Apple Inc. All rights reserved. Redistribution or public display not permitted without written permission from Apple.
What You Will Learn Today
[Slide diagram: `image = image.applyingFilter(...)` chains filters. Core Image color-matches each input from its color space to the context working space, concatenates the filter programs (e.g. Sepia, Hue, Contrast fused into one program), and color-matches the result to the output/display space. Wide-gamut images and displays are common.]
New Built-In CIFilters NEW
CIHueSaturationValueGradient
CIEdgePreserveUpsampleFilter
New Performance Controls NEW
Metal on by default
Now UIImage(ciImage:) is much faster
Core Image supports input and output of half-float CGImageRefs
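The half-float CGImage support above can be sketched as follows; the color-space and format choices here are my assumptions for illustration, not the session's code:

```swift
import CoreImage

// Sketch: render a CIImage to a half-float (RGBAh) CGImage.
// Half-float pixels can represent wide-gamut values outside 0...1,
// so an extended-range color space is a natural pairing.
func halfFloatCGImage(from image: CIImage, context: CIContext) -> CGImage? {
    let cs = CGColorSpace(name: CGColorSpace.extendedLinearSRGB)!
    return context.createCGImage(image,
                                 from: image.extent,
                                 format: kCIFormatRGBAh,
                                 colorSpace: cs)
}
```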
CVPixelFormat: kCVPixelFormatType_30RGBLEPackedWideGamut — 4 bytes per pixel, 10 bits per channel, range –0.37 … 1.62, gamma-encoded
Adjusting RAW Images with Core Image
Adjusting RAW Images
What is a RAW file
Most cameras use a color filter array and a sensor array
[Slide diagram: light from the scene passes through the color filter array onto the sensor array]
Adjusting RAW Images
Advantages of RAW
• Contains linear and deep pixel data which enables great editability
• Image processing gets better every year
• Can be rendered to any color space
• Users can use different software to interpret the image
Adjusting RAW Images
Advantages of JPEG
Adjusting RAW Images
Using the CIRAWFilter API
User Adjustments
• Exposure
• Temperature, tint
• Noise reduction
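The CIRAWFilter code itself didn't survive extraction; below is a hedged sketch of the API pattern. The specific values, and using exactly these three keys, are illustrative assumptions:

```swift
import CoreImage

// Sketch (not the session's exact code): load a RAW file and apply
// the user adjustments listed above via the CIRAWFilter API.
func adjustedRAWImage(at url: URL) -> CIImage? {
    // CIFilter(imageURL:options:) returns a RAW filter for a RAW file URL
    guard let rawFilter = CIFilter(imageURL: url, options: nil) else { return nil }

    // Exposure (EV)
    rawFilter.setValue(0.5, forKey: kCIInputEVKey)
    // Temperature, tint (white balance)
    rawFilter.setValue(6500, forKey: kCIInputNeutralTemperatureKey)
    rawFilter.setValue(0, forKey: kCIInputNeutralTintKey)
    // Noise reduction
    rawFilter.setValue(0.5, forKey: kCIInputNoiseReductionAmountKey)

    return rawFilter.outputImage
}
```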
Output Image
(JPG, TIF,…)

// Saving a RAW to a JPEG or TIFF
// (contextForSaving is a CIContext and cs a destination CGColorSpace,
//  set up as in the share example that follows)
try contextForSaving.writeJPEGRepresentation(
    of: rawImage,
    to: jpegDestination,
    colorSpace: cs,
    options: [kCGImageDestinationLossyCompressionQuality: 1.0])
// Share a RAW to a JPEG or TIFF
// Useful if the receiver doesn't support color management
func share(from rawImage: CIImage,
           to jpegDestination: URL) throws
{
    let cs = CGColorSpace(name: CGColorSpace.displayP3)!
    try contextForSaving.writeJPEGRepresentation(
        of: rawImage,
        to: jpegDestination,
        colorSpace: cs,
        options: [kCGImageDestinationLossyCompressionQuality: 1.0,
                  kCGImageDestinationOptimizeColorForSharing: true])
}
// Saving a RAW to a CGImageRef
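The code for this slide is missing from the extraction; a hedged sketch using the deferred createCGImage variant mentioned under Saving RAW Images below (function name and parameter choices are my assumptions):

```swift
import CoreImage

// Sketch: render a RAW-derived CIImage to a CGImage.
// deferred: true postpones rendering until the CGImage is actually drawn,
// which lowers the memory high-water mark for large RAWs.
func makeCGImage(from rawImage: CIImage, context: CIContext) -> CGImage? {
    let cs = CGColorSpace(name: CGColorSpace.displayP3)!
    return context.createCGImage(rawImage,
                                 from: rawImage.extent,
                                 format: kCIFormatRGBAh,   // half-float output
                                 colorSpace: cs,
                                 deferred: true)
}
```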
Supporting wide gamut
Saving RAW Images
Warning: “Objects are larger than they appear”
RAW files can be very large and require several intermediate buffers to render
To reduce the memory high-water mark use these new APIs:
CIContext(options: [kCIContextCacheIntermediates: false])
context.writeJPEGRepresentationOfImage()
context.createCGImage(... deferred: true)
Editing Live Photos
Agenda
Introduction
What Can be Edited?
Obtaining a Live Photo for Editing
Setting Up a Live Photo Editing Context
Applying Core Image Filters
Previewing an Edited Live Photo
Saving to the PhotoLibrary

Demo
Live Photo
Introduction
Live Photo
What can be edited?
Photo
Video frames
Audio volume
Dimensions
Obtaining a Live Photo for Editing
Photo editing extension
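The setup code for the editing context didn't survive extraction; a minimal hedged sketch, assuming the standard photo-editing-extension entry point provides a PHContentEditingInput:

```swift
import Photos
import PhotosUI

// Sketch: create a Live Photo editing context from the input handed to
// a photo editing extension. The initializer is failable: it returns nil
// if the asset being edited is not a Live Photo.
func makeEditingContext(for input: PHContentEditingInput) -> PHLivePhotoEditingContext? {
    return PHLivePhotoEditingContext(livePhotoEditingInput: input)
}
```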
Working with the Frame Processor
PHLivePhotoFrame
Input image
Frame type
Frame time
Render scale

self.livePhotoEditingContext.frameProcessor = {
    (frame: PHLivePhotoFrame, error: NSErrorPointer) -> CIImage? in
    // Your adjustments go here...
    return frame.image
}
// Applying a static adjustment
self.livePhotoEditingContext.frameProcessor = {
    (frame: PHLivePhotoFrame, error: NSErrorPointer) -> CIImage? in
    var image = frame.image
    // Crop to square
    let extent = image.extent
    let size = min(extent.width, extent.height)
    let rect = CGRect(x: (extent.width - size) / 2, y: (extent.height - size) / 2,
                      width: size, height: size)
    image = image.cropping(to: rect)
    return image
}
// Applying a time-based adjustment
let tP = CMTimeGetSeconds(self.livePhotoEditingContext.photoTime)
let duration = CMTimeGetSeconds(self.livePhotoEditingContext.duration)
self.livePhotoEditingContext.frameProcessor = {
    (frame: PHLivePhotoFrame, error: NSErrorPointer) -> CIImage? in
    var image = frame.image
    let tF = CMTimeGetSeconds(frame.time)
    // Simple linear ramp function from (0, tP, duration) to (-1, 0, +1)
    let dt = (tF < tP) ? CGFloat((tF - tP) / tP) : CGFloat((tF - tP) / (duration - tP))
    // Animate crop rect (`rect` is the square crop rect from the previous example)
    image = image.cropping(to: rect.offsetBy(dx: dt * rect.minX, dy: dt * rect.minY))
    return image
}
// Applying a resolution-dependent adjustment
livePhotoEditingContext.frameProcessor = {
    (frame: PHLivePhotoFrame, error: NSErrorPointer) -> CIImage? in
    var image = frame.image
    // Apply screen effect
    let scale = frame.renderScale
    image = image.applyingFilter("CILineScreen", withInputParameters:
        [ "inputAngle" : 3 * Double.pi / 4,
          "inputWidth" : 50 * scale,
          "inputCenter" : CIVector(x: image.extent.midX, y: image.extent.midY)
        ])
    return image
}
// Applying an adjustment to the photo only
livePhotoEditingContext.frameProcessor = {
    (frame: PHLivePhotoFrame, error: NSErrorPointer) -> CIImage? in
    var image = frame.image
    // Add watermark to the photo only
    if frame.type == .photo {
        // Composite logo
        image = logo.applyingFilter("CILinearDodgeBlendMode",
                                    withInputParameters: ["inputBackgroundImage" : image])
    }
    return image
}
Previewing a Live Photo
PHLivePhotoView
[Slide diagram: adjustments — whether built-in filters, custom Metal code, or custom CPU code — are previewed live in a PHLivePhotoView]
// Applying a CIKernel in a CIFilter subclass
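The subclass code on this slide didn't survive extraction; below is a hedged sketch of the pattern. The filter name and the kernel itself (a trivial brightness scale) are illustrative, not the session's:

```swift
import CoreImage

// Sketch: a CIFilter subclass that applies a CIColorKernel written in the
// Core Image kernel language.
class MyBrightenFilter: CIFilter {
    var inputImage: CIImage?

    // Kernel source is compiled once and shared across instances
    static let kernel = CIColorKernel(string:
        "kernel vec4 brighten(__sample s) { return vec4(s.rgb * 1.2, s.a); }")!

    override var outputImage: CIImage? {
        guard let input = inputImage else { return nil }
        return MyBrightenFilter.kernel.apply(withExtent: input.extent,
                                             arguments: [input])
    }
}
```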
Using CIImageProcessor
Useful when you have an algorithm that isn’t suitable for CIKernel language
A good example of this is an integral image
• Each output pixel contains the sum of all input pixels above and to the left
• This cannot be calculated as a traditional data-parallel pixel shader

Using CIImageProcessor
What’s an integral image?
Input Image       Integral Image
1 4 5 3 2          1  5 10 13 15
0 2 4 6 3          1  7 16 25 30
3 7 8 2 1          4 17 34 45 51
6 8 3 4 7         10 31 51 66 79
7 2 1 0 3         17 40 61 76 92
// CIImageProcessor block of integral image
for j in 0..<outputHeight {
    for i in 0..<outputWidth {
        // ... compute value of output(i,j) from input(i,j,xShift,yShift)
    }
}
}  // closes the surrounding processor block (elided on the slide)
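Since the loop body is elided on the slide, here is a hedged, CPU-only sketch of the computation itself in plain Swift (no Core Image), using the running-sum recurrence I(i,j) = a(i,j) + I(i−1,j) + I(i,j−1) − I(i−1,j−1):

```swift
// Pure-Swift sketch of an integral image (summed-area table).
// Each output pixel holds the sum of all input pixels above and to the
// left, inclusive — the same table shown on the slides.
func integralImage(_ input: [[Int]]) -> [[Int]] {
    guard let w = input.first?.count else { return [] }
    let h = input.count
    var out = Array(repeating: Array(repeating: 0, count: w), count: h)
    for j in 0..<h {
        for i in 0..<w {
            let above = j > 0 ? out[j - 1][i] : 0
            let left  = i > 0 ? out[j][i - 1] : 0
            let diag  = (i > 0 && j > 0) ? out[j - 1][i - 1] : 0
            out[j][i] = input[j][i] + above + left - diag
        }
    }
    return out
}
```

Running this on the 5×5 input from the slides reproduces the integral table shown above.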
// CIImageProcessor block of integral image using MPS
// (kernel is an MPSImageIntegral; the textures and command buffer come
// from the processor's input and output objects)
kernel.encode(commandBuffer: output.metalCommandBuffer!,
              sourceTexture: input.metalTexture!,
              destinationTexture: output.metalTexture!)
}
Use Integral Image to Do Fast Variable Box Blur
How Can You Use an Integral Image
Very fast box sums
Input Image       Integral Image
1 4 5 3 2          1  5 10 13 15
0 2 4 6 3          1  7 16 25 30
3 7 8 2 1          4 17 34 45 51
6 8 3 4 7         10 31 51 66 79
7 2 1 0 3         17 40 61 76 92
Summing an n × n box directly takes n² reads; a separable row/column approach takes 2n reads; with the integral image it takes just 4 reads:
2 + 4 + 6 + 7 + 8 + 2 + 8 + 3 + 4 == 66 – 10 – 13 + 1
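The slide's identity can be checked mechanically; here is a hedged pure-Swift sketch (helper names are mine) of the 4-read box sum:

```swift
// Sum of the box [x0...x1] × [y0...y1] (inclusive) using 4 reads of the
// integral image I:
//   sum = I[y1][x1] − I[y0-1][x1] − I[y1][x0-1] + I[y0-1][x0-1]
// Reads outside the image (index −1) contribute 0.
func boxSum(_ integral: [[Int]], x0: Int, y0: Int, x1: Int, y1: Int) -> Int {
    func at(_ x: Int, _ y: Int) -> Int {
        return (x < 0 || y < 0) ? 0 : integral[y][x]
    }
    return at(x1, y1) - at(x1, y0 - 1) - at(x0 - 1, y1) + at(x0 - 1, y0 - 1)
}
```

On the slides' table, the 3×3 box at columns 1...3, rows 1...3 gives 66 − 13 − 10 + 1 = 44, matching the direct sum 2+4+6+7+8+2+8+3+4.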
// CIKernel box blur from integral image
// (kernel body elided on the slide; ul, ur, ll, lr are the four
// integral-image samples at the corners of the box, and the
// usedArea/originalArea factor normalizes boxes clipped at the edges)
return ( ul + lr - ur - ll ) * usedArea / originalArea;
}
// CIKernel variable box blur from integral image and mask
let maskImage =
CIFilter(name: "CIRadialGradient",
withInputParameters: [
"inputCenter": centerOfEffect,
"inputRadius0": innerRadius,
"inputRadius1": outerRadius,
"inputColor0": CIColor.black(),
"inputColor1": CIColor.white()
])?.outputImage
Using CIImageProcessor
Tips and tricks
If your processor:
• Wants data in a color space other than the context working space,
- Call CIImage.byColorMatchingWorkingSpace(to: CGColorSpace)
on the processor input
• Returns data in a color space other than the context working space,
- Call CIImage.byColorMatchingColorSpace(toWorking: CGColorSpace)
on the processor output
You can see how your processor fits into a full-render graph
by running with the CI_PRINT_TREE environment variable
// Example log with CI_PRINT_TREE=1
programs graph render_to_display (metal context 1 frame 1 tile 1) roi=[0 0 1532 1032] =
program affine(clamp_to_alpha(premultiply(linear_to_srgb(
unpremultiply(color_matrix_3x3(variableBoxBlur(0,1))))))) rois=[0 0 1532 1032]
program RGBAf processor integralImage 0x12345678 () rois=[-1 -1 1502 1002]
program clamp(affine(srgb_to_linear())) rois=[-1 -1 1502 1002]
IOSurface BGRA8 1500x1000 alpha_one edge_clamp rois=[0 0 1500 1000]
program _radialGradient() rois=[0 0 1500 1000]
// Example log with CI_PRINT_TREE=“8 graphviz”
programs graph
render_to_surface (metal context 2 frame 1 tile 1) rois=[0 0 736 414]
  {46} program affine [1 0 0 -1 0 414] clamp_to_alpha premultiply linear_to_srgb
       unpremultiply kernel variableBoxBlur
       rois=[0 0 736 414] extent=[0 0 736 414]
    {44} program RGBAf processor IntegralImage: 0x12345678
         rois=[0 0 736 414] extent=[0 0 736 414]
      {42} program affine [1 0 0 -1 0 414]
           rois=[0 0 736 414] extent=[infinite][0 0 736 414]
        {32} IOSurface 0x170000d60 BGRA 735x414 edge_clamp
             rois=[0 0 736 414] extent=[infinite][0 0 736 414]
    {45} program colorkernel _radialGradient
         rois=[0 0 736 414] extent=[infinite]
The Core Image Book Club Recommends
What You Learned Today
https://developer.apple.com/wwdc16/505
Related Sessions
Live Photo and Core Image Lab Graphics, Games, and Media Lab C Thursday 1:30PM
Live Photo and Core Image Lab Graphics, Games, and Media Lab D Friday 9:00AM