iOS: Image filters using CoreImage and MetalKitView

Image filters are not only the most popular feature of image editing apps; they also appear in many social networking apps such as Instagram, Snapchat, and Facebook Messenger. As an iOS developer, you might want to offer this option in your own apps.

The most convenient approach is to create the filter with CIFilter and display the result in a UIImageView, but there is one big problem: it’s not fast. Given the size of images taken by iPhones (usually 12 MP), trying out several filters this way does not create a pleasant, “it just works” experience. Users don’t want to stare at a spinner for a simple filter; Instagram applies them instantly, so why shouldn’t your app?

For that, you need MetalKit instead of UIKit, which is much faster. In this tutorial, I will create a subclass of MTKView to display a Metal drawing, in this case a filtered image. That does not mean we cannot still show the final image in our lovely UIImageView.

The first step is to create a CIFilter, but before doing that let’s see what it is according to Apple:

An image processor that produces an image by manipulating one or more input images or by generating new image data. The CIFilter class produces a CIImage object as output. Typically, a filter takes one or more images as input. Some filters, however, generate an image based on other types of input parameters. The parameters of a CIFilter object are set and retrieved through the use of key-value pairs.
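
In plain Core Image, that key-value based API looks like the following minimal sketch, which uses the built-in CISepiaTone filter (the solid gray input image is just a stand-in for a real photo):

import CoreImage
import CoreGraphics

// A stand-in input image; in a real app this would be the user's photo.
let inputImage = CIImage(color: .gray).cropped(to: CGRect(x: 0, y: 0, width: 100, height: 100))

// Configure a built-in filter through key-value pairs.
let sepia = CIFilter(name: "CISepiaTone")
sepia?.setValue(inputImage, forKey: kCIInputImageKey)   // the image to process
sepia?.setValue(0.8, forKey: kCIInputIntensityKey)      // filter-specific parameter
let output = sepia?.outputImage                         // a lazily evaluated CIImage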

Let’s create a simple class to handle filters:

import Foundation
import CoreImage

enum CIFilterName: String, CaseIterable, Equatable {
    case CIPhotoEffectChrome = "CIPhotoEffectChrome"
    case CIPhotoEffectFade = "CIPhotoEffectFade"
    case CIPhotoEffectInstant = "CIPhotoEffectInstant"
    case CIPhotoEffectNoir = "CIPhotoEffectNoir"
    case CIPhotoEffectProcess = "CIPhotoEffectProcess"
    case CIPhotoEffectTonal = "CIPhotoEffectTonal"
    case CIPhotoEffectTransfer = "CIPhotoEffectTransfer"
    case CISepiaTone = "CISepiaTone"
}

class ImageFilters {
    private let context: CIContext
    private let image: CIImage
    
    init() {
        self.context = CIContext()
        self.image = CIImage()
    }
    
    init(image: CIImage, context: CIContext) {
        self.context = context
        self.image = image
    }
    
    /// Applies the given filter with its default settings and returns the result as a CIImage.
    func apply(filterName: CIFilterName) -> CIImage? {
        let filter = CIFilter(name: filterName.rawValue)
        filter?.setDefaults()
        filter?.setValue(self.image, forKey: kCIInputImageKey)
        // Some filters (e.g. CISepiaTone) accept extra parameters such as kCIInputIntensityKey.
        // filter?.setValue(0.5, forKey: kCIInputIntensityKey)
        return filter?.outputImage
    }
}

The code above produces a filtered image as a CIImage via its apply() function and supports eight filters with their default settings. To use it, we pass a CIImage and a CIContext to the initializer.
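
Because CIFilterName conforms to CaseIterable, the same image can also be run through every supported filter, for example to build the thumbnail strip of a filter picker. A quick sketch (the asset name "someImage" is an assumption for illustration; reuse one CIContext, since creating them is expensive):

import UIKit
import CoreImage

let context = CIContext() // create once and reuse
if let uiImage = UIImage(named: "someImage"), let ciImage = CIImage(image: uiImage) {
    let filters = ImageFilters(image: ciImage, context: context)
    // One filtered CIImage per case, ready to back a preview thumbnail.
    let previews: [(CIFilterName, CIImage)] = CIFilterName.allCases.compactMap { name in
        filters.apply(filterName: name).map { (name, $0) }
    }
    print("Generated \(previews.count) previews")
}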

Now we need a subclass of MTKView that can draw a CIImage into the view:

import UIKit
import MetalKit
import AVFoundation

class MetalKitView: MTKView {
    
    private var commandQueue: MTLCommandQueue?
    private var ciContext: CIContext?
    var mtlTexture: MTLTexture?
    
    required init(coder: NSCoder) {
        super.init(coder: coder)
        self.isOpaque = false               // don't fill empty space with black
        self.enableSetNeedsDisplay = true   // redraw when setNeedsDisplay() is called
    }
    
    /// Renders the given CIImage into the view's drawable, scaled to fit.
    func render(image: CIImage, context: CIContext, device: MTLDevice) {
        #if !targetEnvironment(simulator)
        self.ciContext = context
        self.device = device
        
        guard let drawable = self.currentDrawable else { return }
        
        // Scale the image to fit inside the drawable while keeping its aspect ratio.
        var rect = self.bounds
        rect.size = self.drawableSize
        rect = AVMakeRect(aspectRatio: image.extent.size, insideRect: rect)
        let scaledImage = image.transformed(by: CGAffineTransform(
            scaleX: rect.size.width / image.extent.size.width,
            y: rect.size.height / image.extent.size.height))
        let x = -rect.origin.x
        let y = -rect.origin.y
        
        self.commandQueue = device.makeCommandQueue()
        
        guard let buffer = self.commandQueue?.makeCommandBuffer() else { return }
        self.mtlTexture = drawable.texture
        context.render(scaledImage,
                       to: drawable.texture,
                       commandBuffer: buffer,
                       bounds: CGRect(origin: CGPoint(x: x, y: y), size: self.drawableSize),
                       colorSpace: CGColorSpaceCreateDeviceRGB())
        buffer.present(drawable)
        buffer.commit()
        #endif
    }
    
    /// Converts a Metal texture (e.g. the last rendered drawable) back into a UIImage.
    func getUIImage(texture: MTLTexture, context: CIContext) -> UIImage? {
        // Only CIImageOption keys are valid here; context-level options belong on the CIContext itself.
        let options: [CIImageOption: Any] = [.colorSpace: CGColorSpaceCreateDeviceRGB()]
        
        guard let ciImage = CIImage(mtlTexture: texture, options: options),
              let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else {
            return nil
        }
        // Metal textures are flipped vertically relative to UIKit's coordinate system.
        return UIImage(cgImage: cgImage, scale: 1.0, orientation: .downMirrored)
    }
    
}

Let’s talk about this new class in more detail.

Here, as in the previous class, we have a CIContext. In both classes it is injected rather than created internally, because creating one is quite expensive. Its main responsibility is compiling and running filters, whether on the CPU or the GPU. Next is an MTLCommandQueue property which, as the name suggests, queues an ordered list of command buffers for the Metal device (introduced below) to execute. Like CIContext, MTLCommandQueue is thread safe and allows multiple outstanding command buffers to be encoded simultaneously. Finally, there is an MTLTexture property: a memory allocation for storing formatted image data that is accessible to the GPU.

The required init(coder:) sets two properties of our custom view: isOpaque = false tells it not to render black for empty space, and enableSetNeedsDisplay = true makes it redraw in response to setNeedsDisplay().

The class renders a given CIImage through its render() function, which takes three arguments. We already know two of them, but MTLDevice is new; as you might have guessed, it defines the interface to the GPU. The body of the method makes sure there is a drawable to render into and applies a simple transform so the image fits the drawable area. To avoid compiler errors when building for the simulator, the body is wrapped in #if !targetEnvironment(simulator) ... #endif, because some of the Metal functionality used here is not available to the compiler for a simulator target.

The last method is straightforward: it converts the Metal texture back into a UIImage object.
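
One practical detail the class above does not set for you: if you plan to read the drawable’s texture back with getUIImage(texture:context:), the view’s framebufferOnly flag needs to be false, since a framebuffer-only drawable texture generally cannot be used as a Core Image render target or read back. A minimal configuration sketch, assuming an mtkView outlet wired up in a storyboard:

// In the owning view controller, e.g. in viewDidLoad().
mtkView.framebufferOnly = false  // let CIContext write to, and us read from, the drawable texture
mtkView.isPaused = true          // together with enableSetNeedsDisplay, draw only on setNeedsDisplay()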

The last step is to apply a filter to a UIImage and show the result on screen:

if let device = MTLCreateSystemDefaultDevice(),
   let uiImage = UIImage(named: "someImage"),
   let ciImage = CIImage(image: uiImage) {
    let context = CIContext(mtlDevice: device)
    let filter = ImageFilters(image: ciImage, context: context)
    if let filteredImage = filter.apply(filterName: .CIPhotoEffectNoir) {
        mtkView.render(image: filteredImage, context: context, device: device)
        mtkView.setNeedsDisplay()
    }
}
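
If you also need the filtered result back as a UIImage, for example to save it to the photo library, the view’s getUIImage(texture:context:) can be used once rendering has finished. A rough sketch reusing the names from the snippet above (note that GPU work is asynchronous, so reading back immediately after setNeedsDisplay() can return a stale frame; waiting for the command buffer to complete first is one way around that):

if let texture = mtkView.mtlTexture,
   let snapshot = mtkView.getUIImage(texture: texture, context: context) {
    // Save the filtered image to the photo library (requires the appropriate permission).
    UIImageWriteToSavedPhotosAlbum(snapshot, nil, nil, nil)
}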

These two classes let you apply image filters almost instantly, whether the image is small or large.

7 Comments

  • Daniel K
    5 years ago

    Thanks for the tutorial! Having an issue with the background though. I have a PNG with a transparent background, and when I render the image the background is now black. Any ideas?

    • admin
      5 years ago

      Since I lost the sample project files, I’d have to rebuild it. If you have yours ready, upload it somewhere (GitHub?) with your PNG image and I’ll take a look.

  • ethan
    4 years ago

    Thanks for the tutorial. I have an issue with getUIImage(): it always seems to return the previous image instead of the current one. Below is my code for getting the UIImage using your getUIImage(). Can you tell me how to solve it? Thanks, Ethan. Email: [email protected]

    if let device = MTLCreateSystemDefaultDevice() {
        let context = CIContext(mtlDevice: device)
        let texture = mtkView.currentDrawable?.texture

        guard let imageToSave = mtkView.getUIImage(texture: texture!, context: context) else {
            print("Image not found!")
            return
        }
        UIImageWriteToSavedPhotosAlbum(imageToSave, self, #selector(image(_:didFinishSavingWithError:contextInfo:)), nil)
    }

  • Brian W
    4 years ago

    When I try to run your code, I get an assertion failure in render() saying that currentDrawable!.texture is nil.
    It seems that currentDrawable is not being initialized?

    • admin
      4 years ago

      What is the configuration of the device you are running the code on?

  • ram
    4 years ago

    I’m trying to convert the contents of an MTKView to a UIImage. I used your `getUIImage()` function. I had to set the `framebufferOnly` property of MTKView to false; otherwise, it crashes inside the method.

    When I set the `framebufferOnly` property to false, I get a UIImage which is empty (like a transparent PNG with no content).

    Any help is appreciated.

