Yet Another Covid-19 App

In the past few weeks I’ve had a lot of free time, and thought to myself: why not another Covid-19 app? Only for Canada. You can get it from here

I tried to practice writing clean, readable and modular code; I hope I achieved those goals. Three external libraries are used in the app: CSV to decode CSV data, Charts, and MBProgressHUD to show an activity indicator (I really could have avoided this one and used the native one).

Data which feeds the app comes from a website, and the main challenge was that the input data were not consistent, from the URL down to the data structure. To fix the first problem I added a helper function that retrieves the file from an alternative URL when the primary one fails to fetch the file; see getAlternativeAddress() in the DeliveryManager class. To overcome the data-structure inconsistency I declared most of the columns as optional strings and, through two failable initializers, converted them to Int and Double.
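As a sketch of that second fix (the type and column names below are made up for illustration; the app’s real CSV types differ), a failable initializer can reject any row whose optional string columns don’t convert:

```swift
// Illustrative only: RawRow stands in for a decoded CSV row whose columns
// arrived as optional strings; the failable initializer produces a typed
// record, or nil when a required field is missing or not numeric.
struct RawRow {
    let date: String?
    let confirmed: String?
    let deaths: String?
}

struct CovidRecord {
    let date: String
    let confirmed: Int
    let deaths: Int

    init?(raw: RawRow) {
        // Fail if any required column is absent or fails Int conversion.
        guard let date =,
              let confirmedText = raw.confirmed, let confirmed = Int(confirmedText),
              let deathsText = raw.deaths, let deaths = Int(deathsText) else {
            return nil
        } = date
        self.confirmed = confirmed
        self.deaths = deaths
    }
}
```

Rows with a value like "n/a" in a numeric column simply come back as nil and can be filtered out with compactMap.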

Another thing I practiced was using custom UIViews; it is a good way to modularize the code and avoid fat view controllers.

KeyPath was another thing I used here. I guess most of us don’t bother with them, but they are really cool. Imagine you have a structure which consists of several Int properties and you want to manipulate some of the numbers and show different output for different purposes. In my case I wanted to feed the chart with two different sets of data, one for confirmed cases and one for the number of deaths. I could create two almost identical functions to generate the required data, or one generic function which knows which element of the structure to use. KeyPath was the silver bullet here:

func setChartData(data: [CSVDecodable], keyPath: KeyPath<CSVDecodable, Int>){}

Now I can call it like setChartData(data: data, keyPath: \.confirmed) or setChartData(data: data, keyPath: \.deaths) and it knows how to generate the data set.
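Here is a self-contained sketch of the same pattern (DailyStat stands in for the app’s actual CSVDecodable type):

```swift
// One function serves both data sets because the key path
// tells it which Int property of the struct to read.
struct DailyStat {
    let confirmed: Int
    let deaths: Int
}

func values(from data: [DailyStat], keyPath: KeyPath<DailyStat, Int>) -> [Int] {
    return { $0[keyPath: keyPath] }
}

let stats = [DailyStat(confirmed: 10, deaths: 1), DailyStat(confirmed: 25, deaths: 2)]
let confirmed = values(from: stats, keyPath: \.confirmed) // [10, 25]
let deaths = values(from: stats, keyPath: \.deaths)       // [1, 2]
```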

You can download the project from

Snap to edge for UIView

For Tahrir I wanted to add a new feature in the next major update. I thought implementing snap-to-edge might help users align texts with each other; on a small screen and with fingers it would be difficult without it.

The general idea is to store the edges (frame.origin) of all existing views, then calculate the distance from the edges of the view the user is currently moving; if the distance is smaller than a threshold, you activate the snap.

The snap works, as you have surely noticed in apps like Instagram, like this: when the view gets near an edge it grips the edge until you push it further to release the grip and move freely again. Sounds simple, right?
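Stripped of UIKit, the grip-and-release behaviour is just a threshold comparison; here is a minimal sketch (function and parameter names are mine, not the app’s):

```swift
// Given the edge coordinate being dragged and the stored edges of the other
// views, return the snapped coordinate while within the threshold,
// or nil to let the view move freely.
func snappedPosition(for dragged: Double, otherEdges: [Double], threshold: Double = 5) -> Double? {
    for edge in otherEdges where abs(dragged - edge) < threshold {
        return edge // grip: pin the view to the nearby edge
    }
    return nil      // release: no edge within the threshold
}

// While |dragged - edge| < threshold the view sticks to the edge;
// pushing past the threshold releases it.
let gripped = snappedPosition(for: 103.0, otherEdges: [100.0, 200.0]) // 100.0
let free = snappedPosition(for: 120.0, otherEdges: [100.0, 200.0])    // nil
```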

I post pieces of the code here and you can get the rest from github.

var squareItems = [Int: CGPoint]()

This variable keeps a record of the edges of the existing views. The key is the hash value of the view, which is unique per view. We update it every time a new view is added or when the user finishes moving a view.

Let’s calculate the distances:

for item in squareItems {
    if item.key != senderView.hash {
        let leftDistance = abs(senderView.frame.origin.x + translation.x - item.value.x)
        if leftDistance < 5 {
            snapOnLeftEdge = true
            shouldSnap = true
            horizontalDifference = senderView.frame.origin.x + translation.x - item.value.x
        }
        let topDistance = abs(senderView.frame.origin.y + translation.y - item.value.y)
        if topDistance < 5 {
            snapOnTopEdge = true
            shouldSnap = true
            verticalDifference = senderView.frame.origin.y + translation.y - item.value.y
        }
    }
}

leftDistance and topDistance do the trick: they implement the gripping part.

And here the trick is applied; it also shows a guide line on the screen and triggers a light haptic:

if snapOnLeftEdge {
    var _frame = senderView.frame
    _frame.origin.x = _frame.origin.x + translation.x - horizontalDifference
    _frame.origin.y = _frame.origin.y + translation.y
    senderView.frame = _frame
    if feedbackIsAllowed {
        feedbackIsAllowed = false
        let generator = UIImpactFeedbackGenerator(style: .light)
        generator.impactOccurred()
        if let _rulersGuid = setupRulerGuidesView(.horizontal(_frame.origin.x)) {
            self.view.addSubview(_rulersGuid)
            _rulersGuid.layer.zPosition = 2
        }
    }
} else if snapOnTopEdge {
    var _frame = senderView.frame
    _frame.origin.x = _frame.origin.x + translation.x
    _frame.origin.y = _frame.origin.y + translation.y - verticalDifference
    senderView.frame = _frame
    if feedbackIsAllowed {
        feedbackIsAllowed = false
        let generator = UIImpactFeedbackGenerator(style: .light)
        generator.impactOccurred()
        if let _rulersGuid = setupRulerGuidesView(.vertical(_frame.origin.y)) {
            self.view.addSubview(_rulersGuid)
            _rulersGuid.layer.zPosition = 2
        }
    }
}

I think you have grasped the idea of how the snap works; you can download the whole code, as an app, from the github repo.

Data entry with Table View and custom Table View Cells

I wanted to write about creating an inspectable custom UITableViewCell, but thought it might be better to build a use case for it: data entry.

There is one big advantage to using UITableViewController over rolling your own UIScrollView and a few more views: you don’t have to deal with Auto Layout and keyboard notifications to align your inputs; you get that for free.

Let’s assume we have a fixed number of input fields, which makes the job easier and, for the sake of simplicity, allows us to use a static table. Now we can create a reusable UITableViewCell which could become a handy asset for later projects too.

We can call the new custom cell DataEntryCell. In Xcode go to File > New > File and choose Cocoa Touch Class. Type DataEntryCell as the class name, choose UITableViewCell as the subclass, and check Also create XIB file.

Now we have two new files, a .xib file and a .swift file, and we have to set up the cell. First of all, in the Identity inspector remove the existing class name:

Note: make sure both .swift and .xib file have the same name.

Now from Document Outline, click on File’s Owner, open the Identity inspector and enter the class name there:

Now add a label and a text box and align them as you wish.

If you open DataEntryCell.swift, there are two functions prepared for you, but we don’t need them since we are not using this class conventionally. Remove setSelected and add this snippet:

private var view: UIView!

public required init?(coder aDecoder: NSCoder) {
    super.init(coder: aDecoder)
    setup()
}

public override init(style: UITableViewCell.CellStyle, reuseIdentifier: String?) {
    super.init(style: style, reuseIdentifier: reuseIdentifier)
    setup()
}

private func setup() {
    let bundle = Bundle(for: type(of: self))
    let nib = UINib(nibName: String(describing: type(of: self)), bundle: bundle)
    let view = nib.instantiate(withOwner: self, options: nil).first as! UIView
    view.frame = bounds
    view.autoresizingMask = [.flexibleWidth, .flexibleHeight]
    addSubview(view)
    self.view = view
    configureUI()
}

These initializers let us treat our custom class like a default iOS class; we no longer need to register the nib.

Now we can customize the cell according to our requirements: adding validation, a custom keyboard type, etc. You can see a simple implementation of validation and keyboard customization in the completed project files.

In the interface builder you can see the live preview of the custom cell:

After wiring up the cells we are almost done. isValid can be used to validate fields individually and take appropriate measures to tell the user what’s wrong with their input; here the label turns red in case of an error.
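One possible shape for that validation, stripped of UIKit (the names FieldValidator, nonEmpty, and numeric are illustrative, not necessarily what the project uses):

```swift
import Foundation

// Each DataEntryCell could own one of these and expose its isValid by
// running the rule against the text field's current text; on failure
// the cell would tint its label red.
struct FieldValidator {
    let rule: (String) -> Bool

    static let nonEmpty = FieldValidator(rule: { !$0.trimmingCharacters(in: .whitespaces).isEmpty })
    static let numeric = FieldValidator(rule: { Int($0) != nil })

    func isValid(_ text: String?) -> Bool {
        return rule(text ?? "")
    }
}
```

Keeping the rule as a closure means one cell class can serve name, age, email, and so on without subclassing.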

Get the source code from github.

Adaptive Image Viewer using UIScrollView

Having an image viewer within your app is a common, basic feature. For one of my apps I needed one and found a tutorial on the Ray Wenderlich website, but it was not exactly what I wanted: you could zoom the image in and out, but the image stuck to the top-left corner of the screen, and I wanted it centered. Also, isn’t it nicer for such a simple UI to do everything in code? With no storyboard constraints for just two elements, the code is easier to maintain in the future.

let scrollView = UIScrollView()
let imageView: UIImageView = {
    let _imageView = UIImageView()
    _imageView.translatesAutoresizingMaskIntoConstraints = false
    return _imageView
}()

override func viewDidLoad() {
    super.viewDidLoad()
    setupScrollView()
    setupViews()
}

override func viewWillLayoutSubviews() {
    super.viewWillLayoutSubviews()
    if let image = UIImage(named: "image.jpg") {
        scrollView.contentSize = image.size
        imageView.image = image
        let minZoom = min(self.view.bounds.size.width / image.size.width,
                          self.view.bounds.size.height / image.size.height)
        self.scrollView.minimumZoomScale = minZoom
        DispatchQueue.main.asyncAfter(deadline: .now() + 0.3) {
            let vertical = (self.view.bounds.size.height - (image.size.height * minZoom)) / 2
            self.scrollView.contentInset = UIEdgeInsets(top: vertical, left: 0, bottom: vertical, right: 0)
            self.scrollView.setZoomScale(minZoom, animated: true)
        }
    }
}

func setupScrollView() {
    scrollView.translatesAutoresizingMaskIntoConstraints = false
    imageView.translatesAutoresizingMaskIntoConstraints = false
    scrollView.delegate = self
    view.addSubview(scrollView)
    scrollView.centerXAnchor.constraint(equalTo: view.centerXAnchor).isActive = true
    scrollView.widthAnchor.constraint(equalTo: view.widthAnchor).isActive = true
    scrollView.heightAnchor.constraint(equalTo: view.heightAnchor).isActive = true
    scrollView.centerYAnchor.constraint(equalTo: view.centerYAnchor).isActive = true
    scrollView.minimumZoomScale = 0.2
}

func setupViews() {
    scrollView.addSubview(imageView)
}

func centerContent() {
    var top: CGFloat = 0
    var left: CGFloat = 0
    if scrollView.contentSize.width < scrollView.bounds.size.width {
        left = (scrollView.bounds.size.width - scrollView.contentSize.width) * 0.5
    }
    if scrollView.contentSize.height < scrollView.bounds.size.height {
        top = (scrollView.bounds.size.height - scrollView.contentSize.height) * 0.5
    }
    scrollView.contentInset = UIEdgeInsets(top: top, left: left, bottom: top, right: left)
}

The last step is conforming to UIScrollViewDelegate protocol:

func viewForZooming(in scrollView: UIScrollView) -> UIView? {
    return imageView
}

func scrollViewDidZoom(_ scrollView: UIScrollView) {
    centerContent()
}

You can download the project from github.

Swift Operation and OperationQueue

This is based on Operation and OperationQueue Tutorial in Swift article with some modifications.

Let’s start with this: your client wants an app which downloads images from the internet, applies some filters, then shares them with other apps or saves them to the camera roll. The easy way would be downloading them one by one and then going back and applying the filter(s) one by one again: a bit painful. The other option is OperationQueue, which makes things a lot easier.

This is what happens in this scenario: you create two queues, one for downloading images and the other for applying a filter to them. In the next step you add operations to the queues respectively. When the first queue, the download queue, finishes its job, it sends a notification to inform the system it’s done with downloading; then the second queue is filled with new operations to apply the filter to the images, and another notification is sent when the filter queue is finished.
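The flow above can be sketched in miniature like this (a toy stand-in: string transforms replace the real download and filter work, and blocking waits replace the KVO notifications used later in the post):

```swift
import Foundation

// Shared result store, mutated by one serial queue at a time.
final class Box {
    var items = [String]()
}
let box = Box()

let downloadQueue = OperationQueue() = "Download Queue"
downloadQueue.maxConcurrentOperationCount = 1  // keep the toy example deterministic

let filterQueue = OperationQueue() = "Filter Queue"
filterQueue.maxConcurrentOperationCount = 1

// Stage 1: pretend to download three images.
for i in 1...3 {
    downloadQueue.addOperation { box.items.append("image\(i)") }
}
// Block until stage 1 finishes (the real app observes the queue instead of blocking)…
downloadQueue.waitUntilAllOperationsAreFinished()

// …then fill the second queue with filter work.
for index in box.items.indices {
    filterQueue.addOperation { box.items[index] += "+filter" }
}
filterQueue.waitUntilAllOperationsAreFinished()

print(box.items) // ["image1+filter", "image2+filter", "image3+filter"]
```

With maxConcurrentOperationCount left at its default, the operations in each queue would run concurrently, which is exactly what you want for real downloads.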

Apple defines the Operation class as:

An abstract class that represents the code and data associated with a single task.

Before we proceed, it should be noted that an Operation can be either synchronous or asynchronous. By default operations are synchronous, but since our task, downloading from an internet location, is async, we need to subclass Operation, override isAsynchronous to return true, and make a few more modifications.

class AsyncOperation: Operation {
    override var isAsynchronous: Bool { return true }

    private let _queue = DispatchQueue(label: "asyncOperationQueue", attributes: .concurrent)

    private var _isExecuting: Bool = false
    override var isExecuting: Bool {
        set {
            willChangeValue(forKey: "isExecuting")
            _queue.async(flags: .barrier) { self._isExecuting = newValue }
            didChangeValue(forKey: "isExecuting")
        }
        get {
            return _queue.sync { _isExecuting }
        }
    }

    private var _isFinished: Bool = false
    override var isFinished: Bool {
        set {
            willChangeValue(forKey: "isFinished")
            _queue.async(flags: .barrier) { self._isFinished = newValue }
            didChangeValue(forKey: "isFinished")
        }
        get {
            return _queue.sync { _isFinished }
        }
    }
}

We need to override the isExecuting and isFinished properties so the OperationQueue knows when the operation is finished. Now AsyncOperation can be used as the parent class for our Operation subclasses. What we need next is an Operation subclass which can asynchronously download an image from a URL. But before continuing our journey to the operations, let’s add another class which holds the needed data for our images, called PhotoRecord. This class needs three properties: the URL of an image, the downloaded image, and a property to keep track of its state:

enum OperationState {
    case new, downloading, downloaded, filtered, failed
}

class PhotoRecord {
    let url: URL
    var image: UIImage? = nil
    var state: OperationState = .new

    init(url: URL) {
        self.url = url
    }
}

Going back to operations, this subclass is responsible for downloading the image from a URL. For the download itself a helper function is used, for the sake of simplicity. When you subclass Operation, the main function is called when the operation starts executing. Notice how isExecuting and isFinished are used: main first checks that the operation is not cancelled; if not, it tells the parent queue that the operation has started, and when the download, which is asynchronous, finishes, it tells the queue it’s done.

class DownloadOperation: AsyncOperation {
    let photoRecord: PhotoRecord

    init(_ photoRecord: PhotoRecord) {
        self.photoRecord = photoRecord
    }

    override func main() {
        if isCancelled { return }
        isExecuting = true
        isFinished = false
        downloader(url: photoRecord.url) { (result) in
            switch result {
            case .failure:
                self.photoRecord.state = .failed
            case .success(let image):
                self.photoRecord.state = .downloaded
                self.photoRecord.image = image
            }
            self.isExecuting = false
            self.isFinished = true
        }
    }
}

Similar to the DownloadOperation, we have another operation which applies a monochrome effect to the images:

class ImageFilterOperation: AsyncOperation {
    let photoRecord: PhotoRecord

    init(_ photoRecord: PhotoRecord) {
        self.photoRecord = photoRecord
    }

    override func main() {
        if isCancelled { return }
        isExecuting = true
        isFinished = false
        guard let currentCGImage = photoRecord.image?.cgImage else {
            self.photoRecord.state = .failed
            self.isExecuting = false
            self.isFinished = true
            return
        }
        let currentCIImage = CIImage(cgImage: currentCGImage)
        let filter = CIFilter(name: "CIColorMonochrome")
        filter?.setValue(currentCIImage, forKey: "inputImage")
        filter?.setValue(CIColor(red: 0.65, green: 0.65, blue: 0.65), forKey: "inputColor")
        filter?.setValue(1.0, forKey: "inputIntensity")
        guard let outputImage = filter?.outputImage else { return }
        let ciContext = CIContext()
        if let cgimg = ciContext.createCGImage(outputImage, from: outputImage.extent) {
            let processedImage = UIImage(cgImage: cgimg)
            self.photoRecord.image = processedImage
            self.photoRecord.state = .filtered
            self.isExecuting = false
            self.isFinished = true
        } else {
            self.photoRecord.state = .failed
            self.isExecuting = false
            self.isFinished = true
        }
    }
}

Now that we are done with the operations, it’s time to add them to the queues. For each group of actions we need a separate queue, plus a dictionary to keep track of the operations in progress. Let’s wrap these in a PendingOperations class.

class PendingOperations {
    lazy var downloadInProgress: [Int: Operation] = [:]
    lazy var downloadQueue: OperationQueue = {
        var queue = OperationQueue() = "Download Queue"
        return queue
    }()

    lazy var filteringInProgress: [Int: Operation] = [:]
    lazy var filterQueue: OperationQueue = {
        var queue = OperationQueue() = "Filter Queue"
        return queue
    }()
}

“A queue that regulates the execution of operations,” say the Apple docs. OperationQueue inherits from NSObject, therefore it’s a KVO-compliant class, which helps us know the current state of the queue.

With this last morsel, we are done with the logic of the app. Now we have to put together the pieces.

An operation starts running once it’s added to a queue, so we need to create our DownloadOperations and then add them to the download queue. Now you can see how that state property is useful: when we initialize a PhotoRecord, its state is .new, so we can track the state and run the appropriate operation. The whole code is not posted here, to keep it readable, but you can download it from github.

var photos = [PhotoRecord]()
var listOfImages = [URL]()

listOfImages.append(URL.init(string: "")!)
listOfImages.append(URL.init(string: "")!)
listOfImages.append(URL.init(string: "")!)

for item in listOfImages {
    let photo = PhotoRecord(url: item)
    photos.append(photo)
}

The next step is creating a DownloadOperation for each PhotoRecord, adding them to the download queue and waiting for the queue to finish. Afterwards, we create the filter queue, add ImageFilterOperation objects to it and wait for it to finish its job. After this last step, we will have images fetched from URLs and modified with a monochrome filter, ready to be saved to the gallery, shared with other apps or simply shown on the screen.

func runQueues() {
    for (index, item) in photos.enumerated() {
        startOperations(for: item, at: index)
    }
}

func startOperations(for photoRecord: PhotoRecord, at index: Int) {
    switch photoRecord.state {
    case .new:
        startRetrieving(for: photoRecord, at: index)
    case .downloaded:
        startApplyingFilter(for: photoRecord, at: index)
    default:
        break
    }
}

func startRetrieving(for photoRecord: PhotoRecord, at index: Int) {
    guard pendingOperations.downloadInProgress[index] == nil else { return }
    let download = DownloadOperation(photoRecord)
    download.completionBlock = {
        if download.isCancelled { return }
        DispatchQueue.main.async {
            self.pendingOperations.downloadInProgress.removeValue(forKey: index)
        }
    }
    pendingOperations.downloadInProgress[index] = download
    pendingOperations.downloadQueue.addOperation(download)
}

func startApplyingFilter(for photoRecord: PhotoRecord, at index: Int) {
    guard pendingOperations.filteringInProgress[index] == nil else { return }
    let filter = ImageFilterOperation(photoRecord)
    filter.completionBlock = {
        if filter.isCancelled { return }
        DispatchQueue.main.async {
            self.pendingOperations.filteringInProgress.removeValue(forKey: index)
        }
    }
    pendingOperations.filteringInProgress[index] = filter
    pendingOperations.filterQueue.addOperation(filter)
}

We call runQueues() and the whole process begins as described. But one important thing is missing: how do we know when a queue finishes its job? The answer is observing the appropriate key path, named "operations", on the sender object. To observe it, we first need to register an observer; in our case, two:

pendingOperations.downloadQueue.addObserver(self, forKeyPath: "operations", options: .new, context: nil)
pendingOperations.filterQueue.addObserver(self, forKeyPath: "operations", options: .new, context: nil)

Then override observeValue(forKeyPath:of:change:context:) and listen for the right key path and object:

override func observeValue(forKeyPath keyPath: String?, of object: Any?,
                           change: [NSKeyValueChangeKey: Any]?, context: UnsafeMutableRawPointer?) {
    if object as? OperationQueue == pendingOperations.downloadQueue && keyPath == "operations" {
        if self.pendingOperations.downloadQueue.operations.isEmpty {
            pendingOperations.downloadQueue.removeObserver(self, forKeyPath: "operations")
            pendingOperations.filterQueue.addObserver(self, forKeyPath: "operations", options: .new, context: nil)
            self.runQueues()
        }
    } else if object as? OperationQueue == pendingOperations.filterQueue && keyPath == "operations" {
        if self.pendingOperations.filterQueue.operations.isEmpty {
            pendingOperations.filterQueue.removeObserver(self, forKeyPath: "operations")
        }
    } else {
        super.observeValue(forKeyPath: keyPath, of: object, change: change, context: context)
    }
}

Here is the link to the complete project hosted on github.

iOS: Fix label cuts for custom fonts

Has it ever occurred to you that a custom font does not look as expected in your app, like some parts of it being cut off? Look at this example: it is supposed to read “Blog” but all you see is “Bl”:

Or this one in Farsi (and Arabic), which is expected to read “کریم” but the last two characters are cut off completely:

The code to create it is pretty simple. I have used a third-party library, FontBlaster (available on github), to load custom fonts.

let label = UILabel()
let font = UIFont(name: "BleedingCowboys", size: 60)! // We are in debug mode, right?
label.font = font
label.backgroundColor = UIColor.yellow
label.text = "Blog"
let size = label.sizeThatFits(CGSize(width: CGFloat.greatestFiniteMagnitude,
                                     height: CGFloat.greatestFiniteMagnitude))
label.frame.size = size

It seems sizeThatFits(_:) cannot determine the size correctly for all fonts. To fix this, I found an extension to UIBezierPath which returns a CGPath for an attributed string; you can find it here. This is how you can get the path:

let line = CAShapeLayer()
line.path = UIBezierPath(forMultilineAttributedString: mutableAttributedString,
                         maxWidth: CGFloat.greatestFiniteMagnitude).cgPath
line.bounds = (line.path?.boundingBox)!

// We are going to need it later
let sizeFromPath = CGSize(width: (line.path?.boundingBoxOfPath.width)!,
                          height: (line.path?.boundingBoxOfPath.height)!)

UIBezierPath(forMultilineAttributedString:maxWidth:) comes from the extension I mentioned above. Now we can determine the actual size of the label frame; let’s see it in action:

It’s still not exactly what we want, the size seems to be correct but the left inset is not. To solve this last problem, let’s create a custom UILabel class which can set custom inset while drawing the label:

import Foundation
import UIKit

class CustomLabel: UILabel {
    var textInsets = {
        didSet { invalidateIntrinsicContentSize() }
    }

    override func textRect(forBounds bounds: CGRect,
                           limitedToNumberOfLines numberOfLines: Int) -> CGRect {
        let insetRect = bounds.inset(by: textInsets)
        let textRect = super.textRect(forBounds: insetRect, limitedToNumberOfLines: numberOfLines)
        let invertedInsets = UIEdgeInsets(top:,
                                          left: -textInsets.left,
                                          bottom: -textInsets.bottom,
                                          right: -textInsets.right)
        return textRect.inset(by: invertedInsets)
    }

    override func drawText(in rect: CGRect) {
        super.drawText(in: rect.inset(by: textInsets))
    }
}

How many points should we add to the left inset? The difference between the actual width and the width from sizeThatFits. First we need to replace the line in which we declared the label: instead of UILabel, we use CustomLabel. Then:

label.textInsets = UIEdgeInsets(top: 0, left: sizeFromPath.width - size.width, bottom: 0, right: 0)

Let’s see the final result:

Nice, yeah? The thing is, you might not need the inset for all troublesome fonts; check it yourself.

Fibonacci Sequence with Swift

The aim of this post is to show how to conform to the Sequence protocol and create a custom sequence. To make it less boring, we will create a new type called FibonacciSequence which takes the desired number of elements and then iterates over the values.

The requirement for conforming to the Sequence protocol is fairly simple: you need to provide a makeIterator() method that returns an iterator. The code should look like this:

struct FibsSequence: Sequence {
    private var upTo: Int

    init(upTo: Int) {
        self.upTo = upTo
    }

    func makeIterator() -> FibsIterator {
        return FibsIterator(upTo: upTo)
    }
}

Our makeIterator() returns another custom type, one which conforms to IteratorProtocol. This protocol obliges the conformer to supply the values of the sequence one at a time. The only method that must be implemented is next(), which advances to the next element and returns it, or returns nil if no next element exists:

struct FibsIterator: IteratorProtocol {
    private var state: (UInt, UInt) = (0, 1)
    private var upTo: Int
    private var counter = 0

    init(upTo: Int) {
        self.upTo = upTo
    }

    mutating func next() -> UInt? {
        guard upTo > counter else { return nil }
        guard upTo > 0 else { return nil }
        let upcomingNumber = state.0
        state = (state.1, state.0 + state.1)
        counter += 1
        return upcomingNumber
    }
}

In this type we have three private variables, which are clearly named, and a mutating function. Marking next() as mutating is required here because it updates the iterator’s state on every call. The function returns an optional UInt: optional because at some point we want to exit, unless we deliberately want the code to crash on overflow!

There are two exit points in the function. The first checks the element count: if we have already produced as many elements as the user asked for, it quits. The other makes sure the requested count is a positive number.
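As an aside (my addition, not part of the original post): next() can also be written so the iterator ends cleanly when the next Fibonacci number no longer fits in UInt, instead of trapping on overflow, using addingReportingOverflow:

```swift
// Overflow-safe variant (my addition): the state becomes nil once computing
// the next pair would overflow UInt, ending the sequence instead of crashing.
struct SafeFibsIterator: IteratorProtocol {
    private var state: (UInt, UInt)? = (0, 1)

    mutating func next() -> UInt? {
        guard let (a, b) = state else { return nil }
        let (sum, overflowed) = a.addingReportingOverflow(b)
        state = overflowed ? nil : (b, sum)
        return a
    }
}

var safe = SafeFibsIterator()
// First values: 0, 1, 1, 2, 3, 5, 8, …
```

With this variant there is no upTo counter at all; the sequence is simply finite because UInt is.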

Now we have a custom type which returns Fibonacci sequence up to a certain index in the series, indicated during initialization.

for (index, fib) in FibsSequence(upTo: 15).enumerated() {
    print("fib: \(fib), index: \(index + 1)")
}

fib: 0, index: 1
fib: 1, index: 2
fib: 1, index: 3
fib: 2, index: 4
fib: 3, index: 5
fib: 5, index: 6
fib: 8, index: 7
fib: 13, index: 8
fib: 21, index: 9
fib: 34, index: 10
fib: 55, index: 11
fib: 89, index: 12
fib: 144, index: 13
fib: 233, index: 14
fib: 377, index: 15

Whole code is available on github.


After posting this on reddit, a user, Nobody_1707, suggested a shorter version of the code:

public struct FibSequence: Sequence, IteratorProtocol {
    private var state: (UInt, UInt) = (0, 1)

    public init() { }

    public mutating func next() -> UInt? {
        guard state.1 >= state.0 else { return nil }
        defer { state = (state.1, state.0 &+ state.1) }
        return state.0
    }
}

for (i, fib) in zip(0..., FibSequence().prefix(15)) {
    print("fib(\(i)) = \(fib)")
}

iOS: Image filters using CoreImage and MetalKitView

Image filters are not only the most popular feature of image editing apps; they also appear in many social networking apps such as Instagram, Snapchat, and Facebook Messenger. As an iOS developer, you might like to include such an option in your apps.

The most convenient method would be using CIFilter to create the filter and a UIImageView to show the result, but there is one big problem: it’s not fast. Given the size of images taken by iPhones (usually 12 MP), trying several filters in a row would not create the pleasant, “it just works” experience users expect; they don’t want to watch a spinner for a simple filter. Instagram does it instantly, why not your app?

For this, you need MetalKit, which is much faster than going through UIKit. In this tutorial I will create a subclass of MTKView to display a Metal drawing, a filtered image in this case. That does not mean we cannot still show the final image in our lovely UIImageView.

The first step is to create a CIFilter, but before doing that let’s see what it is according to Apple:

An image processor that produces an image by manipulating one or more input images or by generating new image data. The CIFilter class produces a CIImage object as output. Typically, a filter takes one or more images as input. Some filters, however, generate an image based on other types of input parameters. The parameters of a CIFilter object are set and retrieved through the use of key-value pairs.

Let’s create a simple class to handle filters:

import Foundation
import CoreImage

enum CIFilterName: String, CaseIterable, Equatable {
    case CIPhotoEffectChrome = "CIPhotoEffectChrome"
    case CIPhotoEffectFade = "CIPhotoEffectFade"
    case CIPhotoEffectInstant = "CIPhotoEffectInstant"
    case CIPhotoEffectNoir = "CIPhotoEffectNoir"
    case CIPhotoEffectProcess = "CIPhotoEffectProcess"
    case CIPhotoEffectTonal = "CIPhotoEffectTonal"
    case CIPhotoEffectTransfer = "CIPhotoEffectTransfer"
    case CISepiaTone = "CISepiaTone"
}

class ImageFilters {
    private var context: CIContext
    private let image: CIImage

    init() {
        self.context = CIContext()
        self.image = CIImage()
    }

    init(image: CIImage, context: CIContext) {
        self.context = context
        self.image = image
    }

    func apply(filterName: CIFilterName) -> CIImage? {
        let filter = CIFilter(name: filterName.rawValue)

        filter?.setValue(self.image, forKey: kCIInputImageKey)
        //filter?.setValue(Double(0.5), forKey: kCIInputIntensityKey)
        return filter?.outputImage
    }
}

The above code creates a filtered image in CIImage format through its apply() function and supports 8 filters with their default settings. To use it, we pass a CIImage and a CIContext to the initializer.

Now we need a subclass of MTKView which can draw a CIImage into an MTKView:

import UIKit
import MetalKit
import AVFoundation

class MetalKitView: MTKView {
    private var commandQueue: MTLCommandQueue?
    private var ciContext: CIContext?
    var mtlTexture: MTLTexture?

    required init(coder: NSCoder) {
        super.init(coder: coder)
        self.isOpaque = false
        self.enableSetNeedsDisplay = true
    }

    func render(image: CIImage, context: CIContext, device: MTLDevice) {
        #if !targetEnvironment(simulator)
        self.ciContext = context
        self.device = device
        var size = self.bounds
        size.size = self.drawableSize
        size = AVMakeRect(aspectRatio: image.extent.size, insideRect: size)
        let filteredImage = image.transformed(by: CGAffineTransform(
            scaleX: size.size.width / image.extent.size.width,
            y: size.size.height / image.extent.size.height))
        let x = -size.origin.x
        let y = -size.origin.y
        self.commandQueue = device.makeCommandQueue()
        let buffer = self.commandQueue!.makeCommandBuffer()!
        self.mtlTexture = self.currentDrawable!.texture
        self.ciContext!.render(filteredImage,
                               to: self.currentDrawable!.texture,
                               commandBuffer: buffer,
                               bounds: CGRect(origin: CGPoint(x: x, y: y), size: self.drawableSize),
                               colorSpace: CGColorSpaceCreateDeviceRGB())
        buffer.present(self.currentDrawable!)
        buffer.commit()
        #endif
    }

    func getUIImage(texture: MTLTexture, context: CIContext) -> UIImage? {
        let kciOptions: [CIImageOption: Any] = [.colorSpace: CGColorSpaceCreateDeviceRGB()]
        if let ciImageFromTexture = CIImage(mtlTexture: texture, options: kciOptions),
           let cgImage = context.createCGImage(ciImageFromTexture, from: ciImageFromTexture.extent) {
            return UIImage(cgImage: cgImage, scale: 1.0, orientation: .downMirrored)
        }
        return nil
    }
}

Let’s talk about this new class in more detail. Here, as in the previous class, we have a CIContext. In both classes it is injected, because creating one is quite an expensive operation. Its main responsibility is compiling and running the filters, whether on the CPU or the GPU.

Next is an MTLCommandQueue property which, as the name suggests, queues an ordered list of command buffers for the Metal device (introduced below) to execute. MTLCommandQueue, like CIContext, is thread safe and allows multiple outstanding command buffers to be encoded simultaneously.

Finally, we have an MTLTexture property, a memory allocation for storing formatted image data that is accessible to the GPU.

The required init sets two properties of our custom class: isOpaque = false tells it not to render a black color for empty space, and enableSetNeedsDisplay = true asks it to respond to setNeedsDisplay().

This class renders the given CIImage via the render() function, which takes three arguments. We already know two of them, but MTLDevice is new: as you might have guessed, it defines the interface to the GPU. The body of this method applies a simple transform to make the image fit the drawable area. To avoid a compiler error when building for the simulator, we enclose the body within #if !targetEnvironment(simulator) / #endif, because with a simulator target the device’s type is unknown to the compiler. The last method is straightforward: it converts the texture back to a UIImage object.

The last step is applying a filter to a UIImage and showing it on the screen:

if let device = MTLCreateSystemDefaultDevice(), let uiImage = UIImage(named: "someImage") {
    if let ciImage = CIImage(image: uiImage) {
        let context = CIContext(mtlDevice: device)
        let filter = ImageFilters(image: ciImage, context: context)
        let filteredImage = filter.apply(filterName: .CIPhotoEffectNoir)
        mtkView.render(image: filteredImage, context: context, device: device)
    }
}

These two classes help you to apply image filters almost instantly to images, whether small or large. 

Swift: An app to search through movie titles using The Open Movie Database API

This demo app uses the OMDb API to search movie titles and show details of the selected movie. First of all, you need to get your API key; it’s free. All right, this is the plan:

  • Create a class to make network requests using URLSession
  • Test the class
  • Create the UI

NetworkService will handle the network requests. This is a singleton class which uses the built-in URLSession with a handful of configuration options. An extension to this class contains the methods required to search. Since I created this as a reusable utility class, there are some extra features which won’t be used in this tutorial; maybe you’ll need them later.

import UIKit

class NetworkService {
    //MARK: - Internal structs
    private struct authParameters {
        struct Keys {
            static let accept = "Accept"
            static let apiKey = "apikey"
        }
        struct Values {
            static let apiKey = "YOURKEY"
        }
    }
    //An NSCache object to cache images, if necessary
    private let cache = NSCache<NSString, UIImage>()
    //Default session configuration
    private let urlSessionConfig = URLSessionConfiguration.default
    //Additional headers, such as an authentication token, go here
    private func configSession(){
        self.urlSessionConfig.httpAdditionalHeaders = [
            AnyHashable(authParameters.Keys.accept): MIMETypes.json.rawValue
        ]
    }
    private static var sharedInstance: NetworkService = {
        let service = NetworkService()
        service.configSession()
        return service
    }()

    //MARK: - Public APIs
    class func shared() -> NetworkService {
        return sharedInstance
    }

    //MARK: - Private APIs
    private func createAuthParameters(with parameters: [String:String]) -> Data? {
        guard parameters.count > 0 else { return nil }
        return { "\($0.key)=\($0.value)" }.joined(separator: "&").data(using: .utf8)
    }
}

This is the skeleton of our class. The shared() function returns a static instance of the class after running the internal configSession() function. The authParameters structure stores keys and values for authentication, just to keep the code tidy. Now we can create an instance of the class with let networkService = NetworkService.shared().
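The body-encoding helper can be tried in isolation. Here is a minimal, self-contained sketch; encodeBodyParameters is a hypothetical standalone name mirroring the private createAuthParameters(with:) above:

```swift
import Foundation

// Standalone sketch of the body-encoding idea: joins a [String: String]
// dictionary into a "key=value&key=value" string and returns it as UTF-8 Data.
func encodeBodyParameters(_ parameters: [String: String]) -> Data? {
    guard !parameters.isEmpty else { return nil }
    return parameters
        .map { "\($0.key)=\($0.value)" }
        .joined(separator: "&")
        .data(using: .utf8)
}

let body = encodeBodyParameters(["apikey": "YOURKEY"])!
print(String(data: body, encoding: .utf8)!) // prints "apikey=YOURKEY"
```

With more than one key the order depends on the dictionary, which is fine for a query-style body where order doesn’t matter.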

Now we need another method to make network requests:

    private func request(url: String,
                 cachePolicy: URLRequest.CachePolicy = .reloadRevalidatingCacheData,
                 httpMethod: RequestType,
                 headers: [String:String]? = nil,
                 body: [String:String]?,
                 parameters: [URLQueryItem]?,
                 useSharedSession: Bool = false,
                 handler: @escaping (Data?, URLResponse?, Int?, Error?) -> Void){
        if var urlComponent = URLComponents(string: url) {
            urlComponent.queryItems = parameters
            var session = URLSession(configuration: urlSessionConfig)
            if useSharedSession {
                session = URLSession.shared
            }
            if let _url = urlComponent.url {
                var request = URLRequest(url: _url)
                request.cachePolicy = cachePolicy
                request.allHTTPHeaderFields = headers
                if let _body = body {
                    request.httpBody = createAuthParameters(with: _body)
                }
                request.httpMethod = httpMethod.rawValue
                session.dataTask(with: request) { (data, response, error) in
                    let httpResponseStatusCode = (response as? HTTPURLResponse)?.statusCode
                    handler(data, response, httpResponseStatusCode, error)
                }.resume()
            } else {
                handler(nil, nil, nil, Failure.invalidURL)
            }
        } else {
            handler(nil, nil, nil, Failure.invalidURL)
        }
    }

This method provides the basic functionality of a request session: taking parameters and headers, then asynchronously returning the response, data, status code and an error object if one exists. For the HTTP body, you can replace createAuthParameters(with:) with an array of URLQueryItems.
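The URL-building part of the method can be exercised on its own; note that api.example.com below is just a placeholder host for illustration:

```swift
import Foundation

// How URLComponents + URLQueryItem assemble the final request URL,
// the same way the request method above does (host is a placeholder).
var components = URLComponents(string: "https://api.example.com")!
components.queryItems = [
    URLQueryItem(name: "apikey", value: "YOURKEY"),
    URLQueryItem(name: "s", value: "batman")
]
print(components.url!.absoluteString)
// prints "https://api.example.com?apikey=YOURKEY&s=batman"
```

URLComponents takes care of percent-encoding query values, so you never have to escape them by hand.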

For errors, I’m using an enum that conforms to the Error protocol. Here it is:

import Foundation

public enum Failure: Error {
    case invalidURL
    case invalidSearchParameters
    case invalidResults(String)
    case invalidStatusCode(Int?)
}

extension Failure: LocalizedError {
    public var errorDescription: String? {
        switch self {
        case .invalidURL:
            return NSLocalizedString("The requested URL is invalid.", comment: "")
        case .invalidSearchParameters:
            return NSLocalizedString("The URL parameters are invalid.", comment: "")
        case .invalidResults(let message):
            return NSLocalizedString(message, comment: "")
        case .invalidStatusCode(let code):
            return NSLocalizedString("Invalid HTTP status code:\(code ?? -1)", comment: "")
        }
    }
}

OMDb works with simple queries; it doesn’t have many endpoints. But to keep everything structured, let’s create another enum:

import Foundation

enum EndPoints {
    case Search
}

extension EndPoints {
    var path: String {
        let baseURL = ""
        struct Section {
            static let search = "/?"
        }
        switch self {
        case .Search:
            return "\(baseURL)\(Section.search)"
        }
    }
}

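A self-contained mini version shows how a case resolves to its path; DemoEndPoints and the example.com base URL are stand-ins, since the real base URL is left blank above:

```swift
// Mini version of the EndPoints idea: each case maps to a path string.
enum DemoEndPoints {
    case Search

    var path: String {
        let baseURL = "https://example.com"   // stand-in base URL
        switch self {
        case .Search:
            return "\(baseURL)/?"
        }
    }
}

print(DemoEndPoints.Search.path) // prints "https://example.com/?"
```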

The next step is testing:

import XCTest
@testable import OpenMovie

class OpenMovieTests: XCTestCase {
    private let networkService = NetworkService.shared()

    func testSearch() {
        let promise = expectation(description: "Search for batman movies")
        networkService.search(for: "batman", page: 1) { (searchObject, error) in
            XCTAssertTrue(searchObject?.response == "True")
            promise.fulfill()
        }
        waitForExpectations(timeout: 2, handler: nil)
    }

    func testSearchByIMDBID() {
        let promise = expectation(description: "Search for Batman: The Dark Knight Returns")
        networkService.getMovie(with: "tt2313197") { (movieObject, error) in
            XCTAssertTrue(movieObject?.imdbID == "tt2313197")
            promise.fulfill()
        }
        waitForExpectations(timeout: 2, handler: nil)
    }
}

And the results:

Now the last step. Isn’t it better to look at it yourself? It’s a TL;DR sort of thing: download it from GitHub and give it a try.

This is how it looks:

iOS: Show activity indicator in UISearchBar

Updated for iOS 13 and Swift 5

When you are using a search controller, there is most probably a network call behind it. Isn’t it nice to show a tiny loading animation instead of the magnifier icon on the left side of the search bar while your app is still waiting for data from the internet? Indeed.

To do this, you can either use a private API, which is frowned upon, or use UITextField‘s built-in method setImage(). I’ll go for the latter.

First of all, let’s see how it looks:

To show the loading animation we have to:

  1. find the search bar within the search controller
  2. remove the magnifier icon
  3. add a UIActivityIndicatorView to the search bar’s leftView

And hiding it follows almost the same steps, but in reverse.

To make the code reusable, let’s do it as an extension:

import Foundation
import UIKit

extension UISearchBar {

First of all, we need to find the text field, which is basically a UITextField within UISearchBar‘s subviews. This computed property does the job for us and returns an optional UITextField:

private var textField: UITextField? {
    let subViews = self.subviews.flatMap { $0.subviews }
    if #available(iOS 13, *) {
        if let _subViews = subViews.last?.subviews {
            return (_subViews.filter { $0 is UITextField }).first as? UITextField
        }
        return nil
    } else {
        return (subViews.filter { $0 is UITextField }).first as? UITextField
    }
}
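The flatMap-then-filter pattern above can be tried outside UIKit with a plain class hierarchy; Node, LeafA and LeafB are made-up names for illustration:

```swift
// Demonstrating the flatMap + filter + first-as? pattern from the
// computed property, using plain classes instead of UIKit views.
class Node {
    var children: [Node] = []
}
class LeafA: Node {}
class LeafB: Node {}

let root = Node()
let mid = Node()
mid.children = [LeafA(), LeafB()]
root.children = [mid]

// Flatten one level of children, then pick the first LeafB.
let grandchildren = root.children.flatMap { $0.children }
let firstB = (grandchildren.filter { $0 is LeafB }).first as? LeafB
print(firstB != nil) // prints "true"
```

The same idea scales to any one-level-deep view hierarchy: flatten, filter by type, downcast the first match.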

The next step is finding the current magnifier image, it’s similar to the previous step:

    private var searchIcon: UIImage? {
        let subViews = subviews.flatMap { $0.subviews }
        return ((subViews.filter { $0 is UIImageView }).first as? UIImageView)?.image
    }

Let’s get hold of our loading animation, which is an UIActivityIndicatorView:

    private var activityIndicator: UIActivityIndicatorView? {
        return textField?.leftView?.subviews.compactMap { $0 as? UIActivityIndicatorView }.first
    }

Now let’s add a public variable to show/hide the loading animation:

    var isLoading: Bool {
        get {
            return activityIndicator != nil
        } set {
            let _searchIcon = searchIcon
            if newValue {
                if activityIndicator == nil {
                    let _activityIndicator = UIActivityIndicatorView(style: .gray)
                    _activityIndicator.startAnimating()
                    _activityIndicator.backgroundColor = UIColor.clear
                    let clearImage = UIImage().imageWithPixelSize(size: CGSize(width: 14, height: 14)) ?? UIImage()
                    self.setImage(clearImage, for: .search, state: .normal)
                    textField?.leftViewMode = .always
                    let leftViewSize = CGSize(width: 14.0, height: 14.0)
                    let leftView = UIView(frame: CGRect(origin: .zero, size: leftViewSize))
                    _activityIndicator.center = CGPoint(x: leftViewSize.width / 2, y: leftViewSize.height / 2)
                    leftView.addSubview(_activityIndicator)
                    textField?.leftView = leftView
                }
            } else {
                activityIndicator?.removeFromSuperview()
                self.setImage(_searchIcon, for: .search, state: .normal)
            }
        }
    }

This piece of code looks for an existing activity indicator; if there is none, it first creates one, clears the default image for search mode (the magnifier image), sets a new transparent image 14 points by 14 points in size, then adds the freshly created UIActivityIndicatorView to the search bar’s leftView. To hide it, it simply removes the UIActivityIndicatorView and sets back the image for search mode.

To use it, simply set searchController.searchBar.isLoading = true, but be careful: if you are using it inside another block, call it within the main queue.

You can download the whole code from GitHub. The UIImage extension, imageWithPixelSize(), is also available on GitHub.