
Adaptive Image Viewer using UIScrollView

Having an image viewer within your app is a common feature. For one of my apps I needed one and found a tutorial on the Ray Wenderlich website, but it was not exactly what I wanted: you could zoom the image in and out, but the image stuck to the top left corner of the screen, and I wanted it centered. Also, isn't it nicer to do everything in code for such a simple UI? With no storyboard constraints to deal with and only two elements, the code is easier to maintain in the future.

// These members live in a UIViewController subclass (see the full project linked below).
let scrollView = UIScrollView()
let imageView: UIImageView = {
    let _imageView = UIImageView()
    _imageView.translatesAutoresizingMaskIntoConstraints = false
    return _imageView
}()

override func viewDidLoad() {
    super.viewDidLoad()
    setupScrollView()
    setupViews()
}

override func viewWillLayoutSubviews() {
    super.viewWillLayoutSubviews()
    if let image = UIImage(named: "image.jpg") {
        scrollView.contentSize = image.size
        imageView.image = image
        let minZoom = min(self.view.bounds.size.width / image.size.width,
                          self.view.bounds.size.height / image.size.height)
        self.scrollView.minimumZoomScale = minZoom
        DispatchQueue.main.asyncAfter(deadline: .now() + 0.3) {
            let vertical = (self.view.bounds.size.height - (image.size.height * minZoom)) / 2
            self.scrollView.contentInset = UIEdgeInsets(top: vertical, left: 0, bottom: vertical, right: 0)
            self.scrollView.setZoomScale(minZoom, animated: true)
        }
    }
}

func setupScrollView() {
    scrollView.translatesAutoresizingMaskIntoConstraints = false
    imageView.translatesAutoresizingMaskIntoConstraints = false
    scrollView.delegate = self
    view.addSubview(scrollView)
    scrollView.centerXAnchor.constraint(equalTo: view.centerXAnchor).isActive = true
    scrollView.widthAnchor.constraint(equalTo: view.widthAnchor).isActive = true
    scrollView.heightAnchor.constraint(equalTo: view.heightAnchor).isActive = true
    scrollView.centerYAnchor.constraint(equalTo: view.centerYAnchor).isActive = true
    scrollView.minimumZoomScale = 0.2
}

func setupViews() {
    scrollView.addSubview(imageView)
}

func centerContent() {
    var top: CGFloat = 0
    var left: CGFloat = 0
    if scrollView.contentSize.width < scrollView.bounds.size.width {
        left = (scrollView.bounds.size.width - scrollView.contentSize.width) * 0.5
    }
    if scrollView.contentSize.height < scrollView.bounds.size.height {
        top = (scrollView.bounds.size.height - scrollView.contentSize.height) * 0.5
    }
    scrollView.contentInset = UIEdgeInsets(top: top, left: left, bottom: top, right: left)
}

The last step is conforming to the UIScrollViewDelegate protocol:

func viewForZooming(in scrollView: UIScrollView) -> UIView? {
    return imageView
}

func scrollViewDidZoom(_ scrollView: UIScrollView) {
    centerContent()
}

You can download the project from GitHub.

Swift Operation and OperationQueue

This is based on the Operation and OperationQueue Tutorial in Swift article, with some modifications.

Let's start with this: your client wants an app which downloads images from the internet, applies some filters to them, then shares them with other apps or saves them to the camera roll. The naive way would be downloading them one by one, then going back and applying the filter(s) one by one again, which is a bit painful. The other option is using OperationQueue, which makes things a lot easier.

This is what happens in this scenario: you create two queues, one for downloading images and the other for applying a filter to them; in the next step you add operations to the queues respectively. When the first queue, the download queue, finishes its job, it sends a notification to inform the system it's done with downloading; then the second queue is filled with new operations to apply the filter to the images, and another notification is sent when the filter queue is finished.

Apple defines the Operation class as:

An abstract class that represents the code and data associated with a single task.

Before we proceed, it should be noted that an Operation can be either synchronous or asynchronous. By default operations are synchronous, but since our task, downloading from an internet location, is async, we need to subclass Operation, override isAsynchronous to return true, and make a few more modifications.

class AsyncOperation: Operation {
    override var isAsynchronous: Bool {
        return true
    }

    private let _queue = DispatchQueue(label: "asyncOperationQueue", attributes: .concurrent)

    private var _isExecuting: Bool = false
    override var isExecuting: Bool {
        set {
            willChangeValue(forKey: "isExecuting")
            _queue.async(flags: .barrier) {
                self._isExecuting = newValue
            }
            didChangeValue(forKey: "isExecuting")
        }
        get {
            // Read through the queue so the getter is synchronized with the barrier writes.
            return _queue.sync { _isExecuting }
        }
    }

    private var _isFinished: Bool = false
    override var isFinished: Bool {
        set {
            willChangeValue(forKey: "isFinished")
            _queue.async(flags: .barrier) {
                self._isFinished = newValue
            }
            didChangeValue(forKey: "isFinished")
        }
        get {
            return _queue.sync { _isFinished }
        }
    }
}

We override the isExecuting and isFinished properties so the OperationQueue knows when the operation is finished. Now AsyncOperation can be used as the parent class for our Operation subclasses. What we need next is an Operation subclass that can asynchronously download an image from a URL. Before continuing our journey to the operations, let's add another class which holds the data we need for our images, and call it PhotoRecord. This class needs three properties: the URL of an image, the downloaded image, and a property to keep track of its state:

enum OperationState {
    case new, downloading, downloaded, filtered, failed
}

class PhotoRecord {
    let url: URL
    var image: UIImage? = nil
    var state = OperationState.new

    init(url: URL) {
        self.url = url
    }
}

Going back to the operations, this subclass is responsible for downloading the image from a URL. For the download itself a helper function is used, for the sake of simplicity. When you subclass Operation, the main() function is called when the operation starts executing. Notice how isExecuting and isFinished are used: main() first checks that the operation has not been cancelled; if not, it tells the parent queue that the operation has started, and when the download, which is asynchronous, finishes, it tells the queue it's done.

class DownloadOperation: AsyncOperation {
    let photoRecord: PhotoRecord

    init(_ photoRecord: PhotoRecord) {
        self.photoRecord = photoRecord
    }

    override func main() {
        if isCancelled { return }
        isExecuting = true
        isFinished = false
        downloader(url: photoRecord.url) { (result) in
            switch result {
            case .failure:
                self.photoRecord.state = .failed
            case .success(let image):
                self.photoRecord.state = .downloaded
                self.photoRecord.image = image
            }
            self.isExecuting = false
            self.isFinished = true
        }
    }
}
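The downloader(url:completion:) helper isn't shown in the post (it lives in the linked project). A minimal sketch of it with URLSession, assuming the Result-based signature implied by the call above, could look like this:

func downloader(url: URL, completion: @escaping (Result<UIImage, Error>) -> Void) {
    // Fire a plain data task and hand back a UIImage on success.
    URLSession.shared.dataTask(with: url) { data, _, error in
        if let data = data, let image = UIImage(data: data) {
            completion(.success(image))
        } else {
            completion(.failure(error ?? URLError(.cannotDecodeContentData)))
        }
    }.resume()
}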

Similar to the DownloadOperation, we have another operation which applies a monochrome effect to the images:

class ImageFilterOperation: AsyncOperation {
    let photoRecord: PhotoRecord

    init(_ photoRecord: PhotoRecord) {
        self.photoRecord = photoRecord
    }

    override func main() {
        if isCancelled { return }
        isExecuting = true
        isFinished = false

        guard let currentCGImage = photoRecord.image?.cgImage else {
            self.photoRecord.state = .failed
            self.isExecuting = false
            self.isFinished = true
            return
        }

        let currentCIImage = CIImage(cgImage: currentCGImage)
        let filter = CIFilter(name: "CIColorMonochrome")
        filter?.setValue(currentCIImage, forKey: "inputImage")
        filter?.setValue(CIColor(red: 0.65, green: 0.65, blue: 0.65), forKey: "inputColor")
        filter?.setValue(1.0, forKey: "inputIntensity")

        guard let outputImage = filter?.outputImage else {
            // Mark the operation finished even on failure, otherwise the queue hangs.
            self.photoRecord.state = .failed
            self.isExecuting = false
            self.isFinished = true
            return
        }

        let ciContext = CIContext()
        if let cgimg = ciContext.createCGImage(outputImage, from: outputImage.extent) {
            self.photoRecord.image = UIImage(cgImage: cgimg)
            self.photoRecord.state = .filtered
        } else {
            self.photoRecord.state = .failed
        }
        self.isExecuting = false
        self.isFinished = true
    }
}

Now that we are done with the operations, it's time to add them to the queues. For each group of actions we need a separate queue, plus a dictionary to keep track of the operations currently in flight. Let's wrap these in a PendingOperations class.

class PendingOperations {
    lazy var downloadInProgress: [Int: Operation] = [:]
    lazy var downloadQueue: OperationQueue = {
        var queue = OperationQueue()
        queue.name = "Download Queue"
        return queue
    }()

    lazy var filteringInProgress: [Int: Operation] = [:]
    lazy var filterQueue: OperationQueue = {
        var queue = OperationQueue()
        queue.name = "Filter Queue"
        return queue
    }()
}

"A queue that regulates the execution of operations," say the Apple docs. OperationQueue inherits from NSObject, so it is a KVO-compliant class, which helps us find out about the current state of the queue.

With this last morsel, we are done with the logic of the app. Now we have to put together the pieces.

An operation starts running when it's added to a queue, so we need to create our DownloadOperations and add them to the download queue. Now you can see how that state property is useful: when we initialize a PhotoRecord its state is .new, so we can track the state and run the appropriate operation. The whole code is not posted here, to keep things readable, but you can download it from GitHub.

var photos = [PhotoRecord]()
var listOfImages = [URL]()

listOfImages.append(URL(string: "https://picsum.photos/id/1/500/500")!)
listOfImages.append(URL(string: "https://picsum.photos/id/2/500/500")!)
listOfImages.append(URL(string: "https://picsum.photos/id/3/500/500")!)

for item in listOfImages {
    let photo = PhotoRecord(url: item)
    photos.append(photo)
}

The next step is creating a DownloadOperation for each PhotoRecord, adding them to the download queue, and waiting for the queue to finish. Afterwards, we create the filter queue, add ImageFilterOperation objects to it, and wait for it to finish its job. After this last step, we will have images fetched from URLs and modified with a monochrome filter, ready to be saved to the camera roll, shared with other apps, or simply shown on the screen.

func runQueues() {
    for (index, item) in self.photos.enumerated() {
        startOperations(for: item, at: index)
    }
}

func startOperations(for photoRecord: PhotoRecord, at index: Int) {
    switch photoRecord.state {
    case .new:
        startRetrieving(for: photoRecord, at: index)
    case .downloaded:
        startApplyingFilter(for: photoRecord, at: index)
    default:
        break
    }
}

func startRetrieving(for photoRecord: PhotoRecord, at index: Int) {
    guard pendingOperations.downloadInProgress[index] == nil else { return }
    let download = DownloadOperation(photoRecord)
    download.completionBlock = {
        if download.isCancelled { return }
        DispatchQueue.main.async {
            self.pendingOperations.downloadInProgress.removeValue(forKey: index)
        }
    }
    pendingOperations.downloadInProgress[index] = download
    pendingOperations.downloadQueue.addOperation(download)
}

func startApplyingFilter(for photoRecord: PhotoRecord, at index: Int) {
    guard pendingOperations.filteringInProgress[index] == nil else { return }
    let filter = ImageFilterOperation(photoRecord)
    filter.completionBlock = {
        if filter.isCancelled { return }
        DispatchQueue.main.async {
            self.pendingOperations.filteringInProgress.removeValue(forKey: index)
        }
    }
    pendingOperations.filteringInProgress[index] = filter
    pendingOperations.filterQueue.addOperation(filter)
}

We call runQueues() and the whole process begins as described. But one important thing is missing: how do we know when a queue finishes its job? The answer is observing the "operations" key path on each queue. To do that, first we need to register an observer; in our case, two:

pendingOperations.downloadQueue.addObserver(self, forKeyPath: "operations",
                                            options: .new, context: nil)
pendingOperations.filterQueue.addObserver(self, forKeyPath: "operations",
                                          options: .new, context: nil)

Then override observeValue(forKeyPath:of:change:context:) and listen for the right key path and object:

override func observeValue(forKeyPath keyPath: String?, of object: Any?,
                           change: [NSKeyValueChangeKey: Any]?,
                           context: UnsafeMutableRawPointer?) {
    if object as? OperationQueue == pendingOperations.downloadQueue && keyPath == "operations" {
        if self.pendingOperations.downloadQueue.operations.isEmpty {
            pendingOperations.downloadQueue.removeObserver(self, forKeyPath: "operations")
            // All downloads finished; run the queues again so records in the
            // .downloaded state get their filter operations (the filter queue
            // observer was already registered above).
            self.runQueues()
        }
    } else if object as? OperationQueue == pendingOperations.filterQueue && keyPath == "operations" {
        if self.pendingOperations.filterQueue.operations.isEmpty {
            pendingOperations.filterQueue.removeObserver(self, forKeyPath: "operations")
        }
    } else {
        super.observeValue(forKeyPath: keyPath, of: object, change: change, context: context)
    }
}

Here is the link to the complete project hosted on GitHub.

iOS: Fix label cuts for custom fonts

Has it ever happened to you that a custom font does not look as expected in your app, like some parts of it being cut off? Look at this example: it's supposed to say "Blog" but all you see is "Bl":

Or this one in Farsi (and Arabic), which is expected to be "کریم" but the last two characters are cut off completely:

The code to create it is pretty simple. I have used a third-party library, FontBlaster, to load custom fonts; it's available on GitHub.
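If you haven't used FontBlaster before: as far as I recall its API, loading all the fonts bundled with the app is a single call at launch, something like:

import FontBlaster

// Call once early on, e.g. in application(_:didFinishLaunchingWithOptions:).
// FontBlaster scans the app bundle and registers every custom font it finds.
FontBlaster.blast()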

label = UILabel(frame: CGRect.zero)
let font = UIFont(name: "BleedingCowboys", size: 60)! // We are in debug mode, right?
label.backgroundColor = UIColor.yellow
label.frame.size = CGSize.zero
label.text = "Blog"
let size = label.sizeThatFits(CGSize(width: CGFloat.greatestFiniteMagnitude,
                                     height: CGFloat.greatestFiniteMagnitude))
label.frame.size = size
label.center = self.view.center
self.view.addSubview(label)

It seems sizeThatFits(_:) cannot determine the size correctly for all fonts. To fix this, I found an extension to UIBezierPath which returns a CGPath for an attributed string; you can find it here. This is how you can get the path:

let line = CAShapeLayer()
line.path = UIBezierPath(forMultilineAttributedString: mutableAttributedString,
                         maxWidth: CGFloat.greatestFiniteMagnitude).cgPath
line.bounds = (line.path?.boundingBox)!
// We're going to need this later
let sizeFromPath = CGSize(width: (line.path?.boundingBoxOfPath.width)!,
                          height: (line.path?.boundingBoxOfPath.height)!)
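The linked extension itself isn't reproduced here, but its core idea, building the path from the actual glyph outlines via Core Text, can be sketched for a single line as follows (the initializer name is made up for this sketch; the real extension also handles line wrapping and flips the coordinate system):

import CoreText
import UIKit

extension UIBezierPath {
    convenience init(forSingleLineAttributedString attributed: NSAttributedString) {
        let letters = CGMutablePath()
        let line = CTLineCreateWithAttributedString(attributed)
        let runs = CTLineGetGlyphRuns(line) as! [CTRun]
        for run in runs {
            // The font is stored in the run's attributes; UIFont bridges to CTFont.
            let attributes = CTRunGetAttributes(run) as NSDictionary as! [NSAttributedString.Key: Any]
            let font = attributes[.font] as! CTFont
            for index in 0..<CTRunGetGlyphCount(run) {
                var glyph = CGGlyph()
                var position = CGPoint.zero
                CTRunGetGlyphs(run, CFRangeMake(index, 1), &glyph)
                CTRunGetPositions(run, CFRangeMake(index, 1), &position)
                // Outline of a single glyph, translated to its position in the line.
                if let glyphPath = CTFontCreatePathForGlyph(font, glyph, nil) {
                    letters.addPath(glyphPath,
                                    transform: CGAffineTransform(translationX: position.x,
                                                                 y: position.y))
                }
            }
        }
        self.init(cgPath: letters)
    }
}

Because the path traces the real outlines of the rendered glyphs, its bounding box reflects what the font actually draws, including parts that overflow the typographic bounds.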

UIBezierPath(forMultilineAttributedString:maxWidth:) comes from the extension I mentioned above. Now we can determine the actual size of the label frame; let's see it in action:

It's still not exactly what we want: the size seems to be correct but the left inset is not. To solve this last problem, let's create a custom UILabel subclass which can apply a custom inset while drawing the text:

import Foundation
import UIKit

class CustomLabel: UILabel {
    var textInsets = UIEdgeInsets.zero {
        didSet { invalidateIntrinsicContentSize() }
    }
    
    override func textRect(forBounds bounds: CGRect,
 limitedToNumberOfLines numberOfLines: Int) -> CGRect {
        let insetRect = bounds.inset(by: textInsets)
        let textRect = super.textRect(forBounds: insetRect, limitedToNumberOfLines: numberOfLines)
        let invertedInsets = UIEdgeInsets(top: -textInsets.top,
                                          left: -textInsets.left,
                                          bottom: -textInsets.bottom,
                                          right: -textInsets.right)
        return textRect.inset(by: invertedInsets)
    }
    
    override func drawText(in rect: CGRect) {
        super.drawText(in: rect.inset(by: textInsets))
    }
}

How many points should we add to the left inset? The difference between the actual width (from the path) and the width from sizeThatFits. First we need to replace the line in which we declared the label: instead of UILabel we use CustomLabel. Then:

label.textInsets = UIEdgeInsets(top: 0, left: sizeFromPath.width - size.width, bottom: 0, right: 0)

Let’s see the final result:

Nice, yeah? The thing is, you might not need the inset for every troublesome font; check for yourself.

Fibonacci Sequence with Swift

The aim of this post is to show how to conform to the Sequence protocol and create a custom sequence. To make it less boring, we'll create a new type, FibsSequence, which takes the desired number of elements and then iterates over the values.

The requirement for conforming to the Sequence protocol is fairly simple: you need to provide a makeIterator() method that returns an iterator. The code looks like this:

struct FibsSequence: Sequence {
    private var upTo: Int
    
    init(upTo: Int) {
        self.upTo = upTo
    }
    
    func makeIterator() -> FibsIterator {
        return FibsIterator(upTo: upTo)
    }
}

Our makeIterator() returns another custom type which conforms to IteratorProtocol. This protocol obliges the conformer to supply the values of the sequence one at a time. The only method that must be implemented is next(), which advances to the next element and returns it, or returns nil if no next element exists:

struct FibsIterator: IteratorProtocol {
    private var state:(UInt, UInt) = (0, 1)
    private var upTo: Int
    private var counter = 0
    
    init(upTo: Int) {
        self.upTo = upTo
    }
    
    mutating func next() -> UInt? {
        guard upTo > counter else {return nil}
        guard upTo > 0 else {return nil}
        
        let upcomingNumber = state.0
        state = (state.1, state.0 + state.1)
        counter += 1
        
        return upcomingNumber
    }
}

In this type we have three private variables, which are clearly named, and a mutating function. The function must be mutating because it updates the iterator's state between calls. next() returns an optional UInt; optional because at some point we need a way to signal that the sequence is over.

There are two exit points in the function: the first checks the counter against the requested count and quits once enough numbers have been returned; the other makes sure the requested count is positive.

Now we have a custom type which returns the Fibonacci sequence up to a certain index in the series, indicated during initialization.

for (index, fib) in FibsSequence(upTo: 15).enumerated() {
    print("fib: \(fib), index: \(index + 1)")
}

****************************
fib: 0, index: 1
fib: 1, index: 2
fib: 1, index: 3
fib: 2, index: 4
fib: 3, index: 5
fib: 5, index: 6
fib: 8, index: 7
fib: 13, index: 8
fib: 21, index: 9
fib: 34, index: 10
fib: 55, index: 11
fib: 89, index: 12
fib: 144, index: 13
fib: 233, index: 14
fib: 377, index: 15

The whole code is available on GitHub.

Update:

After posting this on Reddit, a user, Nobody_1707, suggested a shorter version of the code:

public struct FibSequence: Sequence, IteratorProtocol {
    private var state: (UInt, UInt) = (0, 1)
    public init() { }
    public mutating func next() -> UInt? {
        guard state.1 >= state.0 else { return nil }
        defer { state = (state.1, state.0 &+ state.1) }
        return state.0
    }
}

for (i, fib) in zip(0..., FibSequence().prefix(15)) {
    print("fib(\(i)) = \(fib)")
}

iOS: Image filters using CoreImage and MetalKitView

Image filters are not only the most popular feature of image editing apps but also of many social apps such as Instagram, Snapchat, and Facebook Messenger. As an iOS developer, you might like to include such an option in your apps.

The most convenient method would be using CIFilter to create the filter and a UIImageView to show the result, but there is one big problem: it's not fast. Considering the size of images taken by iPhones (usually 12 MP), trying several image filters this way would not create a pleasant, "it just works" experience; users don't like to see a spinner on the screen for a simple filter. Instagram does it instantly, why not your app?
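For reference, a minimal sketch of that convenient path (CoreImage to CGImage to UIImage); fine for a thumbnail, but too slow to run repeatedly over a 12 MP photo:

import CoreImage
import UIKit

func noirImage(from input: UIImage) -> UIImage? {
    guard let ciImage = CIImage(image: input),
          let filter = CIFilter(name: "CIPhotoEffectNoir") else { return nil }
    filter.setValue(ciImage, forKey: kCIInputImageKey)
    // Creating a CIContext and rendering synchronously on every call is
    // exactly the expensive part this post is trying to avoid.
    guard let output = filter.outputImage,
          let cgImage = CIContext().createCGImage(output, from: output.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}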

For this, you need to use MetalKit instead of UIKit, which is much faster. In this tutorial, I will create a subclass of MTKView to display a Metal drawing, a filtered image in this case. That does not mean we cannot still show the final image in our lovely UIImageView.

The first step is to create a CIFilter, but before doing that let's see what it is according to Apple:

An image processor that produces an image by manipulating one or more input images or by generating new image data. The CIFilter class produces a CIImage object as output. Typically, a filter takes one or more images as input. Some filters, however, generate an image based on other types of input parameters. The parameters of a CIFilter object are set and retrieved through the use of key-value pairs.

Let’s create a simple class to handle filters:

import Foundation
import CoreImage

enum CIFilterName: String, CaseIterable, Equatable {
    // The raw value of each case defaults to the case name,
    // which matches the Core Image filter name exactly.
    case CIPhotoEffectChrome
    case CIPhotoEffectFade
    case CIPhotoEffectInstant
    case CIPhotoEffectNoir
    case CIPhotoEffectProcess
    case CIPhotoEffectTonal
    case CIPhotoEffectTransfer
    case CISepiaTone
}

class ImageFilters {
    private var context: CIContext
    private let image: CIImage
    
    init() {
        self.context = CIContext()
        self.image = CIImage()
    }
    
    init(image: CIImage, context: CIContext){
        self.context = context
        self.image = image
    }
    
    func apply(filterName: CIFilterName) -> CIImage?{
 
        let filter = CIFilter(name: filterName.rawValue)
        filter?.setDefaults()

        filter?.setValue(self.image, forKey: kCIInputImageKey)
        //filter?.setValue(Double(0.5), forKey: kCIInputIntensityKey)
        return filter?.outputImage
    }
}

The above code creates a filtered image in CIImage format via its apply() function and supports eight filters with their default settings. To use it, we pass a CIImage and a CIContext to the initializer.
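For example, with a CIImage you already have (someCIImage is a stand-in here):

let context = CIContext()
let filters = ImageFilters(image: someCIImage, context: context)
let filtered = filters.apply(filterName: .CIPhotoEffectNoir) // CIImage?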

Now we need a subclass of MTKView which can draw a CIImage:

import UIKit
import MetalKit
import AVFoundation

class MetalKitView: MTKView {
    
    private var commandQueue: MTLCommandQueue?
    private var ciContext: CIContext?
    var mtlTexture: MTLTexture?
    
    required init(coder: NSCoder) {
        super.init(coder: coder)
        self.isOpaque = false
        self.enableSetNeedsDisplay = true
    }
    
    func render(image: CIImage, context: CIContext, device: MTLDevice) {
        #if !targetEnvironment(simulator)
        self.ciContext = context
        self.device = device
        
        var size = self.bounds
        size.size = self.drawableSize
        size = AVMakeRect(aspectRatio: image.extent.size, insideRect: size)
        let filteredImage = image.transformed(by: CGAffineTransform(
            scaleX: size.size.width/image.extent.size.width,
            y: size.size.height/image.extent.size.height))
        let x = -size.origin.x
        let y = -size.origin.y
        
        self.commandQueue = device.makeCommandQueue()
        
        let buffer = self.commandQueue!.makeCommandBuffer()!
        self.mtlTexture = self.currentDrawable!.texture
        self.ciContext!.render(filteredImage,
                               to: self.currentDrawable!.texture,
                               commandBuffer: buffer,
                               bounds: CGRect(origin:CGPoint(x:x, y:y), size:self.drawableSize),
                               colorSpace: CGColorSpaceCreateDeviceRGB())
        buffer.present(self.currentDrawable!)
        buffer.commit()
        #endif
    }
    
    func getUIImage(texture: MTLTexture, context: CIContext) -> UIImage?{
        // Only CIImageOption keys belong in this dictionary; the original post
        // mixed in CIContextOption keys, which cannot be cast to CIImageOption.
        let kciOptions: [CIImageOption: Any] = [.colorSpace: CGColorSpaceCreateDeviceRGB()]
        
        if let ciImageFromTexture = CIImage(mtlTexture: texture, options: kciOptions) {
            if let cgImage = context.createCGImage(ciImageFromTexture, from: ciImageFromTexture.extent) {
                let uiImage = UIImage(cgImage: cgImage, scale: 1.0, orientation: .downMirrored)
                return uiImage
            }else{
                return nil
            }
        }else{
            return nil
        }
    }
    
}

Let's talk about this new class in more detail. Here, as in the previous class, we have a CIContext. In both classes it is injected, because creating one is quite an expensive operation; its main responsibility is compiling and running the filters, whether on the CPU or the GPU. Next is an MTLCommandQueue property which, as the name suggests, queues an ordered list of command buffers for the Metal device (introduced below) to execute. MTLCommandQueue, like CIContext, is thread safe and allows multiple outstanding command buffers to be encoded simultaneously. Finally, we have an MTLTexture property, which is a memory allocation for storing formatted image data that is accessible to the GPU.

The required init() sets two properties of our custom class: isOpaque = false tells it not to render a black color for empty space, and enableSetNeedsDisplay = true asks it to respond to setNeedsDisplay(). The class renders the given CIImage via the render() function, which takes three arguments. We already know two of them, but MTLDevice is new: as you might have guessed, it defines the interface to the GPU. The body of this method applies a simple transform to make the image fit into the drawable area. To avoid a compiler error when building for the simulator, we enclose the body within #if !targetEnvironment(simulator) ... #endif, because with a simulator target the device's type is unknown to the compiler. The last method is straightforward: it converts the Metal texture back into a UIImage object.

The last step is applying a filter to a UIImage and show it on the screen:

    
if let device = MTLCreateSystemDefaultDevice(), let uiImage = UIImage(named: "someImage") {
    if let ciImage = CIImage(image: uiImage) {
        let context = CIContext(mtlDevice: device)
        let filter = ImageFilters(image: ciImage, context: context)
        if let filteredImage = filter.apply(filterName: .CIPhotoEffectNoir) {
            mtkView.render(image: filteredImage, context: context, device: device)
            mtkView.setNeedsDisplay()
        }
    }
}

These two classes help you apply image filters almost instantly, whether the images are small or large.

Swift: An app to search through movie titles using The Open Movie Database API

This demo app uses the OMDb API to search movie titles and show details of the selected movie. First of all, you need to get your API key; it's free. All right, this is the plan:

  • Create a class to make network requests using URLSession
  • Test the class
  • Create the UI

NetworkService will handle the network requests. It is a singleton class which uses the built-in URLSession with a handful of configuration options. An extension to this class contains the required methods for searching. Since I created this as a reusable utility class, it has some extra features which won't be used in this tutorial; maybe you'll need them later.

//Assumed helper for the Accept header; the original post doesn't show it
enum MIMETypes: String {
    case json = "application/json"
}

class NetworkService {
    //MARK: - Internal structs
    private struct authParameters {
        struct Keys {
            static let accept = "Accept"
            static let apiKey = "apikey"
        }
        
        static let apiKey = "YOURKEY"
    }
    
    //An NSCache object to cache downloaded data (e.g. images), if necessary
    private let cache = NSCache<NSURL, NSData>()
    
    //Default session configuration
    private let urlSessionConfig = URLSessionConfiguration.default
    
    //Additional headers such as authentication token, go here
    private func configSession(){
        self.urlSessionConfig.httpAdditionalHeaders = [
            AnyHashable(authParameters.Keys.accept): MIMETypes.json.rawValue
        ]
    }
    
    private static var sharedInstance: NetworkService = {
        return NetworkService()
    }()
    
    //MARK: - Public APIs
    class func shared() -> NetworkService {
        sharedInstance.configSession()
        return sharedInstance
    }

    //MARK: - Private APIs
    private func createAuthParameters(with parameters:[String:String]) -> Data? {
        guard parameters.count > 0 else {return nil}
        return  parameters.map {"\($0.key)=\($0.value)"}.joined(separator: "&").data(using: .utf8)
    }
}

This is the skeleton of our class. The shared() function returns a static instance of the class after running the internal configSession() function. The authParameters struct stores keys and values for authentication, just to keep the code tidy. Now we can create an instance of the class with let networkService = NetworkService.shared().

Now we need another method to make the actual network requests:

    private func request(url:String,
                 cachePolicy: URLRequest.CachePolicy = .reloadRevalidatingCacheData,
                 httpMethod: RequestType,
                 headers:[String:String]?,
                 body: [String:String]?,
                 parameters: [URLQueryItem]?,
                 useSharedSession: Bool = false,
                 handler: @escaping (Data?, URLResponse?, Int?, Error?) -> Void){
        
        if var urlComponent = URLComponents(string: url) {
            urlComponent.queryItems = parameters
            var session = URLSession(configuration: urlSessionConfig)
            if useSharedSession {
                session = URLSession.shared
            }
            
            if let _url = urlComponent.url {
                
                var request = URLRequest(url: _url)
                request.cachePolicy = cachePolicy
                request.allHTTPHeaderFields = headers
                
                if let _body = body {
                    request.httpBody = createAuthParameters(with: _body)
                }
                request.httpMethod = httpMethod.rawValue
                
                session.dataTask(with: request) { (data, response, error) in
                    let httpResponseStatusCode = (response as? HTTPURLResponse)?.statusCode
                    handler(data, response, httpResponseStatusCode, error)
                    }.resume()
            }else{
                handler(nil, nil, nil, Failure.invalidURL)
            }
        }else{
            handler(nil, nil, nil, Failure.invalidURL)
        }
    }

This method provides the basic functionality of a request session: it takes parameters and headers and asynchronously returns the response, the data, the status code, and an Error object if one exists. For the httpBody, you can replace createAuthParameters(with:) with an array of URLQueryItems.

For errors, I'm using an enum that conforms to the Error protocol; here it is:

import Foundation
public enum Failure:Error {
    case invalidURL
    case invalidSearchParameters
    case invalidResults(String)
    case invalidStatusCode(Int?)
}

extension Failure: LocalizedError {
    public var errorDescription: String? {
        switch self {
        case .invalidURL:
            return NSLocalizedString("The requested URL is invalid.", comment: "")
        case .invalidSearchParameters:
            return NSLocalizedString("The URL parameters is invalid.", comment: "")
        case .invalidResults(let message):
            return NSLocalizedString(message, comment: "")
        case .invalidStatusCode(let message):
            return NSLocalizedString("Invalid HTTP status code:\(message ?? -1)", comment: "")
        }
    }
}

OMDb works with simple queries, it doesn't have many endpoints! But to keep everything structured, let's create another enum:

import Foundation

enum EndPoints {
    case Search
}

extension EndPoints {
    var path: String {
        let baseURL = "http://www.omdbapi.com"

        struct Section {
            static let search = "/?"
        }

        switch self {
        case .Search:
            return "\(baseURL)\(Section.search)"
        }
    }
}
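The search methods used in the tests below live in the NetworkService extension mentioned earlier, which the post doesn't reproduce. A rough sketch of one of them, together with the RequestType enum and a minimal Decodable model (all three are assumptions; the real ones ship with the project):

//Assumed HTTP-verb enum referenced by request(url:...)
enum RequestType: String {
    case get = "GET"
    case post = "POST"
}

//Assumed minimal model for OMDb's search response ("Response" is "True"/"False")
struct SearchResult: Decodable {
    let response: String
    enum CodingKeys: String, CodingKey {
        case response = "Response"
    }
}

//In the same file as NetworkService, so the extension can see its private members
extension NetworkService {
    func search(for title: String, page: Int,
                handler: @escaping (SearchResult?, Error?) -> Void) {
        let parameters = [URLQueryItem(name: "s", value: title),
                          URLQueryItem(name: "page", value: "\(page)"),
                          URLQueryItem(name: authParameters.Keys.apiKey,
                                       value: authParameters.apiKey)]
        request(url: EndPoints.Search.path, httpMethod: .get, headers: nil,
                body: nil, parameters: parameters) { data, _, statusCode, error in
            guard let data = data, error == nil else {
                handler(nil, error ?? Failure.invalidStatusCode(statusCode))
                return
            }
            handler(try? JSONDecoder().decode(SearchResult.self, from: data), nil)
        }
    }
}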

The next step is testing:

import XCTest
@testable import OpenMovie

class OpenMovieTests: XCTestCase {
    
    private let networkService = NetworkService.shared()
    
    func testSearch() {
        let promise = expectation(description: "Search for batman movies")
        networkService.search(for: "batman", page: 1) { (searchObject, error) in
            XCTAssertNil(error)
            XCTAssertTrue(searchObject?.response == "True")
            promise.fulfill()
        }
        waitForExpectations(timeout: 2, handler: nil)
    }
    
    func testSearchByIMDBID() {
        let promise = expectation(description: "Search for Batman: Dark Night Returns")
        networkService.getMovie(with: "tt2313197") { (movieObject, error) in
            XCTAssertNil(error)
            XCTAssertTrue(movieObject?.imdbID == "tt2313197")
            promise.fulfill()
        }
     
        waitForExpectations(timeout: 2, handler: nil)
    }
    
}

And the results:

Now the last step; isn't it better if you look at it yourself? It's a TL;DR sort of thing, so download it from GitHub and give it a try.

This is how it looks:

iOS: Show activity indicator in UISearchBar

Updated for iOS 13 and Swift 5

When you are using a search controller, there is most probably a network call behind it. Isn't it nicer to show a tiny loading animation in place of the magnifier icon on the left side of the search bar while your app is waiting for data from the network? Indeed.

To do this, you can either use a private API, which is frowned upon, or use UISearchBar's built-in setImage(_:for:state:) method; I'll go for the latter.

First of all, let’s see how it looks:

To show the loading animation we have to:

  1. find the search bar within the search controller
  2. remove the magnifier icon
  3. add a UIActivityIndicatorView to the text field's leftView

And to hide it, it's almost the same steps in reverse.

To make the code reusable, let’s do it as an extension:

import Foundation
import UIKit

extension UISearchBar {
  //
}

First of all, we need to find the text field, which is a UITextField somewhere among UISearchBar's subviews. This computed property does the job for us and returns an optional UITextField:

private var textField: UITextField? {
    // Note: on iOS 13+ you could also use the public searchTextField property.
    let subViews = self.subviews.flatMap { $0.subviews }
    if #available(iOS 13, *) {
        if let _subViews = subViews.last?.subviews {
            return (_subViews.filter { $0 is UITextField }).first as? UITextField
        } else {
            return nil
        }
    } else {
        return (subViews.filter { $0 is UITextField }).first as? UITextField
    }
}

The next step is finding the current magnifier image; it's similar to the previous step:

    private var searchIcon: UIImage? {
        let subViews = subviews.flatMap { $0.subviews }
        return  ((subViews.filter { $0 is UIImageView }).first as? UIImageView)?.image
    }

Let's get hold of our loading animation, which is a UIActivityIndicatorView:

   private var activityIndicator: UIActivityIndicatorView? {
        return textField?.leftView?.subviews.compactMap{ $0 as? UIActivityIndicatorView }.first
    }

Now let’s add a public variable to show/hide the loading animation:

    var isLoading: Bool {
        get {
            return activityIndicator != nil
        } set {
            let _searchIcon = searchIcon
            if newValue {
                if activityIndicator == nil {
                    let _activityIndicator = UIActivityIndicatorView(style: .gray)
                    _activityIndicator.startAnimating()
                    _activityIndicator.backgroundColor = UIColor.clear
                    let clearImage = UIImage().imageWithPixelSize(size: CGSize.init(width: 14, height: 14)) ?? UIImage()
                    self.setImage(clearImage, for: .search, state: .normal)
                    textField?.leftViewMode = .always
                    textField?.leftView?.addSubview(_activityIndicator)
                    let leftViewSize = CGSize.init(width: 14.0, height: 14.0)
                    _activityIndicator.center = CGPoint(x: leftViewSize.width/2, y: leftViewSize.height/2)
                }
            } else {
                self.setImage(_searchIcon, for: .search, state: .normal)
                activityIndicator?.removeFromSuperview()
            }
        }
    }

This piece of code looks for an existing activity indicator. If there is none, it first creates one, clears the default image for the search icon (the magnifier), sets a new transparent image of 14 by 14 points, and then adds the newly created UIActivityIndicatorView to the text field's leftView. To hide it, it simply removes the UIActivityIndicatorView and sets the search icon image back.

To use it, simply set searchController.searchBar.isLoading = true, but be careful: if you are setting it inside another closure, do it on the main queue.
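For example, around a hypothetical request (networkCall stands in for your own function):

searchController.searchBar.isLoading = true
networkCall { results in
    // UI updates must happen on the main queue
    DispatchQueue.main.async {
        searchController.searchBar.isLoading = false
    }
}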

You can download the whole code from GitHub. The UIImage extension, imageWithPixelSize(size:), is also available on GitHub.

Installing fovis on ROS Hydro

Well, fovis_ros is a ROS package which uses stereo camera output to estimate the position of a robot (ROS Hydro is the latest ROS distribution that supports fovis).

The installation process is a little tricky; I spent a whole day figuring it out. First, I have to mention that you need to use catkin to build it: yes, you have to compile it yourself.

There are two more packages you need to install before compiling fovis: cmake_modules and libfovis.

For the first one you can use apt-get, like this: sudo apt-get install ros-hydro-cmake-modules (note that apt package names use hyphens, not underscores).

To install libfovis, you have to download the source and compile it yourself using catkin. Set up catkin first, if it's not your default workspace. Clone the libfovis source into the src folder:

$ roscd
$ cd src
$ git clone https://github.com/srv/libfovis.git
$ cd ..
$ catkin_make --pkg libfovis

Now do the same thing for the fovis package:

$ cd src
$ git clone https://github.com/srv/fovis
$ catkin_make --pkg fovis

Done!