iOS: Fix label cuts for custom fonts

Has it ever occurred to you that a custom font does not look as expected in your app, like some parts of it are cut off? Look at this example: it is supposed to say “Blog”, but all you see is “Bl”:

Or this one in Farsi (and Arabic), which is expected to be “کریم”, but the last two characters are cut off completely:

The code to create it is pretty simple. I have used a third-party library, FontBlaster (available on GitHub), to load the custom font.

label = UILabel(frame: .zero)
let font = UIFont(name: "BleedingCowboys", size: 60)! // We are in debug mode, right?
label.font = font
label.backgroundColor = .yellow
label.text = "Blog"
// Ask the label how much space it needs, with no upper bound.
let size = label.sizeThatFits(CGSize(width: CGFloat.greatestFiniteMagnitude,
                                     height: CGFloat.greatestFiniteMagnitude))
label.frame.size = size
label.center = self.view.center
self.view.addSubview(label)

It seems sizeThatFits(_:) cannot determine the size correctly for all fonts. To fix this, I found an extension to UIBezierPath which returns a CGPath for an attributed string; you can find it here. This is how you can get the path:

let line = CAShapeLayer()
line.path = UIBezierPath(forMultilineAttributedString: mutableAttributedString,
                         maxWidth: CGFloat.greatestFiniteMagnitude).cgPath
line.bounds = (line.path?.boundingBox)!
// We are going to need this size later
let sizeFromPath = CGSize(width: (line.path?.boundingBoxOfPath.width)!,
                          height: (line.path?.boundingBoxOfPath.height)!)

UIBezierPath(forMultilineAttributedString:maxWidth:) comes from the extension I mentioned above. Now we can determine the actual size of the label frame; let’s see it in action:
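In code, applying the measured size might look like this (a small sketch, reusing the label and sizeFromPath from above):

// Use the size measured from the glyph path instead of the one from sizeThatFits(_:).
label.frame.size = sizeFromPath
label.center = self.view.center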

It’s still not exactly what we want: the size seems correct, but the left inset is not. To solve this last problem, let’s create a custom UILabel subclass which can apply a custom inset while drawing the text:

import Foundation
import UIKit

class CustomLabel: UILabel {
    var textInsets = UIEdgeInsets.zero {
        didSet { invalidateIntrinsicContentSize() }
    }
    
    override func textRect(forBounds bounds: CGRect,
                           limitedToNumberOfLines numberOfLines: Int) -> CGRect {
        let insetRect = bounds.inset(by: textInsets)
        let textRect = super.textRect(forBounds: insetRect, limitedToNumberOfLines: numberOfLines)
        let invertedInsets = UIEdgeInsets(top: -textInsets.top,
                                          left: -textInsets.left,
                                          bottom: -textInsets.bottom,
                                          right: -textInsets.right)
        return textRect.inset(by: invertedInsets)
    }
    
    override func drawText(in rect: CGRect) {
        super.drawText(in: rect.inset(by: textInsets))
    }
}

How many points should we add to the left inset? The difference between the actual width (from the path) and the width from sizeThatFits(_:). First we need to replace the line in which we declared the label: instead of UILabel, use CustomLabel. Then:

label.textInsets = UIEdgeInsets(top: 0, left: sizeFromPath.width - size.width, bottom: 0, right: 0)

Let’s see the final result:

Nice, yeah? The thing is, you might not need the inset for every troublesome font; check it yourself.
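One quick way to check (a sketch reusing size and sizeFromPath from earlier) is to apply the inset only when the two widths actually disagree:

// Only compensate when the path-based width exceeds what sizeThatFits(_:) reported.
let delta = sizeFromPath.width - size.width
if delta > 0 {
    label.textInsets = UIEdgeInsets(top: 0, left: delta, bottom: 0, right: 0)
}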

iOS: Image filters using CoreImage and MetalKitView

Image filters are not only the most popular feature of image editing apps, but also of many social network apps such as Instagram, Snapchat, and Facebook Messenger. As an iOS developer, you might like to include such an option in your apps.

The most convenient method would be using CIFilter to create the filter and a UIImageView to show it, but there is one big problem: it’s not fast. Given the size of images taken by iPhones (usually 12 MP), trying several image filters would not create a pleasant, “it just works” experience for the user. They don’t like to see a spinner on the screen for a simple filter; Instagram does it instantly, so why not your app?
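For reference, the straightforward approach looks roughly like this (a sketch; inputImage and imageView are placeholder names, not part of the project below):

// Naive approach: filter through a throwaway CIContext and display via UIImageView.
// Simple, but creating the context and the CGImage for a 12 MP photo is slow.
let filter = CIFilter(name: "CISepiaTone")!
filter.setValue(CIImage(image: inputImage), forKey: kCIInputImageKey)
let context = CIContext()
if let output = filter.outputImage,
   let cgImage = context.createCGImage(output, from: output.extent) {
    imageView.image = UIImage(cgImage: cgImage)
}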

For this, you need to use MetalKit instead of UIKit, which is way faster. In this tutorial, I will create a subclass of MTKView to display a Metal drawing, a filtered image in this case. That does not mean we cannot show the final image in our lovely UIImageView.

The first step is to create a CIFilter, but before doing that let’s see what it is according to Apple:

An image processor that produces an image by manipulating one or more input images or by generating new image data. The CIFilter class produces a CIImage object as output. Typically, a filter takes one or more images as input. Some filters, however, generate an image based on other types of input parameters. The parameters of a CIFilter object are set and retrieved through the use of key-value pairs.

Let’s create a simple class to handle filters:

import Foundation
import CoreImage

// Raw values default to the case names, which match the Core Image filter names.
enum CIFilterName: String, CaseIterable {
    case CIPhotoEffectChrome
    case CIPhotoEffectFade
    case CIPhotoEffectInstant
    case CIPhotoEffectNoir
    case CIPhotoEffectProcess
    case CIPhotoEffectTonal
    case CIPhotoEffectTransfer
    case CISepiaTone
}

class ImageFilters {
    private let context: CIContext
    private let image: CIImage
    
    init() {
        self.context = CIContext()
        self.image = CIImage()
    }
    
    init(image: CIImage, context: CIContext) {
        self.context = context
        self.image = image
    }
    
    // Returns the filtered image, or nil if the filter could not be created.
    func apply(filterName: CIFilterName) -> CIImage? {
        let filter = CIFilter(name: filterName.rawValue)
        filter?.setDefaults()
        filter?.setValue(self.image, forKey: kCIInputImageKey)
        // Some filters accept extra parameters, e.g.:
        // filter?.setValue(0.5, forKey: kCIInputIntensityKey)
        return filter?.outputImage
    }
}

The above code creates a filtered image in CIImage format using its apply(filterName:) function and supports eight filters with default settings. To use it, we need to pass a CIImage and a CIContext to the initializer.
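For example (a minimal usage sketch; someUIImage is a placeholder UIImage):

// Build the filter wrapper once per image; the context can be reused.
if let ciImage = CIImage(image: someUIImage) {
    let filters = ImageFilters(image: ciImage, context: CIContext())
    let sepia = filters.apply(filterName: .CISepiaTone) // CIImage? with the filter applied
}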

Now we need a subclass of MTKView which can draw a CIImage:

import UIKit
import MetalKit
import AVFoundation

class MetalKitView: MTKView {
    
    private var commandQueue: MTLCommandQueue?
    private var ciContext: CIContext?
    var mtlTexture: MTLTexture?
    
    required init(coder: NSCoder) {
        super.init(coder: coder)
        self.isOpaque = false
        self.enableSetNeedsDisplay = true
        // Core Image needs to write into the drawable's texture.
        self.framebufferOnly = false
    }
    
    func render(image: CIImage, context: CIContext, device: MTLDevice) {
        #if !targetEnvironment(simulator)
        self.ciContext = context
        self.device = device
        
        // Aspect-fit the image into the drawable area.
        var size = self.bounds
        size.size = self.drawableSize
        size = AVMakeRect(aspectRatio: image.extent.size, insideRect: size)
        let filteredImage = image.transformed(by: CGAffineTransform(
            scaleX: size.size.width / image.extent.size.width,
            y: size.size.height / image.extent.size.height))
        let x = -size.origin.x
        let y = -size.origin.y
        
        self.commandQueue = device.makeCommandQueue()
        
        let buffer = self.commandQueue!.makeCommandBuffer()!
        self.mtlTexture = self.currentDrawable!.texture
        self.ciContext!.render(filteredImage,
                               to: self.currentDrawable!.texture,
                               commandBuffer: buffer,
                               bounds: CGRect(origin: CGPoint(x: x, y: y), size: self.drawableSize),
                               colorSpace: CGColorSpaceCreateDeviceRGB())
        buffer.present(self.currentDrawable!)
        buffer.commit()
        #endif
    }
    
    func getUIImage(texture: MTLTexture, context: CIContext) -> UIImage? {
        // Only CIImageOption keys belong here; context options are set on the CIContext itself.
        let options: [CIImageOption: Any] = [.colorSpace: CGColorSpaceCreateDeviceRGB()]
        guard let ciImage = CIImage(mtlTexture: texture, options: options),
              let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else {
            return nil
        }
        // Metal textures are flipped relative to UIKit's coordinate system.
        return UIImage(cgImage: cgImage, scale: 1.0, orientation: .downMirrored)
    }
}

Let’s talk about this new class in more detail. Here, as in the previous class, we have a CIContext. In both classes it is injected, because creating one is quite an expensive operation. Its main responsibility is compiling and running the filters, whether on the CPU or the GPU.

Next is the MTLCommandQueue property which, as the name suggests, queues an ordered list of command buffers for the Metal device (introduced below) to execute. The MTLCommandQueue, like CIContext, is thread safe and allows multiple outstanding command buffers to be encoded simultaneously. Finally, we have an MTLTexture property, which is a memory allocation for storing formatted image data that is accessible to the GPU.

The required init(coder:) configures our custom class: isOpaque = false tells it not to render a black color for empty space, and enableSetNeedsDisplay = true asks it to respond to setNeedsDisplay(). We also set framebufferOnly = false so Core Image can render into the drawable’s texture.

This class renders the given CIImage via the render() function, which takes three arguments. We already know two of them, but MTLDevice is new; as you might have guessed, it defines the interface to the GPU. The body of this method applies a simple transform to make the image fit into the drawable area. To avoid a compiler error while testing the code on a simulator, we enclose the body in #if !targetEnvironment(simulator) ... #endif, because with a simulator target the device’s type is unknown to the compiler. The last method is straightforward: it converts the Metal texture back into a UIImage object.

The last step is applying a filter to a UIImage and showing it on the screen:

    
if let device = MTLCreateSystemDefaultDevice(),
   let uiImage = UIImage(named: "someImage"),
   let ciImage = CIImage(image: uiImage) {
    let context = CIContext(mtlDevice: device)
    let filter = ImageFilters(image: ciImage, context: context)
    if let filtered = filter.apply(filterName: .CIPhotoEffectNoir) {
        mtkView.render(image: filtered, context: context, device: device)
        mtkView.setNeedsDisplay()
    }
}

These two classes help you apply image filters almost instantly, whether the image is small or large.

Installing fovis on ROS Hydro

Well, fovis_ros is a ROS package which uses stereo camera output to estimate the position of a robot (ROS Hydro is the latest ROS distribution that supports fovis).

The installation process is a little tricky; it took me a whole day to figure it out. First of all, I have to mention that you need to use catkin to build it. Yes, you have to compile it yourself.

There are two more packages you need to install before compiling fovis: cmake_modules and libfovis.

For the first one you can use apt-get, like this:

$ sudo apt-get install ros-hydro-cmake-modules

To install libfovis, you have to download the source and compile it yourself, using catkin. Set up catkin first, if it’s not your default workspace.
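If you don’t have a catkin workspace yet, a minimal setup looks like this (the standard catkin workflow; ~/catkin_ws is just an example path):

$ mkdir -p ~/catkin_ws/src
$ cd ~/catkin_ws/src
$ catkin_init_workspace
$ cd ..
$ catkin_make
$ source devel/setup.bash

Now clone the libfovis source in the src folder: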

$ roscd
$ cd src
$ git clone https://github.com/srv/libfovis.git
$ cd ..
$ catkin_make --pkg libfovis

Now do the same thing for the fovis package (remember to run catkin_make from the workspace root):

$ cd src
$ git clone https://github.com/srv/fovis
$ cd ..
$ catkin_make --pkg fovis

Done!

Luck

The best luck of all is the luck you make for yourself. (Douglas MacArthur)


On Creativity

This essay, written by Isaac Asimov in 1959, is about how the creative process works in creative people.
