Swift 3: Working with dates

An early version of Swift 3 arrived along with the first beta of Xcode 8 a few days ago. Considered a preview release, this version of Swift 3 gives you a hint of where the language evolution is heading, but it also lets you taste what is, IMO, the most thrilling thing about the new version – the implementation of the new Swift API design guidelines. After toying around in a playground for a while, I ended up with a small piece of code that deals with date objects – setting the time zone and locale, extracting date components from a date and building a date from date components – all the trivial stuff that someone who works with dates does.

The result is published in a gist here, and one can’t help but notice how readable Swift 3 is without the verbosity of the old API and without the NS prefixes. I love Swift 2.2, but Swift 3 looks awesome.
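
Here’s a rough sketch of the kind of code that ends up in that gist. I’m using the Swift 3 API names as they eventually shipped (some of them shifted between the Xcode 8 betas), and the time zone and locale identifiers below are just examples:

import Foundation

// a calendar configured with an explicit time zone and locale
var calendar = Calendar.current
calendar.timeZone = TimeZone(identifier: "Europe/Sofia")!
calendar.locale = Locale(identifier: "en_US")

// extracting date components from a date
let now = Date()
let components = calendar.dateComponents([.year, .month, .day, .hour, .minute], from: now)
print("\(components.year!)-\(components.month!)-\(components.day!)")

// building a date from date components
var parts = DateComponents()
parts.year = 2016
parts.month = 6
parts.day = 13
let someDate = calendar.date(from: parts)   // Optional(2016-06-13 ...)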

2015 in music

This is going to be the first music-related post that I am writing in English. I have a habit of listing the 10 (or more) albums that grabbed my attention the most throughout the year that passed. Here they are, in no particular order (I could have ordered them differently if this post had been written at another time of the day):

10. Purple – Baroness
09. Psychic Warfare – Clutch
08. Sol Invictus – Faith No More
07. The Book of Souls – Iron Maiden
06. Love, Fear and the Time Machine – Riverside
05. Polaris – Tesseract
04. Drones – Muse
03. Hand. Cannot. Erase. – Steven Wilson
02. The Shape of Colour – Intervals
01. Odyssey: The Destroyer of Worlds – Voices From The Fuselage

Swift 2.1: UIColor – calculating color and brightness difference

I was recently looking for a way to determine whether two colors are sufficiently different from each other in terms of color and brightness. I ended up reading a W3C document entitled “Techniques For Accessibility Evaluation And Repair Tools”, which recommends some minimal values (and the formulas to calculate them) describing the relationship between the two colors.

Here’s what that page states:

Two colors provide good color visibility if the brightness difference and the color difference between the two colors are greater than a set range.

Color brightness is determined by the following formula:
((Red value X 299) + (Green value X 587) + (Blue value X 114)) / 1000
Note: This algorithm is taken from a formula for converting RGB values to YIQ values. This brightness value gives a perceived brightness for a color.

Color difference is determined by the following formula:
(maximum (Red value 1, Red value 2) – minimum (Red value 1, Red value 2)) + (maximum (Green value 1, Green value 2) – minimum (Green value 1, Green value 2)) + (maximum (Blue value 1, Blue value 2) – minimum (Blue value 1, Blue value 2))

The range for color brightness difference is 125. The range for color difference is 500.

From that info I have created a UIColor extension with two methods:

extension UIColor {

  func getColorDifference(fromColor: UIColor) -> Int {
    // get the current color's red, green, blue and alpha values
    var red:CGFloat = 0
    var green:CGFloat = 0
    var blue:CGFloat = 0
    var alpha:CGFloat = 0
    self.getRed(&red, green: &green, blue: &blue, alpha: &alpha)

    // get the fromColor's red, green, blue and alpha values
    var fromRed:CGFloat = 0
    var fromGreen:CGFloat = 0
    var fromBlue:CGFloat = 0
    var fromAlpha:CGFloat = 0
    fromColor.getRed(&fromRed, green: &fromGreen, blue: &fromBlue, alpha: &fromAlpha)

    let redValue = (max(red, fromRed) - min(red, fromRed)) * 255
    let greenValue = (max(green, fromGreen) - min(green, fromGreen)) * 255
    let blueValue = (max(blue, fromBlue) - min(blue, fromBlue)) * 255

    return Int(redValue + greenValue + blueValue)
  }

  func getBrightnessDifference(fromColor: UIColor) -> Int {
    // get the current color's red, green, blue and alpha values
    var red:CGFloat = 0
    var green:CGFloat = 0
    var blue:CGFloat = 0
    var alpha:CGFloat = 0
    self.getRed(&red, green: &green, blue: &blue, alpha: &alpha)
    let brightness = Int((((red * 299) + (green * 587) + (blue * 114)) * 255) / 1000)

    // get the fromColor's red, green, blue and alpha values
    var fromRed:CGFloat = 0
    var fromGreen:CGFloat = 0
    var fromBlue:CGFloat = 0
    var fromAlpha:CGFloat = 0
    fromColor.getRed(&fromRed, green: &fromGreen, blue: &fromBlue, alpha: &fromAlpha)
    let fromBrightness = Int((((fromRed * 299) + (fromGreen * 587) + (fromBlue * 114)) * 255) / 1000)

    return max(brightness, fromBrightness) - min(brightness, fromBrightness)
  }
}

Here’s an example of how you can use them:

let backgroundColor = UIColor.whiteColor()
let foregroundColor = UIColor.blackColor()

let colorDifference = backgroundColor.getColorDifference(foregroundColor)
// returns 765
let brightnessDifference = backgroundColor.getBrightnessDifference(foregroundColor)
// returns 255
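
If you want to tie both values back to the W3C thresholds (at least 125 for the brightness difference and at least 500 for the color difference), a small convenience method can be built on top of the two extensions. The name hasSufficientContrast below is my own, not part of the recommendation:

extension UIColor {
  // combines both W3C checks, using the thresholds quoted above
  func hasSufficientContrast(otherColor: UIColor) -> Bool {
    return getBrightnessDifference(otherColor) >= 125 && getColorDifference(otherColor) >= 500
  }
}

let contrastOK = UIColor.whiteColor().hasSufficientContrast(UIColor.blackColor())
// returns true
let lowContrast = UIColor.whiteColor().hasSufficientContrast(UIColor.yellowColor())
// returns false – neither difference clears its threshold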

Enjoy!

Swift 2.1: Experimenting with CAEmitterLayer

Recently I have been digging through the Core Animation documentation and I found a really interesting class named CAEmitterLayer. It turned out to be a quite powerful particle engine, designed to help you create real-time particle animations (like rain, fire, etc.). The CAEmitterLayer is a container for a set of CAEmitterCell instances that define the effect. Each CAEmitterCell object serves as a template for a single particle, and the CAEmitterLayer is responsible for instantiating a stream of particles based on these templates.

The example today is going to emit small pieces in a circle, simulating an explosion, as in the screenshot below.

CAEmitterLayer in action

Here’s the code (you can copy/paste it into a new playground and use View > Assistant Editor > Show Assistant Editor to preview the effect):

import Foundation
import XCPlayground
import UIKit

class Emit: UIViewController {
  override func viewDidLoad() {
    super.viewDidLoad()

    // setup emitter
    let emitter = CAEmitterLayer()
    emitter.frame = self.view.bounds
    emitter.renderMode = kCAEmitterLayerAdditive
    emitter.emitterPosition = self.view.center
    self.view.layer.addSublayer(emitter)

    // setup cells
    let cell = CAEmitterCell()
    cell.contents = UIImage(named: "spark")?.CGImage
    cell.birthRate = 1500
    cell.lifetime = 5.0
    cell.color = UIColor(red: 1.0, green: 0.5, blue: 0.1, alpha: 1).CGColor
    cell.alphaSpeed = -0.4
    cell.velocity = 50
    cell.velocityRange = 250

    cell.emissionRange = CGFloat(M_PI) * 2.0

    //add cells to the emitter
    emitter.emitterCells = [cell]
  }
}

let v1 = Emit()
let page = XCPlaygroundPage.currentPage
page.liveView = v1.view

And here’s a breakdown of what happens:

  • First we create our emitter and make it as big as our main view by setting emitter.frame to self.view.bounds
  • Setting emitter.renderMode to kCAEmitterLayerAdditive is important, because it allows the colors of overlapping particles to be “stacked”. This is what creates the bright yellow-white-ish effect at the center of the explosion
  • emitter.emitterPosition sets the center from which the particles will be emitted

Now that we have the emitter set up, let’s create a particle template. Using your image editor, create a small white square (I used a 3 x 3 px image at 144 dpi), make its background white and name it spark.png. Drag it into the Resources folder of your playground.

Here’s the breakdown of what happens with the template:

  • First we create the template cell and fill its contents with the spark image that we have just created
  • The cell’s birthRate controls how many particles the emitter will create every second
  • The cell’s lifetime is kind of self-explanatory – it controls how many seconds each particle will live
  • The cell’s alphaSpeed is the rate, per second, at which the alpha component changes over the lifetime of the cell
  • The cell’s velocity is the initial velocity of the particle
  • The velocityRange is the amount by which the velocity of the cell can vary
  • The emissionRange is the angle, in radians, defining a cone around the emission angle

There are lots of other configurable properties for the CAEmitterCell which you can examine over here. With some tweaks to the parameters of the CAEmitterCell and CAEmitterLayer, using the code above as a template, you can achieve something like the following:

[Animated GIF: the spaceship scene built with star, smoke and fire emitters]

Here’s the full code of that second example:

import Foundation
import XCPlayground
import UIKit

class Emit: UIViewController {

  let shipOffset:CGFloat = 12.0

  func initStarsEmitter(withSize: CGSize) -> CAEmitterLayer {
    let starsEmitter = CAEmitterLayer()
    starsEmitter.emitterSize = CGSizeMake(withSize.width, withSize.height * 2)
    starsEmitter.emitterShape = kCAEmitterLayerLine
    starsEmitter.emitterMode = kCAEmitterLayerUnordered
    starsEmitter.emitterPosition = CGPointMake(withSize.width / 2, 0)
    starsEmitter.emitterDepth = 1.0
    return starsEmitter
  }

  func initStarsEmitterCell() -> CAEmitterCell {
    let star = CAEmitterCell()
    star.birthRate = 30
    star.lifetime = 10
    star.lifetimeRange = 0.5
    star.color = UIColor(white: 1, alpha: 1).CGColor
    star.contents = UIImage(named: "particle")!.CGImage
    star.velocityRange = 400
    star.emissionLongitude = CGFloat(M_PI)
    star.scale = 0.4
    star.spin = 1.0
    star.scaleRange = 0.8
    star.alphaRange = 0.3
    star.alphaSpeed = 0.5
    return star
  }

  func initSmokeEmitter(withSize: CGSize) -> CAEmitterLayer {
    let smokeEmitter = CAEmitterLayer()
    smokeEmitter.frame = self.view.bounds
    smokeEmitter.emitterPosition = CGPointMake(withSize.width / 2, withSize.height / 2)
    smokeEmitter.emitterMode = kCAEmitterLayerPoints
    return smokeEmitter
  }

  func initSmokeEmitterCell() -> CAEmitterCell {
    let smoke = CAEmitterCell()
    smoke.birthRate = 11
    smoke.emissionLongitude = CGFloat(M_PI) / 2
    smoke.lifetime = 0
    smoke.velocity = 40
    smoke.velocityRange = 20
    smoke.emissionRange = CGFloat(M_PI) / 4
    smoke.spin = 1
    smoke.spinRange = 6
    smoke.yAcceleration = 160
    smoke.contents = UIImage(named: "smoke")?.CGImage
    smoke.scaleSpeed = 0.7
    return smoke
  }

  func initFireEmitter(withSize: CGSize) -> CAEmitterLayer {
    let fireEmitter = CAEmitterLayer()
    fireEmitter.frame = self.view.bounds
    fireEmitter.emitterPosition = self.view.center
    fireEmitter.emitterMode = kCAEmitterLayerOutline
    fireEmitter.emitterShape = kCAEmitterLayerLine
    fireEmitter.renderMode = kCAEmitterLayerAdditive
    fireEmitter.emitterSize = CGSizeMake(0, 0)
    return fireEmitter
  }

  func initFireEmitterCell() -> CAEmitterCell {
    let fire = CAEmitterCell()
    fire.emissionLatitude = CGFloat(M_PI)
    fire.birthRate = 340
    fire.lifetime = 0.4
    fire.velocity = 80
    fire.velocityRange = 30
    fire.emissionRange = 1.1
    fire.yAcceleration = 200
    fire.scaleSpeed = 0.3
    fire.color = UIColor(red: 0.8, green: 0.4, blue: 0.2, alpha: 1.0).CGColor

    fire.contents = UIImage(named: "fire")?.CGImage
    return fire
  }

  override func viewDidLoad() {
    super.viewDidLoad()
    let size = self.view.bounds.size

    let starsEmitter = initStarsEmitter(size)
    self.view.layer.addSublayer(starsEmitter)
    let star = initStarsEmitterCell()
    starsEmitter.emitterCells = [star]


    let shipImage = UIImage(named: "spaceship1")!
    let ship = CALayer()
    ship.frame = CGRect(x: (size.width - shipImage.size.width) / 2, y: (size.height/2) - (shipImage.size.height + shipOffset), width: shipImage.size.width, height: shipImage.size.height)
    ship.contents = shipImage.CGImage
    ship.contentsScale = shipImage.scale
    self.view.layer.addSublayer(ship)

    let smokeEmitter = initSmokeEmitter(size)
    self.view.layer.addSublayer(smokeEmitter)
    let smoke = initSmokeEmitterCell()
    smokeEmitter.emitterCells = [smoke]


    let fireEmitter = initFireEmitter(size)
    self.view.layer.addSublayer(fireEmitter)
    let fire = initFireEmitterCell()
    fireEmitter.emitterCells = [fire]
  }
}

let v1 = Emit()
let page = XCPlaygroundPage.currentPage
page.liveView = v1.view

The starship image can be downloaded from this tutorial (and you’ll have to rotate it with your photo editing tool).

The other assets (smoke.png, fire.png and particle.png) are packed here: caemitterlayer_assets.zip

Happy hacking!

Swift 2.1: Printable errors

Here’s a Swift 2.1 snippet demonstrating how you can print/show error messages based on the thrown error type.

Basically we are implementing the CustomStringConvertible protocol, which allows us to provide a human readable error message for each enum case.

enum StrError: ErrorType, CustomStringConvertible {
  case StrIsEmpty

  var description:String {
    switch self {
      case .StrIsEmpty:
        return "The provided string is empty"
      }
  }
}

func showString(input: String) throws {
    guard !input.isEmpty else {
        throw StrError.StrIsEmpty
    }
    print("string: \(input)")
}

do {
    try showString("")
} catch let error as StrError {
    print("Error: \(error)")
}
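
The same pattern scales nicely to error cases that carry associated values. Here’s a quick sketch along the same lines – the ValidationError enum and its cases are made up for the example:

enum ValidationError: ErrorType, CustomStringConvertible {
  case Empty
  case TooLong(limit: Int)

  var description: String {
    switch self {
      case .Empty:
        return "The provided string is empty"
      case .TooLong(let limit):
        return "The provided string is longer than \(limit) characters"
    }
  }
}

do {
  throw ValidationError.TooLong(limit: 140)
} catch {
  print("Error: \(error)") // string interpolation picks up the description defined above
}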

Swift 2.0 Snippet: CoreData fetching with error handling

I have just downloaded the first Xcode 7 beta and started converting one of my playground projects to Swift 2.

Tip: Branch your code before starting a migration and don’t forget to commit all your changes before branching.

After the migration work was done I was left with a bunch of errors to fix. One of them was related to the way I was fetching from Core Data. My code looked like this:

func getGalleryForItem(item: Item)-> [Image]? {
  var fetchRequest = NSFetchRequest(entityName: "Image")
  var predicate = NSPredicate(format: "%K == %@", "item", item)
  fetchRequest.predicate = predicate

  var fetchError:NSError? = nil

  if let fetchResults = self.managedObjectContext?.executeFetchRequest(fetchRequest, error: &fetchError) as? [Image] {
    if fetchError != nil {
      println("getGalleryForItem error: \(fetchError!.localizedDescription)")
    }
    return (!fetchResults.isEmpty) ? fetchResults : nil
  } else {
    if fetchError != nil {
      println("getGalleryForItem error: \(fetchError!.localizedDescription)")
    }
    return nil
  }
}

And here’s how the Swift 2.0-compatible version of the above looks – quite neat, I should say!

func getGalleryForItem(item: Item)-> [Image]? {
  let fetchRequest = NSFetchRequest(entityName: "Image")
  let predicate = NSPredicate(format: "%K == %@", "item", item)
  fetchRequest.predicate = predicate

  // here's the sugar
  do {
    let fetchResults = try self.managedObjectContext?.executeFetchRequest(fetchRequest) as? [Image]
    return fetchResults
  } catch let fetchError as NSError {
    print("getGalleryForItem error: \(fetchError.localizedDescription)")
    return nil
  }
}
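
If you don’t need to inspect the error at all, Swift 2’s try? keyword can shorten this even further. Here’s a sketch under the same assumptions about the Item and Image entities:

func getGalleryForItem(item: Item) -> [Image]? {
  let fetchRequest = NSFetchRequest(entityName: "Image")
  fetchRequest.predicate = NSPredicate(format: "%K == %@", "item", item)

  guard let context = self.managedObjectContext else { return nil }

  // try? turns a thrown error into nil, silently discarding the error details
  guard let fetchResults = try? context.executeFetchRequest(fetchRequest) else { return nil }
  return fetchResults as? [Image]
}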

Swift: Expand UIButton’s clickable area

We all know the drill – there’s a button that doesn’t always detect touch events for various reasons (being too small, having too thin a symbol in it, etc).

Here’s an elegant way to solve this in Swift – we “expand” the clickable area around the button so that touches within that area are treated as button touch events:

extension UIButton {
  override public func pointInside(point: CGPoint, withEvent event: UIEvent?) -> Bool {
    let relativeFrame = self.bounds
    let hitTestEdgeInsets = UIEdgeInsetsMake(-44, -44, -44, -44)
    let hitFrame = UIEdgeInsetsInsetRect(relativeFrame, hitTestEdgeInsets)
    return CGRectContainsPoint(hitFrame, point)
  }
}
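
Keep in mind that an extension like this changes the hit area of every UIButton in the app. If you only want the behaviour on specific buttons, the same idea can live in a subclass with configurable insets – a quick sketch, where the class name and the 44-point padding are my own choices:

import UIKit

class ExpandedHitAreaButton: UIButton {
  // negative values grow the tappable area beyond the button's bounds
  var hitTestEdgeInsets = UIEdgeInsetsMake(-44, -44, -44, -44)

  override func pointInside(point: CGPoint, withEvent event: UIEvent?) -> Bool {
    let hitFrame = UIEdgeInsetsInsetRect(self.bounds, hitTestEdgeInsets)
    return CGRectContainsPoint(hitFrame, point)
  }
}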

Swift: Springboard-like loading animation using a custom layer

One thing I have struggled to wrap my head around in iOS is the concept behind some of the eye candy that happens on screen. Being a long-time full stack web developer, the concept of drawing into graphic contexts is not obscure to me, since that’s what <canvas> drawing/animation is based on, but with iOS there’s more to it than that.

You have the CoreGraphics stack, but you also have some UIKit tools that let you draw using a simplified syntax (UIBezierPath being the most colorful example). In today’s post I’ll focus on using CoreGraphics instead of UIBezierPath, although my solution started with UIBezierPath. In short, that went nowhere, so I had to go back to CoreGraphics 🙂

I have always believed that, to learn something, one should have a real problem to solve. My plan was to mimic what happens on the iOS springboard when an app installs – there’s a layer that dims the app icon and a circular pie-slice animation that shows the download progress, knocking out more and more of the dimming as the progress advances.

It’s a good idea to toy with, so here’s what I ended up with:

[Animated GIF: the circular progress animation knocking out the dimming layer]

First, my StackOverflow research revealed that simply creating a layer to host the graphics wouldn’t be enough – I had to create a custom subclass of CALayer and override one of its class methods (needsDisplayForKey) so that the endAngle property of my class can be animated.

Not doing so means the intermediate states of the animation aren’t tracked, resulting in the pie slices “skipping” between values. What doesn’t help is that Xcode 6.1.1 doesn’t show that method in its autocomplete menu. Lesson learned here – read the documentation carefully.

Also notice how the endAngle variable is declared with @NSManaged, since this is the property we’re going to animate:

@NSManaged var endAngle: CGFloat
var startAngle = CGFloat(-0.5 * M_PI)
var maxAngle = CGFloat(1.5 * M_PI)

private var circleOffset:CGFloat = 30.0

var center: CGPoint {
  return CGPointMake(self.bounds.size.width/2, self.bounds.size.height/2)
}

var radius: CGFloat {
  return min(center.x - circleOffset, center.y - circleOffset)
}

var startPoint: CGPoint {
  var startPointX = Float(center.x) + Float(radius) * cosf(Float(startAngle))
  var startPointY = Float(center.y) + Float(radius) * sinf(Float(startAngle))
  return CGPointMake(CGFloat(startPointX), CGFloat(startPointY))
}

var cw:Int32  {
  return (startAngle > endAngle) ? 1 : 0
}


override init!(layer: AnyObject!) {
  super.init(layer: layer)
  if let other = layer as? LoaderLayer {
    startAngle = other.startAngle
    endAngle = other.endAngle
  }
}

required init(coder aDecoder: NSCoder) {
  super.init(coder: aDecoder)
}

override class func needsDisplayForKey(key: String!) -> Bool {
  if (key == "endAngle") {
    return true
  }
  return super.needsDisplayForKey(key)
}

Having all of the above set up, it’s time to implement the actionForKey function. This function is called whenever a property of the class gets changed. We want to animate changes to endAngle, so we call our very own makeAnimationForKey method and return its animation when we detect a change to the endAngle value.

Notice how anim.fromValue is fetched from the presentation layer in makeAnimationForKey – it’s the last known value of the endAngle property before the animation starts:

override func actionForKey(event: String!) -> CAAction! {
  if event == "endAngle" {
    return makeAnimationForKey(event)
  }
  return super.actionForKey(event)
}

func makeAnimationForKey(key: String!) -> CABasicAnimation {
  var anim = CABasicAnimation(keyPath: key)
  anim.fromValue = self.presentationLayer()?.valueForKey(key)
  anim.timingFunction = CAMediaTimingFunction(name: kCAMediaTimingFunctionLinear)
  anim.duration = 0.5
  return anim
}

And, finally, here’s the drawing function itself. The most interesting part is how it uses blend modes – kCGBlendModeDestinationOver for the dimming rectangle and kCGBlendModeDestinationOut for the pie slice – to knock out the progress from the background, revealing more and more of what’s underneath.

The layer automatically removes itself when the progress reaches 100%:

override func drawInContext(ctx: CGContext!) {
  if (self.endAngle < maxAngle)
  {
    var backgroundRect = CGRectMake(0, 0, bounds.size.width, bounds.size.height)
    CGContextSetBlendMode(ctx, kCGBlendModeDestinationOver)
    CGContextAddRect(ctx, backgroundRect)
    CGContextSetFillColorWithColor(ctx, UIColor(white: 0.0, alpha: 0.4).CGColor)
    CGContextFillPath(ctx)

    CGContextBeginPath(ctx)
    CGContextMoveToPoint(ctx, center.x, center.y)
    CGContextAddLineToPoint(ctx, startPoint.x, startPoint.y)
    CGContextAddArc(ctx, center.x, center.y, radius, startAngle, endAngle, cw)
    CGContextClosePath(ctx)

    CGContextSetFillColorWithColor(ctx, UIColor(white: 1.0, alpha: 1).CGColor)
    CGContextSetLineCap(ctx, kCGLineCapRound)

    CGContextSetBlendMode(ctx, kCGBlendModeDestinationOut)
    CGContextDrawPath(ctx, kCGPathFill)
  } else {
    self.removeFromSuperlayer()
  }
}
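
To give an idea of how the layer might be driven, here’s a hedged usage sketch. It assumes the snippets above live in a CALayer subclass named LoaderLayer which also keeps CALayer’s plain initializer (override init() { super.init() }), and it uses a throwaway UIImageView as the dimmed icon:

import UIKit

let iconView = UIImageView(frame: CGRect(x: 0, y: 0, width: 120, height: 120))

let loader = LoaderLayer()
loader.frame = iconView.bounds
loader.endAngle = loader.startAngle   // begin with an empty pie slice
iconView.layer.addSublayer(loader)
loader.setNeedsDisplay()

// call this as the download progress (0.0 ... 1.0) arrives; every change to endAngle
// goes through actionForKey and gets the implicit half-second animation
func updateProgress(progress: CGFloat) {
  loader.endAngle = loader.startAngle + progress * (loader.maxAngle - loader.startAngle)
}

updateProgress(0.35)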

Update (Oct 14, 2015): I have wrapped a basic demo of the concept on Github. Here’s a link to the repository: ProgressDemo

Swift: Regular expression-based string replacements

One of the quirkiest things in Swift for me turned out to be understanding how Strings work.

Since all Strings in Swift are Unicode, the fact that a single character can be composed of two (or more) Unicode scalars is what makes String manipulation a bit tricky.

The root of all string operations is understanding what a String index is and how it differs from, say, an Int.

The String index is basically a representation of a character’s position within the String. Swift differs from many other programming languages in that its Strings are actually CollectionTypes (instead of being treated as an array of characters or an indexed pointer), and the String index is a BidirectionalIndexType rather than the plain Int used to access a character within a string elsewhere. Among other things, this gives us functions like predecessor() and successor() on string indexes.

In short, you can’t get a character by using string[3]. To access a character in Swift you have to use a String index, which can be generated with the standard library’s advance function:

var s = "test"
var charIndex = advance(s.startIndex, 2)
var ch = s[charIndex]
>> "s"
ch = s[charIndex.predecessor()]
>> "e"
ch = s[charIndex.successor()]
>> "t"

Almost all string manipulations are performed using ranges, and those ranges must consist of String indexes – a numeric range will produce an error:

charIndex = advance(s.startIndex, 2)

s.substringWithRange(Range(start: charIndex, end: s.endIndex))
// or
s[charIndex..<s.endIndex]
>> "st"

s.replaceRange(Range(start: charIndex, end: s.endIndex), with: "rererer")
// or
s.replaceRange(charIndex..<s.endIndex, with: "rererer")
>> "terererer"

Now, let’s put that knowledge to some practical use.

Let’s assume that we have a string containing some HTML and we want to convert it to a Markdown-compatible one. For the sake of simplicity I’ll just look at the case of replacing the <strong> opening and closing tags with the * marker used for emphasis in Markdown. See below:

var s = "<strong>Hell</strong>o, <strong>Hell</strong>o, <strong>Hell</strong>o"
var search = "<\\/?strong>"
var replaceWith = "*"
var replacementLength = countElements(replaceWith)
var err: NSError? = nil
var expr = NSRegularExpression(pattern: search, options: .CaseInsensitive, error: &err)


if let matches = expr?.matchesInString(s, options: nil, range: NSMakeRange(0, countElements(s)) ) {
  var replacedStringLengthDifference = 0
  for match in matches {
    var startIndex = advance(s.startIndex, (match.range.location + replacedStringLengthDifference))
    var endIndex = advance(s.startIndex, (match.range.length + match.range.location + replacedStringLengthDifference))
    replacedStringLengthDifference -= (match.range.length - replacementLength)
    s.replaceRange(startIndex..<endIndex, with: replaceWith)
  }
}

>> "*Hell*o, *Hell*o, *Hell*o"

Leveraging string indexes and ranges, it’s pretty easy to do String manipulations in Swift. The replacedStringLengthDifference variable keeps track of how much the string’s length (in characters) has changed after the replacements made so far – for example, replacing the 8-character <strong> tag with the single-character * shifts every subsequent match 7 characters to the left. We use that difference to find the correct location of the next match.

Of course, all of the above could also be solved with a regular expression-based replacement. It depends on whether you want the source string replaced in place – in which case you’ll have to turn it into an NSMutableString first – or whether you want a new string that contains the replacements. Here’s the case where we replace within the original string:

var original = NSMutableString(string: "<strong>Hell</strong>o, <strong>Hell</strong>o, <strong>Hell</strong>o")
var search = "<\\/?strong>"
var replaceWith = "*"
var err: NSError? = nil
var expr = NSRegularExpression(pattern: search, options: .CaseInsensitive, error: &err)
if (err === nil) { 
  expr?.replaceMatchesInString(original, options: nil, range: NSMakeRange(0, original.length), withTemplate: replaceWith)
  println(original)
}

>> "*Hell*o, *Hell*o, *Hell*o"

And here’s the case where we get a new string containing all the replacements while keeping the original string untouched. Note that the result is an Optional:

var original = "<strong>Hell</strong>o, <strong>Hell</strong>o, <strong>Hell</strong>o"
var search = "<\\/?strong>"
var replaceWith = "*"
var err: NSError? = nil
var expr = NSRegularExpression(pattern: search, options: .CaseInsensitive, error: &err)
if (err === nil) { 
  if let replacement = expr?.stringByReplacingMatchesInString(original, options: nil, range: NSMakeRange(0, countElements(original)), withTemplate: replaceWith) {
    println(replacement)
  }
}

>> "*Hell*o, *Hell*o, *Hell*o"