Functional Programming – A Gentle Introduction

How much do you know about Functional Programming?

Wikipedia defines Functional Programming as:

A programming paradigm that treats computation as the evaluation of mathematical functions.

If this sounds like mathematical mumbo-jumbo that is completely removed from your day-to-day work as a Software Engineer, then you’re not alone. More than anything, Functional Programming is a way of thinking about building software in a paradigm perhaps very different from the one we’re normally used to. Let’s break down the core building blocks of a Functional Language and see how they can help us deliver better, more maintainable and more robust software that you’ll actually have fun writing.

Although Functional Programming is a concept almost as old as computing itself (its roots lie in Lambda Calculus, which was developed in the 1930s!), I’ll assume throughout the post that you are a programmer who comes from an imperative, Object-oriented style of programming.

Programming is about transforming data

The traditional Object-oriented paradigm teaches us that programming is all about modelling your problem domain. Classes define behaviour, and object instances hold state. You will spend most of your time defining complex hierarchies that try to model your understanding of the domain.

When coding with objects, you’re mostly dealing with state. You call methods on objects and pass them around to other objects, and all the while these objects are updating their own state or changing another object’s state. Data is effectively hidden, enclosed in objects.

However, in the real world, we’re not really concerned with modelling complex hierarchies and maintaining state in our minds, we’re focused on getting things done. You might say that programming is more about shifting state than holding state. Functional Programming brings this to light in several ways, focusing on doing things instead of talking about things.

This blog, for example, takes Markdown files and transforms them into HTML files. A webserver then transforms your HTTP request into an HTTP response containing that same HTML file. A Javascript program then takes bits of the HTML and highlights pieces of code in the text. You can probably see where this is going.

Being Declarative, as opposed to Imperative

Imperative programs are authored using sequential statements. One line of code performs an action, and then the next line of code performs another action. These statements usually change the program’s state in one way or another.

If you read an imperative program out loud, from top to bottom, you will quickly see that it is entirely focused on telling the computer how to accomplish something. An algorithm is described as a set of steps that must be performed in order to produce a result. Here is a piece of imperative code whose objective is to get all dogs and their owners:

//dogs = [{name: 'Fido', owner_id: 1}, {...}, ... ]
//owners = [{id: 1, name: 'Bob'}, {...}, ...]

var dogsWithOwners = []
var dog, owner

for(var di=0; di < dogs.length; di++) {
  dog = dogs[di]

  for(var oi=0; oi < owners.length; oi++) {
    owner = owners[oi]
    if (owner && dog.owner_id == owner.id) {
      dogsWithOwners.push({
        dog: dog,
        owner: owner
      })
    }
  }
}

Declarative programs, on the other hand, are authored using expressions. These kinds of programs express the logic of a computation without having to describe its control flow. They focus on expressing what an algorithm should do instead of how to do it. Let’s take the previous example of getting dogs and their owners and rewrite it in a declarative language that you are most likely already very familiar with, SQL:

SELECT * FROM dogs, owners
WHERE dogs.owner_id = owners.id

This accomplishes exactly the same thing as before, only now it is left up to the query engine to work out the details of how to actually build up the data structure we’re interested in.

Declarative programs can be considered proofs of formal logic, as the program is executed by searching for proofs of its statements. As a result of this kind of determinism, many people argue that declarative programming greatly simplifies the task of writing concurrent applications.

Side effects may include

In programming, a side effect is any observable change to the world that results from applying a function. In pure functional languages, such as Haskell, functions are free of side effects by default, meaning that state changes are only represented as functions that take in data, transform it and return it.

A glaringly obvious example of a side effect is printing debug statements to a console. The console is part of the world the program runs in, and printing to it changes its state.

Functions without side effects are said to be referentially transparent, meaning that, given the same input x, the function will always return the same output y. Because of this, not only does your program arguably become easier to reason about, but writing tests to exercise your functions also tends to become much simpler.
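To make this concrete, here is a quick Javascript sketch (the function names are ours, purely for illustration) contrasting an impure function with a referentially transparent one:

```javascript
// Impure: the output depends on hidden, mutable state outside the function.
var counter = 0;
function impureIncrement() {
  counter += 1; // side effect: mutates external state
  return counter;
}

// Pure: the result depends only on the argument, so it is
// referentially transparent and trivial to test.
function increment(n) {
  return n + 1;
}

increment(1); // -> 2, every single time
```

Calling `impureIncrement` twice gives two different answers; calling `increment(1)` a thousand times always gives 2.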


Immutability

Immutability is a concept that lies at the core of Functional Programming. It describes an object or data structure that, once created, can never be changed again for the duration of the program.

The key thing about immutable objects is that, since they are guaranteed to never change again, they are suddenly much easier to reason about. Immutability removes a lot of the headaches in concurrent programming, since two threads accessing the same data at the same time will never crash because the object unexpectedly changed state.

So how do we reflect changes in data if everything is frozen in a perfect glacier? Instead of directly mutating the object, you simply create a copy of it with the new changes. Because everything is guaranteed not to change, compilers can perform very smart optimisations to the process of copying an object, making its overhead negligible.

In traditional mutable programming, here is how you would add a person to a world:

world.add_person person

In contrast, this is how you’d do it in a functional language, with immutability and referential transparency:

world = add_person world, person

The current world is passed in as a dependency of our add_person function, and this function returns an updated copy of the world with another person in it. Any other thread currently doing something with the “old” world remains safe because nothing changed.
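In Javascript you can approximate this style by hand; here is a minimal sketch, assuming a made-up `world` structure that simply holds a list of people:

```javascript
// Hypothetical `world` shape: a frozen object holding a list of people.
var world = Object.freeze({ people: ['Alice'] });

function addPerson(world, person) {
  // Never mutate: build and return a fresh, frozen copy instead.
  return Object.freeze({
    people: world.people.concat([person])
  });
}

var newWorld = addPerson(world, 'Bob');
world.people;    // -> ['Alice'] (the old world is untouched)
newWorld.people; // -> ['Alice', 'Bob']
```

Note that `Object.freeze` is shallow; real immutable-data libraries go further, but the principle is the same.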

It’s functions all the way down

In a Functional language, as you might’ve guessed, functions are considered first-class citizens. You do almost everything via functions that take data and return transformed data. A function is considered first-class if it can be treated the same as any other primitive (such as an Integer), and thus can be stored anywhere (variables, arrays, hashes, etc.). Functions can also be declared anonymously, which means that you can declare a function without binding it to an identifier. These are also called lambdas in several languages.
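As a quick Javascript sketch (all names here are illustrative), this is what treating functions as values looks like:

```javascript
// Stored in a variable, and anonymous: the function itself has no name.
var double = function (x) { return x * 2; };

// Stored in an array and in an object, just like any other primitive.
var operations = [double, function (x) { return x + 1; }];
var math = { twice: double };

double(3);        // -> 6
operations[1](3); // -> 4
math.twice(3);    // -> 6
```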

Higher order functions

Support for anonymous functions means that we can express the concept of a Higher Order Function. This means that you can construct functions that take in functions as their arguments, or functions that return other functions.

Higher order functions allow programmers to use high-level abstractions that you may already be familiar with, such as map or reduce, as well as techniques called currying and partial application, in which a function that takes n parameters can be made to return a new function that takes the remaining n-1 parameters when only one argument is passed. Once the function has received its full set of arguments, it executes the code in its original function body.
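Here is a minimal, hand-rolled sketch of partial application in Javascript (the `partial` helper is our own invention for illustration, not a standard API):

```javascript
// Takes a function and some preset arguments, and returns a new
// function that waits for the rest before calling the original.
function partial(fn) {
  var preset = Array.prototype.slice.call(arguments, 1);
  return function () {
    var rest = Array.prototype.slice.call(arguments);
    return fn.apply(this, preset.concat(rest));
  };
}

function add(a, b) { return a + b; }

var addFive = partial(add, 5); // a function of one remaining argument
addFive(10); // -> 15
```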


Closures

Because you are able to declare a function inside another function, support for closures becomes an important feature. Boiled down to the core, a closure is a mechanism that “captures” an enclosing function’s scope, meaning that the inner function has access to all the variables of its parent function at runtime. Here is an example of this in Javascript:

function makeGreeting(salutation) {
  function greetName(name) {
    return salutation + ', ' + name;
  }

  return greetName;
}
var helloGreeting = makeGreeting('Hello');
helloGreeting('Babbel') // -> "Hello, Babbel"

In this example, “makeGreeting” takes a salutation and returns a reference to an inner function “greetName”. Once the inner function is called, it accesses its parent’s “salutation” variable and combines it with a given “name” argument to produce the full greeting.

Functions as Lego bricks

Composition allows us to express chains of data transformations by combining simple functions together to build more complex ones.

The return of each function is passed on to the next one in the chain, and the result of the last function is considered to be the result of the whole composition.

Composition encourages programmers to develop small, focused functions that do one thing and do it very well, and then combine them to build programs that are orders of magnitude more complex.

Here is a basic example of this concept:

y = g(x)
z = f(y)

z = f(g(x))

Not very expressive, is it? In order to understand the composition chain, you basically have to read from the inside out: first “g” is applied to “x” and then “f” is applied to the result. This is why most functional languages offer a way to expressively compose functions. Here are examples in Haskell, in Elixir and even in the Unix shell:

# Haskell: apply g first, then f
foo = f . g

# Elixir: pipe a value through g, then f
x |> g() |> f()

# Unix: pipe the output of program g into program f
g | f

If you’re dying to know how to roll your own composition features for your favourite language, here are some examples in Javascript and Ruby (taken from Underscore and Funkify, respectively).

In Javascript:

var compose = function () {
  var args = arguments;
  var start = args.length - 1;
  return function () {
    var i = start;
    var result = args[start].apply(this, arguments);
    while (i--) result = args[i].call(this, result);
    return result;
  };
};

var add1 = function (x) { return x + 1; };
var mult2 = function (x) { return x * 2; };
var square = function (x) { return x * x; };
var negate = function (x) { return -x; };

// compose applies right-to-left: negate first, then square, mult2, add1
var f = compose(add1, mult2, square, negate);
f(2); // -> 9

Now in Ruby:

module Funkify
  class ::Proc
    def |(other)
      Funkify.compose(other, self)
    end
  end

  def compose(*args)
    head, *tail = args
    head = _procify(head)
    if args.size <= 2
      ->(*xs) { head.(_procify(tail[0]).(*xs)) }
    else
      ->(*xs) { head.(compose(*tail).(*xs)) }
    end
  end

  def _procify(obj)
    case obj
    when Symbol
      obj.to_proc
    else
      obj
    end
  end
end

(mult(5) | add(1) | negate).(3) #=> -16

Never write another loop

When programming, arguably most of our time is spent dealing with collections. We receive collections of entities, we create new collections, we transform them, and so on. You’re probably used to iterating through a collection using loops. However, loops are an imperative construct, and pure functional languages generally lack these control structures. Instead, most work is performed through higher-order functions over collections (often discussed under the banner of Applicative Functors), some of which you may use already, such as map, each, reduce, select, reject, take, zip, etc. Let’s go over some common tasks, how you’d achieve them using imperative loops, and then how you’d get the same result using one of these functions.


Map

Map transforms a list into another list by applying a function to each value of the collection. Given a list of numbers, let’s produce a similar list containing their square values. First, imperatively:

var square = function (x) { return Math.pow(x, 2); };
var values = [1, 2, 3, 4, 5];

var newValues = [];
for (var i = 0; i < values.length; i++) {
  newValues.push(square(values[i]));
}

// -> [1, 4, 9, 16, 25]

And now declaratively:

var square = function (x) { return Math.pow(x, 2); };
var values = [1, 2, 3, 4, 5];

var newValues = values.map(square);
// -> [1, 4, 9, 16, 25]


Reduce

Reduce “folds” a list into a single value, using an initial value as an accumulator. Let’s try to sum all the numbers in a list. Imperatively:

var sum = function (a, b) { return a + b };
var values = [1, 2, 3, 4, 5];

var sumOfValues = 0;
for (var i = 0; i < values.length; i++) {
  sumOfValues = sum(sumOfValues, values[i]);
}

// -> 15

And now declaratively:
var sum = function (a, b) { return a + b };
var values = [1, 2, 3, 4, 5];

var sumOfValues = values.reduce(sum, 0);
// -> 15


Filter

Filter builds a new list containing only the values of a given list that pass a truthiness check. In this case, we’re trying to get all the even numbers. Again, imperatively:

var isEven = function (x) { return x % 2 == 0 };
var values = [1, 2, 3, 4, 5];

var evenValues = [];
for (var i = 0; i < values.length; i++) {
  if (isEven(values[i])) {
    evenValues.push(values[i]);
  }
}

// -> [2, 4]

And then declaratively:

var isEven = function (x) { return x % 2 == 0 };
var values = [1, 2, 3, 4, 5];

var evenValues = values.filter(isEven);

// -> [2, 4]

All together, now!

Your boss calls you at 5PM on a Friday. They need the sum of all squared even numbers! There goes the weekend… Well, not if you take advantage of Applicative Functors and function composition (in this case, demonstrated by “chaining” in Javascript):

var sumOfEvenSquares = values
  .filter(isEven)
  .map(square)
  .reduce(sum, 0);
// -> 20

Easy, right? Notice how writing code like this makes its intent completely clear. All the actual looping is abstracted away, meaning that your program is now describing what it wants done instead of worrying about the internals of how to do it.

Dolce fare niente

The eagle-eyed among you may have noticed that, in the previous example, we loop through the whole collection three times: once for filtering, once for mapping and, finally, once for reducing. An imperative version could do all the work in a single pass, so how do functional languages solve this issue?

Many functional languages implement, in some form or another, the concept of Lazy Evaluation. This is an evaluation strategy that delays the evaluation of an expression until its value is actually needed, thus avoiding unnecessary work. In the example above, a lazy runtime could accumulate the operations (first filter, then map, then reduce) and then perform all the needed calculations during a single pass over the collection.

The cool thing about this is that, since expressions are not eagerly evaluated (i.e., computed as soon as possible), you can even construct infinite data structures. A lot of languages express this concept through Streams, a concept you may have heard of before.
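Javascript generators can sketch the same idea: values of a potentially infinite stream are only computed when someone asks for them (the `naturals` and `take` helpers here are illustrative):

```javascript
// An "infinite" stream of natural numbers; nothing runs until values are pulled.
function* naturals() {
  var n = 1;
  while (true) yield n++;
}

// Pull the first `count` values out of a (possibly infinite) generator.
function take(gen, count) {
  var out = [];
  for (var i = 0; i < count; i++) out.push(gen.next().value);
  return out;
}

take(naturals(), 5); // -> [1, 2, 3, 4, 5]
```

The stream is conceptually endless, but because evaluation is deferred, `take` only ever forces the five values it needs.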


Recursion

They say that you can only really understand recursion once you understand recursion. Since pure Functional languages have no imperative control structures to loop over collections, one way to iterate is to express iteration algorithms recursively. This concept ties in well with Pattern Matching, a technique that allows you to extract values out of a complex data structure. Let’s see how we would describe a “map” algorithm ourselves in Elixir:

def map([], _func) do
  []
end

def map([ head | tail ], func) do
  [func.(head) | map(tail, func)]
end
Again, this algorithm is completely descriptive. The map of an empty list is, by definition, an empty list, so we introduce a function clause that expresses this condition: we pattern match on the first argument, saying that this function body should be used for calls to “map” with an empty list ([]). The map of a non-empty list, in turn, is the result of applying the function to the first element of the list (its head), prepended to the map of the remaining list (its tail).
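For comparison, the same head/tail recursion can be sketched in Javascript (which lacks pattern matching, so the base case becomes an explicit check):

```javascript
function map(list, func) {
  // The map of an empty list is, by definition, an empty list.
  if (list.length === 0) return [];
  // Otherwise, apply func to the head and recurse on the tail.
  var head = list[0];
  var tail = list.slice(1);
  return [func(head)].concat(map(tail, func));
}

map([1, 2, 3], function (x) { return x * 2; }); // -> [2, 4, 6]
```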

You might be thinking that recursion can be slow, and you’d be right! Function calls are pushed onto a stack, and if your function is recursive, this stack can keep growing until it reaches its limit and your program blows up. However, thanks to an optimization technique dubbed Tail Call Optimization, if the very last thing your function does is a recursive call to itself, the compiler can simply replace the call with a jump back to the start of the function, reusing the current stack frame with the new arguments. In practice, this means that your stack never grows, so there’s no chance of a stack overflow.
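As an illustrative sketch, here is a sum written in tail-call form with an accumulator (note that most Javascript engines do not actually perform tail call optimization, so treat this as a demonstration of the shape rather than a safe technique in Javascript itself):

```javascript
// Tail-recursive: the recursive call is the very last operation performed,
// so a TCO-capable compiler can reuse the current stack frame.
function sumList(list, acc) {
  acc = acc || 0;
  if (list.length === 0) return acc;
  return sumList(list.slice(1), acc + list[0]);
}

sumList([1, 2, 3, 4, 5]); // -> 15
```

The accumulator carries the running total forward, which is what makes the recursive call the final step instead of leaving an addition pending on the stack.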

Bring some functions into your life

You might already be using a lot of these techniques. Even so, I would encourage you to try one or more of a growing number of functional languages. It might be painful, yes. A lot of your instincts, developed over years of programming in an OO, imperative world will be wrong. Stick with it, though, and your eyes might be opened, and you might find new ways of thinking about and solving problems.

Want to join our Engineering team?
Apply today!