Like most people, I had heard a lot about functional programming a few months ago but hadn't dug any deeper; to me it was just another buzzword. Since then, I've learned a lot more about functional programming, and I thought I'd write something for newcomers who keep hearing about it but don't know what it is.

When you think about functional programming, two languages probably come to mind: Haskell and Lisp, along with plenty of debate over which is better. Although they are both functional languages, they are very different, and each has its own selling points. By the end of this article, I hope you'll have a clearer picture of all this. Both have left their mark on more modern languages: you may have heard of Elm and Clojurescript, both of which compile to JavaScript. But before diving into language specifics, I'd like to give you a deeper look at some of the core concepts and patterns of functional languages.

It is highly recommended to have a cup of coffee on your desk before stepping into this new world of functional programming.

Pure functions

At the heart of functional programming is a formal mathematical system for describing logic: the lambda calculus. Mathematicians like to describe programs as transformations of data, which leads to the first concept: pure functions. A pure function has no side effects, depends only on its inputs, and always returns the same output for the same input. Here's an example:

// pure
const add10 = (a) => a + 10
// impure: depends on the external variable x
let x = 10
const addx = (a) => a + x
// impure: changes the external variable x
const setx = (v) => x = v

The impure function addx depends indirectly on the external variable x: if you change x, addx will return different results for the same argument. This makes programs difficult to statically analyze and optimize at compile time. More useful to JavaScript developers, though, is that pure functions reduce a program's cognitive load. When you read a pure function, you only have to think about the function body itself; you don't have to worry about external context, such as x having been changed somewhere else.

Function composition

Another great thing about functional programming is that you can combine functions into new functions. A special operator for describing programs in the lambda calculus is composition. Composition "composes" two functions into a new function, as follows:

const add1 = (a) => a + 1
const times2 = (a) => a * 2
const compose = (a, b) => (c) => a(b(c))
const add1OfTimes2 = compose(add1, times2)
add1OfTimes2(5) // => 11

Composition reads like the preposition "of". Note the order of the arguments and how they're evaluated: add1OfTimes2 is "add one of times two", so the second function is applied first. The reverse of composition is the pipe, which works like a Unix pipe: functions are applied left to right, and here it takes an array of functions as its argument.

const pipe = (fns) => (x) => fns.reduce((v, f) => f(v), x)
const times2add1 = pipe([times2, add1])
times2add1(5) // => 11

With function composition, we can build more complex data transformations by combining many small functions together, which helps you process data in a cleaner and more concise way.

In practice, composition is often a better alternative to object-oriented inheritance. Here's a slightly contrived but practical example. Suppose you need to create a greeting for your users.

const greeting = (name) => `Hello ${name}`

Great — a simple pure function. Then your project manager says you now need to show more information about the user, adding a prefix before the name. So you might change your code to something like this:

const greeting = (name, male=false, female=false) =>
  `Hello ${male ? 'Mr. ' : female ? 'Ms. ' : ''}${name}`

This code isn't terrible yet, but what if we need to add more and more logic, like "Dr." or "Sir"? What about suffixes such as "MD" or "PhD"? Or a casual "Sup" instead of "Hello"? Now things are getting messy. Piling conditional flags into a function like this isn't formally object-oriented inheritance, but it's analogous to inheriting from an object and overriding its properties and methods. Since we're opposed to piling on conditional logic, let's try function composition instead:

const formalGreeting = (name) => `Hello ${name}`
const casualGreeting = (name) => `Sup ${name}`
const male = (name) => `Mr. ${name}`
const female = (name) => `Mrs. ${name}`
const doctor = (name) => `Dr. ${name}`
const phd = (name) => `${name} PhD`
const md = (name) => `${name} M.D.`
formalGreeting(male(phd("Chet"))) // => "Hello Mr. Chet PhD"

This is far more maintainable and readable: each function does one simple thing, and we can easily compose them together. We haven't finished the whole example yet — bring in the pipe function!

const identity = (x) => x
const greet = (name, options) => {
  return pipe([
    // prefix
    options.doctor ? doctor :
    options.male ? male :
    options.female ? female :
    identity,
    // suffix
    options.phd ? phd :
    options.md ? md :
    identity,
    // greeting goes last, so it wraps the fully decorated name
    options.formal ? formalGreeting :
    casualGreeting
  ])(name)
}

Another benefit of pure functions and function composition is that errors are easier to trace. Whenever an error occurs, you can follow it up through each function to the source of the problem. In object-oriented programming, this can be far more confusing, because you generally don't know what other code mutated the object's state and caused the problem.

Currying

Currying is named after Haskell Curry, the same mathematician the Haskell language is named after. The essence of currying is this: if you call a function with fewer arguments than it expects, it returns a new function that accepts the remaining arguments. There are excellent articles explaining this in great detail; here is a simple example of currying using Ramda.js.

In the following example, we create a curried function add that takes two arguments. When we pass only one argument, we get back an intermediate function add1 that takes the remaining argument.

// assumes Ramda is available, e.g. const R = require('ramda')
const add = R.curry((a, b) => a + b)
add(1, 2) // => 3
const add1 = add(1)
add1(2) // => 3
add1(10) // => 11

In Haskell, all functions are automatically curried. There are no optional or default parameters.
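
To see what that means, here's hand-rolled currying in plain JavaScript, no library required — a sketch of the style Haskell gives you for free, where every function takes one argument and returns the next function:

const add = (a) => (b) => a + b
add(1)(2) // => 3
const add1 = add(1)
add1(10) // => 11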

In practical terms, currying is ideal for use with map, compose, and pipe. For example:

const users = [{name: 'chet', age: 25}, {name: 'joe', age: 24}]
R.pipe(
  R.sortBy(R.prop('age')),  // sort users by the age property
  R.map(R.prop('name')),    // get each name property
  R.join(', ')              // join the names with commas
)(users)
// => "joe, chet"

This makes data processing feel nicely declarative: the code reads just like the comments!

Monads, Functors, and Fancy Words

Monads and functors may just be buzzwords to you. If you want to understand them at a deeper level, I highly recommend this article, which explains them with great graphics. But they're really not that complicated.

Monads are pretty interesting, though. A monad can be viewed as a container for a value, and to open the container and do something with the value, you map over it. Take a look at this simple example:

// monad
list = [-1, 0, 1]
list.map(inc) // => [0, 1, 2]
list.map(isZero) // => [false, true, false]

One important thing about monads and functors is that mathematicians have been studying these same ideas in category theory. This not only helps us understand the framework of our programs, it also provides algebraic theorems and proofs that can be used to statically analyze and optimize our code at compile time. This is one of the great things about Haskell: the Glasgow Haskell Compiler is a feat of human ingenuity.

Category theory offers all kinds of theorems and properties. For example, here's a simple law:

list.map(inc).map(isZero) // => [true, false, false]
list.map(compose(isZero, inc)) // => [true, false, false]

A map compiles down to an efficient loop, typically an O(n) operation (linear time) that still has to chase the next pointer through the list. But the first version traverses the list twice, while the second traverses it only once, so the second is roughly twice as fast. These are the kinds of transformations Haskell applies to your code at compile time to make it blazingly fast — and there's a cool trick to it, which I'll explain later.

To extend monads a bit, there's an interesting monad called the Maybe monad (often called Option or Optional, as in Swift). In Haskell there is no such thing as null or undefined; to represent a value that might be "null", you wrap it in a Maybe monad, and the Haskell compiler knows how to make you handle it.

The Maybe monad is a union type that is either Nothing or Just something. In Haskell you can define Maybe like this:

data Maybe a = Nothing | Just a

The lowercase a is a type variable: a Maybe can wrap a value of any type.

Since Maybe is a monad, you can use .map() on it to change the value it contains! When you map over a Maybe, if it's a Just, the function is applied to the value and a new Just is returned with the new value; if it's a Nothing, Nothing is returned. In Haskell the syntax is quite elegant thanks to pattern matching, but in JavaScript using a Maybe might look like this:

const x = Maybe.Just(10)
const n = x.map(inc)
n.isJust() // true
n.value() // 11
const y = Maybe.Nothing
const m = y.map(inc) // no error!
m.isNothing() // true
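
For reference, here's a minimal sketch of a Maybe that supports the usage above — a hypothetical implementation, not any particular library's API:

const Maybe = {
  Just: (value) => ({
    map: (fn) => Maybe.Just(fn(value)), // apply fn, re-wrap in a Just
    isJust: () => true,
    isNothing: () => false,
    value: () => value
  }),
  Nothing: {
    map: (fn) => Maybe.Nothing, // mapping over Nothing stays Nothing
    isJust: () => false,
    isNothing: () => true
  }
}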

This monad may not seem very useful in your JavaScript code, but it's worth seeing why it's so useful in Haskell. Haskell requires that every edge case in a program be handled, or it won't compile. When you make an HTTP request, you get back a Maybe type, because the request may fail and return nothing. If you didn't handle the failure case, the program wouldn't compile. This means it's virtually impossible to get runtime errors. Your code might still do the wrong thing, but it won't just blow up mid-execution the way it does in JavaScript.
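
To make that concrete in JavaScript terms, here's a hypothetical fetch wrapper that hands back a Maybe (using the sketch above) — though unlike Haskell, nothing forces the caller to handle the Nothing case:

const safeFetchJson = (url) =>
  fetch(url)
    .then((res) => res.ok ? res.json().then(Maybe.Just) : Maybe.Nothing)
    .catch(() => Maybe.Nothing)
// callers always get a Maybe, never a null:
// safeFetchJson('/api/user').then((maybeUser) => maybeUser.map(renderUser))
// (renderUser is a hypothetical rendering function)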

This is also a big selling point of Elm: its type system and compiler force your programs to run without runtime errors.

Thinking about your code in terms of monads and algebraic structures helps you define and understand your problems in a structured way. For example, Maybe is an interesting extension of the idea of railway-oriented programming for error handling. Monads are also well suited to handling asynchronous events.

There are many other interesting monads, and many words I don't fully understand myself. But to keep the terminology consistent, there are specifications like Fantasy Land and the Typeclassopedia that try to unify concepts from category theory so people can write interoperable code.

Referential transparency and immutability

Another idea underpinning all of category theory and the lambda calculus is referential transparency. It's very hard for mathematicians to analyze a program when two things that are identical aren't equal to each other. This is a pervasive issue in JavaScript:

{} == {} // false
[] == [] // false
[1, 2] == [1, 2] // false

Now imagine having to work in a world without referential transparency. With it, you don't need any proof to be sure that an empty array equals another empty array: all that matters is the value of the array, not the reference pointing to it. So functional programming languages resort to deep comparison to compare two values. Performance isn't great, though, so there are a few tricks to make such comparisons faster.

Before I continue, I need to make one thing clear: in functional programming, you can't mutate a variable in place; any "change" produces a new reference. Otherwise a function touching it might not be pure! Thus you can be sure that if two variables share the same reference, their values must be equal. And because we can't mutate directly, every change means copying values into new memory, which is a big performance hit and causes garbage-collection thrashing. The solution is structural sharing (persistent data structures).

A simple example of structural sharing is a linked list. Suppose you keep a reference only to the end of each list. When comparing two lists, you first check whether the end references are the same. This is a great shortcut, because if they're equal, the comparison ends there — the lists are identical! Otherwise, you recurse through the elements to check whether their values are equal. To efficiently add a value, rather than copying the whole list into new memory, you can simply add a link to a new node and keep the new reference. In other words, the new data structure structurally shares the previous one through a reference, while also persisting the previous structure.
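
Here's a tiny sketch of that idea in JavaScript, assuming a prepend-only cons list (all names here are hypothetical):

// each list node points at its tail; "adding" just creates one new node
const cons = (head, tail) => ({head, tail})
const shared = cons(2, cons(3, null)) // the list [2, 3]
const listA = cons(1, shared)         // [1, 2, 3]
const listB = cons(0, shared)         // [0, 2, 3]
// the tails are the very same object, so one reference check covers them:
listA.tail === listB.tail // => true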

The data structure used to mutate immutable data efficiently is called a hash array mapped trie (HAMT). This is what Immutable.js and Mori.js do. Clojurescript and Haskell already do this in the compiler, although I'm not sure whether it's implemented in Elm.

Using immutable data structures can give you performance gains and help keep you sane. React assumes props and state are immutable, so it can cheaply check whether the previous and next props and state are reference-equal, avoiding unnecessary re-renders. In other situations, immutable data is simply a handy guarantee that values won't change out from under you.
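
The idea behind that check is simple — this is a sketch of the concept, not React's actual source:

// with immutable data, references only differ when values differ,
// so a shallow reference comparison is enough to detect change
const propsChanged = (prev, next) =>
  Object.keys(next).some((key) => prev[key] !== next[key])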

Lazy evaluation

Lazy evaluation is a general term covering more specific concepts like thunks and generators. Lazy evaluation is exactly what it sounds like: don't do anything until you absolutely have to; procrastinate as long as possible. One analogy: suppose you have an infinite number of dishes to wash. Instead of putting them all in the sink and washing them at once, you can be lazy and wash one dish at a time, only when you need it.

Laziness is easier to understand in Haskell, so let's start there. First, we need to understand how programs evaluate. Most languages we use evaluate expressions from the inside out, like this:

square(3 + 4)
square(7) // evaluate the innermost expression
7 * 7
49

This is a perfectly sensible way to evaluate a program. But now let's look at outside-in evaluation.

square(3 + 4)
(3 + 4) * (3 + 4) // evaluate the outermost expression
7 * (3 + 4)
7 * 7
49

Clearly, outside-in evaluation looks naive: we computed 3 + 4 twice, so the program took five steps instead of three. That seems bad. Haskell, however, keeps a reference to every expression and shares those references as they're passed outward. That way, when 3 + 4 is evaluated the first time, the reference to the expression is updated to point to the value 7, and the repeated step is skipped.

square(3 + 4)
(3 + 4) * (3 + 4) // evaluate the outermost expression
7 * 7 // thanks to reference sharing, one step is skipped
49

In essence, lazy evaluation is outside-in evaluation plus reference sharing.
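
In JavaScript, those same two ingredients can be sketched with a memoizing thunk — a hypothetical helper that defers a computation and shares its result:

const lazy = (compute) => {
  let evaluated = false
  let value
  return () => {
    if (!evaluated) {
      value = compute() // runs at most once
      evaluated = true
    }
    return value // every later call shares this reference
  }
}
const sum = lazy(() => 3 + 4)
sum() // => 7 (computed now)
sum() // => 7 (shared, not recomputed)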

Haskell handles all of this for you internally, which means you can define things like infinite lists. For example, you can recursively define an infinite list of ones as a list whose first element is 1 and whose rest is the list itself.

Suppose we have a take(n, list) function that returns the first n elements of a list. With inside-out evaluation, taking from an infinite list would recurse forever, because the list would have to be evaluated first. But with outside-in evaluation, we compute values lazily, only as they're needed! (We'll sketch a take helper for JavaScript generators below.)

However, because JavaScript and most other languages evaluate inside-out, the only way to replicate this behavior is to represent lists as functions, as in the following example:

const makeOnes = () => ({next: () => 1})
const ones = makeOnes()
ones.next() // => 1
ones.next() // => 1

We've now effectively created a lazy, infinite list of ones from a single recursive-style definition. Now let's create an infinite list of all natural numbers:

const makeNumbers = () => {
  let n = 0
  return {
    next: () => {
      n += 1
      return n
    }
  }
}
const numbers = makeNumbers()
numbers.next() // 1
numbers.next() // 2
numbers.next() // 3

ES2015 actually standardizes this pattern: it's called a generator function.

function* numbers() {
  let n = 0
  while(true) {
    n += 1
    yield n
  }
}
Copy the code

Laziness can bring big performance benefits. Compare, for example, the operations per second of Lazy.js against Underscore and Lodash.

Here's a good example from Lazy.js of how this works. Suppose you have a huge array of people and you want to perform some transformations on it:

const results = _.chain(people)
  .pluck('lastName')
  .filter((name) => name.startsWith('Smith'))
  .take(5)
  .value()

The naive way to do this is to pluck all the last names, filter the entire array, and then take the first five. That's how Underscore.js and most such libraries work. With generators, however, lazy evaluation lets us compute one value at a time until we have five names that start with "Smith".

What's amazing about Haskell is that all of this is built into the language through outside-in evaluation and reference sharing. In JavaScript we can use Lazy.js, but if you wanted to build something like it yourself, each of the steps above would return a new generator. To get all the values out of a generator, you have to call .next() on it repeatedly. The chain method wraps the array in a generator, and when you call .value(), it calls next() repeatedly until there are no values left. And .take(5) makes sure you never do more computation than you need!
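
Here's a sketch of how such a lazy chain can be built from ES2015 generators — hypothetical helpers illustrating the idea, not Lazy.js's real internals (people is the array from the example above):

function* lazyMap(fn, iterable) {
  for (const x of iterable) yield fn(x)
}
function* lazyFilter(pred, iterable) {
  for (const x of iterable) if (pred(x)) yield x
}
function* lazyTake(n, iterable) {
  if (n <= 0) return
  let i = 0
  for (const x of iterable) {
    yield x
    if (++i >= n) return // stop pulling from upstream once we have n
  }
}
const lastNames = lazyMap((p) => p.lastName, people)
const smiths = lazyFilter((name) => name.startsWith('Smith'), lastNames)
const results = [...lazyTake(5, smiths)]
// each element flows through the whole pipeline one at a time,
// and everything halts as soon as five matches are found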

Now recall the theorem I mentioned earlier:

list.map(inc).map(isZero) // => [true, false, false]
list.map(compose(isZero, inc)) // => [true, false, false]

Lazy evaluation lets this kind of optimization happen for you under the hood.

Clojure patterns and features

I've talked a lot about Haskell, so now let me explain where Clojure fits into all this. Clojure is referentially transparent, has immutable data types, and won't let you arbitrarily mutate a variable except through special atom types. This makes comparisons and caching convenient, much as in Haskell: rather than scanning every element of a collection, values can be recorded in an associative map and looked up anywhere. Clojure doesn't have a strong static type system or a compiler as powerful as the Glasgow Haskell Compiler, and nil does exist in Clojure. That said, functional patterns are strongly encouraged, and it's hard to avoid them in Clojure.

Two things struck me about Clojure. First, everything in Clojure is EDN, a primitive data notation that is Clojure's version of JSON. Instead of objects and types, everything is just a primitive data structure used to represent whatever you want. For example, JavaScript has a native Date object, but what happens when you want to serialize a date to JSON? You need a custom serializer or deserializer. In Clojure, you would simply represent a date as an associative map with a timestamp and a time zone (unless you're interoperating with Java), and any date-formatting function would just expect that same data structure. So in Clojure, it's all about data: data transformation and data processing. Everything is data.
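
Translated into JavaScript terms, the idea looks something like this (the exact shape and values are hypothetical):

// a date is just plain data...
const date = {timestamp: 1465426800000, timezone: 'America/Los_Angeles'}
// ...and any formatting function simply expects that structure
const year = (d) => new Date(d.timestamp).getUTCFullYear()
year(date) // => 2016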

The other cool thing about Clojure is that code is data. Clojure is a Lisp, a list-processing language: essentially an interpreter for lists whose first element is a function and whose remaining elements are arguments — which is why people say there's a Lisp hiding inside every language. The really cool thing about Lisps is that they enable incredibly powerful macros. The macros most people know are limited to text-substitution macros, where some kind of string template generates code; there's a powerful JavaScript library for this called Sweet.js. But in Clojure, since code itself is a list, you can inspect code at compile time, transform it, and then evaluate it! This is handy for writing out anything repetitive, and it lets you create whatever syntax you want to express an idea. To do this in JavaScript, you'd need a tool like Babel and the JavaScript abstract syntax tree (AST), and you'd essentially have to write your own compiler. In Clojure, the AST is just a list!

One of Clojure's killer features is core.async, a library for handling asynchronous communication that uses macros in a beautifully elegant way. In the following example, we create a channel, and the go function is actually a macro:

(def echo-chan (chan))
(go (println (<! echo-chan)))
(>!! echo-chan "ketchup")
; prints ketchup

What's amazing here is that go interprets its argument as a list of code and generates asynchronous code nobody would ever want to write by hand. The <! symbol marks taking ("subscribing to") a value from a channel; go finds it and generates the code that does the work. Here's the nasty code we don't have to write or maintain:

user=> (macroexpand '(go (println (<! echo-chan))))
(let*[c__6888__auto__(clojure.core.async/chan 1)captured-bindings__6889__auto__(clojure.lang.Var/getThreadBindingFrame)](clojure.core.async.impl.dispatch/run(clojure.core/fn[](clojure.core/let[f__6890__auto__(clojure.core/fn state-machine__6712__auto__([](clojure.core.async.impl.ioc-macros/aset-all! (java.util.concurrent.atomic.AtomicReferenceArray. 8) 0 state-machine__6712__auto__ 1 1))([state_8650](clojure.core/let[old-frame__6713__auto__(clojure.lang.Var/getThreadBindingFrame)ret-value__6714__auto__(try(clojure.lang.Var/resetThreadBindingFrame(clojure.core.async.impl.ioc-macros/aget-object state_8650 3))(clojure.core/loop[](clojure.core/let[result__6715__auto__(clojure.core/case(clojure.core/int(clojure.core.async.impl.ioc-macros/aget-object state_8650 1))1(clojure.core/let[inst_8644 println inst_8645 echo-chan state_8650(clojure.core.async.impl.ioc-macros/aset-all! state_8650 7 inst_8644)](clojure.core.async.impl.ioc-macros/take! state_8650 2 inst_8645))2(clojure.core/let[inst_8644(clojure.core.async.impl.ioc-macros/aget-object state_8650 7)inst_8647(clojure.core.async.impl.ioc-macros/aget-object state_8650 2)inst_8648(inst_8644 inst_8647)](clojure.core.async.impl.ioc-macros/return-chan state_8650 inst_8648)))](if(clojure.core/identical? result__6715__auto__ :recur)(recur)result__6715__auto__)))(catch java.lang.Throwable ex__6716__auto__(clojure.core.async.impl.ioc-macros/aset-all! state_8650 clojure.core.async.impl.ioc-macros/CURRENT-EXCEPTION ex__6716__auto__)(clojure.core.async.impl.ioc-macros/process-exception state_8650):recur)(finally(clojure.lang.Var/resetThreadBindingFrame old-frame__6713__auto__)))](if(clojure.core/identical? ret-value__6714__auto__ :recur)(recur state_8650)ret-value__6714__auto__))))state__6891__auto__(clojure.core/->(f__6890__auto__)(clojure.core.async.impl.ioc-macros/aset-all! clojure.core.async.impl.ioc-macros/USER-START-IDX c__6888__auto__ clojure.core.async.impl.ioc-macros/BINDINGS-IDX captured-bindings__6889__auto__))](clojure.core.async.impl.ioc-macros/run-state-machine-wrapped state__6891__auto__))))c__6888__auto__)

Conclusion

That's pretty much everything I've learned about functional programming in recent months. I'd love for it to help others (especially in the JavaScript community) write better code and gain more experience!

As for the never-ending debate between Haskell and Clojure, I think it's hard to say which is better, because they're different. Haskell is functional programming at its roots; people who use Haskell might call themselves functional purists. Haskell is exacting, precise, bulletproof, and remarkably fast and reliable. Clojure is more malleable, abstract, and empowering. You can do anything in Clojure, because it runs on the JVM (and can do everything Java can). In Clojure you can build on everything Java has accumulated over more than a decade and dig into Java's algorithms and internals. Clojure also has a unique developer culture, with cool libraries like Overtone and Quil behind it.

In the JavaScript world, I look forward to things moving toward pure functions. I never want to see "this" again. And let's get in the habit of using const values rather than mutable var or let.

Two of my favorite JavaScript libraries are Ramda and Flyd. But Ramda isn't lazy and doesn't play well with Immutable.js. I'd love to see a library that brings all these concepts together: persistent, structurally shared, immutable data structures with currying, lazy evaluation, and a deep catalog of useful functions.

I'd also love to see these libraries speak a more consistent language. For example, the ES2015 promise API uses .then() where it could have used .map(), even though promises are essentially monads! So you can't use Ramda to process data inside native promises, because R.map won't work. I think the aptly named Fantasy Land specification is necessary, because it attempts to unify the language of these data structures. If all these libraries — Ramda, Lodash, Lazy.js, Immutable.js, even native promises — spoke it, we could reuse much more code. You could wrap native JavaScript arrays in Immutable.js lists without having to rewrite all your data-processing code in Ramda or Lazy.js.

Anyway, I hope you enjoyed this article. If anything was unclear, or you have ideas of your own, let me know at [email protected] — it will help me improve my writing, too.

Happy Hacking

Functional Programming for JavaScript People