JavaScript Logging: We Can Do Better!

Currently in the world of JavaScript these options are what we most commonly use to generate logs:

  • console.log
  • console.info
  • console.warn
  • console.error

These are actually pretty good in most modern browsers. Even if you go back to Internet Explorer 8, console.log and friends work, as long as you have the developer tools open.

Given that we have these logging utilities, what is the problem with using them? In local development these are just fine for helping you debug and speed up development. They can be used to quickly catch errors or see where you’re starting to go astray when using a library.

console.log and friends allow you to see what’s going on and leave notes for other developers in the future. This is okay for local development. However, what do you do once you move into production? Almost everyone removes console commands before code is served in production.

Without console commands in production, how do you get the same level of logging you’re used to with standard applications? When using a web server you get to see nearly every error that occurs: each 500 is logged to an error log file, for every error, for every user. Nothing like this really exists for JavaScript. By the nature of how web browsers work, we never see the errors that occur for the end user.

Here are the issues with JavaScript logging today:

  • Local development is the only way to see the errors.
  • Logs are distributed across many clients.
  • Errors usually lack local stack context.
  • We do not know when a user sees an error.

Given the problems listed above you may ask: “Why should I care?” We’ve gotten along for years without collecting JavaScript errors. Try not to follow this line of flawed reasoning. Despite spending years without tracking analytics about how people use our sites, we now view those analytics as invaluable. Once you start seeing your JavaScript errors at the same rate and volume as your server-side errors, you will view those logs as invaluable too. Most importantly, developers will finally be empowered to fix JavaScript errors proactively, the same way we fix server-side errors.

Imagine this scenario: you have an advanced search feature in your application. This search feature is twofold: it makes an AJAX call to fill out the search results, and it has a two-layer UI with a drop-down that shows the results. When a result is clicked, it opens a more detailed modal of those results.

In most cases this kind of interaction is JavaScript heavy. How do you know when the searches fail due to a scripting error, instead of a network drop on the client? What can you do to be proactive about issues occurring here?

We’ve released a logging framework, Canadarm, to make identifying and handling these kinds of situations easy. Now each time a script error occurs you’ll get to see it. As long as the client can connect to the Internet and execute JavaScript you’ll get to see what went wrong. A common issue you may not realize in local testing is a Unicode search error. This logger will tell you what error occurred as well as the language and encoding used to read your page.

Below are some topics that are likely to cross your mind. This post will cover each of them in detail.

  • What does Canadarm do?
  • How does Canadarm work?
  • Who has used Canadarm?
  • Has Canadarm helped solve any problems?
  • When can I use it?


What does Canadarm do?

Canadarm makes it easy to send logs to a remote server. This alone is nothing novel and isn’t all that impressive. It’s fairly easy to set up a try/catch around your code and send that error to a server via a GET request. The real advantage of Canadarm comes in what it does to catch the errors.

Canadarm has three ways to gather errors:

  • Automatically catch all window.onerror events (least useful due to lack of context)
  • Automatically catch all errors that occur when events fire (most useful because it “just works”)
  • Manually watch or attempt individual function calls

These modes allow you to write your code without having to worry about logging or catching errors yourself. Any global errors will be caught and, more specifically, all errors bound to events will be caught. The ability to catch errors related to events is the most useful feature of Canadarm.

Most errors that will occur on your web pages happen when a user performs some sort of action. Canadarm is able to provide you with context specific error messages by automatically hooking into and monitoring functions bound to events.
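Conceptually, the event wrapping can be thought of as a try/catch decorator around each handler. Here is a minimal sketch of the idea, not Canadarm’s actual implementation; logError and the context string are hypothetical stand-ins for the real reporting logic:

```javascript
var reported = [];

// Stand-in for whatever reporting Canadarm actually performs.
function logError(error, context) {
  reported.push(context + ': ' + error.message);
}

// Return a version of handler that logs any error it throws, then rethrows.
function wrapHandler(handler, context) {
  return function () {
    try {
      return handler.apply(this, arguments);
    } catch (e) {
      logError(e, context); // report with context before propagating
      throw e;
    }
  };
}

var wrapped = wrapHandler(function () {
  throw new Error('boom');
}, 'click #search-button');

var caught;
try { wrapped(); } catch (e) { caught = e; }
```

Because the wrapper rethrows, the page’s behavior is unchanged; the error is simply observed on its way out.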

With the Canadarm.watch and Canadarm.attempt functions, you have the power to individually monitor specific functions. Let’s say you have a function that gets called without an event being fired. You can call attempt on that function, which will immediately invoke it; if an error occurs, the error will be logged. With watch, you wrap a function once, and every time it later throws an error during execution, the error will be logged.

function fastMath() {
    var addedItems = 0, i;

    for (i = 0; i < arguments.length; i++) {
        addedItems += argument[i]; // This typo will throw an error when called.
    }

    return addedItems;
}

// Immediately attempt to execute fastMath with the arguments 1, 2, 3.
Canadarm.attempt(fastMath, 1, 2, 3);

// Override fastMath with the watched version.
fastMath = Canadarm.watch(fastMath);

If you don’t want Canadarm to automatically log global errors and/or event-based errors, you can opt out of these features. With watch and attempt you can write your JavaScript how you want to and not worry about what is going on within Canadarm.

Finally, you get out of Canadarm what you really wanted from console functions. You can log specific error messages at the point you want to via these logging commands:

  • Canadarm.debug(msg, error)
  • Canadarm.info(msg, error)
  • Canadarm.warn(msg, error)
  • Canadarm.error(msg, error)
  • Canadarm.fatal(msg, error)

Optionally, you can provide two more arguments after msg and error: data, followed by options. You can see the usage of these arguments in the Canadarm documentation. data is the most useful here: it allows you to pass an extra object whose values are handed to the appenders. The default appender included in Canadarm will log all these values for you as key-value pairs.

function addPositives() {
    var value;

    try {
        var addedItems = 0, i;

        // Add all values together.
        for (i = 0; i < arguments.length; i++) {
            value = arguments[i];

            // If the value is negative we throw an error.
            if (value < 0) {
                throw Error();
            }

            addedItems += value;
        }

        return addedItems;
    } catch (e) {
        // This gives a very specific error, likely relating to business logic of a
        // case that should not occur.
        Canadarm.error('A negative value ' + value + ' was given.', e);
        return undefined;
    }
}

// In the console (if the console handler is enabled) you will see the error message.
addPositives(1, -2, 3);

To find out more about how to configure and use Canadarm, check out its documentation. It’s pretty easy, though: you only need to include the Canadarm code and then configure the logger. As shown in the Canadarm readme, you can do the following to get a working local logger:

Canadarm.init({
  onError: true,
  wrapEvents: true,
  logLevel: Canadarm.level.DEBUG,
  appenders: [
    Canadarm.Appender.standardLogAppender
  ],
  handlers: [
    Canadarm.Handler.consoleLogHandler,
    Canadarm.Handler.beaconLogHandler('http://example.com/beacon_url')
  ]
});

Now you’ll see all logged errors in your console with all the information the standardLogAppender provides. Obviously you want more than local logs. Next you’ll see how our teams have used this logger.

How does this work?

Canadarm is fairly simple. The logger catches an error and then sends that error to a central server. Under the covers it uses Appenders and Handlers as the mechanisms to achieve this result.

Appenders

An appender is a way to process an error or log event that occurs. An appender has this signature: appender(level, exception, message, data).

  • level – Level of the log, one of DEBUG, INFO, WARN, ERROR, FATAL
  • exception – An actual JavaScript Error object.
  • message – Text message of the logged error.
  • data – Extra information to provide to the appender, usually this is not used.

An appender must return an object containing simple data types: single key/value pairs, usually strings. The return value of an appender is then passed to a handler.
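As a sketch, a custom appender that records the page URL and a timestamp might look like the following. The property names here are illustrative choices, not keys Canadarm requires:

```javascript
// Illustrative custom appender: returns simple key/value pairs for handlers.
// The exact keys are up to you; these names are hypothetical examples.
function pageInfoAppender(level, exception, message, data) {
  return {
    level: level,
    message: message,
    errorName: exception ? exception.name : '-',
    // window may be absent outside a browser, so guard the lookup.
    pageUrl: (typeof window !== 'undefined') ? window.location.href : '-',
    logTime: new Date().toISOString()
  };
}

var entry = pageInfoAppender('ERROR', new TypeError('x is undefined'), 'Search failed', {});
```

Every value is a plain string, so any handler can serialize the object without special cases.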

Handlers

Handlers take action on the objects produced by the appenders. A handler’s job is to send the results of the appenders somewhere. By default there are two handlers that come out of the box with Canadarm: a console handler that logs all errors to the console and a beacon handler that sends all errors to a given URL end point.

Appenders & Handlers

Appenders and handlers work together to create your logs. Here’s the breakdown of what happens during an error or logging event:

  1. Error or log event happens
  2. Every appender is iterated over in order (duplicate keys will be replaced with the value of a later appender)
  3. A final object is created from the output of all appenders
  4. The final object is passed to every handler
  5. Each handler usually sends this information somewhere (e.g. console or remote server)
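The steps above can be sketched in a few lines. This is a simplified model of the pipeline, not Canadarm’s actual source:

```javascript
// Simplified model of the appender/handler pipeline described above.
function processLogEvent(appenders, handlers, level, exception, message, data) {
  var finalObject = {};

  // Run every appender in order; later keys overwrite earlier duplicates.
  appenders.forEach(function (appender) {
    var partial = appender(level, exception, message, data);
    Object.keys(partial).forEach(function (key) {
      finalObject[key] = partial[key];
    });
  });

  // Hand the merged object to every handler.
  handlers.forEach(function (handler) {
    handler(finalObject);
  });

  return finalObject;
}

var appenders = [
  function (level) { return { level: level, source: 'first' }; },
  function () { return { source: 'second' }; } // overwrites 'source'
];
var seen = [];
var result = processLogEvent(appenders, [function (obj) { seen.push(obj); }],
                             'WARN', null, 'example', {});
```

Note how the duplicate `source` key ends up with the later appender’s value, matching step 2 above.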

That’s it for how the logger works on the client. The real power comes when you combine this log gathering with the beacon handler, which sends the gathered logs to a server. The server receiving these logs should write them out to a file that is then read into a logging system. We currently use a simple Apache server and treat its access logs as our JavaScript error logs. We then send the logs to a log aggregation tool, Splunk.

Who has used it?

We have a few applications that have begun using Canadarm.

  • HealtheLife – website for patients to manage their health
  • Internal Sites – a few sites we use internally for a few things (e.g. code review, cheat sheets etc.)

HealtheLife

HealtheLife was the first client-facing application to go into production using Canadarm. It has over 3 million users, and at any given moment there are usually at least one thousand concurrent users. These metrics matter for two reasons: first, they show that Canadarm works at scale without causing issues for the application; second, we have been able to see trends in JavaScript errors occurring in this application.

For those who thought “why should I care?” about JavaScript logs, this go-live was an interesting story. Within the first 20 minutes we noticed errors that occurred on every page load. Specifically, the error was a reference to $ (jQuery) before it was defined. Since this was in an analytics tracking snippet at the end of a script tag, it did not impact end users, beyond eating processing time to handle an error on every page.

However, it did mean that analytics were not being tracked the way the application intended. In fact, without this logger in place the application would have happily continued along with no indication certain actions were never taking place. Since the analytics tool did not report the expected user interactions, it appeared as if features of the application were not getting used, or worse, that the analytics were faulty.

The actual messages in the errors for this application are interesting. Since HealtheLife is used in many countries with many different locales, it supports various languages. Because of the various supported languages, and users being able to run their browsers in any locale they want, we had a few interesting log messages. Specifically, we have had logs come across in English, Spanish, German, and more. It was eye-opening to learn that errors are actually translated within a browser.

Internal Sites

Currently a few internal sites are using Canadarm for local development and integration environments. Most interesting so far has been looking at the logs and seeing who has been copy-pasting code around.

Interestingly enough I found some random logs on our Splunk dashboard in dev.

Which led me to GitHub, specifically a GitHub Pages site.

Seeing the application and where the logs said the application lived, I was able to find the source code. The code then led me to the owner of the application. At that point I was able to contact the owner and get the issue fixed. Finding another application’s errors and letting the owner know about it is an interesting experience. The whole interaction was cool because it was not a use case we had considered when building Canadarm.

Has it helped solve any problems?

As mentioned before, this has helped to point out two issues: one for HealtheLife and another for an internal application. Pointing out issues is not enough to fix them, though. Also, Canadarm does not solve problems on its own. You get the most out of logging when you use a tool to aggregate those logs. We’ve been using Splunk to aggregate ours.

Combined with the searching and reporting of Splunk we’ve been able to leverage the logs generated by Canadarm to see a few common trends in our code. Canadarm has helped us to see a few common problems we have:

  1. Locally we produce a lot of JavaScript errors
  2. We often introduce new errors when we write visualizations
  3. Referencing variables before they exist
  4. New frameworks are hard to get a handle on

Using Canadarm to generate logs doesn’t solve problems on its own. It’s when we combine those logs with the reporting capabilities of Splunk that we can see trends and identify areas we need to improve upon in our development.

On our teams it has shown that we need to get better at defining our APIs for data visualizations. I’ve been able to see many errors from our developers when they first try to update or modify any of our visualizations. Without this logging in place it would not be so obvious that our current API is not working well for others. This gives us empirical data that developers are having issues using our software. Without this data, we’d have to rely on complaints and hope people reported the issues they encountered when using our code.

As a more concrete example, one of our teams recently began using React. By analyzing the logs it’s easy to see that we have had some issues getting a handle on how React works. Over a few weeks of React work we could clearly see a number of errors. This shows we need more training on how to properly use React. Further, it shows that if we plan to adopt React as our frontend framework, we need to put together a “gotchas” or “tips and tricks” guide for getting started.

While the logger is not directly “solving” problems, it is illuminating issues in local development as well as in production. By shining a light on these issues we are able to move forward and solve them ourselves. Sometimes the problems may be solved by additional training; other times by code changes. Most importantly, Canadarm is opening our eyes to the kinds of issues we’ve had for years with JavaScript. We can no longer ignore these issues, because we now have solid empirical evidence showing us our problems.

When can I use it?

After reading this far, hopefully you’re thinking: “This all sounds great! When and how can I get started?” It’s pretty easy:

  1. Include Canadarm in your JavaScript (as early as possible)
  2. Configure Canadarm
  3. Have a server to handle logs
  4. Have a reporting tool on top of your logs

Standalone applications

If you run a public-facing Internet application or a small startup, Canadarm can still be a great investment. Ideally you do not need to worry about steps 3 and 4 above: simple log management solutions such as Loggly should be enough for your needs.

Here’s what I did to get a quick setup working:

  1. Setup a free account
  2. Fill out info in the pop up
  3. Go to https://YOUR_SUB_DOMAIN_HERE.loggly.com/sources/setup/https
  4. In the “step 2” section you should have a URL to copy that looks something like:

http://logs-01.loggly.com/inputs/WWWWWWW-55555-5555555-55WW55-WWWWW55555/tag/http/

  5. Configure Canadarm with this end point to see the logs:

Canadarm.init({
  onError: true,
  wrapEvents: true,
  appenders: [
    Canadarm.Appender.standardLogAppender
  ],
  handlers: [
    Canadarm.Handler.beaconLogHandler('http://logs-01.loggly.com/inputs/WWWWWWW-55555-5555555-55WW55-WWWWW55555/tag/http/'),
    Canadarm.Handler.consoleLogHandler
  ]
});

After this setup it was pretty easy to get some graphs going. For example you can easily see what errors occurred by message in this pie chart:

Even easier is getting to view the raw output of a given event message:

Loggly is a great tool for large and small projects alike. A big bonus for anyone starting to use Loggly for their JavaScript logs is that they can begin to use Loggly for their other logs as well (if they are not already). While Loggly may not be ideal for every logging situation, it is really handy when you do not have the resources, money, or time to set up your own log aggregation tool.

Summary

From desktop, to mobile, to embedded devices, web browsers can be seen everywhere. With the help of Canadarm we can now see what exactly is happening within our applications. An entire world of client side errors and issues can now be properly managed and acted upon. Combine these logs with an aggregation tool such as Splunk or Loggly and you have enabled operational intelligence.

The next time a user logs an issue for a JavaScript error you can respond by telling them you’ve seen the error and are already working to correct it. Gone are the days of reactive fixes. Now you can worry about proactive solutions.

Managing 30,000 Logging Events Per Day With Splunk

Our team works on a patient-facing web application with a thousand live clients and 2,315,000+ users. On average, this traffic results in more than 40,000 visits and 300,000 page views daily, generating about 30,000 logging events. A considerable portion of these events are informational or warning level, used to aid proactive monitoring or identify potential issues due to clients’ misconfiguration.

Before Splunk

To handle this large volume of logging, our team created a rotational support role to manually monitor the logs at regular intervals daily. We built a custom log viewer that would aggregate application logs, and the engineer in the support role was expected to watch this tool manually to identify issues. Although we were able to identify problems such as bad client builds or service errors, this process was neither efficient nor accurate in quickly determining end-user impact. Since there was no way to tag previously identified and resolved issues, oftentimes newer support engineers lacked the knowledge to react to a problem. This led to unnecessary escalation and engagement of next-tier support.

Below: Our old log aggregator used to identify the occurrences (2) of a log with the stack trace.

Splunk Round 1

Once we migrated to Splunk we were very excited about the capabilities it offered, especially around searching and data visualization. In addition to searching logs more effectively, we were able to extract meaningful information from our logs unlike before. Splunk gave us the ability to identify client and end user impact down to the user id across all events in our logs [see image below]. This helped us gain more insight into our problems and trends in terms of impact to users. For a particular problem, we were able to quickly conclude whether all clients were affected, whether clients affected were over a virtual network only, and if the issue was isolated to a specific user. This information gave us the ability to determine the impact of issues coming into our logs, including the area of the site being impacted and frequency.

Below: Once we extracted meaningful fields in our logs, we could identify impact. In this case, an issue is spread across 9 orgs and 28 users.

Although moving to Splunk cleared some of the hurdles that made log monitoring difficult, monitoring logs for issues was still not an easy job. It was possible to overlook issues since there was no effective way of studying trends. Initially, we created dashboards which helped identify organizations having problems. This was slightly useful, but it failed to provide the more important graphical representation of the different types of issues occurring for a particular client, or for all clients, at a given time.

Below: Reports like these weren’t very helpful. Clients with more users tend to have more errors, so this trend doesn’t necessarily indicate a client is experiencing downtime.

Splunk Round 2

It didn’t take us long to realize that we had to get a better handle on our logs to stay on top of increasing traffic caused by a growing user base. Although we were able to identify frequently occurring errors, we still needed a more effective way to identify known issues, service issues, configuration issues and application issues. In order to do that, we needed something more meaningful than a stack trace to track issues. We needed to tag events, and to do that, we turned to the eventtypes feature offered by Splunk.

Eventtypes are applied at search time and allow you to tag search results with events you define. Because they are applied at search time, we were able to add new event types and have them applied historically throughout our logs. This also gave us the ability to tweak our event types to add more known issues as we continued identifying them. Once we figured out how to take advantage of eventtypes, we came up with a query that created a stacked timechart of eventtypes, where each eventtype represented a known issue. To reach this improved level of production monitoring, the following had to be done:

  1. Create an eventtype with least priority that catches all problems and label it “Unknown Issue.”
  2. Go through “Unknown Issues” and create prioritized eventtypes that describe the problem in plain English. Once an issue is logged in our bug tracking system, tag the eventtype with that id for easy tracking.
  3. Repeat daily.

Below: Eventtypes give us the ability to see known problems that happen over time. We can even see known problems broken down by client.

Once we had our frequently occurring problems categorized, we were able to break them down even further. We could identify problems caused by configuration in our application layer, problems that required escalation, and problems where client-side contacts needed to be engaged.

Below: We now have the ability to track impact to users from clients not taking a service package [left], or from improper Service Configuration [right].

Alerting

We’ve also started taking advantage of Splunk’s alerting. With its powerful searching abilities, we have scheduled searches that trigger an alert when a particular condition is met. For example, when a client has misconfigured certain credentials that cause authentication errors all over the site, we can engage support immediately to get it resolved.

What’s Next?

Although we have a better understanding of our logs now, it can get even better. We plan on continually categorizing our logs so that monitoring our system becomes really simple for everyone. Once all of our known issues are categorized, we wish to have a scheduled search that can identify anomalies in the logs. This would be highly beneficial to find out if a release introduces issues.

Since our site is dependent on multiple services, most of the service problems are resolved by escalated support. We are currently working on identifying problems with known resolutions along with the people that need to be contacted to perform the resolution steps. Eventually we would like to send alerts/emails from Splunk to Cerner’s general support directly for these issues.

We also plan on integrating Jira into Splunk with the help of the Splunk Jira app. This will give us the ability to not only track issues in our logs, but also view their current status (investigation, assigned, fixed, resolved). This closes the loop on finding new issues, tracking their impact, and following their resolution through to the end. Splunk has been extremely exciting to work with and has been an invaluable asset to our team. We’d love to continue the conversation on how we can improve our usage of Splunk and how others are using it as well.

Cerner and the Apache Software Foundation

At the beginning of this year, we announced that Cerner became a bronze-level sponsor of the non-profit Apache Software Foundation (ASF). Many of the open source projects we use and contribute to are under the ASF umbrella, so supporting the mission and work of the ASF is important to us.

We’re happy to announce that Cerner has now increased our sponsorship of the ASF to become a silver-level sponsor. Open source continues to play an integral role in both our architecture and engineering culture. We’ve blogged and spoken at conferences about how several ASF projects are core foundational components in our architecture and several of our tech talks have focused on ASF projects.

Further increasing our sponsorship of the ASF reaffirms our continued support for an organization that provides homes for numerous open source projects that are important not only to us, but the larger development community.

Closures & Currying in JavaScript

Preface

I have been asked many times what closures are and how they work. There are many resources available to learn this concept, but they are not always clear to everyone. That led me to put together my own approach to explaining it.

I will supply code samples. //> denotes an output or return.

Before discussing closures, it is important to review how functions work in JavaScript.

Introduction to functions

If a function does not have a return statement, it will implicitly return undefined, which brings us to the simplest functions.

Noop

Noop typically stands for no operation; it takes any parameters, does nothing with them, and returns undefined.

function noop() {}
noop("cat"); //> undefined

Identity

The identity function takes in a value and returns it.

function identity(value) {
  return value;
}

identity("cat"); //> "cat"
identity({a: "dog"}); //> Object {a: "dog"}

The important thing to note here is that the variable (value) passed in is bound to that function’s scope. This means that it is available to everything inside the function and is unavailable outside of it. There is an exception: objects are passed by reference, which will prove useful when working with closures and currying.
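A quick illustration of that exception: mutations to a passed-in object are visible to the caller, even though the parameter variable itself is scoped to the function.

```javascript
function rename(animal) {
  // animal is a reference to the same object the caller holds,
  // so this mutation is visible outside the function.
  animal.a = "wolf";
}

var pet = { a: "dog" };
rename(pet);
pet.a; //> "wolf"
```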

Functions that evaluate to functions

Functions are first-class citizens in JavaScript, which means that they are objects. Since functions are objects, they can be passed as parameters, have methods bound to them, and even be returned from other functions.

function foo() {
  return function () {
    return true;
  }
}

foo()(); //> true

This is a function that returns a function which returns true.

Functions take arguments, and those arguments can be values or reference types, such as functions. When you return a function, you return a reference to that exact function object, not a copy (even if it was only just created in order to be returned).

Closures

Creating a closure is nothing more than accessing a variable outside of a function’s scope (using a variable that is neither bound on invocation nor defined in the function body).

To elaborate, the parent function’s variables are accessible to the inner function. If the inner function uses its parent’s (or parent’s parent’s, and so on) variables, they will persist in memory as long as the accessing function(s) are still referenceable. In JavaScript, referenceable variables are not garbage collected.

Let’s review the identity function:

function identity(a) { return a; }

The value, a, is bound inside of the function and is unavailable outside of it; there is no closure here. For a closure to be present, there would need to be a function within this function that would access the variable a.
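For instance, wrapping identity so an inner function reads a does create a closure:

```javascript
// rememberedIdentity closes over a: the returned inner function keeps a
// alive after the outer function has returned.
function rememberedIdentity(a) {
  return function () {
    return a; // a is accessed from the enclosing scope - a closure
  };
}

var getCat = rememberedIdentity("cat");
getCat(); //> "cat"
```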

Why is this important?

  • Closures provide a way to associate data with a method that operates on that data.
  • They enable private variables in a global world.
  • Many patterns, including the fairly popular module pattern, rely on closures to work correctly.

Due to these strengths, and many more, closures are used everywhere. Many popular libraries utilize them internally.

Let’s take a look at an example of closure in action:

function foo(x) {
  function bar(y) {
    console.log(x + y);
  }

  bar(2);
}

foo(2); // will log 4 to the console

The outer function (foo) takes a variable (x), which is bound to that function when invoked. When the internal function (bar) is invoked, x (2) and y (2) are added together and then logged to the console as 4. bar is able to access foo’s x variable because bar is created within foo’s scope.

The takeaway here is that bar can access foo’s variables because it was created within foo’s scope. A function can access variables in its scope and up the chain to the global scope. It cannot access other function’s scopes that are declared within it or parallel to it.
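A small sketch of that scope chain: the inner function can read its own scope, its parent’s, and the global scope, while a sibling function cannot see variables declared inside another function.

```javascript
var globalValue = "global";

function outer() {
  var outerValue = "outer";

  function inner() {
    // inner can see its own scope, outer's scope, and the global scope.
    return [globalValue, outerValue].join(" ");
  }

  return inner();
}

function sibling() {
  // outerValue is not visible here; typeof avoids a ReferenceError.
  return typeof outerValue;
}

outer();   //> "global outer"
sibling(); //> "undefined"
```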

Note that a function inside of a function doesn’t have to reference variables outside of its own scope. Recall the earlier example of a function which returned a function that evaluated to true:

function foo(x) {
  // does something with x or not
  return function () {
    return true;
  }
}

foo(7)(); //> true

No matter what is passed to foo, a function that evaluates to true is returned. A closure only exists when a function accesses a variable(s) outside of its immediate scope.

This leads to an important implication of closures: they enable you to define a data set once. We’re talking about private variables here.

Without closures, you recreate the data per function call if you want to keep it private.

function foo() {
  var private = [0, 1, 2]; // Imaginary large data set - instantiated per invocation

  console.log(private);
}

foo(); //> [0, 1, 2]

We can do better! With a closure, we can save it to a variable that is private, but only instantiated once.

var bar = (function () {
  var private = [0, 1, 2]; // Same large imaginary data set - only instantiated once

  // As long as this function exists, it has a reference to the private variable
  return function () {
    console.log(private);
  }
}());

bar(); //> [0, 1, 2]

By utilizing closure here, our big imaginary data set only has to be created once. Given the way garbage collection (automatic memory freeing) works in JavaScript, the existence of the internal function (which is returned and set to the variable bar) keeps the private variable from being freed and thus available for subsequent calls. This is really advantageous when you consider large data sets that may be created via Ajax requests which have to go over the network.

Currying

Currying is the process of transforming a function with many arguments into the same function with fewer arguments.

That sounds cool, but why would I care about that?

  • Currying can help you make higher order factories.
  • Currying can help you avoid continuously passing the same variables.
  • Currying can memoize various things, including state.

Let’s pretend that we have a function (curry) defined and set onto the function prototype which turns a function into a curried version of itself. Please note that this is not a built-in feature of JavaScript.

function msg(msg1, msg2) {
  return msg1 + ' ' + msg2 + '.';
}

var hello = msg.curry('Hello,');

console.log(hello('Sarah Connor')); // Hello, Sarah Connor.
console.log(msg('Goodbye,', 'Sarah Connor')); // Goodbye, Sarah Connor.

By currying the msg function so the first variable is cached as “Hello,”, we can call a simpler function, hello, that only requires one variable to be passed. Doesn’t this sound similar to what a closure might be used for?
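For completeness, here is one possible sketch of such a curry helper. It is not built into JavaScript, and extending Function.prototype is generally discouraged; it is done here only to match the msg.curry('Hello,') usage shown above:

```javascript
// One possible sketch of the curry helper used in the example above.
// It caches the arguments given to curry() and prepends them to any
// arguments supplied later.
Function.prototype.curry = function () {
  var fn = this;
  var cached = Array.prototype.slice.call(arguments);

  return function () {
    var rest = Array.prototype.slice.call(arguments);
    return fn.apply(this, cached.concat(rest));
  };
};

function msg(msg1, msg2) {
  return msg1 + ' ' + msg2 + '.';
}

var hello = msg.curry('Hello,');
hello('Sarah Connor'); //> "Hello, Sarah Connor."
```

Notice that the cached arguments live on in a closure, which is exactly the mechanism discussed earlier.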

In the discussion of functional programming concepts, there is often a sense of resistance.

The thing is, you’ve probably already been functionally programming all along. If you use jQuery, you certainly already do.

$("some-selector").each(function () {
  $(this).fadeOut();
  // other stuff to justify the each
});

Another place you may have seen this is utilizing the map function for arrays.

var myArray = [0, 1, 2];
console.log(myArray.map(function (val) {
  return val * 2;
}));

//> [0, 2, 4]

Conclusion

We’ve seen some examples of closures and how they can be useful. We’ve seen what currying is and more importantly that you’ve likely already been functionally programming even if you didn’t realize it. There is a lot more to learn with closures and currying as well as functional programming.

I ask you to:

  1. Work with closures and get the hang of them.
  2. Give currying a shot.
  3. Embrace functional programming as an additional tool that you can utilize to enhance your programs and development workflow.

Additional readings and inspirations

Bonus

Check out how you can utilize closure and currying to manage state throughout a stateful function:

function setFoo(state) {
  if (state === "a") { // Specific state
    return function () {
      console.log("State a for the win!");
    };
  } else if (state) { // Default state
    return function () {
      console.log("Default state");
    };
  }
  // Empty function since no state is desired. This avoids invocation errors.
  return function () {};
}

var foo = setFoo("a"); // Set foo to the specific state (a)
foo(); //> "State a for the win!"

foo = setFoo(true); // Set foo to its default state
foo(); //> "Default state"

foo = setFoo(); // Set foo to not do anything
foo(); //> undefined

Bonus 2

Check out how closures and currying can be used to create higher order functions that build methods on the fly: http://jsfiddle.net/GneatGeek/A9WRb/