OpenTable Tech UK Blog

The technology blog for OpenTable UK.

OpenComponents - microservices in the front-end world

Many engineers work every day on opentable.com from our offices in Europe, America, and Asia, pushing changes to production multiple times a day. This is usually very hard to achieve; in fact, it took us years to get to this point. I described in a previous article how we dismantled our monolith in favour of a Microsites architecture. Since the publication of that blog post we have been working on something I believe to be quite unique, called OpenComponents.

Another front-end framework?

OpenComponents is a system to facilitate code sharing, reduce dependencies, and make it easy to approach new features and experiments across the back-end and the front-end. To achieve this, it is based on the concept of using services as interfaces, enabling pages to render partial content that is located, executed, and deployed independently.

OpenComponents is not another SPA JS framework; it is a set of conventions, patterns and tools to develop and quickly deploy fragments of front-end. From this perspective, it plays nicely with any existing front-end or back-end architecture and framework. Its purpose is to serve as a delivery mechanism for a more modularised front-end.

OC has been in production at OpenTable for more than a year and is fully open-sourced.

Overview

OpenComponents involves two parts:

  • The consumers are web pages that need fragments of HTML for rendering partial contents. Sometimes they need some content during server-side rendering, sometimes when executing code in the browser.
  • The components are small units of isomorphic code mainly consisting of HTML, Javascript and CSS. They can optionally contain some logic, allowing a server-side Node.js closure to compose a model that is used to render the view. When rendered they are pieces of HTML, ready to be injected into any web page.

The framework consists of three parts:

  • The cli allows developers to create, develop, test, and publish components.
  • The library is where the components are stored after the publishing. When components depend on static resources (such as images, CSS files, etc.) these are stored, during packaging and publishing, in a publicly-exposed part of the library that serves as a CDN.
  • The registry is a REST API that is used to consume components. It is the entity that handles the traffic between the library and the consumers.

In the following example, you can see what a web page looks like when it includes both a server-side-rendered component (header) and a client-side, still-unrendered component (advert):

<!DOCTYPE html>
  ...
  <oc-component href="//oc-registry.com/header/1.X.X" data-rendered="true">
    <a href="/">
      <img src="//cdn.com/oc/header/1.2.3/img/logo.png" />
    </a>
  </oc-component>
  ...
  <p>page content</p>
  <oc-component href="//oc-registry.com/advert/~1.3.5/?type=bottom">
  </oc-component>
  ...
  <script src="//oc-registry.com/oc-client/client.js"></script>

Getting started

The only prerequisite for creating a component is Node.js:

$ npm install -g oc
$ mkdir components && cd components
$ oc init my-component

Components are folders containing the following files:

  • package.json – A standard Node.js package.json. An “oc” property contains some additional configuration.
  • view.html – The view containing the markup. Handlebars and Jade view engines are currently supported. It can contain some CSS under a <style> tag and client-side Javascript under a <script> tag.
  • server.js (optional) – If the component has some logic, including consuming services, this is the entity that produces the view-model used to compile the view.
  • static files (optional) – Images, Javascript, and other files that will be referenced in the HTML markup.
  • Any other files that are useful for development, such as tests, docs, etc.
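For illustration, here is roughly what the package.json of a freshly scaffolded component might look like; treat this as a sketch, as the exact schema of the “oc” property (and fields such as static) can vary between OC versions:

{
  "name": "my-component",
  "version": "1.0.0",
  "oc": {
    "files": {
      "data": "server.js",
      "template": {
        "src": "view.html",
        "type": "handlebars"
      },
      "static": ["img"]
    }
  }
}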

Editing, debugging, testing

To start a local test registry using a components’ folder as a library with a watcher:

$ oc dev . 3030

To see what the component looks like when consumed:

$ oc preview http://localhost:3030/hello-world

As soon as you make changes to the component, you can refresh this page and see how it looks. This is an example of a component that handles some minimal logic:

<!-- view.html -->
<div>Hello {{ name }}</div>
// server.js
module.exports.data = function(context, callback){
  callback(null, {
    name: context.params.name || 'John Doe'
  });
};

To test this component, we can curl http://localhost:3030/my-component/?name=Jack.

Publishing to a registry

You will need an online registry connected to a library. A component with the same name and version cannot already exist on that registry.

# just once we create a link between the current folder and a registry endpoint
$ oc registry add http://my-components-registry.mydomain.com

# then, ship it
$ oc publish my-component/

Now, it should be available at http://my-components-registry.mydomain.com/my-component.

Consuming components

From a consumer’s perspective, a component is an HTML fragment. You can render components just on the client-side, just on the server-side, or use client-side rendering as a failover strategy for when server-side rendering fails (for example because the registry is not responding quickly or is down).

You don’t need Node.js to consume components on the server-side. The registry can provide rendered components so that you can consume them using any tech stack.

When published, components are immutable and semantically versioned. The registry allows consumers to get any version of a component: the latest patch, or minor version, etc. For instance, http://registry.com/component serves the latest version, and http://registry.com/component/^1.2.5 serves the most recent 1.x version that is at least 1.2.5.

Client-side rendering

For client-side rendering to work, the components’ registry has to be publicly available.

<!DOCTYPE html>
  ...
  <oc-component href="//my-components-registry.mydomain.com/hello-world/1.X.X"></oc-component>
  ...
  <script src="//my-components-registry.mydomain.com/oc-client/client.js"></script>

Server-side rendering

You can get rendered components via the registry REST API.

curl http://my-components-registry.mydomain.com/hello-world

{
  "href": "https://my-components-registry.mydomain.com/hello-world",
  "version": "1.0.0",
  "requestVersion": "",
  "html": "<oc-component href=\"https://my-components-registry.mydomain.com/hello-world\" data-hash=\"cad2a9671257d5033d2abfd739b1660993021d02\" data-name=\"hello-world\" data-rendered=\"true\" data-version=\"1.0.13\">Hello John doe!</oc-component>",
  "type": "oc-component",
  "renderMode": "rendered"
}

However, to improve caching and response size, when rendering in the browser, or when using the Node.js client or any language capable of executing server-side Javascript, the request will look more like this:

 curl http://my-components-registry.mydomain.com/hello-world/~1.0.0 -H Accept:application/vnd.oc.unrendered+json

{
  "href": "https://my-components-registry.mydomain.com/hello-world/~1.0.0",
  "name": "hello-world",
  "version": "1.0.0",
  "requestVersion": "~1.0.0",
  "data": {
    "name": "John doe"
  },
  "template": {
    "src": "https://s3.amazonaws.com/your-s3-bucket/components/hello-world/1.0.0/template.js",
    "type": "handlebars",
    "key": "cad2a9671257d5033d2abfd739b1660993021d02"
  },
  "type": "oc-component",
  "renderMode": "unrendered"
}

By making a request like this, it is possible to get the compiled view’s URL plus the view-model as data. This is useful for caching the compiled view (taking advantage of components’ immutability).
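As a rough sketch of that idea (this is not the official OC client; it assumes a fetch implementation is available), a consumer could cache the compiled template by its key and fetch only the view-model on each render:

const templateCache = new Map();

async function getCompiledTemplate(template) {
  if (!templateCache.has(template.key)) {
    // Templates are immutable, so once fetched they never need invalidating.
    const src = await fetch(template.src).then(function (res) { return res.text(); });
    templateCache.set(template.key, src);
  }
  return templateCache.get(template.key);
}

async function getRenderable(componentHref) {
  const info = await fetch(componentHref, {
    headers: { Accept: 'application/vnd.oc.unrendered+json' }
  }).then(function (res) { return res.json(); });

  const compiledView = await getCompiledTemplate(info.template);

  // Evaluate compiledView with the view engine named in info.template.type
  // and apply it to info.data to produce the final HTML fragment.
  return { compiledView, data: info.data };
}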

Setting up a registry

The registry is a Node.js Express app that serves the components. It just needs an S3 bucket to be used as the library.

First, create a dir and install OC:

$ mkdir oc-registry && cd oc-registry
$ npm init
$ npm install oc --save
$ touch index.js

This is what index.js will look like:

var oc = require('oc');

var configuration = {
  verbosity: 0,
  baseUrl: 'https://my-components-registry.mydomain.com/',
  port: 3000,
  tempDir: './temp/',
  refreshInterval: 600,
  pollingInterval: 5,
  s3: {
    key: 'your-s3-key',
    secret: 'your-s3-secret',
    bucket: 'your-s3-bucket',
    region: 'your-s3-region',
    path: '//s3.amazonaws.com/your-s3-bucket/',
    componentsDir: 'components'
  },
  env: { name: 'production' }
};

var registry = new oc.Registry(configuration);

registry.start(function(err, app){
  if(err){
    console.log('Registry not started: ', err);
    process.exit(1);
  }
});

Conclusions

After more than a year in production, OC is still evolving. These are some of the most powerful features:

  • It enables developers to create and publish components very easily. None of the operations need any infrastructural work as the framework takes care, when packaging, of making each component production-ready.
  • It is framework agnostic. Microsites written in C#, Node and Ruby consume components on the server-side via the API. In the front-end, it is great for delivering neutral pieces of HTML but works well for Angular components and React views too.
  • It enables granular ownership. Many teams can own components and they all are discoverable via the same service.
  • Isomorphism is good for performance. It enables consumers to render things on the server-side when needed (mobile apps, SEO) and defer to the client-side contents that are not required on the first load (third-party widgets, adverts, SPA fragments).
  • Isomorphism is good for robustness. When something goes wrong on the server-side (the registry is erroring or slow) it is possible to use client-side rendering as a fail-over mechanism. The Node.js client does this by default.
  • It is a good approach for experimentation. People can work closely with the business to create widgets that both get data from back-end services and deliver it via rich UIs. We have often seen teams create and instrument tests via OC in less than 24 hours.
  • Semver and auto-generated documentation enforce clear contracts. Consumers can pick the version they want and component owners can keep the contract clear.
  • A more componentised front-end leads to writing more easily disposable code. As opposed to aiming for highly maintainable code, this approach promotes small iterations on very small, easily readable and testable units of code. From this perspective, recreating something from scratch is perfectly acceptable and even recommended, as there is almost zero cost for a developer to start a new project, and the infrastructure in place makes maintenance and deprecation as easy as a couple of clicks.

If you wish to try OpenComponents or learn more about it, visit OC’s github page or have a look at some component examples. If you would like to give us feedback, ask us questions, or contribute to the project, get in touch via the gitter chat or via e-mail. We would love to hear your thoughts about this project.

Testing React Components

At OpenTable it’s becoming an increasingly popular trend to use React. One of the reasons for this is its ability to render on the server-side whilst still giving us the client-side flexibility that we all crave!

We all know that to have stable, reliable software you need well-written tests. Facebook knows this and provides the handy Test Utilities library to make our lives easier.

Cool — I hear you all say! But what is the best approach to testing React components?

Well, unfortunately this is something that is not very well documented, and if not approached in the correct way it can lead to brittle tests.

Therefore I have written this blog post to discuss the different approaches we have available to us.

All code used in this post is available on my GitHub.

The Basics

To make our lives a lot easier when writing tests it’s best to use a couple of basic tools. Below is the absolute minimum required to start testing React components.

  • Mocha – This is a testing framework that runs in the browser or Node.js (others are available).
  • ReactTestUtils – This is the basic testing framework that Facebook provides for testing React.

The Scenario

We have a landing page broken down into two separate components:

  • Container – The holding container for all sub-components.
  • Menu Bar – Contains the site navigation and is always displayed.

[Diagram: the landing page split into its container and menu bar components]

Each React component is self-contained and should be tested in isolation.

For the purpose of this exercise we will focus on the test for the container component and making sure that the menu bar is displayed within it.

Approach 1 (Full DOM):

I like to call this the “Full DOM” approach because you take a component and render it in its entirety, including all of its children. The React syntax is transformed and any assertion you make will be against the rendered HTML elements.

Below is our test scenario written in this approach.

import React from 'react/addons';
...
import jsdom from 'jsdom';

global.document = jsdom.jsdom('<!doctype html><html><body></body></html>');
global.window = document.parentWindow;

describe('Container', function () {
  it('Show the menu bar', function () {
    let container = TestUtils.renderIntoDocument(<Container />);

    let result = TestUtils.scryRenderedDOMComponentsWithClass(container,
      'menu-bar-container');

    assert.lengthOf(result, 1);
  });
});

If you run the above test it passes but how does it work?

import jsdom from 'jsdom';

global.document = jsdom.jsdom('<!doctype html><html><body></body></html>');
global.window = document.parentWindow;

This sets up our DOM which is a requirement of TestUtils.renderIntoDocument.

let container = TestUtils.renderIntoDocument(<Container />);

TestUtils.renderIntoDocument then takes the React syntax and renders it into the DOM as HTML.

let result = TestUtils.scryRenderedDOMComponentsWithClass(container, 'menu-bar-container');

We now query the DOM for a unique class that is contained within the menu-bar and get an array of DOM elements back which we can assert against.

The example above is a common approach but is it necessarily the best way?

From my point of view, no, as this approach makes our tests brittle. We are exposing and querying the inner workings of the menu-bar, and if someone were to refactor it and remove or rename the “menu-bar-container” class, our test would fail.

Approach 2 (Shallow Rendering):

With the release of React 0.13 Facebook provided the ability to “shallow render” a component. This allows you to instantiate a component and get the result of its render function, a ReactElement, without a DOM. It also only renders the component one level deep so you can keep your tests more focused.

import React, { addons } from 'react/addons';
import Container from '../../src/Container';
import MenuBar from '../../src/MenuBar';

describe('Container', function () {
  let shallowRenderer = React.addons.TestUtils.createRenderer();

  it('Show the menu bar', function () {
    shallowRenderer.render(<Container/>);
    let result = shallowRenderer.getRenderOutput();

    assert.deepEqual(result.props.children, [
      <MenuBar />
    ]);
  });
});

Again like the previous example this passes but how does it work?

let shallowRenderer = React.addons.TestUtils.createRenderer();

We first create the shallowRenderer, which handles the rendering of the React components.

shallowRenderer.render(<Container/>);

Then we pass the component we have under test to the shallowRenderer.

let result = shallowRenderer.getRenderOutput();
assert.deepEqual(result.props.children, [<MenuBar/>]);

And finally we get the output from the shallowRenderer and assert that the children contain the menu-bar component.

Is this approach any better than the previous one? In my opinion yes, for the following reasons:

  • We don’t rely on the inner workings of the menu-bar to know if it has been rendered and therefore the markup can be refactored without any of the tests being broken.

  • Fewer dependencies are being used, as shallow rendering does not require a DOM to render into.

  • It’s a lot easier to see what is being asserted as we are able to use JSX syntax in assertions.

Conclusion

So is shallow rendering the silver bullet for React testing? Probably not, as it is still lacking one key feature for me when dealing with large components, and that is the ability to easily query the rendered output (libraries like enzyme are working towards improving this). But it is still a lot better than rendering the component out into HTML and coupling your tests to the inner workings of other components.
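As a sketch, the same test written against enzyme’s shallow rendering API could look something like this (imports and assertions follow the earlier examples):

import React from 'react';
import { shallow } from 'enzyme';
import Container from '../../src/Container';
import MenuBar from '../../src/MenuBar';

describe('Container', function () {
  it('Show the menu bar', function () {
    // Shallow render the container and query for the MenuBar component itself,
    // rather than for the markup it happens to produce.
    let wrapper = shallow(<Container />);

    assert.lengthOf(wrapper.find(MenuBar), 1);
  });
});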

In this blog post we have just scratched the surface of testing with React and I hope it’s food for thought when writing your next set of React tests.

Puppet-Community

Puppet is an important tool to us at OpenTable; we couldn’t operate as efficiently without it. But Puppet is more than a tool or a vendor, it is a community of people trying to help each other operate increasingly complex and sophisticated infrastructures.

The Puppet community and the open source efforts that drive that community have always been important to us which is why we want to take a step further in our efforts and introduce you to the “Puppet-community” project.

What is Puppet-community

Puppet-community is a GitHub organisation of like-minded individuals from across the wider Puppet ecosystem and from a diverse set of companies. Its principal aims are to allow the community to synchronise its efforts and to provide a GitHub organisation and Puppet Forge namespace not affiliated with any company.

Its wider aims are to provide a place for module and tool authors to share their code and the burden of maintaining it.

I would like to say that this was our idea, as it’s an excellent one, but actually all credit goes to its founders: Igor Galić, Daniele Sluijters and Spencer Krum

Why communities matter

So why all the fuss about this? Why does it even matter where your code lives?

Well, these are some of the questions that I asked myself when I first heard about this project at PuppetConf 2014. The answer is that it really does matter, and it’s a pattern that is developing elsewhere (see: packer-community, terraform-community-modules, cloudfoundry-community) to deal with the problems you’ll face with a large amount of open source code.

Stepping back slightly, if you look at open source then there are three types: product-based (think open-core), corporate/individual sponsored, and community-driven.

The first is common for businesses (like PuppetLabs) whose product is an open source product. They make great efforts to build a community, fix bugs and accept changes. They make their money through extras (add-ons and/or professional services). They control what they will or won’t accept and are driven by the need to build that community as well as support those big paying customers who pay the bills – it’s a tough balancing act.

The second is what you probably mean when you think about open source. It’s an individual or company that dumps some code they have been working on onto GitHub and that’s it – they own it, they control it, and if they don’t like your changes they don’t even have to give a reason. They can also choose to close or delete the project whenever they want, or more likely they will just let it sit on GitHub and move on to the next thing.

The third is the community approach. Create a GitHub organisation, move your projects there and add some new people with commit access. This is a different approach because it means that you don’t own it any more; you don’t have that tight control over the codebase because there are other people with other opinions that you have to take into account. It also means that on long weeks when you’re on-call or on holiday there is someone else to pick up the slack and merge those pull requests for you. It has massive benefits if you can keep that ego in check.

Why we’re moving our modules there

So why is OpenTable moving its modules there? It is because we care about the community (particularly those using Puppet on Windows) and want to make sure there is good long-term support for the modules that we authored. OpenTable isn’t a company that authors Puppet modules, it is a company that seats diners in restaurants, so from time to time we are going to work on other things.

By being part of the community there will be other people who can help discuss and diagnose bugs, merge pull requests and generally help with any problems that arise when using the modules we created.

Sometimes when writing a module it’s not about being the best, sometimes it’s just about being first – we got a bit lucky. What that means though is that we need to recognise that there are plenty of people out there in the community that have better knowledge than us about a tool or application and might be better suited to guide the project forward – heck we might even learn from them in the process.

So let’s lose our egos, loosen that grip and let those modules be free …

What that means for you

Ok, so let’s get practical for a second. What’s happening here? What our support of Puppet-community means is that our code has moved into a new organisation (github.com/puppet-community) and our modules have been re-released under the community namespace on the forge (forge.puppetlabs.com/puppet). So if you are using our modules then you should go and have a look on the forge and update to the latest versions. We will continue to provide lots of support to these modules but so will lots of others (including some PuppetLabs employees) so expect the quality of the modules to also start increasing.

If you have any thoughts or questions about this you can reach out to me personally on twitter: @liamjbennett or via email at: liamjbennett@gmail.com

The DNS ABC

Introduction to DNS

Before joining OpenTable I was looking for a software engineer job and I’ve done my fair share of interviews. A question that has popped up a lot, and when I say a lot I mean always, is:

Could you tell me what happens when I type a URL in a web browser on my computer and press enter?

Of course the possible answers could range from “MMMHHH, wellll, I’m not sure where to start…” to a whole book on computer networks.

After a number of attempts to answer briefly and correctly, I’ve concluded that mentioning DNS can make a reasonable start.

Let’s think about it. When we type the address of the resource we want to browse, we use the alphabet, right? With letters and names easily readable and retainable by a human being.

But a machine needs an IP address to recognize another machine connected to a network. An IP address is numerical, for example 192.168.0.1. Less readable, it seems.

And here is where DNS comes into play. DNS stands for Domain Name System, and that represents exactly what it is: a system that translates domain names (e.g. www.opentable.co.uk) into IP addresses. I think of it as a phone book. It is queried with a domain name and, after a lookup, returns an IP.

How does the magic happen? Let’s look into it.

The ABC

Some definitions

So we can define a domain name as a string composed of one or more parts, called labels, concatenated and delimited by dots, with a hierarchical logic.

In the case of www.opentable.co.uk, for instance, we have four labels:

  • uk is the top-level domain. This should sound familiar. Famous top-level domains also include .com, .net, .org, .it, .gov, etc.

  • co is the second level domain, which in this case specifies the commercial nature of the company.

  • Hierarchy goes from right to left, so then we can say that opentable is a subdomain of co. And so on.

  • A name that can be associated with a specific machine connected to a network with an IP address is called a hostname. Let’s say it’s the leftmost label in the domain name.

Questions that pop out at this point

Q: So all the host names reachable via a specific domain have a specific IP address! There must be BILLIONS of them. How do we make sure everyone is unique?

A: There are entities that have the authority to assign and register names under one or more top-level domains, called registrars. The registered name then becomes part of a central database known as the whois database.

Q: Now, how do we retrieve this infamous IP address by just knowing a domain name? Who can resolve this request?

A: Well, the domain name is resolved into an IP address by querying authoritative name servers. These machines are the endpoints of a database that can map domain names to IPs. The name servers at the top of the hierarchy, which know where to find the authoritative servers for each top-level domain, are called root name servers.

Q: OK, but wait a second. How in the heavens does my machine know the address of the name server to query? I thought I just entered an address in the browser!

A: Every client machine has a default DNS resolver, which is responsible for initiating the sequence of queries that will ultimately lead to the resolution. It is very important to note that the system’s DNS settings can also be overridden by the Internet Service Provider (ISP) settings, so the DNS lookup process can be very OS-specific and ISP-specific. That would deserve a whole post of its own.

How to resolve an address (ideally)

Resolving an address via DNS is also called lookup, and it is a recursive process. Now that we know the purpose of DNS, and the concepts involved in the process, we can dig a little deeper into its basic mechanism, which is roughly:

  1. The resolver has knowledge of the addresses of root name servers, from where the search can start.

  2. The root name server will return a name server which is authoritative for the top-level domain.

  3. This server will give the address of the name server authoritative for the second level domain.

  4. If the hostname is resolved, an IP address is returned. Otherwise step 3) is repeated for all the labels of the domain name in sequence, until a result is reached.

I made a diagram that shows that.
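If you want to watch a resolution happen from code, Node.js exposes the resolver directly. A small sketch (the hostname is just an example, and the addresses returned will differ):

var dns = require('dns');

// resolve4 asks the name servers for the A records of a hostname and returns
// the IPv4 addresses the recursive resolution ends up with.
dns.resolve4('www.opentable.co.uk', function (err, addresses) {
  if (err) throw err;
  console.log(addresses); // an array of IPv4 addresses, e.g. [ '203.0.113.10' ] (illustrative only)
});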

Real life problems

The mechanism explained above is great, but applied as-is in a real-life application it would lead to a bottleneck. Every lookup would involve the root servers and authoritative servers, which would be hit by gazillions of queries every day, putting a huge burden on the system from the start.

To solve this, of course a caching system comes to help. Yes, DNS allows and encourages caching. This way another class of DNS servers comes into play, the recursive name servers. They can perform recursive lookups and cache results, returning them when queried even if they don’t have the authority to generate the results themselves.

Caching recursive DNS servers are usually managed by Internet Service Providers, and are able to resolve addresses without waiting for the “authorities”. This means that a query will rarely have to hit the root name servers, since there is a very high likelihood that the hostname/IP request is already cached by one of the delegated DNS servers that are called by recursion.

We could say that in reality a root server will be hit as a last resort to track down an authoritative server for a given domain.

The amount of time for which a lookup result is stored on a server is called time-to-live (TTL) and can vary with the configuration.

One side effect of the heavy caching that the DNS relies on is that when a new domain is registered, or there is a change in any domain-related settings, there will be a time lag while the change propagates to all the cached results.

It is noteworthy that cached DNS results from your browsing could be stored in your router, or somewhere within your browser’s memory as well. These IP addresses seem to be everywhere these days!

Conclusion

I barely scratched the surface of the Domain Name System topic, and that alone took a good day of research and writing.

So I decided to avoid making this post too long, so that beginners who find it will profit from it and be encouraged to research these key concepts. This will also allow me to decide which parts of DNS are worth more digging, and maybe write a sequel. Stay tuned!

Hapi.js and SIGTERM

When we first stood up our hapi.js APIs, we wrote init scripts to start/stop them. Stopping the server was simply a case of sending SIGKILL (causing the app to immediately exit).

Whilst this is fine for most cases, if we want our apps to be good Linux citizens, then they should terminate gracefully. Hapi.js has the handy server.stop(...) command (see docs here) which will terminate the server gracefully. It will cause the server to respond to new connections with a 503 (server unavailable), and wait for existing connections to terminate (up to some specified timeout), before stopping the server and allowing the node.js process to exit. Perfect.

This makes our graceful shutdown code really simple:

process.on('SIGTERM', function(){
  server.stop({ timeout: 5 * 1000}, function(){
    process.exit(0);
  });
});

When we see a SIGTERM, call server.stop(), then once the server has stopped, call process.exit(0). Easy peasy.

Throw a spanner in the works

Whilst server.stop() is really useful, it has the problem that it immediately prevents the server from responding to new requests. In our case, that isn’t particularly desirable. We use service-discovery, which means that the graceful termination of our app should run like this:

  • SIGTERM
  • Unannounce from Service-Discovery
  • server.stop(...)
  • process.exit(0)

Ideally we want the unannounce to happen before the server starts rejecting connections, in order to reduce the likelihood that clients will hit a server that is shutting down.

Plugins to the rescue!

Thanks to hapi.js’s awesome plugin interface (shameless self promotion), we can do some magic to make the above possible.

I created a really simple plugin called hapi-shutdown which will handle SIGTERM and then run triggers before calling server.stop(...).

The idea is that it allows us to run the ‘unannounce’ step, before server.stop(...) is called.

How to use hapi-shutdown

server.register([
  {
    plugin: require('hapi-shutdown'),
    options: {
      serverSpindownTime: 5000 // the timeout passed to server.stop(...)
    }
  }],
  function(err){
    server.start(function(){

      server.plugins['hapi-shutdown'].register({
        taskname: 'do stuff',
        task: function(done){
          console.log('doing stuff before server.stop is called');
          done();
        },
        timeout: 2000 // time to wait before forcibly returning
      })
    });
  });

The plugin exposes a .register() function which allows you to register your shutdown tasks. The tasks are named (to prevent multiple registrations), and each task must call the done() function. The timeout parameter is provided so that a task which never completes won’t block the shutdown of the server.

Neat, huh?

Hooking up unannounce using hapi-shutdown

We now have a place to register our ‘unannounce’ task. Our service-discovery code is wrapped in another plugin, which means we can use server.dependency(...).

// inside the plugin's register function

server.dependency('hapi-shutdown', function(_, cb){
  var err = server.plugins['hapi-shutdown'].register({
    taskname: 'discovery-unannounce',
    task: function(done){
      discovery.unannounce(function(){
        done();
      });
    },
    timeout: 10 * 1000
  });

  cb(err);
});

server.dependency(...) allows us to specify that this plugin relies on another plugin (or list of plugins). If the dependent plugin is not registered before the server starts, then an exception is thrown.

Handily, server.dependency(...) also takes a callback function, which is invoked after all the dependencies have been registered, which means that you don’t need to worry about ordering inside your server.register(...) code.

This allows our unannounce code to be decoupled from the actual business of shutting down the server.

Dismantling the monolith - Microsites at Opentable

A couple of years ago we started to break up the code-base behind our consumer site opentable.com into smaller units of code, in order to improve our productivity. New teams were created with the goal of splitting up the logic that was powering the back-end and bringing to life new small services. Then we started working on what we call Microsites.

Microsites

A microsite is a very small set of web-pages, or even a single one, that takes care of handling a very specific part of the system’s domain logic. Examples are the Search Results page or the Restaurant Profile page. Every microsite is an independently deployable unit of code, so it is easier to test and deploy, and as a consequence more resilient. Microsites are then all connected by a front-door service that handles the routing.

Not a free ride

When we deployed some microsites to production we immediately discovered a lot of pros:

  • Bi-weekly deployments of the monolith became hundreds of deployments every week.
  • No longer a shared codebase for hundreds of engineers: pull requests accepted, merged, and often deployed on the same day.
  • Teams experimenting and iterating faster: product was happy.
  • Diversity of tech stacks: teams were finally able to pick their own favourite web-stack, as long as they were capable of deploying their code and taking care of it in terms of reliability and performance.
  • Robustness: when something was wrong with a microsite, everything else was fine.

On the other hand, we soon realised that we had introduced new problems into the system:

  • Duplication: teams started duplicating a lot of code, specifically front-end components such as the header, the footer, etc.
  • Coordination: when we needed to change something on the header, for example, we were expecting to see the change live in different time frames, resulting in inconsistencies.
  • Performance: every microsite was hosting its own duplicated CSS, Javascript libraries, and static resources, resulting in a big disadvantage for the end-user in terms of performance.

SRS – aka Site Resources Service

To solve some of these problems we created a REST API to serve HTML snippets, which we soon started to call components. The main characteristics of the system are:

  • We have components for shared parts of the website such as the header, the footer, and the adverts. When a change has to go live, we apply the change, we deploy, and we see the change live everywhere.
  • Output is in HTML format, so integration is possible whether the microsite is a .NET MVC site or a node.js app.
  • We have components for the core CSS and the common JS libraries, so that all the microsites use the same resources and the browser can cache them, making the navigation smooth.
  • The service takes care of hosting all the static resources in a separate CDN, so microsites don’t have to host those resources.

This is an example of a request to the core css component:

curl http://srs-sc.otenv.com/v1/com-2014/resource-includes/css

{
  "href": "http://srs-sc.otenv.com/v1/com-2014/resource-includes/css",
  "html": "<link rel=\"stylesheet\" href=\"//na-srs.opentable.com/content/static-1.0.1388.0/css-new-min/app.css\" /><!--[if lte IE 8]><link rel=\"stylesheet\" href=\"//na-srs.opentable.com/content/static-1.0.1388.0/css-new-min/app_ie8.css\" /> <![endif]-->",
  "type":"css"
}

The downside of this approach is that there is a strict dependency on SRS for each microsite. A call to SRS has to be made on every request, so we had to work hard to guarantee reliability and good performance.

Conclusions

When we tried the microsite approach we “traded” some of our code problems for some new cultural problems. We became more agile and we were working in a new, different way, with the downside of needing to coordinate more people more effectively. The consequence is that the way we approach the code has evolved over time.

One year later, with the front-end (almost completely) living on micro-sites, and with the help of SRS, we are experimenting with more effective ways to be resilient and robust, with the specific goal of allowing teams to create their own components, share them with other teams in order to stay independent, and use them to easily run A/B experiments.

In the next post I’ll write about OpenComponents, an experimental framework we just open-sourced that is trying to address some of these needs.

A Beginner’s guide to REST services

Why this post?

As a junior, I always find it easier to just sit and write code than to actually stop and think about the theoretical basis that lies beneath the applications I work on. REST is one of those terms I heard a lot about, so I decided to try to sum up what it means and how it affects the choices we make every day as software engineers.

Introduction to REST

REST stands for Representational State Transfer, and it can be defined as an architectural style used to build Web Services that are lightweight, maintainable, and scalable. A service that is designed by REST principles can be called a RESTful service.

It was first described in 2000 by Roy Fielding, in a dissertation called “Architectural Styles and the Design of Network-based Software Architectures”. The basic idea was to describe the interactions between the components of a distributed system, putting constraints on them and emphasizing the importance of a uniform interface that is abstracted from the individual components.

REST is often applied to the design and development of web services, which is the scenario I’ll try to address in this post.

The purpose of a web service can be summed up as follows: it exposes resources to a client so that it can have access to them (examples of typical resources include pictures, video files, web pages and business data).

Common features of a service that is built in a REST style are:

  • Representations
  • Messages
  • URIs
  • Uniform Interface
  • Statelessness
  • Links between resources
  • Caching

Representations – what are they?

The REST style does not put constraints on the way resources are represented, as long as their format is understandable by the client.

Good examples of data formats in which a resource could be returned from a service are JSON (JavaScript Object Notation, which nowadays is the coolest one) and XML (Extensible Markup Language, used for more complex data structures). Say for instance a REST service has to expose the data related to a song, with its attributes. A way of doing it in JSON could be:

{
    "ID": 1,
    "title": "(You gotta) Fight for your right (To party)",
    "artist": "Beastie Boys",
    "album": "Licensed To Ill",
    "year": 1986,
    "genre": "Hip-Hop"
}

Easy, huh?

Anyway, a service can represent a resource in a number of ways at the same time, leaving the client to choose which one is better suited for its needs. The important thing is that there is agreement on what format to send/expect.

The format that the client needs will be part of the request sent by the client.

The resource will be eventually sent by the service as part of what we call a response.

It has to be kept in mind that a resource should be completely described by its representation, since this is the only information the client will have. It has to be exhaustive, but without exposing classified or useless information about the entity.

Messages A.K.A. client and service chatting

Q: So, how exactly do client and service exchange requests and responses?

A: They send messages.

In fact, to be more specific, the client will send an HTTP request to the service, specifying the following details:

  • The method that is called on the resource. It can correspond to a GET, a POST, a PUT, a DELETE, an OPTIONS or a HEAD operation.
  • The URI of the request. It identifies the resource on which the client wants to use the method. More on that later. For now let’s say it is the only way the client knows how to call the needed resource.
  • The HTTP version, which is usually HTTP/1.1.
  • The request headers, which are the additional information passed, with the request, to the service. These fields are basically request modifiers, similar to the parameters sent to a programming language method, and they depend on the type of request sent. More on that later.
  • The request body: is the actual content of a message. In a RESTful service, it’s where the representation of resources sit. A body will not be present in a GET request, for instance, since it is a request to retrieve a resource rather than to create one, whereas a POST request will most likely have one.

The request will then generate an HTTP response to the client, which will contain the following elements:

  • The HTTP version, same as above.
  • The response code: which is a three-digit status code sent back to the client. Can be of the 1xx format (informational), 2xx (success), 3xx (redirect), 4xx (client error), 5xx (server error).
  • The response header, which contains metadata and settings related to the message.
  • The response body: contains the representation (if the request was successful).
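To make this concrete, here is a hedged JavaScript sketch (the URI and the song resource are the hypothetical examples from earlier; fetch is available in browsers and recent Node.js versions) showing where each of these elements appears:

async function getSong() {
  const response = await fetch('http://api.example.com/songs/1', {
    method: 'GET',                            // the method applied to the resource
    headers: { Accept: 'application/json' }   // request header: which representation we want
  });                                         // no request body: a GET only retrieves

  console.log(response.status);                       // response code, e.g. 200
  console.log(response.headers.get('Content-Type'));  // response header metadata
  return response.json();                             // response body: the representation
}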

URIs, home of the resources

A requirement of REST is that each resource has to correspond to a URI address, which unsurprisingly stands for Uniform Resource Identifier. Having URIs associated with resources is key, because they are the addresses at which the client is allowed to perform operations on the resources. It is important to stress that according to REST a URI should describe a resource, but never the operation performed on it.

The addresses are usually constructed hierarchically, to allow readability. A typical resource URL could be written as: http://serviceName/resourceName/resourceID

Basic guidelines to build well-structured URIs are:

  • Resources should be named with plural nouns, no verbs, using conventions throughout the whole service.
  • Query URIs http://serviceName/resourceName?id=resourceID should be used only when really necessary. They are not deprecated by REST style, but they are less readable than the normal URIs, and are ignored by search engines. On the upside, they allow the client to send parameters to the service, to refine the request for a specific subset of resources, or resources in a specific format.

Uniform interface, various operations

Ok, so now that a client knows where a resource is reachable, how is it going to handle the resource? What are the operations that it can perform?

HTTP provides a set of methods that allow the client to perform standard operations on the service:

  • GET – read a resource (safe)
  • POST – insert a new resource, or update an existing one (not idempotent)
  • PUT – insert a new resource, or update an existing one (idempotent, see below)
  • DELETE – delete a resource (idempotent)
  • OPTIONS – list the allowed operations on a resource (safe)
  • HEAD – return only the response header, no body (safe)

The key difference between POST and PUT is that no matter how many times a PUT operation is performed, the result will be the same (this is what idempotent means), whereas with a POST operation a resource will be added or updated multiple times.

Another difference is that a client sending a PUT request always needs to know the exact URI to operate on, i.e. it has to assign a name or an ID to the resource. If the client is not able to do so, it has no choice but to use a POST request.

Finally, if the resource already exists, POST and PUT will update it in an identical fashion.
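A short sketch of the difference (the URIs and the song object are hypothetical):

async function saveSong(song) {
  // PUT: the client picks the URI, so repeating this call still leaves exactly one resource.
  await fetch('http://api.example.com/songs/1', {
    method: 'PUT',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(song)
  });

  // POST: the collection assigns the identity, so repeating this call may create duplicates.
  await fetch('http://api.example.com/songs', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(song)
  });
}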

These operations, according to REST, should be available to the client as hyperlinks to the above described URIs, and that is how the client/service interface is constrained to be uniform.

Statelessness of the client side

A RESTful service does not maintain the client’s application state on the server side. Each request therefore has to be self-contained and resource-specific; the client cannot perform operations that assume the service remembers past requests. The client only knows what to do based on its ability to read the hypertext it receives, knowing its media type.

This leads me to mention an important constraint of REST, that was also enforced by Fielding after publishing his dissertation: hyperlinks within hypertext are the only way for the client to make state transitions and perform operations on resources. This constraint is also known as HATEOAS (Hypermedia As The Engine Of Application State).

Links between resources

In the case of a resource that contains a list of resources, REST suggests including links to the individual resources in the representation, to keep it compact and avoid redundant data.

Caching to optimize time and efficiency

Caching allows responses to be stored and returned if the same request is performed again. It has to be handled carefully to avoid returning stale results. The headers that allow us to perform controls over caching are:

  • Date – when this representation was generated
  • Last-Modified – the date and time when the server last modified the representation
  • Cache-Control – the HTTP 1.1 header used to control caching; it can contain directives
  • Expires – the expiration date of the representation (supports HTTP 1.0)
  • Age – the duration since the resource was fetched from the server

Cache-Control values can be tweaked to control if a cached result is still valid or stale. For example, the max-age value indicates for how many seconds from the moment expressed by the Date header a cached result will be valid.

Conclusion

REST is a language-agnostic style that abstracts over components and allows us to build scalable, reusable and relatively lightweight web services. Thinking about it, it seems that REST is very close to an accurate description of the characteristics that made the World Wide Web so popular.

That of course is encouraging developers from all over the world to comply with these very basic ideas, owned by no one but at the same time used by everyone. Fascinating!

On Strongly Typed Logging

Logging is a crucial element of monitoring highly available systems. It allows us not only to find out about errors but also to quickly identify their cause. Logs are often used to generate metrics that help business and engineering make informed decisions on future development directions.

At OpenTable we have a central logging infrastructure, that means all logs are stored in the same shared database (ElasticSearch for us). And everybody can access any logs they want without having very specialized knowledge (thanks Kibana!).

ElasticSearch, though living in a NoSQL world, is not actually a schema-free database. Sure, you do not need to provide schema to it but instead ES will infer schema for you from documents you send to it. This is very similar to type inference you can find in many programming languages. You do not need to specify type of field, but if you later on try to assign inappropriate value to it you will get an exception.

This trait of our database goes all the way to the root of our logging system design. Let me explain why I say that we have ‘strongly typed logs’.

In The Beginning There Was String

Before centralization we just logged a single message along with its importance. In code it looked something like:

logger.ERROR("Kaboom!")

which resulted in a log line on disk with a timestamp, severity and message.

{2014-10-10T07:33:04Z [ERROR] Kaboom!}

That worked pretty well. As time passed we often started making log messages more generic to hold relevant data:

logger.INFO(string.Format("Received {0} from {1}. Status: {2}. Took {3}", httpMethod, sourceIp, statusCode, durationms));

When we decided to centralize logs we moved the same logs from local disk to a central database. Suddenly things that used to live on a single server in a file called ‘application.log’ became part of one huge lump of data. Instead of access to logs becoming easier, they were really hard to filter, not to mention aggregate or use in any simple way to find the source of a problem. ElasticSearch is really good at free text searching, but frankly speaking FTS is never as precise as a good filter.

Then There Was Dictionary Of Strings

Wherever there is a problem there is also a solution. So we changed the way our logging works. We created a custom logger and started sending logs more like documents than single strings.

customLogger.send("info", new Dictionary<string, string> {
  {"method", httpMethod.ToString()},
  {"sourceIp", sourceIp.ToString()},
  {"statusCode", statusCode.ToString()},
  {"duration", durationms.ToString()},
  {"requestId", requestId.ToString()},
  {"service", "myservice"},
  {"message", string.Format("Received {0} from {1}. Status: {2}. Took {3}", httpMethod, sourceIp, statusCode, durationms)}
});

That helped a lot.

You might wonder why we serialized everything to strings. The answer is the ElasticSearch mapping described above. Mapping, once it is inferred, cannot be changed. So from time to time we used to have conflicts (e.g. one application logging requestId as a number, another as a guid). Those conflicts were costly – logs were lost – so we simply applied the simplest solution available and serialized everything.

Now filtering was working fine. We were even able to group requests based on a single field and count them. You cannot imagine how useful it is to simply count the different status codes returned by a service. Also you may have noticed we introduced some extra fields like ‘service’ which helped us group logs coming from a single application. We did the same with hostname etc.

With this easy success our appetite grew and we wanted to log more. Being lazy programmers, we found a way to do it quickly, so our logs often included just the relevant objects.

customLogger.log("info", request)
customLogger.log("error", exception)

Our custom logging library did all the serialization for us. This worked really well. Now we were actually logging whole things that mattered without having to worry about serialization at all. What’s even better, whenever the object in question changed (e.g. a new field was added to request), it was automagically logged.

However, one thing was still missing. We really wanted to see the performance of our application in real time and do range queries (e.g. “show me all requests that have a 5xx status code”). We were also aware that both ES and Kibana could deliver this, but our logging was not yet good enough.

Strongly Typed Logs

So we looked at our logging and infrastructure and at what needs to be done to allow different types of fields to live in ElasticSearch. And you can imagine that it was a pretty simple fix; we just started using types. Each log format was assigned its own type. This type was then used by ElasticSearch to put different logs into separate buckets with separate mapping. The type is equivalent in meaning to classes in OO programming. If we take this comparison further then each log entry would be an object in OO programming. ElasticSearch supports searches across multiple types, which is very convenient when you don’t know what you are looking for. On the other hand, when you know, you can limit your query to single type and take advantage of fields types.

It was a big application change as we needed to completely change our transport mechanism to LogStash. We started with Gelf and then switched to Redis, which allowed us to better control the format of our logs.

We also agreed on a first standard. The standard defined that the type will consist of three parts:

<serviceName>-<logName>-<version>

This ensures that each team can use any logs they want (thus serviceName). Each log will have its own format (thus logName). But they can also change in the future (thus version). One little word of caution: ES doesn’t like dots in type names, so don’t use them.

So our logs look now like this:

customLogger.log(new RequestLog {
  Request = request,
  Headers = headers,
  Status = status
});

RequestLog is responsible for providing a valid type to the logging library.

By sending serialized objects as logs and assigning each class a unique type, our logs have become strongly typed.

We are already a couple of steps further down the path of improving our logs. We have standardized some common fields and log types. That, however, is a completely different tale.

Building a living styleguide at OpenTable

If you’re reading this you’ve probably built yourself a website. A site – large or small – that’s thrown together or crafted over many months. And if you have, you’ve probably kept all your CSS class names in your head, or at least been able to go straight to the relevant stylesheets to retrieve them.

Well OpenTable is unsurprisingly built by many engineering teams across multiple continents, and was completely redesigned last year. And as soon as you have more than a handful of people working on your front-end you will quickly find a well-intentioned developer causing one or both of these problems:

  • Well-intentioned developer adds a new submission form but, like the design Philistine he is, his buttons are 18px Verdana #E40000, not the correct 16px Arial #DA3743
  • Your good old developer knows which font size and colour it should be, but bungs a duplicate class into a random stylesheet (or worse still, inline)

Despite these risks, a single front-end dev (or a team of them) cannot check every new piece of code or they will quickly become a bottleneck.

You need some guidelines

Offline designers regularly create ‘brand guidelines’ or ‘design standards’ to document the precise way their brand or product should be recreated when outside of their control. Online, such guidelines are similarly invaluable for maintaining brand and code consistency with multiple engineers and designers, but it is blindingly obvious that a printed or ‘static’ set of guidelines is completely unsuitable for a constantly changing website.

Step forward a ‘living’ styleguide.

A living styleguide gives a visual representation of a site’s UI elements using the exact same code as on the website, in most cases via the live CSS. A living styleguide may also provide reusable CSS and HTML code examples and they are not just for engineers new to the code; I frequently use ours at OpenTable and I wrote the stylesheets in the first place (I can’t be expected to remember everything).

Providing reusable code improves collaboration, consistency and standards, and reduces design and development time – but like most documentation it is essential your guide is always up-to-date and trustworthy. So if a living styleguide is (theoretically) always up-to-date, how did we build ours?

How we built our styleguide

Living styleguides are not new (although they were one of the trends of 2014) and as such many frameworks have been built over the years. We chose to use Kalei by Thomas Davis – I forget the exact reasons why but it was probably the easiest at the time to set up and customise.

Generating a Kalei styleguide is as simple as adding comments to your stylesheet; Kalei uses a variety of frameworks, including Backbone.js, JSCSSP and Marked to convert these comments into HTML mark-up, generate a list of your individual stylesheets as navigation and present these as a single page web app.

For example in your buttons.css file it is as simple as adding the following comments:

/*!
# Primary buttons
Primary buttons are only used when there is an exceedingly distinct and clear call-to-action.
```
<a href="#" class="button">Button</a>
<a href="#" class="button secondary">Button secondary</a>
<a href="#" class="button success">Button success</a>
<a href="#" class="button alert">Button alert</a>
```
*/

Which, by using the CSS in the file itself, Kalei would visually render like so:

[Screenshot: the primary buttons rendered in the styleguide]

Customising Kalei

Kalei works well out-of-the-box but we had to make a few customisations. These were mostly cosmetic changes, but one fundamental change was to add support for Sass. For this we wrote a Grunt task imaginatively called grunt styleguide in which we combined Clean, Copy, Scss and Replace tasks. Unsatisfactorily it took a little while to set up and involved a number of steps, but below is a simplification of the process.

  1. Clean all CSS files from the styleguide, excluding Kalei specific stylesheets
  2. Copy our partial scss files into a temporary folder and rename them to remove the underscore (partial scss files begin with an underscore and are not compiled by default)
  3. Compile the scss files into CSS in the styleguide directory
  4. Copy across dependent fonts and images, using Replace to update the relative paths
  5. Delete the temporary directory
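
For illustration, here is a minimal sketch of how such a Gruntfile could be put together. The plugin choices (grunt-contrib-clean, grunt-contrib-copy, grunt-sass and grunt-text-replace), task names and paths are assumptions for the example, not our exact configuration.

// Gruntfile.js – illustrative sketch only; plugins, targets and paths are assumptions
module.exports = function (grunt) {
  grunt.initConfig({
    clean: {
      // 1. remove previously compiled CSS, keeping the Kalei-specific stylesheets
      styleguide: ['styleguide/css/**/*.css', '!styleguide/css/kalei*.css'],
      tmp: ['.tmp-scss']
    },
    copy: {
      // 2. copy partials to a temp folder, dropping the leading underscore
      scss: {
        files: [{
          expand: true,
          cwd: 'src/scss',
          src: ['**/*.scss'],
          dest: '.tmp-scss/',
          rename: function (dest, src) {
            return dest + src.replace(/(^|\/)_/, '$1');
          }
        }]
      },
      // 4a. copy the fonts and images the compiled CSS depends on
      assets: {
        files: [{ expand: true, cwd: 'src', src: ['fonts/**', 'img/**'], dest: 'styleguide/' }]
      }
    },
    // 3. compile the temporary scss into CSS in the styleguide directory
    sass: {
      styleguide: {
        files: [{ expand: true, cwd: '.tmp-scss', src: ['**/*.scss'], dest: 'styleguide/css', ext: '.css' }]
      }
    },
    // 4b. update relative paths in the compiled CSS
    replace: {
      paths: {
        src: ['styleguide/css/**/*.css'],
        overwrite: true,
        replacements: [{ from: '../../fonts/', to: '../fonts/' }]
      }
    }
  });

  grunt.loadNpmTasks('grunt-contrib-clean');
  grunt.loadNpmTasks('grunt-contrib-copy');
  grunt.loadNpmTasks('grunt-sass');
  grunt.loadNpmTasks('grunt-text-replace');

  // 5. clean:tmp deletes the temporary directory at the end
  grunt.registerTask('styleguide', [
    'clean:styleguide', 'copy:scss', 'sass:styleguide',
    'copy:assets', 'replace:paths', 'clean:tmp'
  ]);
};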

This task is run as a deployment step and can be run locally when developing the guide.

Other than a few small UI tweaks, we made one significant change to the look and feel. By default the navigation lists stylesheets using their full file name, e.g. breadcrumbs.css and buttons.css. Using a regex function in the menu.js file and text-transform: capitalize in the Kalei stylesheet, we modified the navigation to display the more attractive headings Breadcrumbs and Buttons.
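
As a rough illustration, the transformation amounts to something like the function below; this is a simplified stand-in rather than the exact code in our menu.js.

// Simplified stand-in for the regex in menu.js: strips the file extension so
// "buttons.css" becomes "buttons"; text-transform: capitalize in the Kalei
// stylesheet then displays it as "Buttons" in the navigation.
function prettifyStylesheetName(fileName) {
  return fileName.replace(/\.css$/i, '');
}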

View our styleguide at opentable.com/styleguide.

What’s next?

Our living styleguide is intended to be an organic resource that we will grow and refine into an integral part of our software development. We have many ideas for how we want to develop the guide – at the very least it is currently incomplete insomuch as we have not documented every one of our stylesheets.

There is also a fundamental weakness to this type of styleguide, which is duplication of code. Whilst we use the exact same CSS as our live site, we are copying and pasting mark-up into these files and this content can go out of date without deliberate upkeep. At OpenTable we have a site resource service which serves HTML snippets to different internal microsites so one option could be to use this service to integrate these snippets into the styleguide. We may also investigate a solution using web components as cross-browser support is not a concern.

We are also interested to see whether it would be useful to run UI tests against the styleguide. We have used pDiff in the past for visual regression on specific microsites, but the styleguide could be an opportunity to catch accidental, global UI changes. We are going to look at running BackstopJS against each section of the guide to see if this increases its usefulness.

Finally, as one of the developers who created the styleguide I want it to be widely adopted across OpenTable. I want designers and engineers to contribute to the code and use it for their day-to-day designing and developing, and I want product owners and marketing folks to use it when creating promotional material and A/B tests. My ultimate goal is for it to be an integral tool enabling everyone to work faster, avoid duplication and maintain a consistent brand identity.


Explaining Flux architecture with macgyver.js

What is Flux?

Flux is an application architectural pattern developed by Facebook. It was developed to solve some of the complexities of the MVC pattern when used at scale by favouring a uni-directional approach. It is a pattern and not a technology or framework.

MVC scale issue

When applications use the model-view-controller (MVC) pattern at any real scale, it becomes difficult to maintain consistent data across multiple views. This is particularly the case where the flow between models and views is not uni-directional, requiring ever-increasing logic to keep views in sync when model data is updated. Facebook hit this issue several times, notably with their unseen count (an incremented value of unseen messages which is updated by several UI chat components). It wasn’t until they realised that the MVC pattern could not accommodate this complexity that they stepped back from the problem and addressed the architecture.

Flux is intentionally unidirectional.

(Diagram: Flux’s uni-directional data flow, with actions passing through the dispatcher to stores and on to views)

Key to this architecture is the dispatcher. The dispatcher forms the gatekeeper that all actions must go through. When a view, or views, wish to do something they fire an action which the dispatcher correctly routes via registered callbacks made by the stores.

Stores are responsible for the data and respond to callbacks from the dispatcher. When data is changed they emit change events, which views listen to in order to know that the data has changed. The view can then respond accordingly (for example, update/rebind).

This will become more obvious when we go through the macgyver.js example.

What is macgyver.js?

Macgyver is a project fork of mullet.io by Steve Higgs. Mullet is an aggregate stack to get started using Node.js with Facebook’s React framework on the client and Walmart’s hapi.js on the server.

Steve initially swapped out Grunt for Gulp, updated hapi and React and fixed some issues with the React dev tools. I then added another example to incorporate the Flux architecture, which you can see here. As React was also developed by Facebook, you can begin to see how Flux complements its design and component-based model.

The macgyver.js Flux example

The demo is a very simple quiz. In true Macgyver style, he is faced with abnormally unrealistic situations, armed only with impossibly useless “everyday” items with which to escape. If you select the correct tool, you proceed to the next situation.

Let’s start by going through the uni-directional flow above and at the same time look at the code and its structure.

When the game is first loaded the view fires an action to get the next situation. This is then fired off to the dispatcher, as are all actions.

receiveSituations: function(data) {
  AppDispatcher.handleViewAction({
    actionType: MacgyverConstants.RECEIVE_SITUATIONS_DATA,
    data: data
  });
},
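
The handleViewAction helper is the thin wrapper most Flux examples place around Facebook’s Dispatcher. A minimal sketch of what it might look like follows; it is not necessarily the exact macgyver.js implementation, and the source label is an assumption.

// AppDispatcher.js – minimal sketch of the usual Flux dispatcher wrapper;
// the 'VIEW_ACTION' source label is illustrative, not taken from macgyver.js
var Dispatcher = require('flux').Dispatcher;

var AppDispatcher = new Dispatcher();

AppDispatcher.handleViewAction = function (action) {
  // every view action is dispatched with a source so stores can tell
  // view-initiated actions apart from other kinds
  this.dispatch({
    source: 'VIEW_ACTION',
    action: action
  });
};

module.exports = AppDispatcher;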

The store registers to listen for events from the dispatcher with a registered callback. It has the job of loading the situation data and emitting an event when this data is changed. In this case the SituationStore.js has the job of setting the current situation for the view to render.

AppDispatcher.register(function(payload){
  var action = payload.action;

  switch(action.actionType) {
      case MacgyverConstants.RECEIVE_SITUATIONS_DATA:
          loadSituationsData(action.data);
          break;
      case MacgyverConstants.CHECK_ANSWER:
          checkAnswer(action.data);
          break;
      default:
          return true;
  }

  SituationStore.emitChange();

  return true;
});
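
The emitChange and change-listener methods used above are typically built on Node’s EventEmitter. Below is a minimal sketch of that plumbing; the internal state and getter names are assumptions for the example, not the exact macgyver.js code.

// SituationStore.js (excerpt) – illustrative sketch of the change-event plumbing;
// currentSituation and getCurrentSituation are assumed names for the example
var EventEmitter = require('events').EventEmitter;
var assign = require('object-assign');

var CHANGE_EVENT = 'change';
var currentSituation = null;

function loadSituationsData(data) {
  // stub: the real store would pick the next situation from the data
  currentSituation = data[0];
}

var SituationStore = assign({}, EventEmitter.prototype, {
  getCurrentSituation: function () {
    return currentSituation;
  },
  emitChange: function () {
    this.emit(CHANGE_EVENT);
  },
  addChangeListener: function (callback) {
    this.on(CHANGE_EVENT, callback);
  },
  removeChangeListener: function (callback) {
    this.removeListener(CHANGE_EVENT, callback);
  }
});

module.exports = SituationStore;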

The React view (in this case Game.jsx) registers an event listener for these changes in the SituationStore using the React “componentDidMount” function. When the situation is received by the component, it rebinds to the data by loading the situation and the possible answers.

var Game = React.createClass({

  componentDidMount: function () {
      SituationStore.addChangeListener(this._onChange);
      ToolStore.addChangeListener(this._onChange);
  },
  componentWillUnmount: function() {
      SituationStore.removeChangeListener(this._onChange);
      ToolStore.removeChangeListener(this._onChange);
  },
  render: ...
});
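
The _onChange handler referenced above is where the rebinding happens. A sketch of how it might look follows; the store getters (getCurrentSituation, getTools) are assumed names for the example rather than the actual macgyver.js API.

// Game.jsx (excerpt) – illustrative sketch of the rebinding; getter names are assumptions
getInitialState: function () {
  return {
    situation: SituationStore.getCurrentSituation(),
    tools: ToolStore.getTools()
  };
},
_onChange: function () {
  // pull the latest data from the stores and trigger a re-render
  this.setState({
    situation: SituationStore.getCurrentSituation(),
    tools: ToolStore.getTools()
  });
},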

When the user selects an answer, this fires off another “CHECK_ANSWER” action to the dispatcher. The situation store receives this event with the answer in the payload and checks whether the selected answer is the correct one. If it is, it updates the situation and emits a change event; the view receives this and rebinds to the new situation.
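
That answer selection goes through an action creator in the same way as receiveSituations above; here is a hedged sketch, where the constant comes from the store code earlier but the payload shape is an assumption.

// sketch of the CHECK_ANSWER action creator – the payload shape is an assumption
checkAnswer: function (selectedTool) {
  AppDispatcher.handleViewAction({
    actionType: MacgyverConstants.CHECK_ANSWER,
    data: selectedTool
  });
},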

Conclusion

Flux can be quite difficult to fathom even though it is quite a simple architectural pattern. In this small example it does initially feel overly complex, and indeed it probably is. The pattern was designed to solve issues that occur at large scale in MVC applications due to the increased amount of bi-directional dependencies between views and models. For smaller applications it could be seen as over-engineered; however, I really like the simplicity of the uni-directional flow, and the assurance that unit tests will almost always mimic the state changes possible in your application because of the guarantee of a simple flow of data.