Meetup: Frontend Refactoring at Scale

OpenTable is delighted to host the tenth WEBdeLDN monthly meetup, with talks by Jack Franklin (Frontend Engineer at Thread) and Nick Balestra (Senior Engineer on the Delivery Tools Team at OpenTable).

This month’s WEBdeLDN talks will cover frontend refactoring at scale, with two real use cases: dismantling a monolithic frontend application, and migrating a large, business-critical application from Angular 1 to React.

Schedule of the evening

18:30 - Doors open. Welcome drinks and pizza
19:00 - Dismantling the frontend monolith together - Nick Balestra
19:45 - More pizza and drinks
20:00 - Lessons learned migrating complex JS applications - Jack Franklin
20:45 - End of the meetup

Registration and full details are available here

[Read More]

Hosting external events

This June, OpenTable hosted the London Machine Learning Study Group to great success. We provided the food, drinks, and space for an extremely talented speaker and over 100 RSVPs. It was also my first experience hosting an external meetup at OpenTable - and I learned a lot!

My Experience

OpenTable has frequently hosted meetups in our office such as Web Platform London and WEBdeLDN. However, acting as an event host was a complete unknown for me, so leading up to the machine learning event, I made a point of sitting down with someone who had organised one before.

We documented the key steps in setting up an external event, and then published what we’d discussed on our internal wiki so that the information wouldn’t be siloed and could be reused in future. Following the steps documented in the wiki proved invaluable in reducing my stress on the day.

After our most recent event we also took some time to look back at how the last few had gone.

Retrospective

Following our last meetup we held a retrospective since, along with the London Machine Learning Study Group, we have hosted a recent spate of meetups at OpenTable. This was extremely useful for capturing our shared experiences, as well as the considerations we want to make for our next external event. Here are a few areas we will look to improve.

Push social media more

We want the largest number of people to know about these events ahead of time, which means not just relying on one platform. Instead we aim to advertise our meetups on all channels available to us to get as diverse an audience as possible. We will tweet about our events more via @OpenTableTechUK and we will use this blog to advertise upcoming events.

A lot of people don’t turn up

[Read More]

Pragmatic Testing with GraphQL

We’ve been using GraphQL at OpenTable for a little over half a year now. I won’t go into detail as to why we started using it, but suffice to say, we really enjoyed creating our very first GraphQL endpoint. It eased a lot of the inconsistencies that we were experiencing with some of our REST-ful services.

This post assumes you have some experience building a GraphQL endpoint. For those of you who aren’t familiar with it, it feels a lot like having a query language exposed via an HTTP endpoint. If you want to try it for yourself, I recommend GitHub’s endpoint. To get started with GraphQL, the official documentation is the perfect place.

The Problem

Now, let’s get back to writing tests for an endpoint. When we were creating our first endpoint, we started facing regression testing problems while expanding our schema. It seemed our existing testing methods were ill-equipped to handle it.

Finding a good solution to this is important because I’ve had such a love–hate affair with testing. Not all tests are created equal, and a naive developer would say, “write a test in case something changes”. That certainly isn’t a good measure. I’ve also found that attempting to write tests immediately, without forethought, can be a costly distraction as your codebase grows.

The Approach

So we took a step back and looked at the code we were writing. Most of the connectors we wrote were fairly anemic. We didn’t have much logic for our API nor did we want any. We could have written unit tests for each part of the schema, but mocking portions of the system seemed like more work than it was worth.

So where did we start? I personally enjoy the outside-in approach, so I started by writing a few acceptance tests. We wrote a test for each query in our schema: it would fire an HTTP request, expect a 200 status code, and expect no errors in the resulting JSON body. Thanks to GraphQL, most errors are handily reported through this mechanism, so we could make a decent impact with very little work.
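As a rough illustration (a minimal sketch rather than our actual suite, assuming an Express app exposing /graphql; the restaurant query is purely hypothetical), such a test might look like this with Mocha and supertest:

```js
// A minimal acceptance-test sketch: one HTTP request per query, assert 200 and no errors.
// The /graphql path, the app module and the restaurant query are assumptions for illustration.
const assert = require('assert');
const request = require('supertest');
const app = require('../src/app'); // hypothetical Express app exposing the GraphQL endpoint

describe('GraphQL queries', () => {
  it('answers the restaurant query without errors', () => {
    return request(app)
      .post('/graphql')
      .send({ query: '{ restaurant(id: 1) { name address { city } } }' })
      .expect(200)
      .then((res) => {
        // GraphQL reports most failures in the `errors` array of the response body
        assert.strictEqual(res.body.errors, undefined);
      });
  });
});
```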

As you can see, this was good enough at the time, but we eventually found that it would not scale very well: if you forget to add a test for one of these queries, you can accidentally leave out a major section of the schema. Being big fans of automation, we wondered how we could scale this out.

[Read More]

OpenTable's Global Hackathon 2017

Last week we were excited to kick off our first OpenTable Global Hackathon, underway simultaneously in San Francisco, Los Angeles, Mumbai, Melbourne, and right here in London. Having personally never attended a hackathon before, let alone helped organise one, I was initially daunted, but with some careful planning, good suggestions from the team and a fair amount of making it up as we went along, the end result was quite a success.

This post discusses what format our hackathon took, what challenges we faced in coordinating across countries, the experience in the London office and what we learned.

The basic format

The hackathon was conceived in our San Francisco headquarters and could easily have been confined to that one office, but I was delighted to learn it was intended to be a global event from the outset. The basic format, described below, was however optimised for our SF office.

In the weeks leading up to the hackathon, individuals were asked to submit their hack proposals. Idea prompts were circulated, such as “engages and delights diners” or “socially connected”, along with the judging criteria: Originality, Feasibility, Likely to adopt, Fidelity of prototype, Business impact and Captivating presentation.

One week before the hackathon, our San Francisco office held a ‘happy-hour’ in which everyone taking part mingled and discussed ideas. This social event encouraged the developers, designers and product owners to self-organise into teams and submit their proposals in advance.

The hackathon itself was devised as a 2½ day event, running from Tuesday morning to Thursday lunchtime during normal working hours, with OpenTable providing breakfast and lunch. Live incidents or outages still had to be fixed, but otherwise the teams would be uninterrupted for the duration.

The hackathon concluded with each team having up to five minutes to present their hack to their colleagues and the judges. The awards were for the top three hacks, special CEO and CTO’s prizes, and an open vote - and generous cash prizes and gifts were up for grabs.

Tweaks for the London version and building enthusiasm

The big headache we always have to live with in London is the eight-hour time difference between London and San Francisco, and the same constraint obviously applied to the hackathon. We concluded that it wouldn’t be feasible to form teams across the offices, so all our build-up and actual hacking remained independent. However, we submitted our recorded presentations for the final judging along with the rest of the company.

[Read More]

The goal driven organisation

Quarterly team goals are an effective way to establish organisational purpose, direction and alignment while supporting team agility. But be vigilant - they can be used inappropriately.

Overview

Due to factors such as growth, acquisition, and changing markets, organisations can find themselves in new environments to which they struggle to adapt.

Scaling Agile in these new environments is hard. The practices and tools that are frequently used to solve this problem can give the appearance of acting against team autonomy and agility. Teams and individuals naturally try to protect the engineering culture that they have worked hard to establish, but by allowing new organisational needs to go unmet they put their autonomy and agility at risk.

In this post I will show how quarterly goal setting, using Agile principles and with one eye on the dysfunctions that can arise, is an effective way to meet organisational needs while protecting team agility and engineering culture.

What needs are we trying to meet?

Senior leaders in an organisation need confidence that the creativity, intelligence, skills and knowledge of their employees are directed most effectively towards delivering long term business objectives. Ensuring that teams are aligned around those business objectives is traditionally called governance.

The top-down command-and-control connotations of that term do not sit comfortably in modern software development and rightly so. The responsibility for organisational alignment should rest with teams and individuals.

A bottom-up approach to organisational alignment relies on transparency. Everyone in the business needs to know the business objectives and what every team is doing to meet them. Arguably the primary output of all product development and engineering teams is a statement of intent and a narrative of their progress.

Put another way, the first output of all knowledge work is shared, actionable knowledge - we can then decide how best to utilise that knowledge. Teams need to know that the decisions that they are making right now are the right decisions for the business. Without full transparency this is not possible.

[Read More]

testing-node-apps-with-docker-compose (and some Soul)

Contents of this post

  • Purpose: the reason for this blog post.
  • Scenario: what this example of using docker-compose can be useful for.
  • Prerequisites: basic setup to be able to run the code contained in this post.
  • Code example: an actual step-by-step guide on how you can set up your test environment to run with docker-compose.
  • Improvements: a couple of ideas on how to expand this technique.

Purpose

As I am sure the audience of this post knows to some extent, Docker is a technology that has grown in popularity over the last few years, allowing developers to deploy software by packaging it into standardized containers that can run in a variety of ecosystems (Apache Mesos, Amazon Web Services and many more).

So we can use Docker for our deployment needs, awesome. But let’s pay attention to a key word I used above. Docker grants isolation. And what do we like to perform on our application in isolation? Yeah, you guessed right – testing!

Specifically, with this post, I aim to dig deeper into how to use docker-compose (a specific Docker-based tool that enables creation of multi-container Docker applications) to build and run a Node.js application connected to MongoDB, to test their interaction and the interaction of the app with the external world, all inside containers running on your machine. All isolated and testable thanks to the usage of containers that we can spin up, hit with tests, and clean up with little effort.

Interested? Let’s go!

Scenario

In this scenario we will use Docker, and specifically docker-compose, to build a container and spin up our app. We will then build another container with a copy of the database, where we can freely create and manipulate data, and finally we will perform all the integration testing we want against those self-contained entities, which we can tear down after the tests have run. Total isolation and, very importantly, no need to pollute our development or pre-production environment with superfluous test data.

Let’s imagine an app that we can build and test, for example a directory of soul music artists.
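To make this concrete, here is a minimal sketch of what the docker-compose file for such a project could look like; the service names, image tag and npm test command are assumptions for illustration, not our exact configuration.

```yaml
# docker-compose.yml - a minimal sketch (service names, image tag and commands are illustrative)
version: '2'
services:
  db:
    image: mongo:3.4            # throwaway MongoDB instance, isolated from any real environment
  app:
    build: .                    # builds the Node.js app from the local Dockerfile
    command: npm test           # run the integration test suite instead of starting the server
    environment:
      - MONGO_URL=mongodb://db:27017/soul-artists
    depends_on:
      - db
```

With something like this in place, `docker-compose up --build --abort-on-container-exit` builds both containers and runs the tests against the containerised database, and `docker-compose down` removes the whole environment afterwards.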

[Read More]

falcor-postman

At OpenTable, we have an engineering culture that empowers us to research, experiment and learn.

In an effort to foster innovation and to try new ideas, Chris Cartlidge, Nick Balestra, Tom Martin and I started working on a side project nicknamed big-mars. Our project is a mobile-first, responsive web application that uses Falcor by Netflix.

Falcor, a JavaScript library for efficient data fetching, is an implementation of the Backend for Frontend (BFF) or the API Gateway pattern.

One powerful concept that Falcor has is its query model, in which you access your data as if it were a single JSON model in memory. You can navigate your data structure the way you would navigate a JSON object: either with a “dot notation” (e.g. restaurant.address.city) or with an “array notation” (e.g. restaurant['address']['city']).

Really easy and intuitive.
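As a small illustration (a minimal sketch with hypothetical data, not code from big-mars; a real app would back the model with an HTTP data source rather than a local cache):

```js
// A minimal sketch of Falcor's query model; the data below is purely illustrative.
const falcor = require('falcor');

// A local cache stands in for the backend here; big-mars would use an HTTP data source.
const model = new falcor.Model({
  cache: {
    restaurant: { address: { city: 'London' } }
  }
});

// Navigate the virtual JSON graph with a path, just as you would a plain object.
model.get('restaurant.address.city').then((response) => {
  console.log(response.json.restaurant.address.city); // => 'London'
});
```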

However, while experimenting and learning how to properly use its query model, we noticed that it was quite hard to “visualise” the expected result from the API calls without using, for instance, our beloved in-browser Developer Tools console.

falcor-postman

The Falcor project has released many additional packages to support developers adopting this technology (e.g. falcor-express), but we noticed a lack of GUI tools in this ecosystem.

We also noticed that GraphQL, another project that shares the same core concepts as Falcor, does have tools with visual interfaces (e.g. express-graphql and GraphiQL).

So, as a spin-off of our big-mars project, Nick and I decided to build a tool with a simple, intuitive GUI for exercising a Falcor endpoint and validating our queries, and we named it falcor-postman.

[Read More]

OpenComponents - microservices in the front-end world

Many engineers work on opentable.com every day from our offices in Europe, America, and Asia, pushing changes to production multiple times a day. This is usually very hard to achieve; in fact, it took us years to get to this point. I described in a previous article how we dismantled our monolith in favour of a Microsites architecture. Since the publication of that blog post we have been working on something I believe to be quite unique, called OpenComponents.

Another front-end framework?

OpenComponents is a system to facilitate code sharing, reduce dependencies, and easily approach new features and experiments from the back-end to the front-end. To achieve this, it is based on the concept of using services as interfaces - enabling pages to render partial content that is located, executed and deployed independently.

OpenComponents is not another SPA JS framework; it is a set of conventions, patterns and tools to develop and quickly deploy fragments of front-end. In this perspective, it plays nicely with any existing front-end or back-end architecture and framework. Its purpose is to serve as a delivery mechanism for a more modularised end result in the front-end.

OC has been in production for more than a year at OpenTable and it is fully open-sourced.

Overview

OpenComponents involves two parts:

  • The consumers are web pages that need fragments of HTML for rendering partial contents. Sometimes they need some content during server-side rendering, sometimes when executing code in the browser.
  • The components are small units of isomorphic code mainly consisting of HTML, JavaScript and CSS. They can optionally contain some logic, allowing a server-side Node.js closure to compose a model that is used to render the view (a minimal sketch of that logic is shown below). When rendered they are pieces of HTML, ready to be injected into any web page.
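As a rough illustration of that optional server-side logic, here is a minimal sketch following the conventional shape of a component's server.js; the parameters and model fields are hypothetical, not taken from a real component:

```js
// server.js of a hypothetical component - composes the model handed to the view.
// The (context, callback) shape follows the OpenComponents convention;
// the parameter and model fields below are purely illustrative.
module.exports.data = function (context, callback) {
  const name = context.params.name || 'guest';

  // First argument is an error (or null), second is the model used to render the view.
  callback(null, {
    greeting: 'Hello ' + name,
    staticPath: context.staticPath // base URL of the component's published static assets
  });
};
```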

The framework consists of three parts:

  • The cli allows developers to create, develop, test, and publish components.
  • The library is where the components are stored after the publishing. When components depend on static resources (such as images, CSS files, etc.) these are stored, during packaging and publishing, in a publicly-exposed part of the library that serves as a CDN.
  • The registry is a REST API that is used to consume components. It is the entity that handles the traffic between the library and the consumers.
[Read More]

Testing React Components

At OpenTable it’s becoming an increasingly popular trend to use React. One of the reasons for this is its ability to render on the server whilst still giving us the client-side flexibility that we all crave!

We all know that to have stable, reliable software you need well-written tests. Facebook knows this and provides the handy Test Utilities library to make our lives easier.

Cool — I hear you all say! But what is the best approach to testing React components?

Well, unfortunately this is something that is not very well documented and, if not approached in the correct way, can lead to brittle tests.

Therefore I have written this blog post to discuss the different approaches we have available to us.

All code used in this post is available on my GitHub.

The Basics

To make our lives a lot easier when writing tests, it’s best to use a couple of basic tools. Below is the absolute minimum required to start testing React components.

  • Mocha - This is a testing framework that runs in the browser or Node.js (others are available).
  • ReactTestUtils - This is the basic set of test utilities that Facebook provides for testing React components (a minimal example follows below).
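To give a flavour of these tools together, here is a minimal sketch (assuming a jsdom environment and JSX compilation; the Greeting component is hypothetical):

```js
// A minimal Mocha + ReactTestUtils sketch; the Greeting component is hypothetical.
const assert = require('assert');
const React = require('react');
const TestUtils = require('react-addons-test-utils');

const Greeting = React.createClass({
  render() {
    return <h1 className="greeting">Hello {this.props.name}</h1>;
  }
});

describe('Greeting', () => {
  it('renders the supplied name', () => {
    const component = TestUtils.renderIntoDocument(<Greeting name="OpenTable" />);
    const heading = TestUtils.findRenderedDOMComponentWithClass(component, 'greeting');
    assert.equal(heading.textContent, 'Hello OpenTable');
  });
});
```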

The Scenario

[Read More]

Puppet-Community

Puppet is an important tool to us at OpenTable; we couldn’t operate as efficiently without it. But Puppet is more than a tool or a vendor: it is a community of people trying to help each other operate increasingly complex and sophisticated infrastructures.

The Puppet community, and the open source efforts that drive it, have always been important to us, which is why we want to take our efforts a step further and introduce you to the “Puppet-community” project.

What is Puppet-community

Puppet-community is a GitHub organisation of like-minded individuals from across the wider Puppet ecosystem and from a diverse set of companies. Its principal aims are to allow the community to synchronise its efforts and to provide a GitHub organisation and Puppet Forge namespace not affiliated with any company.

Its wider aims are to provide a place for module and tool authors to share their code and the burden of maintaining it.

I would like to say that this was our idea, as it’s an excellent one, but actually all credit goes to its founders: Igor Galić, Daniele Sluijters and Spencer Krum.

Why communities matter

So why all the fuss about this? Why does it even matter where your code lives?

Well, these are some of the questions that I asked myself when I first heard about this project at PuppetConf 2014. The answer is that it really does matter, and it’s a pattern that is developing elsewhere (see: packer-community, terraform-community-modules, cloudfoundry-community) to deal with the problems you’ll face with a large amount of open source code.

Stepping back slightly, if you look at open source then there are three types: product-based (think open-core), corporate/individual sponsored, and community-driven.

[Read More]