

Kelda enables anyone to deploy complex applications to the cloud by encoding expertise about how to run those applications in JavaScript blueprints.

This site describes how to use Kelda. It describes how to install Kelda to run a blueprint, and also how to write your own blueprint for a new application.

We hope you will find this documentation to be a helpful guide to using Kelda. If you run into any issues using Kelda, don’t hesitate to contact us. If you notice issues in the documentation, please submit a Github issue, or feel free to fix it and submit a pull request!

Kelda is currently in beta.


Quick Start

This Quick Start will cover how to install Kelda and deploy a simple Node.js and MongoDB application to the cloud (e.g. AWS or Google) using Kelda.

Installing Kelda

  1. Install Node.js. Download and install Node.js version 6 or higher using the installer from the Node.js download page.
  2. Install Kelda. Use Node.js’s package manager, npm, to install Kelda:

    $ npm install -g @kelda/install

    Or, as root:

    $ sudo npm install -g @kelda/install --unsafe-perm
  3. Check that Kelda was installed. Run the kelda command and verify that it prints usage information about the command.

Configuring a Cloud Provider

This section explains how to get credentials for your cloud provider. Kelda will need these to launch machines on your account. Before proceeding:

  1. Create an account. Make sure you have an account with one of Kelda’s supported cloud providers (Google Cloud, Amazon EC2, or DigitalOcean).

  2. Get credentials. Locate the credentials for your cloud provider account.

For Amazon EC2, create an account with Amazon Web Services, find your “Access Keys” on the Security Credentials page in the AWS Management Console, and “Create New Access Key” (or use an existing key, if you already have one).

Alternatively, follow the instructions for Google Cloud or DigitalOcean, but come back to this tutorial before running kelda init. In the next step, you will run kelda init with some specific inputs.

Creating an Infrastructure

  1. Specify an infrastructure. Kelda needs to know what infrastructure (e.g. which cloud provider and VMs) to launch the application on. The easiest way to specify this is by running kelda init:

    $ kelda init

Note! When asked for credentials, use the provider credentials from the previous section.

Getting the Blueprint

  1. Download Kelda’s Node.js and MongoDB blueprint using git:

    $ git clone
  2. Install the JavaScript dependencies of the blueprint using npm, the Node.js package manager:

    $ cd nodejs
    $ npm install

Running the Application

  1. Start the daemon. To run a blueprint, the Kelda daemon must be running. If no daemon is running, open a new terminal window and start one:

    $ kelda daemon

    The daemon is a long-running process that periodically prints some log messages. Leave this running.

  2. Start the application. In another terminal window, navigate to the nodejs directory and run the blueprint:

    $ kelda run ./nodeExample.js
  3. Check the status. The VMs and application containers are now booting. Use Kelda’s show command to see how things are progressing. The output looks similar to this:

    $ kelda show
    MACHINE         ROLE      PROVIDER    REGION       SIZE         PUBLIC IP    STATUS
    i-0e8b292380    Master    Amazon      us-west-2    m3.medium                 booting
    i-0fee34512f    Worker    Amazon      us-west-2    m3.medium                 booting

    kelda show might temporarily show an error message starting with “unable to query connections: rpc error”; this error is benign and can occur while the machines are booting.

  4. Wait a few minutes until the machines’ STATUS are connected, and both containers’ STATUS are running:

    $ kelda show
    MACHINE         ROLE      PROVIDER    REGION       SIZE        PUBLIC IP         STATUS
    i-0e8b292380    Master    Amazon      us-west-2    m3.medium    connected
    i-0fee34512f    Worker    Amazon      us-west-2    m3.medium    connected
    CONTAINER       MACHINE         COMMAND                    HOSTNAME     STATUS     CREATED               PUBLIC IP
    703ed73b87ee    i-0fee34512f    keldaio/mongo              mongo        running    About a minute ago
    fa33354be048    i-0fee34512f    node-app:node-todo.git     node-app2    running    27 seconds ago

    Don’t worry if the STATUS is empty for a few minutes – this is normal while things are starting up.

Accessing the Web App

To access the website, copy the PUBLIC IP address from the node-app container row of the kelda show output and paste it into your browser. You should see the todo app.

Debugging Applications with Kelda

Note that the commands in this section (and all Kelda commands that take IDs) don’t need the full ID; a unique prefix of the ID is enough.
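The prefix-matching rule can be illustrated with a short sketch (this shows the behavior, not Kelda's actual implementation):

```javascript
// Resolve a (possibly partial) ID against a list of known IDs, the way
// Kelda commands do. A prefix is valid only if it matches exactly one ID.
function resolveId(prefix, ids) {
  const matches = ids.filter(id => id.startsWith(prefix));
  if (matches.length === 0) throw new Error(`no ID with prefix "${prefix}"`);
  if (matches.length > 1) throw new Error(`ambiguous prefix "${prefix}"`);
  return matches[0];
}

// With the containers from the earlier `kelda show` output:
const ids = ['703ed73b87ee', 'fa33354be048'];
console.log(resolveId('fa33', ids)); // fa33354be048
```

An ambiguous prefix (one that matches several IDs) is rejected rather than silently picking one.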

  1. Check the logs. To get the logs of the Node.js app, find the container’s ID in the kelda show output, and pass it to the kelda logs command:

    $ kelda logs fa33

    Verify that as you interact with the todo app, the logs show the corresponding GET, POST, and DELETE requests. This isn’t thrilling information, but the logs will come in handy if you ever encounter any errors.

  2. SSH in to the container. To SSH in to the Node.js container, execute kelda ssh with the container ID:

    $ kelda ssh fa33

    Try to run ls to see the application source files.

  3. Log out. When you’re done poking around, log out with the exit command.

Stopping the Application

  1. Stop the VMs. When you’re done playing around, make sure to stop the machines you’ve started! Otherwise, your cloud provider might charge you for the VMs that are still running. To stop all VMs and containers:

    $ kelda stop
  2. Check that the machines are stopped. Wait a few seconds until the VMs no longer show up in kelda show.

  3. Stop the daemon. Go to the terminal window that’s running kelda daemon and stop the process with Ctrl+C, or just close the window.

Next steps

How Kelda Works

This section describes what happens when you run an application using Kelda, and explains the main components in a deployment. As an example, we will consider the deployment in this illustration:

Kelda Diagram


The key idea behind Kelda is a blueprint – in this case my_app.js. A blueprint describes every aspect of running a particular application in the cloud, and is written in JavaScript. Kelda blueprints exist for many common applications. Using Kelda, you can run one of those applications by executing just two commands on your laptop: kelda daemon and kelda run.


The first command, kelda daemon, starts a long-running process – “the daemon”. The daemon is responsible for booting machines in the cloud, configuring the machines, managing them, and rebooting them if anything goes wrong. This means that if a machine dies while no daemon is running for its deployment (e.g. because you closed the laptop that’s running the daemon), the machine will not be restarted. However, as soon as the daemon comes back online, it will reboot the missing VM. The kelda daemon command starts the daemon, but doesn’t launch any machines until you kelda run a blueprint. Likewise, stopping the daemon (e.g. when closing your laptop) doesn’t stop the running machines – only kelda stop will cause the daemon to terminate VMs.


To launch an application, call kelda run with a JavaScript blueprint – in this example my_app.js. The run command passes the parsed blueprint to the daemon, and the daemon sets up the infrastructure described in the blueprint.


Kelda runs applications using Docker. As shown in the graphic, my_app.js described an application with three Docker containers. You can think of a container as being like a process: as a coarse rule-of-thumb, anything that you’d launch as its own process should have its own container with Kelda. While containers are lightweight (like processes), they each have their own environment (including their own filesystem and their own software installed) and are isolated from other containers running on the same machine (unlike processes). If you’ve never used containers before, it may be helpful to review the Docker getting started guide.


In terms of machines, my_app.js described a cluster with one master and two workers.

Workers: The worker machines host the application containers. In this case, Kelda ran two containers on one worker machine and one container on the other.

Masters: The master is responsible for managing the worker machines and the application containers running on the workers. No application containers run on the master. In the above explanation of the daemon, we described a scenario where a worker machine disappears. In that scenario, the master machine would reboot the application containers from the failed worker on one of the healthy worker machines. It is possible to have multiple master machines to safeguard against a master machine failing.

Blueprint Writers Guide

The previous section described how to use Kelda to run an application that already had a blueprint. This guide describes how to write the Kelda blueprint for a new application, using Lobsters as an example. Lobsters is an open source project that implements a reddit-like web page, where users can post content and vote up or down other content.

For more blueprint examples, check out the blueprints in the blueprint library.

Decomposing the application into containers

The first question you should ask yourself is “how should this application be decomposed into different containers?” Be sure you’ve read the How Kelda Works section, which gives a brief overview of containers. If you’ve already figured out the containers that are needed for your application (e.g., if you’re already using Docker), you can skip the rest of this section.

Specifying the containers for your application

As an example of how to specify the containers for your application, let’s continue with Lobsters. Lobsters requires mysql to run, so we’ll use one container for mysql and a second container for the Lobsters program to run in.

For each container that your application uses, you’ll need a container image. The container image describes the filesystem that will be on the container when it’s started. For mysql, for example, the container image includes all of the dependencies that mysql needs to run, so that after starting a new mysql container, you can simply launch mysql (no more installation is needed). Most popular applications already have container images that you can use, and a quick Google search yields an existing mysql image that we can use for Lobsters.

For the container that runs Lobsters, we’ll need to create a new image by writing our own Dockerfile, which describes how the Docker image should be created. In this case, the Dockerfile is relatively simple:

# This container is based on the Ruby image, which means that it
# automatically inherits the Ruby installation defined in that image.
FROM ruby:2.3.1

# Install NodeJS, which is required by Lobsters.
RUN apt-get update && apt-get install nodejs -y

# Download and build the Lobsters code.
RUN git clone git://
WORKDIR lobsters
RUN bundle

# Add a file to the container that contains startup code for Lobsters.
# This COPY command assumes that the startup script is in the same
# directory as this Dockerfile.
COPY /lobsters/

# When the container starts, it should run the Lobsters server using the
# startup script that we copied above. This is a common "gotcha" for
# people new to containers: unlike VMs, each container is based on a
# process (in this case rails, which is started at the end of the startup
# script) and will be shut down when that process stops.
ENTRYPOINT ["/bin/sh", "/lobsters/"]

In this case, we wrote an additional bash script to help start the application. The important thing about that script is that it does some setup that needs to happen after the container has started, so it can’t be done in the Dockerfile. For example, the first piece of setup it does is to initialize the SQL database. Because that requires a connection to mysql, it needs to be done after the container is launched (and configured to access the mysql container, as discussed below). After initializing the database, the script launches the rails server, which is the main process run by the container.

To create a docker image using this file, run docker build in the directory with the Dockerfile (don’t forget the period at the end!):

$ docker build -t kayousterhout/lobsters .

In this case, we called the resulting image kayousterhout/lobsters, because we’ll push it to the Dockerhub for kayousterhout; you’ll want to use your own Dockerhub id to name your images.

This will take a few minutes and will create a new image with the name kayousterhout/lobsters. If you want to play around with the new container, you can use Docker to launch it locally:

$ docker run --name lobsters-test kayousterhout/lobsters

To use a shell on your new container to poke around (while the rails server is running), use:

$ docker exec -it lobsters-test /bin/bash

This can be helpful for making sure everything was installed and is running as expected (although in this case, Lobsters won’t work when you start it with Docker, because it’s not yet connected to a mysql container).

Deploying the containers with Kelda

So far we have a mysql container image (we’re using an existing one hosted on Dockerhub) and a Lobsters container image that we just made. You should similarly have the container images ready for your application. Up until now, we haven’t done anything Kelda-specific: if you were using another container management service like Kubernetes, you would have had to create the container images like we did above. These containers aren’t yet configured to communicate with each other, which is what we’ll set up with Kelda. We’ll also use Kelda to describe the machines to launch for the containers to run on.

To run the containers for your application with Kelda, you’ll need to write a Kelda blueprint. Kelda blueprints are written in JavaScript, and the Kelda JavaScript API is described here. In this guide, we’ll walk through how to write a Kelda blueprint for Lobsters, but the Kelda API has more functionality than we can describe here. See the API guide for more usage information.

Writing the Kelda blueprint for MySQL

First, let’s write the Kelda blueprint to get the MySQL container up and running. We need to create a container based on the mysql image:

const sql = new Container({
  name: 'sql',
  image: 'mysql:5.6.32',
});

Here, the name argument becomes the hostname for the container, and image is the name of the image to boot. You can also pass in a Dockerfile to use to create a new image, as described in the JavaScript API documentation.

Next, the SQL container requires some environment variables to be set. In particular, we need to specify a root password for SQL. We can set the root password to foo by setting the container’s MYSQL_ROOT_PASSWORD environment variable:

sql.env.MYSQL_ROOT_PASSWORD = 'foo';

Writing the Kelda blueprint for

Next, we can similarly initialize the lobsters container. The lobsters container is a little trickier to initialize because it requires an environment variable (DATABASE_URL) to be set to the URL of the SQL container. Kelda containers are each assigned unique hostnames when they’re initialized, so we can create the lobsters container and initialize the URL as follows:

const lobsters = new Container({
  name: 'lobsters',
  image: 'kayousterhout/lobsters',
});
const sqlDatabaseUrl = 'mysql2://root:foo@' + sql.getHostname() + ':3306/lobsters';
lobsters.env.DATABASE_URL = sqlDatabaseUrl;
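The connection string follows the standard mysql2 URL format. A small standalone helper (hypothetical, not part of the Kelda API) makes the pieces explicit:

```javascript
// Build a mysql2 connection URL from its parts. In a real blueprint the
// host would come from sql.getHostname(); 'sql' is a placeholder here.
function mysqlUrl({ user, password, host, port, database }) {
  return `mysql2://${user}:${password}@${host}:${port}/${database}`;
}

console.log(mysqlUrl({
  user: 'root',
  password: 'foo',
  host: 'sql',
  port: 3306,
  database: 'lobsters',
})); // mysql2://root:foo@sql:3306/lobsters
```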

Allowing network connections

At this point, we’ve written code to create a mysql container and a lobsters container. With Kelda, by default, all network connections are blocked. To allow lobsters to talk to mysql, we need to explicitly open the mysql port (3306):

allowTraffic(lobsters, sql, 3306);

Because lobsters is a web application, the relevant port should also be open to the public internet on the lobsters container. Kelda has a publicInternet variable that can be used to connect containers to any IP address:

allowTraffic(publicInternet, lobsters, 3000);

If you’re having trouble determining which ports your application needs, take a look at How to Debug Network Connectivity Problems.

Deploying the application on infrastructure

Finally, we’ll use Kelda to launch some machines, and then start our containers on those machines. First, we’ll define a “base machine”: since we’ll deploy a few machines, it’s useful to define one machine description that all of the machines in our deployment will be based on. In this case, the base machine will be an Amazon instance:

const baseMachine = new Machine({ provider: 'Amazon' });

Now, using that base machine, we can deploy a master and a worker machine using Kelda’s Infrastructure constructor. All infrastructures must have at least one master, which keeps track of state for all of the machines in the cluster, and at least one worker. The Infrastructure constructor accepts the master(s) and worker(s) as parameters:

const infrastructure = new Infrastructure({
  masters: baseMachine,
  workers: baseMachine,
});

We’ve now defined an infrastructure with a master and worker machine. Let’s finally deploy the two containers on that infrastructure:

sql.deploy(infrastructure);
lobsters.deploy(infrastructure);

We’re done! Running the blueprint is now trivial. With a kelda daemon running, run your new blueprint (which, in this case, is called lobsters.js):

$ kelda run ./lobsters.js

Now users of Lobsters, for example, can deploy it without needing to worry about the details of how the different containers are connected with each other. All they need to do is to kelda run the existing blueprint.

Kelda.js API Documentation

This section documents use of the Kelda JavaScript library, which is used to write blueprints.



(constant) publicInternet

An object that represents the public internet. It can be passed to allowTraffic to allow traffic to or from any IP address.

allowTraffic(src, dst, portRange) → {void}

Allows traffic from a Connectable or set of Connectables to another Connectable or set of Connectables. A LoadBalancer cannot make outbound connections, so it may not be included in `src`. Connectables have a default-deny firewall, meaning that unless traffic is explicitly allowed to or from a Connectable (by calling this function), it is blocked.

Name Type Description
src Connectable | Array.<Connectable> the Connectables that can send outgoing traffic to those listed in `dst`. LoadBalancers cannot make outgoing connections, so they may not be included in `src`.
dst Connectable | Array.<Connectable> the Connectables that can accept inbound traffic from those listed in `src`.
portRange int | Port | PortRange The ports on which Connectables can send traffic.
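Conceptually, an int, a Port, and a PortRange all normalize to an inclusive [min, max] pair. A sketch of that normalization (an illustration only, with plain objects standing in for Kelda's Port and Range classes):

```javascript
// Normalize allowTraffic's portRange argument to [min, max], inclusive.
// A plain number means a single port; { p } mimics Port; { min, max } mimics Range.
function normalizePorts(portRange) {
  if (typeof portRange === 'number') return [portRange, portRange];
  if ('p' in portRange) return [portRange.p, portRange.p];
  return [portRange.min, portRange.max];
}

console.log(normalizePorts(3306));                     // [ 3306, 3306 ]
console.log(normalizePorts({ min: 8000, max: 9000 })); // [ 8000, 9000 ]
```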

baseInfrastructure() → {Infrastructure}

Returns a base infrastructure. The base infrastructure could be created with `kelda init`.

Returns: The infrastructure object.


Retrieve the base infrastructure, and deploy an nginx container on it.

const infrastructure = baseInfrastructure();
const nginx = new Container({ name: 'web', image: 'nginx' });
nginx.deploy(infrastructure);

getInfrastructure() → {Infrastructure}

Returns: The global infrastructure object.

githubKeys(user) → {string}

Gets the public key associated with a github username.

Name Type Description
user string The GitHub username.

Returns: The SSH key.

validateHostname(hostname) → {void}

validateHostname checks whether the given hostname is a valid hostname. If the hostname is invalid, it throws an error.

Name Type Description
hostname string The hostname to validate.
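As a rough sketch of what such validation involves (assuming RFC 1123-style rules; this is an illustration, not Kelda's source):

```javascript
// Check RFC 1123-style hostnames: dot-separated labels of lowercase
// letters, digits, and interior hyphens, each label 1-63 characters.
function isValidHostname(hostname) {
  if (hostname.length === 0 || hostname.length > 253) return false;
  const label = /^[a-z0-9]([a-z0-9-]*[a-z0-9])?$/;
  return hostname.split('.').every(l => l.length <= 63 && label.test(l));
}

console.log(isValidHostname('node-app2')); // true
console.log(isValidHostname('-bad'));      // false
```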

Interface: Connectable

Interface for classes that can allow inbound traffic.


getConnectableName() → {string}

Returns: a string representation for use in connections

Class: Container

new Container(args, volumeMounts)

Creates a new Container, which represents a container to be deployed. If a Container uses a custom image (e.g., the image is created by reading in a local Dockerfile), Kelda tracks the Dockerfile that was used to create that image. If the Dockerfile is changed and the blueprint is re-run, the image will be re-built and all containers that use the image will be re-started with the new image.

Name Type Description
args Object Required and optional arguments.
Name Type Description
name string The prefix of the container's network hostname. If multiple containers use the same name within the same deployment, their hostnames will become name, name2, name3, etc.
image Image | string An Image that the container should boot, or a string with the name of a Docker image (that exists in Docker Hub) that the container should boot.
command Array.<string> The command to use when starting the container.
privileged bool Whether the container should be run in privileged mode. Privileged mode grants the container extended privileges, such as accessing devices on the host machine. It can be thought of as the equivalent of granting root access to a user. The majority of containers do not require this flag, so make sure it is necessary before enabling it.
env Object.<string, (string|Secret)> Environment variables to set in the booted container. The key is the name of the environment variable.
filepathToContent Object.<string, (string|Secret)> Text files to be installed on the container before it starts. The key is the path on the container where the text file should be installed, and the value is the contents of the text file. If the file content specified by this argument changes and the blueprint is re-run, Kelda will re-start the container using the new files. Files are installed with permissions 0644 and parent directories are automatically created.
volumeMounts Array.<VolumeMount> A list of volumes to mount within the container. Referenced volumes are automatically created by Kelda.
volumeMounts Array.<VolumeMount> A list of volumes to mount within the container.

Properties (we only document properties users should care about):
Name Type Description
image Image The image of the container.
command Array.<string> The command to run when the container starts.
env Object.<string, (string|Secret)> An object containing the environment variables to set in the container. The key is the name of the variable and the value is the variable's value.
filepathToContent Object.<string, (string|Secret)> An object of the files that should be created in the container. The key is the path where the file should be created and the value is the desired content of the file. For more details, see the description of the `filepathToContent` constructor argument.
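The hostname-suffixing behavior described for the name argument can be sketched as follows (an illustration of the documented behavior, not Kelda's implementation):

```javascript
// Assign hostnames to containers that may share a name: the first keeps
// the bare name, later ones get name2, name3, and so on.
function assignHostnames(names) {
  const seen = {};
  return names.map((name) => {
    seen[name] = (seen[name] || 0) + 1;
    return seen[name] === 1 ? name : `${name}${seen[name]}`;
  });
}

console.log(assignHostnames(['app', 'app', 'app'])); // [ 'app', 'app2', 'app3' ]
```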

Create a Container named `my-app` that uses the nginx image on Docker Hub, and that includes a file located at /etc/myconf with contents foo.

const container = new Container({
  name: 'my-app',
  image: 'nginx',
  filepathToContent: { '/etc/myconf': 'foo' },
});

Create a Container that has one regular, and one secret environment variable value. The value of `mySecret` must be defined by running `kelda secret mySecret SECRET_VALUE`. If the blueprint with the container is launched before `mySecret` has been added, Kelda will wait to launch the container until the secret's value has been defined.

const container = new Container({
  name: 'my-app',
  image: 'nginx',
  env: {
    key1: 'a plaintext value',
    key2: new Secret('mySecret'),
  },
});

Create a Container that mounts the Docker run socket from the host.

const volume = new Volume({
  name: 'docker',
  type: 'hostPath',
  path: '/var/run/docker.sock',
});
const container = new Container({
  name: 'my-app',
  image: 'ubuntu',
  volumeMounts: [
    new VolumeMount({
      volume,
      mountPath: volume.path,
    }),
  ],
});


clone() → {Container}

Returns: A new Container with the same attributes.

deploy(infrastructure) → {void}

Adds this Container to be deployed as part of the given infrastructure.

Name Type Description
infrastructure Infrastructure The infrastructure that this should be added to.

getHostname() → {string}

Returns: The container's hostname.

placeOn(machineAttrs) → {void}

Sets placement requirements for the Machine that the Container is placed on.

Name Type Description
machineAttrs Object.<string, string> Requirements for the machine the Container gets placed on.
Name Type Description
provider string Provider that the Container should be placed in.
size string Size of the machine that the Container should be placed on (e.g., m2.4xlarge).
region string Region that the Container should be placed in.
floatingIp string Floating IP address that must be assigned to the machine that the Container gets placed on.

Class: Image

new Image(args)

Creates a Docker Image. If two images with the same name but different Dockerfiles are referenced, an error will be thrown.

Name Type Description
args Object All required and optional arguments.
Name Type Description
name string The name to use for the Docker image, or if no Dockerfile is specified, the repository to get the image from. The repository can be a full URL or the name of an image in Docker Hub (e.g., nginx or nginx:1.13.3).
dockerfile string The string contents of the Dockerfile that constructs the Image.

Create an image that uses the nginx image stored on Docker Hub.

const image = new Image({
  name: 'nginx',
});

Create an image that uses the etcd image stored at quay.io/coreos/etcd.

const image = new Image({
  name: 'quay.io/coreos/etcd',
});

Create an Image named my-image-name that's built on top of the nginx image, and that additionally includes a Git repository cloned into /web_root.

const dockerfileContent = `FROM nginx
RUN cd /web_root && git clone`;

const image = new Image({
  name: 'my-image-name',
  dockerfile: dockerfileContent,
});

Create an image named my-image-name that's built using a Dockerfile saved locally at 'Dockerfile'.

const fs = require('fs');
const image = new Image({
  name: 'my-image-name',
  dockerfile: fs.readFileSync('./Dockerfile', { encoding: 'utf8' }),
});


clone() → {Image}

Returns: A new Image with all of the same attributes as this Image.

Class: Infrastructure

new Infrastructure(args)

Creates a new Infrastructure with the given options. An Infrastructure represents a collection of virtual machines in the cloud.

Name Type Description
args Object All required and optional arguments.
Name Type Default Description
masters Machine | Array.<Machine> One or more machines that should be launched to use as the masters.
workers Machine | Array.<Machine> One or more machines that should be launched to use as the workers. Worker machines are responsible for running application containers.
namespace string kelda The name of the namespace that the blueprint should operate in.
adminACL Array.<string> A list of IP addresses that are allowed to access the deployed machines. The IP of the machine where the daemon is running is always allowed to access the machines. If you would like to allow another machine to access the deployed machines (e.g., to SSH into a machine), add its IP address here. These IP addresses must be in CIDR notation; e.g., to allow access from a single address, append /32 to it. To allow access from all IP addresses, set adminACL to [''].

Properties (we only document properties users should care about):
Name Type Description
containers Array.<Container> All containers that have been registered to run on this infrastructure.
loadBalancers Array.<LoadBalancer> All load balancers that have been registered to run on this infrastructure.
masters Array.<Machine> The master machines of this infrastructure.
workers Array.<Machine> The worker machines of this infrastructure.
adminACL Array.<string> A list of IP addresses that are allowed to access the deployed machines. See the description of the adminACL constructor argument for more details.
namespace string The namespace the blueprint should run in.

Create an infrastructure with one master and two workers - all in Amazon EC2.

const machine = new Machine({ provider: 'Amazon' });
const infrastructure = new Infrastructure({
  masters: machine,
  workers: machine.replicate(2),
});

Create an infrastructure with a master and a worker in Google, the custom namespace 'prod', and an adminACL that allows one additional machine (here given the example IP address to access the machines in this infrastructure.

const machine = new Machine({ provider: 'Google' });
const infrastructure = new Infrastructure({
  masters: machine,
  workers: machine,
  namespace: 'prod',
  adminACL: [''],
});

Class: LoadBalancer

new LoadBalancer(args)

Creates a new LoadBalancer object which represents a collection of containers behind a load balancer.

Name Type Description
args Object All required arguments.
Name Type Description
name string The name of the load balancer.
containers Array.<Container> The containers behind the load balancer.

Properties (we only document properties users should care about):
Name Type Description
containers Array.<Container> The containers behind the load balancer.


deploy(infrastructure) → {void}

Adds this load balancer to the given infrastructure.

Name Type Description
infrastructure Infrastructure The Infrastructure that this should be added to.

getHostname() → {string}

Returns: The Kelda hostname that represents the entire load balancer.

Class: Machine

new Machine(opts)

Creates a new Machine object, which represents a machine to be deployed. The constructor will set the Machine's size, region, cpu, and ram properties based on the cloud virtual machine that will be launched for this Machine.

Name Type Description
opts Object.<string, string> Arguments that modify the machine. Only 'provider' is required; the remaining options are optional.
Name Type Default Description
provider string The cloud provider that the machine should be launched in. Accepted values are Amazon, DigitalOcean, Google, and Vagrant.
region string The region the machine will run in (provider-specific; e.g., for Amazon, this could be 'us-west-2').
size string The instance type (provider-specific).
cpu Range | int The desired number of CPUs. The actual number of CPUs on the booted machine will be stored in the `cpu` property of this Machine instance.
ram Range | int The desired amount of RAM in GiB. The actual amount of RAM on the booted machine will be set in the `ram` property of this Machine instance.
diskSize int The desired amount of disk space in GB.
floatingIp string A reserved IP to associate with the machine.
sshKeys Array.<string> Public keys to allow users to log in to the machine and containers running on it.
preemptible boolean false Whether the machine should be preemptible. Only supported on the Amazon provider.

Create a Machine on Amazon. This will use the default size and region for Amazon.

const baseMachine = new Machine({ provider: 'Amazon' });

Create a machine with the 'n1-standard-1' size in GCE's 'us-east1-b' region.

const googleMachine = new Machine({
  provider: 'Google',
  region: 'us-east1-b',
  size: 'n1-standard-1',
});

Create a DigitalOcean droplet with the '512mb' size in the 'sfo1' zone.

const doWorker = new Machine({
  provider: 'DigitalOcean',
  region: 'sfo1',
  size: '512mb',
});


clone() → {Machine}

Returns: A new machine with the same attributes.

replicate(n) → {Array.<Machine>}

Creates n new machines with the same attributes.

Name Type Description
n number The number of new machines to create.

Returns: A list of the new machines. This machine will not be in the returned list.
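replicate is effectively clone called n times; its documented semantics can be sketched as follows (plain objects stand in for Machine instances; this is not Kelda's source):

```javascript
// Create n independent copies of a machine description. Note that the
// original is not included in the returned list.
function replicate(machine, n) {
  const copies = [];
  for (let i = 0; i < n; i += 1) {
    copies.push(Object.assign({}, machine)); // stand-in for machine.clone()
  }
  return copies;
}

const base = { provider: 'Amazon', size: 'm3.medium' };
const workers = replicate(base, 2);
console.log(workers.length);         // 2
console.log(workers.includes(base)); // false
```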

Class: Port

new Port(p)

Creates a Port object.

Name Type Description
p integer The port number.

Class: Range

new Range(min, max)

Creates a Range object.

Name Type Description
min integer The minimum of the range (inclusive).
max integer The maximum of the range (inclusive).

Class: Secret

new Secret(name)

Secret represents a secret to extract from the Vault secret store. The value is stored encrypted in a Vault instance running in the cluster. Only the value is considered secret -- names should not contain private information as they are expected to be saved in insecure locations such as user blueprints. A secret association is created by running the Kelda `secret` command. For example, running `kelda secret foo bar` creates a secret named `foo` that can be referenced using this type.

Name Type Description
name string The name of the Secret.

Class: Volume

new Volume(args)

Creates a new Volume. Volumes allow users to define storage for containers. They are useful both for persisting storage outside the lifecycle of a container, and for sharing files between multiple containers.

Name Type Description
args Object All required and optional arguments.
Name Type Description
name string A human-friendly name for the Volume. The identifier must be unique among all declared volumes.
type string The type of volume. Currently, only "hostPath" is supported.
path string Required only if the volume type is "hostPath". The path on the host that should be made available to the mounting container.

Properties (only those the user should care about are listed):
Name Type Attributes Description
name string A human-friendly name for the Volume.
type string The type of volume.
path string <optional>
Will be set only if the volume type is "hostPath".

Class: VolumeMount

new VolumeMount(args)

VolumeMount defines how a Volume should be mounted into a container. VolumeMounts allow multiple containers to share the same Volume. Furthermore, the volume could be mounted in slightly different ways, such as at different paths within the container.

Name Type Description
args Object All required and optional arguments.
Name Type Description
volume Volume A reference to the Volume to be mounted.
mountPath string The path within the container to mount the volume.

Kelda CLI

Kelda’s CLI, kelda, is a handy command line tool for starting, stopping, and managing deployments. Kelda CLI commands have the following format:

$ kelda [OPTIONS] COMMAND
To see the help text for a specific command, run:

$ kelda COMMAND --help


Options

Name, shorthand Default Description
--log-level, -l info Logging level (debug, info, warn, error, fatal, or panic)
--verbose, -v false Turn on debug logging
--log-file  Log output file (will be overwritten)


Commands

Name Description
counters Display internal counters tracked for debugging purposes. Most users will not need this command.
daemon Start the kelda daemon, which listens for kelda API requests.
debug-logs Fetch logs for a set of machines or containers.
init Create an infrastructure that can be accessed in blueprints using baseInfrastructure().
inspect Visualize a blueprint.
logs Fetch the logs of a container or machine minion.
minion Run the kelda minion.
show Display the status of kelda-managed machines and containers.
run Compile a blueprint, and deploy the system it describes.
secret Securely add a named secret to the cluster.
ssh SSH into or execute a command in a machine or container.
stop Stop a deployment.
version Show the Kelda version information.


kelda init

The kelda init command is a simple way to create reusable infrastructure. The command prompts the user for information about their desired infrastructure and then creates an infrastructure based on the answers. The infrastructure can be used in blueprints by calling baseInfrastructure().

To edit the infrastructure after creation, either rerun kelda init using the same name, or directly edit the infrastructure blueprint stored in ~/.kelda/infra/default.js.

Provider Keys: In order to launch virtual machines from your account, Kelda needs access to your provider credentials. The credentials are used when Kelda makes API calls to the provider. Kelda will not store your credentials, but simply needs access to a credentials file on your machine. If there is no existing credentials file, kelda init helps create one with the correct format. See Cloud Provider Configuration for instructions on how to get your cloud provider credentials.

How To

How to Give Your Application a Custom Domain Name

  1. Buy your domain name from a registrar like Namecheap or GoDaddy.
  2. Get a floating IP (also called Elastic IP or Static External IP) through your cloud provider’s management console or command line tool. When you reserve a floating IP, it is guaranteed to be yours until you explicitly release it.
  3. Point the domain to your IP by modifying the A record on your registrar’s website, so the domain points at the floating IP from last step.
  4. Run a blueprint that hosts the website on that floating IP. The next section describes how to write the blueprint.

Blueprint: Hosting a Website on a Floating IP

Assigning a floating IP address to an application just involves two steps:

  1. Deploy a worker machine with the floating IP:

    // The floating IP you registered with the cloud provider -- say, Amazon.
    const floatingIP = '';
    const baseMachine = new Machine({ provider: 'Amazon' });
    // Set the IP on the worker machine.
    const worker = baseMachine.clone();
    worker.floatingIp = floatingIP;
    // Create the infrastructure.
    const infrastructure = new Infrastructure({
      masters: baseMachine,
      workers: worker,
    });
  2. Tell Kelda to place the application on the machine with your floating IP:

    const app = new Container({ name: 'myApp', image: 'myImage' });
    app.placeOn({ floatingIp: floatingIP });
    // Deploy the application.
    app.deploy(infrastructure);

    If your website is hosted on multiple servers, follow the guide for running a replicated, load balanced application, and simply place the loadBalancer on the floating IP.

How to Update Your Application on Kelda

The most robust way to handle updates to your application is to build and push your own tagged Docker images. We recommend always tagging images, and not using the :latest tag.

Say that we want to update an application that uses the me/myWebsite:0.1 Docker image to use me/myWebsite:0.2 instead. We can do that in two simple steps:

  1. In the blueprint, update all references to me/myWebsite:0.1 to use the tag :0.2.
  2. Run kelda run with the updated blueprint to update the containers with the new image.

Kelda will now restart all the relevant containers to use the new tag.
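The update itself is just a string edit to the image reference in the blueprint. As an illustration, a hypothetical helper (not part of Kelda, and only handling simple repo:tag references) makes the transformation explicit:

```javascript
// Hypothetical helper showing the tag swap:
// 'me/myWebsite:0.1' becomes 'me/myWebsite:0.2'.
// Only handles simple repo:tag references, not registries with ports.
function withTag(image, newTag) {
  const idx = image.lastIndexOf(':');
  const repo = idx === -1 ? image : image.slice(0, idx);
  return `${repo}:${newTag}`;
}

withTag('me/myWebsite:0.1', '0.2'); // → 'me/myWebsite:0.2'
```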

Untagged Images and Images Specified in Blueprints

For users who do not use tagged Docker images, there are currently two ways of updating an application. This section explains the two methods and when to use each.

After Changing a Blueprint

If changes are made directly to the blueprint or a file or module that the blueprint depends on, simply kelda run the blueprint again. Kelda will detect the changes and reboot the affected containers.

For example, kelda run will update the application after you change a container’s image reference, environment variables, or command in the blueprint.

Updating an Image

Though we recommend using tagged Docker images, some applications might use untagged images either hosted in a registry like Docker Hub or created with Kelda’s Image constructor in a blueprint. To pull a newer version of a hosted image or rebuild an Image object with Kelda, you need to restart the relevant container:

  1. In the blueprint, remove the code that .deploy()s the container that should be updated.
  2. kelda run this modified blueprint in order to stop the container.
  3. Back in the blueprint, add the .deploy() call back in to the blueprint.
  4. kelda run the blueprint from step 3 to reboot the container with the updated image.

For example, you need to stop and start the container when a newer version of an untagged image has been pushed to the registry, or when the files used to build a Kelda Image have changed.

How to Run a Replicated, Load Balanced Application Behind a Single IP Address

This guide describes how to write a blueprint that will run a replicated, load balanced application behind a single IP address. We will use HAProxy (High Availability Proxy), a popular open source load balancer, to evenly distribute traffic between your application containers.

Before we start writing the blueprint, make sure the application you want to replicate is listening on port 80. E.g., for a Node.js Express application called app, call app.listen(80) in your application code.

A Single Replicated Application

Import Kelda’s HAProxy blueprint in your application blueprint:

const haproxy = require('@kelda/haproxy');

Replicate the application container. E.g., to create 3 containers with the myWebsite image:

const appContainers = [];
for (let i = 0; i < 3; i += 1) {
  appContainers.push(new Container({ name: 'web', image: 'myWebsite' }));
}

Create a load balancer to sit in front of appContainers:

const loadBalancer = haproxy.simpleLoadBalancer(appContainers);

Allow requests from the public internet to the load balancer on port 80 (the default exposedPort).

allowTraffic(publicInternet, loadBalancer, haproxy.exposedPort);

Deploy the application containers and load balancer to your infrastructure:

const infrastructure = baseInfrastructure();
appContainers.forEach(container => container.deploy(infrastructure));
loadBalancer.deploy(infrastructure);

You can find a full example blueprint here.

Accessing the application

The application will be accessible on the PUBLIC IP of the haproxy container.

Multiple Replicated Applications

Say you want to run two different replicated websites with different domain names. You could call simpleLoadBalancer for each of them, but that would create two separate load balancers. This section explains how to put multiple applications behind a single load balancer – that is, behind a single IP address.

The steps are basically identical to those for running a single replicated application. There are just two important differences:

  1. Register a domain name for each replicated application (e.g. and before deploying them. The load balancer will need the domain names to forward incoming requests to the right application. For more details, see the guide on custom domain names.
  2. Create the load balancer using the withURLrouting() function instead of simpleLoadBalancer(). As an example, the load balancer below will forward requests for to one of the containers in appleContainers, and requests for will go to one of the containers in orangeContainers.
const loadBalancer = haproxy.withURLrouting({
  '': appleContainers,
  '': orangeContainers,
});

You can find a full example blueprint here.

Accessing the applications

The applications are now only available via their domain names. Once the domains are registered, the applications can be accessed by domain name (e.g. in the browser. If a domain name isn’t yet registered, you can test that the redirects work by cURLing the load balancer and checking that you get the right response:

$ curl -H "Host:" HAPROXY_PUBLIC_IP

How to Run the Daemon

We recommend reading about the daemon before reading this section.

The default way to run the daemon is to run it on a local machine like your laptop. However, when running a long term deployment or if multiple people need access to the deployment, we recommend running the daemon in the cloud.

A Shared Daemon on a Separate VM

We recommend running a single, shared daemon on a small VM in the cloud, and executing Kelda commands from there.

Setting up the Remote Daemon

  1. Create a VM (Ubuntu or Debian) on your preferred cloud provider. You can choose a small instance type to keep costs low.
    • If the provider blocks ports by default, allow ingress TCP traffic on port 22.
  2. SSH into the VM and do the following from there:

    • Install Node.js.
    • Install Kelda with npm.
    • Provider Credentials. Set up provider credentials.
    • Start the Daemon. The following command starts the daemon in the background, and redirects its logs to the daemon.log file. The nohup command ensures that the daemon keeps running even when you log out of the VM.
    $ nohup kelda daemon > daemon.log 2>&1 &
  3. Run and Manage Applications. All kelda CLI commands (e.g. run, show and stop) can now be run from this machine.

How to Run Applications that Rely on Configuration Secrets

This section walks through an example of running an application that has sensitive information in its configuration. Note that Kelda secrets are currently only useful for configuration. Secrets generated at runtime, such as customer information that needs to be stored in a secure database, are not yet handled.

This section walks through deploying a GitHub bot that requires a GitHub OAuth token in order to push to a private GitHub repository. Specifically, it deploys the keldaio/bot Docker image, and configures its GITHUB_OAUTH_TOKEN environment variable with a Kelda secret. Although this example uses an environment variable, the workflow is exactly the same when installing a secret onto the filesystem.

  1. Create the Container in the blueprint. Note the secret name “githubToken”. The name is arbitrary, but will be used in the next steps to interact with the secret.

    const container = new kelda.Container({
      name: 'bot',
      image: 'keldaio/bot',
      env: { GITHUB_OAUTH_TOKEN: new kelda.Secret('githubToken') },
  2. Deploy the blueprint.

    $ kelda run ./<blueprintName>.js
  3. Kelda will not launch a container until all secrets needed by the container have been added to Kelda. Running kelda show after deploying the blueprint should result in the following:

    CONTAINER       MACHINE         COMMAND           HOSTNAME   STATUS                                CREATED    PUBLIC IP
    d044f3880fdc    sir-m5erezkj    keldaio/bot       bot        Waiting for secrets: [githubToken]

    This means that Kelda is waiting to launch the container until the secret called githubToken is set. To set the secret value, use kelda secret:

    $ kelda secret githubToken <tokenValue>

    If the command succeeds, there will be no output, and the exit code will be zero.

    Note that Kelda does not handle the lifecycle of the secret before kelda secret is run. For the GitHub token example, the GitHub token can be copied directly from the GitHub web UI to the kelda secret command. Another approach could be to store the token in a password manager, and paste it into kelda secret when needed.

  4. kelda show should show that the bot container has started. It may take up to a minute for the container to start.

  5. To change the secret value, run kelda secret githubToken <newValue> again, and the container will restart with the new value within a minute.

How to Debug Network Connectivity Problems

One common problem when writing a Kelda blueprint is that the blueprint doesn’t open all of the ports necessary for the application. This typically manifests as an application not starting properly or as the application logging messages about being unable to connect to something. This can be difficult to debug without a deep familiarity of which ports a particular application needs open. One helpful way to debug this is to use the lsof command line tool, which can be used to show which ports applications are trying to communicate on. The instructions below describe how to install lsof and use the output to solve a few different connectivity problems.

Suppose a blueprint includes a container called buggyContainer that is not running properly because the network is not correctly setup:

const buggyContainer = new kelda.Container({ name: 'buggyContainer', ... });

Start by enabling that container to access port 80 on the public internet, so that the container can download the lsof tool, by adding the following code to the blueprint:

kelda.allowTraffic(buggyContainer, kelda.publicInternet, 80);

Re-run the blueprint so that Kelda will update the container’s network access:

$ kelda run <path to blueprint>

Next, login to the container and install the lsof tool.

$ kelda ssh buggyContainer
# apt-get update
# apt-get install lsof

Use lsof to determine which ports the application in buggyContainer is trying to access:

# lsof -i -P -n
java      8 root   34u  IPv4  88572      0t0  TCP> (SYN_SENT)

The useful output is the NAME column, which shows the source network address (in this case, port 56902 on address and the destination address (in this case, port 443 on address The COMMAND column may also be useful; in this case, it shows that the network connection was initiated by a Java process. SYN_SENT means that the container tried to initiate a connection, but the machine at never replied (in this case because the Kelda firewall blocked the connection). For this output, the way to fix the problem is to enable the container to access the public internet at port 443 by adding the following line to the blueprint:

kelda.allowTraffic(buggyContainer, kelda.publicInternet, 443);

After re-running the blueprint, the container will be able to access port 443, which fixes this connectivity problem.

The output from lsof may instead show that the application is trying to listen for connections on a particular port:

# lsof -i -P -n
java     47 root  101u  IPv4 116320      0t0  TCP *:3000 (LISTEN)

In this case, there is a Java process that is running a web application on port 3000, and the container will need to enable public access on port 3000 in order for users to access the application:

kelda.allowTraffic(kelda.publicInternet, buggyContainer, 3000);

lsof may also show that internal containers are trying to communicate:

# lsof -i -P -n
java     47 root  115u  IPv4 117041      0t0  TCP> (SYN_SENT)

In this case, the fact that the destination IP address begins with 10 signifies that the destination is another container in the deployment, because IP addresses beginning with 10 are private IP addresses (in the context of Kelda, these are typically container IP addresses). This problem is somewhat harder to debug because kelda show doesn’t show each container’s private IP address, but it’s often possible to determine which other container buggyContainer is trying to connect to based on the application logs or the port that buggyContainer is trying to connect to. To see the application logs:

$ kelda logs buggyContainer

In this case, buggyContainer needs access to port 5432 on a container that runs a postgres database (postgres runs on port 5432 by default), which can be fixed by enabling access between those two containers:

allowTraffic(buggyContainer, postgresContainer, 5432);

If you’re curious, lsof lists all open files, which includes network connections because Linux treats network connections as file handles. The -i argument tells lsof to only show IP files – i.e., to only show network connections. The -P argument specifies to show port numbers rather than port names, which is more useful here because Kelda relies on port numbers, not names, to enable network connections. Finally, the -n argument specifies to show IP addresses rather than hostnames (the hostnames output by lsof are unfortunately not the same hostnames assigned by Kelda, and are not helpful as a result).

Cloud Provider Configuration

This section describes the basic configuration of the cloud providers supported by Kelda, and gives some details about how to enable extra features (e.g., floating IP addresses) on each cloud provider.

Kelda needs access to your cloud provider credentials in order to make the API calls needed to boot your deployment. Don’t worry, Kelda will never store your credentials or use them for anything other than deploying your application.

Amazon EC2

Set Up Credentials

  1. If you don’t already have an account with Amazon Web Services, go ahead and create one.

  2. Get your access credentials from the Security Credentials page in the AWS Management Console. Choose “Access Keys” and then “Create New Access Key.”

  3. Run kelda init on the machine that will be running the daemon, and pass it your AWS credentials. The formatted credentials will be placed in ~/.aws/credentials.

Formatting Credentials

While it is recommended to use kelda init to format the provider credentials, it is possible to manually create the credentials file in ~/.aws/credentials on the machine that will be running the daemon:

aws_access_key_id = <YOUR_ID>
aws_secret_access_key = <YOUR_SECRET_KEY>

The file needs to appear exactly as above (including the [default] at the top), except with <YOUR_ID> and <YOUR_SECRET_KEY> filled in appropriately.



Set Up Credentials

  1. If you don’t have a DigitalOcean account, go ahead and create one.

  2. Create a new token here. The token must have both read and write permissions.

  3. Run kelda init on the machine that will be running the Kelda daemon, and pass it your token. The token will be placed in ~/.digitalocean/key.

Floating IPs

Unless there are already droplets running, DigitalOcean doesn’t allow users to create floating IPs under the “Networking” tab on their website. Instead, this link can be used to reserve IPs that Kelda can then assign to droplets.

Google Compute Engine

Set Up Credentials

  1. If you don’t have an account on Google Cloud Platform, go ahead and create one.

  2. Create a Google Cloud Platform Project: All instances are booted under a Cloud Platform project. To set up a project for use with Kelda, go to the console page, click the project dropdown at the top of the page, and hit the plus icon. Pick a name, and create your project.

  3. Enable the Compute API: Select your newly created project from the project selector at the top of the console page, and then select APIs & services -> Library from the navbar on the left. Search for and enable the Google Compute Engine API.

  4. Save the Credentials File: Go to Credentials on the left navbar (under APIs & services), and create credentials for a Service account key. Create a new service account with the Project -> Editor role, and select the JSON output option.

  5. Run kelda init on the machine from which you will be running the Kelda daemon, and give it the path to the JSON file downloaded in step 4. The credentials will be placed in ~/.gce/kelda.json.

Developing Kelda


Install Go

The project is written in Go and supports Go version 1.8 or later. Install Go using your package manager (Go is commonly referred to as “golang” in package managers and elsewhere) or via the Go website.

If you’ve never used Go before, we recommend reading the overview to Go workspaces here. In short, you’ll need to configure the GOPATH environment variable to be the location where you’ll keep all Go code. For example, if you’d like your Go workspace to be $HOME/gowork:

export GOPATH="$HOME/gowork"
export PATH="$GOPATH/bin:$PATH"

Add these commands to your .bashrc so that they’ll be run automatically each time you open a new shell.

Download Kelda

Clone the Kelda repository into your Go workspace using go get:

$ go get

This will download and compile Kelda, placing it in your Go workspace at $GOPATH/src/ After installing Kelda, the kelda command should execute successfully in your shell.


Protobufs

If you change any of the proto files, you’ll need to regenerate the protobuf code. We currently use protoc v3. On a Mac with homebrew, you can install protoc v3 using:

$ brew install protobuf

On other operating systems you can directly download the protoc binary here, and then add it to your $PATH.

To generate the protobufs simply call:

$ make generate


Dependencies

We use govendor for dependency management. If you are using Go 1.5, make sure GO15VENDOREXPERIMENT is set to 1.

To add a new dependency:

  1. Run go get foo/bar
  2. Edit your code to import foo/bar
  3. Run govendor add +external

To update a dependency:

$ govendor update +vendor

Building and Testing

Building and Testing the Go Code

To build Kelda, run go install in the Kelda directory. To do things beyond basic build and install, several additional build tools are required. These can be installed with the make go-get target.

Note that if you’ve previously installed Kelda with npm, there will be another Kelda binary installed on your machine (that was downloaded during the npm installation). If you want to develop Kelda, you probably want to make sure that when you run kelda, the version you’re developing (that was compiled from the Go code) gets run, and not the Kelda release that was downloaded from npm. Check that this is the case:

$ which kelda

If running which kelda results in a path under your $GOPATH, you’re all set. If it instead returns someplace else, e.g., /usr/local/bin/kelda, you’ll need to fix your $PATH variable so that $GOPATH/bin comes first.

To run the go tests, use the gocheck Make target in the root directory:

$ make gocheck

If you’d like to run the tests in just one package, e.g., the tests in the engine package, use go test with the package name:

$ go test ./engine

Building and Testing the JavaScript Code

To run the JavaScript code, you’ll need to use npm to install Kelda’s dependencies:

$ npm install .

If you’re developing the kelda package, you must also tell Node.js to use your local development copy of the Kelda JavaScript bindings (when you use Kelda to run blueprints) by running:

$ npm link js/bindings

in the directory that contains your local Kelda source files. For each blueprint that uses the Kelda JavaScript bindings, you must also run:

$ npm link kelda

in the directory that contains the blueprint JavaScript files.

To run the JavaScript tests for bindings.js, use the jscheck build target:

$ make jscheck

Running the Integration Tests

Kelda’s integration tests are located in the integration-tester directory of the main repository. These tests are not run by Travis for each submitted pull request, but they are run after each commit to our master branch (by Jenkins). To run the integration tests yourself, first install the JavaScript dependencies for the tests:

$ cd integration-tester
$ npm install .

Each integration test includes a blueprint that creates some infrastructure, and a Go test file that checks that the infrastructure was created correctly. To run a particular integration test, first run the blueprint, and then run the associated Go code.

For example, to run the Spark integration tests, first run the blueprint. You’ll need to start a kelda daemon if you don’t already have one running.

$ kelda daemon
$ # Open a new window to run the blueprint
$ kelda run ./tests/20-spark/spark.js

Use kelda show to check the status of the blueprint. Once all of the virtual machines and containers are up, run the Go test code:

$ go test ./tests/20-spark

This command will run all of the test code in the tests/20-spark package. You can add the -v flag to enable more verbose output:

$ go test -v ./tests/20-spark

By default, the tests will run in the default namespace. If you’d like to change this, you can do so by editing integration-tester/config/infrastructure.js.

Restart the Daemon After Panic

If the kelda daemon encounters a panic and quits, the socket file will still exist. Therefore when you start another daemon, you might get an error saying error=listen unix /tmp/kelda.sock: bind: address already in use. If so, remove the socket file by running rm /tmp/kelda.sock, then restart the daemon.

Contributing Code

We highly encourage contributions to Kelda from the Open Source community! Everything from fixing spelling errors to major contributions to the architecture is welcome. If you’d like to contribute but don’t know where to get started, feel free to reach out to us for some guidance.

The project is organized using a hybrid of the Github and Linux Kernel development workflows. Changes are submitted using the Github Pull Request System and, after appropriate review, fast-forwarded into master. See Submitting Patches for details.

Go Coding Style

The coding style is as defined by the gofmt tool: whatever transformations it makes on a piece of code are considered, by definition, the correct style. Unlike official go style, in Kelda lines should be wrapped to 89 characters. To make sure that your code is properly formatted, run:

$ make golint

Running make format will fix many (but not all) formatting errors.

JavaScript Coding Style

Kelda uses the AirBnb JavaScript style guide. To make sure that your JavaScript code is properly formatted, run:

$ make jslint

Git Commits

The fundamental unit of work in the Kelda project is the git commit. Each commit should be a coherent whole that implements one idea completely and correctly. No commits should break the code, even if they “fix it” later. Commit messages should be wrapped to 80 characters and begin with a title of the form <Area>: <Title>. The title should be capitalized, but not end with a period. For example, provider: Move the provider interfaces into the cloud directory is a good title. When possible, the title should fit in 50 characters.

All but the most trivial of commits should have a brief paragraph below the title (separated by an empty line), explaining the context of the commit. Why the patch was written, what problem it solves, why the approach was taken, what the future implications of the patch are, etc.

Commits should have proper author attribution, with the full name of the commit author, capitalized properly, with their email at the time of authorship. Commits authored by more than one person should have a Co-Authored-By: tag at the end of the commit message.

Submitting Patches

Patches are submitted for inclusion in Kelda using a Github Pull Request.

A pull request is a collection of well formed commits that tie together in some theme, usually the larger goal they’re trying to achieve. Completely unrelated patches should be included in separate pull requests.

Pull requests are reviewed by one person: either by a committer, if the code was submitted by a non-committer, or by a non-committer otherwise. You do not need to choose a reviewer yourself; kelda-bot will randomly select a reviewer from the appropriate group. Once the reviewer has approved the pull request, a committer will merge it. If the reviewer requests changes, leave a comment in the PR once you’ve implemented the changes, so that the reviewer knows that the PR is ready for another look.

It should be noted that the code review assignment is just a suggestion. If another contributor, or a member of the public for that matter, happens to do a detailed review and provides a +1, then the assigned reviewer is relieved of their responsibility. If you’re not the assigned reviewer, but would like to do the code review, please comment in the PR to that effect so the assigned reviewer knows they need not review the patch.

We expect patches to go through multiple rounds of code review, each involving multiple changes to the code. After each round of review, the original author is expected to update the pull request with appropriate changes. These changes should be incorporated into the patches in their most logical places, i.e., folded into the original patches or, if appropriate, inserted as new patches in the series. Changes should not simply be tacked on to the end of the series as tweaks to be squashed in later – at all stages the PR should be ready to merge without reorganizing commits.

Code Structure

Kelda is structured around a central database (db) that stores information about the current state of the system. This information is used both by the global controller (Kelda Global) that runs locally on your machine, and by the minion containers on the remote machines.


The Database

Kelda uses a simple database, db, implemented in db.go. This database supports insertions, deletions, transactions, triggers, and querying.

The db holds the tables defined in table.go, and each table is simply a collection of rows. Each row is in turn an instance of one of the types defined in the db directory - e.g. Cluster or Machine. Note that a table holds instances of exactly one type. For instance, in ClusterTable, each row is an instance of Cluster; in ConnectionTable, each row is an instance of Connection, and so on. Because of this structure, a given row can only appear in exactly one table, and the developer therefore performs insertions, deletions and transactions on the db, rather than on specific tables. Because there is only one possible table for any given row, this is safe.

The canonical way to query the database is by calling a SelectFromX function on the db. There is a SelectFromX function for each type X that is stored in the database. For instance, to query for Connections in the ConnectionTable, one should use SelectFromConnection.

Kelda Global

The first thing that happens when Kelda starts is that your blueprint is parsed by Kelda’s JavaScript library, kelda.js. kelda.js then puts the connection and container specifications into a sensible format and forwards them to the engine.

The engine is responsible for keeping the db updated so it always reflects the desired state of the system. It does so by computing a diff of the config and the current state stored in the database. After identifying the differences, engine determines the least disruptive way to update the database to the correct state, and then performs these updates. Notice that the engine only updates the database, not the actual remote system - cloud takes care of that.

The cloud component is responsible for making the actual state of your system match the state of the database. cloud continuously checks for updates to the database, and whenever the state changes, it boots or terminates VMs in your system to reflect the changes in the db.
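The diff-and-reconcile pattern that both the engine (config vs. db) and cloud (db vs. running VMs) follow can be sketched like this. It is a simplified illustration, assuming machines are identified by name; the function is hypothetical, not Kelda’s actual code:

```go
package main

import (
	"fmt"
	"sort"
)

// diff compares the desired set of machines against the current set and
// returns which machines to boot and which to terminate. The results are
// sorted so the output is deterministic.
func diff(desired, current []string) (boot, terminate []string) {
	want := make(map[string]bool)
	for _, m := range desired {
		want[m] = true
	}
	have := make(map[string]bool)
	for _, m := range current {
		have[m] = true
	}
	for m := range want {
		if !have[m] {
			boot = append(boot, m)
		}
	}
	for m := range have {
		if !want[m] {
			terminate = append(terminate, m)
		}
	}
	sort.Strings(boot)
	sort.Strings(terminate)
	return boot, terminate
}

func main() {
	// The blueprint asks for one master and two workers, but only one
	// worker and a stale machine are currently running.
	boot, term := diff(
		[]string{"master-1", "worker-1", "worker-2"},
		[]string{"worker-1", "stale-1"},
	)
	fmt.Println(boot, term) // prints "[master-1 worker-2] [stale-1]"
}
```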

Once the VMs are running, the minion container on each VM starts the necessary system containers on its host. The foreman acts as the middleman between the locally running Kelda Global and the minions on the VMs: it configures each minion, notifies it of its role, and passes it the policies from Kelda Global.

All of these steps are done continuously so the blueprint, database and remote system always agree on the state of the system.

Kelda Remote

As described above, cloud is responsible for booting VMs. On boot, each VM runs Docker and a minion. The VM is furthermore assigned a role, either worker or master, which determines what tasks it carries out. The master minion is responsible for control-related tasks, whereas the worker VMs do the actual work; that is, they run containers. When the user specifies a new container in the config file, the scheduler chooses a worker VM to boot the container on. The minion on the chosen VM is then notified and boots the new container on its host. The minion is similarly responsible for tearing down containers on its host VM.
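As a rough illustration, a scheduler of this kind might place each new container on the least-loaded worker. This is a hypothetical sketch of the placement decision, not Kelda’s actual scheduling logic:

```go
package main

import "fmt"

// Worker tracks how many containers a worker VM is currently running.
type Worker struct {
	Hostname   string
	Containers int
}

// pickWorker returns the index of the worker running the fewest containers,
// or -1 if there are no workers to schedule onto.
func pickWorker(workers []Worker) int {
	best := -1
	for i, w := range workers {
		if best == -1 || w.Containers < workers[best].Containers {
			best = i
		}
	}
	return best
}

func main() {
	workers := []Worker{
		{Hostname: "worker-1", Containers: 3},
		{Hostname: "worker-2", Containers: 1},
	}
	// Schedule a new container: the minion on the chosen worker would
	// then boot the container on its host VM.
	i := pickWorker(workers)
	fmt.Println(workers[i].Hostname) // prints "worker-2"
}
```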

While it is possible to boot multiple master VMs, only one master is active at any given time. The remaining master VMs simply serve as backups in case the leading master fails.

Developing the Minion

If you’re developing code in minion, you’ll need to do some extra setup to test your new code. To make Kelda run your local version of the minion image, and not the default Kelda minion image, follow these steps:

  1. Create a new, empty repository on your favorite registry, for example Docker Hub.
  2. Modify keldaImage in cloud/cfg/cfg.go to point to your repo.
  3. Modify Version in version/version.go to be “latest”. This ensures that you will be using the most recent version of the minion image that you are pushing up to your registry.
  4. Create a .mk file to override variables defined in Makefile. Inside the .mk file you created, set REPO to your own repository (for example: REPO = sample_repo).
  5. Create the docker image: make docker-build-kelda
    • Docker for Mac and Windows is in beta. See the docs for install instructions.
  6. Sign in to your image registry using docker login.
  7. Push your image: make docker-push-kelda.

After the above setup, you’re good to go. Just remember to build and push your image whenever you want to run the minion with your latest changes.
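For example, the override file from step 4 might look like this (local.mk is a hypothetical filename, and sample_repo is a placeholder for your own repository):

```make
# local.mk (hypothetical filename): overrides variables defined in Makefile.
# Point REPO at the registry repository you created in step 1.
REPO = sample_repo
```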



Kelda uses gRPC for communication with the daemon and deployed clusters. Functionality exposed through gRPC includes deploying new blueprints and querying deployment information. All communication is automatically encrypted and verified using TLS.


Start the daemon. If credentials don’t already exist, they will be automatically generated.

$ kelda daemon

Use the other Kelda commands as normal.

$ kelda run ./example.js
$ kelda show
MACHINE         ROLE      PROVIDER    REGION       SIZE         PUBLIC IP    STATUS
8a0d2198229c    Master    Amazon      us-west-1    m3.medium                 connected
b92d625c6847    Worker    Amazon      us-west-1    m3.medium                 connected

CONTAINER       MACHINE         COMMAND                     HOSTNAME  STATUS     CREATED           PUBLIC IP
1daa461f0805    b92d625c6847    alpine tail -f /dev/null    alpine    running    24 seconds ago

Only a user with access to the correct credentials can connect to the cluster. As an example of this, if you delete your credentials, restart the daemon, and run the same blueprint, you won’t be able to connect to the machines:

$ rm -rf ~/.kelda/tls
$ kelda daemon
$ kelda run ./example.js
$ kelda show
MACHINE         ROLE      PROVIDER    REGION       SIZE         PUBLIC IP    STATUS
8a0d2198229c    Master    Amazon      us-west-1    m3.medium                 connecting
b92d625c6847    Worker    Amazon      us-west-1    m3.medium                 connecting

TLS credentials

kelda daemon autogenerates TLS credentials if necessary. They are stored in ~/.kelda/tls. The directory structure is as follows:

$ tree ~/.kelda/tls
├── certificate_authority.crt
├── certificate_authority.key
├── kelda.crt
├── kelda.key

Other files in the directory are ignored by Kelda.


Kelda uses Vault to securely store values for container environment variables and files. For an example of how to use secrets, see How to Run Applications that Rely on Configuration Secrets.

Secrets are encrypted both in transit and at rest. Furthermore, access to secrets is constrained by the principle of least privilege.

Blueprint Library

Kelda blueprints have been written for many popular applications. Here are some that we recommend trying out:

If you would like your blueprint to be on this list, feel free to submit a pull request!

Frequently Asked Questions

This section includes answers to common questions about Kelda, and solutions to various issues. If you run into an issue and can’t find the answer here, don’t hesitate to reach out to us by email.

Why can’t I access my website?

There are a few possible reasons:

  1. Check that you are accessing the PUBLIC IP of the container that’s serving the web application (and not the IP of another container or VM).
  2. Make sure that the container exposes a port to the public internet. If it does, the PUBLIC IP will show a port number after the IP (e.g. <IP>:80), which says that port 80 is exposed to the public internet and thus that the application should be accessible from your (or any) browser. If there is no port number, and you are using an imported blueprint, check whether the blueprint exports a function that will expose the application. If so, call this function in your blueprint. If there is no such function, use allowTraffic and publicInternet in your blueprint to expose the desired port.
  3. When exposing a port other than 80, make sure to paste both the IP address and the port number into the browser as <IP>:<PORT>.

How do I get persistent storage?

Kelda currently doesn’t support persistent storage, so we recommend using a hosted database like Firebase. If you still choose to run storage applications like MongoDB or Elasticsearch on Kelda, be aware that the data will be lost if the containers or the VMs hosting them die.

I tried to kelda run a blueprint on Amazon and nothing seems to be working.

If you’re running a blueprint on AWS and the containers are not getting properly created, you may have an issue with your VPC (Virtual Private Cloud) settings on Amazon. When this issue occurs, if you run kelda show, the machines will all have status connected, but the containers will never progress to the scheduled state (either the status will be empty, or for Dockerfiles that are built in the cluster, the status will say built). This issue only occurs if you’ve changed your default VPC on Amazon, so if you don’t know what a VPC is or you haven’t used one before on Amazon, this is probably not the issue you’re experiencing.

If you previously changed your default VPC on Amazon, it may be configured in a way that prevents Kelda containers from properly communicating with each other. This happens if the subnet configured for your VPC overlaps with the subnet that Kelda uses. This problem can manifest in many ways; typically, it looks like nothing is working correctly. For example, a recent user who experienced this problem saw the following logs on the etcd container on a Kelda worker:

2017-06-05 21:14:40.823760 W | etcdserver: could not get cluster response from Get dial tcp getsockopt: no route to host
2017-06-05 21:14:40.823787 W | etcdmain: proxy: could not retrieve cluster information from the given urls

You can check (and fix) your VPC settings in the VPC section of the online AWS console.

My DigitalOcean machines are failing to boot with the error “422 Region is not available”

DigitalOcean sometimes temporarily disables Droplet creation in certain regions. Trying a different region will most likely resolve this problem.

To confirm that the error occurs because Droplet creation is disabled in your region, go to the Droplet creation page in the DigitalOcean UI. Under “Choose a datacenter region”, you can see whether any of the regions are unavailable.

Tell us what you think!

We care about your opinion! Please send us any feedback, comments, questions, or suggestions you have about Kelda and our documentation. We would love to hear from you!