The Future of Software Development

Bowery can be described as the “GitHub for development environments”. They are a NYC-based startup that hosts your entire dev environment in the cloud, so that you can share that environment with teammates without writing scripts or managing virtual machines. With Bowery, new hires can just connect and be writing code on their first day. Scrap the pages of often-outdated documentation given to every engineer on day one. We’re in the 21st century, people! Everything is in the cloud! Why not our dev environments?!

Bowery doesn’t only help when onboarding new team members, though. Because Bowery runs your code against a remote copy of your production environment, you can also bypass version conflicts (oh, you upgraded to Python 3 without telling me?) and OS dependency issues (MUST he work on a Windows laptop?). As an engineering team scales, these issues can become a big time suck, but with Bowery you’re in the clear. Every update to your environment is automatically shared with everyone else. Just like we can’t imagine collaborating on projects without version control and GitHub, I predict that soon we won’t be able to imagine the days when we each manually copied and maintained our dev environments from machine to machine.

Now that I’ve got my initial excitement out of the way, I’ll go into some of the features of Bowery and conclude with my thoughts on the product.


Bowery is actually a desktop app (written in Go!). You download the app, extract the .zip file, and run the installer. Then you open Bowery and navigate to your project. This will start an instance, which is a replica of your dev environment, hosted in the cloud (Bowery securely stores instances in Google’s Compute Engine using Quay, so you don’t have to worry about security).

The basis of a Bowery environment is Ubuntu 14.04, which has Python pre-installed, so you can run “$ python -m SimpleHTTPServer” and then go to File > Open in Bowery to see your app, updated live as you make changes to source code. Nice.
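To see the same idea in a self-contained way, here’s a small sketch (hypothetical, and using Python 3’s http.server, the modern equivalent of SimpleHTTPServer): it serves a throwaway directory in a background thread and fetches a file back from it.

```python
# Hypothetical sanity check: serve a directory with Python's built-in static
# file server (http.server is the Python 3 equivalent of SimpleHTTPServer)
# and fetch a file back from it.
import http.server
import os
import tempfile
import threading
import urllib.request

# Create a throwaway directory with one file to serve.
os.chdir(tempfile.mkdtemp())
with open("index.html", "w") as f:
    f.write("hello from Bowery")

# Bind to an OS-assigned free port and serve in a background thread.
server = http.server.HTTPServer(
    ("127.0.0.1", 0), http.server.SimpleHTTPRequestHandler
)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = "http://127.0.0.1:%d/index.html" % server.server_address[1]
body = urllib.request.urlopen(url).read().decode()
server.shutdown()
```

Inside a Bowery instance you’d just run the one-liner, of course; the threaded version above only exists so the whole round trip fits in one script.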


If you’re opening a project whose dependencies have already been built, you’re good to go – just start coding! However if this is the first time you’re opening a project in Bowery, you’ll have to provision all of your dependencies. There are several ways to do this.

  • Bash: if you have a script to build your dependencies, just run it inside Bowery and then save your environment (File > Save).
  • Docker: probably the easiest option. If you have a Dockerfile inside your application’s directory, Bowery will recognize it. When selecting a folder, Bowery will ask if you want to build your environment using Docker or get the clean Ubuntu install instead.
  • Ansible: Bowery makes it easy to get your SSH port and password (File > Info), which you can give Ansible so it can connect to your instance.
  • Chef/Puppet: if you have a Chef recipe or Puppet manifest, you’ll need to first install Chef/Puppet in your Bowery instance, then run the appropriate “chef-solo” or “puppet apply” command. Easy peasy.
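For the Docker route, a Dockerfile sketch might look like the following (the base image, packages, and file names here are my own assumptions for a small Python app, not Bowery requirements):

```dockerfile
# Hypothetical Dockerfile for a small Python web app.
FROM ubuntu:14.04

# Install the runtime dependencies.
RUN apt-get update && apt-get install -y python python-pip

# Copy the app in and install its Python packages.
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt

CMD ["python", "app.py"]
```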

Continuous Integration

The only major beef I have with Bowery (and why I wouldn’t be able to use it at my current employer) is that it seems to have limited CI integration support. While Bowery supports CircleCI if you’re using Docker for provisioning, I don’t see support for TeamCity, Travis, or Jenkins. This is a major bummer, and I hope the Bowery team is working on this.


Bowery also supports custom requests to host environments in AWS, Azure, or Rackspace, and behind your own company’s firewall. They have a small team, though, so to be honest I’m not sure bigger companies would feel comfortable handing over a big piece of their devops responsibility to such a tiny company.


Overall I think Bowery is a company with an exciting technology. They’re probably a good fit for newer/smaller companies that don’t have super-complex dev environments and/or custom needs. However as the company grows I’m sure they will be able to support larger and more difficult clients. Hosting dev environments in the cloud has a ton of benefits, and is definitely the direction we should be moving towards.

I’m looking forward to trying them out when I’m collaborating with a team on a hobby or open-source project, and I’ll be recommending that my entrepreneur friends and engineers at smaller companies give them a look.

What is Machine Learning?

The definition I like most is “the semi-automated extraction of knowledge from data”. I say semi-automated because machine learning (ML from now on) requires both humans and computers to work properly. ML starts with a question that might be answerable with data. That data is fed to an ML algorithm to build a predictive model, which can then be used to generate insight.

Supervised Learning

Supervised learning seeks to predict a specific outcome, for example: is this email spam or not? The first step in supervised learning is training an ML model using labeled data. For example, an ML algorithm might be fed thousands of emails (inputs) and be told whether each email is spam or not (output). The algorithm builds a predictive model that learns the relationship between the attributes of the data and its outcome, perhaps that emails with lots of links in the body and uppercased words in the subject line are likely to be spam.

This predictive model is then used to make predictions on new data for which the label is unknown. For example: is this new email I’ve never seen spam or not? The primary goal in supervised learning is to build a predictive model that “generalizes”, accurately predicting the future rather than the past.
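The spam example can be sketched with a toy 1-nearest-neighbor classifier (one of many possible supervised algorithms; the features and data here are made up for illustration): each email is reduced to (number of links in the body, whether the subject line is all-caps), and a new email gets the label of the most similar labeled example.

```python
# Toy supervised-learning sketch: classify emails as spam/ham with a
# 1-nearest-neighbor model. Hypothetical features:
# (number of links in the body, 1 if the subject line is all-caps else 0).

def train(labeled_examples):
    # "Training" a 1-NN model just means storing the labeled examples.
    return list(labeled_examples)

def predict(model, features):
    # Return the label of the closest stored example (squared distance).
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    closest = min(model, key=lambda example: dist(example[0], features))
    return closest[1]

# Labeled training data: (features, label) pairs.
labeled = [((5, 1), "spam"), ((4, 1), "spam"),
           ((0, 0), "ham"), ((1, 0), "ham")]
model = train(labeled)
prediction = predict(model, (6, 1))  # a link-heavy email with a shouty subject
```

The new email (6, 1) was never seen during training; the model generalizes from the labeled examples to classify it.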


Unsupervised Learning

Unsupervised learning has less to work with than supervised learning because the data is not labeled. It aims to extract structure from unlabeled data in order to learn how to best represent it. In contrast to supervised learning, there is no “right answer”. For example, if I have a data set representing the behaviors of e-commerce shoppers, an unsupervised learning task might be to group shoppers into clusters that exhibit similar behavior. Notice there are no labels in this model, and the model produces clustered data instead of correctly labeled data.


Using the e-commerce shopper example from before, an unsupervised learning model would group shoppers into clusters with similar behavior, say, urban males 18-35, suburban females 45-60 and single mothers. The shoppers in these clusters have similar behaviors and dissimilar behaviors from the other clusters.
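The shopper-clustering idea can be sketched with a minimal k-means implementation (one common clustering algorithm; the data here is made up, with each shopper reduced to an (age, monthly orders) pair):

```python
# Toy unsupervised-learning sketch: a minimal k-means clustering of
# made-up shopper data, where each shopper is (age, monthly_orders).
import random

def kmeans(points, k, iters=20, seed=0):
    random.seed(seed)
    centers = random.sample(points, k)  # pick k starting centers
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # Assign each point to its nearest center (squared distance).
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        # Move each center to the mean of its assigned points.
        centers = [tuple(sum(dim) / len(c) for dim in zip(*c)) if c
                   else centers[i]
                   for i, c in enumerate(clusters)]
    return clusters

shoppers = [(25, 8), (27, 9), (24, 7), (52, 2), (55, 1), (50, 3)]
groups = kmeans(shoppers, k=2)  # young frequent buyers vs. older infrequent ones
```

No shopper was ever labeled; the algorithm discovers the two groups from the structure of the data alone.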



Reinforcement Learning

In contrast to supervised learning (input, correct output) and unsupervised learning (input, no output), reinforcement learning gives an input, some output, and a grade for that output. This is interesting because it closely mimics the way humans naturally learn.

For example, think of a toddler looking at a hot cup of coffee. The toddler is curious and touches the cup (input) and receives some output (the feeling of touching the hot cup) with a grade (ouch!). The toddler just received a negative reward (pain) for undesirable behavior (touching a cup of coffee with steam coming out of it). The toddler may experience this model of reinforcement learning several times before learning to identify and not touch hot cups of coffee.

Reinforcement learning is often used to help computers learn how to play games, using a current state and target function. If you want to teach a computer how to play chess, the computer will need to know the current state of the board, and identify the move with the greatest chance to win the game given the current state (this is called the target function). Since the computer doesn’t know anything about playing chess, it will start by randomly selecting each move, and playing the chess game to completion. When the game ends and the computer wins or loses, that grade (positive for a win, negative for a loss) will be propagated back to each move in that game. When the computer plays another game it will then have some information to feed to its target function that decides what move to choose. Over time, the computer will get better and better at playing chess, all based on the rewards and punishments allocated to specific moves given a win or a loss.
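Chess is far too big for a sketch, but the same propagate-the-grade-back idea can be shown with tabular Q-learning (one common reinforcement-learning algorithm) on a tiny made-up game: states 0 to 4 on a line, start in the middle, step left or right; reaching state 4 wins (+1) and state 0 loses (-1).

```python
# Toy reinforcement-learning sketch: tabular Q-learning on a 1-D "game".
# States 0..4, start at 2; action -1 moves left, +1 moves right.
# State 4 is a win (+1 reward), state 0 is a loss (-1 reward).
import random

def q_learning(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=1):
    random.seed(seed)
    # q plays the role of the "target function": the expected grade
    # for taking an action in a given state.
    q = {(s, a): 0.0 for s in range(5) for a in (-1, 1)}
    for _ in range(episodes):
        s = 2
        while s not in (0, 4):
            # Mostly pick the best-known move, sometimes explore randomly.
            if random.random() < eps:
                a = random.choice((-1, 1))
            else:
                a = max((-1, 1), key=lambda act: q[(s, act)])
            s2 = s + a
            reward = 1 if s2 == 4 else (-1 if s2 == 0 else 0)
            best_next = 0.0 if s2 in (0, 4) else max(q[(s2, -1)], q[(s2, 1)])
            # Propagate the grade back to the move that was made.
            q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
            s = s2
    return q

q = q_learning()
```

After a few hundred games, moving right (toward the win) scores higher than moving left in every state, purely from the rewards and punishments propagated back through the moves.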

Visualized, the reinforcement learning model looks something like this: