Virtualization using Vagrant for Selenium Tests

Say goodbye to “works on my machine” bugs.

Nobody likes to release software on a Friday. What if something breaks over the weekend? There’s nothing like debugging an issue in production when you’re doing some day drinking at a BBQ.

But you have nothing to worry about, right? I mean, you tested the application and it worked on *your* machine. Too bad that’s never good enough. Have fun working this weekend.

If only you had automated tests for your application (across all of the browsers and all the platforms you cared about). Then you could rest easy this weekend and enjoy the festivities.

Creating a robust and scalable execution environment is an essential phase of test automation. For automated web tests, we need to cover different browsers and platforms.

In this article, we are going to see how to run our Selenium tests in a Linux box using Vagrant.

Basically, I have a Selenium test that works fine across different browsers on my Windows host machine. But I want to make sure that it runs fine on the browsers in a Linux box too. Instead of creating an actual VM and running my tests inside it, I am simply going to create a lightweight, portable Linux box and run my Selenium tests from my Windows host machine.

Also, every time you run your tests on your local machine, the browser opens on top of your other windows, preventing you from doing anything else. Unless you use PhantomJS or HtmlUnitDriver, it is not possible to run Chrome or Firefox hidden, without disturbing your work. With Vagrant and VirtualBox, you only need to start the VM, and all your tests will run inside it. You can continue developing in the meanwhile!


Vagrant provides easy-to-configure, reproducible, and portable work environments. Vagrant stands on top of VirtualBox, VMware, and some other providers. If you’re a developer, Vagrant will isolate dependencies and their configuration within a single disposable, consistent environment. If you’re a tester, Vagrant will help you create lightweight virtual environments to run your tests against all the possible OS + browser combinations.


  1. Download and install Oracle VM VirtualBox
  2. Download and install Vagrant

From the command line, type vagrant -v and make sure that you get the version number that you installed (e.g. Vagrant 1.7.2).

Vagrant Boxes:

Boxes are the package format for Vagrant environments. A box can be used by anyone on any platform that Vagrant supports to bring up an identical working environment. The easiest way to use a box is to add one from the publicly available catalog of Vagrant boxes for VirtualBox. We can also add and share our own customized boxes in the public catalog.

I am going to use this box chef/ubuntu-14.04 – a standard Ubuntu 14.04 x64 base install.

Getting Started:

  1. Create a workspace directory to store the vagrant configuration file and shell scripts.

I have created “D:\Automation\Vagrant\Demo”

  2. From the command line, go to your workspace and type vagrant init chef/ubuntu-14.04


  3. Make sure that the Vagrantfile is created in “D:\Automation\Vagrant\Demo”
  4. Run the command vagrant up

If you are running this for the first time, it may take a considerable amount of time to download the VirtualBox image, which will be placed under C:\Users\<username>\VirtualBox VMs\Demo_default_*

  5. Now the Ubuntu base image VM is up and running. Run the command vagrant halt. This will power off the VM, because we still need to provision it with Selenium-related libraries, browsers, and some utilities to run our Selenium tests.
  6. Update the Vagrantfile in “D:\Automation\Vagrant\Demo” with the following content. The default generated Vagrantfile has a lot of optional behaviors which are commented out by default; you may enable them if you need them. But we require only the following content in the Vagrantfile:


Vagrant.configure(2) do |config|
  config.vm.box = "chef/ubuntu-14.04"
  config.vm.provision :shell, :path => ""   # path to the provisioning shell script (name omitted in the original)
  config.vm.network :forwarded_port, guest: 4444, host: 4444
end

Basically, what we are doing above is enabling port forwarding in the VM. The Selenium server uses port 4444 by default, and we are forwarding that port from the VM to the host machine.
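Once the VM is up, you can sanity-check the forwarded port from the host. This is a hedged sketch assuming a bash shell is available on the host (e.g. Git Bash on Windows); it only probes whether anything is listening on port 4444:

```shell
# Probe the forwarded Selenium port using the shell's /dev/tcp pseudo-device.
# Nothing Selenium-specific here; it just checks TCP reachability.
if (echo > /dev/tcp/localhost/4444) 2>/dev/null; then
    status="reachable"
else
    status="not reachable"
fi
echo "Selenium port 4444 is ${status}"
```

While the VM is halted you should see “not reachable”; after vagrant up and provisioning, it flips to “reachable”.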

Also, we are going to provision this VM using the shell script referenced by the :path option above. Provisioners in Vagrant allow you to automatically install software, alter configurations, and more on the machine as part of the vagrant up process. This is useful since boxes typically aren’t built perfectly for your use case. Of course, if you want to just use vagrant ssh and install the software by hand, that works. But by using the provisioning systems built into Vagrant, the process is automated and repeatable. Most importantly, it requires no human interaction, so you can vagrant destroy and vagrant up and have a fully ready-to-go work environment with a single command.

Vagrant gives you multiple options for provisioning the machine, from simple shell scripts to more complex, industry-standard configuration management systems. If you’ve never used a configuration management system before, it is recommended you start with basic shell scripts for provisioning.
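As a minimal sketch of the shell-provisioning option (an illustrative Vagrantfile, not the exact one used in this article), a provisioner can even be declared inline rather than in a separate script file:

```ruby
# Illustrative only: same box as in this article, but with an inline shell
# provisioner instead of an external script file.
Vagrant.configure(2) do |config|
  config.vm.box = "chef/ubuntu-14.04"
  config.vm.provision "shell", inline: <<-SHELL
    apt-get -y update
    apt-get -y install unzip
  SHELL
end
```

For anything longer than a couple of commands, an external script (as we use below) is easier to maintain.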

So, to run a Selenium test in a fresh, plain OS, we need the following things:

  • JDK/JRE – to run the Selenium server.
  • Google Chrome browser
  • ChromeDriver
  • Some utilities like ‘unzip’ to extract the ChromeDriver zip file.
  • Selenium Standalone Server
  • Xvfb or X virtual framebuffer – to run tests in headless mode.

Xvfb is a display server implementing the X11 display server protocol. In contrast to other display servers, Xvfb performs all graphical operations in memory without showing any screen output. So when we run our tests in the VM box, we won’t see any browser popping up; no GUI interaction is possible. I am going to install all the above packages along with their dependencies using a shell provisioning script.
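The Xvfb pattern boils down to: start Xvfb on a spare display number, point DISPLAY at it, and any GUI program started afterwards renders into memory. A hedged sketch (display :10 matches the provisioning script below; the snippet degrades gracefully if Xvfb isn’t installed locally):

```shell
if command -v Xvfb >/dev/null 2>&1; then
    # Virtual display :10, one screen, 1366x768 at 24-bit color depth
    Xvfb :10 -screen 0 1366x768x24 -ac &
    XVFB_PID=$!
    export DISPLAY=:10   # GUI programs started now draw into Xvfb's memory
    msg="Xvfb started on display :10"
    kill "$XVFB_PID" 2>/dev/null || true
else
    msg="Xvfb is not installed on this machine"
fi
echo "$msg"
```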

  7. Create the provisioning shell script in “D:\Automation\Vagrant\Demo” with the following content.

#!/usr/bin/env bash

# Set start time so we know how long the bootstrap takes

T="$(date +%s)"

echo 'Updating package lists'

sudo apt-get -y update

echo 'Installing Zip/Unzip'

sudo apt-get -y install zip unzip

echo 'Installing Google Chrome'

# Note: the wget command that downloads google-chrome-stable_current_amd64.deb
# was omitted in the original; the .deb must be present before dpkg -i runs.
sudo dpkg -i google-chrome-stable_current_amd64.deb

# Fix any missing dependencies left behind by dpkg
sudo apt-get -y install -f

echo 'Installing Xvfb'

sudo apt-get -y install xvfb

sudo apt-get -y install -f

echo 'Installing JDK'

sudo apt-get -y install default-jdk

sudo apt-get -y install -f

echo 'Downloading and moving ChromeDriver/Selenium Server to /usr/local/bin'

cd /tmp

# ChromeDriver download URL (omitted in the original)
wget ""

# Selenium standalone server download URL (omitted in the original)
wget ""

# Extract the ChromeDriver archive before moving the binary
# (archive filename assumed to be chromedriver_linux64.zip)
unzip chromedriver_linux64.zip

mv chromedriver /usr/local/bin

mv selenium-server-standalone-2.35.0.jar /usr/local/bin

export DISPLAY=:10

cd /vagrant

echo "Starting Xvfb ..."

Xvfb :10 -screen 0 1366x768x24 -ac &

echo "Starting Google Chrome ..."

google-chrome --remote-debugging-port=9222 &

echo "Starting Selenium ..."

cd /usr/local/bin

java -jar selenium-server-standalone-2.35.0.jar

# Print how long the bootstrap script took to run

T="$(($(date +%s)-T))"

echo "Time bootstrap took: ${T} seconds"

  8. Run vagrant up --provision. This will start the VM, install the list of packages that we added in the shell script, and start the Selenium server on port 4444.


You can verify this by navigating to http://localhost:4444/wd/hub/static/resource/hub.html from your Windows host machine (since we forwarded the port).
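From a bash shell on the host you can perform the same check with curl (a sketch; assumes curl is available):

```shell
# Fetch the Selenium hub console over the forwarded port and report the
# HTTP status code (000 means the connection could not be made).
hub_url="http://localhost:4444/wd/hub/static/resource/hub.html"
if command -v curl >/dev/null 2>&1; then
    code=$(curl -s --max-time 5 -o /dev/null -w "%{http_code}" "$hub_url" || true)
else
    code="curl-missing"
fi
if [ "$code" = "200" ]; then
    echo "Selenium hub is up at $hub_url"
else
    echo "Selenium hub not reachable (status: $code)"
fi
```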

Now run the following Selenium test from your favorite IDE.

package test;

import java.net.URL;
import java.util.concurrent.TimeUnit;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

public class HeadlessSession {

    public static void main(String[] args) throws Exception {
        // Ask the remote server for a Chrome session
        DesiredCapabilities capabilities = DesiredCapabilities.chrome();
        WebDriver driver = new RemoteWebDriver(new URL("http://localhost:4444/wd/hub"), capabilities);
        try {
            String baseUrl = ""; // target URL (omitted in the original)
            driver.manage().timeouts().implicitlyWait(60, TimeUnit.SECONDS);
            System.out.println("Title : " + driver.getTitle());
            String browserName = capabilities.getBrowserName().toLowerCase();
            System.out.println("Browser : " + browserName);
        } finally {
            driver.quit();
        }
    }
}
That’s it. You have now successfully run your tests in the Chrome browser in a Linux box from your Windows host machine.

Useful Vagrant Commands:

Command                       Action
vagrant up                    Power on the VM
vagrant halt                  Power off the VM
vagrant reload                Restart the VM
vagrant suspend               Save the VM state and sleep
vagrant resume                Resume the suspended VM
vagrant provision             Provision the VM by running the shell script mentioned in the Vagrantfile
vagrant up --provision        Power on with provisioning
vagrant reload --provision    Restart with provisioning


Pros:

  1. Easy to set up and maintain.
  2. It’s free.
  3. Able to clone the production/staging environment for test execution.
  4. Lightweight and portable.
  5. Supports provisioning scripts like Shell, Chef, and Puppet.
  6. Simple command-line-based workflow.
  7. Goodbye to ‘Works on my machine’ bugs.
  8. For Selenium, you can run headless tests on your favorite browser.
  9. Create and destroy VMs as needed.
  10. It works on all major platforms.
  11. We can also use Selenium Grid along with Vagrant to run tests in parallel across different VMs.


Cons:

  1. Base boxes for your choice of operating system might not be readily available.
  2. It takes a little time to get a Vagrant VM up and running.
  3. Provisioning the VM box takes a lot of time. To avoid this, create your own base box by SSHing in manually and installing all the required dependencies, then use the provisioning script only for starting the Selenium server.
  4. Lightweight, but heavy compared to alternatives like Docker.
  5. It requires a hard drive file that can be huge, and it takes a lot of RAM.

SoapUI – WADL and Test Coverage

The most important part after developing your API is to provide good documentation for it.

For RESTful services, it is always good to provide a WADL document describing your API.

This provides a machine-readable specification that can drive a human-readable view as well as various testing tools. WADL serves several other purposes as well.

  • DevOps can more quickly diagnose and correct problems when parts of the larger system can be tested in isolation.
  • When working on a large team, developers who consume your exposed APIs should not have to approach the developers who built them just to understand the API’s functionality.
  • Developers/QA engineers will be able to use the SoapUI project as an example of how to access the API.
  • For integrating our APIs with an API developer portal or any other centralized ESB.
  • To derive the test coverage for our API tests.
  • You can do REST code generation using WADL2Java in SoapUI.

Different documentations for your Web API:

There are different documentation formats available for your web APIs.

Even the Ready API Pro version has the ability to import/create tests for REST services from these documentation formats or frameworks with the help of external plugins.

Most of the services that I am working on are RESTful, and I recommend WADL as a standard. It is always your developers’ responsibility to provide proper documentation for your web API, but sometimes that may not happen. So in that case, let’s see how you can generate a WADL on your own.

How to Generate a WADL:

There are several ways to generate the WADL for your APIs, and each has its own pros and cons.

WADL File Generator in .NET

As I am working only on .NET projects, this solution uses the leeksnet.AspNet.WebApi.Wadl package to generate the WADL. After installing this package along with its dependencies in your API project, we can see the generated WADL at the root of your Web API.


For more details, please refer to this wiki: WADL File Generator in .NET


Pros:

  • It is automated. No manual effort involved.
  • We can see the updated descriptions for the APIs whenever there is a change in the API code base.


Cons:

  • Dependency on the product development team.
  • The dev team may have to upgrade their MVC versions to make this solution work.

Ready API/SoapUI Rest Discovery:

SoapUI Pro/Ready API has a built-in feature called “Rest Discovery” which helps us discover APIs and their descriptions.

SmartBear has a lot of documentation available to guide you step by step through this feature. Have a look here: Getting Started with Ready API Rest Discovery.


Pros:

  • No dependency on the dev team. Anyone can go ahead and generate the descriptions for their WADL.


Cons:

  • Significant risk of missed API resources – anything not exercised is not recorded in the WADL.
  • Someone on the team should own the responsibility of updating the generated WADL every time there is a change.
  • It will certainly result in a fairly massive performance bottleneck, since all API traffic is routed through a SoapUI recorder that is not optimized for performance.

Manually Generating a WADL:

The final option is to write a WADL file manually for your APIs. If you have a good understanding of your API’s internal skeleton – resources, representations, requests, and responses – you can write your own WADL file using any text editor.
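To give a feel for what such a hand-written file looks like, here is a minimal WADL skeleton for a hypothetical “customers” resource (the base URI, resource names, and method ids are placeholders, not from any real API):

```xml
<application xmlns="http://wadl.dev.java.net/2009/02"
             xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <resources base="http://example.com/api/">
    <resource path="customers">
      <!-- GET /customers : list all customers -->
      <method name="GET" id="getCustomers">
        <response status="200">
          <representation mediaType="application/json"/>
        </response>
      </method>
      <resource path="{id}">
        <!-- {id} is a template parameter in the URI path -->
        <param name="id" style="template" type="xs:string"/>
        <!-- GET /customers/{id} : fetch one customer -->
        <method name="GET" id="getCustomerById">
          <response status="200">
            <representation mediaType="application/json"/>
          </response>
        </method>
      </resource>
    </resource>
  </resources>
</application>
```

Real WADL files grow quickly once query parameters, request representations, and fault responses are added, which is exactly why the manual approach is error-prone.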


Pros:

  • No dependency on the dev team. Anyone can go ahead and generate the descriptions for their WADL.
  • Requires no special tools.


Cons:

  • It involves a lot of manual effort and is time-consuming.
  • Someone on the team should own the responsibility of updating the WADL every time there is a change.
  • You need to know the WADL standards and schema.
  • There are possible chances for errors which may break the WADL schema.

Ready API/SoapUI Schema Inference:

When creating a REST service without a WADL, it is often useful to be able to generate these documents anyway, so that validation is made possible and code/documentation generation tools can be used. Ready! API provides automatic inference of a WADL from the model you create in SoapUI, and also inference of XSD schemas from any incoming responses that can be converted to XML, such as XML, JSON, and HTML. For more information, please refer to Using Inferred Schemas.


Pros:

  • No dependency on the dev team. Anyone can go ahead and generate the descriptions for their WADL.


Cons:

  • You need to know the list of available APIs before inferring the WADL schema.

I strongly believe there are still many other solutions for providing proper documentation for your APIs. There should be some kind of automated mechanism available for web APIs in every language, like Java, Python, etc.

Test Coverage in SoapUI/Ready API:

By test coverage, I am referring to the built-in “Contract Coverage” feature in Ready API.

This feature helps us make sure that we are writing a good number of tests for all the available resources, representations, requests, and responses in an exposed API.

This built-in “Contract Coverage” feature in Ready API is entirely different from “code coverage”, which can be achieved through external tools like NCover, Cobertura, etc.

This coverage is possible only if we have proper documentation/a WADL provided for the APIs by the product teams.

To derive coverage for your API tests, please have a look at this: Getting started with API Test Coverage

Generate Test Suites using WADL:

If we have the WADL file available at the root of the API URL, life becomes much easier: simply import it into Ready API and auto-generate test suites and test cases for all the available resources in your API.

Please have a look at this to learn – How to import WADL into Ready API and auto generate API tests

Additional Note:

If your APIs are RESTful, you have a choice between “HTTP Request” and “Rest Request” in Ready API. But I recommend using only “Rest Request”.

Please have a look at here – Getting started with Restful Requests

Also, for getting API Contract Coverage in Ready API, it is mandatory to use “Rest Request”.

If you used “HTTP Request”, you won’t be able to derive Contract Coverage for your API tests.