Vagrant is a tool for managing and configuring virtualised development environments.
Vagrant has a concept of 'providers', which map to the virtualisation engine and its API. The most popular and well-supported provider is VirtualBox; plugins exist for VMware and more.
Vagrant uses a mostly declarative Vagrantfile to define virtualised machines. A single Vagrantfile can define multiple machines.
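A multi-machine Vagrantfile might look like the following sketch (the box name and machine names are illustrative assumptions, not from this article):

```ruby
# Minimal multi-machine Vagrantfile sketch.
# "archlinux/archlinux" and the machine names are illustrative.
Vagrant.configure("2") do |config|
  config.vm.box = "archlinux/archlinux"

  config.vm.define "web" do |web|
    web.vm.hostname = "web"
  end

  config.vm.define "db" do |db|
    db.vm.hostname = "db"
  end
end
```

With such a file, `vagrant up` starts both machines, while `vagrant up web` starts only the named one.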
- 1 Installation
- 2 Plugins
- 3 Provisioning
- 4 Base Boxes for Vagrant
- 5 Troubleshooting
- 5.1 No ping between host and vagrant box (host-only networking)
- 5.2 Virtual machine is not network accessible from the Arch host OS
- 5.3 'vagrant up' hangs on NFS mounting (Mounting NFS shared folders...)
- 5.4 Error starting network 'default': internal error: Failed to initialize a valid firewall backend
- 5.5 Unable to ssh to vagrant guest
- 6 See also
Install the vagrant package.
Vagrant has a middleware architecture providing support for powerful plugins.
Plugins can be installed with Vagrant's built-in plugin manager. You can specify multiple plugins to install:
$ vagrant plugin install vagrant-vbguest vagrant-share
The vagrant-libvirt plugin adds a libvirt provider to Vagrant. The gcc and make packages must be installed before this plugin can be installed, and libvirt and related packages must be installed and configured before using the libvirt provider.
As of September 2016 (Vagrant version 1.8.5), a normal installation of this plugin fails on Arch Linux. The plugin can be successfully installed with this workaround:
$ CONFIGURE_ARGS='with-ldflags=-L/opt/vagrant/embedded/lib with-libvirt-include=/usr/include/libvirt with-libvirt-lib=/usr/lib' \
  GEM_HOME=~/.vagrant.d/gems GEM_PATH=$GEM_HOME:/opt/vagrant/embedded/gems \
  PATH=/opt/vagrant/embedded/bin:$PATH \
  vagrant plugin install vagrant-libvirt
As of June 2017 (Vagrant version 1.9.5-1), a normal installation of this plugin fails, and the workaround presented for version 1.8.5 does not work on Arch Linux. Downgrading vagrant-substrate fixes this issue.
Once the plugin is installed, the libvirt provider will be available:
$ vagrant up --provider=libvirt
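Provider-specific settings such as memory and CPU count can also be declared in the Vagrantfile. A sketch for the libvirt provider (the box name and resource values are illustrative assumptions):

```ruby
Vagrant.configure("2") do |config|
  config.vm.box = "generic/arch"   # illustrative box name

  # vagrant-libvirt provider options (values are illustrative)
  config.vm.provider :libvirt do |libvirt|
    libvirt.memory = 1024  # MiB of RAM for the guest
    libvirt.cpus   = 2     # number of virtual CPUs
  end
end
```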
First install lxc from the official repositories, then:
$ vagrant plugin install vagrant-lxc
Next, configure lxc and some systemd unit files per this comment. The plugin can now be used with a Vagrantfile like so:
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure("2") do |config|
  config.vm.define "main" do |config|
    config.vm.box = 'http://bit.ly/vagrant-lxc-wheezy64-2013-10-23'
    config.vm.provider :lxc do |lxc|
      lxc.customize 'cgroup.memory.limit_in_bytes', '512M'
    end
    config.vm.provision :shell do |shell|
      shell.path = 'provision.sh'
    end
  end
end
The provision.sh file should be a shell script beside the Vagrantfile. Do whatever setup is appropriate; for example, to remove puppet, which is packaged in the above box:
rm /etc/apt/sources.list.d/puppetlabs.list
apt-get purge -y puppet facter hiera puppet-common puppetlabs-release ruby-rgen
The vagrant-kvm plugin supports KVM as the virtualisation provider. See and follow the complete installation guide for Arch Linux in the vagrant-kvm wiki.
Provisioners allow you to automatically install software and alter and automate configuration as part of the vagrant up process. The two most common provisioners can be installed from the official repositories and the AUR (Arch User Repository).
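Regardless of which provisioner you use, it is declared in the Vagrantfile. A sketch using the built-in shell provisioner (the box name and commands are illustrative assumptions):

```ruby
Vagrant.configure("2") do |config|
  config.vm.box = "archlinux/archlinux"  # illustrative box name

  # Run an inline shell command on first `vagrant up`
  config.vm.provision "shell", inline: "pacman -Syu --noconfirm"

  # Or run a script file located next to the Vagrantfile
  config.vm.provision "shell", path: "bootstrap.sh"
end
```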
Base Boxes for Vagrant
Here is a list of places to get all sorts of vagrant base boxes for different purposes: development, testing, or even production.
- The official Arch Linux vagrant boxes. The corresponding GitHub project contains the packerfile used for building along with provisioning scripts.
- A well maintained up-to-date Arch Linux x86_64 base box for Vagrant
- The same Arch Linux x86_64 base box can also be obtained via Vagrant Cloud by running:
vagrant init terrywang/archlinux
- Vagrant Cloud is HashiCorp's official site for Vagrant boxes. You can browse user-submitted boxes or upload your own. A single Vagrant Cloud box can support multiple providers with versioning.
- Vagrantbox.es, a list of vagrant base boxes. Initiated by Gareth Rushgrove (@garethr), hosted on Heroku using Nginx. See the story here: The Vagrantbox.es Story.
- Opscode bento
We all know what bento means in Japanese, right? In this case they are not lunch boxes but extremely useful base boxes that can be used to test cookbooks or private Chef (Chef Server and Client). Distributions included: Ubuntu Server, Debian, CentOS, Fedora and FreeBSD.
- Puppet Labs Vagrant Boxes
Pre-rolled vagrant boxes, ready for use. Made by the folks at Puppet Labs.
- Vagrant Ubuntu Cloud Images
Available since January 2013. For some reason Canonical has not officially promoted it yet; it may still be in beta. Remember these are vanilla images, not very useful without Chef or Puppet.
- The packer-arch project on GitHub provides configuration files to build light Arch Linux Vagrant images from the official ISO image, using Packer.
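Once you have picked a box, pointing a Vagrantfile at it is a one-liner. A sketch using the terrywang/archlinux box mentioned above:

```ruby
Vagrant.configure("2") do |config|
  # Box name from Vagrant Cloud; downloaded automatically on first `vagrant up`
  config.vm.box = "terrywang/archlinux"
end
```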
No ping between host and vagrant box (host-only networking)
Sometimes host-only networking does not function: the host has no IP on the vboxnet interface, and the host and vagrant boxes cannot ping each other. This is solved by installing good old net-tools, as mentioned in this thread by kevin1024.
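For reference, host-only (private) networking is what a Vagrantfile line like the following configures (the address is an illustrative assumption; VirtualBox creates a vboxnet interface on the host for the subnet):

```ruby
Vagrant.configure("2") do |config|
  # Host-only / private network between the host and the guest
  # (the IP address is illustrative)
  config.vm.network "private_network", ip: "192.168.33.10"
end
```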
Virtual machine is not network accessible from the Arch host OS
As of version 1.8.4, Vagrant appears to use the deprecated route command to configure routing to the virtual network interface which bridges to the virtual machine(s). If route is not installed, you will not be able to access the virtual machine from the host OS due to the lack of a suitable route. The solution, as mentioned above, is to install the net-tools package, which includes the route command.
Error starting network 'default': internal error: Failed to initialize a valid firewall backend
Installing the ebtables and dnsmasq packages may solve this problem.
Unable to ssh to vagrant guest
Check that virtualization is enabled in your BIOS. Because vagrant reports that the vm guest is booted, you would think that all was well with virtualization, but some vagrant boxes (e.g. tantegerda1/archlinux) allow you to get all the way to the ssh stage before the lack of cpu virtualization capabilities bites you.
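The virtualisation capability of the host CPU can be checked from Linux before blaming the box. A sketch, assuming an x86 host (the `vmx` CPU flag means Intel VT-x, `svm` means AMD-V):

```shell
# Host-side check for hardware virtualisation support.
# 'vmx' = Intel VT-x, 'svm' = AMD-V (x86 Linux host assumed).
flags=$(grep -E -o -w 'vmx|svm' /proc/cpuinfo | sort -u)
if [ -n "$flags" ]; then
    echo "hardware virtualisation available: $flags"
else
    echo "no vmx/svm flag found; enable virtualisation in the BIOS/UEFI"
fi
```

Note that even when the CPU supports it, the feature may still be disabled in the firmware, which is exactly the failure mode described above.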