Merge pull request #261 from fgrehm/extract-base-boxes-to-separate-repo

Extract base boxes to separate repo
Fabio Rehm 2014-03-27 19:47:01 -03:00
commit 9b6d1bb117
21 changed files with 33 additions and 1076 deletions

View file

@ -4,25 +4,11 @@ Although the official documentation says it is only supported for VirtualBox
environments, you can use the [`vagrant package`](http://docs.vagrantup.com/v2/cli/package.html)
command to export a `.box` file from an existing vagrant-lxc container.
There is also a set of [bash scripts](https://github.com/fgrehm/vagrant-lxc/tree/master/boxes)
There is also a set of [bash scripts](https://github.com/fgrehm/vagrant-lxc-base-boxes)
that you can use to build base boxes as needed. By default it won't include any
provisioning tool and you can pick the ones you want by providing some environment
variables.
For example:
```
git clone https://github.com/fgrehm/vagrant-lxc.git
cd vagrant-lxc/boxes
PUPPET=1 CHEF=1 make precise
```
Will build a Ubuntu Precise x86_64 box with latest Puppet and Chef pre-installed, please refer to the scripts for more information.
## Known issues
We can't get the NFS client to be installed on the containers used for building
Ubuntu 13.04 / 13.10 / 14.04 base boxes.
variables. Please refer to the [base boxes repository](https://github.com/fgrehm/vagrant-lxc-base-boxes)
for more information.
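
As a concrete illustration of the `vagrant package` route mentioned at the top of this file, here is a minimal sketch; the output and box names are arbitrary examples, not anything prescribed by the plugin:

```
# Run from the directory holding the Vagrantfile of an existing vagrant-lxc machine
vagrant halt
vagrant package --output my-container.box

# The exported file can then be re-added under any name:
vagrant box add my-box my-container.box
```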
## "Anatomy" of a box
@ -31,13 +17,13 @@ on knowing what makes a base box for vagrant-lxc, here's what's needed:
### Expected `.box` contents
| FILE | DESCRIPTION |
| --- | --- |
| `lxc-template` | Script responsible for creating and setting up the container (used with `lxc-create`), a ["generic script"]() is provided along with project's source. |
| `rootfs.tar.gz` | Compressed container rootfs tarball (need to remeber to pass in `--numeric-owner` when creating it) |
| `lxc.conf` | File passed in to `lxc-create -f` |
| `lxc-config` | Box specific configuration to be _appended_ to the container's config file |
| `metadata.json` | Required by Vagrant |
| FILE | REQUIRED? | DESCRIPTION |
| --- | --- | --- |
| `metadata.json` | Yes | Required by Vagrant |
| `rootfs.tar.gz` | Yes | Compressed container rootfs tarball (remember to pass in `--numeric-owner` when creating it) |
| `lxc-template` | No, a ["generic script"](scripts/lxc-template) is provided by the plugin if it doesn't exist on the base box | Script responsible for creating and setting up the container (used with `lxc-create`). |
| `lxc-config` | No | Box specific configuration to be _appended_ to the system's generated container config file |
| `lxc.conf` | No | File passed in to `lxc-create -f` |
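
For reference, a hand-rolled box containing the files from the table above could be packaged roughly like this; the paths and names are illustrative only, and the removed `common/package.sh` later in this diff follows the same `tar --numeric-owner` pattern:

```
# Sketch, not the removed build scripts themselves; names and paths are examples.
# 1. Compress the rootfs of a stopped container (note --numeric-owner):
mkdir -p /tmp/my-box
cd /var/lib/lxc/my-container
sudo tar --numeric-owner -czf /tmp/my-box/rootfs.tar.gz ./rootfs/*

# 2. Add metadata.json (and optionally lxc-template / lxc-config / lxc.conf),
#    then pack everything into the .box:
cd /tmp/my-box
tar -czf ../my-base.box ./*
```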
### metadata.json
@ -58,4 +44,4 @@ on knowing what makes a base box for vagrant-lxc, here's what's needed:
| `provider` | Yes | Required by Vagrant |
| `version` | Yes | Tracks backward incompatibilities |
| `built-on` | No | Date / time when the box was packaged for the first time |
| `template-opts` | No | Extra options to be passed to the `lxc-template` script provided with the .box package |
| `template-opts` | No | Extra options to be passed to the `lxc-template` script |
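
Putting the required fields together, a minimal `metadata.json` (matching the `common/metadata.json` template removed further down in this diff) can be generated with something like:

```
# Sketch: the version value and the use of `date -u` mirror the removed build scripts
cat > metadata.json <<EOF
{
  "provider": "lxc",
  "version": "1.0.0",
  "built-on": "$(date -u)"
}
EOF
```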

View file

@ -34,6 +34,7 @@ IMPROVEMENTS:
issues: [[GH-151]] [[GH-191]] [[GH-241]] [[GH-242]]
- Warn in case `:group` or `:owner` are specified for synced folders [[GH-196]]
- Acceptance specs are now powered by `vagrant-spec` [[GH-213]]
- Base box creation scripts were moved out to https://github.com/fgrehm/vagrant-lxc-base-boxes
[GH-254]: https://github.com/fgrehm/vagrant-lxc/issues/254
[GH-196]: https://github.com/fgrehm/vagrant-lxc/issues/196
@ -46,33 +47,6 @@ IMPROVEMENTS:
[GH-242]: https://github.com/fgrehm/vagrant-lxc/issues/242
BASE BOXES:
- Switched to [`lxc-download`](https://github.com/lxc/lxc/blob/master/templates/lxc-download.in)
as the "reference implementation" for the generic `lxc-template` script [[GH-236]]
- Added support for _appending_ custom boxes configs with the `lxc-config` file,
allowing usage of host's specific configs from `/etc/lxc/default.conf` [[GH-222]]
- Include NFS client on Ubuntu and Debian base boxes [[GH-218]]
- Improved output for building base boxes
- Improved `vagrant` user `sudo` rights [[GH-231]] [[GH-188]]
- Locale configuration may follow builder's LANG environment variable [[GH-221]]
- Enable bash completion for Debian base boxes [[GH-220]]
- Fix broken locale in Ubuntu boxes [[GH-201]]
- Install `python-software-properties` by default [[GH-155]]
- Fix apt-get error when building Ubuntu boxes [[GH-200]]
[GH-236]: https://github.com/fgrehm/vagrant-lxc/issues/236
[GH-222]: https://github.com/fgrehm/vagrant-lxc/issues/222
[GH-218]: https://github.com/fgrehm/vagrant-lxc/issues/218
[GH-231]: https://github.com/fgrehm/vagrant-lxc/issues/231
[GH-221]: https://github.com/fgrehm/vagrant-lxc/issues/221
[GH-220]: https://github.com/fgrehm/vagrant-lxc/issues/220
[GH-201]: https://github.com/fgrehm/vagrant-lxc/issues/201
[GH-188]: https://github.com/fgrehm/vagrant-lxc/issues/188
[GH-155]: https://github.com/fgrehm/vagrant-lxc/issues/155
[GH-200]: https://github.com/fgrehm/vagrant-lxc/issues/200
## [0.8.0](https://github.com/fgrehm/vagrant-lxc/compare/v0.7.0...v0.8.0) (Feb 26, 2014)
FEATURES:

View file

@ -1,14 +1,17 @@
### Please read before contributing
* If you have an issue with base boxes, please open it at https://github.com/fgrehm/vagrant-lxc-base-boxes;
this repository is for the Vagrant plugin only.
* Try not to post questions in the issues tracker. I will probably answer you
but I'll most likely close the issue right away and will continue the discussion
with the issue closed. If you have any questions about the plugin, make sure
you go through the [Wiki](https://github.com/fgrehm/vagrant-lxc/wiki) pages
first (especially the [Troubleshooting Section](https://github.com/fgrehm/vagrant-lxc/wiki/Troubleshooting))
and if you still need answers please ask a question on [Stack Overflow](http://stackoverflow.com/questions/tagged/vagrant-lxc)
using the `vagrant-lxc` tag on it so that I get notified :)
using the `vagrant` / `lxc` tags on it so that I get notified :)
* Please do a small search on the issues tracker before submitting your issue to
* Please do a search on the issues tracker before submitting your issue to
check if it was already reported / fixed.
* When reporting a bug, please include **all** information that you can get

View file

@ -18,7 +18,7 @@ branch.
Usage with the recently released Vagrant 1.5 is only possible by [building the
plugin from sources](https://github.com/fgrehm/vagrant-lxc/wiki/Development#wiki-installing-the-plugin-from-source).
The 1.0.0.beta1 version of the plugin that will ship with the changes required
is expected to be released by the end of March 2014.
is expected to be released by April 2014.
## Features
@ -61,26 +61,23 @@ vagrant plugin install vagrant-lxc
```
## Usage
## Quick start
After installing, add a [base box](#base-boxes) using any name you want, for example:
On Vagrant 1.5+:
```
vagrant box add quantal64 http://bit.ly/vagrant-lxc-quantal64-2013-10-23
vagrant init fgrehm/precise64-lxc
vagrant up --provider=lxc
```
Then create a Vagrantfile that looks like the following, changing the box name
to the one you've just added:
On Vagrant < 1.5:
```ruby
Vagrant.configure("2") do |config|
config.vm.box = "quantal64"
end
```
vagrant box add precise64 http://bit.ly/vagrant-lxc-precise64-2013-10-23
vagrant up --provider=lxc
```
And finally run `vagrant up --provider=lxc`.
If you are using Vagrant 1.2+ you can also set `VAGRANT_DEFAULT_PROVIDER`
If you are using Vagrant 1.2+ you can also set the `VAGRANT_DEFAULT_PROVIDER`
environment variable to `lxc` in order to avoid typing `--provider=lxc` all
the time.
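
For example:

```
# Make lxc the default provider for the current shell session
export VAGRANT_DEFAULT_PROVIDER=lxc
vagrant up   # now equivalent to `vagrant up --provider=lxc`
```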
@ -104,7 +101,7 @@ vagrant-lxc will then write out `lxc.cgroup.memory.limit_in_bytes='1024M'` to th
container config file (usually kept under `/var/lib/lxc/<container>/config`)
prior to starting it.
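
A quick way to double check the generated entry; the container name below is a placeholder:

```
# Replace "my-container" with the actual container name
sudo grep 'memory.limit_in_bytes' /var/lib/lxc/my-container/config
```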
For other configuration options, please check the [lxc.conf manpages](http://manpages.ubuntu.com/manpages/quantal/man5/lxc.conf.5.html).
For other configuration options, please check the [lxc.conf manpages](http://manpages.ubuntu.com/manpages/precise/man5/lxc.conf.5.html).
### Container naming
@ -139,10 +136,11 @@ _vagrant-lxc < 1.0.0 users, please check this [Wiki page](https://github.com/fgr
### Base boxes
Please check [the wiki](https://github.com/fgrehm/vagrant-lxc/wiki/Base-boxes)
for a list of [pre built](https://github.com/fgrehm/vagrant-lxc/wiki/Base-boxes#available-boxes)
base boxes and have a look at [`BOXES.md`](https://github.com/fgrehm/vagrant-lxc/tree/master/BOXES.md)
for more information on building your own.
Base boxes can be found on [VagrantCloud](https://vagrantcloud.com/search?provider=lxc)
and some scripts to build your own are available at [fgrehm/vagrant-lxc-base-boxes](https://github.com/fgrehm/vagrant-lxc-base-boxes).
If you want to build your own boxes, please have a look at [`BOXES.md`](https://github.com/fgrehm/vagrant-lxc/tree/master/BOXES.md)
for more information.
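
A hypothetical end-to-end build, assuming the new repository keeps `make` targets similar to the Makefile removed in this diff:

```
# Sketch only: the target name assumes the new repo mirrors the old `make precise` style targets
git clone https://github.com/fgrehm/vagrant-lxc-base-boxes.git
cd vagrant-lxc-base-boxes
make precise
```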
## More information

boxes/.gitignore
View file

@ -1 +0,0 @@
/log

View file

@ -1,42 +0,0 @@
UBUNTU_BOXES= precise quantal raring saucy trusty
DEBIAN_BOXES= squeeze wheezy sid jessie
TODAY=$(shell date -u +"%Y-%m-%d")
default:
all: ubuntu debian
ubuntu: $(UBUNTU_BOXES)
debian: $(DEBIAN_BOXES)
# REFACTOR: Figure out how can we reduce duplicated code
$(UBUNTU_BOXES): CONTAINER = "vagrant-base-${@}-amd64"
$(UBUNTU_BOXES): PACKAGE = "output/${TODAY}/vagrant-lxc-${@}-amd64.box"
$(UBUNTU_BOXES):
@mkdir -p $$(dirname $(PACKAGE))
@sudo -E ./mk-debian.sh ubuntu $(@) amd64 $(CONTAINER) $(PACKAGE)
@sudo chmod +rw $(PACKAGE)
@sudo chown ${USER}: $(PACKAGE)
$(DEBIAN_BOXES): CONTAINER = "vagrant-base-${@}-amd64"
$(DEBIAN_BOXES): PACKAGE = "output/${TODAY}/vagrant-lxc-${@}-amd64.box"
$(DEBIAN_BOXES):
@mkdir -p $$(dirname $(PACKAGE))
@sudo -E ./mk-debian.sh debian $(@) amd64 $(CONTAINER) $(PACKAGE)
@sudo chmod +rw $(PACKAGE)
@sudo chown ${USER}: $(PACKAGE)
acceptance: CONTAINER = "vagrant-base-acceptance-amd64"
acceptance: PACKAGE = "output/${TODAY}/vagrant-lxc-acceptance-amd64.box"
acceptance:
@mkdir -p $$(dirname $(PACKAGE))
@PUPPET=1 CHEF=1 sudo -E ./mk-debian.sh ubuntu precise amd64 $(CONTAINER) $(PACKAGE)
@sudo chmod +rw $(PACKAGE)
@sudo chown ${USER}: $(PACKAGE)
clean: ALL_BOXES = ${DEBIAN_BOXES} ${UBUNTU_BOXES} acceptance
clean:
@for r in $(ALL_BOXES); do \
sudo -E ./clean.sh $${r}\
vagrant-base-$${r}-amd64 \
output/${TODAY}/vagrant-lxc-$${r}-amd64.box; \
done

View file

@ -1,159 +0,0 @@
#!/bin/bash
# set -x
set -e
# Script used to build OpenMandriva base vagrant-lxc containers, currently limited to
# host's arch
#
# USAGE:
# $ cd boxes && sudo ./build-openmandriva-box.sh OPENMANDRIVA_RELEASE BOX_ARCH
#
# TODO: scripts for install CHEF, PUPPET, SALT, BABUSHKA
# To enable Chef or any other configuration management tool pass '1' to the
# corresponding env var:
# $ CHEF=1 sudo -E ./build-openmandriva-box.sh OPENMANDRIVA_RELEASE BOX_ARCH
# $ PUPPET=1 sudo -E ./build-openmandriva-box.sh OPENMANDRIVA_RELEASE BOX_ARCH
# $ SALT=1 sudo -E ./build-openmandriva-box.sh OPENMANDRIVA_RELEASE BOX_ARCH
# $ BABUSHKA=1 sudo -E ./build-openmandriva-box.sh OPENMANDRIVA_RELEASE BOX_ARCH
##################################################################################
# 0 - Initial setup and sanity checks
TODAY=$(date -u +"%Y-%m-%d")
NOW=$(date -u)
RELEASE=${1:-"openmandriva2013.0"}
ARCH=${2:-"x86_64"}
PKG=vagrant-lxc-${RELEASE}-${ARCH}-${TODAY}.box
WORKING_DIR=/tmp/vagrant-lxc-${RELEASE}
VAGRANT_KEY="ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA6NF8iallvQVp22WDkTkyrtvp9eWW6A8YVr+kz4TjGYe7gHzIw+niNltGEFHzD8+v1I2YJ6oXevct1YeS0o9HZyN1Q9qgCgzUFtdOKLv6IedplqoPkcmF0aYet2PkEDo3MlTBckFXPITAMzF8dJSIFo9D8HfdOV0IAdx4O7PtixWKn5y2hMNG0zQPyUecp4pzC6kivAIhyfHilFR61RGL+GPXQ2MWZWFYbAGjyiYJnAmCP3NOTd0jMZEnDkbUvxhMmBYSdETk1rRgm+R4LOzFUGaHqHDLKLX+FIPKcF96hrucXzcWyLbIbEgE98OHlnVYCzRdK8jlqm8tehUc9c9WhQ== vagrant insecure public key"
ROOTFS=/var/lib/lxc/${RELEASE}-base/${RELEASE}-base/rootfs
# Providing '1' will enable these tools
CHEF=${CHEF:-0}
PUPPET=${PUPPET:-0}
SALT=${SALT:-0}
BABUSHKA=${BABUSHKA:-0}
# Path to files bundled with the box
CWD=`readlink -f .`
LXC_TEMPLATE=${CWD}/common/lxc-template-openmandriva
LXC_CONF=${CWD}/common/lxc.conf
METATADA_JSON=${CWD}/common/metadata.json
# Set up a working dir
mkdir -p $WORKING_DIR
if [ -f "${WORKING_DIR}/${PKG}" ]; then
echo "Found a box on ${WORKING_DIR}/${PKG} already!"
exit 1
fi
##################################################################################
# 1 - Create the base container
if $(lxc-ls | grep -q "${RELEASE}-base"); then
echo "Base container already exists, please remove it with \`lxc-destroy -n ${RELEASE}-base\`!"
exit 1
else
export SUITE=$RELEASE
lxc-create -n ${RELEASE}-base -t openmandriva -- -R ${RELEASE} --arch ${ARCH}
fi
######################################
# 2 - Fix some known issues
# Fixes some networking issues
cat /etc/resolv.conf > ${ROOTFS}/etc/resolv.conf
##################################################################################
# 3 - Prepare vagrant user
chroot ${ROOTFS} su -c 'useradd --create-home -s /bin/bash vagrant'
# echo -n 'vagrant:vagrant' | chroot ${ROOTFS} chpasswd
chroot ${ROOTFS} su -c "echo -n 'vagrant:vagrant' | chpasswd"
##################################################################################
# 4 - Setup SSH access and passwordless sudo
# Configure SSH access
mkdir -p ${ROOTFS}/home/vagrant/.ssh
echo $VAGRANT_KEY > ${ROOTFS}/home/vagrant/.ssh/authorized_keys
chroot ${ROOTFS} chown -R vagrant: /home/vagrant/.ssh
chroot ${ROOTFS} urpmi sudo --auto
chroot ${ROOTFS} usermod -a -G wheel vagrant
# Enable passwordless sudo for users under the "sudo" group
cp ${ROOTFS}/etc/sudoers{,.orig}
sed -i 's/Defaults requiretty/\# Defaults requiretty/' ${ROOTFS}/etc/sudoers
sed -i 's/\#%wheel/\%wheel/' ${ROOTFS}/etc/sudoers
sed -i 's/\# %wheel/\%wheel/' ${ROOTFS}/etc/sudoers
# sed -i -e \
# 's/%sudo\s\+ALL=(ALL\(:ALL\)\?)\s\+ALL/%sudo ALL=(ALL) NOPASSWD:ALL/g' \
# ${ROOTFS}/etc/sudoers
##################################################################################
# 5 - Add some goodies and update packages
PACKAGES=(vim curl wget man bash-completion openssh-server openssh-clients tar)
chroot ${ROOTFS} urpmi ${PACKAGES[*]} --auto
chroot ${ROOTFS} urpmi.update -a
##################################################################################
# 6 - Configuration management tools
if [ $CHEF = 1 ]; then
./common/install-chef $ROOTFS
fi
if [ $PUPPET = 1 ]; then
./common/install-puppet $ROOTFS
fi
if [ $SALT = 1 ]; then
./common/install-salt $ROOTFS
fi
if [ $BABUSHKA = 1 ]; then
./common/install-babushka $ROOTFS
fi
##################################################################################
# 7 - Free up some disk space
rm -rf ${ROOTFS}/tmp/*
# chroot ${ROOTFS} urpmi clean metadata
##################################################################################
# 8 - Build box package
# Compress container's rootfs
cd $(dirname $ROOTFS)
tar --numeric-owner -czf /tmp/vagrant-lxc-${RELEASE}/rootfs.tar.gz ./rootfs/*
# Prepare package contents
cd $WORKING_DIR
cp $LXC_TEMPLATE lxc-template
cp $LXC_CONF .
cp $METATADA_JSON .
chmod +x lxc-template
sed -i "s/<TODAY>/${NOW}/" metadata.json
# Vagrant box!
tar -czf $PKG ./*
chmod +rw ${WORKING_DIR}/${PKG}
mkdir -p ${CWD}/output
mv ${WORKING_DIR}/${PKG} ${CWD}/output
# Clean up after ourselves
rm -rf ${WORKING_DIR}
echo "The base box was built successfully to ${CWD}/output/${PKG}"

View file

@ -1,27 +0,0 @@
#!/bin/bash
set -e
source common/ui.sh
export RELEASE=$1
export CONTAINER=$2
export PACKAGE=$3
export LOG=$(readlink -f .)/log/${CONTAINER}.log
info "Cleaning ${RELEASE} artifacts..."
# If container exists, check if want to continue
if $(lxc-ls | grep -q ${CONTAINER}); then
log "Removing '${CONTAINER}' container"
lxc-stop -n ${CONTAINER} &>/dev/null || true
lxc-destroy -n ${CONTAINER}
else
log "The container '${CONTAINER}' does not exist"
fi
if [ -e ${PACKAGE} ]; then
log "Removing '${PACKAGE}'"
rm -f ${PACKAGE}
else
log "The package '${PACKAGE}' does not exist"
fi

View file

@ -1,43 +0,0 @@
#!/bin/bash
set -e
source common/ui.sh
source common/utils.sh
# If container exists, check if want to continue
if $(lxc-ls | grep -q ${CONTAINER}); then
if ! $(confirm "The '${CONTAINER}' container already exists, do you want to continue building the box?" 'y'); then
log 'Aborting...'
exit 1
fi
fi
# If container exists and wants to continue building the box
if $(lxc-ls | grep -q ${CONTAINER}); then
if $(confirm "Do you want to rebuild the '${CONTAINER}' container?" 'n'); then
log "Destroying container ${CONTAINER}..."
utils.lxc.stop
utils.lxc.destroy
else
log "Reusing existing container..."
exit 0
fi
fi
# If we got to this point, we need to create the container
log "Creating container..."
if [ $RELEASE = 'raring' ]; then
utils.lxc.create -t ubuntu -- \
--release ${RELEASE} \
--arch ${ARCH}
elif [ $RELEASE = 'squeeze' ]; then
utils.lxc.create -t debian -- \
--release ${RELEASE} \
--arch ${ARCH}
else
utils.lxc.create -t download -- \
--dist ${DISTRIBUTION} \
--release ${RELEASE} \
--arch ${ARCH}
fi
log "Container created!"

View file

@ -1,225 +0,0 @@
#!/bin/bash
# This is a modified version of /usr/share/lxc/templates/lxc-openmandriva
# that comes with OpenMandriva changed to suit vagrant-lxc needs
#
# template script for generating openmandriva container for LXC
#
#
# lxc: linux Container library
# Authors:
# Alexander Khryukin <alexander@mezon.ru>
# Vokhmin Alexey V <avokhmin@gmail.com>
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
# You should have received a copy of the GNU Lesser General Public
# License along with this library; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
set -e
if [ -r /etc/default/lxc ]; then
. /etc/default/lxc
fi
extract_rootfs()
{
tarball=$1
arch=$2
rootfs=$3
echo "Extracting $tarball ..."
mkdir -p $(dirname $rootfs)
(cd `dirname $rootfs` && tar xfz $tarball)
return 0
}
install_openmandriva()
{
rootfs=$1
release=$2
tarball=$3
mkdir -p /var/lock/subsys/
(
flock -x 200
if [ $? -ne 0 ]; then
echo "Cache repository is busy."
return 1
fi
extract_rootfs $tarball $arch $rootfs
if [ $? -ne 0 ]; then
echo "Failed to copy rootfs"
return 1
fi
return 0
) 200>/var/lock/subsys/lxc
return $?
}
copy_configuration()
{
path=$1
rootfs=$2
name=$3
grep -q "^lxc.rootfs" $path/config 2>/dev/null || echo "lxc.rootfs = $rootfs" >> $path/config
# if there is exactly one veth network entry, make sure it has an
# associated hwaddr.
nics=`grep -e '^lxc\.network\.type[ \t]*=[ \t]*veth' $path/config | wc -l`
if [ $nics -eq 1 ]; then
grep -q "^lxc.network.hwaddr" $path/config || sed -i -e "/^lxc\.network\.type[ \t]*=[ \t]*veth/a lxc.network.hwaddr = 00:16:3e:$(openssl rand -hex 3| sed 's/\(..\)/\1:/g; s/.$//')" $path/config
fi
if [ $? -ne 0 ]; then
echo "Failed to add configuration"
return 1
fi
return 0
}
post_process()
{
rootfs=$1
# rmdir /dev/shm for containers that have /run/shm
# I'm afraid of doing rm -rf $rootfs/dev/shm, in case it did
# get bind mounted to the host's /run/shm. So try to rmdir
# it, and in case that fails move it out of the way.
if [ ! -L $rootfs/dev/shm ] && [ -d $rootfs/run/shm ] && [ -e $rootfs/dev/shm ]; then
mv $rootfs/dev/shm $rootfs/dev/shm.bak
ln -s /run/shm $rootfs/dev/shm
fi
}
usage()
{
cat <<EOF
usage:
$1 -n|--name=<container_name>
[-p|--path=<path>] [-c|--clean] [-R|--release=<openmandriva2013.0/rosa2012.1/cooker/ release>]
[-4|--ipv4=<ipv4 address>] [-6|--ipv6=<ipv6 address>]
[-g|--gw=<gw address>] [-d|--dns=<dns address>]
[-P|--profile=<name of the profile>] [--rootfs=<path>]
[-A|--arch=<arch of the container>]
[-T|--tarball <tarball path>]
[-S|--auth-key <auth-key path>]
[-h|--help]
Mandatory args:
-n,--name container name, used to as an identifier for that container from now on
Optional args:
-p,--path path to where the container rootfs will be created, defaults to /var/lib/lxc. The container config will go under /var/lib/lxc in that case
-c,--clean clean the cache
-R,--release openmandriva2013.0/cooker/rosa2012.1 release for the new container. if the host is OpenMandriva, then it will default to the host's release.
-4,--ipv4 specify the ipv4 address to assign to the virtualized interface, eg. 192.168.1.123/24
-6,--ipv6 specify the ipv6 address to assign to the virtualized interface, eg. 2003:db8:1:0:214:1234:fe0b:3596/64
-g,--gw specify the default gw, eg. 192.168.1.1
-G,--gw6 specify the default gw, eg. 2003:db8:1:0:214:1234:fe0b:3596
-d,--dns specify the DNS server, eg. 192.168.1.2
-P,--profile Profile name is the file name in /etc/lxc/profiles contained packages name for install to cache.
-A,--arch Define what arch the container will be [i586,x86_64,armv7l,armv7hl]
---rootfs rootfs path
-h,--help print this help
EOF
return 0
}
options=$(getopt -o hp:n:P:cR:4:6:g:d:A:S:T: -l help,rootfs:,path:,name:,profile:,clean:,release:,ipv4:,ipv6:,gw:,dns:,arch:,auth-key:,tarball: -- "$@")
if [ $? -ne 0 ]; then
usage $(basename $0)
exit 1
fi
eval set -- "$options"
# doesn't use
release=${release:-"cooker"}
hostarch=$(uname -m)
while true
do
case "$1" in
-h|--help) usage $0 && exit 0;;
-p|--path) path=$2; shift 2;;
--rootfs) rootfs_path=$2; shift 2;;
-n|--name) name=$2; shift 2;;
-P|--profile) profile=$2; shift 2;;
-c|--clean) clean=$2; shift 2;;
-R|--release) release=$2; shift 2;;
-T|--tarball) tarball=$2; shift 2;;
-S|--auth-key) auth_key=$2; shift 2;;
-A|--arch) arch=$2; shift 2;;
-4|--ipv4) ipv4=$2; shift 2;;
-6|--ipv6) ipv6=$2; shift 2;;
-g|--gw) gw=$2; shift 2;;
-d|--dns) dns=$2; shift 2;;
--) shift 1; break ;;
*) break ;;
esac
done
arch=${arch:-$hostarch}
if [ $hostarch = "i586" -a $arch = "x86_64" ]; then
echo "can't create x86_64 container on i586"
exit 1
fi
if [ -z "$path" ]; then
echo "'path' parameter is required"
exit 1
fi
if [ "$(id -u)" != "0" ]; then
echo "This script should be run as 'root'"
exit 1
fi
# detect rootfs
config="$path/config"
# if $rootfs exists here, it was passed in with --rootfs
if [ -z "$rootfs" ]; then
if grep -q '^lxc.rootfs' $config 2>/dev/null ; then
rootfs=`grep 'lxc.rootfs =' $config | awk -F= '{ print $2 }'`
else
rootfs=$path/rootfs
fi
fi
install_openmandriva $rootfs $release $tarball
if [ $? -ne 0 ]; then
echo "failed to install openmandriva $release"
exit 1
fi
copy_configuration $path $rootfs $name $arch
if [ $? -ne 0 ]; then
echo "failed write configuration file"
exit 1
fi
post_process $rootfs $release
echo ""
echo "##"
echo "# The default user is 'vagrant' with password 'vagrant'!"
echo "# Use the 'sudo' command to run tasks as root in the container."
echo "##"
echo ""

View file

@ -1,33 +0,0 @@
#!/bin/bash
set -e
source common/ui.sh
# TODO: Create file with build date / time on container
info "Packaging '${CONTAINER}' to '${PACKAGE}'..."
debug 'Stopping container'
lxc-stop -n ${CONTAINER} &>/dev/null || true
if [ -f ${WORKING_DIR}/rootfs.tar.gz ]; then
log "Removing previous rootfs tarball"
rm -f ${WORKING_DIR}/rootfs.tar.gz
fi
log "Compressing container's rootfs"
pushd $(dirname ${ROOTFS}) &>>${LOG}
tar --numeric-owner --anchored --exclude=./rootfs/dev/log -czf \
${WORKING_DIR}/rootfs.tar.gz ./rootfs/*
popd &>>${LOG}
# Prepare package contents
log 'Preparing box package contents'
cp conf/${DISTRIBUTION} ${WORKING_DIR}/lxc-config
cp conf/metadata.json ${WORKING_DIR}
sed -i "s/<TODAY>/${NOW}/" ${WORKING_DIR}/metadata.json
# Vagrant box!
log 'Packaging box'
TARBALL=$(readlink -f ${PACKAGE})
(cd ${WORKING_DIR} && tar -czf $TARBALL ./*)

View file

@ -1,46 +0,0 @@
#!/bin/bash
set -e
source common/ui.sh
export VAGRANT_KEY="ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA6NF8iallvQVp22WDkTkyrtvp9eWW6A8YVr+kz4TjGYe7gHzIw+niNltGEFHzD8+v1I2YJ6oXevct1YeS0o9HZyN1Q9qgCgzUFtdOKLv6IedplqoPkcmF0aYet2PkEDo3MlTBckFXPITAMzF8dJSIFo9D8HfdOV0IAdx4O7PtixWKn5y2hMNG0zQPyUecp4pzC6kivAIhyfHilFR61RGL+GPXQ2MWZWFYbAGjyiYJnAmCP3NOTd0jMZEnDkbUvxhMmBYSdETk1rRgm+R4LOzFUGaHqHDLKLX+FIPKcF96hrucXzcWyLbIbEgE98OHlnVYCzRdK8jlqm8tehUc9c9WhQ== vagrant insecure public key"
info "Preparing vagrant user..."
# Create vagrant user
if $(grep -q 'vagrant' ${ROOTFS}/etc/shadow); then
log 'Skipping vagrant user creation'
elif $(grep -q 'ubuntu' ${ROOTFS}/etc/shadow); then
debug 'vagrant user does not exist, renaming ubuntu user...'
mv ${ROOTFS}/home/{ubuntu,vagrant}
chroot ${ROOTFS} usermod -l vagrant -d /home/vagrant ubuntu &>> ${LOG}
chroot ${ROOTFS} groupmod -n vagrant ubuntu &>> ${LOG}
echo -n 'vagrant:vagrant' | chroot ${ROOTFS} chpasswd
log 'Renamed ubuntu user to vagrant and changed password.'
else
debug 'Creating vagrant user...'
chroot ${ROOTFS} useradd --create-home -s /bin/bash vagrant &>> ${LOG}
chroot ${ROOTFS} adduser vagrant sudo &>> ${LOG}
echo -n 'vagrant:vagrant' | chroot ${ROOTFS} chpasswd
fi
# Configure SSH access
if [ -d ${ROOTFS}/home/vagrant/.ssh ]; then
log 'Skipping vagrant SSH credentials configuration'
else
debug 'SSH key has not been set'
mkdir -p ${ROOTFS}/home/vagrant/.ssh
echo $VAGRANT_KEY > ${ROOTFS}/home/vagrant/.ssh/authorized_keys
chroot ${ROOTFS} chown -R vagrant: /home/vagrant/.ssh
log 'SSH credentials configured for the vagrant user.'
fi
# Enable passwordless sudo for the vagrant user
if [ -f ${ROOTFS}/etc/sudoers.d/vagrant ]; then
log 'Skipping sudoers file creation.'
else
debug 'Sudoers file was not found'
echo "vagrant ALL=(ALL) NOPASSWD:ALL" > ${ROOTFS}/etc/sudoers.d/vagrant
chmod 0440 ${ROOTFS}/etc/sudoers.d/vagrant
log 'Sudoers file created.'
fi

View file

@ -1,53 +0,0 @@
#!/bin/bash
export NO_COLOR='\033[0m'
export OK_COLOR='\033[32;01m'
export ERROR_COLOR='\033[31;01m'
export WARN_COLOR='\033[33;01m'
log() {
echo " [${RELEASE}] ${1}" >>${LOG}
echo " [${RELEASE}] ${1}" >&2
}
warn() {
echo "==> [${RELEASE}] [WARN] ${1}" >>${LOG}
echo -e "${WARN_COLOR}==> [${RELEASE}] ${1}${NO_COLOR}"
}
info() {
echo "==> [${RELEASE}] [INFO] ${1}" >>${LOG}
echo -e "${OK_COLOR}==> [${RELEASE}] ${1}${NO_COLOR}"
}
confirm() {
question=${1}
default=${2}
default_prompt=
if [ $default = 'n' ]; then
default_prompt="y/N"
default='No'
else
default_prompt="Y/n"
default='Yes'
fi
echo -e -n "${WARN_COLOR}==> [${RELEASE}] ${question} [${default_prompt}] ${NO_COLOR}" >&2
read answer
if [ -z $answer ]; then
debug "Answer not provided, assuming '${default}'"
answer=${default}
fi
if $(echo ${answer} | grep -q -i '^y'); then
return 0
else
return 1
fi
}
debug() {
[ ! $DEBUG ] || echo " [${RELEASE}] [DEBUG] ${1}" >&2
}

View file

@ -1,23 +0,0 @@
#!/bin/bash
utils.lxc.attach() {
cmd="$@"
log "Running [${cmd}] inside '${CONTAINER}' container..."
(lxc-attach -n ${CONTAINER} -- $cmd) &>> ${LOG}
}
utils.lxc.start() {
lxc-start -d -n ${CONTAINER} &>>${LOG} || true
}
utils.lxc.stop() {
lxc-stop -n ${CONTAINER} &>>${LOG} || true
}
utils.lxc.destroy() {
lxc-destroy -n ${CONTAINER} &>>${LOG}
}
utils.lxc.create() {
lxc-create -n ${CONTAINER} "$@" &>>${LOG}
}

View file

@ -1,62 +0,0 @@
# Default pivot location
lxc.pivotdir = lxc_putold
# Default mount entries
lxc.mount.entry = proc proc proc nodev,noexec,nosuid 0 0
lxc.mount.entry = sysfs sys sysfs defaults 0 0
lxc.mount.entry = /sys/fs/fuse/connections sys/fs/fuse/connections none bind,optional 0 0
# Default console settings
lxc.tty = 4
lxc.pts = 1024
# Default capabilities
lxc.cap.drop = sys_module mac_admin mac_override sys_time
# When using LXC with apparmor, the container will be confined by default.
# If you wish for it to instead run unconfined, copy the following line
# (uncommented) to the container's configuration file.
#lxc.aa_profile = unconfined
# To support container nesting on an Ubuntu host while retaining most of
# apparmor's added security, use the following two lines instead.
#lxc.aa_profile = lxc-container-default-with-nesting
#lxc.hook.mount = /usr/share/lxc/hooks/mountcgroups
# If you wish to allow mounting block filesystems, then use the following
# line instead, and make sure to grant access to the block device and/or loop
# devices below in lxc.cgroup.devices.allow.
#lxc.aa_profile = lxc-container-default-with-mounting
# Default cgroup limits
lxc.cgroup.devices.deny = a
## Allow any mknod (but not using the node)
lxc.cgroup.devices.allow = c *:* m
lxc.cgroup.devices.allow = b *:* m
## /dev/null and zero
lxc.cgroup.devices.allow = c 1:3 rwm
lxc.cgroup.devices.allow = c 1:5 rwm
## consoles
lxc.cgroup.devices.allow = c 5:0 rwm
lxc.cgroup.devices.allow = c 5:1 rwm
## /dev/{,u}random
lxc.cgroup.devices.allow = c 1:8 rwm
lxc.cgroup.devices.allow = c 1:9 rwm
## /dev/pts/*
lxc.cgroup.devices.allow = c 5:2 rwm
lxc.cgroup.devices.allow = c 136:* rwm
## rtc
lxc.cgroup.devices.allow = c 254:0 rm
## fuse
lxc.cgroup.devices.allow = c 10:229 rwm
## tun
lxc.cgroup.devices.allow = c 10:200 rwm
## full
lxc.cgroup.devices.allow = c 1:7 rwm
## hpet
lxc.cgroup.devices.allow = c 10:228 rwm
## kvm
lxc.cgroup.devices.allow = c 10:232 rwm
## To use loop devices, copy the following line to the container's
## configuration file (uncommented).
#lxc.cgroup.devices.allow = b 7:* rwm

View file

@ -1,5 +0,0 @@
{
"provider": "lxc",
"version": "1.0.0",
"built-on": "<TODAY>"
}

View file

@ -1,70 +0,0 @@
# Default pivot location
lxc.pivotdir = lxc_putold
# Default mount entries
lxc.mount.entry = proc proc proc nodev,noexec,nosuid 0 0
lxc.mount.entry = sysfs sys sysfs defaults 0 0
lxc.mount.entry = /sys/fs/fuse/connections sys/fs/fuse/connections none bind,optional 0 0
lxc.mount.entry = /sys/kernel/debug sys/kernel/debug none bind,optional 0 0
lxc.mount.entry = /sys/kernel/security sys/kernel/security none bind,optional 0 0
lxc.mount.entry = /sys/fs/pstore sys/fs/pstore none bind,optional 0 0
# Default console settings
lxc.devttydir = lxc
lxc.tty = 4
lxc.pts = 1024
# Default capabilities
lxc.cap.drop = sys_module mac_admin mac_override sys_time
# When using LXC with apparmor, the container will be confined by default.
# If you wish for it to instead run unconfined, copy the following line
# (uncommented) to the container's configuration file.
#lxc.aa_profile = unconfined
# To support container nesting on an Ubuntu host while retaining most of
# apparmor's added security, use the following two lines instead.
#lxc.aa_profile = lxc-container-default-with-nesting
#lxc.hook.mount = /usr/share/lxc/hooks/mountcgroups
# Uncomment the following line to autodetect squid-deb-proxy configuration on the
# host and forward it to the guest at start time.
#lxc.hook.pre-start = /usr/share/lxc/hooks/squid-deb-proxy-client
# If you wish to allow mounting block filesystems, then use the following
# line instead, and make sure to grant access to the block device and/or loop
# devices below in lxc.cgroup.devices.allow.
#lxc.aa_profile = lxc-container-default-with-mounting
# Default cgroup limits
lxc.cgroup.devices.deny = a
## Allow any mknod (but not using the node)
lxc.cgroup.devices.allow = c *:* m
lxc.cgroup.devices.allow = b *:* m
## /dev/null and zero
lxc.cgroup.devices.allow = c 1:3 rwm
lxc.cgroup.devices.allow = c 1:5 rwm
## consoles
lxc.cgroup.devices.allow = c 5:0 rwm
lxc.cgroup.devices.allow = c 5:1 rwm
## /dev/{,u}random
lxc.cgroup.devices.allow = c 1:8 rwm
lxc.cgroup.devices.allow = c 1:9 rwm
## /dev/pts/*
lxc.cgroup.devices.allow = c 5:2 rwm
lxc.cgroup.devices.allow = c 136:* rwm
## rtc
lxc.cgroup.devices.allow = c 254:0 rm
## fuse
lxc.cgroup.devices.allow = c 10:229 rwm
## tun
lxc.cgroup.devices.allow = c 10:200 rwm
## full
lxc.cgroup.devices.allow = c 1:7 rwm
## hpet
lxc.cgroup.devices.allow = c 10:228 rwm
## kvm
lxc.cgroup.devices.allow = c 10:232 rwm
## To use loop devices, copy the following line to the container's
## configuration file (uncommented).
#lxc.cgroup.devices.allow = b 7:* rwm

View file

@ -1,16 +0,0 @@
#!/bin/bash
set -e
source common/ui.sh
source common/utils.sh
debug 'Bringing container up'
utils.lxc.start
info "Cleaning up '${CONTAINER}'..."
log 'Removing temporary files...'
rm -rf ${ROOTFS}/tmp/*
log 'Removing downloaded packages...'
utils.lxc.attach apt-get clean

View file

@ -1,119 +0,0 @@
#!/bin/bash
set -e
source common/ui.sh
source common/utils.sh
info 'Installing extra packages and upgrading'
debug 'Bringing container up'
utils.lxc.start
# Sleep for a bit so that the container can get an IP
log 'Sleeping for 5 seconds...'
sleep 5
# TODO: Support for appending to this list from outside
PACKAGES=(vim curl wget man-db bash-completion python-software-properties ca-certificates sudo)
if [ $DISTRIBUTION = 'ubuntu' ]; then
PACKAGES+=' software-properties-common'
fi
if [ $RELEASE != 'raring' ] && [ $RELEASE != 'saucy' ] && [ $RELEASE != 'trusty' ] ; then
PACKAGES+=' nfs-common'
fi
utils.lxc.attach apt-get update
utils.lxc.attach apt-get install ${PACKAGES[*]} -y --force-yes
utils.lxc.attach apt-get upgrade -y --force-yes
CHEF=${CHEF:-0}
PUPPET=${PUPPET:-0}
SALT=${SALT:-0}
BABUSHKA=${BABUSHKA:-0}
if [ $DISTRIBUTION = 'debian' ]; then
# Enable bash-completion
sed -e '/^#if ! shopt -oq posix; then/,/^#fi/ s/^#\(.*\)/\1/g' \
-i ${ROOTFS}/etc/bash.bashrc
fi
if [ $CHEF = 1 ]; then
if $(lxc-attach -n ${CONTAINER} -- which chef-solo &>/dev/null); then
log "Chef has been installed on container, skipping"
else
log "Installing Chef"
cat > ${ROOTFS}/tmp/install-chef.sh << EOF
#!/bin/sh
curl -L https://www.opscode.com/chef/install.sh -k | sudo bash
EOF
chmod +x ${ROOTFS}/tmp/install-chef.sh
utils.lxc.attach /tmp/install-chef.sh
fi
else
log "Skipping Chef installation"
fi
if [ $PUPPET = 1 ]; then
if $(lxc-attach -n ${CONTAINER} -- which puppet &>/dev/null); then
log "Puppet has been installed on container, skipping"
elif [ ${RELEASE} = 'trusty' ]; then
warn "Puppet can't be installed on Ubuntu Trusty 14.04, skipping"
elif [ ${RELEASE} = 'sid' ]; then
warn "Puppet can't be installed on Debian sid, skipping"
else
log "Installing Puppet"
wget http://apt.puppetlabs.com/puppetlabs-release-stable.deb -O "${ROOTFS}/tmp/puppetlabs-release-stable.deb" &>>${LOG}
utils.lxc.attach dpkg -i "/tmp/puppetlabs-release-stable.deb"
utils.lxc.attach apt-get update
utils.lxc.attach apt-get install puppet -y --force-yes
fi
else
log "Skipping Puppet installation"
fi
if [ $SALT = 1 ]; then
if $(lxc-attach -n ${CONTAINER} -- which salt-minion &>/dev/null); then
log "Salt has been installed on container, skipping"
elif [ ${RELEASE} = 'raring' ]; then
warn "Salt can't be installed on Ubuntu Raring 13.04, skipping"
else
if [ $DISTRIBUTION = 'ubuntu' ]; then
utils.lxc.attach add-apt-repository -y ppa:saltstack/salt
else # DEBIAN
if [ $RELEASE == "squeeze" ]; then
SALT_SOURCE_1="deb http://debian.saltstack.com/debian squeeze-saltstack main"
SALT_SOURCE_2="deb http://backports.debian.org/debian-backports squeeze-backports main contrib non-free"
elif [ $RELEASE == "wheezy" ]; then
SALT_SOURCE_1="deb http://debian.saltstack.com/debian wheezy-saltstack main"
else
SALT_SOURCE_1="deb http://debian.saltstack.com/debian unstable main"
fi
echo $SALT_SOURCE_1 > ${ROOTFS}/etc/apt/sources.list.d/saltstack.list
echo $SALT_SOURCE_2 >> ${ROOTFS}/etc/apt/sources.list.d/saltstack.list
utils.lxc.attach wget -q -O /tmp/salt.key "http://debian.saltstack.com/debian-salt-team-joehealy.gpg.key"
utils.lxc.attach apt-key add /tmp/salt.key
fi
utils.lxc.attach apt-get update
utils.lxc.attach apt-get install salt-minion -y --force-yes
fi
else
log "Skipping Salt installation"
fi
if [ $BABUSHKA = 1 ]; then
if $(lxc-attach -n ${CONTAINER} -- which babushka &>/dev/null); then
log "Babushka has been installed on container, skipping"
elif [ ${RELEASE} = 'trusty' ]; then
warn "Babushka can't be installed on Ubuntu Trusty 14.04, skipping"
else
log "Installing Babushka"
cat > $ROOTFS/tmp/install-babushka.sh << EOF
#!/bin/sh
curl https://babushka.me/up | sudo bash
EOF
chmod +x $ROOTFS/tmp/install-babushka.sh
utils.lxc.attach /tmp/install-babushka.sh
fi
else
log "Skipping Babushka installation"
fi

View file

@ -1,33 +0,0 @@
#!/bin/bash
set -e
source common/ui.sh
source common/utils.sh
# Fixes some networking issues
# See https://github.com/fgrehm/vagrant-lxc/issues/91 for more info
if ! $(grep -q 'ip6-allhosts' ${ROOTFS}/etc/hosts); then
log "Adding ipv6 allhosts entry to container's /etc/hosts"
echo 'ff02::3 ip6-allhosts' >> ${ROOTFS}/etc/hosts
fi
utils.lxc.start
if [ ${DISTRIBUTION} = 'debian' ]; then
# Ensure locales are properly set, based on http://askubuntu.com/a/238063
LANG=${LANG:-en_US.UTF-8}
sed -i "s/^# ${LANG}/${LANG}/" ${ROOTFS}/etc/locale.gen
# Fixes some networking issues
# See https://github.com/fgrehm/vagrant-lxc/issues/91 for more info
sed -i -e "s/\(127.0.0.1\s\+localhost\)/\1\n127.0.1.1\t${RELEASE}-base\n/g" ${ROOTFS}/etc/hosts
# Ensures that `/tmp` does not get cleared on halt
# See https://github.com/fgrehm/vagrant-lxc/issues/68 for more info
utils.lxc.attach /usr/sbin/update-rc.d -f checkroot-bootclean.sh remove
utils.lxc.attach /usr/sbin/update-rc.d -f mountall-bootclean.sh remove
utils.lxc.attach /usr/sbin/update-rc.d -f mountnfs-bootclean.sh remove
fi
utils.lxc.attach /usr/sbin/locale-gen ${LANG}
utils.lxc.attach update-locale LANG=${LANG}

View file

@ -1,47 +0,0 @@
#!/bin/bash
set -e
source common/ui.sh
if [ "$(id -u)" != "0" ]; then
echo "You should run this script as root (sudo)."
exit 1
fi
export DISTRIBUTION=$1
export RELEASE=$2
export ARCH=$3
export CONTAINER=$4
export PACKAGE=$5
export ROOTFS="/var/lib/lxc/${CONTAINER}/rootfs"
export WORKING_DIR="/tmp/${CONTAINER}"
export NOW=$(date -u)
export LOG=$(readlink -f .)/log/${CONTAINER}.log
mkdir -p $(dirname $LOG)
echo '############################################' > ${LOG}
echo "# Beginning build at $(date)" >> ${LOG}
touch ${LOG}
chmod +rw ${LOG}
if [ -f ${PACKAGE} ]; then
warn "The box '${PACKAGE}' already exists, skipping..."
echo
exit
fi
debug "Creating ${WORKING_DIR}"
mkdir -p ${WORKING_DIR}
info "Building box to '${PACKAGE}'..."
./common/download.sh ${DISTRIBUTION} ${RELEASE} ${ARCH} ${CONTAINER}
./debian/vagrant-lxc-fixes.sh ${DISTRIBUTION} ${RELEASE} ${ARCH} ${CONTAINER}
./debian/install-extras.sh ${CONTAINER}
./common/prepare-vagrant-user.sh ${CONTAINER}
./debian/clean.sh ${CONTAINER}
./common/package.sh ${CONTAINER} ${PACKAGE}
info "Finished building '${PACKAGE}'!"
log "Run \`sudo lxc-destroy -n ${CONTAINER}\` or \`make clean\` to remove the container that was created along the way"
echo