Commit 76af43f7 authored by Nguyễn Văn Vũ


Week4 - Use an Ansible playbook to install Docker on a remote host, and use roles to install applications on a remote host
parent 58abaaca
---
- name: Install Docker
  hosts: all
  become: yes
  tasks:
    - name: Update apt cache (for Ubuntu/Debian)
      apt:
        update_cache: yes
      when: ansible_os_family == "Debian"

    - name: Install required packages
      package:
        name:
          - apt-transport-https
          - ca-certificates
          - curl
          - gnupg-agent
          - software-properties-common
        state: present
      when: ansible_os_family == "Debian"

    - name: Add Docker's official GPG key
      apt_key:
        url: https://download.docker.com/linux/ubuntu/gpg
        state: present
      when: ansible_os_family == "Debian"

    - name: Add Docker APT repository
      apt_repository:
        repo: deb [arch=amd64] https://download.docker.com/linux/ubuntu {{ ansible_distribution_release }} stable
        state: present
      when: ansible_os_family == "Debian"

    - name: Update apt cache after adding the Docker repository
      apt:
        update_cache: yes
      when: ansible_os_family == "Debian"

    - name: Install Docker
      apt:
        name: docker-ce
        state: present
      when: ansible_os_family == "Debian"
---
- name: Install applications on remote host
  hosts: all
  become: yes
  roles:
    - nginx
    - ceph
    - minio
    - mongodb-replica
    - postgresql
    - kafka
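
To run these plays against the remote hosts, an invocation along the following lines could be used (a minimal sketch; the playbook file names install-docker.yml and install-apps.yml and the inventory file hosts are illustrative, not part of this commit):

    ansible-playbook -i hosts install-docker.yml
    ansible-playbook -i hosts install-apps.yml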
# Contributing to ceph-ansible
1. Follow the [commit guidelines](#commit-guidelines)
## Commit guidelines
- All commits should have a subject and a body
- The commit subject should briefly describe what the commit changes
- The commit body should describe the problem addressed and the chosen solution
- What was the problem and solution? Why that solution? Were there alternative ideas?
- Wrap commit subjects and bodies to 80 characters
- Sign-off your commits
- Add a best-effort scope designation to commit subjects. This could be a directory name, file name,
or the name of a logical grouping of code. Examples:
- library: add a placeholder module for the validate action plugin
- site.yml: combine validate play with fact gathering play
- Commits linked to an issue should reference it with:
- Fixes: #2653
[Suggested reading.](https://chris.beams.io/posts/git-commit/)
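For illustration only, a commit message following these guidelines might look like this (the subject and issue number reuse the examples above; the body text is hypothetical):

```
site.yml: combine validate play with fact gathering play

Running validation as a separate play forced a second round of fact
gathering on every host. Folding it into the existing play avoids the
extra pass. Keeping the plays separate was considered but rejected as
slower.

Fixes: #2653

Signed-off-by: Jane Doe <jane@example.com>
```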
## Pull requests
### Jenkins CI
We use Jenkins to run several tests on each pull request.
If you don't want to run a build for a particular pull request, because all you are changing is the
README for example, add the text `[skip ci]` to the PR title.
### Merging strategy
Merging PRs is controlled by [mergify](https://mergify.io/) according to the following rules:
- at least one approval from a maintainer
- a SUCCESS from the CI pipeline "ceph-ansible PR Pipeline"
If your work is not ready for review/merge, please request the DNM label via a comment or in the title of your PR.
This will prevent the engine from merging your pull request.
### Backports (maintainers only)
If you wish to see your work from 'main' backported to a stable branch, you can ping a maintainer
so they set the backport label on your PR. Once the PR from main is merged, a backport PR will be created by mergify;
if there is a cherry-pick conflict you must resolve it by pulling the branch.
**NEVER** push directly into a stable branch, **unless** the code from main has diverged so much that the files don't exist in the stable branch.
If that happens, inform the maintainers of the reasons why you pushed directly into a stable branch; if the reason is invalid, maintainers will immediately close your pull request.
## Good to know
### Sample files
The sample files we provide in `group_vars/` are versioned;
they are a copy of what their respective `./roles/<role>/defaults/main.yml` contain.
This means that if you push a patch modifying one of these files:
- `./roles/ceph-mds/defaults/main.yml`
- `./roles/ceph-mgr/defaults/main.yml`
- `./roles/ceph-fetch-keys/defaults/main.yml`
- `./roles/ceph-rbd-mirror/defaults/main.yml`
- `./roles/ceph-defaults/defaults/main.yml`
- `./roles/ceph-osd/defaults/main.yml`
- `./roles/ceph-nfs/defaults/main.yml`
- `./roles/ceph-client/defaults/main.yml`
- `./roles/ceph-common/defaults/main.yml`
- `./roles/ceph-mon/defaults/main.yml`
- `./roles/ceph-rgw/defaults/main.yml`
- `./roles/ceph-container-common/defaults/main.yml`
- `./roles/ceph-common-coreos/defaults/main.yml`
You will have to get the corresponding sample file updated; there is a script which does it for you.
You must run `./generate_group_vars_sample.sh` before you commit your changes so you are guaranteed to have consistent content for these files.
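For example (the exact files to stage depend on which role you touched):

```
./generate_group_vars_sample.sh
git add group_vars/
git commit -s
```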
### Keep your branch up-to-date
Sometimes a pull request can be subject to long discussion, reviews and comments; meanwhile, `main`
moves forward, so try to keep your branch rebased on main regularly to avoid huge merge conflicts.
A rebased branch is easier and quicker to merge.
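One common way to do this, assuming your branch was created from `origin/main` and has already been pushed:

```
git fetch origin
git rebase origin/main
git push -f origin my-working-branch
```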
### Organize your commits
Do not split your commits unnecessarily; we often see pull requests with useless additional commits like
"I'm addressing reviewer's comments". Please squash and/or amend them as much as possible.
Similarly, split commits when needed: if you are modifying several parts of ceph-ansible or pushing a large
patch, you may have to split your commits properly so your work is easier to understand.
Some recommendations:
- one fix = one commit,
- do not mix multiple topics in a single commit,
- if your PR contains a large number of commits that are totally unrelated to each other, it should probably be split into several PRs.
If you've broken your work up into a set of sequential changes and each commit passes the tests on its own, then that's fine.
If you've got commits fixing typos or other problems introduced by previous commits in the same PR, then those should be squashed before merging.
If you are new to Git, these links might help:
- [https://git-scm.com/book/en/v2/Git-Tools-Rewriting-History](https://git-scm.com/book/en/v2/Git-Tools-Rewriting-History)
- [http://gitready.com/advanced/2009/02/10/squashing-commits-with-rebase.html](http://gitready.com/advanced/2009/02/10/squashing-commits-with-rebase.html)
# Makefile for constructing RPMs.
# Try "make" (for SRPMS) or "make rpm"
NAME = ceph-ansible
# Set the RPM package NVR from "git describe".
# Examples:
#
# A "git describe" value of "v2.2.0beta1" would create an NVR
# "ceph-ansible-2.2.0-0.beta1.1.el8"
#
# A "git describe" value of "v2.2.0rc1" would create an NVR
# "ceph-ansible-2.2.0-0.rc1.1.el8"
#
# A "git describe" value of "v2.2.0rc1-1-gc465f85" would create an NVR
# "ceph-ansible-2.2.0-0.rc1.1.gc465f85.el8"
#
# A "git describe" value of "v2.2.0" creates an NVR
# "ceph-ansible-2.2.0-1.el8"
DIST ?= "el8"
MOCK_CONFIG ?= "centos-stream+epel-8-x86_64"
TAG := $(shell git describe --tags --abbrev=0 --match 'v*')
VERSION := $(shell echo $(TAG) | sed 's/^v//')
COMMIT := $(shell git rev-parse HEAD)
SHORTCOMMIT := $(shell echo $(COMMIT) | cut -c1-7)
RELEASE := $(shell git describe --tags --match 'v*' \
| sed 's/^v//' \
| sed 's/^[^-]*-//' \
| sed 's/-/./')
ifeq ($(VERSION),$(RELEASE))
RELEASE = 1
endif
ifneq (,$(findstring alpha,$(VERSION)))
ALPHA := $(shell echo $(VERSION) | sed 's/.*alpha/alpha/')
RELEASE := 0.$(ALPHA).$(RELEASE)
VERSION := $(subst $(ALPHA),,$(VERSION))
endif
ifneq (,$(findstring beta,$(VERSION)))
BETA := $(shell echo $(VERSION) | sed 's/.*beta/beta/')
RELEASE := 0.$(BETA).$(RELEASE)
VERSION := $(subst $(BETA),,$(VERSION))
endif
ifneq (,$(findstring rc,$(VERSION)))
RC := $(shell echo $(VERSION) | sed 's/.*rc/rc/')
RELEASE := 0.$(RC).$(RELEASE)
VERSION := $(subst $(RC),,$(VERSION))
endif
ifneq (,$(shell echo $(VERSION) | grep [a-zA-Z]))
# If we still have alpha characters in our Git tag string, we don't know
# how to translate that into a sane RPM version/release. Bail out.
$(error cannot translate Git tag version $(VERSION) to an RPM NVR)
endif
NVR := $(NAME)-$(VERSION)-$(RELEASE).$(DIST)
all: srpm
# Testing only
echo:
echo COMMIT $(COMMIT)
echo VERSION $(VERSION)
echo RELEASE $(RELEASE)
echo NVR $(NVR)
clean:
rm -rf dist/
rm -rf ceph-ansible-$(VERSION)-$(SHORTCOMMIT).tar.gz
rm -rf $(NVR).src.rpm
dist:
git archive --format=tar.gz --prefix=ceph-ansible-$(VERSION)/ HEAD > ceph-ansible-$(VERSION)-$(SHORTCOMMIT).tar.gz
spec:
sed ceph-ansible.spec.in \
-e 's/@COMMIT@/$(COMMIT)/' \
-e 's/@VERSION@/$(VERSION)/' \
-e 's/@RELEASE@/$(RELEASE)/' \
> ceph-ansible.spec
srpm: dist spec
rpmbuild -bs ceph-ansible.spec \
--define "_topdir ." \
--define "_sourcedir ." \
--define "_srcrpmdir ." \
--define "dist .$(DIST)"
rpm: dist srpm
mock -r $(MOCK_CONFIG) rebuild $(NVR).src.rpm \
--resultdir=. \
--define "dist .$(DIST)"
tag:
$(eval BRANCH := $(shell git rev-parse --abbrev-ref HEAD))
$(eval LASTNUM := $(shell echo $(TAG) \
| sed -E "s/.*[^0-9]([0-9]+)$$/\1/"))
$(eval NEXTNUM=$(shell echo $$(($(LASTNUM)+1))))
$(eval NEXTTAG=$(shell echo $(TAG) | sed "s/$(LASTNUM)$$/$(NEXTNUM)/"))
if [[ "$(TAG)" == "$(git describe --tags --match 'v*')" ]]; then \
echo "$(SHORTCOMMIT) on $(BRANCH) is already tagged as $(TAG)"; \
exit 1; \
fi
if [[ "$(BRANCH)" != "master" || "$(BRANCH)" != "main" ]] && \
! [[ "$(BRANCH)" =~ ^stable- ]]; then \
echo Cannot tag $(BRANCH); \
exit 1; \
fi
@echo Tagging Git branch $(BRANCH)
git tag $(NEXTTAG)
@echo run \'git push origin $(NEXTTAG)\' to push to GitHub.
.PHONY: dist rpm srpm tag
ceph-ansible is deprecated, please migrate to ``cephadm`` instead.
==================================================================
``cephadm`` is the new official Ceph installer.
See ``cephadm`` documentation [1]
[1] https://docs.ceph.com/en/latest/cephadm/
# Comments inside this file must be set BEFORE the option.
# NOT after the option, otherwise the comment will be interpreted as a value to that option.
[defaults]
ansible_managed = Please do not change this file directly since it is managed by Ansible and will be overwritten
library = ./library
module_utils = ./module_utils
action_plugins = plugins/actions
callback_plugins = plugins/callback
filter_plugins = plugins/filter
roles_path = ./roles
# Be sure the user running Ansible has permissions on the logfile
log_path = $HOME/ansible/ansible.log
forks = 20
host_key_checking = False
gathering = smart
fact_caching = jsonfile
fact_caching_connection = $HOME/ansible/facts
fact_caching_timeout = 7200
nocows = 1
callback_allowlist = profile_tasks
stdout_callback = yaml
force_valid_group_names = ignore
inject_facts_as_vars = False
# Disable them in the context of https://review.openstack.org/#/c/469644
retry_files_enabled = False
# This is the default SSH timeout to use on connection attempts
# CI slaves are slow so by setting a higher value we can avoid the following error:
# Timeout (12s) waiting for privilege escalation prompt:
timeout = 60
[ssh_connection]
# see: https://github.com/ansible/ansible/issues/11536
control_path = %(directory)s/%%h-%%r-%%p
ssh_args = -o ControlMaster=auto -o ControlPersist=600s
pipelining = True
# Option to retry failed ssh executions if the failure is encountered in ssh itself
retries = 10
%global commit @COMMIT@
%global shortcommit %(c=%{commit}; echo ${c:0:7})
Name: ceph-ansible
Version: @VERSION@
Release: @RELEASE@%{?dist}
Summary: Ansible playbooks for Ceph
# Some files have been copied from Ansible (GPLv3+). For example:
# plugins/actions/config_template.py
# roles/ceph-common/plugins/actions/config_template.py
License: ASL 2.0 and GPLv3+
URL: https://github.com/ceph/ceph-ansible
Source0: %{name}-%{version}-%{shortcommit}.tar.gz
Obsoletes: ceph-iscsi-ansible <= 1.5
BuildArch: noarch
BuildRequires: ansible-core >= 2.14
Requires: ansible-core >= 2.14
%if 0%{?rhel} == 7
BuildRequires: python2-devel
Requires: python2-netaddr
%else
BuildRequires: python3-devel
Requires: python3-netaddr
%endif
%description
Ansible playbooks for Ceph
%prep
%autosetup -p1
%build
%install
mkdir -p %{buildroot}%{_datarootdir}/ceph-ansible
for f in ansible.cfg *.yml *.sample group_vars roles library module_utils plugins infrastructure-playbooks; do
cp -a $f %{buildroot}%{_datarootdir}/ceph-ansible
done
pushd %{buildroot}%{_datarootdir}/ceph-ansible
# These untested playbooks are too unstable for users.
rm -r infrastructure-playbooks/untested-by-ci
%if ! 0%{?fedora} && ! 0%{?centos}
# remove ability to install ceph community version
rm roles/ceph-common/tasks/installs/redhat_{community,dev}_repository.yml
%endif
popd
%check
# Borrowed from upstream's .travis.yml:
ansible-playbook -i dummy-ansible-hosts test.yml --syntax-check
%files
%doc README.rst
%license LICENSE
%{_datarootdir}/ceph-ansible
%changelog
#!/usr/bin/env bash
set -e
shopt -s extglob # enable extended pattern matching features
#############
# VARIABLES #
#############
stable_branch=$1
commit=$2
bkp_branch_name=$3
bkp_branch_name_prefix=bkp
bkp_branch=$bkp_branch_name-$bkp_branch_name_prefix-$stable_branch
#############
# FUNCTIONS #
#############
verify_commit () {
for com in ${commit//,/ }; do
if [[ $(git cat-file -t "$com" 2>/dev/null) != commit ]]; then
echo "$com does not exist in your tree"
echo "Run 'git fetch origin main && git pull origin main'"
exit 1
fi
done
}
git_status () {
if [[ $(git status --porcelain | wc -l) -gt 0 ]]; then
echo "It looks like you have not committed changes:"
echo ""
git status --short
echo ""
echo ""
echo "Press ENTER to continue or Ctrl+c to break."
read -r
fi
}
checkout () {
git checkout --no-track -b "$bkp_branch" origin/"$stable_branch"
}
cherry_pick () {
local x
for com in ${commit//,/ }; do
x="$x $com"
done
# Trim the first white space and use an array
# Reference: https://github.com/koalaman/shellcheck/wiki/SC2086#exceptions
x=(${x##*( )})
git cherry-pick -x -s "${x[@]}"
}
push () {
git push origin "$bkp_branch"
}
create_pr () {
hub pull-request -h ceph/ceph-ansible:"$bkp_branch" -b "$stable_branch" -F -
}
cleanup () {
echo "Moving back to previous branch"
git checkout -
git branch -D "$bkp_branch"
}
test_args () {
if [ $# -lt 3 ]; then
echo "Please run the script like this: ./contrib/backport_to_stable_branch.sh STABLE_BRANCH_NAME COMMIT_SHA1 BACKPORT_BRANCH_NAME"
echo "We accept multiple commits as soon as they are commas-separated."
echo "e.g: ./contrib/backport_to_stable_branch.sh stable-2.2 6892670d317698771be7e96ce9032bc27d3fd1e5,8756c553cc8c213fc4996fc5202c7b687eb645a3 my-work"
exit 1
fi
}
########
# MAIN #
########
test_args "$@"
git_status
verify_commit
checkout
cherry_pick
push
create_pr <<MSG
${4} Backport of ${3} in $stable_branch
Backport of #${3} in $stable_branch
MSG
cleanup
#!/bin/bash
set -xe
# VARIABLES
BASEDIR=$(dirname "$0")
LOCAL_BRANCH=$(cd $BASEDIR && git rev-parse --abbrev-ref HEAD)
ROLES="ceph-common ceph-mon ceph-osd ceph-mds ceph-rgw ceph-fetch-keys ceph-rbd-mirror ceph-client ceph-container-common ceph-mgr ceph-defaults ceph-config"
# FUNCTIONS
function goto_basedir {
TOP_LEVEL=$(cd $BASEDIR && git rev-parse --show-toplevel)
if [[ "$(pwd)" != "$TOP_LEVEL" ]]; then
pushd "$TOP_LEVEL"
fi
}
function check_existing_remote {
if ! git remote show "$1" &> /dev/null; then
git remote add "$1" git@github.com:/ceph/ansible-"$1".git
fi
}
function pull_origin {
git pull origin main
}
function reset_hard_origin {
# let's bring everything back to normal
git checkout "$LOCAL_BRANCH"
git fetch origin --prune
git fetch --tags
git reset --hard origin/main
}
function check_git_status {
if [[ $(git status --porcelain | wc -l) -gt 0 ]]; then
echo "It looks like the following changes haven't been committed yet"
echo ""
git status --short
echo ""
echo ""
echo "Do you really want to continue?"
echo "Press ENTER to continue or CTRL C to break"
read -r
fi
}
function compare_tags {
# compare local tags (from https://github.com/ceph/ceph-ansible/) with distant tags (from https://github.com/ceph/ansible-ceph-$ROLE)
local tag_local
local tag_remote
for tag_local in $(git tag | grep -oE '^v[2-9].[0-9]*.[0-9]*$' | sort -t. -k 1,1n -k 2,2n -k 3,3n -k 4,4n); do
tags_array+=("$tag_local")
done
for tag_remote in $(git ls-remote --tags "$1" | grep -oE 'v[2-9].[0-9]*.[0-9]*$' | sort -t. -k 1,1n -k 2,2n -k 3,3n -k 4,4n); do
remote_tags_array+=("$tag_remote")
done
for i in "${tags_array[@]}"; do
skip=
for j in "${remote_tags_array[@]}"; do
[[ "$i" == "$j" ]] && { skip=1; break; }
done
[[ -n $skip ]] || tag_to_apply+=("$i")
done
}
# MAIN
goto_basedir
check_git_status
trap reset_hard_origin EXIT
trap reset_hard_origin ERR
pull_origin
for ROLE in $ROLES; do
# For readability we use 2 variables with the same content
# so we always make sure we 'push' to a remote and 'filter' a role
REMOTE=$ROLE
check_existing_remote "$REMOTE"
reset_hard_origin
# First we filter branches by rewriting main with the content of roles/$ROLE
# this gives us a new commit history
for BRANCH in $(git branch --list --remotes "origin/stable-*" "origin/main" "origin/ansible-1.9" | cut -d '/' -f2); do
git checkout -B "$BRANCH" origin/"$BRANCH"
# use || true to avoid exiting in case of 'Found nothing to rewrite'
git filter-branch -f --prune-empty --subdirectory-filter roles/"$ROLE" || true
git push -f "$REMOTE" "$BRANCH"
done
reset_hard_origin
# then we filter tags starting from version 2.0 and push them
compare_tags "$ROLE"
if [[ ${#tag_to_apply[@]} == 0 ]]; then
echo "No new tag to push."
continue
fi
for TAG in "${tag_to_apply[@]}"; do
# use || true to avoid exiting in case of 'Found nothing to rewrite'
git filter-branch -f --prune-empty --subdirectory-filter roles/"$ROLE" "$TAG" || true
git push -f "$REMOTE" "$TAG"
reset_hard_origin
done
done
trap - EXIT ERR
popd &> /dev/null
#Package lines can be commented out with '#'
#
#boost-atomic
#boost-chrono
#boost-date-time
#boost-iostreams
#boost-program
#boost-random
#boost-regex
#boost-system
#boost-thread
#bzip2-libs
#cyrus-sasl-lib
#expat
#fcgi
#fuse-libs
#glibc
#keyutils-libs
#leveldb
#libaio
#libatomic_ops
#libattr
#libblkid
#libcap
#libcom_err
#libcurl
#libgcc
#libicu
#libidn
#libnghttp2
#libpsl
#libselinux
#libssh2
#libstdc++
#libunistring
#nss-softokn-freebl
#openldap
#openssl-libs
#pcre
#python-nose
#python-sphinx
#snappy
#systemd-libs
#zlib
#!/bin/bash -e
#
# Copyright (C) 2014, 2015 Red Hat <contact@redhat.com>
#
# Author: Daniel Lin <danielin@umich.edu>
#
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
if test -f /etc/redhat-release ; then
PACKAGE_INSTALLER=yum
elif type apt-get > /dev/null 2>&1 ; then
PACKAGE_INSTALLER=apt-get
else
echo "ERROR: Package Installer could not be determined"
exit 1
fi
while read p; do
if [[ $p =~ ^#.* ]] ; then
continue
fi
$PACKAGE_INSTALLER install $p -y
done < $1
#!/bin/bash
create_snapshots() {
local pattern=$1
for vm in $(sudo virsh list --all | awk "/${pattern}/{print \$2}"); do
sudo virsh shutdown "${vm}"
wait_for_shutoff "${vm}"
sudo virsh snapshot-create "${vm}"
sudo virsh start "${vm}"
done
}
delete_snapshots() {
local pattern=$1
for vm in $(sudo virsh list --all | awk "/${pattern}/{print \$2}"); do
for snapshot in $(sudo virsh snapshot-list "${vm}" --name); do
echo "deleting snapshot ${snapshot} (vm: ${vm})"
sudo virsh snapshot-delete "${vm}" "${snapshot}"
done
done
}
revert_snapshots() {
local pattern=$1
for vm in $(sudo virsh list --all | awk "/${pattern}/{print \$2}"); do
echo "restoring last snapshot for ${vm}"
sudo virsh snapshot-revert "${vm}" --current
sudo virsh start "${vm}"
done
}
wait_for_shutoff() {
local vm=$1
local retries=60
local delay=2
until test "${retries}" -eq 0
do
echo "waiting for ${vm} to be shut off... #${retries}"
sleep "${delay}"
let "retries=$retries-1"
local current_state=$(sudo virsh domstate "${vm}")
test "${current_state}" == "shut off" && return
done
echo "could not shut off ${vm}"
exit 1
}
while :; do
case $1 in
-d|--delete)
delete_snapshots "$2"
exit
;;
-i|--interactive)
INTERACTIVE=TRUE
;;
-s|--snapshot)
create_snapshots "$2"
;;
-r|--revert)
revert_snapshots "$2"
;;
--)
shift
break
;;
*)
break
esac
shift
done
---
# DEPLOY CONTAINERIZED DAEMONS
docker: true
# DEFINE THE NUMBER OF VMS TO RUN
mon_vms: 1
osd_vms: 1
mds_vms: 0
rgw_vms: 0
nfs_vms: 0
rbd_mirror_vms: 0
client_vms: 0
mgr_vms: 0
# SUBNETS TO USE FOR THE VMS
public_subnet: 192.168.0
cluster_subnet: 192.168.1
# MEMORY
memory: 1024
disks: [ '/dev/sda', '/dev/sdb' ]
eth: 'enp0s8'
vagrant_box: centos/atomic-host
# The sync directory changes based on vagrant box
# Set to /home/vagrant/sync for Centos/7, /home/{ user }/vagrant for openstack and defaults to /vagrant
vagrant_sync_dir: /home/vagrant/sync
skip_tags: 'with_pkg'
---
vagrant_box: 'linode'
vagrant_box_url: 'https://github.com/displague/vagrant-linode/raw/master/box/linode.box'
# Set a label prefix for the machines in this cluster. (This is useful and necessary when running multiple clusters concurrently.)
#label_prefix: 'foo'
ssh_username: 'vagrant'
ssh_private_key_path: '~/.ssh/id_rsa'
cloud_distribution: 'CentOS 7'
cloud_datacenter: 'newark'
# Memory for each Linode instance, this determines price! See Linode plans.
memory: 2048
# The private network on Linode, you probably don't want to change this.
public_subnet: 192.168.0
cluster_subnet: 192.168.0
# DEFINE THE NUMBER OF VMS TO RUN
mon_vms: 3
osd_vms: 3
mds_vms: 1
rgw_vms: 0
nfs_vms: 0
rbd_mirror_vms: 0
client_vms: 0
# The sync directory changes based on vagrant box
# Set to /home/vagrant/sync for Centos/7, /home/{ user }/vagrant for openstack and defaults to /vagrant
# vagrant_sync_dir: /home/vagrant/sync
os_tuning_params:
- { name: fs.file-max, value: 26234859 }
---
# DEPLOY CONTAINERIZED DAEMONS
docker: true
# DEFINE THE NUMBER OF VMS TO RUN
mon_vms: 1
osd_vms: 1
mds_vms: 0
rgw_vms: 0
nfs_vms: 0
rbd_mirror_vms: 0
client_vms: 0
# SUBNET TO USE FOR THE VMS
# Use whatever private subnet your Openstack VMs are given
public_subnet: 172.17.72
cluster_subnet: 172.17.72
# For Openstack VMs, the disk will depend on what you are allocated
disks: [ '/dev/vdb' ]
# For Openstack VMs, the lan is usually eth0
eth: 'eth0'
# For Openstack VMs, choose the following box instead
vagrant_box: 'openstack'
# When using Atomic Hosts (RHEL or CentOS), uncomment the line below to skip package installation
#skip_tags: 'with_pkg'
# Set a label prefix for the machines in this cluster to differentiate
# between different concurrent clusters e.g. your OpenStack username
label_prefix: 'your-openstack-username'
# For deploying on OpenStack VMs uncomment these vars and assign values.
# You can use env vars for the values if it makes sense.
#ssh_username :
#ssh_private_key_path :
#os_openstack_auth_url :
#os_username :
#os_password :
#os_tenant_name :
#os_region :
#os_flavor :
#os_image :
#os_keypair_name :
#os_networks :
#os_floating_ip_pool :
---
- name: Deploy node_exporter
hosts:
- "{{ mon_group_name|default('mons') }}"
- "{{ osd_group_name|default('osds') }}"
- "{{ mds_group_name|default('mdss') }}"
- "{{ rgw_group_name|default('rgws') }}"
- "{{ mgr_group_name|default('mgrs') }}"
- "{{ rbdmirror_group_name|default('rbdmirrors') }}"
- "{{ nfs_group_name|default('nfss') }}"
- "{{ monitoring_group_name|default('monitoring') }}"
gather_facts: false
become: true
pre_tasks:
- name: Import ceph-defaults role
ansible.builtin.import_role:
name: ceph-defaults
tags: ['ceph_update_config']
- name: Set ceph node exporter install 'In Progress'
run_once: true
ansible.builtin.set_stats:
data:
installer_phase_ceph_node_exporter:
status: "In Progress"
start: "{{ lookup('pipe', 'date +%Y%m%d%H%M%SZ') }}"
tasks:
- name: Import ceph-facts role
ansible.builtin.import_role:
name: ceph-facts
tags: ['ceph_update_config']
- name: Import ceph-container-engine
ansible.builtin.import_role:
name: ceph-container-engine
- name: Import ceph-container-common role
ansible.builtin.import_role:
name: ceph-container-common
tasks_from: registry
when:
- not containerized_deployment | bool
- ceph_docker_registry_auth | bool
- name: Import ceph-node-exporter role
ansible.builtin.import_role:
name: ceph-node-exporter
post_tasks:
- name: Set ceph node exporter install 'Complete'
run_once: true
ansible.builtin.set_stats:
data:
installer_phase_ceph_node_exporter:
status: "Complete"
end: "{{ lookup('pipe', 'date +%Y%m%d%H%M%SZ') }}"
- name: Deploy grafana and prometheus
hosts: "{{ monitoring_group_name | default('monitoring') }}"
gather_facts: false
become: true
pre_tasks:
- name: Import ceph-defaults role
ansible.builtin.import_role:
name: ceph-defaults
tags: ['ceph_update_config']
- name: Set ceph grafana install 'In Progress'
run_once: true
ansible.builtin.set_stats:
data:
installer_phase_ceph_grafana:
status: "In Progress"
start: "{{ lookup('pipe', 'date +%Y%m%d%H%M%SZ') }}"
tasks:
# - ansible.builtin.import_role:
# name: ceph-facts
# tags: ['ceph_update_config']
- name: Import ceph-facts role
ansible.builtin.import_role:
name: ceph-facts
tasks_from: grafana
tags: ['ceph_update_config']
- name: Import ceph-prometheus role
ansible.builtin.import_role:
name: ceph-prometheus
- name: Import ceph-grafana role
ansible.builtin.import_role:
name: ceph-grafana
post_tasks:
- name: Set ceph grafana install 'Complete'
run_once: true
ansible.builtin.set_stats:
data:
installer_phase_ceph_grafana:
status: "Complete"
end: "{{ lookup('pipe', 'date +%Y%m%d%H%M%SZ') }}"
# using groups[] here otherwise it can't fallback to the mon if there's no mgr group.
# adding an additional | default(omit) in case where no monitors are present (external ceph cluster)
- name: Deploy dashboard
hosts: "{{ groups['mgrs'] | default(groups['mons']) | default(omit) }}"
gather_facts: false
become: true
pre_tasks:
- name: Import ceph-defaults role
ansible.builtin.import_role:
name: ceph-defaults
tags: ['ceph_update_config']
- name: Set ceph dashboard install 'In Progress'
run_once: true
ansible.builtin.set_stats:
data:
installer_phase_ceph_dashboard:
status: "In Progress"
start: "{{ lookup('pipe', 'date +%Y%m%d%H%M%SZ') }}"
tasks:
# - name: Import ceph-facts role
# ansible.builtin.import_role:
# name: ceph-facts
# tags: ['ceph_update_config']
- name: Import ceph-facts role
ansible.builtin.import_role:
name: ceph-facts
tasks_from: grafana
tags: ['ceph_update_config']
- name: Import ceph-dashboard role
ansible.builtin.import_role:
name: ceph-dashboard
post_tasks:
- name: Set ceph dashboard install 'Complete'
run_once: true
ansible.builtin.set_stats:
data:
installer_phase_ceph_dashboard:
status: "Complete"
end: "{{ lookup('pipe', 'date +%Y%m%d%H%M%SZ') }}"
# Minimal makefile for Sphinx documentation
#
# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = sphinx-build
SPHINXPROJ = ceph-ansible
SOURCEDIR = source
BUILDDIR = build
# Put it first so that "make" without argument is like "make help".
help:
@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
.PHONY: help Makefile
# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
# -*- coding: utf-8 -*-
#
# ceph-ansible documentation build configuration file, created by
# sphinx-quickstart on Wed Apr 5 11:55:38 2017.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
# import os
# import sys
# sys.path.insert(0, os.path.abspath('.'))
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#
# needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = []
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
#
# source_suffix = ['.rst', '.md']
source_suffix = '.rst'
# The root toctree document.
root_doc = 'glossary'
# General information about the project.
project = u'ceph-ansible'
copyright = u'2017-2018, Ceph team and individual contributors'
author = u'Ceph team and individual contributors'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = u''
# The full version, including alpha/beta/rc tags.
release = u''
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = None
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This patterns also effect to html_static_path and html_extra_path
exclude_patterns = []
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = False
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = 'alabaster'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#
# html_theme_options = {}
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# -- Options for HTMLHelp output ------------------------------------------
# Output file base name for HTML help builder.
htmlhelp_basename = 'ceph-ansibledoc'
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#
# 'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#
# 'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#
# 'preamble': '',
# Latex figure (float) alignment
#
# 'figure_align': 'htbp',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
(root_doc, 'ceph-ansible.tex', u'ceph-ansible Documentation',
u'Ceph team and individual contributors', 'manual'),
]
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
(root_doc, 'ceph-ansible', u'ceph-ansible Documentation',
[author], 1)
]
# -- Options for Texinfo output -------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
(root_doc, 'ceph-ansible', u'ceph-ansible Documentation',
author, 'ceph-ansible', 'One line description of project.',
'Miscellaneous'),
]
master_doc = 'index'
Adding/Removing OSD(s) after a cluster is deployed is a common operation that should be straightforward to achieve.
Adding osd(s)
-------------
Adding new OSD(s) on an existing host or adding a new OSD node can be achieved by running the main playbook with the ``--limit`` ansible option.
You basically need to update your host_vars/group_vars with the new hardware and/or the inventory file with the new OSD nodes being added.
The command looks like the following:
``ansible-playbook -vv -i <your-inventory> site-container.yml --limit <node>``
example:
.. code-block:: shell
$ cat hosts
[mons]
mon-node-1
mon-node-2
mon-node-3
[mgrs]
mon-node-1
mon-node-2
mon-node-3
[osds]
osd-node-1
osd-node-2
osd-node-3
osd-node-99
$ ansible-playbook -vv -i hosts site-container.yml --limit osd-node-99
Shrinking osd(s)
----------------
Shrinking OSDs can be done by using the shrink-osd.yml playbook provided in ``infrastructure-playbooks`` directory.
The variable ``osd_to_kill`` is a comma separated list of OSD IDs which must be passed to the playbook (passing it as an extra var is the easiest way).
The playbook will shrink all osds passed in ``osd_to_kill`` serially.
example:
.. code-block:: shell
$ ansible-playbook -vv -i hosts infrastructure-playbooks/shrink-osd.yml -e osd_to_kill=1,2,3
Purging the cluster
-------------------
ceph-ansible provides two playbooks in ``infrastructure-playbooks`` for purging a Ceph cluster: ``purge-cluster.yml`` and ``purge-container-cluster.yml``.
The names are pretty self-explanatory, ``purge-cluster.yml`` is intended to purge a non-containerized cluster whereas ``purge-container-cluster.yml`` is to purge a containerized cluster.
example:
.. code-block:: shell
$ ansible-playbook -vv -i hosts infrastructure-playbooks/purge-container-cluster.yml
.. note::
These playbooks aren't intended to be run with the ``--limit`` option.
Upgrading the ceph cluster
--------------------------
ceph-ansible provides a playbook in ``infrastructure-playbooks`` for upgrading a Ceph cluster: ``rolling_update.yml``.
This playbook can be used for both minor upgrades (X.Y to X.Z) and major upgrades (X to Y).
Before running a major upgrade you need to update the ceph-ansible version first.
example:
.. code-block:: shell
$ ansible-playbook -vv -i hosts infrastructure-playbooks/rolling_update.yml
.. note::
This playbook isn't intended to be run with the ``--limit`` ansible option.
Contribution Guidelines
=======================
The repository centralises all the Ansible roles. The roles are all part of the Ansible Galaxy.
We love contributions and we love giving visibility to our contributors; this is why all **commits must be signed off**.
Mailing list
------------
Please subscribe to the mailing list at http://lists.ceph.com/listinfo.cgi/ceph-ansible-ceph.com.
IRC
---
Feel free to join us in the channel ``#ceph-ansible`` of the OFTC servers (https://www.oftc.net).
GitHub
------
The main GitHub account for the project is at https://github.com/ceph/ceph-ansible/.
Submit a patch
--------------
To start contributing just do:
.. code-block:: console
$ git checkout -b my-working-branch
$ # do your changes #
$ git add -p
If your change impacts a variable file in a role such as ``roles/ceph-common/defaults/main.yml``, you need to generate a ``group_vars`` file:
.. code-block:: console
$ ./generate_group_vars_sample.sh
You are finally ready to push your changes on GitHub:
.. code-block:: console
$ git commit -s
$ git push origin my-working-branch
Worked on a change and you don't want to resend a commit for a syntax fix?
.. code-block:: console
$ # do your syntax change #
$ git commit --amend
$ git push -f origin my-working-branch
Pull Request Testing
--------------------
Pull request testing is handled by Jenkins. All tests must pass before your pull request will be merged.
All tests that run are listed in the GitHub UI along with their current status.
If a test fails and you'd like to rerun it, comment on your pull request in the following format:
.. code-block:: none
jenkins test $scenario_name
For example:
.. code-block:: none
jenkins test centos-non_container-all_daemons
Backporting changes
-------------------
If a change should be backported to a ``stable-*`` Git branch:
- Mark your pull request with the GitHub label "Backport" so we don't lose track of it.
- Fetch the latest updates into your clone: ``git fetch``
- Determine the latest available stable branch:
``git branch -r --list "origin/stable-[0-9].[0-9]" | sort -r | sed 1q``
- Create a new local branch for your pull request, based on the stable branch:
``git checkout --no-track -b my-backported-change origin/stable-5.0``
- Cherry-pick your change: ``git cherry-pick -x (your-sha1)``
- Create a new pull request against the ``stable-5.0`` branch.
- Ensure that your pull request's title has the prefix "backport:", so it's clear
to reviewers what this is about.
- Add a comment in your backport pull request linking to the original (main) pull request.
All changes to the stable branches should land in main first, so we avoid
regressions.
Once this is done, one of the project maintainers will tag the tip of the
stable branch with your change. For example:
.. code-block:: console
$ git checkout stable-5.0
$ git pull --ff-only
$ git tag v5.0.12
$ git push origin v5.0.12
Glossary
========
.. toctree::
:maxdepth: 3
:caption: Contents:
index
testing/glossary
Containerized deployment
========================
ceph-ansible supports only Docker and Podman for deploying Ceph in a containerized context.
Configuration and Usage
-----------------------
To deploy ceph in containers, you will need to set the ``containerized_deployment`` variable to ``true`` and use the site-container.yml.sample playbook.
.. code-block:: yaml
containerized_deployment: true
The ``ceph_origin`` and ``ceph_repository`` variables aren't needed anymore in containerized deployment and are ignored.
.. code-block:: console
$ ansible-playbook site-container.yml.sample
.. note::
The infrastructure playbooks work for both containerized and non-containerized deployments.
Custom container image
----------------------
You can configure your own container registry, image and tag by using the ``ceph_docker_registry``, ``ceph_docker_image`` and ``ceph_docker_image_tag`` variables.
.. code-block:: yaml
ceph_docker_registry: quay.io
ceph_docker_image: ceph/daemon
ceph_docker_image_tag: latest
.. note::
``ceph_docker_image`` should have both image namespace and image name concatenated and separated by a slash character.
``ceph_docker_image_tag`` should be set to a fixed tag, not to any "latest" tags unless you know what you are doing. Using a "latest" tag
might make the playbook restart all the daemons deployed in your cluster since these tags are intended to be updated periodically.
Container registry authentication
---------------------------------
When using a container registry with authentication, you need to set the ``ceph_docker_registry_auth`` variable to ``true`` and provide the credentials via the
``ceph_docker_registry_username`` and ``ceph_docker_registry_password`` variables:
.. code-block:: yaml
ceph_docker_registry_auth: true
ceph_docker_registry_username: foo
ceph_docker_registry_password: bar
Container registry behind a proxy
---------------------------------
When using a container registry reachable via an HTTP(S) proxy, you need to set the ``ceph_docker_http_proxy`` and/or ``ceph_docker_https_proxy`` variables. If you need
to exclude some hosts from the proxy configuration, you can use the ``ceph_docker_no_proxy`` variable.
.. code-block:: yaml
ceph_docker_http_proxy: http://192.168.42.100:8080
ceph_docker_https_proxy: https://192.168.42.100:8080
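If some hosts must bypass the proxy, the ``ceph_docker_no_proxy`` variable mentioned above can be set as well (the value below is only an example):

.. code-block:: yaml

   ceph_docker_no_proxy: "localhost,127.0.0.1"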
Installation methods
====================
ceph-ansible can deploy Ceph either in a non-containerized context (via packages) or in a containerized context using ceph-container images.
.. toctree::
:maxdepth: 1
non-containerized
containerized
The difference is that the ``rbd`` command is not available on the host when using the containerized deployment, so everything related to Ceph needs to be executed within a container. If software such as OpenNebula requires the ``rbd`` command to be directly accessible on the host (non-containerized), you have to install the ``rbd`` command yourself on those servers outside of containers (or make sure that this software also runs within containers and can access ``rbd``).
Non containerized deployment
============================
The following are all of the available options for installing Ceph through different channels.
We support 3 main installation methods, all managed by the ``ceph_origin`` variable (see the example below):
- ``repository``: means that you will get Ceph installed through a new repository. The repository itself is chosen through the ``ceph_repository`` variable (``community``, ``dev``, ``uca`` or ``custom``, described further below).
- ``distro``: means that no separate repo file will be added and you will get whatever version of Ceph is included in your Linux distro.
- ``local``: means that the Ceph binaries will be copied over from the local machine (not well tested, use at your own risk)
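For example, to install Ceph from the community repository, ``group_vars`` could contain:

.. code-block:: yaml

   ceph_origin: repository
   ceph_repository: community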
Origin: Repository
------------------
If ``ceph_origin`` is set to ``repository``, you now have the choice between a couple of repositories controlled by the ``ceph_repository`` option:
- ``community``: fetches packages from http://download.ceph.com, the official community Ceph repositories
- ``dev``: fetches packages from shaman, a gitbuilder based package system
- ``uca``: fetches packages from Ubuntu Cloud Archive
- ``custom``: fetches packages from a specific repository
Community repository
~~~~~~~~~~~~~~~~~~~~
If ``ceph_repository`` is set to ``community``, packages will be installed by default from http://download.ceph.com; this can be changed by tweaking ``ceph_mirror``.
UCA repository
~~~~~~~~~~~~~~
If ``ceph_repository`` is set to ``uca``, packages will be installed by default from http://ubuntu-cloud.archive.canonical.com/ubuntu; this can be changed by tweaking ``ceph_stable_repo_uca``.
You can also decide which OpenStack version the Ceph packages should come from by tweaking ``ceph_stable_openstack_release_uca``.
For example, ``ceph_stable_openstack_release_uca: queens``.
Dev repository
~~~~~~~~~~~~~~
If ``ceph_repository`` is set to ``dev``, packages will be installed from https://shaman.ceph.com/; this cannot be changed.
You can decide which branch to install with the help of ``ceph_dev_branch`` (defaults to 'main').
Additionally, you can specify a SHA1 with ``ceph_dev_sha1``, which defaults to 'latest' (as in the latest built).
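As a sketch, a dev-repository installation could be configured like this (the branch and sha1 shown are simply the defaults mentioned above):

.. code-block:: yaml

   ceph_origin: repository
   ceph_repository: dev
   ceph_dev_branch: main
   ceph_dev_sha1: latest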
Custom repository
~~~~~~~~~~~~~~~~~
If ``ceph_repository`` is set to ``custom``, packages will be installed from a repository of your choice.
This repository is specified with ``ceph_custom_repo``, e.g: ``ceph_custom_repo: https://server.domain.com/ceph-custom-repo``.
Origin: Distro
--------------
If ``ceph_origin`` is set to ``distro``, no separate repo file will be added and you will get whatever version of Ceph is included in your Linux distro.
Origin: Local
-------------
If ``ceph_origin`` is set to ``local``, the ceph binaries will be copied over from the local machine (not well tested, use at your own risk)
OSD Scenario
============
As of stable-4.0, the following scenarios are no longer supported since they are associated with ``ceph-disk``:
* `collocated`
* `non-collocated`
Since the Ceph luminous release, it is preferred to use the :ref:`lvm scenario
<osd_scenario_lvm>` that uses the ``ceph-volume`` provisioning tool. Any other
scenario will cause deprecation warnings.
``ceph-disk`` was deprecated during the ceph-ansible 3.2 cycle and has been removed entirely from Ceph itself in the Nautilus version.
At present (starting from stable-4.0), there is only one scenario, which defaults to ``lvm``, see:
* :ref:`lvm <osd_scenario_lvm>`
So there is no need to configure ``osd_scenario`` anymore; it defaults to ``lvm``.
The ``lvm`` scenario mentioned above supports both containerized and non-containerized clusters.
As a reminder, deploying a containerized cluster can be done by setting ``containerized_deployment``
to ``True``.
If you want to skip OSD creation during a ``ceph-ansible run``
(e.g. because you have already provisioned your OSDs but disk IDs have
changed), you can skip the ``prepare_osd`` tag i.e. by specifying
``--skip-tags prepare_osd`` on the ``ansible-playbook`` command line.
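For example (using the same ``hosts`` inventory as in the other examples):

.. code-block:: console

   $ ansible-playbook -vv -i hosts site-container.yml --skip-tags prepare_osd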
.. _osd_scenario_lvm:
lvm
---
This OSD scenario uses ``ceph-volume`` to create OSDs, primarily using LVM, and
is only available when the Ceph release is luminous or newer.
It is automatically enabled.
Other (optional) supported settings:
- ``dmcrypt``: Enable Ceph's encryption on OSDs using ``dmcrypt``.
Defaults to ``false`` if unset.
- ``osds_per_device``: Provision more than 1 OSD (the default if unset) per device.
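For instance, to encrypt OSDs and provision two OSDs per device, the following could be set (illustrative values):

.. code-block:: yaml

   dmcrypt: true
   osds_per_device: 2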
Simple configuration
^^^^^^^^^^^^^^^^^^^^
With this approach, most of the decisions on how devices are configured to
provision an OSD are made by the Ceph tooling (``ceph-volume lvm batch`` in
this case). There is almost no room to modify how the OSD is composed given an
input of devices.
To use this configuration, the ``devices`` option must be populated with the
raw device paths that will be used to provision the OSDs.
.. note:: Raw devices must be "clean", without a gpt partition table, or
logical volumes present.
For example, for a node that has ``/dev/sda`` and ``/dev/sdb`` intended for
Ceph usage, the configuration would be:
.. code-block:: yaml
devices:
- /dev/sda
- /dev/sdb
In the above case, if both devices are spinning drives, 2 OSDs would be
created, each with its own collocated journal.
Other provisioning strategies are possible, by mixing spinning and solid state
devices, for example:
.. code-block:: yaml
devices:
- /dev/sda
- /dev/sdb
- /dev/nvme0n1
Similar to the initial example, this would end up producing 2 OSDs, but data
would be placed on the slower spinning drives (``/dev/sda``, and ``/dev/sdb``)
and journals would be placed on the faster solid state device ``/dev/nvme0n1``.
The ``ceph-volume`` tool describes this in detail in
`the "batch" subcommand section <https://docs.ceph.com/en/latest/ceph-volume/lvm/batch/>`_
This option can also be used with ``osd_auto_discovery``, meaning that you do not need to populate
``devices`` directly and any appropriate devices found by ansible will be used instead.
.. code-block:: yaml
osd_auto_discovery: true
Other (optional) supported settings:
- ``crush_device_class``: Sets the CRUSH device class for all OSDs created with this
method (it is not possible to have a per-OSD CRUSH device class using the *simple*
configuration approach). Values *must be* a string, like
``crush_device_class: "ssd"``
Advanced configuration
^^^^^^^^^^^^^^^^^^^^^^
This configuration is useful when more granular control is wanted when setting
up devices and how they should be arranged to provision an OSD. It requires an
existing setup of volume groups and logical volumes (``ceph-volume`` will **not**
create these).
To use this configuration, the ``lvm_volumes`` option must be populated with
logical volumes and volume groups. Additionally, absolute paths to partitions
*can* be used for ``journal``, ``block.db``, and ``block.wal``.
.. note:: This configuration uses ``ceph-volume lvm create`` to provision OSDs
Supported ``lvm_volumes`` configuration settings:
- ``data``: The logical volume name or full path to a raw device (an LV will be
created using 100% of the raw device)
- ``data_vg``: The volume group name, **required** if ``data`` is a logical volume.
- ``crush_device_class``: CRUSH device class name for the resulting OSD; allows
setting the device class for each OSD, unlike the global ``crush_device_class``
that sets it for all OSDs.
.. note:: If you wish to set the ``crush_device_class`` for the OSDs
when using ``devices`` you must set it using the global ``crush_device_class``
option as shown above. There is no way to define a specific CRUSH device class
per OSD when using ``devices`` like there is for ``lvm_volumes``.
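For illustration, a per-OSD device class could be set like this (the volume and volume group names are placeholders):

.. code-block:: yaml

   lvm_volumes:
     - data: data-lv1
       data_vg: data-vg1
       crush_device_class: ssd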
.. warning:: Each entry must be unique, duplicate values are not allowed
``bluestore`` objectstore variables:
- ``db``: The logical volume name or full path to a partition.
- ``db_vg``: The volume group name, **required** if ``db`` is a logical volume.
- ``wal``: The logical volume name or full path to a partition.
- ``wal_vg``: The volume group name, **required** if ``wal`` is a logical volume.
.. note:: These ``bluestore`` variables are optional optimizations. Bluestore's
``db`` and ``wal`` will only benefit from faster devices. It is possible to
create a bluestore OSD with a single raw device.
.. warning:: Each entry must be unique, duplicate values are not allowed
``bluestore`` example using raw devices:
.. code-block:: yaml
osd_objectstore: bluestore
lvm_volumes:
- data: /dev/sda
- data: /dev/sdb
.. note:: Volume groups and logical volumes will be created in this case,
utilizing 100% of the devices.
``bluestore`` example with logical volumes:
.. code-block:: yaml
osd_objectstore: bluestore
lvm_volumes:
- data: data-lv1
data_vg: data-vg1
- data: data-lv2
data_vg: data-vg2
.. note:: Volume groups and logical volumes must exist.
``bluestore`` example defining ``wal`` and ``db`` logical volumes:
.. code-block:: yaml
osd_objectstore: bluestore
lvm_volumes:
- data: data-lv1
data_vg: data-vg1
db: db-lv1
db_vg: db-vg1
wal: wal-lv1
wal_vg: wal-vg1
- data: data-lv2
data_vg: data-vg2
db: db-lv2
db_vg: db-vg2
wal: wal-lv2
wal_vg: wal-vg2
.. note:: Volume groups and logical volumes must exist.
``filestore`` example with logical volumes:
.. code-block:: yaml
osd_objectstore: filestore
lvm_volumes:
- data: data-lv1
data_vg: data-vg1
journal: journal-lv1
journal_vg: journal-vg1
- data: data-lv2
data_vg: data-vg2
journal: journal-lv2
journal_vg: journal-vg2
.. note:: Volume groups and logical volumes must exist.
RBD Mirroring
=============
There is not much to do from the primary cluster side in order to set up RBD mirror replication.
``ceph_rbd_mirror_configure`` has to be set to ``true`` to make ceph-ansible create the mirrored pool
defined in ``ceph_rbd_mirror_pool`` and the keyring that is going to be used to add the rbd mirror peer.
group_vars from the primary cluster:
.. code-block:: yaml
ceph_rbd_mirror_configure: true
ceph_rbd_mirror_pool: rbd
Optionally, you can tell ceph-ansible to set the name and the secret of the keyring you want to create:
.. code-block:: yaml
ceph_rbd_mirror_local_user: client.rbd-mirror-peer # 'client.rbd-mirror-peer' is the default value.
ceph_rbd_mirror_local_user_secret: AQC+eM1iKKBXFBAAVpunJvqpkodHSYmljCFCnw==
This secret will be needed to add the rbd mirror peer from the secondary cluster.
If you do not enforce it as shown above, you can get it from a monitor by running the following command:
``ceph auth get {{ ceph_rbd_mirror_local_user }}``
.. code-block:: shell
$ sudo ceph auth get client.rbd-mirror-peer
Once your variables are defined, you can run the playbook (you might want to run with --limit option):
.. code-block:: shell
$ ansible-playbook -vv -i hosts site-container.yml --limit rbdmirror0
Strictly speaking, the configuration of RBD mirror replication is done on the secondary cluster:
the rbd-mirror daemon pulls the data from the primary cluster, so this is where the rbd mirror peer addition has to be done.
The configuration is similar to what was done on the primary cluster; it just needs a few additional variables.
``ceph_rbd_mirror_remote_user`` : This user must match the name defined in the variable ``ceph_rbd_mirror_local_user`` from the primary cluster.
``ceph_rbd_mirror_remote_mon_hosts`` : This must be a comma-separated list of the monitor addresses from the primary cluster.
``ceph_rbd_mirror_remote_key`` : This must be the same value as the user (``{{ ceph_rbd_mirror_local_user }}``) keyring secret from the primary cluster.
group_vars from the secondary cluster:
.. code-block:: yaml
ceph_rbd_mirror_configure: true
ceph_rbd_mirror_pool: rbd
ceph_rbd_mirror_remote_user: client.rbd-mirror-peer # This must match the value defined in {{ ceph_rbd_mirror_local_user }} on primary cluster.
ceph_rbd_mirror_remote_mon_hosts: 1.2.3.4
ceph_rbd_mirror_remote_key: AQC+eM1iKKBXFBAAVpunJvqpkodHSYmljCFCnw== # This must match the secret of the registered keyring of the user defined in {{ ceph_rbd_mirror_local_user }} on primary cluster.
Once your variables are defined, you can run the playbook (you might want to run with the --limit option):
.. code-block:: shell
$ ansible-playbook -vv -i hosts site-container.yml --limit rbdmirror0
.. _development:
ceph-ansible testing for development
====================================
Glossary
========
.. toctree::
:maxdepth: 1
index
running.rst
development.rst
scenarios.rst
modifying.rst
layout.rst
tests.rst
tox.rst
.. _testing:
Testing
=======
``ceph-ansible`` has the ability to test different scenarios (collocated journals
or dmcrypt OSDs for example) in an isolated, repeatable, and easy way.
These tests can run locally with VirtualBox or via libvirt if available, which
removes the need to solely rely on a CI system like Jenkins to verify
a behavior.
* **Getting started:**
* :doc:`Running a Test Scenario <running>`
* :ref:`dependencies`
* **Configuration and structure:**
* :ref:`layout`
* :ref:`test_files`
* :ref:`scenario_files`
* :ref:`scenario_wiring`
* **Adding or modifying tests:**
* :ref:`test_conventions`
* :ref:`testinfra`
* **Adding or modifying a scenario:**
* :ref:`scenario_conventions`
* :ref:`scenario_environment_configuration`
* :ref:`scenario_ansible_configuration`
* **Custom/development repositories and packages:**
* :ref:`tox_environment_variables`
.. _layout:
Layout and conventions
----------------------
Test files and directories follow a few conventions, which makes it easy to
create (or expect) certain interactions between tests and scenarios.
All tests are in the ``tests`` directory. Scenarios are defined in
``tests/functional/`` and use the following convention for directory
structure:
.. code-block:: none
tests/functional/<distro>/<distro version>/<scenario name>/
For example: ``tests/functional/centos/7/journal-collocation``
Within a test scenario there are a few files that define what that specific
scenario needs for the tests, like how many OSD nodes or MON nodes are required.
At the very least, a scenario will need these files:
* ``Vagrantfile``: must be symlinked from the root directory of the project
* ``hosts``: An Ansible hosts file that defines the machines part of the
cluster
* ``group_vars/all``: if any modifications are needed for deployment, this
would override them. Additionally, further customizations can be done. For
example, for OSDs that would mean adding ``group_vars/osds``
* ``vagrant_variables.yml``: Defines the actual environment for the test, where
machines, networks, disks, linux distro/version, can be defined.
.. _test_conventions:
Conventions
-----------
Python test files (unlike scenarios) rely on paths to *map* where they belong. For
example, a file that should only test monitor nodes would live in
``ceph-ansible/tests/functional/tests/mon/``. Internally, the test runner
(``py.test``) will *mark* these as tests that should run on a monitor only.
Since the configuration of a scenario already defines what node has a given
role, then it is easier for the system to only run tests that belong to
a particular node type.
The current convention is a bit manual, with initial path support for:
* mon
* osd
* mds
* rgw
* journal_collocation
* all/any (if none of the above are matched, then these are run on any host)
.. _testinfra:
``testinfra``
-------------
.. _modifying:
Modifying (or adding) tests
===========================
.. _running_tests:
Running Tests
=============
Although tests run continuously in CI, a lot of effort was put into making it
easy to run in any environment, as long as a couple of requirements are met.
.. _dependencies:
Dependencies
------------
There are some Python dependencies, which are listed in a ``requirements.txt``
file within the ``tests/`` directory. These are meant to be installed using
Python install tools (pip in this case):
.. code-block:: console
pip install -r tests/requirements.txt
For virtualization, either libvirt or VirtualBox is needed (there is native
support from the harness for both). This makes the test harness even more
flexible as most platforms will be covered by either VirtualBox or libvirt.
.. _running_a_scenario:
Running a scenario
------------------
Tests are driven by ``tox``, a command line tool to run a matrix of tests defined in
a configuration file (``tox.ini`` in this case at the root of the project).
For a thorough description of a scenario see :ref:`test_scenarios`.
To run a single scenario, make sure it is available (should be defined from
``tox.ini``) by listing them:
.. code-block:: console
tox -l
In this example, we will use the ``luminous-ansible2.4-xenial_cluster`` scenario. The
harness defaults to ``VirtualBox`` as the backend, so if you have that
installed on your system then this command should just work:
.. code-block:: console
tox -e luminous-ansible2.4-xenial_cluster
And for libvirt it would be:
.. code-block:: console
tox -e luminous-ansible2.4-xenial_cluster -- --provider=libvirt
.. warning::
Depending on the type of scenario and resources available, running
these tests locally in a personal computer can be very resource intensive.
.. note::
Most test runs take between 20 and 40 minutes, depending on system
resources.
The command should bring up the machines needed for the test, provision them
with ``ceph-ansible``, run the tests, and tear the whole environment down at the
end.
The output will look something like this trimmed version:
.. code-block:: console
luminous-ansible2.4-xenial_cluster create: /Users/alfredo/python/upstream/ceph-ansible/.tox/luminous-ansible2.4-xenial_cluster
luminous-ansible2.4-xenial_cluster installdeps: ansible==2.4.2, -r/Users/alfredo/python/upstream/ceph-ansible/tests/requirements.txt
luminous-ansible2.4-xenial_cluster runtests: commands[0] | vagrant up --no-provision --provider=virtualbox
Bringing machine 'client0' up with 'virtualbox' provider...
Bringing machine 'rgw0' up with 'virtualbox' provider...
Bringing machine 'mds0' up with 'virtualbox' provider...
Bringing machine 'mon0' up with 'virtualbox' provider...
Bringing machine 'mon1' up with 'virtualbox' provider...
Bringing machine 'mon2' up with 'virtualbox' provider...
Bringing machine 'osd0' up with 'virtualbox' provider...
...
After all the nodes are up, ``ceph-ansible`` will provision them, and run the
playbook(s):
.. code-block:: console
...
PLAY RECAP *********************************************************************
client0 : ok=4 changed=0 unreachable=0 failed=0
mds0 : ok=4 changed=0 unreachable=0 failed=0
mon0 : ok=4 changed=0 unreachable=0 failed=0
mon1 : ok=4 changed=0 unreachable=0 failed=0
mon2 : ok=4 changed=0 unreachable=0 failed=0
osd0 : ok=4 changed=0 unreachable=0 failed=0
rgw0 : ok=4 changed=0 unreachable=0 failed=0
...
Once the whole environment is up and running, the tests will be sent out to
the hosts, with output similar to this:
.. code-block:: console
luminous-ansible2.4-xenial_cluster runtests: commands[4] | testinfra -n 4 --sudo -v --connection=ansible --ansible-inventory=/Users/alfredo/python/upstream/ceph-ansible/tests/functional/ubuntu/16.04/cluster/hosts /Users/alfredo/python/upstream/ceph-ansible/tests/functional/tests
============================ test session starts ===========================
platform darwin -- Python 2.7.8, pytest-3.0.7, py-1.4.33, pluggy-0.4.0 -- /Users/alfredo/python/upstream/ceph-ansible/.tox/luminous-ansible2.4-xenial_cluster/bin/python
cachedir: ../../../../.cache
rootdir: /Users/alfredo/python/upstream/ceph-ansible/tests, inifile: pytest.ini
plugins: testinfra-1.5.4, xdist-1.15.0
[gw0] darwin Python 2.7.8 cwd: /Users/alfredo/python/upstream/ceph-ansible/tests/functional/ubuntu/16.04/cluster
[gw1] darwin Python 2.7.8 cwd: /Users/alfredo/python/upstream/ceph-ansible/tests/functional/ubuntu/16.04/cluster
[gw2] darwin Python 2.7.8 cwd: /Users/alfredo/python/upstream/ceph-ansible/tests/functional/ubuntu/16.04/cluster
[gw3] darwin Python 2.7.8 cwd: /Users/alfredo/python/upstream/ceph-ansible/tests/functional/ubuntu/16.04/cluster
[gw0] Python 2.7.8 (v2.7.8:ee879c0ffa11, Jun 29 2014, 21:07:35) -- [GCC 4.2.1 (Apple Inc. build 5666) (dot 3)]
[gw1] Python 2.7.8 (v2.7.8:ee879c0ffa11, Jun 29 2014, 21:07:35) -- [GCC 4.2.1 (Apple Inc. build 5666) (dot 3)]
[gw2] Python 2.7.8 (v2.7.8:ee879c0ffa11, Jun 29 2014, 21:07:35) -- [GCC 4.2.1 (Apple Inc. build 5666) (dot 3)]
[gw3] Python 2.7.8 (v2.7.8:ee879c0ffa11, Jun 29 2014, 21:07:35) -- [GCC 4.2.1 (Apple Inc. build 5666) (dot 3)]
gw0 [154] / gw1 [154] / gw2 [154] / gw3 [154]
scheduling tests via LoadScheduling
../../../tests/test_install.py::TestInstall::test_ceph_dir_exists[ansible:/mon0]
../../../tests/test_install.py::TestInstall::test_ceph_dir_is_a_directory[ansible:/mon0]
../../../tests/test_install.py::TestInstall::test_ceph_conf_is_a_file[ansible:/mon0]
../../../tests/test_install.py::TestInstall::test_ceph_dir_is_a_directory[ansible:/mon1]
[gw2] PASSED ../../../tests/test_install.py::TestCephConf::test_ceph_config_has_mon_host_line[ansible:/mon0]
../../../tests/test_install.py::TestInstall::test_ceph_conf_exists[ansible:/mon1]
[gw3] PASSED ../../../tests/test_install.py::TestCephConf::test_mon_host_line_has_correct_value[ansible:/mon0]
../../../tests/test_install.py::TestInstall::test_ceph_conf_is_a_file[ansible:/mon1]
[gw1] PASSED ../../../tests/test_install.py::TestInstall::test_ceph_command_exists[ansible:/mon1]
../../../tests/test_install.py::TestCephConf::test_mon_host_line_has_correct_value[ansible:/mon1]
...
Finally the whole environment gets torn down:
.. code-block:: console
luminous-ansible2.4-xenial_cluster runtests: commands[5] | vagrant destroy --force
==> osd0: Forcing shutdown of VM...
==> osd0: Destroying VM and associated drives...
==> mon2: Forcing shutdown of VM...
==> mon2: Destroying VM and associated drives...
==> mon1: Forcing shutdown of VM...
==> mon1: Destroying VM and associated drives...
==> mon0: Forcing shutdown of VM...
==> mon0: Destroying VM and associated drives...
==> mds0: Forcing shutdown of VM...
==> mds0: Destroying VM and associated drives...
==> rgw0: Forcing shutdown of VM...
==> rgw0: Destroying VM and associated drives...
==> client0: Forcing shutdown of VM...
==> client0: Destroying VM and associated drives...
And a brief summary of the scenario(s) that ran is displayed:
.. code-block:: console
________________________________________________ summary _________________________________________________
luminous-ansible2.4-xenial_cluster: commands succeeded
congratulations :)
.. _test_scenarios:
Test Scenarios
==============
Scenarios are distinct environments that describe a Ceph deployment and
configuration. Scenarios are isolated as well, and define what machines are
needed aside from any ``ceph-ansible`` configuration.
.. _scenario_files:
Scenario Files
==============
The scenario is described in a ``vagrant_variables.yml`` file, which is
consumed by ``Vagrant`` when bringing up an environment.
This YAML file is loaded by the ``Vagrantfile`` so that the settings can be
used to bring up the boxes and pass some configuration to Ansible when running.
.. note::
The basic layout of a scenario is covered in :ref:`layout`.
There are just a handful of required files; these sections will cover the
required (most basic) ones. Additionally, other ``ceph-ansible`` files can be
added to customize the behavior of a scenario deployment.
.. _vagrant_variables:
``vagrant_variables.yml``
-------------------------
There are a few sections in the ``vagrant_variables.yml`` file which are easy
to follow (most of them are 1 line settings).
* **docker**: (bool) Indicates if the scenario will deploy Docker daemons
* **VMS**: (int) These integer values are just a count of how many machines will be
needed. Each supported type is listed, defaulting to 0:
.. code-block:: yaml
mon_vms: 0
osd_vms: 0
mds_vms: 0
rgw_vms: 0
nfs_vms: 0
rbd_mirror_vms: 0
client_vms: 0
mgr_vms: 0
For a deployment that needs 1 MON and 1 OSD, the list would look like:
.. code-block:: yaml
mon_vms: 1
osd_vms: 1
* **CEPH SOURCE**: (string) indicates whether a ``dev`` or ``stable`` release is
needed. A ``stable`` release will use the latest stable release of Ceph,
while ``dev`` will use ``shaman`` (http://shaman.ceph.com)
* **SUBNETS**: These are used for configuring the network availability of each
server that will be booted, as well as being used as configuration for
``ceph-ansible`` (and eventually Ceph). The two values that are **required** are:
.. code-block:: yaml
public_subnet: 192.168.13
cluster_subnet: 192.168.14
* **MEMORY**: Memory requirements (in megabytes) for each server, e.g.
``memory: 512``
* **interfaces**: some vagrant boxes (and linux distros) set specific
interfaces. For Ubuntu releases older than Xenial it was common to have
``eth1``, for CentOS and some Xenial boxes ``enp0s8`` is used. **However**
the public Vagrant boxes normalize the interface to ``eth1`` for all boxes,
making it easier to configure them with Ansible later.
.. warning::
Do *not* change the interface from ``eth1`` unless absolutely
certain that is needed for a box. Some tests that depend on that
naming will fail.
* **disks**: The disks that will be created for each machine, for most
environments ``/dev/sd*`` style of disks will work, like: ``[ '/dev/sda', '/dev/sdb' ]``
* **vagrant_box**: We have published our own boxes to normalize what we test
against. These boxes are published in Atlas
(https://atlas.hashicorp.com/ceph/). Currently valid values are:
``ceph/ubuntu-xenial``, and ``ceph/centos7``
The following aren't usually changed/enabled for tests, since they don't have
an impact, however they are documented here for general knowledge in case they
are needed:
* **ssh_private_key_path**: The path to the ``id_rsa`` (or other private SSH
key) that should be used to connect to these boxes.
* **vagrant_sync_dir**: what should be "synced" (made available on the new
servers) from the host.
* **vagrant_disable_synced_folder**: (bool) when set to true (the synced folder
is disabled), machines boot faster because no files need to be synced over.
* **os_tuning_params**: These are passed on to ``ceph-ansible`` as part of the
variables for "system tuning". These shouldn't be changed.
.. _vagrant_file:
``Vagrantfile``
---------------
The ``Vagrantfile`` should not need to change, and it is symlinked back to the
``Vagrantfile`` that exists in the root of the project. It is linked in this
way so that a vagrant environment can be isolated to the given scenario.
.. _hosts_file:
``hosts``
---------
The ``hosts`` file should contain the hosts needed for the scenario. This might
seem a bit repetitive since machines are already defined in
:ref:`vagrant_variables` but it allows granular changes to hosts (for example
defining an interface vs. an IP on a monitor) which can help catch issues in
``ceph-ansible`` configuration. For example:
.. code-block:: ini
[mons]
mon0 monitor_address=192.168.5.10
mon1 monitor_address=192.168.5.11
mon2 monitor_interface=eth1
.. _group_vars:
``group_vars``
--------------
This directory holds any configuration change that will affect ``ceph-ansible``
deployments in the same way as if Ansible was executed from the root of the
project.
The file that always needs to be defined is ``all``, where (again) certain
values like ``public_network`` and ``cluster_network`` will need to be defined
along with any customizations that ``ceph-ansible`` supports.
.. _scenario_wiring:
Scenario Wiring
---------------
Scenarios are just meant to provide the Ceph environment for testing, but they
do need to be defined in ``tox.ini`` so that they are available to the test
framework. To see the available scenarios, the following command (run from the
root of the project) will list them, shortened here for brevity:
.. code-block:: console
$ tox -l
...
luminous-ansible2.4-centos7_cluster
...
These scenarios are made from different variables; in the above example there
are 3:
* ``luminous``: the Ceph version to test
* ``ansible2.4``: the Ansible version to install
* ``centos7_cluster``: the name of the scenario
The last one is important in the *wiring up* of the scenario. It is a variable
that defines the path where the scenario lives. For example, the
``changedir`` entry for ``centos7_cluster`` looks like:
.. code-block:: ini
centos7_cluster: {toxinidir}/tests/functional/centos/7/cluster
The actual tests are written for specific daemon types, for all daemon types,
and for specific use cases (e.g. journal collocation); these have their own
conventions as well, which are explained in detail in :ref:`test_conventions`
and :ref:`test_files`.
As long as a test scenario defines OSDs and MONs, the OSD tests and MON tests
will run.
.. _scenario_conventions:
Conventions
-----------
.. _scenario_environment_configuration:
Environment configuration
-------------------------
.. _scenario_ansible_configuration:
Ansible configuration
---------------------
.. _tests:
Tests
=====
Actual tests are written as Python methods that accept optional fixtures. These
fixtures come with useful attributes to help with remote assertions.
As described in :ref:`test_conventions`, tests need to go into
``tests/functional/tests/``. These are collected and *mapped* to a distinct
node type, or *mapped* to run on all nodes.
Simple Python asserts are used (these tests do not need to subclass
``unittest.TestCase``), which makes it easier to reason about failures and
errors.
The test run is handled by ``py.test`` along with :ref:`testinfra` for handling
remote execution.
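As a quick illustration (the test name and path checked here are hypothetical),
a test is nothing more than a method with a plain ``assert``; ``testinfra``
evaluates it against the remote host. The ``File`` fixture used below is
covered in :ref:`test_fixtures`:
.. code-block:: python
def test_ceph_log_dir_exists(self, File):
    # no unittest.TestCase subclass is needed, just a plain assert
    assert File("/var/log/ceph").exists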
.. _test_files:
Test Files
----------
.. _test_fixtures:
Test Fixtures
=============
Test fixtures are a powerful feature of ``py.test`` and most tests depend on
them for making assertions about remote nodes. To request a fixture in a test
method, all that is needed is to list it as an argument.
Fixtures are detected by name, so as long as the argument being used has the
same name, the fixture will be passed in (see `pytest fixtures`_ for more
in-depth examples). The code that follows shows a test method that uses the
``node`` fixture, which contains useful information about a node in a Ceph
cluster:
.. code-block:: python
def test_ceph_conf(self, node):
assert node['conf_path'] == "/etc/ceph/ceph.conf"
The test is naive (the configuration path might not exist remotely) but
explains how simple it is to "request" a fixture.
For remote execution, we can rely further on other fixtures (tests can have as
many fixtures as needed) like ``File``:
.. code-block:: python
def test_ceph_config_has_inital_members_line(self, node, File):
assert File(node["conf_path"]).contains("^mon initial members = .*$")
.. _node:
``node`` fixture
----------------
The ``node`` fixture contains a few useful pieces of information about the node
where the test is being executed; this is captured once, before tests run:
* ``address``: The IP for the ``eth1`` interface
* ``subnet``: The subnet that ``address`` belongs to
* ``vars``: all the Ansible vars set for the current run
* ``osd_ids``: a list of all the OSD IDs
* ``num_mons``: the total number of monitors for the current environment
* ``num_devices``: the number of devices for the current node
* ``num_osd_hosts``: the total number of OSD hosts
* ``total_osds``: total number of OSDs on the current node
* ``cluster_name``: the name of the Ceph cluster (which defaults to 'ceph')
* ``conf_path``: since the cluster name can change the file path for the Ceph
configuration, this gets set according to the cluster name.
* ``cluster_address``: the address used for cluster communication. All
environments are set up with two interfaces, one being used exclusively for
the cluster
* ``docker``: A boolean that identifies a Ceph Docker cluster
* ``osds``: A list of OSD IDs, unless it is a Docker cluster, where it gets the
name of the devices (e.g. ``sda1``)
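For example, a test that relies only on these attributes could look like the
following sketch (the test name is hypothetical, and it assumes the Ceph
configuration lives under ``/etc/ceph/``):
.. code-block:: python
def test_conf_path_matches_cluster_name(self, node):
    # conf_path is derived from cluster_name, which defaults to 'ceph'
    assert node["conf_path"] == "/etc/ceph/%s.conf" % node["cluster_name"]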
Other Fixtures
--------------
There are a lot of other fixtures provided by :ref:`testinfra` as well as
``py.test``. The full list of ``testinfra`` fixtures is available in
`testinfra_fixtures`_
``py.test`` builtin fixtures can be listed with ``pytest -q --fixtures`` and
they are described in `pytest builtin fixtures`_
.. _pytest fixtures: https://docs.pytest.org/en/latest/fixture.html
.. _pytest builtin fixtures: https://docs.pytest.org/en/latest/builtin.html#builtin-fixtures-function-arguments
.. _testinfra_fixtures: https://testinfra.readthedocs.io/en/latest/modules.html#modules
.. _tox:
``tox``
=======
``tox`` is an automation project we use to run our testing scenarios. It gives us
the ability to create a dynamic matrix of many testing scenarios and isolated testing
environments, and provides a single entry point to run all tests in an automated and
repeatable fashion.
Documentation for tox can be found `here <https://tox.readthedocs.io/en/latest/>`_.
.. _tox_environment_variables:
Environment variables
---------------------
When running ``tox``, environment variables can be used to tweak certain settings
of the playbook run using Ansible's ``--extra-vars``. This is helpful in Jenkins jobs or for manual test
runs of ``ceph-ansible``.
The following environment variables are available for use:
* ``CEPH_DOCKER_REGISTRY``: (default: ``quay.io``) This would configure the ``ceph-ansible`` variable ``ceph_docker_registry``.
* ``CEPH_DOCKER_IMAGE``: (default: ``ceph/daemon``) This would configure the ``ceph-ansible`` variable ``ceph_docker_image``.
* ``CEPH_DOCKER_IMAGE_TAG``: (default: ``latest``) This would configure the ``ceph-ansible`` variable ``ceph_docker_image_tag``.
* ``CEPH_DEV_BRANCH``: (default: ``main``) This would configure the ``ceph-ansible`` variable ``ceph_dev_branch`` which defines which branch we'd
like to install from shaman.ceph.com.
* ``CEPH_DEV_SHA1``: (default: ``latest``) This would configure the ``ceph-ansible`` variable ``ceph_dev_sha1`` which defines which sha1 we'd like
to install from shaman.ceph.com.
* ``UPDATE_CEPH_DEV_BRANCH``: (default: ``main``) This would configure the ``ceph-ansible`` variable ``ceph_dev_branch`` which defines which branch we'd
like to update to from shaman.ceph.com.
* ``UPDATE_CEPH_DEV_SHA1``: (default: ``latest``) This would configure the ``ceph-ansible`` variable ``ceph_dev_sha1`` which defines which sha1 we'd like
to update to from shaman.ceph.com.
.. _tox_sections:
Sections
--------
The ``tox.ini`` file has a number of top level sections defined by ``[ ]`` and subsections within those. For complete documentation
on all subsections inside of a tox section please refer to the tox documentation.
* ``tox`` : This section contains the ``envlist`` which is used to create our dynamic matrix. Refer to the `section here <http://tox.readthedocs.io/en/latest/config.html#generating-environments-conditional-settings>`_ for more information on how the ``envlist`` works.
* ``purge`` : This section contains commands that only run for scenarios that purge the cluster and redeploy. You'll see this section being reused in ``testenv``
with the following syntax: ``{[purge]commands}``
* ``update`` : This section contains commands that only run for scenarios that deploy a cluster and then upgrade it to another Ceph version.
* ``testenv`` : This is the main section of the ``tox.ini`` file and is run on every scenario. This section contains many *factors* that define conditional
settings depending on the scenarios defined in the ``envlist``. For example, the factor ``centos7_cluster`` in the ``changedir`` subsection of ``testenv`` sets
the directory that tox will change to when that factor is selected. This is an important behavior that allows us to use the same ``tox.ini`` and reuse commands while
tweaking certain sections per testing scenario.
.. _tox_environments:
Modifying or Adding environments
--------------------------------
The tox environments are controlled by the ``envlist`` subsection of the ``[tox]`` section. Anything inside of ``{}`` is considered a *factor* and will be included
in the dynamic matrix that tox creates. Inside of ``{}`` you can include a comma-separated list of the *factors*. Do not use a hyphen (``-``) as part
of the *factor* name as those are used by tox as the separator between different factor sets.
For example, if you wanted to add a new test *factor* for the next Ceph release, luminous, this is how you'd accomplish that. Currently, the first factor set in our ``envlist``
is used to define the Ceph release (``{jewel,kraken}-...``). To add luminous you'd change that to look like ``{luminous,kraken}-...``. In the ``testenv`` section
there is a subsection called ``setenv`` which allows you to provide environment variables to the tox environment, and we support an environment variable called ``CEPH_STABLE_RELEASE``. To ensure that all the new tests created by adding the luminous *factor* use that release, you'd add this in that section: ``luminous: CEPH_STABLE_RELEASE=luminous``.
[tox]
envlist = docs
skipsdist = True
[testenv:docs]
basepython=python
changedir=source
deps=sphinx==1.7.9
commands=
sphinx-build -W -b html -d {envtmpdir}/doctrees . {envtmpdir}/html
# Dummy ansible host file
# Used for syntax check by Travis
# Before committing code please run: ansible-playbook --syntax-check site.yml -i dummy-ansible-hosts
localhost
#!/usr/bin/env bash
set -euo pipefail
#############
# VARIABLES #
#############
basedir=$(dirname "$0")
do_not_generate="(ceph-common|ceph-container-common|ceph-fetch-keys)$" # pipe separated list of roles we don't want to generate sample file, MUST end with '$', e.g: 'foo$|bar$'
#############
# FUNCTIONS #
#############
populate_header () {
for i in $output; do
cat <<EOF > "$basedir"/group_vars/"$i"
---
# Variables here are applicable to all host groups NOT roles
# This sample file generated by $(basename "$0")
# Dummy variable to avoid error because ansible does not recognize the
# file as a good configuration file when no variable in it.
dummy:
EOF
done
}
generate_group_vars_file () {
for i in $output; do
if [ "$(uname)" == "Darwin" ]; then
sed '/^---/d; s/^\([A-Za-z[:space:]]\)/#\1/' \
"$defaults" >> "$basedir"/group_vars/"$i"
echo >> "$basedir"/group_vars/"$i"
elif [ "$(uname -s)" == "Linux" ]; then
sed '/^---/d; s/^\([A-Za-z[:space:]].\+\)/#\1/' \
"$defaults" >> "$basedir"/group_vars/"$i"
echo >> "$basedir"/group_vars/"$i"
else
echo "Unsupported platform"
exit 1
fi
done
}
########
# MAIN #
########
for role in "$basedir"/roles/ceph-*; do
rolename=$(basename "$role")
if [[ $rolename == "ceph-defaults" ]]; then
output="all.yml.sample"
elif [[ $rolename == "ceph-fetch-keys" ]]; then
output="ceph-fetch-keys.yml.sample"
elif [[ $rolename == "ceph-rbd-mirror" ]]; then
output="rbdmirrors.yml.sample"
elif [[ $rolename == "ceph-rgw-loadbalancer" ]]; then
output="rgwloadbalancers.yml.sample"
else
output="${rolename:5}s.yml.sample"
fi
defaults="$role"/defaults/main.yml
if [[ ! -f $defaults ]]; then
continue
fi
if ! echo "$rolename" | grep -qE "$do_not_generate"; then
populate_header
generate_group_vars_file
fi
done
---
# Variables here are applicable to all host groups NOT roles
# This sample file generated by generate_group_vars_sample.sh
# Dummy variable to avoid error because ansible does not recognize the
# file as a good configuration file when no variable in it.
dummy:
###########
# GENERAL #
###########
# Even though Client nodes should not have the admin key
# at their disposal, some people might want to have it
# distributed on Client nodes. Setting 'copy_admin_key' to 'true'
# will copy the admin key to the /etc/ceph/ directory
#copy_admin_key: false
#user_config: false
# When pg_autoscale_mode is set to True, you must add the target_size_ratio key with a correct value.
# `pg_num` and `pgp_num` keys will be ignored, even if specified.
# eg:
# test:
# name: "test"
# application: "rbd"
# target_size_ratio: 0.2
#test:
# name: "test"
# application: "rbd"
#test2:
# name: "test2"
# application: "rbd"
#pools:
# - "{{ test }}"
# - "{{ test2 }}"
# Generate a keyring using ceph-authtool CLI or python.
# Eg:
# $ ceph-authtool --gen-print-key
# or
# $ python2 -c "import os ; import struct ; import time; import base64 ; key = os.urandom(16) ; header = struct.pack('<hiih',1,int(time.time()),0,len(key)) ; print(base64.b64encode(header + key))"
#
# To use a particular secret, you have to add 'key' to the dict below, so something like:
# - { name: client.test, key: "AQAin8tUMICVFBAALRHNrV0Z4MXupRw4v9JQ6Q==" ...
#keys:
# - { name: client.test, caps: { mon: "profile rbd", osd: "allow class-read object_prefix rbd_children, profile rbd pool=test" }, mode: "{{ ceph_keyring_permissions }}" }
# - { name: client.test2, caps: { mon: "profile rbd", osd: "allow class-read object_prefix rbd_children, profile rbd pool=test2" }, mode: "{{ ceph_keyring_permissions }}" }
---
# Variables here are applicable to all host groups NOT roles
# This sample file generated by generate_group_vars_sample.sh
# Dummy variable to avoid error because ansible does not recognize the
# file as a good configuration file when no variable in it.
dummy:
# You can override vars by using host or group vars
###########
# GENERAL #
###########
# Even though MDS nodes should not have the admin key
# at their disposal, some people might want to have it
# distributed on MDS nodes. Setting 'copy_admin_key' to 'true'
# will copy the admin key to the /etc/ceph/ directory
#copy_admin_key: false
##########
# DOCKER #
##########
# Resource limitation
# For the whole list of limits you can apply see: docs.docker.com/engine/admin/resource_constraints
# Default values are based from: https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/2/html/red_hat_ceph_storage_hardware_guide/minimum_recommendations
# These options can be passed using the 'ceph_mds_docker_extra_env' variable.
#ceph_mds_docker_memory_limit: "{{ ansible_facts['memtotal_mb'] }}m"
#ceph_mds_docker_cpu_limit: 4
# we currently force MDS_NAME to hostname because of a bug in ceph-docker
# fix here: https://github.com/ceph/ceph-docker/pull/770
# this will go away soon.
#ceph_mds_docker_extra_env: -e MDS_NAME={{ ansible_facts['hostname'] }}
#ceph_config_keys: [] # DON'T TOUCH ME
###########
# SYSTEMD #
###########
# ceph_mds_systemd_overrides will override the systemd settings
# for the ceph-mds services.
# For example, to set "PrivateDevices=false" you can specify:
# ceph_mds_systemd_overrides:
# Service:
# PrivateDevices: false
---
# Variables here are applicable to all host groups NOT roles
# This sample file generated by generate_group_vars_sample.sh
# Dummy variable to avoid error because ansible does not recognize the
# file as a good configuration file when no variable in it.
dummy:
##########
# GLOBAL #
##########
# Even though MGR nodes should not have the admin key
# at their disposal, some people might want to have it
# distributed on MGR nodes. Setting 'copy_admin_key' to 'true'
# will copy the admin key to the /etc/ceph/ directory
#copy_admin_key: false
#mgr_secret: 'mgr_secret'
###########
# MODULES #
###########
# Ceph mgr modules to enable, to view the list of available modules see: http://docs.ceph.com/docs/CEPH_VERSION/mgr/
# and replace CEPH_VERSION with your current Ceph version, e.g.: 'mimic'
#ceph_mgr_modules: []
############
# PACKAGES #
############
# Ceph mgr packages to install, ceph-mgr + extra module packages.
#ceph_mgr_packages:
# - ceph-mgr
##########
# DOCKER #
##########
# Resource limitation
# For the whole list of limits you can apply see: docs.docker.com/engine/admin/resource_constraints
# Default values are based from: https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/2/html/red_hat_ceph_storage_hardware_guide/minimum_recommendations
# These options can be passed using the 'ceph_mgr_docker_extra_env' variable.
#ceph_mgr_docker_memory_limit: "{{ ansible_facts['memtotal_mb'] }}m"
#ceph_mgr_docker_cpu_limit: 1
#ceph_mgr_docker_extra_env:
#ceph_config_keys: [] # DON'T TOUCH ME
###########
# SYSTEMD #
###########
# ceph_mgr_systemd_overrides will override the systemd settings
# for the ceph-mgr services.
# For example, to set "PrivateDevices=false" you can specify:
# ceph_mgr_systemd_overrides:
# Service:
# PrivateDevices: false
---
# Variables here are applicable to all host groups NOT roles
# This sample file generated by generate_group_vars_sample.sh
# Dummy variable to avoid error because ansible does not recognize the
# file as a good configuration file when no variable in it.
dummy:
# You can override vars by using host or group vars
###########
# GENERAL #
###########
#mon_group_name: mons
# ACTIVATE BOTH FSID AND MONITOR_SECRET VARIABLES FOR NON-VAGRANT DEPLOYMENT
#monitor_secret: "{{ monitor_keyring.stdout }}"
#admin_secret: 'admin_secret'
# Secure your cluster
# This will set the following flags on all the pools:
# * nosizechange
# * nopgchange
# * nodelete
#secure_cluster: false
#secure_cluster_flags:
# - nopgchange
# - nodelete
# - nosizechange
#client_admin_ceph_authtool_cap:
# mon: allow *
# osd: allow *
# mds: allow *
# mgr: allow *
##########
# DOCKER #
##########
# Resource limitation
# For the whole list of limits you can apply see: docs.docker.com/engine/admin/resource_constraints
# Default values are based from: https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/2/html/red_hat_ceph_storage_hardware_guide/minimum_recommendations
# These options can be passed using the 'ceph_mon_docker_extra_env' variable.
#ceph_mon_docker_memory_limit: "{{ ansible_facts['memtotal_mb'] }}m"
#ceph_mon_docker_cpu_limit: 1
#ceph_mon_container_listen_port: 3300
# Use this variable to add extra env configuration to run your mon container.
# If you want to set a custom admin keyring you can set this variable like following:
# ceph_mon_docker_extra_env: -e ADMIN_SECRET={{ admin_secret }}
#ceph_mon_docker_extra_env:
#mon_docker_privileged: false
#mon_docker_net_host: true
#ceph_config_keys: [] # DON'T TOUCH ME
###########
# SYSTEMD #
###########
# ceph_mon_systemd_overrides will override the systemd settings
# for the ceph-mon services.
# For example, to set "PrivateDevices=false" you can specify:
# ceph_mon_systemd_overrides:
# Service:
# PrivateDevices: false
---
# Variables here are applicable to all host groups NOT roles
# This sample file generated by generate_group_vars_sample.sh
# Dummy variable to avoid error because ansible does not recognize the
# file as a good configuration file when no variable in it.
dummy:
# You can override vars by using host or group vars
###########
# GENERAL #
###########
# Even though NFS nodes should not have the admin key
# at their disposal, some people might want to have it
# distributed on NFS nodes. Setting 'copy_admin_key' to 'true'
# will copy the admin key to the /etc/ceph/ directory
#copy_admin_key: false
# Whether docker container or systemd service should be enabled
# and started, it's useful to set it to false if nfs-ganesha
# service is managed by pacemaker
#ceph_nfs_enable_service: true
# ceph-nfs systemd service uses ansible's hostname as an instance id,
# so service name is ceph-nfs@{{ ansible_facts['hostname'] }}, this is not
# ideal when ceph-nfs is managed by pacemaker across multiple hosts - in
# such case it's better to have constant instance id instead which
# can be set by 'ceph_nfs_service_suffix'
# ceph_nfs_service_suffix: "{{ ansible_facts['hostname'] }}"
######################
# NFS Ganesha Config #
######################
#ceph_nfs_log_file: "/var/log/ganesha/ganesha.log"
#ceph_nfs_dynamic_exports: false
# If set to true then rados is used to store ganesha exports
# and client sessions information, this is useful if you
# run multiple nfs-ganesha servers in active/passive mode and
# want to do failover
#ceph_nfs_rados_backend: false
# Name of the rados object used to store a list of the export rados
# object URLS
#ceph_nfs_rados_export_index: "ganesha-export-index"
# Address ganesha service should listen on, by default ganesha listens on all
# addresses. (Note: ganesha ignores this parameter in current version due to
# this bug: https://github.com/nfs-ganesha/nfs-ganesha/issues/217)
# ceph_nfs_bind_addr: 0.0.0.0
# If set to true, then ganesha's attribute and directory caching is disabled
# as much as possible. Currently, ganesha caches by default.
# When using ganesha as CephFS's gateway, it is recommended to turn off
# ganesha's caching as the libcephfs clients also cache the same information.
# Note: Irrespective of this option's setting, ganesha's caching is disabled
# when setting 'nfs_file_gw' option as true.
#ceph_nfs_disable_caching: false
# This is the file ganesha will use to control NFSv4 ID mapping
#ceph_nfs_idmap_conf: "/etc/ganesha/idmap.conf"
# idmap configuration file override.
# This allows you to specify more configuration options
# using an INI style format.
# Example:
# idmap_conf_overrides:
# General:
# Domain: foo.domain.net
#idmap_conf_overrides: {}
####################
# FSAL Ceph Config #
####################
#ceph_nfs_ceph_export_id: 20133
#ceph_nfs_ceph_pseudo_path: "/cephfile"
#ceph_nfs_ceph_protocols: "3,4"
#ceph_nfs_ceph_access_type: "RW"
#ceph_nfs_ceph_user: "admin"
#ceph_nfs_ceph_squash: "Root_Squash"
#ceph_nfs_ceph_sectype: "sys,krb5,krb5i,krb5p"
###################
# FSAL RGW Config #
###################
#ceph_nfs_rgw_export_id: 20134
#ceph_nfs_rgw_pseudo_path: "/cephobject"
#ceph_nfs_rgw_protocols: "3,4"
#ceph_nfs_rgw_access_type: "RW"
#ceph_nfs_rgw_user: "cephnfs"
#ceph_nfs_rgw_squash: "Root_Squash"
#ceph_nfs_rgw_sectype: "sys,krb5,krb5i,krb5p"
# Note: keys are optional and can be generated, but not on containerized
# deployments, where they must be configured.
# ceph_nfs_rgw_access_key: "QFAMEDSJP5DEKJO0DDXY"
# ceph_nfs_rgw_secret_key: "iaSFLDVvDdQt6lkNzHyW4fPLZugBAI1g17LO0+87[MAC[M#C"
#rgw_client_name: client.rgw.{{ ansible_facts['hostname'] }}
###################
# CONFIG OVERRIDE #
###################
# Ganesha configuration file override.
# These multiline strings will be appended to the contents of the blocks in ganesha.conf and
# must be in the correct ganesha.conf format seen here:
# https://github.com/nfs-ganesha/nfs-ganesha/blob/next/src/config_samples/ganesha.conf.example
#
# Example:
# CACHEINODE {
# # Entries_HWMark = 100000;
# }
#
# ganesha_core_param_overrides:
# ganesha_ceph_export_overrides:
# ganesha_rgw_export_overrides:
# ganesha_rgw_section_overrides:
# ganesha_log_overrides:
# ganesha_conf_overrides: |
# CACHEINODE {
# # Entries_HWMark = 100000;
# }
##########
# DOCKER #
##########
#ceph_docker_image: "ceph/daemon"
#ceph_docker_image_tag: latest
#ceph_nfs_docker_extra_env:
#ceph_config_keys: [] # DON'T TOUCH ME
---
# Variables here are applicable to all host groups NOT roles
# This sample file generated by generate_group_vars_sample.sh
# Dummy variable to avoid error because ansible does not recognize the
# file as a good configuration file when no variable in it.
dummy:
#########
# SETUP #
#########
# Even though rbd-mirror nodes should not have the admin key
# at their disposal, some people might want to have it
# distributed on rbd-mirror nodes. Setting 'copy_admin_key' to 'true'
# will copy the admin key to the /etc/ceph/ directory. Only
# valid for Luminous and later releases.
#copy_admin_key: false
#################
# CONFIGURATION #
#################
#ceph_rbd_mirror_local_user: client.rbd-mirror-peer
#ceph_rbd_mirror_configure: false
#ceph_rbd_mirror_mode: pool
#ceph_rbd_mirror_remote_cluster: remote
##########
# DOCKER #
##########
# Resource limitation
# For the whole list of limits you can apply see: docs.docker.com/engine/admin/resource_constraints
# Default values are based from: https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/2/html/red_hat_ceph_storage_hardware_guide/minimum_recommendations
# These options can be passed using the 'ceph_rbd_mirror_docker_extra_env' variable.
#ceph_rbd_mirror_docker_memory_limit: "{{ ansible_facts['memtotal_mb'] }}m"
#ceph_rbd_mirror_docker_cpu_limit: 1
#ceph_rbd_mirror_docker_extra_env:
#ceph_config_keys: [] # DON'T TOUCH ME
###########
# SYSTEMD #
###########
# ceph_rbd_mirror_systemd_overrides will override the systemd settings
# for the ceph-rbd-mirror services.
# For example, to set "PrivateDevices=false" you can specify:
# ceph_rbd_mirror_systemd_overrides:
# Service:
# PrivateDevices: false
---
# Variables here are applicable to all host groups NOT roles
# This sample file generated by generate_group_vars_sample.sh
# Dummy variable to avoid error because ansible does not recognize the
# file as a good configuration file when no variable in it.
dummy:
# You can override vars by using host or group vars
###########
# GENERAL #
###########
#haproxy_frontend_port: 80
#haproxy_frontend_ssl_port: 443
#haproxy_frontend_ssl_certificate:
#haproxy_ssl_dh_param: 4096
#haproxy_ssl_ciphers:
# - EECDH+AESGCM
# - EDH+AESGCM
#haproxy_ssl_options:
# - no-sslv3
# - no-tlsv10
# - no-tlsv11
# - no-tls-tickets
#
# virtual_ips:
# - 192.168.238.250
# - 192.168.238.251
#
# virtual_ip_netmask: 24
# virtual_ip_interface: ens33
Infrastructure playbooks
========================
This directory contains a variety of playbooks that can be used independently of the Ceph roles we have.
They aim to perform infrastructure-related tasks that help with managing a Ceph cluster or performing certain operational tasks.
To use them, run `ansible-playbook infrastructure-playbooks/<playbook>`.
---
- name: Gather ceph logs
hosts:
- mons
- osds
- mdss
- rgws
- nfss
- rbdmirrors
- clients
- mgrs
gather_facts: false
become: true
tasks:
- name: Create a temp directory
ansible.builtin.tempfile:
state: directory
prefix: ceph_ansible
run_once: true
register: localtempfile
become: false
delegate_to: localhost
- name: Set_fact lookup_ceph_config - lookup keys, conf and logs
ansible.builtin.find:
paths:
- /etc/ceph
- /var/log/ceph
register: ceph_collect
- name: Collect ceph logs, config and keys on the machine running ansible
ansible.builtin.fetch:
src: "{{ item.path }}"
dest: "{{ localtempfile.path }}"
fail_on_missing: false
flat: false
with_items: "{{ ceph_collect.files }}"
---
# Nukes a multisite config
- hosts: rgws
become: true
tasks:
- include_tasks: roles/ceph-rgw/tasks/multisite/destroy.yml
handlers:
# Ansible 2.1.0 bug will ignore included handlers without this
- name: Import_tasks roles/ceph-rgw/handlers/main.yml
import_tasks: roles/ceph-rgw/handlers/main.yml