OpenStack in HBP
Infrastructure
4 servers (128GB RAM, 4x 4TB HDD, 1x 240GB SSD, 10GE)
- oscloud-1.scc.kit.edu
- oscloud-2.scc.kit.edu
- oscloud-3.scc.kit.edu
- oscloud-4.scc.kit.edu
1 VM (oscloud-5.scc.kit.edu)
Useful links
- Documentation from RedHat about the Packstack tool [1]
- The PackStack git repository [2]
- The RDO yum repository of OpenStack RPMs for RHEL based distributions [3]
- The latest OpenStack release (Mitaka) [4]
Installation
Single-node installation with PackStack
RHEL subscription
If the FuE cluster subscription (which uses a local subscription server) is outdated, configure the system to use the central Red Hat subscription server. This is necessary to get the latest OpenStack rpms (otherwise the repo 'rhel-7-server-openstack-8-rpms' cannot be enabled).
Change the following lines in the config file for Red Hat Subscription Manager (/etc/rhsm/rhsm.conf):
# Server hostname:
hostname = subscription.rhn.redhat.com
# Server prefix:
prefix = /subscription
# Content base URL:
baseurl = https://cdn.redhat.com
# Default CA cert to use when generating yum repo configs:
repo_ca_cert = %(ca_cert_dir)sredhat-uep.pem
Then run the subscription registration with your credentials:
subscription-manager register --org=<org-id> --activationkey=<key> --force
Software repositories
subscription-manager list --available --all
subscription-manager repos --enable=rhel-7-server-rpms
subscription-manager repos --enable=rhel-7-server-rh-common-rpms
subscription-manager repos --enable=rhel-7-server-openstack-8-rpms
subscription-manager repos --enable=rhel-7-server-extras-rpms
subscription-manager repos --list
yum install yum-utils
yum update
Install and run PackStack
Install the Packstack package:
yum install -y openstack-packstack
Run Packstack (requires the machine's root password):
packstack --allinone
If you have issues with the MongoDB installation, install MongoDB manually (also see the bug report at [5]):
yum install -y mongodb-server mongodb
The all-in-one installation creates the 'admin' and 'demo' users. To view the admin password for later use, run:
grep OS_PASSWORD /root/keystonerc_admin
Test installation
To test your installation, log in with the admin credentials at the dashboard:
http://141.52.214.14/dashboard/
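You can also sanity-check the services from the command line (a minimal sketch, assuming the keystonerc_admin file created by Packstack in /root):
source /root/keystonerc_admin
openstack service list
openstack compute service list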
Configuration
The default config parameters are saved in a packstack answers file, which can later be used to install OpenStack again.
The following parameters should be considered for changes in the future multi-node set-up (they can also be passed to packstack as command-line arguments):
CONFIG_USE_EPEL=y                        # this will fix the MongoDB issue
CONFIG_NETWORK_HOSTS=<controller_ip>     # for Neutron, centralized network on the controller node
CONFIG_AMQP_ENABLE_SSL=y
CONFIG_AMQP_ENABLE_AUTH=y
CONFIG_KEYSTONE_IDENTITY_BACKEND=ldap    # only if we want to use LDAP as backend for the identity service; interesting to check
CONFIG_GLANCE_BACKEND=swift              # only if we want to test with Swift in the future, e.g. on DDN WOS; requires CONFIG_SWIFT_INSTALL=y
CONFIG_CINDER_VOLUMES_SIZE=20G           # this is the default value, should be increased
CONFIG_NOVA_NETWORK_FLOATRANGE=<public IP range in CIDR format>   # should ask the networking department
CONFIG_HORIZON_SSL=y                     # also get a certificate from a CA and set CONFIG_HORIZON_SSL_CERT, CONFIG_HORIZON_SSL_KEY and CONFIG_HORIZON_SSL_CACERT accordingly
CONFIG_PROVISION_ALL_IN_ONE_OVS_BRIDGE=n # must be disabled for multi-node setup
Multi-Node Installation
Set-up
- oscloud-5.scc.kit.edu: packstack
- oscloud-1.scc.kit.edu: controller host (keystone, cinder, glance, horizon, neutron)
- oscloud-2.scc.kit.edu: compute host (nova-compute)
- oscloud-3.scc.kit.edu: compute host (nova-compute)
- oscloud-4.scc.kit.edu: compute host (nova-compute)
We use the RDO repositories for the Mitaka release of OpenStack (on all nodes):
yum -y install https://rdoproject.org/repos/rdo-release.rpm
Install packstack on oscloud-5:
yum -y install openstack-packstack
Generate answers file for packstack with default settings:
packstack --gen-answer-file=packstack-answers.txt
Modify the file where necessary and then deploy OpenStack:
packstack --answer-file=packstack-answers.txt
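Packstack connects to the hosts listed in the answer file over SSH as root and will prompt for each node's root password unless key-based access is already set up. A possible preparation step (a sketch; the host IPs correspond to the controller and compute nodes configured in the answer file and may need adjusting):
ssh-keygen -t rsa
for host in 141.52.214.14 141.52.214.16 141.52.214.18 141.52.214.20; do
    ssh-copy-id root@$host
done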
Answer file
The full answers file can be found at File:Packstack-answers.txt. The following settings were modified:
CONFIG_NAGIOS_INSTALL=n
CONFIG_CONTROLLER_HOST=141.52.214.14
CONFIG_COMPUTE_HOSTS=141.52.214.16,141.52.214.18,141.52.214.20
CONFIG_NETWORK_HOSTS=141.52.214.14
CONFIG_SELFSIGN_CACERT_SUBJECT_CN=oscloud-1.scc.kit.edu
CONFIG_SELFSIGN_CACERT_SUBJECT_MAIL=admin@oscloud-1.scc.kit.edu
CONFIG_AMQP_HOST=141.52.214.14
CONFIG_AMQP_ENABLE_SSL=y
CONFIG_AMQP_ENABLE_AUTH=y
CONFIG_MARIADB_HOST=141.52.214.14
CONFIG_CINDER_VOLUMES_SIZE=40G
CONFIG_HORIZON_SSL=y
CONFIG_PROVISION_OVS_BRIDGE=n
CONFIG_MONGODB_HOST=141.52.214.14
CONFIG_REDIS_MASTER_HOST=141.52.214.14
CONFIG_KEYSTONE_API_VERSION=v3   # this does not work yet with the current OpenStack and PackStack releases and Puppet modules
Identity Service
Integration with HBP OIDC
Useful links:
- INDIGO-DataCloud instruction for OIDC-Keystone integration [6]
- OpenStack changing Keystone from v2.0 to v3 [7]
Enabling Keystone API v3
The following operations should be performed on controller node oscloud-1.scc.kit.edu.
- enable command line access to Keystone v3
cp keystonerc_admin keystonerc_admin_v3
vi keystonerc_admin_v3
    export OS_AUTH_URL=http://141.52.214.14:5000/v3/
    export OS_IDENTITY_API_VERSION=3
    export OS_PROJECT_DOMAIN_NAME=Default
    export OS_USER_DOMAIN_NAME=Default
source keystonerc_admin_v3
- configure the dashboard for Keystone v3: change these lines in /etc/openstack-dashboard/local_settings:
OPENSTACK_KEYSTONE_URL = "http://141.52.214.14:5000/v3"
OPENSTACK_API_VERSIONS = {
    "identity": 3
}
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'Default'
- configure nova: edit /etc/nova/nova.conf
auth_uri=http://141.52.214.14:5000/v3
auth_version=v3
- configure cinder: edit /etc/cinder/cinder.conf
auth_uri=http://141.52.214.14:5000/v3
auth_version=v3
- restart httpd service
service httpd restart
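As a quick check that the v3 API is reachable (hypothetical verification commands, not part of the original procedure):
source keystonerc_admin_v3
openstack domain list
openstack project list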
HBP OIDC client registration
You need to be registered by the HBP AAI team; contact them to request access.
Register a new client in the HBP OpenID Connect Client Manager [8].
- enter a name of your choice.
- enter the allowed redirect URIs (e.g. https://oscloud-1.scc.kit.edu:5000/v3/auth/OS-FEDERATION/websso/oidc/redirect or https://141.52.214.14:5000/v3/auth/OS-FEDERATION/websso/oidc/redirect)
- select the Server Flow application type.
- once you save it, keep a copy of the following fields:
- Client ID
- Client Secret
- Registration Endpoint
- Registration Access Token
Keystone integration
- install the httpd module for OIDC authentication [9]
wget https://github.com/pingidentity/mod_auth_openidc/releases/download/v2.0.0rc1/cjose-0.4.1-1.el7.centos.x86_64.rpm
wget https://github.com/pingidentity/mod_auth_openidc/releases/download/v2.0.0rc1/mod_auth_openidc-2.0.0rc1-1.el7.centos.x86_64.rpm
yum install cjose-0.4.1-1.el7.centos.x86_64.rpm mod_auth_openidc-2.0.0rc1-1.el7.centos.x86_64.rpm
- configure module auth_openidc
vi /etc/httpd/conf.modules.d/auth_openidc.load
    LoadModule auth_openidc_module /etc/httpd/modules/mod_auth_openidc.so
vi /etc/httpd/conf.d/10-keystone_wsgi_main.conf

OIDCClaimPrefix "OIDC-"
OIDCResponseType "code"
OIDCScope "openid profile"
OIDCProviderMetadataURL https://services.humanbrainproject.eu/oidc/.well-known/openid-configuration
OIDCClientID <client id>
OIDCClientSecret <client secret>
OIDCProviderTokenEndpointAuth client_secret_basic
OIDCCryptoPassphrase <choose anything>
OIDCRedirectURI http://141.52.214.14:5000/v3/auth/OS-FEDERATION/websso/oidc/redirect

# The JWKs URL on which the Authorization Server publishes the keys used to sign its JWT access tokens.
# When not defined, local validation of JWTs can still be done using statically configured keys,
# by setting OIDCOAuthVerifyCertFiles and/or OIDCOAuthVerifySharedKeys.
OIDCOAuthVerifyJwksUri "https://services.humanbrainproject.eu/oidc/jwk"

<Location ~ "/v3/auth/OS-FEDERATION/websso/oidc">
  AuthType openid-connect
  Require valid-user
  LogLevel debug
</Location>

<Location ~ "/v3/OS-FEDERATION/identity_providers/hbp/protocols/oidc/auth">
  AuthType oauth20
  Require valid-user
  LogLevel debug
</Location>
- edit /etc/keystone/keystone.conf
[auth]
methods = external,password,token,oauth1,oidc
oidc = keystone.auth.plugins.mapped.Mapped

[oidc]
remote_id_attribute = HTTP_OIDC_ISS

[federation]
remote_id_attribute = HTTP_OIDC_ISS
trusted_dashboard = https://141.52.214.14/dashboard/auth/websso/
trusted_dashboard = https://lsdf-28-053.scc.kit.edu/dashboard/auth/websso/
trusted_dashboard = https://oscloud-1.scc.kit.edu/dashboard/auth/websso/
sso_callback_template = /etc/keystone/sso_callback_template.html
- service httpd restart
- openstack group create hbp_group --description "HBP Federated users group"
- openstack project create hbp --description "HBP project"
- openstack role add _member_ --group hbp_group --project hbp
- create file hbp_mapping.json
- use the HBP unique ID (OIDC-sub) to create shadow users in keystone
- filter HBP users to allow only members of group hbp-kit-cloud
- this group needs to be created on the OIDC server
[ { "local": [ { "group": { "id": "0d3f8a7ba65648008c33c59b2383b817" }, "user": { "domain": { "id": "default" }, "type": "ephemeral", "name": "{0}_hbpID" } } ], "remote": [ { "type":"OIDC-sub" }, { "type": "HTTP_OIDC_ISS", "any_one_of": [ "https://services.humanbrainproject.eu/oidc/" ] }, { "type": "HTTP_OIDC_GROUPS", "any_one_of": [ ".*hbp-kit-cloud.*" ], "regex": true } ] } ]
- openstack mapping create hbp_mapping --rules hbp_mapping.json
- openstack identity provider create hbp --remote-id https://services.humanbrainproject.eu/oidc/
- openstack federation protocol create oidc --identity-provider hbp --mapping hbp_mapping
- edit /etc/openstack-dashboard/local_settings
WEBSSO_ENABLED = True
WEBSSO_INITIAL_CHOICE = "credentials"
WEBSSO_CHOICES = (
    ("credentials", _("Keystone Credentials")),
    ("oidc", _("HBP OIDC"))
)
- service httpd restart
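After restarting httpd, the federation objects created above can be checked with the OpenStack CLI (a hedged verification sketch, assuming the keystonerc_admin_v3 credentials from the previous section):
openstack identity provider list
openstack mapping list
openstack federation protocol list --identity-provider hbp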
Multiple OIDC providers
Useful links:
- INDIGO-DataCloud instructions for configuring multiple OIDC providers [10]
The set-up above will be modified to support the INDIGO IAM as a second OIDC provider.
INDIGO IAM OIDC client registration
You need to be registered by the INDIGO AAI team, so contact them in order to do so [11].
Then register a new client under Self Service Client Registration
- enter a name of your choice.
- enter the allowed redirect URIs (e.g. https://oscloud-1.scc.kit.edu:5000/v3/auth/OS-FEDERATION/websso/oidc/redirect or https://141.52.214.14:5000/v3/auth/OS-FEDERATION/websso/oidc/redirect)
- in the Credentials tab, keep 'Client Secret over HTTP Basic'
- once you save it, keep a copy of the following fields:
- Client ID
- Client Secret
- Registration Endpoint
- Registration Access Token
Preparing the metadata files
Instead of configuring the provider metadata in the httpd config files, the apache server will read it from a given folder. Each provider configuration is defined in 3 separate files: <name>.provider, <name>.client and <name>.conf, where <name> is the URL-encoded issuer value with the https:// prefix and trailing slash stripped.
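For example, for the HBP issuer https://services.humanbrainproject.eu/oidc/ the resulting <name> is services.humanbrainproject.eu%2Foidc; a quick way to compute it (a sketch using the system Python, not part of the original instructions):
python -c 'import urllib; print(urllib.quote("services.humanbrainproject.eu/oidc", safe=""))'
# prints: services.humanbrainproject.eu%2Foidc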
- create metadata folder writable by the apache user:
mkdir /var/cache/httpd/metadata
chown apache:apache /var/cache/httpd/metadata
- create *.provider files:
curl https://iam-test.indigo-datacloud.eu/.well-known/openid-configuration > /var/cache/httpd/metadata/iam-test.indigo-datacloud.eu.provider
curl https://services.humanbrainproject.eu/oidc/.well-known/openid-configuration > /var/cache/httpd/metadata/services.humanbrainproject.eu%2Foidc.provider
- create *.client files:
- for INDIGO IAM, copy the JSON from the JSON tab of the Client Configuration
- for HBP, ensure that you have the fields client_id, client_secret and response_type:
{ "client_id": "<client_id>", "client_secret": "<client_secret>", "redirect_uris": [ "https://oscloud-1.scc.kit.edu:5000/v3/auth/OS-FEDERATION/websso/oidc/redirect", "http://141.52.214.14:5000/v3/auth/OS-FEDERATION/websso/oidc/redirect", "http://oscloud-1.scc.kit.edu:5000/v3/auth/OS-FEDERATION/websso/oidc/redirect", "https://141.52.214.14:5000/v3/auth/OS-FEDERATION/websso/oidc/redirect" ], "client_name": "kit-openstack", "token_endpoint_auth_method": "client_secret_basic", "scope": "openid profile offline_access", "grant_types": [ "authorization_code" ], "response_types": [ "code" ] }
- the *.conf files can contain other parameters, like scope
$ cat /var/cache/httpd/metadata/services.humanbrainproject.eu\%2Foidc.conf
{
    "scope": "openid profile offline_access"
}
$ cat /var/cache/httpd/metadata/iam-test.indigo-datacloud.eu.conf
{
    "scope": "address phone openid email profile"
}
- make sure the files are owned by the apache user:
chown apache:apache /var/cache/httpd/metadata/*
HTTPD configuration
Edit the virtual host file /etc/httpd/conf.d/10-keystone_wsgi_main.conf:
OIDCMetadataDir /var/cache/httpd/metadata
OIDCProviderTokenEndpointAuth client_secret_basic
OIDCCryptoPassphrase <choose anything>
OIDCRedirectURI http://141.52.214.14:5000/v3/auth/OS-FEDERATION/websso/oidc/redirect
OIDCClaimPrefix "OIDC-"
OIDCSessionType server-cache

<Location ~ "/v3/auth/OS-FEDERATION/websso/oidc">
  OIDCDiscoverURL http://141.52.214.14:5000/v3/auth/OS-FEDERATION/websso/oidc/redirect?iss=https%3A%2F%2Fservices.humanbrainproject.eu%2Foidc
  AuthType openid-connect
  Require valid-user
  LogLevel debug
</Location>

<Location ~ "/v3/auth/OS-FEDERATION/websso/iam">
  OIDCDiscoverURL http://141.52.214.14:5000/v3/auth/OS-FEDERATION/websso/oidc/redirect?iss=https%3A%2F%2Fiam-test.indigo-datacloud.eu
  AuthType openid-connect
  Require valid-user
  LogLevel debug
</Location>

<Location ~ "/v3/OS-FEDERATION/identity_providers/hbp/protocols/oidc/auth">
  AuthType oauth20
  Require valid-user
  LogLevel debug
</Location>

<Location ~ "/v3/OS-FEDERATION/identity_providers/indigo-dc/protocols/iam/auth">
  AuthType oauth20
  Require valid-user
  LogLevel debug
</Location>
Keystone configuration
Edit the file /etc/keystone/keystone.conf:
[auth]
methods = external,password,token,oauth1,oidc,iam
oidc = keystone.auth.plugins.mapped.Mapped
iam = keystone.auth.plugins.mapped.Mapped
Similar to the HBP OIDC provider, create an OpenStack group, project, mapping, identity provider and federation protocol.
- openstack group create iam_group --description "INDIGO Data Cloud IAM users group"
- openstack project create iam --description "INDIGO Data Cloud"
- openstack role add _member_ --group iam_group --project iam
- create file iam_mapping.json
[ { "local": [ { "group": { "id": "026572266d35437e8ab9a6f4adaaed63" }, "user": { "domain": { "id": "default" }, "type": "ephemeral", "name": "{0}_iamID" } } ], "remote": [ { "type":"OIDC-sub" }, { "type": "HTTP_OIDC_ISS", "any_one_of": [ "https://iam-test.indigo-datacloud.eu/" ] } ] } ]
- openstack mapping create iam_mapping --rules iam_mapping.json
- openstack identity provider create indigo-dc --remote-id https://iam-test.indigo-datacloud.eu/
- openstack federation protocol create iam --identity-provider indigo-dc --mapping iam_mapping
- edit /etc/openstack-dashboard/local_settings
WEBSSO_ENABLED = True
WEBSSO_INITIAL_CHOICE = "credentials"
WEBSSO_CHOICES = (
    ("credentials", _("Keystone Credentials")),
    ("oidc", _("HBP OIDC")),
    ("iam", _("INDIGO Data Cloud")),
)
- service httpd restart
Network Configuration
We want to configure Neutron to create VLAN provider networks which can connect instances directly to external networks.
- Subnet reserved for instances: 141.52.220.64/26, vlan 350.
- Subnet of OpenStack compute&controller nodes: 141.52.214.0/24, vlan 859.
Configuring the controller node
- add vlan to the list of driver types and configure vlan ranges:
$ vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = vxlan,flat,vlan
[ml2_type_vlan]
network_vlan_ranges = datacenter
- set external_network_bridge to an empty value in /etc/neutron/l3-agent.ini, to be able to use provider external networks instead of bridge-based external networks:
$ vim /etc/neutron/l3-agent.ini
external_network_bridge =
- restart neutron
systemctl restart neutron-server neutron-l3-agent
- create networks (an external one and an internal one) and a router:
neutron net-create --shared hbp_private
neutron subnet-create --name hbp_private_subnet --gateway 10.1.0.1 --dns-nameserver 141.3.175.65 --dns-nameserver 141.3.175.66 hbp_private 10.1.0.0/16
neutron net-create hbp_public --router:external --provider:network_type vlan --provider:physical_network datacenter --provider:segmentation_id 350
neutron subnet-create --name hbp_public_subnet --enable_dhcp=False --allocation-pool=start=141.52.220.90,end=141.52.220.119 --gateway=141.52.220.65 hbp_public 141.52.220.64/26
neutron router-create hbp_router
neutron router-gateway-set hbp_router hbp_public
neutron router-interface-add hbp_router hbp_private_subnet
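To verify the resulting topology (hypothetical check commands, not part of the original procedure):
neutron net-list
neutron subnet-list
neutron router-port-list hbp_router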
On compute nodes and controller node
- create an external network bridge (br-ex), and associate a port (enp4s0f0) with it:
$ vim /etc/sysconfig/network-scripts/ifcfg-enp4s0f0
BOOTPROTO="none"
DEVICE="enp4s0f0"
HWADDR="0c:c4:7a:1f:03:fa"   # leave the MAC address unchanged
ONBOOT=yes
TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=br-ex
$ vim /etc/sysconfig/network-scripts/ifcfg-br-ex
BOOTPROTO="none"
IPADDR="141.52.214.14"   # use the IP of the compute/controller node you are configuring
NETMASK="255.255.255.0"
GATEWAY="141.52.214.1"
DEVICE="br-ex"
ONBOOT=yes
PEERDNS=yes
DNS1=141.52.8.18
DNS2=141.52.3.3
PEERROUTES=yes
TYPE=OVSBridge
DEVICETYPE=ovs
- restart networking and re-add the default route (otherwise you will be locked out of your ssh session):
systemctl restart network && ip r add default via 141.52.214.1
- configure the physical network in the OVS agent and map the bridges accordingly:
$ vim /etc/neutron/plugins/ml2/openvswitch_agent.ini
bridge_mappings = datacenter:br-ex
- restart ovs agent
systemctl restart neutron-openvswitch-agent
- to enable metadata access for instances connected directly to the provider external network (instead of going through the neutron router), do the following on all compute nodes and on the controller:
$ vim /etc/neutron/dhcp_agent.ini
enable_isolated_metadata = True
$ systemctl restart neutron-dhcp-agent
- for a tap port to be created under br-int for the dhcp process:
$ vim /etc/neutron/dhcp_agent.ini
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
$ systemctl restart neutron-dhcp-agent
- issue: only users with the admin role can attach VM ports to the external provider network created by the admin user; on all controller and compute nodes, edit the nova policy file and set the network:attach_external_network property to an empty string:
$ vim /etc/nova/policy.json
"network:attach_external_network": ""
Test the configuration
Test creating a VM and connecting it directly to the external provider network:
nova boot --flavor m1.tiny --image cirros test --nic net-name=hbp_public
ssh cirros@<ip>
Test creating a VM with a NIC in the internal network, and then attaching a floating IP from the external network:
nova boot --flavor m1.tiny --image cirros test_float --nic net-name=hbp_private
neutron floatingip-create hbp_public   # retain the id of the newly-created floating ip
neutron port-list                      # get the id of the port corresponding to the VM's NIC in the hbp_private network
neutron floatingip-associate <floating ip id> <port id>
ssh cirros@<floating ip>
Docker Integration
References:
- https://indigo-dc.gitbooks.io/openstack-nova-docker/content/docs/install.html
- https://wiki.openstack.org/wiki/Docker
- https://github.com/openstack/nova-docker
Installation
We perform the following operations on one of the compute nodes (oscloud-4.scc.kit.edu).
Install docker engine
- add the repo:
$ sudo tee /etc/yum.repos.d/docker.repo <<-EOF
[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/7
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg
EOF
- install docker-engine:
yum update; yum install docker-engine
- start the daemon:
service docker start
- check that docker is working:
docker run hello-world
Install docker driver for openstack
- clone git branch corresponding to your OpenStack version and install it:
git clone https://github.com/stackforge/nova-docker.git -b stable/mitaka
cd nova-docker/
python setup.py install
Configuration
Configure nova compute machine
- edit file /etc/nova/nova.conf:
[DEFAULT]
compute_driver = novadocker.virt.docker.DockerDriver
- create the following folder if it doesn't exist
mkdir /etc/nova/rootwrap.d
- add a file inside with the following content:
$ sudo tee /etc/nova/rootwrap.d/docker.filters <<-EOF
# nova-rootwrap command filters for setting up network in the docker driver
# This file should be owned by (and only-writeable by) the root user
[Filters]
# nova/virt/docker/driver.py: 'ln', '-sf', '/var/run/netns/.*'
ln: CommandFilter, /bin/ln, root
EOF
- restart nova compute service:
systemctl restart openstack-nova-compute
- in order for nova to communicate with docker over its local socket, add nova to the docker group and restart the compute service to pick up the change:
usermod -G docker nova
systemctl restart openstack-nova-compute
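A quick way to confirm that the nova user can reach the docker socket (a hedged check, not part of the original procedure):
sudo -u nova docker ps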
Configure image service
On the node that runs the glance service (oscloud-1.scc.kit.edu):
- edit file /etc/glance/glance-api.conf:
[DEFAULT]
container_formats = ami,ari,aki,bare,ovf,docker
- restart glance services:
systemctl restart openstack-glance-api openstack-glance-registry
Test the installation
docker pull nginx
docker save nginx | glance image-create --container-format=docker --disk-format=raw --name nginx
nova boot --flavor m1.small --image nginx nginxtest --nic net-name=hbp_public --availability-zone nova:lsdf-28-059.scc.kit.edu
curl http://<IP>
Scheduling on mixed libvirt-docker deployments
The availability zone needs to be specified on mixed deployments like ours, where some compute nodes support docker while others support qemu (mixing a libvirt hypervisor with a docker hypervisor on the _same_ host is not recommended).
To let OpenStack know which nodes support docker without specifying the availability zone every time we create a docker container, we rely on the Nova scheduler's ImagePropertiesFilter to place instances on suitable hosts, based on a custom image property that we set in advance for each image in glance.
- we set a custom property for each image (hypervisor_type):
glance image-update <id nginx image> --property hypervisor_type=docker
glance image-update <id cirros image> --property hypervisor_type=qemu
- make sure in /etc/nova/nova.conf that scheduler_default_filters contains ImagePropertiesFilter [12]
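For reference, a filter list containing ImagePropertiesFilter could look like the following (the surrounding filters are only an example of a typical default set and may differ in your deployment):
$ vim /etc/nova/nova.conf
scheduler_default_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter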
Remote access to Docker containers
To get inside your Docker containers, you might be tempted to use ssh. If you still want to do that after reading the piece at [13] explaining why running an SSH server inside a container is considered bad practice, go ahead and add an ssh server to your docker image. Otherwise, the information below describes ssh-less access to containers.
In a nutshell, we will use nsenter or docker-enter and the ssh authorized_keys command option. All the steps will be performed on the OpenStack node running nova-docker, unless specified otherwise.
- install nsenter
docker run --rm -v /usr/local/bin:/target jpetazzo/nsenter
- test nsenter (get the container name or ID with docker ps)
PID=$(docker inspect --format '{{.State.Pid}}' <container_name_or_ID>)
nsenter --target $PID --mount --uts --ipc --net --pid
- similarly, use docker-enter, which is a wrapper that takes the container name or id and enters it:
docker-enter <container_name_or_ID>
- or execute a command in the container directly with docker-enter:
docker-enter <container_name_or_ID> ls -la
All these operations need access to the host running the docker container. If we don't want to give users full access to the machine, we can use the command option in authorized_keys to allow only the execution of docker-enter:
- edit .ssh/authorized_keys on the host by prefixing a user's ssh key with the following options:
no-port-forwarding,no-X11-forwarding,no-agent-forwarding,command="docker-enter ${SSH_ORIGINAL_COMMAND}" ssh-rsa <key> user@host
- from their own machine, the user can then enter the container using the key above and start executing commands:
ssh root@oscloud-4.scc.kit.edu <container_name_or_ID>
- where oscloud-4.scc.kit.edu is the host running the docker containers and the nova-docker driver
- for OpenStack users, you can get the container ID by checking the ID in the OpenStack dashboard and adding the "nova-" prefix, e.g. nova-869f7797-c0ac-4d70-9b0e-3bd81172f8a3
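For example, deriving the container name from an instance created above (a sketch; the server name nginxtest and the ID shown are only illustrative):
openstack server show nginxtest -f value -c id
# e.g. 869f7797-c0ac-4d70-9b0e-3bd81172f8a3 -> container name nova-869f7797-c0ac-4d70-9b0e-3bd81172f8a3
docker-enter nova-869f7797-c0ac-4d70-9b0e-3bd81172f8a3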
Image Repository
Glance-S3 integration
We change the storage back-end of Glance to use the S3 storage at http://s3.data.kit.edu.
- create S3 keys for glance user on S3 admin node (141.52.220.79):
accesskeyctl -g openstack_glance
- the previous command will return an access key and a secret key for the glance user; save them for later use
- create S3 bucket for glance to store images into, e.g. with s3cmd (where the credentials have been plugged into config file .s3cfg.glance):
s3cmd -c .s3cfg.glance mb s3://openstack_glance
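A minimal .s3cfg.glance could look like this (a sketch, assuming the keys created in the previous step; adjust the host settings to your S3 endpoint):
[default]
access_key = <s3_access_key>
secret_key = <s3_secret_key>
host_base = s3.data.kit.edu
host_bucket = %(bucket)s.s3.data.kit.edu
use_https = True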
- on OpenStack controller node (oscloud-1.scc.kit.edu), update settings in /etc/glance/glance-api.conf to use s3 as default back-end, with newly created credentials:
stores = file,http,s3
default_store = s3
s3_store_host = s3.data.kit.edu
s3_store_access_key = <s3_access_key>
s3_store_secret_key = <s3_secret_key>
s3_store_bucket = openstack_glance
s3_store_create_bucket_on_put = false
s3_store_bucket_url_format = subdomain
# multipart upload settings
s3_store_large_object_size = 100
s3_store_large_object_chunk_size = 10
s3_store_thread_pools = 10
- restart glance services
systemctl restart openstack-glance-api openstack-glance-registry
- create image
wget http://cdimage.debian.org/cdimage/openstack/current/debian-8.6.1-openstack-amd64.qcow2
glance --debug image-create --name="Debian 8.6.1" --disk-format=qcow2 --container-format=bare --property architecture=x86_64 --property hypervisor_type=qemu --progress --file=debian-8.6.1-openstack-amd64.qcow2
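To confirm that the image data actually ends up in the S3 bucket (hypothetical check commands, not part of the original procedure):
glance image-list
s3cmd -c .s3cfg.glance ls s3://openstack_glance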
- if you need to migrate existing images to the new back-end, check the link (we chose to skip this step)