Cloud Foundry on OpenStack

Cloud Foundry on OpenStack

Introduction

Cloud Foundry works on a variety of IaaS including OpenStack. OpenStack and Cloud Foundry are the top two Cloud-based Open Source projects currently.

This blog highlights the steps to install Cloud Foundry on OpenStack.

Requirements

I used the following hardware/software/environment.

  1. A laptop/desktop to install MicroBOSH
  2. An OpenStack install with about 25 Virtual CPUs, 64-128GB of main memory, about 400GB of disk space, an internal and external network with at least 2 floating IPs. In this case I leveraged the Mirantis OpenStack express team edition.
  3. One of the VM instances on OpenStack will be used to install BOSH. This requires a floating IP and 40-80GB of disk space.
  4. Besides this VM instance, I needed an environment to create about a dozen VM instances on the internal network running Ubuntu trusty. The HAProxy VM, which is used to access the Cloud Foundry instance, counted for the second floating IP.

Installing MicroBOSH

On my Mac laptop I ran into rvm issues (had to reinstall rvm). I used Ruby version 2.1.5.

Install BOSH CLI with the following command.

gem install bosh_cli

Installing MicroBOSH is as simple as running the following command.

gem install bosh_cli_plugin_micro

Installing BOSH on a VM instance on OpenStack

I leveraged Mirantis OpenStack express online for the OpenStack environment. The screenshot below shows the resources available with the OpenStack install that will be used for the Cloud Foundry install.

CFOO_v1_html_m2961bed8

I created a security group and a key pair in the OpenStack environment.

In the OpenStack environment that I used, the internal net allocated IPs with the CIDR 192.168.111.0/24 address. We will see subsequently how I use some of these addresses and the range in the OpenStack manifest file used to install Cloud Foundry.

The external net had more than a dozen floating IPs although I really only needed two.

I created a security group with appropriate access as specified in the docs. I went ahead and provided access to ICMP, TCP and UDP Protocols. I created a key pair for ssh access to the VMs. I subsequently use the name of the key as provided on the OpenStack install and the path to the private key.

From the OpenStack install, I noted down the following and substituted them in the MicroBOSH manifest file below.

  • A floating IP that will be used by the VM instance on which I installed BOSH. It can be any of the floating IPs that has not been allocated (yet).

  • An internal address from the subnet IP address allocation pool of OpenStack internal network that has not been allocated (yet).

  • The internal network UUID

  • The authorization URL that looks something like this http://23.246.209.226:5000/v2.0.Substitute your Horizon IPv4 address.

  • The tenant and the username, which is admin in this case.

  • The password that is used for Horizon or API access that will be used as an API key.

  • The name of the key pair for ssh access to the VMs.

  • The name of the security group that was created.

I substituted the above values in addition to the path to the private key corresponding to the name of the key pair that you noted down in a file named manifest.yml as below, based on the sample provided.

I’ve provided some values just for illustration purposes. The first line with three dashes is an YML separator and is required. The tabbing also needs to be followed as below.

name:
microbosh

network:

  type: manual

  vip: 23.246.209.228 # Replace with a floating IP address

  ip:  192.168.111.228 # Replace with an address from the subnet IP address allocation pool of your OpenStack internal network

  cloud_properties:

    net_id: 60fdc0d9-6054-4b1d-8aa0-f02e58c662bc # Replace with your OpenStack internal network UUID

resources:

  persistent_disk: 80000

  cloud_properties:

     instance_type: m1.large

cloud:

  plugin: openstack

  properties:

    openstack:

    auth_url: http://23.246.209.226:5000/v2.0 #Replace with your OpenStack Identity API endpoint

    tenant: admin # Replace with OpenStack tenant name

    username: admin # Replace with OpenStack username

    api_key: XXX # Replace with your OpenStack password

    default_key_name: RagsMBP # OpenStack Keypair name

    private_key: id_rsa # Path to OpenStack Keypair private key

    default_security_groups: [all]

apply_spec:

  properties:

  director: {max_threads: 3}

  hm: {resurrector_enabled: true}

  ntp: [0.north-america.pool.ntp.org, 1.north-america.pool.ntp.org]

The resurrector_enabled property for the Health Manager for BOSH will resurrect VMs from the templates that will be used to deploy when set to true. So, for example if the NATS VM dies, BOSH is watching for it’s health and will spin up a VM with the exact same template that was used to construct it.

I ran the following command to set the deployment manifest for installing BOSH on the OpenStack install using MicroBOSH.

bosh micro deployment manifest.yml

The output is as below.

Deployment set to ‘/Users/rags/src/openstack-bosh/manifest.yml’

I downloaded the Ubuntu Stem Cell with the following command. Substitute the appropriate version based on availability.

bosh download public stemcell bosh-stemcell-2831-openstack-kvm-ubuntu-trusty-go_agent.tgz

Completion is shown below for illustration.

bosh-stemcell: 100%
|ooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo| 477.6MB 889.1KB/s Time: 00:09:10

Download complete

Stem Cells are Machine Images with BOSH agents and other software pre-installed.

I deployed this Stem Cell with the following command.

bosh micro deploy bosh-stemcell-2831-openstack-kvm-ubuntu-trusty-go_agent.tgz

I acknowledged the prompts as shown below. You can safely ignore the error that might be generated as shown
below.

No `bosh-deployments.yml` file found in current directory.

Conventionally, `bosh-deployments.yml` should be saved in /Users/rags/src.

Is /Users/rags/src/openstack-bosh a directory where you can save state? (type ‘yes’ to continue): yes

Deploying new micro BOSH instance manifest.yml to https://23.246.209.228:25555 (type ‘yes’ to continue): yes

Verifying stemcell…

File exists and readable

Verifying tarball…

Read tarball

Manifest exists

Stemcell image file

Stemcell properties

Stemcell info

————-

Name: bosh-openstack-kvm-ubuntu-trusty-go_agent

Version: 2831

Started deploy micro bosh

Started deploy micro bosh > Unpacking stemcell Done (00:00:08)

Started deploy micro bosh > Uploading stemcell Done (00:04:08)

Started deploy micro bosh > Creating VM from 056bd8cd-90e7-431e-99be-e6d0e75ebafc Done (00:00:41)

Started deploy micro bosh > Waiting for the agent Done (00:01:39)

Started deploy micro bosh > Updating persistent disk

Started deploy micro bosh > Create disk Done (00:00:08)

Started deploy micro bosh > Mount disk Done (00:00:08)

Done deploy micro bosh > Updating persistent disk (00:00:24)

Started deploy micro bosh > Stopping agent services Done (00:00:04)

Started deploy micro bosh > Applying micro BOSH spec Done (00:00:21)

Started deploy micro bosh > Starting agent services Done (00:00:02)

Started deploy micro bosh > Waiting for the director Done (00:00:12)

Done deploy micro bosh (00:07:39)log writing failed. can’t be called from trap context

Deployed manifest.yml to https://23.246.209.228:25555, took 00:07:39 to complete

/Users/rags/.rvm/gems/ruby-2.1.5/gems/net-ssh-2.9.2/lib/net/ssh/ruby_compat.rb:30:in `select’: Bad file descriptor (Errno::EBADF)

You can ignore the Errno::EBADF and the associated stack error.

I verified that BOSH is installed with the following command.

bosh target https://23.246.209.228:25555

Which should yield the following output. Use admin as password.

Target set to microbosh

Your username: admin

Enter password: *****

Logged in as admin

You can get more information with the following command.

bosh status

Which should yield output that looks something like below.

Config /Users/rags/.bosh_config

Director

Name microbosh

URL https://23.246.209.228:25555

Version 1.2831.0 (00000000)

User admin

UUID 2eee9a10-34c6-4d01-8133-8e9e7b2aaf29

CPI openstack

dns enabled (domain_name: microbosh)

compiled_package_cache disabled

snapshots disabled

Deployment not set

The installed BOSH VM is shown in the screenshot below.

CFOO_v1_html_m19526edc

Having installed BOSH I was now ready to install Cloud Foundry. I plugged in the UUID of the director into a manifest file for BOSH and set the deployment to the manifest file and finally deployed it.

Install Cloud Foundry

Create a manifest file called minimal-openstack.yml, which looks like below and is based on the AWS sample.

Replace the password for Cloud Foundry below with the appropriate password. Make sure the internal network gateway is specified and the allocated VMs, etc. are in the range of 2-20 and is reserved. You could also adjust the flavors to suit the installation. Most of the VMs that are created are m1.small except for the DEA, which is an m1.xlarge.

name: cf

  director_uuid: 2eee9a10-34c6-4d01-8133-8e9e7b2aaf29 #REPLACE with Director UUID

releases:

– {name: cf, version: 197}

networks:

– name: cf_private

  type: manual

    subnets:

    – range: 192.168.111.0/24 #REPLACE 10.0.16.0/24

    gateway: 192.168.111.1 #REPLACE 10.0.16.1

    dns: [8.8.8.8]

    reserved: [192.168.111.2 – 192.168.111.20] #REPLACE [“10.0.16.2 – 10.0.16.3”]

    static: [192.168.111.100 – 192.168.111.105] #REPLACE [“10.0.16.100 – 10.0.16.105”]

   cloud_properties: net_id : 60fdc0d9-6054-4b1d-8aa0-f02e58c662bc #REPLACE_WITH_PRIVATE_SUBNET_ID

– name: elastic

  type: vip

  cloud_properties: {}

resource_pools:

– name: small_z1

  network: cf_private

  stemcell:

    name: bosh-openstack-kvm-ubuntu-trusty-go_agent #REPLACE

    version: latest

  cloud_properties:

    availability_zone: nova #REPLACE_WITH_AZ

    instance_type: m1.small

– name: large_z1

  network: cf_private

  stemcell:

    name: bosh-openstack-kvm-ubuntu-trusty-go_agent #REPLACE

    version: latest

  cloud_properties:

    instance_type: m1.xlarge

compilation:

  workers: 6

  network: cf_private

  reuse_compilation_vms: true

  cloud_properties:

    availability_zone: nova #REPLACE_WITH_AZ

    instance_type: m1.medium #REPLACE

update:

  canaries: 1

  max_in_flight: 1

  serial: false

  canary_watch_time: 30000-600000

  update_watch_time: 5000-600000

jobs:

– name: nats_z1

  instances: 1

  resource_pool: small_z1

  templates:

  – {name: nats, release: cf}

  – {name: nats_stream_forwarder, release: cf}

  – {name: metron_agent, release: cf}

  networks:

  – name: cf_private

  static_ips: [192.168.111.103]

– name: etcd_z1

  instances: 1

  resource_pool: small_z1

  templates:

  – {name: etcd, release: cf}

  – {name: etcd_metrics_server, release: cf}

  – {name: metron_agent, release: cf}

  networks:

  – name: cf_private

  static_ips: [192.168.111.104]

  properties:

    etcd_metrics_server:

    nats:

      machines: [192.168.111.103]

      password: REPLACE_PASSWORD

      username: nats

– name: nfs_z1

  instances: 1

  persistent_disk: 102400

  resource_pool: small_z1

  templates:

  – {name: debian_nfs_server, release: cf}

  – {name: metron_agent, release: cf}

  networks:

  – name: cf_private

  static_ips: [192.168.111.105]

– name: postgres_z1

  instances: 1

  persistent_disk: 1024

  resource_pool: small_z1

  templates:

  – {name: postgres, release: cf}

  – {name: metron_agent, release: cf}

  networks:

  – name: cf_private

  static_ips: [192.168.111.101]

  update:

    serial: true

– name: api_z1

  instances: 1

  resource_pool: small_z1

  templates:

  – {name: cloud_controller_ng, release: cf}

  – {name: cloud_controller_worker, release: cf}

  – {name: cloud_controller_clock, release: cf}

  – {name: metron_agent, release: cf}

  – {name: nfs_mounter, release: cf}

  networks:

  – name: cf_private

  properties:

    nfs_server:

      address: 192.168.111.105

      allow_from_entries: [192.168.111.0/24]

– name: ha_proxy_z1

  instances: 1

  resource_pool: small_z1

  templates:

  – {name: haproxy, release: cf}

  – {name: metron_agent, release: cf}

  networks:

  – name: elastic

  static_ips: [23.246.209.250] #REPLACE_WITH_ELASTIC_IP]

  – name: cf_private

    default: [gateway, dns]

  properties:

    ha_proxy:

      ssl_pem: |

        —–BEGIN CERTIFICATE—–

        MIIB8zCCAVwCCQCqpCViv8Vd9TANBgkqhkiG9w0BAQUFADA+MQswCQYDVQQGEwJB

        VTETMBEGA1UECBMKU29tZS1TdGF0ZTEMMAoGA1UEChQDKi4qMQwwCgYDVQQDFAMq

        LiowHhcNMTUwMTI3MTkyMDM4WhcNMTUwMjI2MTkyMDM4WjA+MQswCQYDVQQGEwJB

        VTETMBEGA1UECBMKU29tZS1TdGF0ZTEMMAoGA1UEChQDKi4qMQwwCgYDVQQDFAMq

        LiowgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBAPcJyEfEQ5CxO9c8fxUzF0LN

        rkUGKlzHbu7INQ3TPhf5cHXdGg5patJrgDKhQEPbeqwnlMsVtq7si+VueeskaIb2

        eYpMKtLGUpBuK6zLR7Dqg56xVd20sTbcj8MpRzpHkoWBR+mA166LUw4xq1Gs7wcY

        pOs05Fez6jM9pIk7Sq1lAgMBAAEwDQYJKoZIhvcNAQEFBQADgYEAw0tszOhWGbJ5

        t3HzpXFY+2GhytiFEGiKk41hBAidxpziogbrBb6JW4s5r2BLKSRxrFuUT0jBnmbS

        psD2eOyQFXJy1ZDrIKr3hUWEkwj20ACcU3Co1AJjKqU4n5lANmD2GOAgzwwg47o3

        4xxFrgWOjLbpzFspzIVCGlQol4TpSF4=

        —–END CERTIFICATE—–

        —–BEGIN RSA PRIVATE KEY—–

        MIICXAIBAAKBgQD3CchHxEOQsTvXPH8VMxdCza5FBipcx27uyDUN0z4X+XB13RoO

        aWrSa4AyoUBD23qsJ5TLFbau7IvlbnnrJGiG9nmKTCrSxlKQbiusy0ew6oOesVXd

        tLE23I/DKUc6R5KFgUfpgNeui1MOMatRrO8HGKTrNORXs+ozPaSJO0qtZQIDAQAB

        AoGBAK+LCgDFXGWzK5y05nFADuVvlsiBqxSmuxN+vQSH+XW70MhQRzW6fyfrL/vK

        TgpqKe+vaLIvSdNfT8HHEWegRY1MptHAUkWNQ3Grz0uhBZZzxjujxNVPHNNbKfd+

        jyidZXRQA3Q6Tf+anCs5xrF99bTnya33X46OIC33UnghHqj5AkEA/lpMp25FWcnG

        EVW38AJ6GRdPUViyfsyCzJRdTdTri4mrDVc2+pkpasxZHd9SPb1mVWKD4nBTbx/f

        hET8J1+SiwJBAPijWxzZGuZgquEc2aTluKyRFsE65TMiEgepwyav70H/oXveSq97

        c01W/k9dKIQex3cQ/GtvRH+jZWUAWCSabc8CQAWskgUyKo3kOGzukpniFEM3B+fy

        qJi3iztxG9u+ojqMqao0hd91Rz1ArcRC1RzXes7w0axdgR77gQr8Vvux4B0CQDAt

        nysQ2oiHdLUYHQg5xzYRCyK4Ic9tq6a2e20UrDzSptzUrw4f0rDKyY5hU8d+G1J0

        BSVgMxq0c6JFlc7J6bsCQAn5VJ8cPB8hBSSaLPv6uY79JFbXzZlhgzdetdTz5DVo

        Hvgz+e0z3aq6haPesysurpYTIJ9DU30iCCYahXbjNTU=

        —–END RSA PRIVATE KEY—–

    router:

    servers:

      z1: [192.168.111.102]

– name: hm9000_z1

  instances: 1

  resource_pool: small_z1

  templates:

  – {name: hm9000, release: cf}

  – {name: metron_agent, release: cf}

  networks:

  – name: cf_private

– name: loggregator_z1

  instances: 1

  resource_pool: small_z1

  templates:

  – {name: doppler, release: cf}

  networks:

  – name: cf_private

  properties:

    doppler: {zone: z1}

    doppler_endpoint:

       shared_secret: PASSWORD #REPLACE

– name: loggregator_trafficcontroller_z1

  instances: 1

  resource_pool: small_z1

  templates:

  – {name: loggregator_trafficcontroller, release: cf}

  – {name: metron_agent, release: cf}

  networks:

  – name: cf_private

  properties:

    traffic_controller: {zone: z1}

– name: login_z1

  instances: 1

  resource_pool: small_z1

  templates:

  – {name: login, release: cf}

  – {name: uaa, release: cf}

  – {name: metron_agent, release: cf}

  networks:

  – name: cf_private

  properties:

    login:

      catalina_opts: -Xmx768m -XX:MaxPermSize=256m

    uaa:

      port: 8081

      admin:

        client_secret: PASSWORD

      batch:

        password: PASSWORD

        username: batch_user

      cc:

        client_secret: PASSWORD

      scim:

        userids_enabled: true

        users:

        – admin|PASSWORD|scim.write,scim.read,openid,cloud_controller.admin,doppler.firehose

    uaadb:

      address: 192.168.111.101

      databases:

      – {name: uaadb, tag: uaa}

      db_scheme: postgresql

      port: 5524

      roles:

      – {name: uaaadmin, password: PASSWORD, tag: admin}

– name: router_z1

  instances: 1

  resource_pool: small_z1

  templates:

  – {name: gorouter, release: cf}

  – {name: metron_agent, release: cf}

  networks:

  – name: cf_private

  static_ips: [192.168.111.102]

  properties:

    dropsonde: {enabled: true}

– name: runner_z1

  instances: 1

  resource_pool: large_z1

  templates:

  – {name: dea_next, release: cf}

  – {name: dea_logging_agent, release: cf}

  – {name: metron_agent, release: cf}

  networks:

  – name: cf_private

  properties:

    dea_next: {zone: z1}

– name: stats_z1

  instances: 1

  resource_pool: small_z1

  templates:

  – {name: collector, release: cf}

  – {name: metron_agent, release: cf}

  networks:

  – name: cf_private

  properties:

    collector: {deployment_name: CF}

properties:

  networks: {apps: cf_private}

  app_domains: [23.246.209.250.xip.io] #REPLACE_WITH_SYSTEM_DOMAIN]

  cc:

    billing_event_writing_enabled: false

    bulk_api_password: PASSWORD

    db_encryption_key: PASSWORD

    default_running_security_groups: [public_networks, dns]

    default_staging_security_groups: [public_networks, dns]

    install_buildpacks:

    – {name: java_buildpack, package: buildpack_java}

    – {name: ruby_buildpack, package: buildpack_ruby}

    – {name: nodejs_buildpack, package: buildpack_nodejs}

    – {name: go_buildpack, package: buildpack_go}

    – {name: python_buildpack, package: buildpack_python}

    – {name: php_buildpack, package: buildpack_php}

    internal_api_password: PASSWORD

    quota_definitions:

      default:

      memory_limit: 102400

      non_basic_services_allowed: true

      total_routes: 1000

      total_services: -1

      security_group_definitions:

      – name: public_networks

        rules:

        – {destination: 0.0.0.0-9.255.255.255, protocol: all}

        – {destination: 11.0.0.0-169.253.255.255, protocol: all}

        – {destination: 169.255.0.0-172.15.255.255, protocol: all}

        – {destination: 172.32.0.0-192.167.255.255, protocol: all}

        – {destination: 192.169.0.0-255.255.255.255, protocol: all}

        – name: dns

          rules:

          – {destination: 0.0.0.0/0, ports: ’53’, protocol: tcp}

          – {destination: 0.0.0.0/0, ports: ’53’, protocol: udp}

        srv_api_uri: http://api.23.246.209.250.xip.io #REPLACE_WITH_SYSTEM_DOMAIN

        staging_upload_password: PASSWORD

        staging_upload_user: staging_upload_user

  ccdb:

    address: 192.168.111.101

    databases:

    – {name: ccdb, tag: cc}

    db_scheme: postgres

    port: 5524

    roles:

    – {name: ccadmin, password: PASSWORD, tag: admin}

  databases:

    databases:

    – {name: ccdb, tag: cc, citext: true}

    – {name: uaadb, tag: uaa, citext: true}

    port: 5524

    roles:

    – {name: ccadmin, password: PASSWORD, tag: admin}

    – {name: uaaadmin, password: PASSWORD, tag: admin}

  dea_next:

    advertise_interval_in_seconds: 5

    heartbeat_interval_in_seconds: 10

    memory_mb: 33996

  description: Cloud Foundry sponsored by Pivotal

  domain: 23.246.209.250.xip.io #REPLACE_WITH_SYSTEM_DOMAIN

  etcd:

    machines: [192.168.111.104]

  hm9000:

    url: http://hm9000.23.246.209.250.xip.io #REPLACE_WITH_SYSTEM_DOMAIN

  logger_endpoint:

    port: 4443

  loggregator_endpoint:

    shared_secret: PASSWORD

  login:

    protocol: http

  metron_agent:

    zone: z1

  metron_endpoint:

    shared_secret: PASSWORD

  nats:

    machines: [192.168.111.103]

    password: PASSWORD

    port: 4222

    user: nats

  nfs_server:

    address: 192.168.111.105

    allow_from_entries: [192.168.111.0/24]

  ssl:

    skip_cert_verify: true

  system_domain: 23.246.209.250.xip.io #REPLACE_WITH_SYSTEM_DOMAIN

  system_domain_organization: default_organization

  uaa:

    clients:

      cc-service-dashboards:

        authorities: clients.read,clients.write,clients.admin

        authorized-grant-types: client_credentials

        scope: openid,cloud_controller_service_permissions.read

        secret: PASSWORD

      cloud_controller_username_lookup:

        secret:PASSWORD

      doppler:

        authorities: uaa.resource

        secret: PASSWORD

      login:

        authorities: oauth.login,scim.write,clients.read,notifications.write,critical_notifications.write,emails.write,scim.userids,password.write

        authorized-grant-types: authorization_code,client_credentials,refresh_token

        redirect-uri: http://login.23.246.209.250.xip.io

        scope: openid,oauth.approvals

        secret: PASSWORD

      servicesmgmt:

        authorities: uaa.resource,oauth.service,clients.read,clients.write,clients.secret

        authorized-grant-types: authorization_code,client_credentials,password,implicit

        autoapprove: true

        redirect-uri: http://servicesmgmt.23.246.209.250.xip.io/auth/cloudfoundry/callback
#REPLACE_WITH_SYSTEM_DOMAIN

        scope: openid,cloud_controller.read,cloud_controller.write

        secret: PASSWORD

    jwt:

      signing_key: |

        —–BEGIN RSA PRIVATE KEY—–

        MIICXAIBAAKBgQDHFr+KICms+tuT1OXJwhCUmR2dKVy7psa8xzElSyzqx7oJyfJ1

        JZyOzToj9T5SfTIq396agbHJWVfYphNahvZ/7uMXqHxf+ZH9BL1gk9Y6kCnbM5R6

        0gfwjyW1/dQPjOzn9N394zd2FJoFHwdq9Qs0wBugspULZVNRxq7veq/fzwIDAQAB

        AoGBAJ8dRTQFhIllbHx4GLbpTQsWXJ6w4hZvskJKCLM/o8R4n+0W45pQ1xEiYKdA

        Z/DRcnjltylRImBD8XuLL8iYOQSZXNMb1h3g5/UGbUXLmCgQLOUUlnYt34QOQm+0

        KvUqfMSFBbKMsYBAoQmNdTHBaz3dZa8ON9hh/f5TT8u0OWNRAkEA5opzsIXv+52J

        duc1VGyX3SwlxiE2dStW8wZqGiuLH142n6MKnkLU4ctNLiclw6BZePXFZYIK+AkE

        xQ+k16je5QJBAN0TIKMPWIbbHVr5rkdUqOyezlFFWYOwnMmw/BKa1d3zp54VP/P8

        +5aQ2d4sMoKEOfdWH7UqMe3FszfYFvSu5KMCQFMYeFaaEEP7Jn8rGzfQ5HQd44ek

        lQJqmq6CE2BXbY/i34FuvPcKU70HEEygY6Y9d8J3o6zQ0K9SYNu+pcXt4lkCQA3h

        jJQQe5uEGJTExqed7jllQ0khFJzLMx0K6tj0NeeIzAaGCQz13oo2sCdeGRHO4aDh

        HH6Qlq/6UOV5wP8+GAcCQFgRCcB+hrje8hfEEefHcFpyKH+5g1Eu1k0mLrxK2zd+

        4SlotYRHgPCEubokb2S1zfZDWIXW3HmggnGgM949TlY=

       —–END RSA PRIVATE KEY—–

      verification_key: |

        —–BEGIN PUBLIC KEY—–

        MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDHFr+KICms+tuT1OXJwhCUmR2d

        KVy7psa8xzElSyzqx7oJyfJ1JZyOzToj9T5SfTIq396agbHJWVfYphNahvZ/7uMX

        qHxf+ZH9BL1gk9Y6kCnbM5R60gfwjyW1/dQPjOzn9N394zd2FJoFHwdq9Qs0wBug

        spULZVNRxq7veq/fzwIDAQAB

        —–END PUBLIC KEY—–

  no_ssl: true

  url: http://uaa.23.246.209.250.xip.io #REPLACE_WITH_SYSTEM_DOMAIN

Verify that there are no Stem Cells installed (yet) for BOSH.

bosh stemcells

Which should yield the following output.

No stemcells

Install the Stem Cells again using the following command.

bosh upload stemcell bosh-stemcell-2831-openstack-kvm-ubuntu-trusty-go_agent.tgz

Which should yield the following output.

Verifying stemcell…

File exists and readable

Verifying tarball…

Read tarball

Manifest exists

Stemcell image file

Stemcell properties

Stemcell info

————-

Name: bosh-openstack-kvm-ubuntu-trusty-go_agent

Version: 2831

Checking if stemcell already exists…

No

Uploading stemcell…

bosh-stemcell: 100%
|ooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo| 477.6MB 1.3MB/s Time: 00:06:20

Director task

Started update stemcell

Started update stemcell > Extracting stemcell archive. Done (00:00:04)

Started update stemcell > Verifying stemcell manifest. Done (00:00:00)

Started update stemcell > Checking if this stemcell already exists. Done
(00:00:00)

Started update stemcell > Uploading stemcell
bosh-openstack-kvm-ubuntu-trusty-go_agent/2831 to the cloud. Done (00:00:19)

Started update stemcell > Save stemcell
bosh-openstack-kvm-ubuntu-trusty-go_agent/2831 (2bca513a-3735-4e4a-8fbb-69cb3d201e2a). Done (00:00:00)

Done update stemcell (00:00:23)

Task 1 done

Started 2015-02-05 20:14:27 UTC

Finished 2015-02-05 20:14:50 UTC

Duration 00:00:23

Stemcell uploaded and created.

Git clone the Cloud Foundry releases as below.

git clone https://github.com/cloudfoundry/cf-release.git

We will upload the latest Cloud Foundry release based on the releases subdirectory with the following command. Release 197 is the latest in that subdirectory.

bosh upload release /Users/rags/src/openstack-bosh/cf-release/releases/cf-197.yml

The command above will create the release tarball and upload it OR you can do the same based on the following command based on BOSH docs.

bosh upload release https://bosh.io/d/github.com/cloudfoundry/cf-release?v=197

After a few retries, the release was finally uploaded based on the output below. I’ve included only the final part of the output.

Director task

Started extracting release > Extracting release. Done (00:00:40)

Started verifying manifest >Verifying manifest. Done (00:00:00)

Started resolving package dependencies > Resolving package dependencies. Done (00:00:00)

Started creating new packages

Started creating new packages > hm9000/ce5b3ae4b0fa4ece6c5e741fd9b675751e78eb73. Done (00:00:01)

Started creating new packages > uaa/263a72b91dfb6e4e9a2983e50694b5536e81c0bb. Done (00:00:04)

Started creating new packages > haproxy/630ad6d6e1d3cab4547ce104f3019b483f354613. Done (00:00:01)

Started creating new packages > loggregator_trafficcontroller/804c57c396af3a9b4484af58cf41a08a66d8d936. Done (00:00:01)

Started creating new packages >
buildpack_python/759fe70bfa3278668f142156e5688d5638597d5e. Done (00:00:02)

Started creating new packages >
rootfs_lucid64/f9c5405a71198038378ef7fe45b75f1e1f0aa634. Done (00:00:04)

Started creating new packages > sqlite/af44d44e58fffd069459cb63f6fcc37c6326e370. Done (00:00:01)

Started creating new packages > buildpack_php/6ee814b62c4f7a587fdb75f0aeee6775cbb95690. Done (00:00:10)

Started creating new packages > buildpack_java_offline/b82592e53483bcfcea2692aff9fa351c9fb69f12. Done (00:00:04)

Started creating new packages > acceptance-tests/a9fa0a313c165729b0dde68be7112b392b02b141. Done (00:00:00)

Started creating new packages >
buildpack_nodejs/d243df46ac9056320914ca1d6843e112b309720d. Done (00:00:08)

Started creating new packages > buildpack_go/a494a270d015bca2ff18e5cd919f858849cb8d43. Done
(00:00:18)

Started creating new packages > postgres/b63fe0176a93609bd4ba44751ea490a3ee0f646c. Done (00:00:01)

Started creating new packages > golang1.3/e4b65bcb478d9bea1f9c92042346539713551a4a. Done (00:00:01)

Started creating new packages > smoke-tests/d1aaf8be8786ee2ee001ce0a30d154c268f770fc. Done (00:00:00)

Started creating new packages > ruby-2.1.4/5a4612011cb6b8338d384acc7802367ae5e11003. Done (00:00:00)

Started creating new packages > login/f2f60e4ae26ec74ddb2b6ae50aefe47517267fab. Done (00:00:07)

Started creating new packages > golang/aa5f90f06ada376085414bfc0c56c8cd67abba9c. Done (00:00:06)

Started creating new packages > mruby/cd102a7fe440fd9eaeee99c6bc460b06884cbda6. Done (00:00:03)

Started creating new packages > etcd_metrics_server/64efbbfb5761d09a24dad21ecfebd8824b99d433. Done (00:00:01)

Started creating new packages > etcd/44df7612404c5b2ecc1f167126b9e0b20481f79d. Done (00:00:02)

Started creating new packages >
metron_agent/fe9066813bc8f9e641433c1eb9114c561cf6aa40. Done (00:00:02)

Started creating new packages > nginx_newrelic_plugin/92f2c6fb3f807f030d989c52cd1de445eba3f296. Done (00:00:00)

Started creating new packages > dea_next/5ea8a66d9595246ecc049df72168a49715cd4019. Done (00:00:00)

Started creating new packages > buildpack_java/cb9735380ce491b081024de9ef9e981dfb5e1bbf. Done (00:00:00)

Started creating new packages > gnatsd/a0d6f5d3264aa8ecadb52d3bfa04540636800820. Done (00:00:01)

Started creating new packages > mysqlclient/8b5d9ce287341048377997a9b3fe4ff3e6a1c68f. Done (00:00:00)

Started creating new packages > nats/cc6bda829a77ab2321c0c216aa9107aca92c3b1a. Done (00:00:00)

Started creating new packages > buildpack_cache/4ced0bc62f12dcaa79121718ca3525253ede33b5. Done (00:00:02)

Started creating new packages > libpq/49cc7477fcf9a3fef7a1f61e1494b32288587ed8. Done (00:00:01)

Started creating new packages > nginx/c916c10937c83a8be507d3100133101eb403c826. Done (00:00:00)

Started creating new packages > collector/9fa967f02b3dacc621369babb1a5e0b7940a9c80. Done (00:00:00)

Started creating new packages > cli/ce4c111a383538f658e40bf3411fad51d7a5ea29. Done (00:00:00)

Started creating new packages > doppler/024dcacf950b7e12250bc86a1a015a41dd18fbad. Done (00:00:00)

Started creating new packages > gorouter/c74a7b6edf8722d9e1f84b6d3af5985cf621e730. Done (00:00:01)

Started creating new packages > debian_nfs_server/aac05f22582b2f9faa6840da056084ed15772594. Done (00:00:00)

Started creating new packages > cloud_controller_ng/d456e9796a45255a9c680025a9c265703a969201. Done (00:00:00)

Started creating new packages > common/43595236d1ce5f9a6120198108c226c07ab17012. Done (00:00:00)

Started creating new packages > dea_logging_agent/bcd5bb7f0fab180231251b394d7ebfbc962dd4db. Done (00:00:01)

Started creating new packages > buildpack_ruby/bfcad10207e5296f4b647bc5e41a67d4c3b46d77. Done (00:00:14)

Started creating new packages > warden/7d6d68c3f52d9a0925171f879e953b352bbf1325. Done (00:00:02)

Done creating new packages (00:01:39)

Started creating new jobs

Started creating new jobs > hm9000/f9a5be2966ce849ed0e600007d26cbd4339e530. Done (00:00:03)

Started creating new jobs > uaa/9617a6fc1a23bbf8ae92afcfd4022130423e3c43. Done (00:00:00)

Started creating new jobs > haproxy/065430834b344d0ad3f3b1fcede41b40ffa73bc1. Done (00:00:00)

Started creating new jobs > loggregator_trafficcontroller/15c4c206ea00c79c0421e7c859ea14e356c33bd4. Done (00:00:00)

Started creating new jobs > cloud_controller_clock/0c2f409b7cb6f53aebfec7281287a72049455366. Done (00:00:00)

Started creating new jobs > nfs_mounter/e597e616003791ea243cf1d49880a90088c1129e. Done (00:00:00)

Started creating new jobs > acceptance-tests/32ef38f30a8e9b1684075ec4536f3e8861557add. Done
(00:00:00)

Started creating new jobs > cloud_controller_worker/0227ac611c3a924923d5f0afae8468101de20a89.
Done (00:00:00)

Started creating new jobs > postgres/c7870a2b525bf3ec6d77c7a85caca1e094d188b0. Done (00:00:00)

Started creating new jobs > smoke-tests/293ea288da3ba64d548272197a5c984629dcf1b9. Done (00:00:00)

Started creating new jobs > login/04e3b07d6252f84ee1edc65608c7cd3f0f7cc501. Done (00:00:01)

Started creating new jobs > etcd_metrics_server/bdf9fa22da62d0b08c4d867c3a1285fc355290ef. Done (00:00:00)

Started creating new jobs > etcd/79b92026e3dbfa40439693f0293f50910125ceb6. Done (00:00:00)

Started creating new jobs > metron_agent/f56b50b44594442cddf576028b10bc9e079d5ccf. Done (00:00:00)

Started creating new jobs > dea_next/2b81e8aae1f9f2bcd41706e337eebf91ff891bac. Done (00:00:00)

Started creating new jobs > nats/e010c50267d0a477e741172582610c6f0584da8b. Done (00:00:00)

Started creating new jobs > collector/43b67fee22ef25f6e8958d92035bcd4a499bb975. Done (00:00:00)

Started creating new jobs > doppler/312b1137f4e4aa085de54f57f43ee502f4e65d47. Done (00:00:00)

Started creating new jobs > gorouter/f4ad08821bbefbf6c83e6fa50eb0f752bc516ed5. Done (00:00:00)

Started creating new jobs > debian_nfs_server/c9138cb0ea7921872f9fcd55a465c90335fb051f. Done
(00:00:00)

Started creating new jobs > cloud_controller_ng/4cd2825f9c760ec1d904798b4222e77b64238926. Done (00:00:00)

Started creating new jobs > dea_logging_agent/62846e802ab7f25d41a6a49a77e6913e74e0cf28. Done (00:00:00)

Started creating new jobs > nats_stream_forwarder/0f7a5da9369b2baf5a46c2a9af4c75f14d09afd3. Done (00:00:01)

Done creating new jobs (00:00:05)

Started release has been created > cf/197. Done (00:00:00)

Task 3 done

Started 2015-02-06 06:19:38 UTC

Finished 2015-02-06 06:22:02 UTC

Duration 00:02:24

Release uploaded

Deploy using BOSH. Set the deployment as below.

bosh deployment minimal-openstack.yml

Deploy with the following command.

bosh deploy

If you monitor the instances of VMs on the OpenStack install you should be able to see the creation of
worker VMs as below.

CFOO_v1_html_m27f3bf03

Followed by creation of the Cloud Foundry components in different VMs as below.

CFOO_v1_html_m7f6e9210

The CLI output looks something like below.

Processing deployment manifest

——————————

Getting deployment properties from director…

Compiling deployment manifest…

Please review all changes carefully

Deploying

———

Deployment name: minimal-openstack.yml

Director name: microbosh

Are you sure you want to deploy? (type ‘yes’ to continue): yes

Director task

Started preparing deployment

Started preparing deployment > Binding deployment. Done (00:00:00)

Started preparing deployment > Binding releases. Done (00:00:00)

Started preparing deployment > Binding existing deployment. Done (00:00:00)

Started preparing deployment > Binding resource pools. Done (00:00:00)

Started preparing deployment > Binding stemcells. Done (00:00:00)

Started preparing deployment > Binding templates. Done (00:00:00)

Started preparing deployment > Binding properties. Done (00:00:00)

Started preparing deployment > Binding unallocated VMs. Done (00:00:00)

Started preparing deployment > Binding instance networks. Done (00:00:00)

Done preparing deployment (00:00:00)

Started preparing package compilation > Finding packages to compile. Done (00:00:00)

Started preparing dns > Binding DNS. Done (00:00:00)

Started deleting unneeded vms

Started deleting unneeded vms > 72fe08b2-dc5a-44c8-8842-6b839e1e28a7

Started deleting unneeded vms > c443dbbd-742d-41f7-9520-305759211801. Done (00:00:06)

Done deleting unneeded vms > 72fe08b2-dc5a-44c8-8842-6b839e1e28a7 (00:00:06)

Done deleting unneeded vms (00:00:06)

Started creating bound missing vms

Started creating bound missing vms > small_z1/0

Started creating bound missing vms > small_z1/2

Started creating bound missing vms > small_z1/1. Done (00:01:09)

Started creating bound missing vms > small_z1/3

Done creating bound missing vms > small_z1/0 (00:01:39)

Started creating bound missing vms > small_z1/4

Done creating bound missing vms > small_z1/2 (00:01:40)

Started creating bound missing vms > small_z1/5

Done creating bound missing vms > small_z1/3 (00:01:02)

Started creating bound missing vms > small_z1/6

Done creating bound missing vms > small_z1/4 (00:01:06)

Started creating bound missing vms > small_z1/7

Done creating bound missing vms > small_z1/5 (00:01:10)

Started creating bound missing vms > small_z1/8

Done creating bound missing vms > small_z1/6 (00:01:06)

Started creating bound missing vms > small_z1/9

Done creating bound missing vms > small_z1/7 (00:01:07)

Started creating bound missing vms > large_z1/0

Done creating bound missing vms > small_z1/8 (00:01:08)

Done creating bound missing vms > small_z1/9 (00:01:03)

Done creating bound missing vms > large_z1/0 (00:01:11)

Done creating bound missing vms (00:05:03)

Started binding instance vms

Started binding instance vms > nats_z1/0

Started binding instance vms > etcd_z1/0

Started
binding instance vms > nfs_z1/0

Done binding instance vms > nats_z1/0 (00:00:00)

Started binding instance vms > postgres_z1/0

Done binding instance vms > etcd_z1/0 (00:00:00)

Started binding instance vms > api_z1/0

Done binding instance vms > nfs_z1/0 (00:00:00)

Started binding instance vms > ha_proxy_z1/0. Done (00:00:00)

Started binding instance vms > hm9000_z1/0

Done binding instance vms > api_z1/0 (00:00:00)

Started binding instance vms > loggregator_z1/0. Done (00:00:00)

Started binding instance vms > loggregator_trafficcontroller_z1/0

Done binding instance vms > hm9000_z1/0 (00:00:00)

Started binding instance vms > login_z1/0

Done binding instance vms > postgres_z1/0 (00:00:01)

Started binding instance vms > router_z1/0. Done (00:00:00)

Started binding instance vms > runner_z1/0. Done (00:00:00)

Started binding instance vms > stats_z1/0

Done binding instance vms > loggregator_trafficcontroller_z1/0 (00:00:01)

Done binding instance vms > login_z1/0 (00:00:01)

Done binding instance vms > stats_z1/0 (00:00:00)

Done binding instance vms (00:00:01)

Started preparing configuration > Binding configuration. Done (00:00:02)

Started updating job nats_z1 > nats_z1/0 (canary)

Started updating job etcd_z1 > etcd_z1/0 (canary)

Started updating job nfs_z1 > nfs_z1/0 (canary)

Done updating job etcd_z1 > etcd_z1/0 (canary) (00:03:20)

Done updating job nats_z1 > nats_z1/0 (canary) (00:03:25)

Done updating job nfs_z1 > nfs_z1/0 (canary) (00:05:10)

Started updating job postgres_z1 > postgres_z1/0 (canary). Done (00:02:01)

Started updating job api_z1 > api_z1/0 (canary)

Started
updating job ha_proxy_z1 > ha_proxy_z1/0 (canary)

Started updating job loggregator_z1 > loggregator_z1/0 (canary)

Started updating job hm9000_z1 > hm9000_z1/0 (canary)

Started updating job loggregator_trafficcontroller_z1 > loggregator_trafficcontroller_z1/0 (canary)

Started updating job login_z1 > login_z1/0 (canary)

Started updating job router_z1 > router_z1/0 (canary)

Started updating job stats_z1 > stats_z1/0 (canary)

Started updating job runner_z1 > runner_z1/0 (canary)

Done updating job hm9000_z1 > hm9000_z1/0 (canary) (00:01:43)

Done updating job loggregator_trafficcontroller_z1 > loggregator_trafficcontroller_z1/0 (canary) (00:02:13)

Done updating job loggregator_z1 > loggregator_z1/0 (canary) (00:02:15)

Done updating job ha_proxy_z1 > ha_proxy_z1/0 (canary) (00:02:17)

Done updating job stats_z1 > stats_z1/0 (canary) (00:03:22)

Done updating job login_z1 > login_z1/0 (canary) (00:03:23)

Done updating job runner_z1 > runner_z1/0 (canary) (00:04:10)

Done updating job router_z1 > router_z1/0 (canary) (00:04:30)

Done updating job api_z1 > api_z1/0 (canary) (00:06:08)

Task 8 done

Started 2015-02-06 06:56:25 UTC

Finished 2015-02-06 07:14:56 UTC

Duration 00:18:31

Deployed
`minimal-openstack.yml’ to `microbosh’

You can look at the VMs that have been created with the following command.

bosh vms

Which should yield an output that looks something along lines below.

Deployment `cf’

Director task 11

Task 11 done

+————————————+———+—————+—————–+

| Job/index | State | Resource Pool | IPs
|

+————————————+———+—————+—————–+

| api_z1/0 | running | small_z1 | 192.168.111.23 |

| etcd_z1/0 | running | small_z1 | 192.168.111.104 |

| ha_proxy_z1/0 | running | small_z1 | 192.168.111.24 |

| | | | 23.246.209.250 |

| hm9000_z1/0 | running | small_z1 | 192.168.111.25 |

| loggregator_trafficcontroller_z1/0 | running | small_z1 | 192.168.111.27 |

| loggregator_z1/0 | running | small_z1 | 192.168.111.26 |

| login_z1/0 | running | small_z1 | 192.168.111.28 |

| nats_z1/0 | running | small_z1 | 192.168.111.103 |

| nfs_z1/0 | running | small_z1 | 192.168.111.105 |

| postgres_z1/0 | running | small_z1 | 192.168.111.101 |

| router_z1/0 | running | small_z1 | 192.168.111.102 |

| runner_z1/0 | running | large_z1 | 192.168.111.29 |

| stats_z1/0 | running | small_z1 | 192.168.111.30 |

+————————————+———+—————+—————–+

VMs total: 13

The resources used up by the OpenStack Cloud Foundry install is illustrated below.

CFOO_v1_html_m5625ab6

Use the installed Cloud Foundry instance

At this point, the installation is ready and you can use the installed instance with the following command using the URL specified in the minimal-openstack YML file.

If you have not installed the Cloud Foundry CLI yet, you can install it based on the docs available at
https://github.com/cloudfoundry/cli/releases.

cf api –skip-ssl-validation api.23.246.209.250.xip.io

Which should yield an output along the lines of

Setting api endpoint to api.23.246.209.250.xip.io…

OK

API endpoint: https://api.23.246.209.250.xip.io (API version: 2.21.0)

Not logged in. Use ‘cf login’ to log in.

You can now login with the password as provided in the minimal-openstack YML file and start cf pushing apps.

Summary

Here are the steps we followed to install the latest Cloud Foundry release on OpenStack.

  1. Started with an OpenStack install with the requisite resources such as floating IPs, vCPUs, memory, disk space, etc.

  2. Installed MicroBOSH on a laptop/desktop.

  3. Installed BOSH on a VM instance of the OpenStack install with a floating IP assigned.

  4. Installed the latest Cloud Foundry release via the BOSH VM using BOSH.

  5. Installed the Cloud Foundry CLI.

Now we’re ready to cf login and cf push.

Advertisements

3 Comments

  1. […] In this blog, Rags looks at the steps needed to get Cloud Foundry installed and operational on OpenStack. This gives you access to both the OpenStack and Cloud Foundry stacks and various configurations, and is great for demos or testing. […]

  2. Reblogged this on Layne's Coding Play-ground and commented:
    A good reference of how to deploy Cloud Foundry on OpenStack

  3. Thats great its the same i too followed and it worked. Only change is w.r..t certs


Comments RSS TrackBack Identifier URI

Leave a Reply

Fill in your details below or click an icon to log in:

WordPress.com Logo

You are commenting using your WordPress.com account. Log Out / Change )

Twitter picture

You are commenting using your Twitter account. Log Out / Change )

Facebook photo

You are commenting using your Facebook account. Log Out / Change )

Google+ photo

You are commenting using your Google+ account. Log Out / Change )

Connecting to %s