Cloud Foundry on OpenStack

Introduction

Cloud Foundry runs on a variety of IaaS platforms, including OpenStack. OpenStack and Cloud Foundry are currently two of the most prominent cloud-focused open source projects.

This blog highlights the steps to install Cloud Foundry on OpenStack.

Requirements

I used the following hardware/software/environment.

  1. A laptop/desktop to install MicroBOSH
  2. An OpenStack install with about 25 virtual CPUs, 64-128GB of main memory, about 400GB of disk space, and an internal and external network with at least 2 floating IPs (a quick way to confirm quotas is sketched after this list). In this case I leveraged the Mirantis OpenStack Express team edition.
  3. One of the VM instances on OpenStack will be used to install BOSH. This requires a floating IP and 40-80GB of disk space.
  4. Besides this VM instance, I needed an environment to create about a dozen VM instances on the internal network running Ubuntu trusty. The HAProxy VM, which is used to access the Cloud Foundry instance, counted for the second floating IP.
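If you want to confirm up front that the OpenStack project has enough headroom, a quick check along the following lines should work. This is only a sketch: it assumes the legacy nova CLI client is installed and that the credentials file downloaded from Horizon (named openrc.sh here for illustration) has been sourced.

# Load the OpenStack credentials (filename is illustrative)
source openrc.sh
# Show compute limits and current usage (vCPUs, RAM, instances)
nova absolute-limits
# List floating IPs already allocated to the project
nova floating-ip-list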

Installing MicroBOSH

On my Mac laptop I ran into rvm issues (had to reinstall rvm). I used Ruby version 2.1.5.
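For reference, pinning the Ruby version with rvm looks roughly like this (a sketch; the exact rvm setup varies by machine):

# Install and select Ruby 2.1.5 as the default
rvm install 2.1.5
rvm use 2.1.5 --default
ruby -v    # should report ruby 2.1.5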

Install BOSH CLI with the following command.

gem install bosh_cli

Installing MicroBOSH is as simple as running the following command.

gem install bosh_cli_plugin_micro

Installing BOSH on a VM instance on OpenStack

I leveraged Mirantis OpenStack express online for the OpenStack environment. The screenshot below shows the resources available with the OpenStack install that will be used for the Cloud Foundry install.

[Screenshot: resources available on the OpenStack install used for Cloud Foundry]

I created a security group and a key pair in the OpenStack environment.

In the OpenStack environment that I used, the internal network allocated IPs from the 192.168.111.0/24 CIDR range. We will see subsequently how some of these addresses and this range are used in the manifest files for installing Cloud Foundry.

The external net had more than a dozen floating IPs although I really only needed two.

I created the security group with appropriate access as specified in the docs; I went ahead and allowed the ICMP, TCP and UDP protocols. I also created a key pair for ssh access to the VMs. I subsequently use the key pair name as defined on the OpenStack install, along with the path to the private key.
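If you prefer the command line over Horizon, the security group and key pair can be created along these lines. This is a sketch using the legacy nova CLI; the rules are deliberately wide open, as described above, and should be tightened for anything beyond an experiment.

# Security group named 'all' (referenced later in the manifests)
nova secgroup-create all "Security group for BOSH and Cloud Foundry"
nova secgroup-add-rule all icmp -1 -1 0.0.0.0/0
nova secgroup-add-rule all tcp 1 65535 0.0.0.0/0
nova secgroup-add-rule all udp 1 65535 0.0.0.0/0

# Key pair for ssh access; save the private key locally
nova keypair-add RagsMBP > id_rsa
chmod 600 id_rsa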

From the OpenStack install, I noted down the following and substituted them in the MicroBOSH manifest file below.

  • A floating IP that will be used by the VM instance on which I installed BOSH. It can be any of the floating IPs that has not been allocated (yet).

  • An internal address from the subnet IP address allocation pool of OpenStack internal network that has not been allocated (yet).

  • The internal network UUID (see the lookup sketch after this list)

  • The authorization URL, which looks something like this: http://23.246.209.226:5000/v2.0. Substitute your Horizon IPv4 address.

  • The tenant and the username, which is admin in this case.

  • The password used for Horizon or API access, which will be used as the API key.

  • The name of the key pair for ssh access to the VMs.

  • The name of the security group that was created.
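Most of these values can also be read off the command line; something like the following surfaces the network UUID and the credentials-related settings (a sketch assuming the neutron client is installed and the credentials file has been sourced):

# The UUID of the internal network goes into net_id in the manifest
neutron net-list
# The auth URL, tenant and username come from the sourced credentials
env | grep OS_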

I substituted the above values, along with the path to the private key corresponding to the key pair name noted down earlier, into a file named manifest.yml as below, based on the sample provided.

I’ve provided some values just for illustration purposes. The first line with three dashes is a YAML document separator and is required. The indentation also needs to be followed as below.

---

name: microbosh

network:

  type: manual

  vip: 23.246.209.228 # Replace with a floating IP address

  ip:  192.168.111.228 # Replace with an address from the subnet IP address allocation pool of your OpenStack internal network

  cloud_properties:

    net_id: 60fdc0d9-6054-4b1d-8aa0-f02e58c662bc # Replace with your OpenStack internal network UUID

resources:

  persistent_disk: 80000

  cloud_properties:

     instance_type: m1.large

cloud:

  plugin: openstack

  properties:

    openstack:

      auth_url: http://23.246.209.226:5000/v2.0 #Replace with your OpenStack Identity API endpoint

      tenant: admin # Replace with OpenStack tenant name

      username: admin # Replace with OpenStack username

      api_key: XXX # Replace with your OpenStack password

      default_key_name: RagsMBP # OpenStack Keypair name

      private_key: id_rsa # Path to OpenStack Keypair private key

      default_security_groups: [all]

apply_spec:

  properties:

    director: {max_threads: 3}

    hm: {resurrector_enabled: true}

    ntp: [0.north-america.pool.ntp.org, 1.north-america.pool.ntp.org]

When set to true, the resurrector_enabled property for the BOSH Health Monitor (hm) tells BOSH to recreate failed VMs from the same templates that were used to deploy them. So, for example, if the NATS VM dies, BOSH notices the failed health check and spins up a replacement VM from the exact same template that was used to construct the original.
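Resurrection can also be toggled per VM from the BOSH CLI once the director is up. I believe the classic CLI exposes this through the vm resurrection subcommand; treat the exact syntax below as an assumption and check bosh help on your version.

# Turn the resurrector off (and back on) for a single job instance
bosh vm resurrection nats_z1 0 off
bosh vm resurrection nats_z1 0 on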

I ran the following command to set the deployment manifest for installing BOSH on the OpenStack install using MicroBOSH.

bosh micro deployment manifest.yml

The output is as below.

Deployment set to ‘/Users/rags/src/openstack-bosh/manifest.yml’

I downloaded the Ubuntu Stem Cell with the following command. Substitute the appropriate version based on availability.

bosh download public stemcell bosh-stemcell-2831-openstack-kvm-ubuntu-trusty-go_agent.tgz

Completion is shown below for illustration.

bosh-stemcell: 100%
|ooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo| 477.6MB 889.1KB/s Time: 00:09:10

Download complete

Stem Cells are machine images with the BOSH agent and other base software pre-installed.
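If you are unsure which stemcell versions are currently available, the BOSH CLI can list the public ones; the catalog changes over time, so treat this as a sketch.

# List publicly available stemcells, then pick the OpenStack KVM Ubuntu Trusty one
bosh public stemcells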

I deployed this Stem Cell with the following command.

bosh micro deploy bosh-stemcell-2831-openstack-kvm-ubuntu-trusty-go_agent.tgz

I acknowledged the prompts as shown below. You can safely ignore the error that might be generated as shown
below.

No `bosh-deployments.yml` file found in current directory.

Conventionally, `bosh-deployments.yml` should be saved in /Users/rags/src.

Is /Users/rags/src/openstack-bosh a directory where you can save state? (type ‘yes’ to continue): yes

Deploying new micro BOSH instance manifest.yml to https://23.246.209.228:25555 (type ‘yes’ to continue): yes

Verifying stemcell…

File exists and readable

Verifying tarball…

Read tarball

Manifest exists

Stemcell image file

Stemcell properties

Stemcell info

————-

Name: bosh-openstack-kvm-ubuntu-trusty-go_agent

Version: 2831

Started deploy micro bosh

Started deploy micro bosh > Unpacking stemcell Done (00:00:08)

Started deploy micro bosh > Uploading stemcell Done (00:04:08)

Started deploy micro bosh > Creating VM from 056bd8cd-90e7-431e-99be-e6d0e75ebafc Done (00:00:41)

Started deploy micro bosh > Waiting for the agent Done (00:01:39)

Started deploy micro bosh > Updating persistent disk

Started deploy micro bosh > Create disk Done (00:00:08)

Started deploy micro bosh > Mount disk Done (00:00:08)

Done deploy micro bosh > Updating persistent disk (00:00:24)

Started deploy micro bosh > Stopping agent services Done (00:00:04)

Started deploy micro bosh > Applying micro BOSH spec Done (00:00:21)

Started deploy micro bosh > Starting agent services Done (00:00:02)

Started deploy micro bosh > Waiting for the director Done (00:00:12)

Done deploy micro bosh (00:07:39)log writing failed. can’t be called from trap context

Deployed manifest.yml to https://23.246.209.228:25555, took 00:07:39 to complete

/Users/rags/.rvm/gems/ruby-2.1.5/gems/net-ssh-2.9.2/lib/net/ssh/ruby_compat.rb:30:in `select': Bad file descriptor (Errno::EBADF)

You can ignore the Errno::EBADF error and the associated stack trace.

I verified that BOSH is installed with the following command.

bosh target https://23.246.209.228:25555

Which should yield the following output. Use admin as password.

Target set to microbosh

Your username: admin

Enter password: *****

Logged in as admin

You can get more information with the following command.

bosh status

Which should yield output that looks something like below.

Config /Users/rags/.bosh_config

Director

Name microbosh

URL https://23.246.209.228:25555

Version 1.2831.0 (00000000)

User admin

UUID 2eee9a10-34c6-4d01-8133-8e9e7b2aaf29

CPI openstack

dns enabled (domain_name: microbosh)

compiled_package_cache disabled

snapshots disabled

Deployment not set

The installed BOSH VM is shown in the screenshot below.

[Screenshot: the BOSH VM instance in the OpenStack dashboard]

Having installed BOSH, I was ready to install Cloud Foundry: plug the director UUID into a BOSH deployment manifest, set the deployment to that manifest, and deploy it.
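The director UUID can be pulled straight from the CLI rather than copied out of the bosh status output; a minimal sketch:

# Prints just the director UUID, which goes into director_uuid in the manifest
bosh status --uuid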

Install Cloud Foundry

Create a manifest file called minimal-openstack.yml, which looks like below and is based on the AWS sample.

Replace the passwords for Cloud Foundry below with appropriate values. Make sure the internal network gateway is specified and that addresses already in use (such as the BOSH VM) fall within the reserved range of .2 to .20. You could also adjust the flavors to suit the installation; most of the VMs that are created are m1.small, except for the DEA, which is an m1.xlarge.
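Flavor names and sizes differ between OpenStack installs, so it is worth confirming what m1.small, m1.medium and m1.xlarge map to before deploying (a sketch assuming the nova client):

# Check the vCPU/RAM/disk behind each flavor referenced in the manifest
nova flavor-list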

name: cf

director_uuid: 2eee9a10-34c6-4d01-8133-8e9e7b2aaf29 #REPLACE with Director UUID

releases:

- {name: cf, version: 197}

networks:

- name: cf_private

  type: manual

  subnets:

  - range: 192.168.111.0/24 #REPLACE 10.0.16.0/24

    gateway: 192.168.111.1 #REPLACE 10.0.16.1

    dns: [8.8.8.8]

    reserved: ["192.168.111.2 - 192.168.111.20"] #REPLACE ["10.0.16.2 - 10.0.16.3"]

    static: ["192.168.111.100 - 192.168.111.105"] #REPLACE ["10.0.16.100 - 10.0.16.105"]

    cloud_properties: {net_id: 60fdc0d9-6054-4b1d-8aa0-f02e58c662bc} #REPLACE_WITH_PRIVATE_SUBNET_ID

- name: elastic

  type: vip

  cloud_properties: {}

resource_pools:

- name: small_z1

  network: cf_private

  stemcell:

    name: bosh-openstack-kvm-ubuntu-trusty-go_agent #REPLACE

    version: latest

  cloud_properties:

    availability_zone: nova #REPLACE_WITH_AZ

    instance_type: m1.small

- name: large_z1

  network: cf_private

  stemcell:

    name: bosh-openstack-kvm-ubuntu-trusty-go_agent #REPLACE

    version: latest

  cloud_properties:

    instance_type: m1.xlarge

compilation:

  workers: 6

  network: cf_private

  reuse_compilation_vms: true

  cloud_properties:

    availability_zone: nova #REPLACE_WITH_AZ

    instance_type: m1.medium #REPLACE

update:

  canaries: 1

  max_in_flight: 1

  serial: false

  canary_watch_time: 30000-600000

  update_watch_time: 5000-600000

jobs:

- name: nats_z1

  instances: 1

  resource_pool: small_z1

  templates:

  - {name: nats, release: cf}

  - {name: nats_stream_forwarder, release: cf}

  - {name: metron_agent, release: cf}

  networks:

  - name: cf_private

    static_ips: [192.168.111.103]

- name: etcd_z1

  instances: 1

  resource_pool: small_z1

  templates:

  - {name: etcd, release: cf}

  - {name: etcd_metrics_server, release: cf}

  - {name: metron_agent, release: cf}

  networks:

  - name: cf_private

    static_ips: [192.168.111.104]

  properties:

    etcd_metrics_server:

      nats:

        machines: [192.168.111.103]

        password: REPLACE_PASSWORD

        username: nats

- name: nfs_z1

  instances: 1

  persistent_disk: 102400

  resource_pool: small_z1

  templates:

  - {name: debian_nfs_server, release: cf}

  - {name: metron_agent, release: cf}

  networks:

  - name: cf_private

    static_ips: [192.168.111.105]

- name: postgres_z1

  instances: 1

  persistent_disk: 1024

  resource_pool: small_z1

  templates:

  - {name: postgres, release: cf}

  - {name: metron_agent, release: cf}

  networks:

  - name: cf_private

    static_ips: [192.168.111.101]

  update:

    serial: true

- name: api_z1

  instances: 1

  resource_pool: small_z1

  templates:

  - {name: cloud_controller_ng, release: cf}

  - {name: cloud_controller_worker, release: cf}

  - {name: cloud_controller_clock, release: cf}

  - {name: metron_agent, release: cf}

  - {name: nfs_mounter, release: cf}

  networks:

  - name: cf_private

  properties:

    nfs_server:

      address: 192.168.111.105

      allow_from_entries: [192.168.111.0/24]

- name: ha_proxy_z1

  instances: 1

  resource_pool: small_z1

  templates:

  - {name: haproxy, release: cf}

  - {name: metron_agent, release: cf}

  networks:

  - name: elastic

    static_ips: [23.246.209.250] #REPLACE_WITH_ELASTIC_IP

  - name: cf_private

    default: [gateway, dns]

  properties:

    ha_proxy:

      ssl_pem: |

        -----BEGIN CERTIFICATE-----

        MIIB8zCCAVwCCQCqpCViv8Vd9TANBgkqhkiG9w0BAQUFADA+MQswCQYDVQQGEwJB

        VTETMBEGA1UECBMKU29tZS1TdGF0ZTEMMAoGA1UEChQDKi4qMQwwCgYDVQQDFAMq

        LiowHhcNMTUwMTI3MTkyMDM4WhcNMTUwMjI2MTkyMDM4WjA+MQswCQYDVQQGEwJB

        VTETMBEGA1UECBMKU29tZS1TdGF0ZTEMMAoGA1UEChQDKi4qMQwwCgYDVQQDFAMq

        LiowgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBAPcJyEfEQ5CxO9c8fxUzF0LN

        rkUGKlzHbu7INQ3TPhf5cHXdGg5patJrgDKhQEPbeqwnlMsVtq7si+VueeskaIb2

        eYpMKtLGUpBuK6zLR7Dqg56xVd20sTbcj8MpRzpHkoWBR+mA166LUw4xq1Gs7wcY

        pOs05Fez6jM9pIk7Sq1lAgMBAAEwDQYJKoZIhvcNAQEFBQADgYEAw0tszOhWGbJ5

        t3HzpXFY+2GhytiFEGiKk41hBAidxpziogbrBb6JW4s5r2BLKSRxrFuUT0jBnmbS

        psD2eOyQFXJy1ZDrIKr3hUWEkwj20ACcU3Co1AJjKqU4n5lANmD2GOAgzwwg47o3

        4xxFrgWOjLbpzFspzIVCGlQol4TpSF4=

        -----END CERTIFICATE-----

        -----BEGIN RSA PRIVATE KEY-----

        MIICXAIBAAKBgQD3CchHxEOQsTvXPH8VMxdCza5FBipcx27uyDUN0z4X+XB13RoO

        aWrSa4AyoUBD23qsJ5TLFbau7IvlbnnrJGiG9nmKTCrSxlKQbiusy0ew6oOesVXd

        tLE23I/DKUc6R5KFgUfpgNeui1MOMatRrO8HGKTrNORXs+ozPaSJO0qtZQIDAQAB

        AoGBAK+LCgDFXGWzK5y05nFADuVvlsiBqxSmuxN+vQSH+XW70MhQRzW6fyfrL/vK

        TgpqKe+vaLIvSdNfT8HHEWegRY1MptHAUkWNQ3Grz0uhBZZzxjujxNVPHNNbKfd+

        jyidZXRQA3Q6Tf+anCs5xrF99bTnya33X46OIC33UnghHqj5AkEA/lpMp25FWcnG

        EVW38AJ6GRdPUViyfsyCzJRdTdTri4mrDVc2+pkpasxZHd9SPb1mVWKD4nBTbx/f

        hET8J1+SiwJBAPijWxzZGuZgquEc2aTluKyRFsE65TMiEgepwyav70H/oXveSq97

        c01W/k9dKIQex3cQ/GtvRH+jZWUAWCSabc8CQAWskgUyKo3kOGzukpniFEM3B+fy

        qJi3iztxG9u+ojqMqao0hd91Rz1ArcRC1RzXes7w0axdgR77gQr8Vvux4B0CQDAt

        nysQ2oiHdLUYHQg5xzYRCyK4Ic9tq6a2e20UrDzSptzUrw4f0rDKyY5hU8d+G1J0

        BSVgMxq0c6JFlc7J6bsCQAn5VJ8cPB8hBSSaLPv6uY79JFbXzZlhgzdetdTz5DVo

        Hvgz+e0z3aq6haPesysurpYTIJ9DU30iCCYahXbjNTU=

        -----END RSA PRIVATE KEY-----

    router:

      servers:

        z1: [192.168.111.102]

- name: hm9000_z1

  instances: 1

  resource_pool: small_z1

  templates:

  - {name: hm9000, release: cf}

  - {name: metron_agent, release: cf}

  networks:

  - name: cf_private

- name: loggregator_z1

  instances: 1

  resource_pool: small_z1

  templates:

  - {name: doppler, release: cf}

  networks:

  - name: cf_private

  properties:

    doppler: {zone: z1}

    doppler_endpoint:

       shared_secret: PASSWORD #REPLACE

- name: loggregator_trafficcontroller_z1

  instances: 1

  resource_pool: small_z1

  templates:

  - {name: loggregator_trafficcontroller, release: cf}

  - {name: metron_agent, release: cf}

  networks:

  - name: cf_private

  properties:

    traffic_controller: {zone: z1}

- name: login_z1

  instances: 1

  resource_pool: small_z1

  templates:

  - {name: login, release: cf}

  - {name: uaa, release: cf}

  - {name: metron_agent, release: cf}

  networks:

  - name: cf_private

  properties:

    login:

      catalina_opts: -Xmx768m -XX:MaxPermSize=256m

    uaa:

      port: 8081

      admin:

        client_secret: PASSWORD

      batch:

        password: PASSWORD

        username: batch_user

      cc:

        client_secret: PASSWORD

      scim:

        userids_enabled: true

        users:

        - admin|PASSWORD|scim.write,scim.read,openid,cloud_controller.admin,doppler.firehose

    uaadb:

      address: 192.168.111.101

      databases:

      - {name: uaadb, tag: uaa}

      db_scheme: postgresql

      port: 5524

      roles:

      - {name: uaaadmin, password: PASSWORD, tag: admin}

- name: router_z1

  instances: 1

  resource_pool: small_z1

  templates:

  - {name: gorouter, release: cf}

  - {name: metron_agent, release: cf}

  networks:

  - name: cf_private

    static_ips: [192.168.111.102]

  properties:

    dropsonde: {enabled: true}

- name: runner_z1

  instances: 1

  resource_pool: large_z1

  templates:

  - {name: dea_next, release: cf}

  - {name: dea_logging_agent, release: cf}

  - {name: metron_agent, release: cf}

  networks:

  - name: cf_private

  properties:

    dea_next: {zone: z1}

- name: stats_z1

  instances: 1

  resource_pool: small_z1

  templates:

  - {name: collector, release: cf}

  - {name: metron_agent, release: cf}

  networks:

  - name: cf_private

  properties:

    collector: {deployment_name: CF}

properties:

  networks: {apps: cf_private}

  app_domains: [23.246.209.250.xip.io] #REPLACE_WITH_SYSTEM_DOMAIN

  cc:

    billing_event_writing_enabled: false

    bulk_api_password: PASSWORD

    db_encryption_key: PASSWORD

    default_running_security_groups: [public_networks, dns]

    default_staging_security_groups: [public_networks, dns]

    install_buildpacks:

    - {name: java_buildpack, package: buildpack_java}

    - {name: ruby_buildpack, package: buildpack_ruby}

    - {name: nodejs_buildpack, package: buildpack_nodejs}

    - {name: go_buildpack, package: buildpack_go}

    - {name: python_buildpack, package: buildpack_python}

    - {name: php_buildpack, package: buildpack_php}

    internal_api_password: PASSWORD

    quota_definitions:

      default:

        memory_limit: 102400

        non_basic_services_allowed: true

        total_routes: 1000

        total_services: -1

    security_group_definitions:

    - name: public_networks

      rules:

      - {destination: 0.0.0.0-9.255.255.255, protocol: all}

      - {destination: 11.0.0.0-169.253.255.255, protocol: all}

      - {destination: 169.255.0.0-172.15.255.255, protocol: all}

      - {destination: 172.32.0.0-192.167.255.255, protocol: all}

      - {destination: 192.169.0.0-255.255.255.255, protocol: all}

    - name: dns

      rules:

      - {destination: 0.0.0.0/0, ports: '53', protocol: tcp}

      - {destination: 0.0.0.0/0, ports: '53', protocol: udp}

    srv_api_uri: http://api.23.246.209.250.xip.io #REPLACE_WITH_SYSTEM_DOMAIN

    staging_upload_password: PASSWORD

    staging_upload_user: staging_upload_user

  ccdb:

    address: 192.168.111.101

    databases:

    - {name: ccdb, tag: cc}

    db_scheme: postgres

    port: 5524

    roles:

    - {name: ccadmin, password: PASSWORD, tag: admin}

  databases:

    databases:

    - {name: ccdb, tag: cc, citext: true}

    - {name: uaadb, tag: uaa, citext: true}

    port: 5524

    roles:

    - {name: ccadmin, password: PASSWORD, tag: admin}

    - {name: uaaadmin, password: PASSWORD, tag: admin}

  dea_next:

    advertise_interval_in_seconds: 5

    heartbeat_interval_in_seconds: 10

    memory_mb: 33996

  description: Cloud Foundry sponsored by Pivotal

  domain: 23.246.209.250.xip.io #REPLACE_WITH_SYSTEM_DOMAIN

  etcd:

    machines: [192.168.111.104]

  hm9000:

    url: http://hm9000.23.246.209.250.xip.io #REPLACE_WITH_SYSTEM_DOMAIN

  logger_endpoint:

    port: 4443

  loggregator_endpoint:

    shared_secret: PASSWORD

  login:

    protocol: http

  metron_agent:

    zone: z1

  metron_endpoint:

    shared_secret: PASSWORD

  nats:

    machines: [192.168.111.103]

    password: PASSWORD

    port: 4222

    user: nats

  nfs_server:

    address: 192.168.111.105

    allow_from_entries: [192.168.111.0/24]

  ssl:

    skip_cert_verify: true

  system_domain: 23.246.209.250.xip.io #REPLACE_WITH_SYSTEM_DOMAIN

  system_domain_organization: default_organization

  uaa:

    clients:

      cc-service-dashboards:

        authorities: clients.read,clients.write,clients.admin

        authorized-grant-types: client_credentials

        scope: openid,cloud_controller_service_permissions.read

        secret: PASSWORD

      cloud_controller_username_lookup:

        secret: PASSWORD

      doppler:

        authorities: uaa.resource

        secret: PASSWORD

      login:

        authorities: oauth.login,scim.write,clients.read,notifications.write,critical_notifications.write,emails.write,scim.userids,password.write

        authorized-grant-types: authorization_code,client_credentials,refresh_token

        redirect-uri: http://login.23.246.209.250.xip.io

        scope: openid,oauth.approvals

        secret: PASSWORD

      servicesmgmt:

        authorities: uaa.resource,oauth.service,clients.read,clients.write,clients.secret

        authorized-grant-types: authorization_code,client_credentials,password,implicit

        autoapprove: true

        redirect-uri: http://servicesmgmt.23.246.209.250.xip.io/auth/cloudfoundry/callback #REPLACE_WITH_SYSTEM_DOMAIN

        scope: openid,cloud_controller.read,cloud_controller.write

        secret: PASSWORD

    jwt:

      signing_key: |

        -----BEGIN RSA PRIVATE KEY-----

        MIICXAIBAAKBgQDHFr+KICms+tuT1OXJwhCUmR2dKVy7psa8xzElSyzqx7oJyfJ1

        JZyOzToj9T5SfTIq396agbHJWVfYphNahvZ/7uMXqHxf+ZH9BL1gk9Y6kCnbM5R6

        0gfwjyW1/dQPjOzn9N394zd2FJoFHwdq9Qs0wBugspULZVNRxq7veq/fzwIDAQAB

        AoGBAJ8dRTQFhIllbHx4GLbpTQsWXJ6w4hZvskJKCLM/o8R4n+0W45pQ1xEiYKdA

        Z/DRcnjltylRImBD8XuLL8iYOQSZXNMb1h3g5/UGbUXLmCgQLOUUlnYt34QOQm+0

        KvUqfMSFBbKMsYBAoQmNdTHBaz3dZa8ON9hh/f5TT8u0OWNRAkEA5opzsIXv+52J

        duc1VGyX3SwlxiE2dStW8wZqGiuLH142n6MKnkLU4ctNLiclw6BZePXFZYIK+AkE

        xQ+k16je5QJBAN0TIKMPWIbbHVr5rkdUqOyezlFFWYOwnMmw/BKa1d3zp54VP/P8

        +5aQ2d4sMoKEOfdWH7UqMe3FszfYFvSu5KMCQFMYeFaaEEP7Jn8rGzfQ5HQd44ek

        lQJqmq6CE2BXbY/i34FuvPcKU70HEEygY6Y9d8J3o6zQ0K9SYNu+pcXt4lkCQA3h

        jJQQe5uEGJTExqed7jllQ0khFJzLMx0K6tj0NeeIzAaGCQz13oo2sCdeGRHO4aDh

        HH6Qlq/6UOV5wP8+GAcCQFgRCcB+hrje8hfEEefHcFpyKH+5g1Eu1k0mLrxK2zd+

        4SlotYRHgPCEubokb2S1zfZDWIXW3HmggnGgM949TlY=

        -----END RSA PRIVATE KEY-----

      verification_key: |

        -----BEGIN PUBLIC KEY-----

        MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDHFr+KICms+tuT1OXJwhCUmR2d

        KVy7psa8xzElSyzqx7oJyfJ1JZyOzToj9T5SfTIq396agbHJWVfYphNahvZ/7uMX

        qHxf+ZH9BL1gk9Y6kCnbM5R60gfwjyW1/dQPjOzn9N394zd2FJoFHwdq9Qs0wBug

        spULZVNRxq7veq/fzwIDAQAB

        -----END PUBLIC KEY-----

    no_ssl: true

    url: http://uaa.23.246.209.250.xip.io #REPLACE_WITH_SYSTEM_DOMAIN

Verify that there are no Stem Cells uploaded (yet) to the BOSH Director.

bosh stemcells

Which should yield the following output.

No stemcells

Upload the same Stem Cell, this time to the new Director, using the following command.

bosh upload stemcell bosh-stemcell-2831-openstack-kvm-ubuntu-trusty-go_agent.tgz

Which should yield the following output.

Verifying stemcell…

File exists and readable

Verifying tarball…

Read tarball

Manifest exists

Stemcell image file

Stemcell properties

Stemcell info

————-

Name: bosh-openstack-kvm-ubuntu-trusty-go_agent

Version: 2831

Checking if stemcell already exists…

No

Uploading stemcell…

bosh-stemcell: 100%
|ooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo| 477.6MB 1.3MB/s Time: 00:06:20

Director task

Started update stemcell

Started update stemcell > Extracting stemcell archive. Done (00:00:04)

Started update stemcell > Verifying stemcell manifest. Done (00:00:00)

Started update stemcell > Checking if this stemcell already exists. Done
(00:00:00)

Started update stemcell > Uploading stemcell
bosh-openstack-kvm-ubuntu-trusty-go_agent/2831 to the cloud. Done (00:00:19)

Started update stemcell > Save stemcell
bosh-openstack-kvm-ubuntu-trusty-go_agent/2831 (2bca513a-3735-4e4a-8fbb-69cb3d201e2a). Done (00:00:00)

Done update stemcell (00:00:23)

Task 1 done

Started 2015-02-05 20:14:27 UTC

Finished 2015-02-05 20:14:50 UTC

Duration 00:00:23

Stemcell uploaded and created.

Git clone the Cloud Foundry releases as below.

git clone https://github.com/cloudfoundry/cf-release.git

We will upload the latest Cloud Foundry release from the releases subdirectory with the following command. Release 197 was the latest in that subdirectory at the time.
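To see which release versions are present in the clone and pick the latest, something like this works (the -V version-sort flag is a GNU coreutils feature; on a Mac a plain listing is usually enough):

# List the final release manifests shipped in the repo; the highest number is the latest
ls cf-release/releases/cf-*.yml | sort -V | tail -5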

bosh upload release /Users/rags/src/openstack-bosh/cf-release/releases/cf-197.yml

The command above will create the release tarball and upload it. Alternatively, you can upload a pre-built release directly, per the BOSH docs, with the following command.

bosh upload release https://bosh.io/d/github.com/cloudfoundry/cf-release?v=197

After a few retries, the release was finally uploaded, as shown in the output below. I’ve included only the final part of the output.

Director task

Started extracting release > Extracting release. Done (00:00:40)

Started verifying manifest >Verifying manifest. Done (00:00:00)

Started resolving package dependencies > Resolving package dependencies. Done (00:00:00)

Started creating new packages

Started creating new packages > hm9000/ce5b3ae4b0fa4ece6c5e741fd9b675751e78eb73. Done (00:00:01)

Started creating new packages > uaa/263a72b91dfb6e4e9a2983e50694b5536e81c0bb. Done (00:00:04)

Started creating new packages > haproxy/630ad6d6e1d3cab4547ce104f3019b483f354613. Done (00:00:01)

Started creating new packages > loggregator_trafficcontroller/804c57c396af3a9b4484af58cf41a08a66d8d936. Done (00:00:01)

Started creating new packages >
buildpack_python/759fe70bfa3278668f142156e5688d5638597d5e. Done (00:00:02)

Started creating new packages >
rootfs_lucid64/f9c5405a71198038378ef7fe45b75f1e1f0aa634. Done (00:00:04)

Started creating new packages > sqlite/af44d44e58fffd069459cb63f6fcc37c6326e370. Done (00:00:01)

Started creating new packages > buildpack_php/6ee814b62c4f7a587fdb75f0aeee6775cbb95690. Done (00:00:10)

Started creating new packages > buildpack_java_offline/b82592e53483bcfcea2692aff9fa351c9fb69f12. Done (00:00:04)

Started creating new packages > acceptance-tests/a9fa0a313c165729b0dde68be7112b392b02b141. Done (00:00:00)

Started creating new packages >
buildpack_nodejs/d243df46ac9056320914ca1d6843e112b309720d. Done (00:00:08)

Started creating new packages > buildpack_go/a494a270d015bca2ff18e5cd919f858849cb8d43. Done
(00:00:18)

Started creating new packages > postgres/b63fe0176a93609bd4ba44751ea490a3ee0f646c. Done (00:00:01)

Started creating new packages > golang1.3/e4b65bcb478d9bea1f9c92042346539713551a4a. Done (00:00:01)

Started creating new packages > smoke-tests/d1aaf8be8786ee2ee001ce0a30d154c268f770fc. Done (00:00:00)

Started creating new packages > ruby-2.1.4/5a4612011cb6b8338d384acc7802367ae5e11003. Done (00:00:00)

Started creating new packages > login/f2f60e4ae26ec74ddb2b6ae50aefe47517267fab. Done (00:00:07)

Started creating new packages > golang/aa5f90f06ada376085414bfc0c56c8cd67abba9c. Done (00:00:06)

Started creating new packages > mruby/cd102a7fe440fd9eaeee99c6bc460b06884cbda6. Done (00:00:03)

Started creating new packages > etcd_metrics_server/64efbbfb5761d09a24dad21ecfebd8824b99d433. Done (00:00:01)

Started creating new packages > etcd/44df7612404c5b2ecc1f167126b9e0b20481f79d. Done (00:00:02)

Started creating new packages >
metron_agent/fe9066813bc8f9e641433c1eb9114c561cf6aa40. Done (00:00:02)

Started creating new packages > nginx_newrelic_plugin/92f2c6fb3f807f030d989c52cd1de445eba3f296. Done (00:00:00)

Started creating new packages > dea_next/5ea8a66d9595246ecc049df72168a49715cd4019. Done (00:00:00)

Started creating new packages > buildpack_java/cb9735380ce491b081024de9ef9e981dfb5e1bbf. Done (00:00:00)

Started creating new packages > gnatsd/a0d6f5d3264aa8ecadb52d3bfa04540636800820. Done (00:00:01)

Started creating new packages > mysqlclient/8b5d9ce287341048377997a9b3fe4ff3e6a1c68f. Done (00:00:00)

Started creating new packages > nats/cc6bda829a77ab2321c0c216aa9107aca92c3b1a. Done (00:00:00)

Started creating new packages > buildpack_cache/4ced0bc62f12dcaa79121718ca3525253ede33b5. Done (00:00:02)

Started creating new packages > libpq/49cc7477fcf9a3fef7a1f61e1494b32288587ed8. Done (00:00:01)

Started creating new packages > nginx/c916c10937c83a8be507d3100133101eb403c826. Done (00:00:00)

Started creating new packages > collector/9fa967f02b3dacc621369babb1a5e0b7940a9c80. Done (00:00:00)

Started creating new packages > cli/ce4c111a383538f658e40bf3411fad51d7a5ea29. Done (00:00:00)

Started creating new packages > doppler/024dcacf950b7e12250bc86a1a015a41dd18fbad. Done (00:00:00)

Started creating new packages > gorouter/c74a7b6edf8722d9e1f84b6d3af5985cf621e730. Done (00:00:01)

Started creating new packages > debian_nfs_server/aac05f22582b2f9faa6840da056084ed15772594. Done (00:00:00)

Started creating new packages > cloud_controller_ng/d456e9796a45255a9c680025a9c265703a969201. Done (00:00:00)

Started creating new packages > common/43595236d1ce5f9a6120198108c226c07ab17012. Done (00:00:00)

Started creating new packages > dea_logging_agent/bcd5bb7f0fab180231251b394d7ebfbc962dd4db. Done (00:00:01)

Started creating new packages > buildpack_ruby/bfcad10207e5296f4b647bc5e41a67d4c3b46d77. Done (00:00:14)

Started creating new packages > warden/7d6d68c3f52d9a0925171f879e953b352bbf1325. Done (00:00:02)

Done creating new packages (00:01:39)

Started creating new jobs

Started creating new jobs > hm9000/f9a5be2966ce849ed0e600007d26cbd4339e530. Done (00:00:03)

Started creating new jobs > uaa/9617a6fc1a23bbf8ae92afcfd4022130423e3c43. Done (00:00:00)

Started creating new jobs > haproxy/065430834b344d0ad3f3b1fcede41b40ffa73bc1. Done (00:00:00)

Started creating new jobs > loggregator_trafficcontroller/15c4c206ea00c79c0421e7c859ea14e356c33bd4. Done (00:00:00)

Started creating new jobs > cloud_controller_clock/0c2f409b7cb6f53aebfec7281287a72049455366. Done (00:00:00)

Started creating new jobs > nfs_mounter/e597e616003791ea243cf1d49880a90088c1129e. Done (00:00:00)

Started creating new jobs > acceptance-tests/32ef38f30a8e9b1684075ec4536f3e8861557add. Done
(00:00:00)

Started creating new jobs > cloud_controller_worker/0227ac611c3a924923d5f0afae8468101de20a89.
Done (00:00:00)

Started creating new jobs > postgres/c7870a2b525bf3ec6d77c7a85caca1e094d188b0. Done (00:00:00)

Started creating new jobs > smoke-tests/293ea288da3ba64d548272197a5c984629dcf1b9. Done (00:00:00)

Started creating new jobs > login/04e3b07d6252f84ee1edc65608c7cd3f0f7cc501. Done (00:00:01)

Started creating new jobs > etcd_metrics_server/bdf9fa22da62d0b08c4d867c3a1285fc355290ef. Done (00:00:00)

Started creating new jobs > etcd/79b92026e3dbfa40439693f0293f50910125ceb6. Done (00:00:00)

Started creating new jobs > metron_agent/f56b50b44594442cddf576028b10bc9e079d5ccf. Done (00:00:00)

Started creating new jobs > dea_next/2b81e8aae1f9f2bcd41706e337eebf91ff891bac. Done (00:00:00)

Started creating new jobs > nats/e010c50267d0a477e741172582610c6f0584da8b. Done (00:00:00)

Started creating new jobs > collector/43b67fee22ef25f6e8958d92035bcd4a499bb975. Done (00:00:00)

Started creating new jobs > doppler/312b1137f4e4aa085de54f57f43ee502f4e65d47. Done (00:00:00)

Started creating new jobs > gorouter/f4ad08821bbefbf6c83e6fa50eb0f752bc516ed5. Done (00:00:00)

Started creating new jobs > debian_nfs_server/c9138cb0ea7921872f9fcd55a465c90335fb051f. Done
(00:00:00)

Started creating new jobs > cloud_controller_ng/4cd2825f9c760ec1d904798b4222e77b64238926. Done (00:00:00)

Started creating new jobs > dea_logging_agent/62846e802ab7f25d41a6a49a77e6913e74e0cf28. Done (00:00:00)

Started creating new jobs > nats_stream_forwarder/0f7a5da9369b2baf5a46c2a9af4c75f14d09afd3. Done (00:00:01)

Done creating new jobs (00:00:05)

Started release has been created > cf/197. Done (00:00:00)

Task 3 done

Started 2015-02-06 06:19:38 UTC

Finished 2015-02-06 06:22:02 UTC

Duration 00:02:24

Release uploaded

Deploy using BOSH. Set the deployment as below.

bosh deployment minimal-openstack.yml

Deploy with the following command.

bosh deploy

If you monitor the VM instances on the OpenStack install, you should be able to see the creation of the compilation worker VMs, as below.

[Screenshot: compilation worker VMs being created in OpenStack]

This is followed by the creation of the Cloud Foundry components in different VMs, as below.

[Screenshot: Cloud Foundry component VMs in OpenStack]

The CLI output looks something like below.

Processing deployment manifest

——————————

Getting deployment properties from director…

Compiling deployment manifest…

Please review all changes carefully

Deploying

———

Deployment name: minimal-openstack.yml

Director name: microbosh

Are you sure you want to deploy? (type ‘yes’ to continue): yes

Director task

Started preparing deployment

Started preparing deployment > Binding deployment. Done (00:00:00)

Started preparing deployment > Binding releases. Done (00:00:00)

Started preparing deployment > Binding existing deployment. Done (00:00:00)

Started preparing deployment > Binding resource pools. Done (00:00:00)

Started preparing deployment > Binding stemcells. Done (00:00:00)

Started preparing deployment > Binding templates. Done (00:00:00)

Started preparing deployment > Binding properties. Done (00:00:00)

Started preparing deployment > Binding unallocated VMs. Done (00:00:00)

Started preparing deployment > Binding instance networks. Done (00:00:00)

Done preparing deployment (00:00:00)

Started preparing package compilation > Finding packages to compile. Done (00:00:00)

Started preparing dns > Binding DNS. Done (00:00:00)

Started deleting unneeded vms

Started deleting unneeded vms > 72fe08b2-dc5a-44c8-8842-6b839e1e28a7

Started deleting unneeded vms > c443dbbd-742d-41f7-9520-305759211801. Done (00:00:06)

Done deleting unneeded vms > 72fe08b2-dc5a-44c8-8842-6b839e1e28a7 (00:00:06)

Done deleting unneeded vms (00:00:06)

Started creating bound missing vms

Started creating bound missing vms > small_z1/0

Started creating bound missing vms > small_z1/2

Started creating bound missing vms > small_z1/1. Done (00:01:09)

Started creating bound missing vms > small_z1/3

Done creating bound missing vms > small_z1/0 (00:01:39)

Started creating bound missing vms > small_z1/4

Done creating bound missing vms > small_z1/2 (00:01:40)

Started creating bound missing vms > small_z1/5

Done creating bound missing vms > small_z1/3 (00:01:02)

Started creating bound missing vms > small_z1/6

Done creating bound missing vms > small_z1/4 (00:01:06)

Started creating bound missing vms > small_z1/7

Done creating bound missing vms > small_z1/5 (00:01:10)

Started creating bound missing vms > small_z1/8

Done creating bound missing vms > small_z1/6 (00:01:06)

Started creating bound missing vms > small_z1/9

Done creating bound missing vms > small_z1/7 (00:01:07)

Started creating bound missing vms > large_z1/0

Done creating bound missing vms > small_z1/8 (00:01:08)

Done creating bound missing vms > small_z1/9 (00:01:03)

Done creating bound missing vms > large_z1/0 (00:01:11)

Done creating bound missing vms (00:05:03)

Started binding instance vms

Started binding instance vms > nats_z1/0

Started binding instance vms > etcd_z1/0

Started
binding instance vms > nfs_z1/0

Done binding instance vms > nats_z1/0 (00:00:00)

Started binding instance vms > postgres_z1/0

Done binding instance vms > etcd_z1/0 (00:00:00)

Started binding instance vms > api_z1/0

Done binding instance vms > nfs_z1/0 (00:00:00)

Started binding instance vms > ha_proxy_z1/0. Done (00:00:00)

Started binding instance vms > hm9000_z1/0

Done binding instance vms > api_z1/0 (00:00:00)

Started binding instance vms > loggregator_z1/0. Done (00:00:00)

Started binding instance vms > loggregator_trafficcontroller_z1/0

Done binding instance vms > hm9000_z1/0 (00:00:00)

Started binding instance vms > login_z1/0

Done binding instance vms > postgres_z1/0 (00:00:01)

Started binding instance vms > router_z1/0. Done (00:00:00)

Started binding instance vms > runner_z1/0. Done (00:00:00)

Started binding instance vms > stats_z1/0

Done binding instance vms > loggregator_trafficcontroller_z1/0 (00:00:01)

Done binding instance vms > login_z1/0 (00:00:01)

Done binding instance vms > stats_z1/0 (00:00:00)

Done binding instance vms (00:00:01)

Started preparing configuration > Binding configuration. Done (00:00:02)

Started updating job nats_z1 > nats_z1/0 (canary)

Started updating job etcd_z1 > etcd_z1/0 (canary)

Started updating job nfs_z1 > nfs_z1/0 (canary)

Done updating job etcd_z1 > etcd_z1/0 (canary) (00:03:20)

Done updating job nats_z1 > nats_z1/0 (canary) (00:03:25)

Done updating job nfs_z1 > nfs_z1/0 (canary) (00:05:10)

Started updating job postgres_z1 > postgres_z1/0 (canary). Done (00:02:01)

Started updating job api_z1 > api_z1/0 (canary)

Started
updating job ha_proxy_z1 > ha_proxy_z1/0 (canary)

Started updating job loggregator_z1 > loggregator_z1/0 (canary)

Started updating job hm9000_z1 > hm9000_z1/0 (canary)

Started updating job loggregator_trafficcontroller_z1 > loggregator_trafficcontroller_z1/0 (canary)

Started updating job login_z1 > login_z1/0 (canary)

Started updating job router_z1 > router_z1/0 (canary)

Started updating job stats_z1 > stats_z1/0 (canary)

Started updating job runner_z1 > runner_z1/0 (canary)

Done updating job hm9000_z1 > hm9000_z1/0 (canary) (00:01:43)

Done updating job loggregator_trafficcontroller_z1 > loggregator_trafficcontroller_z1/0 (canary) (00:02:13)

Done updating job loggregator_z1 > loggregator_z1/0 (canary) (00:02:15)

Done updating job ha_proxy_z1 > ha_proxy_z1/0 (canary) (00:02:17)

Done updating job stats_z1 > stats_z1/0 (canary) (00:03:22)

Done updating job login_z1 > login_z1/0 (canary) (00:03:23)

Done updating job runner_z1 > runner_z1/0 (canary) (00:04:10)

Done updating job router_z1 > router_z1/0 (canary) (00:04:30)

Done updating job api_z1 > api_z1/0 (canary) (00:06:08)

Task 8 done

Started 2015-02-06 06:56:25 UTC

Finished 2015-02-06 07:14:56 UTC

Duration 00:18:31

Deployed
`minimal-openstack.yml’ to `microbosh’

You can look at the VMs that have been created with the following command.

bosh vms

Which should yield output that looks something like the table below.

Deployment `cf’

Director task 11

Task 11 done

+————————————+———+—————+—————–+

| Job/index | State | Resource Pool | IPs
|

+————————————+———+—————+—————–+

| api_z1/0 | running | small_z1 | 192.168.111.23 |

| etcd_z1/0 | running | small_z1 | 192.168.111.104 |

| ha_proxy_z1/0 | running | small_z1 | 192.168.111.24 |

| | | | 23.246.209.250 |

| hm9000_z1/0 | running | small_z1 | 192.168.111.25 |

| loggregator_trafficcontroller_z1/0 | running | small_z1 | 192.168.111.27 |

| loggregator_z1/0 | running | small_z1 | 192.168.111.26 |

| login_z1/0 | running | small_z1 | 192.168.111.28 |

| nats_z1/0 | running | small_z1 | 192.168.111.103 |

| nfs_z1/0 | running | small_z1 | 192.168.111.105 |

| postgres_z1/0 | running | small_z1 | 192.168.111.101 |

| router_z1/0 | running | small_z1 | 192.168.111.102 |

| runner_z1/0 | running | large_z1 | 192.168.111.29 |

| stats_z1/0 | running | small_z1 | 192.168.111.30 |

+————————————+———+—————+—————–+

VMs total: 13

The resources used up by the Cloud Foundry install on OpenStack are illustrated below.

[Screenshot: OpenStack resource usage after the Cloud Foundry install]

Use the installed Cloud Foundry instance

At this point the installation is complete, and you can target the installed instance with the following command, using the URL specified in the minimal-openstack YML file.

If you have not installed the Cloud Foundry CLI yet, you can install it based on the docs available at
https://github.com/cloudfoundry/cli/releases.

cf api --skip-ssl-validation api.23.246.209.250.xip.io

Which should yield an output along the lines of

Setting api endpoint to api.23.246.209.250.xip.io…

OK

API endpoint: https://api.23.246.209.250.xip.io (API version: 2.21.0)

Not logged in. Use ‘cf login’ to log in.

You can now log in with the admin password provided in the minimal-openstack YML file and start pushing apps with cf push.
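A typical first session against the new installation looks something like this. It is a sketch: the org and space names and the app name are illustrative, and PASSWORD is whatever was set for the admin user in the manifest.

# Log in as the UAA admin user defined in the manifest
cf login -u admin -p PASSWORD

# Create an org and a space, and target them
cf create-org demo
cf create-space dev -o demo
cf target -o demo -s dev

# Push an application from its source directory
cf push my-first-app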

Summary

Here are the steps we followed to install the latest Cloud Foundry release on OpenStack.

  1. Started with an OpenStack install with the requisite resources such as floating IPs, vCPUs, memory, disk space, etc.

  2. Installed MicroBOSH on a laptop/desktop.

  3. Installed BOSH on a VM instance of the OpenStack install with a floating IP assigned.

  4. Installed the latest Cloud Foundry release via the BOSH VM using BOSH.

  5. Installed the Cloud Foundry CLI.

Now we’re ready to cf login and cf push.

Does Infrastructure Matter to App Developers?

As infrastructure becomes more commoditized and an extreme pricing war rages on at the Infrastructure as a Service (IaaS) layer, infrastructure vendors are moving up the stack toward offering more services that app developers can consume. Some of the emerging trends in modern application development make infrastructure and application development go hand in hand, especially with:

  • Building applications for scale including Big Data
  • Agile processes and Devops including Platform as a Service
  • Cloud computing in general

Plus, enterprises are still spending substantially on pure storage.

Having said all this, I am moving on to EMC to help make the case that the company cares about application developers and to reduce the friction for developers working with EMC technologies and products. Although some of the core EMC products might seem far removed from application developers, products from VMware, Pivotal and RSA are all extremely relevant to them.

Many of the Spring committers are from Pivotal. Spring, a developer favorite that also has significant adoption in the enterprise, underpins many of the Pivotal technologies and products.

EMC is also making a push, with some anticipated announcements, to the OpenStack community. Some of the core EMC products, like ViPR, are making the move to be more open and developer friendly.

I am hoping that our newly formed team, @emccode, can work with open source and developer communities, contribute to projects and solutions, and show that EMC cares.

P.S. In late-breaking news, EMC bought Cloudscaling today, and I had nothing to do with it :-)

Should App Developers care about OpenStack?

Will Havana break the Ice with Application Developers?

The OpenStack Havana release is out. I realize that the term developer needs to be properly qualified: I am trying to draw a distinction from an OpenStack developer, which in this context usually means a contributor to the OpenStack infrastructure itself. An application developer, by contrast, would leverage the OpenStack platform. Whereas OpenStack is the lifeblood of an OpenStack developer, does the application developer even care about OpenStack?

The application developer has been at the center of the “App economy”, spurred by the increasing use of apps on smartphones, and the “API economy”, driven by the huge amounts of data being produced and consumed as web services. Both these economies have grown by leaps and bounds, and large enterprises have jumped on the bandwagon to meet the demands of consumers. With OpenStack, however, the application developer has remained on the periphery (so far).

There are definitely signs of OpenStack becoming more developer friendly with the Havana release. The Heat project, which includes autoscaling, is aimed squarely at application development. Project Neutron provides many services, like load balancing, firewalls and so on. Project Trove, which is a Database as a Service, Project Marconi, which is a queuing service, and Project Savanna, which is intended for Big Data processing, are all being incubated for the Icehouse release.

Does IaaS matter to Joe Developer?

Before my life as a Racker and OpenStacker, I was an application developer myself. I developed in Java/Java EE. I attended my first OpenStack summit at Portland and although I was very impressed by the phenomenal growth of the community and the participation of enterprise companies, the sessions I attended did not quite hit home the value proposition of OpenStack to an app developer. I needed something more than the argument that developers should care about IaaS since they don’t need to wait for days or weeks for IT to be able to standup their development, testing or production environments.

I decided to be proactive and rounded up some stalwarts from the OpenStack community who were also at similar crossroads and submitted a panel for the OpenStack summit at Hong Kong that got accepted. 

Let your voices be heard, ask the tough questions and act!

Should Developers care about OpenStack? Does the community need to grow to millions or is it fine where it is? Can it cross the chasm with respect to app developer adoption? What is the killer app. for OpenStack? Does OpenStack need an app. marketplace? What can the community do to entice more developers? Or should it?

These are some of the many questions I have heard frequently. I don’t believe that all of these questions will be answered in the panel, but hopefully it will spur some action from everyone in the community going forward. If you’re attending the summit, there will be ample opportunities to get some answers from the panelists. If you’re not at the summit and still care about OpenStack and the developer, tweet (@ragss) or email the question to me for possible use during the panel or in the future. Links to the slides are here.

Is NoSQL the next wave?

I attended a NoSQL talk last night at my local JUG, and the interest level in NoSQL seems to be higher than ever before. Without getting into a religious discussion about the name NoSQL and whether it is really anti-SQL, modern applications like Facebook, Netflix, Zynga and so on have different needs and use cases from traditional applications. It’s perhaps a cliche to say that scalability and performance are paramount. More importantly, though, the move towards NoSQL is also seen as a way to simplify the architecture whenever possible. It’s no panacea by any means either. Eric Brewer’s work in the 1990s, the CAP theorem, describes the tradeoffs that need to be made at a very high level. In addition, you have to deal with technologies that are nowhere near the level of maturity of SQL.

The discussions that ensued after the talk made it clear that the NoSQL movement has some really quick growing up to do to become an integral part of the enterprise. Data visualization tools, regulatory compliance requirements and even ETL tools for most of the vendors seem to be well short of expectations. From a developer viewpoint, there is very little in terms of standardization to ease the pain of migration. Language APIs even from the same vendor are not all equal. However, companies from tiny to huge are venturing seemingly undeterred into NoSQL for a multitude of reasons, not all of them technical.

How is it possible to answer the questions: should I even consider a NoSQL system, and if so, how do I go about evaluating which one, how do I get started, and so on?

There are a number of webinars and conferences that you can attend which go through some of these war stories. Oftentimes it’s great to get the buzz from your peers. The QCon panel provides a good high-level perspective. There are a number of QCon and NFJS conferences for a vendor-neutral viewpoint. I personally am doing an O’Reilly webinar and speaking at the Cleveland JUG and at some of the Couch conferences; the next one happens to be in Portland, OR.

Hope to see you at some of these venues and talk about NoSQL, Java, Ruby or whatever development trends that you want to talk about.

 

The Java PaaS Rush – Crossing the Chasm?

As the National Football League (NFL) season opens up and teams are tweaking their rosters, I am pondering a different kind of PaaS rush.

The industry consensus is that the PaaS market in general is growing slowly and lagging the hype curve. However, there is already a host of PaaS vendors with some form of enterprise Java support, just like in the heyday of the Java application servers. From Google with its App Engine to even Microsoft making a play for Java developers on Microsoft Azure, there is a whole bunch of platforms including Amazon, CloudBees, Cloud Foundry, Cumulogic, Force.com, OpenShift, and the list goes on.

I could make a detailed comparison of the different platforms (I’ll save that for JavaOne 2011). Instead, I am left marveling at how Java, or more accurately the Java ecosystem, has been able to spur the recent growth of the PaaS market despite being considered old at a decade and a half. No doubt many of these PaaS platforms support other languages as well, like Ruby, Python and so on, but it seems like the de facto choice for enterprise deployment is .jar, .war or .ear files, or some flavor of that. Consequently, PaaS vendors are opening up their platforms to Java developers, including supporting the APIs that Java developers are familiar with, and hoping that they come in droves when they make a move to the cloud.

OSCON Java had its share of talks on Java. The “Good, Bad, and Ugly of Java” talk went into how the designers of Java mostly made good choices in the initial design of the language, choices that have withstood the test of time and made huge inroads into the enterprise. The talk by Twitter about using the JVM for better performance and to tap into a large developer population, while keeping the operations folks happy since they understood JAR files, how to interpret GC logs, and so on, shows how important familiarity is and how valuable it is not to introduce something fundamentally new within the enterprise.

Given a choice between agility and familiarity, enterprise developers pick familiarity for a variety of reasons. Of course, if you can also add the agility that the cloud in general and PaaS in particular bring to an existing environment, it becomes a very compelling value proposition for enterprises.

It would be disingenuous to claim that Java is the only agent that might facilitate the crossing of the PaaS chasm. The gradual embrace of open source in the enterprise, a variety of technological improvements and the motivation to control costs will all help the growth of PaaS. Enterprises realize that getting to PaaS, i.e. migrating apps to PaaS, is expensive and time consuming, and are holding off on migrating to PaaS en masse. This could be substantially eased by significantly reducing the learning curve and not having to introduce entirely new design paradigms.

Many of the PaaS vendors are taking the familiar application development lifecycle and products (like the Tomcat server) that Java developers know and love and adapting them to the cloud. From providing a version control system (like git or svn) to being able to analyze logs, deploy newer versions and even roll back to earlier versions, all with a few clicks, the goal seems to be to become part of the PaaS rush that Java and other developers are poised to make.

Is Cloud Computing on the developer RADAR? Really?

As kids head back to school, I went back to school as well. I was extended an invitation to attend the No Fluff Just Stuff conference, a small and compact conference where everyone seems to know everyone else. The attendees all seemed to be hard-core developers or architects who were in tune with the future but mostly focused on the present.

The sessions I attended varied in technical depth but mostly had stuff and minimal fluff, and I did manage to pick up some great nuggets. It was definitely worth my time. I was at a 60-minute expert panel where questions ranged from the future of Java after Sun is consumed by Oracle, the future of Java 7, scripting languages, deployment, EJBs, and Rich Internet Applications including Flex and JavaFX, to the whole gamut.

What surprised me most was that there was not a single question on the cloud and the word was barely even mentioned. It did come up once as a substitute term for the internet.

While Amazon and Infrastructure as a Service have made a huge dent with startups, small companies and IT organizations through the ability to outsource the data center, I think the cloud has made a very minimal impact on developer mindshare, for now anyway.

The reasons for this are certainly varied. Developers are probably waiting for the hype to die down to make an assessment of the reality. Unlike evolving distributed system models, from socket programming to RMI to web services and so on, the cloud does not seem to be introducing a new programming paradigm. The developer tools are still evolving, and there does not seem to be a set of APIs for the cloud other than those of a particular Platform as a Service, which are relevant to that platform alone.

In my informal conversations with the attendees, there were a lot of other things besides the cloud that occupied their minds. So, the question that was foremost in my mind after attending the conference was “Is Cloud Computing on the developer RADAR? Really?” Or was it too small a sample size to be representative?

What about Data Latency and cloud computing?

Over a decade back, when we were still in the CORBA days, Peter Deutsch came up with the fallacies of distributed computing. Many distributed systems have been flawed by willfully ignoring some of the fallacies and expecting technologies to obviate some of the inherent limitations of distributed computing. The premise was that developers and designers who ignored these fallacies would, sooner or later, end up with a distributed system with some serious limitations in its functionality. It was best to plan for these problems, which were (and still are) inherent in distributed computing.

One of these fallacies is that “latency is zero”. In traditional computing, the compute and data were typically hosted on the same system, and data latency was determined by the storage disks and the data bus speeds. It was a simple matter of buying better hardware to overcome data latency if it was ever an issue. In cloud computing, and especially when we get to networks of clouds with data expected to flow between different clouds, latency (however minimal) could be an issue depending on the data being manipulated, the network speeds and so on. Add to this the fact that all or part of the data must be encrypted and decrypted when it moves across unreliable, public networks, and that data needs to be streamed, and latency will soon add up and could become a serious issue.

In the web era, many companies like Akamai have specialized in making data available closer to the point of use and minimizing network latency. Some of these and other companies are already looking into this issue vis-a-vis the cloud. But just the other day, when I was twiddling my thumbs waiting for a not-so-big file to upload from my desktop to my EC2 machine instance, I wondered how many of those dabbling with cloud computing will eventually be faced with the same question I was: “How does data latency affect my cloud solution or design?”
