Continuous Delivery

Continuous Delivery (CD) allows project leads and committers to configure automated processes to build and deploy their software. FINOS provides a dedicated OpenShift instance that can be used for this purpose.

This page provides guidance to project leads and committers who want to integrate Continuous Delivery into their build process; before starting, make sure you have read through the FINOS OpenShift Console documentation.

Travis CI to OpenShift integration

The Travis CI configuration can be extended to trigger the OpenShift image build and deployment. To make this easier, the Foundation provides an oc-deploy.sh script that can be executed as an after_success step; it wraps the OpenShift CLI (oc) and triggers a containerised deployment of the application. With a single bash line, the following operations are performed (a simplified sketch follows the list):

  1. Installs the OpenShift CLI (oc) in the current environment (see OC_VERSION and OC_RELEASE)
  2. Logs into OpenShift Online (see OC_ENDPOINT) and selects the target OpenShift project (see OC_PROJECT_NAME)
  3. Deletes - if OC_DELETE_LABEL is defined - all OpenShift resources marked with the specified key=value label
  4. Processes - if the OC_TEMPLATE file exists and is valid - the template and creates all OpenShift resources defined within; template parameter values can be passed using the OC_TEMPLATE_PROCESS_ARGS configuration variable
  5. Triggers the build of the OpenShift image identified by the BOT_NAME configuration variable, uploading either the application binary as a ZIP archive (see OC_BINARY_ARCHIVE) or a folder (see OC_BINARY_FOLDER) generated by the project build; the folder is preferred over the archive, as it is less likely to cause network timeouts during the upload
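
The script itself is the source of truth; as a rough illustration only (not the actual script), the operations above map onto oc commands along these lines, assuming the variables described in the Configuration section are exported:

# Simplified sketch of the operations wrapped by oc-deploy.sh (not the actual script)
oc login "$OC_ENDPOINT" --token="$OC_TOKEN"                        # log into OpenShift Online
oc project "$OC_PROJECT_NAME"                                      # select the target project
[ -n "$OC_DELETE_LABEL" ] && oc delete all -l "$OC_DELETE_LABEL"   # delete previously labelled resources
if [ -f "$OC_TEMPLATE" ]; then
  # process the template, passing the parameters listed in OC_TEMPLATE_PROCESS_ARGS
  oc process -f "$OC_TEMPLATE" -p BOT_NAME="$BOT_NAME" | oc create -f -
fi
# upload the binary folder (or a ZIP archive, via --from-archive) and start the image build
oc start-build "$BOT_NAME" --from-dir="$OC_BINARY_FOLDER" --follow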

Below is the minimal Travis configuration required to invoke the script.

.travis.yml
...

after_success: curl -s https://raw.githubusercontent.com/symphonyoss/contrib-toolbox/master/scripts/oc-deploy.sh | bash

Configuration

The oc-deploy.sh script requires the following environment variables to be defined before it runs:

  • OC_TOKEN - The OpenShift Online token (see the note below about storing it as an encrypted variable)
  • BOT_NAME - the name of the BuildConfig registered in OpenShift, which is used to start the OpenShift build
  • OC_TEMPLATE - Path of an OpenShift template to (oc) process and create, if it exists; defaults to .openshift-template.yaml
  • OC_BINARY_ARCHIVE or OC_BINARY_FOLDER - Relative path to the ZIP file (or folder) to upload to the container as source
  • Any other environment variable containing template parameters, see OC_TEMPLATE_PROCESS_ARGS below

Please review the default values of the following variables:

  • OC_DELETE_LABEL - Used to delete a group of resources sharing the same label; if set, the script will invoke oc delete all -l <OC_DELETE_LABEL>; example: OC_DELETE_LABEL="app=mybot"

  • OC_TEMPLATE_PROCESS_ARGS - Comma-separated list of environment variables to pass to the OC template; defaults to null; example: OC_TEMPLATE_PROCESS_ARGS="BOT_NAME,S2I_IMAGE"

  • OC_VERSION - OpenShift CLI version; defaults to 1.5.1

  • OC_RELEASE - OpenShift CLI release; defaults to 7b451fc-linux-64bit

  • OC_PROJECT_NAME - The OpenShift Online project to use; defaults to ssf-dev (no changes needed)

  • OC_ENDPOINT - OpenShift server endpoint; defaults to https://api.pro-us-east-1.openshift.com (no changes needed)

  • SKIP_OC_INSTALL - Skips the OpenShift CLI (oc) installation; defaults to false; useful for local oc-deploy.sh test runs
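
Since OC_TOKEN grants access to the OpenShift project, it must never be committed in clear text. Assuming the Travis CLI is installed, it can be stored as an encrypted environment variable:

travis encrypt OC_TOKEN=<your OpenShift Online token> --add env.global

This appends a secure entry to the env.global section of .travis.yml, which Travis decrypts at build time.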

The example below shows a complete .travis.yml configuration.

.travis.yml
env:
  global:
   - BOT_NAME="mybot"
   - OC_DELETE_LABEL="app=mybot"
   - SYMPHONY_POD_HOST="foundation-dev.symphony.com"
   - SYMPHONY_API_HOST="foundation-dev-api.symphony.com"
   - OC_BINARY_FOLDER="vote-bot-service/target/oc"
   - OC_TEMPLATE_PROCESS_ARGS="BOT_NAME,SYMPHONY_POD_HOST,SYMPHONY_API_HOST"

after_success: curl -s https://raw.githubusercontent.com/symphonyoss/contrib-toolbox/master/scripts/oc-deploy.sh | bash

Project setup

In order to configure Continuous Delivery, the project must meet a few requirements and some configuration must be defined.

  1. Get familiar with OpenShift concepts; keep in mind that most of them are inherited from Kubernetes, the orchestration engine used by OpenShift.
  2. Memory (size) and CPU (number) requirements must be known upfront
  3. The deployment strategy must be known upfront; the default is Rolling, which spins up a new container alongside the existing one, switches traffic over when the new one is ready and finally removes the old one.
  4. Collect all passwords and secrets that are needed by the applications to run; the Foundation Staff will register these entries as secrets in OpenShift and deliver secret key references
  5. The build process MUST generate a folder (see the example layout after this list) that:
    1. MUST contain all the artifacts needed to run the application; for Maven builds, the assembly plugin can be used
    2. MUST contain a (Unix) run script; for Maven builds, the appassembler plugin can generate it
    3. MUST NOT contain any password, secret or sensitive data (like emails, names, addresses, etc) in clear text; OpenShift secrets provide a safe way to manage them
  6. Follow the instructions below to define an OpenShift template called .openshift-template.yaml in the root folder of the GitHub repository
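
As a purely illustrative example (the actual layout depends on the project), the folder produced at step 5 might look like this for the mybot example used throughout this page:

vote-bot-service/target/oc
  bin/mybot.sh        # Unix run script, generated for example by the appassembler plugin
  lib/*.jar           # application artifacts and dependencies
  log4j.properties    # non-sensitive configuration only; no secrets in clear text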

Template definition

The sections below describe, piece by piece, an OpenShift template that defines:

  1. The Docker image build process for the given app
  2. The image stream that triggers a deployment configuration when an image is created
  3. The deployment configuration that defines - among other things - the containers to run and their configuration

Note that all the resources configured below define an app label with the same value, which makes it possible to run commands across all of them at once, for example oc delete all -l app=mybot

Template name and parameters

This is the header of the file: it defines the template name mybot-template and a list of parameters, such as BOT_NAME; parameter values must be exported as environment variables before invoking oc-deploy.sh (see above). The OpenShift configuration resources are defined from the objects: line onwards.

apiVersion: v1
kind: Template
metadata:
  name: mybot-template
parameters:
- name: BOT_NAME
  description: The Bot name
  displayName: Bot Name
  required: true
  value: "mybot"
- name: SEND_EMAIL
  description: Whether the bot should send emails or not; defaults to true
  displayName: Send Email?
  required: true
  value: "true"
...
objects:

Images and streams

The OpenShift image creation process is carried out by a container called deployer and described by the BuildConfig resource, which takes as parameters:

  • a sourceStrategy, which identifies the container image used to build the deployer container and points to the ImageStream with name s2i-java
  • the output image name and tag, which points to the ImageStream with name ${BOT_NAME}

See the example below.

- apiVersion: v1
  kind: ImageStream
  metadata:
    labels:
      app: ${BOT_NAME}
    name: s2i-java
  spec:
    dockerImageRepository: "docker.io/jorgemoralespou/s2i-java"
- apiVersion: v1
  kind: ImageStream
  metadata:
    labels:
      app: ${BOT_NAME}
    name: ${BOT_NAME}
  spec: {}
  status:
    dockerImageRepository: ""
- apiVersion: v1
  kind: BuildConfig
  metadata:
    name: ${BOT_NAME}
    labels:
      app: ${BOT_NAME}
  spec:
    output:
      to:
        kind: ImageStreamTag
        name: ${BOT_NAME}:latest
    postCommit: {}
    resources: {}
    runPolicy: Serial
    source:
      type: Binary
      binary:
    strategy:
      type: Source
      sourceStrategy:
        from:
          kind: ImageStreamTag
          name: s2i-java:latest
    triggers: {}
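
With this BuildConfig in place, the binary build started by oc-deploy.sh is roughly equivalent to running the following commands by hand (shown here for reference only):

oc start-build mybot --from-dir=vote-bot-service/target/oc --follow   # upload the binaries and start the image build
oc logs -f bc/mybot                                                   # follow the logs of the latest build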

Deployment configuration

The DeploymentConfig resource defines:

  • The deployment strategy, which defaults to Rolling
  • The container configuration
    • The image to use to create the container; this must match the ImageStream output defined above
    • TCP/UDP ports to expose; in this case port 8080 is open at container level
    • The readinessProbe, which tells OpenShift when the container is healthy and ready to receive traffic
    • Container environment variables, which can be defined in clear text (e.g. LOG4J_FILE) or loaded from a secret key reference; secrets are managed by the Foundation Staff and are normally used to hold the credentials needed to access the Open APIs provided by ODP.
  • The deployment configuration trigger, pointing to the latest tag of an image called ${BOT_NAME}

See the example below.

- apiVersion: v1
  kind: DeploymentConfig
  metadata:
    labels:
      app: ${BOT_NAME}
    name: ${BOT_NAME}
  spec:
    replicas: 1
    selector:
      app: ${BOT_NAME}
      deploymentconfig: ${BOT_NAME}
    strategy:
      rollingParams:
        intervalSeconds: 1
        maxSurge: 25%
        maxUnavailable: 25%
        timeoutSeconds: 600
        updatePeriodSeconds: 1
      type: Rolling
    template:
      metadata:
        labels:
          app: ${BOT_NAME}
          deploymentconfig: ${BOT_NAME}
      spec:
        containers:
        - image: ${BOT_NAME}:latest
          imagePullPolicy: Always
          name: ${BOT_NAME}
          ports:
          - containerPort: 8080
            protocol: TCP
          readinessProbe:
            httpGet:
              path: "/healthcheck"
              port: 8080
            initialDelaySeconds: 15
            timeoutSeconds: 1
          env:
          - name: LOG4J_FILE
            value: "/opt/openshift/log4j.properties"
          - name: TRUSTSTORE_PASSWORD
            valueFrom:
              secretKeyRef:
                name: ${BOT_NAME}.certs
                key: truststore.password
          ...
        ...
    ...
    triggers:
    - type: ConfigChange
    - imageChangeParams:
        automatic: true
        containerNames:
        - ${BOT_NAME}
        from:
          kind: ImageStreamTag
          name: ${BOT_NAME}:latest
      type: ImageChange
  status: {}
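
When a new image is pushed, the ImageChange trigger rolls out a new deployment; its progress can be checked with standard oc commands, for example:

oc rollout status dc/mybot   # wait for the rolling deployment to complete
oc get pods -l app=mybot     # list the pods created for the application

Note that any secret referenced above (such as mybot.certs) must exist in the project before the deployment starts; as mentioned in the Project setup section, these are registered by the Foundation Staff.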

Service definition

In order to access the container port, it is necessary to define:

  1. a Service that acts as load-balancer across all containers with app=${BOT_NAME} label
  2. a Route that registers to the OpenShift DNS and points to the Service

See the example below.

- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      openshift.io/generated-by: OpenShiftNewApp
    labels:
      app: ${BOT_NAME}
    name: ${BOT_NAME}
  spec:
    ports:
    - name: healthcheck-tcp
      port: 8080
      protocol: TCP
      targetPort: 8080
    selector:
      app: ${BOT_NAME}
      deploymentconfig: ${BOT_NAME}
    sessionAffinity: None
    type: ClusterIP
  status:
    loadBalancer: {}
- apiVersion: v1
  kind: Route
  metadata:
    name: ${BOT_NAME}
    labels:
      app: ${BOT_NAME}
  spec:
    to:
      kind: Service
      name: ${BOT_NAME}
      weight: 100
    port:
      targetPort: healthcheck-tcp
    wildcardPolicy: None
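
Once the Route is created, OpenShift assigns a public hostname to it; as a quick check (using the /healthcheck path exposed by the readinessProbe above), the endpoint can be tested as follows:

ROUTE_HOST=$(oc get route mybot -o jsonpath='{.spec.host}')   # public hostname assigned by OpenShift
curl "http://${ROUTE_HOST}/healthcheck"                       # should return a successful response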

Need help? Email help@finos.org and we'll get back to you.

Content on this page is licensed under the CC BY 4.0 license.
Code on this page is licensed under the Apache 2.0 license.