Development in the Cloud



With the advent of Microservice architectures, the near-limitless compute resources enabled by Kubernetes have to be utilised in the early stages of the development lifecycle to ensure continuous, high-velocity development.

TLDR: With large and complex Microservices deployments, running services locally is, at some point, not feasible. Automate the creation of Preview Environments to allow developers early and frequent experimentation. Tools like Brigade, Telepresence and Ksync (and many others) can help you get there easily. Dive straight into the config repository for code examples. Keep your PRs green!

Have you ever been in a situation where you or your peers open a PR and, instead of a conversation about the quality and simplicity of the new code, your first steps are addressing bugs discovered by the regression testing suite?

I’d like to show you how, with a bit of automation, we can create personal development Preview Environments where each and every developer can conduct early experiments and run a full suite of regression tests in isolation.

I'm going to talk specifically about three tools: Brigade, Telepresence and Ksync.

Brigade

From the project description we learn that: Brigade is event-based scripting for Kubernetes pipelines. It is an in-cluster runtime environment that interprets scripts and executes them by invoking resources inside the cluster.

In the example below, our requirement is to create a workflow where a new environment (in the case of Kubernetes, a new namespace) is created and all relevant components (services and their dependencies) are deployed.

Let’s look at what makes up a Brigade deployment with this high level architectural diagram:

Brigade components


There are many ways to trigger a Brigade workflow. The example above shows triggering an event via git push.

The GitHub repository is configured with a webhook that sends a payload with information about the git push to the Brigade Gateway. Before the Event can be successfully interpreted, we create a Brigade Project for every repository we want to automate. From there:

  • The Brigade Gateway interprets the incoming payload and converts it to an Event.
  • An Event is just a Kubernetes Secret containing information about the git push and a link to the relevant Brigade Project.
  • The Brigade Controller watches for new Events and uses that information to start a new Brigade Worker container.
  • The Brigade Worker triggers the appropriate handler in the brigade.js script, which knows how to process the information from the incoming Event. Both the Event and Project data are passed to the handler (a minimal sketch of such a handler follows this list).
  • The script author has full access to the Kubernetes API and can manipulate any of its resources (as long as RBAC permissions allow it). The script can also run multiple Job containers in parallel to execute additional sub-tasks.
  • Both Worker and Job containers have access to the full git repository when running the script.
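
To make the flow concrete, here is a minimal, hypothetical brigade.js handler written against Brigade's brigadier library. It is not taken from this article's repositories; the image and tasks are illustrative:

const { events, Job } = require("brigadier");

// Fired when the GitHub gateway converts a push webhook into an Event.
events.on("push", (event, project) => {
  // "event" carries the webhook payload, "project" carries repository details.
  const tests = new Job("run-tests", "python:3.6");
  tests.tasks = [
    "cd /src",                            // the cloned repository is mounted at /src
    "pip install -r requirements.txt",
    "pytest"
  ];
  return tests.run();
});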

Prerequisites

For the purpose of our demo we will use services from Nameko Examples.





Each service has been moved to its own repository: ditc-products, ditc-orders and ditc-gateway.

Additionally, the configuration lives in its own repository: ditc-config.

Each service has its deployment automated and a corresponding Brigade Project created. Each Brigade Project is a simple Kubernetes Secret that contains a few important configuration options, mainly:

  • The location of the GitHub repository for the Brigade Project.
  • A GitHub API authorisation token that will allow Worker and Job sidecars to fetch code for us.
  • A GitHub shared secret that is used for additional authentication of incoming webhook requests.

Because every Project is just a simple Kubernetes Secret, we can automate Project creation with any common secret deployment strategy. Take a look at the deploy-projects Make target in the ditc-config repository for an example approach.

For the rest of the article we're assuming your Kubernetes cluster is up and running and that Brigade is installed there, in a brigade namespace.

Preview Environment

To create a new Preview Environment we will trigger the brigade.js script located in the root of our config repository. To do so we will use Brigade's CLI utility, brig, which can trigger special "exec" events.

When running brig we have the option to provide an additional JSON payload which, in our case, will contain the name of the action we want the script to perform, the name of the new Environment, as well as the list of service projects we would like to deploy to the new Preview Environment. In our case this includes the Products, Orders and Gateway services. We specify the git tag dev for each of the services we want deployed to the new Environment.

payload.json:

{ "action": "create", "name": "jakub", "projects": { "products": { "org": "kooba", "repo": "ditc-products", "tag": "dev" }, "orders": { "org": "kooba", "repo": "ditc-orders", "tag": "dev" }, "gateway": { "org": "kooba", "repo": "ditc-gateway", "tag": "dev" } } }

To create a new Environment, run this command:

brig run \
  --kube-context my.k8s.cluster \
  --namespace brigade \
  -c 5dc6f31f6aee6b2ff514161c0cba4906dc3214c3 \
  -r master \
  -f brigade.js \
  -p payload.json \
  kooba/ditc-config



  • --kube-context my.k8s.cluster: the Kubernetes cluster context name.
  • --namespace brigade: the namespace where the Brigade components are installed.
  • -c 5dc6f31f6aee6b2ff514161c0cba4906dc3214c3: the git commit we want checked out and available in the Worker container.
  • -r master: the git branch name.
  • -f brigade.js: optional, a local brigade.js script. By default the one located in the root of the repository is used.
  • -p payload.json: the local payload file.
  • kooba/ditc-config: the Brigade project name.

Let’s look at the composition of brigade.js that will be executed after running the command above.

The create action will trigger the provisionEnvironment function, which performs these steps:
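
The real script lives in the ditc-config repository; a rough sketch of how such an "exec" handler might dispatch on the payload is shown below (the structure mirrors the prose, the details are illustrative):

const { events } = require("brigadier");

// "brig run" fires an "exec" event; the contents of payload.json arrive as event.payload.
events.on("exec", (event, project) => {
  const payload = JSON.parse(event.payload);
  if (payload.action === "create") {
    return provisionEnvironment(payload, project);
  }
  throw new Error(`Unknown action: ${payload.action}`);
});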

Step 1: Create Namespace

Our Preview Environment will be isolated via Kubernetes Namespace.
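
A sketch of this step, assuming the Worker can use the @kubernetes/client-node package (the real script may use a different Kubernetes client):

const k8s = require("@kubernetes/client-node");

const kubeConfig = new k8s.KubeConfig();
kubeConfig.loadFromDefault();                      // picks up the Worker's service account
const coreApi = kubeConfig.makeApiClient(k8s.CoreV1Api);

async function createNamespace(environmentName) {
  // The Namespace name doubles as the Preview Environment name, e.g. "jakub".
  await coreApi.createNamespace({
    metadata: {
      name: environmentName,
      labels: { type: "preview-environment" }      // illustrative label
    }
  });
}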


Step 2: Create Environment ConfigMap

The ConfigMap will contain the "projects" section from the provided payload.json. A "type" label will let us find it during individual service releases.
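
Continuing the sketch, the ConfigMap can be created in the brigade namespace so later release workflows can find it; the exact names and key layout are illustrative, not the article's actual code:

// Reuses the coreApi client from the Step 1 sketch.
async function createEnvironmentConfigMap(payload) {
  await coreApi.createNamespacedConfigMap("brigade", {
    metadata: {
      name: `${payload.name}-preview-environment-config`,
      labels: { type: "preview-environment-config" }   // the "type" label mentioned above
    },
    data: {
      environment: payload.name,
      // The "projects" section of payload.json, serialised as a string.
      projects: JSON.stringify(payload.projects)
    }
  });
}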


Step 3: Deploy Dependencies

Helm Charts are used to deploy the service dependencies (RabbitMQ, PostgreSQL and Redis) to the newly created Environment. Telepresence's server-side component is also deployed, from a local chart.
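
One way to express this step is with a Brigade Job running the helm client; the image, chart names and chart location below are assumptions, not the article's exact configuration:

const { Job } = require("brigadier");

function deployDependencies(environmentName) {
  // A container with the helm client installed (image choice is an assumption).
  const helm = new Job("deploy-dependencies", "lachlanevenson/k8s-helm:v2.11.0");
  helm.tasks = [
    `helm upgrade --install rabbitmq stable/rabbitmq --namespace ${environmentName}`,
    `helm upgrade --install postgresql stable/postgresql --namespace ${environmentName}`,
    `helm upgrade --install redis stable/redis --namespace ${environmentName}`,
    // Telepresence's server-side component comes from a local chart in the config repo.
    `helm upgrade --install telepresence /src/charts/telepresence --namespace ${environmentName}`
  ];
  return helm.run();
}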


Step 4: Deploy Services via Events

We want to trigger service deployments to the newly created Preview Environment in the same fashion as normal ongoing releases, via Brigade Events. Because Events are simply Kubernetes Secrets, we use the Kubernetes API to create them directly:
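
A rough sketch of creating such an Event follows. The exact labels and data keys of a Brigade build Secret are version specific, so treat every field name below as a placeholder and take the real format from the Brigade version you run:

// Reuses the coreApi client from the Step 1 sketch.
async function createDeployEvent(environmentName, serviceName, serviceConfig) {
  // All field names here stand in for the real Brigade build Secret format.
  await coreApi.createNamespacedSecret("brigade", {
    metadata: {
      generateName: `${serviceName}-deploy-`,
      labels: { heritage: "brigade", component: "build" }
    },
    stringData: {
      event_type: "exec",
      event_provider: "brigade.js",
      payload: JSON.stringify({
        environment: environmentName,
        tag: serviceConfig.tag
      })
    }
  });
}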


Let’s look at what’s been deployed to the new Namespace:

$ kubectl --context=my.k8s.cluster get pods --namespace jakub
NAME                            READY   STATUS    RESTARTS   AGE
gateway-6f468658c7-pbqn2        1/1     Running   0          20s
orders-bb77968d6-pfkh8          1/1     Running   0          20s
products-9ff84f899-8jj2b        1/1     Running   0          20s
postgresql-0                    1/1     Running   0          20s
rabbitmq-0                      1/1     Running   0          20s
redis-master-0                  1/1     Running   0          20s
telepresence-86b77f7cff-56pr9   1/1     Running   0          20s

As you can see, the new Environment has all of our components deployed and ready to go. New Environments can be created as often as we want. In a real-world scenario we would create a new Environment per developer, plus a pool of Environments to be used by CI/CD pipelines.

New Component Release

Each of our projects' GitHub repositories is configured to send a webhook to the Brigade Gateway every time a new git tag is added. For the purpose of this article our release process is as follows:



  • Commit and push your changes to GitHub.
  • Build and push new Docker images, tagging them with the most recent git sha.
  • Add a tag called dev and push it to GitHub.



After a new git tag has been pushed, GitHub will send a webhook to our Brigade Gateway and a new workflow for the relevant project will be triggered. Let's look at how the workflow is orchestrated in the Products Service project:
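
The actual script lives in the ditc-products repository; the sketch below only mirrors the flow described underneath it, with helper functions simplified and names illustrative (it assumes the @kubernetes/client-node package as before):

const { events, Job } = require("brigadier");
const k8s = require("@kubernetes/client-node");

events.on("create", (event, project) => {
  const payload = JSON.parse(event.payload);
  // GitHub "create" webhooks fire for branches too; only react to tag creation.
  if (payload.ref_type === "tag") {
    return deployToEnvironments(payload);
  }
});

async function deployToEnvironments(payload) {
  const kubeConfig = new k8s.KubeConfig();
  kubeConfig.loadFromDefault();
  const coreApi = kubeConfig.makeApiClient(k8s.CoreV1Api);

  // Find every Preview Environment config in the brigade namespace.
  const configMaps = await coreApi.listNamespacedConfigMap("brigade");
  const environments = configMaps.body.items.filter(
    (cm) => cm.metadata.labels && cm.metadata.labels.type === "preview-environment-config"
  );

  for (const environment of environments) {
    const projects = JSON.parse(environment.data.projects);
    const config = projects.products;                // this service's entry, if any
    if (config && config.tag === payload.ref) {
      // getShaForTag is a hypothetical helper that calls the GitHub API, using the
      // BRIGADE_REPO_AUTH_TOKEN environment variable, to resolve the tag to a commit sha.
      const sha = await getShaForTag(config, payload.ref);
      await deploy(environment.data.environment, sha);
    }
  }
}

function deploy(environmentName, sha) {
  // Helm deployment Job; image and chart location are assumptions.
  const helm = new Job("deploy-products", "lachlanevenson/k8s-helm:v2.11.0");
  helm.tasks = [
    `helm upgrade --install products /src/charts/products ` +
      `--namespace ${environmentName} --set image.tag=${sha}`
  ];
  return helm.run();
}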



Let’s follow through the example above:



  • First, the events.on('create', (e) => {...}) handler is triggered by the Brigade Worker.
  • If the event payload comes from git tag creation, we proceed to deployToEnvironments(payload).
  • In deployToEnvironments we query for any ConfigMap in the brigade Namespace with the preview-environment-config type.
  • If found, for each Environment we check whether that Environment is interested in our project and whether the git tag matches the one specified in its config.
  • If so, we call the GitHub API (the GitHub authorisation token is available to us as the environment variable BRIGADE_REPO_AUTH_TOKEN, retrieved from the Project's Secret) to obtain the git sha of the relevant tag commit.
  • We pass the Environment name and git sha to the deploy function, which starts the Helm deployment Job.

One can see how this approach can easily be extended to deployment pipelines of any kind, supporting any type of environment (dev, test, prod) with any level of complexity.



Telepresence

We've automated the creation of Preview Environments and now it's time to reap the benefits of our set-up.



From Telepresence documentation we learn that: Telepresence substitutes a two-way network proxy for your normal pod running in the Kubernetes cluster. This pod proxies data from your Kubernetes environment (e.g., TCP connections, environment variables, volumes) to the local process. The local process has its networking transparently overridden so that DNS calls and TCP connections are routed through the proxy to the remote Kubernetes cluster.



Let's initiate a connection to our new Namespace, where a Telepresence deployment already exists:

$ telepresence --context my.k8s.cluster \
    --deployment telepresence \
    --namespace jakub \
    --method vpn-tcp

We can now fully utilise Kubernetes’ DNS service discovery. At the moment these services are exposed:

$ kubectl --context my.k8s.cluster get services --namespace jakub \
    -o=custom-columns="NAME:.metadata.name"
NAME
gateway
postgresql
rabbitmq
redis-master

All of the services above are addressable from our local environment. Let's play with the Gateway Service by creating a new product:

$ curl 'http://gateway/products' -XPOST -d '{"id": "the_odyssey", "title": "The Odyssey", "passenger_capacity": 101, "maximum_speed": 5, "in_stock": 10}'

Let’s verify everything worked by retrieving the newly created product:

$ curl 'http://gateway/products/the_odyssey'
{
  "in_stock": 10,
  "maximum_speed": 5,
  "title": "The Odyssey",
  "id": "the_odyssey",
  "passenger_capacity": 101
}

Our local environment now behaves as though it were just another service deployed to the cluster. Not only can we send requests using DNS names, but we can also direct traffic from the cluster to locally running services.



Ksync

Let’s take our development experience a step further by employing Ksync. From its documentation we learn that: Ksync speeds up developers who build applications for Kubernetes. It transparently updates containers running on the cluster from your local checkout.



This means we can skip the time-consuming Docker image build/push/pull steps that separate us from continuously testing our changes against the wider cluster environment. Ksync establishes a storage-level sync between our local directories and container volumes. It watches for any changes we make and sends them to the cluster on file save, effectively providing hot-reload capabilities in the cloud!



After Ksync initialisation we have to create a link between our local directory and a directory in the running container. Ksync uses a label selector to ensure our changes are synced to all relevant containers. For our example we will sync the Gateway source code directory with the container directory where the Python virtual environment's site-packages are installed:

$ ksync --context my.k8s.cluster --namespace jakub \
    create --selector=app=gateway \
    /Users/jakubborys/development/ditc-gateway/gateway \
    /appenv/lib/python3.6/site-packages/gateway

Once the sync is created, we need to tell Ksync to start watching our directories:



$ ksync watch --context my.k8s.cluster



Every time we change any file in the /Users/jakubborys/development/ditc-gateway/gateway directory, Ksync will sync its contents to every container matched by the label selector and restart that container for us, ensuring changes are fully propagated to our environment.



Takeaways

Developers should have an easy and standardised way of creating isolated Preview Environments where changes can be pushed continuously after every code commit. Use tools like Brigade from Azure (https://github.com/Azure/brigade) to automate this process.



With large and complex Microservices deployments, running all services locally for integration is, at some point, not feasible. Use the Telepresence proxy (https://www.telepresence.io) to expose Kubernetes services to your local environment.



Early experimentation is crucial to maintaining high velocity during development. With Ksync (https://github.com/vapor-ware/ksync) you can continuously make your local changes available in a running container without time-consuming image build/upload steps.

Head over to https://github.com/kooba/ditc-config repository for examples.


