Development in the Cloud
With the advent of microservice architectures, the limitless compute resources enabled by Kubernetes should be utilised in the early stages of the development lifecycle to ensure continuous, high-velocity development.

**TL;DR**

- With large and complex microservices deployments, running services locally is, at some point, not feasible.
- Automate the creation of preview environments to allow developers early and frequent experimentation.
- Tools like Brigade, Telepresence, ksync (and many others) can help you get there easily.
- Dive straight into the [config repository](https://github.com/kooba/ditc-config) for code examples.

## Keep your PRs green!

Have you ever been in a situation where you or your peers open a PR and, instead of a conversation about the quality and simplicity of the new code, your first steps are addressing bugs discovered by the regression testing suite?

I'd like to show you how, with a bit of automation, we can create personal development preview environments where each and every developer can conduct early experiments and run a full suite of regression tests in isolation. I'm going to talk specifically about three different tools: [Brigade](https://github.com/azure/brigade), [Telepresence](https://github.com/telepresenceio/telepresence) and [ksync](https://github.com/vapor-ware/ksync).

## Brigade

From the project description we learn that Brigade is event-based scripting of Kubernetes pipelines. It is an in-cluster runtime environment: it interprets scripts and executes them by invoking resources inside the cluster.

In the example below our requirement is to create a workflow where a new environment, in the case of Kubernetes a new namespace, is created and all relevant components (services and their dependencies) are deployed.

Let's look at what makes up a Brigade deployment with a high-level architectural diagram.

There are many ways to trigger a Brigade workflow; the example above shows triggering an event via a git push. The GitHub repository is configured with a webhook that sends a payload with information about the git push to the Brigade gateway. Before the event can be successfully interpreted, we create a Brigade project for every repository we want to automate. From there the Brigade gateway interprets the incoming payload and converts it to an event. An event is just a Kubernetes secret containing information about the git push, linked to the relevant Brigade project. The Brigade controller watches for new events and uses that information to start a new Brigade worker container. The Brigade worker triggers the appropriate handler in the `brigade.js` script that knows how to process information from the incoming event; both event and project data are passed to the handler. The script author has full access to the Kubernetes API and can manipulate any of its resources (as long as RBAC permissions allow it). The script can also run multiple job containers in parallel to execute additional sub-tasks. Both worker and job containers have access to the full git repository when running the script.
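To make that flow concrete before we dive into the real scripts, here is a minimal sketch of a `brigade.js` entry point. It assumes helper functions named as in the rest of this article (`provisionEnvironment`, `logError`) are defined elsewhere in the file; the actual scripts are linked in the sections below.

```javascript
// Minimal brigade.js sketch: the worker loads this file and invokes the
// handler registered for the incoming event type. provisionEnvironment
// and logError are assumed to be defined elsewhere in the script.
const { events } = require('brigadier');

events.on('exec', (event, project) => {
  // "exec" events are the ones triggered via the `brig` CLI;
  // event.payload carries the JSON document passed with `-p`.
  const payload = JSON.parse(event.payload);
  if (payload.action === 'create') {
    provisionEnvironment(payload).catch((error) => logError(error));
  }
});

events.on('create', (event, project) => {
  // GitHub webhook events (here: tag creation) arrive as their own
  // event types and can be routed to dedicated handlers, as shown later.
});
```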
## Prerequisites

For the purpose of our demo we will use services from [nameko-examples](https://github.com/nameko/nameko-examples). Each service has been moved to its own repository:

- Gateway: https://github.com/kooba/ditc-gateway
- Orders: https://github.com/kooba/ditc-orders
- Products: https://github.com/kooba/ditc-products

Additionally, configuration lives in its own repository: https://github.com/kooba/ditc-config.

Each service has its deployment automated and a corresponding Brigade project created. Each Brigade project is a simple Kubernetes secret that contains a few important configuration options, mainly:

- the location of the GitHub repository for the Brigade project;
- a GitHub API authorisation token that allows worker and job sidecars to fetch code for us;
- a GitHub shared secret that is used for additional authentication of incoming webhook requests.

Because every project is just a simple Kubernetes secret, we can automate project creation with any common secret deployment strategy. Take a look at the `deploy-projects` make target in the [ditc-config](https://github.com/kooba/ditc-config/blob/master/Makefile#L74) repository for an example approach.
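As a sketch of what such automation can look like with the same `@kubernetes/client-node` API used throughout this article: the secret field names below (`repository`, `cloneURL`, `sharedSecret`, `github.token`) mirror the options just listed but are illustrative; the Brigade project Helm chart driven by the Makefile target above is the authoritative way to generate them.

```javascript
// Illustrative sketch of creating a Brigade project secret directly.
// Field names and labels approximate what the brigade-project Helm chart
// generates; prefer the chart (see the Makefile target above) in practice.
const crypto = require('crypto');
const kubernetes = require('@kubernetes/client-node');

const k8sClient = kubernetes.Config.defaultClient();
const BRIGADE_NAMESPACE = 'brigade';

const createProject = async (org, repo, githubToken, sharedSecret) => {
  // Brigade names project secrets after a hash of "org/repo"; the same
  // convention is used by deployProjects later in this article.
  const projectSha = crypto.createHash('sha256')
    .update(`${org}/${repo}`)
    .digest('hex')
    .substring(0, 54);

  const secret = new kubernetes.V1Secret();
  secret.metadata = new kubernetes.V1ObjectMeta();
  secret.metadata.name = `brigade-${projectSha}`;
  secret.metadata.labels = { app: 'brigade', component: 'project' };
  secret.type = 'brigade.sh/project';
  secret.stringData = {
    repository: `github.com/${org}/${repo}`,
    cloneURL: `https://github.com/${org}/${repo}.git`,
    sharedSecret,                 // validates incoming GitHub webhooks
    'github.token': githubToken,  // lets worker/job sidecars fetch code
  };
  await k8sClient.createNamespacedSecret(BRIGADE_NAMESPACE, secret);
};
```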
For the rest of the article we're assuming your Kubernetes cluster is up and running and [Brigade is installed](https://github.com/kooba/ditc-config/blob/master/Makefile#L29) in a `brigade` namespace there.

## Preview Environment

To create a new preview environment we will trigger the [brigade.js](https://github.com/kooba/ditc-config/blob/master/brigade.js) script located in the root of our config repository. To do so we will use Brigade's CLI utility `brig`, which can be used to trigger special "exec" events. When running `brig` we have the option to provide an additional JSON payload which, in our case, will contain the name of the action we want the script to perform, the name of the new environment, and the list of the service projects we would like to deploy to the new preview environment. In our case this includes the products, orders and gateway services. We specify a git tag name `dev` for each of the services we want deployed to the new environment.

`payload.json`:

```json
{
  "action": "create",
  "name": "jakub",
  "projects": {
    "products": {"org": "kooba", "repo": "ditc-products", "tag": "dev"},
    "orders": {"org": "kooba", "repo": "ditc-orders", "tag": "dev"},
    "gateway": {"org": "kooba", "repo": "ditc-gateway", "tag": "dev"}
  }
}
```

To create a new environment run this command:

```shell
brig run \
  --kube-context my-k8s-cluster \
  --namespace brigade \
  -c 5dc6f31f6aee6b2ff514161c0cba4906dc3214c3 \
  -r master \
  -f brigade.js \
  -p payload.json \
  kooba/ditc-config
```

- `--kube-context my-k8s-cluster`: k8s cluster context name
- `--namespace brigade`: namespace where Brigade components are installed
- `-c 5dc6f31f6aee6b2ff514161c0cba4906dc3214c3`: git commit we want checked out and available in the worker container
- `-r master`: git branch name
- `-f brigade.js`: optional, local `brigade.js` script; by default the one located in the root of the repository will be used
- `-p payload.json`: local payload file
- `kooba/ditc-config`: Brigade project name

Let's look at the composition of the `brigade.js` that will be executed after running the command above. The action `create` will trigger the `provisionEnvironment` function, which performs these steps.

### Step 1: Create namespace

Our preview environment will be isolated via a Kubernetes namespace.

```javascript
const kubernetes = require('@kubernetes/client-node');

const k8sClient = kubernetes.Config.defaultClient();

const createNamespace = async (namespaceName) => {
  const existingNamespace = await k8sClient.listNamespace(
    true, '', `metadata.name=${namespaceName}`,
  );
  if (existingNamespace.body.items.length) {
    console.log(`Namespace "${namespaceName}" already exists`);
    return;
  }
  const namespace = new kubernetes.V1Namespace();
  namespace.metadata = new kubernetes.V1ObjectMeta();
  namespace.metadata.name = namespaceName;
  await k8sClient.createNamespace(namespace);
};
```

### Step 2: Create environment ConfigMap

The ConfigMap will contain the "projects" section from the provided `payload.json`. The "type" label will let us find it during individual service releases.

```javascript
const kubernetes = require('@kubernetes/client-node');
const yaml = require('js-yaml');

const k8sClient = kubernetes.Config.defaultClient();
const BRIGADE_NAMESPACE = 'brigade';

const createEnvironmentConfigMap = async (name, projects) => {
  const configMap = new kubernetes.V1ConfigMap();
  const metadata = new kubernetes.V1ObjectMeta();
  metadata.name = `environment-config-${name}`;
  metadata.namespace = BRIGADE_NAMESPACE;
  metadata.labels = {
    type: 'preview-environment-config',
    environmentName: name,
  };
  configMap.metadata = metadata;
  configMap.data = {
    environment: yaml.dump(projects),
  };
  try {
    await k8sClient.createNamespacedConfigMap(BRIGADE_NAMESPACE, configMap);
  } catch (error) {
    if (error.body && error.body.code === 409) {
      await k8sClient.replaceNamespacedConfigMap(
        configMap.metadata.name,
        BRIGADE_NAMESPACE,
        configMap,
      );
    } else {
      throw error;
    }
  }
};
```

### Step 3: Deploy dependencies

Helm charts for service dependencies (RabbitMQ, PostgreSQL and Redis) are used for deployment to the newly created environment. Telepresence's server-side component is also deployed, from a local chart.

```javascript
const { Job, Group } = require('brigadier');

const deployDependencies = async (namespace) => {
  const postgresql = new Job('postgresql', 'jakubborys/ditc-brigade-worker:latest');
  postgresql.storage.enabled = false;
  postgresql.imageForcePull = true;
  postgresql.tasks = [
    'cd /src',
    `helm upgrade ${namespace}-postgresql charts/postgresql \\
      --install --namespace=${namespace} \\
      --set fullnameOverride=postgresql \\
      --set postgresqlPassword=password \\
      --set resources.requests.cpu=50m \\
      --set resources.requests.memory=156Mi \\
      --set readinessProbe.initialDelaySeconds=60 \\
      --set livenessProbe.initialDelaySeconds=60;`,
  ];

  const rabbitmq = new Job('rabbitmq', 'jakubborys/ditc-brigade-worker:latest');
  rabbitmq.storage.enabled = false;
  rabbitmq.imageForcePull = true;
  rabbitmq.tasks = [
    `helm upgrade ${namespace}-rabbitmq stable/rabbitmq-ha \\
      --install --namespace=${namespace} \\
      --set fullnameOverride=rabbitmq \\
      --set image.tag=3.7-management-alpine \\
      --set replicaCount=1 \\
      --set persistentVolume.enabled=true \\
      --set updateStrategy=RollingUpdate \\
      --set rabbitmqPassword=password \\
      --set rabbitmqMemoryHighWatermarkType=relative \\
      --set rabbitmqMemoryHighWatermark=0.5`,
  ];

  const redis = new Job('redis', 'jakubborys/ditc-brigade-worker:latest');
  redis.storage.enabled = false;
  redis.imageForcePull = true;
  redis.tasks = [
    `helm upgrade ${namespace}-redis stable/redis \\
      --install --namespace ${namespace} \\
      --set fullnameOverride=redis \\
      --set password=password \\
      --set cluster.enabled=false`,
  ];

  const telepresence = new Job(
    'telepresence', 'jakubborys/ditc-brigade-worker:latest',
  );
  telepresence.storage.enabled = false;
  telepresence.imageForcePull = true;
  telepresence.tasks = [
    'cd /src',
    `helm upgrade ${namespace}-telepresence charts/telepresence \\
      --install --namespace ${namespace}`,
  ];

  await Group.runAll([postgresql, rabbitmq, redis, telepresence]);
};
```
### Step 4: Deploy services via events

We want to trigger service deployments to the newly created preview environment in the same fashion as normal ongoing releases would be deployed: via Brigade events. Because events are simply Kubernetes secrets, we can use the Kubernetes API to create them here.

```javascript
const kubernetes = require('@kubernetes/client-node');
const yaml = require('js-yaml');
const ulid = require('ulid');
const crypto = require('crypto');
const fetch = require('node-fetch');

const k8sClient = kubernetes.Config.defaultClient();
const BRIGADE_NAMESPACE = 'brigade';
const GITHUB_API_URL = 'https://api.github.com/repos';

const deployProjects = async (environmentName) => {
  const environmentConfigMap = await k8sClient.readNamespacedConfigMap(
    `environment-config-${environmentName}`,
    BRIGADE_NAMESPACE,
  );
  const environmentsConfig = yaml.safeLoad(
    environmentConfigMap.body.data.environment,
  );
  for (const key of Object.keys(environmentsConfig)) {
    const projectConfig = environmentsConfig[key];
    const url = `${GITHUB_API_URL}/${projectConfig.org}/${projectConfig.repo}`;
    const tagUrl = `${url}/git/refs/tags/${projectConfig.tag}`;
    const response = await fetch(
      tagUrl,
      {
        method: 'GET',
        headers: {
          'Content-Type': 'application/json',
          Authorization: `token ${process.env.BRIGADE_REPO_AUTH_TOKEN}`,
        },
      },
    );
    if (!response.ok && response.status === 404) {
      console.log(
        `'${projectConfig.tag}' tag not found in `
        + `${projectConfig.org}/${projectConfig.repo} repository`,
      );
    } else {
      console.log(
        `Triggering deployment of '${projectConfig.repo}' for tag '${projectConfig.tag}'`,
      );
      const commit = await response.json();
      const gitSha = commit.object.sha;
      const projectSha = crypto.createHash('sha256').update(
        `${projectConfig.org}/${projectConfig.repo}`,
      ).digest('hex').substring(0, 54);
      const projectId = `brigade-${projectSha}`;
      const buildId = ulid().toLowerCase();
      const buildName = `environment-worker-${buildId}`;

      const deployEventSecret = new kubernetes.V1Secret();
      deployEventSecret.metadata = new kubernetes.V1ObjectMeta();
      deployEventSecret.metadata.name = buildName;
      deployEventSecret.metadata.labels = {
        build: buildId,
        component: 'build',
        heritage: 'brigade',
        project: projectId,
      };
      deployEventSecret.type = 'brigade.sh/build';
      deployEventSecret.stringData = {
        build_id: buildName,
        build_name: buildName,
        commit_id: gitSha,
        commit_ref: projectConfig.tag,
        event_provider: 'brigade-cli',
        event_type: 'exec',
        log_level: 'log',
        payload: `{"name": "${environmentName}"}`,
        project_id: projectId,
        script: '',
      };
      await k8sClient.createNamespacedSecret(BRIGADE_NAMESPACE, deployEventSecret);
    }
  }
};
```

Let's look at what's been deployed to the new namespace:

```shell
$ kubectl --context=my-k8s-cluster get pods --namespace jakub
NAME                            READY  STATUS   RESTARTS  AGE
gateway-6f468658c7-pbqn2        1/1    Running  0         20s
orders-bb77968d6-pfkh8          1/1    Running  0         20s
products-9ff84f899-8jj2b        1/1    Running  0         20s
postgresql-0                    1/1    Running  0         20s
rabbitmq-0                      1/1    Running  0         20s
redis-master-0                  1/1    Running  0         20s
telepresence-86b77f7cff-56pr9   1/1    Running  0         20s
```

As you can see, the new environment has all of our components deployed and ready to go. New environments can be created as often as we want; in a real-world scenario we would create a new environment per developer and a pool of environments to be used by CI/CD pipelines.
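Cleaning up is just as easy to automate. The `destroyEnvironment` sketch below is hypothetical (the example repositories only implement creation), but it shows that tearing an environment down amounts to deleting the namespace and its configmap, and it could be wired to an `"action": "destroy"` payload in the same `brigade.js`.

```javascript
// Hypothetical counterpart to provisionEnvironment: not part of the
// example repositories, shown only to illustrate lifecycle automation.
const kubernetes = require('@kubernetes/client-node');

const k8sClient = kubernetes.Config.defaultClient();
const BRIGADE_NAMESPACE = 'brigade';

const destroyEnvironment = async (name) => {
  // Deleting the namespace removes the services and their dependencies.
  await k8sClient.deleteNamespace(name);
  // Removing the configmap stops future releases from targeting it.
  await k8sClient.deleteNamespacedConfigMap(
    `environment-config-${name}`,
    BRIGADE_NAMESPACE,
  );
};
```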
## New Component Release

Each of our projects' GitHub repositories is configured to send a webhook to the Brigade gateway every time a new git tag is added. For the purpose of this article our release process is as follows:

1. Commit and push your changes to GitHub.
2. Build and push new Docker images, tagging them with the most recent git SHA.
3. Add a tag called `dev` and push it to GitHub.

After a new git tag has been pushed, GitHub will send a webhook to our Brigade gateway and a new workflow for the relevant project will be triggered. Let's look at how the workflow is orchestrated in the [products service](https://github.com/kooba/ditc-products/blob/master/brigade.js) project.

```javascript
const { events, Job } = require('brigadier');
const kubernetes = require('@kubernetes/client-node');
const process = require('process');
const yaml = require('js-yaml');
const fetch = require('node-fetch');

const k8sClient = kubernetes.Config.defaultClient();
const BRIGADE_NAMESPACE = 'brigade';
const PROJECT_NAME = 'products';
const GITHUB_API_URL = 'https://api.github.com/repos/kooba/ditc-products';

const deploy = async (environmentName, gitSha) => {
  console.log('Deploying helm charts');
  const service = new Job(
    'ditc-products', 'jakubborys/ditc-brigade-worker:latest',
  );
  service.storage.enabled = false;
  service.imageForcePull = true;
  service.tasks = [
    'cd /src',
    `helm upgrade ${environmentName}-products \\
      charts/products --install \\
      --namespace=${environmentName} \\
      --set image.tag=${gitSha} \\
      --set replicaCount=1`,
  ];
  await service.run();
};

const getTagCommit = async (tag) => {
  console.log(`Getting commit sha for tag ${tag}`);
  const tagUrl = `${GITHUB_API_URL}/git/refs/tags/${tag}`;
  const response = await fetch(
    tagUrl,
    {
      method: 'GET',
      headers: {
        'Content-Type': 'application/json',
        Authorization: `token ${process.env.BRIGADE_REPO_AUTH_TOKEN}`,
      },
    },
  );
  if (response.ok) {
    const commit = await response.json();
    return commit.object.sha;
  }
  throw Error(await response.text());
};

const deployToEnvironments = async (payload) => {
  const tag = payload.ref;
  const environmentConfigMaps = await k8sClient.listNamespacedConfigMap(
    BRIGADE_NAMESPACE,
    true,
    undefined,
    undefined,
    undefined,
    'type=preview-environment-config',
  );
  if (!environmentConfigMaps.body.items.length) {
    throw Error('No environment configmaps found');
  }
  for (const configMap of environmentConfigMaps.body.items) {
    const environmentsConfig = yaml.safeLoad(configMap.data.environment);
    const config = environmentsConfig[PROJECT_NAME];
    if (config && config.tag === tag) {
      const { environmentName } = configMap.metadata.labels;
      const gitSha = await getTagCommit(tag);
      await deploy(environmentName, gitSha);
    }
  }
};

events.on('create', (e) => {
  /* Events triggered by the GitHub webhook on tag creation
     will trigger this handler. */
  try {
    const payload = JSON.parse(e.payload);
    if (payload.ref_type !== 'tag') {
      console.log('Skipping, not a tag commit');
      return;
    }
    deployToEnvironments(payload).catch((error) => {
      logError(error);
    });
  } catch (error) {
    logError(error);
  }
});
```

Let's follow through the example above. First, the `events.on('create', (e) => {...});` handler is triggered by the Brigade worker. If the event payload comes from git tag creation, we proceed to `deployToEnvironments(payload)`. In `deployToEnvironments` we query for any ConfigMap in the `brigade` namespace that is of the `preview-environment-config` type. If found, for each environment we check whether that environment is interested in our project and whether the git tag matches the one specified in its config. If so, we make a call to the GitHub API (the GitHub authorisation token is available to us as the environment variable `BRIGADE_REPO_AUTH_TOKEN` and was retrieved from the project's secret) to obtain the git SHA of the relevant tag commit. We pass the environment name and git SHA to the `deploy` function, which starts the Helm deployment job.

One can see how this approach can easily be extended to deployment pipelines of any kind, supporting any type of environment (dev, test, prod) with any level of complexity.
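As one hedged illustration (this handler is not part of the example repositories), the same `events`/`Job` pattern could serve a long-lived environment: a `push` handler might redeploy the products chart to a shared `staging` namespace whenever master changes.

```javascript
// Illustrative only: redeploy the products chart to a hypothetical
// long-lived "staging" namespace on every push to master.
const { events, Job } = require('brigadier');

events.on('push', async (e) => {
  const payload = JSON.parse(e.payload);
  if (payload.ref !== 'refs/heads/master') {
    console.log('Skipping, not a push to master');
    return;
  }
  const gitSha = payload.after; // head commit SHA from the GitHub push payload
  const release = new Job('staging-products', 'jakubborys/ditc-brigade-worker:latest');
  release.storage.enabled = false;
  release.imageForcePull = true;
  release.tasks = [
    'cd /src',
    `helm upgrade staging-products charts/products \\
      --install --namespace=staging \\
      --set image.tag=${gitSha} \\
      --set replicaCount=1`,
  ];
  await release.run();
});
```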
## Telepresence

We've automated the creation of preview environments and now it's time to reap the benefits of our setup. From the Telepresence documentation we learn that Telepresence substitutes a two-way network proxy for your normal pod running in the Kubernetes cluster. This pod proxies data from your Kubernetes environment (e.g. TCP connections, environment variables, volumes) to the local process. The local process has its networking transparently overridden so that DNS calls and TCP connections are routed through the proxy to the remote Kubernetes cluster.

Let's initiate a connection to our new namespace, where a Telepresence deployment already exists:

```shell
$ telepresence --context my-k8s-cluster \
    --deployment telepresence \
    --namespace jakub --method vpn-tcp
```

We can now fully utilise Kubernetes' DNS service discovery. At the moment these services are exposed:

```shell
$ kubectl --context my-k8s-cluster get services --namespace jakub \
    -o=custom-columns="NAME:metadata.name"
NAME
gateway
postgresql
rabbitmq
redis-master
```

Any of the services above are addressable from our local environment. Let's play with the gateway service by creating a new product:

```shell
$ curl 'http://gateway/products' -XPOST -d '{"id": "the_odyssey", "title": "The Odyssey", "passenger_capacity": 101, "maximum_speed": 5, "in_stock": 10}'
```

Let's verify everything worked by retrieving the newly created product:

```shell
$ curl 'http://gateway/products/the_odyssey'
{
  "in_stock": 10,
  "maximum_speed": 5,
  "title": "The Odyssey",
  "id": "the_odyssey",
  "passenger_capacity": 101
}
```

Our local environment is now behaving as though it were just another service deployed to the cluster. Not only can we send requests using DNS names, we can also direct traffic from the cluster to locally running services.

## ksync

Let's take our development experience a step further by employing ksync. From its documentation we learn that ksync speeds up developers who build applications for Kubernetes: it transparently updates containers running on the cluster from your local checkout. This means we can skip the time-consuming Docker image build/push/pull steps that separate us from continuously testing our changes against the wider cluster environment. ksync establishes a storage-level sync between our local directories and container volumes. It watches for any changes we make and sends them to the cluster on file save, effectively providing hot-reload capabilities for the cloud!
After ksync initialisation we have to create a link between our local directory and a directory in the running container. ksync will use a label selector to ensure our changes are synced to all relevant containers. For our example we will sync the gateway source code directory with the container directory where the Python virtual environment's site-packages are deployed:

```shell
$ ksync --context my-k8s-cluster --namespace jakub \
    create --selector=app=gateway \
    /Users/jakubborys/Development/ditc-gateway/gateway \
    /appenv/lib/python3.6/site-packages/gateway
```

Once the sync is created we need to tell ksync to start watching our directories:

```shell
$ ksync watch --context my-k8s-cluster
```

Every time we change any file in the /Users/jakubborys/Development/ditc-gateway/gateway directory, ksync will sync its content to any container found by the label selector and restart that container for us, ensuring changes are fully propagated to our environment.

## Takeaways

- Developers should have an easy and standardised way of creating isolated preview environments where changes can be pushed continuously after every code commit. Use tools like [Brigade from Azure](https://github.com/azure/brigade) to automate this process.
- With large and complex microservices deployments, running integrated services locally is, at some point, not feasible. Use the [Telepresence](https://www.telepresence.io) proxy to expose Kubernetes services to your local environment.
- Early experimentation is crucial to maintaining high velocity during development. With [ksync](https://github.com/vapor-ware/ksync) you can continuously make your local changes available in a running container without time-consuming image build/upload steps.

Head over to the https://github.com/kooba/ditc-config repository for examples.