Tutorial: Hands on GitOps
In this hands-on tutorial, Brice Fernandes from Weaveworks will walk through setting up and using a GitOps pipeline to manage a Kubernetes cluster. This includes setting up monitoring and metric visualisation, as well as managing the monitoring configuration itself through GitOps.
After taking this tutorial, attendees will be able to:
- Set up their own GitOps pipeline to manage their Kubernetes cluster
- Compare the desired state of a Kubernetes cluster against the actual state
- Deploy Prometheus and Grafana to a Kubernetes cluster
- Set up a continuous deployment pipeline for Kubernetes workloads
Attendees should already:
- Know Kubernetes and the kubectl command line
- Be comfortable with Git
- Be comfortable with the Unix command line
No preparation needed. Attendees will be provided with an online environment to use during the tutorial. Bring a laptop with a modern browser.
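The "compare desired state against actual state" step at the heart of GitOps can be sketched as a simple reconciliation check. The snippet below is a hypothetical illustration only, not the tooling used in the tutorial; a real GitOps operator (such as Flux) reads manifests from a Git checkout and queries the live Kubernetes API instead of the hard-coded dictionaries used here.

```python
# Minimal sketch of a GitOps reconciliation check: compare the desired
# state (as declared in Git) against the actual state (as reported by the
# cluster). All field names and values below are invented for illustration.

def diff_state(desired: dict, actual: dict) -> dict:
    """Return the keys whose values differ between desired and actual state."""
    drift = {}
    for key, want in desired.items():
        have = actual.get(key)
        if have != want:
            drift[key] = {"desired": want, "actual": have}
    return drift

desired = {"replicas": 3, "image": "myapp:1.2.0"}
actual = {"replicas": 2, "image": "myapp:1.2.0"}

drift = diff_state(desired, actual)
print(drift)  # {'replicas': {'desired': 3, 'actual': 2}}
```

A GitOps operator runs this kind of comparison continuously and applies the desired state whenever drift is detected.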
Cloud Developer Advocate, Microsoft
Reproducible Data Science pipelines in production
How many times have you developed a model or a data application, tested it locally or in a staging environment, only to find that it breaks in production? How many times have you seen a data science project that is ready for production, but the closer it gets, the harder it becomes to answer the question: “Why did the model predict this?” These are two common issues faced by data scientists across the globe. On one hand, explainability is essential for trust in a model and its predictions; it also helps prevent situations in which nobody understands why a prediction was made (e.g. why are all these people being rejected?). On the other hand, as more and more data-intensive applications are created, there is a growing need for practical data operations and processes to improve the deployment of artificial intelligence applications and machine learning models.
In this workshop, you’ll learn how to level up your data science workflows with some practical DevOps, or rather DataOps/MLOps! We will focus on how to improve the reliability and quality of your data applications, preparing them better for production deployment and consumption. We will build an end-to-end machine learning pipeline focusing on logging, debugging, diagnosis, automated testing, integration and delivery. In brief, this workshop will help you build a release pipeline for a data science project to improve your data deployment processes and increase your model’s robustness and trustworthiness.
What are the main takeaways? Attendees will gain an understanding of DataOps and how it can improve their data science workflows. We will focus on model explainability without compromising accuracy. As we move on to the examples, you will better identify the many challenges faced during the production implementation of data applications and how these can be mitigated through good Ops practices. By the end of the talk, attendees will have the knowledge required to automate the delivery of their data products, increasing their productivity and the quality of their work.
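The "automated testing" stage of such a release pipeline can be sketched in a few lines: train a model, log what happened, and gate the release on a quality check. This is a deliberately trivial, stdlib-only sketch; the model, the metric threshold, and the data are all invented for illustration and stand in for a real training job and monitoring stack.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")

def train(data):
    """'Train' a trivial model: predict the mean of the targets."""
    mean = sum(y for _, y in data) / len(data)
    log.info("trained model: constant prediction %.2f", mean)
    return lambda x: mean

def validate(model, data, max_error: float) -> bool:
    """Automated quality gate: fail the pipeline if the mean absolute
    error exceeds the agreed threshold (the threshold is invented here)."""
    mae = sum(abs(model(x) - y) for x, y in data) / len(data)
    log.info("validation MAE: %.2f (limit %.2f)", mae, max_error)
    return mae <= max_error

data = [(1, 2.0), (2, 2.5), (3, 3.0)]
model = train(data)
assert validate(model, data, max_error=1.0), "quality gate failed: do not release"
```

Running the same scripted train-and-validate steps locally, in CI, and before deployment is what makes the pipeline reproducible.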
Señor Developer, Red Hat
This workshop gives an overview of Knative Serving and Knative Eventing. Knative Serving provides primitives for serverless frameworks; in particular, it knows how to scale to zero and is already used in several FaaS frameworks. Knative Eventing is an eventing specification for sending CloudEvents, an emerging CNCF standard, from sources to sinks. You will follow along with an end-to-end demo, leveraging advanced message brokers such as Apache Kafka behind Knative APIs.
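A CloudEvent is essentially a small envelope of standard attributes around arbitrary event data. A minimal sketch in Python, using only the standard library (the event type, source, and payload below are made-up example values):

```python
import json
import uuid
from datetime import datetime, timezone

def make_cloudevent(event_type: str, source: str, data: dict) -> dict:
    """Build a CloudEvents 1.0 envelope in structured mode.
    specversion, id, source and type are the required attributes;
    time and datacontenttype are optional extras included here."""
    return {
        "specversion": "1.0",
        "id": str(uuid.uuid4()),
        "type": event_type,          # e.g. reverse-DNS naming convention
        "source": source,            # URI identifying the producer
        "time": datetime.now(timezone.utc).isoformat(),
        "datacontenttype": "application/json",
        "data": data,
    }

event = make_cloudevent("com.example.order.created", "/orders", {"orderId": 42})
print(json.dumps(event))
```

In the Knative Eventing model, a source emits envelopes like this one, a broker routes them, and a sink (for example a Knative Service) receives them over HTTP.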
Technology Strategist, Dynatrace
Building unbreakable automated multi-stage pipelines with keptn
This talk introduces the open source framework keptn (https://keptn.sh/). The goal of keptn is to provide full automation of multi-stage delivery pipelines with automated quality gates and blue/green deployments, as well as self-healing capabilities in case something goes wrong in production. This way, not only is the delivery of apps automated, but users can further automate their operations, letting developers focus completely on their code.
During the demo part of this talk, participants will learn how to enrich delivery pipelines with quality gates that use monitoring data to decide whether a new version of a service should be promoted to the next stage or if it should be rejected. Additionally, we will cover how we can leverage build, deployment and environment metadata to automatically self-heal a service in case something goes wrong in production.
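The promote-or-reject decision behind such a quality gate can be sketched simply: compare the candidate version's monitoring metrics against agreed thresholds. The metric names and limits below are invented for illustration; keptn itself evaluates SLOs declared in YAML against real monitoring data rather than hard-coded dictionaries.

```python
# Simplified sketch of an automated quality gate: promote a new service
# version to the next stage only if every monitored metric stays within
# its agreed threshold. All names and numbers here are hypothetical.

SLO = {
    "response_time_p95_ms": 500,   # 95th-percentile latency budget
    "error_rate_percent": 1.0,     # maximum acceptable error rate
}

def quality_gate(metrics: dict) -> bool:
    """Return True (promote) only if all metrics are within their limits."""
    return all(metrics[name] <= limit for name, limit in SLO.items())

candidate = {"response_time_p95_ms": 420, "error_rate_percent": 0.3}
print("promote" if quality_gate(candidate) else "reject")  # promote
```

The same comparison, fed by live monitoring data, is what lets the pipeline reject a bad build automatically instead of relying on a human to inspect dashboards.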
Head of Technology Strategy, SolarWinds
Service Meshes, but at what cost?
As you learn about the architecture and value provided by service meshes, you’re intrigued and initially impressed. Upon reflection, you, like many others, think: “I see the value, but what overhead does being on the mesh incur?”
Complicating the answer is the fact that there are more than ten service mesh projects to choose from. While this presentation does not take an in-depth look at the landscape of service meshes, it does introduce Meshery as a utility for benchmarking service mesh performance and as a playground for familiarizing yourself with the various features of different service meshes.
Trainer & Consultant
GraphQL – forget (the) REST? A query language for your API
Are you using RESTful web services to provide data for your apps? “Sure, what else?” Well, then you should take a closer look at GraphQL. GraphQL is a query language for your API and is used by Facebook, GitHub and others as an alternative to RESTful web services.
What is GraphQL? Which problems does it try to solve? How can we implement a GraphQL backend? A lot of questions, which we will discuss and answer in this workshop.
Together we will develop a GraphQL backend using Node.js and we will have a look at the central parts like the Schema and Resolvers. We will also discuss potential (performance) problems and possible solutions.
This workshop does not require any GraphQL knowledge and targets GraphQL beginners.
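The central idea the workshop builds on, a schema whose fields are answered by resolver functions, can be illustrated outside any framework. The workshop itself uses Node.js; the sketch below is a deliberately simplified Python rendition of the resolver pattern only (not a GraphQL engine), with invented example data.

```python
# Conceptual sketch of the schema-plus-resolvers idea behind a GraphQL
# backend: each queryable field is answered by a resolver function.
# The data and field names below are invented for illustration.

BOOKS = {1: {"id": 1, "title": "Domain-Driven Design", "authorId": 7}}
AUTHORS = {7: {"id": 7, "name": "Eric Evans"}}

# "Schema": which top-level fields exist, and which resolver answers each.
resolvers = {
    "book": lambda args: BOOKS.get(args["id"]),
    "author": lambda args: AUTHORS.get(args["id"]),
}

def execute(field: str, args: dict):
    """Dispatch a query for one field to its resolver, as an engine would."""
    return resolvers[field](args)

print(execute("book", {"id": 1})["title"])  # Domain-Driven Design
```

A real GraphQL server generalizes this dispatch to nested selections, which is also where the performance problems discussed in the workshop (such as N+1 resolver calls) come from.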
Developer Advocate, Pivotal
These days, you can’t swing a dry erase marker without hitting someone talking about microservices. Developers are studying Eric Evans’s prescient book Domain-Driven Design. Teams are refactoring monolithic apps, looking for bounded contexts and defining a ubiquitous language. And while there have been countless articles, videos, and talks to help you convert to microservices, few have spent any appreciable time asking whether a given application should be a microservice. In this talk, I will show you a set of factors you can apply to help you decide if something deserves to be a microservice or not. We’ll also look at what we need to do to maintain a healthy micro(services)biome.
There are many good reasons to use a microservices architecture. But there are no free lunches. The positives of microservices come with added complexity. Teams should happily take on that complexity…provided the application in question benefits from the upside of microservices. This talk will cut through the hype to help you make the right choice for your unique situation.
Responsible Microservices is based on my blog series “Should that be a Microservice? Keep These Six Factors in Mind”, found on the Pivotal blog.
Senior Technical Evangelist, Microsoft
Serverless Analytics for Streaming Data and Data Lakes
This is a beginner-to-intermediate-level workshop designed to illustrate how to process real-time data streams in a serverless way. In this workshop, we’ll build infrastructure to enable operations personnel at Wild Rydes headquarters to monitor the health and status of their unicorn fleet. Each unicorn is equipped with a sensor that reports its location and vital signs. During this workshop we’ll use AWS to build applications to process and visualize this data in real time.
During this workshop you will learn about real-time data streaming, stream aggregation, stream processing and data lakes. We’ll use AWS Lambda to process real-time streams, DynamoDB to persist unicorn vitals, Amazon Kinesis Data Analytics to build a serverless application to aggregate data, Amazon Kinesis Data Firehose to archive the raw data to Amazon S3, and Athena to run ad-hoc queries against the raw data.
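The Lambda-processing step can be sketched with a plain handler function. The outer event shape below follows the standard format Lambda receives from a Kinesis stream (base64-encoded record data under `Records[].kinesis.data`); the payload fields (`Name`, `HealthPoints`) are invented stand-ins for the workshop's unicorn vitals.

```python
import base64
import json

def handler(event, context=None):
    """Sketch of a Lambda function consuming a Kinesis stream of unicorn
    sensor readings. Collects the latest health reading per unicorn; a
    real handler would persist each reading to DynamoDB instead."""
    vitals = {}
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        vitals[payload["Name"]] = payload["HealthPoints"]
    return vitals

# Simulated Kinesis event containing a single sensor reading.
reading = {"Name": "Shadowfax", "HealthPoints": 100}
event = {"Records": [{"kinesis": {"data": base64.b64encode(
    json.dumps(reading).encode()).decode()}}]}
print(handler(event))  # {'Shadowfax': 100}
```

Invoking the handler locally with a simulated event like this is also a convenient way to test the processing logic before wiring it to a real stream.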
- Please sign up for a free AWS account before the workshop at https://aws.amazon.com/free/