Speakers

Brice Fernandes

Engineer, Weaveworks

Brice fell in love with programming while studying physics and has never looked back. He has a broad technology background covering everything from embedded C to backendless browser apps built with the trendiest JavaScript frameworks. He taught game development and functional programming online and founded his own education platform for developers before joining Weaveworks. He now spends his time helping companies make the most of Kubernetes.
DEV

Tutorial: Hands-on GitOps

In this hands-on tutorial, Brice Fernandes from Weaveworks will walk through setting up and using a GitOps pipeline to manage a Kubernetes cluster. This includes setting up monitoring and metric visualisation, as well as managing the monitoring configuration itself through GitOps.

After taking this tutorial, attendees will be able to:
  • Set up their own GitOps pipeline to manage their Kubernetes cluster
  • Compare the desired state of a Kubernetes cluster against the actual state
  • Deploy Prometheus and Grafana to a Kubernetes cluster
  • Set up a continuous deployment pipeline for Kubernetes workloads
Attendees should:
  • Be familiar with Kubernetes and the kubectl command line
  • Be comfortable with Git
  • Be comfortable with the Unix command line

No preparation is needed. Attendees will be provided with an online environment to use during the tutorial. Bring a laptop with a modern browser.
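As a minimal illustration of the declarative model behind GitOps, the desired state of a workload lives as a manifest in a Git repository (the file path, app name, and image below are hypothetical examples, not material from the tutorial):

```yaml
# manifests/podinfo.yaml — a workload definition tracked in Git;
# a GitOps agent continuously reconciles the cluster to match this file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: podinfo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: podinfo
  template:
    metadata:
      labels:
        app: podinfo
    spec:
      containers:
        - name: podinfo
          image: stefanprodan/podinfo:3.1.0
```

With manifests like this in a repository, `kubectl diff -f manifests/` shows any drift between the desired state in Git and the actual state of the cluster, which is exactly the comparison the tutorial covers.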

Tania Allard

Cloud Developer Advocate, Microsoft

Tania is a Microsoft developer advocate with vast experience in academic research and industrial environments. Her main areas of expertise are data-intensive applications, scientific computing, and machine learning. She focuses on the improvement of processes, reproducibility and transparency in research, data science and artificial intelligence. Over the last few years, she has trained hundreds of people in scientific computing, reproducible workflows, and the testing, monitoring and scaling of ML models, and has delivered talks on these topics worldwide. She is passionate about mentoring, open source, and its community, and is involved in a number of initiatives aimed at building more diverse and inclusive communities. She is also a contributor, maintainer, and developer of a number of open source projects and the founder of PyLadies NorthWest UK.
AI

Reproducible Data Science pipelines in production

How many times have you developed a model or a data application and tested it locally or in a staging environment, only to find that it breaks in production? How many times have you seen a data science project that seems ready for production, but the closer it gets, the harder it becomes to answer the question: “Why did the model predict this?” These are two common issues faced by many data scientists across the globe. On one hand, explainability is essential for trust in a model and its predictions, and helps prevent situations in which nobody understands why a prediction was made (i.e. why are all these people being rejected?). On the other hand, as more and more data-intensive applications are created, there is a growing need for practical data operations and processes that improve the deployment of artificial intelligence applications and machine learning models.

In this workshop, you’ll learn how to level up your data science workflows with some practical DevOps, or rather DataOps/MLOps! We will focus on improving the reliability and quality of your data applications, preparing them better for production deployment and consumption. We will build an end-to-end machine learning pipeline focusing on logging, debugging, diagnosis, automated testing, integration and delivery. In brief, this workshop will help you build a release pipeline for a data science project, improving your data deployment processes and increasing your model’s robustness and trustworthiness.

What are the main takeaways? Attendees will gain an understanding of DataOps and how it can improve their data science workflows. We will focus on model explainability without compromising accuracy. As we move on to the examples, you will learn to identify the many challenges faced when putting data applications into production and how they can be mitigated through good Ops practices. By the end of the workshop, attendees will have the knowledge required to automate the delivery of their data products, increasing their productivity and the quality of their work.

Matthias Wessendorf

Señor Developer, Red Hat

Matthias Wessendorf works on the Messaging team at Red Hat, focusing on event-driven architectures, data streaming and serverless workloads. He is an active contributor to the Knative project, a regular speaker at international conferences, and a long-standing member of the Apache Software Foundation.
DEV

Knative

This workshop gives an overview of Knative Serving and Knative Eventing. Knative Serving provides primitives for serverless frameworks; in particular, it supports scale-to-zero and is already used by several FaaS frameworks. Knative Eventing is an eventing specification for routing CloudEvents, an emerging CNCF standard, from sources to sinks. You will follow an end-to-end demo that puts an advanced message broker, Apache Kafka, behind the Knative APIs.
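For a taste of the Knative Serving API covered in the workshop, a minimal Service manifest looks roughly like this (the name and container image follow the well-known Knative hello-world sample, not the workshop's own material):

```yaml
# A Knative Service: Knative manages routing, revisions, and
# autoscaling (including scale-to-zero) for the container below.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          env:
            - name: TARGET
              value: "Kubernetes"
```

Applying this single resource is enough for Knative to build out a route and a revision; when no requests arrive, the underlying pods are scaled to zero.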

Florian Bacher

Technology Strategist, Dynatrace

In his role as Technology Strategist, Florian drives the strategy, adoption and integration of OpenShift at Dynatrace and is a core contributor to the open source project keptn. He is always eager to dive into new technologies, especially in the context of cloud native applications. Before joining Dynatrace, Florian earned his master’s degree in computer science at Klagenfurt University and worked as a software engineer, mainly in mobile application development. When not working, he mostly spends his time playing guitar or in the gym.
OPS

Building unbreakable automated multi-stage pipelines with keptn

This talk introduces the open source framework keptn (https://keptn.sh/). The goal of keptn is to fully automate multi-stage delivery pipelines with automated quality gates and blue/green deployments, as well as to provide self-healing capabilities in case something goes wrong in production. This way, not only is the delivery of apps automated; users can also automate their operations, and developers can focus entirely on their code.

During the demo part of this talk, participants will learn how to enrich delivery pipelines with quality gates that use monitoring data to decide whether a new version of a service should be promoted to the next stage or if it should be rejected. Additionally, we will cover how we can leverage build, deployment and environment metadata to automatically self-heal a service in case something goes wrong in production.
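To make the quality-gate idea concrete in general terms, here is an illustrative sketch (not keptn's actual configuration or API; the metric names and thresholds are invented): a gate compares monitoring data for the candidate version against agreed thresholds and decides whether to promote it to the next stage.

```javascript
// Hypothetical thresholds a team might agree on for promotion.
const thresholds = { p95LatencyMs: 500, errorRatePct: 1.0 };

// Evaluate a candidate's monitoring metrics against the thresholds.
// Returns the promotion decision plus the list of violated metrics.
function evaluateQualityGate(metrics) {
  const violations = Object.keys(thresholds).filter(
    (key) => metrics[key] > thresholds[key]
  );
  return { promote: violations.length === 0, violations };
}

// A build with an elevated error rate would be rejected by the gate.
const decision = evaluateQualityGate({ p95LatencyMs: 320, errorRatePct: 2.5 });
console.log(decision);
```

In keptn itself this decision is driven by real monitoring data and configuration rather than hard-coded thresholds, but the shape of the check — metrics in, promote/reject out — is the same.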

Lee Calcote

Head of Technology Strategy, SolarWinds

Lee Calcote is an innovative product and technology leader, passionate about developer platforms and management software for clouds, containers, functions and applications. Advanced and emerging technologies have been a consistent focus through Calcote’s tenure at SolarWinds, Seagate, Cisco and Pelco. An advisor, author and speaker, he is active in the tech community as a Docker Captain and Cloud Native Ambassador.
OPS

Service Meshes, but at what cost?

As you learn about the architecture and value provided by service meshes, you’re intrigued and initially impressed. Upon reflection, you, like many others, think: “I see the value, but what overhead does being on the mesh incur?”

Complicating the answer is the fact that there are over ten service mesh projects to choose from. While this presentation does not take an in-depth look at the service mesh landscape, it does introduce Meshery, a utility for benchmarking service mesh performance that also provides a playground for familiarizing yourself with the features of the different meshes.

Christian Schwendtner

Trainer & Consultant

Christian Schwendtner is a passionate software architect and developer. He is an expert with long-standing experience in web development and Microsoft technologies. Christian is a trainer, consultant and conference speaker, and with his experience he can offer developers valuable tips and tricks for their daily work. He studied Software Engineering at the University of Applied Sciences Hagenberg, where he also lectures part-time.
DEV

GraphQL – forget (the) REST? A query language for your API

Are you using RESTful web services to provide data for your apps? “Sure, what else?” Well, then you should take a closer look at GraphQL. GraphQL is a query language for your API and is used by Facebook, GitHub and others as an alternative to RESTful web services.

What is GraphQL? Which problems does it try to solve? How can we implement a GraphQL backend? A lot of questions, which we will discuss and answer in this workshop.

Together we will develop a GraphQL backend using Node.js and we will have a look at the central parts like the Schema and Resolvers. We will also discuss potential (performance) problems and possible solutions.

This workshop does not require any GraphQL knowledge and targets GraphQL beginners.
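To give a flavour of the central parts mentioned above, here is a minimal, framework-free sketch of a schema and its resolvers (the type names and sample data are hypothetical, not the workshop's own code):

```javascript
// The schema, in GraphQL's Schema Definition Language, describes
// exactly what clients may query.
const typeDefs = `
  type Book {
    id: ID!
    title: String!
  }
  type Query {
    books: [Book!]!
    book(id: ID!): Book
  }
`;

// Hypothetical in-memory data; in a real backend this would be a
// database or another service.
const books = [
  { id: '1', title: 'Learning GraphQL' },
  { id: '2', title: 'Designing Data-Intensive Applications' },
];

// Resolvers supply the data for each field in the schema.
const resolvers = {
  Query: {
    books: () => books,
    book: (_parent, { id }) => books.find((b) => b.id === id),
  },
};
```

With a library such as graphql-js or Apollo Server, `typeDefs` and `resolvers` are combined into an executable schema and exposed over HTTP — the wiring the workshop builds out in Node.js.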

Nathaniel Schutta

Developer Advocate, Pivotal

Nathaniel T. Schutta is a software architect focused on cloud computing and building usable applications. A proponent of polyglot programming, Nate has written multiple books and appeared in various videos. Nate is a seasoned speaker, regularly presenting at conferences worldwide, No Fluff Just Stuff symposia, meetups, universities, and user groups. In addition to his day job, Nate is an adjunct professor at the University of Minnesota, where he teaches students to embrace dynamic languages. Driven to rid the world of bad presentations, Nate coauthored the book Presentation Patterns with Neal Ford and Matthew McCullough. Nate recently published Thinking Architecturally, available as a free download from Pivotal.
OPS

Responsible Microservices

These days, you can’t swing a dry erase marker without hitting someone talking about microservices. Developers are studying Eric Evans’s prescient book Domain-Driven Design. Teams are refactoring monolithic apps, looking for bounded contexts and defining a ubiquitous language. And while there have been countless articles, videos, and talks to help you convert to microservices, few have spent any appreciable time asking whether a given application should be a microservice. In this talk, I will show you a set of factors you can apply to help you decide if something deserves to be a microservice or not. We’ll also look at what we need to do to maintain a healthy micro(services)biome.

There are many good reasons to use a microservices architecture. But there are no free lunches. The positives of microservices come with added complexity. Teams should happily take on that complexity…provided the application in question benefits from the upside of microservices. This talk will cut through the hype to help you make the right choice for your unique situation.

Responsible Microservices is based on my blog series “Should that be a Microservice? Keep These Six Factors in Mind,” found on the Pivotal blog.

Frank Munz

Senior Technical Evangelist, Amazon Web Services

Frank Munz is a Senior Technical Evangelist for Amazon Web Services based in Germany. Before he went "all in" on the cloud, Frank worked as a DevOps engineer and software architect in Europe and Australia. Apart from containers, his interests lie in big/fast data and machine learning. Frank has over 20 years of industry experience. He ran his own boutique consultancy for more than a decade and worked for, and on behalf of, TIBCO, BEA, and Oracle. He is the author of the book Middleware and Cloud Computing, and holds a Ph.D. in Computer Science from Technische Universität München (TUM).
AI

Serverless Analytics for Streaming Data and Data Lakes

This is a beginner-to-intermediate workshop designed to illustrate how to process real-time data streams in a serverless way. In this workshop, we’ll build the infrastructure that enables operations personnel at Wild Rydes headquarters to monitor the health and status of their unicorn fleet. Each unicorn is equipped with a sensor that reports its location and vital signs. During this workshop we’ll use AWS to build applications that process and visualize this data in real time.

During this workshop you can learn about Real-time Data Streaming, Stream Aggregation, Stream Processing and Data Lakes. We’ll use Lambda to process real-time streams, DynamoDB to persist unicorn vitals, Amazon Kinesis Data Analytics to build a serverless application to aggregate data, Amazon Kinesis Data Firehose to archive the raw data to Amazon S3, and Athena to run ad-hoc queries against the raw data.

Important: