An understanding of container and virtualization concepts
[overview] =>
The Moby Project is an open-source library of components for assembling custom container-based systems. It provides a “Lego set” of dozens of components, a framework for assembling them into custom container-based systems, and a place for users to experiment and exchange ideas.
In this instructor-led, live training, participants will learn how to use Moby Project to assemble specialized container systems.
By the end of this training, participants will be able to:
Assemble their own Docker Engine by stripping out unnecessary components
Swap out build systems and volume management functions
Use Moby tooling to define components (OS, hypervisor, etc.), then pack them into a chosen artifact
Assemble a sample tiny OS that can be booted straight from bare metal
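The outcomes above revolve around Moby's declarative build model, best known through the LinuxKit project: an OS is described as a YAML list of component containers, then packed into a bootable artifact. The sketch below is illustrative only; the component images are real LinuxKit names but the `<tag>` placeholders and kernel version are assumptions, not pinned releases.

```yaml
# Minimal Moby/LinuxKit-style OS definition (sketch; tags are placeholders)
kernel:
  image: linuxkit/kernel:5.10.104   # kernel shipped as a container image
  cmdline: "console=tty0"
init:
  - linuxkit/init:<tag>             # minimal init
  - linuxkit/runc:<tag>             # container runtime
onboot:
  - name: dhcpcd                    # one-shot task: configure networking
    image: linuxkit/dhcpcd:<tag>
services:
  - name: sshd                      # long-running service container
    image: linuxkit/sshd:<tag>
```

A command along the lines of `linuxkit build minimal.yml` then packs this definition into a chosen artifact (ISO, raw disk image, etc.); exact output flags vary by tool version.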
Audience
Developers
DevOps
System administrators
Format of the course
Part lecture, part discussion, exercises and heavy hands-on practice
[category_overview] =>
[outline] =>
To request a customized course outline for this training, please contact us.
An understanding of basic software development principles
Experience with command-line interfaces and Docker
Familiarity with containerization concepts is beneficial
Audience
Software developers
DevOps professionals
Technical managers
[overview] =>
Minikube is a tool that makes it easy to run Kubernetes locally.
This instructor-led, live training (online or onsite) is aimed at beginner-level to intermediate-level software developers and DevOps professionals who wish to learn how to set up and manage a local Kubernetes environment using Minikube.
By the end of this training, participants will be able to:
Install and configure Minikube on their local machine.
Understand the basic concepts and architecture of Kubernetes.
Deploy and manage containers using kubectl and the Minikube dashboard.
Set up persistent storage and networking solutions for Kubernetes.
Utilize Minikube for developing, testing, and debugging applications.
Format of the Course
Interactive lecture and discussion.
Lots of exercises and practice.
Hands-on implementation in a live-lab environment.
Course Customization Options
To request a customized training for this course, please contact us to arrange.
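The first two outcomes above amount to a short command sequence; the `docker` driver below is one of several and is an assumption about your platform.

```shell
# Start and verify a local single-node Kubernetes cluster
# (assumes minikube and kubectl are already installed)
minikube start --driver=docker   # driver choice depends on your platform
minikube status                  # confirm the cluster components are running
kubectl get nodes                # the single minikube node should report Ready
minikube dashboard               # open the Minikube dashboard in a browser
```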
[category_overview] =>
This instructor-led, live training in <loc> (online or onsite) is aimed at beginner-level to intermediate-level software developers and DevOps professionals who wish to learn how to set up and manage a local Kubernetes environment using Minikube.
By the end of this training, participants will be able to:
Install and configure Minikube on their local machine.
Understand the basic concepts and architecture of Kubernetes.
Deploy and manage containers using kubectl and the Minikube dashboard.
Set up persistent storage and networking solutions for Kubernetes.
Utilize Minikube for developing, testing, and debugging applications.
[outline] =>
Understanding Container Orchestration
Introduction to containerization
The role of Kubernetes in container orchestration
Kubernetes Fundamentals
Core concepts and components of Kubernetes
Kubernetes architecture overview
Setting Up Minikube
Installing Minikube on different platforms
Starting a single-node Kubernetes cluster with Minikube
Working with Kubernetes Objects
Understanding Pods, Deployments, and Services
Managing Kubernetes objects using kubectl
Deploying Applications on Minikube
Creating and managing deployments
Exposing applications using NodePort and LoadBalancer
Persistent Storage and Volumes
Using Persistent Volumes and Persistent Volume Claims
ConfigMaps and Secrets for configuration management
Networking in Kubernetes
Service discovery and DNS management
Ingress controllers and Ingress resources
Advanced Minikube Features
Enabling and using Minikube add-ons
Setting up a local registry and using it with Minikube
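The deployment and exposure topics in the outline above correspond to manifests like the following sketch; the names and the `nginx` image are illustrative, and `NodePort` is the simplest of the exposure options covered.

```yaml
# Deployment plus NodePort Service (illustrative names and image)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25        # any container image works here
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: NodePort                   # exposes the Service on a port of the Minikube node
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 80
```

After `kubectl apply -f web.yaml`, `minikube service web --url` prints the reachable address.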
An understanding of containerization and its benefits
Experience with Docker and basic Kubernetes concepts
Familiarity with software development and deployment processes
Audience
Developers
DevOps engineers
Technical leads
[overview] =>
Minikube is a powerful tool for developers to run Kubernetes locally.
This instructor-led, live training (online or onsite) is aimed at intermediate-level developers and DevOps engineers who wish to use Minikube as a part of their development workflow.
By the end of this training, participants will be able to:
Set up and manage a local Kubernetes environment using Minikube.
Understand how to deploy, manage, and debug applications on Minikube.
Integrate Minikube into their continuous integration and deployment pipelines.
Optimize their development process using Minikube's advanced features.
Apply best practices for local Kubernetes development.
Format of the Course
Interactive lecture and discussion.
Lots of exercises and practice.
Hands-on implementation in a live-lab environment.
Course Customization Options
To request a customized training for this course, please contact us to arrange.
[category_overview] =>
This instructor-led, live training in <loc> (online or onsite) is aimed at intermediate-level developers and DevOps engineers who wish to use Minikube as a part of their development workflow.
By the end of this training, participants will be able to:
Set up and manage a local Kubernetes environment using Minikube.
Understand how to deploy, manage, and debug applications on Minikube.
Integrate Minikube into their continuous integration and deployment pipelines.
Optimize their development process using Minikube's advanced features.
Apply best practices for local Kubernetes development.
[outline] =>
Introduction to Minikube
Benefits of using Minikube for local development
Comparison with other Kubernetes environments
Minikube Quickstart
Installation and configuration
Launching and accessing the Kubernetes dashboard
Application Development Workflow
Setting up a development environment with Minikube
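A typical Minikube inner loop, as a hedged sketch (the `myapp` image and deployment names are illustrative; locally built images generally need `imagePullPolicy: IfNotPresent` or `Never` in the pod spec):

```shell
# Build against Minikube's Docker daemon so no registry push is needed
eval $(minikube docker-env)
docker build -t myapp:dev .
# Alternatively: minikube image load myapp:dev
kubectl set image deployment/myapp myapp=myapp:dev   # roll out the new image
kubectl rollout status deployment/myapp
kubectl logs deployment/myapp --follow               # debug the running pods
```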
A general understanding of continuous integration (CI) and continuous delivery (CD) concepts
Audience
Developers
DevOps engineers
[overview] =>
This instructor-led, live training (online or onsite) is aimed at engineers who wish to use Helm to streamline the process of installing and managing Kubernetes applications.
By the end of this training, participants will be able to:
Install and configure Helm.
Create reproducible builds of Kubernetes applications.
Share applications as Helm charts.
Run third-party applications saved as Helm charts.
Manage releases of Helm packages.
Format of the course
Interactive lecture and discussion.
Lots of exercises and practice.
Hands-on implementation in a live-lab environment.
Course Customization Options
To request a customized training for this course, please contact us to arrange.
To learn more about Helm, please visit: https://helm.sh/
[category_overview] =>
This instructor-led, live training in <loc> (online or onsite) is aimed at engineers who wish to use Helm to streamline the process of installing and managing Kubernetes applications.
By the end of this training, participants will be able to:
Install and configure Helm.
Create reproducible builds of Kubernetes applications.
Share applications as Helm charts.
Run third-party applications saved as Helm charts.
Manage releases of Helm packages.
[outline] =>
Introduction
The Helm package manager as a Continuous Integration (CI) / Continuous Deployment (CD) tool
Installing and Configuring Kubernetes and Helm
Refresher on Kubernetes Cluster Architecture and Docker
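The outcomes listed for this course map onto a small set of Helm commands; the chart and release names below are illustrative, and the Bitnami repository stands in for any third-party chart source.

```shell
# Core Helm workflow (names are illustrative)
helm repo add bitnami https://charts.bitnami.com/bitnami   # add a third-party chart repo
helm repo update
helm install my-nginx bitnami/nginx     # run a third-party application as a release
helm create mychart                     # scaffold a chart of your own
helm install myapp ./mychart            # reproducible, versioned install
helm upgrade myapp ./mychart --set replicaCount=3
helm rollback myapp 1                   # release management: revert to revision 1
helm package mychart                    # produce a shareable .tgz chart archive
```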
This instructor-led, live training (online or onsite) is aimed at engineers wishing to run containerized applications using the CRI-O container runtime.
By the end of this training, participants will be able to:
Install and configure the CRI-O container runtime.
Pull images from a variety of OCI-compliant registries.
Run, test and manage containerized applications using CRI-O.
Format of the Course
Interactive lecture and discussion
Lots of exercises and practice
Hands-on implementation in a live-lab environment
Course Customization Options
To request a customized training for this course, please contact us to arrange.
To learn more about CRI-O, please visit: http://cri-o.io/.
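The pull/run/manage workflow above is usually exercised with `crictl`, the CRI client. A sketch, assuming CRI-O is installed at its default socket path (the image name is illustrative):

```shell
# Inspect a CRI-O runtime with crictl
sudo systemctl start crio
sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock info
sudo crictl pull docker.io/library/nginx:latest   # pull from an OCI-compliant registry
sudo crictl images                                # list pulled images
sudo crictl pods                                  # list pod sandboxes
sudo crictl ps -a                                 # list containers and their states
```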
A general understanding of cloud computing concepts.
Experience with web or mobile application development.
Programming experience in any of the languages supported by Heroku (e.g., Ruby, Python, PHP, Clojure, Go, Java, Scala, Node.js)
Audience
Web developers
Mobile developers
[overview] =>
Heroku is a Platform-as-a-Service (PaaS) for building, running, operating and scaling containerized web and mobile applications in the cloud. It supports multiple programming languages, various development tools, pre-installed operating systems, and redundant servers.
This instructor-led, live training (online or onsite) is aimed at developers who wish to use Heroku to conveniently deploy web and mobile applications to the cloud, without grappling with infrastructure setup, configuration, management, etc.
By the end of this training, participants will be able to:
Understand the Heroku ecosystem and how it differs from AWS EC2 and other PaaS offerings.
Leverage Heroku features such as Git integration, Heroku CLI and Heroku Dashboard to push applications to the cloud with ease.
Format of the Course
Interactive lecture and discussion.
Lots of exercises and practice.
Hands-on implementation in a live-lab environment.
Course Customization Options
To request a customized training for this course, please contact us to arrange.
[category_overview] =>
This instructor-led, live training in <loc> (online or onsite) is aimed at developers who wish to use Heroku to conveniently deploy web and mobile applications to the cloud, without grappling with infrastructure setup, configuration, management, etc.
By the end of this training, participants will be able to:
Understand the Heroku ecosystem and how it differs from AWS EC2 and other PaaS offerings.
Leverage Heroku features such as Git integration, Heroku CLI and Heroku Dashboard to push applications to the cloud with ease.
[outline] =>
Introduction
Preparing Your Heroku Account
Overview of Heroku Features and Architecture
Architecting an App Using the Twelve-Factor App Methodology
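The Git-based deployment flow highlighted in the overview looks roughly like this; the app name is a placeholder, and a `Procfile` in the repository (e.g., a line such as `web: gunicorn app:app` for Python) declares the process types, in keeping with the twelve-factor approach.

```shell
# Deploy an app to Heroku from a Git repository (app name is a placeholder)
heroku login
heroku create my-sample-app      # provisions the app and adds a "heroku" Git remote
git push heroku main             # or "master" on older apps: builds and deploys
heroku ps:scale web=1            # scale web dynos
heroku logs --tail               # stream application logs
heroku open                      # open the deployed app in a browser
```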
Project Calico is a networking solution for containers and virtual machines. Originally created for OpenStack to simplify data transmission across the network, today it supports Kubernetes, OpenShift, Docker EE, OpenStack, bare metal services, and others. Calico uses IP routing instead of switching, virtual networks, overlay networks, and other complicated workarounds to enable efficient and secure networking.
This instructor-led, live training (online or onsite) is aimed at engineers who wish to network Kubernetes clusters using a simplified, IP-routing-based approach.
By the end of this training, participants will be able to:
Install and configure Calico.
Use Calico to create a container networking solution for Kubernetes clusters.
Understand how Calico differs from traditional overlay networks.
Understand how Calico combines internet routing protocols with consensus-based data stores.
Use Calico to provide a secure network policy for Kubernetes.
Format of the Course
Interactive lecture and discussion.
Lots of exercises and practice.
Hands-on implementation in a live-lab environment.
Course Customization Options
To request a customized training for this course, please contact us to arrange.
To learn more about Project Calico, please visit: https://www.projectcalico.org/
[category_overview] =>
This instructor-led, live training in <loc> (online or onsite) is aimed at engineers who wish to optimize networking for Kubernetes clusters.
By the end of this training, participants will be able to:
Install and configure Calico.
Understand how Calico differs from traditional overlay networks.
Understand how Calico combines internet routing protocols with consensus-based data stores.
Use Calico to create a container networking solution for Kubernetes clusters.
Use Calico to provide network policy for Kubernetes.
[outline] =>
Introduction
Layer 3 networking vs overlay networks
Installing and Configuring Calico
Overview of Calico Features and Architecture
The Problem with Traditional Overlay Networks
Understanding L3 Connectivity and IP Routing
Overview of Calico Components
Setting up a Kubernetes Network Policy with Calico
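The last outline item uses the standard Kubernetes NetworkPolicy API, which Calico enforces (and extends with its own resources). A minimal sketch with illustrative labels:

```yaml
# Allow only frontend pods to reach backend pods on TCP 8080
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: backend          # the policy applies to backend pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend  # only frontend pods may connect
      ports:
        - protocol: TCP
          port: 8080
```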
A general understanding of infrastructure and software deployment.
Audience
Software engineers
System administrators
DevOps engineers
[overview] =>
Rancher is an open source PaaS platform for managing Kubernetes on any infrastructure.
This instructor-led, live course provides participants with an overview of Rancher and demonstrates through hands-on practice how to deploy and manage a Kubernetes cluster with Rancher.
By the end of this course, participants will be able to:
Install and configure Rancher.
Launch a Kubernetes cluster using RKE (Rancher Kubernetes Engine).
Manage multiple cloud Kubernetes clusters while avoiding vendor lock-in.
Manage Kubernetes clusters using their operating system and container engine of choice.
Format of the Course
Part lecture, part discussion, heavy hands-on practice
[category_overview] =>
This instructor-led, live course in <loc> provides participants with an overview of Rancher and demonstrates through hands-on practice how to deploy and manage a Kubernetes cluster with Rancher.
[outline] =>
Introduction
Rancher vs OpenShift
Installing and Configuring Rancher
Understanding Rancher's Kubernetes Distribution
Starting the Rancher Server
Adding Hosts
Launching Infrastructure Services
Creating a Container Using the UI
Creating a Container through Docker Command Line
Creating a Multi-Container Application
Networking Between Containers
Service Discovery
Load Balancing Containers
Launching Kubernetes Using RKE (Rancher Kubernetes Engine)
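Launching a cluster with RKE starts from a `cluster.yml` like the sketch below (addresses and the SSH user are placeholders); `rke up --config cluster.yml` then provisions Kubernetes on those hosts, after which the cluster can be imported into Rancher.

```yaml
# Minimal RKE cluster.yml sketch (addresses and user are placeholders)
nodes:
  - address: 10.0.0.1
    user: ubuntu
    role: [controlplane, etcd, worker]
  - address: 10.0.0.2
    user: ubuntu
    role: [worker]
```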
Basic knowledge of System Center Configuration Manager.
Experience or interest in managing Windows desktops in an Enterprise environment.
Audience
IT Professionals in charge of managing desktop configurations and deployments
IT professionals wishing to expand their knowledge and skills in virtualization
[overview] =>
Microsoft Application Virtualization (App-V) allows for the creation of applications that run as centrally managed services. App-V applications have the benefit of never needing to be installed directly on the end user's computer and never conflicting with other applications.
In this instructor-led, live training, we introduce the architecture, components and processes behind application virtualization and walk participants step-by-step through the deployment of App-V and App-V applications in a live lab environment. By the end of the course, participants will have knowledge and hands-on practice needed to install, configure, administer, and troubleshoot App-V as well as create, package and deploy their own App-V applications.
Format of the course
Part lecture, part discussion, exercises and heavy hands-on practice
Notes:
This course covers version 5.1 of App-V. For training on a different version, please contact us to arrange.
[category_overview] =>
In this instructor-led, live training in <loc>, we introduce the architecture, components and processes behind application virtualization and walk participants step-by-step through the deployment of App-V and App-V applications in a live lab environment. By the end of the course, participants will have knowledge and hands-on practice needed to install, configure, administer, and troubleshoot App-V as well as create, package and deploy their own App-V applications.
[outline] =>
Introduction to Microsoft App-V
Why virtualize your Windows applications?
Overview of App-V's Application Virtualization Architecture
How application virtualization works
The role of the client
The role of the Sequencer
The App-V Package
Planning Your Virtualization Infrastructure
Planning the App-V supporting infrastructure
Overview of various deployment scenarios
Installing and configuring the App-V server
Installing and Configuring the Application Virtualization Sequencer
Overview of the application virtualization sequencer
Planning the sequencer environment
Classifying applications for sequencing
Understanding the sequencing limitations
Sequencing your first application
Using the sequencer-generated MSI file to deploy offline
Using the App-V Package Accelerator
Overview of package accelerator
Creating a package accelerator using PowerShell
Creating a package using a package accelerator
Upgrading Your App-V Application Package
Updating a package to replace an existing one
Updating a package for deployment with the existing package
Updating a package with PowerShell
Sequencing for connection groups (plug-ins and middleware)
Using the App-V package converter (4.6 to 5.x)
Dynamic configuration and targeted scripting
Advanced App-V Sequencing Techniques
Sequencing a web-based application
Creating a Virtual Environment for the application
Sequencing an application that hard-codes its install to the C:\ drive
Performing an Open for Package Upgrade on an existing package
Building scripts into an .OSD file
Application Virtualization Management Server Administration
Experience implementing Microsoft Application Virtualization
Audience
IT Professionals in charge of managing desktop configurations and deployments
IT professionals wishing to expand their knowledge and skills in virtualization
[overview] =>
Microsoft Application Virtualization (App-V) allows for the creation of applications that run as centrally managed services. App-V applications have the benefit of never needing to be installed directly on the end user's computer and never conflicting with other applications.
In this instructor-led, live training, we cover advanced techniques and troubleshooting for Microsoft Application Virtualization (App-V), especially in the area of sequencing and packaging.
By the end of the course, participants will have a deep understanding of App-V and be able to sequence, troubleshoot and resolve complex issues.
Format of the course
Part lecture, part discussion, exercises and heavy hands-on practice
Notes:
This course covers version 5.1 of App-V. If you need training on a different version, please contact us to arrange.
This course is focused on App Virtualization and does not cover other MDOP components.
[category_overview] =>
In this instructor-led, live training in <loc>, we cover advanced techniques and troubleshooting for Microsoft Application Virtualization (App-V), especially in the area of sequencing and packaging.
By the end of the course, participants will have a deep understanding of App-V and be able to sequence, troubleshoot and resolve complex issues.
[outline] =>
Introduction
Overview of sequencing
Methods used for sequencing in App-V
Sequencing a web-based application
Creating a Virtual Environment for the application
Sequencing an application that hard-codes its install to the C:\ drive
Performing an Open for Package Upgrade on an existing package
Building scripts into an .OSD file
Merging and Overriding
Overriding a local key
Merging with a local key
Overriding a local directory
Merging with a local directory
Microsoft Office in App-V
Understanding different versions of Office
Developing an Office package for App-V using Office Deployment Tool
Steps used in publishing the developed Office package
Customization and management of the App-V packages
Comparing VFS to PVAD
Use of Primary Virtual Application Directory (PVAD) in sequencing
How VFS and PVAD are different in a virtual environment
How PVAD can be accessed even if it is hidden from view
Deploying and Testing with PowerShell
Downloading and installing the App-V server components
Procedures for deploying App-V using PowerShell
Common steps for testing the PowerShell-based deployment
Understanding RunVirtual in App-V
Adding a subkey under the RunVirtual registry key
Retrieving packages with the Get-AppvClientPackage PowerShell cmdlet
Using the /appvpid:<PID> command-line switch
Using the /appvve:<GUID> command-line switch
Patches and Updates in App-V
What patches and updates are
Understanding Hotfix 8 for App-V 5.1
Using Hotfix 8 to apply updates
Using Scripts in App-V
Overview of the script launcher
Limitations of the App-V scripting solution
Installing an environment that supports the script launcher
The different types of scripts
Automating Conversion to App-V
Estimating the time required to convert
Weighing the cost of conversion
Techniques for converting packages in the future
Accelerators in App-V Packages
Description of package accelerators
Steps in creating a package accelerator
Dynamic Configuration
Configuring the user files
Configuring the deployment files
Advanced Connection Groups
The function and location of the connection group file
The layout of the connection group file
Configuring packages in a connection group
Virtual environment support in connection groups
Advanced Client Integration
What client integration in App-V is
How integration is achieved in App-V
Why client integration matters in App-V
Troubleshooting App-V
Avoiding rabbit holes
Combining different areas of knowledge: foundational, operational, contextual
Using Process Monitor to troubleshoot
Troubleshooting the App-V client
Troubleshooting the OSD file
Client Debugging
Training support staff on App-V functions
Apache Karaf is an OSGi based runtime for deploying containerized applications.
In this instructor-led, live training (onsite or remote), participants will learn how to set up an OSGi based project as they step through the deployment of a modular Java application using Apache Karaf.
By the end of this training, participants will be able to:
Install and configure Apache Karaf
Understand the essential features of the OSGi runtime environment
Develop a containerized application using the Apache Karaf run time environment
Audience
Architects
Developers
Format of the Course
Part lecture, part discussion, exercises and heavy hands-on practice.
Note
To request a customized training for this course, please contact us to arrange.
[category_overview] =>
[outline] =>
Introduction
OSGI
Overview of the OSGi Life Cycle
Setting up Apache Felix
Working with OSGi Bundles
OSGi Services (SOA in a JVM)
Core Services
Compendium Services
Whiteboard and Extender Patterns
Bundle Host/Fragment
Aries JPA/JTA
Bundle Testing
Apache Karaf
Installing and Configuring Apache Karaf
Overview of Apache Karaf Features and Architecture
Using Karaf Consoles
Application Logging
Application Provisioning
Deploying an Application
Troubleshooting
Summary and Conclusion
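Much of the provisioning and logging work in the outline above happens inside the Karaf console. A sketch of common console commands (the `mvn:` bundle coordinates are illustrative):

```shell
# Inside the Karaf console
feature:list                       # show available features
feature:install webconsole         # provision the Karaf web console feature
bundle:install -s mvn:org.example/my-bundle/1.0.0   # install and start an OSGi bundle
bundle:list                        # check bundle states (Active, Resolved, ...)
log:tail                           # follow application logging
```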
[language] => en
[duration] => 21
[status] => published
[changed] => 1700037437
[source_title] => Building OSGi Applications with Apache Karaf
[source_language] => en
[cert_code] =>
[weight] => 0
[excluded_sites] =>
[use_mt] => stdClass Object
(
[field_overview] =>
[field_course_outline] =>
[field_prerequisits] =>
[field_overview_in_category] =>
)
[cc] => apachekaraf2
)
[dockerkubernetesopenshift] => stdClass Object
(
[course_code] => dockerkubernetesopenshift
[hr_nid] => 277930
[title] => Docker, Kubernetes and OpenShift 3 for Administrators
[requirements] =>
An understanding of container concepts
System administration or DevOps experience
Experience with the Linux command line
Audience
System administrators
Architects
Developers
[overview] =>
Red Hat OpenShift Container Platform (formerly OpenShift Enterprise) is an on-premises platform-as-a-service used for developing and deploying containerized applications on Kubernetes. Red Hat OpenShift Container Platform runs on Red Hat Enterprise Linux.
In this instructor-led, live training, participants will learn how to manage Red Hat OpenShift Container Platform.
By the end of this training, participants will be able to:
Create, configure, manage, and troubleshoot OpenShift clusters.
Deploy containerized applications on-premise, in public cloud or on a hosted cloud.
Secure OpenShift Container Platform.
Monitor and gather metrics.
Manage storage.
Format of the Course
Part lecture, part discussion, exercises and heavy hands-on practice.
Course Customization Options
This course is based on OpenShift Container Platform version 3.x.
To customize the course or request training on a different version of OpenShift (e.g., OpenShift Container Platform 4 or OKD), please contact us to arrange.
[category_overview] =>
In this instructor-led, live training in <loc>, participants will learn how to manage Red Hat OpenShift Container Platform.
By the end of this training, participants will be able to:
Create, configure, manage, and troubleshoot OpenShift clusters.
Deploy containerized applications on-premise, in public cloud or on a hosted cloud.
Secure OpenShift Container Platform.
Monitor and gather metrics.
Manage storage.
[outline] =>
Introduction
Overview of Docker and Kubernetes
Overview of OpenShift Container Platform Architecture
Creating Containerized Services
Managing Containers
Creating and Managing Container Images
Deploying Multi-container Applications
Setting up an OpenShift Cluster
Securing OpenShift Container Platform
Monitoring OpenShift Container Platform
Deploying Applications on OpenShift Container Platform using Source-to-Image (S2I)
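Day-to-day administration of the clusters described above runs through the `oc` CLI. A sketch, assuming an OpenShift 3.x master on its default port (the URL, project, and user names are placeholders, and `oc adm top` requires metrics to be enabled):

```shell
oc login https://master.example.com:8443 -u admin   # authenticate to the cluster
oc get nodes                                        # cluster health at a glance
oc adm top nodes                                    # per-node resource metrics
oc new-project demo                                 # create a project (namespace)
oc get events -n demo                               # troubleshoot recent events
oc adm policy add-role-to-user edit developer -n demo   # grant a user edit access
```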
Docker is an open-source platform for automating the process of building, shipping and running applications inside containers.
Kubernetes goes one step further by providing the tools needed to deploy and manage containerized applications at scale in a clustered environment.
OpenShift Container Platform (formerly OpenShift Enterprise) brings Docker and Kubernetes together into a managed platform, or PaaS (Platform as a Service), to further ease and simplify the deployment of Docker and Kubernetes. It provides predefined application environments and helps to realize key DevOps principles such as reduced time to market, infrastructure as code, continuous integration (CI), and continuous delivery (CD). OpenShift Container Platform is maintained by Red Hat and runs atop Red Hat Enterprise Linux.
In this instructor-led, live training, participants will learn how to use OpenShift Container Platform to deploy containerized applications.
By the end of this training, participants will be able to:
Create and configure an OpenShift setup.
Quickly deploy applications on-premise, in public cloud or on a hosted cloud.
Format of the Course
Part lecture, part discussion, exercises and heavy hands-on practice.
Course Customization Options
This course is based on OpenShift Container Platform version 3.x.
To customize the course or request training on a different version of OpenShift (e.g., OpenShift Container Platform 4 or OKD), please contact us to arrange.
[category_overview] =>
In this instructor-led, live training in <loc>, participants will learn how to use OpenShift Container Platform to deploy containerized applications.
By the end of this training, participants will be able to:
Create and configure an OpenShift setup.
Quickly deploy applications on-premise, in public cloud or on a hosted cloud.
[outline] =>
Introduction
From Docker containers, to managed clusters of containers with Kubernetes, to managed Docker and Kubernetes with OpenShift.
Docker
Overview of Docker architecture
Setting up Docker
Running a web application in a container
Managing Docker images
Networking Docker containers
Managing the data inside a Docker container
Kubernetes
Overview of Kubernetes architecture
Managing a cluster of Docker containers with Kubernetes
OpenShift Container Platform
Overview of OpenShift Container Platform architecture
Creating containerized services
Managing Docker containers with OpenShift Container Platform
Creating and managing container images
Deploying multi-container applications
Setting up an OpenShift Container Platform cluster
Deploying applications on OpenShift Container Platform using source-to-image (S2I)
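The source-to-image (S2I) step in the outline builds and deploys directly from a Git repository; the sketch below uses the `nodejs` builder as an example, and the repository URL is a placeholder.

```shell
# Build and deploy from source with S2I (Git URL is a placeholder)
oc new-app nodejs~https://github.com/<org>/<repo>.git --name=myapp
oc logs -f bc/myapp       # follow the S2I build
oc expose svc/myapp       # create a route so the app is reachable
oc get route myapp        # show the route's hostname
```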
A general understanding of containers (Docker) and orchestration (Kubernetes).
Some Python programming experience is helpful.
Experience working with a command line.
Audience
Data science engineers.
DevOps engineers interested in machine learning model deployment.
Infrastructure engineers interested in machine learning model deployment.
Software engineers wishing to automate the integration and deployment of machine learning features with their application.
[overview] =>
Kubeflow is a framework for running Machine Learning workloads on Kubernetes. TensorFlow is one of the most popular machine learning libraries. Kubernetes is an orchestration platform for managing containerized applications. OpenShift is a cloud application development platform that uses Docker containers, orchestrated and managed by Kubernetes, on a foundation of Red Hat Enterprise Linux.
This instructor-led, live training (online or onsite) is aimed at engineers who wish to deploy Machine Learning workloads to an OpenShift on-premise or hybrid cloud.
By the end of this training, participants will be able to:
Install and configure Kubernetes and Kubeflow on an OpenShift cluster.
Use OpenShift to simplify the work of initializing a Kubernetes cluster.
Create and deploy a Kubernetes pipeline for automating and managing ML models in production.
Train and deploy TensorFlow ML models across multiple GPUs and machines running in parallel.
Call public cloud services (e.g., AWS services) from within OpenShift to extend an ML application.
Format of the Course
Interactive lecture and discussion.
Lots of exercises and practice.
Hands-on implementation in a live-lab environment.
Course Customization Options
To request a customized training for this course, please contact us to arrange.
[category_overview] =>
This instructor-led, live training in <loc> (online or onsite) is aimed at engineers who wish to deploy Machine Learning workloads to an OpenShift on-premise or hybrid cloud.
By the end of this training, participants will be able to:
Install and configure Kubernetes and Kubeflow on an OpenShift cluster.
Use OpenShift to simplify the work of initializing a Kubernetes cluster.
Create and deploy a Kubernetes pipeline for automating and managing ML models in production.
Train and deploy TensorFlow ML models across multiple GPUs and machines running in parallel.
Call public cloud services (e.g., AWS services) from within OpenShift to extend an ML application.
[outline] =>
Introduction
Kubeflow on OpenShift vs public cloud managed services
A general understanding of containers and orchestration
System administration or devops experience
Audience
System administrators
Architects
[overview] =>
OKD is an application development platform for deploying containerized applications using Kubernetes. OKD is the upstream code base upon which Red Hat OpenShift Online and Red Hat OpenShift Container Platform are built.
In this instructor-led, live training (onsite or remote), participants will learn how to install, configure, and manage OKD on-premise or in the cloud.
By the end of this training, participants will be able to:
Create, configure, manage, and troubleshoot an OKD cluster.
Secure OKD.
Deploy containerized applications on OKD.
Monitor the performance of an application running in OKD.
Manage data storage.
Quickly deploy applications on-premise or on a public cloud such as AWS.
Format of the Course
Interactive lecture and discussion.
Lots of exercises and practice.
Hands-on implementation in a live-lab environment.
Course Customization Options
This course is based on OKD (Origin Kubernetes Distribution).
To customize the course or request training on a different version of OpenShift (e.g., OpenShift Container Platform 3 or OpenShift Container Platform 4), please contact us to arrange.
[category_overview] =>
In this instructor-led, live training in <loc> (onsite or remote), participants will learn how to install, configure, and manage OKD on-premise or in the cloud.
By the end of this training, participants will be able to:
Create, configure, manage, and troubleshoot an OKD cluster.
Secure OKD.
Deploy containerized applications on OKD.
Monitor the performance of an application running in OKD.
Manage data storage.
Quickly deploy applications on-premise or on a public cloud such as AWS.
A general understanding of containers and orchestration
Software development experience
Audience
Developers
[overview] =>
OKD is an application development platform for deploying containerized applications using Kubernetes. OKD is the upstream code base upon which Red Hat OpenShift Online and Red Hat OpenShift Container Platform are built.
In this instructor-led, live training (onsite or remote), participants will learn to create, update, and maintain containerized applications using OKD.
By the end of this training, participants will be able to:
Deploy a containerized web application to an OKD cluster on-premise or in the cloud.
Automate part of the software delivery pipeline.
Apply the principles of the DevOps philosophy to ensure continuous delivery of an application.
Format of the Course
Interactive lecture and discussion.
Lots of exercises and practice.
Hands-on implementation in a live-lab environment.
Course Customization Options
This course is based on OKD (Origin Kubernetes Distribution).
To customize the course or request training on a different version of OpenShift (e.g., OpenShift Container Platform 3 or OpenShift Container Platform 4), please contact us to arrange.
[category_overview] =>
In this instructor-led, live training in <loc> (onsite or remote), participants will learn to create, update, and maintain containerized applications using OKD.
By the end of this training, participants will be able to:
Deploy a containerized web application to an OKD cluster on-premise or in the cloud.
Automate part of the software delivery pipeline.
Apply the principles of the DevOps philosophy to ensure continuous delivery of an application.
[outline] =>
Introduction
The DevOps philosophy and Continuous Integration (CI) principles
Overview of OKD Features and Architecture
The Life Cycle of a Containerized Application
Navigating the OKD Web Console and CLI
Setting up the Development Environment
Defining a CI/CD Build Strategy
Developing an Application
Packaging an Application on Kubernetes
Running an Application in an OKD Cluster
Monitoring the Status of an Application
Debugging the Application
Updating an Application in Production
Managing Container Images
Customizing OKD with Custom Resource Definitions (CRDs)
The Moby Project is an open-source library of components for assembling custom container-based systems. It provides a “Lego set” of dozens of components, a framework for assembling them into custom container-based systems, and a place for users to experiment and exchange ideas.
In this instructor-led, live training, participants will learn how to use Moby Project to assemble specialized container systems.
By the end of this training, participants will be able to:
Assemble their own Docker engine by stripping out unnecessary components
Swap out build systems and volume management functions
Use Moby tooling to define components (OS, hypervisor, etc.), then pack them into a chosen artifact
Assemble a sample tiny OS that can be booted straight from bare metal
Audience
Developers
DevOps
System administrators
Format of the course
Part lecture, part discussion, exercises and heavy hands-on practice
Course Outline
To request a customized course outline for this training, please contact us.
Requirements
An understanding of container and virtualization concepts
This instructor-led, live training in China (online or onsite) is aimed at beginner-level to intermediate-level software developers and DevOps professionals who wish to learn how to set up and manage a local Kubernetes environment using Minikube.
By the end of this training, participants will be able to:
Install and configure Minikube on their local machine.
Understand the basic concepts and architecture of Kubernetes.
Deploy and manage containers using kubectl and the Minikube dashboard.
Set up persistent storage and networking solutions for Kubernetes.
Utilize Minikube for developing, testing, and debugging applications.
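The workflow above can be sketched with a few commands. This is a minimal illustration only, assuming minikube and kubectl are already installed locally; the deployment name and image are placeholders, not part of the course material:

```shell
# Start a single-node local Kubernetes cluster
minikube start

# Deploy a sample container and expose it on a node port
kubectl create deployment hello --image=nginx
kubectl expose deployment hello --type=NodePort --port=80

# Check that the pod is running
kubectl get pods

# Open the Minikube dashboard in a browser for visual management
minikube dashboard
```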
This instructor-led, live training in China (online or onsite) is aimed at intermediate-level developers and DevOps engineers who wish to use Minikube as a part of their development workflow.
By the end of this training, participants will be able to:
Set up and manage a local Kubernetes environment using Minikube.
Understand how to deploy, manage, and debug applications on Minikube.
Integrate Minikube into their continuous integration and deployment pipelines.
Optimize their development process using Minikube's advanced features.
Apply best practices for local Kubernetes development.
This instructor-led, live training in China (online or onsite) is aimed at engineers who wish to use Helm to streamline the process of installing and managing Kubernetes applications.
By the end of this training, participants will be able to:
Install and configure Helm.
Create reproducible builds of Kubernetes applications.
Share applications as Helm charts.
Run third-party applications saved as Helm charts.
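As a rough sketch of the Helm workflow covered above, assuming Helm 3 is installed (the repository, release, and chart names are illustrative placeholders):

```shell
# Add a public chart repository and refresh the local index
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

# Install a third-party application packaged as a chart
helm install my-nginx bitnami/nginx

# Scaffold a new chart for packaging and sharing your own application
helm create mychart
```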
This instructor-led, live training (online or onsite) is aimed at engineers who wish to run containerized applications using the CRI-O container runtime.
By the end of this training, participants will be able to:
Install and configure the CRI-O container runtime.
Pull images from a variety of OCI-compliant registries.
Run, test and manage containerized applications using CRI-O.
Format of the Course
Interactive lecture and discussion
Lots of exercises and practice
Hands-on implementation in a live-lab environment
Course Customization Options
To request a customized training for this course, please contact us to arrange.
To learn more about CRI-O, please visit: http://cri-o.io/.
This instructor-led, live training in China (online or onsite) is aimed at developers who wish to use Heroku to conveniently deploy web and mobile applications to the cloud, without grappling with infrastructure setup, configuration, management, etc.
By the end of this training, participants will be able to:
Understand the Heroku ecosystem and how it differs from AWS EC2 and other PaaS offerings.
Leverage Heroku features such as Git integration, Heroku CLI and Heroku Dashboard to push applications to the cloud with ease.
This instructor-led, live course in China provides participants with an overview of Rancher and demonstrates through hands-on practice how to deploy and manage a Kubernetes cluster with Rancher.
In this instructor-led, live training in China, we introduce the architecture, components and processes behind application virtualization and walk participants step-by-step through the deployment of App-V and App-V applications in a live lab environment. By the end of the course, participants will have knowledge and hands-on practice needed to install, configure, administer, and troubleshoot App-V as well as create, package and deploy their own App-V applications.
In this instructor-led, live training in China, we cover advanced techniques and troubleshooting for Microsoft Application Virtualization (App-V), especially in the area of sequencing and packaging.
By the end of the course, participants will have a deep understanding of App-V and be able to sequence, troubleshoot and resolve complex issues.
Apache Karaf is an OSGi based runtime for deploying containerized applications.
In this instructor-led, live training (onsite or remote), participants will learn how to set up an OSGi based project as they step through the deployment of a modular Java application using Apache Karaf.
By the end of this training, participants will be able to:
Install and configure Apache Karaf
Understand the essential features of the OSGi runtime environment
Develop a containerized application using the Apache Karaf runtime environment
Audience
Architects
Developers
Format of the Course
Part lecture, part discussion, exercises and heavy hands-on practice.
Note
To request a customized training for this course, please contact us to arrange.
This instructor-led, live training (online or onsite) is aimed at engineers who wish to deploy Machine Learning workloads to an OpenShift on-premise or hybrid cloud.
By the end of this training, participants will be able to:
Install and configure Kubernetes and Kubeflow on an OpenShift cluster.
Use OpenShift to simplify the work of initializing a Kubernetes cluster.
Create and deploy a Kubernetes pipeline for automating and managing ML models in production.
Train and deploy TensorFlow ML models across multiple GPUs and machines running in parallel.
Call public cloud services (e.g., AWS services) from within OpenShift to extend an ML application.
In this instructor-led, live training in China (onsite or remote), participants will learn how to install, configure, and manage OKD on-premise or in the cloud.
By the end of this training, participants will be able to:
Create, configure, manage, and troubleshoot an OKD cluster.
Secure OKD.
Deploy containerized applications on OKD.
Monitor the performance of an application running in OKD.
Manage data storage.
Quickly deploy applications on-premise or on a public cloud such as AWS.
In this instructor-led, live training in China (onsite or remote), participants will learn to create, update, and maintain containerized applications using OKD.
By the end of this training, participants will be able to:
Deploy a containerized web application to an OKD cluster on-premise or in the cloud.
Automate part of the software delivery pipeline.
Apply the principles of the DevOps philosophy to ensure continuous delivery of an application.