Understanding Deep Neural Networks Training

Course Code

undnn

Course Duration

35 hours (usually 5 days, including breaks).

Requirements

Background in physics, mathematics and programming. Involvement in image-processing activities.

Delegates should have a prior understanding of machine learning concepts and hands-on experience with Python programming and its libraries.

Course Overview

This course begins with conceptual knowledge of neural networks and then covers machine learning algorithms and deep learning (algorithms and applications).

Part 1 of this training (40%) focuses more on the fundamentals, but will help you choose the right technology: TensorFlow, Caffe, Theano, DeepDrive, Keras, etc.

Part 2 of this training (20%) introduces Theano, a Python library that makes it easy to write deep learning models.

Part 3 (40%) of the training is broadly based on TensorFlow, the second-generation API of Google's open-source Deep Learning software library. The examples and hands-on exercises are all done in TensorFlow.

Audience

This course is aimed at engineers who want to use TensorFlow for their Deep Learning projects.

After completing this course, delegates will:

  • have a solid understanding of deep neural networks (DNNs), CNNs and RNNs

  • understand TensorFlow's structure and deployment mechanisms

  • be able to carry out installation, production-environment and architecture tasks and configuration

  • be able to assess code quality and perform debugging and monitoring

  • be able to implement advanced production practices such as training models, building graphs and logging


Course Outline

Part 1 – Deep Learning and DNN Concepts


Introduction to AI, Machine Learning & Deep Learning

  • History, basic concepts and common applications of artificial intelligence, far from the fantasies carried by this domain

  • Collective Intelligence: aggregating knowledge shared by many virtual agents

  • Genetic algorithms: to evolve a population of virtual agents by selection

  • Classical Machine Learning: definition.

  • Types of tasks: supervised learning, unsupervised learning, reinforcement learning

  • Types of actions: classification, regression, clustering, density estimation, reduction of dimensionality

  • Examples of Machine Learning algorithms: Linear regression, Naive Bayes, Random Tree

  • Machine Learning vs. Deep Learning: problems on which Machine Learning remains the state of the art today (Random Forests & XGBoost); see the short example after this list
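
As a quick point of reference for the classical algorithms listed above, the following sketch (assuming scikit-learn, one of the Python libraries implied by the prerequisites; the synthetic dataset and model choices are illustrative only) fits a linear model and a Random Forest on tabular data, the kind of problem where classical Machine Learning often remains competitive:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Synthetic tabular classification data (stand-in for a real dataset).
    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Fit a linear model and an ensemble of trees, compare held-out accuracy.
    for model in (LogisticRegression(max_iter=1000), RandomForestClassifier()):
        model.fit(X_tr, y_tr)
        print(type(model).__name__, model.score(X_te, y_te))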


 

Basic Concepts of a Neural Network (Application: multi-layer perceptron)

  • Reminder of mathematical bases.

  • Definition of a neural network: classical architecture, activation and weighting of previous activations, depth of a network

  • Definition of the learning of a neural network: cost functions, back-propagation, stochastic gradient descent, maximum likelihood (illustrated in the sketch after this list).

  • Modeling of a neural network: modeling input and output data according to the type of problem (regression, classification ...). Curse of dimensionality.

  • Distinction between Multi-feature data and signal. Choice of a cost function according to the data.

  • Approximation of a function by a network of neurons: presentation and examples

  • Approximation of a distribution by a network of neurons: presentation and examples

  • Data Augmentation: how to balance a dataset

  • Generalization of the results of a network of neurons.

  • Initialization and regularization of a neural network: L1 / L2 regularization, Batch Normalization

  • Optimization and convergence algorithms
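
To make the back-propagation and gradient-descent bullets above concrete, here is a minimal NumPy sketch of a two-layer perceptron trained on toy data. The layer sizes, sigmoid activations and squared-error cost are illustrative assumptions, and full-batch gradient descent is used for brevity where minibatch SGD would be used in practice:

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data: 4 input features, binary target derived from their sum.
    X = rng.standard_normal((256, 4))
    y = (X.sum(axis=1, keepdims=True) > 0).astype(float)

    # Two-layer perceptron: 4 -> 8 -> 1, sigmoid activations.
    W1, b1 = rng.standard_normal((4, 8)) * 0.1, np.zeros(8)
    W2, b2 = rng.standard_normal((8, 1)) * 0.1, np.zeros(1)
    lr = 0.5

    for epoch in range(500):
        # Forward pass.
        h = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))
        out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
        # Backward pass: propagate the gradient of the squared-error cost.
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        # Gradient-descent parameter updates.
        W2 -= lr * h.T @ d_out / len(X)
        b2 -= lr * d_out.mean(axis=0)
        W1 -= lr * X.T @ d_h / len(X)
        b1 -= lr * d_h.mean(axis=0)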


 

Standard ML / DL Tools

A brief presentation of each tool is planned, covering its advantages, disadvantages, position in the ecosystem and typical use.

  • Data management tools: Apache Spark, Apache Hadoop Tools

  • Machine Learning: NumPy, SciPy, scikit-learn

  • DL high level frameworks: PyTorch, Keras, Lasagne

  • Low level DL frameworks: Theano, Torch, Caffe, Tensorflow


 

Convolutional Neural Networks (CNN).

  • Presentation of the CNNs: fundamental principles and applications

  • Basic operation of a CNN: convolutional layer, use of a kernel, padding & stride, feature map generation, pooling layers. 1D, 2D and 3D extensions. (A feature-map size sketch follows this list.)

  • Presentation of the different CNN architectures that brought the state of the art in image classification: LeNet, VGG Networks, Network in Network, Inception, ResNet. Presentation of the innovations brought by each architecture and their more global applications (1x1 convolution or residual connections)

  • Use of an attention model.

  • Application to a common classification case (text or image)

  • CNNs for generation: super-resolution, pixel-to-pixel segmentation. Presentation of the main strategies for increasing feature maps for image generation.


 

Recurrent Neural Networks (RNN).

  • Presentation of RNNs: fundamental principles and applications.

  • Basic operation of the RNN: hidden activation, back propagation through time, Unfolded version.

  • Evolutions towards the Gated Recurrent Units (GRUs) and LSTM (Long Short Term Memory).

  • Presentation of the different states and the evolutions brought by these architectures

  • Convergence and vanishing gradient problems

  • Classical architectures: prediction of a time series, classification ... (a minimal sequence-classification sketch follows this list)

  • RNN Encoder Decoder type architecture. Use of an attention model.

  • NLP applications: word / character encoding, translation.

  • Video Applications: prediction of the next generated image of a video sequence.
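
As a concrete counterpart to the classification bullet above, here is a minimal sequence-classification sketch using the Keras API (assumed available as tf.keras in the course environment; the sequence length, feature count and layer sizes are illustrative):

    from tensorflow import keras

    # Binary classification of 50-step sequences with 16 features per step.
    model = keras.Sequential([
        keras.layers.LSTM(64, input_shape=(50, 16)),   # hidden state carried through time
        keras.layers.Dense(1, activation="sigmoid"),   # prediction from the final state
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    # model.fit(x_train, y_train, epochs=5)  # x_train: (n, 50, 16), y_train: (n,)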


Generative models: Variational AutoEncoder (VAE) and Generative Adversarial Networks (GAN).

  • Presentation of generative models, link with CNNs

  • Auto-encoder: reduction of dimensionality and limited generation

  • Variational Auto-encoder: generative model and approximation of the distribution of the data. Definition and use of the latent space. Reparameterization trick (see the sketch after this list). Applications and observed limits.

  • Generative Adversarial Networks: Fundamentals.

  • Dual Network Architecture (Generator and discriminator) with alternate learning, cost functions available.

  • Convergence of a GAN and difficulties encountered.

  • Improved convergence: Wasserstein GAN, BEGAN. Earth Mover's Distance.

  • Applications for the generation of images or photographs, text generation, super-resolution.
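
The reparameterization trick mentioned above can be stated in a few lines. This NumPy sketch (with hypothetical encoder outputs for a 2-D latent space) samples z = mu + sigma * eps, so that gradients can flow through mu and log_var while the noise eps stays parameter-free:

    import numpy as np

    def reparameterize(mu, log_var, rng):
        """Sample z ~ N(mu, sigma^2) as a differentiable function of (mu, log_var)."""
        eps = rng.standard_normal(mu.shape)          # parameter-free noise
        return mu + np.exp(0.5 * log_var) * eps      # sigma = exp(log_var / 2)

    # Hypothetical encoder outputs for a 2-D latent space; z is then decoded.
    rng = np.random.default_rng(0)
    mu = np.array([0.3, -1.2])
    log_var = np.array([-0.5, 0.1])
    z = reparameterize(mu, log_var, rng)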

Deep Reinforcement Learning.

  • Presentation of reinforcement learning: control of an agent in an environment defined by a state and possible actions

  • Use of a neural network to approximate the state function

  • Deep Q Learning: experience replay, and application to the control of a video game (the underlying Q-learning update is sketched after this list).

  • Optimization of the learning policy. On-policy vs. off-policy. Actor-critic architecture. A3C.

  • Applications: control of a single video game or a digital system.
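
Before approximating the state-action value function with a network, it helps to see the tabular Q-learning update that Deep Q Learning builds on. A minimal sketch follows; the state/action counts and hyperparameters are arbitrary placeholders for a real environment:

    import numpy as np

    n_states, n_actions = 16, 4           # placeholders for a real environment
    alpha, gamma, epsilon = 0.1, 0.99, 0.1
    Q = np.zeros((n_states, n_actions))
    rng = np.random.default_rng(0)

    def choose_action(state):
        """Epsilon-greedy policy over the current Q estimates."""
        if rng.random() < epsilon:
            return int(rng.integers(n_actions))
        return int(np.argmax(Q[state]))

    def q_update(state, action, reward, next_state):
        """Temporal-difference step toward reward + gamma * max_a Q(next_state, a)."""
        target = reward + gamma * np.max(Q[next_state])
        Q[state, action] += alpha * (target - Q[state, action])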

 

Part 2 – Theano for Deep Learning

Theano Basics

  • Introduction

  • Installation and Configuration

Theano Functions

  • inputs, outputs, updates, givens (illustrated in the sketch below)
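
A minimal sketch of these four arguments, assuming a working Theano installation (the variable names are illustrative):

    import numpy as np
    import theano
    import theano.tensor as T

    state = theano.shared(np.float64(0.0), name="state")   # mutable shared storage
    x = T.dscalar("x")

    # inputs / outputs / updates: each call returns state, then adds x to it.
    accumulate = theano.function(inputs=[x], outputs=state,
                                 updates=[(state, state + x)])

    # givens: substitute a shared default for x instead of feeding it at call time.
    default_x = theano.shared(np.float64(1.0), name="default_x")
    add_default = theano.function(inputs=[], outputs=state + x,
                                  givens={x: default_x})

    accumulate(2.0)       # state is now 2.0
    print(add_default())  # 3.0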

Training and Optimization of a neural network using Theano

  • Neural Network Modeling

  • Logistic Regression

  • Hidden Layers

  • Training a network

  • Computing and Classification

  • Optimization

  • Log Loss

Testing the model
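
Putting the items of this block together, here is a compact Theano sketch of a logistic-regression layer trained by SGD on the log loss, with a prediction function for testing. The input/output sizes, learning rate and the random stand-in data are assumptions for illustration:

    import numpy as np
    import theano
    import theano.tensor as T

    rng = np.random.default_rng(0)
    n_in, n_out, lr = 784, 10, 0.1

    # Symbolic inputs: a minibatch of feature vectors and integer class labels.
    x = T.dmatrix("x")
    y = T.ivector("y")

    # Shared parameters, updated in place by the training function.
    W = theano.shared(np.zeros((n_in, n_out)), name="W")
    b = theano.shared(np.zeros(n_out), name="b")

    # Model: softmax output and negative log-likelihood (log loss).
    p_y = T.nnet.softmax(T.dot(x, W) + b)
    nll = -T.mean(T.log(p_y)[T.arange(y.shape[0]), y])

    # Gradients and SGD updates.
    gW, gb = T.grad(nll, [W, b])
    train = theano.function(inputs=[x, y], outputs=nll,
                            updates=[(W, W - lr * gW), (b, b - lr * gb)])
    predict = theano.function(inputs=[x], outputs=T.argmax(p_y, axis=1))

    # One toy training step on random stand-in data.
    loss = train(rng.standard_normal((32, n_in)),
                 rng.integers(0, n_out, 32).astype("int32"))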


Part 3 – DNN using Tensorflow

TensorFlow Basics

  • Creating, Initializing, Saving, and Restoring TensorFlow variables

  • Feeding, Reading and Preloading TensorFlow Data

  • How to use TensorFlow infrastructure to train models at scale

  • Visualizing and Evaluating models with TensorBoard
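
A minimal sketch tying these items together, assuming the TensorFlow 1.x API (sessions, placeholders) that this outline is written against; the paths and shapes are arbitrary examples:

    import tensorflow as tf  # assumes TensorFlow 1.x

    # Create a variable and a placeholder, and a small graph combining them.
    w = tf.Variable(tf.zeros([3]), name="w")
    x = tf.placeholder(tf.float32, shape=[3], name="x")
    y = tf.reduce_sum(w + x)

    saver = tf.train.Saver()
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())           # initializing
        print(sess.run(y, feed_dict={x: [1.0, 2.0, 3.0]}))    # feeding
        saver.save(sess, "/tmp/model.ckpt")                   # saving (restore with saver.restore)
        tf.summary.FileWriter("/tmp/logs", sess.graph)        # graph viewable in TensorBoard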

TensorFlow Mechanics

  • Prepare the Data

  • Download

  • Inputs and Placeholders

  • Build the Graph

    • Inference

    • Loss

    • Training

  • Train the Model

    • The Graph

    • The Session

    • Train Loop

  • Evaluate the Model

    • Build the Eval Graph

    • Eval Output
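
The mechanics above can be compressed into one hedged sketch, again assuming the TensorFlow 1.x graph/session API; the layer sizes, optimizer and random stand-in batches are illustrative only:

    import numpy as np
    import tensorflow as tf  # assumes TensorFlow 1.x

    # Inputs and placeholders (hypothetical 784-feature inputs, 10 classes).
    images = tf.placeholder(tf.float32, [None, 784])
    labels = tf.placeholder(tf.int64, [None])

    # Build the graph: inference, loss, training op.
    logits = tf.layers.dense(images, 10)
    loss = tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits))
    train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)

    # Eval graph: fraction of correct predictions.
    correct = tf.reduce_mean(tf.cast(tf.equal(tf.argmax(logits, 1), labels), tf.float32))

    # The session and train loop, on random stand-in batches.
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for step in range(100):
            batch_x = np.random.rand(32, 784).astype(np.float32)
            batch_y = np.random.randint(0, 10, 32)
            _, l = sess.run([train_op, loss], feed_dict={images: batch_x, labels: batch_y})
        print(sess.run(correct, feed_dict={images: batch_x, labels: batch_y}))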

The Perceptron

  • Activation functions

  • The perceptron learning algorithm

  • Binary classification with the perceptron

  • Document classification with the perceptron

  • Limitations of the perceptron
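
A minimal NumPy sketch of the perceptron learning algorithm for binary classification (the toy AND-style dataset and hyperparameters are illustrative):

    import numpy as np

    def perceptron_train(X, y, epochs=10, lr=1.0):
        """Perceptron learning rule for labels y in {-1, +1}; X has shape (n, d)."""
        w = np.zeros(X.shape[1])
        b = 0.0
        for _ in range(epochs):
            for xi, yi in zip(X, y):
                if yi * (np.dot(w, xi) + b) <= 0:   # misclassified: update
                    w += lr * yi * xi
                    b += lr * yi
        return w, b

    # Toy linearly separable data (an AND-style problem).
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([-1, -1, -1, 1])
    w, b = perceptron_train(X, y)
    preds = np.sign(X @ w + b)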

From the Perceptron to Support Vector Machines

  • Kernels and the kernel trick

  • Maximum margin classification and support vectors
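
To illustrate the kernel trick and support vectors, a short sketch with scikit-learn (a library assumed from the prerequisites; the synthetic dataset is illustrative):

    from sklearn.datasets import make_circles
    from sklearn.svm import SVC

    # Concentric circles are not linearly separable in the input space; an RBF
    # kernel implicitly maps them to a space where a maximum-margin separator exists.
    X, y = make_circles(n_samples=400, factor=0.3, noise=0.05, random_state=0)
    clf = SVC(kernel="rbf", C=1.0).fit(X, y)
    print(clf.score(X, y), len(clf.support_vectors_))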

Artificial Neural Networks

  • Nonlinear decision boundaries

  • Feedforward and feedback artificial neural networks

  • Multilayer perceptrons

  • Minimizing the cost function

  • Forward propagation

  • Back propagation

  • Improving the way neural networks learn

Convolutional Neural Networks

  • Goals

  • Model Architecture

  • Principles

  • Code Organization

  • Launching and Training the Model

  • Evaluating a Model
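
A minimal low-level sketch of one convolutional block in TensorFlow 1.x (kernel sizes, channel counts and input shape are illustrative), showing the kind of operations such a model is built from:

    import tensorflow as tf  # assumes TensorFlow 1.x, matching the rest of Part 3

    # One convolutional block: conv -> ReLU -> max pool, for NHWC images.
    images = tf.placeholder(tf.float32, [None, 28, 28, 1])
    kernel = tf.Variable(tf.truncated_normal([5, 5, 1, 32], stddev=0.1))
    bias = tf.Variable(tf.zeros([32]))

    conv = tf.nn.conv2d(images, kernel, strides=[1, 1, 1, 1], padding="SAME")
    relu = tf.nn.relu(conv + bias)
    pool = tf.nn.max_pool(relu, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding="SAME")
    # pool has shape [batch, 14, 14, 32]; flatten and add dense layers to classify.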


 

Basic introductions to the modules below (brief coverage to be provided depending on time availability):

Tensorflow - Advanced Usage

  • Threading and Queues

  • Distributed TensorFlow

  • Writing Documentation and Sharing your Model

  • Customizing Data Readers

  • Manipulating TensorFlow Model Files


TensorFlow Serving

  • Introduction

  • Basic Serving Tutorial

  • Advanced Serving Tutorial

  • Serving Inception Model Tutorial
