Deep Learning Training


Deep machine learning, deep structured learning, hierarchical learning, DL, deep learning training

Client Testimonials

Introduction to Deep Learning

The topic is very interesting

Wojciech Baranowski - Dolby Poland Sp. z o.o.

Introduction to Deep Learning

The trainer's theoretical knowledge and his willingness to solve problems with the participants after the training

Grzegorz Mianowski - Dolby Poland Sp. z o.o.

Introduction to Deep Learning

Topic. Very interesting!

Piotr - Dolby Poland Sp. z o.o.

Introduction to Deep Learning

The exercises after each topic were really helpful, although they became too complicated toward the end. In general, the presented material was very interesting and engaging! The exercises with image recognition were great.

- Dolby Poland Sp. z o.o.

Advanced Deep Learning

The global overview of deep learning

Bruno Charbonnier - OSONES

Advanced Deep Learning

The exercises are sufficiently practical and do not require advanced knowledge of Python.

Alexandre GIRARD - OSONES

Advanced Deep Learning

Doing exercises on real examples using Keras. Mihaly totally understood our expectations about this training.

Paul Kassis - OSONES

Introduction to Deep Learning

Interesting subject

Wojciech Wilk - Dolby Poland Sp. z o.o.

Neural Networks Fundamentals using TensorFlow as Example

Knowledgeable trainer

Sridhar Voorakkara - INTEL R&D IRELAND LIMITED

Neural Networks Fundamentals using TensorFlow as Example

I was amazed at the standard of this class - I would say that it was university standard.

David Relihan - INTEL R&D IRELAND LIMITED

Neural Networks Fundamentals using TensorFlow as Example

Very good all-round overview. Good background into why TensorFlow operates as it does.

Kieran Conboy - INTEL R&D IRELAND LIMITED

Neural Networks Fundamentals using TensorFlow as Example

I liked the opportunities to ask questions and get more in depth explanations of the theory.

Sharon Ruane - INTEL R&D IRELAND LIMITED

Introduction to Deep Learning

The deep knowledge of the trainer about the topic.

Sebastian Görg - FANUC Europe Corporation

TensorFlow for Image Recognition

Very up-to-date approach and APIs (TensorFlow, Keras, TFLearn) for doing machine learning

Paul Lee - Hong Kong Productivity Council

Neural Networks Fundamentals using TensorFlow as Example

The outlook given for the technology: which technologies and processes might become more important in the future, and seeing what the technology can be used for

Commerzbank AG

Neural Networks Fundamentals using TensorFlow as Example

Topic selection. Style of training. Practice orientation

Commerzbank AG


Other Course Categories

Deep Learning Course Outlines

Code  Name  Duration  Overview
openface OpenFace: Creating Facial Recognition Systems 14 hours OpenFace is Python- and Torch-based open-source, real-time facial recognition software based on Google's FaceNet research. In this instructor-led, live training, participants will learn how to use OpenFace's components to create and deploy a sample facial recognition application. By the end of this training, participants will be able to: Work with OpenFace's components, including dlib, OpenCV, Torch, and nn4 to implement face detection, alignment, and transformation. Apply OpenFace to real-world applications such as surveillance, identity verification, virtual reality, gaming, and identifying repeat customers. Audience Developers Data scientists Format of the course Part lecture, part discussion, exercises and heavy hands-on practice To request a customized course outline for this training, please contact us.
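
The pipeline described above (detect, align, transform, then embed) can be sketched with OpenFace's Python API. This is a minimal, hedged sketch assuming the standard OpenFace demo API (AlignDlib, TorchNeuralNet) and locally downloaded model files; the file paths are hypothetical examples, not part of the course material.

```python
# Minimal sketch of an OpenFace embedding pipeline (assumes the OpenFace Python
# API from its demo scripts; model file paths below are hypothetical).
import cv2
import openface

align = openface.AlignDlib("models/dlib/shape_predictor_68_face_landmarks.dat")
net = openface.TorchNeuralNet("models/openface/nn4.small2.v1.t7", imgDim=96)

def embed(image_path):
    """Return the 128-dimensional OpenFace embedding of the largest face."""
    bgr = cv2.imread(image_path)
    rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)
    bb = align.getLargestFaceBoundingBox(rgb)            # face detection
    face = align.align(96, rgb, bb,                      # alignment to 96x96
                       landmarkIndices=openface.AlignDlib.OUTER_EYES_AND_NOSE)
    return net.forward(face)                             # embedding via nn4

# Two images of the same person should give embeddings with a small L2 distance.
```
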
t2t T2T: Creating Sequence to Sequence models for generalized learning 7 hours Tensor2Tensor (T2T) is a modular, extensible library for training AI models on different tasks, using different types of training data, for example: image recognition, translation, parsing, image captioning, and speech recognition. It is maintained by the Google Brain team. In this instructor-led, live training, participants will learn how to prepare a deep-learning model to resolve multiple tasks. By the end of this training, participants will be able to: Install Tensor2Tensor, select a data set, and train and evaluate an AI model Customize a development environment using the tools and components included in Tensor2Tensor Create and use a single model to concurrently learn a number of tasks from multiple domains Use the model to learn from tasks with a large amount of training data and apply that knowledge to tasks where data is limited Obtain satisfactory processing results using a single GPU Audience Developers Data scientists Format of the course Part lecture, part discussion, exercises and heavy hands-on practice To request a customized course outline for this training, please contact us.
radvml Advanced Machine Learning with R 21 hours In this instructor-led, live training, participants will learn advanced techniques for Machine Learning with R as they step through the creation of a real-world application. By the end of this training, participants will be able to: Use techniques such as hyper-parameter tuning and deep learning Understand and implement unsupervised learning techniques Put a model into production for use in a larger application Audience Developers Analysts Data scientists Format of the course Part lecture, part discussion, exercises and heavy hands-on practice To request a customized course outline for this training, please contact us.
deeplrn Deep Learning Fundamentals and Practice 14 hours This course is a general overview of Deep Learning without going too deep into any specific methods. It is suitable for people who want to start using Deep Learning to enhance the accuracy of their predictions. Lesson 1: Essential deep learning fundamentals, part 1 Lesson 2: Essential deep learning fundamentals, part 2 Lesson 3: Neural networks Lesson 4: Convolutional neural networks Lesson 5: Convolutional neural network examples and tricks Lesson 6: Common computer vision tasks and object recognition Lesson 7: Reading papers from top conferences Lesson 8: Introduction to deep learning frameworks Lesson 9: Advanced use of the Caffe framework Lesson 10: Deep learning in practice - face detection Lesson 11: Hands-on with the TensorFlow framework Lesson 12: A CAPTCHA recognition case study
pythonadvml Python for Advanced Machine Learning 21 hours In this instructor-led, live training, participants will learn the most relevant and cutting-edge machine learning techniques in Python as they build a series of demo applications involving image, music, text, and financial data. By the end of this training, participants will be able to: Implement machine learning algorithms and techniques for solving complex problems Apply deep learning and semi-supervised learning to applications involving image, music, text, and financial data Push Python algorithms to their maximum potential Use libraries and packages such as NumPy and Theano Audience Developers Analysts Data scientists Format of the course Part lecture, part discussion, exercises and heavy hands-on practice To request a customized course outline for this training, please contact us.
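
Since the course lists Theano among its libraries, here is a minimal, hedged sketch of Theano's symbolic-gradient workflow; the objective function is a toy example chosen only for illustration.

```python
# Minimal sketch of symbolic differentiation with Theano; the quadratic
# objective and the step size 0.1 are toy choices for illustration only.
import theano
import theano.tensor as T

x = T.dvector('x')
loss = T.sum(x ** 2)                          # toy objective
grad = T.grad(loss, x)                        # symbolic gradient
step = theano.function([x], x - 0.1 * grad)   # one gradient-descent step

v = [3.0, -2.0]
for _ in range(50):
    v = step(v)                               # v converges toward the minimum at 0
print(v)
```
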
deeplearning1 Introduction to Deep Learning 21 hours This course is a general overview of Deep Learning without going too deep into any specific methods. It is suitable for people who want to start using Deep Learning to enhance the accuracy of their predictions. Backprop, modular models Logsum module RBF Net MAP/MLE loss Parameter Space Transforms Convolutional Module Gradient-Based Learning Energy for inference, Objective for learning PCA; NLL: Latent Variable Models Probabilistic LVM Loss Function Handwriting recognition
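
To make the "Backprop, modular models" and "Gradient-Based Learning" topics above concrete, here is an illustrative NumPy-only sketch of backpropagation for a one-hidden-layer network; the data, shapes, and learning rate are invented toy assumptions, not material from the course.

```python
# Illustrative NumPy backpropagation for a tiny one-hidden-layer network.
# Data, layer sizes, and hyperparameters are toy assumptions.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))                         # 64 samples, 3 features
y = (X.sum(axis=1, keepdims=True) > 0).astype(float) # toy binary labels

W1, b1 = rng.normal(scale=0.1, size=(3, 8)), np.zeros(8)
W2, b2 = rng.normal(scale=0.1, size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(500):
    # forward pass
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # backward pass (gradient of the binary cross-entropy loss)
    dp = (p - y) / len(X)
    dW2, db2 = h.T @ dp, dp.sum(axis=0)
    dh = dp @ W2.T * (1 - h ** 2)                    # tanh derivative
    dW1, db1 = X.T @ dh, dh.sum(axis=0)
    # gradient-descent update
    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= 0.5 * grad
```
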
singa Mastering Apache SINGA 21 hours SINGA is a general distributed deep learning platform for training big deep learning models over large datasets. It is designed with an intuitive programming model based on the layer abstraction. A variety of popular deep learning models are supported, namely feed-forward models including convolutional neural networks (CNN), energy models like restricted Boltzmann machines (RBM), and recurrent neural networks (RNN). Many built-in layers are provided for users. The SINGA architecture is sufficiently flexible to run synchronous, asynchronous and hybrid training frameworks. SINGA also supports different neural net partitioning schemes to parallelize the training of large models, namely partitioning on the batch dimension, the feature dimension, or hybrid partitioning. Audience This course is directed at researchers, engineers and developers seeking to utilize Apache SINGA as a deep learning framework. After completing this course, delegates will: understand SINGA's structure and deployment mechanisms be able to carry out installation / production environment / architecture tasks and configuration be able to assess code quality, perform debugging, monitoring be able to implement advanced production tasks such as training models, embedding terms, building graphs and logging Introduction Installation Quick Start Programming NeuralNet Layer Param TrainOneBatch Updater Distributed Training Data Preparation Checkpoint and Resume Python Binding Performance test and Feature extraction Training on GPU Examples Feed-forward models CNN MLP RBM + Auto-encoder Vanilla RNN for language modelling Char-RNN
Fairseq Fairseq: Setting up a CNN-based machine translation system 7 hours Fairseq is an open-source sequence-to-sequence learning toolkit created by Facebook for use in Neural Machine Translation (NMT). In this training, participants will learn how to use Fairseq to carry out translation of sample content. By the end of this training, participants will have the knowledge and practice needed to implement a live Fairseq-based machine translation solution. Source and target language content samples can be prepared according to the audience's requirements. Audience Localization specialists with a technical background Global content managers Localization engineers Software developers in charge of implementing global content solutions Format of the course Part lecture, part discussion, heavy hands-on practice Introduction Why Neural Machine Translation? Overview of the Torch project Overview of a Convolutional Neural Machine Translation model Convolutional Sequence to Sequence Learning Convolutional Encoder Model for Neural Machine Translation Standard LSTM-based model Overview of training approaches About GPUs and CPUs Fast beam search generation Installation and setup Evaluating pre-trained models Preprocessing your data Training the model Translating Converting a trained model to use CPU-only operations Joining the community Closing remarks
dl4j Mastering Deeplearning4j 21 hours Deeplearning4j is the first commercial-grade, open-source, distributed deep-learning library written for Java and Scala. Integrated with Hadoop and Spark, DL4J is designed to be used in business environments on distributed GPUs and CPUs. Audience This course is directed at engineers and developers seeking to utilize Deeplearning4j in their projects. After this course, delegates will be able to: Getting Started Quickstart: Running Examples and DL4J in Your Projects Comprehensive Setup Guide Introduction to Neural Networks Restricted Boltzmann Machines Convolutional Nets (ConvNets) Long Short-Term Memory Units (LSTMs) Denoising Autoencoders Recurrent Nets and LSTMs Multilayer Neural Nets Deep-Belief Network Deep AutoEncoder Stacked Denoising Autoencoders Tutorials Using Recurrent Nets in DL4J MNIST DBN Tutorial Iris Flower Tutorial Canova: Vectorization Lib for ML Tools Neural Net Updaters: SGD, Adam, Adagrad, Adadelta, RMSProp Datasets Datasets and Machine Learning Custom Datasets CSV Data Uploads Scaleout Iterative Reduce Defined Multiprocessor / Clustering Running Worker Nodes Text DL4J's NLP Framework Word2vec for Java and Scala Textual Analysis and DL Bag of Words Sentence and Document Segmentation Tokenization Vocab Cache Advanced DL4J Build Locally From Master Contribute to DL4J (Developer Guide) Choose a Neural Net Use the Maven Build Tool Vectorize Data With Canova Build a Data Pipeline Run Benchmarks Configure DL4J in Ivy, Gradle, SBT etc. Find a DL4J Class or Method Save and Load Models Interpret Neural Net Output Visualize Data with t-SNE Swap CPUs for GPUs Customize an Image Pipeline Perform Regression With Neural Nets Troubleshoot Training & Select Network Hyperparameters Visualize, Monitor and Debug Network Learning Speed Up Spark With Native Binaries Build a Recommendation Engine With DL4J Use Recurrent Networks in DL4J Build Complex Network Architectures with Computation Graph Train Networks using Early Stopping Download Snapshots With Maven Customize a Loss Function
dladv Advanced Deep Learning 28 hours Machine Learning Limitations Machine Learning, Non-linear mappings Neural Networks Non-Linear Optimization, Stochastic/MiniBatch Gradient Descent Back Propagation Deep Sparse Coding Sparse Autoencoders (SAE) Convolutional Neural Networks (CNNs) Successes: Descriptor Matching Stereo-based Obstacle Avoidance for Robotics Pooling and invariance Visualization/Deconvolutional Networks Recurrent Neural Networks (RNNs) and their optimization Applications to NLP RNNs continued, Hessian-Free Optimization Language analysis: word/sentence vectors, parsing, sentiment analysis, etc. Probabilistic Graphical Models Hopfield Nets, Boltzmann machines, Restricted Boltzmann Machines Hopfield Networks, (Restricted) Boltzmann Machines Deep Belief Nets, Stacked RBMs Applications to NLP, Pose and Activity Recognition in Videos Recent Advances Large-Scale Learning Neural Turing Machines
Neuralnettf Neural Networks Fundamentals using TensorFlow as Example 28 hours This course will give you knowledge of neural networks and, more generally, of machine learning and deep learning algorithms and applications. This training is more focused on fundamentals, but will help you choose the right technology: TensorFlow, Caffe, Theano, DeepDrive, Keras, etc. The examples are made in TensorFlow. TensorFlow Basics Creation, Initializing, Saving, and Restoring TensorFlow variables Feeding, Reading and Preloading TensorFlow Data How to use TensorFlow infrastructure to train models at scale Visualizing and Evaluating models with TensorBoard TensorFlow Mechanics Inputs and Placeholders Build the Graph Inference Loss Training Train the Model The Graph The Session Train Loop Evaluate the Model Build the Eval Graph Eval Output The Perceptron Activation functions The perceptron learning algorithm Binary classification with the perceptron Document classification with the perceptron Limitations of the perceptron From the Perceptron to Support Vector Machines Kernels and the kernel trick Maximum margin classification and support vectors Artificial Neural Networks Nonlinear decision boundaries Feedforward and feedback artificial neural networks Multilayer perceptrons Minimizing the cost function Forward propagation Back propagation Improving the way neural networks learn Convolutional Neural Networks Goals Model Architecture Principles Code Organization Launching and Training the Model Evaluating a Model
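
The "Creation, Initializing, Saving, and Restoring TensorFlow variables" topic above can be illustrated with the TensorFlow 1.x API used throughout the course examples; the variable shapes and checkpoint path below are arbitrary examples of mine, not course material.

```python
# Sketch of creating, initializing, saving and restoring TensorFlow 1.x
# variables; shapes and the checkpoint path are arbitrary examples.
import tensorflow as tf

weights = tf.Variable(tf.random_normal([784, 10]), name="weights")
bias = tf.Variable(tf.zeros([10]), name="bias")
saver = tf.train.Saver()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())   # initialize all variables
    saver.save(sess, "/tmp/model.ckpt")            # write a checkpoint

with tf.Session() as sess:
    saver.restore(sess, "/tmp/model.ckpt")         # reload the saved values
    print(sess.run(bias))
```
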
tf101 Deep Learning with TensorFlow 21 hours TensorFlow is a 2nd Generation API of Google's open source software library for Deep Learning. The system is designed to facilitate research in machine learning, and to make it quick and easy to transition from research prototype to production system. Audience This course is intended for engineers seeking to use TensorFlow for their Deep Learning projects. After completing this course, delegates will: understand TensorFlow's structure and deployment mechanisms be able to carry out installation / production environment / architecture tasks and configuration be able to assess code quality, perform debugging, monitoring be able to implement advanced production like training models, building graphs and logging Machine Learning and Recursive Neural Networks (RNN) basics NN and RNN Backpropagation Long short-term memory (LSTM) TensorFlow Basics Creation, Initializing, Saving, and Restoring TensorFlow variables Feeding, Reading and Preloading TensorFlow Data How to use TensorFlow infrastructure to train models at scale Visualizing and Evaluating models with TensorBoard TensorFlow Mechanics 101 Prepare the Data Download Inputs and Placeholders Build the Graph Inference Loss Training Train the Model The Graph The Session Train Loop Evaluate the Model Build the Eval Graph Eval Output Advanced Usage Threading and Queues Distributed TensorFlow Writing Documentation and Sharing your Model Customizing Data Readers Using GPUs¹ Manipulating TensorFlow Model Files TensorFlow Serving Introduction Basic Serving Tutorial Advanced Serving Tutorial Serving Inception Model Tutorial ¹ The Advanced Usage topic, "Using GPUs", is not available as a part of a remote course. This module can be delivered during classroom-based courses, but only by prior agreement, and only if both the trainer and all participants have laptops with supported NVIDIA GPUs, with 64-bit Linux installed (not provided by NobleProg). NobleProg cannot guarantee the availability of trainers with the required hardware.
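
The "Inputs and Placeholders / Loss / Train Loop" steps in the TensorFlow Mechanics outline above can be sketched with a toy linear-regression graph in TensorFlow 1.x; the synthetic data and hyperparameters are my own illustrative assumptions.

```python
# Minimal TensorFlow 1.x sketch: placeholders, a loss, and a train loop,
# fitted on synthetic linear data (all values are toy assumptions).
import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 1])
y = tf.placeholder(tf.float32, shape=[None, 1])
w = tf.Variable(tf.zeros([1, 1]))
b = tf.Variable(tf.zeros([1]))

pred = tf.matmul(x, w) + b                                  # inference
loss = tf.reduce_mean(tf.square(pred - y))                  # loss
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)  # training

xs = np.random.rand(100, 1).astype(np.float32)
ys = 3.0 * xs + 1.0                                         # true slope 3, bias 1

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(1000):                                # train loop
        sess.run(train_op, feed_dict={x: xs, y: ys})
    print(sess.run([w, b]))                                 # approaches [[3.0]], [1.0]
```
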
tensorflowserving TensorFlow Serving 7 hours TensorFlow Serving is a system for serving machine learning (ML) models to production. In this instructor-led, live training, participants will learn how to configure and use TensorFlow Serving to deploy and manage ML models in a production environment. By the end of this training, participants will be able to: Train, export and serve various TensorFlow models Test and deploy algorithms using a single architecture and set of APIs Extend TensorFlow Serving to serve other types of models beyond TensorFlow models Audience Developers Data scientists Format of the course Part lecture, part discussion, exercises and heavy hands-on practice To request a customized course outline for this training, please contact us.
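
As a hedged illustration of the "serve" half of the workflow above: once a model has been exported and is running under tensorflow_model_server with its REST endpoint on port 8501, a client can query it as sketched below. The model name "my_model" and the input values are hypothetical.

```python
# Sketch of querying a model served by TensorFlow Serving over its REST API.
# Assumes tensorflow_model_server is running with --rest_api_port=8501 and a
# model exported under the hypothetical name "my_model".
import json
import requests

payload = {"instances": [[1.0, 2.0, 5.0]]}     # one input row, toy values
resp = requests.post(
    "http://localhost:8501/v1/models/my_model:predict",
    data=json.dumps(payload),
)
print(resp.json()["predictions"])               # model outputs for each instance
```
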
facebooknmt Facebook NMT: Setting up a neural machine translation system 7 hours Fairseq is an open-source sequence-to-sequence learning toolkit created by Facebook for use in Neural Machine Translation (NMT). In this training, participants will learn how to use Fairseq to carry out translation of sample content. By the end of this training, participants will have the knowledge and practice needed to implement a live Fairseq-based machine translation solution. Audience Localization specialists with a technical background Global content managers Localization engineers Software developers in charge of implementing global content solutions Format of the course Part lecture, part discussion, heavy hands-on practice Note If you wish to use specific source and target language content, please contact us to arrange. Introduction Why Neural Machine Translation? Borrowing from image recognition techniques Overview of the Torch and Caffe2 projects Overview of a Convolutional Neural Machine Translation model Convolutional Sequence to Sequence Learning Convolutional Encoder Model for Neural Machine Translation Standard LSTM-based model Overview of training approaches About GPUs and CPUs Fast beam search generation Installation and setup Evaluating pre-trained models Preprocessing your data Training the model Translating Converting a trained model to use CPU-only operations Joining the community Closing remarks
datamodeling Pattern Recognition 35 hours This course provides an introduction to the field of pattern recognition and machine learning. It touches on practical applications in statistics, computer science, signal processing, computer vision, data mining, and bioinformatics. The course is interactive and includes plenty of hands-on exercises, instructor feedback, and testing of knowledge and skills acquired. Audience Data analysts PhD students, researchers and practitioners Introduction Probability theory, model selection, decision and information theory Probability distributions Linear models for regression and classification Neural networks Kernel methods Sparse kernel machines Graphical models Mixture models and EM Approximate inference Sampling methods Continuous latent variables Sequential data Combining models
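
For the "Mixture models and EM" topic in the outline above, here is a short, hedged sketch using scikit-learn's GaussianMixture (which is fitted by expectation-maximization); the two-component synthetic data set is an illustrative assumption.

```python
# Sketch of a Gaussian mixture model fitted by EM, on synthetic 2-D data
# drawn from two well-separated components (data are invented for illustration).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, size=(200, 2)),
               rng.normal(5.0, 0.5, size=(200, 2))])

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
print(gmm.means_)            # estimated component means (near 0 and near 5)
print(gmm.predict(X[:5]))    # hard cluster assignments for the first samples
```
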
tpuprogramming TPU Programming: Building Neural Network Applications on Tensor Processing Units 7 hours The Tensor Processing Unit (TPU) is the architecture which Google has used internally for several years, and is just now becoming available for use by the general public. It includes several optimizations specifically for use in neural networks, including streamlined matrix multiplication, and 8-bit integers instead of 16-bit in order to return appropriate levels of precision. In this instructor-led, live training, participants will learn how to take advantage of the innovations in TPU processors to maximize the performance of their own AI applications. By the end of the training, participants will be able to: Train various types of neural networks on large amounts of data Use TPUs to speed up the inference process by up to two orders of magnitude Utilize TPUs to process intensive applications such as image search, cloud vision and photos Audience Developers Researchers Engineers Data scientists Format of the course Part lecture, part discussion, exercises and heavy hands-on practice To request a customized course outline for this training, please contact us.
Torch Torch: Getting started with Machine and Deep Learning 21 hours Torch is an open source machine learning library and a scientific computing framework based on the Lua programming language. It provides a development environment for numerics, machine learning, and computer vision, with a particular emphasis on deep learning and convolutional nets. It is one of the fastest and most flexible frameworks for Machine and Deep Learning and is used by companies such as Facebook, Google, Twitter, NVIDIA, AMD, Intel, and many others. In this course we cover the principles of Torch, its unique features, and how it can be applied in real-world applications. We step through numerous hands-on exercises all throughout, demonstrating and practicing the concepts learned. By the end of the course, participants will have a thorough understanding of Torch's underlying features and capabilities as well as its role and contribution within the AI space compared to other frameworks and libraries. Participants will have also received the necessary practice to implement Torch in their own projects. Audience Software developers and programmers wishing to enable Machine and Deep Learning within their applications Format of the course Overview of Machine and Deep Learning In-class coding and integration exercises Test questions sprinkled along the way to check understanding Introduction to Torch Like NumPy but with CPU and GPU implementation Torch's usage in machine learning, computer vision, signal processing, parallel processing, image, video, audio and networking Installing Torch Linux, Windows, Mac Bitmapi and Docker Installing Torch packages Using the LuaRocks package manager Choosing an IDE for Torch ZeroBrane Studio Eclipse plugin for Lua Working with the Lua scripting language and LuaJIT Lua's integration with C/C++ Lua syntax: datatypes, loops and conditionals, functions, tables, and file i/o. Object orientation and serialization in Torch Coding exercise Loading a dataset in Torch MNIST CIFAR-10, CIFAR-100 Imagenet Machine Learning in Torch Deep Learning Manual feature extraction vs convolutional networks Supervised and Unsupervised Learning Building a neural network with Torch N-dimensional arrays Image analysis with Torch Image package The Tensor library Working with the REPL interpreter Working with databases Networking and Torch GPU support in Torch Integrating Torch C, Python, and others Embedding Torch iOS and Android Other frameworks and libraries Facebook's optimized deep-learning modules and containers Creating your own package Testing and debugging Releasing your application The future of AI and Torch
tfir TensorFlow for Image Recognition 28 hours This course explores, with specific examples, the application of TensorFlow to the purposes of image recognition. Audience This course is intended for engineers seeking to utilize TensorFlow for the purposes of Image Recognition After completing this course, delegates will be able to: understand TensorFlow's structure and deployment mechanisms carry out installation / production environment / architecture tasks and configuration assess code quality, perform debugging, monitoring implement advanced production like training models, building graphs and logging Machine Learning and Recursive Neural Networks (RNN) basics NN and RNN Backpropagation Long short-term memory (LSTM) TensorFlow Basics Creation, Initializing, Saving, and Restoring TensorFlow variables Feeding, Reading and Preloading TensorFlow Data How to use TensorFlow infrastructure to train models at scale Visualizing and Evaluating models with TensorBoard TensorFlow Mechanics 101 Tutorial Files Prepare the Data Download Inputs and Placeholders Build the Graph Inference Loss Training Train the Model The Graph The Session Train Loop Evaluate the Model Build the Eval Graph Eval Output Advanced Usage Threading and Queues Distributed TensorFlow Writing Documentation and Sharing your Model Customizing Data Readers Using GPUs¹ Manipulating TensorFlow Model Files TensorFlow Serving Introduction Basic Serving Tutorial Advanced Serving Tutorial Serving Inception Model Tutorial Convolutional Neural Networks Overview Goals Highlights of the Tutorial Model Architecture Code Organization CIFAR-10 Model Model Inputs Model Prediction Model Training Launching and Training the Model Evaluating a Model Training a Model Using Multiple GPU Cards¹ Placing Variables and Operations on Devices Launching and Training the Model on Multiple GPU cards Deep Learning for MNIST Setup Load MNIST Data Start TensorFlow InteractiveSession Build a Softmax Regression Model Placeholders Variables Predicted Class and Cost Function Train the Model Evaluate the Model Build a Multilayer Convolutional Network Weight Initialization Convolution and Pooling First Convolutional Layer Second Convolutional Layer Densely Connected Layer Readout Layer Train and Evaluate the Model Image Recognition Inception-v3 C++ Java ¹ Topics related to the use of GPUs are not available as a part of a remote course. They can be delivered during classroom-based courses, but only by prior agreement, and only if both the trainer and all participants have laptops with supported NVIDIA GPUs, with 64-bit Linux installed (not provided by NobleProg). NobleProg cannot guarantee the availability of trainers with the required hardware.
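
The "Convolution and Pooling" / "First Convolutional Layer" steps of the multilayer convolutional network above can be sketched in TensorFlow 1.x as follows; the filter sizes, channel counts, and MNIST-style input shape are standard but illustrative choices of mine.

```python
# Sketch of one convolution + max-pooling block in TensorFlow 1.x,
# with MNIST-sized inputs (shapes and filter counts are illustrative).
import tensorflow as tf

x_image = tf.placeholder(tf.float32, [None, 28, 28, 1])          # batch of 28x28 images
W_conv1 = tf.Variable(tf.truncated_normal([5, 5, 1, 32], stddev=0.1))
b_conv1 = tf.Variable(tf.constant(0.1, shape=[32]))

h_conv1 = tf.nn.relu(
    tf.nn.conv2d(x_image, W_conv1, strides=[1, 1, 1, 1], padding="SAME") + b_conv1)
h_pool1 = tf.nn.max_pool(
    h_conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding="SAME")
# h_pool1 has shape [batch, 14, 14, 32]: 2x2 max pooling halves each spatial dimension.
```
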
OpenNN OpenNN: Implementing neural networks 14 hours OpenNN is an open-source class library written in C++ which implements neural networks, for use in machine learning. In this course we go over the principles of neural networks and use OpenNN to implement a sample application. Audience Software developers and programmers wishing to create Deep Learning applications. Format of the course Lecture and discussion coupled with hands-on exercises. Introduction to OpenNN, Machine Learning and Deep Learning Downloading OpenNN Working with Neural Designer Using Neural Designer for descriptive, diagnostic, predictive and prescriptive analytics OpenNN architecture CPU parallelization OpenNN classes Data set, neural network, loss index, training strategy, model selection, testing analysis Vector and matrix templates Building a neural network application Choosing a suitable neural network Formulating the variational problem (loss index) Solving the reduced function optimization problem (training strategy) Working with datasets The data matrix (columns as variables and rows as instances) Learning tasks Function regression Pattern recognition Compiling with QT Creator Integrating, testing and debugging your application The future of neural networks and OpenNN
caffe Deep Learning for Vision with Caffe 21 hours Caffe is a deep learning framework made with expression, speed, and modularity in mind. This course explores the application of Caffe as a Deep Learning framework for image recognition, using MNIST as an example. Audience This course is suitable for Deep Learning researchers and engineers interested in utilizing Caffe as a framework. After completing this course, delegates will be able to: understand Caffe's structure and deployment mechanisms carry out installation / production environment / architecture tasks and configuration assess code quality, perform debugging, monitoring implement advanced production tasks such as training models, implementing layers and logging Installation Docker Ubuntu RHEL / CentOS / Fedora installation Windows Caffe Overview Nets, Layers, and Blobs: the anatomy of a Caffe model. Forward / Backward: the essential computations of layered compositional models. Loss: the task to be learned is defined by the loss. Solver: the solver coordinates model optimization. Layer Catalogue: the layer is the fundamental unit of modeling and computation - Caffe's catalogue includes layers for state-of-the-art models. Interfaces: command line, Python, and MATLAB Caffe. Data: how to caffeinate data for model input. Caffeinated Convolution: how Caffe computes convolutions. New models and new code Detection with Fast R-CNN Sequences with LSTMs and Vision + Language with LRCN Pixelwise prediction with FCNs Framework design and future Examples: MNIST
matlabdl Matlab for Deep Learning 14 hours In this instructor-led, live training, participants will learn how to use Matlab to design, build, and visualize a convolutional neural network for image recognition. By the end of this training, participants will be able to: Build deep learning models Automate data classification Work with models from Caffe and TensorFlow-Keras Train data using multiple GPUs, the cloud, or clusters Audience Developers Engineers Domain experts Format of the course Part lecture, part discussion, exercises and heavy hands-on practice To request a customized course outline for this training, please contact us.
MicrosoftCognitiveToolkit Microsoft Cognitive Toolkit 2.x 21 hours Microsoft Cognitive Toolkit 2.x (previously CNTK) is an open-source, commercial-grade toolkit that trains deep learning algorithms to learn like the human brain. According to Microsoft, CNTK can be 5-10x faster than TensorFlow on recurrent networks, and 2 to 3 times faster than TensorFlow for image-related tasks. In this instructor-led, live training, participants will learn how to use Microsoft Cognitive Toolkit to create, train and evaluate deep learning algorithms for use in commercial-grade AI applications involving multiple types of data such as speech, text, and images. By the end of this training, participants will be able to: Access CNTK as a library from within a Python, C#, or C++ program Use CNTK as a standalone machine learning tool through its own model description language (BrainScript) Use the CNTK model evaluation functionality from a Java program Combine feed-forward DNNs, convolutional nets (CNNs), and recurrent networks (RNNs/LSTMs) Scale computation capacity on CPUs, GPUs and multiple machines Access massive datasets using existing programming languages and algorithms Audience Developers Data scientists Format of the course Part lecture, part discussion, exercises and heavy hands-on practice Note If you wish to customize any part of this training, including the programming language of choice, please contact us to arrange. To request a customized course outline for this training, please contact us.
dsstne Amazon DSSTNE: Build a recommendation system 7 hours Amazon DSSTNE is an open-source library for training and deploying recommendation models. It allows models with weight matrices that are too large for a single GPU to be trained on a single host. In this instructor-led, live training, participants will learn how to use DSSTNE to build a recommendation application. By the end of this training, participants will be able to: Train a recommendation model with sparse datasets as input Scale training and prediction models over multiple GPUs Spread out computation and storage in a model-parallel fashion Generate Amazon-like personalized product recommendations Deploy a production-ready application that can scale under heavy workloads Audience Developers Data scientists Format of the course Part lecture, part discussion, exercises and heavy hands-on practice To request a customized course outline for this training, please contact us.
mldlnlpintro Introduction to and Advanced ML, DL and NLP 14 hours The aim of this course is to provide a basic proficiency in applying Machine Learning methods in practice. Through the use of the Python programming language and its various libraries, and based on a multitude of practical examples, this course teaches how to use the most important building blocks of Machine Learning, how to make data modeling decisions, interpret the outputs of the algorithms and validate the results. Our goal is to give you the skills to understand and use the most fundamental tools from the Machine Learning toolbox confidently and avoid the common pitfalls of Data Science applications.
Part 1: Machine Learning, introductory and advanced
1.1 Linear regression with one variable: 1) curve fitting: fitting a line h(x) = theta0 + theta1*x to a set of points 2) the cost function J(theta0, theta1); goal: minimize J 3) bowl-shaped function: with two parameters the cost function is a 3-D surface 4) gradient descent: move the parameters in the direction of decreasing gradient, iteratively reducing J until it stabilizes 5) the learning rate a: too small and learning is slow, too large and it overshoots
1.2 Linear regression with multiple variables: 1) multiple features: the output is determined by a multi-dimensional input 2) the hypothesis h(x) = theta0 + theta1*x1 + ...; each input x has (n+1) dimensions [x0 ... xn] 3) gradient descent for one parameter and for many parameters 4) feature scaling: normalizing features to [-1, 1]; mean normalization 5) the learning rate a 6) features and polynomial regression 7) the normal equation
1.3 Logistic regression and the overfitting problem (regularization): 1) classification 2) hypothesis representation 3) decision boundary 4) cost function 5) simplified cost function and gradient descent 6) parameter optimization 7) multiclass classification 8) the problem of overfitting 9) regularized linear/logistic regression
1.4 Representing neural networks: 1) why neural networks: non-linear hypotheses 2) neurons and the brain: input vector, hidden layer, output layer 3) how a neural network is represented: logistic regression units 4) implementing logical expressions with a neural network: AND, OR, NOT 5) classification: one-vs-all
1.5 Neural network learning: 1) cost function 2) the back-propagation algorithm 3) back-propagation intuition 4) gradient checking 5) random initialization
1.6 Choosing a machine learning method and system: 1) candidate methods: diagnostics 2) evaluating a hypothesis: error 3) model selection and training/validation/test data: underfitting and overfitting 4) diagnosing bias versus variance: high bias gives large J(train) and J(cv) with J(train) ≈ J(cv) (small d, underfitting); high variance gives small J(train) and much larger J(cv) (large d, overfitting) 5) regularization and bias/variance: the regularization term is the last component of the cost function J(theta); lambda too large causes underfitting, lambda too small causes overfitting 6) learning curves: when adding training data actually helps; for high-variance overfitting, increasing m narrows the gap between J(train) and J(cv) and improves performance; adding training data helps with overfitting but is futile for underfitting
1.7 Machine learning system design: 1) deciding on a basic strategy: collect large amounts of data, extract rich features, build an accurate feature library 2) error analysis 3) error metrics for skewed classes 4) trading off precision and recall 5) selecting machine learning data
1.8 Support vector machines (SVM): 1) the SVM cost function 2) large-margin classification: decision boundaries 3) the SVM Gaussian kernel: non-linear decision boundaries 4) the kernel trick: inner products and similarity computation 5) using Gaussian kernels with SVMs 6) using and choosing SVMs
1.9 Clustering: 1) unsupervised learning covers density estimation and clustering (unlabeled data); supervised learning covers regression and classification (labeled data) 2) the K-means algorithm: randomly pick k cluster centers U1...Uk and update them iteratively; C(i) records which center sample i is closest to, i.e. which cluster it is assigned to, and each of the k centers is then updated to the mean of all samples assigned to it 3) the clustering cost function: minimize the total Euclidean distance between every sample and its cluster center 4) choosing the initial centers: run many different initializations (say 50-1000), cluster under each, and keep the result with the smallest cost J(C, U) 5) choosing the number of clusters: the elbow method, plotting k against the cost function J and picking k at the elbow
1.10 Dimensionality reduction: 1) why reduce dimensionality: too many features make models complex and training slow, and high-dimensional data is hard to visualize 2) principal component analysis (PCA): find the subspace that minimizes the total orthogonal distance to the data points; compute each feature's mean, scale each feature, apply mean normalization, compute the n×n covariance matrix Sigma, obtain eigenvalues and eigenvectors via SVD, sort by decreasing eigenvalue to reorganize U, and keep k components 3) recovering the original data from the compressed data: the approximate inverse process 4) deciding the number of components: define a threshold (e.g. 10%); if the error ratio is below the threshold, the chosen principal components are acceptable 5) advice on applying PCA: include a regularization term; cross-validation and test data can be used for checking, but select the principal components using training data only
Part 2: Introduction to Deep Learning
2.1 K-means feature learning: 1) find the cluster centers by minimizing the distance between data points and their nearest center 2) vector quantization: by minimizing reconstruction error, a sample x(i) can be mapped to a k-dimensional code vector; this is the process of finding the dictionary D 3) data, preprocessing, initialization: normalize mean and contrast first; use whitening to decorrelate the data (choose the whitening parameters carefully); initialize the K-means centers with Gaussian noise and normalization; damp the updates to avoid empty clusters and increase stability 4) comparison with sparse feature learning: pay attention to the effect of dimensionality and sparsity; K-means finds sparse projections of the input by looking for the "heavy-tailed" directions of the data distribution 5) application to image recognition: more cluster centers usually help, provided there are enough training samples 6) building deep networks: use local receptive fields wherever possible; the bottleneck for K-means is the dimensionality of the input, the lower the better; if good local receptive fields cannot be chosen by hand, try an automatic dependency test to pick a low-dimensional subset of the data
2.2 Sparse filtering: 1) the core idea is to avoid explicitly modeling the data distribution and instead optimize the sparsity of the feature distribution to obtain good feature representations 2) unsupervised feature learning: learn a model that approximates the true data distribution, including denoising autoencoders, restricted Boltzmann machines (RBMs), independent component analysis (ICA) and sparse coding 3) feature distributions: optimize for high dispersal and population sparsity 4) sparse filtering: optimizing for population sparsity, optimizing for high dispersal, optimizing for lifetime sparsity 5) deep sparse filtering: computing these features with multi-layer networks 6) the connection to divisive normalization 7) the connection to ICA and sparse coding
2.3 Single-layer unsupervised learning networks: 1) the number of hidden units (features to learn), the sampling density (the convolution stride, i.e. where features are computed) and the receptive-field size have a very large effect on the final features, often larger than the number of layers or the deep learning algorithm itself; whitening is still necessary in preprocessing; regardless of which unsupervised algorithm is chosen, whitening, large numbers of features and small strides all give better performance 2) the unsupervised feature learning framework: learn a feature representation by (a) randomly extracting small patches from unlabeled training images, (b) preprocessing the patches (subtract the mean, i.e. the DC component, and divide by the standard deviation to normalize; for images this corresponds to local brightness and contrast normalization; then whiten), and (c) learning the feature mapping, i.e. the function from inputs to features, with an unsupervised algorithm; then, given a labeled training set, use the learned mapping to extract features and train a classifier by (a) convolving every sub-patch of an image with the learned features to obtain the image's features, (b) pooling the resulting convolutional feature maps to reduce the number of features and gain translation invariance, and (c) training a linear classifier on these features and the corresponding labels, then predicting labels for new inputs 3) feature learning: sparse auto-encoders, sparse restricted Boltzmann machines, K-means clustering, Gaussian mixtures 4) feature extraction and classification
2.4 Deriving and implementing convolutional neural networks (CNNs): 1) backpropagation in fully connected networks: the feedforward pass and the backpropagation pass 2) convolutional neural networks: convolution layers (computing the gradients), sub-sampling layers (computing the gradients), learning combinations of feature maps (enforcing sparse combinations) 3) CNN training centers on the interaction between convolution and sub-sampling layers; the main computational bottlenecks are: downsampling each convolution layer's maps in the forward pass; upsampling the sensitivity maps of the higher sub-sampling layer in the backward pass to match the size of the lower convolution layer's output maps; applying and differentiating the sigmoid; understanding the CNN code: cnnsetup, cnntrain, cnnff, cnnbp, cnnapplygrads, cnntest
2.5 Multi-stage architecture analysis: 1) hierarchies are built by stacking one or more feature-extraction stages, each consisting of a filter-bank layer, a non-linear transformation layer and a pooling layer; pooling combines (averages or takes the maximum of) the filter responses within a local neighborhood and thus achieves invariance to small deformations 2) the filter bank layer (FCSG): a bank of convolution filters (C) followed by a sigmoid/tanh non-linearity (S) and a trainable gain coefficient (G) 3) the rectification layer (Rabs): the more similar the convolution, the larger the output 4) the local contrast normalization layer (N) 5) the average pooling and subsampling layer (PA) 6) the max-pooling and subsampling layer (PM)
2.6 Visualizing high-level features of deep networks: 1) models: DBNs and denoising autoencoders 2) maximizing the activation 3) sampling from a unit of a Deep Belief Network 4) linear combinations of the previous layers' filters 5) experiments: data and setup, activation maximization, sampling a unit, comparison of methods
Part 3: Advanced Deep Learning
1) About features: the granularity of feature representations; low-level (shallow) feature representations; structural feature representations; how many features are needed? 2) The basic idea of Deep Learning 3) Shallow learning versus deep learning 4) Deep learning versus neural networks 5) The deep learning training process: how traditional neural networks are trained; the deep learning training procedure 6) Common deep learning models and methods: AutoEncoders; Sparse Coding; Restricted Boltzmann Machines (RBM); Deep Belief Networks; Convolutional Neural Networks
Part 4: Natural Language Processing (NLP)
1) Application areas: machine translation, information retrieval, automatic summarization/abstracting, document categorization, question-answering systems, information filtering, language teaching, character recognition, automatic proofreading, speech recognition 2) Formal languages and automata: formal grammars as 4-tuples; leftmost, rightmost and canonical derivations; sentential forms and sentences; regular grammars; context-free grammars (CFG); context-sensitive grammars (CSG); deterministic finite automata (DFA); non-deterministic finite automata (NFA) 3) Corpus linguistics: Chinese corpora (modern Chinese literary works corpus, Modern Chinese Corpus, secondary-school Chinese textbook corpus, modern Chinese word-frequency corpus); the Brown corpus, the LLC spoken corpus, the Longman corpus, the Penn Treebank, the Peking University corpus, the Academia Sinica Balanced Corpus, Chinese LDC, the LC-STAR project; lexicon extraction, part-of-speech tagging, pinyin, WordNet, HowNet; synonymy, antonymy, hypernymy/hyponymy and meronymy 4) Probabilistic grammars: n-th order Markov chain language models; hidden Markov models (HMM); probabilistic context-free grammars; probabilistic link grammars 5) Lexical analysis: segmentation with and without a dictionary; rule-based versus statistical methods; maximum matching (forward, backward, bidirectional); minimum-word-count segmentation; segmentation based on statistical models 6) Grammar theory and syntactic parsing: rule systems and principle systems; X-bar theory, case theory, government theory, theta theory, binding theory, control theory, bounding theory; functional unification grammar (FUG); lexical functional grammar, generalized phrase structure grammar, tree-adjoining grammar; chart parsing (bottom-up); probabilistic context-free grammars (PCFG) 7) Semantic computation: semantic networks (concept relations, event semantic network representation, semantic relations between events, inference over semantic networks); case grammar (definition, case lists, case-frame constraints); conceptual dependency theory (primitive acts, scripts, plans); topic models (PLSA, LDA); keyword trees
(A short NumPy sketch of the gradient-descent fit described in section 1.1 follows this outline.)
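
As referenced in section 1.1 above, the single-variable fit h(x) = theta0 + theta1*x can be obtained by gradient descent on the squared-error cost J; here is a minimal NumPy sketch under assumed synthetic data and an assumed learning rate a.

```python
# NumPy sketch of section 1.1: fit h(x) = theta0 + theta1 * x by gradient
# descent on the squared-error cost J (the data and learning rate are toy choices).
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 4.0 + 2.5 * x + rng.normal(scale=0.5, size=100)   # true theta = (4.0, 2.5)

theta0, theta1, a = 0.0, 0.0, 0.01                     # a is the learning rate
for _ in range(5000):
    h = theta0 + theta1 * x
    # partial derivatives of J(theta0, theta1) = mean((h - y)**2) / 2
    g0 = np.mean(h - y)
    g1 = np.mean((h - y) * x)
    theta0 -= a * g0
    theta1 -= a * g1

print(theta0, theta1)   # close to 4.0 and 2.5; too large a learning rate would diverge
```
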
mlbankingr Machine Learning for Banking (with R) 28 hours In this instructor-led, live training, participants will learn how to apply machine learning techniques and tools for solving real-world problems in the banking industry. R will be used as the programming language. Participants first learn the key principles, then put their knowledge into practice by building their own machine learning models and using them to complete live team projects. Introduction Difference between statistical learning (statistical analysis) and machine learning Adoption of machine learning technology by finance and banking companies Different Types of Machine Learning Supervised learning vs unsupervised learning Iteration and evaluation Bias-variance trade-off Combining supervised and unsupervised learning (semi-supervised learning) Machine Learning Languages and Toolsets Open source vs proprietary systems and software R vs Python vs Matlab Libraries and frameworks Machine Learning Case Studies Consumer data and big data Assessing risk in consumer and business lending Improving customer service through sentiment analysis Detecting identity fraud, billing fraud and money laundering Introduction to R Installing the RStudio IDE Loading R packages Data structures Vectors Factors Lists Data Frames Matrices and Arrays How to Load Machine Learning Data Databases, data warehouses and streaming data Distributed storage and processing with Hadoop and Spark Importing data from a database Importing data from Excel and CSV Modeling Business Decisions with Supervised Learning Classifying your data (classification) Using regression analysis to predict outcome Choosing from available machine learning algorithms Understanding decision tree algorithms Understanding random forest algorithms Model evaluation Exercise Regression Analysis Linear regression Generalizations and Nonlinearity Exercise Classification Bayesian refresher Naive Bayes Logistic regression K-Nearest neighbors Exercise Hands-on: Building an Estimation Model Assessing lending risk based on customer type and history Evaluating the performance of Machine Learning Algorithms Cross-validation and resampling Bootstrap aggregation (bagging) Exercise Modeling Business Decisions with Unsupervised Learning K-means clustering Challenges of unsupervised learning Beyond K-means Exercise Hands-on: Building a Recommendation System Analyzing past customer behavior to improve new service offerings Extending your company's capabilities Developing models in the cloud Accelerating machine learning with additional GPUs Beyond machine learning: Artificial Intelligence (AI) Applying Deep Learning neural networks for computer vision, voice recognition and text analysis Closing Remarks
dlv Deep Learning for Vision 21 hours Audience This course is suitable for Deep Learning researchers and engineers interested in utilizing available tools (mostly open source) for analyzing computer images. This course provides working examples. Deep Learning vs Machine Learning vs Other Methods When Deep Learning is suitable Limits of Deep Learning Comparing accuracy and cost of different methods Methods Overview Nets and Layers Forward / Backward: the essential computations of layered compositional models. Loss: the task to be learned is defined by the loss. Solver: the solver coordinates model optimization. Layer Catalogue: the layer is the fundamental unit of modeling and computation Convolution Methods and models Backprop, modular models Logsum module RBF Net MAP/MLE loss Parameter Space Transforms Convolutional Module Gradient-Based Learning Energy for inference, Objective for learning PCA; NLL: Latent Variable Models Probabilistic LVM Loss Function Detection with Fast R-CNN Sequences with LSTMs and Vision + Language with LRCN Pixelwise prediction with FCNs Framework design and future Tools Caffe Tensorflow R Matlab Others...
mlbankingpython_ Machine Learning for Banking (with Python) 21 hours In this instructor-led, live training, participants will learn how to apply machine learning techniques and tools for solving real-world problems in the banking industry. Python will be used as the programming language. Participants first learn the key principles, then put their knowledge into practice by building their own machine learning models and using them to complete live team projects. Introduction Difference between statistical learning (statistical analysis) and machine learning Adoption of machine learning technology and talent by finance and banking companies Different Types of Machine Learning Supervised learning vs unsupervised learning Iteration and evaluation Bias-variance trade-off Combining supervised and unsupervised learning (semi-supervised learning) Machine Learning Languages and Toolsets Open source vs proprietary systems and software Python vs R vs Matlab Libraries and frameworks Machine Learning Case Studies Consumer data and big data Assessing risk in consumer and business lending Improving customer service through sentiment analysis Detecting identity fraud, billing fraud and money laundering Hands-on: Python for Machine Learning Preparing the Development Environment Obtaining Python machine learning libraries and packages Working with scikit-learn and PyBrain How to Load Machine Learning Data Databases, data warehouses and streaming data Distributed storage and processing with Hadoop and Spark Exported data and Excel Modeling Business Decisions with Supervised Learning Classifying your data (classification) Using regression analysis to predict outcome Choosing from available machine learning algorithms Understanding decision tree algorithms Understanding random forest algorithms Model evaluation Exercise Regression Analysis Linear regression Generalizations and Nonlinearity Exercise Classification Bayesian refresher Naive Bayes Logistic regression K-Nearest neighbors Exercise Hands-on: Building an Estimation Model Assessing lending risk based on customer type and history Evaluating the performance of Machine Learning Algorithms Cross-validation and resampling Bootstrap aggregation (bagging) Exercise Modeling Business Decisions with Unsupervised Learning K-means clustering Challenges of unsupervised learning Beyond K-means Exercise Hands-on: Building a Recommendation System Analyzing past customer behavior to improve new service offerings Extending your company's capabilities Developing models in the cloud Accelerating machine learning with GPU Beyond machine learning: Artificial Intelligence (AI) Applying Deep Learning neural networks for computer vision, voice recognition and text analysis Closing Remarks
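
The "Building an Estimation Model" and "Cross-validation" topics above can be sketched with scikit-learn; the lending features and the synthetic labelling rule below are invented purely for illustration and are not part of the course data.

```python
# Sketch of assessing lending risk with logistic regression and 5-fold
# cross-validation in scikit-learn (features and data are synthetic).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = np.column_stack([
    rng.normal(50000, 15000, 1000),   # hypothetical income
    rng.integers(300, 850, 1000),     # hypothetical credit score
    rng.integers(0, 5, 1000),         # hypothetical prior defaults
])
# Toy label: "repaid" depends mostly on credit score and prior defaults.
y = (X[:, 1] - 100 * X[:, 2] + rng.normal(0, 50, 1000) > 500).astype(int)

model = LogisticRegression(max_iter=1000)
scores = cross_val_score(model, X, y, cv=5)   # 5-fold cross-validation
print(scores.mean())                          # average held-out accuracy
```
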
embeddingprojector Embedding Projector: Visualizing your Training Data 14 hours Embedding Projector is an open-source web application for visualizing the data used to train machine learning systems. Created by Google, it is part of TensorFlow. This instructor-led, live training introduces the concepts behind Embedding Projector and walks participants through the setup of a demo project. By the end of this training, participants will be able to: Explore how data is being interpreted by machine learning models Navigate through 3D and 2D views of data to understand how a machine learning algorithm interprets it Understand the concepts behind Embeddings and their role in representing mathematical vectors for images, words and numerals Explore the properties of a specific embedding to understand the behavior of a model Apply Embedding Projector to real-world use cases such as building a song recommendation system for music lovers Audience Developers Data scientists Format of the course Part lecture, part discussion, exercises and heavy hands-on practice To request a customized course outline for this training, please contact us.
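
As a hedged illustration of how an embedding gets exported so it can be explored in Embedding Projector (served through TensorBoard), here is a sketch using the TensorFlow 1.x projector plugin; the embedding shape, log directory, and metadata file name are arbitrary examples.

```python
# Sketch of exporting an embedding for Embedding Projector via TensorBoard,
# using the TensorFlow 1.x projector plugin (paths and shapes are examples).
import tensorflow as tf
from tensorflow.contrib.tensorboard.plugins import projector

log_dir = "/tmp/projector_demo"
embedding_var = tf.Variable(tf.random_normal([1000, 64]), name="word_embedding")

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    tf.train.Saver().save(sess, log_dir + "/model.ckpt")   # checkpoint with the tensor

config = projector.ProjectorConfig()
emb = config.embeddings.add()
emb.tensor_name = embedding_var.name
emb.metadata_path = "metadata.tsv"        # optional labels, one line per row
projector.visualize_embeddings(tf.summary.FileWriter(log_dir), config)
# Then run: tensorboard --logdir /tmp/projector_demo and open the Projector tab.
```
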

Upcoming Courses

Other Regions


Discounted Courses

Subscribe to Course Discounts

Out of respect for your privacy, our company will not give your email address to anyone else. You will receive offers first and can unsubscribe at any time.

Our Clients