Hadoop Training

Client Testimonials

Hadoop for Developers and Administrators

The fact that all the data and software were ready to use on an already prepared VM, provided by the trainer on external disks.

- vyzVoice

Administrator Training for Apache Hadoop

The trainer gave real-life examples.

Simon Hahn - OPITZ CONSULTING Deutschland GmbH

Administrator Training for Apache Hadoop

The trainer's high level of competence.

Grzegorz Gorski - OPITZ CONSULTING Deutschland GmbH

Administrator Training for Apache Hadoop

Many hands-on sessions.

Jacek Pieczątka - OPITZ CONSULTING Deutschland GmbH

A practical introduction to Data Analysis and Big Data

Willingness to share more

Balaram Chandra Paul - MOL Information Technology Asia Limited

Data Analysis with Hive/HiveQL

Liked very much the interactive way of learning.

Luigi Loiacono - Proximus

Data Analysis with Hive/HiveQL

It was a very practical training, I liked the hands-on exercises.

Proximus

Data Analysis with Hive/HiveQL

Good overview, good balance between theory and exercises.

Proximus

Data Analysis with Hive/HiveQL

Dynamic interaction and "hands-on" engagement with the subject, thanks to the virtual machine; very stimulating!

Philippe Job - Proximus

Data Analysis with Hive/HiveQL

The competence and knowledge of the trainer

Jonathan Puvilland - Proximus

Other Course Categories

Hadoop Course Outlines

Code | Name | Duration | Overview
hadoopadm | Hadoop Administration | 21 hours
The course is dedicated to IT specialists looking for a solution to store and process large data sets in a distributed system environment. Course goal: acquiring knowledge of Hadoop cluster administration.
- Introduction to Cloud Computing and Big Data solutions
- Apache Hadoop evolution: HDFS, MapReduce, YARN
- Installation and configuration of Hadoop in pseudo-distributed mode
- Running MapReduce jobs on a Hadoop cluster
- Hadoop cluster planning, installation and configuration
- Hadoop ecosystem: Pig, Hive, Sqoop, HBase
- Big Data future: Impala, Cassandra
hadoopmapr | Hadoop Administration on MapR | 28 hours
Audience: this course is intended to demystify big data/Hadoop technology and to show that it is not difficult to understand.
- Big Data overview: what Big Data is; why Big Data is gaining popularity; Big Data case studies; Big Data characteristics; solutions for working with Big Data.
- Hadoop and its components: what Hadoop is and what its components are; Hadoop architecture and the characteristics of the data it can handle and process; a brief history of Hadoop, the companies using it and why they adopted it; the Hadoop framework and its components, explained in detail; what HDFS is, and reads and writes to the Hadoop Distributed File System; how to set up a Hadoop cluster in different modes (standalone / pseudo-distributed / multi-node), including setting up a Hadoop cluster in VirtualBox/KVM/VMware, the network configuration that needs to be looked into carefully, running the Hadoop daemons and testing the cluster; what the MapReduce framework is and how it works; running MapReduce jobs on a Hadoop cluster; understanding replication, mirroring and rack awareness in the context of Hadoop clusters.
- Hadoop cluster planning: how to plan your Hadoop cluster; understanding the hardware and software needed to plan your cluster; understanding workloads and planning the cluster to avoid failures and perform optimally.
- What MapR is and why MapR: an overview of MapR and its architecture; understanding and working with the MapR Control System, MapR volumes, snapshots and mirrors; planning a cluster in the context of MapR; comparison of MapR with other distributions and with Apache Hadoop; MapR installation and cluster deployment.
- Cluster setup and administration: managing services, nodes, snapshots, mirror volumes and remote clusters; understanding and managing nodes; understanding Hadoop components and installing Hadoop components alongside MapR services; accessing data on the cluster, including via NFS; managing services and nodes; managing data by using volumes, managing users and groups, managing and assigning roles to nodes, commissioning and decommissioning of nodes, cluster administration and performance monitoring, configuring, analyzing and monitoring metrics to track performance, configuring and administering MapR security; understanding and working with M7, the native storage for MapR tables; cluster configuration and tuning for optimum performance.
- Cluster upgrade and integration with other setups: upgrading the MapR software version and the types of upgrade; configuring a MapR cluster to access an HDFS cluster; setting up a MapR cluster on Amazon Elastic MapReduce.
All the above topics include demonstrations and practice sessions so that learners gain hands-on experience with the technology.
ambari | Apache Ambari: Efficiently manage Hadoop clusters | 21 hours
Apache Ambari is an open-source management platform for provisioning, managing, monitoring and securing Apache Hadoop clusters. In this instructor-led live training, participants will learn the management tools and practices provided by Ambari to successfully manage Hadoop clusters.
By the end of this training, participants will be able to:
- Set up a live Big Data cluster using Ambari
- Apply Ambari's advanced features and functionalities to various use cases
- Seamlessly add and remove nodes as needed
- Improve a Hadoop cluster's performance through tuning and tweaking
Audience: DevOps, system administrators, DBAs, Hadoop testing professionals
Format of the course: part lecture, part discussion, exercises and heavy hands-on practice
To request a customized course outline for this training, please contact us.
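Much of what Ambari's web UI does is backed by its REST API, which cluster-management exercises can also call directly. Below is a minimal sketch of listing managed clusters from Java; the host, port and admin:admin credentials are common installation defaults used here purely for illustration, not something prescribed by the course.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.Base64;

    public class AmbariClusters {
        public static void main(String[] args) throws Exception {
            // Basic auth against the Ambari server (placeholder credentials).
            String auth = Base64.getEncoder().encodeToString("admin:admin".getBytes());
            HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/api/v1/clusters"))
                .header("Authorization", "Basic " + auth)
                .GET()
                .build();
            HttpResponse<String> response =
                HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.body());  // JSON listing of managed clusters
        }
    }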
68736 | Hadoop for Developers (2 days) | 14 hours
Introduction: What is Hadoop? What does it do? How does it do it?
The Motivation for Hadoop: problems with traditional large-scale systems; introducing Hadoop; Hadoopable problems
Hadoop: Basic Concepts and HDFS: the Hadoop project and Hadoop components; the Hadoop Distributed File System
Introduction to MapReduce: MapReduce overview; example: WordCount; mappers; reducers
Hadoop Clusters and the Hadoop Ecosystem: Hadoop cluster overview; Hadoop jobs and tasks; other Hadoop ecosystem components
Writing a MapReduce Program in Java: basic MapReduce API concepts; writing MapReduce drivers, mappers, and reducers in Java; speeding up Hadoop development by using Eclipse; differences between the old and new MapReduce APIs
Writing a MapReduce Program Using Streaming: writing mappers and reducers with the Streaming API
Unit Testing MapReduce Programs: unit testing; the JUnit and MRUnit testing frameworks; writing unit tests with MRUnit; running unit tests
Delving Deeper into the Hadoop API: using the ToolRunner class; setting up and tearing down mappers and reducers; decreasing the amount of intermediate data with combiners; accessing HDFS programmatically; using the distributed cache; using the Hadoop API's library of mappers, reducers, and partitioners
Practical Development Tips and Techniques: strategies for debugging MapReduce code; testing MapReduce code locally by using LocalJobRunner; writing and viewing log files; retrieving job information with counters; reusing objects; creating map-only MapReduce jobs
Partitioners and Reducers: how partitioners and reducers work together; determining the optimal number of reducers for a job; writing custom partitioners
Data Input and Output: creating custom Writable and WritableComparable implementations; saving binary data using SequenceFile and Avro data files; issues to consider when using file compression; implementing custom InputFormats and OutputFormats
Common MapReduce Algorithms: sorting and searching large data sets; indexing data; computing term frequency - inverse document frequency (TF-IDF); calculating word co-occurrence; performing a secondary sort
Joining Data Sets in MapReduce Jobs: writing a map-side join; writing a reduce-side join
Integrating Hadoop into the Enterprise Workflow: integrating Hadoop into an existing enterprise; loading data from an RDBMS into HDFS by using Sqoop; managing real-time data using Flume; accessing HDFS from legacy systems with FuseDFS and HttpFS
An Introduction to Hive, Impala, and Pig: the motivation for Hive, Impala, and Pig; Hive overview; Impala overview; Pig overview; choosing between Hive, Impala, and Pig
An Introduction to Oozie: introduction to Oozie; creating Oozie workflows
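To give a concrete taste of the "Writing a MapReduce Program in Java" module, below is a minimal WordCount sketch against the org.apache.hadoop.mapreduce API. It is illustrative only, not taken from the course materials; input and output paths are supplied as command-line arguments.

    import java.io.IOException;
    import java.util.StringTokenizer;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {
        // Mapper: emits (word, 1) for every token in the input line.
        public static class TokenizerMapper
                extends Mapper<LongWritable, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();
            @Override
            protected void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {
                StringTokenizer it = new StringTokenizer(value.toString());
                while (it.hasMoreTokens()) {
                    word.set(it.nextToken());
                    context.write(word, ONE);
                }
            }
        }

        // Reducer: sums the counts for each word; also reusable as a combiner.
        public static class IntSumReducer
                extends Reducer<Text, IntWritable, Text, IntWritable> {
            private final IntWritable result = new IntWritable();
            @Override
            protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable v : values) sum += v.get();
                result.set(sum);
                context.write(key, result);
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "word count");
            job.setJarByClass(WordCount.class);
            job.setMapperClass(TokenizerMapper.class);
            job.setCombinerClass(IntSumReducer.class);  // combiner cuts intermediate data
            job.setReducerClass(IntSumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }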
hivehiveql | Data Analysis with Hive/HiveQL | 7 hours
This course covers how to use the Hive SQL language (a.k.a. Hive HQL, SQL on Hive, HiveQL) for people who extract data from Hive.
- Hive overview: architecture and design; data types; SQL support in Hive; creating Hive tables and querying; partitions; joins; text processing; labs: various labs on processing data with Hive
- DQL (Data Query Language) in detail: SELECT clause; column aliases; table aliases; date types and date functions; group functions; table joins; JOIN clause; UNION operator; nested queries; correlated subqueries
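To make the DQL topics concrete, here is a minimal sketch of running a HiveQL query from Java over HiveServer2's JDBC interface. The connection URL, credentials, and the orders/customers tables are assumptions made up for illustration, and the hive-jdbc driver is assumed to be on the classpath.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class HiveQuery {
        public static void main(String[] args) throws Exception {
            // HiveServer2 JDBC URL; host, port and database are placeholders.
            String url = "jdbc:hive2://localhost:10000/default";
            try (Connection conn = DriverManager.getConnection(url, "hive", "");
                 Statement stmt = conn.createStatement()) {
                // A HiveQL query exercising a column alias, a JOIN and a group
                // function, as in the outline above; the tables are hypothetical.
                ResultSet rs = stmt.executeQuery(
                    "SELECT o.customer_id AS cust, COUNT(*) AS orders " +
                    "FROM orders o JOIN customers c ON o.customer_id = c.id " +
                    "GROUP BY o.customer_id");
                while (rs.next()) {
                    System.out.println(rs.getString("cust") + "\t" + rs.getLong("orders"));
                }
            }
        }
    }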
druid | Druid: Build a fast, real-time data analysis system | 21 hours
Druid is an open-source, column-oriented, distributed data store written in Java. It was designed to quickly ingest massive quantities of event data and execute low-latency OLAP queries on that data. Druid is commonly used in business intelligence applications to analyze high volumes of real-time and historical data. It is also well suited for powering fast, interactive, analytic dashboards for end-users. Druid is used by companies such as Alibaba, Airbnb, Cisco, eBay, Netflix, Paypal, and Yahoo.
In this course we explore some of the limitations of data warehouse solutions and discuss how Druid can complement those technologies to form a flexible and scalable streaming analytics stack. We walk through many examples, offering participants the chance to implement and test Druid-based solutions in a lab environment.
Audience: application developers, software engineers, technical consultants, DevOps professionals, architecture engineers
Format of the course: part lecture, part discussion, heavy hands-on practice, occasional tests to gauge understanding
Outline: introduction; installing and starting Druid; Druid architecture and design; real-time ingestion of event data; sharding and indexing; loading data; querying data; visualizing data; running a distributed cluster; Druid + Apache Hive; Druid + Apache Kafka; Druid + others; troubleshooting; administrative tasks
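As a flavor of the querying exercises, the sketch below POSTs a native timeseries query to a Druid broker from Java using the JDK 11 HttpClient. The broker address uses Druid's conventional default port, and the "pageviews" datasource is hypothetical.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class DruidTimeseries {
        public static void main(String[] args) throws Exception {
            // Native timeseries query: hourly row counts over one day.
            String query = "{"
                + "\"queryType\":\"timeseries\","
                + "\"dataSource\":\"pageviews\","           // hypothetical datasource
                + "\"granularity\":\"hour\","
                + "\"intervals\":[\"2017-01-01/2017-01-02\"],"
                + "\"aggregations\":[{\"type\":\"count\",\"name\":\"rows\"}]"
                + "}";
            HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8082/druid/v2"))  // broker endpoint
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(query))
                .build();
            HttpResponse<String> response =
                HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.body());  // JSON array of timestamped results
        }
    }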
kylin | Apache Kylin: From classic OLAP to real-time data warehouse | 14 hours
Apache Kylin is an extreme, distributed analytics engine for big data. In this instructor-led live training, participants will learn how to use Apache Kylin to set up a real-time data warehouse.
By the end of this training, participants will be able to:
- Consume real-time streaming data using Kylin
- Utilize Apache Kylin's powerful features, including snowflake schema support, a rich SQL interface, Spark cubing and sub-second query latency
Note: we use the latest version of Kylin (as of this writing, Apache Kylin v2.0)
Audience: big data engineers, Big Data analysts
Format of the course: part lecture, part discussion, exercises and heavy hands-on practice
To request a customized course outline for this training, please contact us.
voldemort | Voldemort: Setting up a key-value distributed data store | 14 hours
Voldemort is an open-source distributed data store that is designed as a key-value store. It is used at LinkedIn by numerous critical services powering a large portion of the site. This course will introduce the architecture and capabilities of Voldemort and walk participants through the setup and application of a key-value distributed data store.
Audience: software developers, system administrators, DevOps engineers
Format of the course: part lecture, part discussion, heavy hands-on practice, occasional tests to gauge understanding
Outline: introduction; understanding distributed key-value storage systems; Voldemort data model and architecture; downloading and configuration; command-line operations; clients and servers; working with Hadoop; configuring build-and-push jobs; rebalancing a Voldemort instance; serving large-scale batch computed data; using the admin tool; performance tuning
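To preview the "clients and servers" topic, here is a minimal sketch of Voldemort's Java client performing a put and a get; the bootstrap URL and store name are assumptions for illustration and would come from your server configuration.

    import voldemort.client.ClientConfig;
    import voldemort.client.SocketStoreClientFactory;
    import voldemort.client.StoreClient;
    import voldemort.client.StoreClientFactory;
    import voldemort.versioning.Versioned;

    public class VoldemortExample {
        public static void main(String[] args) {
            // Bootstrap against a running Voldemort server (placeholder URL).
            StoreClientFactory factory = new SocketStoreClientFactory(
                new ClientConfig().setBootstrapUrls("tcp://localhost:6666"));
            StoreClient<String, String> client = factory.getStoreClient("test"); // store name
            client.put("hello", "world");                  // write a key-value pair
            Versioned<String> value = client.get("hello"); // read it back, with its version
            System.out.println(value.getValue());
        }
    }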
hdp | Hortonworks Data Platform (HDP) for administrators | 21 hours
Hortonworks Data Platform is an open-source Apache Hadoop support platform that provides a stable foundation for developing big data solutions on the Apache Hadoop ecosystem. This instructor-led live training introduces Hortonworks and walks participants through the deployment of a Spark + Hadoop solution.
By the end of this training, participants will be able to:
- Use Hortonworks to reliably run Hadoop at large scale
- Unify Hadoop's security, governance, and operations capabilities with Spark's agile analytic workflows
- Use Hortonworks to investigate, validate, certify and support each of the components in a Spark project
- Process different types of data, including structured, unstructured, in-motion, and at-rest
Audience: Hadoop administrators
Format of the course: part lecture, part discussion, exercises and heavy hands-on practice
To request a customized course outline for this training, please contact us.
mdlmrah | Model MapReduce and Apache Hadoop | 14 hours
The course is intended for IT specialists who work with distributed processing of large data sets across clusters of computers.
- Data mining and business intelligence: introduction; areas of application; capabilities; basics of data exploration
- Big data: what "Big Data" stands for; Big Data and data mining
- MapReduce: model basics; example application; statistics; the cluster model
- Hadoop: what Hadoop is; installation; configuration; cluster settings; architecture and configuration of the Hadoop Distributed File System; console tools; the DistCp tool; MapReduce and Hadoop Streaming; administration and configuration of Hadoop On Demand; alternatives
alluxio | Alluxio: Unifying disparate storage systems | 7 hours
Alluxio is an open-source virtual distributed storage system that unifies disparate storage systems and enables applications to interact with data at memory speed. It is used by companies such as Intel, Baidu and Alibaba.
In this instructor-led, live training, participants will learn how to use Alluxio to bridge different computation frameworks with storage systems and efficiently manage multi-petabyte scale data as they step through the creation of an application with Alluxio.
By the end of this training, participants will be able to:
- Develop an application with Alluxio
- Connect big data systems and applications while preserving one namespace
- Efficiently extract value from big data in any storage format
- Improve workload performance
- Deploy and manage Alluxio standalone or clustered
Audience: data scientists, developers, system administrators
Format of the course: part lecture, part discussion, exercises and heavy hands-on practice
To request a customized course outline for this training, please contact us.
ApHadm1 | Apache Hadoop: Manipulation and Transformation of Data Performance | 21 hours
This course is intended for developers, architects, data scientists or any profile that requires access to data either intensively or on a regular basis. The major focus of the course is data manipulation and transformation. Among the tools in the Hadoop ecosystem, this course covers the use of Pig and Hive, both of which are heavily used for data transformation and manipulation. It also addresses performance metrics and performance optimisation. The course is entirely hands-on and is punctuated by presentations of the theoretical aspects.
1.1 Hadoop Concepts
- 1.1.1 HDFS: the design of HDFS; command-line interface; Hadoop File System
- 1.1.2 Clusters: anatomy of a cluster; master node / slave node; name node / data node
1.2 Data Manipulation
- 1.2.1 MapReduce in detail: map phase; reduce phase; shuffle
- 1.2.2 Analytics with MapReduce: group-by with MapReduce; frequency distributions and sorting with MapReduce; plotting results (GNU Plot); histograms with MapReduce; scatter plots with MapReduce; parsing complex datasets; counting with MapReduce and combiners; building reports
- 1.2.3 Data Cleansing: document cleaning; fuzzy string search; record linkage / data deduplication; transforming and sorting event dates; validating source reliability; trimming outliers
- 1.2.4 Extracting and Transforming Data: transforming logs; using Apache Pig to filter; using Apache Pig to sort; using Apache Pig to sessionize (see the sketch after this outline)
- 1.2.5 Advanced Joins: joining data in the mapper using MapReduce; joining data using an Apache Pig replicated join; joining sorted data using an Apache Pig merge join; joining skewed data using an Apache Pig skewed join; using a map-side join in Apache Hive; using optimized full outer joins in Apache Hive; joining data using an external key-value store
1.3 Performance Diagnosis and Optimization Techniques
- Map: investigating spikes in input data; identifying map-side data skew problems; map task throughput; small files; unsplittable files
- Reduce: too few or too many reducers; reduce-side data skew problems; reduce task throughput; slow shuffle and sort
- Jobs: competing jobs and scheduler throttling; stack dumps and unoptimized code; hardware failures; CPU contention
- Tasks: extracting and visualizing task execution times; profiling your map and reduce tasks; avoiding the reducer; filter and project; using the combiner; fast sorting with comparators; collecting skewed data; reduce-skew mitigation
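Since the extraction and transformation sections lean on Pig, the sketch below shows one way to drive Pig Latin from Java via PigServer, filtering a log file; the input path, schema and output path are hypothetical.

    import org.apache.pig.ExecType;
    import org.apache.pig.PigServer;

    public class PigFilterJob {
        public static void main(String[] args) throws Exception {
            PigServer pig = new PigServer(ExecType.MAPREDUCE);
            // Load tab-separated logs, keep only error lines, sort by timestamp.
            pig.registerQuery("logs = LOAD '/data/logs' USING PigStorage('\\t') "
                + "AS (ts:chararray, level:chararray, msg:chararray);");
            pig.registerQuery("errors = FILTER logs BY level == 'ERROR';");
            pig.registerQuery("sorted = ORDER errors BY ts;");
            pig.store("sorted", "/data/logs-errors");  // triggers the MapReduce job(s)
        }
    }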
tigon | Tigon: Real-time streaming for the real world | 14 hours
Tigon is an open-source, real-time, low-latency, high-throughput, native-YARN stream processing framework that sits on top of HDFS and HBase for persistence. Tigon applications address use cases such as network intrusion detection and analytics, social media market analysis, location analytics, and real-time recommendations to users.
This instructor-led, live training introduces Tigon's approach to blending real-time and batch processing as it walks participants through the creation of a sample application.
By the end of this training, participants will be able to:
- Create powerful stream processing applications for handling large volumes of data
- Process stream sources such as Twitter and web server logs
- Use Tigon for rapid joining, filtering, and aggregating of streams
Audience: developers
Format of the course: part lecture, part discussion, exercises and heavy hands-on practice
To request a customized course outline for this training, please contact us.
apacheh | Administrator Training for Apache Hadoop | 35 hours
Audience: the course is intended for IT specialists looking for a solution to store and process large data sets in a distributed system environment.
Goal: deep knowledge of Hadoop cluster administration.
1: HDFS (17%)
- Describe the function of HDFS daemons
- Describe the normal operation of an Apache Hadoop cluster, both in data storage and in data processing
- Identify current features of computing systems that motivate a system like Apache Hadoop
- Classify major goals of HDFS design
- Given a scenario, identify an appropriate use case for HDFS Federation
- Identify the components and daemons of an HDFS HA-Quorum cluster
- Analyze the role of HDFS security (Kerberos)
- Determine the best data serialization choice for a given scenario
- Describe file read and write paths
- Identify the commands to manipulate files in the Hadoop File System shell
2: YARN and MapReduce version 2 (MRv2) (17%)
- Understand how upgrading a cluster from Hadoop 1 to Hadoop 2 affects cluster settings
- Understand how to deploy MapReduce v2 (MRv2 / YARN), including all YARN daemons
- Understand the basic design strategy for MapReduce v2 (MRv2)
- Determine how YARN handles resource allocations
- Identify the workflow of a MapReduce job running on YARN
- Determine which files you must change, and how, in order to migrate a cluster from MapReduce version 1 (MRv1) to MapReduce version 2 (MRv2) running on YARN
3: Hadoop Cluster Planning (16%)
- Principal points to consider in choosing the hardware and operating systems to host an Apache Hadoop cluster
- Analyze the choices in selecting an OS
- Understand kernel tuning and disk swapping
- Given a scenario and workload pattern, identify a hardware configuration appropriate to the scenario
- Given a scenario, determine the ecosystem components your cluster needs to run in order to fulfill the SLA
- Cluster sizing: given a scenario and frequency of execution, identify the specifics for the workload, including CPU, memory, storage, disk I/O
- Disk sizing and configuration, including JBOD versus RAID, SANs, virtualization, and disk sizing requirements in a cluster
- Network topologies: understand network usage in Hadoop (for both HDFS and MapReduce) and propose or identify key network design components for a given scenario
4: Hadoop Cluster Installation and Administration (25%)
- Given a scenario, identify how the cluster will handle disk and machine failures
- Analyze a logging configuration and logging configuration file format
- Understand the basics of Hadoop metrics and cluster health monitoring
- Identify the function and purpose of available tools for cluster monitoring
- Be able to install all the ecosystem components in CDH 5, including (but not limited to): Impala, Flume, Oozie, Hue, Manager, Sqoop, Hive, and Pig
- Identify the function and purpose of available tools for managing the Apache Hadoop file system
5: Resource Management (10%)
- Understand the overall design goals of each of the Hadoop schedulers
- Given a scenario, determine how the FIFO Scheduler allocates cluster resources
- Given a scenario, determine how the Fair Scheduler allocates cluster resources under YARN
- Given a scenario, determine how the Capacity Scheduler allocates cluster resources
6: Monitoring and Logging (15%)
- Understand the functions and features of Hadoop's metric collection abilities
- Analyze the NameNode and JobTracker Web UIs
- Understand how to monitor cluster daemons
- Identify and monitor CPU usage on master nodes
- Describe how to monitor swap and memory allocation on all nodes
- Identify how to view and manage Hadoop's log files
- Interpret a log file
datameer | Datameer for Data Analysts | 14 hours
Datameer is a business intelligence and analytics platform built on Hadoop. It allows end-users to access, explore and correlate large-scale, structured, semi-structured and unstructured data in an easy-to-use fashion.
In this instructor-led, live training, participants will learn how to use Datameer to overcome Hadoop's steep learning curve as they step through the setup and analysis of a series of big data sources.
By the end of this training, participants will be able to:
- Create, curate, and interactively explore an enterprise data lake
- Access business intelligence data warehouses, transactional databases and other analytic stores
- Use a spreadsheet user interface to design end-to-end data processing pipelines
- Access pre-built functions to explore complex data relationships
- Use drag-and-drop wizards to visualize data and create dashboards
- Use tables, charts, graphs, and maps to analyze query results
Audience: data analysts
Format of the course: part lecture, part discussion, exercises and heavy hands-on practice
To request a customized course outline for this training, please contact us.
hadoopforprojectmgrs | Hadoop for Project Managers | 14 hours
As more and more software and IT projects migrate from local processing and data management to distributed processing and big data storage, project managers are finding the need to upgrade their knowledge and skills to grasp the concepts and practices relevant to Big Data projects and opportunities. This course introduces project managers to the most popular Big Data processing framework: Hadoop.
In this instructor-led training, participants will learn the core components of the Hadoop ecosystem and how these technologies can be used to solve large-scale problems. In learning these foundations, participants will also improve their ability to communicate with the developers and implementers of these systems, as well as with the data scientists and analysts that many IT projects involve.
Audience: project managers wishing to implement Hadoop into their existing development or IT infrastructure; project managers needing to communicate with cross-functional teams that include big data engineers, data scientists and business analysts
Format of the course: part lecture, part discussion, exercises and heavy hands-on practice
Outline:
- Introduction: why and how project teams adopt Hadoop; how it all started; the project manager's role in Hadoop projects
- Understanding Hadoop's architecture and key concepts: HDFS; MapReduce; other pieces of the Hadoop ecosystem
- What constitutes Big Data?
- Different approaches to storing Big Data
- HDFS (Hadoop Distributed File System) as the foundation
- How Big Data is processed: the power of distributed processing
- Processing data with MapReduce: how data is picked apart step by step
- The role of clustering in large-scale distributed processing: architectural overview; clustering approaches
- Clustering your data and processes with YARN
- The role of non-relational databases in Big Data storage
- Working with Hadoop's non-relational database: HBase
- Data warehousing architectural overview
- Managing your data warehouse with Hive
- Running Hadoop from shell scripts
- Working with Hadoop Streaming
- Other Hadoop tools and utilities
- Getting started on a Hadoop project: demystifying complexity
- Migrating an existing project to Hadoop: infrastructure considerations; scaling beyond your allocated resources
- Hadoop project stakeholders and their toolkits: developers, data scientists, business analysts and project managers
- Hadoop as a foundation for new technologies and approaches
- Closing remarks
HadoopDevAd | Hadoop for Developers and Administrators | 21 hours
Module 1. Introduction to Hadoop: the Hadoop Distributed File System (HDFS); the read path and the write path; managing filesystem metadata; the Namenode and the Datanode; Namenode high availability; Namenode federation; the command-line tools; understanding REST support
Module 2. Introduction to MapReduce: analyzing the data with Hadoop; the map and reduce pattern; Java MapReduce; scaling out; data flow; developing combiner functions; running a distributed MapReduce job
Module 3. Planning a Hadoop Cluster: picking a distribution and version of Hadoop; versions and features; hardware selection; master and worker hardware selection; cluster sizing; operating system selection and preparation; deployment layout; setting up users, groups, and privileges; disk configuration; network design
Module 4. Installation and Configuration: installing Hadoop; configuration: an overview; the Hadoop XML configuration files; environment variables and shell scripts; logging configuration; managing HDFS; optimization and tuning; formatting the Namenode; creating a /tmp directory; Namenode high availability; the fencing options; automatic failover configuration; formatting and bootstrapping the Namenodes; Namenode federation
Module 5. Understanding Hadoop I/O: data integrity in HDFS; understanding codecs; compression and input splits; using compression in MapReduce; the serialization mechanism; file-based data structures; the SequenceFile format; other file formats and column-oriented formats
Module 6. Developing a MapReduce Application: the Configuration API; setting up the development environment; managing configuration; GenericOptionsParser, Tool, and ToolRunner; writing a unit test with MRUnit (see the sketch after this outline); the mapper and reducer; running locally on test data; testing the driver; running on a cluster; packaging and launching a job; the MapReduce web UI; tuning a job
Module 7. Identity, Authentication, and Authorization: managing identity; Kerberos and Hadoop; understanding authorization
Module 8. Resource Management: what is resource management?; HDFS quotas; MapReduce schedulers; anatomy of a YARN application run; resource requests; application lifespan; YARN compared to MapReduce 1; scheduling in YARN; scheduler options; Capacity Scheduler configuration; Fair Scheduler configuration; delay scheduling; dominant resource fairness
Module 9. MapReduce Types and Formats: MapReduce types; the default MapReduce job; defining the input formats; managing input splits and records; text input and binary input; managing multiple inputs; database input (and output); output formats; text output and binary output; managing multiple outputs; the database output
Module 10. Using MapReduce Features: using counters; reading built-in counters; user-defined Java counters; understanding sorting; using the distributed cache
Module 11. Cluster Maintenance and Troubleshooting: managing Hadoop processes; starting and stopping processes with init scripts; starting and stopping processes manually; HDFS maintenance tasks; adding a Datanode; decommissioning a Datanode; checking filesystem integrity with fsck; balancing HDFS block data; dealing with a failed disk; MapReduce maintenance tasks; killing a MapReduce job; killing a MapReduce task; managing resource exhaustion
Module 12. Monitoring: the available Hadoop metrics; the role of SNMP; health monitoring; host-level checks; HDFS checks; MapReduce checks
Module 13. Backup and Recovery: data backup; distributed copy (distcp); parallel data ingestion; Namenode metadata
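Module 6 mentions writing a unit test with MRUnit. A minimal sketch follows, testing a WordCount-style mapper (such as the TokenizerMapper shown earlier on this page); it assumes the mrunit and junit dependencies are available and that the mapper's type parameters match the driver's.

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mrunit.mapreduce.MapDriver;
    import org.junit.Test;

    public class WordCountMapperTest {
        @Test
        public void emitsOnePerToken() throws Exception {
            // MRUnit runs the mapper in isolation, with no cluster needed.
            MapDriver<LongWritable, Text, Text, IntWritable> driver =
                MapDriver.newMapDriver(new WordCount.TokenizerMapper());
            driver.withInput(new LongWritable(0), new Text("cat dog cat"))
                  .withOutput(new Text("cat"), new IntWritable(1))   // expected, in order
                  .withOutput(new Text("dog"), new IntWritable(1))
                  .withOutput(new Text("cat"), new IntWritable(1))
                  .runTest();
        }
    }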
hadoopdeva | Advanced Hadoop for Developers | 21 hours
Apache Hadoop is one of the most popular frameworks for processing Big Data on clusters of servers. This course delves into data management in HDFS and advanced Pig, Hive, and HBase. These advanced programming techniques will be beneficial to experienced Hadoop developers.
Audience: developers. Duration: three days. Format: lectures (50%) and hands-on labs (50%).
Section 1: Data Management in HDFS: various data formats (JSON / Avro / Parquet); compression schemes; data masking; labs: analyzing different data formats, enabling compression
Section 2: Advanced Pig: user-defined functions; introduction to Pig libraries (ElephantBird / DataFu); loading complex structured data using Pig; Pig tuning; labs: advanced Pig scripting, parsing complex data types
Section 3: Advanced Hive: user-defined functions; compressed tables; Hive performance tuning; labs: creating compressed tables, evaluating table formats and configuration
Section 4: Advanced HBase: advanced schema modelling; compression; bulk data ingest; wide-table / tall-table comparison; HBase and Pig; HBase and Hive; HBase performance tuning; labs: tuning HBase, accessing HBase data from Pig and Hive, using Phoenix for data modeling
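Section 3 covers Hive user-defined functions. Below is a minimal sketch in the classic UDF style (org.apache.hadoop.hive.ql.exec.UDF); the class and function names are illustrative, and newer Hive versions favor the GenericUDF interface for production code.

    import org.apache.hadoop.hive.ql.exec.UDF;
    import org.apache.hadoop.io.Text;

    // After packaging into a jar: ADD JAR ...; then
    //   CREATE TEMPORARY FUNCTION my_lower AS 'Lower';
    // and the function is usable in HiveQL: SELECT my_lower(name) FROM people;
    public final class Lower extends UDF {
        public Text evaluate(final Text s) {
            if (s == null) return null;                    // pass NULLs through
            return new Text(s.toString().toLowerCase());   // lowercase the input
        }
    }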
IntroToAvro | Apache Avro: Data serialization for distributed applications | 14 hours
This course is intended for developers.
Format of the course: lectures, hands-on practice, small tests along the way to gauge understanding
Outline:
- Principles of distributed computing: Apache Spark; Hadoop
- Principles of data serialization: how a data object is passed over the network; serialization of objects; serialization approaches: Thrift, Protocol Buffers, Apache Avro
- Avro data structure: size, speed and format characteristics; persistent data storage; integration with dynamic languages; dynamic typing; schemas; untagged data; change management
- Data serialization and distributed computing: Avro as a subproject of Hadoop; Java serialization; Hadoop serialization; Avro serialization
- Using Avro with: Hive (AvroSerDe); Pig (AvroStorage)
- Porting existing RPC frameworks
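To make the schema and serialization topics concrete, here is a small round-trip sketch using Avro's generic API; the User schema is a made-up example.

    import java.io.File;
    import org.apache.avro.Schema;
    import org.apache.avro.file.DataFileReader;
    import org.apache.avro.file.DataFileWriter;
    import org.apache.avro.generic.GenericData;
    import org.apache.avro.generic.GenericDatumReader;
    import org.apache.avro.generic.GenericDatumWriter;
    import org.apache.avro.generic.GenericRecord;

    public class AvroRoundTrip {
        public static void main(String[] args) throws Exception {
            // Schemas are JSON; records are untagged, so the schema travels with the file.
            Schema schema = new Schema.Parser().parse(
                "{\"type\":\"record\",\"name\":\"User\",\"fields\":["
                + "{\"name\":\"name\",\"type\":\"string\"},"
                + "{\"name\":\"age\",\"type\":\"int\"}]}");

            GenericRecord user = new GenericData.Record(schema);
            user.put("name", "Ada");
            user.put("age", 36);

            File file = new File("users.avro");
            try (DataFileWriter<GenericRecord> writer =
                     new DataFileWriter<>(new GenericDatumWriter<GenericRecord>(schema))) {
                writer.create(schema, file);   // embeds the schema in the container file
                writer.append(user);
            }

            try (DataFileReader<GenericRecord> reader =
                     new DataFileReader<>(file, new GenericDatumReader<GenericRecord>())) {
                for (GenericRecord r : reader) System.out.println(r);
            }
        }
    }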
hadoopdev | Hadoop for Developers (4 days) | 28 hours
Apache Hadoop is the most popular framework for processing Big Data on clusters of servers. This course will introduce a developer to the various components (HDFS, MapReduce, Pig, Hive and HBase) of the Hadoop ecosystem.
Section 1: Introduction to Hadoop: Hadoop history, concepts; ecosystem; distributions; high-level architecture; Hadoop myths; Hadoop challenges; hardware / software; lab: first look at Hadoop
Section 2: HDFS: design and architecture; concepts (horizontal scaling, replication, data locality, rack awareness); daemons: Namenode, Secondary Namenode, Datanode; communications / heartbeats; data integrity; read / write path; Namenode High Availability (HA), Federation; labs: interacting with HDFS (see the sketch after this outline)
Section 3: MapReduce: concepts and architecture; daemons (MRv1): JobTracker / TaskTracker; phases: driver, mapper, shuffle/sort, reducer; MapReduce version 1 and version 2 (YARN); internals of MapReduce; introduction to the Java MapReduce program; labs: running a sample MapReduce program
Section 4: Pig: Pig vs Java MapReduce; Pig job flow; the Pig Latin language; ETL with Pig; transformations and joins; user-defined functions (UDFs); labs: writing Pig scripts to analyze data
Section 5: Hive: architecture and design; data types; SQL support in Hive; creating Hive tables and querying; partitions; joins; text processing; labs: various labs on processing data with Hive
Section 6: HBase: concepts and architecture; HBase vs RDBMS vs Cassandra; the HBase Java API; time series data on HBase; schema design; labs: interacting with HBase using the shell, programming against the HBase Java API, schema design exercise
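To accompany the Section 2 HDFS labs, here is a minimal sketch of the write and read paths through the Java FileSystem API; the fs.defaultFS address and file path are placeholders for a lab cluster.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsRoundTrip {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            conf.set("fs.defaultFS", "hdfs://localhost:9000");  // NameNode address (placeholder)
            FileSystem fs = FileSystem.get(conf);

            Path path = new Path("/user/demo/hello.txt");
            try (FSDataOutputStream out = fs.create(path, true)) {  // write path
                out.writeBytes("hello, HDFS\n");
            }
            try (BufferedReader in = new BufferedReader(
                     new InputStreamReader(fs.open(path)))) {       // read path
                System.out.println(in.readLine());
            }
            // Replication factor illustrates the horizontal-scaling concepts above.
            System.out.println("replication: " + fs.getFileStatus(path).getReplication());
        }
    }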
BigData_ | A practical introduction to Data Analysis and Big Data | 35 hours
After completing this training, participants will have a practical, real-world understanding of Big Data and its related technologies, methods and tools. Participants will have the opportunity to put this knowledge into practice through hands-on exercises. Group interaction and instructor feedback are an important component of the class.
The course starts with an introduction to the elementary concepts of Big Data, then progresses to the programming languages and methodologies used to perform data analysis, and finally discusses the tools and infrastructure that enable Big Data storage, distributed processing, and scalability.
Audience: developers / programmers, IT consultants
Format of the course: part lecture, part discussion, hands-on practice, occasional tests to gauge progress
Outline:
- Introduction to data analysis and Big Data: what makes Big Data "big"?; velocity, volume, variety, veracity (the four Vs); limits of traditional data processing; distributed processing; statistical analysis; types of machine learning analysis; data visualization
- Languages used for data analysis: R (why R for data analysis?; data manipulation, calculation and graphical display); Python (why Python for data analysis?; manipulating, processing, cleaning, and crunching data)
- Approaches to data analysis: statistical analysis; time series analysis; forecasting with correlation and regression models; inferential statistics (estimation); descriptive statistics in Big Data sets (e.g. calculating the mean)
- Machine learning: supervised vs unsupervised learning; classification and clustering; estimating the cost of specific methods; filtering
- Natural language processing: processing text; understanding the meaning of text; automatic text generation; sentiment analysis / topic analysis
- Computer vision: acquiring, processing, analyzing, and understanding images; reconstructing, interpreting and understanding 3D scenes; using image data to make decisions
- Big Data infrastructure: data storage; relational databases (SQL): MySQL, Postgres, Oracle; non-relational databases (NoSQL): Cassandra, MongoDB, Neo4j; understanding the nuances; hierarchical databases; object-oriented databases; document-oriented databases; graph-oriented databases; others
- Distributed processing: Hadoop (HDFS as a distributed file system; MapReduce for distributed processing); Spark (the all-in-one in-memory cluster computing framework for large-scale data processing; structured streaming; Spark SQL; the MLlib machine learning library; graph processing with GraphX)
- Scalability: public clouds (AWS, Google, Alibaba Cloud, etc.); private clouds (OpenStack, Cloud Foundry, etc.); auto-scalability
- Choosing the right solution for the problem
- The future of Big Data
- Closing remarks
hbasedev | HBase for Developers | 21 hours
This course introduces HBase, a NoSQL store on top of Hadoop. The course is intended for developers who will be using HBase to develop applications, and administrators who will manage HBase clusters. We will walk a developer through HBase architecture, data modelling and application development on HBase. The course also discusses using MapReduce with HBase and some administration topics related to performance optimization. It is very hands-on, with lots of lab exercises.
Duration: 3 days. Audience: developers and administrators.
Section 1: Introduction to Big Data & NoSQL: the Big Data ecosystem; NoSQL overview; the CAP theorem; when NoSQL is appropriate; columnar storage; HBase and NoSQL
Section 2: HBase Intro: concepts and design; architecture (HMaster and Region Server); data integrity; the HBase ecosystem; lab: exploring HBase
Section 3: HBase Data Model: namespaces, tables and regions; rows, columns, column families, versions; HBase shell and admin commands; lab: HBase shell
Section 4: Accessing HBase Using the Java API: introduction to the Java API; read / write path; time series data; scans; MapReduce; filters; counters; co-processors; labs (multiple): using the HBase Java API to implement time series, MapReduce, filters and counters (see the sketch after this outline)
Section 5: HBase Schema Design (group session): students are presented with real-world use cases; students work in groups to come up with design solutions; discuss / critique and learn from multiple designs; lab: implement a scenario in HBase
Section 6: HBase Internals: understanding HBase under the hood; Memfile / HFile / WAL; HDFS storage; compactions; splits; Bloom filters; caches; diagnostics
Section 7: HBase Installation and Configuration: hardware selection; install methods; common configurations; lab: installing HBase
Section 8: The HBase Ecosystem: developing applications using HBase; interacting with the rest of the Hadoop stack (MapReduce, Pig, Hive); frameworks around HBase; advanced concepts (co-processors); labs: writing HBase applications
Section 9: Monitoring and Best Practices: monitoring tools and practices; optimizing HBase; HBase in the cloud; real-world use cases of HBase; labs: checking HBase vitals
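A minimal sketch of the Section 4 read/write path through the HBase Java API follows; the "sensor" table and "d" column family are hypothetical and assumed to already exist, and the client configuration is read from hbase-site.xml on the classpath.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class HBaseReadWrite {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();  // reads hbase-site.xml
            try (Connection connection = ConnectionFactory.createConnection(conf);
                 Table table = connection.getTable(TableName.valueOf("sensor"))) {
                // Row keys for time series data often combine an entity id and a timestamp.
                byte[] rowKey = Bytes.toBytes("device42#9999999999");
                Put put = new Put(rowKey);
                put.addColumn(Bytes.toBytes("d"), Bytes.toBytes("temp"), Bytes.toBytes("21.5"));
                table.put(put);                                  // write

                Result result = table.get(new Get(rowKey));      // read back
                byte[] value = result.getValue(Bytes.toBytes("d"), Bytes.toBytes("temp"));
                System.out.println(Bytes.toString(value));
            }
        }
    }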
nifi | Apache NiFi for Administrators | 21 hours
Apache NiFi (Hortonworks DataFlow) is a real-time integrated data logistics and simple event processing platform that enables the moving, tracking and automation of data between systems. It is written using flow-based programming and provides a web-based user interface to manage dataflows in real time.
In this instructor-led, live training, participants will learn how to deploy and manage Apache NiFi in a live lab environment.
By the end of this training, participants will be able to:
- Install and configure Apache NiFi
- Source, transform and manage data from disparate, distributed data sources, including databases and big data lakes
- Automate dataflows
- Enable streaming analytics
- Apply various approaches for data ingestion
- Transform Big Data into business insights
Audience: system administrators, data engineers, developers, DevOps
Format of the course: part lecture, part discussion, exercises and heavy hands-on practice
Outline: introduction to Apache NiFi; data at rest vs data in motion; overview of big data and Apache Hadoop (HDFS and MapReduce architecture); installing and configuring NiFi; cluster integration; the NiFi FlowFile Processor; the NiFi Flow Controller; database aggregating, splitting and transforming; troubleshooting; closing remarks
storm | Apache Storm | 28 hours
Apache Storm is a distributed, real-time computation engine used for enabling real-time business intelligence. It does so by enabling applications to reliably process unbounded streams of data (a.k.a. stream processing). "Storm is for real-time processing what Hadoop is for batch processing!"
In this instructor-led live training, participants will learn how to install and configure Apache Storm, then develop and deploy an Apache Storm application for processing big data in real time.
Some of the topics included in this training:
- Apache Storm in the context of Hadoop
- Working with unbounded data
- Continuous computation
- Real-time analytics
- Distributed RPC and ETL processing
Request this course now!
Audience: software and ETL developers, mainframe professionals, data scientists, big data analysts, Hadoop professionals
Format of the course: part lecture, part discussion, exercises and heavy hands-on practice
Request a customized course outline for this training!
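As a preview of topology development, here is a minimal self-contained sketch against the classic Storm Java API (org.apache.storm, 1.x-style), run in local mode; the spout and bolt are toy stand-ins for ones developed in class.

    import java.util.Map;
    import org.apache.storm.Config;
    import org.apache.storm.LocalCluster;
    import org.apache.storm.spout.SpoutOutputCollector;
    import org.apache.storm.task.TopologyContext;
    import org.apache.storm.topology.BasicOutputCollector;
    import org.apache.storm.topology.OutputFieldsDeclarer;
    import org.apache.storm.topology.TopologyBuilder;
    import org.apache.storm.topology.base.BaseBasicBolt;
    import org.apache.storm.topology.base.BaseRichSpout;
    import org.apache.storm.tuple.Fields;
    import org.apache.storm.tuple.Tuple;
    import org.apache.storm.tuple.Values;
    import org.apache.storm.utils.Utils;

    public class SimpleTopology {
        // Spout: emits an unbounded stream of words.
        public static class WordSpout extends BaseRichSpout {
            private SpoutOutputCollector collector;
            private final String[] words = {"hadoop", "storm", "stream"};
            private int i = 0;
            public void open(Map conf, TopologyContext ctx, SpoutOutputCollector collector) {
                this.collector = collector;
            }
            public void nextTuple() {
                Utils.sleep(100);
                collector.emit(new Values(words[i++ % words.length]));
            }
            public void declareOutputFields(OutputFieldsDeclarer d) {
                d.declare(new Fields("word"));
            }
        }

        // Bolt: continuous computation over the stream (here it just prints).
        public static class PrintBolt extends BaseBasicBolt {
            public void execute(Tuple tuple, BasicOutputCollector collector) {
                System.out.println(tuple.getStringByField("word"));
            }
            public void declareOutputFields(OutputFieldsDeclarer d) {}
        }

        public static void main(String[] args) throws Exception {
            TopologyBuilder builder = new TopologyBuilder();
            builder.setSpout("words", new WordSpout());
            builder.setBolt("print", new PrintBolt()).shuffleGrouping("words");
            // Local mode for labs; StormSubmitter would deploy to a real cluster.
            LocalCluster cluster = new LocalCluster();
            cluster.submitTopology("demo", new Config(), builder.createTopology());
            Utils.sleep(10000);
            cluster.shutdown();
        }
    }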
nifidev | Apache NiFi for Developers | 7 hours
Apache NiFi (Hortonworks DataFlow) is a real-time integrated data logistics and simple event processing platform that enables the moving, tracking and automation of data between systems. It is written using flow-based programming and provides a web-based user interface to manage dataflows in real time.
In this instructor-led, live training, participants will learn the fundamentals of flow-based programming as they develop a number of demo extensions, components and processors using Apache NiFi.
By the end of this training, participants will be able to:
- Understand NiFi's architecture and dataflow concepts
- Develop extensions using NiFi and third-party APIs
- Develop their own custom Apache NiFi processor
- Ingest and process real-time data from disparate and uncommon file formats and data sources
Audience: developers, data engineers
Format of the course: part lecture, part discussion, exercises and heavy hands-on practice
Outline: introduction (data at rest vs data in motion); overview of big data tools and technologies: Hadoop (HDFS and MapReduce) and Spark; installing and configuring NiFi; overview of NiFi architecture; development approaches (application development tools and mindset; Extract, Transform, and Load (ETL) tools and mindset); design considerations; components, events, and processor patterns; exercise: streaming data feeds into HDFS; error handling; controller services; exercise: ingesting data from IoT devices using web-based APIs; exercise: developing a custom Apache NiFi processor using JSON; testing and troubleshooting; contributing to Apache NiFi; closing remarks
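The custom-processor exercise can be previewed with the skeleton below: a NiFi processor that simply routes incoming FlowFiles to success. It is a sketch only, assuming the nifi-api dependency; a real processor would also declare property descriptors and read or rewrite FlowFile content.

    import java.util.Collections;
    import java.util.Set;
    import org.apache.nifi.flowfile.FlowFile;
    import org.apache.nifi.processor.AbstractProcessor;
    import org.apache.nifi.processor.ProcessContext;
    import org.apache.nifi.processor.ProcessSession;
    import org.apache.nifi.processor.Relationship;
    import org.apache.nifi.processor.exception.ProcessException;

    public class PassThroughProcessor extends AbstractProcessor {
        public static final Relationship REL_SUCCESS = new Relationship.Builder()
            .name("success")
            .description("FlowFiles that were processed")
            .build();

        @Override
        public Set<Relationship> getRelationships() {
            return Collections.singleton(REL_SUCCESS);
        }

        @Override
        public void onTrigger(ProcessContext context, ProcessSession session)
                throws ProcessException {
            FlowFile flowFile = session.get();        // pull the next FlowFile, if any
            if (flowFile == null) return;
            // A real processor would transform content here via session.read/write.
            session.transfer(flowFile, REL_SUCCESS);  // route downstream
        }
    }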
hadoopba | Hadoop for Business Analysts | 21 hours
Apache Hadoop is the most popular framework for processing Big Data. Hadoop provides rich and deep analytics capability, and it is making inroads into the traditional BI analytics world. This course will introduce an analyst to the core components of the Hadoop ecosystem and its analytics.
Audience: business analysts. Duration: three days. Format: lectures and hands-on labs.
Section 1: Introduction to Hadoop: Hadoop history, concepts; ecosystem; distributions; high-level architecture; Hadoop myths; Hadoop challenges; hardware / software; labs: first look at Hadoop
Section 2: HDFS Overview: concepts (horizontal scaling, replication, data locality, rack awareness); architecture (Namenode, Secondary Namenode, Datanode); data integrity; the future of HDFS: Namenode HA, Federation; labs: interacting with HDFS
Section 3: MapReduce Overview: MapReduce concepts; daemons: JobTracker / TaskTracker; phases: driver, mapper, shuffle/sort, reducer; thinking in MapReduce; the future of MapReduce (YARN); labs: running a MapReduce program
Section 4: Pig: Pig vs Java MapReduce; the Pig Latin language; user-defined functions; understanding Pig job flow; basic data analysis with Pig; complex data analysis with Pig; multiple datasets with Pig; advanced concepts; lab: writing Pig scripts to analyze / transform data
Section 5: Hive: Hive concepts; architecture; SQL support in Hive; data types; table creation and queries; Hive data management; partitions and joins; text analytics; labs (multiple): creating Hive tables and running queries, joins, using partitions, using text analytics functions
Section 6: BI Tools for Hadoop: BI tools and Hadoop; overview of the current BI tools landscape; choosing the best tool for the job
hadoopadm1 | Hadoop For Administrators | 21 hours
Apache Hadoop is the most popular framework for processing Big Data on clusters of servers. In this three-day (optionally, four-day) course, attendees will learn about the business benefits and use cases for Hadoop and its ecosystem, how to plan cluster deployment and growth, and how to install, maintain, monitor, troubleshoot and optimize Hadoop. They will also practice cluster bulk data loads, get familiar with various Hadoop distributions, and practice installing and managing Hadoop ecosystem tools. The course finishes off with a discussion of securing the cluster with Kerberos.
"…The materials were very well prepared and covered thoroughly. The lab was very helpful and well organized" - Andrew Nguyen, Principal Integration DW Engineer, Microsoft Online Advertising
Audience: Hadoop administrators
Format: lectures and hands-on labs, approximate balance 60% lectures, 40% labs
Outline:
- Introduction: Hadoop history, concepts; ecosystem; distributions; high-level architecture; Hadoop myths; Hadoop challenges (hardware / software); labs: discuss your Big Data projects and problems
- Planning and installation: selecting software and Hadoop distributions; sizing the cluster, planning for growth; selecting hardware and network; rack topology; installation; multi-tenancy; directory structure, logs; benchmarking; labs: cluster install, running performance benchmarks
- HDFS operations: concepts (horizontal scaling, replication, data locality, rack awareness); nodes and daemons (NameNode, Secondary NameNode, HA Standby NameNode, DataNode); health monitoring; command-line and browser-based administration; adding storage, replacing defective drives; labs: getting familiar with HDFS command lines
- Data ingestion: Flume for logs and other data ingestion into HDFS; Sqoop for importing from SQL databases to HDFS, as well as exporting back to SQL; Hadoop data warehousing with Hive; copying data between clusters (distcp); using S3 as complementary to HDFS; data ingestion best practices and architectures; labs: setting up and using Flume, the same for Sqoop
- MapReduce operations and administration: parallel computing before MapReduce: comparing HPC and Hadoop administration; MapReduce cluster loads; nodes and daemons (JobTracker, TaskTracker); a MapReduce UI walk-through; MapReduce configuration; job configuration; optimizing MapReduce; fool-proofing MR: what to tell your programmers; labs: running MapReduce examples
- YARN: new architecture and new capabilities: YARN design goals and implementation architecture; new actors: ResourceManager, NodeManager, Application Master; installing YARN; job scheduling under YARN; labs: investigating job scheduling
- Advanced topics: hardware monitoring; cluster monitoring; adding and removing servers, upgrading Hadoop; backup, recovery and business continuity planning; Oozie job workflows; Hadoop high availability (HA); Hadoop Federation; securing your cluster with Kerberos; labs: setting up monitoring
- Optional tracks: Cloudera Manager for cluster administration, monitoring, and routine tasks (installation and use; in this track all exercises and labs are performed within the Cloudera distribution environment, CDH5); Ambari for cluster administration, monitoring, and routine tasks (installation and use; in this track all exercises and labs are performed within the Ambari cluster manager and Hortonworks Data Platform, HDP 2.0)

Upcoming Courses

Course | Venue | Date | Price [remote / classroom]
Model MapReduce and Apache Hadoop | Shanghai - 688 Plaza | Monday, 2017-12-04 09:30 | ¥18180 / ¥20180


