Hadoop Hands-on

Tuesday, 28.8.2012, 13:00 - 18:30

Objectives

The focus of this session is the Hadoop ecosystem and the interplay of its many specialized tools for data analysis. We also look into the Java API, though not in as much depth as a pure developer class would; instead, we try to convey the big picture of Hadoop in the context of scientific computing. You will learn what Hadoop can be used for, and what it is not intended for. To that end we discuss the underlying principles as well as the programming model and the installation and configuration procedures. You will try some of the commands on a real cluster, and live demos will give you an impression of the many features provided by the web-based user interface.

Prerequisites

  • A basic understanding of Unix/Linux system administration is needed for the exercises.
  • No prior knowledge of Hadoop is required; we go through the basic concepts.
  • A personal notebook is recommended for this workshop.
  • If you use Windows, please install PuTTY and VMware Player.

Recommended Material

Books

  • Hadoop: The Definitive Guide, Tom White [http://www.amazon.de/Hadoop-Definitive-Guide-Tom-White/dp/1449311520/ref=sr_1_fkmr1_1?ie=UTF8&qid=1345918087&sr=8-1-fkmr1]
  • Hadoop in Action, Chuck Lam [http://www.amazon.de/Hadoop-Action-Chuck-Lam/dp/1935182196/ref=sr_1_1?s=books-intl-de&ie=UTF8&qid=1345918219&sr=1-1]
  • Data-Intensive Text Processing with MapReduce, Jimmy Lin and Chris Dyer [http://www.amazon.de/Data-Intensive-Processing-Mapreduce-Author-Paperback/dp/B006V38ZCK/ref=sr_1_2?ie=UTF8&qid=1345918261&sr=8-2]

Scripts from last year

  • Introduction [http://gridka-school.scc.kit.edu/2011/downloads/Hadoop_tutorial-1-Introduction.pdf]
  • MapReduce [http://gridka-school.scc.kit.edu/2011/downloads/Hadoop_tutorial-2_4-MapReduce.pdf]
  • Pig [http://gridka-school.scc.kit.edu/2011/downloads/Hadoop_tutorial-5-Pig.pdf]
  • Hand-out [http://gridka-school.scc.kit.edu/2011/downloads/Hadoop_tutorial-Hand_outs.pdf]

Content

Session A

  • The Hadoop ecosystem: HDFS, MapReduce, Hue, Sqoop, Hive, Pig, HBase, Flume, Oozie
  • What are CDH and Cloudera Manager?
  • Installation, startup, and basic configuration of a small cluster (a tiny client-side configuration sketch follows this list)
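
Hadoop takes its settings from layered XML files (built-in defaults such as core-default.xml, overridden by site files such as core-site.xml). As a minimal sketch of what basic configuration means from the client side, assuming CDH4 / Hadoop 2.x property names, a small Java program can print the effective values:

    import org.apache.hadoop.conf.Configuration;

    public class ShowConf {
        public static void main(String[] args) {
            // new Configuration() loads core-default.xml plus any
            // core-site.xml found on the classpath.
            Configuration conf = new Configuration();
            // Falls back to file:/// when no cluster is configured.
            System.out.println("fs.defaultFS    = " + conf.get("fs.defaultFS"));
            // Lives in hdfs-default.xml/hdfs-site.xml; may print null here.
            System.out.println("dfs.replication = " + conf.get("dfs.replication"));
        }
    }

Pointing fs.defaultFS at the cluster's NameNode is essentially what turns a plain JVM into an HDFS client.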

Session B

  • HDFS introduction (NameNode, DataNode, Secondary NameNode)
  • How is data stored in HDFS?
  • Properties and configuration settings relevant for working efficiently with HDFS
  • HDFS commands (a short Java FileSystem sketch follows this list)
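
The HDFS shell commands (hadoop fs -put, -ls, -cat, and so on) have direct counterparts in the Java FileSystem API. The following is a hedged sketch rather than session material; the NameNode address and paths are placeholder assumptions, and on a configured cluster fs.defaultFS would come from core-site.xml:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsTour {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            conf.set("fs.defaultFS", "hdfs://namenode.example.org:8020"); // placeholder
            FileSystem fs = FileSystem.get(conf);

            // Write a small file. HDFS splits files into blocks and replicates
            // them across DataNodes; the NameNode keeps only the metadata.
            Path file = new Path("/user/student/hello.txt");
            try (FSDataOutputStream out = fs.create(file)) {
                out.writeUTF("Hello HDFS");
            }

            // List the directory, including per-file replication and block size.
            for (FileStatus status : fs.listStatus(new Path("/user/student"))) {
                System.out.printf("%s repl=%d blocksize=%d%n",
                        status.getPath(), status.getReplication(), status.getBlockSize());
            }

            // Read the file back.
            try (FSDataInputStream in = fs.open(file)) {
                System.out.println(in.readUTF());
            }
            fs.close();
        }
    }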

Session C

  • Working with the web-based GUI
  • Running and tracking jobs
  • Java API and samples (see the word-count sketch after this list)
  • Streaming API sample
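
The classic first example for the Java API is word count. The sketch below follows the standard pattern of the org.apache.hadoop.mapreduce API as of CDH4 (Hadoop 2.x); it is a minimal illustration, not the exact sample used in the session, and the input and output paths are supplied as arguments:

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

        // Map phase: emit (word, 1) for every token of the input line.
        public static class TokenizerMapper
                extends Mapper<Object, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();

            @Override
            protected void map(Object key, Text value, Context context)
                    throws IOException, InterruptedException {
                StringTokenizer itr = new StringTokenizer(value.toString());
                while (itr.hasMoreTokens()) {
                    word.set(itr.nextToken());
                    context.write(word, ONE);
                }
            }
        }

        // Reduce phase: sum all counts collected for a word.
        public static class IntSumReducer
                extends Reducer<Text, IntWritable, Text, IntWritable> {
            private final IntWritable result = new IntWritable();

            @Override
            protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable val : values) {
                    sum += val.get();
                }
                result.set(sum);
                context.write(key, result);
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "word count");
            job.setJarByClass(WordCount.class);
            job.setMapperClass(TokenizerMapper.class);
            job.setCombinerClass(IntSumReducer.class); // local pre-aggregation
            job.setReducerClass(IntSumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1])); // must not exist yet
            // waitForCompletion(true) prints progress; the same numbers show
            // up in the job tracking web UI used in this session.
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

The Streaming API runs the same map/shuffle/reduce model, but with the mapper and reducer supplied as external programs that read records from stdin and write key/value lines to stdout, which is what the streaming sample demonstrates.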

Session D

  • MapReduce details, Java API and Streaming (an illustrative partitioner sketch follows this list)
  • HDFS details, using the web-based GUI for deeper insights
  • Breaking a cluster and healing it
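
One MapReduce detail typically covered at this depth is how intermediate keys are routed to reducers. By default this is HashPartitioner (hashCode modulo the reducer count); the deliberately crude sketch below, an illustration rather than session code, sends keys sorting at or before "m" to the first reducer and everything else to the last one:

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Partitioner;

    public class AlphabetPartitioner extends Partitioner<Text, IntWritable> {
        @Override
        public int getPartition(Text key, IntWritable value, int numPartitions) {
            if (numPartitions < 2) {
                return 0; // a single reducer gets everything
            }
            String s = key.toString().toLowerCase();
            boolean firstHalf = !s.isEmpty() && s.charAt(0) <= 'm';
            return firstHalf ? 0 : numPartitions - 1;
        }
    }

A driver enables it with job.setPartitionerClass(AlphabetPartitioner.class); a skewed partitioner like this one is mainly useful for demonstrating reducer imbalance.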

Session E

  • Introduction to Hive and Sqoop
  • Data import via Sqoop
  • Hive scripts (a hedged JDBC example follows this list)
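
To give a flavour of what a Hive script does, here is a hedged Java sketch that submits HiveQL over JDBC. The HiveServer2 driver class and the jdbc:hive2 URL scheme are standard, but the host, credentials, table layout, and query are assumptions for illustration:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class HiveQuery {
        public static void main(String[] args) throws Exception {
            Class.forName("org.apache.hive.jdbc.HiveDriver");
            try (Connection conn = DriverManager.getConnection(
                         "jdbc:hive2://hive.example.org:10000/default", "student", "");
                 Statement stmt = conn.createStatement()) {

                // Typical flow after a Sqoop import: the imported files sit in
                // HDFS and are exposed to Hive as an external table.
                stmt.execute("CREATE EXTERNAL TABLE IF NOT EXISTS events "
                        + "(ts STRING, node STRING, value DOUBLE) "
                        + "ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' "
                        + "LOCATION '/user/student/events'");

                // Hive compiles this query down to MapReduce jobs on the cluster.
                try (ResultSet rs = stmt.executeQuery(
                             "SELECT node, COUNT(*) FROM events GROUP BY node")) {
                    while (rs.next()) {
                        System.out.println(rs.getString(1) + "\t" + rs.getLong(2));
                    }
                }
            }
        }
    }

The Sqoop step itself is a command-line call along the lines of sqoop import --connect <jdbc-url> --table <table> --target-dir /user/student/events, which copies a relational table into HDFS files.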

Session F (optional)

  • Serialisation and deserialisation (SerDe) and user-defined functions (UDF) with Hive (a minimal UDF sketch follows this list)
  • Workflows with Oozie
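
For a flavour of the UDF part, the sketch below is a minimal Hive function built on the classic org.apache.hadoop.hive.ql.exec.UDF base class; the example itself (lower-casing a string) is an assumption chosen for brevity, not taken from the session material:

    import org.apache.hadoop.hive.ql.exec.UDF;
    import org.apache.hadoop.io.Text;

    // Hive finds evaluate() by reflection, so no method is overridden here.
    public final class Lower extends UDF {
        public Text evaluate(Text input) {
            if (input == null) {
                return null; // SQL NULL stays NULL
            }
            return new Text(input.toString().toLowerCase());
        }
    }

Packaged into a JAR, such a function is registered in a Hive session with ADD JAR and CREATE TEMPORARY FUNCTION before it can be called in a query. A SerDe plays the complementary role on the storage side: it tells Hive how to turn the bytes of a file into rows and columns, and back.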