Tuesday, 28.8.2012, 13:00 - 18:30
The focus of this session is on the Hadoop ecosystem and the interplay of the many specialized tools for data analysis.
We look into the Java API as well, but in less detail than a pure developer course would. We will try to show the big picture
of Hadoop in the context of scientific computing. You will learn what Hadoop can be used for and what it is not intended
to be used for. To that end, we will discuss the underlying principles as well as the programming model and the installation and
configuration procedures. You will test some of the commands on a real cluster, and live demos will give you an idea of the many
features provided by the web-based user interface.
- Basic understanding of Unix/Linux OS management is needed to do the exercises.
- No prior knowledge of Hadoop is required, as we go through the basic concepts.
- For this workshop a personal notebook is recommended.
- If you use Windows, please install PuTTY and VMware Player.
- Hadoop: The Definitive Guide (Tom White)
- Hadoop in Action (Chuck Lam)
- Data-Intensive Text Processing with MapReduce (Jimmy Lin and Chris Dyer)
Course materials from last year
- The Hadoop ecosystem: HDFS, MR, HUE, Sqoop, Hive, Pig, HBase, Flume, Oozie
- What are CDH and Cloudera Manager?
- Installation, startup, and basic configuration of a small cluster
- HDFS intro (NameNode, DataNode, Secondary NameNode)
- How is data stored in HDFS?
- Properties and configuration settings relevant for working efficiently with HDFS
- HDFS commands (a sketch of the equivalent Java FileSystem calls follows this list)
- Working with the web-based GUI
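
As a taste of the exercises, the following sketch shows how typical HDFS shell commands map onto the Java FileSystem API. It assumes a core-site.xml on the classpath that points at the cluster; the paths and the file name are made up for the example.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsTour {
        public static void main(String[] args) throws Exception {
            // Picks up the cluster address from core-site.xml on the classpath.
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);

            // Equivalent of: hadoop fs -mkdir /user/demo/input   (path is made up)
            Path dir = new Path("/user/demo/input");
            fs.mkdirs(dir);

            // Equivalent of: hadoop fs -put data.txt /user/demo/input
            fs.copyFromLocalFile(new Path("data.txt"), dir);

            // Equivalent of: hadoop fs -ls /user/demo/input
            for (FileStatus st : fs.listStatus(dir)) {
                System.out.println(st.getPath() + "  " + st.getLen()
                        + " bytes  replication=" + st.getReplication());
            }
            fs.close();
        }
    }
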
- Running and tracking jobs
- Java API and samples (a WordCount sketch follows this list)
- Streaming API sample
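
The Java API samples revolve around the classic word count. Below is a minimal sketch using the org.apache.hadoop.mapreduce API; the sample actually shown in the session may differ in detail.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

        public static class TokenMapper
                extends Mapper<LongWritable, Text, Text, IntWritable> {
            private final static IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();

            @Override
            protected void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {
                // Emit (word, 1) for every token in the input line.
                for (String token : value.toString().split("\\s+")) {
                    if (!token.isEmpty()) {
                        word.set(token);
                        context.write(word, ONE);
                    }
                }
            }
        }

        public static class SumReducer
                extends Reducer<Text, IntWritable, Text, IntWritable> {
            @Override
            protected void reduce(Text key, Iterable<IntWritable> values,
                    Context context) throws IOException, InterruptedException {
                // Sum up all counts collected for this word.
                int sum = 0;
                for (IntWritable v : values) {
                    sum += v.get();
                }
                context.write(key, new IntWritable(sum));
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = new Job(new Configuration(), "wordcount");
            job.setJarByClass(WordCount.class);
            job.setMapperClass(TokenMapper.class);
            job.setCombinerClass(SumReducer.class);  // local pre-aggregation
            job.setReducerClass(SumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

Packaged into a jar, such a job would be submitted with something like hadoop jar wordcount.jar WordCount <input> <output>, and its progress can then be tracked in the web UI.
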
- MapReduce details, Java API and Streaming (a minimal streaming mapper sketch follows this list)
- HDFS details, using the web-based GUI for deeper insights
- Breaking a cluster and healing it
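
Hadoop Streaming runs any executable that reads lines from stdin and writes tab-separated key/value lines to stdout; in practice mappers and reducers are usually scripting-language programs. To stay in one language here, this sketch expresses the mapper half of the word count as a plain Java program; it illustrates the protocol, not the session's actual sample.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;

    // Streaming contract: lines in on stdin, "key<TAB>value" lines out on stdout.
    // The framework sorts by key between the map and reduce stages.
    public class StreamingWordCountMapper {
        public static void main(String[] args) throws Exception {
            BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
            String line;
            while ((line = in.readLine()) != null) {
                for (String token : line.split("\\s+")) {
                    if (!token.isEmpty()) {
                        System.out.println(token + "\t1");
                    }
                }
            }
        }
    }

Such a mapper is submitted through the hadoop-streaming jar with the -input, -output, -mapper, and -reducer options.
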
- Intro to Hive and Sqoop
- Data import via Sqoop
- Hive scripts (a JDBC-based sketch follows this list)
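
Hive scripts are normally run from the hive shell, but programmatic access over JDBC works as well and fits nicely after a Sqoop import (e.g. sqoop import --connect ... --table weblogs). The sketch below assumes a HiveServer on localhost:10000 and a hypothetical weblogs table; note that the driver class and URL shown belong to the original HiveServer, while HiveServer2 uses org.apache.hive.jdbc.HiveDriver and jdbc:hive2:// URLs.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class HiveQuery {
        public static void main(String[] args) throws Exception {
            // HiveServer1-era driver; HiveServer2 uses a different class and URL scheme.
            Class.forName("org.apache.hadoop.hive.jdbc.HiveDriver");
            Connection con = DriverManager.getConnection(
                    "jdbc:hive://localhost:10000/default", "", "");
            Statement stmt = con.createStatement();

            // 'weblogs' is a hypothetical table, e.g. one just imported via Sqoop.
            ResultSet rs = stmt.executeQuery(
                    "SELECT status, COUNT(*) FROM weblogs GROUP BY status");
            while (rs.next()) {
                System.out.println(rs.getString(1) + "\t" + rs.getLong(2));
            }
            con.close();
        }
    }
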
Session F (optional)
- Serialisation and deserialisation (SerDe) and user-defined functions (UDFs) with Hive (a UDF sketch follows below)
- Workflows with Oozie
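
To give an idea of the UDF part: a classic Hive UDF is a Java class that extends the UDF base class and provides an evaluate() method, which Hive resolves by reflection. A minimal sketch; the class name, jar name, and table in the comment are made up.

    import org.apache.hadoop.hive.ql.exec.UDF;
    import org.apache.hadoop.io.Text;

    // Registered in Hive with (names are made up):
    //   ADD JAR my-udfs.jar;
    //   CREATE TEMPORARY FUNCTION strip AS 'Strip';
    //   SELECT strip(line) FROM docs;
    public final class Strip extends UDF {
        private final Text result = new Text();

        public Text evaluate(Text input) {
            if (input == null) {
                return null;  // Hive passes NULLs through; handle them explicitly.
            }
            result.set(input.toString().trim());
            return result;
        }
    }
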