Benchmarking deep learning platforms with TensorFlow and Docker
The goal of the DEEP Hybrid DataCloud project is to prepare a new generation of e-Infrastructures that harness latest-generation technologies, supporting e.g. deep learning. In order to assess our solutions for these new e-Infrastructures, a set of benchmark tools is needed.
The aim of this project is to prepare a set of benchmarks in the form of Docker image(s) based on TensorFlow and a number of pre-existing TensorFlow scripts (e.g. ), to run them with various container tools in different environments, and to compare performance.
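As a rough sketch of how such a benchmark image could be packaged (the base image tag, the upstream benchmarks repository, and the default flags below are assumptions to be checked against the current upstream state, not a definitive recipe):

```dockerfile
# Sketch only: base image tag and repository layout are assumptions.
FROM tensorflow/tensorflow:latest-gpu

# Fetch the upstream TensorFlow benchmark scripts (tf_cnn_benchmarks)
RUN apt-get update && apt-get install -y --no-install-recommends git \
    && git clone --depth 1 https://github.com/tensorflow/benchmarks /benchmarks \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /benchmarks/scripts/tf_cnn_benchmarks

# Default: a synthetic-data CNN run; flags can be overridden at run time
CMD ["python", "tf_cnn_benchmarks.py", "--model=resnet50", "--batch_size=32"]
```

The same image can then be executed unchanged by Docker, udocker, or Singularity, which is what makes the cross-environment comparison meaningful.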
Tasks:
- get acquainted with Docker containers and best practices [4, 5].
- adapt existing Convolutional Neural Network (CNN) benchmarks and pack them into Docker image(s) such that they can be run on GPUs in the DEEP testbeds.
- run these benchmarks in an HPC-like environment by means of udocker and Singularity and on the DEEP testbeds, using synthetic and real data, and compare performance with the bare-metal case.
- (optional) extend the benchmarks to another neural network type (neural network code will be provided) and run the same experiments as for the CNN scripts.
- at the end of the Praktikum, write a short report with the results of the performance tests (similar to ).
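Comparing runs across udocker, Singularity, and bare metal largely comes down to extracting the throughput figure from each benchmark log. Assuming the logs contain a line like `total images/sec: 123.45` (as printed by tf_cnn_benchmarks; the exact format should be verified against your TensorFlow version), a minimal parser sketch:

```python
import re

# Assumption: logs contain a line like "total images/sec: 123.45";
# adjust the pattern if your benchmark's output format differs.
THROUGHPUT_RE = re.compile(r"total images/sec:\s*([0-9.]+)")


def extract_throughput(log_text: str) -> float:
    """Return the reported images/sec from a benchmark log."""
    match = THROUGHPUT_RE.search(log_text)
    if match is None:
        raise ValueError("no throughput line found in log")
    return float(match.group(1))


def relative_performance(container_log: str, baremetal_log: str) -> float:
    """Container throughput as a fraction of the bare-metal throughput."""
    return extract_throughput(container_log) / extract_throughput(baremetal_log)


if __name__ == "__main__":
    # Tiny illustrative logs with made-up numbers, for demonstration only
    udocker_log = "Step 100\ntotal images/sec: 190.00\n"
    metal_log = "Step 100\ntotal images/sec: 200.00\n"
    ratio = relative_performance(udocker_log, metal_log)
    print(f"container run reaches {ratio:.0%} of bare-metal throughput")
```

A table of such ratios per container tool, model, and data source (synthetic vs. real) is a natural skeleton for the final report.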
Prerequisites:
- programming experience in Python
- knowledge of how to install a program from source on Linux
- ideally, prior knowledge of containers and neural networks