Access to Resources

Latest revision as of 10:28, 29 July 2016

In order to get access to the LSDF, first read the Usage Policy.

Contact Jos van Wezel <jos.vanwezel@kit.edu> at SCC. We need the following information:

  • Name, email, institute
  • Other persons (name, email) involved in the intended activities who already have access to DIS Storage or Hadoop (i.e., do you belong to a group that is requesting access?)
  • Short description of the intended activities
  • Tentative timeline (start, end, any intermediate milestones?)

For Hadoop usage please state:

  • Expected nr. of jobs
  • Expected running time of an average job
  • Nr. of map-tasks (and CPUs) per job foreseen
  • Amount of RAM per job needed
  • Expected disk space usage in HDFS
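When estimating HDFS disk-space usage, keep in mind that HDFS stores each block multiple times (3-way replication is the HDFS default; the LSDF cluster may be configured differently), so the cluster footprint is the raw data size multiplied by the replication factor. A minimal sketch of the arithmetic, with placeholder figures rather than LSDF values:

```python
def hdfs_footprint_gb(raw_gb: float, replication_factor: int = 3) -> float:
    """Total HDFS disk space consumed by raw_gb of input data.

    HDFS writes every block replication_factor times, so the cluster
    footprint is the input size multiplied by that factor. The default
    of 3 is the stock HDFS setting, not necessarily the LSDF one.
    """
    return raw_gb * replication_factor

# Example: 500 GB of input data under 3-way replication
print(hdfs_footprint_gb(500))  # 1500 GB of cluster disk space
```

Stating the raw data size and the assumed replication factor separately in your request makes it easier to check the estimate.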