Access to Resources

Latest revision as of 10:28, 29 July 2016

To get access to the LSDF, first read the [[LSDF_Usage|Usage Policy]].

Contact Jos van Wezel <jos.vanwezel@kit.edu> at SCC. We need the following information:

* Name, email, institute
* Other persons (name, email) involved in the intended activities who already have access to DIS Storage or Hadoop (i.e., do you belong to a group that is requesting access?)
* Short description of the intended activities
* Tentative timeline (start, end, any intermediate milestones?)

For Hadoop usage please state:

* Expected number of jobs
* Expected running time of an average job
* Number of map tasks (and CPUs) foreseen per job
* Amount of RAM needed per job
* Expected disk space usage in HDFS

For use of virtual machines (currently via OpenNebula) please state:

* Expected number of VMs
* Number of CPUs per VM
* Disk space per VM
* RAM per VM
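When estimating the HDFS disk space figure for the Hadoop list above, keep in mind that HDFS stores every block several times (the stock Hadoop default replication factor is 3), so the raw data set size must be multiplied accordingly. A minimal sketch of such an estimate — all numbers here are placeholders for illustration, not LSDF defaults:

```python
# Rough HDFS capacity estimate for an access request.
# All values are example placeholders; adjust them to your own project.
raw_data_tb = 10   # size of the input data set in TB
replication = 3    # HDFS block replication factor (Hadoop's stock default)
overhead = 1.2     # ~20% headroom for intermediate and job output data

needed_tb = raw_data_tb * replication * overhead
print(f"Request about {needed_tb:.0f} TB of HDFS space")  # → Request about 36 TB of HDFS space
```

The replication factor actually configured on a given cluster may differ from the default, so it is worth confirming it with the administrators before sizing the request.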