Access to Resources
In order to get access to the LSDF, first read the [[LSDF_Usage|Usage Policy]].

Contact Jos van Wezel &lt;jos.vanwezel@kit.edu&gt; at SCC. We need the following information:
* Name, email, institute
* Other persons (name, email) involved in the intended activities who already have access to DIS Storage or Hadoop (i.e., do you belong to a group that requests access?)
* Short description of the intended activities
* Tentative timeline (start, end, any intermediate milestones?)

----

==== Request for Hadoop access ====
For Hadoop usage please state:

* Your SSH public key (please attach)
* Short description of the intended activities to be performed on the Hadoop infrastructure
* Requested resources:
** Expected nr. of jobs
** Expected running time of average job
** Nr. of map-tasks (and CPUs) per job foreseen
** Amount of RAM per job needed
** Expected disk space usage in HDFS
----

==== Request for Cloud resources ====
* Name, email, institute
* Other persons involved in the intended activities who already have access to OpenNebula resources (i.e., do you belong to a group that requests access?)
* Short description of the intended activities
* Tentative timeline (start, end, any intermediate milestones?)
* Requested resources:
** Nr. of VMs
** Nr. of CPUs per VM
** Disk space per VM
** RAM per VM
---- |
''Latest revision as of 10:28, 29 July 2016.''