Difference between revisions of "Data services"
==The services currently provided by LSDF==
* Storage, comprising two systems with 1.4 PB and 500 TB
* Cloud computing: OpenNebula cloud environment
* [[Hadoop data intensive computing framework]]
* [[Data Transfers]]
The Hadoop cluster consists of '''58 nodes''' with '''464 physical cores''' in total, each node having:
* 2 sockets Intel Xeon CPU E5520 @ 2.27 GHz, 4 cores each, hyperthreading active (16 logical cores total)
* 36 GB of RAM
* 2 TB of disk
* 1 GbE network connection
* OS Scientific Linux 5.5
** Linux kernel 2.6.18
plus '''2 head nodes''', each having:
* 2 sockets Intel Xeon CPU E5520 @ 2.27 GHz, 4 cores each, hyperthreading active (16 logical cores total)
* 96 GB of RAM
* 10 GbE network connection
* OS Scientific Linux 5.5
** Linux kernel 2.6.18
Note that all nodes are shared between the different Hadoop tasks and the OpenNebula virtual machines.
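The core counts quoted above can be checked with a short sketch; the figures come straight from the node list (2 sockets × 4 physical cores per worker node, hyperthreading doubling the logical count), and nothing here is LSDF-specific:

```python
# Worker-node specs as listed above.
nodes = 58
sockets_per_node = 2
cores_per_socket = 4   # physical cores per Xeon E5520
threads_per_core = 2   # hyperthreading active

physical_cores = nodes * sockets_per_node * cores_per_socket
logical_cores = physical_cores * threads_per_core

print(physical_cores)  # 464 physical cores, matching the total quoted above
print(logical_cores)   # 928 logical cores across the worker nodes
```

This confirms the '''464 physical cores''' figure: 58 × 2 × 4 = 464, or 928 logical cores with hyperthreading, i.e. the "16 cores total" seen per node by the operating system.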
Revision as of 19:42, 29 November 2012