https://wiki.scc.kit.edu/lsdf/api.php?action=feedcontributions&user=Jmeyer&feedformat=atomLsdf - User contributions [en]2024-03-29T10:26:52ZUser contributionsMediaWiki 1.31.16https://wiki.scc.kit.edu/lsdf/index.php?title=Standardisierte_Metadaten_f%C3%BCr_Virtuelle_Forschungsumgebung&diff=5295Standardisierte Metadaten für Virtuelle Forschungsumgebung2019-04-08T15:44:16Z<p>Jmeyer: Created page with "=Beschreibung= Interoperables Datenmanagement erfordert die genaue Beschreibung von Daten durch standardisierte Metadaten und den Gebrauch von kontrollierten Vokabularien. =..."</p>
<hr />
<div>=Description=<br />
Interoperable data management requires the precise description of data through standardized metadata and the use of controlled vocabularies.<br />
<br />
= Task =<br />
For the virtual research environment V-FOR-WaTer, the existing metadata concept is to be extended to support the metadata standard ISO 19115. Keyword lists from the NASA GCMD Science Keywords and the GEMET thesaurus are to be integrated and, where appropriate, supplemented with INSPIRE or GeoSciML keyword lists. The goal is compatibility with existing repositories such as the [http://dataservices.gfz-potsdam.de GFZ Data Services].<br />
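As a rough illustration of the target format, a GCMD science keyword can be embedded in a `gmd:descriptiveKeywords` block of ISO 19139 (the XML encoding of ISO 19115). The element names below follow the common gmd/gco encoding and should be checked against the schema actually used by the target repository:

```python
# Hedged sketch: building an ISO 19139 keyword block with the standard library.
import xml.etree.ElementTree as ET

GMD = "http://www.isotc211.org/2005/gmd"
GCO = "http://www.isotc211.org/2005/gco"
ET.register_namespace("gmd", GMD)
ET.register_namespace("gco", GCO)

def descriptive_keywords(keywords, thesaurus_title):
    """Return a gmd:descriptiveKeywords element for the given keywords."""
    dk = ET.Element(f"{{{GMD}}}descriptiveKeywords")
    md = ET.SubElement(dk, f"{{{GMD}}}MD_Keywords")
    for kw in keywords:
        k = ET.SubElement(md, f"{{{GMD}}}keyword")
        ET.SubElement(k, f"{{{GCO}}}CharacterString").text = kw
    # Reference the controlled vocabulary the keywords come from.
    th = ET.SubElement(md, f"{{{GMD}}}thesaurusName")
    ci = ET.SubElement(th, f"{{{GMD}}}CI_Citation")
    title = ET.SubElement(ci, f"{{{GMD}}}title")
    ET.SubElement(title, f"{{{GCO}}}CharacterString").text = thesaurus_title
    return dk

xml = ET.tostring(
    descriptive_keywords(
        ["EARTH SCIENCE > TERRESTRIAL HYDROSPHERE"],
        "NASA/GCMD Earth Science Keywords",
    ),
    encoding="unicode",
)
print(xml)
```

A complete record would additionally need the mandatory ISO 19115 identification and contact sections; this fragment only shows where the keyword lists would plug in.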
<br />
= Prerequisites =<br />
* Good knowledge of relational databases (PostgreSQL)<br />
* Web programming skills are an advantage<br />
<br />
= Contact =<br />
[mailto:joerg.meyer2@kit.edu joerg.meyer2@kit.edu]</div>Jmeyerhttps://wiki.scc.kit.edu/lsdf/index.php?title=Praktikum_Datenmanagement_und_Datenanalyse_am_SCC&diff=5294Praktikum Datenmanagement und Datenanalyse am SCC2019-04-08T15:23:20Z<p>Jmeyer: </p>
<hr />
<div>The topics listed below can be worked on as part of the [https://campus.studium.kit.edu/ev/qpEEaBn9RISRk0GmCnB9Qw practical course Data Management and Data Analysis] at SCC. More detailed information on the course is available at https://www.scc.kit.edu/personen/12890.php.<br />
<br />
Prior registration is required to take part in the practical course. Please send a short e-mail with your CV and transcript of records to Nico.Schlitter@kit.edu. The introductory session takes place on 24.04.2019 at 13:00 in room 218 of building 20.21, Campus Süd.<br />
<br />
The topic list below will be extended with further topics up until the introductory session. The topics come from various projects based at SCC:<br />
* LHC GridComputing Karlsruhe [http://www.gridka.de http://www.gridka.de]<br />
* Smart Data Innovation Lab [http://www.sdil.de http://www.sdil.de]<br />
* Large Scale Data Facility (LSDF) [https://www.scc.kit.edu/forschung/lsdf.php https://www.scc.kit.edu/forschung/lsdf.php]<br />
* Virtual research environment for water and terrestrial environmental research [http://www.vforwater.de/ http://www.vforwater.de/]<br />
<br />
Topic list:<br />
<!-- * [[ Performance Optimization of the dCache Storage System ]] --><br />
* [[ Migration eines RedMine Systems ]]<br />
* [[ Monitoring the availability of firmware updates ]]<br />
* [[ Analyse von Netzwerkpaketen im LHCONE Netzwerk mit Logstash/Elasticsearch/Kibana/Grafana ]]<br />
* [[ Datareduction/Downsampling in InfluxDB ]]<br />
* [[ Implementation of a Software Layer for the Control of Data Transfers to/from Magnetic Tape at GridKa ]]<br />
* [[ Large-scale visualisation/analysis platform for climate data ]]<br />
* [[ Standardisierte Metadaten für Virtuelle Forschungsumgebung ]]</div>Jmeyerhttps://wiki.scc.kit.edu/lsdf/index.php?title=Praktikum_Datenmanagement_und_Datenanalyse_am_SCC&diff=5293Praktikum Datenmanagement und Datenanalyse am SCC2019-04-08T15:22:35Z<p>Jmeyer: </p>
<hr />
<div>The topics listed below can be worked on as part of the [https://campus.studium.kit.edu/ev/qpEEaBn9RISRk0GmCnB9Qw practical course Data Management and Data Analysis] at SCC. More detailed information on the course is available at https://www.scc.kit.edu/personen/12890.php.<br />
<br />
Prior registration is required to take part in the practical course. Please send a short e-mail with your CV and transcript of records to Nico.Schlitter@kit.edu. The introductory session takes place on 24.04.2019 at 13:00 in room 218 of building 20.21, Campus Süd.<br />
<br />
The topic list below will be extended with further topics up until the introductory session. The topics come from various projects based at SCC:<br />
* LHC GridComputing Karlsruhe [http://www.gridka.de http://www.gridka.de]<br />
* Smart Data Innovation Lab [http://www.sdil.de http://www.sdil.de]<br />
* Large Scale Data Facility (LSDF) [https://www.scc.kit.edu/forschung/lsdf.php https://www.scc.kit.edu/forschung/lsdf.php]<br />
* Virtual research environment for water and terrestrial environmental research [http://www.vforwater.de/ http://www.vforwater.de/]<br />
<br />
Topic list:<br />
<!-- * [[ Performance Optimization of the dCache Storage System ]] --><br />
* [[ Migration eines RedMine Systems ]]<br />
* [[ Monitoring the availability of firmware updates ]]<br />
* [[ Analyse von Netzwerkpaketen im LHCONE Netzwerk mit Logstash/Elasticsearch/Kibana/Grafana ]]<br />
* [[ Datareduction/Downsampling in InfluxDB ]]<br />
* [[ Implementation of a Software Layer for the Control of Data Transfers to/from Magnetic Tape at GridKa ]]<br />
* [[ Large-scale visualisation/analysis platform for climate data ]]<br />
* [[ Standardisierte Metadaten für Virtuelle Forschungsumgebung ]]</div>Jmeyerhttps://wiki.scc.kit.edu/lsdf/index.php?title=Praktikum_Datenmanagement_und_Datenanalyse_am_SCC&diff=4996Praktikum Datenmanagement und Datenanalyse am SCC2017-09-18T13:16:24Z<p>Jmeyer: </p>
<hr />
<div>The topics listed below can be worked on as part of the [https://campus.kit.edu/live/campus/all/event.asp?objgguid=0xEA89B16CB258477A9A5815960FDC8621&from=vvz&gguid=0x5A0D93D802D24D0D902375A917BE2809&mode=own&tguid=0xA6438164539E6D4EAE28A3E92738DD3E practical course Data Management and Data Analysis] at SCC. The list may be extended with further topics until the registration deadline.<br />
<br />
Prior registration is required to take part in the practical course. Please send a short e-mail with your CV and transcript of records to the secretariat of Prof. Streit (Ms. A. Müller).<br />
<br />
<br />
== Topics from the Smart Data Innovation Lab [http://www.sdil.de http://www.sdil.de] ==<br />
[[Benchmarking the Performance of Redhat Enterprise Virtualization (RHEV) running on GPFS and GlusterFS]]<br />
<br />
[[Monitoring the availability of firmware updates]]<br />
<br />
[[Development of a plugin for the opensource server lifecycle management tool Foreman]]<br />
<br />
[[Developing a HTCondor Plugin for Jupyter Notebook]]<br />
<br />
== Topics from LHC GridComputing Karlsruhe [http://www.gridka.de http://www.gridka.de] and the Large Scale Data Facility (LSDF) [http://wiki.lsdf.kit.edu http://wiki.lsdf.kit.edu] ==<br />
<br />
[[Auswertung von Filesystemmetadaten mit Elasticsearch and Kibana]]<br />
<br />
[[Erfassung und Darstellung von Stromverbrauch im Rechenzentrum]]<br />
<br />
== Topics from the INDIGO DataCloud [https://www.indigo-datacloud.eu/ https://www.indigo-datacloud.eu/] ==<br />
[[ SSH Certification Authority as Plugin for WaTTS ]]<br />
<br />
[[ Metadatamining und -analyse von Simulationsläufen ]]</div>Jmeyerhttps://wiki.scc.kit.edu/lsdf/index.php?title=Praktikum_Datenmanagement_und_Datenanalyse_am_SCC&diff=4862Praktikum Datenmanagement und Datenanalyse am SCC2017-05-17T09:11:44Z<p>Jmeyer: /* Themen aus dem Data Life Cycle Lab Climatology https://www.helmholtz-lsdma.de/climatology.php */</p>
<hr />
<div>The topics listed below can be worked on as part of the [https://campus.kit.edu/live/campus/all/event.asp?gguid=0x5AB3639106537C43AB5F8B1F2ACAB090&error=readOnly&tguid=0x14A6844C9F6BDE43A5E4A0DB65A76F83&lang=de practical course Data Management and Data Analysis] at SCC. The list may be extended with further topics until the registration deadline.<br />
<br />
Prior registration is required to take part in the practical course. Please send a short e-mail with your CV and transcript of records to the secretariat of Prof. Streit (Ms. A. Müller).<br />
<br />
<br />
== Topics from the Smart Data Innovation Lab [http://www.sdil.de http://www.sdil.de] ==<br />
[[Benchmarking the Performance of Redhat Enterprise Virtualization (RHEV) running on GPFS and GlusterFS]]<br />
<br />
[[Monitoring the availability of firmware updates]]<br />
<br />
[[Development of a plugin for the opensource server lifecycle management tool Foreman]]<br />
<br />
[[Developing a HTCondor Plugin for Jupyter Notebook]]<br />
<br />
== Topics from LHC GridComputing Karlsruhe [http://www.gridka.de http://www.gridka.de] and the Large Scale Data Facility (LSDF) [http://wiki.lsdf.kit.edu http://wiki.lsdf.kit.edu] ==<br />
<br />
[[Auswertung von Filesystemmetadaten mit Elasticsearch and Kibana]]<br />
<br />
[[Erfassung und Darstellung von Stromverbrauch im Rechenzentrum]]<br />
<br />
== Topics from the Data Life Cycle Lab Climatology [https://www.helmholtz-lsdma.de/climatology.php https://www.helmholtz-lsdma.de/climatology.php] ==<br />
<br />
[[A_Scaleable_and_Extensible_Online_Platform_for_Spatial_IT]]<br />
<br />
[[Access token basierte Authentifizierung in virtueller Forschungsumgebung]]<br />
<br />
[[Open-sourcing an In-house Software Project]]<br />
<br />
== Topics from the INDIGO DataCloud [https://www.indigo-datacloud.eu/ https://www.indigo-datacloud.eu/] ==<br />
[[ SSH Certification Authority as Plugin for WaTTS ]]<br />
<br />
[[ Metadatamining und -analyse von Simulationsläufen ]]</div>Jmeyerhttps://wiki.scc.kit.edu/lsdf/index.php?title=Studentische_Arbeiten_am_SCC&diff=4860Studentische Arbeiten am SCC2017-05-17T09:11:17Z<p>Jmeyer: </p>
<hr />
<div>=== Student assistants (HiWis) ===<br />
* [[HiWi_Stellen_am_SCC | HiWi positions]]<br />
<br />
=== Theses ===<br />
Bachelor's theses<br />
<br />
* [[Distributed Volunteer Computing for scientific Simulations]]<br />
* [[Adding a UFTP endpoint to HPSS]]<br />
* [[Market-based cloud resource allocation]]<br />
* [[JSON-Schema-Extractor aus Quellcode]]<br />
<br />
Master's theses<br />
<br />
* [[Module to use HPSS as storage backend for Bareos]]<br />
* [[Entwicklung eines WebPortal fuer Mess- und Simulationsdaten aus der Wissenschaft]]<br />
* [[Fast fixity checking with rsync]]<br />
* [[Development of a simulationmodel for the estimation dataloss in digital archives]]<br />
* [[Graphical Interface to the GPFS policy engine]]<br />
* [[Kollektive Summe per Omega-Netzwerk]]<br />
* [[Globaler Informationsaustausch ohne Kollektive Operatoren]]<br />
<br />
=== Practical courses ===<br />
* [[PSE_am_SCC | Praxis der Software-Entwicklung (PSE)]]<br />
* [[Praktikum_Datenmanagement_und_Datenanalyse_am_SCC | Data Management and Data Analysis]]<br />
<br />
Template: [[Thesis-Template]]<br />
<br />
Note: Incomplete Articles were moved here: [[Studentische_Arbeiten_am_SCC_ueberarbeiten]]</div>Jmeyerhttps://wiki.scc.kit.edu/lsdf/index.php?title=Studentische_Arbeiten_am_SCC&diff=4859Studentische Arbeiten am SCC2017-05-17T09:10:09Z<p>Jmeyer: </p>
<hr />
<div>=== Student assistants (HiWis) ===<br />
* [[HiWi_Stellen_am_SCC | HiWi positions]]<br />
<br />
=== Theses ===<br />
Bachelor's theses<br />
<br />
* [[Distributed Volunteer Computing for scientific Simulations]]<br />
* [[Adding a UFTP endpoint to HPSS]]<br />
* [[Market-based cloud resource allocation]]<br />
* [[JSON-Schema-Extractor aus Quellcode]]<br />
<br />
Master's theses<br />
<br />
* [[Module to use HPSS as storage backend for Bareos]]<br />
* [[Entwicklung eines WebPortal fuer Mess- und Simulationsdaten aus der Wissenschaft]]<br />
* [[Fast fixity checking with rsync]]<br />
* [[Development of a simulationmodel for the estimation dataloss in digital archives]]<br />
* [[Graphical Interface to the GPFS policy engine]]<br />
* [[Kollektive Summe per Omega-Netzwerk]]<br />
* [[Globaler Informationsaustausch ohne Kollektive Operatoren]]<br />
<br />
=== Practical courses ===<br />
* [[PSE_am_SCC | Praxis der Software-Entwicklung (PSE)]]<br />
* [[Praktikum_Datenmanagement_und_Datenanalyse_am_SCC | Data Management and Data Analysis]]<br />
* [[Open-sourcing an In-house Software Project]]<br />
Template: [[Thesis-Template]]<br />
<br />
Note: Incomplete Articles were moved here: [[Studentische_Arbeiten_am_SCC_ueberarbeiten]]</div>Jmeyerhttps://wiki.scc.kit.edu/lsdf/index.php?title=Access_token_basierte_Authentifizierung_in_virtueller_Forschungsumgebung&diff=4855Access token basierte Authentifizierung in virtueller Forschungsumgebung2017-05-17T08:30:04Z<p>Jmeyer: </p>
<hr />
<div>= Description =<br />
<br />
As part of the state-funded project VForWaTer [1], a token-based authentication and authorization infrastructure is to be built. Users who have logged in to a web portal (e.g. via LDAP) should subsequently be able to access distributed resources (data, web processing services, web services) via grid proxies. Access rights should be granted in a very fine-grained way and be manageable through the portal.<br />
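As a starting point, the portal-issued access tokens with fine-grained rights described above can be prototyped with a small in-memory token store. All names and the (resource, action) rights model below are illustrative assumptions, not the project's actual API:

```python
# Minimal sketch of a token store: opaque UUID bearer tokens with
# per-resource access rights and an expiry time. Illustrative only.
import time
import uuid

class TokenStore:
    """Issues opaque UUID tokens and checks fine-grained access rights."""

    def __init__(self):
        self._tokens = {}  # token -> (user, rights, expiry timestamp)

    def issue(self, user, rights, ttl=3600):
        token = str(uuid.uuid4())
        self._tokens[token] = (user, set(rights), time.time() + ttl)
        return token

    def allowed(self, token, resource, action):
        entry = self._tokens.get(token)
        if entry is None:
            return False
        user, rights, expiry = entry
        if time.time() > expiry:  # expired tokens are dropped
            del self._tokens[token]
            return False
        return (resource, action) in rights

store = TokenStore()
t = store.issue("alice", [("dataset/42", "read")])
print(store.allowed(t, "dataset/42", "read"))   # True
print(store.allowed(t, "dataset/42", "write"))  # False
```

A real deployment would persist tokens (e.g. in the portal's database) and integrate validation into the services behind the proxy.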
<br />
<br />
= Tasks =<br />
* Compile a list of requirements<br />
* Familiarize yourself with the software environment (Django, web processing services, grid proxies)<br />
* Evaluate the use of VOMS servers [2]<br />
* Create user profiles for granting access rights<br />
<br />
= Useful qualifications =<br />
* Python programming<br />
* Database fundamentals<br />
<br />
= References =<br />
: [1] [http://vforwater.de http://vforwater.de]<br />
: [2] [https://en.wikipedia.org/wiki/VOMS https://en.wikipedia.org/wiki/VOMS]<br />
<br />
= Contact =<br />
[mailto:joerg.meyer2@kit.edu Jörg Meyer]</div>Jmeyerhttps://wiki.scc.kit.edu/lsdf/index.php?title=Access_token_basierte_Authentifizierung_in_virtueller_Forschungsumgebung&diff=4854Access token basierte Authentifizierung in virtueller Forschungsumgebung2017-05-17T08:29:14Z<p>Jmeyer: </p>
<hr />
<div>= Description =<br />
<br />
As part of the state-funded project VForWaTer [1], a token-based authentication and authorization infrastructure is to be built. Users who have logged in to a web portal (e.g. via LDAP) should subsequently be able to access distributed resources (data, web processing services, web services) via grid proxies. Access rights should be granted in a very fine-grained way and be manageable through the portal.<br />
<br />
<br />
= Tasks =<br />
* Compile a list of requirements<br />
* Familiarize yourself with the software environment (Django, web processing services, grid proxies)<br />
* Evaluate the use of VOMS servers [2]<br />
* Create user profiles for granting access rights<br />
<br />
= Useful qualifications =<br />
* Python programming<br />
* Database fundamentals<br />
<br />
= References =<br />
: [1] [http://vforwater.de http://vforwater.de]<br />
: [2] [https://en.wikipedia.org/wiki/VOMS https://en.wikipedia.org/wiki/VOMS]<br />
<br />
= Contact =<br />
[mailto:joerg.meyer2@kit.edu Jörg Meyer]</div>Jmeyerhttps://wiki.scc.kit.edu/lsdf/index.php?title=Access_token_basierte_Authentifizierung_in_virtueller_Forschungsumgebung&diff=4769Access token basierte Authentifizierung in virtueller Forschungsumgebung2017-03-28T09:07:35Z<p>Jmeyer: </p>
<hr />
<div>= Description =<br />
<br />
As part of the state-funded project VForWaTer [0], a token-based authentication and authorization infrastructure is to be built. Users who have logged in to a web portal, e.g. via LDAP, should subsequently be able to access distributed resources (data, web processing services, web services). This is to be enabled by a proxy service that issues access tokens (UUIDs, macaroons [2]).<br />
<br />
Twitcher [1] can serve as a model for such a service. Twitcher is implemented in Python, but works with the Pyramid web framework rather than with Django, which we use.<br />
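The macaroon idea mentioned above can be illustrated in a few lines: each caveat (restriction) is appended by chaining an HMAC over the previous signature, so a token can be attenuated without contacting the issuing server. This is a toy sketch for intuition only, not the actual macaroons library or twitcher's implementation:

```python
# Toy macaroon-style token: caveats are bound to the token by HMAC chaining.
import hashlib
import hmac

def _chain(sig: bytes, msg: str) -> bytes:
    return hmac.new(sig, msg.encode(), hashlib.sha256).digest()

def mint(root_key: bytes, identifier: str):
    # Initial signature binds the identifier to the server's root key.
    return [identifier], _chain(root_key, identifier)

def add_caveat(caveats, sig, caveat: str):
    # Anyone holding the token can add restrictions by re-signing.
    return caveats + [caveat], _chain(sig, caveat)

def verify(root_key: bytes, caveats, sig) -> bool:
    # Recompute the HMAC chain from the root key and compare.
    expected = _chain(root_key, caveats[0])
    for c in caveats[1:]:
        expected = _chain(expected, c)
    return hmac.compare_digest(expected, sig)

key = b"server-secret"
caveats, sig = mint(key, "user = alice")
caveats, sig = add_caveat(caveats, sig, "action = read")
print(verify(key, caveats, sig))  # True
```

Real macaroons additionally encode caveat semantics (time limits, third-party caveats); only the server holding the root key can verify a token.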
<br />
<br />
= Tasks =<br />
* Compile a list of requirements<br />
* Familiarize yourself with the software environment (Django, web processing services)<br />
* Analyze twitcher [1] (Python prototype for Pyramid with Nginx and MongoDB)<br />
* Develop a corresponding solution for Django (as well as Apache + PostgreSQL)<br />
<br />
= Useful qualifications =<br />
* Python programming<br />
* Database fundamentals<br />
<br />
= References =<br />
: [0] [http://vforwater.de http://vforwater.de]<br />
: [1] [https://github.com/bird-house/twitcher https://github.com/bird-house/twitcher]<br />
: [2] [https://blog.bren2010.io/2014/12/04/macaroons.html https://blog.bren2010.io/2014/12/04/macaroons.html]<br />
<br />
= Contact =<br />
[mailto:joerg.meyer2@kit.edu Jörg Meyer]</div>Jmeyerhttps://wiki.scc.kit.edu/lsdf/index.php?title=Access_token_basierte_Authentifizierung_in_virtueller_Forschungsumgebung&diff=4768Access token basierte Authentifizierung in virtueller Forschungsumgebung2017-03-28T09:04:19Z<p>Jmeyer: </p>
<hr />
<div>= Description =<br />
<br />
As part of the state-funded project VForWaTer [0], a token-based authentication and authorization infrastructure is to be built. Users who have logged in to a web portal, e.g. via LDAP, should subsequently be able to access distributed resources (data, web processing services, web services). This is to be enabled by a proxy service that issues access tokens (UUIDs, macaroons [2]).<br />
<br />
Twitcher [1] can serve as a model for such a service. Twitcher is implemented in Python, but works with the Pyramid web framework rather than with Django, which we use.<br />
<br />
<br />
= Tasks =<br />
* Compile a list of requirements<br />
* Familiarize yourself with the software environment (Django, web processing services)<br />
* Analyze twitcher [1] (Python prototype for Pyramid with Nginx and MongoDB)<br />
* Develop a corresponding solution for Django (as well as Apache + PostgreSQL)<br />
<br />
= Useful qualifications =<br />
* Python programming<br />
* Database fundamentals<br />
<br />
= References =<br />
: [0] [http://vforwater.de http://vforwater.de]<br />
: [1] [https://github.com/bird-house/twitcher https://github.com/bird-house/twitcher]<br />
: [2] [https://blog.bren2010.io/2014/12/04/macaroons.html https://blog.bren2010.io/2014/12/04/macaroons.html]<br />
<br />
= Contact =<br />
[mailto:joerg.meyer2@kit.edu Jörg Meyer]</div>Jmeyerhttps://wiki.scc.kit.edu/lsdf/index.php?title=Access_token_basierte_Authentifizierung_in_virtueller_Forschungsumgebung&diff=4767Access token basierte Authentifizierung in virtueller Forschungsumgebung2017-03-28T08:59:53Z<p>Jmeyer: /* References */</p>
<hr />
<div>= Description =<br />
<br />
As part of the state-funded project VForWaTer [0], a token-based authentication infrastructure is to be built. Users who have logged in to a web portal, e.g. via LDAP, should subsequently be able to access distributed resources (data, web processing services, web services). This is to be enabled by a proxy service that issues access tokens (UUIDs, macaroons).<br />
<br />
Twitcher [1] can serve as a model for such a service. Twitcher is implemented in Python, but works with the Pyramid web framework rather than with Django, which we use.<br />
<br />
<br />
= Tasks =<br />
* Compile a list of requirements<br />
* Familiarize yourself with the software environment (Django, web processing services)<br />
* Analyze twitcher [1] (Python prototype for Pyramid with Nginx and MongoDB)<br />
* Develop a corresponding solution for Django (as well as Apache + PostgreSQL)<br />
<br />
= Useful qualifications =<br />
* Python programming<br />
* Database fundamentals<br />
<br />
= References =<br />
: [0] [http://vforwater.de http://vforwater.de]<br />
: [1] [https://github.com/bird-house/twitcher https://github.com/bird-house/twitcher]<br />
<br />
= Contact =<br />
[mailto:joerg.meyer2@kit.edu Jörg Meyer]</div>Jmeyerhttps://wiki.scc.kit.edu/lsdf/index.php?title=Access_token_basierte_Authentifizierung_in_virtueller_Forschungsumgebung&diff=4766Access token basierte Authentifizierung in virtueller Forschungsumgebung2017-03-28T08:59:03Z<p>Jmeyer: Created page with "= Description = Im Rahmen des Landesprojektes VForWaTer [0] soll eine token-basierte Authentifizierungs-Infrastruktur erstellt werden. Nutzer, die sich z.B. per Ldap an einem..."</p>
<hr />
<div>= Description =<br />
<br />
As part of the state-funded project VForWaTer [0], a token-based authentication infrastructure is to be built. Users who have logged in to a web portal, e.g. via LDAP, should subsequently be able to access distributed resources (data, web processing services, web services). This is to be enabled by a proxy service that issues access tokens (UUIDs, macaroons).<br />
<br />
Twitcher [1] can serve as a model for such a service. Twitcher is implemented in Python, but works with the Pyramid web framework rather than with Django, which we use.<br />
<br />
<br />
= Tasks =<br />
* Compile a list of requirements<br />
* Familiarize yourself with the software environment (Django, web processing services)<br />
* Analyze twitcher [1] (Python prototype for Pyramid with Nginx and MongoDB)<br />
* Develop a corresponding solution for Django (as well as Apache + PostgreSQL)<br />
<br />
= Useful qualifications =<br />
* Python programming<br />
* Database fundamentals<br />
<br />
= References =<br />
: [0] [http://vforwater.de http://vforwater.de]<br />
: [1] [https://github.com/bird-house/twitcher https://github.com/bird-house/twitcher]<br />
<br />
= Contact =<br />
[mailto:joerg.meyer2@kit.edu Jörg Meyer]</div>Jmeyerhttps://wiki.scc.kit.edu/lsdf/index.php?title=Praktikum_Datenmanagement_und_Datenanalyse_am_SCC&diff=4765Praktikum Datenmanagement und Datenanalyse am SCC2017-03-28T08:26:07Z<p>Jmeyer: </p>
<hr />
<div>The topics listed below can be worked on as part of the [https://campus.kit.edu/live/campus/all/event.asp?gguid=0x5AB3639106537C43AB5F8B1F2ACAB090&error=readOnly&tguid=0x14A6844C9F6BDE43A5E4A0DB65A76F83&lang=de practical course Data Management and Data Analysis] at SCC. The list may be extended with further topics until the registration deadline.<br />
<br />
Prior registration is required to take part in the practical course. Please send an application with CV and transcript of records by e-mail to the secretariat of Prof. Streit (Ms. A. Müller) by 03.04.2017, 23:59:59.<br />
<br />
<br />
== Topics from the Smart Data Innovation Lab [http://www.sdil.de http://www.sdil.de] ==<br />
[[Benchmarking the Performance of Redhat Enterprise Virtualization (RHEV) running on GPFS and GlusterFS]]<br />
<br />
[[Monitoring the availability of firmware updates]]<br />
<br />
[[Development of a plugin for the opensource server lifecycle management tool Foreman]]<br />
<br />
== Topics from LHC GridComputing Karlsruhe [http://www.gridka.de http://www.gridka.de] and the Large Scale Data Facility (LSDF) [http://wiki.lsdf.kit.edu http://wiki.lsdf.kit.edu] ==<br />
<br />
[[Auswertung von Filesystemmetadaten mit Elasticsearch and Kibana]]<br />
<br />
[[Erfassung und Darstellung von Stromverbrauch im Rechenzentrum]]<br />
<br />
== Topics from the Data Life Cycle Lab Climatology [https://www.helmholtz-lsdma.de/climatology.php https://www.helmholtz-lsdma.de/climatology.php] ==<br />
<br />
[[A_Scaleable_and_Extensible_Online_Platform_for_Spatial_IT]]<br />
<br />
[[Access token basierte Authentifizierung in virtueller Forschungsumgebung]]<br />
<br />
== Topics from the INDIGO DataCloud [https://www.indigo-datacloud.eu/ https://www.indigo-datacloud.eu/] ==<br />
[[ SSH Certification Authority as Plugin for WaTTS ]]</div>Jmeyerhttps://wiki.scc.kit.edu/lsdf/index.php?title=Praktikum_Datenmanagement_und_Datenanalyse_am_SCC&diff=4764Praktikum Datenmanagement und Datenanalyse am SCC2017-03-28T08:25:23Z<p>Jmeyer: </p>
<hr />
<div>The topics listed below can be worked on as part of the [https://campus.kit.edu/live/campus/all/event.asp?gguid=0x5AB3639106537C43AB5F8B1F2ACAB090&error=readOnly&tguid=0x14A6844C9F6BDE43A5E4A0DB65A76F83&lang=de practical course Data Management and Data Analysis] at SCC. The list may be extended with further topics until the registration deadline.<br />
<br />
Prior registration is required to take part in the practical course. Please send an application with CV and transcript of records by e-mail to the secretariat of Prof. Streit (Ms. A. Müller) by 03.04.2017, 23:59:59.<br />
<br />
<br />
== Topics from the Smart Data Innovation Lab [http://www.sdil.de http://www.sdil.de] ==<br />
[[Benchmarking the Performance of Redhat Enterprise Virtualization (RHEV) running on GPFS and GlusterFS]]<br />
<br />
[[Monitoring the availability of firmware updates]]<br />
<br />
[[Development of a plugin for the opensource server lifecycle management tool Foreman]]<br />
<br />
== Topics from LHC GridComputing Karlsruhe [http://www.gridka.de http://www.gridka.de] and the Large Scale Data Facility (LSDF) [http://wiki.lsdf.kit.edu http://wiki.lsdf.kit.edu] ==<br />
<br />
[[Auswertung von Filesystemmetadaten mit Elasticsearch and Kibana]]<br />
<br />
[[Erfassung und Darstellung von Stromverbrauch im Rechenzentrum]]<br />
<br />
== Topics from the Data Life Cycle Lab Climatology [https://www.helmholtz-lsdma.de/climatology.php https://www.helmholtz-lsdma.de/climatology.php] ==<br />
<br />
[[A_Scaleable_and_Extensible_Online_Platform_for_Spatial_IT]]<br />
[[Access token basierte Authentifizierung in virtueller Forschungsumgebung]]<br />
<br />
== Topics from the INDIGO DataCloud [https://www.indigo-datacloud.eu/ https://www.indigo-datacloud.eu/] ==<br />
[[ SSH Certification Authority as Plugin for WaTTS ]]</div>Jmeyerhttps://wiki.scc.kit.edu/lsdf/index.php?title=Studentische_Arbeiten_am_SCC&diff=4678Studentische Arbeiten am SCC2017-02-06T08:00:37Z<p>Jmeyer: </p>
<hr />
<div>=== Student assistants (HiWis) ===<br />
* [[HiWi_Stellen_am_SCC | HiWi positions]]<br />
<br />
=== Theses ===<br />
Bachelor's theses<br />
<br />
* [[Distributed Volunteer Computing for scientific Simulations]]<br />
* [[Adding a UFTP endpoint to HPSS]]<br />
* [[Open-sourcing an In-house Software Project]]<br />
* [[Market-based cloud resource allocation]]<br />
<br />
Master's theses<br />
<br />
* [[Module to use HPSS as storage backend for Bareos]]<br />
* [[Entwicklung eines WebPortal fuer Mess- und Simulationsdaten aus der Wissenschaft]]<br />
* [[Fast fixity checking with rsync]]<br />
* [[Development of a simulationmodel for the estimation dataloss in digital archives]]<br />
* [[Graphical Interface to the GPFS policy engine]]<br />
<br />
=== Practical courses ===<br />
* [[PSE_am_SCC | Praxis der Software-Entwicklung (PSE)]]<br />
* [[Praktikum_Datenmanagement_und_Datenanalyse_am_SCC | Data Management and Data Analysis]]<br />
<br />
Template: [[Thesis-Template]]<br />
<br />
Note: Incomplete Articles were moved here: [[Studentische_Arbeiten_am_SCC_ueberarbeiten]]</div>Jmeyerhttps://wiki.scc.kit.edu/lsdf/index.php?title=Design_and_Deployment_of_a_Sharded_Cluster_for_the_KASCADE_Cosmic-ray_Data_Centre&diff=4677Design and Deployment of a Sharded Cluster for the KASCADE Cosmic-ray Data Centre2017-02-06T08:00:04Z<p>Jmeyer: Replaced content with "{{db|1=topic exists no longer}}"</p>
<hr />
<div>{{db|1=topic exists no longer}}</div>Jmeyerhttps://wiki.scc.kit.edu/lsdf/index.php?title=Optimisation_of_MongoDB_Data_Structures_for_KASCADE_Cosmic-ray_Data_Centre&diff=4676Optimisation of MongoDB Data Structures for KASCADE Cosmic-ray Data Centre2017-02-06T07:58:45Z<p>Jmeyer: Replaced content with "{{db|1=topic exists no longer}}"</p>
<hr />
<div>{{db|1=topic exists no longer}}</div>Jmeyerhttps://wiki.scc.kit.edu/lsdf/index.php?title=MongoDB_as_an_In-memory_Sharded_Database&diff=4675MongoDB as an In-memory Sharded Database2017-02-06T07:57:33Z<p>Jmeyer: Replaced content with "{{db|1=topic exists no longer}}"</p>
<hr />
<div>{{db|1=topic exists no longer}}</div>Jmeyerhttps://wiki.scc.kit.edu/lsdf/index.php?title=MongoDB_as_an_In-memory_Sharded_Database&diff=4674MongoDB as an In-memory Sharded Database2017-02-06T07:56:53Z<p>Jmeyer: </p>
<hr />
<div>{{db|1=topic exists no longer}}<br />
<br />
[[Studentische_Arbeiten_am_SCC|Zurück zur Themenliste]]<br />
<br />
= Description =<br />
The Data Life-Cycle Lab Earth and Environment at KIT manages data from instruments set up on Earth-observing climate satellites, such as Envisat MIPAS. The corresponding data is stored in the NoSQL database MongoDB. This large dataset should be handled in a distributed database cluster, using sharding - a horizontal-partitioning solution available in MongoDB.<br />
<br />
A typical MongoDB instance makes heavy use of available system memory but ultimately relies on underlying persistent storage. At the same time, our database cluster offers only low-performance persistent storage, which is unsuitable for sustained load. It is therefore required to run MongoDB on our cluster as an ''in-memory database''.<br />
<br />
= Task =<br />
Your task will be to research the optimal configuration of MongoDB for in-memory operation, to develop tools for the initial population of cluster nodes as well as for periodic commits of their data to persistent storage, and finally to evaluate the performance of the system.<br />
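For intuition on how sharding spreads such a dataset, the following self-contained sketch mimics hashed shard-key routing by hashing document keys onto a fixed shard list. The shard names and key format are made up for illustration; MongoDB's real hashed sharding uses its own hash function and chunk ranges:

```python
# Illustrative sketch (not the cluster's actual configuration): routing
# documents to shards by a stable hash of the shard-key value.
import hashlib

SHARDS = ["shard0", "shard1", "shard2"]

def shard_for(key: str) -> str:
    # Stable digest of the shard-key value, mapped onto the shard list.
    digest = hashlib.md5(key.encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

# Hypothetical document keys for 1000 satellite data records.
docs = [f"mipas-orbit-{i}" for i in range(1000)]
counts = {s: 0 for s in SHARDS}
for d in docs:
    counts[shard_for(d)] += 1
print(counts)  # roughly even spread across the three shards
```

The point is that a well-chosen (e.g. hashed) shard key distributes both storage and query load evenly, which matters even more when each shard must fit its working set in memory.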
<br />
= Requirements =<br />
* basic administration of Linux/Unix systems<br />
* good working knowledge of Python, Node.js/JavaScript, or another scripting language capable of interfacing with MongoDB<br />
* familiarity with MongoDB and/or sharding would be a plus<br />
<br />
= Contact =<br />
Marek.Szuba@kit.edu - 29178</div>Jmeyerhttps://wiki.scc.kit.edu/lsdf/index.php?title=Developing_python_modules_for_a_web_portal_for_processing_geodata&diff=4455Developing python modules for a web portal for processing geodata2016-10-18T08:36:03Z<p>Jmeyer: /* Description */</p>
<hr />
<div>= Description =<br />
Online tools, developed and shared by a scientific community, can improve scientific workflows and output. <br />
Here you can contribute to the development of a virtual research environment for hydrological questions.<br />
<br />
For more information, see our project webpage http://vforwater.de<br />
<br />
= Requirements =<br />
Basic knowledge in Python is helpful.<br />
<br />
= Contact =<br />
[mailto:joerg.meyer2@kit.edu,marcus.strobl@kit.edu Joerg.Meyer2@kit.edu, Marcus.Strobl@kit.edu]</div>Jmeyerhttps://wiki.scc.kit.edu/lsdf/index.php?title=Developing_python_modules_for_a_web_portal_for_processing_geodata&diff=4454Developing python modules for a web portal for processing geodata2016-10-18T08:35:11Z<p>Jmeyer: </p>
<hr />
<div>= Description =<br />
Online tools, developed and shared from a scientific community, can improve the workflow and scientific output. <br />
Here you can contribute to the development of a virtual research environment for hydrological questions.<br />
For more information, see our project webpage http://vforwater.de<br />
<br />
= Requirements =<br />
Basic knowledge in Python is helpful.<br />
<br />
= Contact =<br />
[mailto:joerg.meyer2@kit.edu,marcus.strobl@kit.edu Joerg.Meyer2@kit.edu, Marcus.Strobl@kit.edu]</div>Jmeyerhttps://wiki.scc.kit.edu/lsdf/index.php?title=Developing_python_modules_for_a_web_portal_for_processing_geodata&diff=4453Developing python modules for a web portal for processing geodata2016-10-18T08:34:09Z<p>Jmeyer: /* Contact */</p>
<hr />
<div>= Description =<br />
Online tools, developed and shared by a scientific community, can improve scientific workflows and output. Here you can contribute to the development of a virtual research environment for hydrological questions.<br />
For more information, see our project webpage http://vforwater.de<br />
<br />
= Contact =<br />
[mailto:joerg.meyer2@kit.edu,marcus.strobl@kit.edu Joerg.Meyer2@kit.edu, Marcus.Strobl@kit.edu]</div>Jmeyerhttps://wiki.scc.kit.edu/lsdf/index.php?title=Developing_python_modules_for_a_web_portal_for_processing_geodata&diff=4452Developing python modules for a web portal for processing geodata2016-10-18T08:31:03Z<p>Jmeyer: /* Description */</p>
<hr />
<div>= Description =<br />
Online tools, developed and shared by a scientific community, can improve workflows and scientific output. Here you can contribute to the development of a virtual research environment for hydrological questions.<br />
For more information, see our project webpage http://vforwater.de<br />
<br />
= Contact =<br />
[mailto:joerg.meyer2@kit.edu Joerg.Meyer2@kit.edu]</div>Jmeyerhttps://wiki.scc.kit.edu/lsdf/index.php?title=hidden:IRODS&diff=1987hidden:IRODS2013-10-30T11:08:56Z<p>Jmeyer: </p>
<hr />
<div>==iRODS (irods-1)==<br />
iRODS is installed on irods-1.lsdf.kit.edu:<br />
* installation directory: /data/irods/iRODS/<br />
* installation logs in /data/irods/iRODS/installLogs/<br />
<br />
* configuration: /data/irods/iRODS/config/irods.config<br />
<br />
==Machine (irods-1)==<br />
* host: irods-1.lsdf.kit.edu, alias: irods-lsdma.lsdf.kit.edu, ip: 141.52.208.10<br />
* location: LSDF-rack 8, height unit 18<br />
* OS: SL 6.2<br />
* System: IBM x3550 M2<br />
* Serial Number: KD03C2T<br />
==Machine (irods-2)==<br />
* host: irods-2.lsdf.kit.edu, eudat.lsdf.kit.edu<br />
* location: rack 21, height unit 35<br />
* DELL PowerEdge R510<br />
* warranty until 31/12/2014: http://www.dell.com/support/troubleshooting/us/en/04/Servicetag/FG5TW4J<br />
* purpose: EUDAT: full safe replication for ENES<br />
* [[hidden:irods installation on irods-2 | irods installation on irods-2]]<br />
<br />
==Backup (irods-1)==<br />
* disk backup on eu-stor.fzk.de<br />
* /etc/cron.daily/postgres_bak.sh (credentials in /root/.pgpass):<br />
#!/bin/bash<br />
<br />
save_place=/gpfsibm/irods/postgresdump_irods-1<br />
PG_DUMPALL=/data/postgres/pgsql/bin/pg_dumpall<br />
#-----------------------------<br />
<br />
test -d $save_place || mkdir -p $save_place<br />
chmod 700 $save_place $0<br />
<br />
ddir=$(date +%Y)/$(date +%m)<br />
dfile=$(date +%Y-%m-%d).dump.bz2<br />
dumpdir=$save_place/$ddir<br />
test -d $dumpdir || mkdir -p $dumpdir<br />
<br />
$PG_DUMPALL -U irods |bzip2 >$dumpdir/$dfile<br />
<br />
# keep last 7 dumps:<br />
find $save_place -type f -mtime +7 -exec rm {} \;<br />
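A restore from one of these dumps can be sketched as follows. This is a self-contained illustration: it creates a fake dump under /tmp so the path-finding logic can be tried anywhere, and the actual restore line (commented out) must be run on the database host with the paths from the script above. Since `pg_dumpall` produces plain SQL, it is restored via `psql`, not `pg_restore`.

```shell
# Locate the newest dump written by postgres_bak.sh. save_place is a
# stand-in here; on irods-1 it is /gpfsibm/irods/postgresdump_irods-1.
save_place=/tmp/postgresdump_demo
mkdir -p "$save_place/2013/10"
touch "$save_place/2013/10/2013-10-28.dump.bz2"   # fake dump for illustration

latest=$(find "$save_place" -type f -name '*.dump.bz2' | sort | tail -n 1)
echo "latest dump: $latest"

# On the real host:
# bunzip2 -c "$latest" | /data/postgres/pgsql/bin/psql -U irods -d postgres
```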
<br />
==TO DO==<br />
* Need second disk to create RAID1 for system!<br />
<br />
==links==<br />
https://redmine.dkrz.de/collaboration/projects/lsdma/wiki</div>Jmeyerhttps://wiki.scc.kit.edu/lsdf/index.php?title=hidden:IRODS&diff=1984hidden:IRODS2013-10-28T16:07:41Z<p>Jmeyer: /* Backup (irods-1) */</p>
<hr />
<div>==iRODS (irods-1)==<br />
iRODS is installed on irods-1.lsdf.kit.edu:<br />
* installation directory: /data/irods/iRODS/<br />
* installation logs in /data/irods/iRODS/installLogs/<br />
<br />
* configuration: /data/irods/iRODS/config/irods.config<br />
<br />
==Machine (irods-1)==<br />
* host: irods-1.lsdf.kit.edu, alias: irods-lsdma.lsdf.kit.edu, ip: 141.52.208.10<br />
* location: LSDF-rack 8, height unit 18<br />
* OS: SL 6.2<br />
* System: IBM x3550 M2<br />
* Serial Number: KD03C2T<br />
==Machine (irods-2)==<br />
* host: irods-2.lsdf.kit.edu, eudat.lsdf.kit.edu<br />
* location: rack 21, height unit 35<br />
* DELL PowerEdge R510<br />
* warranty until 31/12/2014: http://www.dell.com/support/troubleshooting/us/en/04/Servicetag/FG5TW4J<br />
* purpose: EUDAT: full safe replication for ENES<br />
* [[hidden:irods installation on irods-2 | irods installation on irods-2]]<br />
<br />
==Backup (irods-1)==<br />
* disk backup on eu-stor.fzk.de<br />
* /etc/cron.daily/postgres_bak.sh (credentials in /root/.pgpass):<br />
#!/bin/bash<br />
<br />
save_place=/gpfsibm/irods/postgresdump_irods-1<br />
PG_DUMPALL=/data/postgres/pgsql/bin/pg_dumpall<br />
#-----------------------------<br />
<br />
test -d $save_place || mkdir -p $save_place<br />
chmod 700 $save_place $0<br />
<br />
ddir=$(date +%Y)/$(date +%m)<br />
dfile=$(date +%Y-%m-%d).dump.bz2<br />
dumpdir=$save_place/$ddir<br />
test -d $dumpdir || mkdir -p $dumpdir<br />
<br />
$PG_DUMPALL -U irods |bzip2 >$dumpdir/$dfile<br />
<br />
# keep last 7 dumps:<br />
find $save_place -type f -mtime +7 -exec rm {} \;<br />
<br />
==TO DO==<br />
* Need second disk to create RAID1 for system!</div>Jmeyerhttps://wiki.scc.kit.edu/lsdf/index.php?title=hidden:Irods_installation_on_irods-2&diff=1983hidden:Irods installation on irods-22013-10-28T08:58:43Z<p>Jmeyer: </p>
<hr />
<div>==OS==<br />
* SL 6.4<br />
* various packages:<br />
yum install ntp bind-utils man strace nmap emacs gcc-c++ make<br />
<br />
* add user:<br />
useradd -u 995 irods <br />
[root@irods-2 ~]# id irods<br />
uid=995(irods) gid=995(irods) groups=995(irods)<br />
<br />
==postgresql==<br />
* repository<br />
wget http://yum.postgresql.org/9.3/redhat/rhel-6-x86_64/pgdg-sl93-9.3-1.noarch.rpm<br />
rpm -ihv pgdg-sl93-9.3-1.noarch.rpm<br />
* installation<br />
yum install postgresql93-server postgresql93-libs postgresql93<br />
* directory<br />
/var/lib/pgsql/9.3/data<br />
* first start<br />
service postgresql-9.3 initdb<br />
* start/stop/status<br />
service postgresql-9.3 start/stop/status<br />
* database user access<br />
su - postgres<br />
/usr/pgsql-9.3/bin/psql<br />
=>CREATE USER irods WITH PASSWORD 'mypassword'; (see password in /data/irods/iRODS/config/irods.config)<br />
=>ALTER USER irods CREATEDB;<br />
=>\q<br />
* useful symlinks<br />
cd /var/lib/pgsql/9.3<br />
ln -s /usr/pgsql-9.3/bin<br />
ln -s /usr/pgsql-9.3/lib<br />
* /var/lib/pgsql/9.3/data/pg_hba.conf:<br />
# TYPE DATABASE USER CIDR-ADDRESS METHOD<br />
<br />
# "local" is for Unix domain socket connections only<br />
local all all trust<br />
# IPv4 local connections:<br />
host all all 127.0.0.1/32 md5<br />
# IPv6 local connections:<br />
host all all ::1/128 md5<br />
<br />
# iRODS connections:<br />
# Force use of md5 scrambling for all connections.<br />
host all all 0.0.0.0/0 md5<br />
host all all ::/0 md5<br />
<br />
* /var/lib/pgsql/9.3/data/postgresql.conf<br />
...<br />
listen_addresses = '*'<br />
...<br />
* restart postgres after modification of pg_hba.conf and postgresql.conf<br />
* check that PostgreSQL listens on all IPs (netstat -nlp)<br />
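The two edits above can be sanity-checked with grep. The sketch below is self-contained (it recreates the relevant settings in a scratch directory); on the real host, point PGDATA at /var/lib/pgsql/9.3/data instead.

```shell
# Recreate the two settings in a scratch PGDATA and grep for them,
# exactly as one would against the live config files.
PGDATA=/tmp/pgdata_demo                     # stand-in for /var/lib/pgsql/9.3/data
mkdir -p "$PGDATA"
printf 'host    all    all    0.0.0.0/0    md5\n' > "$PGDATA/pg_hba.conf"
printf "listen_addresses = '*'\n"                 > "$PGDATA/postgresql.conf"

# md5 must be enforced for remote connections, and the server must
# listen on all interfaces:
grep -Eq '0\.0\.0\.0/0[[:space:]]+md5' "$PGDATA/pg_hba.conf" && echo "pg_hba: md5 enforced"
grep -Fq "listen_addresses = '*'" "$PGDATA/postgresql.conf"  && echo "listen_addresses: ok"
```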
<br />
==ODBC==<br />
yum install unixODBC unixODBC-devel<br />
* config files:<br />
/etc/odbcinst.ini<br />
/etc/odbc.ini<br />
* replace /etc/odbc.ini by (taken from irods-1 and adopted):<br />
[PostgreSQL]<br />
Driver=/usr/lib64/psqlodbc.so<br />
Debug=0<br />
CommLog=0<br />
Servername=irods-2<br />
ReadOnly=no<br />
Ksqo=0<br />
Port=5432<br />
Database=ICAT<br />
* crosscheck that all libs in /etc/odbcinst.ini exist<br />
* point to config files (not sure whether this is needed): <br />
# cat /etc/profile.d/odbc.sh<br />
export ODBCSYSINI=/etc<br />
export ODBCINI=/etc/odbc.ini<br />
* test:<br />
[root@irods-2]# odbcinst -q -s<br />
[PostgreSQL]<br />
[root@irods-2]# odbcinst -q -d<br />
[PostgreSQL]<br />
[MySQL]<br />
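Beyond `odbcinst -q`, the DSN stanza itself can be checked, and the connection tested with unixODBC's `isql`. The sketch below writes the stanza to a scratch file so it runs anywhere; on irods-2 the file is /etc/odbc.ini, and the `isql` line (commented out, password is a placeholder) needs the live database.

```shell
# Verify the [PostgreSQL] DSN exists and that its Driver library path
# is the one expected on the host.
ODBCINI=/tmp/odbc_demo.ini                  # stand-in for /etc/odbc.ini
cat > "$ODBCINI" <<'EOF'
[PostgreSQL]
Driver=/usr/lib64/psqlodbc.so
Port=5432
Database=ICAT
EOF

grep -q '^\[PostgreSQL\]' "$ODBCINI" && echo "DSN found"
driver=$(sed -n 's/^Driver=//p' "$ODBCINI")
echo "driver: $driver"

# Interactive connection test on the real host:
# isql -v PostgreSQL irods 'mypassword'
```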
<br />
==irods installation==<br />
* get irods3.2.tgz from somewhere (e.g. irods-1)<br />
mkdir /data/irods<br />
cd /data/irods<br />
mv irods3.2.tgz .<br />
tar xfvz irods3.2.tgz<br />
cd iRODS<br />
less INSTALL.txt<br />
* run irodssetup<br />
./irodssetup<br />
* use the following settings<br />
--------------------------------------------------------<br />
Build iRODS data server + iCAT metadata catalog<br />
directory '/data/irods/iRODS'<br />
port '1247'<br />
start svrPort '20000'<br />
end svrPort '20199'<br />
account 'rods'<br />
password '***'<br />
zone 'tempZone'<br />
db name 'ICAT'<br />
scramble key '321'<br />
resource name 'demoResc'<br />
resource dir '/data/irods/iRODS/Vault'<br />
<br />
Use existing Postgres<br />
host 'localhost'<br />
port '5432'<br />
directory '/var/lib/pgsql/9.3'<br />
account 'irods'<br />
password '***'<br />
pg version ''<br />
odbc version ''<br />
control do not let irods control start and stop of postgres<br />
<br />
==irods configuration==</div>Jmeyerhttps://wiki.scc.kit.edu/lsdf/index.php?title=hidden:Irods_installation_on_irods-2&diff=1982hidden:Irods installation on irods-22013-10-28T08:51:26Z<p>Jmeyer: </p>
<hr />
<div>==OS==<br />
* SL 6.4<br />
* various packages:<br />
yum install ntp bind-utils man strace nmap emacs gcc-c++ make<br />
<br />
* add user:<br />
useradd -u 995 irods <br />
[root@irods-2 ~]# id irods<br />
uid=995(irods) gid=995(irods) groups=995(irods)<br />
<br />
==postgresql==<br />
* repository<br />
wget http://yum.postgresql.org/9.3/redhat/rhel-6-x86_64/pgdg-sl93-9.3-1.noarch.rpm<br />
rpm -ihv pgdg-sl93-9.3-1.noarch.rpm<br />
* installation<br />
yum install postgresql93-server postgresql93-libs postgresql93<br />
* directory<br />
/var/lib/pgsql/9.3/data<br />
* first start<br />
service postgresql-9.3 initdb<br />
* start/stop/status<br />
service postgresql-9.3 start/stop/status<br />
* database user access<br />
su - postgres<br />
/usr/pgsql-9.3/bin/psql<br />
=>CREATE USER irods WITH PASSWORD 'mypassword'; (see password in /data/irods/iRODS/config/irods.config)<br />
=>ALTER USER irods CREATEDB;<br />
=>\q<br />
* useful symlinks<br />
cd /var/lib/pgsql/9.3<br />
ln -s /usr/pgsql-9.3/bin<br />
ln -s /usr/pgsql-9.3/lib<br />
* /var/lib/pgsql/9.3/data/pg_hba.conf:<br />
# TYPE DATABASE USER CIDR-ADDRESS METHOD<br />
<br />
# "local" is for Unix domain socket connections only<br />
local all all md5<br />
# IPv4 local connections:<br />
host all all 127.0.0.1/32 md5<br />
# IPv6 local connections:<br />
host all all ::1/128 md5<br />
<br />
# iRODS connections:<br />
# Force use of md5 scrambling for all connections.<br />
host all all 0.0.0.0/0 md5<br />
host all all ::/0 md5<br />
<br />
* /var/lib/pgsql/9.3/data/postgresql.conf<br />
...<br />
listen_addresses = '*'<br />
...<br />
* restart postgres after modification of pg_hba.conf and postgresql.conf<br />
* check that PostgreSQL listens on all IPs (netstat -nlp)<br />
<br />
==ODBC==<br />
yum install unixODBC unixODBC-devel<br />
* config files:<br />
/etc/odbcinst.ini<br />
/etc/odbc.ini<br />
* replace /etc/odbc.ini by (taken from irods-1 and adopted):<br />
[PostgreSQL]<br />
Driver=/usr/lib64/psqlodbc.so<br />
Debug=0<br />
CommLog=0<br />
Servername=irods-2<br />
ReadOnly=no<br />
Ksqo=0<br />
Port=5432<br />
Database=ICAT<br />
* crosscheck that all libs in /etc/odbcinst.ini exist<br />
* point to config files (not sure whether this is needed): <br />
# cat /etc/profile.d/odbc.sh<br />
export ODBCSYSINI=/etc<br />
export ODBCINI=/etc/odbc.ini<br />
* test:<br />
[root@irods-2]# odbcinst -q -s<br />
[PostgreSQL]<br />
[root@irods-2]# odbcinst -q -d<br />
[PostgreSQL]<br />
[MySQL]<br />
<br />
==irods installation==<br />
* get irods3.2.tgz from somewhere (e.g. irods-1)<br />
mkdir /data/irods<br />
cd /data/irods<br />
mv irods3.2.tgz .<br />
tar xfvz irods3.2.tgz<br />
cd iRODS<br />
less INSTALL.txt<br />
* run irodssetup<br />
./irodssetup<br />
* use the following settings<br />
--------------------------------------------------------<br />
Build iRODS data server + iCAT metadata catalog<br />
directory '/data/irods/iRODS'<br />
port '1247'<br />
start svrPort '20000'<br />
end svrPort '20199'<br />
account 'rods'<br />
password '***'<br />
zone 'tempZone'<br />
db name 'ICAT'<br />
scramble key '321'<br />
resource name 'demoResc'<br />
resource dir '/data/irods/iRODS/Vault'<br />
<br />
Use existing Postgres<br />
host 'localhost'<br />
port '5432'<br />
directory '/var/lib/pgsql/9.3'<br />
account 'irods'<br />
password '***'<br />
pg version ''<br />
odbc version ''<br />
control do not let irods control start and stop of postgres<br />
<br />
==irods configuration==</div>Jmeyerhttps://wiki.scc.kit.edu/lsdf/index.php?title=hidden:Irods_installation_on_irods-2&diff=1981hidden:Irods installation on irods-22013-10-28T08:48:48Z<p>Jmeyer: /* irods installation */</p>
<hr />
<div>==OS==<br />
* SL 6.4<br />
* various packages:<br />
yum install ntp bind-utils man strace nmap emacs gcc-c++ make<br />
<br />
* add user:<br />
useradd -u 995 irods <br />
[root@irods-2 ~]# id irods<br />
uid=995(irods) gid=995(irods) groups=995(irods)<br />
<br />
==postgresql==<br />
* repository<br />
wget http://yum.postgresql.org/9.3/redhat/rhel-6-x86_64/pgdg-sl93-9.3-1.noarch.rpm<br />
rpm -ihv pgdg-sl93-9.3-1.noarch.rpm<br />
* installation<br />
yum install postgresql93-server postgresql93-libs postgresql93<br />
* directory<br />
/var/lib/pgsql/9.3/data<br />
* first start<br />
service postgresql-9.3 initdb<br />
* start/stop/status<br />
service postgresql-9.3 start/stop/status<br />
* database user access<br />
su - postgres<br />
/usr/pgsql-9.3/bin/psql<br />
=>CREATE USER irods WITH PASSWORD 'mypassword'; (see password in /data/irods/iRODS/config/irods.config)<br />
=>ALTER USER irods CREATEDB;<br />
=>\q<br />
* useful symlinks<br />
cd /var/lib/pgsql/9.3<br />
ln -s /usr/pgsql-9.3/bin<br />
ln -s /usr/pgsql-9.3/lib<br />
* /var/lib/pgsql/9.3/data/pg_hba.conf:<br />
# TYPE DATABASE USER CIDR-ADDRESS METHOD<br />
<br />
# "local" is for Unix domain socket connections only<br />
local all all md5<br />
# IPv4 local connections:<br />
host all all 127.0.0.1/32 md5<br />
# IPv6 local connections:<br />
host all all ::1/128 md5<br />
<br />
# iRODS connections:<br />
# Force use of md5 scrambling for all connections.<br />
host all all 0.0.0.0/0 md5<br />
host all all ::/0 md5<br />
<br />
* /var/lib/pgsql/9.3/data/postgresql.conf<br />
...<br />
listen_addresses = '*'<br />
...<br />
* restart postgres after modification of pg_hba.conf and postgresql.conf<br />
* check that PostgreSQL listens on all IPs (netstat -nlp)<br />
<br />
==ODBC==<br />
yum install unixODBC unixODBC-devel<br />
* config files:<br />
/etc/odbcinst.ini<br />
/etc/odbc.ini<br />
* replace /etc/odbc.ini by (taken from irods-1 and adopted):<br />
[PostgreSQL]<br />
Driver=/usr/lib64/psqlodbc.so<br />
Debug=0<br />
CommLog=0<br />
Servername=irods-2<br />
ReadOnly=no<br />
Ksqo=0<br />
Port=5432<br />
Database=ICAT<br />
* crosscheck that all libs in /etc/odbcinst.ini exist<br />
* point to config files (not sure whether this is needed): <br />
# cat /etc/profile.d/odbc.sh<br />
export ODBCSYSINI=/etc<br />
export ODBCINI=/etc/odbc.ini<br />
* test:<br />
[root@irods-2]# odbcinst -q -s<br />
[PostgreSQL]<br />
[root@irods-2]# odbcinst -q -d<br />
[PostgreSQL]<br />
[MySQL]<br />
<br />
==irods installation==<br />
* get irods3.2.tgz from somewhere (e.g. irods-1)<br />
mkdir /data/irods<br />
cd /data/irods<br />
mv irods3.2.tgz .<br />
tar xfvz irods3.2.tgz<br />
cd iRODS<br />
less INSTALL.txt<br />
* run irodssetup<br />
./irodssetup<br />
* use the following settings<br />
--------------------------------------------------------<br />
Build iRODS data server + iCAT metadata catalog<br />
directory '/data/irods/iRODS'<br />
port '1247'<br />
start svrPort '20000'<br />
end svrPort '20199'<br />
account 'rods'<br />
password '***'<br />
zone 'tempZone'<br />
db name 'ICAT'<br />
scramble key '321'<br />
resource name 'demoResc'<br />
resource dir '/data/irods/iRODS/Vault'<br />
<br />
Use existing Postgres<br />
host 'localhost'<br />
port '5432'<br />
directory '/var/lib/pgsql/9.3'<br />
account 'irods'<br />
password '***'<br />
pg version ''<br />
odbc version ''<br />
control do not let irods control start and stop of postgres<br />
<br />
==irods configuration==</div>Jmeyerhttps://wiki.scc.kit.edu/lsdf/index.php?title=hidden:Irods_installation_on_irods-2&diff=1980hidden:Irods installation on irods-22013-10-28T08:41:38Z<p>Jmeyer: </p>
<hr />
<div>==OS==<br />
* SL 6.4<br />
* various packages:<br />
yum install ntp bind-utils man strace nmap emacs gcc-c++ make<br />
<br />
* add user:<br />
useradd -u 995 irods <br />
[root@irods-2 ~]# id irods<br />
uid=995(irods) gid=995(irods) groups=995(irods)<br />
<br />
==postgresql==<br />
* repository<br />
wget http://yum.postgresql.org/9.3/redhat/rhel-6-x86_64/pgdg-sl93-9.3-1.noarch.rpm<br />
rpm -ihv pgdg-sl93-9.3-1.noarch.rpm<br />
* installation<br />
yum install postgresql93-server postgresql93-libs postgresql93<br />
* directory<br />
/var/lib/pgsql/9.3/data<br />
* first start<br />
service postgresql-9.3 initdb<br />
* start/stop/status<br />
service postgresql-9.3 start/stop/status<br />
* database user access<br />
su - postgres<br />
/usr/pgsql-9.3/bin/psql<br />
=>CREATE USER irods WITH PASSWORD 'mypassword'; (see password in /data/irods/iRODS/config/irods.config)<br />
=>ALTER USER irods CREATEDB;<br />
=>\q<br />
* useful symlinks<br />
cd /var/lib/pgsql/9.3<br />
ln -s /usr/pgsql-9.3/bin<br />
ln -s /usr/pgsql-9.3/lib<br />
* /var/lib/pgsql/9.3/data/pg_hba.conf:<br />
# TYPE DATABASE USER CIDR-ADDRESS METHOD<br />
<br />
# "local" is for Unix domain socket connections only<br />
local all all md5<br />
# IPv4 local connections:<br />
host all all 127.0.0.1/32 md5<br />
# IPv6 local connections:<br />
host all all ::1/128 md5<br />
<br />
# iRODS connections:<br />
# Force use of md5 scrambling for all connections.<br />
host all all 0.0.0.0/0 md5<br />
host all all ::/0 md5<br />
<br />
* /var/lib/pgsql/9.3/data/postgresql.conf<br />
...<br />
listen_addresses = '*'<br />
...<br />
* restart postgres after modification of pg_hba.conf and postgresql.conf<br />
* check that PostgreSQL listens on all IPs (netstat -nlp)<br />
<br />
==ODBC==<br />
yum install unixODBC unixODBC-devel<br />
* config files:<br />
/etc/odbcinst.ini<br />
/etc/odbc.ini<br />
* replace /etc/odbc.ini by (taken from irods-1 and adopted):<br />
[PostgreSQL]<br />
Driver=/usr/lib64/psqlodbc.so<br />
Debug=0<br />
CommLog=0<br />
Servername=irods-2<br />
ReadOnly=no<br />
Ksqo=0<br />
Port=5432<br />
Database=ICAT<br />
* crosscheck that all libs in /etc/odbcinst.ini exist<br />
* point to config files (not sure whether this is needed): <br />
# cat /etc/profile.d/odbc.sh<br />
export ODBCSYSINI=/etc<br />
export ODBCINI=/etc/odbc.ini<br />
* test:<br />
[root@irods-2]# odbcinst -q -s<br />
[PostgreSQL]<br />
[root@irods-2]# odbcinst -q -d<br />
[PostgreSQL]<br />
[MySQL]<br />
<br />
==irods installation==<br />
* get irods3.2.tgz from somewhere (e.g. irods-1)<br />
mkdir /data/irods<br />
cd /data/irods<br />
mv irods3.2.tgz .<br />
tar xfvz irods3.2.tgz<br />
cd iRODS<br />
less INSTALL.txt<br />
* run irodssetup<br />
./irodssetup<br />
* use the following settings<br />
--------------------------------------------------------<br />
Build iRODS data server + iCAT metadata catalog<br />
directory '/data/irods/iRODS'<br />
port '1247'<br />
start svrPort '20000'<br />
end svrPort '20199'<br />
account 'rods'<br />
password '***'<br />
zone 'tempZone'<br />
db name 'ICAT'<br />
scramble key '321'<br />
resource name 'demoResc'<br />
resource dir '/data/irods/iRODS/Vault'<br />
<br />
Use existing Postgres<br />
host 'localhost'<br />
port '5432'<br />
directory '/var/lib/pgsql/9.3'<br />
account 'irods'<br />
password '***'<br />
pg version ''<br />
odbc version ''<br />
control do not let irods control start and stop of postgres<br />
<br />
==irods configuration==</div>Jmeyerhttps://wiki.scc.kit.edu/lsdf/index.php?title=hidden:Irods_installation_on_irods-2&diff=1979hidden:Irods installation on irods-22013-10-28T08:31:18Z<p>Jmeyer: /* postgresql */</p>
<hr />
<div>==OS==<br />
* SL 6.4<br />
* various packages:<br />
yum install ntp bind-utils man strace nmap emacs gcc-c++ make<br />
<br />
* add user:<br />
useradd -u 995 irods <br />
[root@irods-2 ~]# id irods<br />
uid=995(irods) gid=995(irods) groups=995(irods)<br />
<br />
==postgresql==<br />
* repository<br />
wget http://yum.postgresql.org/9.3/redhat/rhel-6-x86_64/pgdg-sl93-9.3-1.noarch.rpm<br />
rpm -ihv pgdg-sl93-9.3-1.noarch.rpm<br />
* installation<br />
yum install postgresql93-server postgresql93-libs postgresql93<br />
* directory<br />
/var/lib/pgsql/9.3/data<br />
* first start<br />
service postgresql-9.3 initdb<br />
* start/stop/status<br />
service postgresql-9.3 start/stop/status<br />
* database user access<br />
su - postgres<br />
/usr/pgsql-9.3/bin/psql<br />
=>CREATE USER irods WITH PASSWORD 'mypassword'; (see password in /data/irods/iRODS/config/irods.config)<br />
=>ALTER USER irods CREATEDB;<br />
=>\q<br />
* useful symlinks<br />
cd /var/lib/pgsql/9.3<br />
ln -s /usr/pgsql-9.3/bin<br />
ln -s /usr/pgsql-9.3/lib<br />
* /var/lib/pgsql/9.3/data/pg_hba.conf:<br />
# TYPE DATABASE USER CIDR-ADDRESS METHOD<br />
<br />
# "local" is for Unix domain socket connections only<br />
local all all md5<br />
# IPv4 local connections:<br />
host all all 127.0.0.1/32 md5<br />
# IPv6 local connections:<br />
host all all ::1/128 md5<br />
<br />
# iRODS connections:<br />
# Force use of md5 scrambling for all connections.<br />
host all all 0.0.0.0/0 md5<br />
host all all ::/0 md5<br />
<br />
==ODBC==<br />
yum install unixODBC unixODBC-devel<br />
* config files:<br />
/etc/odbcinst.ini<br />
/etc/odbc.ini<br />
* replace /etc/odbc.ini by (taken from irods-1 and adopted):<br />
[PostgreSQL]<br />
Driver=/usr/lib64/psqlodbc.so<br />
Debug=0<br />
CommLog=0<br />
Servername=irods-2<br />
ReadOnly=no<br />
Ksqo=0<br />
Port=5432<br />
Database=ICAT<br />
* crosscheck that all libs in /etc/odbcinst.ini exist<br />
* point to config files (not sure whether this is needed): <br />
# cat /etc/profile.d/odbc.sh<br />
export ODBCSYSINI=/etc<br />
export ODBCINI=/etc/odbc.ini<br />
* test:<br />
[root@irods-2]# odbcinst -q -s<br />
[PostgreSQL]<br />
[root@irods-2]# odbcinst -q -d<br />
[PostgreSQL]<br />
[MySQL]<br />
<br />
==irods installation==<br />
* get irods3.2.tgz from somewhere (e.g. irods-1)<br />
mkdir /data/irods<br />
cd /data/irods<br />
mv irods3.2.tgz .<br />
tar xfvz irods3.2.tgz<br />
cd iRODS<br />
less INSTALL.txt<br />
<br />
==irods configuration==</div>Jmeyerhttps://wiki.scc.kit.edu/lsdf/index.php?title=hidden:Irods_installation_on_irods-2&diff=1978hidden:Irods installation on irods-22013-10-28T08:17:00Z<p>Jmeyer: Created page with "==OS== * SL 6.4 * various packages: yum install ntp bind-utils man strace nmap emacs gcc-c++ make * add user: useradd -u 995 irods [root@irods-2 ~]# id irods uid=995(irods)…"</p>
<hr />
<div>==OS==<br />
* SL 6.4<br />
* various packages:<br />
yum install ntp bind-utils man strace nmap emacs gcc-c++ make<br />
<br />
* add user:<br />
useradd -u 995 irods <br />
[root@irods-2 ~]# id irods<br />
uid=995(irods) gid=995(irods) groups=995(irods)<br />
<br />
==postgresql==<br />
* repository<br />
wget http://yum.postgresql.org/9.3/redhat/rhel-6-x86_64/pgdg-sl93-9.3-1.noarch.rpm<br />
rpm -ihv pgdg-sl93-9.3-1.noarch.rpm<br />
* installation<br />
yum install postgresql93-server postgresql93-libs postgresql93<br />
* directory<br />
/var/lib/pgsql/9.3/data<br />
* first start<br />
service postgresql-9.3 initdb<br />
* start/stop/status<br />
service postgresql-9.3 start/stop/status<br />
<br />
==ODBC==<br />
yum install unixODBC unixODBC-devel<br />
* config files:<br />
/etc/odbcinst.ini<br />
/etc/odbc.ini<br />
* replace /etc/odbc.ini by (taken from irods-1 and adopted):<br />
[PostgreSQL]<br />
Driver=/usr/lib64/psqlodbc.so<br />
Debug=0<br />
CommLog=0<br />
Servername=irods-2<br />
ReadOnly=no<br />
Ksqo=0<br />
Port=5432<br />
Database=ICAT<br />
* crosscheck that all libs in /etc/odbcinst.ini exist<br />
* point to config files (not sure whether this is needed): <br />
# cat /etc/profile.d/odbc.sh<br />
export ODBCSYSINI=/etc<br />
export ODBCINI=/etc/odbc.ini<br />
* test:<br />
[root@irods-2]# odbcinst -q -s<br />
[PostgreSQL]<br />
[root@irods-2]# odbcinst -q -d<br />
[PostgreSQL]<br />
[MySQL]<br />
<br />
==irods installation==<br />
* get irods3.2.tgz from somewhere (e.g. irods-1)<br />
mkdir /data/irods<br />
cd /data/irods<br />
mv irods3.2.tgz .<br />
tar xfvz irods3.2.tgz<br />
cd iRODS<br />
less INSTALL.txt<br />
<br />
==irods configuration==</div>Jmeyerhttps://wiki.scc.kit.edu/lsdf/index.php?title=hidden:IRODS&diff=1977hidden:IRODS2013-10-28T07:56:25Z<p>Jmeyer: /* Machine (irods-2) */</p>
<hr />
<div>==iRODS (irods-1)==<br />
iRODS is installed on irods-1.lsdf.kit.edu:<br />
* installation directory: /data/irods/iRODS/<br />
* installation logs in /data/irods/iRODS/installLogs/<br />
<br />
* configuration: /data/irods/iRODS/config/irods.config<br />
<br />
==Machine (irods-1)==<br />
* host: irods-1.lsdf.kit.edu, alias: irods-lsdma.lsdf.kit.edu, ip: 141.52.208.10<br />
* location: LSDF-rack 8, height unit 18<br />
* OS: SL 6.2<br />
* System: IBM x3550 M2<br />
* Serial Number: KD03C2T<br />
==Machine (irods-2)==<br />
* host: irods-2.lsdf.kit.edu, eudat.lsdf.kit.edu<br />
* location: rack 21, height unit 35<br />
* DELL PowerEdge R510<br />
* warranty until 31/12/2014: http://www.dell.com/support/troubleshooting/us/en/04/Servicetag/FG5TW4J<br />
* purpose: EUDAT: full safe replication for ENES<br />
* [[hidden:irods installation on irods-2 | irods installation on irods-2]]<br />
<br />
==Backup (irods-1)==<br />
* disk backup on eu-stor.fzk.de<br />
* manual postgresql dump:<br />
./pg_dump -U irods ICAT > /gpfs/irods/pg_dump_ICAT.26092013<br />
==TO DO==<br />
* Need second disk to create RAID1 for system!</div>Jmeyerhttps://wiki.scc.kit.edu/lsdf/index.php?title=hidden:IRODS&diff=1937hidden:IRODS2013-10-01T12:19:18Z<p>Jmeyer: </p>
<hr />
<div>==iRODS (irods-1)==<br />
iRODS is installed on irods-1.lsdf.kit.edu:<br />
* installation directory: /data/irods/iRODS/<br />
* installation logs in /data/irods/iRODS/installLogs/<br />
<br />
* configuration: /data/irods/iRODS/config/irods.config<br />
<br />
==Machine (irods-1)==<br />
* host: irods-1.lsdf.kit.edu, alias: irods-lsdma.lsdf.kit.edu, ip: 141.52.208.10<br />
* location: LSDF-rack 8, height unit 18<br />
* OS: SL 6.2<br />
* System: IBM x3550 M2<br />
* Serial Number: KD03C2T<br />
==Machine (irods-2)==<br />
* host: irods-2.lsdf.kit.edu<br />
* location: rack 21, height unit 35<br />
* DELL PowerEdge R510<br />
* warranty until 31/12/2014: http://www.dell.com/support/troubleshooting/us/en/04/Servicetag/FG5TW4J<br />
* purpose: iDrop test machine<br />
<br />
==Backup (irods-1)==<br />
* disk backup on eu-stor.fzk.de<br />
* manual postgresql dump:<br />
./pg_dump -U irods ICAT > /gpfs/irods/pg_dump_ICAT.26092013<br />
==TO DO==<br />
* Need second disk to create RAID1 for system!</div>Jmeyerhttps://wiki.scc.kit.edu/lsdf/index.php?title=hidden:IRODS&diff=1936hidden:IRODS2013-10-01T12:17:56Z<p>Jmeyer: /* Machine (irods-2) */</p>
<hr />
<div>==iRODS==<br />
iRODS is installed on irods-1.lsdf.kit.edu:<br />
* installation directory: /data/irods/iRODS/<br />
* installation logs in /data/irods/iRODS/installLogs/<br />
<br />
* configuration: /data/irods/iRODS/config/irods.config<br />
<br />
==Machine (irods-1)==<br />
* host: irods-1.lsdf.kit.edu, alias: irods-lsdma.lsdf.kit.edu, ip: 141.52.208.10<br />
* location: LSDF-rack 8, height unit 18<br />
* OS: SL 6.2<br />
* System: IBM x3550 M2<br />
* Serial Number: KD03C2T<br />
==Machine (irods-2)==<br />
* host: irods-2.lsdf.kit.edu<br />
* location: rack 21, height unit 35<br />
* DELL PowerEdge R510<br />
* warranty until 31/12/2014: http://www.dell.com/support/troubleshooting/us/en/04/Servicetag/FG5TW4J<br />
* purpose: iDrop test machine<br />
<br />
==Backup==<br />
* backuppc@eu-stor.fzk.de (?)<br />
* manual postgresql dump:<br />
./pg_dump -U irods ICAT > /gpfs/irods/pg_dump_ICAT.26092013<br />
==TO DO==<br />
* Need second disk to create RAID1 for system!</div>Jmeyerhttps://wiki.scc.kit.edu/lsdf/index.php?title=hidden:IRODS&diff=1935hidden:IRODS2013-09-30T15:30:41Z<p>Jmeyer: </p>
<hr />
<div>==iRODS==<br />
iRODS is installed on irods-1.lsdf.kit.edu:<br />
* installation directory: /data/irods/iRODS/<br />
* installation logs in /data/irods/iRODS/installLogs/<br />
<br />
* configuration: /data/irods/iRODS/config/irods.config<br />
<br />
==Machine (irods-1)==<br />
* host: irods-1.lsdf.kit.edu, alias: irods-lsdma.lsdf.kit.edu, ip: 141.52.208.10<br />
* location: LSDF-rack 8, height unit 18<br />
* OS: SL 6.2<br />
* System: IBM x3550 M2<br />
* Serial Number: KD03C2T<br />
==Machine (irods-2)==<br />
* host: irods-2.lsdf.kit.edu<br />
* location: rack 21, height unit 35<br />
* purpose: iDrop test machine<br />
==Backup==<br />
* backuppc@eu-stor.fzk.de (?)<br />
* manual postgresql dump:<br />
./pg_dump -U irods ICAT > /gpfs/irods/pg_dump_ICAT.26092013<br />
==TO DO==<br />
* Need second disk to create RAID1 for system!</div>Jmeyerhttps://wiki.scc.kit.edu/lsdf/index.php?title=hidden:IRODS&diff=1934hidden:IRODS2013-09-26T15:17:51Z<p>Jmeyer: /* iRODS */</p>
<hr />
<div>==iRODS==<br />
iRODS is installed on irods-1.lsdf.kit.edu:<br />
* installation directory: /data/irods/iRODS/<br />
* installation logs in /data/irods/iRODS/installLogs/<br />
<br />
* configuration: /data/irods/iRODS/config/irods.config<br />
<br />
==Machine==<br />
* host: irods-1.lsdf.kit.edu, alias: irods-lsdma.lsdf.kit.edu, ip: 141.52.208.10<br />
* location: LSDF-rack 8, height unit 18<br />
* OS: SL 6.2<br />
* System: IBM x3550 M2<br />
* Serial Number: KD03C2T<br />
<br />
==Backup==<br />
* backuppc@eu-stor.fzk.de (?)<br />
* manual postgresql dump:<br />
./pg_dump -U irods ICAT > /gpfs/irods/pg_dump_ICAT.26092013<br />
==TO DO==<br />
* Need second disk to create RAID1 for system!</div>Jmeyerhttps://wiki.scc.kit.edu/lsdf/index.php?title=hidden:IRODS&diff=1933hidden:IRODS2013-09-26T15:17:27Z<p>Jmeyer: </p>
<hr />
<div>==iRODS==<br />
iRODS is installed on irods-1.lsdf.kit.edu:<br />
installation directory: /data/irods/iRODS/<br />
installation logs in /data/irods/iRODS/installLogs/<br />
<br />
configuration: /data/irods/iRODS/config/irods.config<br />
<br />
==Machine==<br />
* host: irods-1.lsdf.kit.edu, alias: irods-lsdma.lsdf.kit.edu, ip: 141.52.208.10<br />
* location: LSDF-rack 8, height unit 18<br />
* OS: SL 6.2<br />
* System: IBM x3550 M2<br />
* Serial Number: KD03C2T<br />
<br />
==Backup==<br />
* backuppc@eu-stor.fzk.de (?)<br />
* manual postgresql dump:<br />
./pg_dump -U irods ICAT > /gpfs/irods/pg_dump_ICAT.26092013<br />
==TO DO==<br />
* Need second disk to create RAID1 for system!</div>Jmeyerhttps://wiki.scc.kit.edu/lsdf/index.php?title=hidden:IRODS&diff=1932hidden:IRODS2013-09-26T15:07:44Z<p>Jmeyer: </p>
<hr />
<div>==iRODS==<br />
iRODS is installed on irods-1.lsdf.kit.edu:<br />
installation directory: /data/irods/iRODS/<br />
installation logs in /data/irods/iRODS/installLogs/<br />
<br />
configuration: /data/irods/iRODS/config/irods.config<br />
<br />
==Machine==<br />
* host: irods-1.lsdf.kit.edu, alias: irods-lsdma.lsdf.kit.edu, ip: 141.52.208.10<br />
* location: LSDF-rack 8, height unit 18<br />
* OS: SL 6.2<br />
* System: IBM x3550 M2<br />
* Serial Number: KD03C2T<br />
<br />
==Backup==<br />
* backuppc@eu-stor.fzk.de (?)<br />
<br />
==TO DO==<br />
* Need second disk to create RAID1 for system!</div>Jmeyerhttps://wiki.scc.kit.edu/lsdf/index.php?title=hidden:IRODS&diff=1931hidden:IRODS2013-09-26T15:00:15Z<p>Jmeyer: </p>
<hr />
<div>==iRODS==<br />
iRODS is installed on irods-1.lsdf.kit.edu:<br />
installation directory: /data/irods/iRODS/<br />
installation logs in /data/irods/iRODS/installLogs/<br />
<br />
==Machine==<br />
* host: irods-1.lsdf.kit.edu, alias: irods-lsdma.lsdf.kit.edu, ip: 141.52.208.10<br />
* location: LSDF-rack 8, height unit 18<br />
* OS: SL 6.2<br />
* System: IBM x3550 M2<br />
* Serial Number: KD03C2T<br />
<br />
==Backup==<br />
* backuppc@eu-stor.fzk.de (?)<br />
<br />
==TO DO==<br />
* Need second disk to create RAID1 for system!</div>Jmeyerhttps://wiki.scc.kit.edu/lsdf/index.php?title=hidden:IRODS&diff=1930hidden:IRODS2013-09-26T14:58:26Z<p>Jmeyer: </p>
<hr />
<div>==iRODS==<br />
iRODS is installed on irods-1.lsdf.kit.edu:<br />
installation directory: /data/irods/iRODS/<br />
installation logs in /data/irods/iRODS/installLogs/<br />
<br />
==Machine==<br />
* host: irods-1.lsdf.kit.edu, alias: irods-lsdma.lsdf.kit.edu, ip: 141.52.208.10<br />
* location: LSDF-rack 8, height unit 18<br />
* OS: SL 6.2<br />
* System: IBM x3550 M2<br />
* Serial Number: KD03C2T<br />
<br />
==Backup==<br />
* backuppc@eu-stor.fzk.de<br />
<br />
==TO DO==<br />
* Need second disk to create RAID1 for system!</div>Jmeyerhttps://wiki.scc.kit.edu/lsdf/index.php?title=hidden:IRODS&diff=1929hidden:IRODS2013-09-26T14:20:16Z<p>Jmeyer: </p>
<hr />
<div><br />
==iRODS==<br />
iRODS is installed on irods-1.lsdf.kit.edu:<br />
installation directory: /data/irods/iRODS/<br />
installation logs in /data/irods/iRODS/installLogs/<br />
<br />
==Machine==<br />
* host: irods-1.lsdf.kit.edu, alias: irods-lsdma.lsdf.kit.edu, ip: 141.52.208.10<br />
* location: LSDF-rack 8, height unit 18<br />
* OS: SL 6.2<br />
* System: IBM x3550 M2<br />
* Serial Number: KD03C2T<br />
<br />
==Backup==<br />
<br />
==TO DO==<br />
* Need second disk to create RAID1 for system!</div>Jmeyerhttps://wiki.scc.kit.edu/lsdf/index.php?title=OpenNebula&diff=1868OpenNebula2013-06-07T13:18:19Z<p>Jmeyer: /* starting and spopping VMs in OpenNebula */</p>
<hr />
<div>===Instructions on how to use the LSDF OpenNebula cloud===<br />
'''under construction'''<br />
==accounts and access==<br />
* See [[Access_to_Resources]] for instructions on how to request an OpenNebula account.<br />
* Login to the OpenNebula front-end server:<br />
ssh one.lsdf.kit.edu<br />
* Point the environment variable ONE_AUTH to your one_auth file. Default (/etc/profile.d/local-opennebula.sh):<br />
export ONE_AUTH=$HOME/ONE/one_auth<br />
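If the one_auth file does not exist yet, it can be created by hand; it typically holds a single `username:password` line (a sketch; the credentials below are placeholders):

```shell
mkdir -p "$HOME/ONE"
# one_auth holds the OpenNebula credentials as a single "username:password" line
echo 'myuser:mypassword' > "$HOME/ONE/one_auth"
chmod 600 "$HOME/ONE/one_auth"   # keep the password readable only by the owner
```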
<br />
==creating a kvm image==<br />
* creating a new kvm image using [[virt-manager]]<br />
* contextualizing image<br />
* registration of image in OpenNebula<br><br />
Copy image file to: one.lsdf.kit.edu:/var/lib/one/images/users<br><br />
Create text file: myimage_template<br />
NAME = "myimage"<br />
PATH = /var/lib/one/images/users/myimage.img<br />
PUBLIC = YES<br />
DESCRIPTION = "CentOS 6"<br />
Register image:<br />
oneimage register myimage_template<br />
Get image id:<br />
oneimage list<br />
Enable image:<br />
oneimage enable <Image-ID><br />
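Once the image is enabled it can be referenced from a VM template and instantiated. A minimal sketch, assuming the OpenNebula 3.x CLI; the template attributes and the name myvm are illustrative:

```shell
# myvm_template -- hypothetical minimal VM template referencing the image
cat > myvm_template <<'EOF'
NAME   = "myvm"
CPU    = 1
MEMORY = 1024
DISK   = [ IMAGE = "myimage" ]
EOF
onevm create myvm_template   # submit the VM for scheduling
onevm list                   # watch its state
onevm shutdown <VM-ID>       # shut it down cleanly when done
```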
<br />
==starting and stopping VMs in OpenNebula==</div>Jmeyerhttps://wiki.scc.kit.edu/lsdf/index.php?title=OpenNebula&diff=1867OpenNebula2013-06-07T13:17:58Z<p>Jmeyer: </p>
<hr />
<div>===Instructions on how to use the LSDF OpenNebula cloud===<br />
'''under construction'''<br />
==accounts and access==<br />
* See [[Access_to_Resources]] for instructions on how to request an OpenNebula account.<br />
* Login to the OpenNebula front-end server:<br />
ssh one.lsdf.kit.edu<br />
* Point the environment variable ONE_AUTH to your one_auth file. Default (/etc/profile.d/local-opennebula.sh):<br />
export ONE_AUTH=$HOME/ONE/one_auth<br />
<br />
==creating a kvm image==<br />
* creating a new kvm image using [[virt-manager]]<br />
* contextualizing image<br />
* registration of image in OpenNebula<br><br />
Copy image file to: one.lsdf.kit.edu:/var/lib/one/images/users<br><br />
Create text file: myimage_template<br />
NAME = "myimage"<br />
PATH = /var/lib/one/images/users/myimage.img<br />
PUBLIC = YES<br />
DESCRIPTION = "CentOS 6"<br />
Register image:<br />
oneimage register myimage_template<br />
Get image id:<br />
oneimage list<br />
Enable image:<br />
oneimage enable <Image-ID><br />
<br />
==starting and stopping VMs in OpenNebula==</div>Jmeyerhttps://wiki.scc.kit.edu/lsdf/index.php?title=OpenNebula&diff=1866OpenNebula2013-06-07T13:15:49Z<p>Jmeyer: /* creating a kvm image */</p>
<hr />
<div>===Instructions on how to use the LSDF OpenNebula cloud===<br />
'''under construction'''<br />
==accounts and access==<br />
* See [[Access_to_Resources]] for instructions on how to request an OpenNebula account.<br />
* Login to the OpenNebula front-end server:<br />
ssh one.lsdf.kit.edu<br />
* Point the environment variable ONE_AUTH to your one_auth file. Default (/etc/profile.d/local-opennebula.sh):<br />
export ONE_AUTH=$HOME/ONE/one_auth<br />
<br />
==creating a kvm image==<br />
* creating a new kvm image using [[virt-manager]]<br />
* contextualizing image<br />
* registration of image in OpenNebula<br><br />
Copy image file to: one.lsdf.kit.edu:/var/lib/one/images/users<br><br />
Create text file: myimage_template<br />
NAME = "myimage"<br />
PATH = /var/lib/one/images/users/myimage.img<br />
PUBLIC = YES<br />
DESCRIPTION = "CentOS 6"<br />
Register image:<br />
oneimage register myimage_template<br />
Get image id:<br />
oneimage list<br />
Enable image:<br />
oneimage enable <Image-ID></div>Jmeyerhttps://wiki.scc.kit.edu/lsdf/index.php?title=Virt-manager&diff=1865Virt-manager2013-06-07T12:42:24Z<p>Jmeyer: /* Extending the Size of an Image */</p>
<hr />
<div>'''UNDER CONSTRUCTION'''<br><br><br />
<br />
A KVM image can be created on any Linux system. This page describes how to install a KVM image using the virt-manager GUI. Alternatively, the command-line tool virsh can be used.<br />
<br />
==prerequisites==<br />
The following packages are needed:<br />
* libvirt-daemon-kvm<br />
* qemu-kvm<br />
* virt-manager<br />
It is recommended to use a CPU with virtualization support:<br />
egrep '^flags.*(vmx|svm)' /proc/cpuinfo <br />
Make sure libvirtd is running:<br />
/etc/init.d/libvirtd restart<br />
Prepare a network bridge:<br><br />
'''RedHat (CentOS, Fedora)'''<br><br />
/etc/sysconfig/network-scripts/ifcfg-br0<br />
DEVICE=br0<br />
TYPE=Bridge<br />
BOOTPROTO=dhcp<br />
ONBOOT=yes<br />
NM_CONTROLLED=no<br />
/etc/sysconfig/network-scripts/ifcfg-eth0<br />
DEVICE=eth0<br />
HWADDR=00:11:22:33:44:55<br />
ONBOOT=yes<br />
NM_CONTROLLED=no<br />
BRIDGE=br0<br />
<br />
brctl addbr br0<br />
brctl addif br0 eth0<br />
brctl show<br />
bridge name bridge id STP enabled interfaces<br />
br0 0080.000000000000 no eth0<br />
ifup br0<br />
<br />
Note that NetworkManager does not support bridges.<br><br />
'''Ubuntu'''<br />
<br />
==Installation of Image==<br />
* Start virt-manager (as superuser) and click on 'Create a new virtual machine' button:<br><br />
[[File:Virt-manager1.png|200px]]<br />
* Enter a name for your VM and choose the type of installation of the OS. This example will show how to install CentOS6 with an [http://ftp-stud.fht-esslingen.de/pub/Mirrors/centos/6/isos/x86_64/CentOS-6.4-x86_64-netinstall.iso netinstall iso image]. Choose ''Local install media'' and click forward.<br />
* Enter the local path to the [http://ftp-stud.fht-esslingen.de/pub/Mirrors/centos/6/isos/x86_64/CentOS-6.4-x86_64-netinstall.iso netinstall iso image] and choose the OS type and version.<br><br />
[[File:Virt-manager2.png|200px]]<br />
* Choose the memory and number of CPUs for your VM.<br />
* Choose the size of your virtual hard disk:<br />
[[File:Virt-manager3.png|200px]]<br />
* Click Finish to create your image file and to boot the VM from the netinstall iso image.<br />
* Follow the OS installation instructions<br />
** CentOS installation method: URL<br />
** Enable IPv4 support and get a dynamic or manual IP (depends on your network environment)<br />
** Enter [http://ftp-stud.fht-esslingen.de/pub/Mirrors/centos/6/os/x86_64/ URL of CentOS packages].<br />
** It is recommended to create a physical volume as the last partition on the virtual disk, because this allows the image size to be extended later. Example partition layout:<br><br />
-first partition: ext4, 2 GB, mount point /<br><br />
-second partition: lvm, free space<br><br />
-logical volumes: /usr, swap, /tmp, /home, /var<br />
<br />
==Converting an Image==<br />
* Converting a raw image to qcow2:<br />
qemu-img convert -O qcow2 myraw.img myqcow2.qcow2<br />
* Converting VirtualBox image to raw format:<br />
VBoxManage clonehd --format RAW MyTestVM.vdi MyTestVM.raw<br />
<br />
==Extending the Size of an Image==<br />
Stop the VM. The last partition inside the VM image needs to be a physical volume (PV). You can list the partitions with:<br />
# virt-list-partitions -lh test.img <br />
/dev/sda1 ext3 996.2M<br />
/dev/sda2 ext3 2.4G<br />
/dev/sda3 pv 588.3M<br />
# ls -lh test.img <br />
-rw-r--r-- 1 qemu qemu 4.0G Aug 9 12:27 test.img<br />
Adding 1G:<br />
truncate -s 5g testn.img<br />
Copy old image to new one and state which partition should be extended:<br />
virt-resize --expand /dev/sda3 test.img testn.img<br />
Start the VM and resize the PV:<br />
pvresize /dev/vda3<br />
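pvresize only grows the physical volume; to actually use the space, a logical volume and its filesystem have to be extended as well. A sketch, assuming a volume group vg0 with a logical volume home on an ext4 filesystem (names are hypothetical):

```shell
vgdisplay vg0                    # check how many free extents pvresize added
lvextend -L +1G /dev/vg0/home    # grow the logical volume into the new space
resize2fs /dev/vg0/home          # grow the ext4 filesystem to fill the LV
```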
<br />
==Mounting a KVM Image==</div>Jmeyerhttps://wiki.scc.kit.edu/lsdf/index.php?title=Virt-manager&diff=1864Virt-manager2013-06-07T12:35:15Z<p>Jmeyer: /* Converting an Image */</p>
<hr />
<div>'''UNDER CONSTRUCTION'''<br><br><br />
<br />
A KVM image can be created on any Linux system. This page describes how to install a KVM image using the virt-manager GUI. Alternatively, the command-line tool virsh can be used.<br />
<br />
==prerequisites==<br />
The following packages are needed:<br />
* libvirt-daemon-kvm<br />
* qemu-kvm<br />
* virt-manager<br />
It is recommended to use a CPU with virtualization support:<br />
egrep '^flags.*(vmx|svm)' /proc/cpuinfo <br />
Make sure libvirtd is running:<br />
/etc/init.d/libvirtd restart<br />
Prepare a network bridge:<br><br />
'''RedHat (CentOS, Fedora)'''<br><br />
/etc/sysconfig/network-scripts/ifcfg-br0<br />
DEVICE=br0<br />
TYPE=Bridge<br />
BOOTPROTO=dhcp<br />
ONBOOT=yes<br />
NM_CONTROLLED=no<br />
/etc/sysconfig/network-scripts/ifcfg-eth0<br />
DEVICE=eth0<br />
HWADDR=00:11:22:33:44:55<br />
ONBOOT=yes<br />
NM_CONTROLLED=no<br />
BRIDGE=br0<br />
<br />
brctl addbr br0<br />
brctl addif br0 eth0<br />
brctl show<br />
bridge name bridge id STP enabled interfaces<br />
br0 0080.000000000000 no eth0<br />
ifup br0<br />
<br />
Note that NetworkManager does not support bridges.<br><br />
'''Ubuntu'''<br />
<br />
==Installation of Image==<br />
* Start virt-manager (as superuser) and click on 'Create a new virtual machine' button:<br><br />
[[File:Virt-manager1.png|200px]]<br />
* Enter a name for your VM and choose the type of installation of the OS. This example will show how to install CentOS6 with an [http://ftp-stud.fht-esslingen.de/pub/Mirrors/centos/6/isos/x86_64/CentOS-6.4-x86_64-netinstall.iso netinstall iso image]. Choose ''Local install media'' and click forward.<br />
* Enter the local path to the [http://ftp-stud.fht-esslingen.de/pub/Mirrors/centos/6/isos/x86_64/CentOS-6.4-x86_64-netinstall.iso netinstall iso image] and choose the OS type and version.<br><br />
[[File:Virt-manager2.png|200px]]<br />
* Choose the memory and number of CPUs for your VM.<br />
* Choose the size of your virtual hard disk:<br />
[[File:Virt-manager3.png|200px]]<br />
* Click Finish to create your image file and to boot the VM from the netinstall iso image.<br />
* Follow the OS installation instructions<br />
** CentOS installation method: URL<br />
** Enable IPv4 support and get a dynamic or manual IP (depends on your network environment)<br />
** Enter [http://ftp-stud.fht-esslingen.de/pub/Mirrors/centos/6/os/x86_64/ URL of CentOS packages].<br />
** It is recommended to create a physical volume as the last partition on the virtual disk, because this allows the image size to be extended later. Example partition layout:<br><br />
-first partition: ext4, 2 GB, mount point /<br><br />
-second partition: lvm, free space<br><br />
-logical volumes: /usr, swap, /tmp, /home, /var<br />
<br />
==Converting an Image==<br />
* Converting a raw image to qcow2:<br />
qemu-img convert -O qcow2 myraw.img myqcow2.qcow2<br />
* Converting VirtualBox image to raw format:<br />
VBoxManage clonehd --format RAW MyTestVM.vdi MyTestVM.raw<br />
<br />
==Extending the Size of an Image==<br />
<br />
==Mounting a KVM Image==</div>Jmeyerhttps://wiki.scc.kit.edu/lsdf/index.php?title=Virt-manager&diff=1863Virt-manager2013-06-07T12:33:00Z<p>Jmeyer: /* Installation of Image */</p>
<hr />
<div>'''UNDER CONSTRUCTION'''<br><br><br />
<br />
A KVM image can be created on any Linux system. This page describes how to install a KVM image using the virt-manager GUI. Alternatively, the command-line tool virsh can be used.<br />
<br />
==prerequisites==<br />
The following packages are needed:<br />
* libvirt-daemon-kvm<br />
* qemu-kvm<br />
* virt-manager<br />
It is recommended to use a CPU with virtualization support:<br />
egrep '^flags.*(vmx|svm)' /proc/cpuinfo <br />
Make sure libvirtd is running:<br />
/etc/init.d/libvirtd restart<br />
Prepare a network bridge:<br><br />
'''RedHat (CentOS, Fedora)'''<br><br />
/etc/sysconfig/network-scripts/ifcfg-br0<br />
DEVICE=br0<br />
TYPE=Bridge<br />
BOOTPROTO=dhcp<br />
ONBOOT=yes<br />
NM_CONTROLLED=no<br />
/etc/sysconfig/network-scripts/ifcfg-eth0<br />
DEVICE=eth0<br />
HWADDR=00:11:22:33:44:55<br />
ONBOOT=yes<br />
NM_CONTROLLED=no<br />
BRIDGE=br0<br />
<br />
brctl addbr br0<br />
brctl addif br0 eth0<br />
brctl show<br />
bridge name bridge id STP enabled interfaces<br />
br0 0080.000000000000 no eth0<br />
ifup br0<br />
<br />
Note that NetworkManager does not support bridges.<br><br />
'''Ubuntu'''<br />
<br />
==Installation of Image==<br />
* Start virt-manager (as superuser) and click on 'Create a new virtual machine' button:<br><br />
[[File:Virt-manager1.png|200px]]<br />
* Enter a name for your VM and choose the type of installation of the OS. This example will show how to install CentOS6 with an [http://ftp-stud.fht-esslingen.de/pub/Mirrors/centos/6/isos/x86_64/CentOS-6.4-x86_64-netinstall.iso netinstall iso image]. Choose ''Local install media'' and click forward.<br />
* Enter the local path to the [http://ftp-stud.fht-esslingen.de/pub/Mirrors/centos/6/isos/x86_64/CentOS-6.4-x86_64-netinstall.iso netinstall iso image] and choose the OS type and version.<br><br />
[[File:Virt-manager2.png|200px]]<br />
* Choose the memory and number of CPUs for your VM.<br />
* Choose the size of your virtual hard disk:<br />
[[File:Virt-manager3.png|200px]]<br />
* Click Finish to create your image file and to boot the VM from the netinstall iso image.<br />
* Follow the OS installation instructions<br />
** CentOS installation method: URL<br />
** Enable IPv4 support and get a dynamic or manual IP (depends on your network environment)<br />
** Enter [http://ftp-stud.fht-esslingen.de/pub/Mirrors/centos/6/os/x86_64/ URL of CentOS packages].<br />
** It is recommended to create a physical volume as the last partition on the virtual disk, because this allows the image size to be extended later. Example partition layout:<br><br />
-first partition: ext4, 2 GB, mount point /<br><br />
-second partition: lvm, free space<br><br />
-logical volumes: /usr, swap, /tmp, /home, /var<br />
<br />
==Converting an Image==<br />
* Converting a raw image to qcow2:<br />
qemu-img convert -O qcow2 myraw.img myqcow2.qcow2<br />
==Extending the Size of an Image==<br />
<br />
==Mounting a KVM Image==</div>Jmeyerhttps://wiki.scc.kit.edu/lsdf/index.php?title=Virt-manager&diff=1862Virt-manager2013-06-07T12:32:14Z<p>Jmeyer: </p>
<hr />
<div>'''UNDER CONSTRUCTION'''<br><br><br />
<br />
A KVM image can be created on any Linux system. This page describes how to install a KVM image using the virt-manager GUI. Alternatively, the command-line tool virsh can be used.<br />
<br />
==prerequisites==<br />
The following packages are needed:<br />
* libvirt-daemon-kvm<br />
* qemu-kvm<br />
* virt-manager<br />
It is recommended to use a CPU with virtualization support:<br />
egrep '^flags.*(vmx|svm)' /proc/cpuinfo <br />
Make sure libvirtd is running:<br />
/etc/init.d/libvirtd restart<br />
Prepare a network bridge:<br><br />
'''RedHat (CentOS, Fedora)'''<br><br />
/etc/sysconfig/network-scripts/ifcfg-br0<br />
DEVICE=br0<br />
TYPE=Bridge<br />
BOOTPROTO=dhcp<br />
ONBOOT=yes<br />
NM_CONTROLLED=no<br />
/etc/sysconfig/network-scripts/ifcfg-eth0<br />
DEVICE=eth0<br />
HWADDR=00:11:22:33:44:55<br />
ONBOOT=yes<br />
NM_CONTROLLED=no<br />
BRIDGE=br0<br />
<br />
brctl addbr br0<br />
brctl addif br0 eth0<br />
brctl show<br />
bridge name bridge id STP enabled interfaces<br />
br0 0080.000000000000 no eth0<br />
ifup br0<br />
<br />
Note that NetworkManager does not support bridges.<br><br />
'''Ubuntu'''<br />
<br />
==Installation of Image==<br />
* Start virt-manager (as superuser) and click on 'Create a new virtual machine' button:<br><br />
[[File:Virt-manager1.png|200px]]<br />
* Enter a name for your VM and choose the type of installation of the OS. This example will show how to install CentOS6 with an [http://ftp-stud.fht-esslingen.de/pub/Mirrors/centos/6/isos/x86_64/CentOS-6.4-x86_64-netinstall.iso netinstall iso image]. Choose ''Local install media'' and click forward.<br />
* Enter the local path to the [http://ftp-stud.fht-esslingen.de/pub/Mirrors/centos/6/isos/x86_64/CentOS-6.4-x86_64-netinstall.iso netinstall iso image] and choose the OS type and version.<br><br />
[[File:Virt-manager2.png|200px]]<br />
* Choose the memory and number of CPUs for your VM.<br />
* Choose the size of your virtual hard disk:<br />
[[File:Virt-manager3.png|200px]]<br />
* Click Finish to create your image file and to boot the VM from the netinstall iso image.<br />
* Follow the OS installation instructions<br />
** CentOS installation method: URL<br />
** Enable IPv4 support and get a dynamic or manual IP (depends on your network environment)<br />
** Enter [http://ftp-stud.fht-esslingen.de/pub/Mirrors/centos/6/os/x86_64/ URL of CentOS packages].<br />
** It is recommended to create a physical volume as the last partition on the virtual disk, because this allows the image size to be extended later. Example partition layout:<br><br />
-first partition: ext4, 2 GB, mount point /<br><br />
-second partition: lvm, free space<br><br />
-logical volumes: /usr, swap, /tmp, /home, /var <br />
<br />
==Converting an Image==<br />
* Converting a raw image to qcow2:<br />
qemu-img convert -O qcow2 myraw.img myqcow2.qcow2<br />
==Extending the Size of an Image==<br />
<br />
==Mounting a KVM Image==</div>Jmeyerhttps://wiki.scc.kit.edu/lsdf/index.php?title=File:Virt-manager3.png&diff=1861File:Virt-manager3.png2013-06-07T12:00:47Z<p>Jmeyer: </p>
<hr />
<div></div>Jmeyer