UNICORE and S3

UNICORE REST API

Description

The base URL of the REST API for a UNICORE/X container is:

https://<gateway_url>/SITENAME/rest/core

We will abbreviate this URL as BASE in the text below. For the UNICORE instance at KIT, BASE is:

https://unicore.data.kit.edu:8080/DEFAULT-SITE/rest/core

The REST API provides access to several resources handled by the UNICORE/X instance, such as the target system services available to the user (sites), the available storages, and the server-to-server file transfers. Users can also manage their jobs and submit new jobs to target systems.

To list the available resources, fetch the base URL with curl, authenticating with your username and password:

$> curl -k -u user:pass -H "Accept: application/json" BASE

which will output:

{
"resources": [
    "BASE/storages",
    "BASE/jobs",
    "BASE/sites",
    "BASE/transfers"
    ]
}
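
If you authenticate with an OIDC access token instead of a username and password, the same listing works with a standard OAuth2 Bearer header. A sketch, assuming the token is saved in the file oidcToken (as in the Python examples below):

$> curl -k -H "Authorization: Bearer $(cat oidcToken)" -H "Accept: application/json" BASE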

This page is dedicated only to storage operations; for the other resources, check the API reference (http://sourceforge.net/p/unicore/wiki/REST_API/) and examples (http://sourceforge.net/p/unicore/wiki/REST_API_Examples/) in the official documentation.

Basic operations on storage using Python

requests is a simple HTTP library for Python that provides most of the functionality you need. However, it has no built-in support for OAuth authentication.

rauth is built on top of requests and supports OAuth 1.0, OAuth 2.0 and Ofly authentication. There are many other libraries implementing OAuth, such as requests_oauthlib, but in the examples below we settled on rauth.

To install rauth:

$ pip install rauth

First, we can set up a basic OAuth2 session and check that we can access the server. For authentication we use an OIDC token, which we assume is saved in the file 'oidcToken'. rauth also supports obtaining the access token from the OAuth service, if the client has an id and a secret (see the sketch after the code below).

#!/usr/bin/env python
import json
from rauth import OAuth2Session

base = "https://unicore.data.kit.edu:8080/DEFAULT-SITE/rest/core"
print "Accessing REST API at", base

clientId = "portal-client"        # the client id for HBP Portal users
# read the OIDC access token saved in the file 'oidcToken'
tokenFile = open("oidcToken", "r")
btoken = tokenFile.read().replace('\n', '')
tokenFile.close()
rauth_session = OAuth2Session(client_id=clientId, access_token=btoken)

headers = {'Accept': 'application/json'}
r = rauth_session.get(base, headers=headers, verify=False)
if r.status_code != 200:
    print "Error accessing the server!"
else:
    print "Ready."
print json.dumps(r.json(), indent=4)
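
If the client has an id and a secret, rauth can also obtain the access token from the OAuth service itself, via its OAuth2Service class. A minimal sketch, assuming a hypothetical authorization server and a client_credentials grant (the endpoint URLs and the secret below are placeholders, not part of the KIT setup):

import json
from rauth import OAuth2Service

# hypothetical OAuth2 endpoints -- replace with your provider's actual URLs
service = OAuth2Service(
    client_id="portal-client",
    client_secret="my-client-secret",
    access_token_url="https://oauth.example.org/token",
    authorize_url="https://oauth.example.org/authorize",
    base_url="https://unicore.data.kit.edu:8080/DEFAULT-SITE/rest/core")

# exchange the client credentials for an access token and open a session;
# decoder=json.loads assumes the token endpoint answers with JSON
rauth_session = service.get_auth_session(
    data={'grant_type': 'client_credentials'},
    decoder=json.loads)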

Listing the existing storages can be done by a simple GET request to the URL BASE/storages:

headers = {'Accept': 'application/json'}
r = rauth_session.get(base+"/storages", headers=headers, verify=False)
storagesList = r.json()
print json.dumps(r.json(), indent=4)

The code outputs something like:

{
    "storages": [
        "https://unicore.data.kit.edu:8080/DEFAULT-SITE/rest/core/storages/f13b3ffa-413a-4ab7-a620-e8e04291a5f5", 
        "https://unicore.data.kit.edu:8080/DEFAULT-SITE/rest/core/storages/52126d09-8ea6-4b06-84f8-205839bb5bdc"
    ]
}

Getting information on the second storage listed:

headers = {'Accept': 'application/json'}
s3storageUrl = storagesList['storages'][1]
r = rauth_session.get(s3storageUrl, headers=headers, verify=False)
s3storageProps = r.json()
print json.dumps(r.json(), indent=4)

This will print:

{
    "resourceStatus": "READY", 
    "currentTime": "2015-02-12T17:41:52+0100", 
    "terminationTime": "2015-02-20T14:39:03+0100", 
    "umask": "77", 
    "_links": {
        "files": {
            "href": "https://unicore.data.kit.edu:8080/DEFAULT-SITE/rest/core/storages/52126d09-8ea6-4b06-84f8-205839bb5bdc/files", 
            "description": "Files"
        }, 
        "self": {
            "href": "https://unicore.data.kit.edu:8080/DEFAULT-SITE/rest/core/storages/52126d09-8ea6-4b06-84f8-205839bb5bdc"
        }
    }, 
    "uniqueID": "52126d09-8ea6-4b06-84f8-205839bb5bdc", 
    "owner": "CN=Diana Gudu 123456,O=HBP", 
    "protocols": [
        "BFT"
    ]
}

Listing the files (in the case of an S3 storage, the buckets) in the previously selected storage:

headers = {'Accept': 'application/json'}
bucketBase = s3storageProps['_links']['files']['href']
r = rauth_session.get(bucketBase, headers=headers, verify=False)
bucketList = r.json()
print json.dumps(r.json(), indent=4)

The code will output:

{
    "isDirectory": true, 
    "children": [
        "/bucket2", 
        "/unicore", 
        "/testbucket"
    ], 
    "size": 0
}

Listing the objects in the first bucket:

headers = {'Accept': 'application/json'}
bucket = bucketList['children'][0]
bucketUrl = bucketBase + bucket
r = rauth_session.get(bucketUrl, headers=headers, verify=False)
objectList = r.json()
print json.dumps(r.json(), indent=4)

This will print a list similar to the one for the buckets. Note that the paths listed are relative to the storage root, not to the parent directory:

{
    "isDirectory": true, 
    "children": [
        "/bucket2/object2"
    ], 
    "size": 0
}

Downloading the first object in the first bucket and saving it locally:

headers = {'Accept': 'application/octet-stream'}
object2down = objectList['children'][0]
object2downUrl = bucketBase + object2down
r = rauth_session.get(object2downUrl, headers=headers, verify=False)
# save the response body locally under the object's own name
objectName = object2down.split('/')[-1]
fileOut = open(objectName, "wb")
fileOut.write(r.content)
fileOut.close()

To get only the file info, not the data, the requested media type should be JSON instead of octet-stream:

headers = {'Accept': 'application/json'}
r = rauth_session.get(object2downUrl, headers=headers, verify=False)
print json.dumps(r.json(), indent=4)

Uploading a file to the first bucket. In the code below, we upload the file 'test' from the current directory:

headers = {'Content-Type': 'application/octet-stream'}
object2up = "test"
object2upUrl = bucketUrl + "/" + object2up
bytes2up = open(object2up, "rb")
r = rauth_session.put(object2upUrl, data=bytes2up, headers=headers, verify=False)
bytes2up.close()
print 'Upload of object ' + object2up + ' returned status', r.status_code

To delete the uploaded object:

headers = {'Accept': 'application/json'}
r = rauth_session.delete(object2upUrl, headers=headers, verify=False)
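
To check that the object is really gone, list the bucket again; this simply reuses the listing calls from above, and the uploaded object should no longer appear among the children:

headers = {'Accept': 'application/json'}
r = rauth_session.get(bucketUrl, headers=headers, verify=False)
# the children list should no longer contain the deleted object
print json.dumps(r.json(), indent=4)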

The whole Python script can be found here.
