29 July 2015

UMA Part 2: Accessing Protected Resources

This second blog on UMA follows on from part 1, where I looked at creating resource sets and policies on the authorization server.

Once an authorization server understands what resources are being protected and who is able to access them, the authorization server is in a position to respond to access requests from requesting parties.

A requesting party is simply an end user, along with the client application acting on their behalf, that wants to access resources managed by the resource server.



The above diagram looks complicated at first, but it really only contains a couple of main themes.

The authorization server is responsible for the resource sets and policies, and ultimately acts as the policy decision point when evaluating an inbound request; the resource server acts as the data custodian; and the requesting party is the application and end user wanting to gain access to the resources.

There are a couple of relationships to think about.  Firstly, the relationship between the resource server and the authorization server: as described in the first blog, this centres around an OAuth2 client and the uma_protection scope.  The second is the relationship between the requesting party and the authorization server, which generally centres around an OAuth2 client and the uma_authorization scope.  Then, of course, there are the interactions between the requesting party and the resource server.  These revolve around a permission ticket, which is exchanged for a requesting party token (RPT); the resource server can then introspect that token against the authorization server to determine whether access should be granted.
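
To make that flow concrete, below is a sketch of the kind of JSON body the resource server might POST to the authorization server's permission registration endpoint when a request arrives without a suitable token.  The field names come from the UMA 1.0 core specification; the resource set id and scope values are purely illustrative, and the exact OpenAM 13 endpoint path may differ.

```json
{
  "resource_set_id": "43ab29ee-5f54-4f6a-a023-7491df56d1e3",
  "scopes": [
    "read"
  ]
}
```

The authorization server responds with a JSON object containing a single ticket value, which the resource server hands back to the requesting party so they can go and obtain an RPT.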

Another aspect to consider is the verification of the end user - in this case Bob.  This is currently done via an OpenID Connect JWT issued by the authorization server.  This JWT is then used by the requesting party when submitting a request to the authorization server (step 3 in the above).
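
Step 3 is, in effect, the authorization request.  As a sketch (the endpoint path and exact parameter set in the OpenAM 13 nightly may vary), the requesting party's client presents its uma_authorization-scoped access token as a bearer token and POSTs the permission ticket it was given by the resource server:

```json
{
  "ticket": "016f84e8-f9b9-11e0-bd6f-0021cc6004de"
}
```

If the policies allow it, the response contains the RPT.  When the requesting party retries the original request with that RPT, the resource server introspects it at the authorization server and receives back an active flag plus a permissions array listing the granted resource set ids, scopes and expiry times.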

A powerful aspect is, of course, the loose coupling of the main players, all integrated using standard OAuth2 patterns for API protection.

The above use cases are all available in the nightly build of OpenAM 13.  As with any nightly build there is some instability, so expect a little inconsistency in some of the flows.  The draft documentation describes the flows and interactions in detail.

Chrome's Postman REST client can be used to test the API integrations.  I created a project that contains all the necessary flows, which can be used as a starting point for testing ForgeRock integrations.


UMA Part 1: Creating Resource Sets & Policies

User-Managed Access (UMA) is a new standard that defines how resource owners can control protected-resource access by clients operated by arbitrary requesting parties, where the resources reside on any number of resource servers, and where a centralized authorization server governs access based on resource owner policy.  So what does that mean?

Basically, in today's highly federated, cross-platform, distributed world, data and information need to be shared with multiple third parties, who all reside in different locations and require different permission levels.

UMA looks to solve that problem by introducing a powerful approach to transparent, user-centric consent and sharing.

I'm going to take a look at the ForgeRock implementation of UMA, available in the nightly builds of OpenAM 13.

Creating Resource Sets

First up, what are resource sets?  Before someone can share anything, they need to define what it is they are going to share.  This is concerned with defining object-level components, which could be physical objects such as photos or digital URLs.  The second aspect of a resource set is the permissions, or scopes, required to access those objects.  A basic example could be a read scope against a picture album.


The above schematic shows some of the flows required to create the resource sets on the OpenAM authorization server.  The UMA-Resource-Server here (where ultimately my resources will be shared from) is simply an OAuth2 client of OpenAM, with a specific scope of uma_protection.  The resource set CRUD (create, read, update, delete) API on the OpenAM authorization server is protected via this OAuth2 scope, and UMA terminology calls the resulting access token a Protection API Token (PAT).

The PAT allows my resource server to create the resource sets on the authorization server with minimal disruption to my resource owner - a simple OAuth2 consent is enough.
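
As an illustration, the registration itself is just a POST of a small JSON document to the authorization server's resource set registration endpoint, with the PAT supplied as a bearer token.  The field names below come from the OAuth2 resource set registration draft that UMA builds on; the values, and the exact OpenAM endpoint path, are purely illustrative.

```json
{
  "name": "Holiday Photo Album",
  "icon_uri": "http://photoz.example.com/icons/album.png",
  "scopes": [
    "read",
    "view",
    "delete"
  ],
  "type": "http://photoz.example.com/types/album"
}
```

The response includes an _id for the newly created resource set, which is referenced later when policies are created against it.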

Creating Policies

OK, so my OpenAM authorization server now knows what is being protected - a set of objects and some permissions or scopes are in place.  But we now need some policies, which govern who or what can access those new resources.  This is the job of the resource owner.  The resource owner knows who they want to share their data or things with - so there are no delegation or consent-style flows here.  The resource owner needs to have a session on the authorization server to be able to access the CRUD API for policy creation.

The policy API allows the resource owner to match their resource sets to individual users with the associated permissions or scopes.  For example, Alice (resource owner) may create a policy that gives Bob (requesting party) read access (scope) to an entire photo album (resource set).
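
In the OpenAM 13 nightly this is a simple JSON document submitted to the policy endpoint under the resource owner's session.  A sketch of the payload for the Alice and Bob example might look like the following - the policyId references the _id of the registered resource set, and the field names are based on the draft documentation, so they may change:

```json
{
  "policyId": "43ab29ee-5f54-4f6a-a023-7491df56d1e3",
  "permissions": [
    {
      "subject": "bob",
      "scopes": [
        "read"
      ]
    }
  ]
}
```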

Now that the authorization server knows about the resource sets and the policies that govern who and what can be accessed, we can start looking at how access is enforced.  This will be covered in my second blog, "UMA Part 2: Accessing Protected Resources".



15 July 2015

API Throttling with OpenIG

A common requirement with regard to API access is the ability to throttle the number of hits a user or service can make against a particular endpoint or set of endpoints - similar to a soft-paywall style use case.

The nightly build of OpenIG contains a new filter - the ThrottlingFilter - which provides a simple way to limit, and then time out, a user who hits an endpoint a given number of times.

To test, I created a simple node.js API that allows the reading (GET) and writing (POST) of widgets.  Each widget has a numerical id that is used as its identifier.  I also created a basic queryAll route within my node.js API to do a wildcard-esque search that returns all ids.

So now I wanted to add OpenIG into the mix and do a little reverse-proxying.  I basically wanted to expose only the GET queries against my API, preventing POST requests entirely and only allowing GETs to specific endpoints.  For those endpoints I also wanted to limit the number of requests per user to 3 - if that threshold was hit, I would redirect the user to a warning page and time them out for 10 seconds.




To set things up, I added a few things to my config.json main heap.  Firstly, I used the defaultHandler attribute within my main router to act as a catch-all and handle any request for which a specific route file was not defined.  I also added the new ThrottlingFilter here, so I could use it from within any of my routes - objects in the config.json main heap are visible to all of my lower-level route handlers.  The ThrottlingFilter declaration looks something like the following sketch (based on the OpenIG 4 ThrottlingFilter configuration fields; the grouping expression is just an example and the exact syntax in the nightly build may differ):
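
```json
{
  "name": "ThrottlingFilter",
  "type": "ThrottlingFilter",
  "config": {
    "requestGroupingPolicy": "${request.headers['UserId'][0]}",
    "rate": {
      "numberOfRequests": 3,
      "duration": "10 seconds"
    }
  }
}
```

The requestGroupingPolicy expression decides how requests are bucketed - the UserId header here is purely an example, and grouping on something like a session attribute or client IP works just as well - while the rate object caps each bucket at 3 requests in any 10 second window.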



I then set up a couple of static HTML files, housed in a config/html folder in my OpenIG base directory.  I had a noRouteResponse.html that my defaultHandler delivered via a StaticResponseHandler (note that I also wanted to include an image in my HTML, so I embedded it as a base64 encoded object and didn't have to worry about access to the image URL).  I also created a thresholdBreachedResponse.html that I would redirect to, again via a StaticResponseHandler, when a user racked up 3 hits to my API.
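
For reference, the catch-all handler declaration might look something like the following sketch - the name is my own, the entity is inlined here for brevity (in practice it would be the contents of noRouteResponse.html), and the exact set of supported config fields may vary between OpenIG versions:

```json
{
  "name": "NoRouteHandler",
  "type": "StaticResponseHandler",
  "config": {
    "status": 200,
    "reason": "OK",
    "headers": {
      "Content-Type": [ "text/html" ]
    },
    "entity": "<html><body><h1>No route defined for this request</h1></body></html>"
  }
}
```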

In my config/routes directory I had two routes - one for a GET on an :id and another for a GET on the queryAll endpoint.  I added no explicit routes for POST requests, so they would be caught by my defaultHandler and redirected, thus preventing access.

The route for my throttling does a few things.  Firstly, I added a condition on what would trigger it - using the out-of-the-box matches function, I added a basic regex to capture requests matching '/widget/[0-9][0-9]' - that is, only requests with digits as the :id, so /widget/AA would fail, for example.  The route passed all matching traffic into a chain, which is just an ordered set of filters.  Here I could call my throttle filter and also a SwitchFilter.  The switch allowed me to check whether a user had hit the threshold imposed by my throttle: if the throttle was triggered, a 429 response code was returned - OpenIG would catch this and redirect to my thresholdBreachedResponse.html page.
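
Pulling that together, the route file might look something like the sketch below.  The structure follows the standard OpenIG route format, but treat it as illustrative: ThresholdBreachedHandler is an assumed heap object (a StaticResponseHandler along the lines of the one above, serving thresholdBreachedResponse.html), the baseURI points at my local node.js API, and the exact expression syntax may differ between OpenIG versions.  Note that the SwitchFilter sits before the ThrottlingFilter in the chain, so it sees the 429 on the way back out.

```json
{
  "name": "widget-get",
  "condition": "${request.method == 'GET' and matches(request.uri.path, '^/widget/[0-9][0-9]')}",
  "baseURI": "http://localhost:3000",
  "handler": {
    "type": "Chain",
    "config": {
      "filters": [
        {
          "type": "SwitchFilter",
          "config": {
            "onResponse": [
              {
                "condition": "${response.status.code == 429}",
                "handler": "ThresholdBreachedHandler"
              }
            ]
          }
        },
        "ThrottlingFilter"
      ],
      "handler": {
        "type": "ClientHandler"
      }
    }
  }
}
```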

At a high level, the setup looks like the following:


In reality, the redirect for the threshold breach might send the user to an identity provider such as Facebook or OpenAM to log in, before allowing unlimited access via a route that avoids the throttle filter entirely.

Code for the node.js API example is available here.

Artifacts for the OpenIG config are available here.