6 November 2014

Thingteraction: Identity and Devices

This week was the EMEA IRM Summit over in Dublin, and whilst the Guinness was flowing there were a few things ticking over in my mind, especially after some of the great Internet of Things related talks and demos we had.

The 'R' in Identity Relationship Management now often refers to the relationships between people and the things they own, share or have access to: the IoT explosion.  Person-to-person relationships are still a massive underpinning of the IoT world, and the real person behind machine and device interaction is still a priority.  However, how can we start handling authentication and authorization services for devices?

I think there are several key high level points:

  • Identity Registration - the simple case of a person providing attribute values to a store.  The basic self-service form, or perhaps a social media registration service.  A web service ultimately gets to know about a 'real' person.  Nothing too new here.
  • Device Registration - Ok, so we know about the person, but now they've gone and bought a widget / pair of tech trainers / a 'wearable' / car / fridge... you get the picture.  Each of these 'smart' devices has the capability to be networked, is likely to contain a manufactured GUID, and perhaps has some data publishing or subscribing capabilities.  So how does the device register?  And to what do they register?  The device is going to need network access and the ability to send a JSON object, or some other representation of itself, to a service that can capture that data (a hypothetical example payload follows this list).  Perhaps the device identity data will need to be verified or reconciled to confirm that the thing attempting registration is actually a real thing and not a fake.
  • Device to Identity Linking - so, now we have two identities, if you like: one a set of attributes that map to a person, the other a set of attributes that map to a thing.  Certain attributes in each object are likely to have been verified to a certain degree of assurance.  Now we can start the 'R' bit - relationship building.  This is probably going to be person driven: the person is the device owner and wants to link / claim / show ownership of the device, perhaps via a linking process - entering a code, the GUID or some other way to prove a) they're in possession of the thing and b) the thing is real.
  • Device Authentication - now comes a bit more magic.  We now want to start doing something with our devices, perhaps allowing them to capture and send data on our behalf.  To do that, the device needs to authenticate to a service to prove that the device is real and has authority to act on our behalf.  So how does a thing authenticate?  Shared secrets and passwords are dead here; we're looking at crypto in some shape.  Perhaps JWTs, perhaps PKI in some form, but something with big numbers that requires limited human involvement and limited computational power.
  • Device Data Sharing - Ok, so we've got a device that is authenticated to something, and perhaps is also acting on the user's behalf towards a 3rd party service, or is at least capturing data that can be shared with a 3rd party service.  But can we share that data effectively, in a transparent and simple way?  The likes of OAuth2 and the more recent UMA can help here.  We want all of these 3rd parties, which we can't control or manage effectively, to be able to gain access to our data, data that the device has captured, or perhaps even a level of access to the device itself.  This 3-way interaction requires simple registrations and authorization decisions that both the human and device can understand, easily revoke and sustain.
  • Multifaceted Relationships - the 'graph' in IRM.  Ah ha!  But there is no graph.  Well, there is - it just isn't defined yet.  The more relationships the individual has with their things, the more chance there will be requirements for relationships between things, and certainly many-to-many relationships between other people and your devices.  How can that be handled?
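
To make the device registration step above concrete, here's a purely hypothetical sketch of the kind of JSON object a newly unboxed device might POST to a registration service - every field name is illustrative rather than taken from any real product:

{
  "deviceId": "3f2a9c1e-guid-from-manufacturer",
  "manufacturer": "ExampleWidgets Ltd",
  "model": "smart-fridge-200",
  "firmware": "1.0.3",
  "publicKey": "-----BEGIN PUBLIC KEY-----...",
  "capabilities": ["publish:temperature", "subscribe:firmware-updates"]
}

The publicKey is what could later underpin crypto-based device authentication, and the GUID is what a registration service would attempt to verify against the manufacturer.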

If you're building an IoT platform, there are certain basic identity relationship patterns that need to be implemented, and the registration, linking and authN/authZ components are key.  You're certainly going to need the ability to register, store and reconcile data (provisioning), the ability to authenticate (be that crypto, JWT or whatever...) (access management), and then OAuth2/UMA-style token services (authorization management).  All of which is likely to be done in a REST-style mash-up that can easily be spun up and torn down as and when new services are required.


2 September 2014

OAuth-erize The Nation! With OpenIG 3.0

OpenIG 3.0 was released a couple of weeks ago, with some significant enhancements, one of which was the ability to protect applications through the use of OAuth2 access tokens with very little effort.

OAuth2 has been around for a while, and provides a lightweight and developer-friendly way to leverage authorization services for web and native applications.  Utilising the features of OAuth2, such as access token validation, refresh token to access token exchange, and scope querying by the client application, generally requires code changes within both the client app and resource server.  This isn't necessarily a bad thing, nor particularly complex, but in some circumstances you may not have access to the underlying code, or perhaps the app is hosted by a 3rd party.

OpenIG, being a reverse proxy, can easily sit between the user community and the underlying target application.  With a simple edit of a JSON file, OpenIG can be set up to act as both the resource server and client in an OAuth2 or OpenID Connect environment.

The installation of OpenIG is trivial: it's a simple Java web application that can be dropped into either a Tomcat or Jetty container.  The app bootstraps from a locally stored configuration folder.  A standard config.json file should be created in the ~/.openig/config/ directory (or the equivalent home directory on Windows).  This file contains the entire setup for IG, with things such as handlers, chains and clients that perform the necessary request checking, stripping or parsing of attributes, and replay into the target applications.  Of course, one of the benefits of OpenIG is that it can itself be protected by an OpenAM policy agent, and utilise any attributes that are passed downstream to IG.

Following the simple example that comes with the OpenIG documentation, setting up OpenIG as an OIDC relying party is pretty quick.  OpenAM can quickly be configured as the OAuth2 provider, as this functionality is available out of the box and configurable via one of the OpenAM Common Tasks wizards.

The example configuration for setting up IG as an OAuth2 client basically has two main components: an overall handler object (OpenIDConnectChain) that initiates the interaction with the OAuth2 provider, and an outgoing handler that retrieves the necessary attributes from the OIDC scope and replays them into the target application as the necessary username and password.  There's also a capture filter for logging.  In production you perhaps wouldn't replay the password here, but that would depend on the underlying application.

The OpenIDConnectChain contains an OAuth2ClientFilter object conveniently called OpenIDConnectClient!  This object contains the necessary OAuth2 provider details - URL, clientID, requested scopes and so on.  The information retrieved by the request is stored in the target attribute - ${exchange.openid}.  This attribute can then be queried by the outgoing chain, namely the GetCredentials object, which is a Groovy scriptable component.  Being scriptable means we are pretty free to extend this as we see fit.  In this example, the GetCredentials object simply pulls out the username and password fields.  Those fields are then passed down to the LoginRequestFilter object, which replays them into a form in the protected application.
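
For illustration, a trimmed sketch of what that OAuth2ClientFilter heap object looks like - this is recalled from the 3.0 documentation example, so treat the exact field names as approximate and check the shipped sample config:

{
  "name": "OpenIDConnectClient",
  "type": "OAuth2ClientFilter",
  "config": {
    "clientEndpoint": "/openid",
    "requireLogin": true,
    "target": "${exchange.openid}",
    "scopes": ["openid", "profile", "email"],
    "providers": [{
      "name": "openam",
      "wellKnownConfiguration": "http://openam.example.com:8080/openam/.well-known/openid-configuration",
      "clientId": "OpenIG",
      "clientSecret": "password"
    }]
  }
}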

Mega simple!  The beauty of it is that the underlying application (in this case the sample Java HTTP application that comes with the OpenIG documentation) requires zero code changes.  All of the configuration is abstracted into the OpenIG proxy.

The same process can easily be repeated for other federation protocols such as SAML2.



27 August 2014

Delegated Admin Within OpenIDM

A common use case for both internal and consumer identity management is that of scoped or delegated administration.  For example, your managed user repository may contain objects from different business areas, consumer types or locations, with each requiring a specialist administrator to perform creates and deletes.

The authorization and custom endpoint model in OpenIDM is very simple to extend to allow for authorization rules across a number of different scenarios.  The simplest example I've picked is that of an attribute called "type" - you could make this attribute anything you like, but type is easy to explain.

For example, all I require is that users of "type" == "staff" are only managed by administrators who are also of "type" == "staff".  Users who are administrators, but of, say, "type" == "consumer", can't manage staff; they can only manage consumers.  Obviously type could be swapped for any applicable attribute, such as location or project.

The first thing is to restrict the results that the base query-all gives back.  I only want users of "type" == "staff" being returned in my query if I'm the staff admin.  To do this I created a custom endpoint called "scopedQuery".  This endpoint basically checks the "type" of the user performing the query, then performs a query on OpenIDM to return only those users that match the query criteria.  I used the default "get-by-field-value" query in my repo.jdbc.json config - note, as I'm using "type" as my query attribute, I needed to add it as a searchable attribute in repo.jdbc.json before creating my managed/users.  I then altered the access.js file to allow only certain admins access to the scopedQuery endpoint - note that by default the only other user who can perform queries is openidm-admin, so scopedQuery is the only entry point for my delegated admins!
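
A rough sketch of the scopedQuery idea follows - this is not the actual code from the repo, and in particular how you resolve the calling user's profile from the request context varies between OpenIDM versions, so treat that path as an assumption:

// scopedQuery.js - custom endpoint sketch
// Assumption: the authenticated caller's managed/user id can be pulled
// from the request context; adjust the path for your OpenIDM version.
var caller = openidm.read("managed/user/" + request.parent.security.userid.id);

// Return only users of the same "type" as the caller, using the
// default get-by-field-value query from repo.jdbc.json
var result = openidm.query("managed/user", {
    "_queryId" : "get-by-field-value",
    "field" : "type",
    "value" : caller.type
});

result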

Now that the query is sorted, I then needed to add in some control over the create, read, delete, update and patch HTTP methods.  To do this, I created a simple function in the router-authz.js file called isSameType().  This function does as it says... it checks if the user performing the operation is the same "type" as the user they are performing the operation on.  I then call this function as a customAuthz method within access.js whenever those methods are called against managed/user for the admins that I designate.
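
A minimal sketch of what isSameType() could look like - again, the exact shape of the request object is version-dependent, so the property paths below are assumptions:

// router-authz.js fragment - sketch only
function isSameType() {
    var target = openidm.read(request.id);   // the user being operated on, e.g. "managed/user/<id>"
    var caller = openidm.read("managed/user/" + request.parent.security.userid.id);
    // For creates you'd compare against the posted object instead, as there
    // is no existing target to read yet.
    return target !== null && caller.type === target.type;
}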

Simples :-)

Note this is an example, and complex delegated administration functions would need modification.  It assumes the REST API is being used for administration rather than the OpenIDM UI, which would need editing to accommodate the new administrators.

The code for this example is available on Github here.

3 July 2014

Provisioning Apps to OpenAM's Dashboard Service

OpenAM v11 has a basic dashboard service that can be used to provide SSO links to internal and cloud apps, in the form of a personalised portal or dashboard.  It is pretty simple to set up out of the box.

A question I get asked quite often is how to manage the apps a user gets.  Can we provision them just like, say, groups within AD?

The simple answer is yes.  Once the service is set up for a particular realm and an app is assigned to a user, an attribute called assignedDashboard is added and populated on the user's profile.

The assignedDashboard attribute is an array that can be manipulated just like any other on the user object.  The items in the array are the names of the apps as given within the config in OpenAM.

The setup of the apps within OpenAM is well documented; each dashboard app contains the necessary SSO link and any associated federation classes.  The name is what can be 'provisioned' to the individual user and stored on their user profile.

Within OpenIDM it is then fairly simple to add the assignedDashboard attribute into the provisioner configuration as an array, with the native type of the items set to string.

One thing to remember is that the assignedDashboard attribute is part of the forgerock-am-dashboard-service object class within OpenDJ.  As such, forgerock-am-dashboard-service needs adding to the object classes that are synchronized in the provisioner JSON file on the OpenIDM side.
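
As a sketch, the property definition in the provisioner file would look something like the following - the field names follow the usual OpenIDM provisioner schema, but verify against your own config:

"assignedDashboard" : {
    "type" : "array",
    "items" : {
        "type" : "string",
        "nativeType" : "string"
    },
    "nativeName" : "assignedDashboard",
    "nativeType" : "string"
}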

To populate the assignedDashboard attribute in production, you'd probably use a business role based on a business characteristic such as job title, manager or location.  For a PoC you could simply use a JavaScript transform rule to populate it.

Within sync.json you can simply drop in a transform that links to a JavaScript file containing all the necessary logic to interpret the user's context and determine which dashboard apps to provision.  In this case, the country attribute on the user relates directly to which apps they see.

In this example, the getAssignedDashboard JavaScript simply runs through a basic conditional, assigning values to an array of dashboard apps for that user based on their current country.
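
A sketch of that conditional logic - the app names and countries are purely illustrative, and the script assumes the whole source object is in scope as source:

// getAssignedDashboard.js - a sketch of the transform logic, not the repo code
var apps = [];

if (source.country === "UK") {
    apps = ["SalesForce", "GoogleApps"];   // app names as configured in OpenAM
} else if (source.country === "US") {
    apps = ["SalesForce", "Zendesk"];
} else {
    apps = ["GoogleApps"];                 // a default set for everyone else
}

apps                                       // returned as the attribute value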

OpenAM Dashboard Configuration - http://docs.forgerock.org/en/openam/11.0.0/admin-guide/index/chap-dashboard.html

The code for the above example is available here - https://github.com/smof/openIDM_artifacts/tree/master/dashboard_provisioning

6 June 2014

Consumer Identity: Registration, Reconciliation & Approval

Most online digitization programmes require new consumers, customers or potential customers to register for their service.  Nothing new here.  You are obviously going to have to register, give away some contact information, probably enter a username and certainly a password, before you can gain access to a service or website.  In the C2C world, the main objective is virality - i.e. spread like a virus and get as many people signed up to your app or site as soon as possible.

In the B2B and B2C worlds, there is a subtle difference - mainly verification or approval of the users attempting registration.  Many organisations, especially within the financial services arena (I'm thinking retail banking, insurance, asset management, share management etc.), require not only a strong level of authentication (OTP, biometric, MFA) but also a strong level of assurance or verification first.

The above is an example of a basic flow that occurs during a self-registration process.  (Note: I documented the detailed integration with Facebook here.)

Step 3 is the interesting part.  Many organisations may have an internal authoritative source of user information which they want to use to link to any form-based data collected at registration time.  For example, this may be a database of policyholders and their names and addresses, collected and cleaned over several years, perhaps populated via in-branch / in-person sales, making it the most authoritative source.  During the registration process, the user may have to submit their account number, policy number, customer registration number, or even just a unique identifier created to allow them to register.  Basically, something they know.

In OpenIDM we can quickly set up a managed/user mapping to the authoritative source, simply to perform linkages where the data entered is accurate and correlates.  In ../conf/sync.json we can create a mapping that only creates links, as opposed to the traditional reconciliation processes of creating and updating users.


The two situations of interest are FOUND and MISSING.  For a detailed explanation of what these mean see here.
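
As a sketch, the interesting parts of such a link-only mapping in sync.json might look like this (the mapping name and script paths are hypothetical):

{
    "name" : "managedUser_systemPolicyholders",
    "source" : "managed/user",
    "target" : "system/policyholders/account",
    "correlationQuery" : {
        "type" : "text/javascript",
        "file" : "script/correlatePolicyholder.js"
    },
    "onLink" : {
        "type" : "text/javascript",
        "file" : "script/markVerified.js"
    },
    "policies" : [
        { "situation" : "FOUND", "action" : "LINK" },
        { "situation" : "MISSING", "action" : "IGNORE" }
    ]
}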

A link indicates that the data entered via the form maps to the authoritative source.  The matching criteria are controlled via a JavaScript correlation query entered in the main part of the mapping meta-data.  The correlation query can be a simple one-to-one attribute value map or, in v3.0 of OpenIDM, a queryFilter, which may be more complex.

When a match occurs, a link is recorded within the OpenIDM repository, which we can then use to identify a user as being verified.  We can simply drop in an onLink script to update the source user with an attribute called 'verified', which we can populate with a true/false flip dependent on the data entered.
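
The onLink script itself can be tiny.  A sketch, assuming the source object is in scope (the exact script bindings depend on the OpenIDM version):

// markVerified.js - onLink script sketch
openidm.patch("managed/user/" + source._id, null, [
    { "operation" : "replace", "field" : "/verified", "value" : true }
]);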


Another option at the FOUND stage is to enter a workflow.  This workflow could stop the immediate verification script from running until an assessor or approver has physically logged into OpenIDM, checked the user details and approved them.

Here we simply replace the link action within the FOUND situation with a workflow trigger.  The trigger file simply calls the BPMN 2.0 workflow definition that contains the approval logic.  The workflow file is created in any BPMN 2.0-compliant IDE, with the XML output placed in the ../workflow folder.  When a user now registers, and also correctly maps into the authoritative source, an approver can be notified to perform an additional approval step via the REST API or OpenIDM UI.


Once approved, the user is then updated with a verified attribute flagged as true, which can be used in further downstream provisioning jobs.


30 April 2014

Power IoT Offline Authentication With OpenIDM

The Internet of Things (IoT) is an exciting buzz around the many collections of previously dumb devices that are now gaining 'smart' status and increased network connectivity.  Whilst there is an increasing rush to make everything connected, not all of these devices can be online all the time.

Many IoT devices are small, even compared to the smallest of smart phones.  Many have limited memory, processing power and networking capability.  This can reduce the ability of the device to interact with central authentication and authorization services over common approaches such as HTTP and REST.

Challenges

Without connectivity to a central source, how do devices authenticate users, services and other devices or perform policy enforcement point style use cases?  The requirement for offline capabilities forces many to look at cumbersome out of band syncing or caching approaches.

JSON Web Tokens

Asymmetric cryptographic solutions have been around for years and can provide many encryption and signing approaches when it comes to data in transit or authentication assertions.  The federation protocol SAML2 relies heavily on PKI.  But how can this help in the brave new world of IoT?  JSON Web Tokens (JWTs) - or 'jots' - can provide a lightweight signed payload that can be verified by an offline device, without the need for runtime communication to a central source.  A JWT signed with the private key of a device can contain things like the public key of the identity requiring access, as well as any other claims, expiration timestamps and audience attributes.  These signed assertions can be quickly provisioned, stored and then presented by users and devices in order to gain access to an offline resource or machine.

(For further information on JWT see - http://self-issued.info/docs/draft-ietf-oauth-json-web-token.html)

The creation of a JWT in this use case requires a few pieces of information.  Firstly, the payload: this is likely to contain the public key of the user, service or thing wanting access (used later in the authentication process during a challenge/response style interaction), an expiration time (a Unix timestamp), an aud (audience) attribute (which advertises who the JWT is for), and any other attributes used as claims.

The payload is then signed with a private key, using the algorithm details held in the JWT header.  The result is an encoded string of three dot-separated values: header, payload and signature.
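
To make that concrete, here's a minimal Node.js sketch of the construction - a real implementation would use a maintained JWT library, and the key material here is obviously a placeholder:

var crypto = require("crypto");

var devicePrivateKeyPem = "-----BEGIN RSA PRIVATE KEY-----...";  // placeholder key

// Base64url encode a JSON object (base64 with URL-safe characters, no padding)
function base64url(obj) {
    return new Buffer(JSON.stringify(obj)).toString("base64")
        .replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/, "");
}

var header = { "alg": "RS256", "typ": "JWT" };
var payload = {
    "sub": "user-or-device-id",                          // the subject wanting access
    "aud": "turbine-controller-07",                      // the offline device the JWT is for
    "exp": Math.floor(Date.now() / 1000) + 86400,        // Unix timestamp expiry
    "subjectPublicKey": "-----BEGIN PUBLIC KEY-----..."  // used later in the challenge/response
};

var signingInput = base64url(header) + "." + base64url(payload);
var signer = crypto.createSign("RSA-SHA256");
signer.update(signingInput);
var signature = signer.sign(devicePrivateKeyPem, "base64")
    .replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/, "");

var jwt = signingInput + "." + signature;   // header.payload.signature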

Integrating With OpenIDM

The power of OpenIDM is its simplicity, and the speed with which new services and endpoints can be integrated into its core engine.  OpenIDM uses the omnipresent web language of Javascript to provide logic and processing capabilities that can be created or extended.  A simple custom REST endpoint - a web URL containing static data or dynamic processing capabilities - can be set up in a few hours, providing the ability to provision JWTs instead of the standard provisioning use case of permissions such as groups or roles.

The custom endpoint is simply a Javascript file of 100 or so lines, containing some open source Javascript crypto libs to help sign and manage the JWT, as well as some simple OpenIDM actions to update managed objects and perform retrievals of objects in OpenDJ.  The end result is a JWT that is provisioned against the object stored in OpenIDM, as well as either provisioning of that attribute to a downstream system (if, for example, you want to push the JWT to an API or device distribution service) or simply into OpenDJ.

Once the end user or device has collected the JWT, they can then present it to the device they wish to access in an offline manner.  The authenticating device doesn't need to have HTTP access.  It can simply use local crypto to verify the signature of the JWT, and create a challenge/response using the public key of the subject that is already contained inside the JWT payload.  A successful challenge/response would require the subject presenting the JWT to have the corresponding private key, so this is a nice example of multifactor, 'something you have' authentication.

I'll shortly release the code to Github as a community contribution.


27 March 2014

JavaScript OAuth2 Client - Authorization Code Grant

For a PoC, the OAuth2 authorization code grant use case needed to be stubbed out.  Whilst this can be done with curl, I decided to build it out in NodeJS to replicate a client application more closely.

The OAuth2 authorization code grant is fully explained here - http://docs.forgerock.org/en/openam/11.0.0/admin-guide/index/chap-oauth2.html#oauth2-authz

Basically there is a decoupling between the resource owner, the requesting client and the authorization server.


My basic client first authenticates the end user to get an OpenAM session token.  That token is used to generate an authorization code, which is in turn used by the client to request access and refresh tokens, and ultimately the attribute scopes for the user.
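
As a flavour of the client code, the code-to-token exchange boils down to a single form POST.  A sketch using the third-party request library - the endpoint path assumes OpenAM's standard OAuth2 URLs, and the client details are made up:

var request = require("request");

var authorizationCode = "...";   // obtained earlier via /oauth2/authorize

// Exchange the authorization code for access and refresh tokens
request.post("http://openam.example.com:8080/openam/oauth2/access_token", {
    form: {
        "grant_type": "authorization_code",
        "code": authorizationCode,
        "redirect_uri": "http://localhost:9090/cb",   // must match the registered client
        "client_id": "myClientApp",
        "client_secret": "password"
    }
}, function (err, res, body) {
    var tokens = JSON.parse(body);   // access_token, refresh_token, expires_in...
    console.log(tokens.access_token);
});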

The code is available on Github - https://github.com/smof/node_openam_oauth2_client

18 February 2014

Conditional URL Policy Evaluation in OpenAM


To perform conditional URL evaluation (where there are arguments in the URL that will change and impact the policy decision), a custom policy evaluation plugin needs implementing - http://docs.forgerock.org/en/openam/11.0.0/dev-guide/index/chap-policy-spi.html

Use Case

The URL contains all the information required to make a policy decision, but components of the URL vary, adding context.

In this example, an organisation number prefixes the user identifier, whilst a user number suffixes it.  A condition should exist where only users who are managers, AND managers of the same organisation as the user they're accessing, are allowed access.

Implementation

Either build out a specific policy plugin, or use the existing community-contributed ScriptedCondition plugin, which allows for the use of Javascript to build out the condition evaluation.  ScriptedCondition is available from the OpenAM trunk source - http://sources.forgerock.org/browse/openam/trunk/community/extensions/ScriptedCondition/README.txt?hb=true

Compile the ScriptedCondition.java plugin against the OpenAM core and shared libraries, add it to a policy-plugins.jar, and drop that into the ../openam/WEB-INF/lib directory.

Extensions to the OpenAM services schema are needed to allow for the selection of the new condition type; follow the instructions in the ScriptedCondition README.  A restart of Tomcat will result in the ScriptedCondition being available in the policy edit screens.
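
The condition script itself was originally shown as a screenshot, so here is a rough reconstruction of the idea.  The bindings (requestUrl, session and the authorized result variable) are assumptions about what ScriptedCondition exposes to the script - check the plugin README for the real names:

// Pull the organisation component out of the requested URL,
// e.g. http://app.example.com/org42-user1001 (hypothetical structure)
var parts = requestUrl.split("/");
var lastToken = parts[parts.length - 1];      // "org42-user1001"
var orgFromUrl = lastToken.split("-")[0];     // "org42"

// Compare against a session attribute holding the manager's organisation
var orgFromSession = session.getProperty("organisation");

// true/false is handed back to the condition decision method
authorized = (orgFromUrl === orgFromSession);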

So the Javascript above basically compares the org value split from the URL with a session attribute holding the user's organisation value, before returning true or false back to the condition decision method.

30 January 2014

Using OpenAM as a REST based PDP

OpenAM has powerful policy decision point (PDP) functionality that can be leveraged entirely over the REST endpoints provided out of the box.  These endpoints allow for nice decoupling between the PDP, the authentication infrastructure, and your app.  A few things to set up first...

Policies - policies map to the resource URLs that you want to protect, along with additional data such as the subjects (users) the policy will affect, as well as conditions such as IP address, time, authentication level requirements and so on.

Authentication Modules - an obvious component, but the modules can also be configured with an authentication level (an arbitrary numeric value) that provides an assurance level once a user has used a particular chain / module.  The auth level can then be leveraged via the policy.

Authentication

Authenticating the user over REST in v11 has changed slightly.  There is now the use of JSON based callbacks that allow for more flexible authentication scenarios.  For example, say the user is not authenticated but wants a session with an assurance level to be able to access app.example.com/examples.  The following could be called:

http://openam.example.com:8080/openam/json/authenticate?authIndexType=resource&authIndexValue=http%3A%2F%2Fapp.example.com%3A8081%2Fexamples
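
That call returns a JSON payload of callbacks to complete.  A trimmed sketch of what that looks like for a plain username/password data store module (exact fields vary by module and version):

{
  "authId": "eyJhbGciOiJIUzI1NiJ9...",
  "stage": "DataStore1",
  "callbacks": [
    {
      "type": "NameCallback",
      "output": [ { "name": "prompt", "value": "User Name:" } ],
      "input": [ { "name": "IDToken1", "value": "" } ]
    },
    {
      "type": "PasswordCallback",
      "output": [ { "name": "prompt", "value": "Password:" } ],
      "input": [ { "name": "IDToken2", "value": "" } ]
    }
  ]
}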

Depending on the setup, OpenAM returns a JSON payload and associated authId JWT, with either a choice callback to pick a module that has the appropriate auth level, or (as above) the attribute and value placeholders for the module that matches.  Sending the JSON back to OpenAM with the necessary username, password or other attributes filled in will result in a token and success URL for redirection:

{
  "tokenId": "AQIC5wM2LY.......QtMzg2NzM4NzAwMjEwMDc2NzIyMQ..*",
  "successUrl": "/openam/console"
}

Now comes the authorization part.  There are a few avenues to take here: either taking the token and querying OpenAM for other attributes associated with it, to help make an authorization decision natively, or performing policy queries.

Attribute Query

Taking the tokenId as the header cookie, a call is made to retrieve either the entire user object or specific fields, by appending the attributes to the URL:

http://openam.example.com:8080/openam/json/users/smof?_fields=uid,inetuserstatus,employeenumber

Returns:

{"uid":["smof"],"inetuserstatus":["Active"],"employeenumber":["123456"]}


Policy Decision

There are a couple of endpoints for performing a URL access check.  The main one I use here is the ../entitlement/entitlement endpoint (note the two entitlements...), which is very flexible and also returns advice objects to assist with handling any deny messages.

To do a check against app.example.com:8081/examples, encode the URL and, taking the subject's tokenId as the cookie, call:

http://openam.example.com:8080/openam/ws/1/entitlement/entitlement?action=GET&resource=http%3A%2F%2Fapp.example.com%3A8081%2Fexamples

A deny response (for example, the user having authenticated via a module that didn't meet the auth level minimum of 110...) would deliver:

{
  "statusMessage": "OK",
  "body": {
    "resourceName": "http://app.example.com:8081/examples",
    "advices": {
      "AuthLevelConditionAdvice": [
        "/:110"
      ]
    },
    "attributes": {},
    "actionsValues": {}
  },
  "statusCode": 200
}

A positive response would deliver:

{
  "statusMessage": "OK",
  "body": {
    "resourceName": "http://app.example.com:8081/examples",
    "advices": {},
    "attributes": {},
    "actionsValues": {
      "GET": true,
      "POST": true
    }
  },
  "statusCode": 200
}

24 January 2014

Role Mining & Peer Analytics in OpenIDM

I created a few custom endpoint extensions for use with OpenIDM that allow for the analysis of users and their entitlements.  I won't go into the virtues of roles and role-based access control, but these endpoints are a simple way to quickly identify similarities between groups of users and then quickly find any differences or exceptions.  These exceptions would then be analysed, either by a certification system or perhaps manually by the security admin teams.

Peer Analysis

Peer Analysis JSON
The first endpoint simply groups users (generally managed users) together based on a functional similarity.  This is generally known as 'top down' mining in full blown role mining projects.  The endpoint returns a JSON object with role names and an array of users that are part of that functional grouping.


Peer Entitlements

Peer Entitlements JSON
The role object on its own isn't much use.  What we're really interested in is which entitlements should be associated with that role.  This makes the onboarding of new users really simple and less error prone.  If we know what entitlements the role should have, we can simply associate a new user, based on the previously identified functional grouping, and the user can then be provisioned with those entitlements.  But how do we know which entitlements to associate with the role?  The peerEntitlements.js endpoint is a really robust way of finding out which entitlements are common for a given group of users.  Using the JSON output from the peerAnalysis.js endpoint, we can simply pull out the entitlements for any system known to OpenIDM.  The peerEntitlements.js endpoint then identifies every entitlement that is common across every user in that grouping and adds it to the role.

For example, if Billy (grp1, grp2), Ann (grp1, grp5) and John (grp1, grp6) were all in the same role, the entitlements endpoint would see that only "grp1" was common across all users and push that into the entitlements array.  The other entitlements we'll come to in a second....
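
The core of that check is just a set intersection across the group's entitlement arrays.  A minimal sketch of the logic (not the endpoint code itself):

var members = [
    { "name": "Billy", "entitlements": ["grp1", "grp2"] },
    { "name": "Ann",   "entitlements": ["grp1", "grp5"] },
    { "name": "John",  "entitlements": ["grp1", "grp6"] }
];

// Start with the first user's entitlements and keep only those
// that every other user also holds
var common = members.reduce(function (acc, user) {
    return acc.filter(function (ent) {
        return user.entitlements.indexOf(ent) !== -1;
    });
}, members[0].entitlements);

// common === ["grp1"] - the only entitlement shared by all peers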


Peer Exceptions

Peer Exceptions JSON
Any entitlements that do not find themselves added to a role due to their similarity (or lack thereof) need to be handled.  Generally these entitlements are known as exceptions, and are managed through long and complex access certification and attestation projects.  There are several fully blown compliance products on the marketplace that can take several months to configure and deploy.  The idea behind the peerExceptions.js endpoint is to quickly identify only the high risk exception entitlements and get those entitlements cleaned up in a timely fashion.  By focusing just on the high risk exceptions, this can cut down the access review process by 80%.  The exceptions in this example are simply entitlements associated with a user that fall outside of the role model.  Taking user Billy (above): he has "grp1" and "grp2" associated with him.  "grp1" landed in the newly found role, leaving "grp2" as an exception.  He has that entitlement, but other users who perform a similar business function do not.  Maybe he is a manager, or perhaps he is new to the job and has experienced privilege creep.  Either way, these entitlements don't match those of his peers, and that needs investigating.  The peerExceptions.js endpoint performs a diff between the effective entitlements directly associated with the user and the effective entitlements from the roles the user has.

All code from the above examples is available from Github here.  Note this is an example endpoint and is in no way supported by ForgeRock.  It is released simply as a community contribution.  Use as-is!