1 December 2015

Scripted OpenID Connect Claims and Custom JWT Contents

OpenID Connect has been the cool cat on the JSON authorization cat walk for some time.  It is a powerful extension to the basic authorization flows in OAuth2, adding an id_token to the response.  The id_token is a JWT (JSON Web Token, pronounced 'jot', but you knew that) that is cryptographically signed and sometimes encrypted - depending on the contents.

The id_token is basically separate from the traditional access_token, containing details such as which authorization service issued the token, when the user or entity authenticated and when the token will expire.

OpenAM has supported implementations of OpenID Connect for a while, but a more recent feature is the ability to add scripting support to the returnable claims.  Adding scripting here is a really powerful feature.  Scripts can be either Groovy or JavaScript based, with a default Groovy script shipping with OpenAM 13 out of the box.

The script basically allows us to creatively map scopes onto attribute data, either held on the user's identity profile, or perhaps dynamically created at run time via call-outs or applied logic.

A quick edit of the out of the box OIDC claims script allows me to add a user's status, from their profile held in OpenDJ, into the data available to the presented scopes.  I've used the inetUserStatus attribute simply because it's populated by design.  Adding "status" to the scopes list on my OIDC client profile allows it to be requested and then mapped via the script.
So pretty simply, I can add in whatever is made available from the user's identity profile, which could include permissions attributes or group data, for example.

Another neat feature (which isn't necessarily part of the OIDC spec) is the ability to add claims data directly into the id_token - instead of making the extra hop to the user_info endpoint with the returned access_token.  This is useful for scenarios where "offline" token introspection is needed - where an application, API, device or service wants to perform local authorization decision making, simply using the information provided in the id_token.  This could be quite common in the IoT world.

To add the claims data into the signed JWT id_token, you need to edit the global OIDC provider settings (Configuration | Global | OAuth2 Provider).  Under this tab, tick the check box "Always return claims in ID Tokens".

Now, when I perform a standard request to the ../access_token endpoint, including my openid scope along with my scripted scope, I receive an id_token and access_token combination the same as normal.
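A hedged sketch of that request in shell - the host, client credentials, user credentials and the token values in the sample response are all placeholders, not output from a real deployment:

```shell
# Resource owner password credentials grant, asking for the openid scope
# plus the scripted "status" scope (all values are placeholders):
#
#   curl -s --request POST --user "myOIDCClient:password" \
#     --data "grant_type=password&username=demo&password=Passw0rd&scope=openid profile status" \
#     "https://openam.example.com/openam/oauth2/access_token"
#
# The response is JSON containing both tokens, shaped roughly like this:
RESPONSE='{"id_token":"eyJ0eXAiOiJKV1QifQ.eyJzdGF0dXMiOiJBY3RpdmUifQ.c2ln","access_token":"f5f0","token_type":"Bearer","expires_in":3599}'

# Pull the id_token out for later introspection
ID_TOKEN=$(printf '%s' "$RESPONSE" | sed 's/.*"id_token":"\([^"]*\)".*/\1/')
echo "$ID_TOKEN"
```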

So I can either call the ../user_info endpoint directly with my access_token, to check my scope values (including my newly added status one), or use a tool or piece of code to introspect my id_token.  The JWT.io website is quite a cool tool to introspect the id_token, doing the decode and signature verification automatically online.  The resulting id_token introspection would look something like this:
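The payload can also be decoded offline from the command line.  A minimal sketch - the id_token below is a fabricated example whose payload is just {"status":"Active"}, and this only decodes, it does not verify the signature:

```shell
ID_TOKEN='eyJ0eXAiOiJKV1QifQ.eyJzdGF0dXMiOiJBY3RpdmUifQ.c2ln'
# The payload is the second of the three dot-separated base64url segments
PAYLOAD=$(printf '%s' "$ID_TOKEN" | cut -d. -f2)
# Restore base64 padding to a multiple of 4, map the base64url alphabet
# back to standard base64, then decode
PAD=$(( (4 - ${#PAYLOAD} % 4) % 4 ))
printf '%s%s' "$PAYLOAD" "$(printf '%*s' "$PAD" '' | tr ' ' '=')" | tr '_-' '/+' | base64 --decode
# -> {"status":"Active"}
```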

Note the newly added "status" attribute is in the verified id_token.

23 October 2015

Device Authorization using OAuth2 and OpenAM

IoT and smart device style use cases often require a device to be authorized to act on behalf of a user.  Common examples are things like smart TVs, home appliances and wearables that are powerful enough to communicate over HTTPS, and will often access services and APIs on the end user's behalf.

How can that be done securely, without sharing credentials?  Well, OAuth2 can come to the rescue.  Whilst not part of the ratified standard, many of the OAuth2 IETF drafts describe how this can be achieved using what's known as the "Device Flow".  This flow leverages the same components as the other OAuth2 flows, with a few subtle differences.

Firstly, the device generally doesn't have a great UI that can handle decent human interaction - such as logging in or authorizing a consent request.  So the consenting aspect needs to be handled on a different device that does have standard UI capabilities.  The concept is to have the device trigger a request, before passing the authorization process off to the end user on a different device - basically accessing a URL to "authorize and pair" the device.

From an OpenAM perspective, we create a standard OAuth2 (or OIDC) agent profile with the necessary client identifier and secret (or JWT config) and the necessary scope.  The device starts the process by sending a POST request to the /oauth2/device/code endpoint, with arguments such as the scope, client ID and nonce in the URL.  If the request is successful, the response is a JSON payload containing a verification URL, device_code and user_code.
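A sketch of that first step in shell - the host, client ID, nonce and every value in the sample response are placeholders, not real output:

```shell
# Step 1: the device requests a device_code / user_code pair:
#
#   curl -s --request POST \
#     "https://openam.example.com/openam/oauth2/device/code?client_id=deviceClient&scope=profile&nonce=1234"
#
# A successful response is JSON shaped roughly like this:
RESPONSE='{"user_code":"mLPH","verification_url":"https://openam.example.com/openam/oauth2/device/user","device_code":"7a95a0a4-6f13-42e3","expires_in":300,"interval":5}'

# The device keeps the device_code and shows the user_code and URL to the user
DEVICE_CODE=$(printf '%s' "$RESPONSE" | sed 's/.*"device_code":"\([^"]*\)".*/\1/')
echo "$DEVICE_CODE"
# -> 7a95a0a4-6f13-42e3
```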

The end user views the URL and code (or is perhaps notified via email or app) and, on a separate device, goes to the necessary URL to enter the code.

This triggers the standard OAuth2 consent screen - showing which scopes the device is trying to access.

Once approved, the end user dashboard in the OpenAM UI shows the authorization - which importantly can be revoked at any time by the end user to "detach" the device.

Once authorized, the device can then call the ../oauth2/device/token endpoint with the necessary client credentials and device_code, to receive the access and refresh token payload - or an OpenID Connect JWT token as well.
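Sketching that exchange in the same hedged way (again, every value here is a placeholder):

```shell
# Step 2: the device exchanges its device_code for tokens:
#
#   curl -s --request POST \
#     "https://openam.example.com/openam/oauth2/device/token?client_id=deviceClient&client_secret=password&code=7a95a0a4-6f13-42e3"
#
# which returns the usual OAuth2 token payload, e.g.:
RESPONSE='{"access_token":"f5f0eaa6","refresh_token":"b8a2","token_type":"Bearer","expires_in":3599}'
ACCESS_TOKEN=$(printf '%s' "$RESPONSE" | sed 's/.*"access_token":"\([^"]*\)".*/\1/')
echo "$ACCESS_TOKEN"
# -> f5f0eaa6
```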

The device can then start accessing resources on the user's behalf - until the user revokes the bearer token.

NB - this OAuth2 flow is only available in the nightly OpenAM 13.0 build.

DeviceEmulator code that tests the flows is available here.

18 August 2015

OpenIDM: Relationships as First Class Citizens

One of the most powerful concepts within OpenIDM is the ability to create arbitrary managed objects on the fly, with hooks that can be triggered at various points of that managed object's life cycle.  A common use of managed objects is to separate logic, policy and operational control over a type of user.  However, managed objects need not be just users - commonly they are devices - and they can also be used to manage relationship logic, similar to how graph databases store relationship data separately from the entities being managed.

For example, think of the basic relationship between a parent and child (read this article I wrote on basic parent and child relationships). That article looked into the basic aspect of storing relationship data within the managed object itself - basically a pointer to some other object managed in OpenIDM.

That in itself is powerful, but doesn't always cover more complex many-to-many style relationships.

The above picture illustrates the concept of externalising the relationship component away from the parent and child objects.  Here we create a new managed object called "family".  This family basically has pointers to the appropriate parents and children that make up the family.  It provides a much more scalable architecture for building connections and relationships.

This model is fairly trivial to implement.  Firstly, create the 3 new managed object types via the UI - one for parent, one for child and one for family.  We can then look at adding in some hooks to link the objects together.  I'm only using onDelete and postCreate hooks, but obviously more fine-grained hooks can be used as necessary.

A basic creation flow looks like the following:

  1. Parent self-registers
  2. Via delegated admin, the parent then creates a child object (the creation of the child is done via an endpoint, in order to get access to the context object and capture the id of the parent creating the child on the server side)
  3. After the child is created, the postCreate hook is triggered, which creates a family object that contains the parent and child pointers
  4. Once the family object is created, the family postCreate hook is triggered, to simply update the parent and child objects with the newly created family _id

That is a fairly simple flow.  The family object can be read via the managed/family/_id endpoint and manipulated in the same way as any other object.  A useful use case is to look for approvers on the child - for example, if the child wants to gain access to something, the family object can be looked up, either for parallel approval from both parents or via an explicit approver associated with the family.
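As a sketch of reading such an object over REST - the host, credentials, the family _id and the response shape below are all illustrative placeholders:

```shell
# OpenIDM's REST interface authenticates via headers; all values here
# are placeholders and the response is a fabricated example:
#
#   curl -s --header "X-OpenIDM-Username: openidm-admin" \
#        --header "X-OpenIDM-Password: openidm-admin" \
#        "http://localhost:8080/openidm/managed/family/9b17"
#
RESPONSE='{"_id":"9b17","parents":["managed/parent/55a2"],"children":["managed/child/7c31"]}'

# The parents array holds the pointers used for approval lookups
PARENTS=$(printf '%s' "$RESPONSE" | sed 's/.*"parents":\[\([^]]*\)\].*/\1/')
echo "$PARENTS"
```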

Another use case, could be when a new parent registers and wants to "join" a particular family - the approver on the family object can be sent an "access request" and approve as necessary.

The flow when a parent or child object is deleted is similar in its simplicity:

  1. Parent (or child) is deleted, perhaps via de-registration or an admin performing a delete
  2. The onDelete hook triggers, which finds the appropriate family object and removes the child or parent from the array of parents or children
When a family object is deleted, again the flow is simple:

  1. Family object is deleted (via an admin)
  2. All parents and children found within the family object are looked up and deleted
This provides a clean de-provisioning aspect.

Obviously each managed object can also have its own sync.json mappings and links to downstream systems, the same as in a non-relationship based deployment.

Another common set of use cases is around consumer single views.  For example, the relationship object becomes the consumer, with all other managed objects, such as preferences, marketing data, sales history, postal addresses and so on, all being linked in the same manner.  The consumer object becomes the single entry point into the data model.

12 August 2015

Explain it Like I'm 5: OAuth2 & UMA

This entry is the first in a mini-series, where I will attempt to explain some relatively complex terms and flows in the identity and access management space, in a way a 5 year old could understand.


First up is OAuth2 and User Managed Access or UMA, a powerful federated authorization flow that sits on top of OAuth2.

Explain it like I’m 5: OAuth2

OAuth2 allows people to share data and things with other services that can access that data on their behalf.  For example, an individual might want to allow a photo printing service access to a few pictures from an album stored on a picture hosting service.

Explain it like I’m 5: Resource Server

The resource server is the service or application that holds the data or object that needs sharing.  For example, this could be the picture hosting site that stores the taken pictures.

Explain it like I’m 5: Resource Owner

The resource owner is the person who has the say on who can retrieve data from the resource server.  For example, this could be the user who took the pictures and uploaded them to the hosting service.

Explain it like I’m 5: Authorization Server

The authorization server is the security system that allows the resource owner to grant access to the data or objects stored on the resource server to the application or service.  In continuing the example of the picture hosting, it’s likely the hosting service itself would be the authorization server.

Explain it like I’m 5: Client

The client is the application that wants to gain access to the data on the resource server.  So in the continuing example, the picture printing service would be the client.

Explain it like I’m 5: UMA

UMA allows the sharing of data and things with multiple different 3rd parties, all from different places.
For example, a resource owner may want to share pictures not only with 3rd party services acting on their behalf, but also with other trusted individuals, who can perhaps store those pictures in their own store and print them using their own choice of printing service.

29 July 2015

UMA Part 2: Accessing Protected Resources

This second blog on UMA, follows on from part 1, where I looked at creating resource sets and policies on the authorization server.

Once an authorization server understands what resources are being protected and who is able to access them, the authorization server is in a position to respond to access requests from requesting parties.

A requesting party is simply an end user, together with the client application acting on their behalf, that wants to access resources managed by the resource server.

The above diagram looks complicated at first, but it really only contains a couple of main themes.

The authorization server is responsible for the resource sets and policies, and is ultimately the policy decision point for evaluating the inbound request; the resource server acts as the data custodian; and the requesting party is the application and end user wanting to gain access to the resources.

There are a couple of relationships to think about.  Firstly, the relationship between the resource server and the authorization server.  Described in the first blog, this relationship centres around an OAuth2 client and the uma_protection scope.  The second relationship is between the requesting party and the authorization server.  This generally centres around an OAuth2 client and the uma_authorization scope.  Then of course there are the interactions between the requesting party and the resource server.  Ultimately this revolves around the use of a permission ticket, which is exchanged for a requesting party token (RPT); the resource server can then introspect that token via an authorization endpoint, in order to determine whether access should be granted.
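As a sketch, that final introspection check might look like the following, assuming a standard OAuth2 token introspection endpoint in the RFC 7662 style - the host, path, client credentials and the response values are all placeholder assumptions, not taken from the draft docs:

```shell
# Hypothetical resource server introspecting a requesting party token:
#
#   curl -s --request POST --user "UMA-Resource-Server:password" \
#     --data "token=<requesting party token>" \
#     "https://openam.example.com/openam/oauth2/introspect"
#
# An active RPT might come back with its granted permissions, e.g.:
RESPONSE='{"active":true,"permissions":[{"resource_set_id":"43225628","scopes":["read"]}]}'
ACTIVE=$(printf '%s' "$RESPONSE" | sed 's/.*"active":\([a-z]*\).*/\1/')
echo "$ACTIVE"
# -> true
```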

Another aspect to consider is the verification of the end user - in this case Bob.  This is currently done via an OpenID Connect JWT issued by the authorization server.  This JWT is then used by the requesting party when submitting a request to the AS (step 3 in the above).

A powerful component is of course the loose coupling of the main players, all integrated using standard OAuth2 patterns for API protection.

The above use cases are all available in the nightly build of OpenAM 13.  As with any nightly build, there is instability, so expect a little inconsistency in some of the flows.  The draft documentation describes all the detail with respect to the flows and interactions.

Chrome's Postman REST client can be used to test the API integrations.  I created a project that contains all the necessary flows that can be used as starting point for testing ForgeRock integrations.

UMA Part 1: Creating Resource Sets & Policies

User Managed Access (UMA) is a new standard that defines how resource owners can control protected-resource access by clients operated by arbitrary requesting parties, where the resources reside on any number of resource servers, and where a centralized authorization server governs access based on resource owner policy.  So what does that mean?

Basically, in today's highly federated, cross platform, distributed world, data and information need sharing with multiple 3rd parties, who all reside in different locations with different permission levels.

UMA looks to solve that problem by introducing a powerful approach to transparent user centric consent and sharing.

I'm going to take a look at the ForgeRock implementation of UMA, available in the nightly builds of OpenAM 13.

Creating Resource Sets

First up, what are resource sets?  Before someone can share anything, they basically need to define what it is they are going to share.  So this is concerned with defining object level components - which could be physical objects such as photos, or digital URLs.  The second aspect of the resource set is the permissions, or scopes, that are required to access those objects.  A basic example could be a read scope against a picture album.

The above schematic shows some of the flows required to create the resource sets on the OpenAM authorization server.  The UMA-Resource-Server here (from where my resources will ultimately be shared) is simply an OAuth2 client of OpenAM, with a specific scope of uma_protection.  The resource set CRUD (create, read, update, delete) API on the OpenAM authorization server is protected via this OAuth2 scope.  The UMA terminology calls the resulting token a Protection API Token (or PAT).

The PAT allows my resource server to create the resource sets on the authorization server with minimal disruption to my resource owner - a simple OAuth2 consent is enough.
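A sketch of that registration call in shell - the endpoint path follows the shape of the OpenAM 13 draft docs, and the PAT, host, scope name and response values are all placeholders:

```shell
# Hypothetical resource set registration using the PAT as a bearer token:
#
#   curl -s --request POST \
#     --header "Authorization: Bearer <PAT>" \
#     --header "Content-Type: application/json" \
#     --data '{"name":"Photo Album","scopes":["read"]}' \
#     "https://openam.example.com/openam/oauth2/resource_set"
#
# The server replies with the new resource set's identifier, e.g.:
RESPONSE='{"_id":"43225628-4c5b-4206-b7cc-5164da81decd"}'
RESOURCE_SET_ID=$(printf '%s' "$RESPONSE" | sed 's/.*"_id":"\([^"]*\)".*/\1/')
echo "$RESOURCE_SET_ID"
# -> 43225628-4c5b-4206-b7cc-5164da81decd
```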

Creating Policies

OK, so my OpenAM authorization server now knows what is being protected - a set of objects and some permissions or scopes are in place.  But we now need some policies, which govern who or what can access those new resources.  This is the job of the resource owner.  The resource owner is going to know who they want to share their data or things with - so there are no delegation or consent style flows here.  The resource owner needs a session on the authorization server to be able to access the CRUD API for policy creation.

The policy API allows the resource owner to match their resource sets to individual users, with the associated permissions or scopes.  For example, Alice (resource owner) may create a policy that gives read access (scope) to an entire photo album (resource) to Bob (end user).

Now that the authorization server knows about the resource sets and the policies that govern who and what can be accessed, we can start looking at how access is enforced.  This will be covered in my second blog, "UMA Part 2: Accessing Protected Resources".

15 July 2015

API Throttling with OpenIG

A common requirement with regards to API access is the ability to throttle the number of hits a user or service can make against a particular endpoint or set of endpoints - similar to a soft paywall style use case.

The nightly build of OpenIG contains a new filter - the ThrottlingFilter - which provides a simple way to limit, and then time out, a user who hits an endpoint x number of times.

To test, I created a simple node.js API that allows the read (GET) and write (POST) of widgets.  Each widget has a numerical id that is used as the identifier.  I also created a basic queryAll route within my node.js API to do a wildcard-esque search to return all ids.

So now I wanted to add OpenIG into the mix and do a little reverse-proxying.  I basically wanted to expose only the GET queries against my API, preventing POST requests entirely and only allowing GETs to specific endpoints.  To those endpoints, I also wanted to limit the number of requests per user to 3 - if that threshold was hit, I would redirect the user to a warning page and time them out for 10 seconds.

To set things up, I added a few things to my config.json main heap.  Firstly, I used the defaultHandler attribute within my main router, to act as a catch-all and handle any requests that came in for which a specific route file was not defined.  I also added in the new ThrottlingFilter, so I could use it from within any of my routes - as objects in the config.json main heap are visible to my lower level route handlers.  The ThrottlingFilter just looks like this:
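A sketch of the heap object, based on the nightly draft docs - the grouping expression (which keys the per-caller count) and the exact property names are assumptions, so check them against your build:

```json
{
  "name": "throttler",
  "type": "ThrottlingFilter",
  "config": {
    "requestGroupingPolicy": "${request.headers['X-Forwarded-For'][0]}",
    "rate": {
      "numberOfRequests": 3,
      "duration": "10 seconds"
    }
  }
}
```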

I then set up a couple of static HTML files that I housed in a config/html folder in my OpenIG base directory.  I had a noRouteResponse.html that my defaultHandler delivered via a StaticResponseHandler (note here, I also wanted to include an image in my HTML, so I embedded the image as a base64 encoded object, so I didn't have to worry about access to the image URL).  I also created a thresholdBreachedResponse.html that I would redirect to, again via a StaticResponseHandler, when a user racked up 3 hits to my API.

In my config/routes directory I had two routes - one for a GET on an :id and another for a GET on the queryAll endpoint.  I added no explicit routes for POST requests, so they would be caught by my defaultHandler and redirected and thus preventing access.

The route for my throttling does a few things.  Firstly, I added a condition on what would trigger it - using the out of the box matches function, I added a basic regex to capture requests matching '/widget/[0-9][0-9]' - that is, only requests with digits as the :id - /widget/AA would fail, for example.  The route passed all matching traffic into a chain - which is just an ordered set of filters.  Here I could call my throttle filter and also a SwitchFilter.  The switch allowed me to check whether a user had hit the threshold imposed by my throttle.  If the throttle was triggered, a 429 response code was returned - OpenIG would catch this and redirect to my thresholdBreachedResponse.html page.

At a high level, the set up looks like the following:

In reality, the redirect for the threshold breach might send the user to an identity provider like Facebook or OpenAM to log in, before allowing unlimited access via a route that avoids the throttle filter entirely.

Code for the node.js API example is available here.

Artifacts for the OpenIG config are available here.

24 June 2015

Seamless User Profile Migration from MySQL to OpenDJ

Following on from my previous post on OpenDJ password schemes, a common requirement is to migrate users into the OpenDJ profile store from an existing database.  There are numerous ways to do this, such as LDIF file imports, or using OpenIDM reconciliation and livesync.  However, both methods only really do a like for like comparison - no data cleansing takes place - unless you start to configure some logic processing in there.

This might be fine, but if your existing repositories contain millions of entries, some of which you don't know are live, a quick way to migrate across only the active users is to use OpenAM, with its Dynamic Profile creation feature.

The above describes the process at a high level.  Basically, there are 3 authentication modules in a chain, using the flexibility of sufficient and optional modules.

In this flow, there are basically 3 states:

           User in MySQL   User in OpenDJ   Authentication Works Against   Password Captured
  1st Run  Yes             No               MySQL                          No
  2nd Run  Yes             Yes              MySQL                          Yes
  3rd Run  Yes             Yes              OpenDJ/MySQL                   No

On the first run through, authentication fails against OpenDJ, as the user only exists in MySQL.  The chain then flows down to the JDBC module to authenticate the user.  The scripted module doesn't have an impact yet, as the user is only created in OpenDJ once the authentication chain has completed.  

With regards to the JDBC module, depending on how the password has been stored in the SQL database, it's quite likely you will need to write a password syntax transformation class, to convert the submitted clear text password using the algorithm that the database uses to store the password.  This is a pretty simple and documented process, with an example I wrote for SHA1 hashing available here.

On the second run through, the same thing happens, except this time the scripted module has something to update in the DJ repository - remember, the user was created at the end of the 1st run through.  The script simply does an idRepository.setAttribute against the newly created DJ user, to update the userPassword attribute with the password value from sharedState.password.  The script I used is available here.

If all things are working as expected, the 3rd run through is somewhat different.  Not only does the user now exist in the DJ store, but that store also contains the existing user password from MySQL.

So, whilst the user logs in using the same credentials as if nothing has happened, the authentication chain will authenticate successfully against OpenDJ and then exit the chain.

The main benefit of this approach is that the end user has not been impacted - they log in with the same credentials as they did when using the MySQL repository.  No impact to their journey, and no dreaded password reset use case.  Secondly, only the users that have successfully logged in are created in the new DJ store.  A by-product of this process is that a degree of data cleansing has taken place - any users captured in the MySQL database that no longer use the service will not be migrated.

Another benefit of the migration, following my blog on password storage in OpenDJ, is that you can also seamlessly upgrade the hashing algorithm.

NB - To allow the flow of the shared state username and password between the initial LDAP module and the secondary JDBC module, edit the module options setting within the authentication chain to contain iplanet-am-auth-shared-state-enabled=true.

19 June 2015

Password Storage in OpenDJ

A common use case is the migration of user profile data to OpenDJ.  Especially in large scale consumer facing identity projects, most clients already have repos that contain user profile data.

Sometimes these stores also contain authentication data - that is, the user name and password of the individuals.  Migrating data is relatively simple in this day and age, regardless of whether it is identity data or not, but a common issue regarding login credentials is how to migrate without impacting the login process.  For example, you don't necessarily want every user to reset their password when they migrate to the new system.

Within OpenDJ this fortunately isn't a big deal.  A reason users might have to reset their password is often to do with how the password has been stored on the source system.  When it comes to passwords, there are generally two main approaches - symmetric encryption and hashing.  Symmetric encryption (meaning the password can be decrypted using the same encryption key) is seen as a less secure method than something like hashing.  The argument for symmetric encryption was often around usability and speed, and perhaps for password recovery style use cases - as opposed to password reset use cases, if the password could not be recovered.

Password hashing is where a password is converted into a one-way set of opaque characters that visually have no relation to the clear text password - meaning hackers have a much harder time trying to recover the original password.  The hash also generally cannot be reversed - think of hashing like smashing a glass mirror: once smashed, it's nearly impossible to glue the mirror back together to look the same.  It's also nearly impossible to smash two identical mirrors in such a way that the broken pieces look the same.  So... hashing is seen as more secure, and irreversible.

But if it's irreversible, how do users log in?  When the clear text password is entered, the specific hashing algorithm is applied to it, and the result is compared to the existing hash that is stored.  So we're performing a hash comparison, not a clear text comparison.  I digress.

Back to OpenDJ.  OpenDJ provides a range of these different hashing algorithms out of the box. Take a look at the password storage schemes via the dsconfig interactive CLI (in ../bin/ of the main OpenDJ root folder).  Option 28 of the main menu takes you into the Password Storage Scheme area...

Most modern deployments will want to use a one-way hash, generally with a salt, so something like Salted SHA512 is a good bet.  The issue comes when, for example, the source data feed of users has a hash of a lower security level than you want in the modern world with OpenDJ.  So whilst OpenDJ supports things like SHA1 out of the box (and you can code new plugins for algorithms not supported...), you might want to migrate all users to a new, more secure algorithm going forward.

Haha - the password reset scenario I mentioned above! Well not quite...OpenDJ has a neat feature that allows migration to new algorithms without getting users to reset their password.

Firstly, you can set the appropriate default-password-storage-scheme to the existing hashing algorithm (for example SSHA) when you migrate your users across.  This is done via the Password Policy option in the dsconfig main menu.  So we now have users in DJ with their passwords stored using the existing algorithm.  A neat way to check this is to view the user via the ../bin/control-panel tool, switching to LDIF view.  Check the userPassword attribute... and you will see the base64 encoded password.

Taking the encoded value and using something like the base64 utility that comes with most BASH distributions, you can decode the value to see the hashed value underneath.
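For example, using base64 from the command line - the hash value below is fabricated for illustration, and your userPassword value will differ:

```shell
# What the LDIF view shows is the userPassword value base64 encoded;
# here we encode a made-up SSHA value to stand in for it.
ENCODED=$(printf '%s' '{SSHA}5en6G6MezRroT3XKqkdPOmY/BfQ=' | base64)

# Decoding reveals the hash, prefixed with the scheme that produced it
printf '%s' "$ENCODED" | base64 --decode
# -> {SSHA}5en6G6MezRroT3XKqkdPOmY/BfQ=
```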

Note the value is prefixed with the algorithm used, so it's easy to see what is happening.  The next thing we can do is alter the default-password-storage-scheme to include our new algorithm - namely SSHA512.  Again, do this by editing the appropriate password policy.  At the same time, also alter the deprecated-password-storage-scheme property to include our initial algorithm - namely SSHA.
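From the command line, that change might look something like the following sketch - the connection details are placeholders, and the scheme names should be checked against those listed by your own dsconfig:

```shell
dsconfig set-password-policy-prop \
  --policy-name "Default Password Policy" \
  --set default-password-storage-scheme:"Salted SHA-512" \
  --set deprecated-password-storage-scheme:"Salted SHA-1" \
  --hostname localhost --port 4444 \
  --bindDN "cn=Directory Manager" --bindPassword password \
  --trustAll --no-prompt
```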

This on its own doesn't alter the algorithm.  The change occurs the next time the user authenticates.  So logging into OpenAM with my existing user and their existing password not only logs me in successfully... it also updates the password in the background, to be stored using the new algorithm.

This time checking the userPassword value in the LDIF view, I can instantly see the base64 value is much longer.

Doing a base64 decode, reveals the reason: we're now storing using the SSHA512 algorithm.

A quick and simple way to upgrade algorithms without impacting the user journey.

Of course, getting the data into DJ in the first place would be a good use case for OpenIDM, through basic reconciliation using the connector framework.  It is also simple to configure OpenIDM to use pass-through authentication, leveraging the password storage schemes just configured in DJ.

For more information on password storage schemes see here.

4 June 2015

Stateless Tokens within OpenAM 13.0

The unstable OpenAM nightly build of 13.0 contains a great new feature: the ability to create stateless, or client side, tokens.  This brings a range of new use cases to the access management table, including increased scale (less server side storage, CTS replication and in-memory storage) and the potential for "offline" token introspection for authorization.  Stateless does of course lack some of the key features of the stateful architecture.

What is a stateless token?

The stateless token is basically a JWT, stored in the existing iPlanetDirectoryPro cookie (if accessing via a browser) or within the tokenId response if authenticating over REST.  The JWT contains all of the content that would be stored on the server side in a stateful session - so things like uid, expiryTime and any other profile or session attributes you want to define.

To quote my colleague Ashley Stevenson "Stateful is a phone call and Stateless is a text message".

The token can also be signed and/or encrypted using standard algorithms such as HS256 (which uses a shared secret) or RS256 (which uses a public/private key combo), adding a layer of security.

Config can also be done at the realm level, which makes for a flexible approach to deciding which realms, users and applications should use it.

Offline Authentication

An interesting by-product of using stateless tokens is that introspection can be done on the token without going back to the originating source - i.e. OpenAM.  Once OpenAM issues the token (it would need to be at least cryptographically signed, and ideally encrypted if it contains sensitive PII required for authorization), verification and decoding of the token can be done by a 3rd party application.  This is pretty straightforward to do, as OpenAM leverages open standards such as JSON Web Tokens (JWT) with standard signing and encryption algorithms.

I created a quick sample node.js application that does just that.  Using just a few lines of JavaScript, and runnable from the command line for testing, it does the following:

  1. Authenticates to the pre-configured stateless realm in OpenAM over REST
  2. Receives the JSON response with the tokenId value and strips out the JWT component
  3. Verifies the tail signature using HS256 and a shared secret configured by OpenAM to prove the token hasn't been tampered with
  4. Decodes the token from base64 and introspects the JSON contents
The code is available here.

The introspection in step 4 could easily be expanded to perform additional queries of the contents, such as looking for certain claims or profile attributes that an application could use to perform an authorization decision.
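As a sketch of that expansion, a relying application could gate an action on a claim carried in the decoded token.  The claim name and shape here are invented for illustration - real claims depend entirely on what the issuing realm puts in the token:

```javascript
// Hypothetical local authorization check over decoded token claims.
// Assumes (purely for illustration) that the token carries a
// "permissions" claim holding an array of allowed action names.
function authorizeAction(claims, action) {
  var permissions = claims.permissions || [];
  return permissions.indexOf(action) !== -1;
}
```

In an IoT setting, a device could run a check like this entirely offline - the only prerequisite is having verified the token signature first.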

See the following draft documentation for further details on configuration of stateless tokens and the implications of the approach over stateful - http://openam.forgerock.org/doc/bootstrap/admin-guide/index.html#chap-session-state

11 May 2015

Open Source, Binaries & Celebrity Chefs

Working for an open source software company, I am faced with questions surrounding the open source approach and model in most customer meetings.  Many customers understand what open source is, or they think they do, but still want to know more.

Open source vendors are now everywhere - GitHub, the social code repo, claims to house over 22 million projects.  That is a staggering number.  Many public sector and government departments now have a preference for open source vendors in their selection processes.  Some of the biggest vendors on the planet are now the top contributors to the Linux kernel - including IBM, Samsung and Google.

So I think it is fair to say that Open Source Software (OSS) is here to stay.  However, there is often some confusion around how a commercial model around OSS materialises.  I use the following little flow to emphasise some of the differences between open source and compiled binaries, and the subtle differences between customization and configuration.

The Recipe Book

Let's switch to the world of cooking for a second - the seemingly omnipresent world of cooking, and more specifically the celebrity rock star chef style of 'cooking' (just how many ways are there to roast a chicken? I digress...).

Most celebrity chefs are themselves multiple brands.  The recipe book is generally seen as the unique raw output of the chef - priced accordingly - capturing the unique touches, techniques and ingredients (aka user interface, design and libraries in the software world) that they bring to their industry.  The recipe book is their relatively complete description of how to reproduce those wonderful professional quality dishes that adorn the TV shows and magazines.

I would say the recipe book is akin to an open source project.  The guts.  The entire inner gubbins of the final dish.  But to make it work, you need to make (no pun intended) the dish yourself.  You basically need to be a professional chef for it to taste like a professional chef made it.

The same can be said of consumer food such as cola, chocolate bars and frozen pizza.  The ingredients are listed, but I personally don't have the skills, facilities or ambition to make my own bottle of cola.  I would rather buy a regular bottle from the supermarket, knowing full well the taste will be consistent and it won't poison me.

The Restaurant

If, on the other hand, you don't fancy trawling through the complex ingredients list, soufflé techniques and sugar browning blow torch approaches to make your perfect birthday meal, then a trip to the nearest Jamie's Italian (other celebrity chef restaurants are available) can result in the complete article, fully supported and catered for.  However, that comes at a price.

That price dutifully covers all the ingredients, chef time, restaurant space, service, waiters, wine, ambience and peace of mind that the food will taste lovely.

This, I would describe as being the fully tested binary, supported with patches and guaranteed to work in a well documented way.

The Salt, Pepper and Sauces

One last addition: the personal preferences, customizations and intricacies that come from the condiments.  The pouring of salt, pepper, mustard, ketchup and other wonderful spices on top of the aforementioned recipe book or restaurant meal allows the eater slightly more control over the finished dish.  Let's call these configuration items.  The turning of a steak into a mustard-adorned meat feast, for example.  These intricacies are vitally important, as not every person has the same tastes.

This could also be the example of a cola being mixed with whiskey, or ice, or lemon or lime.

The same is true for each organisation or project that requires software to implement a solution.

I would say these last steps are akin to implementing any configuration or even customization tasks to the purchased software.

Not all chefs produce cookery books.  Those that do are opening themselves to a new and different audience, and to a new level of transparency.

From a consumer product perspective, would you eat a frozen pizza without knowing the ingredients?  That is akin to buying closed source proprietary software, where you have limited visibility of the true origins of the design.

30 March 2015

Building a Password Checkout Service in OpenIDM

A common use case within the identity life cycle management world is what to do with shared and privileged accounts.  Common accounts such as administrator, root, backup operator and other delegated administration accounts lead to a significant anti-pattern when it comes to password management.  For example, many shared service or administration accounts are just that: shared.  Sharing a password is a very insecure method of account administration.

This generally brings out several security issues:

  • The password is generally non-complex, so that many users can remember it
  • The sharing of the password is not tracked - people who shouldn't know the password generally do
  • It's difficult to track who is actually using an account at any moment in time
Whilst these issues are well known, they are still prevalent - hence an entire sub-industry has grown up around privileged account management (PAM).

Whilst OpenIDM isn't a PAM product, some basic password checkout service use cases can easily be coded out using the custom endpoint component, in a few lines (say 150!) of JavaScript.

The flow was implemented via a single custom endpoint - the Password Checkout Service (PCS).  This service leverages some of the core functionality of OpenIDM, such as the scheduler, OpenICF connectors, the policy engine and the role based access control model.

The PCS is a few JavaScript files that do a handful of things.  Firstly, it applies an RBAC constraint on who can use the service - driven simply by a role called passwordCheckoutService.
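That role gate might look something like the following sketch - the shape of the incoming role list is an assumption for illustration, not the actual PCS source:

```javascript
// Sketch of the RBAC gate for the Password Checkout Service.
// Assumption: the caller's effective roles arrive as an array of
// role name strings.
var REQUIRED_ROLE = 'passwordCheckoutService';

function assertAuthorized(callerRoles) {
  var ok = Array.isArray(callerRoles) &&
           callerRoles.indexOf(REQUIRED_ROLE) !== -1;
  if (!ok) {
    // In OpenIDM this would surface as a 403 to the REST caller
    throw { code: 403, message: 'Caller does not hold role ' + REQUIRED_ROLE };
  }
}
```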

Only members of that role can use the service.  The PCS then checks a white list of accounts it is allowed to operate against - we don't want to be resetting the password of a normal user!  This white list lives in a CSV called pcsValidAccounts.  The PCS then checks its request store - pcsRequests.  This store contains records of the following format:

  • requestId - unique reference for that particular request
  • requestTime - time stamp 
  • account - userid of the account being checked out
  • accountPath - the OpenIDM reference to where the account sits
  • checkoutActive - is the current checkout still alive (boolean)
  • checkedOutBy - userid of the user performing the check out
  • resetTime - the time in the future when the account password will be reset

This demo stores the above in a CSV file using the out of the box CSV connector, but a SQL connector could equally be used in production.
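As a sketch, a pcsRequests record and the expiry logic that the scheduler would apply to it might look like this - all field values below are invented, and the accountPath is a hypothetical example:

```javascript
// Example pcsRequests record (values invented for illustration)
var exampleRequest = {
  requestId: 'req-0001',                            // unique reference
  requestTime: '2015-03-30T09:00:00Z',              // when the checkout was requested
  account: 'administrator',                         // account being checked out
  accountPath: 'system/ad/account/administrator',   // hypothetical OpenIDM path
  checkoutActive: true,                             // is the checkout still alive?
  checkedOutBy: 'smof',                             // user performing the checkout
  resetTime: '2015-03-30T10:00:00Z'                 // when the password gets reset
};

// A checkout has expired once "now" passes its resetTime - at that
// point the scheduled task would reset the password and flag the
// record as no longer active
function isExpired(request, now) {
  return request.checkoutActive && new Date(now) >= new Date(request.resetTime);
}
```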

Configuration is available for the following items:
  • duration an account can be checked out for (minutes and hours)
  • the length of the password to be issued
  • number of upper case chars in the issued password
  • number of numbers in the issued password
  • number of special chars in the issued password
The checked out password is returned to the calling user in the JSON payload, but could easily be sent via the OpenIDM email service to a pre-registered email address, to add a little more security.
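Password generation honouring those configuration items could be sketched as follows - the config property names are invented, and a production implementation should draw randomness from a cryptographically secure source (e.g. node's crypto.randomInt) rather than Math.random:

```javascript
// Hypothetical config mirroring the options listed above
var pcsConfig = {
  passwordLength: 12,
  upperCaseChars: 2,
  numberChars: 2,
  specialChars: 2
};

// Pick `count` random characters from a character set.
// NOTE: Math.random is NOT cryptographically secure - illustration only.
function pick(chars, count) {
  var out = '';
  for (var i = 0; i < count; i++) {
    out += chars.charAt(Math.floor(Math.random() * chars.length));
  }
  return out;
}

// Build a password with the configured number of upper case, numeric
// and special characters, padded to the full length with lower case
// letters, then shuffled so the character classes aren't grouped
function generatePassword(cfg) {
  var lowerCount = cfg.passwordLength - cfg.upperCaseChars -
                   cfg.numberChars - cfg.specialChars;
  var chars = (
    pick('ABCDEFGHJKLMNPQRSTUVWXYZ', cfg.upperCaseChars) +
    pick('23456789', cfg.numberChars) +
    pick('!$%&*@#', cfg.specialChars) +
    pick('abcdefghijkmnopqrstuvwxyz', lowerCount)
  ).split('');
  // Fisher-Yates shuffle
  for (var i = chars.length - 1; i > 0; i--) {
    var j = Math.floor(Math.random() * (i + 1));
    var tmp = chars[i]; chars[i] = chars[j]; chars[j] = tmp;
  }
  return chars.join('');
}
```

The character sets deliberately drop look-alike characters (I, O, l, 0, 1), a common courtesy when a human has to read the password back.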

The PoC-level source code is available here.

9 March 2015

Building Hierarchical Relationships with OpenIDM

One of the common use cases I've seen recently is the ability to link statically provisioned objects to one another.  Commonly known as linking or hierarchical linking, this provides a basic parent-to-child relationship, in a "1-to-many" style.  For example, a literal parent may have several children in their family, which needs to be represented in a permanent or solid format.

If you apply that concept to identity, this could be required for things like production lines (parents) to sensors (children), bill payer (parent) to content subscriber (child) in the media world or perhaps floor manager (parent) and operator (child) in a retail setting.

The main idea is that the parent may have different characteristics (different schema, different policy validation, delegated administration rights) from the child, but still has a very permanent relationship with the other object.

OpenIDM has the ability to very quickly create different object types.  This feature is popular in the complex, relationship-driven device world of the Internet of Things.  The concept of types allows your identity framework to accommodate objects with different characteristics, such as those requiring different administrators, or having different data syntax and policy validators.

Within the OpenIDM admin UI (http://openidm-server:port/admin) you can simply create a new vanilla object type.

This simple task opens up the OpenIDM API to accommodate the basic CRUDPAQ operations against your new object.  For example, I could now do a GET on ../openidm/managed/parent/_id to retrieve a JSON representation of a parent.

In this case, I also created a child object type.  Now, the interesting bit comes with a basic example of how we link the two together.  Each managed object has a number of hooks in OpenIDM that allow us to apply logic via the scripting framework.  An interesting one with respect to linking is the onRetrieve hook.  This gets called on a GET or read of the object in question.  As a little food for thought, I can do the following to link my parents to their children using this hook.

When I create a child (perhaps via delegated admin as a parent), I add a parent attribute that simply contains the parent object's _id.

We can use the child's parent attribute in the linking process.

To do so, simply add in a few lines of script within the parent managed object's onRetrieve hook.

The onRetrieve script (which can be JavaScript or Groovy) does a few things.  Firstly, it finds and stores the parent _id as a variable.  That parent _id is then used to query the managed/child objects list, using a default out of the box query: get-by-field-value.  This is basically a catch-all search that allows me to filter the list of all managed children for those matching specific criteria - in this case, those that have a parent attribute equal to my parent _id.

The script can either be inline or saved to an external file.  In general I save it as an external file, as that makes it easier to manage in large deployments and allows for greater portability - but that's just my preference.

Once the query has completed, the results (which will be an array []) are simply dumped into the parent object as a children attribute.

The end result is super simple - when you do a GET on a parent, there is a children attribute that contains an array of all the child objects carrying that parent's _id!  You could probably cut the attribute down to pull out only the children's _ids, for example, to make it cleaner, but that is a design decision.

The full onRetrieve example script is here:

//Save the current parent object _id
var parentId = object._id;

//Search all children for those with a specific parent _id
var foundChildren = openidm.query("managed/child", {"_queryId": "get-by-field-value", "field": "parent", "value": parentId});

//All children [] - note these are the full child objects
var children = foundChildren.result;

//Populate the parent's children attribute
object.children = children;