Compromised Twitter OAuth keys

March 8, 2013

So Twitter’s OAuth keys have leaked. What does that mean? Don’t panic. The consequences of a client application’s key being compromised are not as serious as user credentials being compromised. The risk associated with this breach is that a malicious application, tricking you into participating in an OAuth handshake (phishing), could access the Twitter API on your behalf. Attackers might come up with clever ways to exploit this leak. In the meantime, avoid using Twitter through any application other than the official Twitter application itself.

OAuth distinguishes between confidential and public clients. Applications that you can publicly download onto your own device (mobile or not) fall into the public category because they are subject to their embedded secret being reverse engineered, as probably happened in this case. This incident is a good illustration of the fact that client secrets should not form the basis of a secure session in public clients like mobile applications because, well, those secrets are easily discovered. Twitter may create new keys for its application and look for ways to better obfuscate them, but it’s only a matter of time before these new secrets are also compromised.

As I was discussing at the Cloud Security Alliance and in our last tech talk, authentication involving redirection between applications on a mobile device has its risks. There are ways to completely secure this between applications of the same domain, but solving it across third-party mobile apps in a fool-proof way requires either something like multi-factor authentication or the provisioning of client secrets post-application download, which is often not practical.

Either way, API and application providers would do well not to rely on pseudo-secrets embedded in publicly available applications as the basis of any security. In the case of client applications issued by the same provider as the API they consume (e.g. the official Twitter app), the password grant type makes a lot more sense to me and provides a better UX.
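As a sketch of what the password grant looks like on the wire, here is the form-encoded token request a public client would send (RFC 6749, section 4.3). The client_id and credentials are hypothetical; a real provider’s token endpoint and parameters may differ:

```python
from urllib.parse import urlencode

def password_grant_request(username, password, client_id):
    """Build the form-encoded body for a resource owner password
    credentials grant (RFC 6749, section 4.3)."""
    return urlencode({
        "grant_type": "password",
        "username": username,
        "password": password,
        # No client_secret: the app is a public client, so an
        # embedded secret would add no real security anyway.
        "client_id": client_id,
    })

body = password_grant_request("alice", "s3cret", "official-app")
# This body is POSTed to the provider's token endpoint with
# Content-Type: application/x-www-form-urlencoded
print(body)
```

Note that the user’s password goes directly to the provider’s own token endpoint, which is why this grant only makes sense when the app and the API come from the same provider.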


Enabling token distributors

February 6, 2013

Are you a token distributor? If you provide an API, you probably are.

One thing I like about tokens is that when they are compromised, your credentials are unaffected. Unfortunately, it doesn’t work so well the other way around. When your password is compromised, you should assume the attacker could get access tokens to act on your behalf too.

In his post The dilemma of the oauth token collector, and in this Twitter conversation, Nishant and friends comment on the recent Twitter hack and discuss the pros and cons of instantly revoking all access tokens when a password is compromised.

I hear the word of caution around automatically revoking all tokens at the first sign of a credential being compromised, but in a mobile world where UX is sacred and where each entry of a password can be a painful process, partial token revocation shouldn’t automatically be ruled out.

Although, as Nishant suggests, “it is usually hard to pinpoint the exact time at which an account got compromised”, you may know that it happened within a range and use the worst-case scenario. I’m not saying that was necessarily the right thing to do in reaction to Twitter’s latest incident, but revoking only the tokens that were issued after the earliest time the hack could have taken place is a valid approach that needs to be considered. The possibility of doing this allows the API provider to mitigate the UX impact and helps avoid service interruptions (yes, I know UX would be best served by preventing credentials from being compromised in the first place).

Of course, acting at that level requires token governance. The ability to revoke tokens is essential to the API provider. Any token management solution being developed today should pay great attention to it. Providing a GUI to enable token revocation is a start, but a token management solution should also expose an API through which tokens can be revoked. This lets existing portals and ops tooling programmatically act on token revocation. Tokens need to be easily revoked per user, per application, per creation date, per scope, etc., and per combination of any of these.
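To illustrate the kind of revocation criteria discussed above, here is a minimal in-memory sketch. The field names and store shape are my own, not any particular product’s API; the point is combining criteria, including “issued after the earliest possible time of compromise”:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Token:
    value: str
    user: str
    client_id: str
    issued_at: datetime
    revoked: bool = False

def revoke(tokens, user=None, client_id=None, issued_after=None):
    """Revoke every token matching ALL of the given criteria.
    Returns the number of tokens newly revoked."""
    count = 0
    for t in tokens:
        if user is not None and t.user != user:
            continue
        if client_id is not None and t.client_id != client_id:
            continue
        if issued_after is not None and t.issued_at < issued_after:
            continue
        if not t.revoked:
            t.revoked = True
            count += 1
    return count

store = [
    Token("t1", "alice", "app1", datetime(2013, 2, 1)),
    Token("t2", "alice", "app1", datetime(2013, 2, 5)),
    Token("t3", "bob",   "app2", datetime(2013, 2, 5)),
]
# Revoke only alice's tokens issued after the earliest possible
# time of compromise -- t1 survives, sparing that session's UX.
n = revoke(store, user="alice", issued_after=datetime(2013, 2, 3))
```

A real token management system would expose the same filter parameters through its revocation API rather than a Python function, but the selection logic is the same.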

Are you a token distributor? You should think hard about token governance. You should also think hard about scaling, security, integration with existing identity assets and interop, among other things. We’re covering these and more in this new eBook: 5 OAuth Essentials For API Access Control. Check it out.


Give me a JWT, I’ll give you an Access Token

January 4, 2013

One of the common misconceptions about OAuth is that it provides identity federation by itself. Although supporting OAuth with federated identities is a valid pattern and is essential to many API providers, it does require the combination of OAuth with an additional federated authentication mechanism. Note that I’m not talking about leveraging OAuth for federation (that’s OpenID Connect), but rather, an OAuth handshake in which the OAuth Authorization Server (AS) federates the authentication of the user.

There are different ways to federate the authentication of an end user as part of an OAuth handshake. One approach is to simply incorporate it as part of the authorization server’s interaction with the end user (handshake within handshake). This is only possible with grant types where the user is redirected to the authorization server in the first place, such as implicit or authorization code. In that case, the user is redirected from the app to the authorization server, to the IdP, back to the authorization server and finally back to the application. The federated authentication is transparent to the client application participating in the OAuth handshake. The OAuth spec (which describes the interaction between the client application and the OAuth Authorization Server) is not involved in this federation.

illustration1

Another approach is for the client application to request the access token using an existing proof of authentication in the form of signed claims (handshake after handshake). In this type of OAuth handshake, the redirection of the user (if any) is outside the scope of the OAuth handshake and is driven by the application. However, the exchange of the existing claims for an OAuth access token is the subject of a number of extension grant types.

One such extension grant type is defined in the SAML 2.0 Bearer Assertion Profiles for OAuth 2.0 specification, according to which a client application presents a SAML assertion to the OAuth authorization server in exchange for an OAuth access token. The Layer 7 OAuth Toolkit has implemented and provided samples for this extension grant type since its inception.

illustration2

Because of the prevalence of SAML in many environments and its support by many identity providers, this grant type has the potential to be leveraged in lots of ways in the Enterprise and across partners. There is, however, an alternative to bloated, verbose SAML assertions emerging, one that is more ‘API-friendly’ and based on JSON: JSON Web Token (JWT). JWT allows the representation of claims in a compact JSON format and the signing of such claims using JWS. For example, OpenID Connect’s ID Tokens are based on the JWT standard. The same way a SAML assertion can be exchanged for an access token, a JWT can also be exchanged for an access token. The details of such a handshake are defined in another extension grant type, described in JSON Web Token (JWT) Bearer Token Profiles for OAuth 2.0.

Give me a JWT, I’ll give you an access token. Although I expect templates for this extension grant type to be featured as part of an upcoming revision of the OAuth Toolkit, the recent addition of JWT and JSON primitives enables me to extend the current OAuth authorization server template to support JWT Bearer Grants with the Layer 7 Gateway today.

The first thing I need for this exercise is to simulate an application getting a JWT claim issued on behalf of a user. For this, I create a simple endpoint on the Gateway that authenticates a user and issues a JWT returned as part of the response.

idppolicy

Pointing my browser to this endpoint produces the following output:

idoutput
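For readers who want to see the moving parts, here is a stdlib-only sketch of minting a compact HS256 JWT, roughly what such an issuer endpoint returns. The issuer URL, subject and shared secret are illustrative values, not anything from the actual policy:

```python
import base64, hashlib, hmac, json, time

def b64url(data: bytes) -> str:
    """Base64url without padding, per the JWS compact serialization."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def mint_jwt(subject, issuer, secret, ttl=300):
    """Issue a compact HS256 JWT: header.payload.signature."""
    header = {"alg": "HS256", "typ": "JWT"}
    now = int(time.time())
    claims = {"iss": issuer, "sub": subject, "iat": now, "exp": now + ttl}
    signing_input = ".".join([
        b64url(json.dumps(header).encode()),
        b64url(json.dumps(claims).encode()),
    ])
    sig = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(sig)

token = mint_jwt("alice", "https://idp.example.com", b"shared-secret")
# token is three dot-separated base64url segments
```

In production you would of course use a vetted JWS implementation rather than hand-rolling the encoding, but the compact serialization really is this small.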

Then, I extend the Authorization Server token endpoint policy to accept and support the JWT bearer grant type. The similarities between the SAML bearer and the JWT bearer grant types are most obvious in this step: I was able to copy the policy branch and substitute JWT and JSON path policy constructs for the SAML and XPath ones. I can also base trust on HMAC-type signatures that involve a shared secret instead of PKI-based signature validation if desired.

newAS
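The HMAC/shared-secret trust option can be sketched in a few lines of stdlib Python. This is an illustration of JWS HS256 validation, not the Gateway’s actual policy logic; claim values are made up:

```python
import base64, hashlib, hmac, json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def verify_jwt(token: str, secret: bytes):
    """Check the HS256 signature against the shared secret;
    return the claims on success, None on failure."""
    signing_input, _, sig = token.rpartition(".")
    expected = hmac.new(secret, signing_input.encode(),
                        hashlib.sha256).digest()
    if not hmac.compare_digest(expected, b64url_decode(sig)):
        return None
    return json.loads(b64url_decode(signing_input.split(".")[1]))

# Build a sample token signed with the shared secret
header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
payload = b64url(json.dumps({"iss": "idp", "sub": "alice"}).encode())
signing_input = header + "." + payload
sig = b64url(hmac.new(b"shared-secret", signing_input.encode(),
                      hashlib.sha256).digest())
token = signing_input + "." + sig

claims = verify_jwt(token, b"shared-secret")   # claims recovered
tampered = verify_jwt(token, b"wrong-secret")  # None
```

A complete validation would also check `exp`, `iss` and `aud` against the authorization server’s expectations; only the signature check is shown here.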

I can test this new grant type using a REST client calling the OAuth Authorization Server’s token endpoint. I inject into this request the JWT issued by the JWT issuer endpoint and specify the correct grant type.

illustration5
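The token request itself is a plain form POST. Here is a sketch of its body using the grant type URN from the JWT bearer profile; the assertion value below is a placeholder, not a real JWT:

```python
from urllib.parse import urlencode

# Extension grant type URN from the JWT bearer profile for OAuth 2.0
JWT_BEARER = "urn:ietf:params:oauth:grant-type:jwt-bearer"

def jwt_bearer_request(jwt_assertion):
    """Form-encoded body for a JWT bearer grant token request."""
    return urlencode({
        "grant_type": JWT_BEARER,
        "assertion": jwt_assertion,
    })

# Placeholder compact JWT standing in for the issuer's output
body = jwt_bearer_request("eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJhbGljZSJ9.sig")
```

On success, the authorization server answers with the usual JSON token response (`access_token`, `token_type`, etc.), exactly as for any other grant type.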

I can now authorize an API call based on this new access token as I would any other access token. The original JWT claim is saved as part of the OAuth session and is available throughout the lifespan of this access token. This JWT can later be consulted at runtime when API calls are authorized inside the API runtime policy.


OAuth Token Management

February 11, 2012

Tokens are at the center of API access control in the Enterprise. Token management, the process through which the lifecycle of these tokens is governed, emerges as an important aspect of Enterprise API Management.

OAuth access tokens, for example, can have a lot of session information associated with them:

  • scope;
  • client ID;
  • subscriber ID;
  • grant type;
  • associated refresh token;
  • a SAML assertion or other token the OAuth token was mapped from;
  • how often it’s been used, and from where.

While some of this information is created during OAuth handshakes, some of it continues to evolve throughout the lifespan of the token. Token management is used during handshakes to capture all relevant information pertaining to granting access to an API and makes this information available to other relevant API management components at runtime.

During runtime API access, applications present OAuth access tokens issued during a handshake. The resource server component of your API management infrastructure, the gateway controlling access to your APIs, consults the token management system to assess whether or not the token is still valid and to retrieve the information associated with it, which is essential to deciding whether or not access should be granted. A valid token in itself is not sufficient: does the scope associated with it grant access to the particular API being invoked? Does the identity (sometimes identities) associated with it also grant access to the particular resource requested? The token management system also records runtime token usage for later reporting and monitoring purposes.
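The runtime check described above can be sketched as follows. The record fields are illustrative, not any specific product’s schema:

```python
import time

def authorize(token_store, token_value, required_scope):
    """Resource-server check: the token must exist, not be revoked,
    be unexpired, and carry the scope the called API requires."""
    record = token_store.get(token_value)
    if record is None or record["revoked"]:
        return False
    if record["expires_at"] <= time.time():
        return False
    return required_scope in record["scopes"]

# A token record as the token management system might hand it back
store = {
    "abc123": {
        "scopes": {"read_contacts"},
        "revoked": False,
        "expires_at": time.time() + 3600,
        "user": "alice",
    },
}
ok = authorize(store, "abc123", "read_contacts")       # granted
denied = authorize(store, "abc123", "write_contacts")  # wrong scope
```

A real deployment would add the identity-level resource check mentioned above and record each lookup for usage reporting; both hang off the same token record.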

The ability to consult live tokens is important not only to API providers but also to the owners of the applications to which they are assigned. A token management system must be able to deliver live token information, such as statistics, to external systems. An open, API-based integration is necessary for maximum flexibility. For example, an application developer may access this information through an API Developer Portal, whereas an API publisher may get this information through a BI system or an ops-type console. Feeding such information into a BI system also opens the possibility of detecting potential threats from unusual token usage (frequency, location-based, etc.). Monitoring and BI around tokens therefore relate to token revocation.

As one of the main drivers of API consumption in the enterprise is mobile applications, the ability to easily revoke a token when, for example, a mobile device is lost or compromised is crucial to the enterprise. The challenge in providing token revocation for an enterprise API comes from the fact that it can be triggered from so many sources. Obviously, the API provider itself needs to be able to easily revoke any token if suspicious usage is detected or if it is made aware of an application being compromised. Application providers may need the ability to revoke access from their side and, obviously, service subscribers need the ability to do so as well. The instruction to revoke a token may come from enterprise governance solutions, developer portals, subscriber portals, etc.

Finally, the revocation information is essential at runtime. The resource server authorizing access to APIs needs to be aware of whether or not a token has been revoked.

The management of API access tokens is an essential component of Enterprise API management. This token management must integrate with other key enterprise assets, ideally through open APIs. At the same time, token data must be protected and its access secured.


API management – Infrastructure VS SaaS

February 7, 2012

The Enterprise is buzzing with API initiatives these days. APIs not only serve mobile applications, they are increasingly redefining how the enterprise does B2B and integration in general. API management as a category follows different models. On one hand, certain technology vendors offer specialized infrastructure to handle the many aspects of API management. On the other, an increasing number of SaaS vendors offer a service which you subscribe to, providing a pre-installed, hosted, basic API management system. Hybrid models are emerging, but that’s a topic for a future post.

Before opting for a pure SaaS-based API management solution, consider the points below.

The Cloud Advantage
One can realize the benefits of cloud computing from an API management solution without losing the ability to control its underlying infrastructure. For example, IaaS solutions let you host your own API management infrastructure. Private clouds are also ideal for hosting API management infrastructure and provide the added benefit of running ‘closer’ to key enterprise IT assets. Through any of these alternatives to SaaS, an API management infrastructure optimizes computing resource utilization. IaaS- and private-cloud-based API management infrastructure also provides elasticity and can scale on demand. Look for API management solutions that offer a virtual appliance form factor to maximize the benefits of the cloud.

Return on investment
The advantage of a lower initial investment from SaaS-delivered API management solutions quickly becomes irrelevant when the ongoing cost of a per-hit billing structure climbs with traffic. With your own API management infrastructure in place, you leverage an initial investment over as many APIs as you want to deliver, no matter how popular the APIs become. Many early adopters who originally opted for the SaaS model (notably the more successful APIs) are currently making the switch to the infrastructure model in order to remedy a monthly cost that has grown to unmanageable levels. Unfortunately, such transitions sometimes prove more costly than any initial cost savings.

Agility, Integration
SaaS solutions provide easy-to-use systems isolated in their own silo. This isolation from the rest of your enterprise IT assets creates a challenge when you attempt to integrate the API management solution with other key systems. Do you have an existing web portal? How about existing identity, business intelligence or billing systems? If your API management solution is infrastructure-based, you have access to all the low-level controls and tooling required to integrate these systems together. Integrating your API management with existing identity infrastructure can be important for achieving runtime access control. Integrating with billing systems is crucial to monetizing your APIs. Feeding metrics from an API management infrastructure into an existing BI infrastructure provides better visibility, and so on.

Security
Depending on the audience for your APIs, various regulations and security standards may apply. Sensitive information travelling through a SaaS is outside of your control. Are any of your APIs potentially dealing with cardholder information? Does PCI-DSS certification matter? If so, a SaaS-based API management solution is likely to be problematic. In addition to the off-premise security issue, SaaS-based API management solutions offer limited security and access control options. For example, the ability to decide which versions of OAuth you implement matters if you need to cater to a specific breed of developers.

Performance
Detours increase latency. By routing API traffic through a hosted system before it gets to the source of the data, you introduce detours. By contrast, if you architect an API management infrastructure in such a way that the runtime controls happen in the direct path of the transaction, you minimize latency. For example, using the infrastructure approach, you can deploy everything in a DMZ. Also, by owning the infrastructure, you have complete control over the computing resources allocated to it.


RESTful Web services and signatures

October 2, 2010

A common question relating to REST security is whether or not one can achieve message level integrity in the context of a RESTful web service exchange. Security at the message level (as opposed to transport level security such as HTTPS) presents a number of advantages and is essential for achieving a number of advanced security related goals.

When faced with the question of how to achieve message-level integrity in REST, the typical reaction of an architect with a WS-* background is to incorporate an XML digital signature in the payload. Technically, including an XML dSig inside a REST payload is certainly possible; after all, XML dSig can be used independently of WS-Security. However, there are a number of reasons why this approach is awkward. First, REST is not bound to XML. XML signatures only sign XML, not JSON or the other content types popular with RESTful web services. Also, it is practical to separate the signature from the payload; this is why WS-Security defines signatures located in SOAP headers as opposed to using enveloped signatures. And most importantly, a REST ‘payload’ by itself has limited meaning without its associated network-level entities, such as the HTTP verb and the HTTP URI. This is a fundamental difference between REST and WS-*; let me explain further.

Below, I illustrate a REST message and a WS-* (SOAP) message. Notice how the SOAP message has its own SOAP headers in addition to transport-level headers such as HTTP headers.

The reason is simple: WS-* specifications go out of their way to be transport independent. You can take a SOAP message and send it over HTTP, FTP, SMTP, JMS, whatever. The ‘W’ in WS-* does stand for ‘Web’, but this etymology does not reflect today’s reality.

In WS-*, the SOAP envelope can be isolated. All the necessary information is in there, including the action. In REST, you cannot separate the payload from the HTTP verb, because this is what defines the action. You can’t separate the payload from the HTTP URI either, because this defines the resource being acted upon.

Any signature-based integrity mechanism for REST needs the signature to cover not only the payload but also those HTTP URIs and HTTP verbs. And since you can’t separate the payload from those HTTP entities, you might as well include the signature in the HTTP headers.

This is what is achieved by a number of proprietary authentication schemes today. For example, Amazon S3 REST authentication and the Windows Azure Platform both use HMAC-based signatures located in the HTTP Authorization header. Those signatures cover the payload as well as the verb, the URI and other key headers.
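A simplified sketch of such a scheme follows. It is in the spirit of the S3 and Azure approaches but is not their exact string-to-sign; header names and key values are illustrative:

```python
import base64, hashlib, hmac

def sign_request(secret, verb, uri, date_header, payload):
    """Simplified HMAC request signature covering the verb, the URI,
    a date header and a hash of the payload. Illustrative only --
    not the actual S3 or Azure algorithm."""
    payload_hash = hashlib.sha256(payload).hexdigest()
    string_to_sign = "\n".join([verb, uri, date_header, payload_hash])
    mac = hmac.new(secret, string_to_sign.encode(), hashlib.sha256).digest()
    return base64.b64encode(mac).decode()

sig = sign_request(b"secret-key", "PUT", "/photos/puppy.jpg",
                   "Tue, 27 Mar 2007 21:15:45 +0000", b"...image bytes...")
# Sent in the Authorization header; the server recomputes the HMAC
# with its copy of the secret and compares. Changing the verb, URI,
# date or body produces a different signature, so tampering is detected.
```

Because the verb and URI are inside the signed string, an attacker cannot replay a signed `GET` as a `DELETE` or redirect it at another resource, which is exactly the property plain payload signatures lack.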

OAuth v1 also defined a standard signature-based token which does just this: it covers the verb, the URI, the payload and other crucial headers. This is an elegant way to achieve integrity for REST. Unfortunately, OAuth v2 dropped this signature component of the specification. Bearer-type tokens are also useful but, as explained by Eran Hammer-Lahav in this post, dropping payload signatures completely from OAuth is very unfortunate.


Connecting the enterprise to the cloud marketplace

March 11, 2010

With Google launching its new cloud-based enterprise apps marketplace these days, many people are paying closer attention to a maturing overall cloud offering. One of its components, which caught my attention today, is ironically something that you are meant to install on the enterprise side: the Secure Data Connector (SDC).

The SDC lets the enterprise control access to its private resources (resources behind the enterprise’s firewall) from Google Apps. This illustrates an increasingly popular pattern in enterprise cloud adoption, where applications deployed in the cloud need to access private resources located securely behind the enterprise’s firewalls. This pattern is also referred to as ‘distributed SOA’: the idea that an enterprise’s SOA spans multiple service zones, both on- and off-premise.

Google’s SDC is essentially reverse-proxy software that you install on a server deployed in your DMZ. The SDC maintains a secure link with Google Apps and enforces basic rules relating to access control. Although some aspects of the solution borrow concepts from standards such as OAuth, the solution as a whole is mostly proprietary.

There is no doubt that this pattern is very important to address for any enterprise leveraging cloud-side applications. However, before deploying Google’s own gateway, and those of each cloud provider you will eventually rely on, consider a best-of-breed, specialized piece of infrastructure (an SOA gateway) that works across cloud providers using standards and meets the highest threat-protection requirements.

As it is, Google Apps can access private resources through such an SOA gateway just as well as through the proprietary SDC. This type of openness is crucial in your choice of cloud provider. Proprietary security mechanisms increase vendor lock-in, perhaps one of the most important barriers to adoption for rich enterprise cloud use. Investing in security solutions that only work with one cloud platform affects your long-term ability to switch providers.
