API Gateway and OpenID Connect


API Management is the new black

The age of digital transformation has already begun. The API economy is not something new and was already announced a long time ago:

"All service interfaces, without exception, must be designed from the ground up to be externalizable." (Jeff Bezos)


Being able to manage their APIs is becoming a strategic concern for many companies, partly in order to quickly provide services to their clients.

APIs have become products, and new types of architecture have emerged to take advantage of them. Microservices, serverless or Function-as-a-Service architectures, for example, rely heavily on APIs.

Forrester, the US market research firm, has estimated that 40% of US companies will be using API management solutions by 2020.

It is also interesting to note that this trend is reinforced by legal regulations. Indeed, on October 8, 2015, the European Parliament adopted the European Commission proposal to create safer and more innovative European payments: PSD2, Directive (EU) 2015/2366.

One of the consequences is that API Management is becoming a strong enabler in regulation directives implementation. This is another signal that puts API management at the forefront.

In a nutshell

So, API management is almost as trendy as a good Netflix show. But what is an API management platform in technical terms?

Main features of an API management solution are:

  • An API gateway: This is the entry point for incoming requests. The gateway routes each request to a back-end API using specific rules. In addition, the gateway is able to apply filters to both the request and the response of the targeted backend. The filters can relate to quotas, security (AuthN and AuthZ), accounting, transformation, etc.
  • An analytic platform: It collects metrics from the gateway and provides a dashboard displaying them. The main objective of this platform is to provide information on the API usage (for billing purposes for example).
  • A development portal: It exposes the API documentation and manages developer registration: obtaining API credentials, providing access to sandbox or production environment, etc.
  • A facade API: It allows high-level APIs to be exposed, performing composition, orchestration or transformation (SOAP to REST, for example) of requests. Although major SaaS vendors offer a frontend API (with an impressive graphical interface), this functionality is often implemented as a custom development due to important specific needs.

Depending on your needs, you may only want a subset of those features. The monetization of APIs is often not the main target.

But don’t be fooled. API management is nothing more than a tool. Before installing a complete solution, you may need to gradually shift your architecture to an API strategy. The transition to an API-driven architecture is, above all, a design and development problem. If your application exposes a badly designed API, hiding it behind an API management system won’t make it better.

That being said, let’s get back to our topic.


As I said before, API management is quite trendy. There is fierce competition in this sector.

We can find major closed-source solutions like Apigee by Google or 3scale by Red Hat. For the record, Apigee was acquired by Google in 2016 for a modest fee of $625 million! I think we can see that as a small indication of the global interest in, and prospects for, this topic.

Both solutions provide all the functionality expected from a complete API management solution, on public, hybrid or private cloud platforms.

On-premise is an option, as long as you can afford it and are able to install and operate such a complex platform. For more humble strategic ambitions or needs, public cloud offerings are much more attractive.

But be careful: these companies are subject to the Patriot Act, and if you are not an American company, this is something you may have to take seriously!

What about the Open Source ecosystem? Again, there is no lack of choice. There are very good Open Source alternatives.

Here is a non-exhaustive list:

Some Open Source solutions with limitations on commercial usage, such as Tyk, a full-featured API management solution built with Go.

Some fully Open Source solutions (without usage limitations), such as Gravitee, a full-featured (and very promising) API management solution built with Java.

And some hybrid Open Source solutions (where only a subset of the API management solution is open), such as Kong, a very popular API gateway built in Lua on top of NGINX.

This last one will be used as a playground for this article.

Not the King, but the Kong

Don’t get me wrong. Kong is not a complete API management solution, but *only* an API gateway, which is the first building block of a complete API management solution.

If you want something more complete, you can take a look at the Enterprise Edition which provides missing features such as a graphical administration interface, a development portal and an analytic platform.

Kong logo

Then why talk about Kong? Currently, Kong is the most popular Open Source API gateway. But why? We can easily find some clues to this popularity:

  • The underlying technology: NGINX is a robust and very popular web server. Additional functions are powered by Lua, which is not the most popular scripting language, but is very relevant in terms of simplicity and integration with NGINX internals.

  • The scalability: For simple yet powerful needs, you can use PostgreSQL as a database. But if scalability and high availability are mandatory to your API strategy, you can use Apache Cassandra, one of the most popular NoSQL databases.

  • The simplicity: Setup is trivial. Linux distribution packages, Docker containers, Kubernetes configurations, integrations with the main PaaS providers… you have the choice. Operating the solution is also straightforward. Everything is an API, and automating or monitoring the platform with your own tooling should not be a big deal.

And last but not least: the pluggable features. Everything in Kong is a plugin, and you can find:

  • Authentication plugins: Protect your API with authentication layers.
    • Basic: Add Basic Authentication to your APIs, with username and password protection.
    • Key: Add Key Authentication (also referred to as an API key) to your APIs.
    • OAuth2: Add an OAuth 2.0 authentication layer with the Authorization Code Grant, Client Credentials, Implicit Grant or Resource Owner Password Credentials Grant flow.
    • JWT: Verify requests containing HS256 or RS256 signed JSON Web Tokens.
    • HMAC: Add HMAC Signature Authentication to your APIs to establish the identity of the consumer.
    • LDAP: Add LDAP Bind Authentication to your APIs, with username and password protection.
  • Security plugins: Protect your API with additional security layers.
    • ACL: Restrict access to an API by whitelisting or blacklisting consumers using arbitrary ACL group names.
    • CORS: Add Cross-origin resource sharing (CORS) to your API.
    • IP restriction: Restrict access to an API by either whitelisting or blacklisting IP addresses.
    • Bot detection: Detects and blocks bots or specific HTTP clients.
    • SSL: Add an SSL certificate for an underlying service.
  • Traffic Control plugins: Manage, throttle and restrict inbound and outbound API traffic.
    • Rate limiting: Rate limit how many HTTP requests a developer can make.
    • Response rate limiting: Rate limit based on a custom response header value.
    • Request size limiting: Block requests with bodies greater than a specific size.
  • Analytics & Monitoring plugins: Visualize, inspect and monitor APIs and microservices traffic.
    • Datadog, Galileo, Runscope: Send metrics to external analytic platforms.
  • Transformation plugins: Transform request and responses on the fly.
    • Request transformation: Modify the request before hitting the upstream server.
    • Response transformer: Modify the upstream response before returning it to the client.
    • Correlation ID: Correlate requests and responses using a unique ID.
  • Logging plugins: Log request and response data using several transport solutions:
    • TCP, UDP, HTTP, File, Syslog, StatsD, Loggly (a logging SaaS).

Add to that a responsive and open community, and you get a popular Open Source project.

Let’s play

The subject of this playground will be to expose an API through a gateway, and protect it with traffic control rules and with our enterprise-class IAM solution.

Install the gateway

A straightforward way to setup Kong is Docker. The Kong team provides an official image and a source repository with a complete configuration that you can orchestrate with Docker Compose.

It’s also possible to do a manual setup if you want.

But in order to improve our playground, I created another repository with some automation tools:

$ git clone https://github.com/ncarlier/kong-integration-samples
$ cd kong-integration-samples
$ git submodule init && git submodule update
$ make deploy

This will setup:

  • A PostgreSQL database used by Kong as its data-store.
  • A Kong migration task used to setup the database.
  • A Kong instance.
  • And a Konga instance. Konga is a clean and complete graphical interface for Kong, built by the community. Konga is here only as an example (and because the tool deserves to be known). It’s a great tool to explore Kong’s features. Note that, for the moment, Konga doesn’t support the new Service and Route API (introduced in version 0.13).

Once the containers have started, you can access the following exposed ports:

  • 8000: API entry point of the Gateway.
  • 8001: Configuration API of the Gateway.
  • 1337: Konga Web UI.

Setup the API

It’s time to play and declare our first API. Since its 0.13 release, Kong separates the API declaration into two distinct resources: Services and Routes.

First we need to create a service endpoint:

$ http --form POST :8001/services/ name='chuck' url='http://api.icndb.com/jokes/random'

Then, we expose this service by providing a routing strategy:

$ http --form POST :8001/services/chuck/routes paths\[\]='/chuck'

Now our service is available on the path ‘/chuck’. Kong allows you to refine this routing strategy: you can expose the service according to criteria such as paths, methods, host names, and so on.
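For example, here is a hypothetical refinement (the host name is made up) that restricts the route to GET requests on a specific host, using the same Routes API as above:

```shell
# Hypothetical example: only GET requests for api.example.com
# on the /chuck path will reach the 'chuck' service.
http --form POST :8001/services/chuck/routes \
  paths\[\]='/chuck' \
  methods\[\]='GET' \
  hosts\[\]='api.example.com'
```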

Let’s try our new service:

$ http :8000/chuck

Everything should work perfectly. Now it’s time to play with the API Gateway features.

Control the traffic

A common use of an API gateway is to control traffic. We all know that bothering Chuck Norris is not a good idea. We should therefore add a rate limit to this API.

$ http --form POST :8001/services/chuck/plugins name=rate-limiting config.minute=3

Now we can only call our API three times per minute.

$ for i in {1..4}; do http :8000/chuck; done

Note that the response header gives you information about your current usage quota.
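You can check this by displaying only the response headers. With the rate-limiting plugin enabled, Kong adds quota headers to the response (header names may vary slightly between versions):

```shell
# Show only the response headers; look for something like
# X-RateLimit-Limit-Minute and X-RateLimit-Remaining-Minute.
http --headers :8000/chuck
```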

This is just a brief introduction to illustrate how easy it is to use Kong. Now, let’s dive a little into the possibilities of the API Gateway by interfacing it with an Identity and Access Management system.

Identity & Access Management (IAM)

Kong provides some authentication plugins. But in a professional context it is common to have an external and central IAM solution. And you will certainly want to link your API gateway with it.

Regarding IAM solutions you have a lot of choices (as for API Management solutions). Fortunately, standards have emerged to manage authentication in a relatively common way. OpenID Connect is one of these standards.

OpenId Connect

Therefore, the impact of your choice does not really matter when it comes to integration. This is a very good point because it makes IAM a technical brick that can easily be replaced depending on your motivations.

Here is a non-exhaustive list:

Some complete and popular SaaS solutions like Okta or Auth0.

Some complete Open Source solutions such as Keycloak or Gluu.

Some on-premise commercial solutions such as Red Hat Single Sign-On (basically Keycloak but with Red Hat support).

And some DIY solutions such as Spring Cloud Security.

For our playground, we will use Keycloak which is perhaps, for the moment, the most robust and popular Open Source solution.

Let’s deploy our IAM:

$ make with-keycloak deploy

This will setup:

  • A Keycloak instance. Admin console is available on the 8080 local port.
  • And a configuration container. This Docker Compose task will use the Keycloak CLI (kcadm.sh) to configure our instance as follows:
    • Create a sample realm. Named… “Sample”.
    • Register a new client named “sample-api”. This client will be “confidential” (capable of secure client authentication and able to keep its credentials confidential). It will use the Kong URL as the authorized redirect URL and allow the “direct access grants” flow to facilitate token requests.
    • Create a role.
    • Create a group.
    • Assign the role to the group.
    • Create a new user with credentials (test/test) in the new group.

All this is possible thanks to the administration console (GUI). But automation is always a good friend.

Now we are ready to protect our API.

Protect the access (with JWT)

A simple way to connect our IAM to our API gateway is to use the JWT plugin. OpenID Connect tokens are built on JWT. It is therefore quite simple to validate an access token provided by our IAM: simply check the JWT signature and some claims such as the validity date, the issuer, etc.

By the way, if you are interested in JWT, I recommend taking a look at another article on this blog that dissects the topic.
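As a quick illustration of what the plugin looks at, here is a small helper (my own, not part of Kong) that decodes the payload of a JWT so you can inspect claims like iss. It does not verify the signature; the gateway does the real verification:

```shell
# Debugging helper (not part of Kong): print the claims inside a JWT
# without verifying it.
decode_jwt_payload() {
  # Take the second dot-separated segment and turn base64url into base64
  seg=$(printf '%s' "$1" | cut -d '.' -f 2 | tr '_-' '/+')
  # Restore the padding that base64url strips
  case $(( ${#seg} % 4 )) in
    2) seg="${seg}==" ;;
    3) seg="${seg}=" ;;
  esac
  printf '%s' "$seg" | base64 -d
}

# Build a dummy token (header and signature left out) just to demonstrate
claims='{"iss":"http://localhost:8080/auth/realms/sample","sub":"test"}'
payload=$(printf '%s' "$claims" | base64 | tr -d '=\n' | tr '+/' '-_')
decode_jwt_payload "header.$payload.signature"
```

Pipe the output to jq if you want it pretty-printed.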

Let’s do this with Kong! We must first enable the JWT plugin for our service:

$ http --form POST :8001/services/chuck/plugins name=jwt

Now, if we try to access the service, we are properly rejected:

$ http :8000/chuck

To access the API, we must declare an API consumer (or API client for the IAM side).

$ http --form POST :8001/consumers username=httpie

Next, we configure the API consumer with the IAM information: the issuer name and the public key used to verify the token signature.

You can retrieve the Realm public key from the Keycloak management console or by using the API. I made a little helper to automate this:

$ make keys

It allows the Realm certificate and public key to be retrieved and stored in the current folder.

We can now configure the consumer:

$ http --form POST :8001/consumers/httpie/jwt key='http://localhost:8080/auth/realms/sample' algorithm=RS256 rsa_public_key=@pub.pem

Note that the consumer key must be the Realm URL, because Keycloak puts that URL in the JWT issuer claim (iss), and Kong matches the token’s issuer against the consumer key.

Now we are able to request access to the service. In order to simplify the token retrieval flow, we configured the client to allow the “Direct Access Grant” flow. In other words, we are able to get an access token with a single HTTP call that authenticates the client AND the subject directly. Usually, we would use the standard flow, which requires a first step where the subject authenticates and obtains an authorization code, and a second step where the client exchanges that code (together with its own credentials) for tokens. This is the preferable flow, but for the purpose of the playground, we will simplify it.
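For comparison, here is a sketch of what the standard Authorization Code flow would look like against Keycloak (the code and redirect_uri values below are placeholders obtained from the browser redirect):

```shell
# 1. The user authenticates in the browser against
#    :8080/auth/realms/sample/protocol/openid-connect/auth
#    with client_id=sample-api, response_type=code and a redirect_uri.
# 2. Keycloak redirects back with a ?code=... parameter, which the
#    client exchanges (with its own credentials) for tokens:
http --form POST :8080/auth/realms/sample/protocol/openid-connect/token \
  client_id=sample-api \
  client_secret=$client_secret \
  grant_type=authorization_code \
  code=$code \
  redirect_uri=$redirect_uri
```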

As I said, we need to authenticate the consumer (or “client” from the IAM perspective). A client is generally authenticated with an id and a secret (if it is confidential and not public). You can retrieve the client’s secret from the admin console or by using the API. Once again, I made a little helper:

$ make secret

This command will use the Keycloak CLI to retrieve the secret into a file: secret.json. We will store this secret in a variable for later use:

$ client_secret=`make secret | jq -r .value`

We now have everything we need to claim the access token:

$ access_token=`http --form POST :8080/auth/realms/sample/protocol/openid-connect/token client_id=sample-api client_secret=$client_secret grant_type=password username=test password=test | jq -r .access_token`

Note that the grant type is set to password to enable the “Direct Access Grant” flow.

To do this properly, we should also keep the refresh token and use it to renew our access token for future needs (and thus avoid providing our login credentials again).
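A sketch of that renewal, assuming the same realm and client as above, and that the refresh token was captured from the first token response:

```shell
# Capture the refresh token alongside the access token...
refresh_token=$(http --form POST :8080/auth/realms/sample/protocol/openid-connect/token \
  client_id=sample-api client_secret=$client_secret \
  grant_type=password username=test password=test | jq -r .refresh_token)

# ...and later exchange it for a fresh access token, without
# providing the user's credentials again:
access_token=$(http --form POST :8080/auth/realms/sample/protocol/openid-connect/token \
  client_id=sample-api client_secret=$client_secret \
  grant_type=refresh_token refresh_token=$refresh_token | jq -r .access_token)
```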

And finally! We are able to call our API:

$ http :8000/chuck "Authorization: bearer $access_token"

And here we are. A well secured and protected API with powerful tools.

What next?

We have seen how to protect our API with OpenID Connect. Well… not exactly. We have protected our API with JWT which happens to be used by OpenID Connect. This is great because we don’t have a strong coupling between our IAM and our API management system. BUT we lost some interesting features such as:

  • Automatic redirection to the connection system when we try to access the API without tokens.
  • Automatic renewal of the access token using the refresh token.
  • Single Sign On/Out features.

And you know what? It’s also very easy to do with Kong!

But because this blog post is also labeled as TL;DR I will explain this in another blog post.

Written by

Nicolas Carlier

Developer junkie, open source enthusiast, data nerd, command line addict... and also a human