AWS Auth Caching Strategies
Warren Parad


Caching is difficult to get right and often means pulling additional frameworks into your code. Fine-tuning the balance between performance and data freshness takes time and experience. For User-Agent integrations (for example, an application UI running in your user's browser), it is even more crucial: the User-Agent is rarely under your control, yet it demands fast response times. This is why I often opt to provide cache recommendations for the service side instead. One such example is in the product I work heavily with: Authress.

That doesn’t mean you can’t cache returned values for longer.

I'm going to use Authress as the example for caching, so a quick summary makes sense. Authress provides login and access control for the applications you write, which means permission checks. (And yes, because we are a Swiss company, focusing on the EU market is critical.)

So if you're making a lot of the same low-variability permission checks, you may want to build a cache on top of Authress to limit your costs, although it is not strictly necessary. I'm going to walk through how AWS can be utilized to provide different caching opportunities when interacting with third-party services.

General caching strategies

In the context of authorization, the goal is frequently to cache authorization requests as much as is useful. The following strategies review the available possibilities. Let's assume that recommendations for cache times are always returned in the Cache-Control header of the response to user permission authorization requests.
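A client can honor those recommendations directly by reading that header. A quick sketch (the request URL is illustrative):

Reading the recommended cache time from a response:

// The URL is illustrative; any API response with a Cache-Control header
// works the same way.
const response = await fetch('https://auth.yourdomain.com/v1/users/User/resources');
const cacheControl = response.headers.get('cache-control') || '';

// Pull out max-age (in seconds); treat a missing directive as "do not cache".
const maxAgeMatch = /max-age=(\d+)/.exec(cacheControl);
const ttlSeconds = maxAgeMatch ? Number(maxAgeMatch[1]) : 0;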

A. API Gateway

If you run an API Gateway, there is an automatic caching strategy that supports caching data for a short period of time. If data can be cached on a per-request basis, then adding details about the user's permissions and authorization into the cache is an option. This is known as "Caching Authorization checks in API Gateway".

Depending on your API Gateway, this can work better for serverless solutions than for others. API Gateway caching uses the Access Token as the default cache key, which means you must add the Resource URI Path and the Request HTTP Method to the cache key to ensure a path-specific authorization is cached.

The most common and effective example is a list of all the tenants or customer accounts a user has access to. Since these lists rarely change, storing this information in the AWS API Gateway cache works well.

Getting the list of tenants a user has access to in the API Gateway authorizer:

import { AuthressClient, CollectionConfiguration } from '@authress/sdk';
const authressClient = new AuthressClient({
  authressApiUrl: 'https://auth.yourdomain.com' });

// userId is resolved from the caller's verified access token.
// Fetch only the top-level tenant resources the user has access to.
const userResources = await authressClient.userPermissions.getUserResources(
  userId, 'tenants/*', 10, null, CollectionConfiguration.TOP_LEVEL_ONLY);

return {
  context: {
    // Stringify because the authorizer context does not support arrays.
    userResources: userResources.data.resources.join(',')
  }
};

Danger!

I'm going to repeat this: you must ensure that the cache key associated with the API includes the HTTP Method and the full resource URI. If you are not sure what this means, please consult your API Gateway documentation. In API Gateway, update the Identity Source to include both the HTTP Method and the Path, which are both sourced from the context.

See API Gateway configuration vulnerabilities for more information.
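To make that update concrete, here is a sketch for an HTTP API (API Gateway v2) Lambda authorizer using the AWS JavaScript SDK. The API and authorizer IDs are placeholders, and REST APIs use a different identity source syntax, so verify the exact selection expressions against the documentation for your gateway type:

Adding the method and path to the authorizer cache key:

import { ApiGatewayV2Client, UpdateAuthorizerCommand } from '@aws-sdk/client-apigatewayv2';

const client = new ApiGatewayV2Client({});
await client.send(new UpdateAuthorizerCommand({
  ApiId: 'your-api-id',               // placeholder
  AuthorizerId: 'your-authorizer-id', // placeholder
  // Key the cached result on the token AND the method and path, so a
  // decision for one route is never reused for another.
  IdentitySource: [
    '$request.header.Authorization',
    '$context.httpMethod',
    '$context.path'
  ]
}));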

B. Content Delivery Networks and Edge-based caching

A CDN can often proxy all requests to a target provider. Instead of integrating directly with your API target of choice, you can proxy the requests through another solution that sits in front of your Auth provider. Some CDNs work well for this; others might not.

In the case of AWS, the canonical solution is AWS CloudFront. In my development team's experience, CloudFront can be a bit finicky when put in front of services that you don't own. Some of our users say it has worked; others have run into limitations from CloudFront, especially regarding cache times and configuration. Usually in these cases, you need a Lambda@Edge function attached to your CloudFront distribution to interact with the third party.

Due to this, the caching benefit that CloudFront provides may be limited. A common motivation I've found is cost reduction: the costs incurred by calling that third-party API. Costs are of course relevant at scale, but at that same scale I tend to think about volume discounts from the provider, rather than forcing the use of, and additionally paying for, a CDN on top of the third party.

Take Authress as an example: as a company we would much prefer to offer a discount than force you to build complexity. You would get the benefit directly from Authress Billing without having to write or maintain anything yourself or pay for a second technology on top (in price or total cost of ownership). If you are investigating a caching solution to handle scale primarily due to costs, please contact your provider. If your provider won't offer alternatives to make your integration seamless, then it might not be a provider that makes sense to continue with. Rather than trying to wrap a bad solution, find a better one!

Once a request is passed to Lambda@Edge, you have full capability to store and retrieve data through different data stores, such as DynamoDB. The implementation details, however, are up to you.
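A sketch of that idea, as an origin-request Lambda@Edge function backed by a DynamoDB table (the table name, the key shape, and the checkPermissionWithProvider helper are placeholders, not a prescribed design):

Lambda@Edge handler caching authorization decisions in DynamoDB:

import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { DynamoDBDocumentClient, GetCommand, PutCommand } from '@aws-sdk/lib-dynamodb';

const dynamoDb = DynamoDBDocumentClient.from(new DynamoDBClient({}));
const CACHE_MILLISECONDS = 10 * 1000;

export const handler = async event => {
  const request = event.Records[0].cf.request;
  const token = request.headers.authorization?.[0]?.value ?? '';
  // Key the cached decision on the caller, method, and URI together.
  const cacheKey = `${token}:${request.method}:${request.uri}`;

  const cached = await dynamoDb.send(new GetCommand({
    TableName: 'authorization-cache', // placeholder table name
    Key: { cacheKey }
  }));
  if (cached.Item && cached.Item.expiry > Date.now()) {
    return cached.Item.allowed ? request : { status: '403' };
  }

  // Cache miss: ask the third party (Authress in this article), then store
  // the decision so the next request at this edge skips the network call.
  const allowed = await checkPermissionWithProvider(token, request); // placeholder helper
  await dynamoDb.send(new PutCommand({
    TableName: 'authorization-cache',
    Item: { cacheKey, allowed, expiry: Date.now() + CACHE_MILLISECONDS }
  }));
  return allowed ? request : { status: '403' };
};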

Troubleshoot AWS CloudFront

I do want to share a quick callout though. One possible error you might see is related to a CloudFront stacking issue. Since Authress itself uses CloudFront, depending on your setup you might run into a stacking problem. At the moment, if you are seeing this issue, there isn't a way for CloudFront to be used in your scenario, so we recommend switching to Lambda@Edge with CloudFront and interacting with Authress through there. This is explored further in the next sections.

C. Self-hosted internal proxy

When you are at the point of wanting a proxy to cache authorization requests, a small microservice can be split out and created to proxy all the requests to your provider. This could run as a standalone service. The proxy would pass requests along to Authress after interacting with your cache datastore.

Hopefully the third party's SDKs support a configurable target endpoint. Instead of setting it to your Custom Domain, such as https://auth.yourdomain.com, you would set the target endpoint to your own microservice's URL.

Proxy service for caching permissions requests:

import { AuthressClient } from '@authress/sdk';

// Switch this to be your cache's URL:
const authressClient = new AuthressClient({
  authressApiUrl: 'https://cache.yourdomain.com' });

const userId = 'User';
// resourceId identifies the resource being accessed; it comes from your service.
const resourceUri = `resources/${resourceId}`;
const permission = 'READ';

try {
  await authressClient.userPermissions.authorizeUser(
    userId, resourceUri, permission);
} catch (error) {
  if (error.code === 'UnauthorizedError') {
    return { statusCode: 403 };
  }

  throw error;
}
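On the service side, a minimal sketch of the proxy itself could look like the following, using express and a naive in-memory map. It forwards the original request untouched and caches only the resulting status code; the route shape and TTL are illustrative, not the exact Authress API contract:

A sketch of the caching proxy service:

import express from 'express';

const app = express();
const cache = new Map(); // cache key -> { status, expiry }

// Illustrative route: forward permission checks to Authress, caching each
// decision for 10 seconds.
app.get('/v1/users/:userId/resources/:resourceUri/permissions/:permission', async (req, res) => {
  const key = `${req.headers.authorization}:${req.originalUrl}`;
  const cached = cache.get(key);
  if (cached && cached.expiry > Date.now()) {
    return res.status(cached.status).end();
  }

  const response = await fetch(`https://auth.yourdomain.com${req.originalUrl}`, {
    headers: req.headers.authorization ? { authorization: req.headers.authorization } : {}
  });
  cache.set(key, { status: response.status, expiry: Date.now() + 10 * 1000 });
  return res.status(response.status).end();
});

app.listen(8080);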

For assistance with creating a proxy, I have to recommend reaching out to the provider with questions. Many products have secret fields and configurations in their SDKs; in the case of our own SDKs there is additional security configuration in there, and attempting to side-step the SDK to build a custom caching layer without it will cause you to lose those optimizations.

D. SDK configured caching

Recently I've been investing further resources into improving built-in caching for our own SDKs, but in general the SDKs for each language and each provider have varying levels of support for caching.

Caching in the SDK works well for longer-lived containers. For sustained requests to your API, even with a serverless solution, your function will have this data cached for the lifetime of the container. This works great for balanced, predictable usage; it is less valuable for bursts. For non-serverless solutions, caching provided by the SDK in your language can work out of the box.
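The container-lifetime behavior falls out of module scope: anything created outside the handler is constructed once per container and reused across warm invocations. A minimal sketch (for brevity, entries here never expire; the in-memory example below adds a TTL):

Module-scope caching across warm Lambda invocations:

import { AuthressClient } from '@authress/sdk';

// Module scope: created once per container and reused across invocations,
// so anything cached here survives between warm requests.
const authressClient = new AuthressClient({
  authressApiUrl: 'https://auth.yourdomain.com' });
const containerCache = new Map();

export const handler = async event => {
  const key = `${event.userId}:${event.resourceUri}`;
  if (!containerCache.has(key)) {
    // The first request in this container pays the network cost.
    let allowed = true;
    try {
      await authressClient.userPermissions.authorizeUser(
        event.userId, event.resourceUri, 'READ');
    } catch (error) {
      if (error.code !== 'UnauthorizedError') { throw error; }
      allowed = false;
    }
    containerCache.set(key, allowed);
  }
  // Warm invocations reuse the cached decision.
  return { allowed: containerCache.get(key) };
};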

Some SDKs support caching and caching configuration, and others do not. This is contingent on the tools available in the language as well as on libraries supporting memoization.

In-memory caching

Depending on the sort of caching you are looking for and how your requests look, in-memory caching can often provide the best impact. It gives you full control over how caching is done. There are a bunch of options available, and which levers you want to pull will be based on your core needs.

Long term, if the SDK you are using doesn't support the caching configuration you need and you have a solution you have been using effectively, please let us (or your provider) know, and hopefully they'll opt for converting your in-memory caching configuration into a first-class option in the SDK for that language. (Note: a company value of Customer Obsession may be required for this last part to work.)

Here is an example of how such a cache could work:

In-memory cache wrapper for javascript:

import { AuthressClient } from '@authress/sdk';
const authressClient = new AuthressClient({
  authressApiUrl: 'https://auth.yourdomain.com' });

// Create a cache that stores results for 10 seconds.
// (Cache is your own wrapper; a sketch of one follows below.)
const cache = new Cache(10 * 1000);

const userId = 'User';
const resourceUri = `resources/${resourceId}`;
const permission = 'READ';

let hasAccess = await cache.getValue(userId, resourceUri, permission);
// No value is cached
if (hasAccess === null) {
  try {
    await authressClient.userPermissions.authorizeUser(
      userId, resourceUri, permission);
    await cache.storeValue(userId, resourceUri, permission, true);
    hasAccess = true;
  } catch (error) {
    if (error.code !== 'UnauthorizedError') {
      throw error;
    }
    // Cache the denial as well, so repeated unauthorized requests stay cheap.
    await cache.storeValue(userId, resourceUri, permission, false);
    hasAccess = false;
  }
}

if (!hasAccess) {
  return { statusCode: 403 };
}
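The Cache used above is not a built-in; here is a minimal sketch of one, backed by a Map with a fixed TTL (a production implementation may also want size limits and eviction):

A minimal TTL cache:

// A sketch of the Cache wrapper used in the example above.
class Cache {
  constructor(ttlMilliseconds) {
    this.ttlMilliseconds = ttlMilliseconds;
    this.store = new Map();
  }

  // Returns the cached value, or null when missing or expired.
  async getValue(userId, resourceUri, permission) {
    const entry = this.store.get(`${userId}|${resourceUri}|${permission}`);
    if (!entry || entry.expiry < Date.now()) {
      return null;
    }
    return entry.value;
  }

  async storeValue(userId, resourceUri, permission, value) {
    this.store.set(`${userId}|${resourceUri}|${permission}`, {
      value,
      expiry: Date.now() + this.ttlMilliseconds
    });
  }
}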

Shared internal cache

One strategy that works well with multiple services, when not using serverless or even sometimes when using serverless, is a server optimized for providing fast-lookup caches. That is, if you have multiple services that all need to interact with the same third party in the same way, and access to that third party isn't necessarily well secured, or all your services use similar credentials for accessing that third party, you might benefit from a shared cache.

Back to the authorization example: after an SDK returns a success for an authorization request, you could store the result in a cache-optimized solution. A recommendation for this strategy would be to use Valkey. Most cloud providers either offer a managed Valkey solution or support deploying the open source container to your infrastructure, and AWS is no exception.
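As a sketch using the ioredis client, which speaks the protocol Valkey is compatible with (the endpoint URL and the checkWithAuthress helper are placeholders):

Shared authorization cache backed by Valkey:

import Redis from 'ioredis';

// Valkey is protocol-compatible with Redis, so ioredis can talk to it.
const valkey = new Redis('redis://valkey.internal.yourdomain.com:6379'); // placeholder endpoint

const userId = 'User';
const resourceUri = `resources/${resourceId}`;
const permission = 'READ';
const key = `authz:${userId}:${resourceUri}:${permission}`;

// Every service instance checks the shared cache first.
let decision = await valkey.get(key);
if (decision === null) {
  // Cache miss: ask the provider, then share the decision for 10 seconds.
  const allowed = await checkWithAuthress(userId, resourceUri, permission); // placeholder helper
  decision = allowed ? '1' : '0';
  await valkey.set(key, decision, 'EX', 10);
}

if (decision === '0') {
  return { statusCode: 403 };
}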

Further Caching Support

Have some ideas that aren't listed here and think I should extend this list? Please let me know so I can extend the recommended caching strategies in this article.

For help understanding this article or how you can implement a solution like this one in your services, feel free to reach out to me and join my community:

Join the community
