High-level things

Tecken requires the following services in production:

  1. PostgreSQL 9.5

  2. Redis for general “performance” caching

  3. Redis for LRU caching

General Configuration

The Django settings depend on an environment variable called DJANGO_CONFIGURATION. The Dockerfile automatically sets this to Prod and the local development setup overrides it to Dev.

# If production
DJANGO_CONFIGURATION=Prod

# If stage
DJANGO_CONFIGURATION=Stage

# If development server
DJANGO_CONFIGURATION=Dev

You need to set a random DJANGO_SECRET_KEY. It should be cryptographically random and a decent length:
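Any sufficiently long random string works. A minimal sketch of generating one (the exact method is up to you; this uses the Python standard library):

```shell
# Generate a long, URL-safe random value
python3 -c "import secrets; print(secrets.token_urlsafe(50))"
# Then set the result in the environment, e.g.:
# DJANGO_SECRET_KEY=<the generated value>
```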


The ALLOWED_HOSTS needs to be a list of valid domains that are used from the outside to reach the service. If the service is only reached via one single domain, the list contains just that domain. For example:
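A sketch with a hypothetical domain (symbols.example.com is a placeholder, not the real hostname), assuming the setting is read from the environment with a DJANGO_ prefix like the other settings in this document:

```shell
# Single domain; multiple domains would be comma separated
DJANGO_ALLOWED_HOSTS=symbols.example.com
```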

For Sentry the key is SENTRY_DSN which is sensitive but for the front-end (which hasn’t been built yet at the time of writing) we also need the public key called SENTRY_PUBLIC_DSN. For example:
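A sketch with made-up DSN values (real DSNs come from the Sentry project settings):

```shell
# The secret DSN (server side) and the public DSN (front-end)
SENTRY_DSN=https://abc123:xyz789@sentry.example.com/1
SENTRY_PUBLIC_DSN=https://abc123@sentry.example.com/1
```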


Note! There are two configurations related to S3 that need to be configured (presumably differently) on every deployment environment. They are: DJANGO_SYMBOL_URLS and DJANGO_UPLOAD_DEFAULT_URL. See the section below about AWS S3.


Parts of Tecken use boto3 to talk directly to S3. For that to work, the following environment variables need to be set:
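A sketch assuming the standard boto3 credential environment variables, which boto3 picks up automatically (the values are placeholders):

```shell
# Standard AWS credential variables read by boto3
AWS_ACCESS_KEY_ID=AKIA...
AWS_SECRET_ACCESS_KEY=...
```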


This S3 access needs to be able to talk to the org.mozilla.crash-stats.symbols-public bucket which is in us-west-2.


This default is likely to change in mid-2017.


At the moment, the only configuration for Gunicorn is the number of workers. The default is 4 and it can be overridden by setting the environment variable GUNICORN_WORKERS.

The number should ideally be a function of the web head’s number of cores according to the formula (2 x $num_cores) + 1, as recommended by the Gunicorn documentation.
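For example, applying that formula to a hypothetical 4-core web head:

```shell
# (2 x 4) + 1 = 9
GUNICORN_WORKERS=9
```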


First of all, Tecken will never create S3 buckets for you. They are expected to already exist. There is one exception to this: if you do local development with Docker and minio, the configured buckets are automatically created when the server starts. This is a convenience just for local development, to avoid needing any complicated instructions to get up and running.

S3 buckets need to be specified in two distinct places: one for where Tecken can read symbols from and one for where Tecken can write.


The reading configuration (used for downloading) is called DJANGO_SYMBOL_URLS. It’s a comma-separated string of URLs. Each URL is deconstructed to extract things like the AWS region, bucket name, prefix, and whether the bucket should be reached by HTTP (i.e. public) or by boto3 (i.e. private).

What determines if a symbol URL is private or public is if it has access=public inside the query string.

The bucket name is always expected to be the first part of the URL path. For example, in a URL whose path is /bucket-name-here/rest/is/prefix, the bucket name is bucket-name-here and the prefix is rest/is/prefix.
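A sketch with hypothetical bucket names; note the access=public query string marking the second bucket as public:

```shell
# Two buckets: the first private (reached via boto3), the second public (via HTTP)
DJANGO_SYMBOL_URLS=https://s3-us-west-2.amazonaws.com/my-private-bucket/,https://s3-us-west-2.amazonaws.com/my-public-bucket/?access=public
```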


The write configuration (used for uploading) is controlled by two different environment variables:

1. DJANGO_UPLOAD_DEFAULT_URL - a URL indicating the bucket where, by default, all uploads go unless they match an exception based on the uploader’s email address.

2. DJANGO_UPLOAD_URL_EXCEPTIONS - a Python dictionary that maps an email address, or an email-address glob pattern, to a different URL.

As an example, imagine:
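A sketch with hypothetical values; the exception maps a glob of email addresses to a bucket called mozilla-symbols-private (the default bucket name and the email pattern are placeholders):

```shell
# Default upload destination
DJANGO_UPLOAD_DEFAULT_URL=https://s3-us-west-2.amazonaws.com/mozilla-symbols-public/
# Exceptions: a Python dict mapping email (glob) patterns to other URLs
DJANGO_UPLOAD_URL_EXCEPTIONS={"*@example.com": "https://s3-us-west-2.amazonaws.com/mozilla-symbols-private/"}
```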


In this case, if the person doing the upload has an email address matching an exception pattern, all files within the uploaded .zip get uploaded to a bucket called mozilla-symbols-private.


This functionality with DJANGO_UPLOAD_URL_EXCEPTIONS is a bit clunky, to say the least. It exists to get parity with symbol upload as it was done in Socorro. In the future, this kind of configuration is best moved to user land, so that superusers can decide about these kinds of exceptions.

Upload By Download

To upload symbols, clients can either HTTP POST a .zip file, or HTTP POST a form field called url. Tecken will then download the file from that URL and proceed as normal (as if the same file had been part of the upload).

The environment variable to control this is DJANGO_ALLOW_UPLOAD_BY_DOWNLOAD_DOMAINS. Its default is:

Note that, if you decide to add another domain, and requests to that domain trigger redirects to yet another domain, you have to add that domain too. For example, if you add a domain that redirects to a second domain, you need to add both.
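A sketch with hypothetical domains, illustrating the redirect rule:

```shell
# downloads.example.com redirects to cdn.example.com, so both are listed
DJANGO_ALLOW_UPLOAD_BY_DOWNLOAD_DOMAINS=downloads.example.com,cdn.example.com
```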


Symbolication uses the same configuration as Download does, namely DJANGO_SYMBOL_URLS.

The value of DJANGO_SYMBOL_URLS is encoded (as a short hash) into every key Redis uses to store previous downloads as structured data. Meaning, if you change DJANGO_SYMBOL_URLS on an already running instance, all existing Redis store caching will be reset. The old keys, now no longer accessible, will slowly be recycled, since the Redis store uses an LRU eviction policy.

Try Builds

Try build symbols come from builds with a much more relaxed access policy. That’s why it’s important that these kinds of symbols don’t override the non-Try build symbols. They are also much more short-lived by nature, so when stored in S3 they should have a much shorter expiration time than all other symbols.

The configuration key to set is DJANGO_UPLOAD_TRY_SYMBOLS_URL and it works very similarly to DJANGO_UPLOAD_DEFAULT_URL.

It’s blank (aka unset) by default. If not explicitly set, it becomes the same as DJANGO_UPLOAD_DEFAULT_URL but with the prefix try inserted after the bucket name and before anything else.
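A sketch of that derivation, with a hypothetical bucket name:

```shell
# If this is the default upload URL...
DJANGO_UPLOAD_DEFAULT_URL=https://s3-us-west-2.amazonaws.com/my-symbols-bucket/
# ...then, when DJANGO_UPLOAD_TRY_SYMBOLS_URL is unset, it effectively becomes:
# https://s3-us-west-2.amazonaws.com/my-symbols-bucket/try/
```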


If the URL points to an S3 bucket that doesn’t already exist, you have to manually create the S3 bucket first.


The environment variable that needs to be set is: DATABASE_URL and it can look like this:
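A sketch with made-up credentials, host, and database name:

```shell
# postgres://user:password@host:port/database
DATABASE_URL=postgres://username:password@db.example.com:5432/tecken
```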


The connection needs to be able to connect in SSL mode. The database server is expected to have a very small footprint, so as long as it can scale up in the future it doesn’t need to be big.


Author’s note: I don’t actually know the best practice for setting the credentials, or whether that’s automatically “implied” by the VPC groups.

Redis Cache

The environment variable that needs to be set is: REDIS_URL and it can look like this:
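A sketch with a hypothetical host:

```shell
# Plain Redis URL; database number 0
REDIS_URL=redis://redis-cache.example.com:6379/0
```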


The amount of space needed is minimal. No backups are necessary.

In future versions of Tecken this Redis will most likely be used as a broker for message queues by Celery.

Expected version is 3.2 or higher.

Redis Cache Errors

By default, all exceptions that might happen when django-redis uses the default Redis cache are swallowed. This is done to alleviate potential disruption when AWS ElastiCache is unresponsive, such as when it’s being upgraded. The Redis cache exists for the sake of optimization, in that it makes some slow computation unnecessary if repeated. But if the cache is not working at all (operational errors, for example), it’s better that the service continues to work, even if slower than normal.

If you want to disable this and have all Redis Cache exceptions bubble up, which ultimately yields a 500 server error, change the environment variable to:



When exceptions do happen, they are swallowed but also logged, so they are not entirely disregarded.

Redis Store

This is the cache used for downloaded symbol files. The environment variable is called REDIS_STORE_URL and it can look like this:
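A sketch with a hypothetical host; note this is a separate Redis from the one behind REDIS_URL:

```shell
REDIS_STORE_URL=redis://redis-store.example.com:6379/0
```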


This Redis will steadily grow large, so it must not fail when it reaches maximum memory capacity. For this to work, it needs the maxmemory-policy config set to the value allkeys-lru.

In Docker (development) this is automatically set at start-up time, but in AWS ElastiCache, config is not a valid command. So this needs to be configured once in AWS by setting up an ElastiCache Redis Parameter Group. In particular, the expected config is: maxmemory-policy=allkeys-lru.

Expected version is 3.2 or higher.

Redis Socket Timeouts

There are two Redis connections: the “Redis Cache” and the “Redis Store”. These both have the same defaults for SOCKET_CONNECT_TIMEOUT (1 second) and SOCKET_TIMEOUT (2 seconds).

The environment variables and their defaults are listed below:



The three environment variables to control statsd are as follows (with their defaults):

  1. DJANGO_STATSD_HOST (localhost)


  3. DJANGO_STATSD_NAMESPACE (‘’ (empty string))


In the production, stage, and development deployments, Tecken uses Mozilla SSO, a self-hosted Auth0 instance that integrates with Mozilla’s LDAP system.

For local development, Tecken uses a test OpenID Connect (OIDC) provider. This can be overridden to use an Auth0 account or other OIDC provider.


Local development is configured to use oidcprovider, a containerized OpenID Connect provider that allows self-created accounts. The default configuration is:


To use the provider:

  1. Load http://localhost:3000

  2. Click “Sign In” to start an OpenID Connect session on oidcprovider

  3. Click “Sign up” to create an oidcprovider account:
    • Username: A non-email username, like username

    • Email: Your email address

    • Password: Any password, like password

  4. Click “Authorize” to authorize Tecken to use your oidcprovider account

  5. You are returned to http://localhost:3000. If needed, a parallel Tecken User will be created, with default permissions and identified by email address.

You’ll remain logged in to oidcprovider, and the account will persist until the oidcprovider container is stopped. You can visit the oidcprovider site directly to manually log out.

Auth0 and other OIDC providers

Mozilla SSO, a self-hosted instance of Auth0, is used in the production, stage, and development deployments, and Tecken has additional functionality that uses SSO / Auth0 features. See Authentication for details.

To use Auth0 in local development, customize your environment:
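A sketch, assuming Tecken exposes mozilla-django-oidc style settings as DJANGO_-prefixed environment variables (the exact variable names may differ; the tenant domain and values are placeholders):

```shell
DJANGO_OIDC_RP_CLIENT_ID=...
DJANGO_OIDC_RP_CLIENT_SECRET=...
DJANGO_OIDC_OP_AUTHORIZATION_ENDPOINT=https://your-tenant.auth0.com/authorize
DJANGO_OIDC_OP_TOKEN_ENDPOINT=https://your-tenant.auth0.com/oauth/token
DJANGO_OIDC_OP_USER_ENDPOINT=https://your-tenant.auth0.com/userinfo
```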


Any OpenID Connect (OIDC) provider can be used. Many OIDC providers publish their endpoints at a standard discovery URL (/.well-known/openid-configuration), for example.

First Superuser

Users need to create their own API tokens, but before they can do that they need to be promoted to have that permission at all. The only people who can give other users permissions are superusers. To bootstrap user administration you need to create at least one superuser. That superuser can then promote other users to superusers too.

This action does NOT require that the user has signed in at least once. If the user does not exist, it gets created.

The easiest way to create your first superuser is to use docker-compose:

docker-compose run web superuser
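Presumably the command takes the email address of the user to promote (the address shown here is hypothetical):

```shell
# Promote (and create, if necessary) a user as superuser
docker-compose run web superuser yourname@example.com
```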