OpenCTI can be deployed using the docker-compose command.
Clone the repository
$ mkdir /path/to/your/app && cd /path/to/your/app
$ git clone https://github.com/OpenCTI-Platform/docker.git
$ cd docker
Configure the environment
Before running the docker-compose command, please change the admin token (this token must be a valid UUID) and the password of the application in the docker-compose.yml file:
- APP__ADMIN__PASSWORD=ChangeMe
- APP__ADMIN__TOKEN=ChangeMe
Also change the token variable of each worker service (such as worker-export) so that it matches the value of APP__ADMIN__TOKEN.
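Since the admin token must be a valid UUID, you can generate one on Linux without any extra tooling, for example through the kernel's random UUID interface:

```shell
# Generate a random UUIDv4 to use as APP__ADMIN__TOKEN
TOKEN=$(cat /proc/sys/kernel/random/uuid)
echo "$TOKEN"
```

On systems without /proc, the `uuidgen` utility produces the same result.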
As OpenCTI depends on ElasticSearch, you have to set the vm.max_map_count kernel parameter before running the containers, as mentioned in the ElasticSearch documentation.
$ sysctl -w vm.max_map_count=262144
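The sysctl call above only lasts until the next reboot. You can verify the current value directly from /proc:

```shell
# Verify the current value; ElasticSearch requires at least 262144
current=$(cat /proc/sys/vm/max_map_count)
echo "vm.max_map_count is $current"
```

To make the setting persistent across reboots, add `vm.max_map_count=262144` to /etc/sysctl.conf (or a file under /etc/sysctl.d/) and reload with `sysctl --system`.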
For the best experience with Docker, we recommend using the Docker stack feature. In this mode, you can easily scale your deployment.
In Swarm or Kubernetes
$ docker stack deploy -c docker-compose.yml opencti
In standard Docker
$ docker-compose --compatibility up
You can now go to http://localhost:8080 and log in with the credentials configured in your environment variables.
Behind a reverse proxy
If you want to use OpenCTI behind a reverse proxy with a context path, like
https://myproxy.com/opencti, please change the base_path configuration.
By default OpenCTI uses WebSockets, so don't forget to configure your proxy for this usage.
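With the Docker deployment, configuration keys are mapped to environment variables using the double-underscore convention seen earlier (e.g. APP__ADMIN__TOKEN). Assuming base_path follows the same convention, the change might look like this (the exact variable name is an inference from that convention, not stated in this document):

```yaml
# Hypothetical docker-compose.yml excerpt
services:
  opencti:
    environment:
      - APP__BASE_PATH=/opencti
```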
If you wish your OpenCTI data to be persistent in production, you should be aware of the volumes section for both the Grakn and ElasticSearch services in the docker-compose.yml file.
Here is an example of volumes configuration:
volumes:
  grakndata:
    driver: local
    driver_opts:
      o: bind
      type: none
  esdata:
    driver: local
    driver_opts:
      o: bind
      type: none
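Note that the local driver with `o: bind` also needs a `device` option pointing at an existing directory on the host; a hypothetical complete entry (the host path is a placeholder) would look like:

```yaml
volumes:
  esdata:
    driver: local
    driver_opts:
      o: bind
      type: none
      device: /path/to/persistent/esdata  # placeholder; the directory must exist on the host
```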
The default docker-compose.yml file does not provide any specific memory configuration. If you want to adapt the configuration of some dependencies, you can find some links below.
OpenCTI - Platform
The OpenCTI platform is based on a NodeJS runtime, with a memory limit of 512MB by default. We do not provide any option to change this limit today. If you encounter any OutOfMemory exception, please open a GitHub issue.
OpenCTI - Workers and connectors
OpenCTI workers and connectors are Python processes. If you want to limit the memory of these processes, we recommend using Docker directly to do so. You can find more information in the official Docker documentation.
If you do not use Docker stack, think about running docker-compose with the --compatibility flag so that deploy resource limits are applied.
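As a sketch, a Compose v3 memory limit for one of the worker services could look like this (the limit value is an arbitrary example, not a recommendation from this document):

```yaml
services:
  worker-export:
    deploy:
      resources:
        limits:
          memory: 256M  # example value; tune to your workload
```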
Grakn is a Java process that relies on Cassandra (also a Java process). To set up the Java memory allocation, you can use the corresponding environment variables. The current recommendation is -Xms4G for both options.
You can find more information in the official Grakn documentation.
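For the containerized deployment, this might be expressed as environment variables on the Grakn service. The variable names below are assumptions and should be checked against the official Grakn documentation:

```yaml
services:
  grakn:
    environment:
      # Assumed variable names; verify against the Grakn documentation
      - SERVER_JAVAOPTS=-Xms4G
      - STORAGE_JAVAOPTS=-Xms4G
```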
ElasticSearch is also a Java process. To set up the Java memory allocation, you can use the dedicated environment variable. A minimal heap size is recommended today; you can find more information in the official ElasticSearch documentation.
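For the official ElasticSearch image, the JVM heap is commonly set through the ES_JAVA_OPTS environment variable; the heap values below are an illustrative example, not the recommendation from this document:

```yaml
services:
  elasticsearch:
    environment:
      # Example heap sizing; adjust to your hardware
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
```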
Redis has a very small footprint and only provides an option to limit the maximum amount of memory that can be used by the process. You can use the --maxmemory option to limit the usage.
You can find more information in the Redis docker hub.
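In a Compose file, this option could be passed on the container command line (the 256mb value is an arbitrary example):

```yaml
services:
  redis:
    # --maxmemory caps Redis memory usage; the value is an example
    command: redis-server --maxmemory 256mb
```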
The RabbitMQ memory configuration can be found in the official RabbitMQ documentation. Basically, RabbitMQ will consume memory until a specific threshold, so it should be configured along with the Docker memory limitation.
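Following that recommendation, a sketch of a Docker-side memory cap for RabbitMQ might look like this (the value is an example; RabbitMQ's own `vm_memory_high_watermark` setting should be aligned with it, as described in the RabbitMQ documentation):

```yaml
services:
  rabbitmq:
    deploy:
      resources:
        limits:
          memory: 512M  # example cap; align vm_memory_high_watermark with this value
```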