Version: 0.1.0

Running in Docker

note

Refinery can run as a cloud-based Graph Solver as a Service without local installation. If you’re just looking to try Refinery, our online demo provides a seamless experience without installation.

info

Installing Refinery as a Docker container can support more advanced use cases, such as when generating models with more resources or a longer timeout.

To generate larger models with a longer timeout, you can use our Docker container on either amd64 or arm64 machines:

docker run --rm -it -p 8888:8888 ghcr.io/graphs4value/refinery:0.1.0

Once Docker pulls and starts the container, you can navigate to http://localhost:8888 to open the model generation interface and start editing.

A command-line interface (CLI) version of Refinery is also available as a Docker container.

Alternatively, you can follow the instructions to set up a local development environment and compile and run Refinery from source.

Environment variables

The Docker container supports the following environment variables to customize its behavior. You should only need to customize these if you want to increase resource limits or expose your Refinery instance over the network to others.

Notes for local-only instances are highlighted with the ➡️ arrow emoji.

Important security notices for public instances are highlighted with the ⚠️ warning emoji.

Networking

REFINERY_LISTEN_HOST

Hostname to listen at for incoming HTTP connections.

Default value: 0.0.0.0 (accepts connections on any IP address)

REFINERY_LISTEN_PORT

TCP port to listen at for incoming HTTP connections.

Refinery doesn’t support HTTPS connections out of the box, so there’s no point in setting this to 443. Use a reverse proxy instead if you wish to expose Refinery to encrypted connections.

If you change this value, don’t forget to adjust the -p 8888:8888 option of the docker run command to expose the selected port.

Default value: 8888
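For example, to serve on port 9999 instead (a hypothetical choice), set the variable and adjust the port mapping together:

```shell
# Run Refinery on TCP port 9999 instead of the default 8888.
# Both the -e variable and the -p mapping must use the new port.
docker run --rm -it \
  -e REFINERY_LISTEN_PORT=9999 \
  -p 9999:9999 \
  ghcr.io/graphs4value/refinery:0.1.0
```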

REFINERY_PUBLIC_HOST

Publicly visible hostname of the Refinery instance.

➡️ For installations only accessed locally (i.e., localhost:8888) without any reverse proxy, you can safely leave this empty.

⚠️ You should set this to the publicly visible hostname of your Refinery instance if you wish to expose Refinery over the network. Most likely, this will be the hostname of a reverse proxy that terminates TLS connections. Our online demo sets this to refinery.services.

Default value: empty

REFINERY_PUBLIC_PORT

Publicly visible port of the Refinery instance.

➡️ For installations only accessed locally (i.e., localhost:8888), this value is ignored because REFINERY_PUBLIC_HOST is not set.

Default value: 443

REFINERY_ALLOWED_ORIGINS

Comma-separated list of allowed origins for incoming WebSocket connections. If this variable is empty, all incoming WebSocket connections are accepted.

➡️ For installations only accessed locally (i.e., localhost:8888) without any reverse proxy, you can safely leave this empty.

⚠️ The value inferred from REFINERY_PUBLIC_HOST and REFINERY_PUBLIC_PORT should be suitable for instances exposed over the network. For security reasons, public instances should never leave this empty.

Default value: equal to REFINERY_PUBLIC_HOST:REFINERY_PUBLIC_PORT if they are both set, empty otherwise
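Putting the networking variables together, a public instance behind a TLS-terminating reverse proxy might be started like this (refinery.example.com is a placeholder hostname, not a real deployment):

```shell
# Sketch of a public instance behind a TLS-terminating reverse proxy.
# Replace refinery.example.com with your own hostname.
docker run --rm -it -p 8888:8888 \
  -e REFINERY_PUBLIC_HOST=refinery.example.com \
  -e REFINERY_PUBLIC_PORT=443 \
  ghcr.io/graphs4value/refinery:0.1.0
# With both variables set, REFINERY_ALLOWED_ORIGINS defaults to
# refinery.example.com:443, so only WebSocket connections from the
# public origin are accepted.
```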

Timeouts

REFINERY_SEMANTICS_TIMEOUT_MS

Timeout for partial model semantics calculation in milliseconds.

➡️ Increase this if you have a slower machine and the editor times out before showing a preview of your partial model in the Graph or Table views.

⚠️ Increasing this timeout may increase server load. An excessively large timeout may allow users to overload your server by entering extremely complex partial models.

Default value: 1000

REFINERY_SEMANTICS_WARMUP_TIMEOUT_MS

Timeout for partial model semantics calculation in milliseconds when the server first starts.

Due to various initialization tasks, the first partial model semantics calculation may take longer than REFINERY_SEMANTICS_TIMEOUT_MS and display a timeout error. This setting increases the timeout for the first calculation, leading to seamless use even right after server start (especially in auto-scaling setups).

Default value: equal to 2 × REFINERY_SEMANTICS_TIMEOUT_MS

REFINERY_MODEL_GENERATION_TIMEOUT_SEC

Timeout for model generation in seconds.

➡️ Adjust this value if you’re generating very large models (> 10000 nodes) and need more time to complete a generation. Note that some unsatisfiable model generation problems cannot be detected by Refinery and will result in model generation running for an arbitrarily long time without producing any solution.

⚠️ Long-running model generation tasks will block a model generation thread. Try to balance the number of threads and the timeout to avoid exhausting system resources, while keeping the wait time for a free model generation thread reasonably short for users. Auto-scaling to multiple instances may help with bursty demand.

Default value: 600 (10 minutes)
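As an illustration, extending the generation timeout to 30 minutes (a hypothetical value for very large models) looks like this:

```shell
# Allow model generation to run for up to 30 minutes (1800 seconds)
# instead of the default 10 minutes.
docker run --rm -it -p 8888:8888 \
  -e REFINERY_MODEL_GENERATION_TIMEOUT_SEC=1800 \
  ghcr.io/graphs4value/refinery:0.1.0
```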

Threading

➡️ If you only run a single model generation task at a time, you don’t need to adjust these settings.

⚠️ Excessively large thread counts may overload the server. Make sure that all Refinery threads can run at the same time to avoid thread starvation.

REFINERY_XTEXT_THREAD_COUNT

Number of threads used for non-blocking text editing operations. A value of 0 allows an unlimited number of threads by running each operation in a new thread.

Default value: 1

REFINERY_XTEXT_LOCKING_THREAD_COUNT

Number of threads used for text editing operations that lock the document. A value of 0 allows an unlimited number of threads by running each operation in a new thread.

Default value: equal to REFINERY_XTEXT_THREAD_COUNT

REFINERY_XTEXT_SEMANTICS_THREAD_COUNT

Number of threads used for model semantics calculation. A value of 0 allows an unlimited number of threads by running each semantics calculation in a new thread.

Must be at least as large as REFINERY_XTEXT_THREAD_COUNT.

Default value: equal to REFINERY_XTEXT_THREAD_COUNT

REFINERY_MODEL_GENERATION_THREAD_COUNT

Number of threads used for model generation. A value of 0 allows an unlimited number of threads by running each model generation task in a new thread.

⚠️ Each model generation task may also demand a large amount of memory in addition to CPU time.

Default value: equal to REFINERY_XTEXT_THREAD_COUNT
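A combined threading configuration might look like the following sketch. The specific values are hypothetical; pick them to match your machine, keeping the constraint that REFINERY_XTEXT_SEMANTICS_THREAD_COUNT must be at least as large as REFINERY_XTEXT_THREAD_COUNT:

```shell
# Hypothetical configuration for a machine serving several users:
# two editing/semantics threads and four model generation threads.
docker run --rm -it -p 8888:8888 \
  -e REFINERY_XTEXT_THREAD_COUNT=2 \
  -e REFINERY_XTEXT_SEMANTICS_THREAD_COUNT=2 \
  -e REFINERY_MODEL_GENERATION_THREAD_COUNT=4 \
  ghcr.io/graphs4value/refinery:0.1.0
```

Remember that each model generation thread may also need a large amount of memory, so raise the generation thread count only if the host has RAM to spare.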

Libraries

REFINERY_LIBRARY_PATH

Modules (.refinery files) in this directory, or in a colon-separated list of directories, are exposed to users via Refinery’s import mechanism.

➡️ Use this in conjunction with the mount volume (-v) option of docker run to work with multi-file projects in Refinery.

⚠️ Only expose files that you want to make public. It’s best to expose a directory that contains nothing other than .refinery files to minimize potential information leaks.

Default value: empty (no directories are exposed)
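For instance, a multi-file project can be mounted into the container and exposed through the import mechanism like this (both the host and container paths are placeholders):

```shell
# Mount a host directory containing only .refinery modules read-only
# and expose it via REFINERY_LIBRARY_PATH. Adjust both paths to taste.
docker run --rm -it -p 8888:8888 \
  -v "$HOME/refinery-modules:/data/modules:ro" \
  -e REFINERY_LIBRARY_PATH=/data/modules \
  ghcr.io/graphs4value/refinery:0.1.0
```

Mounting the directory read-only (`:ro`) is a reasonable precaution, since the container only needs to read the modules.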

Pre-release versions

You can take advantage of the most recent code submitted to our repository by using the latest tag instead.

docker run --pull always --rm -it -p 8888:8888 ghcr.io/graphs4value/refinery:latest