Deploying Remotely
You need to follow the steps to deploy locally before attempting to deploy to a remote Kubernetes cluster, as that section covers some common configuration changes that this section builds on.
The main differences for deploying to a remote cluster instead of locally are:
- Adding a kubeconfig for Pulumi to authenticate with the remote Kubernetes cluster
- Adding the Redwood configuration needed to create and publish the necessary Docker images
- Uploading the Docker images to an externally-hosted Container Registry
Redwood Configuration
- Create a new configuration environment that inherits from the staging or production environment presets and the configuration environment you created when you deployed locally: config/node/<production-config-environment>/_config.json:

  {
    "parentNames": ["production", "<project-kubernetes-config-environment>"]
  }

  For example, config/node/redwood-demo-production/_config.json:

  {
    "parentNames": ["production", "redwood-demo-kubernetes"]
  }
- Add a deployment/_index.yaml file in your config env that has the below contents:

  cloud: "custom"
- Add a docker.yaml file in your config env and use the below as a template, but you should change every variable based on your setup:

  registry:
    url: "yourcr.com/container-registry" # Do not include a trailing slash
    secret-name: "redwood-container-registry-secret"
    auth:
      username: "yourcr-username"
      password: "yourcr-password" # This can be a secret in the secrets provider
  image-prefix: "${docker.registry.url}/redwood"

  Note: You need to use an external Container Registry for your images. You can use Docker Hub, GitHub Packages, or deploy your own; many cloud providers provide an easy-to-deploy option.
- Add a director.yaml file to your config env. The below is a template you can use, but you should change every variable:

  image-tag: "1.0.0"
  persistence:
    database:
      # this is what the cluster uses; if you're deploying the cluster to the same region/datacenter
      # as your database, there may be faster, private connection details
      runtime-access:
        host: "<your-external-postgresql-db-host>"
        port: 5432
        database: "<database-name>"
        user: "<username>"
        password: "<password>"
      # this is what your local machine uses, so you may need to use different public connection details;
      # you might also need to add your IP address to the firewall for the database
      deployment-access:
        host: "<your-external-postgresql-db-host>"
        port: 5432
        database: "<database-name>"
        user: "<username>"
        password: "<password>"
  backend:
    connection:
      # these are the publicly facing connection details for the director backend,
      # which is primarily used to authenticate external realms (i.e. realms
      # in other clusters or player-hosted realms)
      external:
        host: "demos-director-backend.redwoodmmo.com" # this must be an FQDN hostname, it cannot be an IP
        port: 443
        tls: true
  frontend:
    # these are the publicly facing connection details for the director frontend,
    # which is used by all clients when they launch the game
    connection:
      external:
        host: "demos-director-frontend.redwoodmmo.com" # this must be an FQDN hostname, it cannot be an IP
        port: 443
        tls: true
- Add an override file for each of your Realm Instance Configs. If you kept the default and didn't add new realms, this should be at realm/instances/default.yaml:

  image-tag: "1.0.0"
  persistence:
    database:
      # this is what the cluster uses; if you're deploying the cluster to the same region/datacenter
      # as your database, there may be faster, private connection details
      runtime-access:
        host: "<your-external-postgresql-db-host>"
        port: 5432
        database: "<database-name>"
        user: "<username>"
        password: "<password>"
      # this is what your local machine uses, so you may need to use different public connection details;
      # you might also need to add your IP address to the firewall for the database
      deployment-access:
        host: "<your-external-postgresql-db-host>"
        port: 5432
        database: "<database-name>"
        user: "<username>"
        password: "<password>"
  backend:
    # these are the publicly facing connection details for this realm backend,
    # which is primarily used if you're using an external game server
    # hosting provider (e.g. Hathora) so the sidecar can reach/authenticate
    # with the backend
    connection:
      external:
        host: "demos-rpg-realm-backend.redwoodmmo.com" # this must be an FQDN hostname, it cannot be an IP
        port: 443
        tls: true
  frontend:
    # these are the publicly facing connection details for this realm frontend,
    # which is used by all clients in the main menu to authenticate and join the realm's
    # servers/matchmaking
    connection:
      external:
        host: "demos-rpg-realm-frontend.redwoodmmo.com" # this must be an FQDN hostname, it cannot be an IP
        port: 443
        tls: true
  game-servers:
    image-tag: "1.0.0"
- Create deployment/pulumi.yaml in your config env, using the below as a template:

  # yaml-language-server: $schema=./pulumi.yaml
  # The above comment prevents the VSCode yaml language server
  # from thinking this should follow the Pulumi.yaml schema that
  # the Pulumi CLI uses.
  # It's recommended to use an instance of Pulumi Cloud for production environments, which the below does
  local-mode: false
  access-token: "<pulumi-token>" # Get one by following https://www.pulumi.com/docs/pulumi-cloud/access-management/access-tokens/
  org: "<your-pulumi-org>"
  stack: "prod" # at a minimum, add this variable to differentiate from the default `dev` stack
- Consider how you're going to handle DNS. Redwood comes with a Cloudflare integration, which you can enable by creating deployment/dns.yaml with the below template:

  provider: "cloudflare"
  cloudflare:
    credentials:
      account-id: "<your-account-id>" # https://developers.cloudflare.com/fundamentals/setup/find-account-and-zone-ids/
      token: "<cloudflare-api-token>" # https://developers.cloudflare.com/fundamentals/api/get-started/create-token/

  Warning: If you don't use this Cloudflare integration, you will need to configure your DNS manually to point to the external connection hostnames you configured above.
- Add the kubeconfig to the corresponding config/node/yourenv/deployment/kubernetes/instances/<instance>.yaml:

  kubeconfig: "<the contents of a kubeconfig file>"

  You retrieve this after you provision your own Kubernetes cluster in a later step; see the sketch after this list for what the instance file can look like. Below are some hints to point you in the right direction, but contact your cloud provider/cluster software support if you need more help.
  - Talos: See the official docs (please note the link may be pointing to an outdated page).
  - K0s: See the official docs.
  - K3s: After you install K3s, you can find the kubeconfig contents stored at /etc/rancher/k3s/k3s.yaml; see the K3s Cluster Access docs.
  - DigitalOcean:
    - Open the Kubernetes cluster page in the dashboard
    - In the top right, click on Actions
    - Click Download Config
    - The contents of the downloaded file are the kubeconfig
  - AWS: See the official docs.
  - Azure:
    - SSH into the master node, see these docs
    - Run cat ~/.kube/config (it may also be stored at /etc/kubernetes/admin.conf) to get the kubeconfig contents
  - Google Cloud:
    - Add an entry to your local ~/.kube/config file using the gcloud CLI
    - List the contexts: kubectl config get-contexts
    - Switch to your gcloud context: kubectl config use-context <context-id>
    - Export the kubeconfig: kubectl config view --minify --flatten > kubeconfig.yaml
  - Linode: Run linode-cli lke kubeconfig-view $clusterID --text | sed 1d | base64 --decode > kubeconfig.yaml

  Note: If you're still only using one Kubernetes cluster (the default for most studios), you likely want to modify the default instance at config/node/yourenv/deployment/kubernetes/instances/k8s-default.yaml.

  Warning: We highly recommend that you use a Secrets provider to store the kubeconfig contents.
- Reference config/node/default/deployment/kubernetes/redwoodBase.yaml to see if there are any variables you'd like to change by overriding them in your own deployment/kubernetes/<instance>.yaml file.
You'll likely want to modify the region config variables. The name and ping variables are what's used in the GetRegions call in URedwoodClientGameSubsystem. Here's an example we use in one of our prod envs:
region:
  provider-id: "sfo3"
  name: "US West"
  ping: "${director.frontend.connection.external.host}" # this might not be correct if you're using CloudFlare proxying (which Redwood uses by default when using the Cloudflare DNS provider)
- Config envs that inherit from production are configured by default to use an external PostgreSQL database. Redwood will not provision the managed database, nor create the credentials or initial database; Redwood will only migrate/initialize the schemas/tables. We highly recommend using a managed database configured with backups/snapshots. If you're using DigitalOcean with a DigitalOcean database, you can override the dependencies.postgresql.externalDbId variable in your cluster config (by default this would be at deployment/kubernetes/instances/k8s-default.yaml; see the sketch after this list). You can retrieve this ID by navigating to the database in the admin panel and getting the UUID in the URL. Providing this will write firewall rules for you so that the cluster can access the database.

  Note: If you want to keep using the Helm chart that installs PostgreSQL in your cluster like in local deployment, you should note it has not been optimized for production and, if you haven't noticed yet when deploying locally, it is configured to delete its data when destroyed as there's no persistent volume configured. You can find all the available configuration values in config/node/default/deployment/dependencies/postgresql.yaml (the ArtifactHUB page is also a helpful resource for figuring out what you can supply in the values object). You will definitely be "on your own" to figure this one out unless you purchased dedicated support from us.
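For the kubeconfig step above, here's a minimal sketch of what the instance file can look like once the kubeconfig is pasted in. The cluster name, endpoint, and certificate values are placeholders for whatever your provider's kubeconfig contains, and per the warning above you'd normally supply the contents through your Secrets provider rather than committing them in plain text; a YAML block scalar (|) keeps the multi-line file readable:

# config/node/yourenv/deployment/kubernetes/instances/k8s-default.yaml
kubeconfig: |
  apiVersion: v1
  kind: Config
  clusters:
    - name: my-cluster
      cluster:
        server: https://<cluster-endpoint>:6443
        certificate-authority-data: <base64-encoded-ca-cert>
  contexts:
    - name: my-cluster
      context:
        cluster: my-cluster
        user: admin
  current-context: my-cluster
  users:
    - name: admin
      user:
        client-certificate-data: <base64-encoded-client-cert>
        client-key-data: <base64-encoded-client-key>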
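Similarly, for the external PostgreSQL step, here's a sketch of the DigitalOcean database ID override. The YAML nesting is an assumption based on the dotted variable path dependencies.postgresql.externalDbId, and the UUID is a placeholder for the one in your database page's URL:

# deployment/kubernetes/instances/k8s-default.yaml
dependencies:
  postgresql:
    externalDbId: "<uuid-from-the-database-page-url>"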
Creating a Kubernetes Cluster
Redwood used to provision a Kubernetes cluster for you in older versions, but this was restrictive as it required us to implement an integration for every cloud provider. Starting in version 4.0, Redwood no longer provisions a Kubernetes cluster or the associated node pools for you. This gives you ultimate flexibility over where you deploy your cluster.
Cluster Hardware
You are encouraged to determine your own cluster needs, which will primarily depend on your game server and expected CCUs (concurrent users). Game servers usually run on dedicated CPUs rather than shared CPUs, as shared CPUs can cause hitching, but you're welcome to try using shared CPUs.
Our demo environments are on DigitalOcean using standard Dedicated CPU nodes. Each node has 4 vCPUs and 8 GB of RAM. Our RPG & Match environments use 2 nodes total with very little traffic (for example, we never need to shard).
Unmanaged Options
With these options, you manually configure the cluster yourself. This enables you to host at home or to leverage more affordable bare-metal hosting options.
- Talos
- K0s
- K3s
  - You need to disable the default Traefik ingress controller; you can do this during install (curl -sfL https://get.k3s.io | sh -s - --disable=traefik) or, if you already have K3s installed, modify /etc/systemd/system/k3s.service to add the --disable=traefik argument to the server command.
 
Managed Options
These are just some of the more popular managed cloud options; you don't have to manually provision the nodes in the cluster, and the control plane is already set up for you. These are more straightforward to use, but you are limited to hosting on that provider's cloud platform.
Creating a Container Registry
Redwood does not create a Container Registry for you. There are several options, but we recommend creating one with the cloud provider that hosts your cluster. Further, we highly recommend that the registry is hosted in the same region as your main backend cluster to reduce bandwidth costs and improve transfer speeds.
Make sure you update the docker.yaml in the config env you created above with the proper container registry details.
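A quick way to sanity-check those registry details is a manual docker login from the machine you'll build on; the host and credentials below are the placeholder values from the docker.yaml template above (log in against the registry host, without the repository path):

# should print "Login Succeeded" if the host and credentials are correct
echo "yourcr-password" | docker login yourcr.com --username yourcr-username --password-stdin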
Building and Pushing Docker Images
When you deploy to a config env that inherits from the production or staging config envs, the Docker images are no longer automatically built for you. This is a deliberate workflow choice: in production-like environments you will want to be explicit about which version is deployed, based on a Docker image tag.
We've provided a separate yarn docker <config-env> script just for building Docker images and pushing them to your configured container registry; you can find all the options below (or by calling yarn docker --help yourself):
$ yarn docker --help
Usage: yarn docker [options] <config-environment>
Script for building, tagging, and pushing Redwood Docker images
Arguments:
  config-environment           The folder of the config environment located in `config/node` you want to use
Options:
  -t, --tag <tag>              Docker tag to use, otherwise the tag in the configuration will be used
  -l, --latest                 Also tag the image as 'latest' in the registry
  -o, --overwrite <overwrite>  Specify whether or not you want to overwrite existing images in the registry with the same tag. Set to 'true' or 'false'. If set to false, existing images will be
                               skipped and the script will continue.
  -s, --skip-push              Skip pushing images to the registry
  -i, --images <images...>     Optionally provide a CSV of image names to build, otherwise all will be built, can be a substring of the full image name (default: [])
  -h, --help                   display help for command
Done in 16.87s.
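As a concrete (hypothetical) example of how these options combine, the following would build every image, tag each one as 1.0.1 and latest, and push them to the registry configured in docker.yaml:

# the tag is a made-up version number; redwood-demos-prod is the example config env used further down
yarn docker redwood-demos-prod -t 1.0.1 -l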
There are a lot of options here for flexibility, but for the most part we only use the -t, --tag <tag> and -l, --latest options. Here's our typical flow:
- Build the backend into the prepackaged binaries:
  - Standard License: If you updated the packages/match-function source, you'll need to run the below commands; if you didn't modify the match-function, you can skip this:

    yarn build && yarn pkg:match-function

  - Full Source Code: The full source code does not come with prepackaged binaries; you must call this to generate them:

    yarn pkg
- Make sure you have the respective up-to-date LinuxServer folder(s) located in dist/game-servers.

  Info: Don't forget that you can change your realm instance config to change the game-servers.local-dir variable. This is useful if you want to ensure your production environment uses a different dedicated server build than what you use for local Kubernetes deployment (see the sketch after this list).
- Change the image-tag variables found in your production env's director.yaml and realm/instances/*.yaml files to a new version (see above, they're set to 1.0.0 now)
- Run the yarn docker command:

    yarn docker <config-env>

  Info: You may want to use the --latest (or -l shorthand) option; we generally do, so that if you run a docker pull command on the image name without a tag you'll retrieve the latest tag. You may not want to use --latest if you're testing a release candidate or just testing a set of changes in a staging environment.

  Note: Make sure you review all the options of yarn docker for different use cases. For example, if you only want to push a new game server image because there were no changes to the backend, you can use the --images <images...> option. If you're using several defaults, this might look like --images game-server since you don't need to provide the full image name.

  Here's a sample output:

 $ yarn docker redwood-demos-prod -l
 Pulling latest base images...
 Initiating building & pushing 3 images to registry: registry.digitalocean.com at path: incanta-generic-cr
 Image 1/3: redwood-demos-core-runner:3.0.1-4
 Checking registry...
 Building image...
 Pushing image...
 Adding 'latest' tag...
 Image 2/3: redwood-demos-match-function:3.0.1-4
 Checking registry...
 Building image...
 Pushing image...
 Adding 'latest' tag...
 Image 3/3: redwood-demos-game-server-rpg:3.0.1-4
 Checking registry...
 Building image...
 Pushing image...
 Adding 'latest' tag...
 Done in 396.92s.
- Verify your container registry is now showing the latest tagged images (see the example check after this list)
- If, for whatever reason, you used the -t, --tag <tag> option instead of updating the image-tag variables in the prior step, make sure that you update the image-tag variables with the correct tags before continuing below.
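Regarding the game-servers.local-dir note in the steps above, here's a minimal sketch of what that override could look like in a realm instance config; the path is a placeholder and depends entirely on where your packaging workflow puts the production dedicated server build:

# config/node/<production-config-environment>/realm/instances/default.yaml
game-servers:
  local-dir: "<path-to-your-production-LinuxServer-folder>"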
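And for verifying the registry, one simple check is to pull one of the images you just pushed from another machine. The image reference below is assembled from the placeholder registry URL in docker.yaml and an image name from the sample output above, so adjust it to your own registry path, image names, and tag:

docker pull yourcr.com/container-registry/redwood/redwood-demos-game-server-rpg:1.0.0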
Deploying
Once everything is properly configured, deploying is just calling yarn deploy; note that provisioning all the resources in the cloud will take a while.
yarn deploy <your-prod-config-env>
Testing
Testing is the same as for deploying locally; the only difference is that you'll use a different Director URI to match your director.frontend.connection.external connection details.