Installation Guide
This guide provides comprehensive instructions for installing Codesphere Private Cloud. Codesphere is primarily delivered as a Helm chart, relying on two key external components: PostgreSQL for the database and Ceph for highly available distributed storage. While these could be part of the Helm deployment, they are treated as external components for enhanced stability and robustness.
This document covers two distinct installation methods:
- Multi-Node Installer: This method utilizes a script that connects via SSH from a central setup machine to various hosts, installing components sequentially. It's designed for provisioning a complete Codesphere datacenter from scratch (Bare Metal, VMs) or integrating with some pre-existing components.
- Single-Node Installer: This script operates by installing one specific component at a time on a designated host. The component to be installed must be explicitly named in the command.
Both installation methods depend on a global config.yaml configuration file and require SSH access to all target hosts from the machine running the installation scripts.
Understanding the Installers
Why These Installers?
Codesphere's core is deployed using a Helm chart. However, for critical infrastructure like PostgreSQL and Ceph (for distributed, high-availability storage), we recommend setting them up as robust, external services rather than bundling them within the Helm chart. This approach ensures a more stable and resilient environment.
These installers are modular tools, allowing you to:
- Provision an entire Codesphere datacenter on new hardware (bare metal or VMs).
- Utilize and integrate certain pre-existing components if available.
Core Installer Components
The installation process involves several key components:
| Component Name | Description |
|---|---|
| Secrets | Manages sensitive data using Age and Sops Key Files. |
| Docker | Container runtime for deploying Codesphere services. |
| PostgreSQL | The primary database for Codesphere. |
| Ceph | Distributed storage solution for high availability. |
| Kubernetes | Container orchestration platform. |
| SetUpCluster | Foundational Kubernetes resources for Codesphere. |
| Monitoring | (Implicitly) Components for observing the cluster. |
| Codesphere | The main Codesphere application deployment. |
Global Prerequisites & Setup
These prerequisites apply to both Multi-Node and Single-Node installation methods.
Supported Operating System
- Ubuntu 22.04 LTS (Server Edition) is the recommended and supported OS for all nodes.
Hardware Requirements
- PostgreSQL Nodes (if installing with this guide; 2 servers recommended for HA):
  - At least 2 CPU cores per server
  - At least 4 GB RAM per server
  - At least 200 GB at `/` (including `/etc/codesphere`)
- Ceph Nodes (3 servers minimum recommended for HA):
  - At least 2 CPU cores per server
  - At least 8 GB RAM per server
  - 1 disk with at least 200 GB for Ceph metadata (BlueStore DB/WAL). This storage should be fast (e.g., SSD/NVMe) and comprise at least 4% of the total block storage capacity; for example, two 300 GB data disks (600 GB total) call for at least 24 GB of DB/WAL space, so the 200 GB minimum leaves ample headroom. Must be completely empty (no filesystem).
  - 1 or more disks with at least 300 GB each for Ceph block storage (OSDs). Must be completely empty (no filesystem).
  - 1 root disk with at least 200 GB for the OS.
- Kubernetes Nodes:
  - At least 1 control-plane node.
  - At least 2 worker nodes.
  - Each Kubernetes server:
    - At least 200 GB at `/` (including `/etc/codesphere`)
    - An additional 200 GB at `/var/lib/docker` (or `/var/lib/k0s` if k0s manages containerd directly; check k0s storage).
  - At least 8 CPU threads on every node (as detected by Kubernetes).
  - At least 16 GB of total memory across all worker nodes combined.
  - At least 8 GB of memory for each control-plane node.
Required Software Packages
Ensure the following software is installed:
- On the Installer Machine (Multi-Node only, or your management machine for Single-Node):
  - OpenSSH client
  - `scp` (Secure Copy Protocol client)
  - Node.js version 22 (a `node` binary is also shipped with the installer)
- On all Target Nodes (Ceph, Kubernetes, PostgreSQL):
  - `iptables`
  - A text editor like `vi` or `nano`
  - `curl`
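A quick way to confirm the Node.js requirement on the installer machine (the `node` binary shipped with the installer can be used instead of a system install):

```
node --version   # should print v22.x
```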
SSH Access
- The machine executing the installer scripts must have SSH access to all remote target machines (PostgreSQL, Ceph, Kubernetes nodes).
- The SSH configuration must allow SSH sessions of up to 3 hours, e.g. by setting an appropriate `ServerAliveInterval` (see below).
- The installer expects a plain `ssh <IP-ADDRESS>` to work, i.e. no password and no user has to be specified.
- To allow passwordless login, configure a public key pair to use for SSH (replace `id_ed25519_csinstall` with the desired key name):

  ```
  ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519_csinstall -C "some identifier"
  # Use ssh-copy-id or copy the public key (*.pub) into /root/.ssh/authorized_keys on all VMs
  ssh-copy-id -i ~/.ssh/id_ed25519_csinstall root@host
  ```

- The user on the remote machines MUST currently be root.
- You can configure SSH access for different hosts via your SSH configuration file (e.g., `~/.ssh/config`). Example:

  ```
  Host 10.10.123.1
    HostName 10.10.123.1
    User root
    IdentityFile ~/.ssh/id_ed25519_csinstall
    # Optional
    ServerAliveInterval 30
  # Repeat for all other machines
  ```

  Refer to the `ssh_config` man page for more details.

- Test SSH Connectivity: Before starting the installation, verify you can SSH into each node by simply running `ssh <IP-ADDRESS>` without specifying a user, port, or key file. The first connection might prompt:

  ```
  Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
  Warning: Permanently added 'hostname,ip_address' (ED25519) to the list of known hosts.
  ```

  Confirm with `yes` for each new host.
Download Dependencies
- Obtain the Codesphere installer archive (`installer.tar.gz`) from Codesphere. This might be via a direct download URL or a physical storage device.
- For Multi-Node: Place the archive on the server from which you will perform the installation.
- For Single-Node: You will need to upload and extract this archive on every host involved in the installation.
- Extract the installer archive into a directory (e.g. `/etc/codesphere`). You can also extract the dependencies archive `deps.tar.gz`, which may come in handy later.

  ```
  mkdir /etc/codesphere
  tar -xvzf ./installer.tar.gz -C /etc/codesphere
  mkdir /etc/codesphere/deps
  tar -xvzf /etc/codesphere/deps.tar.gz -C /etc/codesphere/deps
  ```
Configure inotify Watcher Limits
On all Kubernetes nodes, increase the inotify limits to prevent issues with file watching. Create or edit /etc/sysctl.d/11-inotify.conf with the following content:
fs.inotify.max_queued_events=16384
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=524288
Apply the changes:
sudo sysctl -p /etc/sysctl.d/11-inotify.conf
Note: Larger servers might require proportionally higher limits.
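To confirm the limits actually in effect on a node, query the kernel directly:

```
sysctl fs.inotify
```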
Configuration Files and Secrets Management
Both installer methods rely on two main files:
- `config.yaml`: Contains the main configuration for your Codesphere environment.
- `prod.vault.yaml`: Stores sensitive information, encrypted using SOPS and Age.
Generate Certificates and Keys
General Note: Do not add passphrases to any of the newly created keys.
Ceph SSH Key
This key allows cephadm to bootstrap and manage Ceph nodes. Generate it on your setup machine:
# This creates ceph_id_rsa (private key) and ceph_id_rsa.pub (public key)
ssh-keygen -t rsa -b 4096 -C "ceph" -f ./ceph_id_rsa
Cluster Ingress CA
Codesphere supports multiple options for issuing and managing the Cluster Ingress CA, which signs certificates for all ingress traffic within the cluster. Users' machines accessing Codesphere must trust this CA. You can:
- Generate a new self-signed CA (default, simple for quick start)
- Use your organization's existing CA or intermediate CA (recommended for production)
- Integrate with an external certificate authority (e.g., HashiCorp Vault, AWS PCA, Let's Encrypt) for automated certificate management
See the table below for a summary:
| Option | Description |
|---|---|
| Self-Signed (default) | Generate a new self-signed CA locally. |
| Organization CA | Use your organization's existing CA or intermediate CA to sign ingress certificates. |
| External Issuer | Integrate with an external certificate authority (e.g., Vault, AWS PCA, Let's Encrypt). |
For configuration details and examples, see Cluster Ingress CA Options.
(Optional): Generate a new CA
Replace MyOrg, DE, KA with your organization's details.
# Generate CA Key
openssl genrsa -out ca.key 2048
openssl rsa -in ca.key -outform PEM -pubout -out ca-pub.pem
# Generate CA Certificate
openssl req -x509 -new -nodes -key ca.key -sha256 -days 1068 \
-outform PEM -out ca.pem \
-subj '/CN=MyOrg Root CA/C=DE/L=KA/O=MyOrg'
Sign a new server key for Ingress using your CA
Replace <hostname> with the primary hostname for your Codesphere access and MyOrg with your organization's name.
# Create Certificate Signing Request (CSR) and new key for ingress
openssl req -new -nodes -out ingress.csr -newkey rsa:4096 -keyout ingress.key \
-subj '/CN=<hostname>/O=MyOrg'
# Sign the CSR with your existing CA
openssl x509 -req -in ingress.csr -CA existing_ca.pem -CAkey existing_ca.key -CAcreateserial \
-outform PEM -out ingress.pem \
-days 730 -sha256
(If you generated a new CA in the previous step, existing_ca.pem is ca.pem and existing_ca.key is ca.key)
PostgreSQL Certificates (If installing PostgreSQL)
If you plan to install PostgreSQL using the provided scripts, you'll need to generate certificates for it. This also involves a CA. You can use the same CA generated for Ingress or a dedicated one.
Generate CA (if not already done for Ingress): Follow the "Generate a new CA" steps in section 3.1.2 if you need a separate CA for PostgreSQL. Let's assume you are using pg_ca.key and pg_ca.pem.
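If you do create a dedicated CA, the commands mirror the "Generate a new CA" steps above, just with renamed output files; the subject values below are illustrative:

```
# Generate a dedicated PostgreSQL CA key and certificate
openssl genrsa -out pg_ca.key 2048
openssl req -x509 -new -nodes -key pg_ca.key -sha256 -days 1068 \
  -outform PEM -out pg_ca.pem \
  -subj '/CN=MyOrg Postgres CA/C=DE/L=KA/O=MyOrg'
```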
Generate Primary PostgreSQL Server Certificate: Replace <primary_pg_hostname> and <primary_pg_ip_address>.
If your PostgreSQL server is not exposed via a hostname, you can also use any other descriptive name for the CN field.
# Create CSR for Primary
openssl req -new -nodes -out pg_primary.csr -newkey rsa:4096 -keyout pg_primary.key \
-subj '/CN=<primary_pg_hostname>/O=MyOrg'
# Create extensions file (pg_primary.v3.ext)
cat > pg_primary.v3.ext << EOF
authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
subjectAltName = @alt_names
[alt_names]
IP.1 = <primary_pg_ip_address> # e.g., 10.50.0.1
EOF
# Sign Primary Certificate
openssl x509 -req -in pg_primary.csr -CA pg_ca.pem -CAkey pg_ca.key -CAcreateserial \
-outform PEM -out pg_primary.pem \
-days 730 -sha256 -extfile pg_primary.v3.ext
Generate Replica PostgreSQL Server Certificate: Replace <replica_pg_hostname> and <replica_pg_ip_address>.
# Create CSR for Replica
openssl req -new -nodes -out pg_replica.csr -newkey rsa:4096 -keyout pg_replica.key \
-subj '/CN=<replica_pg_hostname>/O=MyOrg'
# Create extensions file (pg_replica.v3.ext)
cat > pg_replica.v3.ext << EOF
authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
subjectAltName = @alt_names
[alt_names]
IP.1 = <replica_pg_ip_address> # e.g., 10.50.0.2
EOF
# Sign Replica Certificate
openssl x509 -req -in pg_replica.csr -CA pg_ca.pem -CAkey pg_ca.key -CAcreateserial \
-outform PEM -out pg_replica.pem \
-days 730 -sha256 -extfile pg_replica.v3.ext
Generate strong passwords for PostgreSQL users
Codesphere uses a number of different PostgreSQL users to access the database. The users are created in the database automatically by the installer; however, the password for each user needs to be specified in the secrets file. The passwords are shared with the Codesphere services, so there is usually no direct interaction with them. Use any method to generate strong passwords, e.g.
openssl rand -base64 16
Generate domain auth keys
Codesphere features domain validation to verify new custom domains. For this, secrets are created from a private/public keypair.
These need to be specified in `prod.vault.yaml`. To generate them, use these commands:
openssl ecparam -name prime256v1 -genkey -noout -out domain_auth_key.pem
openssl ec -in domain_auth_key.pem -pubout -out domain_auth_public.pem
Create Secrets File (prod.vault.yaml)
This file will store all your secrets. Create it at a path like /home/<myuser>/secrets/prod.vault.yaml. <myuser> can be a dedicated user or root.
Please remove all comments from the prod.vault.yaml before encrypting it (see section 3.4).
The -----BEGIN PRIVATE KEY----- snippets are just meant as placeholders. Replace with the content/format of your generated files.
See OpenSSH vs. OpenSSL format for some background.
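Should a consuming tool expect the SSH private key in PEM rather than the newer OpenSSH format, one way to convert it in place (assuming the key has no passphrase, per the note above; keep a backup) is:

```
# Rewrites ceph_id_rsa from OpenSSH to PEM format in place (no passphrase set)
ssh-keygen -p -P "" -N "" -m PEM -f ./ceph_id_rsa
```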
# /home/<myuser>/secrets/prod.vault.yaml
secrets:
# --- Common Secrets ---
- name: cephSshPrivateKey
file:
# Content of 'ceph_id_rsa' generated in section 3.1.1
name: id_rsa
content: |
-----BEGIN OPENSSH PRIVATE KEY----- # Or appropriate type for your key
...
-----END OPENSSH PRIVATE KEY-----
- name: selfSignedCaKeyPem # Or your existing CA key if using one for Ingress
file:
name: key.pem
# Content of 'ca.key' (or your existing CA key) from section 3.1.2
content: |
-----BEGIN PRIVATE KEY----- # Or appropriate type for your key
...
-----END PRIVATE KEY-----
- name: domainAuthPrivateKey
file:
name: key.pem
# Content of 'domain_auth_key.pem' from section 3.1.4
content: |
-----BEGIN EC PRIVATE KEY-----
...
-----END EC PRIVATE KEY-----
- name: domainAuthPublicKey
file:
name: key.pem
# Content of 'domain_auth_public.pem' from section 3.1.4
content: |
-----BEGIN PUBLIC KEY-----
...
-----END PUBLIC KEY-----
# (Optional) If you provisioned your managed service backend to authenticate
# requests using an API_KEY, then you need to set that API_KEY here
#- name: managedServiceSecrets
# fields:
# # JSON array of objects
# password: |-
# [
# {
# "name": "postgres", # provider name
# "version": "v1" # provider version
# "version": "v1", # provider version
# "secret": "MY-API-KEY-123123"
# }
# }
# ]
# --- External Registry Credentials (if used) ---
# Optional when using a Codesphere-managed K8s, mandatory when using an external K8s
- name: registryUsername
fields:
password: 'YOUR_REGISTRY_USERNAME'
- name: registryPassword
fields:
password: 'YOUR_REGISTRY_PASSWORD'
# --- Optional: If Installing PostgreSQL ---
- name: postgresPassword
fields:
# Generate a strong primary admin password (e.g., 25 characters)
password: 'YOUR_POSTGRES_ADMIN_PASSWORD'
- name: postgresReplicaPassword
fields:
# Generate a strong replica password
password: 'YOUR_POSTGRES_REPLICA_PASSWORD'
- name: postgresPrimaryServerKeyPem
file:
name: primary.key # Internal name, doesn't need to match filename
# Content of 'pg_primary.key' from section 3.1.3
content: |
-----BEGIN RSA PRIVATE KEY-----
...
-----END RSA PRIVATE KEY-----
- name: postgresReplicaServerKeyPem
file:
name: replica.key # Internal name
# Content of 'pg_replica.key' from section 3.1.3
content: |
-----BEGIN RSA PRIVATE KEY-----
...
-----END RSA PRIVATE KEY-----
# --- Optional: If Using External Kubernetes (see config.yaml) ---
- name: kubeConfig
file:
name: kubeConfig # Internal name
content: |
apiVersion: v1
kind: Config
clusters:
- cluster:
certificate-authority-data: ...
server: https://<your-k8s-api-server>
name: external-cluster
contexts:
- context:
cluster: external-cluster
user: external-admin
name: external-context
current-context: external-context
users:
- name: external-admin
user:
client-certificate-data: ...
client-key-data: ...
# ... (rest of your admin kubeconfig)
# Postgres Codesphere Users & Passwords
# Leave usernames unchanged, including *_blue
- name: postgresUserAuth
fields:
password: auth_blue
- name: postgresUserDeployment
fields:
password: deployment_blue
- name: postgresUserIde
fields:
password: ide_blue
- name: postgresUserMarketplace
fields:
password: marketplace_blue
- name: postgresUserPayment
fields:
password: payment_blue
- name: postgresUserPublicApi
fields:
password: public_api_blue
- name: postgresUserTeam
fields:
password: team_blue
- name: postgresUserWorkspace
fields:
password: workspace_blue
- name: postgresPasswordAuth
fields:
password:
- name: postgresPasswordDeployment
fields:
password:
- name: postgresPasswordIde
fields:
password:
- name: postgresPasswordMarketplace
fields:
password:
- name: postgresPasswordPayment
fields:
password:
- name: postgresPasswordPublicApi
fields:
password:
- name: postgresPasswordTeam
fields:
password:
- name: postgresPasswordWorkspace
fields:
password:
# --- Optional: Git Provider OAuth Credentials (enable in config.yaml) ---
# GitHub
- name: githubAppsClientId
fields:
password: 'YOUR_GITHUB_APP_CLIENT_ID'
- name: githubAppsClientSecret
fields:
password: 'YOUR_GITHUB_APP_CLIENT_SECRET'
# GitLab
- name: gitlabAppClientId
fields:
password: 'YOUR_GITLAB_APP_CLIENT_ID'
- name: gitlabAppClientSecret
fields:
password: 'YOUR_GITLAB_APP_CLIENT_SECRET'
# Bitbucket
- name: bitbucketAppsClientId
fields:
password: 'YOUR_BITBUCKET_APP_CLIENT_ID'
- name: bitbucketAppsClientSecret
fields:
password: 'YOUR_BITBUCKET_APP_CLIENT_SECRET'
# Azure DevOps
- name: azureDevOpsAppClientId
fields:
password: 'YOUR_AZUREDEVOPS_APP_CLIENT_ID'
- name: azureDevOpsAppClientSecret
fields:
password: 'YOUR_AZUREDEVOPS_APP_CLIENT_SECRET'
Fill in the ... placeholders with your actual key/certificate content and provide strong passwords.
Create Configuration File (config.yaml)
This file defines the structure and settings of your Codesphere installation. Create it at a path like /home/<myuser>/secrets/config.yaml.
# /home/<myuser>/secrets/config.yaml
dataCenter:
id: 1
name: main
city: Karlsruhe # Your datacenter city
countryCode: DE # Your datacenter country code
secrets:
baseDir: /home/<myuser>/secrets/ # Path to your secrets directory (where prod.vault.yaml is)
# If using an external container registry
# Optional when using a Codesphere-managed K8s, mandatory when using an external K8s
registry:
server: "my-registry.example.com"
replaceImagesInBom: true # Optional, should be set true if using an external registry
loadContainerImages: true # Optional, set to true if images should be loaded from the installer bundle
# --- PostgreSQL Configuration ---
# Choose one: "Install New PostgreSQL" OR "Use External PostgreSQL"
postgres:
# Option 1: Install New PostgreSQL (Refer to secrets file for passwords & keys)
# CA certificate for PostgreSQL (pg_ca.pem from section 3.1.3)
caCertPem: |
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
primary:
sslConfig:
# Primary PostgreSQL server certificate (pg_primary.pem from section 3.1.3)
serverCertPem: |
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
ip: 10.50.0.2 # Real IP of primary PostgreSQL server
hostname: pg-primary-node # Hostname of the primary PostgreSQL node
replica:
ip: 10.50.0.3 # Real IP of replica PostgreSQL server
name: replica1 # can be arbitrary, however PostgreSQL only allows [a-z0-9_]
sslConfig:
# Replica PostgreSQL server certificate (pg_replica.pem from section 3.1.3)
serverCertPem: |
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
# End of Option 1
# Option 2: Use External PostgreSQL
# caCertPem: | # CA certificate of your external PostgreSQL server (if using SSL)
# -----BEGIN CERTIFICATE-----
# ...
# -----END CERTIFICATE-----
# serverAddress: "your-external-postgres-host:5432"
# # Ensure 'postgresPassword' is set in prod.vault.yaml for the external DB user
# End of Option 2
# --- Ceph Configuration ---
ceph:
csiKubeletDir: /var/lib/k0s/kubelet # Optional; the k0s default is shown. Adjust if your kubelet directory differs (e.g., when not using k0s)
cephAdmSshKey:
# Public key part of 'ceph_id_rsa.pub' generated in section 3.1.1
publicKey: >-
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQC... ceph
nodesSubnet: 10.50.0.0/25 # Subnet where Ceph nodes are located
hosts:
# The actual hostname of the VM (confirm by running 'hostname').
# Hostnames don't need to be resolvable (DNS), but are checked by Ceph
- hostname: ceph-node-0
ipAddress: 10.50.0.2 # Replace with real IP of a Ceph node
isMaster: true # Only one master allowed
- hostname: ceph-node-1
ipAddress: 10.50.0.3 # Replace with real IP of another Ceph node
isMaster: false
- hostname: ceph-node-2
ipAddress: 10.50.0.4 # Replace with real IP of another Ceph node
isMaster: false
# OSD (Object Storage Daemon) configuration. Adjust to your hardware.
# 'dataDevices' cannot be empty. See Ceph documentation for 'size' and 'limit' syntax.
osds:
- specId: default
placement:
host_pattern: '*' # Apply to all hosts defined above
dataDevices: # Devices for storing data
# Example: all available devices, or specify by size, model, etc.
# all: true
size: '300G:' # Disks 300GB or larger
limit: 2 # Use up to 2 such disks per host for data
dbDevices: # Devices for BlueStore internal metadata (DB/WAL)
size: '100G:200G' # Disks between 100GB and 200GB
limit: 1 # Use 1 such disk per host for metadata-DB
# --- Kubernetes Configuration ---
# Choose one: "Install New Kubernetes" OR "Use External Kubernetes"
# kubernetes.managedByCodesphere should be set accordingly
kubernetes:
# Option 1: Install New Kubernetes (using k0s)
managedByCodesphere: true
apiServerHost: 10.50.0.2 # External address for K8s API (LB, DNS, or Control Plane IP)
controlPlanes:
- ipAddress: 10.50.0.2 # Real IP of the K8s control-plane server
workers:
- ipAddress: 10.50.0.2 # Can be a control-plane and worker
- ipAddress: 10.50.0.3 # Real IP of a K8s worker server
- ipAddress: 10.50.0.4 # Real IP of another K8s worker server
# End of Option 1
# Option 2: Use External Kubernetes
# managedByCodesphere: false
# podCidr: "100.96.0.0/11" # Pod network CIDR of your external cluster
# serviceCidr: "100.64.0.0/13" # Service network CIDR of your external cluster
# Ensure 'kubeConfig' is set in prod.vault.yaml for the external cluster
# End of Option 2
# --- Cluster Wide Settings (Applies to both installed/external k8s) ---
cluster:
certificates: # CA for services accessed by Codesphere users
ca:
algorithm: RSA
keySizeBits: 2048
# Content of 'ca.pem' (or your Ingress CA cert) from section 3.1.2
certPem: |
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
monitoring:
prometheus:
# optional, enable if external monitoring by Codesphere SRE is agreed
remoteWrite:
enabled: false
clusterName: my-cluster-name
gateway: # For Codesphere internal services
serviceType: "LoadBalancer" # or "ExternalIP"
# annotations: # Optional: for cloud provider specific LB config
# Example Azure:
# service.beta.kubernetes.io/azure-load-balancer-ipv4: <IP>
# service.beta.kubernetes.io/azure-load-balancer-resource-group: <rg>
ipAddresses: # Required if serviceType is "ExternalIP"
- 10.51.0.2 # Example IP
- 10.51.0.3 # Example IP
publicGateway: # For user workspaces
serviceType: "LoadBalancer" # or "ExternalIP"
# annotations: {}
ipAddresses: # Required if serviceType is "ExternalIP"
- 10.52.0.2 # Example IP
- 10.52.0.3 # Example IP
metallb:
# This is the primary switch to enable or disable the MetalLB integration.
# Set to 'true' for MetalLB to be installed and configured.
enabled: true
# Defines a list of IP address pools that MetalLB can use. A service
# will be allocated an IP from a pool that is advertised by either L2 or BGP.
pools:
- # A unique name to identify this IP address pool.
# This name is referenced in the l2 or bgp advertisement sections.
name: "default-pool"
# A list of IP addresses that MetalLB can manage.
# These ranges determine what IPs are available for your LoadBalancer services.
ipAddresses:
- "10.10.10.100-10.10.10.200" # An IP range for general purpose services.
- "192.168.5.0/24" # A CIDR block for another set of services.
- # You can define multiple pools for different purposes.
name: "special-services-pool"
ipAddresses:
- "172.17.15.1-172.17.15.10" # A smaller, dedicated range for specific services.
# (Optional) Configures Layer 2 advertisement. In L2 mode, one node in the cluster
# advertises the service IP on the local network using ARP/NDP.
l2:
- # A unique name for this L2 advertisement configuration.
name: "default-l2-advertisement"
# Specifies which IP address pools this L2 configuration should announce.
# This links the L2 mechanism to the IPs defined in the 'pools' section.
pools:
- "default-pool"
# (Optional) Restricts which nodes can advertise the IPs for this L2 config.
# If this is not defined, all nodes in the cluster are eligible.
nodeSelectors:
- matchLabels:
# This selector ensures that only nodes with the label 'role' set to 'frontend'
# will advertise IPs from the 'default-pool' via L2.
'role': 'frontend'
# (Optional) Configures BGP (Border Gateway Protocol) advertisement. In BGP mode,
# nodes peer with your network routers to advertise routes for the service IPs.
bgp:
- # A unique name for this BGP advertisement configuration.
name: "main-bgp-advertisement"
# Specifies which IP address pools this BGP configuration should announce.
# Here, we are advertising a different pool via BGP.
pools:
- "special-services-pool"
# BGP peering configuration details.
config:
# The Autonomous System Number (ASN) of your Kubernetes cluster.
myASN: 65001
# The Autonomous System Number (ASN) of the external BGP peer (your router).
peerASN: 65100
# The IP address of the BGP peer to connect to.
peerAddress: "192.168.1.1"
# (Optional) The name of a BFD Profile for fast failure detection.
# This would be configured separately in MetalLB's native configuration.
bfdProfile: "fast-detection"
# (Optional) Restricts which nodes can establish this BGP peering session.
# Useful if only border nodes are connected to the peering router.
nodeSelectors:
- matchLabels:
# This selector ensures that only nodes with the label 'kubernetes.io/hostname'
# set to 'edge-node-01' will peer with 192.168.1.1.
'kubernetes.io/hostname': 'edge-node-01'
# --- Codesphere Application Configuration ---
codesphere:
domain: "codesphere.yourcompany.com" # Main domain for Codesphere UI/API
workspaceHostingBaseDomain: "ws.yourcompany.com" # Base domain for workspaces (*.ws.yourcompany.com should point to publicGateway IPs)
# A primary public IP for workspaces (use one of the publicGateway). If assigned by a LoadBalancer and not known yet,
# leave blank and add later once known.
publicIp: "10.52.0.2"
customDomains:
cNameBaseDomain: "custom.yourcompany.com" # For custom domain CNAMEs
dnsServers: [] # e.g., ["1.1.1.1", "8.8.8.8"] IP addresses of DNS servers for resolving custom domains
experiments: [] # List of Codesphere experimental features to enable
features: # Map of Codesphere enabled/disabled features. See [Feature Flags Docs](./feature-flags.mdx) for more details.
# email-signup: true
# email-signin: true
# billing: false
extraCaPem: "" # Optional: PEM of an extra custom root CA to be trusted by Codesphere services/workspaces
extraWorkspaceEnvVars: {} # e.g., { "HTTP_PROXY": "http://proxy.example.com:8080" }
extraWorkspaceFiles: []
# - path: /etc/custom-certs/my-ca.crt
# content: |
# -----BEGIN CERTIFICATE-----
# ...
# -----END CERTIFICATE-----
# Optional: if specific metallb pools should be available for workspace -> ip address assignment
# ipService:
# loadBalancerKind: metallb
# addressPools:
# - "internal-pool"
# Override default workspace images. Might be needed if you are using custom base images
# workspaceImages:
# agent: optional
# bomRef:
# agentGpu: optional
# bomRef:
# server: optional
# bomRef:
# vpn: optional
# bomRef:
deployConfig:
images:
ubuntu-24.04:
name: 'Ubuntu 24.04'
supportedUntil: '2028-05-31'
flavors:
default:
# For custom images, specify the full image name including registry
# Do not add a tag, since the tag is managed by Codesphere (see Appendix)
# image: 'registry.corp42.net/codesphere-custom-images/workspace-agent-24.04-mycorp'
image:
# For default base images: needs to match a workspace image in the installer BOM.
# Double-check <deps.tar.gz>/bom.json when in doubt
bomRef: 'workspace-agent-24.04'
pool:
1: 1 # Number of warm instances to keep pooled
oauth:
oidc:
enabled: false
type: oidc
name: ""
issuerUrl: ""
scopes: ['openid', 'email', 'profile'] # For Azure AD must include 'https://graph.microsoft.com/User.Read'
plans: # Define available workspace and hosting plans
hostingPlans:
1: # ID must be a number
cpuTenth: 10 # 1 CPU core (10 tenths)
gpuParts: 0 # GPU resources
memoryMb: 2048 # 2 GB RAM
storageMb: 20480 # 20 GB persistent storage
tempStorageMb: 1024 # Temporary storage
workspacePlans:
1: # ID must be a number
name: "Standard Developer" # Display name of the plan
hostingPlanId: 1 # Maps to an ID from hostingPlans
maxReplicas: 3 # Max concurrent replicas for a workspace
onDemand: true # Allow on-demand (start/stop) workspaces
# Optional, actually requested resources are lower than defined in the plan
# by this factor to improve utilization. Leave on default values unless you have specific needs.
# underprovisionFactors:
# cpu: 0.5
# memory: 0.75
gitProviders:
github:
enabled: false
url: "https://github.com"
api:
baseUrl: "https://api.github.com"
oauth:
issuer: "https://github.com"
authorizationEndpoint: "https://github.com/login/oauth/authorize"
tokenEndpoint: "https://github.com/login/oauth/access_token"
gitlab:
enabled: false
url: "https://gitlab.com" # For self-hosted: "https://your-gitlab.example.com"
api:
baseUrl: "https://gitlab.com" # For self-hosted, just the base URL. "/api/v4" is appended automatically.
oauth:
issuer: "https://gitlab.com"
authorizationEndpoint: "https://gitlab.com/oauth/authorize"
tokenEndpoint: "https://gitlab.com/oauth/token"
bitbucket:
enabled: false # Bitbucket Server/Data Center
url: "https://bitbucket.org" # Your Bitbucket instance URL
api:
baseUrl: "https://api.bitbucket.org/2.0" # For Server: use appropriate API base
oauth: # OAuth1 for Bitbucket Server, OAuth2 for Cloud
issuer: "https://bitbucket.org"
authorizationEndpoint: "https://bitbucket.org/site/oauth2/authorize"
tokenEndpoint: "https://bitbucket.org/site/oauth2/access_token"
azureDevOps:
enabled: false
url: "https://dev.azure.com"
api:
baseUrl: "https://dev.azure.com"
oauth:
issuer: "https://login.microsoftonline.com"
authorizationEndpoint: "https://login.microsoftonline.com/common/oauth2/v2.0/authorize"
tokenEndpoint: "https://login.microsoftonline.com/common/oauth2/v2.0/token"
clientAuthMethod: 'client_secret_post'
scope: 'openid offline_access https://app.vssps.visualstudio.com/vso.code_full'
managedServices:
# Configure Managed Service Providers in this section.
# See Appendix for detailed configuration options
# - name: postgres
# api: # ...
# (Optional) If multiple clusters share the same masterdata database,
# set this scope variable to prevent reconciliation conflicts.
# managedServiceScope: production-cluster-1
# Managed services that use a landscape-based backend will use this plan ID
# Must match one of the entries in codesphere.plans
managedServiceWorkspacePlanId: 1
managedServiceBackends:
# If you configure providers in codesphere.managedServices, then you also need
# to provision the corresponding backends in this section.
postgres: {
# Leave empty for default configuration
}
Remember to replace placeholder values with your actual configuration. For detailed information on configuring plans and gitProviders (including generating OAuth credentials), refer to the Appendix at the end of this guide.
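Before running the installer, it can save a round trip to check that the file parses as valid YAML. A minimal sketch, assuming Python 3 with PyYAML happens to be available on your setup machine (any YAML linter works just as well):

```
# Assumes Python 3 with PyYAML installed (pip install pyyaml)
python3 -c "import yaml; yaml.safe_load(open('/home/<myuser>/secrets/config.yaml')); print('config.yaml: YAML OK')"
```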
Initialize SOPS Secret Manager (File-based)
In order to avoid storing secrets remotely in plain text, SOPS (Secrets OPerationS) is integrated into the installer workflow. It uses Age as a file-based encryption tool.
The secrets file (`prod.vault.yaml`) is protected by the following workflow:

- Ensure there are no comments in `prod.vault.yaml`. This is necessary because SOPS does not ignore comments, and they change the file structure of the encrypted version.
- Encrypt `prod.vault.yaml` on your local machine.
- Store only the encrypted variant of `prod.vault.yaml` on any remote machine.
- Where necessary, temporarily copy the decryption key to the remote machine.
- Alternatively, use SSH port forwarding to let SOPS access the decryption key via HTTP (advanced, see 3.5).
Use the following steps:

- Install SOPS and Age on your local machine, e.g. on macOS:

  ```
  brew install sops age
  ```

- Generate an Age keypair (private + public key):

  ```
  age-keygen -o age_key.txt
  ```

  This will create `age_key.txt` containing both the private and public key. Note down the public key (starts with `age1...`); you'll need it for encryption. The private key part is identified by `AGE-SECRET-KEY-1...`. Keep this extremely secure.

- Encrypt the secrets file (`prod.vault.yaml`) locally using the Age keypair:

  ```
  # Get the public key
  age-keygen -y age_key.txt
  sops --encrypt --age <age1....> --in-place /home/<myuser>/secrets/prod.vault.yaml
  ```

  This command overwrites the unencrypted `prod.vault.yaml` directly.

- Initialize SOPS on the remote nodes (Multi-Node installer: only needed on the setup node). You can use the installer script to install SOPS as a component. For this, `deps.tar.gz` needs to be extracted first, if not already done. Then run:

  ```
  # If not already done
  tar -xvzf <INSTALLER-DIR>/deps.tar.gz -C <INSTALLER-DIR>/deps && cd <INSTALLER-DIR>/
  ./node ./install-components.js \
    --dependenciesDir=./deps \
    --config=/root/secrets/config.yaml \
    --component=sops
  ```
To edit the encrypted prod.vault.yaml file later:
export SOPS_AGE_KEY_FILE=/path/to/your/age_key.txt # Point to the file containing your private key
sops /home/<myuser>/secrets/prod.vault.yaml
This will open the decrypted file in your default editor. Save and close to re-encrypt.
If you need to apply changes to `prod.vault.yaml` after the installer has run once, apply the changes directly to the file on the setup machine. Reason: the installer steps add their own generated secrets during runtime, i.e. the file gets changed.
Security Consideration for the Age Private Key
- Keep the `age_key.txt` (containing the private key) highly secure and private. Do not store it permanently on any remote node.
- Multi-Node Installer: The `--privKey=/path/to/your/age_key.txt` flag is used to provide the key to the main installation script.
- Single-Node Installer / manual SOPS usage on a server: If you absolutely must use the key on a server (e.g., during single-node component installation), transfer it securely and remove it immediately after use, or use SSH port forwarding for temporary access if SOPS supports remote key files via HTTP (advanced).
To decrypt the secrets file, use:
# export SOPS_AGE_KEY_FILE=/path/to/temporarily/copied/age_key.txt
sops --decrypt /home/<myuser>/secrets/prod.vault.yaml
HTTP Access to the Local age_key via SSH Port Forwarding
- On your local machine (where `age_key.txt` is):

  ```
  cd /path/to/keypair_directory
  python3 -m http.server 8000
  ```

- From your local machine, SSH to the remote server, forwarding the port:

  ```
  ssh -L 9000:localhost:8000 user@remote-host
  ```

- On the remote server:

  ```
  export SOPS_AGE_KEY_FILE_URL=http://localhost:9000/age_key.txt
  sops --decrypt /home/<myuser>/secrets/prod.vault.yaml
  ```
This HTTP method is for temporary access and should be used with caution. Prefer passing the key file path directly when possible.
Multi-Node Installation Guide
This installer uses SSH to connect from a central setup machine to the various hosts and install components one by one.
Prerequisites (Recap & Specifics)
- All global prerequisites (Section 2) are met.
- Installer Machine: OpenSSH client, `scp`, Node.js v22 (shipped with the installer).
- Target Nodes: `iptables`, `vi`/`nano`, `curl`.
- SSH Access: `root` access from the installer machine to all target nodes.
- Dependencies: the `private-cloud-installer.js` script and the `deps.tar.gz` archive are on the installer machine.
- Configuration: `config.yaml` and the encrypted `prod.vault.yaml` are prepared (Section 3; see also below).
- Age Private Key: the `age_key.txt` file is accessible to the installer machine.
Image Distribution Options
The multi-node installer offers two distinct options for distributing all required container images to the target nodes.
When using an External Container Registry, the installer loads all images from the `deps.tar.gz` archive and pushes them to the registry from the management node.
For this, `registry.server` in `config.yaml` as well as `registryUsername` and `registryPassword` in `prod.vault.yaml` need to be populated correctly. The provided registry needs to be accessible by all nodes.
When no registry is provided, the images are loaded from `deps.tar.gz` directly on each node.
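For reference, these are the relevant fields, repeated from the full examples in Section 3:

```
# config.yaml (excerpt)
registry:
  server: "my-registry.example.com"

# prod.vault.yaml (excerpt)
- name: registryUsername
  fields:
    password: 'YOUR_REGISTRY_USERNAME'
- name: registryPassword
  fields:
    password: 'YOUR_REGISTRY_PASSWORD'
```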
Synchronisation of Config Files and Secrets
The multi-node installer ensures that `config.yaml`, the encrypted `prod.vault.yaml`, and the `age_key.txt` needed to decrypt it are synced to all target nodes.
On the target nodes, they will be present within `/etc/codesphere/`.
The multi-node installer automatically modifies the value of `secrets.baseDir` in `config.yaml` before uploading it to the target nodes.
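For illustration only; the rewrite happens automatically, and this sketch assumes the default sync location named above:

```
# config.yaml on the setup machine
secrets:
  baseDir: /home/<myuser>/secrets/

# config.yaml as uploaded to a target node (illustrative)
secrets:
  baseDir: /etc/codesphere/
```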
Pre-Installation Steps on Target Nodes
Configure Inotify Watcher Limits
See Global Setup - Section 2.5. Ensure this is done on all Kubernetes nodes.
Prepare Disks for Ceph
Ensure disks intended for Ceph OSDs (data and metadata-DB) are completely empty. Caution: This command will wipe data from the disk sdx. Double-check the disk identifier. For each Ceph data/DB disk on each Ceph node:
sudo dd if=/dev/zero of=/dev/sdx bs=1M count=100 conv=fsync # sdx is your target disk, e.g., sdb, sdc
sudo wipefs -a /dev/sdx
A reboot might be necessary after cleaning disks for the changes to be fully recognized by the OS.
sudo reboot
Synchronized Clocks
Ensure time is synchronized across all servers. Use NTP (Network Time Protocol).
sudo apt update
sudo apt install chrony -y
sudo systemctl enable chrony --now
sudo chronyc sources
Verify all nodes are synchronized to a reliable time source.
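In addition to `chronyc sources`, `chronyc tracking` reports the currently selected reference and offset, which confirms whether a node is actually locked to a time source:

```
chronyc tracking
```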
Perform the Installation
Run the main installer script from your central setup machine. This command will install all components defined in your config.yaml.
node ./private-cloud-installer.js \
--archive=./deps.tar.gz \
--config=/home/<myuser>/secrets/config.yaml \
--privKey=/path/to/your/age_key.txt
The installer will proceed through the components: Docker, Postgres (if configured), Ceph, Kubernetes (k0s), SetUpCluster, and finally Codesphere.
You can add `--skipStep=loadContainerImages --skipStep=extract-dependencies` to save time if you run the installer a second time because of an error and these two steps had already finished (more details below).
Post-Installation and Troubleshooting
Ceph Specifics
-
Known Issue: Sometimes, Ceph might initialize with only one monitor daemon becoming fully healthy, leading to a non-HA (but functional) monitor setup. The installer attempts to configure multiple.
-
Troubleshooting Ceph: If Ceph has issues, you can shell into the Ceph admin container on the Ceph master node:
# On the Ceph master node (as defined in your config.yaml)
sudo ./ceph/files/cephadm --image quay.io/ceph/ceph:v18.2 --docker \
shell ceph status
If the status becomes healthy after manual checks or fixes, you might be able to restart the installer, or it might pick up where it left off (behavior depends on the installer's idempotency for Ceph).
- Debugging Ceph Manager (mgr):
# On a Ceph node running a manager
systemctl list-units | grep ceph.*mgr
# Example: ceph-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx@mgr.hostname.service
sudo systemctl restart <ceph-mgr-service-name>
sudo journalctl -u <ceph-mgr-service-name> -r # -r for reverse chronological (newest first)
- Debugging Ceph OSDs (osd): List running Docker containers to see OSDs:
# On a Ceph OSD node
sudo docker ps | grep ceph-osd
- Check logs for a specific OSD container:
sudo docker logs <container_id_or_name_of_osd>
Skip certain installation steps
Use with care! --skipStep is a "hard branch" and does not verify if a certain step can actually be skipped safely.
In certain cases, e.g. when fixing a mistake in the config.yaml or changing a secret, some time-intensive installation steps can be skipped.
You can get a list of all supported skippable steps by running:
./node ./private-cloud-installer.js -h
One or multiple steps to be skipped can be set via the `--skipStep` flag of `private-cloud-installer.js`, for example:
node ./private-cloud-installer.js \
--archive=./deps.tar.gz \
--config=/home/<myuser>/secrets/config.yaml \
--privKey=/path/to/your/age_key.txt \
--skipStep=copy-dependencies \
--skipStep=extract-dependencies
Single-Node Installation Guide
This method involves manually running installation commands for each component on the respective target hosts. You'll need to copy the dependencies and configuration files to each host.
Prerequisites (Recap & Specifics)
- All global prerequisites (Section 2) are met.
- On each Target Node: `iptables`, `vi`/`nano`, `curl`. The `install-components.js` script (from `deps.tar.gz`) will be used.
- SSH Access: You will be SSHing into each node to run commands. Commands generally require `root` or `sudo`.
- Dependencies: `deps.tar.gz` must be uploaded to every host and extracted.
- Configuration: `config.yaml` and the encrypted `prod.vault.yaml` must be available on every host where `install-components.js` is run.
- Age Private Key: `age_key.txt` needs to be accessible when commands requiring secret decryption are run (e.g., on the control plane node for most steps).
Setup Environment on Each Host
- Upload and Extract Dependencies: For each host (PostgreSQL, Ceph, Kubernetes nodes):

  ```
  # From your management machine
  scp deps.tar.gz <user>@<host_ip>:./deps.tar.gz
  # SSH into the host
  ssh <user>@<host_ip>
  sudo mkdir /opt/codesphere_deps
  sudo tar xf deps.tar.gz -C /opt/codesphere_deps
  cd /opt/codesphere_deps
  ```

  (Ensure `install-components.js` is within `./installer/files/` inside the extracted directory.)

- Upload Configuration Files: Upload `config.yaml` and the encrypted `prod.vault.yaml` (created in Section 3) to each host, typically to a directory like `/home/<user>/secrets/` or `/root/secrets/`. Also, make the `age_key.txt` available securely if needed for decryption on that host.

  ```
  # From your management machine
  scp /path/to/config.yaml <user>@<host_ip>:/root/secrets/config.yaml
  scp /path/to/prod.vault.yaml <user>@<host_ip>:/root/secrets/prod.vault.yaml
  # If direct key access is needed on the host (use with caution):
  # scp /path/to/age_key.txt <user>@<host_ip>:/root/secrets/age_key.txt
  ```

  Adjust paths in `config.yaml` (`secrets.baseDir`) accordingly if not using `/root/secrets`.

- Configure Inotify Watcher Limits: Covered in Global Setup (Section 2.5). Ensure this is done on all Kubernetes nodes.
Step-by-Step Component Installation
All commands below should generally be run as root or with sudo. Navigate to the directory where you extracted the dependencies (e.g., /opt/codesphere_deps). The --privKey path should point to your age_key.txt.
Setup Container Engine (Docker)
Run this on every host (Ceph, Kubernetes, any node that will run containers for Codesphere components if not using K8s for everything). The filesystem at /var/lib/docker (or equivalent for your container engine) should have at least 100-200GB free.
sudo ./installer/files/node ./installer/files/install-components.js \
--dependenciesDir=. \
--config=/root/secrets/config.yaml \
--component=docker
# Load required images into the local Docker cache (primarily on K8s nodes)
sudo ./installer/files/node ./installer/files/install-components.js \
--dependenciesDir=. \
--config=/root/secrets/config.yaml \
--privKey=/root/secrets/age_key.txt \
--component=loadContainerImages
Verify: sudo docker info and sudo docker image ls.
Install PostgreSQL (If not using external)
A. Install Primary PostgreSQL Node: SSH to the designated primary PostgreSQL server (as per your config.yaml).
sudo ./installer/files/node ./installer/files/install-components.js \
--dependenciesDir=. \
--config=/root/secrets/config.yaml \
--privKey=/root/secrets/age_key.txt \
--component=postgresPrimary
Verify: sudo systemctl status codesphere-postgres.service
B. Install Replica PostgreSQL Node(s): SSH to the designated replica PostgreSQL server(s).
sudo ./installer/files/node ./installer/files/install-components.js \
--dependenciesDir=. \
--config=/root/secrets/config.yaml \
--privKey=/root/secrets/age_key.txt \
--component=postgresReplica
Verify: sudo systemctl status codesphere-postgres.service
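If the service does not come up healthy on either node, its recent journal entries (newest first, service name as used above) are usually the fastest pointer:

```
sudo journalctl -u codesphere-postgres.service -r | head -n 50
```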
Install Ceph
A. Prepare Nodes (Cleanup & Sync):
- Clean Disks: (Section 4.3.2) On each Ceph node, ensure disks for OSDs are wiped clean.
- Synchronized Clocks: (Section 4.3.3) Ensure time is synchronized on all Ceph nodes.
B. Install Ceph: Run the command first on all non-master Ceph nodes, and then finally on the master Ceph node (as defined in config.yaml).
sudo ./installer/files/node ./installer/files/install-components.js \
--dependenciesDir=. \
--config=/root/secrets/config.yaml \
--privKey=/root/secrets/age_key.txt \
--component=ceph
Troubleshooting: Refer to Section 4.5.1 for Ceph troubleshooting commands.
Install Kubernetes (k0s)
A. Install Control Plane Node(s): SSH to the designated Kubernetes control plane node(s).
sudo ./installer/files/node ./installer/files/install-components.js \
--dependenciesDir=. \
--config=/root/secrets/config.yaml \
--privKey=/root/secrets/age_key.txt \
--component=kubernetes # This installs k0s control plane
After the first control plane is up, obtain a bootstrap token for worker nodes:
sudo ./kubernetes/files/k0s token create --role=worker > /tmp/k0s_worker_token.txt
Securely copy this /tmp/k0s_worker_token.txt file to all planned worker nodes.
B. Install Worker Node(s): SSH to each designated Kubernetes worker node. This step is not required if a node is both a control plane and a worker (single-node K8s setup or co-located roles).
# Ensure /tmp/k0s_worker_token.txt is present on the worker node
export K0S_TOKEN_FILE=/tmp/k0s_worker_token.txt # k0s expects this env var
sudo ./installer/files/node ./installer/files/install-components.js \
--dependenciesDir=. \
--config=/root/secrets/config.yaml \
--privKey=/root/secrets/age_key.txt \
--component=kubernetes # This installs k0s worker
Wait for all worker nodes to become ready. Check from a control plane node:
sudo ./kubernetes/files/k0s kubectl get nodes -o wide
Note: Control plane nodes might not show up in `kubectl get nodes` if they are controller-only, i.e. not also configured to act as workers.
Create Initial Kubernetes Resources
Run these commands on a Kubernetes control plane node:
- Create Codesphere Namespace:
sudo ./kubernetes/files/k0s kubectl create ns codesphere
-
Create Dummy Error Page Server (Temporary): This is a placeholder needed by
setUpClusterif certain services aren't up yet.
sudo ./kubernetes/files/k0s kubectl -n codesphere create svc clusterip error-page-server --tcp=8080:8080
Install Foundational Components (setUpCluster)
Run this on a Kubernetes control plane node:
sudo ./installer/files/node ./installer/files/install-components.js \
--dependenciesDir=. \
--config=/root/secrets/config.yaml \
--privKey=/root/secrets/age_key.txt \
--component=setUpCluster
Install Codesphere
A. Update Docker Images (Optional - for upgrades or to ensure latest): If you are performing an update or want to ensure the latest images (as per deps.tar.gz) are used:
- Load images to Docker cache on K8s nodes: Run on each Kubernetes node (control plane and workers):
sudo ./installer/files/node ./installer/files/install-components.js \
--dependenciesDir=. \
--config=/root/secrets/config.yaml \
--privKey=/root/secrets/age_key.txt \
--component=loadContainerImages
-
Make new images available to k0s (if using k0s): Run on each Kubernetes node. This will temporarily make nodes NotReady.
sudo ./installer/files/node ./installer/files/install-components.js \
--dependenciesDir=. \
--config=/root/secrets/config.yaml \
--privKey=/root/secrets/age_key.txt \
--component=reloadKubernetesImages
- Wait for all nodes to become Ready again (approx. 5 minutes). Check with:
sudo ./kubernetes/files/k0s kubectl get nodes -o wide
B. Final Installation: Run this on a Kubernetes control plane node:
- Delete Dummy Error Page Server:
sudo ./kubernetes/files/k0s kubectl -n codesphere delete svc error-page-server
-
Install Codesphere Application:
sudo ./installer/files/node ./installer/files/install-components.js \
--dependenciesDir=. \
--config=/root/secrets/config.yaml \
--privKey=/root/secrets/age_key.txt \
--component=codesphere
- Your Codesphere installation should now be complete. Access it via the domain specified in `codesphere.domain` in your `config.yaml`.
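As a quick smoke test from a client machine, check that the UI answers over HTTPS (domain as in the config.yaml example; `-k` skips TLS verification in case your machine does not yet trust the Ingress CA):

```
curl -sSIk https://codesphere.yourcompany.com   # replace with your codesphere.domain
```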
Appendix
Codesphere Plans Configuration
Plans define the resources available for developer workspaces. Configure this under the codesphere.plans section in your config.yaml.
- `hostingPlans`: Defines raw resource allocations. IDs must be numbers.
  - `cpuTenth`: CPU cores in tenths (e.g., `10` = 1 core, `25` = 2.5 cores).
  - `gpuParts`: GPU allocation (specific to your GPU setup).
  - `memoryMb`: Memory in megabytes.
  - `storageMb`: Persistent storage for the workspace in megabytes.
  - `tempStorageMb`: Ephemeral storage in megabytes.
  - `pooledInstances`: Number of pre-warmed instances of this plan to keep ready.
- `workspacePlans`: Defines user-selectable plans, mapping to `hostingPlans`. IDs must be numbers.
  - `name`: Display name of the plan.
  - `hostingPlanId`: Links to an ID in `hostingPlans`.
  - `maxReplicas`: Maximum number of concurrent instances for a single workspace on this plan.
  - `onDemand`: `true` if users can start/stop these workspaces, `false` for always-on.
Example: (Already included in the main config.yaml example)
codesphere:
# ... other codesphere settings ...
plans:
hostingPlans:
1:
cpuTenth: 10
gpuParts: 0
memoryMb: 2048
storageMb: 20480
tempStorageMb: 1024
2:
cpuTenth: 20
memoryMb: 4096
storageMb: 51200
tempStorageMb: 2048
workspacePlans:
1:
name: "Basic"
hostingPlanId: 1
maxReplicas: 1
onDemand: true
2:
name: "Pro"
hostingPlanId: 2
maxReplicas: 3
onDemand: true
Git Provider Configuration
Configure Git provider integrations under codesphere.gitProviders in config.yaml. For each enabled provider, you'll also need to add corresponding clientId and clientSecret to your prod.vault.yaml secrets file (see Section 3.2).
General Structure for each provider:
# providerName e.g., github, gitlab
# providerName:
# enabled: true # or false
# url: "Base URL of the provider"
# api:
# baseUrl: "API base URL"
# oauth:
# issuer: "OAuth issuer URL"
# authorizationEndpoint: "OAuth authorization URL"
# tokenEndpoint: "OAuth token URL"
# # Other provider-specific OAuth settings like scope, clientAuthMethod
Generating Credentials (Examples):
-
GitLab:
- Go to your GitLab Group (or User Settings for a user-level app) > Settings > Applications.
- Create a new application (e.g., "Codesphere Git Integration").
- Redirect URI / Callback URL:
https://<codesphere.domain>/ide/auth/gitlab/callback(replace<codesphere.domain>with your Codesphere domain). - Scopes: Select
api,read_repository,write_repository. (Ensureopenid,profile,emailare also available/selected if needed for user profile info). - Save the application. You'll get an "Application ID" (
gitlabAppClientId) and a "Secret" (gitlabAppClientSecret).
-
GitHub:
- Go to your GitHub organization settings > Developer settings > OAuth Apps > New OAuth App.
- Application name: e.g., "Codesphere"
- Homepage URL:
https://<codesphere.domain> - Authorization callback URL:
https://<codesphere.domain>/ide/auth/github/callback - You'll get a "Client ID" (
githubAppsClientId) and generate a "Client Secret" (githubAppsClientSecret).
-
Bitbucket (Server/Data Center - typically Application Links for OAuth 1.0a or OAuth 2.0 if supported):
- Admin Settings > System > Application Links.
- Create a new link. Choose "External Application", "Incoming".
- Redirect URL:
https://<codesphere.domain>/ide/auth/bitbucket/callback - Permissions: Repository read/write.
- You'll get a "Consumer Key" (
bitbucketAppsClientId) and "Consumer Secret" (bitbucketAppsClientSecret) or similar, depending on OAuth version.
-
Azure DevOps:
- Register an application in Azure Active Directory.
- Redirect URI:
https://<codesphere.domain>/ide/auth/azureDevOps/callback(ensure it's added as a Web redirect URI). - Note the "Application (client) ID" (
azureDevOpsAppClientId). - Go to "Certificates & secrets" -> "New client secret" to generate
azureDevOpsAppClientSecret. Set a reminder to rotate this secret as it has an expiry. - API Permissions: Add permissions for "Azure DevOps" ->
user_impersonationand ensurevso.code_fullis included in the scope inconfig.yaml.
Remember to add these Client IDs and Secrets to your prod.vault.yaml file and encrypt it. Example secrets names:
- `githubAppsClientId`, `githubAppsClientSecret`
- `gitlabAppClientId`, `gitlabAppClientSecret`
- `bitbucketAppsClientId`, `bitbucketAppsClientSecret`
- `azureDevOpsAppClientId`, `azureDevOpsAppClientSecret`
Managed Service Configuration
Every service offered through the Managed Services section is configured as an individual Managed Service Provider in the `codesphere.managedServices` array in `config.yaml`.
The configuration follows a specific schema.
Managed Service Providers can be external to Codesphere, i.e. an external API is called, but there are also built-in Managed Services, where the Provider runs within Codesphere. The current release ships one such Provider, for PostgreSQL.
The following snippet shows an exemplary configuration of the Postgres Managed Service Provider. If you use this Provider, please ensure you enable the respective backend in the `managedServiceBackends` section as well.
- name: postgres
version: v1
backend:
api:
endpoint: "http://ms-backend-postgres.postgres-operator:3000/api/v1/postgres"
author: Codesphere
category: Database
displayName: PostgreSQL
iconUrl: /ide/assets/managed-services/postgresql.svg
configSchema:
type: object
properties:
version:
type: string
description: Version of the Postgres DB. Includes pre-installed extensions compatible with this version. Extension versions are managed and cannot be customized.
enum:
- '17.6'
- '16.10'
default: '17.6'
readOnly: false
userName:
type: string
default: app
pattern: '^(?!postgres$)'
databaseName:
type: string
default: app
required: []
additionalProperties: false
detailsSchema:
type: object
properties:
port:
type: integer
hostname:
type: string
dsn:
type: string
ready:
type: boolean
required:
- port
- hostname
- dsn
- ready
additionalProperties: false
secretsSchema:
type: object
properties:
userPassword:
type: string
format: password
superuserPassword:
type: string
format: password
required:
- userPassword
- superuserPassword
additionalProperties: false
description: >-
Open-source database system tailored for efficient data management and
scalability. Deployed on Codesphere using the CNPG K8s Operator.
plans:
- id: 0
description: 0.5 vCPU / 500 MB Memory
name: Small
parameters:
storage:
pricedAs: storage-mb
schema:
description: Storage (MB)
type: integer
default: 10000
readOnly: false
cpu:
pricedAs: cpu-tenths
schema:
description: CPU Tenths
type: number
default: 5
readOnly: true
memory:
pricedAs: ram-mb
schema:
description: Memory (MB)
type: integer
default: 500
readOnly: true
- id: 1
description: 1 vCPU / 1 GB Memory
name: Medium
parameters:
storage:
pricedAs: storage-mb
schema:
description: Storage (MB)
type: integer
default: 25000
readOnly: false
cpu:
pricedAs: cpu-tenths
schema:
description: CPU Tenths
type: number
default: 10
readOnly: true
memory:
pricedAs: ram-mb
schema:
description: Memory (MB)
type: integer
default: 1000
readOnly: true
- id: 2
description: 1 vCPU / 2 GB Memory
name: Medium High-Mem
parameters:
storage:
pricedAs: storage-mb
schema:
type: integer
default: 25000
readOnly: false
cpu:
pricedAs: cpu-tenths
schema:
type: number
default: 10
readOnly: true
memory:
pricedAs: ram-mb
schema:
type: integer
default: 2000
readOnly: true
- id: 3
description: 2 vCPU / 4 GB Memory
name: Large
parameters:
storage:
pricedAs: storage-mb
schema:
type: integer
default: 50000
readOnly: false
cpu:
pricedAs: cpu-tenths
schema:
type: number
default: 20
readOnly: true
memory:
pricedAs: ram-mb
schema:
type: integer
default: 4000
readOnly: true
- id: 4
description: 4 vCPU / 8 GB Memory
name: Extra Large
parameters:
storage:
pricedAs: storage-mb
schema:
type: integer
default: 150000
readOnly: false
cpu:
pricedAs: cpu-tenths
schema:
type: number
default: 40
readOnly: true
memory:
pricedAs: ram-mb
schema:
type: integer
default: 8000
readOnly: true
Create Custom Workspace Base Images
The default base image for Workspaces already ships with many useful tools and libraries.
However, you can create your own custom base images.
One reason to customize base images is packages that are not available through Nix and need to be installed via apt.
The Codesphere OMS CLI provides a convenient way to start building your own custom base images.
Prerequisites:

- On your host, install Docker and the Buildx plugin (if not already installed):

  ```
  sudo apt install docker.io docker-buildx
  ```

- Install the OMS CLI as described at https://github.com/codesphere-cloud/oms
- Use the `extend baseimage` command in the OMS CLI, see:

  ```
  oms-cli beta extend baseimage -h
  ```

- This command extracts the default base image from the Codesphere installer bundle, loads it into your local Docker image cache, and generates a Dockerfile that you can extend.
- Confirm with `docker image ls` that the base image (`ghcr.io/codesphere-cloud/codesphere-monorepo/workspace-agent-VERSION`) is available locally.
- Edit the generated Dockerfile to add your custom dependencies.
- As a best practice, place the Dockerfile into a subdirectory, e.g. `./docker`. This ensures that in the following docker build command, only the necessary files are sent to the Docker daemon.
- Retrieve the base image tag from the Dockerfile, e.g. `codesphere-1-67-1-4dd9b346cc`, and use the same tag for your custom image.
- Build the custom image with the following command:

  ```
  docker buildx build -f ./docker/custom.Dockerfile -t workspace-agent-24.04-mycorp:<TAG_TO_USE> --load ./docker
  ```

- Tag and push the image to your own registry with the same tag as the original image:

  ```
  docker tag workspace-agent-24.04-mycorp:<TAG_TO_USE> <YOUR_REGISTRY_URL>/workspace-agent-24.04-mycorp:<TAG_TO_USE>
  docker login <YOUR_REGISTRY_URL> # if not already logged in
  docker push <YOUR_REGISTRY_URL>/workspace-agent-24.04-mycorp:<TAG_TO_USE>
  ```
The image name can be freely chosen, but double-check that the tag of your custom image matches the original base image tag exactly. If the tags do not match, Codesphere will currently not be able to find your custom image when starting Workspaces.
Access to Cluster Monitoring (Grafana)
As part of the regular Codesphere installation, a Grafana instance with predefined dashboards is automatically deployed into the cluster. The login credentials are automatically generated. The Grafana UI can be reached via port-forwarding to localhost.
Steps to access the Grafana instance:
- Retrieve credentials:
  - Username: `admin`
  - Password:

    ```
    kubectl get secret grafana -n monitoring -o jsonpath={.data.admin-password} | base64 -d
    ```

- Access Grafana in your local browser:
  - Start the port-forward with:

    ```
    kubectl port-forward deployment/grafana 3000:3000 -n monitoring
    ```

  - In your local browser, visit `localhost:3000`.
  - Log in with the credentials from the previous step.
- Hint: if you only have kubectl access via e.g. a jumphost, you can use SSH port forwarding to continue to your local machine:

  ```
  ssh -L 3000:localhost:3000 user@jumphost
  ```