
Automating Tasks with the API

While the Codesphere CLI is excellent for terminal workflows and pre-defined CI/CD use cases, the Public API unlocks the ability to write custom scripts that automate your pipelines, manage infrastructure lifecycles, and integrate Codesphere with your internal tooling.

This guide covers scripting fundamentals and provides practical automation examples.


Getting Started

To interact with the Codesphere API programmatically, you need two things: an HTTP client and a secure way to pass your authentication token.

Authentication Headers

Every API request requires an Authorization header containing your API token formatted as a Bearer token.

Secret Management

Never hardcode your API token in your scripts. Always load it from an environment variable or a secure secret manager (e.g., AWS Secrets Manager, GitHub Secrets) to prevent accidental leaks in version control.
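
One way to enforce this in Python is a small helper that fails fast when the variable is missing, so a misconfigured script aborts before making any requests (the helper name and default variable name here are just conventions, not part of the API):

```python
import os

def load_token(var_name: str = "CODESPHERE_TOKEN") -> str:
    """Read the API token from the environment, failing fast if it is missing."""
    token = os.environ.get(var_name)
    if not token:
        raise RuntimeError(f"Missing environment variable: {var_name}")
    return token
```

Failing fast like this also prevents confusing downstream errors where an empty token is silently sent as `Bearer None`.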

Basic HTTP Client Setup

Here is an example of how to set up a basic authenticated request to fetch your team's workspaces. Ideally, test individual requests with a tool like Postman before writing these scripts.

Using Python's popular requests library:

```python
import os
import requests

# Load credentials from environment variables
API_TOKEN = os.environ.get("CODESPHERE_TOKEN")
TEAM_ID = os.environ.get("CODESPHERE_TEAM_ID")
BASE_URL = "https://codesphere.com/api"

headers = {
    "Authorization": f"Bearer {API_TOKEN}",
    "Content-Type": "application/json"
}

# Example: Fetch all workspaces
response = requests.get(f"{BASE_URL}/workspaces?teamId={TEAM_ID}", headers=headers)

if response.status_code == 200:
    print("Workspaces fetched successfully!")
    print(response.json())
else:
    print(f"Error: {response.status_code} - {response.text}")
```

Automation Examples

Below are real-world examples of how you can use the API to automate tedious lifecycle management tasks.

Integrating Deployment Status into Internal Tools

If your company uses a custom internal developer portal (like Backstage) or a custom Slack bot, you may want to periodically poll the API to report the health and pipeline status of a specific deployment.

This script demonstrates how to fetch the status of a specific workspace's pipeline.

```python
import os
import requests

API_TOKEN = os.environ.get("CODESPHERE_TOKEN")
WORKSPACE_ID = "YOUR_WORKSPACE_ID"
BASE_URL = "https://codesphere.com/api"
HEADERS = {"Authorization": f"Bearer {API_TOKEN}"}

def check_deployment_status():
    # Fetch the status of the run pipeline
    url = f"{BASE_URL}/workspaces/{WORKSPACE_ID}/pipeline/run"
    response = requests.get(url, headers=HEADERS)

    if response.status_code == 200:
        status_data = response.json()

        # Extract relevant status information
        state = status_data.get("status", "UNKNOWN")
        is_running = state == "RUNNING"

        print(f"Deployment Status: {state}")

        # Example logic for internal tool integration
        if not is_running:
            print("Alert: The application is currently down or stopped!")
            # trigger_slack_alert(f"Workspace {WORKSPACE_ID} is down!")
        return state
    else:
        print(f"Failed to fetch status: {response.text}")
        return None

if __name__ == "__main__":
    check_deployment_status()
```

Handling API Limits

When writing scripts that poll endpoints (like checking deployment statuses in a loop), ensure you implement a delay (e.g., time.sleep(5) or setTimeout) between requests to avoid hitting rate limits.
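A reusable way to respect this is a small polling helper that sleeps between attempts and gives up after a timeout. This is a sketch, not part of the Codesphere API; the interval and timeout defaults are arbitrary:

```python
import time

def poll(fetch, is_done, interval=5.0, timeout=300.0):
    """Call `fetch` every `interval` seconds until `is_done(result)` is truthy.

    Raises TimeoutError if `timeout` seconds elapse without success.
    """
    deadline = time.monotonic() + timeout
    while True:
        result = fetch()
        if is_done(result):
            return result
        if time.monotonic() + interval > deadline:
            raise TimeoutError("Polling timed out")
        # Sleep between requests to stay under rate limits
        time.sleep(interval)
```

You could pass something like `lambda: requests.get(status_url, headers=HEADERS)` as `fetch` and check the response status in `is_done`, reusing the same helper for build, test, and deployment polling.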


Deployment & Release Workflows

While individual endpoints are valuable on their own, the real power stems from combining multiple API calls to form a complete release flow (e.g., triggering a release on merge in a GitHub Action).

Simple Release Case

The simple case is great for static websites and applications that can restart almost instantly.

  1. Pull Changes: POST /workspaces/{workspaceId}/git/pull/{remote}
  2. Rebuild Application (Optional): POST /workspaces/{workspaceId}/pipeline/prepare/start
  3. Wait for Build: Poll GET /workspaces/{workspaceId}/pipeline/prepare until it returns 200.
  4. Stop Application: POST /workspaces/{workspaceId}/pipeline/run/stop
  5. Restart Application: POST /workspaces/{workspaceId}/pipeline/run/start

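Chained together in Python, the five steps above might look like the following sketch. It assumes the `requests` library and the endpoints listed above; `simple_release` and `pipeline_url` are hypothetical helper names, not part of the API:

```python
import time
import requests

BASE_URL = "https://codesphere.com/api"

def pipeline_url(workspace_id: str, stage: str, action: str = "") -> str:
    """Build a pipeline endpoint URL (stage is 'prepare', 'run', or 'test')."""
    url = f"{BASE_URL}/workspaces/{workspace_id}/pipeline/{stage}"
    return f"{url}/{action}" if action else url

def simple_release(workspace_id: str, remote: str, headers: dict, delay: float = 5.0) -> None:
    """Pull the latest changes, rebuild, then restart the application."""
    s = requests.Session()
    s.headers.update(headers)
    # 1. Pull changes
    s.post(f"{BASE_URL}/workspaces/{workspace_id}/git/pull/{remote}").raise_for_status()
    # 2. Rebuild application
    s.post(pipeline_url(workspace_id, "prepare", "start")).raise_for_status()
    # 3. Wait for the build, sleeping between polls to respect rate limits
    while s.get(pipeline_url(workspace_id, "prepare")).status_code != 200:
        time.sleep(delay)
    # 4 & 5. Stop and restart the application
    s.post(pipeline_url(workspace_id, "run", "stop")).raise_for_status()
    s.post(pipeline_url(workspace_id, "run", "start")).raise_for_status()
```

Dropped into a GitHub Action triggered on merge, this gives you a minimal push-to-deploy pipeline.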
Automating Zero Downtime Releases

For mission-critical applications where downtime is unacceptable, you can automate a Blue/Green zero-downtime release entirely via the API:

  1. Create New Workspace: POST /workspaces (Define the new git commit, team id, plan id, branch, and replica count).
  2. Build Dependencies: POST /workspaces/{workspaceId}/pipeline/prepare/start
  3. Wait for Build: Poll GET /workspaces/{workspaceId}/pipeline/prepare until successful.
  4. Run Tests (Optional): POST /workspaces/{workspaceId}/pipeline/test/start and poll until successful. Stop the flow if tests fail!
  5. Start Application: POST /workspaces/{workspaceId}/pipeline/run/start
  6. Switch Domain Routing: Once the new application is healthy, instantly route traffic to it by updating the workspace connection: PUT /domains/team/{teamId}/domain/{domainName}/workspace-connections

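The final routing switch (step 6) is the only step that touches live traffic, so it is worth isolating. The sketch below shows one way to call it; note that the request body shape is an assumption here, so consult the API reference for the exact `workspace-connections` schema your version expects:

```python
import requests

BASE_URL = "https://codesphere.com/api"

def connections_url(team_id: str, domain_name: str) -> str:
    """Build the workspace-connections endpoint URL for a custom domain."""
    return f"{BASE_URL}/domains/team/{team_id}/domain/{domain_name}/workspace-connections"

def switch_traffic(team_id: str, domain_name: str, new_workspace_id: int, headers: dict) -> None:
    """Point the domain at the freshly built (green) workspace.

    NOTE: the JSON body below is an assumed shape, not confirmed by this guide --
    verify the workspace-connections payload against the API reference.
    """
    response = requests.put(
        connections_url(team_id, domain_name),
        json={"workspaceId": new_workspace_id},  # assumed field name
        headers=headers,
    )
    response.raise_for_status()
```

Because the switch only happens after the new workspace passes its build (and optional tests), a failed release never reaches users; the old workspace keeps serving traffic and can be removed once the new one is verified.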
Scaling Replicas Programmatically

Codesphere Workspaces can run multiple services simultaneously as part of a Landscape (e.g., a frontend, a backend, and a database).

You can dynamically adjust the number of horizontal replicas for specific services within your workspace.

Using the PATCH /workspaces/{workspaceId}/landscape/scale endpoint, you can pass a JSON object where the keys are the exact names of your services, and the values are the desired number of replicas.

```python
import os
import json
import urllib.error
import urllib.request

API_TOKEN = os.environ.get("CODESPHERE_TOKEN")
WORKSPACE_ID = "YOUR_WORKSPACE_ID"
BASE_URL = "YOUR_CODESPHERE_INSTANCE_URL/api"

url = f"{BASE_URL}/workspaces/{WORKSPACE_ID}/landscape/scale"

payload = json.dumps({
    "YOUR-SERVICE-NAME": 1  # <--- change to your service name (e.g. "backend") and desired replica count
}).encode('utf-8')

headers = {
    "Authorization": f"Bearer {API_TOKEN}",
    "Content-Type": "application/json",
    "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36"
}

req = urllib.request.Request(url, data=payload, headers=headers, method='PATCH')

try:
    with urllib.request.urlopen(req) as response:
        if response.status in [200, 204]:
            print(f"Success! Services in Workspace {WORKSPACE_ID} have been scaled.")
except urllib.error.HTTPError as e:
    print(f"Failed to scale. Error code: {e.code}")
    print(e.read().decode())
except Exception as e:
    print(f"An error occurred: {e}")
```

400 Error: Plan Limits & Invalid Values

If you receive a 400 Bad Request error when trying to scale replicas, it typically means one of two things:

  • Exceeding Plan Limits: You are attempting to scale beyond the maximum number of replicas allowed by your service's current plan. For example, requesting 5 replicas on a plan limited to 3 will fail.
  • Invalid Integer: The API requires a positive integer. Setting the replica count to 0 or a negative number will result in a type error.
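
Both cases can be caught client-side before the request is sent. A minimal sketch, where `validate_replicas` and the `plan_max` parameter are illustrative helpers rather than part of the API:

```python
def validate_replicas(scaling: dict, plan_max: int) -> None:
    """Raise ValueError for replica counts the scale endpoint would reject."""
    for service, count in scaling.items():
        # bool is a subclass of int in Python, so exclude it explicitly
        if not isinstance(count, int) or isinstance(count, bool) or count < 1:
            raise ValueError(
                f"{service!r}: replica count must be a positive integer, got {count!r}"
            )
        if count > plan_max:
            raise ValueError(
                f"{service!r}: {count} replicas exceeds the plan limit of {plan_max}"
            )
```

Validating locally turns an opaque 400 response into an actionable error message before any API call is made.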