© Copyright 2026 Cognitora. All Rights Reserved.


Containers API

Learn how to provision and manage containerized workloads for your AI agents using Cognitora's Containers API with hardware-level isolation.

The Cognitora Containers API enables you to autonomously provision, execute, and manage containerized workloads in secure Firecracker microVMs. Each container runs with hardware-level isolation, providing enterprise-grade security with sub-second cold starts.

Core Concepts

Understanding Containers in Cognitora

Cognitora's compute platform is built around containers as the fundamental execution units. Unlike traditional cloud platforms, every workload runs in a dedicated Firecracker microVM, ensuring complete isolation and security for AI agent operations.

Container Types

Cognitora supports multiple container execution patterns:

  • COMPUTE: Flexible containers that can run as one-shot executions OR persistent long-running environments with custom Docker images, file uploads, and state preservation
  • SESSION: Persistent interactive containers for code interpreter sessions with predefined runtime images (Python, JavaScript, Bash)
  • ONE_OFF: Temporary containers for single code executions (deprecated; use COMPUTE in one-shot mode)

Key Features

  • Hardware Isolation: Each container runs in a dedicated Firecracker microVM
  • Secure Runtime: Kata Containers provide secure container execution
  • Fast Startup: Sub-second cold start times optimized for agent workloads
  • Resource Isolation: Dedicated CPU, memory, and storage allocation
  • Networking Control: Optional internet access with security-first defaults
  • Real-time Monitoring: Live status updates and log streaming
  • Container Persistence: SESSION and persistent COMPUTE containers maintain state across executions
  • File Upload Support: Upload files to persistent containers with string and base64 encoding
  • Execution Tracking: Comprehensive execution history and management

Container Execution Patterns

One-Shot Containers (Traditional)

Perfect for batch jobs, CI/CD pipelines, and data processing tasks:

  • Execute a single command and terminate
  • No state preservation between runs
  • Cost-effective for standalone tasks
  • Automatic cleanup after completion

Persistent Containers

Ideal for development environments, interactive workflows, and AI agent workloads:

  • Stay alive for multiple command executions
  • Filesystem state persists between commands
  • File upload support with automatic environment setup
  • Simplified timeout management: defaults to 1 day, or specify timeout_seconds or expires_at
  • Perfect for iterative development and stateful AI agents

Getting Started

Quick Start - Create Your First Container

Using cURL

bash
curl -X POST "https://api.cognitora.dev/api/v1/compute/containers" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "image": "docker.io/library/ubuntu:22.04",
    "persistent": true,
    "timeoutSeconds": 28800,
    "cpuCores": 2,
    "memoryMb": 4096,
    "storageGb": 20,
    "maxCostCredits": 100,
    "environment": {
      "WORKSPACE": "/workspace"
    }
  }'

Using JavaScript SDK

javascript
import { Cognitora } from '@cognitora/sdk';

const client = new Cognitora({ apiKey: 'YOUR_API_KEY' });

const container = await client.containers.createContainer({
  image: 'docker.io/library/ubuntu:22.04',
  persistent: true,
  cpu_cores: 2,
  memory_mb: 4096,
  storage_gb: 20,
  max_cost_credits: 100,
  timeout_seconds: 28800  // 8 hours
});

console.log(`Container ID: ${container.id}`);

Using Python SDK

python
from cognitora import Cognitora

client = Cognitora(api_key="YOUR_API_KEY")

container = client.containers.create_container(
    image="docker.io/library/ubuntu:22.04",
    persistent=True,
    cpu_cores=2,
    memory_mb=4096,
    storage_gb=20,
    max_cost_credits=100,
    timeout_seconds=28800  # 8 hours
)

print(f"Container ID: {container.id}")

Execute Commands with File Uploads

Install Development Tools

bash
curl -X POST "https://api.cognitora.dev/api/v1/compute/containers/{CONTAINER_ID}/exec" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "command": ["bash", "-c", "apt update && apt install -y python3 python3-pip git"],
    "timeout_seconds": 300
  }'

Upload and Run Code

bash
curl -X POST "https://api.cognitora.dev/api/v1/compute/containers/{CONTAINER_ID}/exec" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "command": ["python3", "app.py"],
    "files": [
      {
        "name": "app.py",
        "content": "import json\ndata = {\"message\": \"Hello from long-running container!\"}\nprint(json.dumps(data, indent=2))\n# Save to demonstrate persistence\nwith open(\"/workspace/results.json\", \"w\") as f:\n    json.dump(data, f)",
        "encoding": "string"
      }
    ],
    "working_directory": "/workspace",
    "environment": {
      "PYTHONPATH": "/workspace"
    }
  }'
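
Binary assets (images, archives, model files) can be sent the same way with `"encoding": "base64"`. A minimal client-side sketch of building such a `files` entry; the helper name is illustrative, not part of the SDK:

```python
import base64

def make_file_entry(name: str, data: bytes) -> dict:
    """Build a `files` entry carrying binary content as base64."""
    return {
        "name": name,
        "content": base64.b64encode(data).decode("ascii"),
        "encoding": "base64",
    }

# Mix a plain-text file and a binary file in one exec payload
payload = {
    "command": ["python3", "app.py"],
    "files": [
        {"name": "app.py", "content": "print('hi')", "encoding": "string"},
        make_file_entry("weights.bin", b"\x00\x01\x02"),
    ],
    "working_directory": "/workspace",
}
print(payload["files"][1]["encoding"])  # base64
```

The container decodes the base64 content back to the original bytes before writing the file.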

Check State Persistence

bash
curl -X POST "https://api.cognitora.dev/api/v1/compute/containers/{CONTAINER_ID}/exec" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "command": ["cat", "/workspace/results.json"],
    "working_directory": "/workspace"
  }'

Long-Running Container Examples

Data Science Workflow

Create a Jupyter-based data science environment for iterative analysis:

bash
curl -X POST "https://api.cognitora.dev/api/v1/compute/containers" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "image": "jupyter/datascience-notebook:latest",
    "persistent": true,
    "timeoutSeconds": 14400,
    "cpuCores": 4,
    "memoryMb": 8192,
    "storageGb": 50,
    "maxCostCredits": 200
  }'

Upload dataset and run analysis:

bash
curl -X POST "https://api.cognitora.dev/api/v1/compute/containers/{CONTAINER_ID}/exec" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "command": ["python3", "analyze.py"],
    "files": [
      {
        "name": "sales_data.csv",
        "content": "date,product,revenue\n2024-01-01,Widget A,1500\n2024-01-02,Widget B,2300",
        "encoding": "string"
      },
      {
        "name": "analyze.py",
        "content": "import pandas as pd\ndf = pd.read_csv(\"sales_data.csv\")\nprint(\"Dataset shape:\", df.shape)\nprint(df.head())\ntotal_revenue = df[\"revenue\"].sum()\nprint(f\"Total Revenue: ${total_revenue}\")\ndf.to_csv(\"processed_data.csv\", index=False)",
        "encoding": "string"
      }
    ],
    "working_directory": "/home/jovyan/work"
  }'

AI Agent Environment

bash
# Create long-running agent container
curl -X POST "https://api.cognitora.dev/api/v1/compute/containers" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "image": "docker.io/library/python:3.11-slim",
    "persistent": true,
    "timeoutSeconds": 86400,
    "cpuCores": 2,
    "memoryMb": 4096,
    "maxCostCredits": 150,
    "environment": {
      "AGENT_ENV": "development",
      "PYTHONPATH": "/agent"
    }
  }'

# Deploy agent code with state management
curl -X POST "https://api.cognitora.dev/api/v1/compute/containers/{CONTAINER_ID}/exec" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "command": ["python3", "agent.py"],
    "files": [
      {
        "name": "agent.py",
        "content": "import json\nfrom datetime import datetime\n\nclass TaskAgent:\n    def __init__(self):\n        self.state_file = \"/agent/state.json\"\n        self.load_state()\n    \n    def load_state(self):\n        try:\n            with open(self.state_file, \"r\") as f:\n                self.state = json.load(f)\n        except FileNotFoundError:\n            self.state = {\"tasks_completed\": 0, \"created_at\": datetime.now().isoformat()}\n    \n    def save_state(self):\n        with open(self.state_file, \"w\") as f:\n            json.dump(self.state, f)\n    \n    def process_task(self, task_data):\n        self.state[\"tasks_completed\"] += 1\n        self.state[\"last_task\"] = task_data\n        self.save_state()\n        return {\"status\": \"completed\", \"result\": f\"Processed {task_data}\"}\n\nagent = TaskAgent()\nresult = agent.process_task(\"analyze_user_data\")\nprint(f\"Agent Result: {json.dumps(result, indent=2)}\")",
        "encoding": "string"
      }
    ],
    "working_directory": "/agent"
  }'

SDK Examples

JavaScript - Complete Development Workflow

javascript
import { Cognitora } from '@cognitora/sdk';

const client = new Cognitora({ apiKey: 'YOUR_API_KEY' });

async function containerWorkflow() {
  // Create long-running container environment
  const container = await client.containers.createContainer({
    image: 'docker.io/library/python:3.11-slim',
    persistent: true,
    cpu_cores: 2,
    memory_mb: 2048,
    max_cost_credits: 100,
    timeout_seconds: 28800  // 8 hours
  });

  console.log(`Created container: ${container.id}`);

  // Install dependencies once
  await client.containers.executeInContainer(container.id, {
    command: ['pip', 'install', 'pandas', 'requests', 'matplotlib'],
    timeout_seconds: 120
  });

  // Upload and run analysis script
  const result = await client.containers.executeInContainer(container.id, {
    command: ['python', 'analysis.py'],
    files: [
      {
        name: 'analysis.py',
        content: `
import pandas as pd
import json
from datetime import datetime

print("Starting data analysis workflow...")

# Process data
data = {"timestamp": datetime.now().isoformat(), "status": "processing"}
with open("/workspace/workflow_state.json", "w") as f:
    json.dump(data, f)

print("Analysis complete - state saved!")
`,
        encoding: 'string'
      }
    ],
    working_directory: '/workspace'
  });

  console.log('Analysis output:', result.output);

  // Run additional processing in same environment
  const processResult = await client.containers.executeInContainer(container.id, {
    command: ['python', '-c', 'import json; data=json.load(open("/workspace/workflow_state.json")); print("Previous run:", data["timestamp"])'],
    working_directory: '/workspace'
  });

  console.log('State persistence confirmed:', processResult.output);

  // Extend container if needed
  await client.containers.updateContainer(container.id, {
    timeout_seconds: 43200  // 12 hours
  });

  // Clean up when done
  await client.containers.cancelContainer(container.id);
}

containerWorkflow();

Timeout Management

Simplified Timeout Approach

Persistent containers now use a simplified timeout system:

Default Behavior

  • If no timeout is specified, persistent containers default to 1 day (86400 seconds)
  • This provides a reasonable balance between resource usage and development convenience
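
The resolution order above (explicit `timeout_seconds`, exact `expires_at`, else the 1-day default) can be sketched as a small helper. The function, and the rule that `expires_at` wins when both are given, are illustrative assumptions rather than documented SDK behavior:

```python
from datetime import datetime, timedelta, timezone

DEFAULT_TIMEOUT_SECONDS = 86400  # 1 day

def resolve_expiry(timeout_seconds=None, expires_at=None, now=None):
    """Resolve the effective expiration time of a persistent container."""
    now = now or datetime.now(timezone.utc)
    if expires_at is not None:
        # Assumed precedence: an exact ISO 8601 datetime wins when supplied
        return datetime.fromisoformat(expires_at.replace("Z", "+00:00"))
    seconds = timeout_seconds if timeout_seconds is not None else DEFAULT_TIMEOUT_SECONDS
    return now + timedelta(seconds=seconds)
```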

Custom Timeout Options

Option 1: Timeout in Seconds

javascript
const container = await client.containers.createContainer({
  image: 'python:3.11',
  persistent: true,
  timeout_seconds: 7200,  // 2 hours from now
  cpu_cores: 2,
  memory_mb: 1024,
  max_cost_credits: 100
});

Option 2: Exact Expiration Time

javascript
const container = await client.containers.createContainer({
  image: 'python:3.11',
  persistent: true,
  expires_at: '2025-01-07T15:00:00Z',  // Exact ISO 8601 datetime
  cpu_cores: 2,
  memory_mb: 1024,
  max_cost_credits: 100
});

Option 3: Default (Recommended for Development)

javascript
const container = await client.containers.createContainer({
  image: 'python:3.11',
  persistent: true,  // Automatically expires after 1 day
  cpu_cores: 2,
  memory_mb: 1024,
  max_cost_credits: 100
});

Networking Control

Containers support optional networking control for enhanced security:

Default Networking Behavior

Service    | Default Networking | Security Rationale
---------- | ------------------ | -----------------------------------
Containers | false (disabled)   | Security-first: isolated by default

Secure Container Execution (Default)

python
# Secure, isolated container execution (default)
execution = client.containers.create_container(
    image="docker.io/library/python:3.11",
    command=["python", "-c", "print('Secure isolated computation')"],
    cpu_cores=1.0,
    memory_mb=512,
    max_cost_credits=10,
    networking=False  # Default: isolated for security
)

Network-Enabled Container Execution

python
# Container with internet access for external API calls
execution = client.containers.create_container(
    image="docker.io/library/python:3.11",
    command=["python", "-c", """
import subprocess

# Install the package first (requires networking)
subprocess.run(['pip', 'install', 'requests'], check=True)

import requests

# Fetch external data
response = requests.get('https://api.coindesk.com/v1/bpi/currentprice.json')
data = response.json()
print(f"Bitcoin price: {data['bpi']['USD']['rate']}")
    """],
    cpu_cores=2.0,
    memory_mb=1024,
    max_cost_credits=20,
    networking=True  # Enable networking for package installs and API calls
)

Container Management

Container Execution Management

List All Container Executions

python
# List all container executions across account
executions = client.containers.list_all_container_executions(
    limit=50,
    status='running'
)

print(f"Active containers: {len(executions)}")
for execution in executions:
    print(f"Container {execution['id']}: {execution['status']}")
    print(f"  Image: {execution['image']}")
    print(f"  Runtime: {execution['runtime_seconds']}s")

Get Container Execution Details

python
# Get detailed information about a specific container execution
execution_details = client.containers.get_container_execution('exec_123456')

print(f"Execution status: {execution_details['status']}")
print(f"Command: {execution_details['command']}")
print(f"Exit code: {execution_details['exit_code']}")
print(f"Networking enabled: {execution_details['networking']}")

Container Execution History

python
# Get all executions for a specific container
container_executions = client.containers.get_container_executions('container_123456')

print(f"Container has {len(container_executions)} executions")
for execution in container_executions:
    print(f"- {execution['status']}: {execution['runtime_seconds']}s")

JavaScript SDK Container Management

typescript
// List all container executions
const containerExecutions = await client.containers.listAllContainerExecutions({
  limit: 50,
  status: 'running'
});

console.log(`Active containers: ${containerExecutions.executions.length}`);

// Get specific container execution details
const executionDetails = await client.containers.getContainerExecution('exec_123456');
console.log(`Container execution: ${executionDetails.status}`);

// Get executions for specific container
const containerHistory = await client.containers.getContainerExecutions('container_123456');
console.log(`Container has ${containerHistory.executions.length} executions`);

Resource Management

Resource Specification

You can precisely control the resources allocated to your containers:
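
As a rough sketch of the knobs involved (field names follow the SDK examples in this guide; the positivity check is an illustrative client-side guard, not a platform limit):

```python
def build_resource_spec(cpu_cores=1.0, memory_mb=512, storage_gb=5):
    """Assemble a container resource spec, rejecting non-positive values."""
    spec = {"cpu_cores": cpu_cores, "memory_mb": memory_mb, "storage_gb": storage_gb}
    for key, value in spec.items():
        if value <= 0:
            raise ValueError(f"{key} must be positive, got {value}")
    return spec

# Example: a mid-sized worker
print(build_resource_spec(cpu_cores=2.0, memory_mb=4096, storage_gb=20))
```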

Cost Estimation

Always estimate costs before creating expensive containers:

python
# Estimate cost before container creation
estimate = client.containers.estimate_cost(
    cpu_cores=8.0,
    memory_mb=32768,
    storage_gb=100,
    timeout_seconds=7200
)

print(f"Estimated cost: {estimate.estimated_credits} credits")

# Create the container only if the cost fits your budget
budget = 50  # illustrative credit budget
if estimate.estimated_credits <= budget:
    execution = client.containers.create_container(...)

Lifecycle Management

Monitoring Container Status

python
# Get container details and status
execution = client.containers.get_container("cnt_abc123def456")
print(f"Status: {execution.status}")
print(f"Actual cost: {execution.actual_cost_credits} credits")
print(f"Started at: {execution.started_at}")

# Container states: QUEUED, STARTING, RUNNING, IDLE, COMPLETED, FAILED, CANCELLED, TERMINATED, TIMEOUT
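
When polling, it helps to separate terminal states from in-flight ones. A small helper derived from the state list above (the grouping is an assumption for illustration; the platform does not publish this classification):

```python
# Final states: the container can no longer transition further
TERMINAL_STATES = {"COMPLETED", "FAILED", "CANCELLED", "TERMINATED", "TIMEOUT"}

def is_terminal(status: str) -> bool:
    """True once a container has reached a final state."""
    return status.upper() in TERMINAL_STATES
```

A polling loop can then stop as soon as `is_terminal(execution.status)` returns true.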

Retrieving Container Logs

python
# Get complete container logs
logs = client.containers.get_container_logs("cnt_abc123def456")
print("Container output:", logs.logs)

bash
# cURL example for logs
curl -X GET $API_URL/api/v1/compute/containers/{container-id}/logs \
  -H "Authorization: Bearer <api-key>"

Cancelling Containers

python
# Cancel a running container to stop charges
result = client.containers.cancel_container("cnt_abc123def456")
print(f"Container cancelled: {result.message}")

Execution Tracking

For SESSION containers, you can track individual executions within the container:

List Container Executions

python
# Get all executions for a specific container
executions = client.containers.get_container_executions('container_123456')
for execution in executions:
    print(f"Execution {execution['id']}: {execution['status']}")
    print(f"Runtime: {execution['runtime_seconds']}s")

Get Execution Details

python
# Get detailed information about a specific execution
execution = client.containers.get_container_execution("exec_xyz789abc123")
print(f"Command executed: {execution['command']}")
print(f"Exit code: {execution['exit_code']}")
print(f"Error message: {execution['error_message']}")

Advanced Topics

Environment Variables and Custom Images

Environment Variables and Secrets

python
# Container with environment variables
execution = client.containers.create_container(
    image="my-registry/ai-agent:latest",
    command=["python", "agent.py"],
    cpu_cores=2.0,
    memory_mb=4096,
    environment={
        "LOG_LEVEL": "INFO",
        "API_ENDPOINT": "https://api.example.com",
        "WORKER_ID": "agent-001",
        "DEBUG": "false"
    },
    max_cost_credits=200,
    networking=True  # Enable networking for external API access
)

Custom Docker Images

python
# Use your own Docker image with specific configurations
execution = client.containers.create_container(
    image="myregistry.com/my-ai-agent:v2.1.0",
    command=["./run_agent.sh", "--config", "production"],
    cpu_cores=4.0,
    memory_mb=8192,
    storage_gb=50,
    environment={
        "ENVIRONMENT": "production",
        "SCALE_FACTOR": "10"
    },
    timeout_seconds=3600,
    max_cost_credits=1000,
    networking=True  # Enable networking for production agent
)

Security Best Practices

Image Security

python
# Use specific image versions for reproducibility
execution = client.containers.create_container(
    image="python:3.11.6-slim-bullseye",  # Specific version
    command=["python", "secure_script.py"],
    cpu_cores=1.0,
    memory_mb=512,
    networking=False  # Secure by default
)

# Use private registries for sensitive workloads
execution = client.containers.create_container(
    image="your-private-registry.com/secure-agent:v1.2.3",
    command=["./secure_agent"],
    cpu_cores=2.0,
    memory_mb=2048,
    networking=False  # Secure by default
)

Resource Limits

python
# Set conservative resource limits to prevent runaway costs
execution = client.containers.create_container(
    image="untrusted-image:latest",
    command=["python", "user_script.py"],
    cpu_cores=1.0,          # Limited CPU
    memory_mb=512,          # Limited memory
    storage_gb=1,           # Minimal storage
    timeout_seconds=300,    # 5-minute timeout
    max_cost_credits=10,    # Low cost limit
    networking=False        # Secure: no internet access for untrusted code
)

Monitoring and Debugging

Real-time Status Monitoring

python
import asyncio

async def monitor_execution(execution_id, check_interval=5):
    """Monitor execution status until completion"""
    
    while True:
        execution = await client.containers.get_container(execution_id)
        print(f"Container {execution_id}: {execution.status}")
        
        if execution.status in ['COMPLETED', 'FAILED', 'CANCELLED', 'TIMEOUT']:
            print(f"Final status: {execution.status}")
            print(f"Total cost: {execution.actual_cost_credits} credits")
            print(f"Networking enabled: {execution.networking}")
            
            # Get final logs
            logs = await client.containers.get_container_logs(execution_id)
            print("Final output:", logs.logs)
            break
        
        await asyncio.sleep(check_interval)

# Usage
asyncio.run(monitor_execution("cnt_abc123def456"))

Execution Performance Analysis

python
def analyze_execution_performance(execution):
    """Analyze execution performance metrics"""
    
    runtime_seconds = (execution.completed_at - execution.started_at).total_seconds()
    cost_per_second = execution.actual_cost_credits / runtime_seconds
    
    efficiency = {
        "runtime_seconds": runtime_seconds,
        "cost_per_second": cost_per_second,
        "budget_utilization": execution.actual_cost_credits / execution.max_cost_credits,
        "resource_efficiency": "high" if cost_per_second < 0.1 else "low"
    }
    
    return efficiency

# Analyze completed execution
execution = client.containers.get_container("cnt_abc123def456")
metrics = analyze_execution_performance(execution)
print(f"Container efficiency: {metrics}")

REST API Reference

Create Container

One-Shot Container

bash
POST /api/v1/compute/containers
Content-Type: application/json
Authorization: Bearer your_api_key

{
  "image": "docker.io/library/python:3.11-slim",
  "command": ["python", "-c", "print('Hello World')"],
  "cpuCores": 1.0,
  "memoryMb": 512,
  "storageGb": 5,
  "maxCostCredits": 100,
  "timeoutSeconds": 300,
  "environment": {
    "LOG_LEVEL": "INFO"
  }
}

Persistent Container

bash
POST /api/v1/compute/containers
Content-Type: application/json
Authorization: Bearer your_api_key

{
  "image": "docker.io/library/ubuntu:22.04",
  "persistent": true,
  "timeoutSeconds": 28800,
  "cpuCores": 2.0,
  "memoryMb": 4096,
  "storageGb": 20,
  "maxCostCredits": 200,
  "environment": {
    "WORKSPACE": "/workspace"
  }
}

Execute Command in Persistent Container

bash
POST /api/v1/compute/containers/{container_id}/exec
Content-Type: application/json
Authorization: Bearer your_api_key

{
  "command": ["python3", "script.py"],
  "files": [
    {
      "name": "script.py",
      "content": "print('Hello from persistent container!')",
      "encoding": "string"
    }
  ],
  "timeout_seconds": 60,
  "working_directory": "/workspace",
  "environment": {
    "PYTHONPATH": "/workspace"
  }
}

Update Container Settings

bash
PATCH /api/v1/compute/containers/{container_id}
Content-Type: application/json
Authorization: Bearer your_api_key

{
  "timeout_seconds": 43200
}

Get Container Status

bash
GET /api/v1/compute/containers/{container_id}
Authorization: Bearer your_api_key

List Containers

bash
GET /api/v1/compute/containers?limit=20&status=RUNNING&containerType=COMPUTE
Authorization: Bearer your_api_key

Get Container Logs

bash
GET /api/v1/compute/containers/{container_id}/logs
Authorization: Bearer your_api_key

Cancel Container

bash
DELETE /api/v1/compute/containers/{container_id}
Authorization: Bearer your_api_key

List Container Executions

bash
GET /api/v1/compute/containers/{container_id}/executions
Authorization: Bearer your_api_key

List All Container Executions

bash
GET /api/v1/compute/containers/executions?limit=50&status=running
Authorization: Bearer your_api_key

Get Container Execution Details

bash
GET /api/v1/compute/containers/executions/{execution_id}
Authorization: Bearer your_api_key

List All Executions (Global)

bash
GET /api/v1/compute/containers/executions?limit=50&language=python
Authorization: Bearer your_api_key

Port Mapping & Networking

Port mapping allows your containers to expose web services, APIs, databases, and other network applications to the internet through secure HTTPS URLs. This feature enables powerful use cases like running web servers, API endpoints, Jupyter notebooks, and database services within isolated containers.

Core Concepts

When you enable port mapping on a container:

  • Your application becomes accessible via a unique public HTTPS URL
  • Traffic is automatically encrypted with SSL/TLS
  • Each container gets a random subdomain like https://sunny-meadow-15389u7ccx.cgn.my
  • No manual SSL certificate management required

Basic Port Mapping Setup

To enable port mapping, set networking: true and specify the container port your application listens on:

Web Server Example

bash
curl -X POST "https://api.cognitora.dev/api/v1/compute/containers" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "image": "docker.io/library/nginx:alpine",
    "command": ["nginx", "-g", "daemon off;"],
    "persistent": true,
    "networking": true,
    "portMapping": {
      "containerPort": 80,
      "protocol": "tcp"
    },
    "cpuCores": 1,
    "memoryMb": 512,
    "storageGb": 5,
    "maxCostCredits": 100,
    "timeoutSeconds": 3600
  }'

Response includes port mapping info (URL populated asynchronously):

json
{
  "data": {
    "id": "cnt_abc123",
    "status": "RUNNING",
    "networking": true,
    "portMapping": {
      "containerPort": 80,
      "protocol": "tcp", 
      "url": "pending"
    }
  }
}

Note: The url field will initially show "pending" and will be populated with the actual HTTPS URL once the container is fully running. To get the final URL, poll the container status:

bash
GET /api/v1/compute/containers/{container_id}

Once the container is running and port mapping is active, the response will include:

json
{
  "data": {
    "portMapping": {
      "containerPort": 80,
      "protocol": "tcp", 
      "url": "https://sunny-meadow-15389u7ccx.cgn.my"
    }
  }
}
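
Polling for the final URL can be wrapped in a small helper. This sketch accepts any `fetch_container` callable (for example, a thin wrapper over the GET endpoint above), so the retry logic stays independent of the SDK; the function name and polling parameters are illustrative:

```python
import time

def wait_for_url(fetch_container, container_id, attempts=30, delay=2.0, sleep=time.sleep):
    """Poll container details until portMapping.url is a real HTTPS URL."""
    for _ in range(attempts):
        details = fetch_container(container_id)
        url = (details.get("portMapping") or {}).get("url")
        if url and url != "pending":
            return url
        sleep(delay)
    raise TimeoutError(f"port mapping URL for {container_id} still pending")
```

With 30 attempts and a 2-second delay, the helper waits up to a minute before giving up.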

Real-World Examples

Python Flask API Server

bash
curl -X POST "https://api.cognitora.dev/api/v1/compute/containers" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "image": "docker.io/library/python:3.11-slim",
    "command": [
      "bash", "-c",
      "pip install flask && python -c \"
from flask import Flask, jsonify, request
import json
import os
from datetime import datetime

app = Flask(__name__)

# In-memory data store
data_store = {}

@app.route('/api/health')
def health():
    return jsonify({
        \"status\": \"healthy\",
        \"service\": \"Flask API\",
        \"timestamp\": datetime.now().isoformat()
    })

@app.route('/api/data', methods=['GET', 'POST'])
def handle_data():
    if request.method == 'POST':
        item_id = len(data_store) + 1
        data_store[item_id] = request.json
        return jsonify({\"id\": item_id, \"data\": request.json}), 201
    else:
        return jsonify({\"items\": data_store, \"count\": len(data_store)})

@app.route('/api/data/<int:item_id>')
def get_item(item_id):
    if item_id in data_store:
        return jsonify(data_store[item_id])
    return jsonify({\"error\": \"Item not found\"}), 404

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000, debug=True)
\""
    ],
    "persistent": true,
    "networking": true,
    "portMapping": {
      "containerPort": 5000,
      "protocol": "tcp"
    },
    "cpuCores": 1,
    "memoryMb": 1024,
    "storageGb": 5,
    "maxCostCredits": 200,
    "timeoutSeconds": 7200
  }'

Node.js Express API

bash
curl -X POST "https://api.cognitora.dev/api/v1/compute/containers" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "image": "docker.io/library/node:18-alpine",
    "command": [
      "sh", "-c",
      "npm init -y && npm install express cors helmet && node -e \"
const express = require('express');
const cors = require('cors');
const helmet = require('helmet');

const app = express();
const port = 3000;

// Middleware
app.use(helmet());
app.use(cors());
app.use(express.json());

// Routes
app.get('/', (req, res) => {
  res.json({
    message: 'Welcome to Cognitora Container API',
    version: '1.0.0',
    timestamp: new Date().toISOString(),
    endpoints: ['/api/health', '/api/users', '/api/metrics']
  });
});

app.get('/api/health', (req, res) => {
  res.json({
    status: 'healthy',
    uptime: process.uptime(),
    memory: process.memoryUsage(),
    platform: process.platform
  });
});

// Simple user management
let users = [
  { id: 1, name: 'Alice', email: 'alice@example.com' },
  { id: 2, name: 'Bob', email: 'bob@example.com' }
];

app.get('/api/users', (req, res) => {
  res.json({ users, count: users.length });
});

app.post('/api/users', (req, res) => {
  const newUser = { id: users.length + 1, ...req.body };
  users.push(newUser);
  res.status(201).json(newUser);
});

app.listen(port, '0.0.0.0', () => {
  console.log('Server running on http://0.0.0.0:' + port);
});
\""
    ],
    "persistent": true,
    "networking": true,
    "portMapping": {
      "containerPort": 3000,
      "protocol": "tcp"
    },
    "cpuCores": 1,
    "memoryMb": 512,
    "storageGb": 5,
    "maxCostCredits": 150,
    "timeoutSeconds": 5400
  }'

Jupyter Notebook Development Environment

bash
curl -X POST "https://api.cognitora.dev/api/v1/compute/containers" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "image": "docker.io/jupyter/base-notebook:latest",
    "command": [
      "start-notebook.sh",
      "--NotebookApp.token=",
      "--NotebookApp.password=",
      "--NotebookApp.ip=0.0.0.0",
      "--NotebookApp.port=8888",
      "--NotebookApp.allow_origin=*",
      "--NotebookApp.disable_check_xsrf=True"
    ],
    "persistent": true,
    "networking": true,
    "portMapping": {
      "containerPort": 8888,
      "protocol": "tcp"
    },
    "cpuCores": 2,
    "memoryMb": 2048,
    "storageGb": 10,
    "maxCostCredits": 300,
    "timeoutSeconds": 14400
  }'

Database Services

PostgreSQL Database

bash
curl -X POST "https://api.cognitora.dev/api/v1/compute/containers" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "image": "docker.io/library/postgres:15-alpine",
    "environment": {
      "POSTGRES_DB": "myapp",
      "POSTGRES_USER": "devuser",
      "POSTGRES_PASSWORD": "securedev123"
    },
    "persistent": true,
    "networking": true,
    "portMapping": {
      "containerPort": 5432,
      "protocol": "tcp"
    },
    "cpuCores": 2,
    "memoryMb": 1024,
    "storageGb": 20,
    "maxCostCredits": 500,
    "timeoutSeconds": 28800
  }'

Redis Cache Server

bash
curl -X POST "https://api.cognitora.dev/api/v1/compute/containers" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "image": "docker.io/library/redis:7-alpine",
    "command": ["redis-server", "--bind", "0.0.0.0", "--protected-mode", "no"],
    "persistent": true,
    "networking": true,
    "portMapping": {
      "containerPort": 6379,
      "protocol": "tcp"
    },
    "cpuCores": 1,
    "memoryMb": 512,
    "storageGb": 5,
    "maxCostCredits": 200,
    "timeoutSeconds": 21600
  }'

SDK Examples

JavaScript SDK with Port Mapping

javascript
import { Cognitora } from '@cognitora/sdk';

const client = new Cognitora({ apiKey: 'YOUR_API_KEY' });

async function createWebServer() {
  // Create a containerized web server
  const container = await client.containers.createContainer({
    image: 'docker.io/library/nginx:alpine',
    command: ['nginx', '-g', 'daemon off;'],
    persistent: true,
    networking: true,
    port_mapping: {
      container_port: 80,
      protocol: 'tcp'
    },
    cpu_cores: 1,
    memory_mb: 512,
    storage_gb: 5,
    max_cost_credits: 100,
    timeout_seconds: 3600  // 1 hour
  });

  console.log(`Web server created: ${container.id}`);
  console.log(`Access your server at: ${container.port_mapping.url}`);
  
  // Upload custom HTML content
  await client.containers.executeInContainer(container.id, {
    command: ['bash', '-c', 'echo "<h1>Hello from Cognitora!</h1>" > /usr/share/nginx/html/index.html'],
    timeout_seconds: 30
  });
  
  return container;
}

// Usage
createWebServer()
  .then(container => {
    console.log('Your web server is live!');
    console.log(`URL: ${container.port_mapping.url}`);
  })
  .catch(console.error);
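Provisioning is asynchronous, so a client may want to poll until the container reports a running state before routing traffic to it. A minimal sketch with an injected status probe; wrapping it around an SDK call such as `client.containers.getContainer` is an assumption, since the exact status-polling method isn't shown above:

```javascript
// Poll an async status probe until it reports "running" or the deadline passes.
// `getStatus` is any caller-supplied function returning a status string;
// the SDK method it would wrap is an assumption, not a documented API.
async function waitUntilRunning(getStatus, { intervalMs = 1000, timeoutMs = 30000 } = {}) {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const status = await getStatus();
    if (status === 'running') return true;
    if (status === 'failed') throw new Error('container failed to start');
    await new Promise(resolve => setTimeout(resolve, intervalMs));
  }
  throw new Error('timed out waiting for container');
}
```

Usage would look like `await waitUntilRunning(async () => (await client.containers.getContainer(id)).status)`, with the field name adjusted to whatever the API actually returns.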

Advanced Use Cases

Multi-Service Architecture

bash
# Create a Flask API backend (started idle; the app is uploaded and launched in the next step)
BACKEND_ID=$(curl -X POST "https://api.cognitora.dev/api/v1/compute/containers" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "image": "docker.io/library/python:3.11-slim",
    "command": ["sleep", "infinity"],
    "persistent": true,
    "networking": true,
    "portMapping": {"containerPort": 5000, "protocol": "tcp"},
    "cpuCores": 1,
    "memoryMb": 1024,
    "maxCostCredits": 200
  }' | jq -r '.data.id')

# Upload Flask application
curl -X POST "https://api.cognitora.dev/api/v1/compute/containers/$BACKEND_ID/exec" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "command": ["python", "app.py"],
    "files": [{
      "name": "app.py",
      "content": "from flask import Flask, jsonify\\napp = Flask(__name__)\\n\\n@app.route('/')\\ndef home():\\n    return jsonify({'message': 'Multi-Service Demo API', 'version': '1.0'})\\n\\n@app.route('/api/health')\\ndef health():\\n    return jsonify({'status': 'healthy', 'service': 'Backend'})\\n\\n@app.route('/api/data')\\ndef get_data():\\n    return jsonify({'users': [{'id': 1, 'name': 'Alice'}], 'total': 1})\\n\\nif __name__ == '__main__':\\n    app.run(host='0.0.0.0', port=5000)",
      "encoding": "string"
    }]
  }'
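Hand-escaping Python source inside a shell-quoted JSON body, as above, is fragile; when building the request programmatically, `JSON.stringify` handles the escaping for you. A sketch of constructing the same `/exec` payload; the final POST wiring is left as a comment:

```javascript
// Build the /exec request body, letting JSON.stringify escape the file content.
function buildExecPayload(command, files) {
  return JSON.stringify({
    command,
    files: files.map(({ name, content }) => ({ name, content, encoding: 'string' })),
  });
}

const appSource = [
  "from flask import Flask, jsonify",
  "app = Flask(__name__)",
  "",
  "@app.route('/')",
  "def home():",
  "    return jsonify({'message': 'Multi-Service Demo API'})",
  "",
  "if __name__ == '__main__':",
  "    app.run(host='0.0.0.0', port=5000)",
].join('\n');

const body = buildExecPayload(['python', 'app.py'], [{ name: 'app.py', content: appSource }]);
// POST `body` to /api/v1/compute/containers/$BACKEND_ID/exec with your API key.
```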

Port Mapping Configuration

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| networking | boolean | Yes | Must be true to enable port mapping |
| portMapping.containerPort | number | Yes | Internal container port (1-65535) |
| portMapping.protocol | string | No | Protocol: "tcp" or "udp" (default: "tcp") |

Important Notes

  • Bind Address: Your application must listen on 0.0.0.0 (all interfaces), not 127.0.0.1 (localhost only)
  • Port Range: Container ports must be between 1-65535
  • Public Access: All mapped ports are publicly accessible via HTTPS
  • SSL/TLS: Automatic SSL termination provided by Cognitora
  • One Port Per Container: Each container can expose exactly one port
  • Protocol Support: TCP is recommended for web services, UDP for streaming/gaming
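These constraints can be checked client-side before a request is submitted, which surfaces configuration mistakes without spending credits. `validatePortMapping` below is a local helper sketch, not part of the SDK:

```javascript
// Validate a container request against the port-mapping rules above:
// networking enabled, one port in 1-65535, protocol "tcp" or "udp".
function validatePortMapping({ networking, portMapping }) {
  const errors = [];
  if (!networking) errors.push('networking must be true to enable port mapping');
  if (!portMapping) {
    errors.push('portMapping is required');
    return errors;
  }
  const { containerPort, protocol = 'tcp' } = portMapping;
  if (!Number.isInteger(containerPort) || containerPort < 1 || containerPort > 65535) {
    errors.push('containerPort must be an integer between 1 and 65535');
  }
  if (!['tcp', 'udp'].includes(protocol)) {
    errors.push('protocol must be "tcp" or "udp"');
  }
  return errors;
}
```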

Security Best Practices

  • Authentication: Always implement authentication in your applications
  • Input Validation: Validate and sanitize all user inputs
  • Environment Variables: Use environment variables for configuration, not secrets
  • Regular Updates: Keep base images updated with security patches
  • Minimal Images: Use slim/alpine variants to reduce attack surface

Best Practices

Cost Optimization:

  • ✅ Set reasonable idle timeouts based on your workflow (30-120 minutes for development, longer for training)
  • ✅ Right-size CPU, memory, and resources for your specific workload
  • ✅ Clean up containers when work is complete to avoid unnecessary charges
  • ✅ Monitor usage patterns and adjust timeout settings accordingly
  • ✅ Use persistent: true for multi-step workflows, persistent: false for single tasks

Development Workflow:

  • ✅ Use long-running containers for iterative development and debugging
  • ✅ Upload code via file uploads rather than rebuilding custom images
  • ✅ Leverage filesystem persistence for multi-step workflows and state management
  • ✅ Extend timeouts during active development sessions
  • ✅ Install dependencies once and reuse the environment

Security:

  • ✅ Use minimal, specific base images (e.g., python:3.11-slim vs python:latest)
  • ✅ Avoid storing secrets in environment variables - use secure file uploads instead
  • ✅ Keep containers isolated by default (networking disabled unless required)
  • ✅ Regularly update base images to include security patches
  • ✅ Use private registries for sensitive workloads

Performance:

  • ✅ Pre-install common dependencies in custom Docker images for faster startup
  • ✅ Use dedicated working directories to organize files and outputs
  • ✅ Batch multiple file uploads in single execution calls
  • ✅ Monitor execution times and resource usage to optimize costs
  • ✅ Use appropriate storage sizes - don't over-provision
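As a concrete instance of the batching advice above, several files can be combined into a single exec call instead of issuing one request per file. A sketch of the request body using the field names from the curl examples; the no-op `true` command is an illustrative placeholder:

```javascript
// Combine many file uploads into one exec request instead of one request per file.
function batchUploadPayload(files, command = ['true']) {
  return {
    command,
    files: files.map(({ name, content }) => ({ name, content, encoding: 'string' })),
    timeout_seconds: 60,
  };
}

const payload = batchUploadPayload([
  { name: 'requirements.txt', content: 'flask==3.0.0\n' },
  { name: 'app.py', content: "print('hello')\n" },
]);
// One POST to /api/v1/compute/containers/{id}/exec uploads both files together.
```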