Aura Workshop Usage Guide

Complete reference for setting up, configuring, and using Aura Workshop — from first launch to advanced multi-agent orchestration.

First Launch

When you open Aura Workshop for the first time, the app creates a local Aurix (SQLite) database at:

All settings, conversations, listener configurations, and scheduled tasks are stored in this database.

On first launch the app also:

API Key Setup

  1. Open Settings by clicking the gear icon in the sidebar or top toolbar.
  2. Select a provider tab (Anthropic, OpenAI, Google, Ollama, etc.).
  3. Enter your API key in the key field. The key is encrypted before being stored in the database.
  4. Select a model from the dropdown or type a custom model ID.
  5. The base URL is filled automatically based on the provider preset but can be overridden.

Each provider stores its API key independently. When you switch between providers, the app loads the previously saved key for that provider.

Local providers (Ollama, Aura AI, LocalAI, vLLM, TGI, SGLang) do not require an API key. Select one of these providers and ensure the corresponding inference server is running locally.

Web UI (Browser Access)

Aura Workshop includes an embedded axum HTTP server that serves the full SolidJS UI to any web browser. This enables headless Linux server deployment and remote access from any device on the network.

Accessing the Web UI

The Web UI server auto-starts on port 18800 by default. Open your browser and navigate to:

http://<machine-ip>:18800

The actual machine IP address is displayed in Settings > System Management > Web UI Server.

Configuration

Configure the Web UI server in Settings > System Management > Web UI Server:

| Setting | Description | Default |
| --- | --- | --- |
| Enabled | Toggle the Web UI server on/off | On |
| Port | HTTP port for the web server | 18800 |
| Auth Token | Optional Bearer token for authentication | Empty (no auth) |

Settings are persisted in the database and survive app restarts.

Authentication

When an auth token is configured, all requests to the Web UI must include it:
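For example, assuming a token of `my-secret-token` was configured, a client would attach it as a Bearer header (sketch below uses Python's standard library for illustration; any HTTP client works the same way):

```python
import urllib.request

# Hypothetical token value; use whatever you configured in Settings.
token = "my-secret-token"

# Build a request to a Web UI API endpoint with the Authorization header.
req = urllib.request.Request(
    "http://localhost:18800/api/tasks",
    headers={"Authorization": f"Bearer {token}"},
)
print(req.get_header("Authorization"))  # the header that will be sent
```

Requests without a matching token are rejected when authentication is enabled.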

How It Works

The browser-based UI uses the same SolidJS frontend with a web transport layer instead of Tauri IPC:

All features work identically in browser mode: agent tasks, chat, settings, listeners, webhooks, schedules, skills, MCP servers, and teams.

Headless Server Usage

For headless Linux servers without a desktop environment:

  1. Install the .deb or .AppImage package
  2. Launch with a virtual display: xvfb-run aura-workshop
  3. Access the full UI at http://<server-ip>:18800
  4. Configure API keys, models, and all settings through the browser

Creating and Running Tasks

Starting a New Task

  1. Type your request in the input field at the bottom of the main panel.
  2. Press Enter or click the send button.
  3. The agent creates a plan, then executes it step by step using available tools.
  4. Progress is shown in real time: you see the agent's text output, tool invocations, and results as they happen.

How Task Classification Works

When you send a prompt, Aura's classification system analyzes it and routes to the right execution path:

| Classification | What Happens | Example |
| --- | --- | --- |
| SINGLE | One agent handles it directly | Simple questions, code scripts, file edits |
| CLARIFY | Asks clarifying questions, pauses for your answer | Vague requests like "help me with my project" |
| TEAM:N | Uses an existing team's workflow | Routes to Software Dev Team, Content Writing Team, etc. |
| WORKFLOW:N | Reuses a previously saved workflow | Same prompt pattern as a prior task |
| NEW | Creates a new team with specialized roles | Complex tasks needing multiple specialists |

You don't need to pick a team or workflow manually -- just describe what you want and the system figures out the best approach.

Sample Prompts (Validated Test Scenarios)

These prompts have been tested end-to-end and demonstrate Aura's core capabilities.

1. Simple Question (SINGLE)

What are the three laws of thermodynamics? One sentence each.

What happens: Single agent answers directly in 1 turn. No tools, no team. Status: completed.

2. Code Generation (SINGLE)

Write a Python function called is_palindrome that checks if a string reads the same forwards and backwards. Save it to palindrome.py

What happens: Single agent writes the code, saves the file, optionally runs tests. You'll see write_file and bash tool calls in the sidebar.

3. Clarification (CLARIFY)

Help me with my project

What happens: The system detects the request is too vague. Instead of guessing, it asks 3-5 specific clarifying questions. Task status shows "Needs Response" in amber. Reply with details and the system re-classifies your answer -- if it needs a team, it creates one automatically.

4. Team Workflow -- Software Development (TEAM)

Build a full REST API for a todo list app with CRUD endpoints, database schema, and unit tests

What happens: Routes to the Software Dev Team (5 roles: Product Manager, Architect, Developer, QA Engineer, DevOps). The PM may ask clarifying questions -- the workflow pauses until you answer. The Workflow Progress panel shows each role's status. Click any node to see that agent's specific output and tool calls.

5. Team Workflow -- Content Writing (TEAM)

Research and write a 2000 word blog post about the future of AI agents in enterprise software

What happens: Routes to the Content Writing Team (3 roles: Research Lead, Writer, Editor). Each role produces its deliverable sequentially. The final blog post is saved as a markdown file.

6. Scheduled Task (SCHEDULE)

Every Monday at 9am, compile a summary of all git commits from the past week and email it to [email protected]

What happens: Classification detects the scheduling intent. A schedule is created and appears in the SCHEDULED section of the sidebar. The agent runs the task immediately as a first execution. The schedule fires automatically at the configured time.

7. New Team Creation (NEW)

Design and build a data ingestion pipeline that reads CSV files, validates the data, transforms it, and loads it into PostgreSQL

What happens: No existing team matches, so the classification creates a new pipeline team (e.g., Data Architect, Developer, DevOps Engineer) in one shot. The team is saved for future reuse. A workflow runs with all agents producing files in your mounted folder.

8. Workflow Reuse (TEAM/WORKFLOW)

Build a REST API for a bookmark manager with full CRUD and tests

What happens: The system recognizes this is similar to Test 4 and reuses the Software Dev Team. A new workflow is saved for this specific task. No duplicate teams created.

9. Translation with Parallel Roles (NEW)

Translate this product documentation into Spanish, French, and German simultaneously, then have each reviewed by a native speaker

What happens: Creates a Localization Team with parallel workflow -- translators run simultaneously, then reviewers check each language. The Workflow Progress panel shows parallel nodes running at the same time.

10. Newsletter + Schedule (NEW + SCHEDULE)

Every Friday at 5pm, research the top AI news, write a newsletter with 5 sections, and email it to [email protected]

What happens: Creates a new newsletter team AND a recurring schedule. Both appear in the sidebar immediately after the task completes. The first newsletter is generated right away.

11. Listener + Automation (NEW + LISTENER)

When a customer asks about pricing on WhatsApp, look up their account, generate a personalized quote, and reply plus send a follow-up email to sales

What happens: Creates a Customer Support team, a listener (WhatsApp, disabled until auth is configured), and a workflow. The listener appears in the LISTENERS section. WhatsApp listeners start disabled because they require authentication setup.

Chat Mode

In addition to the full Agent mode, Aura Workshop provides a lightweight Chat mode for quick questions and conversations that do not require agent tooling.

Switching Modes

Toggle between Agent and Chat tabs in the sidebar. Agent mode gives you the full tool-calling agent loop; Chat mode gives you a fast, conversational interface.

Features

How the Agent Works

The agent operates in a loop of up to 50 turns (configurable). Each turn:

  1. Sends the conversation history and available tools to the LLM.
  2. Receives a response that may contain text and/or tool calls.
  3. Executes any tool calls (file reads, writes, bash commands, etc.).
  4. Feeds tool results back to the LLM for the next turn.

The loop ends when the LLM responds with only text (no tool calls), or the maximum turn count is reached.
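The loop above can be sketched roughly as follows (a simplified illustration with a stubbed model call, not the app's actual implementation):

```python
def run_agent(llm, tools, history, max_turns=50):
    """Simplified sketch of the agent loop described above."""
    for _ in range(max_turns):
        response = llm(history, tools)            # 1. send history + tools
        history.append({"role": "assistant", "content": response["text"]})
        if not response["tool_calls"]:            # loop ends: text only
            return response["text"]
        for call in response["tool_calls"]:       # 3. execute tool calls
            result = tools[call["name"]](**call["args"])
            history.append({"role": "tool", "name": call["name"],
                            "content": result})   # 4. feed results back
    return "max turns reached"
```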

Available Tools

In native mode (default when Docker is unavailable):

| Tool | Description |
| --- | --- |
| read_file | Read file contents |
| write_file | Create or overwrite a file |
| edit_file | Make targeted edits to a file |
| bash | Execute shell commands on the host |
| glob | Find files by pattern |
| grep | Search file contents with regex |
| list_dir | List directory contents |
| web_fetch | Fetch a web page and return clean text |

In Docker mode, three additional tools are available:

| Tool | Description |
| --- | --- |
| docker_run | Run commands in Docker containers |
| docker_list | List running containers |
| docker_images | List available images |

Device tools (opt-in via Settings):

| Tool | Description |
| --- | --- |
| system_notify | Send a system notification |
| screen_capture | Capture a screenshot (macOS) |
| camera_capture | Take a photo via webcam (macOS, requires imagesnap) |

Chat Commands

Type these slash commands in the message input to control the agent session without sending a message to the LLM.

| Command | Description |
| --- | --- |
| /status | Show current model, provider, execution mode, and thinking level |
| /new or /reset | Clear the conversation and start fresh |
| /compact | Summarize the conversation history to reduce token usage. The agent compresses all previous messages into a summary and keeps only the last user message. |
| /think <level> | Set the thinking/reasoning level. Valid levels: off, low, medium, high. Without an argument, shows the current level. |
| /usage | Show token usage for the current session: input tokens, output tokens, total, and estimated cost |
| /tools | List all available tools, both native and MCP |
| /help | Show the help table of all available commands |
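The effect of /compact can be pictured with this sketch (illustrative only; in the real app the summary is produced by the LLM):

```python
def compact(history, summarize):
    """Replace earlier messages with a summary, keeping the last user message."""
    last_user = next(m for m in reversed(history) if m["role"] == "user")
    summary = summarize(history)  # LLM-generated in the actual app
    return [{"role": "system", "content": f"Summary: {summary}"}, last_user]
```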

Mounted Folders

The mount folder button (folder icon next to Send) tells the agent where to work on your filesystem. Always mount a folder when you expect the agent to create or modify files.

How to Use

  1. Click the folder icon in the input toolbar.
  2. A native file dialog opens. Select one or more directories.
  3. Selected folders appear as a "MOUNTED FOLDERS" badge above the input field.
  4. When you send a message, the selected paths are passed as the project_path to the agent.
  5. In Docker mode, these paths are bind-mounted into the container at /workspace.
  6. In native mode, the first path is used as the working directory for bash commands.

When to Mount

| Scenario | Mount folder? |
| --- | --- |
| "Build me a new project" | Yes -- mount where you want the project created (e.g., ~/Desktop/test) |
| "Read this codebase and refactor it" | Yes -- mount the project root |
| "What's the capital of France?" | No -- no file access needed |
| Running a team task from the dropdown | Yes -- mount the output directory so all roles write there |
| Agent creating + running a team | Yes -- the agent passes the mounted path to platform_run_team_task |

Important Notes

How Aura Workshop Orchestrates Work

Aura Workshop has a built-in AI orchestration engine. Instead of doing everything as a single agent, the system can automatically route complex tasks to multi-agent teams, create automation workflows, set up scheduled tasks, and wire triggers -- all from natural language prompts.

How It Works

When you send a prompt, the system decides the best approach:

| What you ask | What happens | Example |
| --- | --- | --- |
| Simple one-off task | Single agent handles it directly | "What time is it in Tokyo?" |
| Complex multi-step project | Auto-routes to a multi-agent team | "Build me a REST API for a todo app" |
| Recurring task | Agent creates a scheduled task | "Every morning at 8am, check if my website is up" |
| Event-driven automation | Agent creates a workflow with triggers | "When a GitHub webhook fires, run tests and deploy" |
| Messaging automation | Agent creates a listener | "Set up a WhatsApp bot that answers pricing questions" |

Auto-Routing

For complex tasks, the system automatically detects the best team:

This happens at the application level -- no model cooperation required.

Model Recommendations

Orchestration features (creating teams, workflows, schedules via natural language) work best with capable cloud models:

| Model | Single Agent | Team Execution | Workflow Creation | Orchestration |
| --- | --- | --- | --- | --- |
| Claude (Anthropic) | Excellent | Excellent | Excellent | Excellent |
| GPT-4 (OpenAI) | Excellent | Excellent | Good | Good |
| DeepSeek Chat | Good | Good | Limited | Limited |
| Ollama (local) | Good | Good | Poor | Poor |

Local models work well for single-agent tasks and executing pre-configured teams. For creating new workflows and orchestrating complex automations via natural language, use a cloud model.

Multi-Agent Teams

Teams define multiple AI roles that work together. Each role runs as a separate agent with its own prompt, and the workflow engine manages execution order, parallel processing, and data passing between roles.

Default Teams

Software Dev Team (5 roles, fan-out enabled):

Content Writing Team (3 roles, sequential):

Using Teams

Automatic -- just describe what you need. Complex tasks auto-route to the matching team:

Build me a Python CLI tool for managing bookmarks

Manual via Settings -- create or edit teams in Settings > Teams with the visual workflow editor.

Via natural language -- ask the agent to create a team:

Create a translation team with a Localization Manager, Translator with fan-out, and Cultural Reviewer

Creating and Editing Teams

  1. Open Settings > Teams and click Create Team.
  2. Add roles -- each role needs a name and a system prompt.
  3. Choose a workflow type: Sequential or Pipeline (with validation gates).
  4. Use the Workflow Editor to customize: add Script, Webhook, Validate, or Approval Gate steps between roles.
  5. Enable Fan-Out on any role by double-clicking the node and checking "Enable Fan-Out".
  6. Import/Export -- click Import to load a team JSON, or Export on any team to download it.

Workflow Pause & Resume

When the first agent in a team workflow (typically the PM) asks clarifying questions, the workflow pauses automatically:

  1. The PM asks questions (e.g., "What database do you prefer?")
  2. Task status changes to "Needs Response"
  3. You type your answer in the input box
  4. The workflow resumes from where it paused, feeding your answer to the next agents

This ensures you get exactly the product you want instead of the agent guessing.

Fan-Out (Parallel Agents)

Fan-Out lets a role automatically spawn multiple agents in parallel -- one per item from an upstream role's output list.

How to enable: Double-click a role node in the Workflow Editor → check "Enable Fan-Out" → set Source Node and Max Parallel Agents.

How it works: The source role produces a numbered list. The fan-out executor detects the list, splits it, and spawns one agent per item. Results are merged for the next role.
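A rough sketch of what the fan-out executor does (parsing a numbered list and running one agent per item in parallel; illustrative, not the actual implementation):

```python
import re
from concurrent.futures import ThreadPoolExecutor

def fan_out(source_output, run_agent, max_parallel=4):
    """Split a numbered list into items and run one agent per item."""
    items = [m.group(1).strip()
             for m in re.finditer(r"^\s*\d+\.\s*(.+)$", source_output, re.M)]
    with ThreadPoolExecutor(max_workers=max_parallel) as pool:
        results = list(pool.map(run_agent, items))
    return "\n".join(results)  # merged output for the next role
```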

| Team Type | Source Role Produces | Fan-Out Role Does |
| --- | --- | --- |
| Software Dev | Architect lists tasks | One developer per task |
| Content Writing | Research Lead lists sections | One writer per section |
| Research | Lead lists questions | One researcher per question |
| Translation | Manager lists languages | One translator per language |

Workflow Progress

When a team runs, the right panel shows:

Role Guardrails

Every agent in a team workflow automatically receives system-enforced rules:

These guardrails are injected at the executor level and apply to every team, including user-created ones.

Automation Workflows

Automation workflows are pipelines that orchestrate triggers, conditions, scripts, webhooks, and teams. Unlike teams (which are multi-agent role-based), workflows handle the plumbing: when to run, what data to route, which conditions to check.

Creating Workflows

Via natural language -- describe the automation you need:

Every morning at 9am, check our server health. If anything is down, have the incident team diagnose and email me a report.

The agent creates the workflow using platform_create_workflow, connecting script nodes, conditional nodes, team nodes, and webhook nodes.

Via Settings -- go to Settings > Workflows > Create Workflow. Use the visual editor to add nodes, connect them, and configure each one.

Via Import -- click Import on the Workflows tab to load a workflow JSON file.

Available Node Types

| Node | Type | Description |
| --- | --- | --- |
| Agent Task | agent-task | LLM agent with tools |
| Team | team | Runs a saved multi-agent team as a step |
| Script | script | Runs bash, Python, Node.js, or Go code |
| Webhook | webhook | HTTP request (GET/POST/PUT/PATCH/DELETE) |
| Conditional | conditional | IF/ELSE branching based on expressions |
| Transform | transform | Data manipulation via JS/Python expression |
| Fan-Out | fan-out | Splits a list into parallel executions |
| Merge | merge | Combines results from parallel branches |
| Delay | delay | Waits a specified duration before continuing |
| Validate | validate | LLM quality check on a previous node's output |
| Approval Gate | human-in-the-loop | Pauses for human approval |

Conditional Expressions

The conditional node evaluates expressions against workflow data:

Routes to true or false output ports based on the result.
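The exact expression syntax is app-specific; conceptually, the node evaluates a boolean expression against the workflow's data and picks the matching output port, roughly like this sketch:

```python
def route(expression, data):
    """Evaluate a boolean expression against workflow data (sketch only;
    a real engine would use a sandboxed expression evaluator)."""
    result = eval(expression, {"__builtins__": {}}, dict(data))
    return "true" if result else "false"  # name of the output port taken
```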

Connecting to Triggers

Scheduled execution -- use platform_create_schedule with prompt conventions:

Webhook-triggered -- create a workflow with a webhook node as the entry point.

Listener-triggered -- use platform_create_listener for messaging platforms.

Workflow Templates

Pre-built templates are available in the templates/ directory:

| Template | Fan-Out | Use Case |
| --- | --- | --- |
| Marketing Campaign | Copywriter per channel | Multi-channel campaigns |
| Competitive Analysis | Researcher per competitor | Market research |
| Course Creator | Lesson Writer per lesson | Educational content |
| Data Analysis | Collector per source | Analytics and reporting |
| Translation | Translator per language | Localization |
| Code Migration | Migrator per module | Codebase conversion |
| Proposal Writer | Section Writer per section | RFP responses |

Import any template via Settings > Workflows > Import.

Platform Tools Reference (53 tools)

The agent has full CRUD operations for all platform resources:

Listeners (8 tools)

| Tool | Description |
| --- | --- |
| platform_create_listener | Create a messaging listener |
| platform_list_listeners | List all listeners |
| platform_get_listener | Get listener details and status |
| platform_edit_listener | Edit a listener's configuration |
| platform_delete_listener | Delete a listener |
| platform_start_listener | Start a listener |
| platform_stop_listener | Stop a listener |
| platform_get_listener_status | Get listener status and QR data |

Webhooks (7 tools)

| Tool | Description |
| --- | --- |
| platform_create_webhook | Create a webhook endpoint |
| platform_list_webhooks | List all webhooks |
| platform_get_webhook | Get webhook details |
| platform_edit_webhook | Edit a webhook's configuration |
| platform_delete_webhook | Delete a webhook |
| platform_start_webhook | Start a webhook |
| platform_stop_webhook | Stop a webhook |

Schedules (7 tools)

| Tool | Description |
| --- | --- |
| platform_create_schedule | Create a scheduled task |
| platform_list_schedules | List all scheduled tasks |
| platform_get_schedule | Get schedule details |
| platform_edit_schedule | Edit a scheduled task |
| platform_delete_schedule | Delete a scheduled task |
| platform_start_schedule | Start a schedule |
| platform_stop_schedule | Stop a schedule |

Skills (5 tools)

| Tool | Description |
| --- | --- |
| platform_create_skill | Create a new skill |
| platform_list_skills | List installed skills |
| platform_get_skill | Get skill details |
| platform_edit_skill | Edit a skill's SKILL.md content |
| platform_delete_skill | Delete a skill |

MCP Servers (5 tools)

| Tool | Description |
| --- | --- |
| platform_connect_mcp | Connect to an MCP server |
| platform_list_mcp | List connected MCP servers |
| platform_get_mcp | Get MCP server details |
| platform_edit_mcp | Edit an MCP server configuration |
| platform_disconnect_mcp | Disconnect an MCP server |

Teams (6 tools)

| Tool | Description |
| --- | --- |
| platform_create_team | Create a multi-agent team with roles and optional fan-out |
| platform_list_teams | List all teams |
| platform_get_team | Get team details |
| platform_edit_team | Edit a team's configuration |
| platform_delete_team | Delete a team |
| platform_run_team_task | Execute a team's workflow (blocks until complete, returns results) |

Automation Workflows (4 tools)

| Tool | Description |
| --- | --- |
| platform_create_workflow | Create an automation workflow with nodes, edges, and triggers |
| platform_list_workflows | List all automation workflows |
| platform_run_workflow | Execute a workflow (blocks until complete) |
| platform_delete_workflow | Delete a workflow |

Credentials (5 tools)

| Tool | Description |
| --- | --- |
| platform_store_credential | Store a credential securely |
| platform_list_credentials | List stored credential names and types (no secrets) |
| platform_get_credential | Retrieve a decrypted credential by name |
| platform_edit_credential | Edit a stored credential |
| platform_delete_credential | Delete a credential |

Settings (2 tools)

| Tool | Description |
| --- | --- |
| platform_get_settings | View current settings (no API keys) |
| platform_update_settings | Update settings |

Security Notes

Skills System

Skills are structured instruction sets that guide the agent when performing specific types of tasks.

How Skills Work

  1. Skills are stored in the skills directory: ~/Library/Application Support/aura-workshop/skills/ (macOS).
  2. Each skill is a folder containing a SKILL.md file with YAML frontmatter (name, description) and markdown instructions.
  3. When the agent starts, all available skills are listed in the system prompt.
  4. When a user request matches a skill, the agent reads the skill's SKILL.md file and follows its instructions.

Bundled Skills

Document and creative skills:

Development workflow skills (superpowers):

Platform integration:

Editing Skills

Each skill in Settings > Skills has an Edit button. Clicking it opens a modal editor for the skill's SKILL.md file, where you can modify the YAML frontmatter (name, description) and the markdown instructions. Changes take effect on the next agent task.

The agent can also edit skills programmatically using the platform_edit_skill tool.

Adding Custom Skills

  1. Click "+ Add Skill" in the Skills panel to import a skill folder from your computer.
  2. Alternatively, create a folder in the skills directory with a SKILL.md file.
  3. The SKILL.md must have YAML frontmatter with name and description fields.
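A minimal SKILL.md might look like the following (the `name` and `description` frontmatter fields are required; the skill name and instructions here are made up for illustration):

```markdown
---
name: changelog-writer
description: Generate a changelog entry from recent git commits
---

# Changelog Writer

1. Run `git log --oneline -20` in the mounted folder.
2. Group the commits into Added / Changed / Fixed sections.
3. Append the entry to CHANGELOG.md in the project root.
```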

Using MCP Servers

The Model Context Protocol (MCP) allows Aura Workshop to connect to external tool servers, extending the agent's capabilities beyond built-in tools.

Adding an MCP Server

  1. Open Settings and navigate to the MCP section.
  2. Click Add Server.
  3. Choose a transport type:
    • HTTP: provide the server URL (e.g., http://localhost:3000/mcp).
    • stdio: provide the command and arguments to spawn the server process (e.g., command npx, args @playwright/mcp).
  4. Optionally configure OAuth credentials if the server requires authentication.
  5. Toggle the server to enabled.
  6. The app connects and discovers available tools. These tools appear in the /tools listing with an mcp_ prefix.

Playwright Browser Automation

Playwright MCP is a popular stdio-based MCP server for browser automation.

  1. Add an MCP server with:
    • Transport: stdio
    • Command: npx
    • Args: @playwright/mcp
  2. Enable the server. The agent can now browse the web, interact with pages, take screenshots, and extract content.
  3. MCP tool results are truncated to 8000 characters to prevent context overflow with large accessibility trees.

Browser-Use MCP (Bundled)

Browser-Use is bundled as a backup browser automation MCP server alongside Playwright. It is auto-connected on startup and provides an alternative approach to web interaction using a Python-based browser agent.

Custom MCP Servers

Any server implementing the MCP protocol can be added. Servers can expose tools with custom schemas, and the agent will see them as additional callable tools. Tool names are formatted as mcp_{server_id}_{tool_name} with hyphens and colons replaced by underscores.
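The naming scheme can be sketched as a small helper (illustrative, not the app's actual code):

```python
def mcp_tool_name(server_id, tool_name):
    """Format an MCP tool name as mcp_{server_id}_{tool_name},
    replacing hyphens and colons with underscores."""
    raw = f"mcp_{server_id}_{tool_name}"
    return raw.replace("-", "_").replace(":", "_")
```

For example, a `browser-click` tool on a server with id `playwright` would surface to the agent as `mcp_playwright_browser_click`.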

Cloud Storage

Aura Workshop can connect to popular cloud storage services, allowing agents to read from and write to your cloud files as part of any task.

Supported Providers

Connecting a Provider

  1. Open Settings and navigate to the Cloud Storage tab.
  2. Click Connect next to the provider you want to add.
  3. An OAuth2 authorization flow opens in your default browser. Approve access when prompted.
  4. A local OAuth callback server on port 18793 receives the authorization token and completes the connection.
  5. The provider appears as connected in the Cloud Storage settings.

Agent Access

Once a provider is connected, agents can read files from and write files to that cloud storage during tasks. For example, you can ask the agent to "download the quarterly report from my Google Drive and summarize it" or "upload this PDF to my Dropbox."

Access to cloud storage is gated behind biometric authentication (Touch ID on macOS, Windows Hello on Windows) to prevent unauthorized use. See the Biometric Authentication section below for details.

Chrome Extension

The Aura Workshop Chrome Extension brings your AI agent into any browser tab. Chat with your agent, ask about the page you're viewing, summarize content, or use selected text -- all from a side panel.

Prerequisites

Installation

  1. Download aura-workshop-chrome-extension.zip from the release page
  2. Unzip the file to a permanent location (e.g., ~/aura-chrome-extension/)
  3. Open Chrome and navigate to chrome://extensions
  4. Enable Developer mode (toggle in the top-right corner)
  5. Click Load unpacked
  6. Select the unzipped extension folder
  7. The Aura Workshop icon appears in your Chrome toolbar

Usage

Click the extension icon to open the side panel. The panel connects to your running Aura Workshop instance via WebSocket on the WebChat listener port (default 18792).

Keyboard shortcut: Press Ctrl+Shift+Y (Windows/Linux) or Cmd+Shift+Y (macOS) to quickly toggle the side panel open or closed.

Status indicator: The header shows a green dot when connected, red when disconnected. The current model name is displayed next to the status dot.

Chat: Type a message and press Enter or click Send. The agent processes your request using all available tools (file operations, web fetch, bash commands, MCP tools, credentials) and returns the response.

Tab-aware conversations: Each browser tab maintains its own conversation context. Switching tabs automatically switches to that tab's conversation, so you can have independent chats running on different pages.

Quick Actions

| Button | What it does |
| --- | --- |
| Ask about page | Extracts the current page's title, URL, and text content (up to 3000 chars), sends it to the agent with a prompt to help understand the page |
| Summarize | Sends the page content with a summarization prompt |
| Use selection | Sends your highlighted text selection from the page to the agent |
| Stop | Stops the current response generation mid-stream |
| Task | Converts the current chat conversation into a full agent task in the desktop app |
| New chat | Clears the conversation and starts fresh |

File attachments: You can drag and drop files onto the input area or paste screenshots. Attached files are shown as thumbnails and sent to the agent as base64 data.

Troubleshooting

| Issue | Solution |
| --- | --- |
| Red status dot / "Disconnected" | Make sure Aura Workshop is running and the WebChat listener is started |
| "Connection failed" | Check that the WebChat listener port (default 18792) isn't blocked by a firewall |
| Agent not responding | Verify your model and API key are configured in Aura Workshop Settings |
| Quick actions don't extract content | Some pages (PDFs, iframes, cross-origin) may block content extraction -- use copy-paste instead |

Privacy

The extension communicates only with your local Aura Workshop instance (localhost:18792). No data is sent to external servers by the extension itself. Page content is only extracted when you click a quick action button -- it is not automatically collected.

Embeddable Chat Widget

Aura Workshop provides an embeddable chat widget that you can add to any website, enabling visitors to interact with your configured agent directly from a web page.

Setup

  1. Create a new Listener in Settings with the platform type set to WebChat.
  2. Start the listener. It launches a WebSocket server for real-time communication.
  3. Copy the provided JavaScript snippet from the listener configuration panel.
  4. Paste the snippet into your website's HTML.

Features

Embedding

The JavaScript snippet creates a floating chat button on your page. When clicked, it opens a chat panel that connects to your running Aura Workshop instance. The snippet handles the WebSocket connection, message rendering, and UI automatically.

Custom Provider Manager

The Custom Provider Manager lets you save and quickly switch between custom LLM provider configurations.

Adding a Custom Provider

  1. Open Settings and navigate to the provider configuration area.
  2. Click Add Custom Provider (or use the custom provider manager UI).
  3. Fill in the configuration:
    • Name -- A display name for this provider (e.g., "My vLLM Server").
    • Base URL -- The endpoint URL (e.g., http://192.168.1.100:8000/v1).
    • Model ID -- The model identifier the server expects.
    • API Key -- Optional, depending on the server's auth requirements.
  4. Save the configuration.

Quick Switching

Saved custom providers appear in the model selector dropdown alongside built-in providers. Select one to instantly switch to that provider's configuration -- no need to re-enter URLs or keys each time.

Compatibility

The Custom Provider Manager supports any OpenAI-compatible endpoint. This includes vLLM, TGI, SGLang, LocalAI, LiteLLM, and any other server that implements the OpenAI chat completions API.
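To sanity-check such a server, you can POST a standard chat completions payload to its `/chat/completions` path (the base URL and model ID below are placeholders for your own server):

```python
import json

# Placeholder values; substitute your server's base URL and model ID.
base_url = "http://192.168.1.100:8000/v1"
payload = {
    "model": "my-model-id",
    "messages": [{"role": "user", "content": "Hello"}],
    "max_tokens": 64,
}
# POST this JSON to f"{base_url}/chat/completions" (with an Authorization
# header if the server requires a key); the schema follows the OpenAI API.
print(json.dumps(payload))
```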

Billing & Spend Tracking

Usage Dashboard

Open Settings > Billing to see:

Spend Limits

Set daily and monthly spending caps per configured provider:

  1. Go to Settings > Billing > Spend Limits
  2. Enter a daily limit (e.g., $5.00) and/or monthly limit (e.g., $50.00)
  3. When a provider hits its limit, Aura automatically switches to the next available provider in your fallback list

Provider Fallback Order

Configure a priority-ordered list of backup providers:

  1. Go to Settings > Billing > Provider Fallback Order
  2. Add providers in priority order (e.g., DeepSeek first, then Anthropic, then Ollama as local fallback)
  3. When the primary provider hits its spend limit, Aura switches mid-session with a notification
  4. Local models (Aura AI, Ollama) serve as zero-cost final fallbacks

Model Pricing

Edit the per-model pricing used for cost calculations:

  1. Go to Settings > Billing > Model Pricing
  2. Adjust input/output prices per million tokens for any model
  3. Prices are used to calculate your spend -- keep them updated with your provider's current rates
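Since prices are entered per million tokens, the spend estimate is a straightforward calculation (a sketch of the arithmetic; the illustrative rates below are not real prices):

```python
def estimate_cost(input_tokens, output_tokens, in_price_per_m, out_price_per_m):
    """Estimate spend from per-million-token input and output prices."""
    return (input_tokens / 1_000_000) * in_price_per_m \
         + (output_tokens / 1_000_000) * out_price_per_m

# e.g. 120k input and 30k output tokens at $3 / $15 per million tokens
print(estimate_cost(120_000, 30_000, 3.0, 15.0))  # about $0.81
```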

How Token Tracking Works

Settings Overview

Settings now opens as a full-window panel, hiding the sidebar and task panel to give you an uncluttered view of all configuration options. Click the back arrow or press Escape to return to the main workspace.

| Setting | Description | Default |
| --- | --- | --- |
| API Key | Provider-specific API key (encrypted in database) | Empty |
| Model | LLM model identifier | (none -- must be selected) |
| Base URL | API endpoint URL | (depends on provider) |
| Max Tokens | Maximum output tokens per response | 4096 |
| Temperature | Response randomness (0.0 - 1.0) | 0.7 |
| Top P | Nucleus sampling threshold (0.0 - 1.0) | 1.0 |
| Top K | Top-K sampling limit (0 = disabled) | 0 |
| Min P | Minimum probability threshold (0.0 - 1.0) | 0.0 |
| Repeat Penalty | Penalty for token repetition (1.0 = no penalty) | 1.0 |
| Native Mode | Run commands on host instead of Docker | Auto-detected |
| Thinking Level | Reasoning depth: off, low, medium, high | off |
| Elevated Bash | Allow elevated/sudo commands | false |
| Screen Capture | Enable the screen_capture tool | false |
| Camera Capture | Enable the camera_capture tool | false |
| System Notifications | Enable the system_notify tool | true |
| Voice Enabled | Enable text-to-speech for responses | false |
| TTS Provider | Voice synthesis: system, openai, elevenlabs | system |
| Selected Voice | Voice name/ID for TTS | (system default) |
| Custom Providers | Saved custom LLM provider configurations | (none) |
| Cloud Storage | Connected cloud storage providers (Dropbox, Box, OneDrive, Google Drive) | (none) |
| Biometric Auth | Require biometric authentication for sensitive settings | Enabled |

Sampling Parameters

Not all parameters are supported by every provider. Unsupported parameters are silently ignored by the backend.

Provider-specific keys are stored in a provider_keys JSON map, allowing you to have API keys saved for multiple providers simultaneously. OpenAI users can optionally configure Organization ID and Project ID.
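Settings can also be read and updated over the REST API. A minimal sketch, assuming the default port; the key names in the PUT body are assumptions, so check the GET response for the exact field names your build uses:

```shell
# Inspect all current settings
curl http://localhost:18800/api/settings

# Update sampling parameters (field names are illustrative)
curl -X PUT http://localhost:18800/api/settings \
  -H "Content-Type: application/json" \
  -d '{"temperature":0.7,"max_tokens":4096}'
```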

Biometric Authentication

How It Works

Biometric authentication is required to access the Credentials and Cloud Storage settings tabs. When you navigate to either of these tabs, a biometric prompt appears. Once you authenticate successfully, access is granted for the remainder of the session -- you do not need to authenticate again until you restart the app.

The experience is transparent and lightweight: a single Touch ID or Windows Hello prompt is all that stands between you and the protected settings.

Aura Workshop REST API

Aura Workshop includes an embedded HTTP server (default port 18800) that exposes the full platform as a REST API. Every feature available in the desktop app is also available via HTTP. All endpoints are prefixed with /api.

Base URL: http://localhost:18800/api
Content-Type: application/json for all POST/PUT requests
Authentication: Optional Bearer token (configured in Settings > Web Server)

Tasks

| Method | Endpoint | Description |
|--------|----------|-------------|
| GET | /api/tasks | List all tasks |
| POST | /api/tasks | Create a new task |
| GET | /api/tasks/{id} | Get task details |
| DELETE | /api/tasks/{id} | Delete a task |
| GET | /api/tasks/{id}/messages | Get task conversation messages |
| GET | /api/tasks/interrupted | List interrupted tasks for resume |
| POST | /api/tasks/{id}/run | Run a task agent (SSE stream) |
| POST | /api/tasks/{id}/resume | Resume an interrupted task (SSE stream) |

Example: Create and run a task

# Create task
curl -X POST http://localhost:18800/api/tasks \
  -H "Content-Type: application/json" \
  -d '{"title":"Build API","description":"Build a REST API","prompt":"Build a REST API"}'

# Run it (returns SSE stream)
curl -N -X POST http://localhost:18800/api/tasks/{task_id}/run \
  -H "Content-Type: application/json" \
  -d '{"task_id":"...","message":"Build a REST API for a todo app","project_path":"/path/to/folder"}'

Conversations (Chat)

| Method | Endpoint | Description |
|--------|----------|-------------|
| GET | /api/conversations | List all conversations |
| POST | /api/conversations | Create a new conversation |
| DELETE | /api/conversations/{id} | Delete a conversation |
| PUT | /api/conversations/{id}/title | Update conversation title |
| GET | /api/conversations/{id}/messages | Get conversation messages |
| POST | /api/conversations/{id}/messages | Add a message |
| POST | /api/chat/send | Send chat message (SSE stream) |
| POST | /api/chat/enhanced | Send chat with tools (SSE stream) |
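A sketch of the chat flow, following the same pattern as the task example earlier: create a conversation, then stream the reply with `curl -N`. The request body fields are assumptions based on that example, not a documented schema:

```shell
# Create a conversation (body fields are illustrative)
curl -X POST http://localhost:18800/api/conversations \
  -H "Content-Type: application/json" \
  -d '{"title":"Quick chat"}'

# Send a message and stream the response as SSE
curl -N -X POST http://localhost:18800/api/chat/send \
  -H "Content-Type: application/json" \
  -d '{"conversation_id":"...","message":"Hello!"}'
```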

Teams

| Method | Endpoint | Description |
|--------|----------|-------------|
| GET | /api/teams | List all teams |
| POST | /api/teams | Create a team |
| PUT | /api/teams/{id} | Update a team |
| DELETE | /api/teams/{id} | Delete a team |
| POST | /api/teams/run | Run a team task |

Automation Workflows

| Method | Endpoint | Description |
|--------|----------|-------------|
| GET | /api/workflows | List all workflows |
| POST | /api/workflows | Create a workflow |
| GET | /api/workflows/{id} | Get workflow details |
| PUT | /api/workflows/{id} | Update a workflow |
| DELETE | /api/workflows/{id} | Delete a workflow |
| POST | /api/workflows/{id}/run | Execute a workflow |
| GET | /api/workflow/runs/{run_id} | Get workflow run status |
| POST | /api/workflow/approvals/{id}/resolve | Resolve a human-in-the-loop approval |
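A sketch of triggering a workflow and polling its run status, assuming the default port; the response field names are not documented here, so inspect the actual JSON returned:

```shell
# Execute a workflow (an empty body is assumed to be accepted)
curl -X POST http://localhost:18800/api/workflows/{id}/run \
  -H "Content-Type: application/json" \
  -d '{}'

# Poll the run status using the run_id from the response
curl http://localhost:18800/api/workflow/runs/{run_id}
```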

Schedules

| Method | Endpoint | Description |
|--------|----------|-------------|
| GET | /api/schedules | List all scheduled tasks |
| POST | /api/schedules | Create a schedule |
| DELETE | /api/schedules/{id} | Delete a schedule |
| POST | /api/schedules/{id}/toggle | Enable/disable a schedule |
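A sketch of creating and pausing a schedule. The body fields (`name`, `cron`, `prompt`) are assumptions for illustration; the actual schema may differ:

```shell
# Create a schedule (field names are illustrative)
curl -X POST http://localhost:18800/api/schedules \
  -H "Content-Type: application/json" \
  -d '{"name":"Nightly report","cron":"0 2 * * *","prompt":"Summarize the latest commits"}'

# Pause it without deleting it
curl -X POST http://localhost:18800/api/schedules/{id}/toggle
```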

Listeners

| Method | Endpoint | Description |
|--------|----------|-------------|
| GET | /api/listeners | List all listeners |
| POST | /api/listeners | Create a listener |
| PUT | /api/listeners/{id} | Update a listener |
| DELETE | /api/listeners/{id} | Delete a listener |
| POST | /api/listeners/{id}/start | Start a listener |
| POST | /api/listeners/{id}/stop | Stop a listener |
| POST | /api/listeners/{id}/toggle | Toggle listener on/off |
| GET | /api/listeners/{id}/logs | Get listener execution logs |
| GET | /api/listeners/platforms | Get supported messaging platforms |

Webhooks

| Method | Endpoint | Description |
|--------|----------|-------------|
| GET | /api/webhooks | List all webhooks |
| POST | /api/webhooks | Create a webhook |
| DELETE | /api/webhooks/{id} | Delete a webhook |
| POST | /api/webhooks/{id}/toggle | Enable/disable a webhook |
| GET | /api/webhooks/{id}/url | Get the webhook's trigger URL |
| GET | /api/webhooks/{id}/logs | Get webhook execution logs |
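For example, to wire a webhook into an external system, look up its trigger URL and then verify deliveries via its logs:

```shell
# Get the trigger URL to paste into the external service
curl http://localhost:18800/api/webhooks/{id}/url

# Inspect execution logs to confirm deliveries are arriving
curl http://localhost:18800/api/webhooks/{id}/logs
```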

Billing & Usage

| Method | Endpoint | Description |
|--------|----------|-------------|
| GET | /api/billing/summary | Get spend summary per provider |
| GET | /api/billing/limits | Get all spend limits |
| POST | /api/billing/limits | Set a spend limit for a provider |
| GET | /api/billing/fallback-order | Get provider fallback priority |
| POST | /api/billing/fallback-order | Set provider fallback priority |
| GET | /api/billing/pricing | Get model pricing table |
| POST | /api/billing/pricing | Update model pricing |
| POST | /api/billing/reset | Reset all usage data |
| GET | /api/billing/daily | Get daily usage (last 30 days) |
| GET | /api/billing/daily-by-model | Get daily usage per model |

Settings & Data Management

| Method | Endpoint | Description |
|--------|----------|-------------|
| GET | /api/settings | Get all settings |
| PUT | /api/settings | Update settings |
| POST | /api/settings/test | Test provider connection |
| GET | /api/platform | Get platform info (OS, version) |
| GET | /api/diagnostics | Run system diagnostics |
| POST | /api/inference/stop | Stop running inference (optional task_id in body) |
| POST | /api/data/clear-history | Clear all conversation history |
| POST | /api/data/reset-keys | Reset all API keys |
| POST | /api/data/reset-database | Reset entire database |
| POST | /api/data/reset-all | Factory reset |

Streaming (Server-Sent Events)

Several endpoints return SSE streams for real-time updates. Connect with EventSource or curl -N.

| Endpoint | Description |
|----------|-------------|
| POST /api/tasks/{id}/run | Agent task execution — streams text, tool calls, plan steps, done/error |
| POST /api/tasks/{id}/resume | Resume interrupted task — same event types as run |
| POST /api/chat/send | Chat message — streams text chunks + done |
| POST /api/chat/enhanced | Chat with tools — streams text + tool events + done |
| GET /api/events | Global event stream — receives ALL workflow events from any client (desktop or web) |

SSE Event Types:

{"type":"text","content":"Hello world..."}          // Streaming text
{"type":"tool_start","tool":"write_file","input":{}} // Tool execution started
{"type":"tool_end","tool":"write_file","success":true} // Tool completed
{"type":"node_running","node_id":"role_0","label":"PM"} // Workflow node status
{"type":"done","total_turns":5}                      // Task completed
{"type":"error","message":"..."}                     // Task failed

Authentication

API authentication is optional. When a token is configured in Settings > Web Server, include it as a Bearer token:

curl -H "Authorization: Bearer YOUR_TOKEN" http://localhost:18800/api/tasks

If no token is configured, all API requests are allowed without authentication (suitable for local-only access).

Check authentication status:

GET /api/auth/check
# Returns: {"authenticated": true}

OpenAI API Compatibility

Aura Workshop can connect to any OpenAI-compatible API endpoint. This includes OpenAI itself, DeepSeek, Moonshot/Kimi, vLLM, TGI, SGLang, LocalAI, LiteLLM, and any server implementing the chat completions format.

Configuration

In Settings, select a provider or enter a custom base URL:

| Provider | Base URL | Auth Header |
|----------|----------|-------------|
| OpenAI | https://api.openai.com/v1 | Authorization: Bearer sk-... |
| DeepSeek | https://api.deepseek.com | Authorization: Bearer sk-... |
| Moonshot | https://api.moonshot.cn/v1 | Authorization: Bearer sk-... |
| Custom / Self-hosted | http://your-server:8000/v1 | Optional |

Supported Endpoints

POST /v1/chat/completions    # Standard chat completions
POST /chat/completions       # Also accepted (without /v1 prefix)

Request Format

{
  "model": "deepseek-chat",
  "messages": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"}
  ],
  "max_tokens": 4096,
  "temperature": 0.7,
  "stream": true,
  "tools": [...]  // Optional: function calling
}

Streaming Response

data: {"choices":[{"delta":{"content":"Hello"},"index":0}]}
data: {"choices":[{"delta":{"content":" there"},"index":0}]}
data: {"choices":[{"delta":{"tool_calls":[...]},"index":0}]}
data: [DONE]

Anthropic API

Aura Workshop natively supports the Anthropic Messages API for Claude models.

Configuration

| Setting | Value |
|---------|-------|
| Base URL | https://api.anthropic.com |
| Auth Header | x-api-key: sk-ant-... |
| API Version | anthropic-version: 2023-06-01 |
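Using the header values above, you can sanity-check a key directly against the Anthropic Messages API before entering it in Settings. A sketch, assuming the key is exported as `ANTHROPIC_API_KEY`:

```shell
# Minimal Messages API request using the headers from the table above
curl https://api.anthropic.com/v1/messages \
  -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "Content-Type: application/json" \
  -d '{"model":"claude-sonnet-4-20250514","max_tokens":64,"messages":[{"role":"user","content":"Hello!"}]}'
```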

Supported Models

Request Format

{
  "model": "claude-sonnet-4-20250514",
  "max_tokens": 4096,
  "messages": [
    {"role": "user", "content": "Hello!"}
  ],
  "system": "You are a helpful assistant.",
  "tools": [...]  // Optional: tool use
}

Response Format

{
  "content": [{"type": "text", "text": "Hello! How can I help?"}],
  "model": "claude-sonnet-4-20250514",
  "usage": {
    "input_tokens": 12,
    "output_tokens": 8
  }
}

Anthropic responses include exact token counts in the usage field, which Aura uses for precise billing tracking.

Ollama API

Aura Workshop connects to Ollama via its OpenAI-compatible endpoint. Run any local model through Ollama and Aura treats it like any other provider.

Configuration

| Setting | Value |
|---------|-------|
| Base URL | http://localhost:11434/v1 |
| API Key | Leave empty |
| Model | Any Ollama model name (e.g., llama3.1, qwen2.5, deepseek-r1) |

Setup

# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh

# Pull a model
ollama pull llama3.1

# Ollama runs automatically on localhost:11434
# Aura Workshop auto-detects it
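To confirm the OpenAI-compatible endpoint is reachable before pointing Aura at it, send a request directly to Ollama:

```shell
# Verify Ollama's OpenAI-compatible chat endpoint (no API key needed)
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model":"llama3.1","messages":[{"role":"user","content":"Hello!"}]}'
```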

Supported Models

Any model available in the Ollama registry works. Popular choices:

Ollama models run entirely locally with zero API costs. They serve as the final fallback in the provider fallback chain when all cloud provider spend limits are reached.