Core Concepts
Cortask is built around a few key concepts that work together: you pick a Provider and model, select a Workspace (your project), start a Session (a conversation), and the Agent uses Skills and built-in tools to complete tasks. Important context is stored in Memory, rich outputs are displayed as Artifacts, and everything is secured by the Credential store.
Agents
The agent is the core of Cortask — it orchestrates multi-turn conversations between the AI model and tools.
How it works:
- You send a message (prompt)
- The agent sends it to the selected LLM along with the conversation history and available tools
- The model responds — either with text, a request to call tools, or both
- If tools are called, the agent executes them and feeds results back to the model
- This repeats until the model finishes its response or reaches the maximum turn limit
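The loop above can be sketched as follows. All names here (`runAgent`, `callModel`, `executeTool`, the message shapes) are illustrative, not Cortask's actual API:

```typescript
// Minimal sketch of the agent's turn loop: send history to the model,
// execute any requested tools, feed results back, repeat until the model
// stops calling tools or the turn limit is hit. Hypothetical names throughout.
type ToolCall = { name: string; args: Record<string, unknown> };
type ModelResponse = { text: string; toolCalls: ToolCall[] };
type Message = { role: "user" | "assistant" | "tool"; content: string };

async function runAgent(
  prompt: string,
  callModel: (history: Message[]) => Promise<ModelResponse>,
  executeTool: (call: ToolCall) => Promise<string>,
  maxTurns = 25,
): Promise<Message[]> {
  const history: Message[] = [{ role: "user", content: prompt }];
  for (let turn = 0; turn < maxTurns; turn++) {
    const res = await callModel(history);
    history.push({ role: "assistant", content: res.text });
    if (res.toolCalls.length === 0) break; // model finished its response
    // execute requested tools and feed results back to the model
    for (const call of res.toolCalls) {
      history.push({ role: "tool", content: await executeTool(call) });
    }
  }
  return history;
}
```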
Key behaviors:
- Streaming — responses arrive in real time as the model generates them. You see thinking output, text, and tool calls as they happen.
- Tool concurrency — the agent can execute up to 5 tool calls in parallel per turn for faster task completion.
- Cancellation — you can abort a running agent at any time.
- Attachments — you can attach images or reference workspace files (up to 50KB) as context.
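The tool-concurrency behavior can be illustrated with a simple limiter that runs at most five workers at a time (a sketch of the idea, not Cortask's implementation):

```typescript
// Illustrative concurrency limiter: process items in parallel, at most
// `limit` at a time (Cortask caps tool calls at 5 per turn). Results keep
// the original order. Hypothetical sketch, not the actual implementation.
async function runLimited<T, R>(
  items: T[],
  worker: (item: T) => Promise<R>,
  limit = 5,
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;
  // each "lane" pulls the next unclaimed item until none remain
  async function lane(): Promise<void> {
    while (next < items.length) {
      const i = next++;
      results[i] = await worker(items[i]);
    }
  }
  const lanes = Math.min(limit, items.length);
  await Promise.all(Array.from({ length: lanes }, () => lane()));
  return results;
}
```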
Configuration:
| Setting | Description | Default |
|---|---|---|
| Max turns | Maximum conversation rounds per run | 25 |
| Temperature | Creativity vs. consistency (0–2) | 0.7 |
| Max tokens | Token limit per response | Provider default |
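As a concrete illustration, these settings might map to a config shape like the one below. Field names are hypothetical, not Cortask's actual schema:

```typescript
// Hypothetical shape of the agent settings above; the field names are
// illustrative, not Cortask's actual config schema.
interface AgentConfig {
  maxTurns: number;    // maximum conversation rounds per run (default 25)
  temperature: number; // 0–2; lower = more consistent, higher = more creative
  maxTokens?: number;  // omit to use the provider's default
}

function validateConfig(c: AgentConfig): AgentConfig {
  if (c.maxTurns < 1) throw new Error("maxTurns must be >= 1");
  if (c.temperature < 0 || c.temperature > 2)
    throw new Error("temperature must be in [0, 2]");
  return c;
}

const defaults: AgentConfig = { maxTurns: 25, temperature: 0.7 };
```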
Workspaces
A workspace is a project context — a directory on your filesystem plus Cortask-specific settings and data.
What a workspace contains:
- A root directory (your project folder)
- A .cortask/ hidden folder with workspace-specific data (memory, session database)
- Per-workspace settings: default provider, model, and enabled skills
What you can do:
- Create multiple workspaces for different projects
- Switch between them from the sidebar
- Reorder them by drag-and-drop
- Browse the file tree and manage files directly
- Set workspace-specific LLM and skill configurations
Each workspace keeps its own sessions, memory, and settings — so context never leaks between projects.
Sessions
A session is a single conversation thread within a workspace.
Each session stores the full message history — your prompts, the agent's responses, tool calls, and tool results. Sessions persist between app restarts, so you can pick up where you left off.
Features:
- Multiple sessions — run several conversations in parallel within the same workspace, managed via tabs.
- Forking — branch from an existing session to explore an alternative approach while preserving the original.
- Channel sessions — sessions can originate from channels (Telegram, Discord, WhatsApp), not just the UI.
Providers
Cortask supports multiple LLM providers behind a unified interface. You configure API keys once, then switch between providers and models freely.
Supported providers:
| Provider | Models | Auth |
|---|---|---|
| Anthropic | Claude (Sonnet, Opus, Haiku) | API key |
| OpenAI | GPT-4o, GPT-4, GPT-3.5 | API key |
| Google | Gemini (Pro, Flash) | API key |
| xAI | Grok | API key |
| Moonshot | Kimi | API key |
| OpenRouter | Multi-provider router | API key |
| MiniMax | MiniMax models | API key |
| Ollama | Local models (Llama, Mistral, etc.) | None (local) |
How model selection works:
- Add your API key for a provider
- Set a default provider and model globally
- Optionally override the provider/model per workspace
- The agent uses the selected model for all runs in that context
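The resolution order above — workspace override first, global default otherwise — can be sketched as (names illustrative, not Cortask's API):

```typescript
// Sketch of model selection: a workspace-level override wins; any field it
// leaves unset falls back to the global default. Illustrative names only.
interface ModelChoice { provider: string; model: string }

function resolveModel(
  globalDefault: ModelChoice,
  workspaceOverride?: Partial<ModelChoice>,
): ModelChoice {
  return {
    provider: workspaceOverride?.provider ?? globalDefault.provider,
    model: workspaceOverride?.model ?? globalDefault.model,
  };
}
```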
Providers also handle token counting for usage tracking and embeddings for memory search.
Skills
Skills are plugins that extend the agent's capabilities. They're defined as SKILL.md files with YAML frontmatter and markdown instructions.
Three tiers:
| Tier | What it is | Example |
|---|---|---|
| Text-only | Markdown instructions that teach the agent to use existing tools | Weather skill: teaches the agent to use curl wttr.in |
| HTTP templates | Tool definitions with templated API requests | Notion skill: defines API calls with {{credential:apiKey}} placeholders |
| Code-based | Custom JavaScript with tool handlers (index.js) | Speech-to-text skill: runs local inference via Node.js |
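A text-only skill's SKILL.md might look something like this. The frontmatter field names here are illustrative guesses, not the documented skill format:

```markdown
---
name: weather
description: Look up current weather for a location
# `requires` and its keys are hypothetical field names for illustration
requires:
  binaries: [curl]
---

To answer weather questions, run `curl wttr.in/<city>` and summarize
the result for the user.
```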
Skill lifecycle:
- Skills declare their requirements (API keys, binaries, OS compatibility)
- A skill is "eligible" only when all requirements are met
- You enable/disable skills per workspace
- The agent automatically discovers and uses eligible skills during a run
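The eligibility rule — every declared requirement must be satisfied — can be sketched as (types illustrative):

```typescript
// Sketch of the eligibility check above: a skill is usable only when all of
// its declared requirements are met. Field names are illustrative.
interface SkillRequirements {
  credentials?: string[]; // credential fields that must be filled in
  binaries?: string[];    // executables that must be on PATH
  os?: string[];          // compatible platforms, e.g. ["darwin", "linux"]
}

function isEligible(
  req: SkillRequirements,
  env: { credentials: Set<string>; binaries: Set<string>; os: string },
): boolean {
  return (
    (req.credentials ?? []).every((c) => env.credentials.has(c)) &&
    (req.binaries ?? []).every((b) => env.binaries.has(b)) &&
    (req.os === undefined || req.os.includes(env.os))
  );
}
```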
Built-in skills include: web browsing, GitHub, Notion, Slack, Spotify, Google Workspace, email, image generation, text-to-speech, PDF tools, and more (30+ total).
Memory
Memory gives the agent persistent context across conversations. Without it, each session starts from scratch.
Two scopes:
- Workspace memory (.cortask/memory.md) — project-specific notes, decisions, architecture context
- Global memory (~/.cortask/memory.md) — personal preferences, common patterns
How it works:
- Before each run, the agent retrieves relevant memory passages based on the conversation context
- The agent can also save new memories during a run (e.g., "remember that the API uses v2 auth")
- You can manually edit memory files at any time
Search methods:
- Full-text search — fast keyword matching
- Semantic search — finds conceptually similar content using vector embeddings (even if the exact words differ)
- Hybrid — combines both for best relevance
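The hybrid method can be illustrated by blending a keyword score with cosine similarity over embeddings. This is a sketch of the general idea, not Cortask's actual scorer:

```typescript
// Illustrative hybrid ranking: combine keyword matching with embedding
// similarity. A sketch of the concept, not the real implementation.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

// fraction of query words that appear in the text
function keywordScore(query: string, text: string): number {
  const words = query.toLowerCase().split(/\s+/);
  const hay = text.toLowerCase();
  return words.filter((w) => hay.includes(w)).length / words.length;
}

// weight in [0,1]: 0 = pure keyword, 1 = pure semantic
function hybridScore(
  query: string, text: string,
  queryVec: number[], textVec: number[],
  weight = 0.5,
): number {
  return (1 - weight) * keywordScore(query, text) + weight * cosine(queryVec, textVec);
}
```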
Embedding providers for semantic search: local (no API needed), OpenAI, Google, or Ollama.
Artifacts
Artifacts are rich outputs that need their own display — not just inline text in the chat.
Supported types:
| Type | Use case |
|---|---|
| HTML | Interactive dashboards, reports, previews |
| CSV | Tabular data exports |
| JSON | Structured data |
| Image | Generated or processed images |
| SVG | Diagrams, charts |
When a tool generates an artifact, it appears in the preview panel alongside the chat. Artifacts auto-expire after 24 hours.
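The 24-hour expiry rule amounts to a simple age filter, sketched below (illustrative, not the actual code):

```typescript
// Sketch of the 24-hour artifact expiry rule above. Illustrative only.
const ARTIFACT_TTL_MS = 24 * 60 * 60 * 1000;

interface Artifact { id: string; createdAt: number } // epoch millis

function liveArtifacts(all: Artifact[], now = Date.now()): Artifact[] {
  return all.filter((a) => now - a.createdAt < ARTIFACT_TTL_MS);
}
```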
Credentials
Cortask securely stores API keys and auth tokens needed by providers and skills.
Security model:
- All credentials encrypted at rest with AES-256-GCM
- Decryption key derived from a master secret
- Credentials are only decrypted when actively needed by a skill or provider
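An AES-256-GCM round trip with Node's built-in crypto module illustrates the model above. This is a sketch, not Cortask's code; in particular, deriving the key from the master secret is simplified here to a scrypt call:

```typescript
// Minimal AES-256-GCM encrypt/decrypt round trip. Illustrative sketch:
// key derivation from the master secret is simplified to scrypt.
import { createCipheriv, createDecipheriv, randomBytes, scryptSync } from "node:crypto";

function deriveKey(masterSecret: string, salt: Buffer): Buffer {
  return scryptSync(masterSecret, salt, 32); // 32 bytes = AES-256 key
}

function encrypt(plaintext: string, key: Buffer): { iv: Buffer; tag: Buffer; data: Buffer } {
  const iv = randomBytes(12); // standard 96-bit GCM nonce
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const data = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return { iv, tag: cipher.getAuthTag(), data };
}

function decrypt(box: { iv: Buffer; tag: Buffer; data: Buffer }, key: Buffer): string {
  const decipher = createDecipheriv("aes-256-gcm", key, box.iv);
  decipher.setAuthTag(box.tag); // GCM authenticates as well as encrypts
  return Buffer.concat([decipher.update(box.data), decipher.final()]).toString("utf8");
}
```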
Credential types:
- API key — simple key string (most common)
- OAuth2 — full OAuth flow with automatic token refresh
- Bearer token — opaque auth tokens
- Basic auth — username + password
- Custom — arbitrary key-value pairs
How skills use credentials:
- Skills declare a credential schema (what keys they need, labels, descriptions)
- You fill in the values via the UI or CLI
- HTTP template skills reference them as {{credential:fieldName}}
- Code skills access them programmatically via the credential store
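Resolving {{credential:fieldName}} placeholders in an HTTP template could look like this (illustrative; the real template engine is not shown here):

```typescript
// Sketch of {{credential:fieldName}} substitution for HTTP template skills.
// Illustrative only, not the actual template engine.
function fillCredentials(
  template: string,
  credentials: Record<string, string>,
): string {
  return template.replace(/\{\{credential:(\w+)\}\}/g, (_match, field: string) => {
    const value = credentials[field];
    if (value === undefined) throw new Error(`missing credential: ${field}`);
    return value;
  });
}
```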
You never see raw credential values after saving — they're encrypted immediately.