Getting started #
A 10-minute walkthrough: install the free edition for your CMS, configure a provider, send your first chat turn, then add Pro when you want the agent to build for you.
1. Install the free edition #
WordPress (5 min) #
Requires WordPress 6.4+ (tested up to 6.9, multisite supported) and PHP 8.1+. WordPress 6.3 and earlier are unsupported.
- Plugins → Add New in the admin.
- Upload Plugin and pick seanosai-1.0.0.zip from /products/seanosai-wordpress (or, once we're listed, search "Seanos AI" on WordPress.org).
- Activate. A new top-level menu Seanos AI appears with
submenus: Dashboard, Conversations, Tools, Reporting, Settings.
- Visit any of those once so the plugin runs its first-time database
setup.
Joomla 4.2+ / 5.x (5 min) #
Requires Joomla 4.2+ or 5.x and PHP 8.1+. Joomla 3.x and Joomla 4.0 / 4.1 are unsupported (the codebase uses J4-only patterns and requires PHP 8.1, which Joomla 4.2 was the first to mandate).
- System → Install → Extensions → Upload Package File.
- Upload com_seanosai-1.0.0.zip from /products/seanosai-joomla (or, once we're listed, install from JED).
- Components → Seanos AI lands you on the Dashboard. Submenus
appear in Joomla's left admin sidebar: Dashboard, Conversations, Reporting, Tools, Settings.
- Reload the admin once so the install script seeds the chat /
conversation tables.
The free edition works on its own from this point — no API key, no remote service. The next step adds an LLM provider so the agent can actually answer.
2. Configure a provider #
Seanos AI → Settings.
Pick whichever provider you have a key for:
| Provider | Where to get a key | Notes |
|---|---|---|
| Anthropic (Claude) | console.anthropic.com | Best tool-use reasoning. Free tier sufficient for casual use. |
| OpenAI | platform.openai.com | Widest model selection. |
| Gemini | aistudio.google.com | Free tier, fast. |
| DeepSeek | platform.deepseek.com | Cheap. Good code-aware reasoning. |
| Grok (xAI) | console.x.ai | Strong long-context and reasoning. |
| GitHub Models | github.com/marketplace/models | Free for personal use. |
| Ollama (local) | install Ollama | Runs on your hardware. No data leaves the box. |
| OpenAI-compatible | any URL | Self-hosted vLLM, LM Studio, LocalAI, etc. |
Toggle the provider on, paste the API key, click Test connection.
A successful test also persists the key — no separate Save click
needed. The plugin / component encrypts the key at rest with
sodium-secretbox keyed off the platform's site secret (AUTH_KEY on
WordPress, the Joomla secret on Joomla); it's never returned to the
browser, and the Settings UI shows Key saved once configured.
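The at-rest encryption described above can be sketched in a few lines of PHP. This is a minimal illustration, not the plugin's actual code: the function names are hypothetical, and only the libsodium calls (`sodium_crypto_secretbox` and friends, built into PHP 7.2+) are real. It assumes the 32-byte secretbox key is derived from the site secret with a generic hash.

```php
<?php
// Hypothetical sketch of secretbox encryption keyed off the platform's
// site secret (e.g. AUTH_KEY on WordPress). Not the plugin's real code.

function seanos_derive_key(string $siteSecret): string {
    // Derive exactly SODIUM_CRYPTO_SECRETBOX_KEYBYTES (32) bytes.
    return sodium_crypto_generichash($siteSecret, '', SODIUM_CRYPTO_SECRETBOX_KEYBYTES);
}

function seanos_encrypt_api_key(string $apiKey, string $siteSecret): string {
    $key   = seanos_derive_key($siteSecret);
    $nonce = random_bytes(SODIUM_CRYPTO_SECRETBOX_NONCEBYTES);
    // Store nonce + ciphertext together, base64-encoded for the options table.
    return base64_encode($nonce . sodium_crypto_secretbox($apiKey, $nonce, $key));
}

function seanos_decrypt_api_key(string $stored, string $siteSecret): string {
    $key    = seanos_derive_key($siteSecret);
    $raw    = base64_decode($stored);
    $nonce  = substr($raw, 0, SODIUM_CRYPTO_SECRETBOX_NONCEBYTES);
    $cipher = substr($raw, SODIUM_CRYPTO_SECRETBOX_NONCEBYTES);
    $plain  = sodium_crypto_secretbox_open($cipher, $nonce, $key);
    if ($plain === false) {
        throw new RuntimeException('Key decryption failed (wrong site secret?)');
    }
    return $plain;
}
```

Because the key is derived from the site secret, rotating AUTH_KEY (or the Joomla secret) invalidates stored keys; you'd re-enter them in Settings.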
3. Pick the model in the chat header (per conversation) #
There is no global default model any more. Each conversation has its
own provider / model selector in a compact chat header — pick a chat
provider, a chat model, and (optionally) a separate image provider and
image model for generate_image calls. Defaults are sensible (Claude
Sonnet on Anthropic, GPT-4o-mini on OpenAI, Gemini Flash on Gemini),
and the first option per provider is Default, which auto-tracks
whatever the latest model is for that provider.
You can chat with one provider and generate images with another in the
same conversation — for example, chat with Anthropic, generate images
with OpenAI's gpt-image-2. If you don't pick an image provider, the
tool falls back to the chat provider (which only works with OpenAI or
Gemini today; other providers return a "switch provider" error).
Click Test connection in Settings to verify each enabled key round-trips before you commit to it.
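For the OpenAI-compatible provider, a connection test like this can be sketched as a single authenticated request. Function names here are hypothetical; what's real is the standard OpenAI REST shape (GET /v1/models with an Authorization: Bearer header) that vLLM, LM Studio, and LocalAI all mimic.

```php
<?php
// Hypothetical sketch of a "Test connection" round-trip against an
// OpenAI-compatible endpoint. Only the /v1/models REST shape is standard.

function seanos_models_response_ok(string $body): bool {
    $json = json_decode($body, true);
    // A healthy endpoint answers {"object": "list", "data": [...]}.
    return is_array($json) && isset($json['data']) && is_array($json['data']);
}

function seanos_test_connection(string $baseUrl, string $apiKey): bool {
    $ctx = stream_context_create(['http' => [
        'method'        => 'GET',
        'header'        => "Authorization: Bearer $apiKey\r\n",
        'timeout'       => 10,
        'ignore_errors' => true,   // still read the body on a 4xx/5xx status
    ]]);
    $body = @file_get_contents(rtrim($baseUrl, '/') . '/v1/models', false, $ctx);
    return $body !== false && seanos_models_response_ok($body);
}
```

A bad key typically comes back as a JSON error object rather than a model list, so the check fails cleanly instead of hanging.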
4. Send your first chat turn #
Dashboard has the curated prompt library — every card is a multi-step prompt the agent will follow once you click Start.
A safe first run on either platform:
- Click the Site health summary card (free tier, read-only tools
only).
- The agent inspects the install, lists active extensions, reads
the global config, and writes back a one-screen overview. Nothing is written to disk or the database.
Or click Blank chat in the top-right and just type. Try:
List my logs, then tail the most recent one and pick out anything that looks like a recurring error.
4.5. Try image generation #
If you have an OpenAI or Gemini key configured, every conversation can generate images. In the chat header, set the image provider to OpenAI (DALL·E 3 / gpt-image-1 / gpt-image-2) or Gemini (Imagen 3 / gemini-2.5-flash-image / "Nano Banana"), then ask:
Generate a header image for a blog post about beekeeping — soft watercolor style, no text.
The agent calls generate_image, the result is saved into the media
library (wp-content/uploads/seanosai-ai/ on WordPress,
/images/seanosai-ai/ on Joomla), and the chat renders it inline. Like
every write tool, it pauses for approval the first time it runs. Image
generation bills against the configured provider's API; see the
FAQ for cost notes.
Other providers (Anthropic, Grok, DeepSeek, GitHub Models, Ollama, generic OpenAI-compatible) don't implement image generation and return a "switch provider" error if you try.
5. Drop attachments into chat #
You can drop files into the chat composer alongside your prompt. Four buckets are accepted:
- Images (.png / .jpg / .gif / .webp) — the model sees the image natively if it's vision-capable; text-only models return a graceful error and you can switch model.
- PDFs — Anthropic and Gemini read them natively (tables, layout,
embedded images intact). On OpenAI / Grok / DeepSeek / GitHub Models /
Ollama / generic OpenAI-compatible the plugin shells out to
pdftotext (poppler-utils) for plain-text extraction; install
poppler-utils on your server, or switch to Anthropic / Gemini for
full-fidelity PDF reading.
- Office files (.docx / .xlsx / .pptx) — extracted to plain text server-side via PHP's native ZipArchive + DOMXPath and inlined. Legacy .doc / .xls / .ppt aren't supported — re-save as OOXML. 10 MB cap per file.
- Plain text (.txt / .md / .csv / .json / source files) — inlined as a text block, capped at 200 KB per attachment.
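The pdftotext fallback mentioned for PDFs amounts to shelling out and capturing stdout. A minimal sketch, with hypothetical function names; the real part is the pdftotext CLI from poppler-utils, where a trailing "-" writes the text to stdout.

```php
<?php
// Hypothetical sketch of the plain-text PDF fallback: shell out to
// pdftotext (poppler-utils) and capture its stdout.

function seanos_pdftotext_cmd(string $pdfPath): string {
    // -layout preserves the rough column layout of each page.
    return 'pdftotext -layout ' . escapeshellarg($pdfPath) . ' -';
}

function seanos_pdf_to_text(string $pdfPath): string {
    $out = shell_exec(seanos_pdftotext_cmd($pdfPath));
    if ($out === null || $out === false) {
        throw new RuntimeException('pdftotext failed (is poppler-utils installed?)');
    }
    return $out;
}
```

On Debian/Ubuntu, `apt-get install poppler-utils` provides the binary; without it, switch the conversation to Anthropic or Gemini for native PDF reading.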
Every provider sees attachments now (it used to be Anthropic + Gemini only).
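The ZipArchive + DOMXPath approach for Office files can be sketched for .docx like this. The function name is hypothetical, but the mechanics are standard OOXML: a .docx is a zip whose body lives in word/document.xml, with each text run in a `<w:t>` element.

```php
<?php
// Hypothetical sketch of server-side .docx text extraction using PHP's
// built-in ZipArchive, DOMDocument, and DOMXPath.

function seanos_docx_to_text(string $docxPath): string {
    $zip = new ZipArchive();
    if ($zip->open($docxPath) !== true) {
        throw new RuntimeException("Cannot open $docxPath as a zip archive");
    }
    $xml = $zip->getFromName('word/document.xml');
    $zip->close();
    if ($xml === false) {
        throw new RuntimeException('Not a .docx: word/document.xml missing');
    }
    $doc = new DOMDocument();
    $doc->loadXML($xml);
    $xp = new DOMXPath($doc);
    $xp->registerNamespace('w', 'http://schemas.openxmlformats.org/wordprocessingml/2006/main');
    $lines = [];
    // One output line per <w:p> paragraph; concatenate its <w:t> runs.
    foreach ($xp->query('//w:p') as $p) {
        $runs = [];
        foreach ($xp->query('.//w:t', $p) as $t) {
            $runs[] = $t->textContent;
        }
        $lines[] = implode('', $runs);
    }
    return implode("\n", $lines);
}
```

.xlsx and .pptx follow the same pattern with different inner XML paths (worksheet and slide parts), which is why legacy binary .doc / .xls / .ppt formats can't be handled this way.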
6. Approval mode #
Every conversation starts in strict approval mode. When the agent
calls a write tool (create_article, disable_extension, etc.), it
pauses with an inline approval card:
awaiting approval — create_article [Approve] [Reject]
Approve runs the tool. Reject writes a synthetic "rejected" tool result and the model continues without the side effect. You can flip a conversation to trusted mode (Conversation page → top-right) to run writes immediately — useful once you trust a particular prompt or workflow.
7. Install Pro #
Want the agent to actually build things for you? File edits, plugin / module / template / component scaffolding, SQL writes, PHP linting, package builds, snapshot rollback?
WordPress #
- Buy Seanos AI Pro for WordPress — $49, lifetime updates.
- Download seanosai-pro-1.0.0.zip from your receipt email.
- Plugins → Add New → Upload Plugin → upload the zip → Activate.
- The Tools page now shows 16 new entries; the Dashboard shows the
advanced prompt cards as startable (they were locked with a "Get Pro" button before).
Joomla #
- Buy Seanos AI Pro for Joomla — $49, lifetime updates.
- Download plg_system_seanosaipro-1.0.0.zip from your receipt email.
- System → Install → Extensions → Upload Package File → upload
the zip.
- Extensions → Plugins → enable Seanos AI Pro. The Tools
page picks up the 16 new entries on the next request.
Pro registers itself on the free edition's public extension hook
(seanosai_register_tools action on WordPress; onSeanosaiRegisterTools
event on Joomla), so the free edition picks it up the moment Pro
activates. No setting to flip, no key to paste.
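Since the hook is public, any extension can register tools on it, not just Pro. A sketch from a third extension's point of view on WordPress: the hook name (seanosai_register_tools) comes from this page and add_action() is WordPress core, but the registry object and its add() signature are hypothetical.

```php
<?php
// Hypothetical sketch of registering a custom tool on the public
// seanosai_register_tools hook. Only the hook name and add_action()
// are real; the $registry->add() API is an assumption.

function my_seanosai_register(object $registry): void {
    $registry->add(
        'hello_world',                        // hypothetical tool name
        'Returns a friendly greeting.',       // description for the Tools page
        fn(array $args): string => 'Hello from a custom tool!'
    );
}

// Guarded so the sketch also parses outside WordPress.
if (function_exists('add_action')) {
    add_action('seanosai_register_tools', 'my_seanosai_register');
}
```

On Joomla the equivalent would subscribe to the onSeanosaiRegisterTools event from a system plugin; the registration body would be the same shape.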
What's next #
- Tools reference — what every tool does, how it's
gated, and which platform it runs on.
- Prompts library — the full catalog of curated
multi-step prompts.
- FAQ — common gotchas.