ROOT CAUSE: setAppStorage(new AppStorage(backend)) passed 1 arg
to a 5-arg constructor (settings, providerKeys, sessions,
customProviders, backend). This left providerKeys=undefined,
crashing sendMessage() at AgentInterface.ts:223 every time.
Fix:
- Create all Store instances first
- Create IndexedDBStorageBackend with proper config (dbName, stores)
- Wire backend into each store via setBackend()
- Construct AppStorage with all 5 required arguments
This was the root cause of all "messages won't send" bugs.
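The fix steps above can be sketched as follows. This is a minimal illustration: the class names (AppStorage, IndexedDBStorageBackend, setBackend, the four store names) come from the fix description, but the class internals here are simplified stand-ins, not the real implementations.

```typescript
// Simplified stand-ins for the real classes; only the names and the
// 5-argument AppStorage constructor reflect the actual fix.
interface StorageBackend { dbName: string; stores: string[]; }

class Store<T> {
  private backend?: StorageBackend;
  constructor(public readonly name: string) {}
  setBackend(backend: StorageBackend): void { this.backend = backend; }
  get hasBackend(): boolean { return this.backend !== undefined; }
}

class IndexedDBStorageBackend implements StorageBackend {
  constructor(public dbName: string, public stores: string[]) {}
}

class AppStorage {
  constructor(
    public settings: Store<unknown>,
    public providerKeys: Store<unknown>,
    public sessions: Store<unknown>,
    public customProviders: Store<unknown>,
    public backend: StorageBackend,
  ) {}
}

// 1. Create all Store instances first.
const settings = new Store("settings");
const providerKeys = new Store("providerKeys");
const sessions = new Store("sessions");
const customProviders = new Store("customProviders");

// 2. Create the backend with its config (dbName + store names).
const allStores = [settings, providerKeys, sessions, customProviders];
const backend = new IndexedDBStorageBackend("app-db", allStores.map((s) => s.name));

// 3. Wire the backend into each store.
for (const store of allStores) store.setBackend(store ? backend : backend);

// 4. Construct AppStorage with all 5 required arguments.
//    The bug passed only `backend`, leaving providerKeys (and the rest) undefined.
const storage = new AppStorage(settings, providerKeys, sessions, customProviders, backend);
```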
- Add diagnostics.html page to test Lit event binding
- Add SES protection script in index.html (runs before modules)
- Force Vite dep optimization to prevent stale caches
- Fixes for users with crypto wallet extensions (MetaMask etc.)
Root cause of all web UI bugs: web-ui depends on jae-ai and jae-agent-core
being compiled first. Without their dist/ output, all tool imports
(bash, browser, web-search, voice-tts, memory) silently fail,
causing the entire app to crash at load time.
New scripts:
- build:all: builds packages in correct order
- build:example: builds all + example app
- dev:example: builds all + starts dev server
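The new scripts might look roughly like this in the root package.json. The workspace names and exact commands are assumptions for illustration; only the script names and their ordering come from the changelog.

```json
{
  "scripts": {
    "build:all": "npm run build -w jae-ai && npm run build -w jae-agent-core && npm run build -w jae-web-ui",
    "build:example": "npm run build:all && npm run build -w example",
    "dev:example": "npm run build:all && npm run dev -w example"
  }
}
```

The key point is that build:all compiles jae-ai and jae-agent-core before jae-web-ui, so dist/ output exists when the web UI resolves its tool imports.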
- FIX 1: Agent constructor now wraps model/systemPrompt in initialState
  (was passed as top-level keys, which the Agent class ignores, causing it to
  default to Google Gemini with an empty prompt and no tools)
- FIX 2: chatPanel.setAgent now includes onApiKeyRequired handler using
ApiKeyPromptDialog.prompt() (was missing, causing silent abort when no
API key found for provider)
- FIX 3: Fixed toolsFactory signature to match ChatPanel expected params
(agent, iface, artifacts, runtimeFactory) instead of () => allTools
- FIX 4: Removed broken allTools.map(t => t.tool) - createXxxTool()
returns AgentTool directly, .tool property does not exist
- FIX 5: Removed invalid convertToLlm from both Agent constructor and
setAgent config (not a valid AgentOptions or setAgent config key)
- FIX 6: Added pointer-events:none to empty state overlay wrapper with
pointer-events:auto on interactive child to prevent input blocking
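FIX 1 is the easiest of these to get wrong again, so here is a sketch of the before/after. The Agent options shape is simplified for illustration; the point it demonstrates is that model/systemPrompt must live under initialState, because the constructor ignores unknown top-level keys.

```typescript
// Simplified Agent: only `initialState` is read; any other key is dropped.
interface AgentState { model: string; systemPrompt: string; }
interface AgentOptions { initialState?: Partial<AgentState>; }

class Agent {
  // Defaults stand in for "Google Gemini with empty prompt".
  state: AgentState = { model: "google/gemini", systemPrompt: "" };
  constructor(opts: AgentOptions = {}) {
    Object.assign(this.state, opts.initialState); // top-level keys never reach state
  }
}

// Broken: top-level keys are silently ignored, defaults win.
const brokenOpts = { model: "llama-3.3-70b", systemPrompt: "You are a helpful agent." };
const broken = new Agent(brokenOpts as unknown as AgentOptions);

// Fixed: wrap model/systemPrompt in initialState.
const fixed = new Agent({
  initialState: { model: "llama-3.3-70b", systemPrompt: "You are a helpful agent." },
});
```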
- agent.on() does not exist - replaced all 3 calls with a single agent.subscribe() handler
- Maps AgentEvent types: agent_start/end, turn_start/end, tool_execution_start, message_start/end
- Cost tracking now reads usage from message_end event message.usage field
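The consolidated handler might look like the sketch below. The event type names come from the list above; the payload shapes (toolName, message.usage) are simplified assumptions.

```typescript
// Discriminated union over the AgentEvent types named above.
type AgentEvent =
  | { type: "agent_start" } | { type: "agent_end" }
  | { type: "turn_start" } | { type: "turn_end" }
  | { type: "tool_execution_start"; toolName: string }
  | { type: "message_start" }
  | { type: "message_end"; message: { usage?: { input: number; output: number } } };

const statusLog: string[] = [];
const costLog: number[] = [];

// One subscriber replaces the three separate agent.on() calls.
function onAgentEvent(event: AgentEvent): void {
  switch (event.type) {
    case "tool_execution_start":
      statusLog.push(`tool:${event.toolName}`);
      break;
    case "message_end":
      // Cost tracking reads usage from the finished message.
      if (event.message.usage) {
        costLog.push(event.message.usage.input + event.message.usage.output);
      }
      statusLog.push(event.type);
      break;
    default:
      statusLog.push(event.type);
  }
}

// In the real code: agent.subscribe(onAgentEvent). Here we feed events directly.
onAgentEvent({ type: "tool_execution_start", toolName: "bash" });
onAgentEvent({ type: "message_end", message: { usage: { input: 120, output: 40 } } });
```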
- Added favicon.ico and apple-touch-icon.png from mascot
- Fixed page title from "Pi Web UI" to "Agent JAE"
ROOT CAUSE: agent.setTools() registered 8 tools (bash, browser, web-search,
image-gen, tts, memory, repl), then chatPanel.setAgent(agent) with no config
called this.agent.setTools([artifactsTool]) internally, OVERWRITING all tools.
FIX: Pass tools via toolsFactory config parameter so ChatPanel merges them
with its own artifacts tool instead of replacing them.
This fixes all 'Tool X not found' errors in the web UI.
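The overwrite-vs-merge behaviour can be shown in miniature. The shapes below are simplified; what the sketch preserves from the fix is that setTools replaces the tool list wholesale, while the toolsFactory path lets ChatPanel merge caller tools with its own artifacts tool.

```typescript
interface AgentTool { name: string; }

class FakeAgent {
  tools: AgentTool[] = [];
  setTools(tools: AgentTool[]): void { this.tools = tools; } // replaces wholesale
}

const artifactsTool: AgentTool = { name: "artifacts" };

// Broken flow: setAgent(agent) with no config clobbers prior registrations.
function setAgentBroken(agent: FakeAgent): void {
  agent.setTools([artifactsTool]);
}

// Fixed flow: ChatPanel calls the factory and merges its artifacts tool in.
function setAgentFixed(agent: FakeAgent, toolsFactory?: () => AgentTool[]): void {
  agent.setTools([artifactsTool, ...(toolsFactory?.() ?? [])]);
}

const userTools: AgentTool[] = [
  { name: "bash" }, { name: "browser" }, { name: "web-search" },
];

const a = new FakeAgent();
a.setTools(userTools);   // caller registers tools...
setAgentBroken(a);       // ...then setAgent silently drops them

const b = new FakeAgent();
setAgentFixed(b, () => userTools); // artifacts + all caller tools survive
```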
- Fix 'Tool browser not found': createTools callback was silently ignored
by Agent constructor. Now uses agent.setTools() after instantiation.
- Add Wrench (Tools) and Code icons + filter buttons to ModelSelector
alongside existing Brain (Reasoning) and Image (Vision).
- Add refreshSidebar() call in createAgent for session list updates.
- Merge 3 servers into single tool-server.mjs on port 7700
- HTTP API: POST /api/bash, /api/browser/*
- WebSocket: /ws/terminal (xterm.js panel)
- WebSocket: /ws/browser (live browser panel)
- SHARED Playwright instance between LLM browser tool and user panel
- When AI navigates a page, user sees it live in browser panel
- When user clicks in panel, AI tools see the same page state
- Remove standalone terminal-server.mjs (was :7701)
- Remove standalone browser-server.mjs (was :7702)
- Update browser-panel.ts: ws://localhost:7700/ws/browser
- Update terminal-panel.ts: ws://localhost:7700/ws/terminal
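The shared-instance design can be reduced to a lazy singleton: both the HTTP tool route and the WebSocket panel handler resolve the same browser handle, so each sees the other's navigation. FakeBrowser below stands in for the real Playwright instance (whose methods are async); the handler names are illustrative, not the actual server code.

```typescript
// Stand-in for a Playwright Browser/Page pair (real calls are async).
class FakeBrowser {
  currentUrl = "about:blank";
  goto(url: string): void { this.currentUrl = url; }
}

let shared: FakeBrowser | undefined;

// Lazy singleton: first caller "launches", everyone else reuses.
function getBrowser(): FakeBrowser {
  if (!shared) shared = new FakeBrowser(); // chromium.launch() in the real server
  return shared;
}

// LLM tool path (e.g. POST /api/browser/navigate) goes through getBrowser().
function llmNavigate(url: string): void {
  getBrowser().goto(url);
}

// Panel path (/ws/browser) reads the very same instance.
function panelCurrentUrl(): string {
  return getBrowser().currentUrl;
}

llmNavigate("https://example.com");
// panelCurrentUrl() now reports the page the AI navigated to.
```

With the two standalone servers, each process launched its own Playwright instance, so this shared state was impossible; merging onto :7700 is what makes the live-mirroring behaviour work.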
- Agent Zero-inspired system prompt with:
- Structured problem-solving methodology (analyse/plan/execute/verify/report)
- Clear tool usage rules (no tools for casual chat)
- Detailed tool descriptions with usage guidance
- Resourceful retry behaviour on failures
- npm run dev starts both vite + unified server via concurrently
The system prompt told the model to "use tools whenever helpful" which
caused Llama-3.3-70b to fire every tool on simple inputs like "hi".
New prompt explicitly instructs conversational responses for chat and
tool use only when explicitly requested.
- Cost tracker: fix event type message -> message_end, handle usage.input fallback
- Model badge: update immediately on model select via onModelSelect hook
- Empty state: hide completely when hasMessages (not after LLM responds)
- Provider tabs: add renderProviderTabs() to ModelSelector content + filtering
- Memory tools: register memory_save/recall in tool index, export from web-ui, add to createTools
- Session save: save before newSession, relax shouldSaveSession to user-only, title fallback
- Dark mode: add text-foreground to memory-manager dialog + inputs
- View toggles: add tool-message and thinking-block element CSS selectors
- Empty state faded: return empty html instead of ghost mascot
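The cost-tracker item above combines two fixes; here is a sketch of both. The alternate usage field spellings are assumptions made to illustrate "handle usage.input fallback"; the event-type rename (message to message_end) comes from the changelog.

```typescript
// Usage may arrive under either field spelling (assumed for illustration).
interface Usage { input?: number; inputTokens?: number; output?: number; outputTokens?: number; }
interface TrackedEvent { type: string; message: { usage?: Usage }; }

let totalTokens = 0;

function trackCost(event: TrackedEvent): void {
  if (event.type !== "message_end") return; // was "message", which never fired
  const u = event.message.usage;
  if (!u) return;
  const input = u.input ?? u.inputTokens ?? 0;    // fall back between spellings
  const output = u.output ?? u.outputTokens ?? 0;
  totalTokens += input + output;
}

trackCost({ type: "message", message: { usage: { input: 999 } } });                    // ignored
trackCost({ type: "message_end", message: { usage: { inputTokens: 120, output: 40 } } });
```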
- Add 5 mascot images (default, fire, point-up, point-self, camo) to public/mascot/
- New empty state component with floating mascot, tagline, and suggestion chips
- New utility toggle component (show/hide tool calls, thinking, system msgs, timestamps)
- Mascot logo in header with wobble animation on hover
- Floating animation and orange glow for mascot in empty state
- Suggestion chips dispatch events to fill chat textarea
- CSS visibility classes for all toggle states
- web-search: DuckDuckGo search with inline result cards
- image-gen: Venice AI image generation with inline preview + download
- voice-tts: Venice AI TTS with inline audio player
- All use correct ToolRenderer class pattern matching jae-web-ui API
- Remove generate-models from build chain to prevent Venice entries being wiped
- Add separate update-models script for manual model list refreshes
- build now runs: tsgo -p tsconfig.build.json only
- update-models must be run explicitly when refreshing provider model lists
- Add venice.ts OAuth provider implementation
- Register Venice in BUILT_IN_OAUTH_PROVIDERS
- Add KnownProvider type entry for venice
- Map VENICE_API_KEY in env-api-keys.ts
- Set llama-3.3-70b as default Venice model in model-resolver.ts
- Add full Venice model catalog (158 models) to models.generated.ts
- Bump jae-tui and jae-agent-core to 0.62.1
- Added early --help handler in list_video_models.py before API call
- Fixed venice-video-quote self-test: audio=False for wan-2.5 model
- VENICE_API_KEY added to ~/.bashrc for runtime
Skills included:
- venice-chat: Chat with Venice LLM models, vision, reasoning
- venice-chat-benchmark: Benchmark chat models with infographics
- venice-image-gen: Generate images via Venice API
- venice-list-image-models: List available image models
- venice-list-text-models: List available text models
- venice-list-video-models: List available video models
- venice-tts: Text-to-speech via Venice API
- venice-video-generate: Generate videos from text/images
- venice-video-queue: Queue video generation jobs
- venice-video-quote: Get video generation cost quotes
- venice-video-retrieve: Retrieve completed videos
All rebranded from Agent Zero paths to Agent JAE (~/.jae/agent/skills/).
Requires VENICE_API_KEY environment variable.