The system prompt told the model to "use tools whenever helpful", which
caused Llama-3.3-70b to fire every tool on simple inputs like "hi". The
new prompt instructs the model to respond conversationally in chat and
to use tools only when explicitly requested.
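A minimal sketch of what the revised instruction might look like — the exact wording and the `TOOL_USE_POLICY` name are assumptions, not the repo's actual prompt:

```typescript
// Hypothetical sketch of the tightened system-prompt section.
// The real prompt text in the repo may be worded differently.
export const TOOL_USE_POLICY = [
  "Respond conversationally to greetings and casual chat.",
  "Only call a tool when the user's request clearly requires one",
  "(e.g. a web search, image generation, or memory lookup).",
  "Never fire tools speculatively on simple inputs.",
].join(" ");
```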
- Cost tracker: fix event type message -> message_end, handle usage.input fallback
- Model badge: update immediately on model select via onModelSelect hook
- Empty state: hide completely when hasMessages (not after LLM responds)
- Provider tabs: add renderProviderTabs() to ModelSelector content + filtering
- Memory tools: register memory_save/recall in tool index, export from web-ui, add to createTools
- Session save: save before newSession, relax shouldSaveSession to user-only, title fallback
- Dark mode: add text-foreground to memory-manager dialog + inputs
- View toggles: add tool-message and thinking-block element CSS selectors
- Empty state faded: return empty html instead of ghost mascot
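The cost-tracker fix above (listening for `message_end` and falling back to `usage.input`) can be sketched as follows — the event shape here is an assumption based on the notes, not the actual jae-web-ui types:

```typescript
// Hedged sketch: count input tokens from a "message_end" event,
// falling back to usage.input when input_tokens is absent.
// The UsageEvent shape is assumed from the changelog, not the real API.
interface UsageEvent {
  type: string;
  usage?: { input_tokens?: number; input?: number };
}

export function inputTokens(ev: UsageEvent): number {
  // Previously the tracker listened for "message" and never matched.
  if (ev.type !== "message_end" || !ev.usage) return 0;
  // Some providers report `input` instead of `input_tokens`.
  return ev.usage.input_tokens ?? ev.usage.input ?? 0;
}
```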
- Add 5 mascot images (default, fire, point-up, point-self, camo) to public/mascot/
- New empty state component with floating mascot, tagline, and suggestion chips
- New utility toggle component (show/hide tool calls, thinking, system msgs, timestamps)
- Mascot logo in header with wobble animation on hover
- Floating animation and orange glow for mascot in empty state
- Suggestion chips dispatch events to fill chat textarea
- CSS visibility classes for all toggle states
- web-search: DuckDuckGo search with inline result cards
- image-gen: Venice AI image generation with inline preview + download
- voice-tts: Venice AI TTS with inline audio player
- All use correct ToolRenderer class pattern matching jae-web-ui API
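A rough sketch of the renderer pattern these three follow — the `ToolRenderer` shape below is a guess at the jae-web-ui API, and every name and signature in it is an assumption:

```typescript
// Hypothetical sketch of the ToolRenderer class pattern; the real
// jae-web-ui API may differ in names, signatures, and return types.
export interface ToolResult {
  output: string;
}

export abstract class ToolRenderer {
  abstract readonly toolName: string;
  abstract render(result: ToolResult): string; // returns inline HTML
}

export class WebSearchRenderer extends ToolRenderer {
  readonly toolName = "web-search";
  render(result: ToolResult): string {
    // Render each search-result line as a simple inline card.
    return result.output
      .split("\n")
      .filter(Boolean)
      .map((line) => `<div class="result-card">${line}</div>`)
      .join("");
  }
}
```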
- Remove generate-models from build chain to prevent Venice entries being wiped
- Add separate update-models script for manual model list refreshes
- build now runs: tsgo -p tsconfig.build.json only
- update-models must be run explicitly when refreshing provider model lists
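After this split, the package.json scripts section might look like the fragment below — the script path for the model generator is an assumption:

```json
{
  "scripts": {
    "build": "tsgo -p tsconfig.build.json",
    "update-models": "node scripts/generate-models.js"
  }
}
```

Keeping model generation out of `build` means a routine compile can no longer overwrite hand-added Venice entries in the generated catalog.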
- Add venice.ts OAuth provider implementation
- Register Venice in BUILT_IN_OAUTH_PROVIDERS
- Add KnownProvider type entry for venice
- Map VENICE_API_KEY in env-api-keys.ts
- Set llama-3.3-70b as default Venice model in model-resolver.ts
- Add full Venice model catalog (158 models) to models.generated.ts
- Bump jae-tui and jae-agent-core to 0.62.1
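The env-var mapping and default-model entries above can be sketched as follows — the map names (`ENV_API_KEYS`, `DEFAULT_MODELS`, `resolveApiKey`) are illustrative assumptions, not the identifiers in env-api-keys.ts or model-resolver.ts:

```typescript
// Hedged sketch of the Venice wiring: an env-var lookup table and a
// per-provider default model. All names here are assumed for illustration.
export const ENV_API_KEYS: Record<string, string> = {
  venice: "VENICE_API_KEY",
};

export const DEFAULT_MODELS: Record<string, string> = {
  venice: "llama-3.3-70b",
};

export function resolveApiKey(provider: string): string | undefined {
  const envVar = ENV_API_KEYS[provider];
  return envVar ? process.env[envVar] : undefined;
}
```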
- Added early --help handler in list_video_models.py before API call
- Fixed venice-video-quote self-test: audio=False for wan-2.5 model
- VENICE_API_KEY added to ~/.bashrc for runtime
Skills included:
- venice-chat: Chat with Venice LLM models, vision, reasoning
- venice-chat-benchmark: Benchmark chat models with infographics
- venice-image-gen: Generate images via Venice API
- venice-list-image-models: List available image models
- venice-list-text-models: List available text models
- venice-list-video-models: List available video models
- venice-tts: Text-to-speech via Venice API
- venice-video-generate: Generate videos from text/images
- venice-video-queue: Queue video generation jobs
- venice-video-quote: Get video generation cost quotes
- venice-video-retrieve: Retrieve completed videos
All rebranded from Agent Zero paths to Agent JAE (~/.jae/agent/skills/).
Requires VENICE_API_KEY environment variable.