fix: prevent LLM from using tools on casual conversation

The system prompt told the model to "use tools whenever helpful", which
caused Llama-3.3-70b to fire every tool on simple inputs like "hi". The
new prompt explicitly instructs the model to respond conversationally to
casual chat and to use tools only when explicitly requested.
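The intent of the fix can be captured as a small regression sketch against a faked model. Everything below (`fakeModelRespond`, `NEW_SYSTEM_PROMPT`, the `Reply` shape) is a hypothetical stand-in for illustration, not the real `@jaeswift/jae-web-ui` `Agent` API or the Venice model:

```typescript
// Hedged sketch: fakes the behaviour the new prompt is meant to elicit.
// A prompt that forbids tool use for chat should yield plain text on a
// greeting; otherwise we simulate the old failure mode of firing a tool.
type ToolCall = { name: string };
type Reply = { text: string; toolCalls: ToolCall[] };

// Abbreviated copy of the new prompt's key rule (assumption for the sketch).
const NEW_SYSTEM_PROMPT = `You are JAE, a friendly AI assistant and coding agent.
IMPORTANT RULES:
- For casual conversation (greetings, questions, chat), just respond naturally in plain text. Do NOT use any tools for simple conversation.`;

function fakeModelRespond(systemPrompt: string, user: string): Reply {
  const isCasual = /^(hi|hello|hey)\b/i.test(user.trim());
  const forbidsChatTools = systemPrompt.includes("Do NOT use any tools");
  if (isCasual && forbidsChatTools) {
    // New behaviour: plain conversational reply, no tool calls.
    return { text: "Hi! How can I help?", toolCalls: [] };
  }
  // Old behaviour being fixed: "use tools whenever helpful" fires a tool.
  return { text: "", toolCalls: [{ name: "web_search" }] };
}
```

A real regression test would run the agent against the live model; this stub only pins down the contract the prompt change is aiming for.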
JAE 2026-03-26 23:12:17 +00:00
parent 92a294a7a2
commit 92eea4ea61


@@ -9,13 +9,13 @@ import {
 CustomProvidersStore,
 createJavaScriptReplTool,
 IndexedDBStorageBackend,
+ModelSelector,
 ProviderKeysStore,
 ProvidersModelsTab,
 ProxyTab,
 SessionListDialog,
 SessionsStore,
 SettingsDialog,
-ModelSelector,
 SettingsStore,
 setAppStorage,
 } from "@jaeswift/jae-web-ui";
@@ -308,8 +308,19 @@ const createAgent = async (initialState?: Partial<AgentState>) => {
 if (agentUnsubscribe) agentUnsubscribe();
 agent = new Agent({
 initialState: initialState || {
-systemPrompt:
-"You are JAE, a helpful AI assistant and coding agent with access to tools including web search, image generation, JavaScript REPL, text-to-speech, and artifact creation. Use these tools whenever helpful.",
+systemPrompt: `You are JAE, a friendly AI assistant and coding agent.
+
+IMPORTANT RULES:
+- For casual conversation (greetings, questions, chat), just respond naturally in plain text. Do NOT use any tools for simple conversation.
+- Only use tools when the user explicitly asks you to do something that requires them:
+  - Web Search: when user asks to look something up online
+  - Image Generation: when user asks to create/generate an image
+  - JavaScript REPL: when user asks to run code or create an interactive artifact
+  - Text-to-Speech: when user asks to read something aloud
+  - Memory: when user asks to remember or recall something
+  - Artifacts: when user asks to create a file, document, or visual output
+- If the user just says "hi" or asks a question, respond conversationally WITHOUT calling any tools.
+- Be concise and helpful. Do not demonstrate tools unprompted.`,
 model: getModel("venice", "llama-3.3-70b"),
 thinkingLevel: "off",
 messages: [],
@@ -573,11 +584,15 @@ ${sidebar}
 }}
 ></div>
 <div class="flex flex-col flex-1 min-w-0 min-h-0 relative">
-${!hasMessages ? html`
+${
+!hasMessages
+? html`
 <div class="absolute inset-x-0 top-0 z-10 flex flex-col" style="bottom:130px" @suggestion=${handleSuggestion}>
 <jae-empty-state style="display:flex;flex-direction:column;flex:1;width:100%;min-height:0"></jae-empty-state>
 </div>
-` : html``}
+`
+: html``
+}
 <div id="chat-wrapper" class="flex flex-col flex-1 min-h-0" >
 ${chatPanel}
 </div>
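The last hunk is a formatting-only change: the lit-html ternary that shows the empty state is split across lines without changing behaviour. The pattern itself can be sketched with a stub `html` tag (an assumption so the snippet runs standalone; the real tag comes from lit):

```typescript
// Stub tagged template: concatenates the literal parts with stringified
// interpolated values, standing in for lit's html tag.
const html = (strings: TemplateStringsArray, ...values: unknown[]): string =>
  strings.reduce(
    (out, s, i) => out + s + (i < values.length ? String(values[i]) : ""),
    "",
  );

// Conditional-render pattern from the diff: the empty-state element is
// emitted only when there are no messages; otherwise an empty template.
const view = (hasMessages: boolean): string => html`
  <div class="flex flex-col flex-1 min-w-0 min-h-0 relative">
    ${!hasMessages ? html`<jae-empty-state></jae-empty-state>` : html``}
  </div>`;
```

With lit, the same ternary selects between two `TemplateResult`s rather than strings, but the control flow is identical.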