
# Venice Chat

Chat with Venice.ai LLM models. Supports system prompts, image analysis (vision), reasoning mode, and web search. Auto-selects the best model based on the task.

## Features

- Text chat with any Venice.ai LLM model
- Vision/image analysis: describe, analyze, or ask questions about images
- Reasoning mode: extended thinking for complex problems
- Web search: augment responses with live web results
- Auto model selection: picks the optimal model for the task type

## Prerequisites

```bash
pip install requests
export VENICE_API_KEY="your_venice_api_key"
```
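Under the hood, the script talks to Venice's OpenAI-compatible chat completions API. The sketch below shows roughly how such a request might be assembled; the endpoint URL, payload shape, and helper names are assumptions based on Venice's OpenAI-style API, not taken from `scripts/chat.py`:

```python
import os

# Hypothetical sketch of the request scripts/chat.py presumably builds.
# The URL and payload shape are assumptions (Venice exposes an
# OpenAI-compatible API); consult the script for the actual call.
VENICE_URL = "https://api.venice.ai/api/v1/chat/completions"  # assumed

def build_payload(message, system=None, model="zai-org-glm-4.7",
                  temperature=0.7, max_tokens=None):
    """Assemble an OpenAI-style chat completion payload."""
    messages = []
    if system:
        messages.append({"role": "system", "content": system})
    messages.append({"role": "user", "content": message})
    payload = {"model": model, "messages": messages, "temperature": temperature}
    if max_tokens is not None:
        payload["max_tokens"] = max_tokens
    return payload

def auth_header():
    """Bearer auth built from the VENICE_API_KEY environment variable."""
    return {"Authorization": f"Bearer {os.environ['VENICE_API_KEY']}"}
```

The payload would then be POSTed to `VENICE_URL` with `requests.post(VENICE_URL, json=payload, headers=auth_header())`.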

## Usage

### Simple chat

```bash
python scripts/chat.py "What is the capital of France?"
```

### With system prompt

```bash
python scripts/chat.py "Write a haiku" --system "You are a poet"
```

### Image analysis

```bash
python scripts/chat.py "What's in this image?" --image /path/to/image.png
```

### Reasoning mode

```bash
python scripts/chat.py "Solve this complex math problem" --reasoning
```

### Web search

```bash
python scripts/chat.py "What happened in tech news today?" --web_search
```

## Options

| Option | Short | Default | Description |
|---|---|---|---|
| `message` | -- | (required) | Your message (positional) |
| `--system` | `-s` | None | System prompt |
| `--model` | `-m` | auto | Model ID (auto-selected if not provided) |
| `--image` | `-i` | None | Image path for vision analysis |
| `--reasoning` | `-r` | off | Enable reasoning mode |
| `--temperature` | `-t` | 0.7 | Temperature (0.0-2.0) |
| `--max_tokens` | -- | None | Max response tokens |
| `--web_search` | `-w` | off | Enable web search |

## Default Models

| Task | Model | Notes |
|---|---|---|
| General chat | `zai-org-glm-4.7` | GLM 4.7, most intelligent |
| Vision/image | `qwen3-vl-235b-a22b` | Qwen3 VL 235B, 250K context |
| Reasoning | `qwen3-235b-a22b-thinking-2507` | Extended thinking |
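The auto-selection behavior can be pictured roughly as below. This is a sketch inferred from the defaults table above; the real selection rules live in `scripts/chat.py`:

```python
# Illustrative sketch of auto model selection, inferred from the
# Default Models table; the actual logic is in scripts/chat.py.
def select_model(image=None, reasoning=False):
    if image:
        return "qwen3-vl-235b-a22b"             # vision: Qwen3 VL 235B
    if reasoning:
        return "qwen3-235b-a22b-thinking-2507"  # extended thinking
    return "zai-org-glm-4.7"                    # general chat default
```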

## Python Import

```python
from chat import chat

result = chat(
    message="Explain quantum computing",
    system="You are a physics professor",
    temperature=0.5,
)
print(result["response"])
```

## Response Format

```python
{
    "success": True,
    "model": "zai-org-glm-4.7",
    "response": "The capital of France is Paris.",
    "usage": {
        "prompt_tokens": 12,
        "completion_tokens": 8,
        "total_tokens": 20
    }
}
```
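A caller can branch on the `success` flag and pull out token usage, for example. Note that the `error` key below is an assumption for the failure case, which the README does not show explicitly:

```python
def summarize(result):
    """Return a one-line summary of a chat result dict."""
    if not result.get("success"):
        # 'error' key on failure is an assumption, not documented above
        return f"error: {result.get('error', 'unknown')}"
    usage = result.get("usage", {})
    tokens = usage.get("total_tokens", "?")
    return f"{result['model']}: {result['response']} ({tokens} tokens)"
```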

## Environment Variables

| Variable | Required | Description |
|---|---|---|
| `VENICE_API_KEY` | Yes | Venice.ai API key |
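Code that imports `chat` directly may want to fail fast when the key is unset. A small guard along these lines (the helper name and error message are illustrative, not part of the skill):

```python
import os

def require_api_key():
    """Return VENICE_API_KEY, raising a clear error if it is not set."""
    key = os.environ.get("VENICE_API_KEY")
    if not key:
        raise RuntimeError(
            "VENICE_API_KEY is not set; export it before using venice-chat"
        )
    return key
```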