# AI Configuration
Dockerizer can use AI to generate Docker configurations when automatic detection fails or for unsupported frameworks. This page explains how to configure different AI providers.
## When Is AI Used?
AI generation is triggered in these scenarios:
- Low confidence detection - When framework detection confidence is below 70%
- Unknown frameworks - Languages or frameworks without built-in templates
- Forced AI mode - When using the `--ai` flag (see the example after this list)
- Complex setups - Monorepos or multi-service architectures
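For instance, forcing AI mode looks like this (a minimal invocation using only the flags documented on this page):

```bash
# Force AI generation regardless of detection confidence
dockerizer --ai

# Or pick a provider explicitly (covered in the provider sections below)
dockerizer --ai --ai-provider anthropic
```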
## Supported Providers

### Anthropic Claude (Recommended)
Claude models provide excellent Docker configuration generation with deep understanding of containerization best practices.
#### Setup

```bash
export ANTHROPIC_API_KEY="sk-ant-api03-..."
```
#### Usage

```bash
# Use Claude (default model)
dockerizer --ai --ai-provider anthropic

# Specify model
dockerizer --ai --ai-provider anthropic --ai-model claude-sonnet-4-20250514
```
| Model | Best For |
|---|---|
| `claude-sonnet-4-20250514` | Balanced speed and quality (default) |
| `claude-opus-4-20250514` | Complex configurations, best quality |
| `claude-haiku-3-20250514` | Fast generation, simple projects |
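As a rough guideline, a complex monorepo may justify the slower Opus model from the table above:

```bash
# Example: use the Opus model for a complex multi-service project
dockerizer --ai --ai-provider anthropic --ai-model claude-opus-4-20250514
```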
### OpenAI GPT
OpenAI models are widely used and provide reliable Docker configuration generation.
#### Setup

```bash
export OPENAI_API_KEY="sk-proj-..."
```
#### Usage

```bash
# Use OpenAI (default model)
dockerizer --ai --ai-provider openai

# Specify model
dockerizer --ai --ai-provider openai --ai-model gpt-4o
```
| Model | Best For |
|---|---|
| `gpt-4o` | Best quality, multimodal (default) |
| `gpt-4o-mini` | Faster, cost-effective |
| `o1` | Advanced reasoning for complex setups |
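If you prefer not to export the key globally, plain shell syntax lets you scope it to a single run (this is standard shell behavior, not a Dockerizer feature):

```bash
# Scope the key to a single invocation instead of exporting it globally
OPENAI_API_KEY="sk-proj-..." dockerizer --ai --ai-provider openai
```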
### Ollama (Local)
Run AI locally without sending code to external services. Great for privacy-sensitive projects.
#### Setup

```bash
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh

# Pull a model
ollama pull llama3.2

# Ollama runs on localhost by default
# Optional: set a custom host
export OLLAMA_HOST="http://localhost:11434"
```
#### Usage

```bash
# Use Ollama with default model
dockerizer --ai --ai-provider ollama

# Specify model
dockerizer --ai --ai-provider ollama --ai-model codellama
```
| Model | Best For |
|---|---|
| `llama3.2` | General purpose (default) |
| `codellama` | Code-focused tasks |
| `deepseek-coder` | Specialized coding model |
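For example, to try the code-focused model from the table above (assuming the tag is available in the Ollama registry):

```bash
# Pull a code-focused model, then point Dockerizer at it
ollama pull deepseek-coder
dockerizer --ai --ai-provider ollama --ai-model deepseek-coder
```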
## Environment Variables
Configure AI settings via environment variables:
| Variable | Description | Example |
|---|---|---|
| `ANTHROPIC_API_KEY` | Anthropic API key | `sk-ant-api03-...` |
| `OPENAI_API_KEY` | OpenAI API key | `sk-proj-...` |
| `OLLAMA_HOST` | Ollama server URL | `http://localhost:11434` |
| `DOCKERIZER_AI_PROVIDER` | Default provider | `anthropic` |
| `DOCKERIZER_AI_MODEL` | Default model | `claude-sonnet-4-20250514` |
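For example, to make Anthropic the default without repeating flags on every run (values taken from the table above; adjust to your setup):

```bash
# Add to your shell profile (e.g. ~/.bashrc) for persistent defaults
export ANTHROPIC_API_KEY="sk-ant-api03-..."
export DOCKERIZER_AI_PROVIDER="anthropic"
export DOCKERIZER_AI_MODEL="claude-sonnet-4-20250514"
```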
## Configuration File

Create a `.dockerizer.yml` in your home directory or project root:

```yaml
# ~/.dockerizer.yml
ai:
  provider: anthropic
  model: claude-sonnet-4-20250514
```

```yaml
# Or for OpenAI
ai:
  provider: openai
  model: gpt-4o
```

```yaml
# Or for local Ollama
ai:
  provider: ollama
  host: http://localhost:11434
  model: llama3.2
```
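With a configuration file in place, the invocation stays short (assuming Dockerizer finds the file in your home directory or project root, as described above):

```bash
# Provider and model are read from .dockerizer.yml, so no extra flags are needed
dockerizer --ai
```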
## Provider Priority

Dockerizer checks for AI providers in this order (a short example follows the list):

1. Command-line flags (`--ai-provider`)
2. Environment variable (`DOCKERIZER_AI_PROVIDER`)
3. Configuration file (`.dockerizer.yml`)
4. Auto-detect based on available API keys
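A quick illustration of that ordering, using the flags and variables documented above (hypothetical values):

```bash
# The environment variable sets a default...
export DOCKERIZER_AI_PROVIDER="anthropic"

# ...but the command-line flag takes precedence for this run
dockerizer --ai --ai-provider ollama
```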
## Best Practices
**Security Tip:** Never commit API keys to version control. Use environment variables or a `.env` file (added to `.gitignore`).
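A minimal sketch of that pattern (the `.env` file name is just a convention; the point is that the key ends up exported in your environment, where Dockerizer reads it as described above):

```bash
# Store the key in a local .env file and keep that file out of version control
echo 'ANTHROPIC_API_KEY="sk-ant-api03-..."' >> .env
echo '.env' >> .gitignore

# Load it into the current shell before running Dockerizer
source .env
export ANTHROPIC_API_KEY
```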
- Start with detection - Let Dockerizer try automatic detection first, and only use AI when needed.
- Review generated configs - Always review AI-generated Dockerfiles before using them in production.
- Use local models for sensitive code - If your code is proprietary, consider Ollama for local processing.
- Set defaults - Configure your preferred provider in `~/.dockerizer.yml` to avoid repeated flags.
## Troubleshooting

### API Key Issues

```bash
# Verify the API key is set
echo $ANTHROPIC_API_KEY
echo $OPENAI_API_KEY

# Test with verbose output
dockerizer --ai --verbose
```
### Ollama Connection Issues

```bash
# Check whether Ollama is running
curl http://localhost:11434/api/tags

# Restart Ollama
ollama serve
```
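If Ollama is listening somewhere other than the default address, set `OLLAMA_HOST` (documented in the environment variables table) before running Dockerizer; the URL below is only an example:

```bash
# Example: Ollama running on another machine or a non-default port
export OLLAMA_HOST="http://192.168.1.50:11434"
dockerizer --ai --ai-provider ollama
```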
### Model Not Found

```bash
# List available Ollama models
ollama list

# Pull the missing model
ollama pull llama3.2
```