# GitHub Copilot LiteLLM Integration
This document describes the GitHub Copilot Language Model API integration with LiteLLM provider support in the agentic coding framework.
## Overview
The GitHub Copilot LiteLLM integration provides:
- OAuth Device Flow Authentication: Secure GitHub authentication with Copilot access
- LiteLLM Provider Support: Standardized integration following LiteLLM's GitHub Copilot provider
- Model Mapping: Seamless mapping between OpenAI and GitHub Copilot models
- Enhanced Configuration: Extended `.env` configuration for GitHub Copilot settings
- Proxy Integration: Full integration with existing proxy server architecture
## Features

### OAuth Device Flow
- GitHub OAuth device flow for secure authentication
- Automatic detection and usage of existing `gh auth login` tokens
- Token validation and refresh management
- Secure token storage and handling
### LiteLLM Provider Integration
- Native LiteLLM GitHub Copilot provider support
- Automatic model prefix handling (`github/copilot-gpt-4`)
- Request/response transformation for OpenAI compatibility
- Streaming response support
### Model Support

- `copilot-gpt-4`: GitHub Copilot's GPT-4 model
- `copilot-gpt-3.5-turbo`: GitHub Copilot's GPT-3.5 Turbo model
- Automatic mapping from OpenAI model names
- Custom model configuration support
## Configuration

### Environment Variables

Add these variables to your `.env` or `.github.env` file:
```bash
# Required: GitHub token with Copilot access
GITHUB_TOKEN=gho_your_github_token_here  # pragma: allowlist secret

# Enable GitHub Copilot proxy mode
GITHUB_COPILOT_ENABLED=true
PROXY_TYPE=github_copilot

# Enable LiteLLM GitHub Copilot provider integration
GITHUB_COPILOT_LITELLM_ENABLED=true

# Optional: Specify default GitHub Copilot model
GITHUB_COPILOT_MODEL=copilot-gpt-4

# Optional: GitHub Copilot endpoint (defaults to api.github.com)
GITHUB_COPILOT_ENDPOINT=https://api.github.com

# Proxy server settings
PORT=8080
HOST=localhost

# Performance settings
REQUEST_TIMEOUT=300
MAX_RETRIES=3
LOG_LEVEL=INFO

# Optional: Rate limiting (GitHub Copilot has built-in limits)
MAX_TOKENS_LIMIT=8192
```
### Example Configuration

Copy and customize the example configuration:
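A minimal sketch, assuming the repository ships a template named `.github.env.example` (hypothetical filename):

```bash
# Copy the template, then edit it with your own token and settings
cp .github.env.example .github.env
```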
## Authentication Setup

### Option 1: Use Existing GitHub CLI Token
If you have GitHub CLI installed and authenticated:
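For example, using the standard GitHub CLI commands:

```bash
# Authenticate if you have not already
gh auth login

# Confirm the session and print the stored token
gh auth status
gh auth token
```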
The integration will automatically detect and use your existing token.
### Option 2: OAuth Device Flow
If no existing token is found, the system will initiate OAuth device flow:
- Start the proxy server
- Visit the provided GitHub authorization URL
- Enter the device code
- Complete GitHub OAuth authorization
- Token is automatically saved for future use
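For reference, a minimal sketch of the device flow the system performs against GitHub's documented OAuth endpoints; the client ID below is a placeholder, not the project's actual value, and the `copilot` scope follows the token setup described in this document:

```python
import time

import requests

CLIENT_ID = "YOUR_OAUTH_APP_CLIENT_ID"  # placeholder: the project supplies its own

# Step 1: request a device code and a user code
device = requests.post(
    "https://github.com/login/device/code",
    headers={"Accept": "application/json"},
    data={"client_id": CLIENT_ID, "scope": "copilot"},
).json()
print(f"Visit {device['verification_uri']} and enter code {device['user_code']}")

# Step 2: poll until the user completes authorization in the browser
while True:
    time.sleep(device["interval"])
    poll = requests.post(
        "https://github.com/login/oauth/access_token",
        headers={"Accept": "application/json"},
        data={
            "client_id": CLIENT_ID,
            "device_code": device["device_code"],
            "grant_type": "urn:ietf:params:oauth:grant-type:device_code",
        },
    ).json()
    if "access_token" in poll:
        print("Token acquired:", poll["access_token"][:8] + "...")
        break
    if poll.get("error") != "authorization_pending":
        raise RuntimeError(poll)
```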
### Option 3: Manual Token

Generate a personal access token with Copilot scope:

- Visit https://github.com/settings/tokens
- Generate a new token with the `copilot` scope
- Add it to your `.github.env` file
## Usage

### Starting the Proxy
```bash
# Set environment variables
export GITHUB_TOKEN="your_github_token"  # pragma: allowlist secret
export GITHUB_COPILOT_ENABLED="true"
export GITHUB_COPILOT_LITELLM_ENABLED="true"

# Start proxy server
python src/amplihack/proxy/server.py
```
### Making Requests

Use standard OpenAI API format with GitHub Copilot models:
```python
import openai

# Configure client to use proxy
client = openai.OpenAI(
    base_url="http://localhost:8080/v1",
    api_key="not-needed",  # pragma: allowlist secret
)

# Request with GitHub Copilot model
response = client.chat.completions.create(
    model="copilot-gpt-4",  # or "github/copilot-gpt-4"
    messages=[{"role": "user", "content": "Hello, GitHub Copilot!"}],
)

print(response.choices[0].message.content)
```
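Streaming responses are also supported (see Features above); a minimal sketch using the same client:

```python
# Stream tokens as they arrive instead of waiting for the full response
stream = client.chat.completions.create(
    model="copilot-gpt-4",
    messages=[{"role": "user", "content": "Write a haiku about code review."}],
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
```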
### Model Mapping

The integration automatically maps models:
| OpenAI Model | GitHub Copilot Model | LiteLLM Format |
|---|---|---|
| `gpt-4` | `copilot-gpt-4` | `github/copilot-gpt-4` |
| `gpt-3.5-turbo` | `copilot-gpt-3.5-turbo` | `github/copilot-gpt-3.5-turbo` |
You can use any of these formats in your requests.
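Conceptually, the mapping amounts to a small lookup plus a prefix rule. A sketch of the table above, not the project's actual implementation:

```python
# Hypothetical illustration of the mapping table above
OPENAI_TO_COPILOT = {
    "gpt-4": "copilot-gpt-4",
    "gpt-3.5-turbo": "copilot-gpt-3.5-turbo",
}

def to_litellm_model(model: str) -> str:
    """Normalize any accepted model name to LiteLLM's github/ prefix form."""
    if model.startswith("github/"):
        return model  # already in LiteLLM format
    model = OPENAI_TO_COPILOT.get(model, model)  # map OpenAI names
    return f"github/{model}"  # add the provider prefix

assert to_litellm_model("gpt-4") == "github/copilot-gpt-4"
assert to_litellm_model("copilot-gpt-4") == "github/copilot-gpt-4"
```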
## Architecture

### Components
- `GitHubEndpointDetector`: Detects GitHub Copilot endpoints and validates configuration
- `GitHubAuthManager`: Handles OAuth device flow and token management
- `GitHubCopilotClient`: Direct GitHub Copilot API client (fallback)
- `ProxyConfig`: Extended configuration management for GitHub Copilot
- LiteLLM Integration: Native LiteLLM provider support in the proxy server
### Request Flow
- Client sends request to proxy server
- Proxy detects GitHub Copilot model
- Request is routed to LiteLLM GitHub provider
- LiteLLM handles GitHub Copilot API communication
- Response is transformed to OpenAI format
- Client receives standard OpenAI response
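A schematic of the routing step, assuming LiteLLM accepts the `github/` prefix described under Model Mapping and reusing the hypothetical `to_litellm_model` helper from that sketch (illustrative only, not the server's actual code):

```python
import litellm

COPILOT_PREFIXES = ("copilot-", "github/")

def handle_chat_request(model: str, messages: list):
    """Sketch of the proxy's routing decision for Copilot models."""
    if model.startswith(COPILOT_PREFIXES) or model in ("gpt-4", "gpt-3.5-turbo"):
        # Normalize to LiteLLM's github/ prefix (see the Model Mapping sketch)
        litellm_model = to_litellm_model(model)
        # LiteLLM handles the GitHub Copilot API call and returns an
        # OpenAI-compatible response object
        return litellm.completion(model=litellm_model, messages=messages)
    raise ValueError(f"Unsupported model: {model}")
```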
### Authentication Flow
- Check for existing GitHub CLI token
- If found and valid, use for LiteLLM provider
- If not found, initiate OAuth device flow
- Save token for future use
- Configure LiteLLM provider with token
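The token resolution order can be pictured as follows; a simplified sketch, the real `GitHubAuthManager` may differ:

```python
import os
import shutil
import subprocess

def resolve_github_token() -> str | None:
    """Return a GitHub token: CLI session first, then environment variable.

    If both fail, the proxy would fall back to the OAuth device flow.
    """
    # 1. Reuse an existing GitHub CLI session if available
    if shutil.which("gh"):
        result = subprocess.run(["gh", "auth", "token"], capture_output=True, text=True)
        if result.returncode == 0 and result.stdout.strip():
            return result.stdout.strip()
    # 2. Fall back to the configured environment variable
    return os.environ.get("GITHUB_TOKEN")
```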
## Testing

Run comprehensive tests for the integration:
```bash
# Run all GitHub Copilot tests
pytest tests/proxy/test_github_copilot_litellm_integration.py -v

# Run specific test categories
pytest tests/proxy/test_github_copilot_litellm_integration.py::TestGitHubCopilotLiteLLMIntegration::test_github_copilot_model_mapping -v

# Run with coverage
pytest tests/proxy/test_github_copilot_litellm_integration.py --cov=src.amplihack.proxy
```
### Test Coverage
The test suite covers:
- LiteLLM provider detection and configuration
- GitHub OAuth integration
- Model mapping and validation
- Configuration validation
- Request/response processing
- Error handling and edge cases
- Rate limiting and endpoint validation
## Troubleshooting

### Common Issues
1. Authentication Errors
   - Ensure the GitHub token is set in the environment or `.env` file
   - Verify the token has the `copilot` scope
   - Check the token format (starts with `gho_`, `ghp_`, etc.)

2. Model Not Found
   - Verify GitHub Copilot access on your account
   - Check model availability in your region
   - Ensure the LiteLLM provider is properly configured

3. Rate Limiting
   - GitHub Copilot has built-in rate limits
   - Implement request throttling (see the sketch after this list)
   - Check your Copilot subscription status

4. LiteLLM Provider Issues
   - Ensure LiteLLM is updated to the latest version
   - Verify GitHub provider support in your LiteLLM version
   - Check the LiteLLM configuration
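For issue 3, a minimal client-side throttle sketch; illustrative only, with the interval tuned to your subscription's limits:

```python
import time

class Throttle:
    """Allow at most one request every min_interval seconds."""

    def __init__(self, min_interval: float = 1.0):
        self.min_interval = min_interval
        self._last = 0.0

    def wait(self) -> None:
        # Sleep just long enough to respect the minimum spacing
        elapsed = time.monotonic() - self._last
        if elapsed < self.min_interval:
            time.sleep(self.min_interval - elapsed)
        self._last = time.monotonic()

throttle = Throttle(min_interval=1.0)
# Call throttle.wait() before each client.chat.completions.create(...) call
```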
### Debug Mode

Enable debug logging:
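For example, via the `LOG_LEVEL` setting shown in the configuration above (assuming standard logging level names):

```bash
export LOG_LEVEL=DEBUG
```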
This will show detailed request/response logs and model mapping information.
### Token Validation

Test your GitHub token:
```python
from src.amplihack.proxy.github_auth import GitHubAuthManager

auth = GitHubAuthManager()
token = "your_github_token"  # pragma: allowlist secret
is_valid = auth._verify_copilot_access(token)
print(f"Token valid: {is_valid}")
```
## Security Considerations
- GitHub tokens are transmitted over HTTPS only
- Tokens are not logged in debug output
- OAuth device flow uses secure GitHub endpoints
- Rate limiting prevents token abuse
- Tokens are validated before use
## Performance

- The LiteLLM provider offers optimized access to GitHub Copilot
- Request/response caching when appropriate
- Streaming support for real-time responses
- Efficient token management and reuse
## Limitations
- Requires GitHub Copilot subscription
- Limited to GitHub Copilot model availability
- Subject to GitHub Copilot rate limits
- Regional availability restrictions may apply
## Contributing
When contributing to the GitHub Copilot integration:
- Follow the existing architecture patterns
- Add comprehensive tests for new features
- Update configuration documentation
- Ensure backward compatibility
- Test with both OAuth flows and direct tokens
## License
This integration follows the same license as the main project.