n8n Integration
Integrate Demeterics with n8n to access multiple AI providers through a unified API with built-in observability, analytics, and cost tracking in your workflow automations.
Overview
The n8n-nodes-demeterics community node package provides five powerful nodes:
- Demeterics Chat Model - LangChain-compatible AI model for use with n8n's AI chains and agents
- Demeterics Speech Gen - Text-to-speech generation across OpenAI, ElevenLabs, and Google
- Demeterics Image Gen - Image generation across OpenAI DALL-E, Google Imagen, and Stability AI
- Demeterics Conversion - Track business outcomes and metrics linked to LLM interactions
- Demeterics Extract - Export interaction data for analysis, compliance, or data pipelines
Package: n8n-nodes-demeterics
npm: npmjs.com/package/n8n-nodes-demeterics
GitHub: github.com/bluefermion/n8n-nodes-demeterics
Installation
Method 1: GUI Install (Recommended)
- Go to Settings > Community Nodes in your n8n instance
- Click Install
- Enter n8n-nodes-demeterics and confirm
- Restart n8n if prompted
Method 2: Docker with Custom Dockerfile
Create a Dockerfile:
FROM n8nio/n8n:latest
USER root
RUN npm install -g n8n-nodes-demeterics
USER node
Update docker-compose.yml:
version: '3.8'
services:
  n8n:
    build:
      context: .
      dockerfile: Dockerfile
    restart: unless-stopped
    ports:
      - "5678:5678"
    volumes:
      - n8n_data:/home/node/.n8n

volumes:
  n8n_data:
Build and start:
docker-compose build
docker-compose up -d
Method 3: Docker Volume Mount
# Install locally
mkdir -p ~/n8n-custom-nodes
cd ~/n8n-custom-nodes
npm install n8n-nodes-demeterics
Add to docker-compose.yml:
services:
  n8n:
    volumes:
      - ~/n8n-custom-nodes/node_modules/n8n-nodes-demeterics:/home/node/.n8n/custom/node_modules/n8n-nodes-demeterics
Method 4: Non-Docker Installation
cd ~/.n8n
npm install n8n-nodes-demeterics
Restart n8n after installation.
Setting Up Credentials
1. Get Your API Key
- Sign up or log in at demeterics.ai
- Navigate to API Keys in your dashboard
- Click Create API Key
- Copy your API key (format: dmt_xxx...)
2. Add Credentials in n8n
- Go to Credentials in n8n
- Click Add Credential
- Search for "Demeterics API"
- Configure your credentials:
| Field | Required | Description |
|---|---|---|
| BYOK Mode | No | Toggle on to use your own provider API keys |
| Demeterics API Key | Yes | Your Demeterics API key (dmt_xxx...) |
| Groq API Key | BYOK only | Your Groq API key for BYOK routing |
| OpenAI API Key | BYOK only | Your OpenAI API key for BYOK routing |
| Anthropic API Key | BYOK only | Your Anthropic API key for BYOK routing |
| Gemini API Key | BYOK only | Your Google Gemini API key for BYOK routing |
| OpenRouter API Key | BYOK only | Your OpenRouter API key for BYOK routing |
| API Base URL | No | Override for self-hosted (default: https://api.demeterics.com) |
- Click Save
Authentication Modes
Managed Key (Default): Use only your Demeterics API key. Demeterics manages provider credentials on your behalf.
Bring Your Own Key (BYOK): Toggle BYOK on and provide your own provider API keys. Only add keys for providers you'll actually use.
Node 1: Demeterics Chat Model
The Demeterics Chat Model provides access to multiple AI providers and works with n8n's AI nodes.
Compatible AI Nodes
- AI Agent - Build intelligent agents with tool use
- Basic LLM Chain - Simple prompt → response workflows
- Summarization Chain - Summarize documents
- Question and Answer Chain - RAG-based Q&A
Basic Workflow Example
[Chat Trigger] → [AI Agent] ← [Demeterics Chat Model]
                     ↑
                  [Tools]
- Add a Chat Trigger node
- Add an AI Agent node
- Connect Demeterics Chat Model to the AI Agent's "Chat Model" input
- Select your provider and model
- Add tools as needed (Calculator, Code, HTTP Request, etc.)
Configuration Options
| Option | Description | Default | Range |
|---|---|---|---|
| Provider | AI provider to use | Groq | - |
| Model | Specific model from provider | (varies) | - |
| Temperature | Controls randomness | 0.7 | 0-2 |
| Max Tokens | Maximum response length | 4096 | 1-128000 |
| Top P | Nucleus sampling parameter | 1 | 0-1 |
| Frequency Penalty | Reduce repetition | 0 | -2 to 2 |
| Presence Penalty | Encourage new topics | 0 | -2 to 2 |
| Timeout | Request timeout in seconds | 60 | 1-600 |
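As a quick reference, the defaults above correspond to a configuration like the following, written as plain key/value pairs (the model identifier shown is illustrative, and the exact option labels in the node UI may differ):

{
  "provider": "groq",
  "model": "llama-3.3-70b-versatile",
  "temperature": 0.7,
  "maxTokens": 4096,
  "topP": 1,
  "frequencyPenalty": 0,
  "presencePenalty": 0,
  "timeout": 60
}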
Supported Providers & Models
Groq (Fastest Inference)
| Model | Description |
|---|---|
| Llama 3.3 70B Versatile | High-quality general purpose |
| Llama 3.1 8B Instant | Fast, lightweight |
| Llama 4 Maverick 17B | Latest Llama 4 |
| Llama 4 Scout 17B | Efficient Llama 4 |
| Compound (Multi-model) | Automatic model routing |
| Compound Mini | Lightweight multi-model |
| Qwen3 32B | Alibaba's latest |
| Kimi K2 Instruct | Moonshot AI |
| GPT-OSS 120B | Open-source GPT |
| GPT-OSS 20B | Lightweight open-source |
OpenAI
| Model | Description |
|---|---|
| GPT-5 | Latest flagship |
| GPT-5 Mini | Efficient GPT-5 |
| GPT-5 Nano | Ultra-lightweight |
| GPT-5 Codex | Code-optimized |
| GPT-4.1 | Previous generation flagship |
| GPT-4.1 Mini | Efficient GPT-4.1 |
| GPT-4.1 Nano | Lightweight GPT-4.1 |
| GPT-4o | Multimodal |
| GPT-4o Mini | Efficient multimodal |
Anthropic
| Model | Description |
|---|---|
| Claude Opus 4.5 | Most capable |
| Claude Opus 4.1 | Previous Opus |
| Claude Sonnet 4.5 | Balanced performance |
| Claude Sonnet 4 | Previous Sonnet |
| Claude 3.7 Sonnet | Legacy Sonnet |
| Claude Haiku 4.5 | Fast and efficient |
| Claude 3.5 Haiku | Legacy Haiku |
Google Gemini
| Model | Description |
|---|---|
| Gemini 3 Pro Preview | Latest preview |
| Gemini 2.5 Pro | Current flagship |
| Gemini 2.5 Flash | Fast inference |
| Gemini 2.5 Flash Lite | Ultra-lightweight |
| Gemini 2.0 Flash | Previous generation |
| Gemini 1.5 Pro | Long context (2M tokens) |
| Gemini 1.5 Flash | Fast with long context |
OpenRouter
| Model | Description |
|---|---|
| OpenRouter Auto | Automatic model selection |
| Claude 3.5 Sonnet | Via OpenRouter |
| Gemini 1.5 Pro | Via OpenRouter |
| Llama 3.1 70B Instruct | Via OpenRouter |
| Qwen 2.5 72B Instruct | Via OpenRouter |
| Mixtral 8x7B Instruct | Via OpenRouter |
Node 2: Demeterics Speech Gen
Generate natural-sounding speech from text using multiple TTS providers through a single node.
Configuration Options
| Option | Description | Default |
|---|---|---|
| Provider | TTS provider to use | OpenAI |
| Model | Provider-specific model | (varies) |
| Voice | Voice identifier | alloy |
| Format | Output audio format | mp3 |
| Speed | Playback speed (0.25-4.0) | 1.0 |
Input
| Parameter | Type | Required | Description |
|---|---|---|---|
| text | string | Yes | Text to convert to speech |
Output
| Field | Type | Description |
|---|---|---|
| audio_url | string | Signed URL to audio file (15 min expiry) |
| duration_seconds | number | Audio duration |
| cost_usd | number | Generation cost |
| format | string | Audio format |
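For illustration, a Speech Gen call with the fields above might look like the following input/output pair (all values are hypothetical; field names follow the tables above, and the URL is a placeholder for the signed URL the node returns):

Input:

{
  "text": "Your order has shipped and will arrive on Friday."
}

Output:

{
  "audio_url": "https://files.example.com/speech/abc123.mp3?sig=...",
  "duration_seconds": 3.4,
  "cost_usd": 0.0006,
  "format": "mp3"
}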
Supported Providers
OpenAI
| Model | Voices |
|---|---|
| tts-1 | alloy, echo, fable, onyx, nova, shimmer |
| tts-1-hd | alloy, echo, fable, onyx, nova, shimmer |
ElevenLabs
| Model | Description |
|---|---|
| eleven_multilingual_v2 | Best quality, 29 languages |
| eleven_turbo_v2_5 | Fast, English-optimized |
Google Cloud TTS
| Model | Description |
|---|---|
| wavenet | High quality WaveNet voices |
| neural2 | Neural network based |
| standard | Basic quality |
Basic Workflow Example
[Webhook] → [Demeterics Speech Gen] → [HTTP Response with audio URL]
- Receive text input via webhook
- Generate speech using Demeterics Speech Gen
- Return the audio URL to the caller
Use Cases
- Voice Assistants - Generate spoken responses for chatbots
- Content Creation - Convert articles and posts to audio
- Accessibility - Provide audio versions of text content
- Notifications - Generate audio alerts and announcements
- Podcasting - Create AI-narrated podcast content
Node 3: Demeterics Image Gen
Generate images from text prompts using multiple providers through a single node.
Configuration Options
| Option | Description | Default |
|---|---|---|
| Provider | Image generation provider | OpenAI |
| Model | Provider-specific model | gpt-image-1 |
| Size | Output image size | 1024x1024 |
| Quality | Image quality | standard |
| Style | Image style | natural |
| Count | Number of images | 1 |
Input
| Parameter | Type | Required | Description |
|---|---|---|---|
| prompt | string | Yes | Description of the image to generate |
| negative_prompt | string | No | What to avoid in the image |
Output
| Field | Type | Description |
|---|---|---|
| images | array | Generated images |
| images[].url | string | Signed URL to image (15 min expiry) |
| images[].width | number | Image width in pixels |
| images[].height | number | Image height in pixels |
| cost_usd | number | Generation cost |
| revised_prompt | string | Provider's modified prompt |
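For illustration, an Image Gen call might produce an input/output pair like the following (all values are hypothetical; field names follow the tables above, and the URL is a placeholder for the signed URL the node returns):

Input:

{
  "prompt": "A minimalist product photo of a ceramic mug on a wooden table",
  "negative_prompt": "text, watermark, blur"
}

Output:

{
  "images": [
    {
      "url": "https://files.example.com/images/abc123.png?sig=...",
      "width": 1024,
      "height": 1024
    }
  ],
  "cost_usd": 0.04,
  "revised_prompt": "A minimalist studio photograph of a white ceramic mug on a rustic wooden table, soft natural lighting"
}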
Supported Providers
OpenAI DALL-E
| Model | Sizes | Quality |
|---|---|---|
| gpt-image-1 | 1024x1024, 1792x1024, 1024x1792 | standard, hd |
Google Imagen
| Model | Sizes |
|---|---|
| imagen-3.0-generate-002 | 1024x1024, 1536x1536 |
| imagen-3.0-fast-generate-001 | 1024x1024 |
Stability AI
| Model | Description |
|---|---|
| stable-image-ultra | Highest quality |
| stable-image-core | Balanced quality/speed |
| stable-diffusion-xl-1024-v1-0 | SDXL 1.0 |
Basic Workflow Example
[HTTP Trigger] → [Demeterics Image Gen] → [Upload to S3] → [HTTP Response]
- Receive image description via HTTP trigger
- Generate image using Demeterics Image Gen
- Upload to permanent storage (S3, GCS, etc.)
- Return the permanent URL
Use Cases
- Marketing Content - Generate ad visuals and social media images
- Product Design - Create concept images and mockups
- E-commerce - Generate product variations and lifestyle images
- Creative Tools - Power image generation features in your apps
- Content Pipelines - Automate thumbnail and header image creation
Node 4: Demeterics Conversion
Track business outcomes and metrics linked to your LLM interactions using cohort IDs.
Operations
Submit Outcome
Submit or update conversion metrics for a cohort.
| Parameter | Type | Required | Description |
|---|---|---|---|
| cohortId | string | Yes | Identifier linking LLM interactions to outcomes |
| outcome | number | No | Primary metric (e.g., views, conversion rate) |
| outcomeV2 | number | No | Secondary metric (e.g., revenue, time saved) |
| label | string | No | Human-readable label (e.g., "7d engagement") |
| eventDate | string | No | Date of outcome (YYYY-MM-DD format) |
Example:
{
  "cohort_id": "campaign_001",
  "outcome": 95,
  "outcome_v2": 150,
  "label": "7d engagement",
  "event_date": "2025-01-15"
}
Get Outcome
Retrieve conversion information for a cohort.
| Parameter | Type | Required | Description |
|---|---|---|---|
| cohortId | string | Yes | Identifier used to tag LLM interactions |
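As an illustration, the response can be expected to echo the metrics previously submitted for the cohort; the shape below is an assumption based on the Submit Outcome fields (see the API Reference for the exact schema):

{
  "cohort_id": "campaign_001",
  "outcome": 95,
  "outcome_v2": 150,
  "label": "7d engagement",
  "event_date": "2025-01-15"
}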
Use Cases
- Performance Tracking - Link LLM interactions to business outcomes
- A/B Testing - Compare results across different prompt variations
- Model Comparison - Track metrics per provider/model combination
- Campaign Analysis - Monitor effectiveness of AI-generated content
- Conversion Attribution - Measure ROI of LLM features
Node 5: Demeterics Extract
Export interaction data from Demeterics for analysis, compliance, or data pipeline integration.
Operations
Export Interactions (Simple)
Create an export and immediately fetch its contents.
| Parameter | Type | Default | Description |
|---|---|---|---|
| format | options | json | Export format: JSON, CSV, or Avro |
| startDate | string | - | Filter start date (YYYY-MM-DD) |
| endDate | string | - | Filter end date (YYYY-MM-DD) |
| tables | multiOptions | interactions | Tables: interactions, eval_runs, eval_results |
Returns:
- JSON: Parsed items as individual workflow items
- CSV/Avro: Binary file attachments for downstream processing
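For example, exporting January's interactions as CSV might use parameters like these (illustrative values; parameter names follow the table above):

{
  "format": "csv",
  "startDate": "2025-01-01",
  "endDate": "2025-01-31",
  "tables": ["interactions"]
}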
Create Export Job
Create an export request and return the request ID for async processing.
Stream Export by Request ID
Fetch data for an existing export request ID.
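Together, these two operations support an asynchronous pattern for large exports: create the job, keep the returned request ID, and stream the results in a later step (for example, after a Wait node). A minimal sketch of the job-creation output, assuming the ID is exposed under a field named requestId (the exact field name may differ; inspect the node output):

{
  "requestId": "abc123"
}

Pass this value to Stream Export by Request ID to fetch the exported data once the job has completed.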
Available Tables
| Table | Description |
|---|---|
| interactions | LLM request/response logs with metadata |
| eval_runs | Evaluation runs and their configurations |
| eval_results | Results from evaluation runs |
Use Cases
- Data Analysis - Export to analytics tools (BigQuery, Tableau, etc.)
- Compliance Auditing - Full interaction logs for audit trails
- Model Training - Use interactions as training data
- Cost Analysis - Detailed cost breakdowns
- Data Pipeline - Integrate into ETL workflows
Workflow Examples
Example 1: AI Agent with Multi-Provider Fallback
[HTTP Trigger] → [AI Agent + Demeterics Chat Model (Groq)]
↓ (on error)
[AI Agent + Demeterics Chat Model (Anthropic)]
↓
[Demeterics Conversion (submit outcome)]
↓
[HTTP Response]
Example 2: Daily Performance Export
[Schedule Trigger (daily)] → [Demeterics Extract]
↓
[Transform to metrics]
↓
[Save to database]
Example 3: Conversion Attribution
[User Event] → [Save cohort_id] → [Call LLM with cohort_id]
↓
[Process response]
↓
[Wait for conversion]
↓
[Demeterics Conversion (submit)]
Benefits of Using Demeterics
Unified API
Access all major AI providers through a single credential. Switch between Groq, OpenAI, Anthropic, and Google without changing your workflow.
Cost Tracking
Every request is automatically logged to your Demeterics dashboard. See exactly what each workflow costs in real-time.
Full Observability
Log every prompt, response, and token for debugging and compliance. Understand usage patterns and performance metrics.
No Vendor Lock-in
Switch providers anytime without code changes. Compare performance across providers easily.
Troubleshooting
Node Not Appearing
- Restart n8n completely after installing the community node
- Check that the installation completed successfully in Settings > Community Nodes
- Clear browser cache and refresh
Authentication Errors
- Verify your API key starts with dmt_
- Check that your API key is active at demeterics.ai/api-keys
- Ensure the API Base URL is set to https://api.demeterics.com (the default)
- For BYOK mode, ensure at least one provider key is configured
Model Not Working
- Check the API Reference for the latest supported models
- Verify the model is enabled for your account
- Check your credit balance at demeterics.ai
Export Issues
- Verify date range is valid (start date ≤ end date)
- Check date format is YYYY-MM-DD
- Ensure selected tables have data in the specified range
Resources
Support
- Documentation: demeterics.ai/docs
- GitHub Issues: Report a bug at github.com/bluefermion/n8n-nodes-demeterics/issues
- Email: support@demeterics.com