# Mixed Frameworks

A multi-framework demo showcasing an OpenAI-to-ElevenLabs pipeline that synthesizes voice from AI-generated text and stores the result locally.
The Mixed Frameworks example demonstrates how Habits can seamlessly orchestrate multiple AI services and frameworks in a single workflow.
## What It Does
- OpenAI Integration: Generate text content using GPT models
- ElevenLabs Voice: Convert text to natural-sounding speech
- Local Storage: Save generated assets locally for reuse
- Pipeline Architecture: Chain services together elegantly
This example is perfect for understanding how to combine different AI providers into cohesive workflows using Habits.
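At its core, the example is a three-node chain in which each node consumes the previous node's output. A minimal TypeScript sketch of that data flow, using stubbed services in place of the real OpenAI and ElevenLabs calls (none of these function names are part of the Habits API; they only illustrate the shape of the pipeline):

```typescript
// Stand-ins for the real services; Habits wires the actual nodes
// together from YAML rather than from code like this.
type PipelineNode = (input: string) => Promise<string>;

const generateText: PipelineNode = async (prompt) =>
  `Motivational quote about: ${prompt}`;           // stands in for OpenAI
const textToSpeech: PipelineNode = async (text) =>
  Buffer.from(text).toString("base64");            // stands in for ElevenLabs (base64 "audio")
const saveLocally: PipelineNode = async (audio) =>
  `/tmp/habits-audio/audio-${audio.length}.mp3`;   // stands in for the save script

// The workflow's edges define this ordering: each node feeds the next.
async function runPipeline(input: string): Promise<string> {
  const nodes: PipelineNode[] = [generateText, textToSpeech, saveLocally];
  let result = input;
  for (const node of nodes) result = await node(result);
  return result;
}

runPipeline("perseverance").then((path) => console.log(path));
```

Habits expresses the same chain declaratively: the `edges` list in the workflow YAML replaces the hard-coded `nodes` array above.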
## Requirements
- OpenAI API key
- ElevenLabs API key
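Both keys must be present as environment variables before the server starts. A tiny pre-flight check along these lines can fail fast (illustrative only; this helper is not part of Habits):

```typescript
// Report which required API keys are absent from a given environment map.
function missingKeys(env: Record<string, string | undefined>): string[] {
  return ["OPENAI_API_KEY", "ELEVENLABS_API_KEY"].filter((key) => !env[key]);
}

// Example: only the OpenAI key is set, so the ElevenLabs key is reported.
console.log(missingKeys({ OPENAI_API_KEY: "sk-test" })); // → ["ELEVENLABS_API_KEY"]
```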
## Key Files
`stack.yaml`:

```yaml
version: "1.0"
workflows:
  - id: text-to-voice-to-s3
    path: ./habit.yaml
    enabled: true
server:
  port: 13000
  host: "0.0.0.0"
  frontend: ./frontend
logging:
  level: info # trace, debug, info, warn, error, fatal, none
  outputs: [console] # console, file, json
  format: text # text or json
  colorize: true
```
`habit.yaml`:

```yaml
id: text-to-voice-to-s3
name: text-to-voice-to-s3
nodes:
  - id: generate-text
    type: activepieces
    data:
      framework: activepieces
      module: '@activepieces/piece-openai'
      operation: ask_chatgpt
      source: npm
      credentials:
        apiKey: '{{habits.env.OPENAI_API_KEY}}'
      params:
        prompt: 'Write a short 2-sentence motivational quote related to: {{habits.input.prompt}}'
        model: gpt-4o-mini
  - id: text-to-speech
    type: n8n
    data:
      framework: n8n
      module: n8n-nodes-elevenlabs
      operation: text-to-speech
      source: npm
      credentials:
        elevenLabsApi:
          xiApiKey: '{{habits.env.ELEVENLABS_API_KEY}}'
      params:
        resource: speech
        text: '{{generate-text}}'
        voice_id: 21m00Tcm4TlvDq8ikWAM
  - id: save-locally
    type: script
    data:
      label: Save Audio Locally
      framework: script
      source: inline
      params:
        audio: '{{text-to-speech.result.0.base64}}'
        language: deno
        script: |
          export async function main(audio: string | ArrayBuffer) {
            const outputDir = '/tmp/habits-audio';
            await Deno.mkdir(outputDir, { recursive: true });
            const filename = `audio-${Date.now()}.mp3`;
            const filepath = `${outputDir}/${filename}`;
            // Decode base64 with atob (available in Deno; Buffer is Node-only)
            const bytes = typeof audio === 'string'
              ? Uint8Array.from(atob(audio), (c) => c.charCodeAt(0))
              : new Uint8Array(audio);
            await Deno.writeFile(filepath, bytes);
            const stats = await Deno.stat(filepath);
            return {
              success: true,
              filepath,
              filename,
              size: stats.size,
              modifiedAt: stats.mtime?.toISOString()
            };
          }
edges:
  - source: generate-text
    target: text-to-speech
  - source: text-to-speech
    target: save-locally
```
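The `{{…}}` placeholders in the workflow YAML are templates resolved at run time: `{{habits.env.*}}` reads environment variables, while `{{generate-text}}` and `{{text-to-speech.result.0.base64}}` reference upstream node outputs by node ID and dotted path. A rough sketch of how such substitution could work (this resolver illustrates the syntax only; it is an assumption, not Habits' actual implementation):

```typescript
// Hypothetical resolver: replaces {{dotted.path}} templates by walking
// the path through a context object of env vars and node outputs.
function resolveTemplate(
  template: string,
  context: Record<string, unknown>,
): string {
  return template.replace(/\{\{([^}]+)\}\}/g, (_, expr: string) => {
    const value = expr
      .trim()
      .split(".")
      .reduce<unknown>((obj, key) => (obj as any)?.[key], context);
    return String(value ?? "");
  });
}

const context = {
  habits: { env: { OPENAI_API_KEY: "sk-test" } },
  "generate-text": "Keep going.",
};

console.log(resolveTemplate("{{habits.env.OPENAI_API_KEY}}", context)); // → "sk-test"
console.log(resolveTemplate("{{generate-text}}", context));             // → "Keep going."
```

Note that node IDs such as `generate-text` act as top-level keys in the context, which is why `{{generate-text}}` with no suffix yields that node's whole output.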
Example `.env`:

```bash
HABITS_OPENAPI_ENABLED=true
HABITS_MANAGE_ENABLED=true
OPENAI_API_KEY=sk-proj-key12-here34-makesuretocopyitall56
ELEVENLABS_API_KEY=
```

## Quick Start
Run directly using the Cortex package. This is recommended for production runs and does not include base or extra dependencies.

```bash
# First, download the example files
npx @ha-bits/cortex@latest server --config ./mixed/stack.yaml
```