Deploy Qwen3 as a Slack or Discord Chatbot: Step-by-Step Guide
Introduction: Bring Qwen3 Into Your Team Chats
What if your team could chat with a powerful LLM directly from Slack or Discord?
With Qwen3 models, you can create:
- Private AI assistants
- Company knowledge bots
- Workflow automation agents
This guide shows you how to deploy a Qwen3-powered chatbot into Slack or Discord, hosted locally or in the cloud.
1. Prerequisites
| Requirement | Slack | Discord |
|---|---|---|
| Bot account setup | ✅ Slack API App | ✅ Discord Developer Portal |
| Token (OAuth) | ✅ Bot token | ✅ Bot token |
| Python libraries | `slack_sdk` | `discord.py` or `nextcord` |
| Qwen3 model deployment | Local or via vLLM | Same |
Optional: Use ngrok or Cloudflare Tunnel for public access.
2. Deploy Qwen3 Model with vLLM
Use a Qwen3 chat model such as `Qwen/Qwen3-8B` or `Qwen/Qwen3-14B`.

```bash
pip install vllm
python -m vllm.entrypoints.openai.api_server \
  --model Qwen/Qwen3-8B \
  --port 8000
```
This serves the model on an OpenAI-compatible endpoint.
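Once the server is running, any OpenAI-compatible client can talk to it. As a minimal sketch (the endpoint URL and model name are the ones assumed above; `build_chat_request` is an illustrative helper that only builds the chat payload, so it runs without the server):

```python
# Base URL for vLLM's OpenAI-compatible API, matching the command above.
API_BASE = "http://localhost:8000/v1"

def build_chat_request(user_text, model="Qwen/Qwen3-8B"):
    """Return the request body for a /v1/chat/completions call."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_text},
        ],
    }

# With the server running, send it via the openai client (pip install openai):
# from openai import OpenAI
# client = OpenAI(base_url=API_BASE, api_key="none")
# resp = client.chat.completions.create(**build_chat_request("Hello!"))
# print(resp.choices[0].message.content)
```

Both bot examples below send exactly this payload shape to the model.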
3. Slack Bot Integration
✅ Step-by-Step
- Go to api.slack.com/apps
- Create a new app → enable bot permissions
- Add the `chat:write` and `app_mentions:read` scopes
- Install the app to your workspace → get the bot token
Python Bot Code (Slack)
```python
from openai import OpenAI
from slack_sdk import WebClient
from slack_sdk.rtm_v2 import RTMClient

# Point the OpenAI client at the local vLLM server started above.
llm = OpenAI(api_key="none", base_url="http://localhost:8000/v1")

slack_bot_token = "xoxb-..."  # your bot token
client = WebClient(token=slack_bot_token)
# Note: the RTM API only works with classic Slack apps; newer apps
# should use Socket Mode (slack_bolt) instead.
rtm = RTMClient(token=slack_bot_token)

@rtm.on("message")
def handle_msg(rtm_client, event):
    text = event.get("text", "")
    channel = event["channel"]
    if "<@your_bot_id>" in text:  # replace with your bot's user ID
        response = llm.chat.completions.create(
            model="Qwen/Qwen3-8B",
            messages=[
                {"role": "system", "content": "You are a helpful assistant."},
                {"role": "user", "content": text},
            ],
        )
        reply = response.choices[0].message.content
        client.chat_postMessage(channel=channel, text=reply)

rtm.start()
```
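The handler above sends the raw message text, including the bot mention, straight to the model. A small helper (the name `strip_mention` is illustrative) keeps the prompt clean, and the same idea works for the Discord bot below:

```python
import re

def strip_mention(text, bot_id):
    """Remove a Slack-style <@BOTID> mention and surrounding whitespace
    so only the user's actual question reaches the model."""
    return re.sub(rf"<@{re.escape(bot_id)}>", "", text).strip()

# Example:
# strip_mention("<@U0123ABC> what is vLLM?", "U0123ABC") -> "what is vLLM?"
```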
4. Discord Bot Integration
Step-by-Step
- Go to discord.com/developers
- Create a new bot application
- Enable the Message Content intent under Bot settings
- Get the bot token and invite the bot to your server
Python Bot Code (Discord)
```python
import discord
from openai import OpenAI

# Point the OpenAI client at the local vLLM server started above.
llm = OpenAI(api_key="none", base_url="http://localhost:8000/v1")

intents = discord.Intents.default()
intents.message_content = True  # requires the Message Content intent in the Developer Portal
client = discord.Client(intents=intents)

@client.event
async def on_ready():
    print(f"Bot connected as {client.user}")

@client.event
async def on_message(message):
    if message.author == client.user:
        return  # ignore the bot's own messages
    if client.user.mention in message.content:
        # Note: this call blocks the event loop; for production,
        # wrap it in asyncio.to_thread().
        response = llm.chat.completions.create(
            model="Qwen/Qwen3-8B",
            messages=[
                {"role": "system", "content": "You are a helpful assistant."},
                {"role": "user", "content": message.content},
            ],
        )
        reply = response.choices[0].message.content
        await message.channel.send(reply)

client.run("YOUR_DISCORD_BOT_TOKEN")
```
Customize: Add Slash Commands or Memory
| Feature | Slack | Discord |
|---|---|---|
| Slash commands | Yes (`/ask-ai`) | Yes (`/commands`) |
| Persistent memory | Use Redis or a JSON file | Same |
| Private replies | `client.chat_postEphemeral()` | `await ctx.respond()` |
You can also add tools like search, weather, or databases.
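Persistent memory can start as simple as an in-process buffer of recent turns per channel; swap the dict for Redis or a JSON file when you need durability. A sketch, with hypothetical `remember`/`history` helpers:

```python
from collections import defaultdict, deque

MAX_TURNS = 10  # keep only the last 10 messages per channel

# channel_id -> rolling buffer of chat turns (oldest dropped automatically)
_memory = defaultdict(lambda: deque(maxlen=MAX_TURNS))

def remember(channel_id, role, content):
    """Append one chat turn to the channel's rolling buffer."""
    _memory[channel_id].append({"role": role, "content": content})

def history(channel_id):
    """Messages to prepend to the next model call for this channel."""
    return list(_memory[channel_id])
```

In the bot handlers, call `remember(...)` for both the user message and the model reply, and prepend `history(channel)` to the `messages` list before each model call.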
Security Tips
- Store tokens securely using `.env` files or a secrets manager
- Restrict bot access to selected channels
- Log usage for debugging and audits
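For the first tip, read tokens from the environment instead of hardcoding them. The variable names here are illustrative, and `python-dotenv` can load a `.env` file into the environment beforehand:

```python
import os

def get_token(name):
    """Fetch a secret from the environment; fail fast if it is missing."""
    token = os.environ.get(name)
    if not token:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return token

# In the bot code above, replace the hardcoded strings with:
# slack_bot_token = get_token("SLACK_BOT_TOKEN")
# discord_token = get_token("DISCORD_BOT_TOKEN")
```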
Conclusion: Qwen3 in Your Chat Stack
With this setup:
- Your team can ask questions in Slack/Discord
- Data stays local with Qwen3
- Replies match your tone and domain
Qwen3 brings the power of open-source LLMs into everyday workflows, securely and cost-effectively.
Resources
- Qwen3 Coder - Agentic Coding Adventure: an open-source agentic coding model from the Qwen team.