How to Build a Multi-Agent System Using Qwen3 and CrewAI
Introduction: Multi-Agent Systems Are the Next Frontier
LLMs are evolving from single-task bots to collaborative, multi-role agents.
With CrewAI, you can:
- Create specialized agents (planner, researcher, coder)
- Run them in workflows using Qwen3 models
- Add tools like browsers, APIs, and vector DBs
- Host everything locally or in the cloud
In this guide, we’ll show you how to build a full-stack multi-agent system using Qwen3 + CrewAI.
1. What You Need
- ✅ Python 3.10+
- ✅ A Qwen3 model (e.g., Qwen/Qwen3-14B)
- ✅ CrewAI library
- ✅ Optional tools: SerpAPI, LangChain, browser agent, Redis memory
Install dependencies:
```bash
pip install crewai langchain openai transformers accelerate
```
2. Project Structure
```
qwen3-agents/
├── main.py
├── agents/
│   ├── planner.py
│   ├── researcher.py
│   └── coder.py
├── tools/
│   └── websearch.py
└── config.py
```
3. Create Your Agents
Example: agents/planner.py
```python
from crewai import Agent

def PlannerAgent(llm):
    return Agent(
        role="Project Planner",
        goal="Create step-by-step instructions for any task",
        backstory="An expert at breaking down goals into logical sequences.",
        verbose=True,
        llm=llm,
    )
```
Repeat for ResearcherAgent and CoderAgent.
4. Add Tools (Optional)
Example: Web Search tool (using SerpAPI or LangChain tools):
```python
from langchain.tools import DuckDuckGoSearchRun  # requires the duckduckgo-search package

search_tool = DuckDuckGoSearchRun()
```
You can then pass this tool to your agents during setup.
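At its core, a tool is just a named callable with a description the agent's LLM reads when deciding whether to invoke it. Here is a plain-Python sketch of that idea; `make_tool` and `fake_search` are illustrative stand-ins, not CrewAI or LangChain API:

```python
# Conceptual sketch of a "tool": a callable carrying a name and a
# description the agent can reason about. Real CrewAI/LangChain tool
# classes add argument schemas and error handling on top of this shape.
def make_tool(name, description, func):
    func.name = name
    func.description = description
    return func

def fake_search(query):
    # Stand-in for an actual web search call.
    return f"Top results for: {query}"

search_tool = make_tool("web_search", "Search the web for a query.", fake_search)
print(search_tool.name)            # web_search
print(search_tool("Qwen3 agents")) # Top results for: Qwen3 agents
```

The agent frameworks do the same thing with more ceremony: the name and description are what get serialized into the model's prompt.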
5. Use Qwen3 as the LLM
In config.py:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from langchain.llms import HuggingFacePipeline  # langchain_community.llms in newer releases

def load_qwen3_model():
    model_id = "Qwen/Qwen3-14B"
    tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        device_map="auto",  # requires accelerate; shards layers across available GPUs
        trust_remote_code=True,
    )
    pipe = pipeline(
        "text-generation",
        model=model,
        tokenizer=tokenizer,
        max_new_tokens=512,
    )
    return HuggingFacePipeline(pipeline=pipe)
```
6. Create the Crew
```python
from crewai import Crew, Task
from agents.planner import PlannerAgent
from agents.researcher import ResearcherAgent
from agents.coder import CoderAgent
from config import load_qwen3_model

llm = load_qwen3_model()

planner = PlannerAgent(llm)
researcher = ResearcherAgent(llm)
coder = CoderAgent(llm)

# CrewAI executes Tasks, not raw strings: wrap the goal in a Task,
# then start the crew with kickoff().
task = Task(
    description="Build a Python app that scrapes weather data and displays it in a dashboard.",
    expected_output="A plan, research notes, and working Python code for the app.",
    agent=planner,
)

crew = Crew(
    agents=[planner, researcher, coder],
    tasks=[task],
    verbose=True,
)

result = crew.kickoff()
print(result)
```
7. Add Memory or Persistence (Optional)
Use Redis, ChromaDB, or LangChain’s memory modules to give agents:
- Dialogue history
- Task memory
- Cross-session persistence
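To make the persistence idea concrete, here is a minimal JSON-file-backed memory store. It is an illustrative stand-in for Redis or ChromaDB, and the class name `FileMemory` is hypothetical, not a CrewAI or LangChain API:

```python
import json
import os

class FileMemory:
    """Minimal JSON-backed memory store, sketching the remember/recall
    shape that Redis- or ChromaDB-backed agent memory provides."""

    def __init__(self, path):
        self.path = path
        self.data = {}
        if os.path.exists(path):
            with open(path) as f:
                self.data = json.load(f)

    def remember(self, key, value):
        # Append to this key's history and persist to disk immediately,
        # so a later session (a new FileMemory on the same path) sees it.
        self.data.setdefault(key, []).append(value)
        with open(self.path, "w") as f:
            json.dump(self.data, f)

    def recall(self, key):
        return self.data.get(key, [])

memory = FileMemory("agent_memory.json")
memory.remember("planner", "Step 1: scrape weather data")
print(memory.recall("planner"))
```

A real vector store adds semantic retrieval on top; the read/write contract agents depend on is the same.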
8. What You Can Build
| Use Case | Description |
|---|---|
| Dev Workflow Agent | Planner → Researcher → Coder |
| AI Research Assistant | Summarizer → Analyzer → Visualizer |
| Data QA Chain | Validator → Rewriter → Publisher |
| Web Navigation Agent | Searcher → Reader → Extractor |
| Legal Assistant | Case Retriever → Clause Checker → Writer |
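Every row above is a sequential hand-off: each agent's output becomes the next agent's input. Stripped of the framework, the pattern reduces to a simple pipeline; this is a plain-Python sketch with stand-in functions, not the CrewAI API:

```python
def run_pipeline(stages, task):
    """Feed a task through each stage in order, passing output forward."""
    result = task
    for stage in stages:
        result = stage(result)
    return result

# Hypothetical stand-ins for Planner -> Researcher -> Coder agent calls:
plan = lambda goal: f"PLAN({goal})"
research = lambda p: f"NOTES({p})"
code = lambda n: f"CODE({n})"

print(run_pipeline([plan, research, code], "weather dashboard"))
# CODE(NOTES(PLAN(weather dashboard)))
```

CrewAI's sequential process does the same threading, with each stage being an LLM-backed agent working on a Task.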
Why Qwen3 Works So Well in CrewAI
- ✅ Large context window (up to 128K)
- ✅ Works with structured prompts (system + user roles)
- ✅ Instruction-tuned for role-based behavior
- ✅ Fast local inference with 8B/14B options
- ✅ Fully open-source (Apache 2.0)
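The "structured prompts" point means chat-tuned Qwen3 models consume role-tagged messages, which is roughly the shape CrewAI builds from each agent's role, goal, and backstory. A minimal sketch of that message format (the actual token formatting is applied by the tokenizer's `apply_chat_template`):

```python
# Role-structured messages as chat-tuned models like Qwen3 expect them.
# The system message here is a hand-written approximation of what an
# agent framework would assemble from role/goal/backstory.
messages = [
    {
        "role": "system",
        "content": "You are a Project Planner. Goal: create "
                   "step-by-step instructions for any task.",
    },
    {
        "role": "user",
        "content": "Plan an app that scrapes weather data.",
    },
]
print([m["role"] for m in messages])  # ['system', 'user']
```

Because the roles are explicit, each agent in the crew can carry its own persistent system message while sharing the same underlying model.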
Conclusion: Bring Autonomous Agents to Life with Qwen3
With Qwen3 + CrewAI, you get:
- Modular agents with custom goals
- Tool integration
- Agent collaboration
- Full control via local models
Now you can go beyond chatbots—and build fully open, intelligent systems that reason, plan, and build together.
Resources
- Qwen3 Coder - Agentic Coding Adventure: step into a new era of AI-powered development with Qwen3 Coder, the world's most agentic open-source coding model.