
How to Build Production-Ready AgentScope Workflows with ReAct Agents, Custom Tools, Multi-Agent Debate, Structured Outputs, and Concurrent Pipelines

In this lesson, we build a complete AgentScope workflow from the ground up and run everything in Colab. We start by wiring OpenAI into AgentScope and verifying a basic model call to understand how messages and responses are handled. From there, we define custom tool functions, register them in a toolkit, and inspect the auto-generated schemas to see how the tools are exposed to the agent. We then build a ReAct-based agent that decides on its own when to call tools, followed by a multi-agent debate using MsgHub to simulate structured interactions between agents. Finally, we enforce structured outputs with Pydantic and build a concurrent multi-agent pipeline in which several specialist agents analyze a problem in parallel and a synthesizer combines their findings.

import subprocess, sys


subprocess.check_call([
   sys.executable, "-m", "pip", "install", "-q",
   "agentscope", "openai", "pydantic", "nest_asyncio",
])


print("āœ…  All packages installed.\n")


import nest_asyncio
nest_asyncio.apply()


import asyncio
import json
import getpass
import math
import datetime
from typing import Any


from pydantic import BaseModel, Field


from agentscope.agent import ReActAgent
from agentscope.formatter import OpenAIChatFormatter, OpenAIMultiAgentFormatter
from agentscope.memory import InMemoryMemory
from agentscope.message import Msg, TextBlock, ToolUseBlock
from agentscope.model import OpenAIChatModel
from agentscope.pipeline import MsgHub, sequential_pipeline
from agentscope.tool import Toolkit, ToolResponse


OPENAI_API_KEY = getpass.getpass("šŸ”‘  Enter your OpenAI API key: ")
MODEL_NAME = "gpt-4o-mini"


print(f"\nāœ…  API key captured. Using model: {MODEL_NAME}\n")
print("=" * 72)


def make_model(stream: bool = False) -> OpenAIChatModel:
   return OpenAIChatModel(
       model_name=MODEL_NAME,
       api_key=OPENAI_API_KEY,
       stream=stream,
       generate_kwargs={"temperature": 0.7, "max_tokens": 1024},
   )


print("\n" + "═" * 72)
print("  PART 1: Basic Model Call")
print("═" * 72)


async def part1_basic_model_call():
   model = make_model()
   response = await model(
       messages=[{"role": "user", "content": "What is AgentScope in one sentence?"}],
   )
   text = response.content[0]["text"]
   print(f"\nšŸ¤–  Model says: {text}")
   print(f"šŸ“Š  Tokens used: {response.usage}")


asyncio.run(part1_basic_model_call())

We install the necessary dependencies and apply nest_asyncio so that asynchronous code runs correctly inside Colab's already-running event loop. We securely capture the OpenAI API key and wrap model configuration in a helper function for reuse. We then run a basic model call to validate the setup and inspect the response text and token usage.

print("\n" + "═" * 72)
print("  PART 2: Custom Tool Functions & Toolkit")
print("═" * 72)


async def calculate_expression(expression: str) -> ToolResponse:
   """Evaluate a math expression in a restricted environment.

   Args:
       expression (str): The math expression to evaluate, e.g. "sqrt(16) + 2".
   """
   allowed = {
       "abs": abs, "round": round, "min": min, "max": max,
       "sum": sum, "pow": pow, "int": int, "float": float,
       "sqrt": math.sqrt, "pi": math.pi, "e": math.e,
       "log": math.log, "sin": math.sin, "cos": math.cos,
       "tan": math.tan, "factorial": math.factorial,
   }
   try:
       # Builtins are disabled; only the whitelisted names above are visible.
       result = eval(expression, {"__builtins__": {}}, allowed)
       return ToolResponse(content=[TextBlock(type="text", text=str(result))])
   except Exception as exc:
       return ToolResponse(content=[TextBlock(type="text", text=f"Error: {exc}")])


async def get_current_datetime(timezone_offset: int = 0) -> ToolResponse:
   """Return the current date and time.

   Args:
       timezone_offset (int): Offset from UTC in hours, e.g. 5 for UTC+5.
   """
   now = datetime.datetime.now(datetime.timezone(datetime.timedelta(hours=timezone_offset)))
   return ToolResponse(
       content=[TextBlock(type="text", text=now.strftime("%Y-%m-%d %H:%M:%S %Z"))],
   )


toolkit = Toolkit()
toolkit.register_tool_function(calculate_expression)
toolkit.register_tool_function(get_current_datetime)


schemas = toolkit.get_json_schemas()
print("\nšŸ“‹  Auto-generated tool schemas:")
print(json.dumps(schemas, indent=2))


async def part2_test_tool():
   result_gen = await toolkit.call_tool_function(
       ToolUseBlock(
           type="tool_use", id="test-1",
           name="calculate_expression",
           input={"expression": "factorial(10)"},
       ),
   )
   async for resp in result_gen:
       print(f"\nšŸ”§  Tool result for factorial(10): {resp.content[0]['text']}")


asyncio.run(part2_test_tool())

We define custom tool functions for mathematical expression evaluation and current-time retrieval, using a restricted eval environment for safety. We register these tools in a Toolkit and examine their auto-generated JSON schemas to understand how AgentScope exposes them to the model. We then make a direct tool call to verify that the tool execution pipeline works correctly.
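The restricted `eval` pattern inside `calculate_expression` is worth seeing on its own. The sketch below is illustrative and standalone (the helper name `safe_eval` is not part of AgentScope): builtins are stripped, so only the whitelisted math names resolve, and anything else fails safely.

```python
import math

# Whitelist of names an expression may reference; Python builtins are disabled.
ALLOWED = {"sqrt": math.sqrt, "pi": math.pi, "factorial": math.factorial}

def safe_eval(expression: str):
    """Evaluate a math expression with no access to Python builtins."""
    try:
        return eval(expression, {"__builtins__": {}}, ALLOWED)
    except Exception as exc:
        return f"Error: {exc}"

print(safe_eval("factorial(5) + sqrt(16)"))  # -> 124.0
print(safe_eval("__import__('os')"))         # blocked: __import__ is a builtin
```

Because `__import__` lives in the builtins we removed, the second call raises a `NameError` instead of importing `os`, which is exactly the containment the tool relies on.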

print("\n" + "═" * 72)
print("  PART 3: ReAct Agent with Tools")
print("═" * 72)


async def part3_react_agent():
   agent = ReActAgent(
       name="MathBot",
       sys_prompt=(
           "You are MathBot, a helpful assistant that solves math problems. "
           "Use the calculate_expression tool for any computation. "
           "Use get_current_datetime when asked about the time."
       ),
       model=make_model(),
       memory=InMemoryMemory(),
       formatter=OpenAIChatFormatter(),
       toolkit=toolkit,
       max_iters=5,
   )


   queries = [
       "What's the current time in UTC+5?",
   ]
   for q in queries:
       print(f"\nšŸ‘¤  User: {q}")
       msg = Msg("user", q, "user")
       response = await agent(msg)
       print(f"šŸ¤–  MathBot: {response.get_text_content()}")
       agent.memory.clear()


asyncio.run(part3_react_agent())
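It helps to see the control flow that `ReActAgent` automates for us. The toy loop below is a minimal sketch of the ReAct pattern, not AgentScope's implementation; all names here (`react_loop`, `scripted_plan`) are illustrative, and a scripted planner stands in for the LLM.

```python
# Toy sketch of the ReAct loop; the real ReActAgent uses an LLM as the planner.
def react_loop(query, tools, plan, max_iters=5):
    observations = []
    for _ in range(max_iters):
        action = plan(query, observations)                 # reason: pick next step
        if action["type"] == "final":
            return action["text"]                          # done: final answer
        result = tools[action["name"]](**action["input"])  # act: call the tool
        observations.append(result)                        # observe: feed it back
    return "Reached max_iters without a final answer."

# Scripted planner standing in for the model: call a tool once, then answer.
def scripted_plan(query, observations):
    if not observations:
        return {"type": "tool", "name": "add", "input": {"a": 2, "b": 3}}
    return {"type": "final", "text": f"The answer is {observations[-1]}."}

print(react_loop("What is 2 + 3?", {"add": lambda a, b: a + b}, scripted_plan))
# -> The answer is 5.
```

The `max_iters=5` argument we passed to `ReActAgent` bounds exactly this loop: the number of reason-act-observe cycles before the agent must stop.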


We build a ReAct agent that reasons about when tools are needed and invokes them on its own. We pass it user questions and observe how it interleaves reasoning and tool calls to produce answers, resetting memory between queries so each interaction stays independent.

print("\n" + "═" * 72)
print("  PART 4: Multi-Agent Debate (MsgHub)")
print("═" * 72)


DEBATE_TOPIC = (
   "Should artificial general intelligence (AGI) research be open-sourced, "
   "or should it remain behind closed doors at major labs?"
)

async def part4_debate():
   proponent = ReActAgent(
       name="Proponent",
       sys_prompt=(
           f"You are the Proponent in a debate. You argue IN FAVOR of open-sourcing AGI research. "
           f"Topic: {DEBATE_TOPIC}\n"
           "Keep each response to 2-3 concise paragraphs. Address the other side's points directly."
       ),
       model=make_model(),
       memory=InMemoryMemory(),
       formatter=OpenAIMultiAgentFormatter(),
   )


   opponent = ReActAgent(
       name="Opponent",
       sys_prompt=(
           f"You are the Opponent in a debate. You argue AGAINST open-sourcing AGI research. "
           f"Topic: {DEBATE_TOPIC}\n"
           "Keep each response to 2-3 concise paragraphs. Address the other side's points directly."
       ),
       model=make_model(),
       memory=InMemoryMemory(),
       formatter=OpenAIMultiAgentFormatter(),
   )


   num_rounds = 2
   for rnd in range(1, num_rounds + 1):
       print(f"\n{'─' * 60}")
       print(f"  ROUND {rnd}")
       print(f"{'─' * 60}")


       async with MsgHub(
           participants=[proponent, opponent],
           announcement=Msg("Moderator", f"Round {rnd} — begin. Topic: {DEBATE_TOPIC}", "assistant"),
       ):
           pro_msg = await proponent(
               Msg("Moderator", "Proponent, please present your argument.", "user"),
           )
           print(f"\nāœ…  Proponent:\n{pro_msg.get_text_content()}")


           opp_msg = await opponent(
               Msg("Moderator", "Opponent, please respond and present your counter-argument.", "user"),
           )
           print(f"nāŒ  Opponent:n{opp_msg.get_text_content()}")


   print(f"\n{'─' * 60}")
   print("  DEBATE COMPLETE")
   print(f"{'─' * 60}")


asyncio.run(part4_debate())
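Conceptually, MsgHub's job is to broadcast each participant's reply into every other participant's memory so the debate shares one context. The toy class below illustrates that broadcast behavior only; it is not AgentScope's actual API, and the dict-based "agents" are stand-ins.

```python
class ToyHub:
    """Minimal stand-in for MsgHub's broadcast behavior (illustrative only)."""

    def __init__(self, participants):
        self.participants = participants

    def broadcast(self, sender, text):
        # Everyone except the sender records the message in their own memory,
        # so later turns can reference what the other side already said.
        for p in self.participants:
            if p["name"] != sender:
                p["memory"].append(f"{sender}: {text}")

agents = [{"name": "Proponent", "memory": []}, {"name": "Opponent", "memory": []}]
hub = ToyHub(agents)
hub.broadcast("Proponent", "Openness accelerates safety research.")
print(agents[1]["memory"])  # the Opponent now sees the Proponent's argument
```

This is why, in the debate above, the Opponent can "address the other side's points directly" without us manually copying messages between the two agents.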


We create two agents with opposing roles and connect them through MsgHub for a structured multi-agent debate. We run multiple rounds in which each agent responds to the other, with MsgHub broadcasting every message to all participants so both sides keep a shared context throughout the exchange.

print("\n" + "═" * 72)
print("  PART 5: Structured Output with Pydantic")
print("═" * 72)


class MovieReview(BaseModel):
   title: str = Field(description="The movie title.")
   year: int = Field(description="The release year.")
   genre: str = Field(description="Primary genre of the movie.")
   rating: float = Field(description="Rating from 0.0 to 10.0.")
   pros: list[str] = Field(description="List of 2-3 strengths of the movie.")
   cons: list[str] = Field(description="List of 1-2 weaknesses of the movie.")
   verdict: str = Field(description="A one-sentence final verdict.")

async def part5_structured_output():
   agent = ReActAgent(
       name="Critic",
       sys_prompt="You are a professional movie critic. When asked to review a movie, provide a thorough analysis.",
       model=make_model(),
       memory=InMemoryMemory(),
       formatter=OpenAIChatFormatter(),
   )


   msg = Msg("user", "Review the movie 'Inception' (2010) by Christopher Nolan.", "user")
   response = await agent(msg, structured_model=MovieReview)


   print("\nšŸŽ¬  Structured Movie Review:")
   print(f"    Title   : {response.metadata.get('title', 'N/A')}")
   print(f"    Year    : {response.metadata.get('year', 'N/A')}")
   print(f"    Genre   : {response.metadata.get('genre', 'N/A')}")
   print(f"    Rating  : {response.metadata.get('rating', 'N/A')}/10")
   pros = response.metadata.get('pros', [])
   cons = response.metadata.get('cons', [])
   if pros:
       print(f"    Pros    : {', '.join(str(p) for p in pros)}")
   if cons:
       print(f"    Cons    : {', '.join(str(c) for c in cons)}")
   print(f"    Verdict : {response.metadata.get('verdict', 'N/A')}")


   print(f"\nšŸ“  Full text response:\n{response.get_text_content()}")


asyncio.run(part5_structured_output())
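What `structured_model=MovieReview` buys us is validation and type coercion of the model's JSON output before it reaches `response.metadata`. A standalone sketch of that validation step (the smaller `Review` model and the raw dict below are made up for illustration):

```python
from pydantic import BaseModel, Field

class Review(BaseModel):
    title: str = Field(description="Movie title.")
    year: int = Field(description="Release year.")
    rating: float = Field(description="Score from 0.0 to 10.0.")

# Models sometimes return numbers as strings; Pydantic coerces them to the
# declared types and raises ValidationError if a field cannot be parsed.
raw = {"title": "Inception", "year": "2010", "rating": "8.8"}
review = Review.model_validate(raw)
print(review.year, review.rating)  # -> 2010 8.8
```

Downstream code can then rely on `review.year` being an `int`, which is the reliability guarantee that makes structured output useful in production pipelines.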


print("\n" + "═" * 72)
print("  PART 6: Concurrent Multi-Agent Pipeline")
print("═" * 72)


async def part6_concurrent_agents():
   specialists = {
       "Economist": "You are an economist. Analyze the given topic from an economic perspective in 2-3 sentences.",
       "Ethicist": "You are an ethicist. Analyze the given topic from an ethical perspective in 2-3 sentences.",
       "Technologist": "You are a technologist. Analyze the given topic from a technology perspective in 2-3 sentences.",
   }


   agents = []
   for name, prompt in specialists.items():
       agents.append(
           ReActAgent(
               name=name,
               sys_prompt=prompt,
               model=make_model(),
               memory=InMemoryMemory(),
               formatter=OpenAIChatFormatter(),
           )
       )


   topic_msg = Msg(
       "user",
       "Analyze the impact of large language models on the global workforce.",
       "user",
   )


   print("nā³  Running 3 specialist agents concurrently...")
   results = await asyncio.gather(*(agent(topic_msg) for agent in agents))


   for agent, result in zip(agents, results):
       print(f"\n🧠  {agent.name}:\n{result.get_text_content()}")


   synthesiser = ReActAgent(
       name="Synthesiser",
       sys_prompt=(
           "You are a synthesiser. You receive analyses from an Economist, "
           "an Ethicist, and a Technologist. Combine their perspectives into "
           "a single coherent summary of 3-4 sentences."
       ),
       model=make_model(),
       memory=InMemoryMemory(),
       formatter=OpenAIMultiAgentFormatter(),
   )


   combined_text = "\n\n".join(
       f"[{agent.name}]: {r.get_text_content()}" for agent, r in zip(agents, results)
   )
   synthesis = await synthesiser(
       Msg("user", f"Here are the specialist analyses:\n\n{combined_text}\n\nPlease synthesise.", "user"),
   )
   print(f"\nšŸ”—  Synthesised Summary:\n{synthesis.get_text_content()}")


asyncio.run(part6_concurrent_agents())
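The fan-out/fan-in pattern above does not depend on AgentScope; it is plain `asyncio.gather`. A self-contained sketch with stand-in coroutines in place of real agents (the `analyze` and `fan_out` names are illustrative):

```python
import asyncio

async def analyze(name: str, topic: str) -> str:
    # Stand-in for an agent call; a real agent would hit the model API here.
    await asyncio.sleep(0.01)
    return f"[{name}]: analysis of {topic}"

async def fan_out(topic: str) -> str:
    names = ["Economist", "Ethicist", "Technologist"]
    # Fan out: all three coroutines run concurrently; gather preserves order.
    results = await asyncio.gather(*(analyze(n, topic) for n in names))
    # Fan in: merge the per-specialist texts for the synthesiser prompt.
    return "\n\n".join(results)

print(asyncio.run(fan_out("LLMs and the workforce")))
```

Because the agents await network I/O rather than CPU, `gather` lets the three model calls overlap, so total latency is roughly that of the slowest specialist instead of the sum of all three.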


We enforce structured outputs with a Pydantic schema to extract consistent fields from model responses. We then build a concurrent multi-agent pipeline in which several specialist agents analyze the topic in parallel, and we combine their outputs with a synthesizer agent into a unified, cohesive summary.

print("\n" + "═" * 72)
print("  šŸŽ‰  TUTORIAL COMPLETE!")
print("  You have covered:")
print("    1. Basic model calls with OpenAIChatModel")
print("    2. Custom tool functions & auto-generated JSON schemas")
print("    3. ReAct Agent with tool use")
print("    4. Multi-agent debate with MsgHub")
print("    5. Structured output with Pydantic models")
print("    6. Concurrent multi-agent pipelines")
print("═" * 72)

In conclusion, we implemented a full-stack agent system that goes beyond single model calls into structured reasoning, tool use, and collaboration. We now understand how AgentScope manages memory, formatting, and tooling under the hood, and how ReAct agents turn reasoning into action. We also saw how multi-agent systems can be composed both sequentially and concurrently, and how structured output improves reliability in downstream applications. With these building blocks, we are well placed to design more advanced agent architectures, extend tool ecosystems, and ship production-ready AI systems.
