OpenManus official repository
URL: https://github.com/mannaandpoem/OpenManus
Download
If you can access GitHub and have git available, you can download the code with git clone in a terminal (CMD):
git clone https://github.com/mannaandpoem/OpenManus
Other download methods
Visit the official repository and click "Code" -> "Download ZIP" on the page to download the source archive.
Basic environment setup
Follow the README tutorial in the official repository and adapt the deployment steps to your own machine; a typical sequence is sketched below.
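For reference, a typical conda-based setup looks like this (loosely adapted from the README; exact Python and package versions may differ between OpenManus releases):
conda create -n open_manus python=3.12
conda activate open_manus
cd OpenManus
pip install -r requirements.txt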
Config file configuration (model setup)
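OpenManus reads its model settings from a TOML config file; in the repository layout assumed here (as in the README), this is config/config.toml, created by copying the shipped example file:
cp config/config.example.toml config/config.toml
Then edit the [llm] section, for example: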
# Global LLM configuration
[llm]
model = "Qwen/QwQ-32B"
base_url = "https://api.siliconflow/v1"
api_key = "123********************69k8"
max_tokens = 4096
temperature = 0.0
# Note: multimodal support is not integrated yet, so this section can be left unchanged for now
# Optional configuration for specific LLM models
[llm.vision]
model = "claude-3-5-sonnet"
base_url = "https://api.openai/v1"
api_key = "sk-..."
If you are calling a model through Ollama, change model under [llm] to the model's name (ollama list shows the models you have already pulled), change api_key to "ollama", and, if Ollama is running locally, change base_url to "http://localhost:11434/v1"; see the example below.
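For example, an [llm] section pointing at a local Ollama instance could look like this (the model name qwen2.5:7b is only a placeholder; use whatever ollama list reports):
# Example [llm] section for a local Ollama model (illustrative values)
[llm]
model = "qwen2.5:7b"
base_url = "http://localhost:11434/v1"
api_key = "ollama"
max_tokens = 4096
temperature = 0.0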
bing_search
pip install aiohttp beautifulsoup4
Create a new file bing_search.py under the tool directory (app/tool):
import aiohttp
from bs4 import BeautifulSoup
from typing import List
from urllib.parse import quote_plus

from app.tool.base import BaseTool


class BingSearch(BaseTool):
    name: str = "bing_search"
    description: str = """Perform a Bing search and return a list of relevant links.
Use this tool when you need to find information on the web, get up-to-date data, or research specific topics.
The tool returns a list of URLs that match the search query.
"""
    parameters: dict = {
        "type": "object",
        "properties": {
            "query": {
                "type": "string",
                "description": "(required) The search query to submit to Bing.",
            },
            "num_results": {
                "type": "integer",
                "description": "(optional) The number of search results to return. Default is 10.",
                "default": 10,
            },
        },
        "required": ["query"],
    }

    async def execute(self, query: str, num_results: int = 10) -> List[str]:
        """
        Execute a Bing search and return a list of URLs.

        Args:
            query (str): The search query to submit to Bing.
            num_results (int, optional): The number of search results to return. Default is 10.

        Returns:
            List[str]: A list of URLs matching the search query.
        """
        headers = {
            "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36"
        }
        # URL-encode the query so spaces and special characters survive the request
        search_url = f"https://www.bing.com/search?q={quote_plus(query)}"
        async with aiohttp.ClientSession() as session:
            try:
                async with session.get(search_url, headers=headers) as response:
                    response.raise_for_status()
                    html = await response.text()
            except Exception as e:
                raise RuntimeError(f"Bing search request failed: {str(e)}")

        soup = BeautifulSoup(html, "html.parser")
        links = []
        # Bing result links usually live inside divs with class "b_algo";
        # the selector may need adjusting if the page layout changes.
        for result in soup.select(".b_algo"):
            a_tag = result.select_one("a")
            if a_tag and "href" in a_tag.attrs:
                link = a_tag["href"]
                links.append(link)
            if len(links) >= num_results:
                break
        return links[:num_results]
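To sanity-check the tool on its own, outside the agent, you can run it directly with asyncio (a quick test snippet; the query string is just an example):

import asyncio
from app.tool.bing_search import BingSearch

async def main():
    # Fetch up to 5 result URLs for a test query
    links = await BingSearch().execute("OpenManus agent", num_results=5)
    print(links)

asyncio.run(main())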
baidu_search
pip install baidusearch
Create a new file baidu_search.py under the tool directory (app/tool):
import asyncio
from typing import List

from baidusearch.baidusearch import search

from app.tool.base import BaseTool


class BaiduSearch(BaseTool):
    name: str = "baidu_search"
    description: str = """Perform a Baidu search and return a list of relevant links.
Use this tool when you need to find information on the web, get up-to-date data, or research specific topics.
The tool returns a list of URLs that match the search query.
"""
    parameters: dict = {
        "type": "object",
        "properties": {
            "query": {
                "type": "string",
                "description": "(required) The search query to submit to Baidu.",
            },
            "num_results": {
                "type": "integer",
                "description": "(optional) The number of search results to return. Default is 10.",
                "default": 10,
            },
        },
        "required": ["query"],
    }

    async def execute(self, query: str, num_results: int = 10) -> List[str]:
        """
        Execute a Baidu search and return a list of URLs.

        Args:
            query (str): The search query to submit to Baidu.
            num_results (int, optional): The number of search results to return. Default is 10.

        Returns:
            List[str]: A list of URLs matching the search query.
        """
        # Run the blocking search call in a thread pool to prevent blocking the event loop
        loop = asyncio.get_event_loop()
        links = await loop.run_in_executor(
            None, lambda: [result["url"] for result in search(query, num_results=num_results)]
        )
        return links
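The baidusearch package is synchronous, so the call is pushed onto the default thread-pool executor to keep the agent's event loop responsive. A standalone check mirrors the Bing one (illustrative query):

import asyncio
from app.tool.baidu_search import BaiduSearch

# Print up to 5 result URLs for a test query
print(asyncio.run(BaiduSearch().execute("OpenManus", num_results=5)))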
Modify the OpenManus/app/agent/manus.py file as outlined below (using bing_search as an example):
At the top of the file, add an import of BingSearch from bing_search.
In prompt/manus.py, change GoogleSearch to BingSearch.
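A sketch of what these edits might look like, assuming the Manus agent registers its tools through a ToolCollection as in recent OpenManus versions (class and field names may differ in your copy, so adjust to the actual file):

# app/agent/manus.py -- add the import near the top of the file
from app.tool.bing_search import BingSearch

# Then, inside the Manus agent class, use BingSearch() in place of GoogleSearch()
# where the available tools are assembled, roughly:
#
#     available_tools: ToolCollection = Field(
#         default_factory=lambda: ToolCollection(
#             PythonExecute(), BingSearch(), BrowserUseTool(), FileSaver(), Terminate()
#         )
#     )
#
# prompt/manus.py contains only prompt text: replace the mention of GoogleSearch
# with BingSearch there so the system prompt matches the registered tool.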