This is a translation of CrewAI's "Quickstart" guide, which briefly walks you from creating a project through running your crew. CrewAI is a multi-agent platform for building teams of AI agents that collaborate to tackle complex tasks.
CrewAI Multi-Agent Platform : Get Started : Quickstart
Author : ClassCat Sales Information
Date : 03/28/2025
* This article is an independent translation of the following page from github.com/crewAIInc/crewAI/docs, reorganized and supplemented with additional explanations :
* The sample code has been verified to work, but has been modified where necessary.
* Feel free to link to this article, but we would appreciate a note to sales-info@classcat.com.
◆ Inquiries : please contact us below.
- ClassCat Sales Information
- sales-info@classcat.com
- ClassCatJP
CrewAI Multi-Agent Platform : Get Started : Quickstart
Build your first AI agent with CrewAI in under 5 minutes.
Build Your First CrewAI Agent
Let's create a simple crew that will help us research and report on the latest AI developments for a given topic or subject.
Follow the steps below to get Crewing! 🚣‍♂️
- Create your crew
Create a new crew project by running the following command in your terminal. This will create a new directory called latest-ai-development with the basic structure of your crew (a sketch of the generated layout follows the command).
crewai create crew latest-ai-development
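As a rough guide (not part of the original page, and the exact files may differ slightly between CrewAI versions), the generated project typically looks something like this:
latest-ai-development/
├── .gitignore
├── knowledge/
├── pyproject.toml
├── README.md
├── .env
└── src/
    └── latest_ai_development/
        ├── __init__.py
        ├── main.py
        ├── crew.py
        ├── tools/
        │   ├── custom_tool.py
        │   └── __init__.py
        └── config/
            ├── agents.yaml
            └── tasks.yaml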
- Navigate to your new crew project
cd latest-ai-development
- Modify your `agents.yaml` file
You can modify the agents to fit your use case, or copy and paste them into your project as-is. Any variable interpolated in your agents.yaml and tasks.yaml files, like {topic}, will be replaced by the value of that variable in the main.py file.

# src/latest_ai_development/config/agents.yaml
researcher:
  role: >
    {topic} Senior Data Researcher
  goal: >
    Uncover cutting-edge developments in {topic}
  backstory: >
    You're a seasoned researcher with a knack for uncovering the latest
    developments in {topic}. Known for your ability to find the most relevant
    information and present it in a clear and concise manner.

reporting_analyst:
  role: >
    {topic} Reporting Analyst
  goal: >
    Create detailed reports based on {topic} data analysis and research findings
  backstory: >
    You're a meticulous analyst with a keen eye for detail. You're known for
    your ability to turn complex data into clear and concise reports, making
    it easy for others to understand and act on the information you provide.
- Modify your `tasks.yaml` file
# src/latest_ai_development/config/tasks.yaml
research_task:
  description: >
    Conduct a thorough research about {topic}
    Make sure you find any interesting and relevant information given
    the current year is {current_year}.
  expected_output: >
    A list with 10 bullet points of the most relevant information about {topic}
  agent: researcher

reporting_task:
  description: >
    Review the context you got and expand each topic into a full section for a report.
    Make sure the report is detailed and contains any and all relevant information.
  expected_output: >
    A fully fledged report with the main topics, each with a full section of information.
    Formatted as markdown without '```'
  agent: reporting_analyst
- Modify your `crew.py` file
# src/latest_ai_development/crew.py
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task

# If you want to run a snippet of code before or after the crew starts,
# you can use the @before_kickoff and @after_kickoff decorators
# https://docs.crewai.com/concepts/crews#example-crew-class-with-decorators

@CrewBase
class LatestAiDevelopment():
    """LatestAiDevelopment crew"""

    # Learn more about YAML configuration files here:
    # Agents: https://docs.crewai.com/concepts/agents#yaml-configuration-recommended
    # Tasks: https://docs.crewai.com/concepts/tasks#yaml-configuration-recommended
    agents_config = 'config/agents.yaml'
    tasks_config = 'config/tasks.yaml'

    # If you would like to add tools to your agents, you can learn more about it here:
    # https://docs.crewai.com/concepts/agents#agent-tools
    @agent
    def researcher(self) -> Agent:
        return Agent(
            config=self.agents_config['researcher'],
            verbose=True
        )

    @agent
    def reporting_analyst(self) -> Agent:
        return Agent(
            config=self.agents_config['reporting_analyst'],
            verbose=True
        )

    # To learn more about structured task outputs,
    # task dependencies, and task callbacks, check out the documentation:
    # https://docs.crewai.com/concepts/tasks#overview-of-a-task
    @task
    def research_task(self) -> Task:
        return Task(
            config=self.tasks_config['research_task'],
        )

    @task
    def reporting_task(self) -> Task:
        return Task(
            config=self.tasks_config['reporting_task'],
            output_file='report.md'
        )

    @crew
    def crew(self) -> Crew:
        """Creates the LatestAiDevelopment crew"""
        # To learn how to add knowledge sources to your crew, check out the documentation:
        # https://docs.crewai.com/concepts/knowledge#what-is-knowledge
        return Crew(
            agents=self.agents,  # Automatically created by the @agent decorator
            tasks=self.tasks,  # Automatically created by the @task decorator
            process=Process.sequential,
            verbose=True,
            # process=Process.hierarchical, # In case you wanna use that instead https://docs.crewai.com/how-to/Hierarchical/
        )
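The comments in crew.py point to agent tools. As an optional, hedged sketch (not part of the generated template), here is one way you might give the researcher agent web-search capability with SerperDevTool from the crewai_tools package; it assumes crewai_tools is installed and a SERPER_API_KEY is set (see the environment variables step below).

# Hypothetical variation of the researcher agent shown above.
from crewai_tools import SerperDevTool  # assumes the crewai_tools package is installed

@agent
def researcher(self) -> Agent:
    return Agent(
        config=self.agents_config['researcher'],
        tools=[SerperDevTool()],  # lets the agent run web searches via Serper.dev
        verbose=True
    )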
- [Optional] Add before and after crew functions
# src/latest_ai_development/crew.py
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task, before_kickoff, after_kickoff
from crewai_tools import SerperDevTool

@CrewBase
class LatestAiDevelopmentCrew():
    """LatestAiDevelopment crew"""

    @before_kickoff
    def before_kickoff_function(self, inputs):
        print(f"Before kickoff function with inputs: {inputs}")
        return inputs  # You can return the inputs or modify them as needed

    @after_kickoff
    def after_kickoff_function(self, result):
        print(f"After kickoff function with result: {result}")
        return result  # You can return the result or modify it as needed

    # ... remaining code
- Feel free to pass custom inputs to your crew
For example, you can pass a topic input to your crew to customize the research and reporting (see the note after the snippet).
#!/usr/bin/env python
# src/latest_ai_development/main.py
import sys
from latest_ai_development.crew import LatestAiDevelopmentCrew

def run():
    """
    Run the crew.
    """
    inputs = {
        'topic': 'AI Agents'
    }
    LatestAiDevelopmentCrew().crew().kickoff(inputs=inputs)
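Note that tasks.yaml above also interpolates {current_year}. If your generated main.py does not already provide it, a minimal sketch (our own addition, not the original snippet) would pass it alongside topic:

# Hypothetical extension of run(): supply every placeholder the YAML files interpolate.
from datetime import datetime

def run():
    """
    Run the crew with all interpolated inputs.
    """
    inputs = {
        'topic': 'AI Agents',
        'current_year': str(datetime.now().year),  # fills {current_year} in tasks.yaml
    }
    LatestAiDevelopmentCrew().crew().kickoff(inputs=inputs)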
- Set your environment variables
Before running your crew, make sure the following keys are set as environment variables in your .env file (a minimal example follows the list):
- An OpenAI API key (or other LLM API key): OPENAI_API_KEY=sk-…
- A Serper.dev API key: SERPER_API_KEY=YOUR_KEY_HERE
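For reference, a minimal .env sketch with placeholder values (replace them with your own keys) could look like:

# .env (placeholder values)
OPENAI_API_KEY=sk-...
SERPER_API_KEY=YOUR_KEY_HERE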
- Lock and install the dependencies
- Lock the dependencies and install them by using the CLI command:
crewai install
- If you have additional packages that you want to install, you can do so by running:
uv add <package-name>
- Run your crew
- To run your crew, execute the following command in the root of your project:
crewai run
- View your final report
You should see the output in the console, and a report.md file containing the final report should be created in the root of your project.
Here's an example of what the report should look like:
# The State of AI Language Models in 2025 - A Comprehensive Report

## 1. Enhanced Contextual Understanding
In 2025, AI Language Learning Models (LLMs) have significantly advanced in their contextual understanding capabilities. This progress is marked by their ability to comprehend and generate content with a high degree of contextual accuracy. These models now possess a deeper grasp of nuanced language, enabling them to produce responses that are not only precise but also relevant to a diverse array of topics. The improved contextual understanding is crucial as it allows LLMs to engage more effectively in natural language processing tasks, providing users with outputs that are closely aligned with their intents and expectations. This development supports a wide range of applications, from customer service chatbots and automated writing tools to more sophisticated uses in education and content generation.

## 2. Responsible AI and Bias Mitigation
Addressing ethical concerns surrounding AI usage, 2025 has witnessed substantial advancements in reducing bias across LLMs. The AI community has focused heavily on developing new methodologies and training protocols aimed at ensuring that AI outputs are balanced and fair, regardless of the dataset inputs. This includes implementing techniques such as diversified data sampling, fairness-aware machine learning models, and regular bias audits. As a result, AI today is positioned as a more reliable tool for all users, enhancing trust and credibility in AI technologies. This move towards responsible AI not only mitigates biased representations in generated content but also ensures inclusivity and equality, crucial for applications in sensitive areas like recruitment, legal advice, and social media moderation.

## 3. Energy-Efficient Models
The AI community has achieved remarkable progress in creating more energy-efficient language models. Noteworthy innovations in model architecture and training techniques have led to significant reductions in energy consumption and carbon footprint associated with LLM training and deployment. This sustainable approach is facilitated by methods like pruning, quantization, and the distillation of smaller yet powerful models that deliver performance similar to their larger counterparts. Such energy-efficient designs are not only environmentally responsible but also economically beneficial, reducing operational costs for businesses and making AI technologies more accessible to a broader range of users and industries.

## 4. Integration with Edge Computing
A pivotal development in 2025 is the optimization of LLMs for operation on edge devices, driving faster processing speeds and enabling real-time interactions without continuous reliance on cloud connectivity. This shift aligns with the growing demand for powerful yet localized computing services, especially within the Internet of Things (IoT) and smart device ecosystems. By facilitating immediate data processing at the device level, integration with edge computing fosters new possibilities in applications such as personalized health monitoring, real-time logistical support, and adaptive home automation systems. This technological leap ensures low-latency, cost-effective, and efficient computing solutions that are pivotal in today’s interconnected digital landscape.

## 5. Improved Multimodal Capabilities
2025’s LLMs epitomize advanced multimodal capabilities, adept at handling and integrating various forms of data including text, image, and video. This ability enhances their functionality across several domains like content creation, virtual reality applications, and human-computer interaction. The synergistic processing of different data types allows for more comprehensive analysis and richer output, crucial for augmented reality developments and sophisticated virtual assistant deployment. These capabilities enable a seamless blend of text, audio, and visual data, creating engaging and immersive user experiences that propel forward industries such as entertainment, education, and digital marketing.

## 6. Advanced Personalization Techniques
In 2025, LLMs have embraced advanced personalization techniques, becoming capable of finely adapting to individual user preferences and linguistic styles. By leveraging vast computational power and deep learning algorithms, LLMs now offer bespoke content recommendations, tailored communication styles, and personalized interactions—vital for enhancing user satisfaction and engagement. Such customized experiences are vastly impactful in areas such as digital marketing, personalized learning environments, and tailored health advice, where user-specific data can be harnessed to maximize the effectiveness and relevance of AI-driven initiatives.

## 7. Healthcare and Biomedical Applications
LLMs are now indispensable in healthcare and biomedical fields, where they contribute significantly to diagnostic processes, patient outcome predictions, and personalized treatment planning. The models process voluminous datasets to discern patterns and insights that human practitioners may overlook, thus augmenting traditional medical practices with data-driven precision and efficiency. The application of LLMs in genomics, disease detection, and remote monitoring fundamentally improves healthcare delivery, enabling proactive patient care and fostering better health outcomes through more informed clinical decisions.

## 8. Collaborative AI Systems
The integration of LLMs into collaborative AI systems marks a transformative shift in productivity and decision-making across industries. By working alongside human counterparts, LLMs enhance workflows and elevate decision-making processes in finance, legal sectors, and creative industries, among others. The synergy between humans and AI accelerates innovation, supports complex problem-solving, and expands the scope of what can be achieved in a range of professional environments. This collaborative approach to AI empowers organizations, driving efficiency while opening new avenues for creative and analytical work.

## 9. Advanced Security Protocols
In response to heightened data privacy concerns, 2025's LLMs come equipped with enhanced security protocols to protect user data and ensure secure interactions. The deployment of advanced encryption methods and rigorous data protection techniques staves off vulnerabilities and fortifies privacy. Such security measures are vital as more sensitive information—be it financial, personal, or health-related—gets processed through AI systems. This emphasis on security ensures that LLMs maintain user trust and comply with international regulations, making AI applications safe and dependable for widespread adoption.

## 10. Open-Source Movement Growth
2025 has seen a surge in the open-source development of LLMs, an approach that fosters collaboration and innovation across the AI landscape. By inviting researchers and developers from around the world to contribute, the open-source movement democratizes AI advancements, encouraging shared learning and collective problem-solving. This trend not only accelerates the pace of innovation but also ensures that more diverse perspectives influence the evolution of AI technologies. Open-source LLMs have thus become a platform for enriching global research and nurturing an inclusive AI community that benefits from shared tools, knowledge, and innovations.
Congratulations!
Your crew project has been set up successfully, and you are ready to start building your own agentic workflows!
Note on Consistency in Naming
The names you use in your YAML files (agents.yaml and tasks.yaml) must match the method names in your Python code. For example, you can reference the agent for a specific task from the tasks.yaml file. This naming consistency allows CrewAI to automatically link your configurations to your code; otherwise, your task won't recognize the reference correctly.
Example References
Note how we use the same name for the agent in the agents.yaml file (email_summarizer) as the method name in the crew.py file (email_summarizer).
# agents.yaml
email_summarizer:
  role: >
    Email Summarizer
  goal: >
    Summarize emails into a concise and clear summary
  backstory: >
    You will create a 5 bullet point summary of the report
  llm: openai/gpt-4o
Note how we use the same name for the task in the tasks.yaml file (email_summarizer_task) as the method name in the crew.py file (email_summarizer_task). A sketch of the matching crew.py methods follows the YAML below.
# tasks.yaml
email_summarizer_task:
  description: >
    Summarize the email into a 5 bullet point summary
  expected_output: >
    A 5 bullet point summary of the email
  agent: email_summarizer
  context:
    - reporting_task
    - research_task
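For illustration, a hedged sketch of the matching crew.py methods (not shown on the original page) would use the same names as the YAML keys above:

# Hypothetical excerpt from a @CrewBase class in crew.py:
# the method names mirror the YAML keys exactly.
@agent
def email_summarizer(self) -> Agent:
    return Agent(
        config=self.agents_config['email_summarizer'],
        verbose=True
    )

@task
def email_summarizer_task(self) -> Task:
    return Task(
        config=self.tasks_config['email_summarizer_task']
    )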
That's all.