Agents llm log enhancer #2750


Open · wants to merge 7 commits into main from agents_llm_log_enhancer

Conversation

orcema
Contributor

orcema commented May 4, 2025

No description provided.

@joaomdmoura
Collaborator

Disclaimer: This review was made by a crew of AI Agents.

Code Review Comment for PR #2750 - Log Agent Specific LLM Assignment

Overview

The modifications in this pull request enhance logging by displaying LLM model information during agent execution. This is a valuable addition that supports debugging and performance tracking.

Detailed Findings & Suggestions

1. src/crewai/agents/crew_agent_executor.py

Code Improvements:

  • Spacing Consistency: Adjust spacing around function definitions for readability. For instance:

    def _show_logs(self, formatted_answer: Union[AgentAction, AgentFinish]):
        """Show logs for the agent's execution."""
        ...
    # Add a newline here for better readability
    def _summarize_messages(self) -> None:
  • Optimizing Import Order: Reorganize imports to follow PEP 8 standards. Example:

    # Standard library imports
    
    # Third-party imports
    from typing import Union
    
    # Local application imports
    from crewai.utilities.agent_utils import ...

2. src/crewai/utilities/agent_utils.py

Code Improvements:

  • Enhanced Type Hints: Ensure all functions, especially show_agent_llm_model, have appropriate type hints. Revise it as:

    def show_agent_llm_model(self: Any) -> None:
        ...
  • Error Handling: Introduce error handling around LLM model access to prevent runtime errors:

    try:
        if hasattr(self, "llm") and getattr(self.llm, "model", None):
            ...
    except AttributeError as e:
        logger.debug(f"Unable to display LLM model information: {str(e)}")
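Putting the two suggestions above together, here is a self-contained sketch of what a hardened helper could look like. The function name comes from this PR, but the parameter name, return value, color constants, and logger setup are illustrative assumptions, not the PR's actual implementation:

```python
import logging
from typing import Any, Optional

logger = logging.getLogger(__name__)

# Color codes defined once at module level for reuse
ANSI_PURPLE = "\033[95m"
ANSI_RESET = "\033[0m"


def show_agent_llm_model(agent: Any) -> Optional[str]:
    """Build a colored log line naming the agent's LLM model.

    Returns None (after a debug log) when the agent has no usable
    ``llm.model`` attribute, so a missing model never raises.
    """
    try:
        # Nested getattr tolerates both a missing ``llm`` and a missing ``model``
        model = getattr(getattr(agent, "llm", None), "model", None)
        if model:
            return f"{ANSI_PURPLE}Agent LLM model: {model}{ANSI_RESET}"
    except AttributeError as e:
        logger.debug(f"Unable to display LLM model information: {e}")
    return None
```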

General Recommendations:

  1. Robust Error Handling: Ensure all attribute accesses are enclosed in try/except blocks to avoid exceptions disrupting the flow, especially in show_agent_llm_model.

  2. Documentation Enhancements: Expand docstrings to include information about return values, exceptions, and usage examples, especially in utility functions.

  3. Move Color Codes to Constants: Consider defining color codes at the top of the file for reusability:

    ANSI_PURPLE = "\033[95m"
    ANSI_GREEN = "\033[92m"
  4. Unit Testing: Implement tests for the new logging functionality to cover various scenarios, including different configurations of the LLM and cases where attributes might be missing.
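As a starting point for recommendation 4, here is a hedged pytest-style sketch. The helper defined inline is a stand-in that mimics the expected behavior; the real import path and exact message format in the PR may differ:

```python
from types import SimpleNamespace

# Stand-in for the helper under test; replace with the real import,
# e.g. from crewai.utilities.agent_utils, when adapting this sketch.
def show_agent_llm_model(agent):
    model = getattr(getattr(agent, "llm", None), "model", None)
    return f"LLM model: {model}" if model else None


def test_shows_model_when_configured():
    agent = SimpleNamespace(llm=SimpleNamespace(model="gpt-4o"))
    assert show_agent_llm_model(agent) == "LLM model: gpt-4o"


def test_silent_when_llm_missing():
    # No ``llm`` attribute at all: should return None, not raise
    assert show_agent_llm_model(SimpleNamespace()) is None


def test_silent_when_model_unset():
    agent = SimpleNamespace(llm=SimpleNamespace(model=None))
    assert show_agent_llm_model(agent) is None
```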

Historical Context

While I couldn't retrieve insights from previous PRs due to access issues, it's often beneficial to reference changes related to logging modifications. Reviewing how prior PRs have handled similar enhancements could yield useful patterns for error management and logging consistency.

Conclusion

Overall, the changes in this pull request are a positive step towards improving the logging mechanisms within the agent executors. Addressing the points highlighted in this review will significantly enhance code maintainability and robustness. Thank you for your contributions to improving our code quality!

orcema mentioned this pull request May 4, 2025
lorenzejay self-requested a review May 5, 2025 22:01
@lorenzejay
Collaborator

nice, but what about putting it in console_formatter.py?

@orcema
Contributor Author

orcema commented May 6, 2025

I don't exactly get the point about console_formatter.py. Why should that file also be updated with the enhanced logging?

@lorenzejay
Collaborator

We're going to deprecate the old logger in favor of console_formatter.py, since they duplicate each other: all logs now derive from event_listener, which provides the tree structure.

orcema force-pushed the agents_llm_log_enhancer branch from f8d62e6 to 9a617ae on May 9, 2025 09:55
@orcema
Contributor Author

orcema commented May 9, 2025

I can't figure out why it blocks the tests from passing on Python 3.11, and I don't get an error message.
(screenshot of the failing CI check)

@orcema
Contributor Author

orcema commented May 11, 2025

I even tried pushing a new branch (#2800), but that one also failed the tests for Python 3.11.

@lucasgomide
Contributor

@orcema would you mind addressing the linter issues?

@orcema
Contributor Author

orcema commented May 13, 2025

@orcema would you mind addressing the linter issues?

I would like to, but I don't know how to address the issue correctly.

My point is that when I run the tests according to the crewAI instructions (https://github.com/crewAIInc/crewAI?tab=readme-ov-file#running-tests), I get a lot of failed tests and 7 errors:

    = 153 failed, 623 passed, 4 skipped, 116 warnings, 7 errors in 262.22s (0:04:22) =
    [274042 ms] postCreateCommand from devcontainer.json failed with exit code 1. Skipping any further user-provided commands.

I'm using a .devcontainer with the settings below:

    {
      "name": "CrewAI Development",
      "image": "mcr.microsoft.com/devcontainers/python:3.10",
      "features": {
        "ghcr.io/devcontainers-contrib/features/poetry:2": {}
      },
      "remoteUser": "vscode",
      "postCreateCommand": "mkdir -p /home/vscode/.local/lib && pip install uv && uv lock && uv sync --dev --all-extras && uv run pytest --block-network --timeout=60 -vv",
      "customizations": {
        "vscode": {
          "extensions": [
            "ms-python.python",
            "ms-python.vscode-pylance",
            "njpwerner.autodocstring"
          ],
          "settings": {}
        }
      }
    }

@lucasgomide
Contributor

@orcema it's a flaky test

Regarding the linter, it seems you have a single file with issues; you can fix it by running:

uv run ruff check --fix src/crewai/agents/crew_agent_executor.py

orcema force-pushed the agents_llm_log_enhancer branch from 413452b to b97f22a on May 17, 2025 18:16