fix(copilot): remove is_long_running hack from agent generation tools#12180

Closed
majdyz wants to merge 22 commits into dev from fix/agent-generation-completion-blocking
Conversation


@majdyz majdyz commented Feb 21, 2026

Summary

Fixes agent generation by removing async delegation from the executor while keeping is_long_running for frontend UI hints. Introduces stream-based communication for long-running tools.

Problem

The executor was still spawning background tasks when it saw is_long_running = True, causing:

  • "Operation is still running" messages instead of actual results
  • Session timeouts waiting for async completion
  • Mini-game not displaying because execution wasn't streaming properly

Solution

1. Remove Async Delegation from Executor

  • Removed background task spawning code from _yield_tool_call in service.py
  • All tools now execute synchronously with streaming, regardless of is_long_running
  • Removed 154 lines of async delegation code

2. Add Stream Event for Long-Running Tools

  • Added StreamLongRunningStart event type to response_model.py
  • Backend yields this event when tool.is_long_running = True
  • Frontend listens for event to show UI feedback (mini-game)

3. Frontend Event-Based Detection

  • ToolWrapper detects long-running-start events from stream
  • Removed hardcoded LONG_RUNNING_TOOLS list
  • Single source of truth: backend's is_long_running property
  • Mini-game auto-hides when tool completes

Architecture

Before:

  • Backend: is_long_running = True β†’ spawns background task β†’ async delegation
  • Frontend: Hardcoded list of tools to show mini-game

After:

  • Backend: is_long_running = True β†’ yields StreamLongRunningStart event β†’ runs synchronously
  • Frontend: Receives event β†’ shows mini-game β†’ hides when complete

Benefits

βœ… Tools run synchronously with proper streaming
βœ… Completion messages appear in chat immediately
βœ… No hardcoded lists to synchronize
βœ… Backend has full control over UI hints
βœ… Mini-game automatically shows/hides based on tool state

Testing

  • Backend formatted and type-checked
  • Frontend formatted and linted
  • Removed sample.logs accidental commit
  • Manual test: Create agent and verify result + mini-game display

Remove the `is_long_running = True` override from create_agent,
edit_agent, and customize_agent tools. Now that CoPilot runs in the
executor service (which already handles background execution), the
async delegation pattern is unnecessary.

This fixes the issue where agent generation completion messages
never appeared in chat because the code was exiting early expecting
an external Redis Stream completion that never came.

The tools now execute synchronously in the CoPilot executor and
stream completion messages back to chat immediately.

Fixes: Agent generation completion not showing in chat
…ation

Remove all dead code related to the async processing delegation pattern
that is no longer needed after removing the is_long_running hack:

- Remove `_operation_id` and `_task_id` parameter extraction
- Remove passing these params to generate_agent/generate_agent_patch
- Remove `status: "accepted"` checks and AsyncProcessingResponse returns
- Remove AsyncProcessingResponse class definition from models.py
- Remove operation_id/task_id params from agent_generator functions:
  - generate_agent() and generate_agent_external()
  - generate_agent_patch() and generate_agent_patch_external()
  - generate_agent_dummy() and generate_agent_patch_dummy()
- Remove 202 Accepted handling for async processing

This cleanup removes 126 lines of code that was supporting the old
async delegation workflow.
@majdyz majdyz requested a review from a team as a code owner February 21, 2026 00:23
@majdyz majdyz requested review from Pwuts and Swiftyos and removed request for a team February 21, 2026 00:23
@github-project-automation github-project-automation bot moved this to πŸ†• Needs initial review in AutoGPT development kanban Feb 21, 2026
@github-actions github-actions bot added platform/backend AutoGPT Platform - Back end size/l labels Feb 21, 2026

coderabbitai bot commented Feb 21, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

Walkthrough

This PR removes asynchronous operation processing from the agent generation backend flow and updates the frontend to use synchronous operations with mini-game UI during long-running tool execution. Backend functions eliminate operation_id and task_id parameters, the AsyncProcessingResponse model is deleted, and frontend components are restructured to conditionally display a LongRunningToolDisplay wrapper instead of handling operation state responses.

Changes

Cohort / File(s) Summary
Backend Agent Generator Service
backend/backend/copilot/tools/agent_generator/core.py, backend/backend/copilot/tools/agent_generator/dummy.py, backend/backend/copilot/tools/agent_generator/service.py
Removed operation_id and task_id parameters from generate_agent and generate_agent_patch functions and their external/dummy implementations; eliminated async acceptance response handling; simplified return semantics to always return agent JSON or error dict.
Backend Tool Implementations
backend/backend/copilot/tools/create_agent.py, backend/backend/copilot/tools/customize_agent.py, backend/backend/copilot/tools/edit_agent.py
Removed AsyncProcessingResponse import and usage; eliminated operation_id/task_id extraction and passing to agent generation; removed async acceptance flow handling; added is_long_running docstring descriptions.
Backend Models and Tests
backend/backend/copilot/tools/models.py, backend/test/agent_generator/test_core_integration.py
Deleted AsyncProcessingResponse class definition; updated test assertions for generate_agent/generate_agent_patch external calls to reflect reduced parameter arities.
Backend Tool Base Class
backend/backend/copilot/tools/base.py
Simplified is_long_running property docstring to describe UI mini-game duration instead of background task behavior.
Frontend Long-Running Tool Infrastructure
frontend/src/app/(platform)/copilot/tools/long-running-tools.ts, frontend/src/app/(platform)/copilot/components/LongRunningToolDisplay/LongRunningToolDisplay.tsx, frontend/src/app/(platform)/copilot/components/ToolWrapper/ToolWrapper.tsx
Added new utility module defining LONG_RUNNING_TOOLS list and isLongRunningTool checker; created LongRunningToolDisplay component that conditionally renders mini-game UI; created ToolWrapper that wraps tool components and shows LongRunningToolDisplay when appropriate.
Frontend Tool UI Updates
frontend/src/app/(platform)/copilot/tools/CreateAgent/CreateAgent.tsx, frontend/src/app/(platform)/copilot/tools/CreateAgent/helpers.tsx, frontend/src/app/(platform)/copilot/tools/EditAgent/EditAgent.tsx, frontend/src/app/(platform)/copilot/tools/EditAgent/helpers.tsx
Removed operation-state response types (OperationStartedResponse, OperationPendingResponse, OperationInProgressResponse) from tool output unions; deleted related type-guard helpers (isOperationStartedOutput, isOperationPendingOutput, isOperationInProgressOutput); removed MiniGame/ContentHint inline rendering and OrbitLoader usage; simplified hasExpandableContent logic to focus on agent preview, saved, clarification, and error states.
Frontend Chat Messages
frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsx
Wrapped tool component rendering in new ToolWrapper component to enable unified long-running tool display handling.
Sample Logs
backend/sample.logs
Added verbose multi-service startup and operational log traces for reference/debugging purposes.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Suggested reviewers

  • Swiftyos
  • Pwuts

Poem

🐰 Async operations hop away,
No more waiting for tasks to play!
Sync flows swift, mini-games in sight,
Long-running tools now feel so right!
The rabbit's refactor is clean and tight ✨

πŸš₯ Pre-merge checks: βœ… 3 passed

  • Title check βœ… Passed: The title clearly and specifically describes the main change, removing the is_long_running hack from agent generation tools, which aligns with the primary objective of the PR.
  • Docstring Coverage βœ… Passed: Docstring coverage is 80.95%, which meets the required threshold of 80.00%.
  • Description check βœ… Passed: The PR description clearly explains the problem (async delegation causing blocking issues), the solution (removing async delegation while keeping is_long_running for UI hints), and the architectural changes made. It directly relates to the changeset, which removes async delegation code, operation_id/task_id parameters, and AsyncProcessingResponse handling across multiple files.



greptile-apps bot commented Feb 21, 2026

Greptile Summary

Removed async delegation dead code from agent generation tools. The tools (create_agent, edit_agent, customize_agent) no longer override is_long_running = True, allowing them to execute synchronously in the CoPilot executor. This ensures completion messages stream back to chat immediately instead of expecting an external Redis Stream completion that never arrives.

Key changes:

  • Removed is_long_running property overrides from 3 agent generation tools
  • Removed _operation_id and _task_id parameter extraction and passing (24 lines)
  • Removed AsyncProcessingResponse class and related response handling (30+ lines)
  • Removed operation_id/task_id parameters from 6 agent_generator functions
  • Removed HTTP 202 Accepted handling in external service calls (40+ lines)
  • Total cleanup: 126+ lines of dead code removed

The is_long_running infrastructure remains intact for other tools that may need it.

Confidence Score: 5/5

  • This PR is safe to merge - it's a clean refactoring that removes dead code without introducing new functionality
  • This is purely a code cleanup PR that removes 126+ lines of dead code. The async delegation pattern was already disabled in the previous commit by removing is_long_running overrides. This PR completes the cleanup by removing all the supporting infrastructure that's no longer used. No new logic is introduced, only deletions of unused parameters, checks, and response types. The changes are well-documented and follow a clear architectural decision.
  • No files require special attention - all changes are straightforward deletions of dead code

Sequence Diagram

sequenceDiagram
    participant User
    participant CoPilot as CoPilot Service
    participant Tool as Agent Generation Tool
    participant Generator as Agent Generator Service
    participant Chat as Chat History

    Note over Tool: is_long_running = False (default)
    
    User->>CoPilot: Request agent creation
    CoPilot->>Tool: execute(description, context)
    Note over CoPilot,Tool: Synchronous execution<br/>(no background task)
    
    Tool->>Generator: generate_agent(instructions, library_agents)
    Note over Tool,Generator: No operation_id/task_id params
    Generator-->>Tool: Agent JSON
    Note over Generator,Tool: No 202 Accepted response
    
    Tool-->>CoPilot: AgentSavedResponse
    CoPilot->>Chat: Save completion message
    CoPilot-->>User: Stream completion to chat
    
    Note over User,Chat: βœ“ User sees completion immediately

Last reviewed commit: 66c2416

…task_id

Update test assertions to match new function signatures after removing
operation_id and task_id parameters from generate_agent_external and
generate_agent_patch_external.

Fixes:
- TestGenerateAgent::test_calls_external_service
- TestGenerateAgentPatch::test_calls_external_service
…game display

- Add is_long_running property to BaseTool for UI feedback control
- Mark create_agent, edit_agent, customize_agent as long-running tools
- Create LongRunningToolDisplay component for generic mini-game UI
- Clean up CreateAgent and EditAgent to use shared component
- Remove manual title configuration, use generic message
- Create LONG_RUNNING_TOOLS constant for frontend reference

This makes it easy to add new long-running tools without UI changes.
@github-actions github-actions bot added the platform/frontend AutoGPT Platform - Front end label Feb 21, 2026
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (3)
autogpt_platform/frontend/src/app/(platform)/copilot/components/LongRunningToolDisplay/LongRunningToolDisplay.tsx (2)

2-5: Convoluted relative import path β€” simplify to ../ToolAccordion/AccordionContent

The import resolves correctly but the path ../../tools/CreateAgent/../../components/ToolAccordion/AccordionContent is unnecessarily indirect (it looks copy-pasted from a tools/ file). From this file's location in components/LongRunningToolDisplay/, the direct path is simply:

πŸ”§ Proposed fix
-import {
-  ContentGrid,
-  ContentHint,
-} from "../../tools/CreateAgent/../../components/ToolAccordion/AccordionContent";
+import {
+  ContentGrid,
+  ContentHint,
+} from "../ToolAccordion/AccordionContent";

7-7: Dependency direction: shared components/ importing from feature-specific tools/CreateAgent/

MiniGame lives under tools/CreateAgent/components/MiniGame/, but LongRunningToolDisplay is a shared component under components/. Having a shared component depend on a feature-local component inverts the expected dependency direction (features β†’ shared, not shared β†’ feature). Since MiniGame is now used by both CreateAgent and EditAgent via LongRunningToolDisplay, it should be moved to a shared location (e.g., components/MiniGame/) and the import updated accordingly.

autogpt_platform/backend/backend/copilot/tools/create_agent.py (1)

225-238: No timeout guard on the synchronous generate_agent call

generate_agent is now awaited synchronously and can take several minutes. If the external agent generator service hangs, _execute will block indefinitely with no timeout or circuit-breaker. Ensure the executor service that hosts CoPilot enforces an upper-bound deadline (e.g., via asyncio.wait_for) for long-running tool invocations.
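The deadline the reviewer suggests could look like the sketch below. This is a hedged illustration, not the PR's code: the timeout constant, the error-dict shape, and the trivial generate_agent body are all assumptions for the example.

```python
import asyncio

# Hypothetical timeout constant; the real value would live in config.
AGENT_GENERATION_TIMEOUT = 300  # seconds


async def generate_agent(instructions: str) -> dict:
    await asyncio.sleep(0)  # placeholder for the real external service call
    return {"agent": instructions}


async def generate_with_deadline(instructions: str) -> dict:
    # Bound the now-synchronous await so a hung external generator service
    # cannot block _execute indefinitely.
    try:
        return await asyncio.wait_for(
            generate_agent(instructions), timeout=AGENT_GENERATION_TIMEOUT
        )
    except asyncio.TimeoutError:
        return {
            "error": "timeout",
            "message": "Agent generator did not respond in time",
        }


result = asyncio.run(generate_with_deadline("build a scraper"))
print(result)
```

Catching asyncio.TimeoutError separately from other exceptions keeps hung-service failures distinguishable in logs from genuine generation errors.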


πŸ“₯ Commits

Reviewing files that changed from the base of the PR and between eef3946 and bfdc1ed.

πŸ“’ Files selected for processing (10)
  • autogpt_platform/backend/backend/copilot/tools/base.py
  • autogpt_platform/backend/backend/copilot/tools/create_agent.py
  • autogpt_platform/backend/backend/copilot/tools/customize_agent.py
  • autogpt_platform/backend/backend/copilot/tools/edit_agent.py
  • autogpt_platform/frontend/src/app/(platform)/copilot/components/LongRunningToolDisplay/LongRunningToolDisplay.tsx
  • autogpt_platform/frontend/src/app/(platform)/copilot/tools/CreateAgent/CreateAgent.tsx
  • autogpt_platform/frontend/src/app/(platform)/copilot/tools/CreateAgent/helpers.tsx
  • autogpt_platform/frontend/src/app/(platform)/copilot/tools/EditAgent/EditAgent.tsx
  • autogpt_platform/frontend/src/app/(platform)/copilot/tools/EditAgent/helpers.tsx
  • autogpt_platform/frontend/src/app/(platform)/copilot/tools/long-running-tools.ts
πŸ’€ Files with no reviewable changes (2)
  • autogpt_platform/frontend/src/app/(platform)/copilot/tools/CreateAgent/helpers.tsx
  • autogpt_platform/frontend/src/app/(platform)/copilot/tools/EditAgent/helpers.tsx
🚧 Files skipped from review as they are similar to previous changes (1)
  • autogpt_platform/backend/backend/copilot/tools/customize_agent.py
  • ContentGrid (9-17)
  • ContentHint (126-138)
autogpt_platform/frontend/src/app/(platform)/copilot/tools/CreateAgent/components/MiniGame/MiniGame.tsx (1)
  • MiniGame (9-50)
autogpt_platform/frontend/src/app/(platform)/copilot/tools/CreateAgent/CreateAgent.tsx (3)
autogpt_platform/frontend/src/app/(platform)/copilot/tools/EditAgent/helpers.tsx (1)
  • isAgentPreviewOutput (60-64)
autogpt_platform/frontend/src/app/(platform)/copilot/tools/CreateAgent/helpers.tsx (1)
  • isAgentPreviewOutput (63-67)
autogpt_platform/frontend/src/app/(platform)/copilot/components/LongRunningToolDisplay/LongRunningToolDisplay.tsx (1)
  • LongRunningToolDisplay (18-35)
autogpt_platform/frontend/src/app/(platform)/copilot/tools/EditAgent/EditAgent.tsx (3)
autogpt_platform/frontend/src/app/(platform)/copilot/tools/EditAgent/helpers.tsx (1)
  • isAgentPreviewOutput (60-64)
autogpt_platform/frontend/src/app/(platform)/copilot/tools/CreateAgent/helpers.tsx (1)
  • isAgentPreviewOutput (63-67)
autogpt_platform/frontend/src/app/(platform)/copilot/components/LongRunningToolDisplay/LongRunningToolDisplay.tsx (1)
  • LongRunningToolDisplay (18-35)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (8)
  • GitHub Check: types
  • GitHub Check: Seer Code Review
  • GitHub Check: end-to-end tests
  • GitHub Check: test (3.12)
  • GitHub Check: test (3.13)
  • GitHub Check: test (3.11)
  • GitHub Check: Check PR Status
  • GitHub Check: Analyze (python)
πŸ”‡ Additional comments (7)
autogpt_platform/backend/backend/copilot/tools/base.py (1)

40-42: LGTM β€” docstring accurately reflects the new semantics.

The updated docstring correctly decouples is_long_running from async backend processing and ties it to the UI mini-game indication, which is consistent with the broader PR change of removing async delegation from agent generation tools.
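
As a sketch of the new contract (the class and property names here are illustrative stand-ins, not the actual base.py code), the property reduces to a UI-only flag:

```python
class BaseTool:
    """Illustrative stand-in for the copilot tool base class."""

    @property
    def is_long_running(self) -> bool:
        # UI hint only: when True the frontend shows the mini-game while the
        # tool streams; it no longer triggers async delegation in the executor.
        return False


class CreateAgentTool(BaseTool):
    @property
    def is_long_running(self) -> bool:
        # Kept True so agent generation still shows the mini-game.
        return True
```

Every tool still executes synchronously with streaming; overriding the property only changes what the frontend renders while the tool runs.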

autogpt_platform/backend/backend/copilot/tools/create_agent.py (1)

50-52: is_long_running still returns True β€” contradicts PR objectives

The PR description states "Tools now rely on the base class default is_long_running = False", but the property here still returns True. The added docstring ("show mini-game") further confirms the intent is to keep it True. The PR objectives text appears to be stale relative to the actual implementation, where is_long_running = True is intentionally retained for the frontend mini-game UI while only the async delegation path (operation_id/task_id) was removed.

autogpt_platform/backend/backend/copilot/tools/edit_agent.py (1)

48-50: Same is_long_running / PR objectives mismatch as create_agent.py

Same observation as create_agent.py: the PR objectives claim this property should fall back to the base-class False, but the implementation keeps return True. The added docstring is consistent with the actual behavior, making the PR description the inaccurate source.

autogpt_platform/frontend/src/app/(platform)/copilot/tools/EditAgent/EditAgent.tsx (1)

82-128: Cleanly simplified β€” LGTM

Operation-state guards removed, LongRunningToolDisplay properly replaces the inline mini-game, hasExpandableContent reflects only real output states. No concerns.

autogpt_platform/frontend/src/app/(platform)/copilot/tools/CreateAgent/CreateAgent.tsx (1)

94-149: Cleanly simplified β€” LGTM

LongRunningToolDisplay integration is consistent with EditAgent.tsx. Operation-state checks correctly removed. No concerns.

autogpt_platform/frontend/src/app/(platform)/copilot/tools/long-running-tools.ts (2)

18-20: Remove isLongRunningTool functionβ€”it is unused dead code

isLongRunningTool is exported but has no callers anywhere in the codebase. Neither the function nor the module it resides in (long-running-tools.ts) is imported or referenced elsewhere. CreateAgent.tsx and EditAgent.tsx check isStreaming (part.state) instead of calling this utility.


7-11: Backend parity is confirmed β€” customize_agent.py has is_long_running returning True and properly implements async delegation with the async def _execute() method, consistent with create_agent and edit_agent.

πŸ€– Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In autogpt_platform/frontend/src/app/(platform)/copilot/components/LongRunningToolDisplay/LongRunningToolDisplay.tsx:
- Around line 24-31: The accordion title and the ContentHint inside
LongRunningToolDisplay both repeat the same wait message; remove the redundant
ContentHint component (or replace its text with supplementary info such as
keyboard controls) to avoid duplication. Locate the ContentHint element in the
LongRunningToolDisplay component (adjacent to MiniGame and within the accordion
props: title="This may take a few minutes. Play while you wait.") and either
delete that ContentHint node or update its content to provide additional,
non-duplicative guidance (e.g., game keyboard controls) so only one wait message
remains in the UI.

---

Nitpick comments:
In autogpt_platform/backend/backend/copilot/tools/create_agent.py:
- Around line 225-238: The await of generate_agent inside _execute has no
timeout and can block indefinitely; wrap the call to generate_agent with
asyncio.wait_for using a configurable timeout constant (e.g.,
AGENT_GENERATION_TIMEOUT or DEFAULT_TIMEOUT) and import asyncio, then handle
asyncio.TimeoutError to return an ErrorResponse (similar shape to the existing
AgentGeneratorNotConfiguredError handling) with a clear message and error code
like "timeout" or "service_unresponsive"; ensure you catch and log the timeout
separately from other exceptions so long-running or hung external generator
calls are bounded.
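
A minimal sketch of that suggestion (the timeout constant, the generate_agent stub, and the error-response shape are assumptions for illustration, not the real create_agent.py code):

```python
import asyncio

AGENT_GENERATION_TIMEOUT = 300  # seconds; illustrative constant


async def generate_agent(prompt: str) -> dict:
    # Stand-in for the external agent-generator call.
    await asyncio.sleep(0.01)
    return {"name": "demo-agent"}


async def execute_with_timeout(prompt: str) -> dict:
    try:
        # Bound the external call so a hung generator cannot block forever.
        return await asyncio.wait_for(
            generate_agent(prompt), timeout=AGENT_GENERATION_TIMEOUT
        )
    except asyncio.TimeoutError:
        # Mirror the existing error-response shape with a distinct code.
        return {"error": "Agent generation timed out", "code": "timeout"}


result = asyncio.run(execute_with_timeout("build me an agent"))
```

Catching asyncio.TimeoutError separately from other exceptions keeps hung-generator failures distinguishable in logs from genuine generation errors.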

In autogpt_platform/frontend/src/app/(platform)/copilot/components/LongRunningToolDisplay/LongRunningToolDisplay.tsx:
- Around line 2-5: The import for ContentGrid and ContentHint in
LongRunningToolDisplay.tsx uses a convoluted relative path; update the import
statement that currently references
"../../tools/CreateAgent/../../components/ToolAccordion/AccordionContent" to the
simplified relative path "../ToolAccordion/AccordionContent" so ContentGrid and
ContentHint are imported directly from the neighboring ToolAccordion folder.
- Line 7: LongRunningToolDisplay imports the feature-local MiniGame, inverting
dependency direction; move the MiniGame component from
tools/CreateAgent/components/MiniGame/ into the shared components area (e.g.,
components/MiniGame/) and update the import in LongRunningToolDisplay.tsx to
reference the new shared path; ensure any internal imports/exports inside the
moved MiniGame module stay correct (export name MiniGame) and update any other
callers (CreateAgent/EditAgent) to the shared location so shared β†’ features
dependency is preserved.

Comment on lines +24 to +31
title="This may take a few minutes. Play while you wait."
defaultExpanded={true}
>
<ContentGrid>
<MiniGame />
<ContentHint>
This could take a few minutes β€” play while you wait!
</ContentHint>
⚠️ Potential issue | 🟑 Minor

Redundant wait messaging β€” title and hint say the same thing

The accordion title ("This may take a few minutes. Play while you wait.") and the ContentHint inside ("This could take a few minutes β€” play while you wait!") convey identical information. The hint adds no additional value and the duplication is noticeable to users. Remove the ContentHint or replace it with genuinely supplementary text (e.g., keyboard controls for the game).

βœ‚οΈ Proposed fix
       <ContentGrid>
         <MiniGame />
-        <ContentHint>
-          This could take a few minutes β€” play while you wait!
-        </ContentHint>
       </ContentGrid>
πŸ“ Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
title="This may take a few minutes. Play while you wait."
defaultExpanded={true}
>
<ContentGrid>
<MiniGame />
<ContentHint>
This could take a few minutes β€” play while you wait!
</ContentHint>
title="This may take a few minutes. Play while you wait."
defaultExpanded={true}
>
<ContentGrid>
<MiniGame />

- Create LongRunningToolWrapper component that wraps ALL tools
- Automatically detects if tool is long-running and shows mini-game
- Remove manual LongRunningToolDisplay from CreateAgent/EditAgent
- All tools (GenericTool, CustomizeAgent, etc.) now automatic
- No need to add mini-game to individual tool components

This makes the system completely generic - just mark is_long_running=True
in backend and frontend automatically shows mini-game!
@github-actions

github-actions bot commented Feb 21, 2026

πŸ” PR Overlap Detection

This check compares your PR against all other open PRs targeting the same branch to detect potential merge conflicts early.

πŸ”΄ Merge Conflicts Detected

The following PRs have been tested and will have merge conflicts if merged after this PR. Consider coordinating with the authors.

  • chore(frontend): Fix react-doctor warnings + add CI jobΒ #12163 (0ubbe Β· updated 1d ago)

    • πŸ“ autogpt_platform/frontend/src/app/(platform)/
      • build/components/legacy-builder/BlocksControl.tsx (deleted here, modified there)
      • build/components/legacy-builder/CustomNode/CustomNode.tsx (deleted here, modified there)
      • build/components/legacy-builder/DataTable.tsx (deleted here, modified there)
      • build/components/legacy-builder/Flow/Flow.tsx (deleted here, modified there)
      • build/components/legacy-builder/NodeOutputs.tsx (deleted here, modified there)
      • copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsx (11 conflicts, ~77 lines)
      • monitoring/components/FlowRunsTimeline.tsx (deleted here, modified there)
  • fix(copilot): workspace file listing fixΒ #12190 (majdyz Β· updated 9m ago)

    • πŸ“ autogpt_platform/
      • backend/backend/copilot/tools/models.py (1 conflict, ~43 lines)
      • frontend/src/app/(platform)/copilot/components/LongRunningToolDisplay/LongRunningToolDisplay.tsx (2 conflicts, ~25 lines)
      • frontend/src/app/(platform)/copilot/tools/EditAgent/helpers.tsx (1 conflict, ~4 lines)

🟒 Low Risk β€” File Overlap Only

These PRs touch the same files but different sections (click to expand)

Summary: 2 conflict(s), 0 medium risk, 1 low risk (out of 3 PRs with file overlap)


Auto-generated on push. Ignores: openapi.json, lock files.

ToolWrapper is a better name since it wraps ALL tools, not just
long-running ones. It conditionally shows mini-game for long-running
tools based on LONG_RUNNING_TOOLS list.
@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (1)
autogpt_platform/frontend/src/app/(platform)/copilot/components/LongRunningToolWrapper/LongRunningToolWrapper.tsx (1)

10-14: Trim the comments β€” the code is self-documenting.

The JSDoc (lines 10–14) restates what the function name and 3-line body already communicate, and the inline comments (// Extract tool name…, // Automatically show mini-game…, // Render the actual tool component) are equally redundant for a 15-line wrapper.

βœ‚οΈ Suggested cleanup
-/**
- * Wrapper that automatically shows mini-game for long-running tools.
- * Checks the tool name against LONG_RUNNING_TOOLS and displays
- * LongRunningToolDisplay during streaming.
- */
 export function LongRunningToolWrapper({ part, children }: Props) {
-  // Extract tool name from part.type (e.g., "tool-create_agent" -> "create_agent")
   const toolName = part.type.replace(/^tool-/, "");
   const isStreaming =
     part.state === "input-streaming" || part.state === "input-available";

   return (
     <>
-      {/* Automatically show mini-game if tool is long-running and streaming */}
       {isLongRunningTool(toolName) && (
         <LongRunningToolDisplay isStreaming={isStreaming} />
       )}
-      {/* Render the actual tool component */}
       {children}
     </>
   );
 }

Based on learnings: "Avoid comments at all times unless the code is very complex."

πŸ€– Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In autogpt_platform/frontend/src/app/(platform)/copilot/components/LongRunningToolWrapper/LongRunningToolWrapper.tsx
around lines 10 - 14, Remove the redundant JSDoc and inline comments in the
LongRunningToolWrapper component: delete the block comment describing behavior
at the top and remove the short inline comments near the tool name extraction,
LONG_RUNNING_TOOLS check, and rendering, leaving the component logic and
identifiers (LongRunningToolWrapper, LONG_RUNNING_TOOLS, LongRunningToolDisplay)
intact so the code itself documents its behavior.
πŸ“œ Review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled

You can enable these sources in your CodeRabbit configuration.

πŸ“₯ Commits

Reviewing files that changed from the base of the PR and between bfdc1ed and 1de260c.

πŸ“’ Files selected for processing (4)
  • autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsx
  • autogpt_platform/frontend/src/app/(platform)/copilot/components/LongRunningToolWrapper/LongRunningToolWrapper.tsx
  • autogpt_platform/frontend/src/app/(platform)/copilot/tools/CreateAgent/CreateAgent.tsx
  • autogpt_platform/frontend/src/app/(platform)/copilot/tools/EditAgent/EditAgent.tsx
🚧 Files skipped from review as they are similar to previous changes (1)
  • autogpt_platform/frontend/src/app/(platform)/copilot/tools/CreateAgent/CreateAgent.tsx
🧰 Additional context used
πŸ““ Path-based instructions (10)
autogpt_platform/frontend/**/*.{ts,tsx,js,jsx}

πŸ“„ CodeRabbit inference engine (.github/copilot-instructions.md)

autogpt_platform/frontend/**/*.{ts,tsx,js,jsx}: Use Node.js 21+ with pnpm package manager for frontend development
Always run 'pnpm format' for formatting and linting code in frontend development

Files:

  • autogpt_platform/frontend/src/app/(platform)/copilot/components/LongRunningToolWrapper/LongRunningToolWrapper.tsx
  • autogpt_platform/frontend/src/app/(platform)/copilot/tools/EditAgent/EditAgent.tsx
  • autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsx
autogpt_platform/frontend/**/*.{tsx,ts}

πŸ“„ CodeRabbit inference engine (.github/copilot-instructions.md)

autogpt_platform/frontend/**/*.{tsx,ts}: Use function declarations for components and handlers (not arrow functions) in React components
Only use arrow functions for small inline lambdas (map, filter, etc.) in React components
Use PascalCase for component names and camelCase with 'use' prefix for hook names in React
Use Tailwind CSS utilities only for styling in frontend components
Use design system components from 'src/components/' (atoms, molecules, organisms) in frontend development
Never use 'src/components/legacy/' in frontend code
Only use Phosphor Icons (@phosphor-icons/react) for icons in frontend components
Use generated API hooks from '@/app/api/__generated__/endpoints/' instead of deprecated 'BackendAPI' or 'src/lib/autogpt-server-api/'
Use React Query for server state (via generated hooks) in frontend development
Default to client components ('use client') in Next.js; only use server components for SEO or extreme TTFB needs
Use '<ErrorCard />' component for rendering errors in frontend UI; use toast notifications for mutation errors; use 'Sentry.captureException()' for manual exceptions
Separate render logic from data/behavior in React components; keep comments minimal (code should be self-documenting)

Files:

  • autogpt_platform/frontend/src/app/(platform)/copilot/components/LongRunningToolWrapper/LongRunningToolWrapper.tsx
  • autogpt_platform/frontend/src/app/(platform)/copilot/tools/EditAgent/EditAgent.tsx
  • autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsx
autogpt_platform/frontend/**/*.{ts,tsx}

πŸ“„ CodeRabbit inference engine (.github/copilot-instructions.md)

autogpt_platform/frontend/**/*.{ts,tsx}: No barrel files or 'index.ts' re-exports in frontend code
Regenerate API hooks with 'pnpm generate:api' after backend OpenAPI spec changes in frontend development

Files:

  • autogpt_platform/frontend/src/app/(platform)/copilot/components/LongRunningToolWrapper/LongRunningToolWrapper.tsx
  • autogpt_platform/frontend/src/app/(platform)/copilot/tools/EditAgent/EditAgent.tsx
  • autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsx
autogpt_platform/frontend/src/**/*.{ts,tsx}

πŸ“„ CodeRabbit inference engine (autogpt_platform/frontend/CLAUDE.md)

autogpt_platform/frontend/src/**/*.{ts,tsx}: Fully capitalize acronyms in symbols, e.g. graphID, useBackendAPI
Use function declarations (not arrow functions) for components and handlers
Separate render logic (.tsx) from business logic (use*.ts hooks)
Use shadcn/ui (Radix UI primitives) with Tailwind CSS styling for UI components
Use Phosphor Icons only for icons
Use ErrorCard for render errors, toast for mutations, and Sentry for exceptions
Use design system components from src/components/ (atoms, molecules, organisms)
Never use src/components/__legacy__/* components
Use generated API hooks from @/app/api/__generated__/endpoints/ with pattern use{Method}{Version}{OperationName}
Use Tailwind CSS only for styling, with design tokens
Do not use useCallback or useMemo unless asked to optimize a given function
Never type with any unless a variable/attribute can ACTUALLY be of any type

autogpt_platform/frontend/src/**/*.{ts,tsx}: Structure components as ComponentName/ComponentName.tsx + useComponentName.ts + helpers.ts and use design system components from src/components/ (atoms, molecules, organisms)
Use generated API hooks from @/app/api/__generated__/endpoints/ with pattern use{Method}{Version}{OperationName} and regenerate with pnpm generate:api
Use function declarations (not arrow functions) for components and handlers
Separate render logic from business logic with component.tsx + useComponent.ts + helpers.ts structure
Colocate state when possible, avoid creating large components, use sub-components in local /components folder
Avoid large hooks, abstract logic into helpers.ts files when sensible
Use arrow functions only for callbacks, not for component declarations
Avoid comments at all times unless the code is very complex
Do not use useCallback or useMemo unless asked to optimize a given function

Files:

  • autogpt_platform/frontend/src/app/(platform)/copilot/components/LongRunningToolWrapper/LongRunningToolWrapper.tsx
  • autogpt_platform/frontend/src/app/(platform)/copilot/tools/EditAgent/EditAgent.tsx
  • autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsx
autogpt_platform/frontend/src/app/(platform)/**/components/**/*.{ts,tsx}

πŸ“„ CodeRabbit inference engine (autogpt_platform/frontend/CLAUDE.md)

Put sub-components in local components/ folder within feature directories

Files:

  • autogpt_platform/frontend/src/app/(platform)/copilot/components/LongRunningToolWrapper/LongRunningToolWrapper.tsx
  • autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsx
autogpt_platform/frontend/src/**/*.tsx

πŸ“„ CodeRabbit inference engine (autogpt_platform/frontend/CLAUDE.md)

Component props should be type Props = { ... } (not exported) unless it needs to be used outside the component

Component props should be interface Props { ... } (not exported) unless the interface needs to be used outside the component

Files:

  • autogpt_platform/frontend/src/app/(platform)/copilot/components/LongRunningToolWrapper/LongRunningToolWrapper.tsx
  • autogpt_platform/frontend/src/app/(platform)/copilot/tools/EditAgent/EditAgent.tsx
  • autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsx
autogpt_platform/frontend/**/*.{js,jsx,ts,tsx}

πŸ“„ CodeRabbit inference engine (AGENTS.md)

autogpt_platform/frontend/**/*.{js,jsx,ts,tsx}: Format frontend code using pnpm format
Never use components from src/components/__legacy__/*

Files:

  • autogpt_platform/frontend/src/app/(platform)/copilot/components/LongRunningToolWrapper/LongRunningToolWrapper.tsx
  • autogpt_platform/frontend/src/app/(platform)/copilot/tools/EditAgent/EditAgent.tsx
  • autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsx
autogpt_platform/frontend/**/*.{js,jsx,ts,tsx,css}

πŸ“„ CodeRabbit inference engine (AGENTS.md)

Use Tailwind CSS only for styling, use design tokens, and use Phosphor Icons only

Files:

  • autogpt_platform/frontend/src/app/(platform)/copilot/components/LongRunningToolWrapper/LongRunningToolWrapper.tsx
  • autogpt_platform/frontend/src/app/(platform)/copilot/tools/EditAgent/EditAgent.tsx
  • autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsx
autogpt_platform/**/*.{ts,tsx}

πŸ“„ CodeRabbit inference engine (AGENTS.md)

Never type with any, if no types available use unknown

Files:

  • autogpt_platform/frontend/src/app/(platform)/copilot/components/LongRunningToolWrapper/LongRunningToolWrapper.tsx
  • autogpt_platform/frontend/src/app/(platform)/copilot/tools/EditAgent/EditAgent.tsx
  • autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsx
autogpt_platform/frontend/src/app/(platform)/**/*.tsx

πŸ“„ CodeRabbit inference engine (AGENTS.md)

If adding protected frontend routes, update frontend/lib/supabase/middleware.ts

Files:

  • autogpt_platform/frontend/src/app/(platform)/copilot/components/LongRunningToolWrapper/LongRunningToolWrapper.tsx
  • autogpt_platform/frontend/src/app/(platform)/copilot/tools/EditAgent/EditAgent.tsx
  • autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsx
🧠 Learnings (12)
πŸ“š Learning: 2026-02-04T16:49:42.490Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-02-04T16:49:42.490Z
Learning: Applies to autogpt_platform/frontend/src/components/**/*.{tsx,ts} : Structure React components as: ComponentName/ComponentName.tsx + useComponentName.ts + helpers.ts (exception: small 3-4 line components can be inline; render-only components can be direct files)

Applied to files:

  • autogpt_platform/frontend/src/app/(platform)/copilot/components/LongRunningToolWrapper/LongRunningToolWrapper.tsx
  • autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsx
πŸ“š Learning: 2026-02-04T16:50:33.615Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: autogpt_platform/frontend/CLAUDE.md:0-0
Timestamp: 2026-02-04T16:50:33.615Z
Learning: Applies to autogpt_platform/frontend/src/**/*.tsx : Component props should be `type Props = { ... }` (not exported) unless it needs to be used outside the component

Applied to files:

  • autogpt_platform/frontend/src/app/(platform)/copilot/components/LongRunningToolWrapper/LongRunningToolWrapper.tsx
πŸ“š Learning: 2026-02-04T16:49:42.490Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-02-04T16:49:42.490Z
Learning: Applies to autogpt_platform/frontend/**/*.{tsx,ts} : Separate render logic from data/behavior in React components; keep comments minimal (code should be self-documenting)

Applied to files:

  • autogpt_platform/frontend/src/app/(platform)/copilot/components/LongRunningToolWrapper/LongRunningToolWrapper.tsx
  • autogpt_platform/frontend/src/app/(platform)/copilot/tools/EditAgent/EditAgent.tsx
  • autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsx
πŸ“š Learning: 2026-02-04T16:50:51.495Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-02-04T16:50:51.495Z
Learning: Applies to autogpt_platform/frontend/src/**/*.tsx : Component props should be `interface Props { ... }` (not exported) unless the interface needs to be used outside the component

Applied to files:

  • autogpt_platform/frontend/src/app/(platform)/copilot/components/LongRunningToolWrapper/LongRunningToolWrapper.tsx
πŸ“š Learning: 2026-02-04T16:50:51.495Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-02-04T16:50:51.495Z
Learning: Applies to autogpt_platform/frontend/src/**/*.{ts,tsx} : Colocate state when possible, avoid creating large components, use sub-components in local `/components` folder

Applied to files:

  • autogpt_platform/frontend/src/app/(platform)/copilot/tools/EditAgent/EditAgent.tsx
  • autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsx
πŸ“š Learning: 2026-02-04T16:49:42.490Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-02-04T16:49:42.490Z
Learning: Applies to autogpt_platform/frontend/**/*.{tsx,ts} : Use design system components from 'src/components/' (atoms, molecules, organisms) in frontend development

Applied to files:

  • autogpt_platform/frontend/src/app/(platform)/copilot/tools/EditAgent/EditAgent.tsx
πŸ“š Learning: 2026-02-04T16:50:33.615Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: autogpt_platform/frontend/CLAUDE.md:0-0
Timestamp: 2026-02-04T16:50:33.615Z
Learning: Applies to autogpt_platform/frontend/src/**/*.{ts,tsx} : Use design system components from `src/components/` (atoms, molecules, organisms)

Applied to files:

  • autogpt_platform/frontend/src/app/(platform)/copilot/tools/EditAgent/EditAgent.tsx
πŸ“š Learning: 2026-02-04T16:50:33.615Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: autogpt_platform/frontend/CLAUDE.md:0-0
Timestamp: 2026-02-04T16:50:33.615Z
Learning: Applies to autogpt_platform/frontend/src/**/*.{ts,tsx} : Separate render logic (`.tsx`) from business logic (`use*.ts` hooks)

Applied to files:

  • autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsx
πŸ“š Learning: 2026-02-04T16:50:51.495Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-02-04T16:50:51.495Z
Learning: Applies to autogpt_platform/frontend/src/**/*.{ts,tsx} : Separate render logic from business logic with component.tsx + useComponent.ts + helpers.ts structure

Applied to files:

  • autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsx
πŸ“š Learning: 2026-02-04T16:50:51.495Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-02-04T16:50:51.495Z
Learning: Applies to autogpt_platform/frontend/src/**/*.{ts,tsx} : Structure components as `ComponentName/ComponentName.tsx` + `useComponentName.ts` + `helpers.ts` and use design system components from `src/components/` (atoms, molecules, organisms)

Applied to files:

  • autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsx
πŸ“š Learning: 2026-02-04T16:50:33.615Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: autogpt_platform/frontend/CLAUDE.md:0-0
Timestamp: 2026-02-04T16:50:33.615Z
Learning: Applies to autogpt_platform/frontend/src/components/**/*.{ts,tsx} : Structure components as `ComponentName/ComponentName.tsx` + `useComponentName.ts` + `helpers.ts`

Applied to files:

  • autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsx
πŸ“š Learning: 2026-02-04T16:50:51.495Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-02-04T16:50:51.495Z
Learning: Applies to autogpt_platform/frontend/src/**/*.{ts,tsx} : Avoid large hooks, abstract logic into `helpers.ts` files when sensible

Applied to files:

  • autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsx
🧬 Code graph analysis (2)
autogpt_platform/frontend/src/app/(platform)/copilot/components/LongRunningToolWrapper/LongRunningToolWrapper.tsx (2)
autogpt_platform/frontend/src/app/(platform)/copilot/tools/long-running-tools.ts (1)
  • isLongRunningTool (18-20)
autogpt_platform/frontend/src/app/(platform)/copilot/components/LongRunningToolDisplay/LongRunningToolDisplay.tsx (1)
  • LongRunningToolDisplay (18-35)
autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsx (1)
autogpt_platform/frontend/src/app/(platform)/copilot/components/LongRunningToolWrapper/LongRunningToolWrapper.tsx (1)
  • LongRunningToolWrapper (15-31)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (10)
  • GitHub Check: lint
  • GitHub Check: integration_test
  • GitHub Check: types
  • GitHub Check: end-to-end tests
  • GitHub Check: test (3.12)
  • GitHub Check: test (3.13)
  • GitHub Check: test (3.11)
  • GitHub Check: Seer Code Review
  • GitHub Check: Check PR Status
  • GitHub Check: Analyze (python)
πŸ”‡ Additional comments (3)
autogpt_platform/frontend/src/app/(platform)/copilot/tools/EditAgent/EditAgent.tsx (1)

90-96: LGTM β€” simplified hasExpandableContent condition is correct.

Removing the three operation-state predicates and retaining only the four terminal output predicates accurately reflects the coordinated backend removal of is_long_running. The getAccordionMeta fallthrough to the error case remains safe because hasExpandableContent already gates the accordion behind one of the four predicates, so getAccordionMeta is never called with an unhandled output type.

autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsx (1)

212-317: LGTM β€” wrapping is clean and key placement is correct.

Moving `key` to `LongRunningToolWrapper` (the outermost element in each list item) is the right React pattern. The `part as ToolUIPart` cast on both the wrapper and the inner tool components is redundant for the inner components since the type is unchanged, but it's harmless and consistent with the existing style.

autogpt_platform/frontend/src/app/(platform)/copilot/components/LongRunningToolWrapper/LongRunningToolWrapper.tsx (1)

18-19: isStreaming state coverage is correct.

ToolUIPart.state is a discriminated union of exactly 'input-streaming', 'input-available', and 'output-available' β€” there is no 'output-streaming' state on tool parts. The two-state OR covers all in-progress phases (model generating arguments β†’ arguments received, tool executing) and the mini-game correctly disappears once 'output-available' is reached.

πŸ€– Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Nitpick comments:
In `autogpt_platform/frontend/src/app/(platform)/copilot/components/LongRunningToolWrapper/LongRunningToolWrapper.tsx`:
- Around line 10-14: Remove the redundant JSDoc and inline comments in the
LongRunningToolWrapper component: delete the block comment describing behavior
at the top and remove the short inline comments near the tool name extraction,
LONG_RUNNING_TOOLS check, and rendering, leaving the component logic and
identifiers (LongRunningToolWrapper, LONG_RUNNING_TOOLS, LongRunningToolDisplay)
intact so the code itself documents its behavior.

…ing only for frontend UI

The executor was still spawning background tasks when it saw is_long_running=True,
triggering the old async delegation pattern with 'operation is still running' messages.

This caused:
- Async delegation instead of synchronous execution with streaming
- Session timeouts waiting for async completion
- Mini-game not displaying because tool execution wasn't streaming properly

Fix:
- Remove async delegation code from _yield_tool_call (lines 1434-1586 in service.py)
- All tools now execute synchronously with heartbeats, regardless of is_long_running
- The is_long_running property is now ONLY used by frontend to show mini-game UI
- Update function docstring to reflect new behavior
- Remove unused imports: OperationStartedResponse, OperationPendingResponse, OperationInProgressResponse

The mini-game feature now works as intended:
1. Backend tools set is_long_running = True for UI display hint
2. Executor runs ALL tools synchronously with streaming
3. Frontend ToolWrapper detects is_long_running and shows mini-game during streaming
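The flow described above can be sketched as a minimal async generator. All names here (`StreamEvent`, `Tool`, `yield_tool_call`) are hypothetical stand-ins, not the actual `service.py` code; the point is that `is_long_running` only emits a UI hint and never changes how the tool executes:

```python
import asyncio
from dataclasses import dataclass


@dataclass
class StreamEvent:
    type: str
    payload: dict


@dataclass
class Tool:
    name: str
    is_long_running: bool

    async def run(self, arguments: dict) -> dict:
        return {"tool": self.name, "ok": True}


async def yield_tool_call(tool: Tool, arguments: dict):
    # is_long_running only produces a hint event for the frontend;
    # execution is never delegated to a background task.
    if tool.is_long_running:
        yield StreamEvent("long-running-start", {"toolName": tool.name})
    output = await tool.run(arguments)
    yield StreamEvent("tool-output-available", {"output": output})


async def main() -> list[str]:
    tool = Tool("create_agent", is_long_running=True)
    return [e.type async for e in yield_tool_call(tool, {})]


print(asyncio.run(main()))  # β†’ ['long-running-start', 'tool-output-available']
```

Both events arrive on the same stream, so the session never waits on an async completion that might time out.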
@github-actions github-actions bot added the conflicts Automatically applied to PRs with merge conflicts label Feb 21, 2026
@github-actions

This pull request has conflicts with the base branch, please resolve those so we can evaluate the pull request.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
autogpt_platform/frontend/src/app/(platform)/copilot/components/ToolWrapper/ToolWrapper.tsx (1)

10-13: Remove JSDoc and inline comments β€” they violate the "avoid comments" guideline.

The JSDoc block and the two inline comments inside the JSX ({/* Automatically show mini-game... */} and {/* Render the actual tool component */}) add no information beyond what the code already expresses.

♻️ Proposed fix
-/**
- * Wrapper for all tool components. Automatically shows mini-game
- * for long-running tools by checking LONG_RUNNING_TOOLS list.
- */
 export function ToolWrapper({ part, children }: Props) {
   const toolName = part.type.replace(/^tool-/, "");
   const isStreaming =
     part.state === "input-streaming" || part.state === "input-available";

   return (
     <>
-      {/* Automatically show mini-game if tool is long-running and streaming */}
       {isLongRunningTool(toolName) && (
         <LongRunningToolDisplay isStreaming={isStreaming} />
       )}
-      {/* Render the actual tool component */}
       {children}
     </>
   );
 }

As per coding guidelines: "Avoid comments at all times unless the code is very complex."

πŸ€– Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `autogpt_platform/frontend/src/app/(platform)/copilot/components/ToolWrapper/ToolWrapper.tsx`
around lines 10 - 13, Remove the JSDoc header and the two JSX inline comments
inside the ToolWrapper component: delete the top JSDoc block above the
ToolWrapper component and remove the comments that reference
LONG_RUNNING_TOOLS/mini-game and the "Render the actual tool component" JSX
comments so the code contains no explanatory comments; keep all references to
LONG_RUNNING_TOOLS and the ToolWrapper component logic unchanged.
πŸ“œ Review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled

You can enable these sources in your CodeRabbit configuration.

πŸ“₯ Commits

Reviewing files that changed from the base of the PR and between 1de260c and 95afa8c.

πŸ“’ Files selected for processing (3)
  • autogpt_platform/backend/sample.logs
  • autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsx
  • autogpt_platform/frontend/src/app/(platform)/copilot/components/ToolWrapper/ToolWrapper.tsx
βœ… Files skipped from review due to trivial changes (1)
  • autogpt_platform/backend/sample.logs
🚧 Files skipped from review as they are similar to previous changes (1)
  • autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsx
🧰 Additional context used
πŸ““ Path-based instructions (10)
autogpt_platform/frontend/**/*.{ts,tsx,js,jsx}

πŸ“„ CodeRabbit inference engine (.github/copilot-instructions.md)

autogpt_platform/frontend/**/*.{ts,tsx,js,jsx}: Use Node.js 21+ with pnpm package manager for frontend development
Always run 'pnpm format' for formatting and linting code in frontend development

Files:

  • autogpt_platform/frontend/src/app/(platform)/copilot/components/ToolWrapper/ToolWrapper.tsx
autogpt_platform/frontend/**/*.{tsx,ts}

πŸ“„ CodeRabbit inference engine (.github/copilot-instructions.md)

autogpt_platform/frontend/**/*.{tsx,ts}: Use function declarations for components and handlers (not arrow functions) in React components
Only use arrow functions for small inline lambdas (map, filter, etc.) in React components
Use PascalCase for component names and camelCase with 'use' prefix for hook names in React
Use Tailwind CSS utilities only for styling in frontend components
Use design system components from 'src/components/' (atoms, molecules, organisms) in frontend development
Never use 'src/components/legacy/' in frontend code
Only use Phosphor Icons (@phosphor-icons/react) for icons in frontend components
Use generated API hooks from '@/app/api/__generated__/endpoints/' instead of deprecated 'BackendAPI' or 'src/lib/autogpt-server-api/'
Use React Query for server state (via generated hooks) in frontend development
Default to client components ('use client') in Next.js; only use server components for SEO or extreme TTFB needs
Use 'ErrorCard' component for rendering errors in frontend UI; use toast notifications for mutation errors; use 'Sentry.captureException()' for manual exceptions
Separate render logic from data/behavior in React components; keep comments minimal (code should be self-documenting)

Files:

  • autogpt_platform/frontend/src/app/(platform)/copilot/components/ToolWrapper/ToolWrapper.tsx
autogpt_platform/frontend/**/*.{ts,tsx}

πŸ“„ CodeRabbit inference engine (.github/copilot-instructions.md)

autogpt_platform/frontend/**/*.{ts,tsx}: No barrel files or 'index.ts' re-exports in frontend code
Regenerate API hooks with 'pnpm generate:api' after backend OpenAPI spec changes in frontend development

Files:

  • autogpt_platform/frontend/src/app/(platform)/copilot/components/ToolWrapper/ToolWrapper.tsx
autogpt_platform/frontend/src/**/*.{ts,tsx}

πŸ“„ CodeRabbit inference engine (autogpt_platform/frontend/CLAUDE.md)

autogpt_platform/frontend/src/**/*.{ts,tsx}: Fully capitalize acronyms in symbols, e.g. graphID, useBackendAPI
Use function declarations (not arrow functions) for components and handlers
Separate render logic (.tsx) from business logic (use*.ts hooks)
Use shadcn/ui (Radix UI primitives) with Tailwind CSS styling for UI components
Use Phosphor Icons only for icons
Use ErrorCard for render errors, toast for mutations, and Sentry for exceptions
Use design system components from src/components/ (atoms, molecules, organisms)
Never use src/components/__legacy__/* components
Use generated API hooks from @/app/api/__generated__/endpoints/ with pattern use{Method}{Version}{OperationName}
Use Tailwind CSS only for styling, with design tokens
Do not use useCallback or useMemo unless asked to optimize a given function
Never type with any unless a variable/attribute can ACTUALLY be of any type

autogpt_platform/frontend/src/**/*.{ts,tsx}: Structure components as ComponentName/ComponentName.tsx + useComponentName.ts + helpers.ts and use design system components from src/components/ (atoms, molecules, organisms)
Use generated API hooks from @/app/api/__generated__/endpoints/ with pattern use{Method}{Version}{OperationName} and regenerate with pnpm generate:api
Use function declarations (not arrow functions) for components and handlers
Separate render logic from business logic with component.tsx + useComponent.ts + helpers.ts structure
Colocate state when possible, avoid creating large components, use sub-components in local /components folder
Avoid large hooks, abstract logic into helpers.ts files when sensible
Use arrow functions only for callbacks, not for component declarations
Avoid comments at all times unless the code is very complex
Do not use useCallback or useMemo unless asked to optimize a given function

Files:

  • autogpt_platform/frontend/src/app/(platform)/copilot/components/ToolWrapper/ToolWrapper.tsx
autogpt_platform/frontend/src/app/(platform)/**/components/**/*.{ts,tsx}

πŸ“„ CodeRabbit inference engine (autogpt_platform/frontend/CLAUDE.md)

Put sub-components in local components/ folder within feature directories

Files:

  • autogpt_platform/frontend/src/app/(platform)/copilot/components/ToolWrapper/ToolWrapper.tsx
autogpt_platform/frontend/src/**/*.tsx

πŸ“„ CodeRabbit inference engine (autogpt_platform/frontend/CLAUDE.md)

Component props should be type Props = { ... } (not exported) unless it needs to be used outside the component

Component props should be interface Props { ... } (not exported) unless the interface needs to be used outside the component

Files:

  • autogpt_platform/frontend/src/app/(platform)/copilot/components/ToolWrapper/ToolWrapper.tsx
autogpt_platform/frontend/**/*.{js,jsx,ts,tsx}

πŸ“„ CodeRabbit inference engine (AGENTS.md)

autogpt_platform/frontend/**/*.{js,jsx,ts,tsx}: Format frontend code using pnpm format
Never use components from src/components/__legacy__/*

Files:

  • autogpt_platform/frontend/src/app/(platform)/copilot/components/ToolWrapper/ToolWrapper.tsx
autogpt_platform/frontend/**/*.{js,jsx,ts,tsx,css}

πŸ“„ CodeRabbit inference engine (AGENTS.md)

Use Tailwind CSS only for styling, use design tokens, and use Phosphor Icons only

Files:

  • autogpt_platform/frontend/src/app/(platform)/copilot/components/ToolWrapper/ToolWrapper.tsx
autogpt_platform/**/*.{ts,tsx}

πŸ“„ CodeRabbit inference engine (AGENTS.md)

Never type with any, if no types available use unknown

Files:

  • autogpt_platform/frontend/src/app/(platform)/copilot/components/ToolWrapper/ToolWrapper.tsx
autogpt_platform/frontend/src/app/(platform)/**/*.tsx

πŸ“„ CodeRabbit inference engine (AGENTS.md)

If adding protected frontend routes, update frontend/lib/supabase/middleware.ts

Files:

  • autogpt_platform/frontend/src/app/(platform)/copilot/components/ToolWrapper/ToolWrapper.tsx
🧠 Learnings (4)
πŸ“š Learning: 2026-02-04T16:49:42.490Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-02-04T16:49:42.490Z
Learning: Applies to autogpt_platform/frontend/src/components/**/*.{tsx,ts} : Structure React components as: ComponentName/ComponentName.tsx + useComponentName.ts + helpers.ts (exception: small 3-4 line components can be inline; render-only components can be direct files)

Applied to files:

  • autogpt_platform/frontend/src/app/(platform)/copilot/components/ToolWrapper/ToolWrapper.tsx
πŸ“š Learning: 2026-02-04T16:50:33.615Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: autogpt_platform/frontend/CLAUDE.md:0-0
Timestamp: 2026-02-04T16:50:33.615Z
Learning: Applies to autogpt_platform/frontend/src/components/**/*.{ts,tsx} : Structure components as `ComponentName/ComponentName.tsx` + `useComponentName.ts` + `helpers.ts`

Applied to files:

  • autogpt_platform/frontend/src/app/(platform)/copilot/components/ToolWrapper/ToolWrapper.tsx
πŸ“š Learning: 2026-02-04T16:50:33.615Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: autogpt_platform/frontend/CLAUDE.md:0-0
Timestamp: 2026-02-04T16:50:33.615Z
Learning: Applies to autogpt_platform/frontend/src/**/*.tsx : Component props should be `type Props = { ... }` (not exported) unless it needs to be used outside the component

Applied to files:

  • autogpt_platform/frontend/src/app/(platform)/copilot/components/ToolWrapper/ToolWrapper.tsx
πŸ“š Learning: 2026-02-04T16:50:51.495Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-02-04T16:50:51.495Z
Learning: Applies to autogpt_platform/frontend/src/**/*.{ts,tsx} : Structure components as `ComponentName/ComponentName.tsx` + `useComponentName.ts` + `helpers.ts` and use design system components from `src/components/` (atoms, molecules, organisms)

Applied to files:

  • autogpt_platform/frontend/src/app/(platform)/copilot/components/ToolWrapper/ToolWrapper.tsx
🧬 Code graph analysis (1)
autogpt_platform/frontend/src/app/(platform)/copilot/components/ToolWrapper/ToolWrapper.tsx (2)
autogpt_platform/frontend/src/app/(platform)/copilot/tools/long-running-tools.ts (1)
  • isLongRunningTool (18-20)
autogpt_platform/frontend/src/app/(platform)/copilot/components/LongRunningToolDisplay/LongRunningToolDisplay.tsx (1)
  • LongRunningToolDisplay (18-35)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
  • GitHub Check: end-to-end tests
  • GitHub Check: test (3.12)
  • GitHub Check: test (3.13)
  • GitHub Check: test (3.11)
  • GitHub Check: Check PR Status
πŸ€– Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `autogpt_platform/frontend/src/app/(platform)/copilot/components/ToolWrapper/ToolWrapper.tsx`:
- Line 7: The prop type uses React.ReactNode without importing React; update the
ToolWrapper component to import the type and use it explicitly: add an import
like `import type { ReactNode } from 'react'` at the top of ToolWrapper.tsx and
change the prop declaration from `children: React.ReactNode;` to `children:
ReactNode;` (ensure the change is applied to the ToolWrapper props/interface
where `children` is declared).

---

Nitpick comments:
In `autogpt_platform/frontend/src/app/(platform)/copilot/components/ToolWrapper/ToolWrapper.tsx`:
- Around line 10-13: Remove the JSDoc header and the two JSX inline comments
inside the ToolWrapper component: delete the top JSDoc block above the
ToolWrapper component and remove the comments that reference
LONG_RUNNING_TOOLS/mini-game and the "Render the actual tool component" JSX
comments so the code contains no explanatory comments; keep all references to
LONG_RUNNING_TOOLS and the ToolWrapper component logic unchanged.


interface Props {
part: ToolUIPart;
children: React.ReactNode;

⚠️ Potential issue | πŸ”΄ Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash

# Find and read the main tsconfig
echo "=== Main tsconfig.json ==="
find autogpt_platform/frontend -name "tsconfig.json" -type f | head -5 | while read file; do
  echo "File: $file"
  cat "$file"
  echo ""
done

# Check for global type declarations
echo "=== Global declarations ==="
find autogpt_platform/frontend -name "global.d.ts" -o -name "globals.d.ts" | while read file; do
  echo "File: $file"
  cat "$file"
  echo ""
done

# Check next.config and jsx settings
echo "=== next.config files ==="
find autogpt_platform/frontend -name "next.config.*" | while read file; do
  echo "File: $file"
  head -50 "$file"
  echo ""
done

Repository: Significant-Gravitas/AutoGPT

Length of output: 2718


🏁 Script executed:

#!/bin/bash

# Also check the actual file to see all its imports
echo "=== ToolWrapper.tsx file content ==="
cat -n autogpt_platform/frontend/src/app/\(platform\)/copilot/components/ToolWrapper/ToolWrapper.tsx

Repository: Significant-Gravitas/AutoGPT

Length of output: 1301


Remove React namespace; import ReactNode explicitly.

React.ReactNode is used on line 7 without importing React. With strict TypeScript mode enabled and no automatic React type declarations configured, this should cause a compilation error. Use explicit imports for type clarity and correctness.

♻️ Proposed fix
 import type { ToolUIPart } from "ai";
+import type { ReactNode } from "react";
 import { isLongRunningTool } from "../../tools/long-running-tools";
 import { LongRunningToolDisplay } from "../LongRunningToolDisplay/LongRunningToolDisplay";

 interface Props {
   part: ToolUIPart;
-  children: React.ReactNode;
+  children: ReactNode;
 }
πŸ€– Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `autogpt_platform/frontend/src/app/(platform)/copilot/components/ToolWrapper/ToolWrapper.tsx`
at line 7, The prop type uses React.ReactNode without importing React; update
the ToolWrapper component to import the type and use it explicitly: add an
import like `import type { ReactNode } from 'react'` at the top of
ToolWrapper.tsx and change the prop declaration from `children:
React.ReactNode;` to `children: ReactNode;` (ensure the change is applied to the
ToolWrapper props/interface where `children` is declared).

Replace hardcoded LONG_RUNNING_TOOLS list with stream-based communication.
Backend now yields StreamLongRunningStart event when a long-running tool begins.

Changes:
- Add LONG_RUNNING_START to ResponseType enum
- Add StreamLongRunningStart class to response_model.py
- Yield StreamLongRunningStart after StreamToolInputAvailable when tool.is_long_running
- Import get_tool in service.py

Frontend will listen for this event to show UI feedback (e.g., mini-game)
during long-running operations, eliminating the need for hardcoded tool lists.
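The event type described in this commit can be sketched roughly as below. This is a simplified stand-in using stdlib dataclasses rather than the project's actual Pydantic models in `response_model.py`; field names follow the commit message, but the exact class shape is an assumption:

```python
from dataclasses import dataclass, field
from enum import Enum


class ResponseType(str, Enum):
    TOOL_INPUT_AVAILABLE = "tool-input-available"
    LONG_RUNNING_START = "long-running-start"  # added by this commit


@dataclass
class StreamLongRunningStart:
    # Emitted right after StreamToolInputAvailable when
    # tool.is_long_running is True.
    toolCallId: str
    toolName: str
    type: str = field(default=ResponseType.LONG_RUNNING_START.value)


event = StreamLongRunningStart(toolCallId="call_123", toolName="create_agent")
print(event.type)  # β†’ long-running-start
```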
…nning tools

Replace hardcoded LONG_RUNNING_TOOLS list with event-based detection.
Frontend now listens for 'long-running-start' stream events from backend.

Changes:
- Update ToolWrapper to accept message prop and check for long-running-start events
- Pass message to all ToolWrapper instances in ChatMessagesContainer
- Remove long-running-tools.ts (hardcoded list)
- Check if any message part has type 'long-running-start' with matching toolCallId
- Update comments to be more generic ("UI feedback" instead of "mini-game")

Benefits:
- Single source of truth (backend is_long_running property)
- No list synchronization needed between backend and frontend
- More flexible - backend can decide at runtime
- Cleaner architecture using existing streaming infrastructure
Replace all references to 'mini-game' in comments/docstrings with generic
'UI feedback' to allow for future UI variations.

Changes:
- base.py: 'shows mini-game in UI' β†’ 'triggers long-running UI'
- create/edit/customize_agent.py: Remove '- show mini-game' from docstrings
- service.py: 'mini-game UI' β†’ 'UI feedback'
- response_model.py: Remove '(like a mini-game)' example
- LongRunningToolDisplay: 'Displays a mini-game' β†’ 'Displays UI feedback'
- ToolWrapper: Remove '(e.g., mini-game)' example

Keep implementation flexible for future UI changes.
Remove the long-running callback that was spawning background tasks
for tools like create_agent and edit_agent in the SDK path. Tools
now run synchronously with heartbeats, matching the behavior of the
main service.py executor.

Changes:
- Remove _build_long_running_callback function
- Set long_running_callback=None in set_execution_context
- Remove unused imports (LongRunningCallback, OperationPendingResponse, etc.)
- Update tool supplement comment to reflect synchronous execution
- Remove accidentally committed sample.logs file

This fixes the "stream timed out" issue where tools were delegated to
background and session would stop prematurely.
Comment on lines 1424 to 1434
input=arguments,
)

# Check if this tool is long-running (survives SSE disconnection)
# Notify frontend if this is a long-running tool (e.g., agent generation)
tool = get_tool(tool_name)
if tool and tool.is_long_running:
# Atomic check-and-set: returns False if operation already running (lost race)
if not await _mark_operation_started(tool_call_id):
logger.info(
f"Tool call {tool_call_id} already in progress, returning status"
)
# Build dynamic message based on tool name
if tool_name == "create_agent":
in_progress_msg = "Agent creation already in progress. Please wait..."
elif tool_name == "edit_agent":
in_progress_msg = "Agent edit already in progress. Please wait..."
else:
in_progress_msg = f"{tool_name} already in progress. Please wait..."

yield StreamToolOutputAvailable(
toolCallId=tool_call_id,
toolName=tool_name,
output=OperationInProgressResponse(
message=in_progress_msg,
tool_call_id=tool_call_id,
).model_dump_json(),
success=True,
)
return

# Generate operation ID and task ID
operation_id = str(uuid_module.uuid4())
task_id = str(uuid_module.uuid4())

# Build a user-friendly message based on tool and arguments
if tool_name == "create_agent":
agent_desc = arguments.get("description", "")
# Truncate long descriptions for the message
desc_preview = (
(agent_desc[:100] + "...") if len(agent_desc) > 100 else agent_desc
)
pending_msg = (
f"Creating your agent: {desc_preview}"
if desc_preview
else "Creating agent... This may take a few minutes."
)
started_msg = (
"Agent creation started. You can close this tab - "
"check your library in a few minutes."
)
elif tool_name == "edit_agent":
changes = arguments.get("changes", "")
changes_preview = (changes[:100] + "...") if len(changes) > 100 else changes
pending_msg = (
f"Editing agent: {changes_preview}"
if changes_preview
else "Editing agent... This may take a few minutes."
)
started_msg = (
"Agent edit started. You can close this tab - "
"check your library in a few minutes."
)
else:
pending_msg = f"Running {tool_name}... This may take a few minutes."
started_msg = (
f"{tool_name} started. You can close this tab - "
"check back in a few minutes."
)

# Track appended message for rollback on failure
pending_message: ChatMessage | None = None

# Wrap session save and task creation in try-except to release lock on failure
try:
# Create task in stream registry for SSE reconnection support
await stream_registry.create_task(
task_id=task_id,
session_id=session.session_id,
user_id=session.user_id,
tool_call_id=tool_call_id,
tool_name=tool_name,
operation_id=operation_id,
)

# Attach tool_call and save pending result β€” lock serialises
# concurrent session mutations during parallel execution.
async def _save_pending() -> None:
nonlocal pending_message
session.add_tool_call_to_current_turn(tool_calls[yield_idx])
pending_message = ChatMessage(
role="tool",
content=OperationPendingResponse(
message=pending_msg,
operation_id=operation_id,
tool_name=tool_name,
).model_dump_json(),
tool_call_id=tool_call_id,
)
session.messages.append(pending_message)
await upsert_chat_session(session)

await _with_optional_lock(session_lock, _save_pending)
logger.info(
f"Saved pending operation {operation_id} (task_id={task_id}) "
f"for tool {tool_name} in session {session.session_id}"
)

# Store task reference in module-level set to prevent GC before completion
bg_task = asyncio.create_task(
_execute_long_running_tool_with_streaming(
tool_name=tool_name,
parameters=arguments,
tool_call_id=tool_call_id,
operation_id=operation_id,
task_id=task_id,
session_id=session.session_id,
user_id=session.user_id,
)
)
_background_tasks.add(bg_task)
bg_task.add_done_callback(_background_tasks.discard)

# Associate the asyncio task with the stream registry task
await stream_registry.set_task_asyncio_task(task_id, bg_task)
except Exception as e:
# Roll back appended messages β€” use identity-based removal so
# it works even when other parallel tools have appended after us.
async def _rollback() -> None:
if pending_message and pending_message in session.messages:
session.messages.remove(pending_message)

await _with_optional_lock(session_lock, _rollback)

# Release the Redis lock since the background task won't be spawned
await _mark_operation_completed(tool_call_id)
# Mark stream registry task as failed if it was created
try:
await stream_registry.mark_task_completed(task_id, status="failed")
except Exception as mark_err:
logger.warning(f"Failed to mark task {task_id} as failed: {mark_err}")
logger.error(
f"Failed to setup long-running tool {tool_name}: {e}", exc_info=True
)
raise

# Return immediately - don't wait for completion
- yield StreamToolOutputAvailable(
+ yield StreamLongRunningStart(
toolCallId=tool_call_id,
toolName=tool_name,
output=OperationStartedResponse(
message=started_msg,
operation_id=operation_id,
tool_name=tool_name,
task_id=task_id, # Include task_id for SSE reconnection
).model_dump_json(),
success=True,
)
return

This comment was marked as outdated.

Add logic to detect long-running tools in the SDK execution path
and emit StreamLongRunningStart event to trigger UI feedback display.

Changes:
- Import StreamLongRunningStart and get_tool
- Check if tool has is_long_running=True when StreamToolInputAvailable is received
- Yield StreamLongRunningStart event to notify frontend

This ensures the mini-game UI displays for long-running tools
like create_agent when using the SDK execution path.
Changed StreamLongRunningStart event type from "long-running-start" to
"data-long-running-start" to match the Vercel AI SDK's DataUIPart format.
This ensures the event is properly added to message.parts and can be
detected by the frontend.

Changes:
- Backend: Update event type to "data-long-running-start"
- Backend: Wrap toolCallId/toolName in a "data" object
- Frontend: Check for "data-long-running-start" type and access data.toolCallId

This follows the AI SDK protocol for custom data events.
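A rough sketch of the serialized frame this commit describes, assuming a plain SSE transport; the helper name and exact JSON layout are illustrative, but the `data-<name>` type with the payload nested under `"data"` follows the AI SDK data-part convention named above:

```python
import json


def long_running_start_frame(tool_call_id: str, tool_name: str) -> str:
    # Custom AI SDK data parts use a "data-<name>" type with the payload
    # wrapped in a "data" object, so the part lands in message.parts.
    event = {
        "type": "data-long-running-start",
        "data": {"toolCallId": tool_call_id, "toolName": tool_name},
    }
    return f"data: {json.dumps(event)}\n\n"  # one SSE frame


frame = long_running_start_frame("call_123", "create_agent")
print(frame.strip())
```

On the frontend, a part of type `data-long-running-start` can then be matched against a tool part's `toolCallId` via `part.data.toolCallId`.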
…able

Instead of sending a separate custom event, add isLongRunning boolean
to the existing StreamToolInputAvailable event. This is much simpler
and works with the AI SDK without needing custom event handling.

Backend changes:
- Add isLongRunning field to StreamToolInputAvailable
- Check tool.is_long_running in response_adapter and set the flag
- Remove separate StreamLongRunningStart emission

Frontend changes:
- Check part.isLongRunning directly on the tool part
- Remove message prop from ToolWrapper (no longer needed)
- Simplify detection logic

This approach piggybacks on the existing tool-input-available event
that the AI SDK already recognizes and adds to message.parts.
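The final flag-based approach can be sketched as follows. `TOOL_REGISTRY` and `tool_input_event` are hypothetical stand-ins for the project's tool lookup and response adapter; the idea is simply that the existing input-available event gains one boolean derived from the backend's `is_long_running` property:

```python
from dataclasses import dataclass, field


@dataclass
class StreamToolInputAvailable:
    toolCallId: str
    toolName: str
    input: dict
    isLongRunning: bool = False  # new field added by this commit
    type: str = field(default="tool-input-available")


# Stand-in for the backend tool registry lookup (get_tool).
TOOL_REGISTRY = {
    "create_agent": {"is_long_running": True},
    "find_block": {"is_long_running": False},
}


def tool_input_event(
    tool_call_id: str, tool_name: str, args: dict
) -> StreamToolInputAvailable:
    tool = TOOL_REGISTRY.get(tool_name) or {}
    # Piggyback the hint on an event the AI SDK already adds to message.parts.
    return StreamToolInputAvailable(
        toolCallId=tool_call_id,
        toolName=tool_name,
        input=args,
        isLongRunning=bool(tool.get("is_long_running")),
    )


event = tool_input_event("call_1", "create_agent", {"description": "daily digest"})
print(event.isLongRunning)  # β†’ True
```

The frontend then checks `part.isLongRunning` directly, with no separate event stream to correlate.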
Comment on lines 1425 to 1435
)

# Check if this tool is long-running (survives SSE disconnection)
# Notify frontend if this is a long-running tool (e.g., agent generation)
tool = get_tool(tool_name)
if tool and tool.is_long_running:
# Atomic check-and-set: returns False if operation already running (lost race)
if not await _mark_operation_started(tool_call_id):
logger.info(
f"Tool call {tool_call_id} already in progress, returning status"
)
# Build dynamic message based on tool name
if tool_name == "create_agent":
in_progress_msg = "Agent creation already in progress. Please wait..."
elif tool_name == "edit_agent":
in_progress_msg = "Agent edit already in progress. Please wait..."
else:
in_progress_msg = f"{tool_name} already in progress. Please wait..."

yield StreamToolOutputAvailable(
toolCallId=tool_call_id,
toolName=tool_name,
output=OperationInProgressResponse(
message=in_progress_msg,
tool_call_id=tool_call_id,
).model_dump_json(),
success=True,
)
return

# Generate operation ID and task ID
operation_id = str(uuid_module.uuid4())
task_id = str(uuid_module.uuid4())

# Build a user-friendly message based on tool and arguments
if tool_name == "create_agent":
agent_desc = arguments.get("description", "")
# Truncate long descriptions for the message
desc_preview = (
(agent_desc[:100] + "...") if len(agent_desc) > 100 else agent_desc
)
pending_msg = (
f"Creating your agent: {desc_preview}"
if desc_preview
else "Creating agent... This may take a few minutes."
)
started_msg = (
"Agent creation started. You can close this tab - "
"check your library in a few minutes."
)
elif tool_name == "edit_agent":
changes = arguments.get("changes", "")
changes_preview = (changes[:100] + "...") if len(changes) > 100 else changes
pending_msg = (
f"Editing agent: {changes_preview}"
if changes_preview
else "Editing agent... This may take a few minutes."
)
started_msg = (
"Agent edit started. You can close this tab - "
"check your library in a few minutes."
)
else:
pending_msg = f"Running {tool_name}... This may take a few minutes."
started_msg = (
f"{tool_name} started. You can close this tab - "
"check back in a few minutes."
)

# Track appended message for rollback on failure
pending_message: ChatMessage | None = None

# Wrap session save and task creation in try-except to release lock on failure
try:
# Create task in stream registry for SSE reconnection support
await stream_registry.create_task(
task_id=task_id,
session_id=session.session_id,
user_id=session.user_id,
tool_call_id=tool_call_id,
tool_name=tool_name,
operation_id=operation_id,
)

# Attach tool_call and save pending result β€” lock serialises
# concurrent session mutations during parallel execution.
async def _save_pending() -> None:
nonlocal pending_message
session.add_tool_call_to_current_turn(tool_calls[yield_idx])
pending_message = ChatMessage(
role="tool",
content=OperationPendingResponse(
message=pending_msg,
operation_id=operation_id,
tool_name=tool_name,
).model_dump_json(),
tool_call_id=tool_call_id,
)
session.messages.append(pending_message)
await upsert_chat_session(session)

await _with_optional_lock(session_lock, _save_pending)
logger.info(
f"Saved pending operation {operation_id} (task_id={task_id}) "
f"for tool {tool_name} in session {session.session_id}"
)

# Store task reference in module-level set to prevent GC before completion
bg_task = asyncio.create_task(
_execute_long_running_tool_with_streaming(
tool_name=tool_name,
parameters=arguments,
tool_call_id=tool_call_id,
operation_id=operation_id,
task_id=task_id,
session_id=session.session_id,
user_id=session.user_id,
)
)
_background_tasks.add(bg_task)
bg_task.add_done_callback(_background_tasks.discard)

# Associate the asyncio task with the stream registry task
await stream_registry.set_task_asyncio_task(task_id, bg_task)
except Exception as e:
# Roll back appended messages β€” use identity-based removal so
# it works even when other parallel tools have appended after us.
async def _rollback() -> None:
if pending_message and pending_message in session.messages:
session.messages.remove(pending_message)

await _with_optional_lock(session_lock, _rollback)

# Release the Redis lock since the background task won't be spawned
await _mark_operation_completed(tool_call_id)
# Mark stream registry task as failed if it was created
try:
await stream_registry.mark_task_completed(task_id, status="failed")
except Exception as mark_err:
logger.warning(f"Failed to mark task {task_id} as failed: {mark_err}")
logger.error(
f"Failed to setup long-running tool {tool_name}: {e}", exc_info=True
)
raise

# Return immediately - don't wait for completion
yield StreamToolOutputAvailable(
toolCallId=tool_call_id,
toolName=tool_name,
output=OperationStartedResponse(
message=started_msg,
operation_id=operation_id,
tool_name=tool_name,
task_id=task_id, # Include task_id for SSE reconnection
).model_dump_json(),
success=True,
)
yield StreamLongRunningStart(
data={
"toolCallId": tool_call_id,
"toolName": tool_name,
}
)
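Per the PR summary, the replacement path is much simpler: emit the UI-hint event, then run the tool inline within the stream. A minimal sketch of that flow — the event classes and tool below are illustrative stand-ins, not the real `service.py` / `response_model.py` types:

```python
import asyncio
from dataclasses import dataclass

# Illustrative stand-ins for the stream event types; the real definitions
# live in response_model.py.
@dataclass
class StreamLongRunningStart:
    data: dict

@dataclass
class StreamToolOutputAvailable:
    toolCallId: str
    toolName: str
    output: str
    success: bool

@dataclass
class FakeTool:
    name: str
    is_long_running: bool

    async def execute(self, **kwargs) -> str:
        return f"{self.name} finished"

async def yield_tool_call(tool, tool_call_id: str, arguments: dict):
    # After this PR: no background task is spawned. Long-running tools get a
    # UI-hint event first, then execute synchronously within the stream.
    if tool.is_long_running:
        yield StreamLongRunningStart(
            data={"toolCallId": tool_call_id, "toolName": tool.name}
        )
    output = await tool.execute(**arguments)
    yield StreamToolOutputAvailable(
        toolCallId=tool_call_id,
        toolName=tool.name,
        output=output,
        success=True,
    )

async def _demo():
    tool = FakeTool(name="create_agent", is_long_running=True)
    return [event async for event in yield_tool_call(tool, "tc-1", {})]

events = asyncio.run(_demo())
print([type(e).__name__ for e in events])
```

Because the tool runs inside the stream, the frontend can hide the mini-game as soon as the `StreamToolOutputAvailable` event arrives — no polling or reconnection needed.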


Address CodeRabbit review comment by using direct relative paths
instead of convoluted ../../tools/CreateAgent/../../components paths.

The AI SDK strips unknown fields from tool-input-available events.
Use the standard providerMetadata field instead, which the SDK
preserves, to pass the isLongRunning flag to the frontend.

Backend changes:
- Change the isLongRunning field to a providerMetadata object
- Set providerMetadata: {isLongRunning: true} for long-running tools
- Add debug logging to verify the flag is set

Frontend changes:
- Check part.providerMetadata.isLongRunning instead of part.isLongRunning
- Add console debug logging to verify detection

Tested programmatically - the complete flow works correctly.
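Since the SDK drops unknown top-level fields, the flag has to ride inside providerMetadata. A rough sketch of how the backend payload might be assembled — the exact event shape and helper name are assumptions for illustration, not copied from the PR:

```python
def build_tool_input_event(
    tool_call_id: str, tool_name: str, is_long_running: bool
) -> dict:
    # The AI SDK strips unknown top-level fields from tool-input-available
    # events, so the long-running flag is carried in the standard
    # providerMetadata object, which the SDK preserves end-to-end.
    event = {
        "type": "tool-input-available",
        "toolCallId": tool_call_id,
        "toolName": tool_name,
    }
    if is_long_running:
        event["providerMetadata"] = {"isLongRunning": True}
    return event

event = build_tool_input_event("tc-1", "create_agent", True)
print(event["providerMetadata"])  # {'isLongRunning': True}
```

On the frontend, the corresponding check is then against `part.providerMetadata.isLongRunning`, as the commit describes.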

ToolWrapper no longer accepts a message prop. This was causing
TypeScript errors and preventing the component from rendering.
All ToolWrapper calls now pass only the part and children props.
Fixes 11 TypeScript compilation errors.

- Only invalidate session queries on successful completion (status='ready')
- Previously invalidated on both 'ready' and 'error' status
- When the backend returned 500, the error status triggered a refetch that caused an infinite loop
- Fixes the spam of 'Let me check!' messages when the backend is unavailable

- The AI SDK's ToolUIPart doesn't have toolName as a separate field
- The tool name is encoded in the type field as 'tool-{name}'
- Extract it using substring(5) to remove the 'tool-' prefix
- Update debug logging to show the extracted toolName
- This fixes 'toolName: unknown' in the console logs

- Agent creation can take longer than 12 seconds
- The previous 12s timeout was causing 'Stream timed out' errors
- Increased to 60s to accommodate long-running tool execution

- Disable input when status='submitted' to prevent message spam
- Set the stream start timeout to 30s (it only detects that the backend is down; it doesn't affect tool execution)
- Once the stream starts, tools can run indefinitely (the timeout is cleared)
- The mini-game shows during long-running tool execution without a timeout
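The tool-name extraction in these commits is simple prefix stripping; a Python equivalent of the frontend's substring(5) logic (the helper name here is made up for illustration):

```python
def tool_name_from_part_type(part_type: str) -> str:
    # The AI SDK's ToolUIPart encodes the tool name in its type field as
    # "tool-{name}", so stripping the 5-character "tool-" prefix recovers
    # the name. Anything without the prefix falls back to "unknown".
    prefix = "tool-"
    if part_type.startswith(prefix):
        return part_type[len(prefix):]
    return "unknown"

print(tool_name_from_part_type("tool-create_agent"))  # create_agent
print(tool_name_from_part_type("text"))  # unknown
```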