fix(copilot): remove is_long_running hack from agent generation tools#12180
fix(copilot): remove is_long_running hack from agent generation tools#12180
Conversation
Remove the `is_long_running = True` override from create_agent, edit_agent, and customize_agent tools. Now that CoPilot runs in the executor service (which already handles background execution), the async delegation pattern is unnecessary. This fixes the issue where agent generation completion messages never appeared in chat because the code was exiting early expecting an external Redis Stream completion that never came. The tools now execute synchronously in the CoPilot executor and stream completion messages back to chat immediately. Fixes: Agent generation completion not showing in chat
β¦ation Remove all dead code related to the async processing delegation pattern that is no longer needed after removing the is_long_running hack: - Remove `_operation_id` and `_task_id` parameter extraction - Remove passing these params to generate_agent/generate_agent_patch - Remove `status: "accepted"` checks and AsyncProcessingResponse returns - Remove AsyncProcessingResponse class definition from models.py - Remove operation_id/task_id params from agent_generator functions: - generate_agent() and generate_agent_external() - generate_agent_patch() and generate_agent_patch_external() - generate_agent_dummy() and generate_agent_patch_dummy() - Remove 202 Accepted handling for async processing This cleanup removes 126 lines of code that was supporting the old async delegation workflow.
|
Note Reviews pausedIt looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the Use the following commands to manage reviews:
Use the checkboxes below for quick actions:
WalkthroughThis PR removes asynchronous operation processing from the agent generation backend flow and updates the frontend to use synchronous operations with mini-game UI during long-running tool execution. Backend functions eliminate operation_id and task_id parameters, the AsyncProcessingResponse model is deleted, and frontend components are restructured to conditionally display a LongRunningToolDisplay wrapper instead of handling operation state responses. Changes
Estimated code review effortπ― 3 (Moderate) | β±οΈ ~25 minutes Possibly related PRs
Suggested reviewers
Poem
π₯ Pre-merge checks | β 3β Passed checks (3 passed)
βοΈ Tip: You can configure your own custom pre-merge checks in the settings. β¨ Finishing Touchesπ§ͺ Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out. Comment |
|
β¦task_id Update test assertions to match new function signatures after removing operation_id and task_id parameters from generate_agent_external and generate_agent_patch_external. Fixes: - TestGenerateAgent::test_calls_external_service - TestGenerateAgentPatch::test_calls_external_service
β¦game display - Add is_long_running property to BaseTool for UI feedback control - Mark create_agent, edit_agent, customize_agent as long-running tools - Create LongRunningToolDisplay component for generic mini-game UI - Clean up CreateAgent and EditAgent to use shared component - Remove manual title configuration, use generic message - Create LONG_RUNNING_TOOLS constant for frontend reference This makes it easy to add new long-running tools without UI changes.
autogpt_platform/backend/backend/copilot/tools/agent_generator/service.py
Show resolved
Hide resolved
There was a problem hiding this comment.
Actionable comments posted: 1
π§Ή Nitpick comments (3)
autogpt_platform/frontend/src/app/(platform)/copilot/components/LongRunningToolDisplay/LongRunningToolDisplay.tsx (2)
2-5: Convoluted relative import path β simplify to../ToolAccordion/AccordionContentThe import resolves correctly but the path
../../tools/CreateAgent/../../components/ToolAccordion/AccordionContentis unnecessarily indirect (it looks copy-pasted from atools/file). From this file's location incomponents/LongRunningToolDisplay/, the direct path is simply:π§ Proposed fix
-import { - ContentGrid, - ContentHint, -} from "../../tools/CreateAgent/../../components/ToolAccordion/AccordionContent"; +import { + ContentGrid, + ContentHint, +} from "../ToolAccordion/AccordionContent";π€ Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@autogpt_platform/frontend/src/app/`(platform)/copilot/components/LongRunningToolDisplay/LongRunningToolDisplay.tsx around lines 2 - 5, The import for ContentGrid and ContentHint in LongRunningToolDisplay.tsx uses a convoluted relative path; update the import statement that currently references "../../tools/CreateAgent/../../components/ToolAccordion/AccordionContent" to the simplified relative path "../ToolAccordion/AccordionContent" so ContentGrid and ContentHint are imported directly from the neighboring ToolAccordion folder.
7-7: Dependency direction: sharedcomponents/importing from feature-specifictools/CreateAgent/
MiniGamelives undertools/CreateAgent/components/MiniGame/, butLongRunningToolDisplayis a shared component undercomponents/. Having a shared component depend on a feature-local component inverts the expected dependency direction (features β shared, not shared β feature). SinceMiniGameis now used by bothCreateAgentandEditAgentviaLongRunningToolDisplay, it should be moved to a shared location (e.g.,components/MiniGame/) and the import updated accordingly.π€ Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@autogpt_platform/frontend/src/app/`(platform)/copilot/components/LongRunningToolDisplay/LongRunningToolDisplay.tsx at line 7, LongRunningToolDisplay imports the feature-local MiniGame, inverting dependency direction; move the MiniGame component from tools/CreateAgent/components/MiniGame/ into the shared components area (e.g., components/MiniGame/) and update the import in LongRunningToolDisplay.tsx to reference the new shared path; ensure any internal imports/exports inside the moved MiniGame module stay correct (export name MiniGame) and update any other callers (CreateAgent/EditAgent) to the shared location so shared β features dependency is preserved.autogpt_platform/backend/backend/copilot/tools/create_agent.py (1)
225-238: No timeout guard on the synchronousgenerate_agentcall
generate_agentis now awaited synchronously and can take several minutes. If the external agent generator service hangs,_executewill block indefinitely with no timeout or circuit-breaker. Ensure the executor service that hosts CoPilot enforces an upper-bound deadline (e.g., viaasyncio.wait_for) for long-running tool invocations.π€ Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@autogpt_platform/backend/backend/copilot/tools/create_agent.py` around lines 225 - 238, The await of generate_agent inside _execute has no timeout and can block indefinitely; wrap the call to generate_agent with asyncio.wait_for using a configurable timeout constant (e.g., AGENT_GENERATION_TIMEOUT or DEFAULT_TIMEOUT) and import asyncio, then handle asyncio.TimeoutError to return an ErrorResponse (similar shape to the existing AgentGeneratorNotConfiguredError handling) with a clear message and error code like "timeout" or "service_unresponsive"; ensure you catch and log the timeout separately from other exceptions so long-running or hung external generator calls are bounded.
π Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled
You can enable these sources in your CodeRabbit configuration.
π Files selected for processing (10)
autogpt_platform/backend/backend/copilot/tools/base.pyautogpt_platform/backend/backend/copilot/tools/create_agent.pyautogpt_platform/backend/backend/copilot/tools/customize_agent.pyautogpt_platform/backend/backend/copilot/tools/edit_agent.pyautogpt_platform/frontend/src/app/(platform)/copilot/components/LongRunningToolDisplay/LongRunningToolDisplay.tsxautogpt_platform/frontend/src/app/(platform)/copilot/tools/CreateAgent/CreateAgent.tsxautogpt_platform/frontend/src/app/(platform)/copilot/tools/CreateAgent/helpers.tsxautogpt_platform/frontend/src/app/(platform)/copilot/tools/EditAgent/EditAgent.tsxautogpt_platform/frontend/src/app/(platform)/copilot/tools/EditAgent/helpers.tsxautogpt_platform/frontend/src/app/(platform)/copilot/tools/long-running-tools.ts
π€ Files with no reviewable changes (2)
- autogpt_platform/frontend/src/app/(platform)/copilot/tools/CreateAgent/helpers.tsx
- autogpt_platform/frontend/src/app/(platform)/copilot/tools/EditAgent/helpers.tsx
π§ Files skipped from review as they are similar to previous changes (1)
- autogpt_platform/backend/backend/copilot/tools/customize_agent.py
π§° Additional context used
π Path-based instructions (15)
autogpt_platform/frontend/**/*.{ts,tsx,js,jsx}
π CodeRabbit inference engine (.github/copilot-instructions.md)
autogpt_platform/frontend/**/*.{ts,tsx,js,jsx}: Use Node.js 21+ with pnpm package manager for frontend development
Always run 'pnpm format' for formatting and linting code in frontend development
Files:
autogpt_platform/frontend/src/app/(platform)/copilot/components/LongRunningToolDisplay/LongRunningToolDisplay.tsxautogpt_platform/frontend/src/app/(platform)/copilot/tools/long-running-tools.tsautogpt_platform/frontend/src/app/(platform)/copilot/tools/CreateAgent/CreateAgent.tsxautogpt_platform/frontend/src/app/(platform)/copilot/tools/EditAgent/EditAgent.tsx
autogpt_platform/frontend/**/*.{tsx,ts}
π CodeRabbit inference engine (.github/copilot-instructions.md)
autogpt_platform/frontend/**/*.{tsx,ts}: Use function declarations for components and handlers (not arrow functions) in React components
Only use arrow functions for small inline lambdas (map, filter, etc.) in React components
Use PascalCase for component names and camelCase with 'use' prefix for hook names in React
Use Tailwind CSS utilities only for styling in frontend components
Use design system components from 'src/components/' (atoms, molecules, organisms) in frontend development
Never use 'src/components/legacy/' in frontend code
Only use Phosphor Icons (@phosphor-icons/react) for icons in frontend components
Use generated API hooks from '@/app/api/generated/endpoints/' instead of deprecated 'BackendAPI' or 'src/lib/autogpt-server-api/'
Use React Query for server state (via generated hooks) in frontend development
Default to client components ('use client') in Next.js; only use server components for SEO or extreme TTFB needs
Use '' component for rendering errors in frontend UI; use toast notifications for mutation errors; use 'Sentry.captureException()' for manual exceptions
Separate render logic from data/behavior in React components; keep comments minimal (code should be self-documenting)
Files:
autogpt_platform/frontend/src/app/(platform)/copilot/components/LongRunningToolDisplay/LongRunningToolDisplay.tsxautogpt_platform/frontend/src/app/(platform)/copilot/tools/long-running-tools.tsautogpt_platform/frontend/src/app/(platform)/copilot/tools/CreateAgent/CreateAgent.tsxautogpt_platform/frontend/src/app/(platform)/copilot/tools/EditAgent/EditAgent.tsx
autogpt_platform/frontend/**/*.{ts,tsx}
π CodeRabbit inference engine (.github/copilot-instructions.md)
autogpt_platform/frontend/**/*.{ts,tsx}: No barrel files or 'index.ts' re-exports in frontend code
Regenerate API hooks with 'pnpm generate:api' after backend OpenAPI spec changes in frontend development
Files:
autogpt_platform/frontend/src/app/(platform)/copilot/components/LongRunningToolDisplay/LongRunningToolDisplay.tsxautogpt_platform/frontend/src/app/(platform)/copilot/tools/long-running-tools.tsautogpt_platform/frontend/src/app/(platform)/copilot/tools/CreateAgent/CreateAgent.tsxautogpt_platform/frontend/src/app/(platform)/copilot/tools/EditAgent/EditAgent.tsx
autogpt_platform/frontend/src/**/*.{ts,tsx}
π CodeRabbit inference engine (autogpt_platform/frontend/CLAUDE.md)
autogpt_platform/frontend/src/**/*.{ts,tsx}: Fully capitalize acronyms in symbols, e.g.graphID,useBackendAPI
Use function declarations (not arrow functions) for components and handlers
Separate render logic (.tsx) from business logic (use*.tshooks)
Use shadcn/ui (Radix UI primitives) with Tailwind CSS styling for UI components
Use Phosphor Icons only for icons
Use ErrorCard for render errors, toast for mutations, and Sentry for exceptions
Use design system components fromsrc/components/(atoms, molecules, organisms)
Never usesrc/components/__legacy__/*components
Use generated API hooks from@/app/api/__generated__/endpoints/with patternuse{Method}{Version}{OperationName}
Use Tailwind CSS only for styling, with design tokens
Do not useuseCallbackoruseMemounless asked to optimize a given function
Never type withanyunless a variable/attribute can ACTUALLY be of any type
autogpt_platform/frontend/src/**/*.{ts,tsx}: Structure components asComponentName/ComponentName.tsx+useComponentName.ts+helpers.tsand use design system components fromsrc/components/(atoms, molecules, organisms)
Use generated API hooks from@/app/api/__generated__/endpoints/with patternuse{Method}{Version}{OperationName}and regenerate withpnpm generate:api
Use function declarations (not arrow functions) for components and handlers
Separate render logic from business logic with component.tsx + useComponent.ts + helpers.ts structure
Colocate state when possible, avoid creating large components, use sub-components in local/componentsfolder
Avoid large hooks, abstract logic intohelpers.tsfiles when sensible
Use arrow functions only for callbacks, not for component declarations
Avoid comments at all times unless the code is very complex
Do not useuseCallbackoruseMemounless asked to optimize a given function
Files:
autogpt_platform/frontend/src/app/(platform)/copilot/components/LongRunningToolDisplay/LongRunningToolDisplay.tsxautogpt_platform/frontend/src/app/(platform)/copilot/tools/long-running-tools.tsautogpt_platform/frontend/src/app/(platform)/copilot/tools/CreateAgent/CreateAgent.tsxautogpt_platform/frontend/src/app/(platform)/copilot/tools/EditAgent/EditAgent.tsx
autogpt_platform/frontend/src/app/(platform)/**/components/**/*.{ts,tsx}
π CodeRabbit inference engine (autogpt_platform/frontend/CLAUDE.md)
Put sub-components in local
components/folder within feature directories
Files:
autogpt_platform/frontend/src/app/(platform)/copilot/components/LongRunningToolDisplay/LongRunningToolDisplay.tsx
autogpt_platform/frontend/src/**/*.tsx
π CodeRabbit inference engine (autogpt_platform/frontend/CLAUDE.md)
Component props should be
type Props = { ... }(not exported) unless it needs to be used outside the componentComponent props should be
interface Props { ... }(not exported) unless the interface needs to be used outside the component
Files:
autogpt_platform/frontend/src/app/(platform)/copilot/components/LongRunningToolDisplay/LongRunningToolDisplay.tsxautogpt_platform/frontend/src/app/(platform)/copilot/tools/CreateAgent/CreateAgent.tsxautogpt_platform/frontend/src/app/(platform)/copilot/tools/EditAgent/EditAgent.tsx
autogpt_platform/frontend/**/*.{js,jsx,ts,tsx}
π CodeRabbit inference engine (AGENTS.md)
autogpt_platform/frontend/**/*.{js,jsx,ts,tsx}: Format frontend code usingpnpm format
Never use components fromsrc/components/__legacy__/*
Files:
autogpt_platform/frontend/src/app/(platform)/copilot/components/LongRunningToolDisplay/LongRunningToolDisplay.tsxautogpt_platform/frontend/src/app/(platform)/copilot/tools/long-running-tools.tsautogpt_platform/frontend/src/app/(platform)/copilot/tools/CreateAgent/CreateAgent.tsxautogpt_platform/frontend/src/app/(platform)/copilot/tools/EditAgent/EditAgent.tsx
autogpt_platform/frontend/**/*.{js,jsx,ts,tsx,css}
π CodeRabbit inference engine (AGENTS.md)
Use Tailwind CSS only for styling, use design tokens, and use Phosphor Icons only
Files:
autogpt_platform/frontend/src/app/(platform)/copilot/components/LongRunningToolDisplay/LongRunningToolDisplay.tsxautogpt_platform/frontend/src/app/(platform)/copilot/tools/long-running-tools.tsautogpt_platform/frontend/src/app/(platform)/copilot/tools/CreateAgent/CreateAgent.tsxautogpt_platform/frontend/src/app/(platform)/copilot/tools/EditAgent/EditAgent.tsx
autogpt_platform/**/*.{ts,tsx}
π CodeRabbit inference engine (AGENTS.md)
Never type with
any, if no types available useunknown
Files:
autogpt_platform/frontend/src/app/(platform)/copilot/components/LongRunningToolDisplay/LongRunningToolDisplay.tsxautogpt_platform/frontend/src/app/(platform)/copilot/tools/long-running-tools.tsautogpt_platform/frontend/src/app/(platform)/copilot/tools/CreateAgent/CreateAgent.tsxautogpt_platform/frontend/src/app/(platform)/copilot/tools/EditAgent/EditAgent.tsx
autogpt_platform/frontend/src/app/(platform)/**/*.tsx
π CodeRabbit inference engine (AGENTS.md)
If adding protected frontend routes, update
frontend/lib/supabase/middleware.ts
Files:
autogpt_platform/frontend/src/app/(platform)/copilot/components/LongRunningToolDisplay/LongRunningToolDisplay.tsxautogpt_platform/frontend/src/app/(platform)/copilot/tools/CreateAgent/CreateAgent.tsxautogpt_platform/frontend/src/app/(platform)/copilot/tools/EditAgent/EditAgent.tsx
autogpt_platform/frontend/src/**/*.ts
π CodeRabbit inference engine (AGENTS.md)
Do not type hook returns, let Typescript infer as much as possible
Files:
autogpt_platform/frontend/src/app/(platform)/copilot/tools/long-running-tools.ts
autogpt_platform/backend/**/*.py
π CodeRabbit inference engine (.github/copilot-instructions.md)
autogpt_platform/backend/**/*.py: Use Python 3.11 (required; managed by Poetry via pyproject.toml) for backend development
Always run 'poetry run format' (Black + isort) before linting in backend development
Always run 'poetry run lint' (ruff) after formatting in backend development
Files:
autogpt_platform/backend/backend/copilot/tools/base.pyautogpt_platform/backend/backend/copilot/tools/create_agent.pyautogpt_platform/backend/backend/copilot/tools/edit_agent.py
autogpt_platform/backend/**/*.{py,txt}
π CodeRabbit inference engine (autogpt_platform/backend/CLAUDE.md)
Use
poetry runprefix for all Python commands, including testing, linting, formatting, and migrations
Files:
autogpt_platform/backend/backend/copilot/tools/base.pyautogpt_platform/backend/backend/copilot/tools/create_agent.pyautogpt_platform/backend/backend/copilot/tools/edit_agent.py
autogpt_platform/backend/backend/**/*.py
π CodeRabbit inference engine (autogpt_platform/backend/CLAUDE.md)
Use Prisma ORM for database operations in PostgreSQL with pgvector for embeddings
Files:
autogpt_platform/backend/backend/copilot/tools/base.pyautogpt_platform/backend/backend/copilot/tools/create_agent.pyautogpt_platform/backend/backend/copilot/tools/edit_agent.py
autogpt_platform/**/*.py
π CodeRabbit inference engine (AGENTS.md)
Format Python code with
poetry run format
Files:
autogpt_platform/backend/backend/copilot/tools/base.pyautogpt_platform/backend/backend/copilot/tools/create_agent.pyautogpt_platform/backend/backend/copilot/tools/edit_agent.py
π§ Learnings (8)
π Learning: 2026-02-04T16:49:42.490Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-02-04T16:49:42.490Z
Learning: Applies to autogpt_platform/frontend/**/*.{tsx,ts} : Separate render logic from data/behavior in React components; keep comments minimal (code should be self-documenting)
Applied to files:
autogpt_platform/frontend/src/app/(platform)/copilot/components/LongRunningToolDisplay/LongRunningToolDisplay.tsx
π Learning: 2026-02-04T16:49:42.490Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-02-04T16:49:42.490Z
Learning: Applies to autogpt_platform/frontend/**/*.{ts,tsx} : No barrel files or 'index.ts' re-exports in frontend code
Applied to files:
autogpt_platform/frontend/src/app/(platform)/copilot/tools/long-running-tools.tsautogpt_platform/frontend/src/app/(platform)/copilot/tools/EditAgent/EditAgent.tsx
π Learning: 2026-02-04T16:50:51.495Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-02-04T16:50:51.495Z
Learning: Applies to autogpt_platform/frontend/src/**/*.{ts,tsx} : Avoid large hooks, abstract logic into `helpers.ts` files when sensible
Applied to files:
autogpt_platform/frontend/src/app/(platform)/copilot/tools/long-running-tools.ts
π Learning: 2026-02-04T16:49:42.490Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-02-04T16:49:42.490Z
Learning: Applies to autogpt_platform/frontend/**/*.{tsx,ts} : Use design system components from 'src/components/' (atoms, molecules, organisms) in frontend development
Applied to files:
autogpt_platform/frontend/src/app/(platform)/copilot/tools/CreateAgent/CreateAgent.tsx
π Learning: 2026-02-04T16:50:33.615Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: autogpt_platform/frontend/CLAUDE.md:0-0
Timestamp: 2026-02-04T16:50:33.615Z
Learning: Applies to autogpt_platform/frontend/src/**/*.{ts,tsx} : Use shadcn/ui (Radix UI primitives) with Tailwind CSS styling for UI components
Applied to files:
autogpt_platform/frontend/src/app/(platform)/copilot/tools/CreateAgent/CreateAgent.tsx
π Learning: 2026-02-04T16:50:33.615Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: autogpt_platform/frontend/CLAUDE.md:0-0
Timestamp: 2026-02-04T16:50:33.615Z
Learning: Applies to autogpt_platform/frontend/src/**/*.{ts,tsx} : Use design system components from `src/components/` (atoms, molecules, organisms)
Applied to files:
autogpt_platform/frontend/src/app/(platform)/copilot/tools/CreateAgent/CreateAgent.tsx
π Learning: 2026-02-04T16:50:51.495Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-02-04T16:50:51.495Z
Learning: Applies to autogpt_platform/frontend/src/**/*.{ts,tsx} : Colocate state when possible, avoid creating large components, use sub-components in local `/components` folder
Applied to files:
autogpt_platform/frontend/src/app/(platform)/copilot/tools/EditAgent/EditAgent.tsx
π Learning: 2026-02-04T16:49:42.490Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-02-04T16:49:42.490Z
Learning: Applies to autogpt_platform/frontend/**/*.{tsx,ts} : Use '<ErrorCard />' component for rendering errors in frontend UI; use toast notifications for mutation errors; use 'Sentry.captureException()' for manual exceptions
Applied to files:
autogpt_platform/frontend/src/app/(platform)/copilot/tools/EditAgent/EditAgent.tsx
𧬠Code graph analysis (3)
autogpt_platform/frontend/src/app/(platform)/copilot/components/LongRunningToolDisplay/LongRunningToolDisplay.tsx (3)
autogpt_platform/frontend/src/app/(platform)/copilot/components/ToolAccordion/ToolAccordion.tsx (1)
ToolAccordion(21-102)autogpt_platform/frontend/src/app/(platform)/copilot/components/ToolAccordion/AccordionContent.tsx (2)
ContentGrid(9-17)ContentHint(126-138)autogpt_platform/frontend/src/app/(platform)/copilot/tools/CreateAgent/components/MiniGame/MiniGame.tsx (1)
MiniGame(9-50)
autogpt_platform/frontend/src/app/(platform)/copilot/tools/CreateAgent/CreateAgent.tsx (3)
autogpt_platform/frontend/src/app/(platform)/copilot/tools/EditAgent/helpers.tsx (1)
isAgentPreviewOutput(60-64)autogpt_platform/frontend/src/app/(platform)/copilot/tools/CreateAgent/helpers.tsx (1)
isAgentPreviewOutput(63-67)autogpt_platform/frontend/src/app/(platform)/copilot/components/LongRunningToolDisplay/LongRunningToolDisplay.tsx (1)
LongRunningToolDisplay(18-35)
autogpt_platform/frontend/src/app/(platform)/copilot/tools/EditAgent/EditAgent.tsx (3)
autogpt_platform/frontend/src/app/(platform)/copilot/tools/EditAgent/helpers.tsx (1)
isAgentPreviewOutput(60-64)autogpt_platform/frontend/src/app/(platform)/copilot/tools/CreateAgent/helpers.tsx (1)
isAgentPreviewOutput(63-67)autogpt_platform/frontend/src/app/(platform)/copilot/components/LongRunningToolDisplay/LongRunningToolDisplay.tsx (1)
LongRunningToolDisplay(18-35)
β° Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (8)
- GitHub Check: types
- GitHub Check: Seer Code Review
- GitHub Check: end-to-end tests
- GitHub Check: test (3.12)
- GitHub Check: test (3.13)
- GitHub Check: test (3.11)
- GitHub Check: Check PR Status
- GitHub Check: Analyze (python)
π Additional comments (7)
autogpt_platform/backend/backend/copilot/tools/base.py (1)
40-42: LGTM β docstring accurately reflects the new semantics.The updated docstring correctly decouples
is_long_runningfrom async backend processing and ties it to the UI mini-game indication, which is consistent with the broader PR change of removing async delegation from agent generation tools.autogpt_platform/backend/backend/copilot/tools/create_agent.py (1)
50-52:is_long_runningstill returnsTrueβ contradicts PR objectivesThe PR description states "Tools now rely on the base class default
is_long_running = False", but the property here still returnsTrue. The added docstring ("show mini-game") further confirms the intent is to keep itTrue. The PR objectives text appears to be stale relative to the actual implementation, whereis_long_running = Trueis intentionally retained for the frontend mini-game UI while only the async delegation path (operation_id/task_id) was removed.autogpt_platform/backend/backend/copilot/tools/edit_agent.py (1)
48-50: Sameis_long_running/ PR objectives mismatch ascreate_agent.pySame observation as
create_agent.py: the PR objectives claim this property should fall back to the base-classFalse, but the implementation keepsreturn True. The added docstring is consistent with the actual behavior, making the PR description the inaccurate source.autogpt_platform/frontend/src/app/(platform)/copilot/tools/EditAgent/EditAgent.tsx (1)
82-128: Cleanly simplified β LGTMOperation-state guards removed,
LongRunningToolDisplayproperly replaces the inline mini-game,hasExpandableContentreflects only real output states. No concerns.autogpt_platform/frontend/src/app/(platform)/copilot/tools/CreateAgent/CreateAgent.tsx (1)
94-149: Cleanly simplified β LGTM
LongRunningToolDisplayintegration is consistent withEditAgent.tsx. Operation-state checks correctly removed. No concerns.autogpt_platform/frontend/src/app/(platform)/copilot/tools/long-running-tools.ts (2)
18-20: RemoveisLongRunningToolfunctionβit is unused dead code
isLongRunningToolis exported but has no callers anywhere in the codebase. Neither the function nor the module it resides in (long-running-tools.ts) is imported or referenced elsewhere.CreateAgent.tsxandEditAgent.tsxcheckisStreaming(part.state) instead of calling this utility.
7-11: Backend parity is confirmed βcustomize_agent.pyhasis_long_runningreturningTrueand properly implements async delegation with theasync def _execute()method, consistent withcreate_agentandedit_agent.
π€ Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@autogpt_platform/frontend/src/app/`(platform)/copilot/components/LongRunningToolDisplay/LongRunningToolDisplay.tsx:
- Around line 24-31: The accordion title and the ContentHint inside
LongRunningToolDisplay both repeat the same wait message; remove the redundant
ContentHint component (or replace its text with supplementary info such as
keyboard controls) to avoid duplication. Locate the ContentHint element in the
LongRunningToolDisplay component (adjacent to MiniGame and within the accordion
props: title="This may take a few minutes. Play while you wait.") and either
delete that ContentHint node or update its content to provide additional,
non-duplicative guidance (e.g., game keyboard controls) so only one wait message
remains in the UI.
---
Nitpick comments:
In `@autogpt_platform/backend/backend/copilot/tools/create_agent.py`:
- Around line 225-238: The await of generate_agent inside _execute has no
timeout and can block indefinitely; wrap the call to generate_agent with
asyncio.wait_for using a configurable timeout constant (e.g.,
AGENT_GENERATION_TIMEOUT or DEFAULT_TIMEOUT) and import asyncio, then handle
asyncio.TimeoutError to return an ErrorResponse (similar shape to the existing
AgentGeneratorNotConfiguredError handling) with a clear message and error code
like "timeout" or "service_unresponsive"; ensure you catch and log the timeout
separately from other exceptions so long-running or hung external generator
calls are bounded.
In
`@autogpt_platform/frontend/src/app/`(platform)/copilot/components/LongRunningToolDisplay/LongRunningToolDisplay.tsx:
- Around line 2-5: The import for ContentGrid and ContentHint in
LongRunningToolDisplay.tsx uses a convoluted relative path; update the import
statement that currently references
"../../tools/CreateAgent/../../components/ToolAccordion/AccordionContent" to the
simplified relative path "../ToolAccordion/AccordionContent" so ContentGrid and
ContentHint are imported directly from the neighboring ToolAccordion folder.
- Line 7: LongRunningToolDisplay imports the feature-local MiniGame, inverting
dependency direction; move the MiniGame component from
tools/CreateAgent/components/MiniGame/ into the shared components area (e.g.,
components/MiniGame/) and update the import in LongRunningToolDisplay.tsx to
reference the new shared path; ensure any internal imports/exports inside the
moved MiniGame module stay correct (export name MiniGame) and update any other
callers (CreateAgent/EditAgent) to the shared location so shared β features
dependency is preserved.
| title="This may take a few minutes. Play while you wait." | ||
| defaultExpanded={true} | ||
| > | ||
| <ContentGrid> | ||
| <MiniGame /> | ||
| <ContentHint> | ||
| This could take a few minutes β play while you wait! | ||
| </ContentHint> |
There was a problem hiding this comment.
Redundant wait messaging β title and hint say the same thing
The accordion title ("This may take a few minutes. Play while you wait.") and the ContentHint inside ("This could take a few minutes β play while you wait!") convey identical information. The hint adds no additional value and the duplication is noticeable to users. Remove the ContentHint or replace it with genuinely supplementary text (e.g., keyboard controls for the game).
βοΈ Proposed fix
<ContentGrid>
<MiniGame />
- <ContentHint>
- This could take a few minutes β play while you wait!
- </ContentHint>
</ContentGrid>π Committable suggestion
βΌοΈ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| title="This may take a few minutes. Play while you wait." | |
| defaultExpanded={true} | |
| > | |
| <ContentGrid> | |
| <MiniGame /> | |
| <ContentHint> | |
| This could take a few minutes β play while you wait! | |
| </ContentHint> | |
| title="This may take a few minutes. Play while you wait." | |
| defaultExpanded={true} | |
| > | |
| <ContentGrid> | |
| <MiniGame /> |
π€ Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@autogpt_platform/frontend/src/app/`(platform)/copilot/components/LongRunningToolDisplay/LongRunningToolDisplay.tsx
around lines 24 - 31, The accordion title and the ContentHint inside
LongRunningToolDisplay both repeat the same wait message; remove the redundant
ContentHint component (or replace its text with supplementary info such as
keyboard controls) to avoid duplication. Locate the ContentHint element in the
LongRunningToolDisplay component (adjacent to MiniGame and within the accordion
props: title="This may take a few minutes. Play while you wait.") and either
delete that ContentHint node or update its content to provide additional,
non-duplicative guidance (e.g., game keyboard controls) so only one wait message
remains in the UI.
- Create LongRunningToolWrapper component that wraps ALL tools - Automatically detects if tool is long-running and shows mini-game - Remove manual LongRunningToolDisplay from CreateAgent/EditAgent - All tools (GenericTool, CustomizeAgent, etc.) now automatic - No need to add mini-game to individual tool components This makes the system completely generic - just mark is_long_running=True in backend and frontend automatically shows mini-game!
π PR Overlap DetectionThis check compares your PR against all other open PRs targeting the same branch to detect potential merge conflicts early. π΄ Merge Conflicts DetectedThe following PRs have been tested and will have merge conflicts if merged after this PR. Consider coordinating with the authors.
π’ Low Risk β File Overlap OnlyThese PRs touch the same files but different sections (click to expand)
Summary: 2 conflict(s), 0 medium risk, 1 low risk (out of 3 PRs with file overlap) Auto-generated on push. Ignores: |
ToolWrapper is a better name since it wraps ALL tools, not just long-running ones. It conditionally shows mini-game for long-running tools based on LONG_RUNNING_TOOLS list.
There was a problem hiding this comment.
π§Ή Nitpick comments (1)
autogpt_platform/frontend/src/app/(platform)/copilot/components/LongRunningToolWrapper/LongRunningToolWrapper.tsx (1)
10-14: Trim the comments β the code is self-documenting.The JSDoc (lines 10β14) restates what the function name and 3-line body already communicate, and the inline comments (
// Extract tool nameβ¦,// Automatically show mini-gameβ¦,// Render the actual tool component) are equally redundant for a 15-line wrapper.βοΈ Suggested cleanup
-/** - * Wrapper that automatically shows mini-game for long-running tools. - * Checks the tool name against LONG_RUNNING_TOOLS and displays - * LongRunningToolDisplay during streaming. - */ export function LongRunningToolWrapper({ part, children }: Props) { - // Extract tool name from part.type (e.g., "tool-create_agent" -> "create_agent") const toolName = part.type.replace(/^tool-/, ""); const isStreaming = part.state === "input-streaming" || part.state === "input-available"; return ( <> - {/* Automatically show mini-game if tool is long-running and streaming */} {isLongRunningTool(toolName) && ( <LongRunningToolDisplay isStreaming={isStreaming} /> )} - {/* Render the actual tool component */} {children} </> ); }Based on learnings: "Avoid comments at all times unless the code is very complex."
π€ Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@autogpt_platform/frontend/src/app/`(platform)/copilot/components/LongRunningToolWrapper/LongRunningToolWrapper.tsx around lines 10 - 14, Remove the redundant JSDoc and inline comments in the LongRunningToolWrapper component: delete the block comment describing behavior at the top and remove the short inline comments near the tool name extraction, LONG_RUNNING_TOOLS check, and rendering, leaving the component logic and identifiers (LongRunningToolWrapper, LONG_RUNNING_TOOLS, LongRunningToolDisplay) intact so the code itself documents its behavior.
π Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled
You can enable these sources in your CodeRabbit configuration.
π Files selected for processing (4)
autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsxautogpt_platform/frontend/src/app/(platform)/copilot/components/LongRunningToolWrapper/LongRunningToolWrapper.tsxautogpt_platform/frontend/src/app/(platform)/copilot/tools/CreateAgent/CreateAgent.tsxautogpt_platform/frontend/src/app/(platform)/copilot/tools/EditAgent/EditAgent.tsx
π§ Files skipped from review as they are similar to previous changes (1)
- autogpt_platform/frontend/src/app/(platform)/copilot/tools/CreateAgent/CreateAgent.tsx
π§° Additional context used
π Path-based instructions (10)
autogpt_platform/frontend/**/*.{ts,tsx,js,jsx}
π CodeRabbit inference engine (.github/copilot-instructions.md)
autogpt_platform/frontend/**/*.{ts,tsx,js,jsx}: Use Node.js 21+ with pnpm package manager for frontend development
Always run 'pnpm format' for formatting and linting code in frontend development
Files:
autogpt_platform/frontend/src/app/(platform)/copilot/components/LongRunningToolWrapper/LongRunningToolWrapper.tsxautogpt_platform/frontend/src/app/(platform)/copilot/tools/EditAgent/EditAgent.tsxautogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsx
autogpt_platform/frontend/**/*.{tsx,ts}
π CodeRabbit inference engine (.github/copilot-instructions.md)
autogpt_platform/frontend/**/*.{tsx,ts}: Use function declarations for components and handlers (not arrow functions) in React components
Only use arrow functions for small inline lambdas (map, filter, etc.) in React components
Use PascalCase for component names and camelCase with 'use' prefix for hook names in React
Use Tailwind CSS utilities only for styling in frontend components
Use design system components from 'src/components/' (atoms, molecules, organisms) in frontend development
Never use 'src/components/legacy/' in frontend code
Only use Phosphor Icons (@phosphor-icons/react) for icons in frontend components
Use generated API hooks from '@/app/api/generated/endpoints/' instead of deprecated 'BackendAPI' or 'src/lib/autogpt-server-api/'
Use React Query for server state (via generated hooks) in frontend development
Default to client components ('use client') in Next.js; only use server components for SEO or extreme TTFB needs
Use '' component for rendering errors in frontend UI; use toast notifications for mutation errors; use 'Sentry.captureException()' for manual exceptions
Separate render logic from data/behavior in React components; keep comments minimal (code should be self-documenting)
Files:
autogpt_platform/frontend/src/app/(platform)/copilot/components/LongRunningToolWrapper/LongRunningToolWrapper.tsxautogpt_platform/frontend/src/app/(platform)/copilot/tools/EditAgent/EditAgent.tsxautogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsx
autogpt_platform/frontend/**/*.{ts,tsx}
π CodeRabbit inference engine (.github/copilot-instructions.md)
autogpt_platform/frontend/**/*.{ts,tsx}: No barrel files or 'index.ts' re-exports in frontend code
Regenerate API hooks with 'pnpm generate:api' after backend OpenAPI spec changes in frontend development
Files:
autogpt_platform/frontend/src/app/(platform)/copilot/components/LongRunningToolWrapper/LongRunningToolWrapper.tsxautogpt_platform/frontend/src/app/(platform)/copilot/tools/EditAgent/EditAgent.tsxautogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsx
autogpt_platform/frontend/src/**/*.{ts,tsx}
π CodeRabbit inference engine (autogpt_platform/frontend/CLAUDE.md)
autogpt_platform/frontend/src/**/*.{ts,tsx}: Fully capitalize acronyms in symbols, e.g.graphID,useBackendAPI
Use function declarations (not arrow functions) for components and handlers
Separate render logic (.tsx) from business logic (use*.tshooks)
Use shadcn/ui (Radix UI primitives) with Tailwind CSS styling for UI components
Use Phosphor Icons only for icons
Use ErrorCard for render errors, toast for mutations, and Sentry for exceptions
Use design system components fromsrc/components/(atoms, molecules, organisms)
Never usesrc/components/__legacy__/*components
Use generated API hooks from@/app/api/__generated__/endpoints/with patternuse{Method}{Version}{OperationName}
Use Tailwind CSS only for styling, with design tokens
Do not useuseCallbackoruseMemounless asked to optimize a given function
Never type withanyunless a variable/attribute can ACTUALLY be of any type
autogpt_platform/frontend/src/**/*.{ts,tsx}: Structure components asComponentName/ComponentName.tsx+useComponentName.ts+helpers.tsand use design system components fromsrc/components/(atoms, molecules, organisms)
Use generated API hooks from@/app/api/__generated__/endpoints/with patternuse{Method}{Version}{OperationName}and regenerate withpnpm generate:api
Use function declarations (not arrow functions) for components and handlers
Separate render logic from business logic with component.tsx + useComponent.ts + helpers.ts structure
Colocate state when possible, avoid creating large components, use sub-components in local/componentsfolder
Avoid large hooks, abstract logic intohelpers.tsfiles when sensible
Use arrow functions only for callbacks, not for component declarations
Avoid comments at all times unless the code is very complex
Do not useuseCallbackoruseMemounless asked to optimize a given function
Files:
autogpt_platform/frontend/src/app/(platform)/copilot/components/LongRunningToolWrapper/LongRunningToolWrapper.tsxautogpt_platform/frontend/src/app/(platform)/copilot/tools/EditAgent/EditAgent.tsxautogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsx
autogpt_platform/frontend/src/app/(platform)/**/components/**/*.{ts,tsx}
π CodeRabbit inference engine (autogpt_platform/frontend/CLAUDE.md)
Put sub-components in local
components/folder within feature directories
Files:
autogpt_platform/frontend/src/app/(platform)/copilot/components/LongRunningToolWrapper/LongRunningToolWrapper.tsxautogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsx
autogpt_platform/frontend/src/**/*.tsx
π CodeRabbit inference engine (autogpt_platform/frontend/CLAUDE.md)
Component props should be
type Props = { ... }(not exported) unless it needs to be used outside the componentComponent props should be
interface Props { ... }(not exported) unless the interface needs to be used outside the component
Files:
autogpt_platform/frontend/src/app/(platform)/copilot/components/LongRunningToolWrapper/LongRunningToolWrapper.tsxautogpt_platform/frontend/src/app/(platform)/copilot/tools/EditAgent/EditAgent.tsxautogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsx
autogpt_platform/frontend/**/*.{js,jsx,ts,tsx}
π CodeRabbit inference engine (AGENTS.md)
autogpt_platform/frontend/**/*.{js,jsx,ts,tsx}: Format frontend code usingpnpm format
Never use components fromsrc/components/__legacy__/*
Files:
autogpt_platform/frontend/src/app/(platform)/copilot/components/LongRunningToolWrapper/LongRunningToolWrapper.tsxautogpt_platform/frontend/src/app/(platform)/copilot/tools/EditAgent/EditAgent.tsxautogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsx
autogpt_platform/frontend/**/*.{js,jsx,ts,tsx,css}
π CodeRabbit inference engine (AGENTS.md)
Use Tailwind CSS only for styling, use design tokens, and use Phosphor Icons only
Files:
autogpt_platform/frontend/src/app/(platform)/copilot/components/LongRunningToolWrapper/LongRunningToolWrapper.tsxautogpt_platform/frontend/src/app/(platform)/copilot/tools/EditAgent/EditAgent.tsxautogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsx
autogpt_platform/**/*.{ts,tsx}
π CodeRabbit inference engine (AGENTS.md)
Never type with
any, if no types available useunknown
Files:
autogpt_platform/frontend/src/app/(platform)/copilot/components/LongRunningToolWrapper/LongRunningToolWrapper.tsxautogpt_platform/frontend/src/app/(platform)/copilot/tools/EditAgent/EditAgent.tsxautogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsx
autogpt_platform/frontend/src/app/(platform)/**/*.tsx
π CodeRabbit inference engine (AGENTS.md)
If adding protected frontend routes, update
frontend/lib/supabase/middleware.ts
Files:
autogpt_platform/frontend/src/app/(platform)/copilot/components/LongRunningToolWrapper/LongRunningToolWrapper.tsxautogpt_platform/frontend/src/app/(platform)/copilot/tools/EditAgent/EditAgent.tsxautogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsx
π§ Learnings (12)
π Learning: 2026-02-04T16:49:42.490Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-02-04T16:49:42.490Z
Learning: Applies to autogpt_platform/frontend/src/components/**/*.{tsx,ts} : Structure React components as: ComponentName/ComponentName.tsx + useComponentName.ts + helpers.ts (exception: small 3-4 line components can be inline; render-only components can be direct files)
Applied to files:
autogpt_platform/frontend/src/app/(platform)/copilot/components/LongRunningToolWrapper/LongRunningToolWrapper.tsxautogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsx
π Learning: 2026-02-04T16:50:33.615Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: autogpt_platform/frontend/CLAUDE.md:0-0
Timestamp: 2026-02-04T16:50:33.615Z
Learning: Applies to autogpt_platform/frontend/src/**/*.tsx : Component props should be `type Props = { ... }` (not exported) unless it needs to be used outside the component
Applied to files:
autogpt_platform/frontend/src/app/(platform)/copilot/components/LongRunningToolWrapper/LongRunningToolWrapper.tsx
π Learning: 2026-02-04T16:49:42.490Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-02-04T16:49:42.490Z
Learning: Applies to autogpt_platform/frontend/**/*.{tsx,ts} : Separate render logic from data/behavior in React components; keep comments minimal (code should be self-documenting)
Applied to files:
autogpt_platform/frontend/src/app/(platform)/copilot/components/LongRunningToolWrapper/LongRunningToolWrapper.tsxautogpt_platform/frontend/src/app/(platform)/copilot/tools/EditAgent/EditAgent.tsxautogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsx
π Learning: 2026-02-04T16:50:51.495Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-02-04T16:50:51.495Z
Learning: Applies to autogpt_platform/frontend/src/**/*.tsx : Component props should be `interface Props { ... }` (not exported) unless the interface needs to be used outside the component
Applied to files:
autogpt_platform/frontend/src/app/(platform)/copilot/components/LongRunningToolWrapper/LongRunningToolWrapper.tsx
π Learning: 2026-02-04T16:50:51.495Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-02-04T16:50:51.495Z
Learning: Applies to autogpt_platform/frontend/src/**/*.{ts,tsx} : Colocate state when possible, avoid creating large components, use sub-components in local `/components` folder
Applied to files:
autogpt_platform/frontend/src/app/(platform)/copilot/tools/EditAgent/EditAgent.tsxautogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsx
π Learning: 2026-02-04T16:49:42.490Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-02-04T16:49:42.490Z
Learning: Applies to autogpt_platform/frontend/**/*.{tsx,ts} : Use design system components from 'src/components/' (atoms, molecules, organisms) in frontend development
Applied to files:
autogpt_platform/frontend/src/app/(platform)/copilot/tools/EditAgent/EditAgent.tsx
π Learning: 2026-02-04T16:50:33.615Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: autogpt_platform/frontend/CLAUDE.md:0-0
Timestamp: 2026-02-04T16:50:33.615Z
Learning: Applies to autogpt_platform/frontend/src/**/*.{ts,tsx} : Use design system components from `src/components/` (atoms, molecules, organisms)
Applied to files:
autogpt_platform/frontend/src/app/(platform)/copilot/tools/EditAgent/EditAgent.tsx
π Learning: 2026-02-04T16:50:33.615Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: autogpt_platform/frontend/CLAUDE.md:0-0
Timestamp: 2026-02-04T16:50:33.615Z
Learning: Applies to autogpt_platform/frontend/src/**/*.{ts,tsx} : Separate render logic (`.tsx`) from business logic (`use*.ts` hooks)
Applied to files:
autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsx
π Learning: 2026-02-04T16:50:51.495Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-02-04T16:50:51.495Z
Learning: Applies to autogpt_platform/frontend/src/**/*.{ts,tsx} : Separate render logic from business logic with component.tsx + useComponent.ts + helpers.ts structure
Applied to files:
autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsx
π Learning: 2026-02-04T16:50:51.495Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-02-04T16:50:51.495Z
Learning: Applies to autogpt_platform/frontend/src/**/*.{ts,tsx} : Structure components as `ComponentName/ComponentName.tsx` + `useComponentName.ts` + `helpers.ts` and use design system components from `src/components/` (atoms, molecules, organisms)
Applied to files:
autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsx
π Learning: 2026-02-04T16:50:33.615Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: autogpt_platform/frontend/CLAUDE.md:0-0
Timestamp: 2026-02-04T16:50:33.615Z
Learning: Applies to autogpt_platform/frontend/src/components/**/*.{ts,tsx} : Structure components as `ComponentName/ComponentName.tsx` + `useComponentName.ts` + `helpers.ts`
Applied to files:
autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsx
π Learning: 2026-02-04T16:50:51.495Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-02-04T16:50:51.495Z
Learning: Applies to autogpt_platform/frontend/src/**/*.{ts,tsx} : Avoid large hooks, abstract logic into `helpers.ts` files when sensible
Applied to files:
autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsx
𧬠Code graph analysis (2)
autogpt_platform/frontend/src/app/(platform)/copilot/components/LongRunningToolWrapper/LongRunningToolWrapper.tsx (2)
autogpt_platform/frontend/src/app/(platform)/copilot/tools/long-running-tools.ts (1)
isLongRunningTool(18-20)autogpt_platform/frontend/src/app/(platform)/copilot/components/LongRunningToolDisplay/LongRunningToolDisplay.tsx (1)
LongRunningToolDisplay(18-35)
autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsx (1)
autogpt_platform/frontend/src/app/(platform)/copilot/components/LongRunningToolWrapper/LongRunningToolWrapper.tsx (1)
LongRunningToolWrapper(15-31)
β° Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (10)
- GitHub Check: lint
- GitHub Check: integration_test
- GitHub Check: types
- GitHub Check: end-to-end tests
- GitHub Check: test (3.12)
- GitHub Check: test (3.13)
- GitHub Check: test (3.11)
- GitHub Check: Seer Code Review
- GitHub Check: Check PR Status
- GitHub Check: Analyze (python)
π Additional comments (3)
autogpt_platform/frontend/src/app/(platform)/copilot/tools/EditAgent/EditAgent.tsx (1)
90-96: LGTM β simplifiedhasExpandableContentcondition is correct.Removing the three operation-state predicates and retaining only the four terminal output predicates accurately reflects the coordinated backend removal of
is_long_running. ThegetAccordionMetafallthrough to the error case remains safe becausehasExpandableContentalready gates the accordion behind one of the four predicates, sogetAccordionMetais never called with an unhandled output type.autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsx (1)
212-317: LGTM β wrapping is clean andkeyplacement is correct.Moving
keytoLongRunningToolWrapper(the outermost element in each list item) is the right React pattern. Thepart as ToolUIPartcast on both the wrapper and the inner tool components is redundant for the inner components since the type is unchanged, but it's harmless and consistent with the existing style.autogpt_platform/frontend/src/app/(platform)/copilot/components/LongRunningToolWrapper/LongRunningToolWrapper.tsx (1)
18-19:isStreamingstate coverage is correct.
ToolUIPart.stateis a discriminated union of exactly'input-streaming','input-available', and'output-available'β there is no'output-streaming'state on tool parts. The two-state OR covers all in-progress phases (model generating arguments β arguments received, tool executing) and the mini-game correctly disappears once'output-available'is reached.
π€ Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In
`@autogpt_platform/frontend/src/app/`(platform)/copilot/components/LongRunningToolWrapper/LongRunningToolWrapper.tsx:
- Around line 10-14: Remove the redundant JSDoc and inline comments in the
LongRunningToolWrapper component: delete the block comment describing behavior
at the top and remove the short inline comments near the tool name extraction,
LONG_RUNNING_TOOLS check, and rendering, leaving the component logic and
identifiers (LongRunningToolWrapper, LONG_RUNNING_TOOLS, LongRunningToolDisplay)
intact so the code itself documents its behavior.
β¦ing only for frontend UI The executor was still spawning background tasks when it saw is_long_running=True, triggering the old async delegation pattern with 'operation is still running' messages. This caused: - Async delegation instead of synchronous execution with streaming - Session timeouts waiting for async completion - Mini-game not displaying because tool execution wasn't streaming properly Fix: - Remove async delegation code from _yield_tool_call (lines 1434-1586 in service.py) - All tools now execute synchronously with heartbeats, regardless of is_long_running - The is_long_running property is now ONLY used by frontend to show mini-game UI - Update function docstring to reflect new behavior - Remove unused imports: OperationStartedResponse, OperationPendingResponse, OperationInProgressResponse The mini-game feature now works as intended: 1. Backend tools set is_long_running = True for UI display hint 2. Executor runs ALL tools synchronously with streaming 3. Frontend ToolWrapper detects is_long_running and shows mini-game during streaming
|
This pull request has conflicts with the base branch, please resolve those so we can evaluate the pull request. |
There was a problem hiding this comment.
Actionable comments posted: 1
π§Ή Nitpick comments (1)
autogpt_platform/frontend/src/app/(platform)/copilot/components/ToolWrapper/ToolWrapper.tsx (1)
10-13: Remove JSDoc and inline comments β they violate the "avoid comments" guideline.The JSDoc block and the two inline comments inside the JSX (
{/* Automatically show mini-game... */}and{/* Render the actual tool component */}) add no information beyond what the code already expresses.β»οΈ Proposed fix
-/** - * Wrapper for all tool components. Automatically shows mini-game - * for long-running tools by checking LONG_RUNNING_TOOLS list. - */ export function ToolWrapper({ part, children }: Props) { const toolName = part.type.replace(/^tool-/, ""); const isStreaming = part.state === "input-streaming" || part.state === "input-available"; return ( <> - {/* Automatically show mini-game if tool is long-running and streaming */} {isLongRunningTool(toolName) && ( <LongRunningToolDisplay isStreaming={isStreaming} /> )} - {/* Render the actual tool component */} {children} </> ); }As per coding guidelines: "Avoid comments at all times unless the code is very complex."
π€ Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@autogpt_platform/frontend/src/app/`(platform)/copilot/components/ToolWrapper/ToolWrapper.tsx around lines 10 - 13, Remove the JSDoc header and the two JSX inline comments inside the ToolWrapper component: delete the top JSDoc block above the ToolWrapper component and remove the comments that reference LONG_RUNNING_TOOLS/mini-game and the "Render the actual tool component" JSX comments so the code contains no explanatory comments; keep all references to LONG_RUNNING_TOOLS and the ToolWrapper component logic unchanged.
π Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled
You can enable these sources in your CodeRabbit configuration.
π Files selected for processing (3)
autogpt_platform/backend/sample.logsautogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsxautogpt_platform/frontend/src/app/(platform)/copilot/components/ToolWrapper/ToolWrapper.tsx
β Files skipped from review due to trivial changes (1)
- autogpt_platform/backend/sample.logs
π§ Files skipped from review as they are similar to previous changes (1)
- autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsx
π§° Additional context used
π Path-based instructions (10)
autogpt_platform/frontend/**/*.{ts,tsx,js,jsx}
π CodeRabbit inference engine (.github/copilot-instructions.md)
autogpt_platform/frontend/**/*.{ts,tsx,js,jsx}: Use Node.js 21+ with pnpm package manager for frontend development
Always run 'pnpm format' for formatting and linting code in frontend development
Files:
autogpt_platform/frontend/src/app/(platform)/copilot/components/ToolWrapper/ToolWrapper.tsx
autogpt_platform/frontend/**/*.{tsx,ts}
π CodeRabbit inference engine (.github/copilot-instructions.md)
autogpt_platform/frontend/**/*.{tsx,ts}: Use function declarations for components and handlers (not arrow functions) in React components
Only use arrow functions for small inline lambdas (map, filter, etc.) in React components
Use PascalCase for component names and camelCase with 'use' prefix for hook names in React
Use Tailwind CSS utilities only for styling in frontend components
Use design system components from 'src/components/' (atoms, molecules, organisms) in frontend development
Never use 'src/components/legacy/' in frontend code
Only use Phosphor Icons (@phosphor-icons/react) for icons in frontend components
Use generated API hooks from '@/app/api/generated/endpoints/' instead of deprecated 'BackendAPI' or 'src/lib/autogpt-server-api/'
Use React Query for server state (via generated hooks) in frontend development
Default to client components ('use client') in Next.js; only use server components for SEO or extreme TTFB needs
Use '' component for rendering errors in frontend UI; use toast notifications for mutation errors; use 'Sentry.captureException()' for manual exceptions
Separate render logic from data/behavior in React components; keep comments minimal (code should be self-documenting)
Files:
autogpt_platform/frontend/src/app/(platform)/copilot/components/ToolWrapper/ToolWrapper.tsx
autogpt_platform/frontend/**/*.{ts,tsx}
π CodeRabbit inference engine (.github/copilot-instructions.md)
autogpt_platform/frontend/**/*.{ts,tsx}: No barrel files or 'index.ts' re-exports in frontend code
Regenerate API hooks with 'pnpm generate:api' after backend OpenAPI spec changes in frontend development
Files:
autogpt_platform/frontend/src/app/(platform)/copilot/components/ToolWrapper/ToolWrapper.tsx
autogpt_platform/frontend/src/**/*.{ts,tsx}
π CodeRabbit inference engine (autogpt_platform/frontend/CLAUDE.md)
autogpt_platform/frontend/src/**/*.{ts,tsx}: Fully capitalize acronyms in symbols, e.g.graphID,useBackendAPI
Use function declarations (not arrow functions) for components and handlers
Separate render logic (.tsx) from business logic (use*.tshooks)
Use shadcn/ui (Radix UI primitives) with Tailwind CSS styling for UI components
Use Phosphor Icons only for icons
Use ErrorCard for render errors, toast for mutations, and Sentry for exceptions
Use design system components fromsrc/components/(atoms, molecules, organisms)
Never usesrc/components/__legacy__/*components
Use generated API hooks from@/app/api/__generated__/endpoints/with patternuse{Method}{Version}{OperationName}
Use Tailwind CSS only for styling, with design tokens
Do not useuseCallbackoruseMemounless asked to optimize a given function
Never type withanyunless a variable/attribute can ACTUALLY be of any type
autogpt_platform/frontend/src/**/*.{ts,tsx}: Structure components asComponentName/ComponentName.tsx+useComponentName.ts+helpers.tsand use design system components fromsrc/components/(atoms, molecules, organisms)
Use generated API hooks from@/app/api/__generated__/endpoints/with patternuse{Method}{Version}{OperationName}and regenerate withpnpm generate:api
Use function declarations (not arrow functions) for components and handlers
Separate render logic from business logic with component.tsx + useComponent.ts + helpers.ts structure
Colocate state when possible, avoid creating large components, use sub-components in local/componentsfolder
Avoid large hooks, abstract logic intohelpers.tsfiles when sensible
Use arrow functions only for callbacks, not for component declarations
Avoid comments at all times unless the code is very complex
Do not useuseCallbackoruseMemounless asked to optimize a given function
Files:
autogpt_platform/frontend/src/app/(platform)/copilot/components/ToolWrapper/ToolWrapper.tsx
autogpt_platform/frontend/src/app/(platform)/**/components/**/*.{ts,tsx}
π CodeRabbit inference engine (autogpt_platform/frontend/CLAUDE.md)
Put sub-components in local
components/folder within feature directories
Files:
autogpt_platform/frontend/src/app/(platform)/copilot/components/ToolWrapper/ToolWrapper.tsx
autogpt_platform/frontend/src/**/*.tsx
π CodeRabbit inference engine (autogpt_platform/frontend/CLAUDE.md)
Component props should be
type Props = { ... }(not exported) unless it needs to be used outside the componentComponent props should be
interface Props { ... }(not exported) unless the interface needs to be used outside the component
Files:
autogpt_platform/frontend/src/app/(platform)/copilot/components/ToolWrapper/ToolWrapper.tsx
autogpt_platform/frontend/**/*.{js,jsx,ts,tsx}
π CodeRabbit inference engine (AGENTS.md)
autogpt_platform/frontend/**/*.{js,jsx,ts,tsx}: Format frontend code usingpnpm format
Never use components fromsrc/components/__legacy__/*
Files:
autogpt_platform/frontend/src/app/(platform)/copilot/components/ToolWrapper/ToolWrapper.tsx
autogpt_platform/frontend/**/*.{js,jsx,ts,tsx,css}
π CodeRabbit inference engine (AGENTS.md)
Use Tailwind CSS only for styling, use design tokens, and use Phosphor Icons only
Files:
autogpt_platform/frontend/src/app/(platform)/copilot/components/ToolWrapper/ToolWrapper.tsx
autogpt_platform/**/*.{ts,tsx}
π CodeRabbit inference engine (AGENTS.md)
Never type with
any, if no types available useunknown
Files:
autogpt_platform/frontend/src/app/(platform)/copilot/components/ToolWrapper/ToolWrapper.tsx
autogpt_platform/frontend/src/app/(platform)/**/*.tsx
π CodeRabbit inference engine (AGENTS.md)
If adding protected frontend routes, update
frontend/lib/supabase/middleware.ts
Files:
autogpt_platform/frontend/src/app/(platform)/copilot/components/ToolWrapper/ToolWrapper.tsx
π§ Learnings (4)
π Learning: 2026-02-04T16:49:42.490Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-02-04T16:49:42.490Z
Learning: Applies to autogpt_platform/frontend/src/components/**/*.{tsx,ts} : Structure React components as: ComponentName/ComponentName.tsx + useComponentName.ts + helpers.ts (exception: small 3-4 line components can be inline; render-only components can be direct files)
Applied to files:
autogpt_platform/frontend/src/app/(platform)/copilot/components/ToolWrapper/ToolWrapper.tsx
π Learning: 2026-02-04T16:50:33.615Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: autogpt_platform/frontend/CLAUDE.md:0-0
Timestamp: 2026-02-04T16:50:33.615Z
Learning: Applies to autogpt_platform/frontend/src/components/**/*.{ts,tsx} : Structure components as `ComponentName/ComponentName.tsx` + `useComponentName.ts` + `helpers.ts`
Applied to files:
autogpt_platform/frontend/src/app/(platform)/copilot/components/ToolWrapper/ToolWrapper.tsx
π Learning: 2026-02-04T16:50:33.615Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: autogpt_platform/frontend/CLAUDE.md:0-0
Timestamp: 2026-02-04T16:50:33.615Z
Learning: Applies to autogpt_platform/frontend/src/**/*.tsx : Component props should be `type Props = { ... }` (not exported) unless it needs to be used outside the component
Applied to files:
autogpt_platform/frontend/src/app/(platform)/copilot/components/ToolWrapper/ToolWrapper.tsx
π Learning: 2026-02-04T16:50:51.495Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-02-04T16:50:51.495Z
Learning: Applies to autogpt_platform/frontend/src/**/*.{ts,tsx} : Structure components as `ComponentName/ComponentName.tsx` + `useComponentName.ts` + `helpers.ts` and use design system components from `src/components/` (atoms, molecules, organisms)
Applied to files:
autogpt_platform/frontend/src/app/(platform)/copilot/components/ToolWrapper/ToolWrapper.tsx
𧬠Code graph analysis (1)
autogpt_platform/frontend/src/app/(platform)/copilot/components/ToolWrapper/ToolWrapper.tsx (2)
autogpt_platform/frontend/src/app/(platform)/copilot/tools/long-running-tools.ts (1)
isLongRunningTool(18-20)autogpt_platform/frontend/src/app/(platform)/copilot/components/LongRunningToolDisplay/LongRunningToolDisplay.tsx (1)
LongRunningToolDisplay(18-35)
β° Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
- GitHub Check: end-to-end tests
- GitHub Check: test (3.12)
- GitHub Check: test (3.13)
- GitHub Check: test (3.11)
- GitHub Check: Check PR Status
π€ Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@autogpt_platform/frontend/src/app/`(platform)/copilot/components/ToolWrapper/ToolWrapper.tsx:
- Line 7: The prop type uses React.ReactNode without importing React; update the
ToolWrapper component to import the type and use it explicitly: add an import
like `import type { ReactNode } from 'react'` at the top of ToolWrapper.tsx and
change the prop declaration from `children: React.ReactNode;` to `children:
ReactNode;` (ensure the change is applied to the ToolWrapper props/interface
where `children` is declared).
---
Nitpick comments:
In
`@autogpt_platform/frontend/src/app/`(platform)/copilot/components/ToolWrapper/ToolWrapper.tsx:
- Around line 10-13: Remove the JSDoc header and the two JSX inline comments
inside the ToolWrapper component: delete the top JSDoc block above the
ToolWrapper component and remove the comments that reference
LONG_RUNNING_TOOLS/mini-game and the "Render the actual tool component" JSX
comments so the code contains no explanatory comments; keep all references to
LONG_RUNNING_TOOLS and the ToolWrapper component logic unchanged.
|
|
||
| interface Props { | ||
| part: ToolUIPart; | ||
| children: React.ReactNode; |
There was a problem hiding this comment.
π§© Analysis chain
π Script executed:
#!/bin/bash
# Find and read the main tsconfig
echo "=== Main tsconfig.json ==="
find autogpt_platform/frontend -name "tsconfig.json" -type f | head -5 | while read file; do
echo "File: $file"
cat "$file"
echo ""
done
# Check for global type declarations
echo "=== Global declarations ==="
find autogpt_platform/frontend -name "global.d.ts" -o -name "globals.d.ts" | while read file; do
echo "File: $file"
cat "$file"
echo ""
done
# Check next.config and jsx settings
echo "=== next.config files ==="
find autogpt_platform/frontend -name "next.config.*" | while read file; do
echo "File: $file"
head -50 "$file"
echo ""
doneRepository: Significant-Gravitas/AutoGPT
Length of output: 2718
π Script executed:
#!/bin/bash
# Also check the actual file to see all its imports
echo "=== ToolWrapper.tsx file content ==="
cat -n autogpt_platform/frontend/src/app/\(platform\)/copilot/components/ToolWrapper/ToolWrapper.tsxRepository: Significant-Gravitas/AutoGPT
Length of output: 1301
Remove React namespace; import ReactNode explicitly.
React.ReactNode is used on line 7 without importing React. With strict TypeScript mode enabled and no automatic React type declarations configured, this should cause a compilation error. Use explicit imports for type clarity and correctness.
β»οΈ Proposed fix
import type { ToolUIPart } from "ai";
+import type { ReactNode } from "react";
import { isLongRunningTool } from "../../tools/long-running-tools";
import { LongRunningToolDisplay } from "../LongRunningToolDisplay/LongRunningToolDisplay";
interface Props {
part: ToolUIPart;
- children: React.ReactNode;
+ children: ReactNode;
}π€ Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@autogpt_platform/frontend/src/app/`(platform)/copilot/components/ToolWrapper/ToolWrapper.tsx
at line 7, The prop type uses React.ReactNode without importing React; update
the ToolWrapper component to import the type and use it explicitly: add an
import like `import type { ReactNode } from 'react'` at the top of
ToolWrapper.tsx and change the prop declaration from `children:
React.ReactNode;` to `children: ReactNode;` (ensure the change is applied to the
ToolWrapper props/interface where `children` is declared).
Replace hardcoded LONG_RUNNING_TOOLS list with stream-based communication. Backend now yields StreamLongRunningStart event when a long-running tool begins. Changes: - Add LONG_RUNNING_START to ResponseType enum - Add StreamLongRunningStart class to response_model.py - Yield StreamLongRunningStart after StreamToolInputAvailable when tool.is_long_running - Import get_tool in service.py Frontend will listen for this event to show UI feedback (e.g., mini-game) during long-running operations, eliminating the need for hardcoded tool lists.
β¦nning tools
Replace hardcoded LONG_RUNNING_TOOLS list with event-based detection.
Frontend now listens for 'long-running-start' stream events from backend.
Changes:
- Update ToolWrapper to accept message prop and check for long-running-start events
- Pass message to all ToolWrapper instances in ChatMessagesContainer
- Remove long-running-tools.ts (hardcoded list)
- Check if any message part has type 'long-running-start' with matching toolCallId
- Update comments to be more generic ("UI feedback" instead of "mini-game")
Benefits:
- Single source of truth (backend is_long_running property)
- No list synchronization needed between backend and frontend
- More flexible - backend can decide at runtime
- Cleaner architecture using existing streaming infrastructure
Replace all references to 'mini-game' in comments/docstrings with generic 'UI feedback' to allow for future UI variations. Changes: - base.py: 'shows mini-game in UI' β 'triggers long-running UI' - create/edit/customize_agent.py: Remove '- show mini-game' from docstrings - service.py: 'mini-game UI' β 'UI feedback' - response_model.py: Remove '(like a mini-game)' example - LongRunningToolDisplay: 'Displays a mini-game' β 'Displays UI feedback' - ToolWrapper: Remove '(e.g., mini-game)' example Keep implementation flexible for future UI changes.
Remove the long-running callback that was spawning background tasks for tools like create_agent and edit_agent in the SDK path. Tools now run synchronously with heartbeats, matching the behavior of the main service.py executor. Changes: - Remove _build_long_running_callback function - Set long_running_callback=None in set_execution_context - Remove unused imports (LongRunningCallback, OperationPendingResponse, etc.) - Update tool supplement comment to reflect synchronous execution - Remove accidentally committed sample.logs file This fixes the "stream timed out" issue where tools were delegated to background and session would stop prematurely.
| input=arguments, | ||
| ) | ||
|
|
||
| # Check if this tool is long-running (survives SSE disconnection) | ||
| # Notify frontend if this is a long-running tool (e.g., agent generation) | ||
| tool = get_tool(tool_name) | ||
| if tool and tool.is_long_running: | ||
| # Atomic check-and-set: returns False if operation already running (lost race) | ||
| if not await _mark_operation_started(tool_call_id): | ||
| logger.info( | ||
| f"Tool call {tool_call_id} already in progress, returning status" | ||
| ) | ||
| # Build dynamic message based on tool name | ||
| if tool_name == "create_agent": | ||
| in_progress_msg = "Agent creation already in progress. Please wait..." | ||
| elif tool_name == "edit_agent": | ||
| in_progress_msg = "Agent edit already in progress. Please wait..." | ||
| else: | ||
| in_progress_msg = f"{tool_name} already in progress. Please wait..." | ||
|
|
||
| yield StreamToolOutputAvailable( | ||
| toolCallId=tool_call_id, | ||
| toolName=tool_name, | ||
| output=OperationInProgressResponse( | ||
| message=in_progress_msg, | ||
| tool_call_id=tool_call_id, | ||
| ).model_dump_json(), | ||
| success=True, | ||
| ) | ||
| return | ||
|
|
||
| # Generate operation ID and task ID | ||
| operation_id = str(uuid_module.uuid4()) | ||
| task_id = str(uuid_module.uuid4()) | ||
|
|
||
| # Build a user-friendly message based on tool and arguments | ||
| if tool_name == "create_agent": | ||
| agent_desc = arguments.get("description", "") | ||
| # Truncate long descriptions for the message | ||
| desc_preview = ( | ||
| (agent_desc[:100] + "...") if len(agent_desc) > 100 else agent_desc | ||
| ) | ||
| pending_msg = ( | ||
| f"Creating your agent: {desc_preview}" | ||
| if desc_preview | ||
| else "Creating agent... This may take a few minutes." | ||
| ) | ||
| started_msg = ( | ||
| "Agent creation started. You can close this tab - " | ||
| "check your library in a few minutes." | ||
| ) | ||
| elif tool_name == "edit_agent": | ||
| changes = arguments.get("changes", "") | ||
| changes_preview = (changes[:100] + "...") if len(changes) > 100 else changes | ||
| pending_msg = ( | ||
| f"Editing agent: {changes_preview}" | ||
| if changes_preview | ||
| else "Editing agent... This may take a few minutes." | ||
| ) | ||
| started_msg = ( | ||
| "Agent edit started. You can close this tab - " | ||
| "check your library in a few minutes." | ||
| ) | ||
| else: | ||
| pending_msg = f"Running {tool_name}... This may take a few minutes." | ||
| started_msg = ( | ||
| f"{tool_name} started. You can close this tab - " | ||
| "check back in a few minutes." | ||
| ) | ||
|
|
||
| # Track appended message for rollback on failure | ||
| pending_message: ChatMessage | None = None | ||
|
|
||
| # Wrap session save and task creation in try-except to release lock on failure | ||
| try: | ||
| # Create task in stream registry for SSE reconnection support | ||
| await stream_registry.create_task( | ||
| task_id=task_id, | ||
| session_id=session.session_id, | ||
| user_id=session.user_id, | ||
| tool_call_id=tool_call_id, | ||
| tool_name=tool_name, | ||
| operation_id=operation_id, | ||
| ) | ||
|
|
||
| # Attach tool_call and save pending result β lock serialises | ||
| # concurrent session mutations during parallel execution. | ||
| async def _save_pending() -> None: | ||
| nonlocal pending_message | ||
| session.add_tool_call_to_current_turn(tool_calls[yield_idx]) | ||
| pending_message = ChatMessage( | ||
| role="tool", | ||
| content=OperationPendingResponse( | ||
| message=pending_msg, | ||
| operation_id=operation_id, | ||
| tool_name=tool_name, | ||
| ).model_dump_json(), | ||
| tool_call_id=tool_call_id, | ||
| ) | ||
| session.messages.append(pending_message) | ||
| await upsert_chat_session(session) | ||
|
|
||
| await _with_optional_lock(session_lock, _save_pending) | ||
| logger.info( | ||
| f"Saved pending operation {operation_id} (task_id={task_id}) " | ||
| f"for tool {tool_name} in session {session.session_id}" | ||
| ) | ||
|
|
||
| # Store task reference in module-level set to prevent GC before completion | ||
| bg_task = asyncio.create_task( | ||
| _execute_long_running_tool_with_streaming( | ||
| tool_name=tool_name, | ||
| parameters=arguments, | ||
| tool_call_id=tool_call_id, | ||
| operation_id=operation_id, | ||
| task_id=task_id, | ||
| session_id=session.session_id, | ||
| user_id=session.user_id, | ||
| ) | ||
| ) | ||
| _background_tasks.add(bg_task) | ||
| bg_task.add_done_callback(_background_tasks.discard) | ||
|
|
||
| # Associate the asyncio task with the stream registry task | ||
| await stream_registry.set_task_asyncio_task(task_id, bg_task) | ||
| except Exception as e: | ||
| # Roll back appended messages β use identity-based removal so | ||
| # it works even when other parallel tools have appended after us. | ||
| async def _rollback() -> None: | ||
| if pending_message and pending_message in session.messages: | ||
| session.messages.remove(pending_message) | ||
|
|
||
| await _with_optional_lock(session_lock, _rollback) | ||
|
|
||
| # Release the Redis lock since the background task won't be spawned | ||
| await _mark_operation_completed(tool_call_id) | ||
| # Mark stream registry task as failed if it was created | ||
| try: | ||
| await stream_registry.mark_task_completed(task_id, status="failed") | ||
| except Exception as mark_err: | ||
| logger.warning(f"Failed to mark task {task_id} as failed: {mark_err}") | ||
| logger.error( | ||
| f"Failed to setup long-running tool {tool_name}: {e}", exc_info=True | ||
| ) | ||
| raise | ||
|
|
||
| # Return immediately - don't wait for completion | ||
| yield StreamToolOutputAvailable( | ||
| yield StreamLongRunningStart( | ||
| toolCallId=tool_call_id, | ||
| toolName=tool_name, | ||
| output=OperationStartedResponse( | ||
| message=started_msg, | ||
| operation_id=operation_id, | ||
| tool_name=tool_name, | ||
| task_id=task_id, # Include task_id for SSE reconnection | ||
| ).model_dump_json(), | ||
| success=True, | ||
| ) | ||
| return | ||
|
|
This comment was marked as outdated.
This comment was marked as outdated.
Sorry, something went wrong.
Add logic to detect long-running tools in the SDK execution path and emit StreamLongRunningStart event to trigger UI feedback display. Changes: - Import StreamLongRunningStart and get_tool - Check if tool has is_long_running=True when StreamToolInputAvailable is received - Yield StreamLongRunningStart event to notify frontend This ensures the mini-game UI displays for long-running tools like create_agent when using the SDK execution path.
Changed StreamLongRunningStart event type from "long-running-start" to "data-long-running-start" to match the Vercel AI SDK's DataUIPart format. This ensures the event is properly added to message.parts and can be detected by the frontend. Changes: - Backend: Update event type to "data-long-running-start" - Backend: Wrap toolCallId/toolName in a "data" object - Frontend: Check for "data-long-running-start" type and access data.toolCallId This follows the AI SDK protocol for custom data events.
β¦able Instead of sending a separate custom event, add isLongRunning boolean to the existing StreamToolInputAvailable event. This is much simpler and works with the AI SDK without needing custom event handling. Backend changes: - Add isLongRunning field to StreamToolInputAvailable - Check tool.is_long_running in response_adapter and set the flag - Remove separate StreamLongRunningStart emission Frontend changes: - Check part.isLongRunning directly on the tool part - Remove message prop from ToolWrapper (no longer needed) - Simplify detection logic This approach piggybacks on the existing tool-input-available event that the AI SDK already recognizes and adds to message.parts.
| ) | ||
|
|
||
| # Check if this tool is long-running (survives SSE disconnection) | ||
| # Notify frontend if this is a long-running tool (e.g., agent generation) | ||
| tool = get_tool(tool_name) | ||
| if tool and tool.is_long_running: | ||
| # Atomic check-and-set: returns False if operation already running (lost race) | ||
| if not await _mark_operation_started(tool_call_id): | ||
| logger.info( | ||
| f"Tool call {tool_call_id} already in progress, returning status" | ||
| ) | ||
| # Build dynamic message based on tool name | ||
| if tool_name == "create_agent": | ||
| in_progress_msg = "Agent creation already in progress. Please wait..." | ||
| elif tool_name == "edit_agent": | ||
| in_progress_msg = "Agent edit already in progress. Please wait..." | ||
| else: | ||
| in_progress_msg = f"{tool_name} already in progress. Please wait..." | ||
|
|
||
| yield StreamToolOutputAvailable( | ||
| toolCallId=tool_call_id, | ||
| toolName=tool_name, | ||
| output=OperationInProgressResponse( | ||
| message=in_progress_msg, | ||
| tool_call_id=tool_call_id, | ||
| ).model_dump_json(), | ||
| success=True, | ||
| ) | ||
| return | ||
|
|
||
| # Generate operation ID and task ID | ||
| operation_id = str(uuid_module.uuid4()) | ||
| task_id = str(uuid_module.uuid4()) | ||
|
|
||
| # Build a user-friendly message based on tool and arguments | ||
| if tool_name == "create_agent": | ||
| agent_desc = arguments.get("description", "") | ||
| # Truncate long descriptions for the message | ||
| desc_preview = ( | ||
| (agent_desc[:100] + "...") if len(agent_desc) > 100 else agent_desc | ||
| ) | ||
| pending_msg = ( | ||
| f"Creating your agent: {desc_preview}" | ||
| if desc_preview | ||
| else "Creating agent... This may take a few minutes." | ||
| ) | ||
| started_msg = ( | ||
| "Agent creation started. You can close this tab - " | ||
| "check your library in a few minutes." | ||
| ) | ||
| elif tool_name == "edit_agent": | ||
| changes = arguments.get("changes", "") | ||
| changes_preview = (changes[:100] + "...") if len(changes) > 100 else changes | ||
| pending_msg = ( | ||
| f"Editing agent: {changes_preview}" | ||
| if changes_preview | ||
| else "Editing agent... This may take a few minutes." | ||
| ) | ||
| started_msg = ( | ||
| "Agent edit started. You can close this tab - " | ||
| "check your library in a few minutes." | ||
| ) | ||
| else: | ||
| pending_msg = f"Running {tool_name}... This may take a few minutes." | ||
| started_msg = ( | ||
| f"{tool_name} started. You can close this tab - " | ||
| "check back in a few minutes." | ||
| ) | ||
|
|
||
| # Track appended message for rollback on failure | ||
| pending_message: ChatMessage | None = None | ||
|
|
||
| # Wrap session save and task creation in try-except to release lock on failure | ||
| try: | ||
| # Create task in stream registry for SSE reconnection support | ||
| await stream_registry.create_task( | ||
| task_id=task_id, | ||
| session_id=session.session_id, | ||
| user_id=session.user_id, | ||
| tool_call_id=tool_call_id, | ||
| tool_name=tool_name, | ||
| operation_id=operation_id, | ||
| ) | ||
|
|
||
| # Attach tool_call and save pending result β lock serialises | ||
| # concurrent session mutations during parallel execution. | ||
| async def _save_pending() -> None: | ||
| nonlocal pending_message | ||
| session.add_tool_call_to_current_turn(tool_calls[yield_idx]) | ||
| pending_message = ChatMessage( | ||
| role="tool", | ||
| content=OperationPendingResponse( | ||
| message=pending_msg, | ||
| operation_id=operation_id, | ||
| tool_name=tool_name, | ||
| ).model_dump_json(), | ||
| tool_call_id=tool_call_id, | ||
| ) | ||
| session.messages.append(pending_message) | ||
| await upsert_chat_session(session) | ||
|
|
||
| await _with_optional_lock(session_lock, _save_pending) | ||
| logger.info( | ||
| f"Saved pending operation {operation_id} (task_id={task_id}) " | ||
| f"for tool {tool_name} in session {session.session_id}" | ||
| ) | ||
|
|
||
| # Store task reference in module-level set to prevent GC before completion | ||
| bg_task = asyncio.create_task( | ||
| _execute_long_running_tool_with_streaming( | ||
| tool_name=tool_name, | ||
| parameters=arguments, | ||
| tool_call_id=tool_call_id, | ||
| operation_id=operation_id, | ||
| task_id=task_id, | ||
| session_id=session.session_id, | ||
| user_id=session.user_id, | ||
| ) | ||
| ) | ||
| _background_tasks.add(bg_task) | ||
| bg_task.add_done_callback(_background_tasks.discard) | ||
|
|
||
| # Associate the asyncio task with the stream registry task | ||
| await stream_registry.set_task_asyncio_task(task_id, bg_task) | ||
| except Exception as e: | ||
| # Roll back appended messages β use identity-based removal so | ||
| # it works even when other parallel tools have appended after us. | ||
| async def _rollback() -> None: | ||
| if pending_message and pending_message in session.messages: | ||
| session.messages.remove(pending_message) | ||
|
|
||
| await _with_optional_lock(session_lock, _rollback) | ||
|
|
||
| # Release the Redis lock since the background task won't be spawned | ||
| await _mark_operation_completed(tool_call_id) | ||
| # Mark stream registry task as failed if it was created | ||
| try: | ||
| await stream_registry.mark_task_completed(task_id, status="failed") | ||
| except Exception as mark_err: | ||
| logger.warning(f"Failed to mark task {task_id} as failed: {mark_err}") | ||
| logger.error( | ||
| f"Failed to setup long-running tool {tool_name}: {e}", exc_info=True | ||
| ) | ||
| raise | ||
|
|
||
| # Return immediately - don't wait for completion | ||
| yield StreamToolOutputAvailable( | ||
| toolCallId=tool_call_id, | ||
| toolName=tool_name, | ||
| output=OperationStartedResponse( | ||
| message=started_msg, | ||
| operation_id=operation_id, | ||
| tool_name=tool_name, | ||
| task_id=task_id, # Include task_id for SSE reconnection | ||
| ).model_dump_json(), | ||
| success=True, | ||
| yield StreamLongRunningStart( | ||
| data={ | ||
| "toolCallId": tool_call_id, | ||
| "toolName": tool_name, | ||
| } | ||
| ) |
This comment was marked as outdated.
This comment was marked as outdated.
Sorry, something went wrong.
Address CodeRabbit review comment by using direct relative paths instead of convoluted ../../tools/CreateAgent/../../components paths.
| input=arguments, | ||
| ) | ||
|
|
||
| # Check if this tool is long-running (survives SSE disconnection) | ||
| # Notify frontend if this is a long-running tool (e.g., agent generation) | ||
| tool = get_tool(tool_name) | ||
| if tool and tool.is_long_running: | ||
| # Atomic check-and-set: returns False if operation already running (lost race) | ||
| if not await _mark_operation_started(tool_call_id): | ||
| logger.info( | ||
| f"Tool call {tool_call_id} already in progress, returning status" | ||
| ) | ||
| # Build dynamic message based on tool name | ||
| if tool_name == "create_agent": | ||
| in_progress_msg = "Agent creation already in progress. Please wait..." | ||
| elif tool_name == "edit_agent": | ||
| in_progress_msg = "Agent edit already in progress. Please wait..." | ||
| else: | ||
| in_progress_msg = f"{tool_name} already in progress. Please wait..." | ||
|
|
||
| yield StreamToolOutputAvailable( | ||
| toolCallId=tool_call_id, | ||
| toolName=tool_name, | ||
| output=OperationInProgressResponse( | ||
| message=in_progress_msg, | ||
| tool_call_id=tool_call_id, | ||
| ).model_dump_json(), | ||
| success=True, | ||
| ) | ||
| return | ||
|
|
||
| # Generate operation ID and task ID | ||
| operation_id = str(uuid_module.uuid4()) | ||
| task_id = str(uuid_module.uuid4()) | ||
|
|
||
| # Build a user-friendly message based on tool and arguments | ||
| if tool_name == "create_agent": | ||
| agent_desc = arguments.get("description", "") | ||
| # Truncate long descriptions for the message | ||
| desc_preview = ( | ||
| (agent_desc[:100] + "...") if len(agent_desc) > 100 else agent_desc | ||
| ) | ||
| pending_msg = ( | ||
| f"Creating your agent: {desc_preview}" | ||
| if desc_preview | ||
| else "Creating agent... This may take a few minutes." | ||
| ) | ||
| started_msg = ( | ||
| "Agent creation started. You can close this tab - " | ||
| "check your library in a few minutes." | ||
| ) | ||
| elif tool_name == "edit_agent": | ||
| changes = arguments.get("changes", "") | ||
| changes_preview = (changes[:100] + "...") if len(changes) > 100 else changes | ||
| pending_msg = ( | ||
| f"Editing agent: {changes_preview}" | ||
| if changes_preview | ||
| else "Editing agent... This may take a few minutes." | ||
| ) | ||
| started_msg = ( | ||
| "Agent edit started. You can close this tab - " | ||
| "check your library in a few minutes." | ||
| ) | ||
| else: | ||
| pending_msg = f"Running {tool_name}... This may take a few minutes." | ||
| started_msg = ( | ||
| f"{tool_name} started. You can close this tab - " | ||
| "check back in a few minutes." | ||
| ) | ||
|
|
||
| # Track appended message for rollback on failure | ||
| pending_message: ChatMessage | None = None | ||
|
|
||
| # Wrap session save and task creation in try-except to release lock on failure | ||
| try: | ||
| # Create task in stream registry for SSE reconnection support | ||
| await stream_registry.create_task( | ||
| task_id=task_id, | ||
| session_id=session.session_id, | ||
| user_id=session.user_id, | ||
| tool_call_id=tool_call_id, | ||
| tool_name=tool_name, | ||
| operation_id=operation_id, | ||
| ) | ||
|
|
||
| # Attach tool_call and save pending result β lock serialises | ||
| # concurrent session mutations during parallel execution. | ||
| async def _save_pending() -> None: | ||
| nonlocal pending_message | ||
| session.add_tool_call_to_current_turn(tool_calls[yield_idx]) | ||
| pending_message = ChatMessage( | ||
| role="tool", | ||
| content=OperationPendingResponse( | ||
| message=pending_msg, | ||
| operation_id=operation_id, | ||
| tool_name=tool_name, | ||
| ).model_dump_json(), | ||
| tool_call_id=tool_call_id, | ||
| ) | ||
| session.messages.append(pending_message) | ||
| await upsert_chat_session(session) | ||
|
|
||
| await _with_optional_lock(session_lock, _save_pending) | ||
| logger.info( | ||
| f"Saved pending operation {operation_id} (task_id={task_id}) " | ||
| f"for tool {tool_name} in session {session.session_id}" | ||
| ) | ||
|
|
||
| # Store task reference in module-level set to prevent GC before completion | ||
| bg_task = asyncio.create_task( | ||
| _execute_long_running_tool_with_streaming( | ||
| tool_name=tool_name, | ||
| parameters=arguments, | ||
| tool_call_id=tool_call_id, | ||
| operation_id=operation_id, | ||
| task_id=task_id, | ||
| session_id=session.session_id, | ||
| user_id=session.user_id, | ||
| ) | ||
| ) | ||
| _background_tasks.add(bg_task) | ||
| bg_task.add_done_callback(_background_tasks.discard) | ||
|
|
||
| # Associate the asyncio task with the stream registry task | ||
| await stream_registry.set_task_asyncio_task(task_id, bg_task) | ||
| except Exception as e: | ||
| # Roll back appended messages β use identity-based removal so | ||
| # it works even when other parallel tools have appended after us. | ||
| async def _rollback() -> None: | ||
| if pending_message and pending_message in session.messages: | ||
| session.messages.remove(pending_message) | ||
|
|
||
| await _with_optional_lock(session_lock, _rollback) | ||
|
|
||
| # Release the Redis lock since the background task won't be spawned | ||
| await _mark_operation_completed(tool_call_id) | ||
| # Mark stream registry task as failed if it was created | ||
| try: | ||
| await stream_registry.mark_task_completed(task_id, status="failed") | ||
| except Exception as mark_err: | ||
| logger.warning(f"Failed to mark task {task_id} as failed: {mark_err}") | ||
| logger.error( | ||
| f"Failed to setup long-running tool {tool_name}: {e}", exc_info=True | ||
| ) | ||
| raise | ||
|
|
||
| # Return immediately - don't wait for completion | ||
| yield StreamToolOutputAvailable( | ||
| toolCallId=tool_call_id, | ||
| toolName=tool_name, | ||
| output=OperationStartedResponse( | ||
| message=started_msg, | ||
| operation_id=operation_id, | ||
| tool_name=tool_name, | ||
| task_id=task_id, # Include task_id for SSE reconnection | ||
| ).model_dump_json(), | ||
| success=True, | ||
| yield StreamLongRunningStart( | ||
| data={ | ||
| "toolCallId": tool_call_id, |
This comment was marked as outdated.
This comment was marked as outdated.
Sorry, something went wrong.
The AI SDK strips unknown fields from tool-input-available events.
Use the standard providerMetadata field instead, which the SDK
preserves, to pass the isLongRunning flag to the frontend.
Backend changes:
- Change isLongRunning field to providerMetadata object
- Set providerMetadata: {isLongRunning: true} for long-running tools
- Add debug logging to verify flag is set
Frontend changes:
- Check part.providerMetadata.isLongRunning instead of part.isLongRunning
- Add console debug logging to verify detection
Tested programmatically - the complete flow works correctly.
ToolWrapper no longer accepts message prop. This was causing TypeScript errors and preventing the component from rendering. All ToolWrapper calls now only pass part and children props. Fixes 11 TypeScript compilation errors.
- Only invalidate session queries on successful completion (status='ready') - Previously invalidated on both 'ready' and 'error' status - When backend returned 500, error status triggered refetch which caused infinite loop - Fixes spam of 'Let me check!' messages when backend is unavailable
- AI SDK's ToolUIPart doesn't have toolName as separate field
- Tool name is encoded in type field as 'tool-{name}'
- Extract it using substring(5) to remove 'tool-' prefix
- Update debug logging to show extracted toolName
- This fixes 'toolName: unknown' in console logs
- Agent creation can take longer than 12 seconds - Previous 12s timeout was causing 'Stream timed out' errors - Increased to 60s to accommodate long-running tool execution
- Disable input when status='submitted' to prevent message spam - Set stream start timeout to 30s (only detects backend down, doesn't affect tool execution) - Once stream starts, tools can run indefinitely (timeout is cleared) - Mini-game shows during long-running tool execution without timeout
Summary
Fixes agent generation by removing async delegation from the executor while keeping
is_long_runningfor frontend UI hints. Introduces stream-based communication for long-running tools.Problem
The executor was still spawning background tasks when it saw
is_long_running = True, causing:Solution
1. Remove Async Delegation from Executor
_yield_tool_callinservice.pyis_long_running2. Add Stream Event for Long-Running Tools
StreamLongRunningStartevent type toresponse_model.pytool.is_long_running = True3. Frontend Event-Based Detection
long-running-startevents from streamLONG_RUNNING_TOOLSlistis_long_runningpropertyArchitecture
Before:
is_long_running = Trueβ spawns background task β async delegationAfter:
is_long_running = Trueβ yieldsStreamLongRunningStartevent β runs synchronouslyBenefits
β Tools run synchronously with proper streaming
β Completion messages appear in chat immediately
β No hardcoded lists to synchronize
β Backend has full control over UI hints
β Mini-game automatically shows/hides based on tool state
Testing