feat(platform): Add file upload to copilot chat [SECRT-1788] #12220
Conversation
Enable users to attach files (documents, images, spreadsheets, video, audio) to copilot chat messages with upload progress feedback and attachment display in sent messages. Resolves: https://linear.app/autogpt/issue/SECRT-1788

Backend:
- Add POST /workspace/files/upload endpoint with virus scanning, size limits, and storage cap enforcement
- Add GET /workspace/storage/usage endpoint
- Enrich chat stream requests with file metadata so the LLM can reference attached files via read_workspace_file
- Thread file_ids through CoPilotExecutionEntry and RabbitMQ queue

Frontend:
- Add AttachmentMenu (+) popover with file category picker
- Add FileChips showing attached files with upload spinner state
- Leverage AI SDK native FileUIPart for sent message file parts
- Add MessageAttachments component rendering file pills in chat bubbles
- Add upload proxy route (Next.js API → backend)
- Extract file_ids from FileUIPart URLs in transport layer
- Handle upload failures gracefully (chips revert, no phantom messages)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in your CodeRabbit settings.
Walkthrough

Adds end-to-end workspace file attachment support: frontend UI and upload proxy, backend upload/storage endpoints, StreamChatRequest file_ids propagation and validation, message enrichment with an "[Attached files]" block, propagation of sanitized file_ids into Copilot execution, and related tests and settings.
Sequence Diagram(s)

sequenceDiagram
participant User
participant Frontend
participant UploadProxy as Upload Proxy
participant Backend
participant WorkspaceDB as WorkspaceDB
participant CopilotExec as CopilotExecutor
User->>Frontend: Attach files + Send (text + files)
Frontend->>UploadProxy: POST /api/workspace/files/upload (formData)
UploadProxy->>Backend: Forward POST /api/workspace/files/upload (Authorization)
Backend->>WorkspaceDB: Store files, scan, enforce quota
WorkspaceDB-->>Backend: Return file_ids + metadata
Backend-->>UploadProxy: UploadFileResponse (metadata + ids)
UploadProxy-->>Frontend: Return file_ids + metadata
Frontend->>Backend: POST /api/chat/.../stream {message, file_ids}
Backend->>WorkspaceDB: Batch-fetch UserWorkspaceFile for sanitized file_ids
Backend-->>Backend: Build "[Attached files]" block and append to message
Backend->>CopilotExec: Enqueue copilot turn (message + sanitized file_ids)
CopilotExec-->>Backend: Ack enqueue
Backend-->>Frontend: Streamed response (with file parts)
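The enrichment step in the diagram (building the "[Attached files]" block that gets appended to the user's message) can be sketched as a pure function. This is a minimal sketch: the `WorkspaceFileMeta` class and its field names are stand-ins for the backend's workspace file record, and the exact block format is inferred from the review's diff snippets, not quoted from the PR.

```python
from dataclasses import dataclass


@dataclass
class WorkspaceFileMeta:
    """Minimal stand-in for the backend's workspace file record (assumed fields)."""
    id: str
    name: str
    mime_type: str
    size_bytes: int


def build_attached_files_block(files: list[WorkspaceFileMeta]) -> str:
    """Render the '[Attached files]' block appended to the chat message.

    Returns an empty string when there are no files, so callers can
    unconditionally concatenate the result.
    """
    if not files:
        return ""
    lines = ["[Attached files]"]
    for f in files:
        size_kb = round(f.size_bytes / 1024, 1)
        lines.append(f"- {f.name} ({f.mime_type}, {size_kb} KB), file_id={f.id}")
    return "\n".join(lines)
```

The LLM can then resolve each listed file_id via read_workspace_file, per the PR description.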
Estimated code review effort: 4 (Complex) | ~60 minutes

Possibly related PRs
Pre-merge checks: 2 passed, 1 warning
- Failed checks (1 warning)
- Passed checks (2 passed)
PR Overlap Detection

This check compares your PR against all other open PRs targeting the same branch to detect potential merge conflicts early.

Merge Conflicts Detected: the following PRs have been tested and will have merge conflicts if merged after this PR. Consider coordinating with the authors.
Low Risk (file overlap only): these PRs touch the same files but different sections.
Summary: 10 conflict(s), 0 medium risk, 8 low risk (out of 18 PRs with file overlap). Auto-generated on push.
autogpt_platform/backend/backend/api/features/workspace/routes.py (outdated comment, resolved)
Actionable comments posted: 6
Caution
Some comments are outside the diff and can't be posted inline due to platform limitations.
Outside diff range comments (2)
autogpt_platform/frontend/src/app/(platform)/copilot/components/EmptySession/EmptySession.tsx (1)
79-97: Potential issue | Minor — Disable quick actions while files are uploading.

`isUploadingFiles` is passed to `ChatInput`, but quick-action buttons remain clickable and can trigger overlapping `onSend` calls during active upload.

Suggested fix:

```diff
-            disabled={isCreatingSession || loadingAction !== null}
+            disabled={
+              isCreatingSession ||
+              loadingAction !== null ||
+              !!isUploadingFiles
+            }
```

Prompt for AI agents: Verify each finding against the current code and only fix it if needed. In autogpt_platform/frontend/src/app/(platform)/copilot/components/EmptySession/EmptySession.tsx around lines 79-97, quick-action buttons can still be triggered while files upload; update the Button disable logic and handler to respect isUploadingFiles: add isUploadingFiles to the disabled expression for the Buttons rendered in the quickActions map (alongside isCreatingSession and loadingAction !== null) and set aria-busy when isUploadingFiles is true; additionally, guard handleQuickActionClick (the click handler) to no-op early if isUploadingFiles is true to prevent overlapping onSend calls.

autogpt_platform/frontend/src/app/api/chat/sessions/[sessionId]/stream/route.ts (1)
27-35: Potential issue | Minor — Validate `file_ids` at the proxy boundary before forwarding.

`file_ids` is forwarded without runtime shape checks. Invalid payloads can propagate to the backend and fail unpredictably. Return a 400 here if it is not `string[] | null | undefined`.

Suggested fix:

```diff
   const body = await request.json();
   const { message, is_user_message, context, file_ids } = body;
+
+  if (
+    file_ids != null &&
+    (!Array.isArray(file_ids) ||
+      file_ids.some((id) => typeof id !== "string"))
+  ) {
+    return new Response(
+      JSON.stringify({ error: "file_ids must be an array of strings" }),
+      { status: 400, headers: { "Content-Type": "application/json" } },
+    );
+  }
@@
-      file_ids: file_ids || null,
+      file_ids: file_ids ?? null,
```

Also applies to: 59-64

Prompt for AI agents: Verify each finding against the current code and only fix it if needed. In autogpt_platform/frontend/src/app/api/chat/sessions/[sessionId]/stream/route.ts around lines 27-35, validate the incoming file_ids after parsing the request JSON in the route handler: ensure file_ids is either undefined, null, or an array of strings (string[]); if any element is not a string or file_ids is another type, return a 400 JSON response like the existing message block. Apply the same runtime shape check wherever file_ids is read/forwarded (the initial parse block around the const { message, is_user_message, context, file_ids } = body and the later usage around lines 59-64) so invalid payloads are rejected at the proxy boundary before forwarding to the backend.
Nitpick comments (6)
autogpt_platform/backend/backend/api/features/workspace/routes.py (1)
171-182: Avoid double-buffering large uploads in memory.

Collecting chunks into a list and then `b"".join(...)` creates an additional full-size copy. A `bytearray` path reduces peak memory pressure per request.

Proposed refactor:

```diff
- chunks: list[bytes] = []
+ content = bytearray()
  total_size = 0
  while chunk := await file.read(64 * 1024):  # 64KB chunks
      total_size += len(chunk)
      if total_size > max_file_bytes:
          raise fastapi.HTTPException(
              status_code=400,
              detail=f"File exceeds maximum size of {config.max_file_size_mb} MB",
          )
-     chunks.append(chunk)
- content = b"".join(chunks)
+     content.extend(chunk)
+ content_bytes = bytes(content)
  ...
- if current_usage + len(content) > storage_limit_bytes:
+ if current_usage + len(content_bytes) > storage_limit_bytes:
  ...
- usage_ratio = (current_usage + len(content)) / storage_limit_bytes
+ usage_ratio = (current_usage + len(content_bytes)) / storage_limit_bytes
  ...
- await scan_content_safe(content, filename=filename)
+ await scan_content_safe(content_bytes, filename=filename)
  ...
- workspace_file = await manager.write_file(content, filename)
+ workspace_file = await manager.write_file(content_bytes, filename)
```

Prompt for AI agents: Verify each finding against the current code and only fix it if needed. In autogpt_platform/backend/backend/api/features/workspace/routes.py around lines 171-182, the current upload loop double-buffers large files by appending chunks to chunks: list[bytes] and then calling b"".join(chunks); change it to accumulate into a single mutable buffer (e.g., a bytearray) to avoid the extra full-size copy: replace the chunks list with a bytearray buffer, extend it on each await file.read(...) iteration, keep the total_size check against max_file_bytes, and then use bytes(buffer) or pass the bytearray as needed for the final content variable. Reference variables/functions: chunks, total_size, max_file_bytes/config.max_file_size_mb, file.read, and content to locate where to modify.

autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatInput/ChatInput.tsx (1)
9-11: Please remove `useCallback` and keep handlers as function declarations.

This adds unnecessary memoization and arrow-handler style where simple function declarations are enough. As per coding guidelines: "Do not use `useCallback` or `useMemo` unless asked to optimize a specific function" and "Use function declarations (not arrow functions) for components and handlers."

Proposed refactor:

```diff
-import { ChangeEvent, useCallback, useState } from "react";
+import { ChangeEvent, useState } from "react";
 ...
-  const handleChange = useCallback(
-    (e: ChangeEvent<HTMLTextAreaElement>) => {
-      if (isRecording) return;
-      baseHandleChange(e);
-    },
-    [isRecording, baseHandleChange],
-  );
+  function handleChange(e: ChangeEvent<HTMLTextAreaElement>) {
+    if (isRecording) return;
+    baseHandleChange(e);
+  }
```

Also applies to: 50-54, 79-86

Prompt for AI agents: Verify each finding against the current code and only fix it if needed. In autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatInput/ChatInput.tsx around lines 9-11, remove the unnecessary useCallback usage and arrow-handler style in the ChatInput component: delete the import of useCallback and replace any const name = useCallback(...) or arrow-function handler definitions (e.g. the input change handler, submit handler, attach/file handlers referenced in the component and noted around the other occurrences) with plain function declarations like function handleX(...) {...}; ensure their parameter types (ChangeEvent, etc.) and references in JSX remain unchanged so behavior is preserved.

autogpt_platform/backend/backend/api/features/chat/routes.py (2)
406-407: Silent skip may hide configuration issues.

When a file_id is not found (or belongs to a different workspace), it's silently skipped. This is reasonable for graceful degradation, but consider logging a warning to help diagnose issues where users report missing attachments.

Suggested logging:

```diff
         wf = await get_workspace_file(fid, workspace.id)
         if wf is None:
+            logger.warning(
+                f"[STREAM] File {fid} not found in workspace for user {user_id}"
+            )
             continue
```

Prompt for AI agents: Verify each finding against the current code and only fix it if needed. In autogpt_platform/backend/backend/api/features/chat/routes.py around lines 406-407, the loop that checks "if wf is None: continue" should log a warning instead of silently skipping so missing attachments can be diagnosed; emit a warning-level log (using the module's existing logger or Python logging) that includes the file_id and relevant context (workspace id, user id or request id if available) and then continue, so behavior stays graceful but misconfigurations are visible in logs.
398-418: Consider batch fetching files to avoid N+1 queries.

The loop makes individual `get_workspace_file` calls for each `file_id`. With many attachments, this causes N+1 database queries. Consider adding a batch fetch method (e.g., `get_workspace_files(file_ids, workspace_id)`) that retrieves all files in a single query.

Suggested approach:

```python
# In workspace.py, add a batch fetch method:
async def get_workspace_files(
    file_ids: list[str],
    workspace_id: str,
) -> list[WorkspaceFile]:
    files = await UserWorkspaceFile.prisma().find_many(
        where={
            "id": {"in": file_ids},
            "isDeleted": False,
            "workspaceId": workspace_id,
        }
    )
    return [WorkspaceFile.from_db(f) for f in files]
```

Then in the route:

```diff
-    for fid in request.file_ids:
-        wf = await get_workspace_file(fid, workspace.id)
-        if wf is None:
-            continue
+    workspace_files = await get_workspace_files(request.file_ids, workspace.id)
+    for wf in workspace_files:
         size_kb = round(wf.size_bytes / 1024, 1)
         file_lines.append(
-            f"- {wf.name} ({wf.mime_type}, {size_kb} KB), file_id={fid}"
+            f"- {wf.name} ({wf.mime_type}, {size_kb} KB), file_id={wf.id}"
         )
```

Prompt for AI agents: Verify each finding against the current code and only fix it if needed. In autogpt_platform/backend/backend/api/features/chat/routes.py around lines 398-418, the current loop calls get_workspace_file for each id, causing N+1 DB queries; add a batch fetch function to backend.data.workspace (e.g., get_workspace_files(file_ids, workspace_id)) that returns all WorkspaceFile objects in one query, then in the route replace the per-id awaits with a single call and iterate the returned list to build file_lines (still using get_or_create_workspace to obtain workspace.id); ensure the batch method filters by workspaceId and isDeleted so behavior matches get_workspace_file.

autogpt_platform/frontend/src/app/api/openapi.json (2)
12141-12147: Consider enforcing uniqueness for `file_ids`.

Allowing duplicate IDs can cause redundant backend work; making the array unique tightens the request contract.

Proposed schema tweak:

```diff
 "file_ids": {
   "anyOf": [
-    { "items": { "type": "string" }, "type": "array" },
+    {
+      "type": "array",
+      "items": { "type": "string" },
+      "uniqueItems": true
+    },
     { "type": "null" }
   ],
   "title": "File Ids"
 }
```

Prompt for AI agents: Verify each finding against the current code and only fix it if needed. In autogpt_platform/frontend/src/app/api/openapi.json around lines 12141-12147, the schema for the file_ids property allows arrays of strings but doesn't prevent duplicate entries; update the OpenAPI schema for "file_ids" (the anyOf branch with "type": "array" and "items": {"type": "string"}) to include "uniqueItems": true so arrays must contain unique string IDs, keeping the existing null branch intact.
6530-6552: Document expected non-2xx upload failures in the OpenAPI contract.

Given this endpoint enforces file-size/storage/scan constraints, exposing explicit failure statuses will make frontend error handling and generated clients more deterministic.

Proposed OpenAPI response additions:

```diff
 "responses": {
   "200": {
     "description": "Successful Response",
     "content": {
       "application/json": {
         "schema": {
           "$ref": "#/components/schemas/backend__api__features__workspace__routes__UploadFileResponse"
         }
       }
     }
   },
+  "413": { "description": "File exceeds allowed size limit" },
+  "415": { "description": "Unsupported media type" },
+  "507": { "description": "Workspace storage quota exceeded" },
   "401": { "$ref": "#/components/responses/HTTP401NotAuthenticatedError" },
   "422": {
```

Prompt for AI agents: Verify each finding against the current code and only fix it if needed. In autogpt_platform/frontend/src/app/api/openapi.json around lines 6530-6552, add explicit non-2xx responses to the upload endpoint's responses block (the one that currently returns "$ref": "#/components/schemas/backend__api__features__workspace__routes__UploadFileResponse") so frontend clients can handle size/type/storage/scan errors deterministically: include HTTP 413 (Payload Too Large), HTTP 415 (Unsupported Media Type), HTTP 507 (Insufficient Storage), and a scan/quarantine error (e.g., 422 or 409); for each response add a descriptive "description" and "content" with a JSON schema ref, and keep the existing 200 and 401/422 entries referencing UploadFileResponse and HTTPValidationError.
Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@autogpt_platform/backend/backend/api/features/chat/routes.py`:
- Line 399: The handler currently allows an unbounded number of request.file_ids
which can cause expensive DB work; add an upper bound check (e.g., 20) either in
the Pydantic model StreamChatRequest as a constrained list or add a runtime
guard before the block that handles file_ids in routes.py (the if
request.file_ids and user_id: branch) and raise HTTPException(status_code=400,
detail="Too many file attachments (max 20)") when len(request.file_ids) exceeds
the limit; update any unit tests that exercise large file_id lists accordingly.
In `@autogpt_platform/backend/backend/api/features/workspace/routes.py`:
- Around line 176-179: The handler currently raises fastapi.HTTPException with
status_code=400 for per-file size violations; change it to use the correct 413
status by setting status_code to 413 (or
fastapi.status.HTTP_413_REQUEST_ENTITY_TOO_LARGE) in the raise call so the
file-size check in the route returns "Payload Too Large" instead of Bad Request
(locate the raise in the workspace upload route where fastapi.HTTPException is
raised for file size).
- Around line 186-215: The storage cap check is race-prone because
get_workspace_total_size + manager.write_file are separate; fix by making the
size check and reservation atomic: obtain a per-workspace lock (e.g., async
mutex keyed by workspace.id or a Redis-based distributed lock) before calling
get_workspace_total_size, recheck available space accounting for the incoming
file size, and only then call WorkspaceManager.write_file (or call a new
WorkspaceManager.reserve_and_write(filename, content_size, content) method that
performs the check+write under the same lock/DB transaction); ensure the lock is
released on success or error and return the same HTTP 413 if the recheck fails.
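One way to serialize the check-then-write described above, assuming a single backend process (a multi-instance deployment would need the Redis or DB advisory lock the comment mentions): hold a per-workspace asyncio.Lock across the quota recheck and the write. The `get_usage`/`do_write` callables are placeholders for get_workspace_total_size and WorkspaceManager.write_file, not real APIs.

```python
import asyncio
from collections import defaultdict

# One lock per workspace id; single-process assumption (use Redis/DB locks otherwise).
_workspace_locks: defaultdict[str, asyncio.Lock] = defaultdict(asyncio.Lock)


class QuotaExceeded(Exception):
    pass


async def write_with_quota(
    workspace_id: str,
    size: int,
    limit: int,
    get_usage,   # async () -> int, current bytes used (placeholder)
    do_write,    # async () -> None, performs the actual write (placeholder)
) -> None:
    """Recheck the storage quota and write atomically under a per-workspace lock."""
    async with _workspace_locks[workspace_id]:
        usage = await get_usage()
        if usage + size > limit:
            raise QuotaExceeded(f"workspace {workspace_id} would exceed {limit} bytes")
        await do_write()
```

Because both the recheck and the write happen under the same lock, two concurrent uploads can no longer both pass the check before either commits.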
In `@autogpt_platform/frontend/src/app/`(platform)/copilot/useCopilotPage.ts:
- Around line 340-359: The upload workflow can reject (uploadFiles) and
currently lacks explicit .catch handling, causing uncaught promise rejections
and potentially dropping the user's send; wrap the uploadFiles promise paths in
try/catch or add a .catch to handle errors, ensure setIsUploadingFiles(false)
runs in finally, and on failure show the toast error and still call sendMessage
if appropriate; specifically update the branches around uploadFiles(...) in
useCopilotPage (references: uploadFiles, buildFileParts, sendMessage,
setIsUploadingFiles, toast) to catch network/JSON errors, log/report them via
toast, and only proceed to buildFileParts/sendMessage when uploads succeeded.
In `@autogpt_platform/frontend/src/app/api/workspace/files/upload/route.ts`:
- Around line 31-34: The code currently returns a NextResponse with a hardcoded
"Content-Type": "application/json" when returning errorText (see NextResponse
and response variables), which can mislabel non-JSON responses; change the
response creation to preserve the backend Content-Type by reading
response.headers.get('content-type') and using that value (or fall back to
'text/plain' if missing) instead of always "application/json", or simply copy
through response.headers when constructing the NextResponse so the original
content type and any other relevant headers are preserved.
- Around line 19-21: The code currently forwards the sentinel string returned by
getServerAuthToken() as a bearer token; update the logic around the token
variable (from getServerAuthToken()) and the headers object so you only set
headers["Authorization"] = `Bearer ${token}` when token is a real value (e.g.,
token is truthy and token !== "no-token-found"); otherwise do not add the
Authorization header (or handle missing auth explicitly) in the upload route
handler in route.ts.
---
Outside diff comments:
In
`@autogpt_platform/frontend/src/app/`(platform)/copilot/components/EmptySession/EmptySession.tsx:
- Around line 79-97: Quick-action buttons can still be triggered while files
upload; update the Button disable logic and handler to respect isUploadingFiles:
add isUploadingFiles to the disabled expression for the Buttons rendered in the
quickActions map (alongside isCreatingSession and loadingAction !== null) and
set aria-busy when isUploadingFiles is true; additionally, guard
handleQuickActionClick (the click handler) to no-op early if isUploadingFiles is
true to prevent overlapping onSend calls.
In
`@autogpt_platform/frontend/src/app/api/chat/sessions/`[sessionId]/stream/route.ts:
- Around line 27-35: Validate the incoming file_ids after parsing the request
JSON in the route handler: ensure file_ids is either undefined, null, or an
array of strings (string[]); if any element is not a string or file_ids is
another type, return a 400 JSON response like the existing message block. Apply
the same runtime shape check wherever file_ids is read/forwarded (the initial
parse block around the const { message, is_user_message, context, file_ids } =
body and the later usage around lines 59-64) so invalid payloads are rejected at
the proxy boundary before forwarding to the backend.
---
Nitpick comments:
In `@autogpt_platform/backend/backend/api/features/chat/routes.py`:
- Around line 406-407: The loop that checks "if wf is None: continue" should log
a warning instead of silently skipping so missing attachments can be diagnosed;
update the code in routes.py at the block where "wf" and "file_id" are validated
to emit a warning-level log (using the module's existing logger or Python
logging) that includes the file_id and relevant context (workspace id, user id
or request id if available) and then continue, so behavior stays graceful but
misconfigurations are visible in logs.
- Around line 398-418: The current loop calls get_workspace_file for each id
causing N+1 DB queries; modify backend.data.workspace to add a batch fetch
function (e.g., get_workspace_files(file_ids, workspace_id)) that returns all
WorkspaceFile objects in one query, then in the route replace the per-id awaits
with a single call to get_workspace_files(workspace_id, request.file_ids) and
iterate the returned list to build file_lines (still using
get_or_create_workspace to obtain workspace.id); ensure the batch method filters
by workspaceId and isDeleted so behavior matches get_workspace_file.
In `@autogpt_platform/backend/backend/api/features/workspace/routes.py`:
- Around line 171-182: The current upload loop double-buffers large files by
appending chunks to chunks:list[bytes] and then calling b"".join(chunks); change
it to accumulate into a single mutable buffer (e.g., a bytearray) to avoid the
extra full-size copy: replace chunks list with a bytearray buffer, extend it on
each await file.read(...) iteration, keep the total_size check against
max_file_bytes, and then use bytes(buffer) or pass the bytearray as needed for
the final content variable (replace content = b"".join(chunks) with
converting/using the bytearray); reference variables/functions: chunks,
total_size, max_file_bytes/config.max_file_size_mb, file.read, and content to
locate where to modify.
In
`@autogpt_platform/frontend/src/app/`(platform)/copilot/components/ChatInput/ChatInput.tsx:
- Around line 9-11: Remove the unnecessary useCallback usage and arrow-handler
style in the ChatInput component: delete the import of useCallback and replace
any const <name> = useCallback(...) or const <name> = (...) => {...} handler
definitions (e.g. the input change handler, submit handler, attach/file handlers
referenced in the component and noted around the other occurrences) with plain
function declarations like function handleX(...) {...}; ensure their parameter
types (ChangeEvent, etc.) and references in JSX remain unchanged so behavior is
preserved.
In `@autogpt_platform/frontend/src/app/api/openapi.json`:
- Around line 12141-12147: The schema for the file_ids property allows arrays of
strings but doesn't prevent duplicate entries; update the OpenAPI schema for
"file_ids" (the anyOf branch with "type": "array" and "items":
{"type":"string"}) to include "uniqueItems": true so arrays must contain unique
string IDs, keeping the existing null branch intact.
- Around line 6530-6552: Add explicit non-2xx responses to the upload endpoint's
responses block (the one that currently returns "$ref":
"#/components/schemas/backend__api__features__workspace__routes__UploadFileResponse")
so frontend clients can handle size/type/storage/scan errors deterministically:
include HTTP 413 (Payload Too Large) with a schema ref like
"#/components/schemas/UploadFileTooLarge", HTTP 415 (Unsupported Media Type)
with "#/components/schemas/UnsupportedFileTypeError", HTTP 507 (Insufficient
Storage) with "#/components/schemas/InsufficientStorageError", and a
scan/quarantine error (e.g., 422 or 409) with
"#/components/schemas/FileScanQuarantinedError"; for each response add a
descriptive "description" and "content": {"application/json": {"schema":
{"$ref": "..."} }} and keep the existing 200 and 401/422 entries referencing
UploadFileResponse and HTTPValidationError.
Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled
You can enable these sources in your CodeRabbit configuration.
Files selected for processing (17)

- autogpt_platform/backend/backend/api/features/chat/routes.py
- autogpt_platform/backend/backend/api/features/workspace/routes.py
- autogpt_platform/backend/backend/copilot/executor/utils.py
- autogpt_platform/backend/backend/util/settings.py
- autogpt_platform/frontend/src/app/(platform)/copilot/CopilotPage.tsx
- autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatContainer/ChatContainer.tsx
- autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatInput/ChatInput.tsx
- autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatInput/components/AttachmentMenu.tsx
- autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatInput/components/FileChips.tsx
- autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatInput/useChatInput.ts
- autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsx
- autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/components/MessageAttachments.tsx
- autogpt_platform/frontend/src/app/(platform)/copilot/components/EmptySession/EmptySession.tsx
- autogpt_platform/frontend/src/app/(platform)/copilot/useCopilotPage.ts
- autogpt_platform/frontend/src/app/api/chat/sessions/[sessionId]/stream/route.ts
- autogpt_platform/frontend/src/app/api/openapi.json
- autogpt_platform/frontend/src/app/api/workspace/files/upload/route.ts
autogpt_platform/backend/backend/api/features/workspace/routes.py (outdated comment, resolved)
autogpt_platform/frontend/src/app/api/workspace/files/upload/route.ts (outdated comment, resolved)
autogpt-reviewer
left a comment
PR #12220 β feat(platform): Add file upload to copilot chat [SECRT-1788]
Author: ntindle | Requested by: ntindle | Files: 17 changed (+775/-86)
Key files: workspace/routes.py (+128), useCopilotPage.ts (+114), ChatInput.tsx (+85/-52), AttachmentMenu.tsx (+124), upload/route.ts (+49), chat/routes.py (+24), settings.py (+7), executor/utils.py (+6)
Verdict: REQUEST_CHANGES
What This PR Does
Adds file attachment support to copilot chat. Users click a "+" button to select files by category (documents, images, spreadsheets, video, audio), see file chips with upload progress, and sent messages display file attachment pills. Backend adds upload endpoint with ClamAV virus scanning, per-file size limits, and per-user storage caps. File metadata is enriched into the LLM's context so the copilot can reference uploaded files.
Specialist Findings
Security: file_ids list is unbounded (DoS via DB query storm); (3) no server-side MIME type validation (the frontend allowlist is cosmetic); (4) filename not sanitized for path components; (5) ClamAV silently passes all uploads when disabled, which needs a health assertion.
Architecture ✓: Clean design. Upload is separated from chat, with proper workspace isolation and extensible categories. Minor: duplicate UploadFileResponse schema names in OpenAPI, file_ids threaded through RabbitMQ but unused by the consumer (dead payload), and the URL regex to extract file_id is fragile.
Performance: (1) get_workspace_total_size() does a full table scan plus Python sum instead of SQL SUM(), and runs on every upload; (2) double ClamAV scan per upload (route and write_file both call scan_content_safe); (3) sequential frontend uploads (no parallelism); (4) ~3x file-size memory per upload (chunks + joined + ClamAV buffer); (5) N+1 DB queries for file_ids enrichment; (6) the Next.js proxy buffers the entire upload in memory.
Testing 🔴: Zero new tests for 775 lines of security-sensitive code. No backend tests for the upload endpoint, size limits, quota enforcement, virus scanning, or cross-user auth. No frontend component tests or stories for 3 new components. No e2e test for the file upload flow. The PR's test plan is manual-only checkboxes (all unchecked).
Quality (see chat/routes.py:400): Config() instantiated per-request, magic numbers (chunk size, warning threshold), file_ids not UUID-validated, inconsistent error detail types (dict vs string).
Product ✓: The flow is intuitive and follows established patterns. Good: file-only messages work, error resilience, send-button state. Should fix: no client-side file count/size limits, no per-file upload progress, .csv appears in both Documents and Spreadsheets, no file size on chips, virus detection error is generic.
Blockers (3)

- 🔴 Zero test coverage: 775 lines of security-sensitive upload code (virus scanning, quota enforcement, auth scoping) with zero tests. At minimum: upload happy path, size limit enforcement, storage quota enforcement, virus scan mock, cross-user `file_ids` auth test. [workspace/routes.py, chat/routes.py, all 3 new frontend components]
- 🔴 TOCTOU race on storage quota: `get_workspace_total_size()` reads current usage, then `write_file()` writes. Concurrent uploads both pass the check before either commits. Use an advisory lock or an atomic DB constraint. [workspace/routes.py:180-193]
- 🔴 Unbounded `file_ids` list: no length validation or UUID format check. An attacker can send thousands of IDs, each triggering a DB query. Add `max_length=20` and UUID pattern validation. [chat/routes.py:82, stream request model]
Should Fix (Follow-up OK) (8)
1. O(n) storage quota query – `get_workspace_total_size()` loads all file rows into Python to sum. Use a SQL `SUM()` aggregate. [workspace.py:326-337]
2. Double virus scan – Both the `upload_file` route and `write_file` call `scan_content_safe`. Remove one or pass a `skip_scan` flag. [workspace/routes.py:211, workspace.py:187]
3. Sequential frontend uploads – Files upload one at a time in a `for` loop. Use `Promise.allSettled` with a concurrency limit. [useCopilotPage.ts:~350-370]
4. No server-side MIME validation – Upload accepts any file type. The frontend allowlist is client-only. Add a server-side allowlist or `python-magic` content check. [workspace/routes.py]
5. Filename path components not stripped – `../../etc/passwd` creates confusing virtual paths. Apply `os.path.basename()`. [workspace/routes.py:206]
6. N+1 queries for file_ids enrichment – Each `file_id` is a separate DB query. Use a batch `find_many(where={"id": {"in": file_ids}})`. [chat/routes.py:397-415]
7. Next.js proxy buffers entire upload – `request.formData()` parses the full upload into memory. Consider streaming `request.body` directly. [upload/route.ts]
8. `"no-token-found"` sentinel forwarded as bearer token – CodeRabbit flagged: when auth fails, the string literal is sent as the Authorization header. [upload/route.ts]
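Item 5 in particular is nearly a one-liner. A sketch of the idea (the function name is illustrative; the PR would apply this in the upload route before building the virtual path):

```python
import ntpath


def safe_filename(raw: str) -> str:
    """Reduce an uploaded filename to its final path component.

    ntpath.basename splits on both '/' and '\\', so traversal attempts like
    '../../etc/passwd' or '..\\..\\secret.txt' are reduced to the bare
    filename regardless of the client's OS conventions.
    """
    name = ntpath.basename(raw).strip()
    return name or "unnamed"
```

`os.path.basename` alone (as the review suggests) is enough on a POSIX server for POSIX paths; `ntpath` is used here only to also neutralize backslash-separated input from Windows clients.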
Nice to Have (5)
- Client-side file count/size limits with user feedback before upload
- Per-file upload progress indication (not just all-or-nothing spinner)
- Storage usage surfaced in UI (endpoint exists, frontend doesn't call it)
- Storybook stories for
AttachmentMenu,FileChips,MessageAttachments - Remove
.csvfrom Documents category (already in Spreadsheets)
Risk Assessment
Merge risk: MEDIUM – Security-sensitive feature (file upload + virus scanning) with zero tests and a race condition on quota enforcement. The core design is solid and auth is properly scoped, but the gaps need to be closed before merge.
Rollback: EASY – New endpoints and UI components, no database migrations. The feature is additive; removing it doesn't break existing chat.
QA Evidence
11 screenshots captured during live testing:
- Landing page, signup flow, dashboard, copilot chat interface
- Attachment "+" button visible and functional
- Popover with 5 file categories working
- File chips rendering with remove buttons
- Upload attempted (GCS unavailable in review env – infrastructure, not code)
CI Status (at review time)
✅ lint, integration_test, CodeQL, scope, size, license, snyk
⏳ test (3.11/3.12/3.13), types, e2e, Cursor Bugbot – still pending
Merge Conflicts
5 PRs with confirmed conflicts: #12074 (17 files), #12163, #12207 (ChatInput.tsx, 4 conflicts), #12203, #12210
@ntindle The feature design is excellent – clean separation, solid auth, good UX. Three items need attention before merge: (1) add tests for the upload endpoint and quota enforcement, (2) fix the TOCTOU race on storage quota, and (3) add validation/limits on the file_ids parameter. The rest are follow-up items. Close to mergeable with one more pass. 🚀
- B1: Add file_ids validation (max 20, UUID format filtering) on StreamChatRequest
- B2: Add post-write storage quota check to eliminate TOCTOU race condition
- B3: Add backend tests for upload routes and chat file_ids enrichment
- SF1: Query Prisma directly in get_workspace_total_size (skip Pydantic conversion)
- SF3: Parallelize frontend file uploads with Promise.allSettled
- SF4: Add server-side file extension allowlist (415 for disallowed types)
- SF5: Sanitize filenames with os.path.basename to strip directory components
- SF6: Replace N+1 file_ids loop with batch find_many query
- SF8: Guard against "no-token-found" sentinel in upload proxy route
- N1: Add client-side file count (10) and size (100MB) limits
- N5: Remove duplicate .csv from Documents category in AttachmentMenu
- Redesign file attachment display with ContentCard components

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
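For context on B2: the pre-check alone is racy because two concurrent uploads can both read a total below the limit before either write lands. A write-then-recheck with a compensating delete closes that window. Sketched here against an in-memory stand-in, since the real code goes through Prisma and GCS:

```python
import threading


class QuotaedStore:
    """In-memory stand-in for the workspace file table, illustrating the
    post-write quota check from B2 (not the actual Prisma/GCS code)."""

    def __init__(self, limit_bytes: int) -> None:
        self.limit_bytes = limit_bytes
        self.files: dict[str, int] = {}
        self._lock = threading.Lock()

    def total_size(self) -> int:
        with self._lock:
            return sum(self.files.values())

    def upload(self, file_id: str, size_bytes: int) -> None:
        # Write first, then re-check the aggregate. If two concurrent uploads
        # both passed a pre-check, the one that pushes usage over the limit
        # rolls itself back instead of silently exceeding the quota.
        with self._lock:
            self.files[file_id] = size_bytes
            if sum(self.files.values()) > self.limit_bytes:
                del self.files[file_id]  # compensating delete
                raise RuntimeError("storage quota exceeded")
```

In the real system the "lock" is whatever makes write + recheck atomic (a DB transaction, advisory lock, or CHECK-style constraint); the in-process lock here just models that atomicity.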
…Gravitas/AutoGPT into ntindle/file-upload
Review Feedback Addressed
All items from the review have been addressed in commit 41eccad:
- Blockers
- Should Fix
- Nice to Have

Also redesigned file attachment display in chat messages using ContentCard components.
Actionable comments posted: 5
♻️ Duplicate comments (1)
autogpt_platform/frontend/src/app/(platform)/copilot/useCopilotPage.ts (1)
333-351: ⚠️ Potential issue | 🟠 Major
Upload-failure handling still drops send intent and can bubble errors.
Line 448 throws when all uploads fail, and lines 344-351 return early after pending state is cleared. This can drop the user's message entirely on full upload failure.
📝 Suggested hardening
```diff
 if (files.length > 0) {
   setIsUploadingFiles(true);
   void uploadFiles(files, sessionId)
     .then((uploaded) => {
       if (uploaded.length === 0) {
         toast({
           title: "File upload failed",
           description: "Could not upload any files. Please try again.",
           variant: "destructive",
         });
-        return;
+        sendMessage({ text: msg });
+        return;
       }
       const fileParts = buildFileParts(uploaded);
       sendMessage({
         text: msg,
         files: fileParts.length > 0 ? fileParts : undefined,
       });
     })
+    .catch(() => {
+      toast({
+        title: "File upload failed",
+        description: "Could not upload files. Sending message without attachments.",
+        variant: "destructive",
+      });
+      sendMessage({ text: msg });
+    })
     .finally(() => setIsUploadingFiles(false));
@@
   try {
     const uploaded = await uploadFiles(files, sessionId);
     if (uploaded.length === 0) {
-      // All uploads failed - abort send so chips revert to editable
-      throw new Error("All file uploads failed");
+      toast({
+        title: "File upload failed",
+        description: "Could not upload any files. Sending without attachments.",
+        variant: "destructive",
+      });
+      sendMessage({ text: trimmed });
+      return;
     }
```

```bash
#!/bin/bash
# Verify current failure paths in the hook.
FILE='autogpt_platform/frontend/src/app/(platform)/copilot/useCopilotPage.ts'

echo "1) Throw on all-failed uploads:"
rg -n -C2 'throw new Error\("All file uploads failed"\)' "$FILE"
echo
echo "2) Pending branch returns early on zero successful uploads:"
rg -n -C2 'uploaded.length === 0|return;' "$FILE"
echo
echo "3) Pending upload chain currently lacks explicit .catch:"
rg -n -C2 'void uploadFiles\(|\.then\(|\.catch\(|\.finally\(' "$FILE"
```

Also applies to: 364-393, 442-449
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `autogpt_platform/frontend/src/app/(platform)/copilot/useCopilotPage.ts` around lines 333-351: The useEffect handler clears pendingMessage and pendingFilesRef then starts uploadFiles, but on complete failure it toasts and throws, which drops the original send intent and can bubble errors; change the flow in the pending-message branch (the code that uses pendingMessage, pendingFilesRef, setPendingMessage, setIsUploadingFiles, uploadFiles) so that you do not permanently clear the send intent before uploads succeed and you never throw on upload failure: preserve or restore pendingMessage/pendingFilesRef when uploads fail, remove the thrown Error("All file uploads failed"), add a .catch handler on the uploadFiles promise to handle upload errors (show toast, reset isUploadingFiles, and restore pendingMessage/pendingFilesRef to allow retry), and ensure the normal message-send path is invoked when uploads succeed (or when uploads fail but the user should still send the message) so sending is not dropped.
🧹 Nitpick comments (2)
autogpt_platform/backend/backend/data/workspace.py (1)
330-332: Docstring optimization claim is not clearly reflected in the query.
Lines 330-332 say only `sizeBytes` is fetched, but lines 339-341 use a generic `find_many` and then sum in Python. Please switch to an explicit projection/DB aggregation (or relax the docstring claim) to avoid unnecessary row payload/memory work.
In prisma-client-py 0.15.0, what is the recommended way to compute SUM(sizeBytes) for UserWorkspaceFile (server-side aggregate vs select projection), and what does find_many return by default when no select/include is provided?
Also applies to: 339-342
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `autogpt_platform/backend/backend/data/workspace.py` around lines 330-332: The code claims it only fetches sizeBytes but actually calls prisma find_many on UserWorkspaceFile and sums in Python; replace that find_many + Python-sum pattern with a server-side aggregation (use the Prisma client's aggregate/_sum for the UserWorkspaceFile sizeBytes field) to compute SUM(sizeBytes) on the DB side, or, if you must keep a multi-row fetch, at minimum use a select projection that only selects sizeBytes to reduce payload; update the code that calls find_many and the subsequent Python sum accordingly and adjust the docstring to match the chosen approach (reference the UserWorkspaceFile model, the find_many call, and the Python summation logic).

autogpt_platform/frontend/src/app/(platform)/copilot/useCopilotPage.ts (1)
378-379: Add `Sentry.captureException()` for proper exception tracking.
This is manual exception handling with an explicit error path. Replace or supplement `console.error()` with `Sentry.captureException()` to align with the exception-tracking patterns elsewhere in the codebase and the coding guidelines. The toast notification correctly handles user feedback; Sentry will provide structured error reporting for debugging.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `autogpt_platform/frontend/src/app/(platform)/copilot/useCopilotPage.ts` around lines 378-379: In the error path inside useCopilotPage (where console.error("File upload failed:", err) and toast(...) are used), call Sentry.captureException(err) to record the exception (or replace console.error with Sentry.captureException) and ensure `@sentry/browser` or the project Sentry instance is imported/available in this module; keep the existing toast for user feedback but add Sentry.captureException(err) so the failure is tracked in Sentry.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@autogpt_platform/backend/backend/api/features/chat/routes_test.py`:
- Around line 91-93: The test currently only asserts response.status_code != 422
which can hide other failures; update the assertion in
autogpt_platform/backend/backend/api/features/chat/routes_test.py to assert an
explicit successful status (e.g., assert response.status_code == 200) for the
20-file happy path by replacing the loose != 422 check on the response object
with a direct equality check to the expected success code.
In `@autogpt_platform/backend/backend/api/features/chat/routes.py`:
- Around line 404-434: The code validates and scopes file IDs into valid_ids and
uses them to build files_block, but later code still forwards the original
request.file_ids downstream (e.g., into executor payloads), bypassing
sanitization; update all places where request.file_ids is passed (search for
usages of request.file_ids or where executor payloads are constructed after
enrichment) to pass the filtered valid_ids (or an empty list if none) and ensure
any downstream fields (payload keys) are labeled/workspace-scoped the same way
you used for the database query (e.g., use the workspace from
get_or_create_workspace and the valid_ids list) so only sanitized,
workspace-scoped IDs are sent onward.
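The fix this prompt describes amounts to: run one workspace-scoped batch query, then forward only the IDs it returned. A pure-Python sketch of that filtering step, where the dict stands in for the result of a single `find_many(where={"id": {"in": ...}, "workspaceId": ...})` call (names are illustrative):

```python
def scope_file_ids(requested_ids: list[str], workspace_rows: dict[str, dict]) -> list[str]:
    """Return only the requested IDs that the batch query found in the
    caller's workspace, preserving request order and dropping duplicates."""
    seen: set[str] = set()
    valid_ids = []
    for fid in requested_ids:
        if fid in workspace_rows and fid not in seen:
            seen.add(fid)
            valid_ids.append(fid)
    return valid_ids
```

Everything downstream (the "[Attached files]" block and the executor payload) should consume this `valid_ids` list, never the raw `request.file_ids`.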
In `@autogpt_platform/backend/backend/api/features/workspace/routes_test.py`:
- Around line 106-119: The quota tests (e.g.,
test_upload_storage_quota_exceeded) rely on implicit default
Config.max_workspace_storage_mb and are brittle; make them deterministic by
stubbing the config value used by the route. In the test, patch or monkeypatch
the Config (or the symbol used to read max_workspace_storage_mb in
backend.api.features.workspace.routes) to a known value before calling _upload,
then assert against that fixed quota; keep the existing mocks
get_or_create_workspace and get_workspace_total_size intact so behavior is
predictable.
In `@autogpt_platform/backend/backend/api/features/workspace/routes.py`:
- Around line 255-271: The quota math can divide by zero when
config.max_workspace_storage_mb is 0; in the route code around
storage_limit_bytes/current_usage/usage_ratio (use the variables
storage_limit_bytes, current_usage, usage_ratio and the helper
get_workspace_total_size), add a guard that detects storage_limit_bytes == 0 and
return a controlled HTTPException (status_code=413) with the same detail shape
(message, used_bytes=current_usage, limit_bytes=0, used_percent=100 or
appropriate value) before any division occurs, and ensure the later warning
calculation only runs when storage_limit_bytes > 0 so no ZeroDivisionError is
possible.
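The guard described above is easiest to see in isolation. A sketch with an illustrative exception type carrying the same detail fields (`used_bytes`, `limit_bytes`) the route returns:

```python
class OverQuota(Exception):
    """Illustrative stand-in for the route's HTTPException(status_code=413)."""

    def __init__(self, used_bytes: int, limit_bytes: int) -> None:
        super().__init__("storage quota exceeded")
        self.used_bytes = used_bytes
        self.limit_bytes = limit_bytes


def check_quota(current_usage: int, storage_limit_bytes: int, incoming_size: int) -> float:
    """Reject the upload or return the post-upload usage ratio.

    The limit is checked for zero (or negative) before any division, so a
    misconfigured max_workspace_storage_mb = 0 yields a controlled
    413-style error instead of a ZeroDivisionError.
    """
    if storage_limit_bytes <= 0:
        raise OverQuota(used_bytes=current_usage, limit_bytes=0)
    if current_usage + incoming_size > storage_limit_bytes:
        raise OverQuota(used_bytes=current_usage, limit_bytes=storage_limit_bytes)
    return (current_usage + incoming_size) / storage_limit_bytes
```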
In `@autogpt_platform/backend/backend/data/workspace.py`:
- Around line 339-341: The query using UserWorkspaceFile.prisma().find_many
filters only by workspaceId and isDeleted, allowing information disclosure
across workspaces; modify the where clause to also enforce ownership by the
requesting user (e.g., include userId or equivalent current_user_id in the where
filter) or validate ownership first by fetching the Workspace and checking
workspace.userId === current_user_id before listing files; update the call
referencing workspace_id and the calling context that supplies the requesting
user's id (e.g., current_user_id, user.id) so all data access in this file
enforces user ID checks.
---
Duplicate comments:
In `@autogpt_platform/frontend/src/app/(platform)/copilot/useCopilotPage.ts`:
- Around line 333-351: The useEffect handler clears pendingMessage and
pendingFilesRef then starts uploadFiles but on complete-failure it toasts and
throws which drops the original send intent and can bubble errors; change the
flow in the pending-message branch (the code that uses pendingMessage,
pendingFilesRef, setPendingMessage, setIsUploadingFiles, uploadFiles) so that
you do not permanently clear the send intent before uploads succeed and you
never throw on upload failure: preserve or restore
pendingMessage/pendingFilesRef when uploads fail, remove the thrown Error("All
file uploads failed"), add a .catch handler on the uploadFiles promise to
handle/upload errors (show toast, reset isUploadingFiles, and restore
pendingMessage/pendingFilesRef to allow retry), and ensure the normal
message-send path is invoked when uploads succeed (or when uploads fail but user
should still send message) so sending is not dropped.
---
Nitpick comments:
In `@autogpt_platform/backend/backend/data/workspace.py`:
- Around line 330-332: The code claims it only fetches sizeBytes but actually
calls prisma find_many on UserWorkspaceFile and sums in Python; replace that
find_many+Python-sum pattern with a server-side aggregation (use Prisma client's
aggregate/_sum for the UserWorkspaceFile sizeBytes field) to compute
SUM(sizeBytes) on the DB side, or if you must keep a multi-row fetch, at minimum
use a select projection that only selects sizeBytes to reduce payload; update
the code that calls find_many and the subsequent Python sum accordingly and
adjust the docstring to match the chosen approach (reference the
UserWorkspaceFile model, the find_many call, and the Python summation logic).
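The shape of the server-side aggregate is plain SQL. It is demonstrated here against an in-memory SQLite table standing in for the Postgres/Prisma setup; in the real code this would be the Prisma client's aggregate support or a raw query, which should be verified against the installed prisma-client-py version:

```python
import sqlite3


def workspace_total_size(conn: sqlite3.Connection, workspace_id: str) -> int:
    """SUM(sizeBytes) computed by the database instead of loading every
    row into Python and summing there."""
    row = conn.execute(
        "SELECT COALESCE(SUM(sizeBytes), 0) FROM UserWorkspaceFile "
        "WHERE workspaceId = ? AND isDeleted = 0",
        (workspace_id,),
    ).fetchone()
    return row[0]


# Minimal schema so the sketch is runnable end to end.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE UserWorkspaceFile "
    "(id TEXT, workspaceId TEXT, sizeBytes INTEGER, isDeleted INTEGER)"
)
conn.executemany(
    "INSERT INTO UserWorkspaceFile VALUES (?, ?, ?, ?)",
    [("f1", "w1", 100, 0), ("f2", "w1", 250, 0), ("f3", "w1", 999, 1), ("f4", "w2", 7, 0)],
)
```

`COALESCE(..., 0)` keeps the empty-workspace case an integer `0` rather than `NULL`, so callers don't need a special case.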
In `@autogpt_platform/frontend/src/app/(platform)/copilot/useCopilotPage.ts`:
- Around line 378-379: In the error path inside useCopilotPage (where
console.error("File upload failed:", err) and toast(...) are used), call
Sentry.captureException(err) to record the exception (or replace console.error
with Sentry.captureException) and ensure `@sentry/browser` or the project Sentry
instance is imported/available in this module; keep the existing toast for user
feedback but add Sentry.captureException(err) so the failure is tracked in
Sentry.
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled
You can enable these sources in your CodeRabbit configuration.
📁 Files selected for processing (10)
- autogpt_platform/backend/backend/api/features/chat/routes.py
- autogpt_platform/backend/backend/api/features/chat/routes_test.py
- autogpt_platform/backend/backend/api/features/workspace/routes.py
- autogpt_platform/backend/backend/api/features/workspace/routes_test.py
- autogpt_platform/backend/backend/data/workspace.py
- autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatInput/components/AttachmentMenu.tsx
- autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsx
- autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/components/MessageAttachments.tsx
- autogpt_platform/frontend/src/app/(platform)/copilot/useCopilotPage.ts
- autogpt_platform/frontend/src/app/api/workspace/files/upload/route.ts
🚧 Files skipped from review as they are similar to previous changes (4)
- autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/components/MessageAttachments.tsx
- autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatMessagesContainer/ChatMessagesContainer.tsx
- autogpt_platform/frontend/src/app/api/workspace/files/upload/route.ts
- autogpt_platform/frontend/src/app/(platform)/copilot/components/ChatInput/components/AttachmentMenu.tsx
📋 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (8)
- GitHub Check: types
- GitHub Check: Seer Code Review
- GitHub Check: Cursor Bugbot
- GitHub Check: end-to-end tests
- GitHub Check: test (3.12)
- GitHub Check: test (3.11)
- GitHub Check: test (3.13)
- GitHub Check: Check PR Status
🧰 Additional context used
📂 Path-based instructions (20)
autogpt_platform/backend/**/*.py
π CodeRabbit inference engine (.github/copilot-instructions.md)
autogpt_platform/backend/**/*.py: Use Python 3.11 (required; managed by Poetry via pyproject.toml) for backend development
Always run 'poetry run format' (Black + isort) before linting in backend development
Always run 'poetry run lint' (ruff) after formatting in backend development
Files:
- autogpt_platform/backend/backend/api/features/chat/routes_test.py
- autogpt_platform/backend/backend/api/features/workspace/routes_test.py
- autogpt_platform/backend/backend/data/workspace.py
- autogpt_platform/backend/backend/api/features/chat/routes.py
- autogpt_platform/backend/backend/api/features/workspace/routes.py
autogpt_platform/backend/backend/api/features/**/*.py
π CodeRabbit inference engine (.github/copilot-instructions.md)
Update routes in '/backend/backend/api/features/' and add/update Pydantic models in the same directory for API development
When modifying API routes, update corresponding Pydantic models in the same directory and write tests alongside the route file
Files:
- autogpt_platform/backend/backend/api/features/chat/routes_test.py
- autogpt_platform/backend/backend/api/features/workspace/routes_test.py
- autogpt_platform/backend/backend/api/features/chat/routes.py
- autogpt_platform/backend/backend/api/features/workspace/routes.py
autogpt_platform/backend/**/*.{py,txt}
π CodeRabbit inference engine (autogpt_platform/backend/CLAUDE.md)
Use the `poetry run` prefix for all Python commands, including testing, linting, formatting, and migrations
Files:
- autogpt_platform/backend/backend/api/features/chat/routes_test.py
- autogpt_platform/backend/backend/api/features/workspace/routes_test.py
- autogpt_platform/backend/backend/data/workspace.py
- autogpt_platform/backend/backend/api/features/chat/routes.py
- autogpt_platform/backend/backend/api/features/workspace/routes.py
autogpt_platform/backend/**/*_test.py
π CodeRabbit inference engine (autogpt_platform/backend/CLAUDE.md)
`autogpt_platform/backend/**/*_test.py`: Always review snapshot changes with `git diff` before committing when updating snapshots with `poetry run pytest --snapshot-update`
Use pytest with snapshot testing for API responses in test files
Colocate test files with source files using the `*_test.py` naming convention
Files:
- autogpt_platform/backend/backend/api/features/chat/routes_test.py
- autogpt_platform/backend/backend/api/features/workspace/routes_test.py
autogpt_platform/backend/backend/api/**/*.py
π CodeRabbit inference engine (autogpt_platform/backend/CLAUDE.md)
autogpt_platform/backend/backend/api/**/*.py: Use FastAPI for building REST and WebSocket endpoints
Use JWT-based authentication with Supabase integration
Files:
- autogpt_platform/backend/backend/api/features/chat/routes_test.py
- autogpt_platform/backend/backend/api/features/workspace/routes_test.py
- autogpt_platform/backend/backend/api/features/chat/routes.py
- autogpt_platform/backend/backend/api/features/workspace/routes.py
autogpt_platform/backend/backend/**/*.py
π CodeRabbit inference engine (autogpt_platform/backend/CLAUDE.md)
Use Prisma ORM for database operations in PostgreSQL with pgvector for embeddings
Files:
- autogpt_platform/backend/backend/api/features/chat/routes_test.py
- autogpt_platform/backend/backend/api/features/workspace/routes_test.py
- autogpt_platform/backend/backend/data/workspace.py
- autogpt_platform/backend/backend/api/features/chat/routes.py
- autogpt_platform/backend/backend/api/features/workspace/routes.py
autogpt_platform/**/*.py
π CodeRabbit inference engine (AGENTS.md)
Format Python code with `poetry run format`
Files:
- autogpt_platform/backend/backend/api/features/chat/routes_test.py
- autogpt_platform/backend/backend/api/features/workspace/routes_test.py
- autogpt_platform/backend/backend/data/workspace.py
- autogpt_platform/backend/backend/api/features/chat/routes.py
- autogpt_platform/backend/backend/api/features/workspace/routes.py
autogpt_platform/backend/**/*test*.py
π CodeRabbit inference engine (AGENTS.md)
Run `poetry run test` for backend testing (runs pytest with Docker-based postgres + prisma)
Files:
- autogpt_platform/backend/backend/api/features/chat/routes_test.py
- autogpt_platform/backend/backend/api/features/workspace/routes_test.py
autogpt_platform/frontend/**/*.{ts,tsx,js,jsx}
π CodeRabbit inference engine (.github/copilot-instructions.md)
autogpt_platform/frontend/**/*.{ts,tsx,js,jsx}: Use Node.js 21+ with pnpm package manager for frontend development
Always run 'pnpm format' for formatting and linting code in frontend development
`autogpt_platform/frontend/**/*.{ts,tsx,js,jsx}`: Run `pnpm format` to auto-fix formatting issues before completing work
Run `pnpm lint` to check for lint errors and fix any that appear before completing work
Files:
autogpt_platform/frontend/src/app/(platform)/copilot/useCopilotPage.ts
autogpt_platform/frontend/**/*.{tsx,ts}
π CodeRabbit inference engine (.github/copilot-instructions.md)
autogpt_platform/frontend/**/*.{tsx,ts}: Use function declarations for components and handlers (not arrow functions) in React components
Only use arrow functions for small inline lambdas (map, filter, etc.) in React components
Use PascalCase for component names and camelCase with 'use' prefix for hook names in React
Use Tailwind CSS utilities only for styling in frontend components
Use design system components from 'src/components/' (atoms, molecules, organisms) in frontend development
Never use 'src/components/__legacy__/' in frontend code
Only use Phosphor Icons (@phosphor-icons/react) for icons in frontend components
Use generated API hooks from '@/app/api/__generated__/endpoints/' instead of deprecated 'BackendAPI' or 'src/lib/autogpt-server-api/'
Use React Query for server state (via generated hooks) in frontend development
Default to client components ('use client') in Next.js; only use server components for SEO or extreme TTFB needs
Use '<ErrorCard />' component for rendering errors in frontend UI; use toast notifications for mutation errors; use 'Sentry.captureException()' for manual exceptions
Separate render logic from data/behavior in React components; keep comments minimal (code should be self-documenting)
Files:
autogpt_platform/frontend/src/app/(platform)/copilot/useCopilotPage.ts
autogpt_platform/frontend/**/*.{ts,tsx}
π CodeRabbit inference engine (.github/copilot-instructions.md)
autogpt_platform/frontend/**/*.{ts,tsx}: No barrel files or 'index.ts' re-exports in frontend code
Regenerate API hooks with 'pnpm generate:api' after backend OpenAPI spec changes in frontend development
Run `pnpm types` to check for type errors and fix any that appear before completing work
Files:
autogpt_platform/frontend/src/app/(platform)/copilot/useCopilotPage.ts
autogpt_platform/frontend/**/*.{js,jsx,ts,tsx}
π CodeRabbit inference engine (AGENTS.md)
`autogpt_platform/frontend/**/*.{js,jsx,ts,tsx}`: Format frontend code using `pnpm format`
Never use components from `src/components/__legacy__/*`
Files:
autogpt_platform/frontend/src/app/(platform)/copilot/useCopilotPage.ts
autogpt_platform/frontend/src/**/*.{ts,tsx}
π CodeRabbit inference engine (AGENTS.md)
`autogpt_platform/frontend/src/**/*.{ts,tsx}`: Structure components as `ComponentName/ComponentName.tsx` + `useComponentName.ts` + `helpers.ts` and use design system components from `src/components/` (atoms, molecules, organisms)
Use generated API hooks from `@/app/api/__generated__/endpoints/` with pattern `use{Method}{Version}{OperationName}` and regenerate with `pnpm generate:api`
Use function declarations (not arrow functions) for components and handlers
Separate render logic from business logic with component.tsx + useComponent.ts + helpers.ts structure
Colocate state when possible, avoid creating large components, use sub-components in a local `components/` folder
Avoid large hooks, abstract logic into `helpers.ts` files when sensible
Use arrow functions only for callbacks, not for component declarations
Avoid comments at all times unless the code is very complex
Do not use `useCallback` or `useMemo` unless asked to optimize a given function
autogpt_platform/frontend/src/**/*.{ts,tsx}: Use function declarations (not arrow functions) for components and handlers
Use type-safe generated API hooks via Orval + React Query for data fetching
Use React Query for server state management and co-locate UI state in components/hooks
Separate render logic (`.tsx`) from business logic (`use*.ts` hooks)
Use only shadcn/ui (Radix UI primitives) with Tailwind CSS for UI components
Use Phosphor Icons only for all icon implementations
Use ErrorCard component for render errors, toast for mutations, and Sentry for exceptions
Use design system components from `src/components/` (atoms, molecules, organisms)
Never use `src/components/__legacy__/*` components
Use generated API hooks from `@/app/api/__generated__/endpoints/` with pattern `use{Method}{Version}{OperationName}`
Use Tailwind CSS only for styling with design tokens
Do not use `useCallback` or `useMemo` unless asked to optimize a specific function
Never type with `any` unless a variable/attribute can actually be of any type
Files:
autogpt_platform/frontend/src/app/(platform)/copilot/useCopilotPage.ts
autogpt_platform/frontend/**/*.{js,jsx,ts,tsx,css}
π CodeRabbit inference engine (AGENTS.md)
Use Tailwind CSS only for styling, use design tokens, and use Phosphor Icons only
Files:
autogpt_platform/frontend/src/app/(platform)/copilot/useCopilotPage.ts
autogpt_platform/frontend/src/**/*.ts
π CodeRabbit inference engine (AGENTS.md)
Do not type hook returns; let TypeScript infer as much as possible
Files:
autogpt_platform/frontend/src/app/(platform)/copilot/useCopilotPage.ts
autogpt_platform/**/*.{ts,tsx}
π CodeRabbit inference engine (AGENTS.md)
Never type with `any`; if no types are available, use `unknown`
Files:
autogpt_platform/frontend/src/app/(platform)/copilot/useCopilotPage.ts
autogpt_platform/frontend/src/**/*.{ts,tsx,js,jsx}
π CodeRabbit inference engine (autogpt_platform/frontend/CLAUDE.md)
Fully capitalize acronyms in symbols, e.g. `graphID`, `useBackendAPI`
Files:
autogpt_platform/frontend/src/app/(platform)/copilot/useCopilotPage.ts
autogpt_platform/frontend/src/**/use*.ts
π CodeRabbit inference engine (autogpt_platform/frontend/CLAUDE.md)
`autogpt_platform/frontend/src/**/use*.ts`: Extract component logic into custom hooks grouped by concern, with each hook in its own `.ts` file
Do not type hook returns; let TypeScript infer types as much as possible
Files:
autogpt_platform/frontend/src/app/(platform)/copilot/useCopilotPage.ts
autogpt_platform/backend/backend/data/**/*.py
π CodeRabbit inference engine (.github/copilot-instructions.md)
All data access in backend requires user ID checks; verify this for any 'data/*.py' changes
Files:
autogpt_platform/backend/backend/data/workspace.py
autogpt_platform/**/data/*.py
π CodeRabbit inference engine (AGENTS.md)
For changes touching `data/*.py`, validate user ID checks or explain why not needed
Files:
autogpt_platform/backend/backend/data/workspace.py
π§ Learnings (11)
π Learning: 2026-02-04T16:50:20.508Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: autogpt_platform/backend/CLAUDE.md:0-0
Timestamp: 2026-02-04T16:50:20.508Z
Learning: Applies to autogpt_platform/backend/backend/api/features/**/*.py : When modifying API routes, update corresponding Pydantic models in the same directory and write tests alongside the route file
Applied to files:
- autogpt_platform/backend/backend/api/features/chat/routes_test.py
- autogpt_platform/backend/backend/api/features/workspace/routes_test.py
- autogpt_platform/backend/backend/api/features/workspace/routes.py
π Learning: 2026-02-04T16:49:42.490Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-02-04T16:49:42.490Z
Learning: Applies to autogpt_platform/backend/backend/api/features/**/*.py : Update routes in '/backend/backend/api/features/' and add/update Pydantic models in the same directory for API development
Applied to files:
- autogpt_platform/backend/backend/api/features/chat/routes_test.py
- autogpt_platform/backend/backend/api/features/workspace/routes_test.py
- autogpt_platform/backend/backend/api/features/chat/routes.py
- autogpt_platform/backend/backend/api/features/workspace/routes.py
π Learning: 2026-02-26T17:02:22.448Z
Learnt from: Pwuts
Repo: Significant-Gravitas/AutoGPT PR: 12211
File: .pre-commit-config.yaml:160-179
Timestamp: 2026-02-26T17:02:22.448Z
Learning: Keep the pre-commit hook pattern broad for autogpt_platform/backend to ensure OpenAPI schema changes are captured. Do not narrow to backend/api/ alone, since the generated schema depends on Pydantic models across multiple directories (backend/data/, backend/blocks/, backend/copilot/, backend/integrations/, backend/util/). Narrowing could miss schema changes and cause frontend type desynchronization.
Applied to files:
- autogpt_platform/backend/backend/api/features/chat/routes_test.py
- autogpt_platform/backend/backend/api/features/workspace/routes_test.py
- autogpt_platform/backend/backend/data/workspace.py
- autogpt_platform/backend/backend/api/features/chat/routes.py
- autogpt_platform/backend/backend/api/features/workspace/routes.py
π Learning: 2026-02-26T21:29:44.094Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: autogpt_platform/frontend/CLAUDE.md:0-0
Timestamp: 2026-02-26T21:29:44.094Z
Learning: Applies to autogpt_platform/frontend/**/*.{ts,tsx} : Run `pnpm types` to check for type errors and fix any that appear before completing work
Applied to files:
autogpt_platform/frontend/src/app/(platform)/copilot/useCopilotPage.ts
π Learning: 2026-02-26T21:29:44.094Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: autogpt_platform/frontend/CLAUDE.md:0-0
Timestamp: 2026-02-26T21:29:44.094Z
Learning: Applies to autogpt_platform/frontend/**/*.{ts,tsx,js,jsx} : Run `pnpm lint` to check for lint errors and fix any that appear before completing work
Applied to files:
autogpt_platform/frontend/src/app/(platform)/copilot/useCopilotPage.ts
π Learning: 2026-02-04T16:49:42.490Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-02-04T16:49:42.490Z
Learning: Applies to autogpt_platform/frontend/**/*.{tsx,ts} : Use '<ErrorCard />' component for rendering errors in frontend UI; use toast notifications for mutation errors; use 'Sentry.captureException()' for manual exceptions
Applied to files:
autogpt_platform/frontend/src/app/(platform)/copilot/useCopilotPage.ts
π Learning: 2026-02-26T21:29:44.094Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: autogpt_platform/frontend/CLAUDE.md:0-0
Timestamp: 2026-02-26T21:29:44.094Z
Learning: Applies to autogpt_platform/frontend/src/**/*.{ts,tsx} : Use ErrorCard component for render errors, toast for mutations, and Sentry for exceptions
Applied to files:
autogpt_platform/frontend/src/app/(platform)/copilot/useCopilotPage.ts
π Learning: 2026-02-26T21:29:44.094Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: autogpt_platform/frontend/CLAUDE.md:0-0
Timestamp: 2026-02-26T21:29:44.094Z
Learning: Applies to autogpt_platform/frontend/src/**/*.{ts,tsx} : Use React Query for server state management and co-locate UI state in components/hooks
Applied to files:
autogpt_platform/frontend/src/app/(platform)/copilot/useCopilotPage.ts
π Learning: 2026-02-04T16:49:42.490Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-02-04T16:49:42.490Z
Learning: Applies to autogpt_platform/frontend/**/*.{tsx,ts} : Use React Query for server state (via generated hooks) in frontend development
Applied to files:
autogpt_platform/frontend/src/app/(platform)/copilot/useCopilotPage.ts
π Learning: 2026-02-26T21:29:27.605Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: autogpt_platform/CLAUDE.md:0-0
Timestamp: 2026-02-26T21:29:27.605Z
Learning: Backend is a Python FastAPI server with async support
Applied to files:
autogpt_platform/backend/backend/api/features/chat/routes.py
π Learning: 2026-02-04T16:50:20.508Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: autogpt_platform/backend/CLAUDE.md:0-0
Timestamp: 2026-02-04T16:50:20.508Z
Learning: Applies to autogpt_platform/backend/backend/api/**/*.py : Use FastAPI for building REST and WebSocket endpoints
Applied to files:
autogpt_platform/backend/backend/api/features/chat/routes.py
🧬 Code graph analysis (3)
autogpt_platform/backend/backend/api/features/chat/routes_test.py (5)
autogpt_platform/backend/backend/util/service_test.py (1)
TestClient (125-167)
autogpt_platform/backend/backend/api/features/workspace/routes_test.py (1)
setup_app_auth (39-44)
autogpt_platform/backend/backend/api/conftest.py (1)
mock_jwt_user (20-27)
autogpt_platform/autogpt_libs/autogpt_libs/auth/jwt_utils.py (1)
get_jwt_payload (19-42)
autogpt_platform/backend/backend/api/features/chat/routes.py (1)
create_session (185-211)
autogpt_platform/backend/backend/api/features/workspace/routes_test.py (3)
autogpt_platform/backend/backend/api/conftest.py (1)
mock_jwt_user (20-27)
autogpt_platform/autogpt_libs/autogpt_libs/auth/jwt_utils.py (1)
get_jwt_payload (19-42)
autogpt_platform/backend/backend/util/workspace.py (1)
write_file (151-288)
autogpt_platform/backend/backend/api/features/chat/routes.py (1)
autogpt_platform/backend/backend/data/workspace.py (1)
get_or_create_workspace (74-95)
autogpt_platform/backend/backend/api/features/chat/routes_test.py
autogpt_platform/backend/backend/api/features/workspace/routes.py
...tform/frontend/src/app/(platform)/copilot/components/ChatInput/components/AttachmentMenu.tsx
- Forward only sanitized, workspace-scoped file_ids to executor (not raw input)
- Guard quota math against ZeroDivisionError when storage limit is zero
- Sync backend extension allowlist with frontend (add .tsv, .tiff, .htm, .mkv, .flac, .aac, .m4a, .wma)
- Use explicit 200 assertion in test instead of != 422
- Regenerate OpenAPI schema

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
autogpt_platform/frontend/src/app/(platform)/copilot/useCopilotPage.ts
Outdated
Show resolved
Hide resolved
There was a problem hiding this comment.
Actionable comments posted: 3
🧹 Nitpick comments (2)
autogpt_platform/backend/backend/api/features/workspace/routes.py (1)
245-257: Avoid building a second full in-memory copy of the upload payload. Lines 247-257 store chunks in a list and then join them, which duplicates memory for large files under concurrent uploads. Consider streaming into a bytearray (single buffer growth) or scanning/writing via a temp-file abstraction to reduce peak memory.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@autogpt_platform/backend/backend/api/features/workspace/routes.py` around lines 245-257: the current upload loop builds a list "chunks" and then does b"".join(chunks), duplicating memory; replace that with a single growable buffer (e.g., use a bytearray named "buffer" and call buffer.extend(chunk) while keeping the same size check using "total_size" and "max_file_bytes"), or alternatively stream directly to a temporary file (open a tempfile and write each chunk) and use that file handle instead of creating "content". Update references to "chunks" and "content" accordingly (remove the join and use "buffer" or the temp file path/handle) so peak memory is not duplicated during concurrent large uploads.

autogpt_platform/frontend/src/app/api/openapi.json (1)
12141-12150: Tighten file_ids item validation in the schema.
maxItems: 20 is good, but the item type can also declare UUID semantics to match backend expectations and improve generated client validation.

🧩 Proposed schema refinement
"file_ids": {
  "anyOf": [
    {
-     "items": { "type": "string" },
+     "items": { "type": "string", "format": "uuid" },
      "type": "array",
      "maxItems": 20
    },
    { "type": "null" }
  ],
  "title": "File Ids"

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@autogpt_platform/frontend/src/app/api/openapi.json` around lines 12141-12150: the file_ids schema currently allows any string items; restrict each item to UUID form to match backend expectations by updating the "file_ids" property in openapi.json: change the array "items" descriptor (currently { "type": "string" }) to require a UUID (e.g., add "format": "uuid" or a UUID "pattern") while keeping "type": "array" and "maxItems": 20 and preserving the null alternative; target the "file_ids" schema block so generated clients will validate UUID strings instead of arbitrary text.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@autogpt_platform/backend/backend/api/features/workspace/routes.py`:
- Around line 224-225: The session_id query param is used directly when
composing session paths (e.g., the f"/sessions/{session_id}" usage in
WorkspaceManager) which allows path traversal; update the handler and any
WorkspaceManager methods that build session_path to validate and sanitize
session_id first: enforce a strict whitelist (e.g., only alphanumeric, dash,
underscore), enforce a reasonable max length, reject empty or dot-segments, and
normalize by using os.path.basename or pathlib.Path(...).name (or otherwise
strip any slashes) before interpolating into paths; apply the same
validation/sanitization to all places that accept session_id (including the
other occurrences mentioned) and return a 4xx error for invalid session_id
values.
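The session_id sanitization this comment asks for can be sketched as a strict-whitelist check. This is an illustrative sketch, not the repository's actual validator; the regex, length cap, and function name are assumptions:

```python
import re

# Whitelist: alphanumerics, dash, underscore; bounded length; no dots or
# slashes, so values like "../other-user" can never reach a path join.
_SESSION_ID_RE = re.compile(r"[A-Za-z0-9_-]{1,64}")


def validate_session_id(session_id: str) -> str:
    """Return session_id unchanged if safe to interpolate into a session path,
    otherwise raise (the route would map this to a 4xx response)."""
    if not isinstance(session_id, str) or not _SESSION_ID_RE.fullmatch(session_id):
        raise ValueError("invalid session_id")
    return session_id
```

Using fullmatch (rather than search) is the key design choice: the entire value must match the whitelist, so traversal segments and embedded slashes are rejected outright instead of being stripped.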
- Around line 31-92: The _ALLOWED_EXTENSIONS set currently includes code/config
and archive types that are outside the feature scope; restrict it to only the
product-scoped categories (documents, spreadsheets, images, audio/video) by
removing code/executable-adjacent extensions (.py, .js, .ts, .sh, .bat, etc.)
and archive types (.zip, .tar, .gz, .7z, .rar) and any config-only formats you
deem risky (.json, .xml, .yaml/.yml, .toml, .ini, .cfg), leaving only safe
document/image/spreadsheet/audio/video extensions in _ALLOWED_EXTENSIONS and
update any related validation/tests/docs to reflect the narrowed allowlist.
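A narrowed allowlist check along the lines this comment suggests could look like the sketch below. The extension set is hypothetical (for illustration of the document/spreadsheet/image/audio/video scoping), not the project's actual `_ALLOWED_EXTENSIONS`:

```python
from pathlib import PurePosixPath

# Hypothetical product-scoped allowlist: no code, config, or archive types.
ALLOWED_EXTENSIONS = {
    ".pdf", ".txt", ".md", ".docx",                      # documents
    ".csv", ".tsv", ".xlsx",                             # spreadsheets
    ".png", ".jpg", ".jpeg", ".gif", ".webp", ".tiff",   # images
    ".mp3", ".wav", ".flac", ".aac", ".m4a",             # audio
    ".mp4", ".mov", ".webm", ".mkv",                     # video
}


def extension_allowed(filename: str) -> bool:
    """Case-insensitive check on the final suffix only; files without an
    extension yield an empty suffix and are rejected."""
    return PurePosixPath(filename).suffix.lower() in ALLOWED_EXTENSIONS
```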
In `@autogpt_platform/frontend/src/app/api/openapi.json`:
- Around line 6530-6552: Add the missing HTTP 415 Unsupported Media Type
response to the upload endpoint responses so the OpenAPI contract matches
backend behavior: in the responses object for the operation that currently
references backend__api__features__workspace__routes__UploadFileResponse, add a
"415" entry pointing to a new or existing response component (e.g., "$ref":
"#/components/responses/HTTP415UnsupportedMediaTypeError") or an inline
description/schema describing disallowed extensions; update components/responses
with HTTP415UnsupportedMediaTypeError if needed so the API spec declares
rejection of disallowed file extensions.
---
Nitpick comments:
In `@autogpt_platform/backend/backend/api/features/workspace/routes.py`:
- Around line 245-257: The current upload loop builds a list "chunks" and then
does b"".join(chunks), duplicating memory; replace that with a single growable
buffer (e.g., use a bytearray named "buffer" and call buffer.extend(chunk)"
while keeping the same size check using "total_size" and "max_file_bytes"), or
alternatively stream directly to a temporary file (open a tempfile and write
each chunk) and use that file handle instead of creating "content". Update
references to "chunks" and "content" accordingly (remove the join and use
"buffer" or the temp file path/handle) so peak memory is not duplicated during
concurrent large uploads.
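The single-buffer approach the nitpick describes can be sketched as follows. Names (read_upload, max_file_bytes) are illustrative, and the real route reads chunks asynchronously; the memory behavior is the same:

```python
def read_upload(chunks, max_file_bytes: int) -> bytes:
    """Accumulate upload chunks into one growable buffer.

    Unlike collecting chunks in a list and b"".join()-ing at the end, a
    bytearray grows in place, so peak memory stays near the file size
    instead of roughly doubling at the join step.
    """
    buffer = bytearray()
    for chunk in chunks:
        # Enforce the size cap as we go, before buffering the next chunk.
        if len(buffer) + len(chunk) > max_file_bytes:
            raise ValueError("file exceeds maximum allowed size")
        buffer.extend(chunk)
    return bytes(buffer)
```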
In `@autogpt_platform/frontend/src/app/api/openapi.json`:
- Around line 12141-12150: The file_ids schema currently allows any string
items; restrict each item to UUID form to match backend expectations by updating
the "file_ids" property in openapi.json: change the array "items" descriptor
(currently { "type": "string" }) to require a UUID (e.g., add "format": "uuid"
or a UUID "pattern") while keeping "type":"array" and "maxItems":20 and
preserving the null alternative; target the "file_ids" schema block so generated
clients will validate UUID strings instead of arbitrary text.
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled
You can enable these sources in your CodeRabbit configuration.
π Files selected for processing (4)
autogpt_platform/backend/backend/api/features/chat/routes.py
autogpt_platform/backend/backend/api/features/chat/routes_test.py
autogpt_platform/backend/backend/api/features/workspace/routes.py
autogpt_platform/frontend/src/app/api/openapi.json
π§ Files skipped from review as they are similar to previous changes (1)
- autogpt_platform/backend/backend/api/features/chat/routes_test.py
π Review details
β° Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (10)
- GitHub Check: lint
- GitHub Check: integration_test
- GitHub Check: Seer Code Review
- GitHub Check: types
- GitHub Check: Cursor Bugbot
- GitHub Check: end-to-end tests
- GitHub Check: test (3.11)
- GitHub Check: Check PR Status
- GitHub Check: test (3.13)
- GitHub Check: test (3.12)
π§° Additional context used
π Path-based instructions (6)
autogpt_platform/backend/**/*.py
π CodeRabbit inference engine (.github/copilot-instructions.md)
autogpt_platform/backend/**/*.py: Use Python 3.11 (required; managed by Poetry via pyproject.toml) for backend development
Always run 'poetry run format' (Black + isort) before linting in backend development
Always run 'poetry run lint' (ruff) after formatting in backend development
Files:
autogpt_platform/backend/backend/api/features/chat/routes.py
autogpt_platform/backend/backend/api/features/workspace/routes.py
autogpt_platform/backend/backend/api/features/**/*.py
π CodeRabbit inference engine (.github/copilot-instructions.md)
Update routes in '/backend/backend/api/features/' and add/update Pydantic models in the same directory for API development
When modifying API routes, update corresponding Pydantic models in the same directory and write tests alongside the route file
Files:
autogpt_platform/backend/backend/api/features/chat/routes.py
autogpt_platform/backend/backend/api/features/workspace/routes.py
autogpt_platform/backend/**/*.{py,txt}
π CodeRabbit inference engine (autogpt_platform/backend/CLAUDE.md)
Use the poetry run prefix for all Python commands, including testing, linting, formatting, and migrations
Files:
autogpt_platform/backend/backend/api/features/chat/routes.py
autogpt_platform/backend/backend/api/features/workspace/routes.py
autogpt_platform/backend/backend/api/**/*.py
π CodeRabbit inference engine (autogpt_platform/backend/CLAUDE.md)
autogpt_platform/backend/backend/api/**/*.py: Use FastAPI for building REST and WebSocket endpoints
Use JWT-based authentication with Supabase integration
Files:
autogpt_platform/backend/backend/api/features/chat/routes.py
autogpt_platform/backend/backend/api/features/workspace/routes.py
autogpt_platform/backend/backend/**/*.py
π CodeRabbit inference engine (autogpt_platform/backend/CLAUDE.md)
Use Prisma ORM for database operations in PostgreSQL with pgvector for embeddings
Files:
autogpt_platform/backend/backend/api/features/chat/routes.py
autogpt_platform/backend/backend/api/features/workspace/routes.py
autogpt_platform/**/*.py
π CodeRabbit inference engine (AGENTS.md)
Format Python code with
poetry run format
Files:
autogpt_platform/backend/backend/api/features/chat/routes.py
autogpt_platform/backend/backend/api/features/workspace/routes.py
π§ Learnings (7)
π Learning: 2026-02-04T16:49:42.490Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-02-04T16:49:42.490Z
Learning: Applies to autogpt_platform/backend/backend/api/features/**/*.py : Update routes in '/backend/backend/api/features/' and add/update Pydantic models in the same directory for API development
Applied to files:
autogpt_platform/backend/backend/api/features/chat/routes.py
autogpt_platform/backend/backend/api/features/workspace/routes.py
autogpt_platform/frontend/src/app/api/openapi.json
π Learning: 2026-02-04T16:50:20.508Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: autogpt_platform/backend/CLAUDE.md:0-0
Timestamp: 2026-02-04T16:50:20.508Z
Learning: Applies to autogpt_platform/backend/backend/api/features/**/*.py : When modifying API routes, update corresponding Pydantic models in the same directory and write tests alongside the route file
Applied to files:
autogpt_platform/backend/backend/api/features/chat/routes.py
autogpt_platform/backend/backend/api/features/workspace/routes.py
π Learning: 2026-02-04T16:49:42.490Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-02-04T16:49:42.490Z
Learning: Applies to autogpt_platform/backend/backend/data/**/*.py : All data access in backend requires user ID checks; verify this for any 'data/*.py' changes
Applied to files:
autogpt_platform/backend/backend/api/features/chat/routes.py
π Learning: 2026-02-04T16:50:51.495Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-02-04T16:50:51.495Z
Learning: Applies to autogpt_platform/**/data/*.py : For changes touching `data/*.py`, validate user ID checks or explain why not needed
Applied to files:
autogpt_platform/backend/backend/api/features/chat/routes.py
π Learning: 2026-02-26T21:29:27.605Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: autogpt_platform/CLAUDE.md:0-0
Timestamp: 2026-02-26T21:29:27.605Z
Learning: Backend is a Python FastAPI server with async support
Applied to files:
autogpt_platform/backend/backend/api/features/chat/routes.py
π Learning: 2026-02-04T16:50:20.508Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: autogpt_platform/backend/CLAUDE.md:0-0
Timestamp: 2026-02-04T16:50:20.508Z
Learning: Applies to autogpt_platform/backend/backend/api/**/*.py : Use FastAPI for building REST and WebSocket endpoints
Applied to files:
autogpt_platform/backend/backend/api/features/chat/routes.py
π Learning: 2026-02-26T17:02:22.448Z
Learnt from: Pwuts
Repo: Significant-Gravitas/AutoGPT PR: 12211
File: .pre-commit-config.yaml:160-179
Timestamp: 2026-02-26T17:02:22.448Z
Learning: Keep the pre-commit hook pattern broad for autogpt_platform/backend to ensure OpenAPI schema changes are captured. Do not narrow to backend/api/ alone, since the generated schema depends on Pydantic models across multiple directories (backend/data/, backend/blocks/, backend/copilot/, backend/integrations/, backend/util/). Narrowing could miss schema changes and cause frontend type desynchronization.
Applied to files:
autogpt_platform/backend/backend/api/features/chat/routes.py
autogpt_platform/backend/backend/api/features/workspace/routes.py
π Additional comments (6)
autogpt_platform/backend/backend/api/features/chat/routes.py (2)
86-88: Good request-boundary hardening for file_ids.
StreamChatRequest.file_ids now has a bounded list size, which is a solid guardrail against oversized attachment lists.
404-492: Sanitization and workspace scoping are correctly enforced before downstream propagation. The line 414 UUID filtering, the line 419-425 workspace-scoped batch query, and the line 491 sanitized forwarding close the prior trust-boundary gap cleanly.
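The two-step filtering praised here (UUID validation, then workspace scoping) can be sketched in isolation. Function and parameter names are illustrative, not the actual route code:

```python
import uuid


def sanitize_file_ids(raw_ids, workspace_file_ids):
    """Keep only well-formed UUID strings that exist in the user's workspace.

    Malformed or foreign IDs are silently dropped so the executor only ever
    sees sanitized, workspace-scoped references, never raw client input.
    """
    valid = []
    for fid in raw_ids or []:
        try:
            uuid.UUID(fid)  # rejects non-UUID strings and non-strings
        except (ValueError, AttributeError, TypeError):
            continue
        if fid in workspace_file_ids:  # stands in for the scoped batch query
            valid.append(fid)
    return valid
```

In the real flow the membership check is a database query scoped to the user's workspace; the set here stands in for that query's result.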
autogpt_platform/backend/backend/api/features/workspace/routes.py (1)
294-307: Post-write quota recheck with rollback is a solid concurrency safeguard. The check-after-write and soft-delete path is a good pragmatic fix for quota races under concurrent uploads.
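The check-after-write pattern can be sketched as below. This is a schematic illustration under assumed names (finalize_upload, soft_delete), not the route's actual code:

```python
def finalize_upload(current_usage_bytes: int, limit_bytes: int, soft_delete) -> bool:
    """Post-write quota recheck.

    Two concurrent uploads can each pass a pre-write quota check; re-reading
    usage after the write catches that race. If the cap is now exceeded, the
    just-written file is rolled back via soft delete and the upload fails.
    """
    if current_usage_bytes > limit_bytes:
        soft_delete()
        return False
    return True
```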
autogpt_platform/frontend/src/app/api/openapi.json (3)
2042-2044: Good schema ref migration for legacy upload endpoint. The response now points to the namespaced upload response model, which keeps the contract explicit and avoids ambiguity.
6589-6612: Workspace storage usage endpoint contract looks solid. Path, auth, and response schema wiring are consistent.
7850-7857: Nice addition of supporting workspace schemas. The multipart body schema, storage usage response model, and the two upload response models are coherently defined and referenced.
Also applies to: 11682-11692, 14061-14090
autogpt_platform/backend/backend/api/features/workspace/routes.py
… copilot chat

Remove "(attached files)" placeholder that was rendering in the chat bubble for file-only messages, and switch FILE_LINE_RE to greedy matching so filenames containing parentheses (e.g. "image (1).png") parse correctly.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Cursor Bugbot has reviewed your changes and found 2 potential issues.
Autofix Details
Bugbot Autofix prepared fixes for both issues found in the latest run.
- ✅ Fixed: Proxy rejects file-only sends due to empty message
  - Changed the message guard from !message to message === undefined to allow empty-string messages for file-only sends.
- ✅ Fixed: Pending message effect skips file-only new-session sends
  - Changed the pendingMessage guard from !pendingMessage to pendingMessage === null to allow empty-string pending messages for file-only new-session sends.
Or push these changes by commenting:
@cursor push 411271074f
Preview (411271074f)
diff --git a/autogpt_platform/frontend/src/app/(platform)/copilot/useCopilotPage.ts b/autogpt_platform/frontend/src/app/(platform)/copilot/useCopilotPage.ts
--- a/autogpt_platform/frontend/src/app/(platform)/copilot/useCopilotPage.ts
+++ b/autogpt_platform/frontend/src/app/(platform)/copilot/useCopilotPage.ts
@@ -331,7 +331,7 @@
}, [sessionId, setMessages]);
useEffect(() => {
- if (!sessionId || !pendingMessage) return;
+ if (!sessionId || pendingMessage === null) return;
const msg = pendingMessage;
const files = pendingFilesRef.current;
setPendingMessage(null);
diff --git a/autogpt_platform/frontend/src/app/api/chat/sessions/[sessionId]/stream/route.ts b/autogpt_platform/frontend/src/app/api/chat/sessions/[sessionId]/stream/route.ts
--- a/autogpt_platform/frontend/src/app/api/chat/sessions/[sessionId]/stream/route.ts
+++ b/autogpt_platform/frontend/src/app/api/chat/sessions/[sessionId]/stream/route.ts
@@ -27,7 +27,7 @@
const body = await request.json();
const { message, is_user_message, context, file_ids } = body;
- if (!message) {
+ if (message === undefined) {
return new Response(
JSON.stringify({ error: "Missing message parameter" }),
{ status: 400, headers: { "Content-Type": "application/json" } },
autogpt_platform/frontend/src/app/api/chat/sessions/[sessionId]/stream/route.ts
Show resolved
Hide resolved
This pull request has conflicts with the base branch, please resolve those so we can evaluate the pull request.
…ends

Merge origin/dev, resolving conflicts in ChatMessagesContainer (adopt extracted MessagePartRenderer) and useCopilotPage (keep file-upload state alongside extracted stream/UI-store hooks). Also fix empty-message guards: use `pendingMessage === null` and `message === undefined` so that empty-string text from file-only sends is not rejected.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Conflicts have been resolved! π A maintainer will review the pull request shortly.
…pload test, fix coroutine warning

- useCopilotStream: Extract file_ids from FileUIPart URLs in prepareSendMessagesRequest so the backend receives attached file references (regression from proxy → direct transport refactor).
- routes_test: Add test for uploading files without an extension.
- processor: Capture coroutine before scheduling to prevent "coroutine was never awaited" RuntimeWarning on cleanup.
- routes: Hoist local imports to module level (auto-formatted).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…ayload limit

Upload files directly to the Python backend instead of routing through the Next.js /api/workspace/files/upload proxy. Vercel's serverless function payload limit (4.5 MB) was rejecting files larger than that with FUNCTION_PAYLOAD_TOO_LARGE. Uses the same direct-backend auth pattern as useCopilotStream.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
PR Overlap Detection
This check compares your PR against all other open PRs targeting the same branch to detect potential merge conflicts early.

🔴 Merge Conflicts Detected
The following PRs have been tested and will have merge conflicts if merged after this PR. Consider coordinating with the authors.
🟢 Low Risk – File Overlap Only
These PRs touch the same files but different sections (click to expand)
Summary: 10 conflict(s), 0 medium risk, 8 low risk (out of 18 PRs with file overlap)
Auto-generated on push. Ignores:
Cursor Bugbot has reviewed your changes and found 1 potential issue.
Bugbot Autofix is OFF. To automatically fix reported issues with cloud agents, enable autofix in the Cursor dashboard.
Patch targets changed from source module to usage module since get_or_create_workspace is now a module-level import in routes.py.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
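The reason behind this commit can be demonstrated in isolation: once a function is imported into the using module's namespace (`from ... import name` at module level), mock.patch must target the usage module, not the defining module. The module names demo_source/demo_usage below are invented for illustration:

```python
import sys
import types
from unittest import mock

# Stand-in for the defining module (e.g. backend.data.workspace).
source = types.ModuleType("demo_source")
source.get_thing = lambda: "real"
sys.modules["demo_source"] = source

# Stand-in for the usage module (e.g. routes.py after the refactor);
# this line emulates `from demo_source import get_thing` at module level.
usage = types.ModuleType("demo_usage")
usage.get_thing = source.get_thing
sys.modules["demo_usage"] = usage

# Patching the source module has no effect: the usage module holds its
# own reference to the original function.
with mock.patch("demo_source.get_thing", return_value="patched"):
    via_source = usage.get_thing()

# Patching where the name is used replaces the reference callers see.
with mock.patch("demo_usage.get_thing", return_value="patched"):
    via_usage = usage.get_thing()
```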
Summary
Attachments are sent as AI SDK FileUIPart parts and are readable by the LLM via read_workspace_file.

Resolves: SECRT-1788
Backend
- chat/routes.py: file_ids in stream request, enrich user message with file metadata
- workspace/routes.py: POST /files/upload and GET /storage/usage endpoints
- executor/utils.py: file_ids through CoPilotExecutionEntry and RabbitMQ
- settings.py: max_file_size_mb and max_workspace_storage_mb config

Frontend
- AttachmentMenu.tsx: + button with popover for file category selection
- FileChips.tsx
- MessageAttachments.tsx: FileUIPart in chat bubbles
- upload/route.ts
- ChatInput.tsx
- useCopilotPage.ts: FileUIPart construction, transport file_ids extraction
- ChatMessagesContainer.tsx: MessageAttachments
- ChatContainer.tsx / EmptySession.tsx: isUploadingFiles prop
- useChatInput.ts: canSendEmpty option for file-only sends
- stream/route.ts: file_ids to backend

Test plan
- + button → file chips appear with X buttons
- file_ids present in stream POST body

🤖 Generated with Claude Code
Note
Medium Risk
Adds authenticated file upload/storage-quota enforcement and threads file_ids through the chat streaming path, which affects data handling and storage behavior. Risk is mitigated by UUID/workspace scoping, size limits, and virus scanning but still touches security- and reliability-sensitive upload flows.

Overview
Copilot chat now supports attaching files: the frontend adds drag-and-drop and an attach button, shows selected files as removable chips with an upload-in-progress state, and renders sent attachments using AI SDK FileUIPart with download links.
On send, files are uploaded to the backend (with client-side limits and failure handling) and the chat stream request includes
file_ids; the backend sanitizes/filters IDs, scopes them to the user's workspace, appends an [Attached files] metadata block to the user message for the LLM, and forwards the sanitized IDs through enqueue_copilot_turn. The backend adds
POST /workspace/files/upload (filename sanitization, per-file size limit, ClamAV scan, and per-user storage quota with post-write rollback) plus GET /workspace/storage/usage, introduces max_workspace_storage_mb config, optimizes workspace size calculation, and fixes executor cleanup to avoid un-awaited coroutine warnings; new route tests cover file ID validation and upload quota/scan behaviors.

Written by Cursor Bugbot for commit 8d3b95d. This will update automatically on new commits. Configure here.
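The [Attached files] message enrichment described in the overview can be sketched as follows. The exact block format and the field names (filename, id) are assumptions for illustration; the real route builds its own metadata block:

```python
def enrich_user_message(text: str, files) -> str:
    """Append an attached-files metadata block to the outgoing user message,
    so the LLM knows which workspace files it can open via read_workspace_file."""
    if not files:
        return text
    lines = [f"- {f['filename']} (file_id: {f['id']})" for f in files]
    return f"{text}\n\n[Attached files]\n" + "\n".join(lines)
```

Keeping the block in the message text (rather than a separate channel) means any model that sees the conversation also sees the file references, at the cost of the block counting toward the prompt.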