Comic API

Comic API Documentation

Integrate professional comic generation capabilities into your applications.

How the Comic API works

Three quick steps: prepare the input, send the request, review the result.

Step 01: Prompt + token. Prepare your story prompt and API token.
Step 02: Send request. Submit the generation request to the API.
Step 03: Review panels. Review the finished panels before export.

Authentication

The Comic API uses API tokens to authenticate requests. You can view and manage your API tokens in the API Dashboard.

Authentication to the API is performed via HTTP Bearer authentication. Provide your API token in the Authorization header as shown below; no username or password is required.

Authorization: Bearer YOUR_API_TOKEN
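In code, the same header can be attached to every request. A minimal TypeScript sketch; the helper name and environment variable are illustrative, not part of the API:

```typescript
// Illustrative helper: build the headers every Comic API call needs.
function authHeaders(token: string): Record<string, string> {
  return {
    Authorization: `Bearer ${token}`,
    "Content-Type": "application/json",
  };
}

// Usage with the global fetch in Node 18+ (env variable name is an assumption):
// const res = await fetch("https://api.llamagen.ai/v1/comics/usage", {
//   headers: authHeaders(process.env.LLAMAGEN_API_TOKEN ?? ""),
// });
```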

Base URL

https://api.llamagen.ai/v1

Endpoints

POST /comics/generations

Create a Comic Generation

Generate a new comic based on text prompts.

Request Body

  • prompt (string) - Required unless promptUrl is provided. The story description or script.
  • model (string) - Optional. Specific model to use for generation (default: auto-selected based on plan).
  • preset (string) - Optional. Style preset (default: "render").
  • size (string) - Optional. Output resolution; use one of the supported values listed below, such as "1024x1024".
  • fixPanelNum (number) - Optional. Number of panels per page (1–20). Defaults to 4.
  • promptUrl (string, URL) - Required unless prompt is provided. Alternative to prompt; pass an uploaded file URL that describes the content.
  • comicRoles (Array<Role>) - Optional. Character list to keep faces consistent. Each role supports name, age, gender, optional dress, and optional image (URL).

Note: You must provide at least one of prompt or promptUrl.
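The prompt/promptUrl rule and the fixPanelNum range can be checked client-side before sending. An illustrative sketch; the type and helper names are assumptions:

```typescript
interface GenerationRequest {
  prompt?: string;
  promptUrl?: string;
  model?: string;
  preset?: string;
  size?: string;
  fixPanelNum?: number;
}

// Returns null when the payload is valid, else a human-readable error.
function validateRequest(req: GenerationRequest): string | null {
  if (!req.prompt && !req.promptUrl) return "Provide at least one of prompt or promptUrl";
  if (req.fixPanelNum !== undefined && (req.fixPanelNum < 1 || req.fixPanelNum > 20))
    return "fixPanelNum must be between 1 and 20";
  return null;
}
```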

Role Object Example

[
  { "name": "Alice", "age": 23, "gender": "female", "dress": "hoodie", "image": "https://example.com/alice.png" },
  { "name": "Bob", "age": 25, "gender": "male", "dress": "jacket" }
]
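In TypeScript, the role objects above can be typed as follows; the interface name is an assumption, but the fields mirror the documented shape:

```typescript
// Illustrative type for entries in comicRoles; dress and image are optional.
interface ComicRole {
  name: string;
  age: number;
  gender: string;
  dress?: string;  // outfit description
  image?: string;  // reference image URL
}

const alice: ComicRole = { name: "Alice", age: 23, gender: "female", dress: "hoodie" };
```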

Supported Size Values

  • 1:1 (1024x1024) - Square format for covers, profile-style art, and balanced compositions.
  • 2:3 (512x768) - Portrait format for posters, character art, and book-style layouts.
  • 1:2 (512x1024) - Tall vertical format for narrow posters and mobile-first scenes.
  • 9:16 (576x1024) - Vertical video-style format for reels, stories, and phone screens.
  • 3:4 (768x1024) - Classic portrait format for comic covers and character-focused shots.
  • 4:3 (1024x768) - Traditional landscape format for scenes, dialogue panels, and illustrations.
  • 3:2 (768x512) - Cinematic landscape format for environment shots and wider panels.
  • 16:9 (1024x576) - Widescreen format for trailers, hero banners, and dramatic shots.
  • 2:1 (1024x512) - Ultra-wide banner format for panoramic scenes and headers.
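For clients that work in aspect ratios, the mapping above can be kept as a lookup table. An illustrative sketch; the constant and function names are assumptions:

```typescript
// Lookup from the documented aspect ratios to their size strings.
const SIZE_BY_RATIO: Record<string, string> = {
  "1:1": "1024x1024",
  "2:3": "512x768",
  "1:2": "512x1024",
  "9:16": "576x1024",
  "3:4": "768x1024",
  "4:3": "1024x768",
  "3:2": "768x512",
  "16:9": "1024x576",
  "2:1": "1024x512",
};

function sizeForRatio(ratio: string): string {
  const size = SIZE_BY_RATIO[ratio];
  if (!size) throw new Error(`Unsupported ratio: ${ratio}`);
  return size;
}
```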

Example Request

curl -X POST https://api.llamagen.ai/v1/comics/generations \
  -H "Authorization: Bearer YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "A superhero cat saving a city from giant mice",
    "preset": "render",
    "size": "1024x1024",
    "fixPanelNum": 4,
    "comicRoles": [
      { "name": "Captain Whisker", "age": 4, "gender": "male", "dress": "red cape", "image": "https://example.com/captain-whisker.png" },
      { "name": "Mayor Paws", "age": 6, "gender": "male", "dress": "suit" }
    ]
  }'
POST /comics/upload

Upload Reference Image

Request

  • file (binary, multipart/form-data) - Required. Image file up to 10MB. Supported types are inferred from the content.

Example

curl -X POST https://api.llamagen.ai/v1/comics/upload \
  -H "Authorization: Bearer YOUR_API_TOKEN" \
  -F "file=@/path/to/your-image.png"

Response

{
  "code": 200,
  "fileUrl": "https://s.llamagen.ai/yourteam/uploads/abc123.png"
}
Use the returned fileUrl as the promptUrl parameter in /comics/generations as an alternative to the textual prompt.
GET /comics/generations/:id

Get Generation Status

Retrieve the status and result of a comic generation.

Path Parameters

  • id (string) - Required. The generation ID returned from the creation endpoint.

Response

{
  "id": "gen_123456789",
  "status": "SUCCEEDED", // PENDING, PROCESSING, SUCCEEDED, FAILED
  "output": "https://cdn.llamagen.ai/comics/...",
  "createdAt": "2024-03-20T10:00:00Z"
}
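Clients typically poll this endpoint until a terminal status is reached. A sketch with a pluggable fetcher, so the loop can be shown without a live network call; all names are illustrative:

```typescript
type Status = "PENDING" | "PROCESSING" | "SUCCEEDED" | "FAILED";

interface Generation {
  id: string;
  status: Status;
  output?: string;
}

// Poll until SUCCEEDED or FAILED; `getStatus` stands in for a
// GET /comics/generations/:id call.
async function waitForGeneration(
  getStatus: (id: string) => Promise<Generation>,
  id: string,
  intervalMs = 3000,
  timeoutMs = 300_000
): Promise<Generation> {
  const deadline = Date.now() + timeoutMs;
  for (;;) {
    const gen = await getStatus(id);
    if (gen.status === "SUCCEEDED" || gen.status === "FAILED") return gen;
    if (Date.now() >= deadline) throw new Error(`Generation ${id} timed out`);
    await new Promise((r) => setTimeout(r, intervalMs));
  }
}
```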
GET /comics/usage

Get API Usage

Retrieve the current usage count, quota, and remaining credits.

Response

{
  "apiUsageCount": 12,
  "apiMaxUsage": 100,
  "credits": 88,
  "isPaidPlan": true
}
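The remaining request allowance follows directly from the two counters in this response. An illustrative helper; the type and function names are assumptions:

```typescript
interface UsageResponse {
  apiUsageCount: number;
  apiMaxUsage: number;
  credits: number;
  isPaidPlan: boolean;
}

// Requests remaining under the current quota, never negative.
function remainingRequests(u: UsageResponse): number {
  return Math.max(0, u.apiMaxUsage - u.apiUsageCount);
}
```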

Example Request

curl -X GET https://api.llamagen.ai/v1/comics/usage \
  -H "Authorization: Bearer YOUR_API_TOKEN"

Prompt Basics

For stable, high-quality output, structure your prompt with visual style, story beats, and character descriptions.

Recommended Script Template

[Visual Style]
- Genre:
- Art style:
- Color palette:
- Lighting:
- Camera language:

[Story]
- Premise:
- Conflict:
- Emotional tone:
- Ending beat:

[Characters]
- Name:
  - Role:
  - Appearance:
  - Personality:
  - Signature expression/action:

[Panels]
1) Panel objective:
   - Scene description:
   - Character action:
   - Dialogue / caption:
2) Panel objective:
   - Scene description:
   - Character action:
   - Dialogue / caption:

[Constraints]
- Aspect ratio / size: choose one supported size such as 1024x1024, 576x1024, or 1024x576
- Forbidden elements:
- Consistency requirements:
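The template can also be filled programmatically before being sent as the prompt field. An illustrative sketch; the type and helper names are assumptions:

```typescript
interface PromptSpec {
  visualStyle: string[];
  story: string[];
  characters: string[];
  panels: string[];
  constraints: string[];
}

// Render each template section as a bracketed heading followed by
// dashed bullet lines, joined by blank lines.
function buildPrompt(spec: PromptSpec): string {
  const section = (title: string, lines: string[]): string =>
    `[${title}]\n` + lines.map((line) => `- ${line}`).join("\n");
  return [
    section("Visual Style", spec.visualStyle),
    section("Story", spec.story),
    section("Characters", spec.characters),
    section("Panels", spec.panels),
    section("Constraints", spec.constraints),
  ].join("\n\n");
}
```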

Example Prompt

Visual Style: cinematic anime, clean line-art, soft rim lighting, warm dusk palette.
Story: a quiet fox detective helps a lost child find home before nightfall.
Characters:
- Ren (fox detective): slim build, tan coat, calm eyes, carries a paper map.
- Mino (child): short bob hair, yellow raincoat, anxious but curious.
Panels:
1) Wide shot of rainy alley; Ren notices Mino alone near a lantern.
2) Medium shot; Ren kneels, offers map, Mino starts to trust him.
3) Tracking shot; both crossing old bridge with city lights in background.
4) Close shot; child reunited with family, Ren leaving silently.
Constraints: preserve character face consistency across all panels; no text watermark; high detail background.

Tip: keep each panel objective explicit and avoid mixing too many styles in one request.

Supported size values: 1024x1024 (1:1), 512x768 (2:3), 512x1024 (1:2), 576x1024 (9:16), 768x1024 (3:4), 1024x768 (4:3), 768x512 (3:2), 1024x576 (16:9), 1024x512 (2:1)

MCP Usage

Connect via Streamable HTTP

MCP Endpoint
https://llamagen.ai/api/mcp
Authorization Header
Authorization: Bearer YOUR_API_TOKEN

Configure your MCP client to use Streamable HTTP transport with the endpoint above. Provide your API token via the Authorization header.

Available Tools

  • create_comic_generation — Create a generation job
  • get_comic_generation_status — Get status/result by id
  • get_api_usage — View current usage and quota

Client Configuration Example

Many MCP clients allow setting a remote HTTP endpoint with custom headers. Below is a generic example:

{
  "mcpServers": {
    "llamagen": {
      "url": "https://llamagen.ai/api/mcp",
      "headers": {
        "Authorization": "Bearer YOUR_API_TOKEN"
      }
    }
  }
}

The exact configuration format varies by client. Ensure the Authorization header is set with your token.

Cursor Setup

  1. Open Cursor Settings and find the Model Context Protocol (MCP) section.
  2. Add a new server using Streamable HTTP.
  3. Set URL to https://llamagen.ai/api/mcp.
  4. Under Headers, add Authorization: Bearer YOUR_API_TOKEN.
  5. Save and test by listing tools; you should see create_comic_generation, get_comic_generation_status, get_api_usage.

Alternatively, you can configure via Cursor's MCP configuration file using the same JSON structure as above, if your version supports it.

Integration Demos

LangChain JS Agent (Streamable HTTP + Azure OpenAI)

import "dotenv/config";
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";
import { createAgent } from "langchain";
import { DynamicStructuredTool } from "@langchain/core/tools";
import { AzureChatOpenAI } from "@langchain/openai";

async function main() {
  const mcpUrl = "https://llamagen.ai/api/mcp";
  const YOUR_API_TOKEN = process.env.LLAMAGENAI_API_TOKEN;
  const client = new Client(
    { name: "demo-agent", version: "1.0.0" },
    { capabilities: { tools: {} } }
  );

  let connected = false;
  let discoveredTools: any | null = null;
  if (mcpUrl && YOUR_API_TOKEN) {
    const headers = {
      Authorization: "Bearer " + YOUR_API_TOKEN,
      Accept: "application/json",
    };
    // Streamable HTTP transport with the Authorization header on every request.
    const transport = new StreamableHTTPClientTransport(new URL(mcpUrl), {
      requestInit: { headers },
    });
    await client.connect(transport);
    // Discover the tools the MCP server exposes.
    discoveredTools = await client.listTools();
    connected = true;
  }

  // Wrap each discovered MCP tool as a LangChain structured tool the agent can call.
  let tools: any[] = [];
  if (connected && discoveredTools) {
    tools = discoveredTools.tools.map((tool: any) => {
      return new DynamicStructuredTool({
        name: tool.name,
        description: tool.description ?? "",
        schema: tool.inputSchema,
        func: async (input: Record<string, any>) => {
          const result = await client.callTool({
            name: tool.name,
            arguments: input,
          });
          return JSON.stringify(result);
        },
      });
    });
  }
  console.log("Loaded " + tools.length + " tools");

  const AZURE_OPENAI_DEPLOYMENT_NAME = process.env.AZURE_OPENAI_DEPLOYMENT_NAME;
  const AZURE_OPENAI_API_KEY = process.env.AZURE_OPENAI_API_KEY;
  const AZURE_OPENAI_API_VERSION = process.env.AZURE_OPENAI_API_VERSION;
  const AZURE_OPENAI_ENDPOINT = process.env.AZURE_OPENAI_ENDPOINT;
  const llm = new AzureChatOpenAI({
    model: AZURE_OPENAI_DEPLOYMENT_NAME,
    azureOpenAIApiKey: AZURE_OPENAI_API_KEY,
    azureOpenAIApiInstanceName: AZURE_OPENAI_DEPLOYMENT_NAME,
    azureOpenAIApiDeploymentName: AZURE_OPENAI_DEPLOYMENT_NAME,
    azureOpenAIApiVersion: AZURE_OPENAI_API_VERSION,
    azureOpenAIEndpoint: AZURE_OPENAI_ENDPOINT,
  });

  const agent = createAgent({
    model: llm,
    tools,
  });

  const result = await agent.invoke({
    messages: [
      {
        role: "user",
        content: "Query the current usage and quota for the llamagen project",
      },
    ],
  });

  console.log(result);

  const result1 = await agent.invoke({
    messages: [
      {
        role: "user",
        content: "Create a comic of a little girl running in a forest",
      },
    ],
  });

  console.log(result1);
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});

SDK Quick Start

Install the SDK first, then create a generation job and wait for the final result in two clear steps.

Install

npm i comic

Step 1: Create generation

import { LlamaGenClient } from 'comic';

const llamagen = new LlamaGenClient({
  apiKey: process.env.LLAMAGEN_API_KEY!,
});

const created = await llamagen.comic.create({
  prompt: 'A detective fox in Tokyo',
  size: '1024x1024'
});

Step 2: Wait for completion

const result = await llamagen.comic.waitForCompletion(created.id);

console.log(result);

Supported size values: 1024x1024 (1:1), 512x768 (2:3), 512x1024 (1:2), 576x1024 (9:16), 768x1024 (3:4), 1024x768 (4:3), 768x512 (3:2), 1024x576 (16:9), 1024x512 (2:1)

TypeScript Types

The SDK ships with first-class TypeScript types for request payloads and responses.

import type {
  CreateComicParams,
  ComicArtworkResponse,
  ComicGenerationStatus
} from 'comic';

const payload: CreateComicParams = {
  prompt: 'A superhero cat saving a city',
  preset: 'render',
  size: '1024x1024'
};

Valid size values: 1024x1024 (1:1), 512x768 (2:3), 512x1024 (1:2), 576x1024 (9:16), 768x1024 (3:4), 1024x768 (4:3), 768x512 (3:2), 1024x576 (16:9), 1024x512 (2:1)

Status Lifecycle

  • PENDING - Request accepted and queued.
  • PROCESSING - Generation is actively running.
  • SUCCEEDED - Generation completed successfully.
  • FAILED - Generation failed. Inspect error details.

Recommended: poll every 3–5 seconds with a timeout, and use exponential backoff when retrying transient failures.
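Exponential backoff can be as simple as doubling a base interval and capping it. An illustrative helper; the name and defaults are assumptions:

```typescript
// Delay for the nth retry (zero-based), doubling from a 3-second base
// and capped at 60 seconds.
function backoffDelayMs(attempt: number, baseMs = 3000, maxMs = 60_000): number {
  return Math.min(maxMs, baseMs * 2 ** attempt);
}
```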

Request Logs

In dashboard settings, use Request Logs to trace endpoint usage, status codes, latency, and credit changes for each API call.

Keep a request ID in your app logs and correlate it with dashboard request logs for faster debugging.

Rate Limits

Demo Users

  • 4 requests per minute
  • 15 requests per day
  • Watermarked outputs

Paid Plans

  • 10 requests per minute
  • Usage based on credits
  • High-resolution, no watermark

Errors

  • 401 - Unauthorized: invalid API token.
  • 402 - Payment Required: insufficient credits.
  • 403 - Forbidden: access denied.
  • 429 - Too Many Requests: rate limit exceeded.
  • 500 - Internal Server Error.
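In client code, 429 and 500 are usually the only codes worth retrying; 401, 402, and 403 require user action first (fix the token, top up credits, check permissions). An illustrative check:

```typescript
// 429 (rate limit) and 500 (server error) are treated as transient.
function isRetryable(code: number): boolean {
  return code === 429 || code === 500;
}
```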