
Comic API Documentation
Integrate professional comic generation capabilities into your applications.
How the Comic API works
Three quick onboarding illustrations: prepare the input, send the request, review the result.



Authentication
The Comic API uses API tokens to authenticate requests. You can view and manage your API tokens in the API Dashboard.
Authentication to the API is performed via HTTP Bearer Auth: send your API token in the Authorization header. You do not need to provide a username or password.
Authorization: Bearer YOUR_API_TOKEN
Base URL
https://api.llamagen.ai/v1
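As a sketch of the auth flow, the snippet below builds the required headers and makes one authenticated call. The helper names (`authHeaders`, `getUsage`) are illustrative, not part of any SDK; only the Bearer header format and the `/comics/usage` endpoint come from this document.

```typescript
// Base URL taken from the curl examples in this document.
const BASE_URL = "https://api.llamagen.ai/v1";

// Build the headers every authenticated request needs.
// The API uses a Bearer token; no username/password is involved.
function authHeaders(token: string): Record<string, string> {
  return {
    Authorization: `Bearer ${token}`,
    "Content-Type": "application/json",
  };
}

// Example: an authenticated GET against the usage endpoint (Node 18+ fetch).
async function getUsage(token: string): Promise<unknown> {
  const res = await fetch(`${BASE_URL}/comics/usage`, {
    headers: authHeaders(token),
  });
  if (!res.ok) throw new Error(`Request failed with status ${res.status}`);
  return res.json();
}
```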
Endpoints
POST /comics/generations
Create a Comic Generation
Generate a new comic based on text prompts.
Request Body
| Parameter | Type | Required | Description |
|---|---|---|---|
| prompt | string | Yes (or promptUrl) | The story description or script. Provide either prompt or promptUrl. |
| model | string | No | Specific model to use for generation (default: auto-selected based on plan). |
| preset | string | No | Style preset (default: "render"). |
| size | string | No | Output resolution. Use one of the supported values listed below, such as "1024x1024". |
| fixPanelNum | number | No | Number of panels per page (1–20). Defaults to 4. |
| promptUrl | string (URL) | No (or prompt) | Alternative to prompt: a URL pointing to the story content (for example, a file uploaded via /comics/upload). If provided, prompt may be omitted. |
| comicRoles | Array<Role> | No | Optional character list to keep faces consistent. Each role supports name, age, gender, optional dress, and optional image (URL). |
You must provide either prompt or promptUrl.
Role Object Schema
[
{ "name": "Alice", "age": 23, "gender": "female", "dress": "hoodie", "image": "https://example.com/alice.png" },
{ "name": "Bob", "age": 25, "gender": "male", "dress": "jacket" }
]
Supported Size Values
| Ratio | size | Width | Height | Best for |
|---|---|---|---|---|
| 1:1 | 1024x1024 | 1024 | 1024 | Square format for covers, profile-style art, and balanced compositions. |
| 2:3 | 512x768 | 512 | 768 | Portrait format for posters, character art, and book-style layouts. |
| 1:2 | 512x1024 | 512 | 1024 | Tall vertical format for narrow posters and mobile-first scenes. |
| 9:16 | 576x1024 | 576 | 1024 | Vertical video-style format for reels, stories, and phone screens. |
| 3:4 | 768x1024 | 768 | 1024 | Classic portrait format for comic covers and character-focused shots. |
| 4:3 | 1024x768 | 1024 | 768 | Traditional landscape format for scenes, dialogue panels, and illustrations. |
| 3:2 | 768x512 | 768 | 512 | Cinematic landscape format for environment shots and wider panels. |
| 16:9 | 1024x576 | 1024 | 576 | Widescreen format for trailers, hero banners, and dramatic shots. |
| 2:1 | 1024x512 | 1024 | 512 | Ultra-wide banner format for panoramic scenes and headers. |
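If you want to validate `size` client-side before sending a request, a small helper like the one below can mirror the table above. The constant and function names are illustrative, not part of the API.

```typescript
// The nine supported size values from the table above.
const SUPPORTED_SIZES = [
  "1024x1024", "512x768", "512x1024", "576x1024", "768x1024",
  "1024x768", "768x512", "1024x576", "1024x512",
] as const;

type ComicSize = (typeof SUPPORTED_SIZES)[number];

// Narrowing type guard: true only for a value listed in the table.
function isSupportedSize(value: string): value is ComicSize {
  return (SUPPORTED_SIZES as readonly string[]).includes(value);
}
```

Rejecting an unsupported size locally avoids spending a request (and a rate-limit slot) on a call that would fail validation server-side.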
Example Request
curl -X POST https://api.llamagen.ai/v1/comics/generations \
  -H "Authorization: Bearer YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "A superhero cat saving a city from giant mice",
    "preset": "render",
    "size": "1024x1024",
    "fixPanelNum": 4,
    "comicRoles": [
      { "name": "Captain Whisker", "age": 4, "gender": "male", "dress": "red cape", "image": "https://example.com/captain-whisker.png" },
      { "name": "Mayor Paws", "age": 6, "gender": "male", "dress": "suit" }
    ]
  }'
POST /comics/upload
Upload Reference Image
Request
| Parameter | Type | Required | Description |
|---|---|---|---|
| file | binary (multipart/form-data) | Yes | Image file up to 10 MB. The file type is inferred from the content. |
Example
curl -X POST https://api.llamagen.ai/v1/comics/upload \
  -H "Authorization: Bearer YOUR_API_TOKEN" \
  -F "file=@/path/to/your-image.png"
Response
{
"code": 200,
"fileUrl": "https://s.llamagen.ai/yourteam/uploads/abc123.png"
}
GET /comics/generations/:id
Get Generation Status
Retrieve the status and result of a comic generation.
Path Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| id | string | Yes | The generation ID returned from the creation endpoint. |
Response
{
"id": "gen_123456789",
"status": "SUCCEEDED", // PENDING, PROCESSING, SUCCEEDED, FAILED
"output": "https://cdn.llamagen.ai/comics/...",
"createdAt": "2024-03-20T10:00:00Z"
}
GET /comics/usage
Get API Usage
Retrieve the current usage count, quota, and remaining credits.
Response
{
"apiUsageCount": 12,
"apiMaxUsage": 100,
"credits": 88,
"isPaidPlan": true
}
Example Request
curl -X GET https://api.llamagen.ai/v1/comics/usage \
  -H "Authorization: Bearer YOUR_API_TOKEN"
Prompt Basics
For stable, high-quality output, structure your prompt with visual style, story beats, and character descriptions.
Recommended Script Template
[Visual Style]
- Genre:
- Art style:
- Color palette:
- Lighting:
- Camera language:
[Story]
- Premise:
- Conflict:
- Emotional tone:
- Ending beat:
[Characters]
- Name:
- Role:
- Appearance:
- Personality:
- Signature expression/action:
[Panels]
1) Panel objective:
- Scene description:
- Character action:
- Dialogue / caption:
2) Panel objective:
- Scene description:
- Character action:
- Dialogue / caption:
[Constraints]
- Aspect ratio / size: choose one supported size such as 1024x1024, 576x1024, or 1024x576
- Forbidden elements:
- Consistency requirements:
Example Prompt
Visual Style: cinematic anime, clean line-art, soft rim lighting, warm dusk palette.
Story: a quiet fox detective helps a lost child find home before nightfall.
Characters:
- Ren (fox detective): slim build, tan coat, calm eyes, carries a paper map.
- Mino (child): short bob hair, yellow raincoat, anxious but curious.
Panels:
1) Wide shot of rainy alley; Ren notices Mino alone near a lantern.
2) Medium shot; Ren kneels, offers map, Mino starts to trust him.
3) Tracking shot; both crossing old bridge with city lights in background.
4) Close shot; child reunited with family, Ren leaving silently.
Constraints: preserve character face consistency across all panels; no text watermark; high-detail backgrounds.
Tip: keep each panel objective explicit and avoid mixing too many styles in one request.
Supported size values: 1024x1024 (1:1), 512x768 (2:3), 512x1024 (1:2), 576x1024 (9:16), 768x1024 (3:4), 1024x768 (4:3), 768x512 (3:2), 1024x576 (16:9), 1024x512 (2:1)
MCP Usage
Connect via Streamable HTTP
Endpoint: https://llamagen.ai/api/mcp
Authorization: Bearer YOUR_API_TOKEN
Configure your MCP client to use the Streamable HTTP transport with the endpoint above, and provide your API token via the Authorization header.
Available Tools
- create_comic_generation — Create a generation job
- get_comic_generation_status — Get status/result by ID
- get_api_usage — View current usage and quota
Client Configuration Example
Many MCP clients allow setting a remote HTTP endpoint with custom headers. Below is a generic example:
{
"mcpServers": {
"llamagen": {
"url": "https://llamagen.ai/api/mcp",
"headers": {
"Authorization": "Bearer YOUR_API_TOKEN"
}
}
}
}
The exact configuration format varies by client. Ensure the Authorization header is set with your token.
Cursor Setup
- Open Cursor Settings and find the Model Context Protocol (MCP) section.
- Add a new server using Streamable HTTP.
- Set URL to https://llamagen.ai/api/mcp.
- Under Headers, add Authorization: Bearer YOUR_API_TOKEN.
- Save and test by listing tools; you should see create_comic_generation, get_comic_generation_status, get_api_usage.
Alternatively, you can configure via Cursor's MCP configuration file using the same JSON structure as above, if your version supports it.
Integration Demos
LangChain JS Agent (Streamable HTTP + Azure OpenAI)
import "dotenv/config";
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";
import { createAgent } from "langchain";
import { DynamicStructuredTool } from "@langchain/core/tools";
import { AzureChatOpenAI } from "@langchain/openai";
async function main() {
const mcpUrl = "https://llamagen.ai/api/mcp";
const YOUR_API_TOKEN = process.env.LLAMAGENAI_API_TOKEN;
const client = new Client(
{ name: "demo-agent", version: "1.0.0" },
{ capabilities: { tools: {} } }
);
let connected = false;
let discoveredTools: any | null = null;
if (mcpUrl && YOUR_API_TOKEN) {
const headers = {
Authorization: "Bearer " + YOUR_API_TOKEN,
Accept: "application/json",
};
const transport = new StreamableHTTPClientTransport(new URL(mcpUrl), {
requestInit: { headers },
});
await client.connect(transport);
discoveredTools = await client.listTools();
connected = true;
}
let tools: any[] = [];
if (connected && discoveredTools) {
tools = discoveredTools.tools.map((tool: any) => {
return new DynamicStructuredTool({
name: tool.name,
description: tool.description ?? "",
schema: tool.inputSchema,
func: async (input: Record<string, any>) => {
const result = await client.callTool({
name: tool.name,
arguments: input,
});
return JSON.stringify(result);
},
});
});
}
console.log("Loaded " + tools.length + " tools");
const AZURE_OPENAI_DEPLOYMENT_NAME = process.env.AZURE_OPENAI_DEPLOYMENT_NAME;
const AZURE_OPENAI_API_KEY = process.env.AZURE_OPENAI_API_KEY;
const AZURE_OPENAI_API_VERSION = process.env.AZURE_OPENAI_API_VERSION;
const AZURE_OPENAI_ENDPOINT = process.env.AZURE_OPENAI_ENDPOINT;
const llm = new AzureChatOpenAI({
model: AZURE_OPENAI_DEPLOYMENT_NAME,
azureOpenAIApiKey: AZURE_OPENAI_API_KEY,
azureOpenAIApiInstanceName: AZURE_OPENAI_DEPLOYMENT_NAME,
azureOpenAIApiDeploymentName: AZURE_OPENAI_DEPLOYMENT_NAME,
azureOpenAIApiVersion: AZURE_OPENAI_API_VERSION,
azureOpenAIEndpoint: AZURE_OPENAI_ENDPOINT,
});
const agent = createAgent({
model: llm,
tools,
});
const result = await agent.invoke({
messages: [
{
role: "user",
content: "Query the current usage and quota for the llamagen project",
},
],
});
console.log(result);
const result1 = await agent.invoke({
messages: [
{
role: "user",
content: "Create a comic of a little girl running in a forest",
},
],
});
console.log(result1);
}
main().catch((err) => {
console.error(err);
process.exit(1);
});SDK Quick Start
Install the SDK first, then create a generation job and wait for the final result in two clear steps.
Install
npm i comic
Step 1: Create generation
import { LlamaGenClient } from 'comic';

const llamagen = new LlamaGenClient({
  apiKey: process.env.LLAMAGEN_API_KEY!,
});

const created = await llamagen.comic.create({
  prompt: 'A detective fox in Tokyo',
  size: '1024x1024'
});
Step 2: Wait for completion
const result = await llamagen.comic.waitForCompletion(created.id);

console.log(result);
TypeScript Types
The SDK ships with first-class TypeScript types for request payloads and responses.
import type {
  CreateComicParams,
  ComicArtworkResponse,
  ComicGenerationStatus
} from 'comic';

const payload: CreateComicParams = {
  prompt: 'A superhero cat saving a city',
  preset: 'render',
  size: '1024x1024'
};
Status Lifecycle
| Status | Meaning |
|---|---|
| PENDING | Request accepted and queued. |
| PROCESSING | Generation is actively running. |
| SUCCEEDED | Generation completed successfully. |
| FAILED | Generation failed. Inspect error details. |
Recommended: poll every 3–5 seconds, enforce an overall timeout, and retry transient failures with exponential backoff.
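The polling recommendation above can be sketched as follows. `getStatus` is an injected function (for example, a call to GET /comics/generations/:id) so the loop stays transport-agnostic; the function and option names here are illustrative, not SDK APIs.

```typescript
type Status = "PENDING" | "PROCESSING" | "SUCCEEDED" | "FAILED";

interface Generation {
  id: string;
  status: Status;
  output?: string;
}

// Poll until the generation reaches a terminal state, with an overall
// timeout and exponential backoff on transient request failures.
async function pollUntilDone(
  getStatus: () => Promise<Generation>,
  { intervalMs = 4000, timeoutMs = 5 * 60_000 } = {}
): Promise<Generation> {
  const deadline = Date.now() + timeoutMs;
  let delay = intervalMs;
  while (Date.now() < deadline) {
    let gen: Generation | null = null;
    try {
      gen = await getStatus();
      delay = intervalMs; // reset backoff after a successful call
    } catch {
      delay = Math.min(delay * 2, 60_000); // back off on transient errors
    }
    if (gen && (gen.status === "SUCCEEDED" || gen.status === "FAILED")) {
      return gen;
    }
    await new Promise((resolve) => setTimeout(resolve, delay));
  }
  throw new Error("Timed out waiting for generation to finish");
}
```

Returning FAILED (rather than throwing) lets the caller inspect error details on the final object, as the table above suggests.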
Request Logs
In dashboard settings, use Request Logs to trace endpoint usage, status codes, latency, and credit changes for each API call.
Rate Limits
Demo Users
- 4 requests per minute
- 15 requests per day
- Watermarked outputs
Paid Plans
- 10 requests per minute
- Usage based on credits
- High-resolution, no watermark
Errors
| Code | Description |
|---|---|
| 401 | Unauthorized - Invalid API token. |
| 402 | Payment Required - Insufficient credits. |
| 403 | Forbidden - Access denied. |
| 429 | Too Many Requests - Rate limit exceeded. |
| 500 | Internal Server Error. |
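As a sketch of client-side handling against the error table above, a helper like this can map a fetch-style status code to a human-readable message. The messages and function name are illustrative.

```typescript
// Map an HTTP status code from the error table to a readable description.
function describeError(status: number): string {
  switch (status) {
    case 401: return "Unauthorized - invalid API token";
    case 402: return "Payment Required - insufficient credits";
    case 403: return "Forbidden - access denied";
    case 429: return "Too Many Requests - rate limit exceeded; retry later";
    case 500: return "Internal Server Error - retry with backoff";
    default:  return `Unexpected status ${status}`;
  }
}
```

429 and 500 are the cases worth retrying automatically; 401, 402, and 403 require operator action (fixing the token, topping up credits, or checking permissions) and should not be retried.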