
klingai-image-to-video

Animate static images into video using Kling AI. Use when converting images to video, adding motion to stills, or building I2V pipelines. Trigger with phrases like 'klingai image to video', 'kling ai animate image', 'klingai img2vid', 'animate picture klingai'.


# Kling AI Image-to-Video

## Overview

Animate static images using the `/v1/videos/image2video` endpoint. Supports motion prompts, camera control, dynamic masks (motion brush), static masks, and tail images for start-to-end transitions.

**Endpoint:** `POST https://api.klingai.com/v1/videos/image2video`

## Request Parameters

| Parameter | Type | Required | Description |
|---|---|---|---|
| `model_name` | string | Yes | `kling-v1-5`, `kling-v2-1`, `kling-v2-master`, etc. |
| `image` | string | Yes | URL of the source image (JPG, PNG, WebP) |
| `prompt` | string | No | Motion description for the animation |
| `negative_prompt` | string | No | What to exclude |
| `duration` | string | Yes | `"5"` or `"10"` seconds |
| `aspect_ratio` | string | No | `"16:9"` default |
| `mode` | string | No | `"standard"` or `"professional"` |
| `cfg_scale` | float | No | Prompt adherence (0.0-1.0) |
| `image_tail` | string | No | End-frame image URL (mutually exclusive with masks/camera) |
| `camera_control` | object | No | Camera movement (mutually exclusive with masks/`image_tail`) |
| `static_mask` | string | No | Mask image URL for fixed regions |
| `dynamic_masks` | array | No | Motion brush trajectories |
| `callback_url` | string | No | Webhook for completion |

## Basic Image-to-Video

```python
import jwt, time, os, requests

BASE = "https://api.klingai.com/v1"

def get_headers():
    ak, sk = os.environ["KLING_ACCESS_KEY"], os.environ["KLING_SECRET_KEY"]
    token = jwt.encode(
        {"iss": ak, "exp": int(time.time()) + 1800, "nbf": int(time.time()) - 5},
        sk, algorithm="HS256", headers={"alg": "HS256", "typ": "JWT"}
    )
    return {"Authorization": f"Bearer {token}", "Content-Type": "application/json"}

# Animate a landscape photo
response = requests.post(f"{BASE}/videos/image2video", headers=get_headers(), json={
    "model_name": "kling-v2-1",
    "image": "https://example.com/landscape.jpg",
    "prompt": "Clouds slowly drifting across the sky, gentle wind rustling through trees",
    "negative_prompt": "static, frozen, blurry",
    "duration": "5",
    "mode": "standard",
})

task_id = response.json()["data"]["task_id"]

# Poll for result
while True:
    time.sleep(15)
    result = requests.get(
        f"{BASE}/videos/image2video/{task_id}", headers=get_headers()
    ).json()
    if result["data"]["task_status"] == "succeed":
        print(f"Video: {result['data']['task_result']['videos'][0]['url']}")
        break
    elif result["data"]["task_status"] == "failed":
        raise RuntimeError(result["data"]["task_status_msg"])
```
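As an alternative to polling, you can supply `callback_url` and let Kling POST the completed task to your server. A minimal sketch of the handler's parsing logic, assuming the callback body mirrors the polling response shown above (verify the exact payload shape against Kling's docs):

```python
import json

def parse_kling_callback(body: str):
    """Extract (task_status, video_url) from a Kling callback payload.

    Assumption: the callback body mirrors the polling response above.
    """
    data = json.loads(body).get("data", {})
    status = data.get("task_status")
    if status == "succeed":
        videos = data.get("task_result", {}).get("videos", [])
        return status, videos[0]["url"] if videos else None
    return status, None

# Example payload shaped like the polling response
payload = json.dumps({
    "data": {
        "task_status": "succeed",
        "task_result": {"videos": [{"url": "https://cdn.example.com/out.mp4"}]},
    }
})
print(parse_kling_callback(payload))  # ('succeed', 'https://cdn.example.com/out.mp4')
```

A webhook avoids the fixed 15-second polling cadence, but you still need the polling path as a fallback if the callback is never delivered.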

## Start-to-End Transition (image_tail)

Use `image_tail` to specify both the first and last frame. Kling interpolates the motion between them.

```python
response = requests.post(f"{BASE}/videos/image2video", headers=get_headers(), json={
    "model_name": "kling-v2-master",
    "image": "https://example.com/sunrise.jpg",        # first frame
    "image_tail": "https://example.com/sunset.jpg",    # last frame
    "prompt": "Time lapse of sun moving across the sky",
    "duration": "5",
    "mode": "professional",
})
```

## Motion Brush (dynamic_masks)

Draw motion paths for specific elements in the image. Up to 6 motion paths per image in v2.6.

```python
response = requests.post(f"{BASE}/videos/image2video", headers=get_headers(), json={
    "model_name": "kling-v2-6",
    "image": "https://example.com/person-standing.jpg",
    "prompt": "Person walking forward naturally",
    "duration": "5",
    "dynamic_masks": [
        {
            "mask": "https://example.com/person-mask.png",  # white = selected region
            "trajectories": [
                {"x": 0.5, "y": 0.7, "t": 0.0},   # start position (normalized 0-1)
                {"x": 0.5, "y": 0.5, "t": 0.5},   # midpoint
                {"x": 0.5, "y": 0.3, "t": 1.0},   # end position
            ]
        }
    ],
})
```

## Static Mask (Freeze Regions)

Keep specific areas of the image static while animating the rest.

```python
response = requests.post(f"{BASE}/videos/image2video", headers=get_headers(), json={
    "model_name": "kling-v2-master",
    "image": "https://example.com/scene.jpg",
    "prompt": "Water flowing in the river, birds flying",
    "duration": "5",
    "static_mask": "https://example.com/buildings-mask.png",  # white = frozen
})
```
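`camera_control` appears in the parameter table but has no example here. A sketch of a request body using the "simple" camera type; the field names and the one-non-zero-field rule are taken from Kling's public docs and should be treated as assumptions to verify, as should model support for camera control:

```python
# Sketch: slow zoom-in via camera_control ("simple" type). Per Kling's docs,
# in simple mode only one of the six movement fields should be non-zero.
payload = {
    "model_name": "kling-v1-5",
    "image": "https://example.com/portrait.jpg",
    "prompt": "Subject in sharp focus, background softly blurring",
    "duration": "5",
    "camera_control": {
        "type": "simple",
        "config": {
            "horizontal": 0,   # truck left/right
            "vertical": 0,     # pedestal up/down
            "pan": 0,          # rotate around the vertical axis
            "tilt": 0,         # rotate around the horizontal axis
            "roll": 0,         # roll around the lens axis
            "zoom": 5,         # positive = zoom in
        },
    },
}
# Pass as json=payload to the POST shown in the basic example.
# camera_control cannot be combined with image_tail or masks.
```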

## Mutual Exclusivity Rules

These features cannot be combined in a single request:

| Feature | Cannot be combined with |
|---|---|
| `image_tail` | `dynamic_masks`, `static_mask`, `camera_control` |
| `dynamic_masks` / `static_mask` | `image_tail`, `camera_control` |
| `camera_control` | `image_tail`, `dynamic_masks`, `static_mask` |
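These rules can be enforced client-side before sending a request, turning a 400 from the API into an immediate local error. A minimal sketch (note the table groups `dynamic_masks` and `static_mask` together, so they are treated as compatible with each other):

```python
# Features that exclude one another, per the table above
EXCLUSIVE_FEATURES = {"image_tail", "camera_control", "dynamic_masks", "static_mask"}
# The two mask parameters may be combined with each other, but nothing else
COMPATIBLE_PAIRS = {frozenset({"static_mask", "dynamic_masks"})}

def check_exclusivity(payload: dict):
    """Return the sorted list of conflicting features; empty means the payload is valid."""
    used = sorted(EXCLUSIVE_FEATURES & payload.keys())
    if len(used) <= 1 or frozenset(used) in COMPATIBLE_PAIRS:
        return []
    return used

assert check_exclusivity({"image": "x.jpg", "image_tail": "y.jpg"}) == []
assert check_exclusivity({"static_mask": "m.png", "dynamic_masks": []}) == []
assert check_exclusivity({"image_tail": "y.jpg", "camera_control": {}}) \
    == ["camera_control", "image_tail"]
```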

## Image Requirements

| Constraint | Value |
|---|---|
| Formats | JPG, PNG, WebP |
| Max file size | 10 MB |
| Min resolution | 300x300 px |
| Max resolution | 4096x4096 px |
| Mask format | PNG with white (selected) / black (excluded) |
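A pre-flight check against these constraints catches bad inputs before a round trip to the API. A sketch operating on image metadata only; obtaining the format, byte size, and dimensions (e.g. via Pillow) is left to the caller:

```python
MAX_BYTES = 10 * 1024 * 1024          # 10 MB limit from the table above
ALLOWED_FORMATS = {"JPG", "JPEG", "PNG", "WEBP"}

def check_image(fmt: str, size_bytes: int, width: int, height: int):
    """Return a list of constraint violations; empty means the image qualifies."""
    problems = []
    if fmt.upper() not in ALLOWED_FORMATS:
        problems.append(f"unsupported format: {fmt}")
    if size_bytes > MAX_BYTES:
        problems.append("file exceeds 10 MB")
    if width < 300 or height < 300:
        problems.append("below 300x300 minimum resolution")
    if width > 4096 or height > 4096:
        problems.append("above 4096x4096 maximum resolution")
    return problems

assert check_image("png", 2_000_000, 1024, 768) == []
assert check_image("gif", 2_000_000, 200, 200) == [
    "unsupported format: gif",
    "below 300x300 minimum resolution",
]
```

Run the same dimension check on any mask image against the source image's dimensions, since mismatched masks are rejected (see Error Handling).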

## Error Handling

| Error | Cause | Fix |
|---|---|---|
| 400 invalid image | URL unreachable or wrong format | Verify the image URL is publicly accessible |
| 400 mutual exclusivity | Combined incompatible features | Use only one feature set per request |
| `task_status: failed` | Image too complex or low quality | Use a higher-resolution, clearer source |
| Mask mismatch | Mask dimensions differ from source | Ensure the mask matches the source image dimensions |
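The `task_status: failed` case above surfaces only through polling, so it is worth bounding the poll loop and propagating the API's failure message. A sketch with an injectable fetch function so it can be exercised without network access (the response shape follows the polling example earlier in this document):

```python
import time

def wait_for_video(fetch_status, max_attempts=40, interval=15, sleep=time.sleep):
    """Poll fetch_status() until the task succeeds, fails, or times out.

    fetch_status should return the parsed JSON of the task-query response.
    """
    for _ in range(max_attempts):
        data = fetch_status()["data"]
        status = data["task_status"]
        if status == "succeed":
            return data["task_result"]["videos"][0]["url"]
        if status == "failed":
            # e.g. image too complex or low quality: surface the API's message
            raise RuntimeError(data.get("task_status_msg", "task failed"))
        sleep(interval)
    raise TimeoutError("video generation did not finish in time")

# Simulated responses: one processing tick, then success
responses = iter([
    {"data": {"task_status": "processing"}},
    {"data": {"task_status": "succeed",
              "task_result": {"videos": [{"url": "https://cdn.example.com/v.mp4"}]}}},
])
url = wait_for_video(lambda: next(responses), sleep=lambda s: None)
print(url)  # https://cdn.example.com/v.mp4
```

In production, `fetch_status` would be a closure around the authenticated GET shown in the basic example.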

## Resources

- Image-to-Video API
- Motion Control Guide
Repository: jeremylongshore/claude-code-plugins-plus-skills