REST API + MCP Server

Build with Video Intelligence

Two ways to integrate: a full REST API for custom applications, and an MCP server so AI assistants can work with your videos natively.

REST API

Full programmatic control

A comprehensive RESTful API covering the entire video lifecycle — from upload and encoding to AI analysis, transcription, search, and team management.

Upload & Import

Import videos from YouTube or upload directly via pre-signed S3 URLs. Encoding happens automatically in the background.
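The two-step pre-signed upload flow can be sketched in Python. Note the `/videos/upload` path and the `filename` field below are illustrative assumptions, not the documented API; only the flow (request a pre-signed URL, then PUT the bytes to S3) comes from the text above.

```python
import json
import urllib.request

API_BASE = "https://www.coniviso.com/api/v1"

def build_upload_request(api_key, filename):
    """Step 1 (hypothetical endpoint): ask the API for a pre-signed S3 URL."""
    return urllib.request.Request(
        f"{API_BASE}/videos/upload",
        data=json.dumps({"filename": filename}).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

def build_s3_put(presigned_url, video_bytes):
    """Step 2: PUT the raw bytes to the pre-signed URL the API returned.
    Encoding then starts automatically on the server side."""
    return urllib.request.Request(presigned_url, data=video_bytes, method="PUT")
```

The requests are only built here; `urllib.request.urlopen(...)` would send either one.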

AI Analysis

Submit videos for AI-powered analysis — generate chapters, detect scenes, extract entities, and produce detailed summaries.

Transcription

Transcribe audio with Whisper, then translate into 100+ languages. Full multi-language subtitle support out of the box.

Search & Chat

Hybrid semantic + keyword search across your library. Enable RAG-based AI chat scoped to individual videos or the entire collection.
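A minimal sketch of scoping RAG chat to one video versus the whole library. The paths and the `message` field are assumptions for illustration; only the per-video vs. collection-wide scoping is taken from the description above.

```python
import json
import urllib.request

API_BASE = "https://www.coniviso.com/api/v1"

def build_chat_request(api_key, question, video_id=None):
    """RAG chat scoped to a single video when video_id is given,
    otherwise to the entire library. Paths and field names are
    illustrative assumptions, not the documented API."""
    path = f"/videos/{video_id}/chat" if video_id else "/chat"
    return urllib.request.Request(
        f"{API_BASE}{path}",
        data=json.dumps({"message": question}).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```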

Embeddings

Generate vector embeddings from analysis and transcription data to power semantic search and RAG-based AI chat.

Organizations

Manage teams with role-based access, data isolation between groups, and invite flows — all via API.

```bash
curl -X POST "https://www.coniviso.com/api/v1/videos/{id}/analyze" \
  -H "Authorization: Bearer vi_live_..." \
  -H "Content-Type: application/json" \
  -d '{"analysisType": "comprehensive"}'
```

MCP Server

Your videos inside any AI assistant

Coniviso exposes a Model Context Protocol (MCP) server so AI assistants like Claude, Cursor, and custom agents can access your video library, search content, read transcriptions, and trigger analysis — all through the standard MCP interface.

What is MCP?

The Model Context Protocol is an open standard that lets AI models securely interact with external tools and data sources. Instead of writing custom integrations, any MCP-compatible AI client can connect to Coniviso and immediately use your videos as context.
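Under the hood, an MCP session is plain JSON-RPC. A minimal sketch of the first message a client sends, with method and parameter names following the MCP specification; the `protocolVersion` date string and client info values are examples.

```python
import json

def initialize_message(client_name):
    """Opening JSON-RPC message of an MCP session. The "initialize" method
    and params shape follow the MCP spec; the version values are examples."""
    return {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "initialize",
        "params": {
            "protocolVersion": "2025-03-26",
            "capabilities": {},
            "clientInfo": {"name": client_name, "version": "0.1.0"},
        },
    }

print(json.dumps(initialize_message("my-agent"), indent=2))
```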

21 Tools Available

Every tool is scoped to your API key permissions. Your AI assistant can list, search, update, and delete videos; manage organizations, groups, permissions, and visibility; retrieve analysis and transcriptions; chat with individual videos or your entire library; generate embeddings; and submit new analysis jobs — all without leaving the conversation.

  • list_videos – Browse your video library with pagination and status filters
  • get_video – Get full metadata, processing status, and configuration for any video
  • search_videos – Hybrid semantic + keyword search across all accessible videos
  • chat_with_library – Ask questions about your entire video collection using RAG
  • chat_with_video – Q&A about a specific video with multi-turn conversation support
  • get_analysis – Retrieve AI analysis results: summaries, chapters, scenes, entities
  • get_transcription – Get transcription data, optionally filtered by language
  • analyze_video – Submit a video for AI analysis (comprehensive, scenes, objects, or text)
  • generate_embeddings – Create vector embeddings from analysis and transcription data for semantic search
  • update_video – Update a video's title, description, visibility, or organization
  • delete_video – Permanently delete a video and all associated data
  • get_permissions – List all access permissions granted on a video
  • grant_permission – Grant a user, group, or organization access to a video
  • revoke_permission – Remove a previously granted access permission from a video
  • list_organizations – List the organizations you belong to
  • get_organization – Get details of a specific organization
  • list_org_members – List all members of an organization
  • invite_org_member – Invite a member to an organization
  • remove_org_member – Remove a member from an organization
  • list_org_groups – List all groups within an organization
  • list_group_members – List all members of a specific group within an organization
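Invoking any of these tools uses the standard JSON-RPC `tools/call` method from the MCP specification. A minimal envelope builder; the `query` argument name for search_videos is an assumption, not the documented tool schema.

```python
def tools_call(name, arguments, request_id=2):
    """JSON-RPC envelope for invoking an MCP tool ("tools/call" is the
    method name defined by the MCP specification)."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# e.g. hybrid search via the search_videos tool
# (the "query" argument name is an assumption):
msg = tools_call("search_videos", {"query": "product launch demo"})
```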

8 Resources

MCP resources give AI assistants direct access to your video data as structured context:

  • coniviso://videos – Full list of all accessible videos with metadata
  • coniviso://video/:id – Complete video detail including analysis and transcription
  • coniviso://video/:id/transcription – Full transcription data for a specific video
  • coniviso://video/:id/analysis – AI analysis results: summary, chapters, scenes, entities
  • coniviso://video/:id/permissions – Access control list showing granted permissions
  • coniviso://organizations – List of all organizations you belong to
  • coniviso://organization/:id – Organization details and list of its members
  • coniviso://organization/:id/groups – List of all groups within an organization

Connect in 30 Seconds

Add Coniviso to any MCP client with a single configuration block. Use your existing API key for authentication.

Endpoint: https://www.coniviso.com/mcp
Authentication: Bearer vi_live_...
Transport: Streamable HTTP (JSON-RPC over HTTP)

Each session gets its own isolated server instance. Sessions are identified by the mcp-session-id header.
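The headers for a Streamable HTTP call can be assembled like this; the Accept pair is what the Streamable HTTP transport expects, and echoing back the server-issued session id keeps you on the same isolated server instance. A sketch, not an official client:

```python
def mcp_headers(api_key, session_id=None):
    """Headers for a POST to the MCP endpoint. Pass the mcp-session-id
    value returned by the server on every call after the first."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
        "Accept": "application/json, text/event-stream",
    }
    if session_id is not None:
        headers["mcp-session-id"] = session_id
    return headers
```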

```json
{
  "mcpServers": {
    "coniviso": {
      "url": "https://www.coniviso.com/mcp",
      "headers": {
        "Authorization": "Bearer vi_live_..."
      }
    }
  }
}
```

Getting Started

Up and running in minutes

1. Create an API Key

Go to Settings → API Keys in your Coniviso dashboard. API keys follow the vi_live_* format and can be scoped with granular permissions.

2. Authenticate Requests

Include your API key as a Bearer token in the Authorization header. Works for both REST API calls and MCP connections.
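In Python, that looks like the helper below; the same Bearer header works against both the REST API and the MCP endpoint. The `/videos` listing path is an assumption for illustration.

```python
import urllib.request

def authed_get(api_key, path):
    """Authenticated GET against the REST API; the same Authorization
    header is also what the MCP connection uses."""
    return urllib.request.Request(
        f"https://www.coniviso.com/api/v1{path}",
        headers={"Authorization": f"Bearer {api_key}"},
    )

# e.g. list videos (path assumed for illustration):
req = authed_get("vi_live_...", "/videos")
```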

3. Start Building

Use the REST API from any language, or connect the MCP server to your AI assistant. Both share the same API key and permissions.

Available Scopes

  • videos:read – List and read video metadata
  • videos:write – Upload, update, and delete videos
  • analysis – Submit and read AI analysis results
  • search – Search across video content
  • chat – Use AI chat features (per-video and library-wide)
  • embeddings – Generate and manage vector embeddings

Ready to integrate?

Start building with the Coniviso API and MCP server. Free tier includes 60 minutes of AI processing.