Build with Video Intelligence
Two ways to integrate: a full REST API for custom applications, and an MCP server so AI assistants can work with your videos natively.
Full programmatic control
A comprehensive RESTful API covering the entire video lifecycle — from upload and encoding to AI analysis, transcription, search, and team management.
Upload & Import
Import videos from YouTube or upload directly via pre-signed S3 URLs. Encoding happens automatically in the background.
AI Analysis
Submit videos for AI-powered analysis — generate chapters, detect scenes, extract entities, and produce detailed summaries.
Transcription
Transcribe audio with Whisper, then translate into 100+ languages. Full multi-language subtitle support out of the box.
Search & Chat
Hybrid semantic + keyword search across your library. Enable RAG-based AI chat scoped to individual videos or the entire collection.
Embeddings
Generate vector embeddings from analysis and transcription data to power semantic search and RAG-based AI chat.
Organizations
Manage teams with role-based access, data isolation between groups, and invite flows — all via API.
curl -X POST "https://www.coniviso.com/api/v1/videos/{id}/analyze" \
-H "Authorization: Bearer vi_live_..." \
-H "Content-Type: application/json" \
-d '{"analysisType": "comprehensive"}'
Your videos inside any AI assistant
Coniviso exposes a Model Context Protocol (MCP) server so AI assistants like Claude, Cursor, and custom agents can access your video library, search content, read transcriptions, and trigger analysis — all through the standard MCP interface.
What is MCP?
The Model Context Protocol is an open standard that lets AI models securely interact with external tools and data sources. Instead of writing custom integrations, any MCP-compatible AI client can connect to Coniviso and immediately use your videos as context.
21 Tools Available
Every tool is scoped to your API key's permissions. Your AI assistant can list, search, update, and delete videos; manage organizations, groups, permissions, and visibility; retrieve analysis and transcriptions; chat with individual videos or your entire library; generate embeddings; and submit new analysis jobs — all without leaving the conversation.
list_videos – Browse your video library with pagination and status filters
get_video – Get full metadata, processing status, and configuration for any video
search_videos – Hybrid semantic + keyword search across all accessible videos
chat_with_library – Ask questions about your entire video collection using RAG
chat_with_video – Q&A about a specific video with multi-turn conversation support
get_analysis – Retrieve AI analysis results: summaries, chapters, scenes, entities
get_transcription – Get transcription data, optionally filtered by language
analyze_video – Submit a video for AI analysis (comprehensive, scenes, objects, or text)
generate_embeddings – Create vector embeddings from analysis and transcription data for semantic search
update_video – Update a video's title, description, visibility, or organization
delete_video – Permanently delete a video and all associated data
get_permissions – List all access permissions granted on a video
grant_permission – Grant a user, group, or organization access to a video
revoke_permission – Remove a previously granted access permission from a video
list_organizations – Returns a list of organizations you belong to
get_organization – Returns details of a specific organization
list_org_members – Returns all members of an organization
invite_org_member – Invite a member to an organization
remove_org_member – Remove a member from an organization
list_org_groups – List all groups within an organization
list_group_members – List all members of a specific group within an organization
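Under the hood, MCP clients invoke these tools with JSON-RPC 2.0 `tools/call` messages. A minimal sketch, using the `search_videos` tool from the list above (the `query` argument name is an assumption):

```python
import json

def mcp_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Serialize an MCP tools/call request (JSON-RPC 2.0 envelope)."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

msg = mcp_tool_call(1, "search_videos", {"query": "quarterly review"})
print(msg)
```

Any MCP-compatible client emits this envelope for you; the sketch only shows what travels over the wire.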
8 Resources
MCP resources give AI assistants direct access to your video data as structured context:
coniviso://videos – Full list of all accessible videos with metadata
coniviso://video/:id – Complete video detail including analysis and transcription
coniviso://video/:id/transcription – Full transcription data for a specific video
coniviso://video/:id/analysis – AI analysis results: summary, chapters, scenes, entities
coniviso://video/:id/permissions – Access control list showing granted permissions
coniviso://organizations – List of all organizations you belong to
coniviso://organization/:id – Organization details and list of its members
coniviso://organization/:id/groups – List of all groups within an organization
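Resources are fetched with JSON-RPC `resources/read` messages, filling the `:id` placeholders in the URI templates above. A small sketch (the example video id is invented):

```python
import json

def fill_uri(template: str, **params: str) -> str:
    """Substitute :name placeholders in a coniviso:// resource URI."""
    for key, value in params.items():
        template = template.replace(f":{key}", value)
    return template

def resources_read(request_id: int, uri: str) -> str:
    """Serialize an MCP resources/read request for the given URI."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "resources/read",
        "params": {"uri": uri},
    })

uri = fill_uri("coniviso://video/:id/transcription", id="vid_123")
print(uri)  # coniviso://video/vid_123/transcription
```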
Connect in 30 Seconds
Add Coniviso to any MCP client with a single configuration block. Use your existing API key for authentication.
Server URL: https://www.coniviso.com/mcp
Authentication: Bearer vi_live_...
Each session gets its own isolated server instance. Sessions are identified by the mcp-session-id header.
{
  "mcpServers": {
    "coniviso": {
      "url": "https://www.coniviso.com/mcp",
      "headers": {
        "Authorization": "Bearer vi_live_..."
      }
    }
  }
}
Up and running in minutes
1. Create an API Key
Go to Settings → API Keys in your Coniviso dashboard. API keys follow the vi_live_* format and can be scoped with granular permissions.
2. Authenticate Requests
Include your API key as a Bearer token in the Authorization header. Works for both REST API calls and MCP connections.
3. Start Building
Use the REST API from any language, or connect the MCP server to your AI assistant. Both share the same API key and permissions.
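The three steps above boil down to one shared header. A minimal sketch of the Bearer authentication used by both surfaces, with a format check based on the `vi_live_*` convention described in step 1:

```python
def auth_headers(api_key: str) -> dict[str, str]:
    """Bearer auth header shared by REST calls and the MCP endpoint."""
    if not api_key.startswith("vi_live_"):
        raise ValueError("Coniviso API keys use the vi_live_* format")
    return {"Authorization": f"Bearer {api_key}"}

print(auth_headers("vi_live_abc123")["Authorization"])
# Bearer vi_live_abc123
```

Pass the same dict to your HTTP client for REST calls and to your MCP client's `headers` configuration block.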
Available Scopes
videos:read – List and read video metadata
videos:write – Upload, update, and delete videos
analysis – Submit and read AI analysis results
search – Search across video content
chat – Use AI chat features (per-video and library-wide)
embeddings – Generate and manage vector embeddings
Ready to integrate?
Start building with the Coniviso API and MCP server. Free tier includes 60 minutes of AI processing.