ConvergePanel

How to use ConvergePanel

Research mode and claim verification share the same idea: compare models before you trust an answer.

Getting started

How do I run my first research panel?

Sign in, open the Research tab, enter your question, choose at least two models (free plan: up to two per run; paid plans: up to five), and run the panel. You will see each model's answer, then a synthesized report with agreement and disagreement mapped across models. Typical wall time is roughly 15–45 seconds; complex prompts can run longer.

How do I verify a claim?

Open the Verify claim tab, paste a claim (a sentence or short passage), select your models, and run verification. Each model returns a structured evaluation when parsing succeeds. ConvergePanel aggregates those into an overall verdict (confirmed, disputed, partially true, or unverifiable), a consensus score, per-model summaries, and an audit trail you can copy as JSON.
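For illustration, the shapes involved might look roughly like this; the field names below are hypothetical, not ConvergePanel's actual schema.

```python
# Hypothetical shapes only; field names are illustrative, not the real schema.
per_model_evaluation = {
    "model": "gpt-4o",
    "verdict": "accurate",      # accurate | inaccurate | partially_accurate | unverifiable
    "confidence": 0.82,
    "summary": "Figure matches the most recently published statistics.",
}

aggregated_result = {
    "overall_verdict": "confirmed",   # confirmed | disputed | partially true | unverifiable
    "consensus_score": 78,            # 0-100, see Understanding results
    "per_model": [per_model_evaluation],
    "audit_trail": {},                # copyable as JSON from the UI
}
```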

Video verification

⚠️ AI-assisted authenticity review, not forensic analysis: results should inform human judgment, not replace it.

How does video verification work?

Upload a video file (MP4, MOV, or WebM, up to 60 seconds and 50MB). ConvergePanel extracts frames at regular intervals and sends them with video metadata to three vision-capable AI models (GPT-4o, Claude, and Gemini). Each model independently reviews the frames for signs of AI generation, synthetic-looking edits, and manipulation indicators. You receive a verdict with consensus scoring and per-model evidence.
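As a rough illustration of the frame-sampling step described above (not ConvergePanel's actual pipeline), a minimal sketch using OpenCV might look like this:

```python
# Illustrative frame sampling at regular intervals; requires opencv-python.
import cv2

def sample_frames(path: str, every_n_seconds: float = 2.0) -> list:
    """Return frames sampled every `every_n_seconds` from a short clip."""
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    step = max(1, int(fps * every_n_seconds))
    frames, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            frames.append(frame)   # each sampled frame goes to the vision models
        index += 1
    cap.release()
    return frames
```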

What do the video verdicts mean?

Authentic: Models reported no strong manipulation indicators. That does not mean the clip is necessarily genuine — it means no major red flags showed up in this pass.
Likely manipulated: Multiple models flagged inconsistencies suggesting AI generation or digital editing; open the per-model evidence for specifics.
Inconclusive: Models split or lacked confidence — human review is appropriate.
Insufficient: The clip is too short, low resolution, or too compressed for a meaningful read.

Can I rely on this as legal or lab-grade evidence?

No. ConvergePanel uses general-purpose AI vision models, not dedicated authenticity-lab suites. Treat outputs as indicators for your judgment, not as standalone evidence in legal or compliance settings.

What does metadata review cover?

We read file metadata such as creation time, encoding software, resolution, frame rate, and codec. The product can flag markers like software names associated with AI video tools, missing creation dates, implausible timestamps, or unusual combinations of codec, resolution, and frame rate — always as hints, not as definitive labels.
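If you want to inspect the same kind of metadata yourself, one common tool is ffprobe (part of FFmpeg). The sketch below is illustrative and is not how ConvergePanel reads metadata internally.

```python
# Read container and stream metadata with ffprobe (FFmpeg must be installed).
import json
import subprocess

def read_metadata(path: str) -> dict:
    out = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True, check=True,
    ).stdout
    info = json.loads(out)
    fmt = info.get("format", {})
    # Fields of interest: creation time, encoder, codec, resolution, frame rate.
    return {
        "creation_time": fmt.get("tags", {}).get("creation_time"),
        "encoder": fmt.get("tags", {}).get("encoder"),
        "streams": [
            {"codec": s.get("codec_name"), "width": s.get("width"),
             "height": s.get("height"), "avg_frame_rate": s.get("avg_frame_rate")}
            for s in info.get("streams", [])
        ],
    }
```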

What about heavy compression?

Social and mobile video is often heavily compressed; compression can look like tampering. Models are guided to separate compression artifacts from manipulation indicators and to call out ambiguity. Very low quality may land in Inconclusive or Insufficient.

How many video verifications do I get?

Free plan: not available.
3-Model plan: 5 per calendar month.
5-Model plan: 20 per calendar month.
Each video verification also consumes one run from your monthly panel run allowance.

What formats are supported, and is my video kept?

MP4, MOV, and WebM; max 50MB and 60 seconds. The file is processed in memory and not stored as media — only analysis results and structured metadata are saved. Extracted frames are discarded after the models finish.
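A minimal pre-upload check against the stated limits could look like this; the helper below is hypothetical and not part of any ConvergePanel client.

```python
# Hypothetical client-side check against the documented upload limits.
import os

ALLOWED_EXTENSIONS = {".mp4", ".mov", ".webm"}
MAX_BYTES = 50 * 1024 * 1024   # 50 MB
MAX_SECONDS = 60

def fits_limits(path: str, duration_seconds: float) -> bool:
    ext = os.path.splitext(path)[1].lower()
    return (ext in ALLOWED_EXTENSIONS
            and os.path.getsize(path) <= MAX_BYTES
            and duration_seconds <= MAX_SECONDS)
```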

Understanding results

What is the consensus score?

A number from 0 to 100. For claim verification it is computed from how models voted, how many returned usable parsed results, and related signals (for example low-confidence rows or parse errors). Higher generally means more support and healthier participation; lower means more disagreement, missing models, or weak evidence. Use it together with the confidence label and evidence quality — not as a single yes/no.
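The exact formula is internal to the product; the sketch below only illustrates the kinds of signals the description says feed into the score (agreement, usable participation, low-confidence penalties), with made-up weights.

```python
# Toy scoring sketch with invented weights; not ConvergePanel's formula.
def consensus_score(votes: list[str], parse_errors: int, low_confidence: int) -> int:
    usable = len(votes)
    total = usable + parse_errors
    if usable == 0:
        return 0
    top_vote = max(set(votes), key=votes.count)
    agreement = votes.count(top_vote) / usable     # how aligned the usable votes are
    participation = usable / total                 # how many models parsed cleanly
    penalty = 5 * low_confidence                   # low-confidence rows drag it down
    score = round(100 * agreement * participation - penalty)
    return max(0, min(100, score))
```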

What do the confidence labels mean?

You will see High, Medium, or Low next to the score. High indicates a stronger score with enough models successfully contributing; Low flags a weak score and/or too few usable model rows. Medium is everything in between. Exact thresholds are fixed in the product logic so the same inputs yield the same label.
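The real thresholds are not documented here; the mapping below is purely illustrative of how a score and a usable-model count could translate into a label.

```python
# Made-up thresholds, for illustration only; the product's values may differ.
def confidence_label(score: int, usable_models: int) -> str:
    if score >= 75 and usable_models >= 3:
        return "High"
    if score < 40 or usable_models < 2:
        return "Low"
    return "Medium"
```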

What is evidence quality?

Strong, mixed, or weak summarizes how tight the model evidence looks relative to agreement and low-confidence counts. Strong suggests aligned, higher-confidence evidence; weak suggests splits or many low-confidence / failed parses.

What do the verdicts mean in claim verification?

Confirmed: a large majority of models with usable verdicts rate the claim as accurate.
Disputed: material disagreement (for example both accurate and inaccurate votes among usable models).
Partially true: enough partial or qualified agreement that the core idea may hold but details need correction.
Unverifiable: too many unknowns, too many unverifiable votes, or not enough usable model output to call the claim confirmed or disputed with confidence.
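A rough, hypothetical sketch of how per-model votes could map to these verdicts (the product's actual cutoffs may differ):

```python
# Toy aggregation over per-model votes; cutoffs are illustrative.
def overall_verdict(votes: list[str]) -> str:
    usable = [v for v in votes if v != "parse_error"]
    if not usable:
        return "unverifiable"
    accurate = usable.count("accurate")
    inaccurate = usable.count("inaccurate")
    partial = usable.count("partially_accurate")
    if accurate and inaccurate:
        return "disputed"                  # material disagreement
    if accurate >= 0.75 * len(usable):
        return "confirmed"                 # "large majority" rate it accurate
    if accurate + partial > len(usable) / 2:
        return "partially true"            # qualified agreement on the core idea
    return "unverifiable"
```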

Why do some models say "unverifiable" while others say "accurate"?

Models differ in training data, tools, and recency. One may have live or web-backed context; another may refuse or hedge on the same text. That split is itself informative: it often means the claim depends on time-sensitive or source-specific facts not all models can see the same way.

Audit trail

What is the audit trail?

A compact record of a claim verification run (or research metadata where exposed): which models ran, structured verdict signals, consensus score, timestamps, and lengths — designed so you can show what was checked without storing full raw completions in the bundle. In the app, open View audit trail on a claim result; use Copy audit as JSON or Download for your files.
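An exported audit trail might contain fields along these lines; the key names are illustrative rather than the exact export schema.

```python
# Hypothetical example of an exported audit trail, shown as a Python dict.
audit_trail = {
    "claim_length": 142,
    "models_run": ["gpt-4o", "claude", "gemini"],
    "verdicts": {"gpt-4o": "accurate", "claude": "accurate", "gemini": "unverifiable"},
    "consensus_score": 71,
    "overall_verdict": "confirmed",
    "created_at": "2025-01-15T10:32:00Z",
}
```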

Can I export audit trails?

Yes: individual runs can be copied or downloaded as JSON from the UI. Bulk export across many runs is not available yet.

Governance & review

What is the Governance Dashboard?

On the 5-Model plan, the Governance Dashboard is where you set trust thresholds, assign a peer reviewer, and keep an audit trail of review decisions. When a claim verification or research run falls below your configured threshold, it can be flagged for review automatically.

How do I assign a reviewer?

Open Profile → Governance Settings. Turn on Available as reviewer if you want others to assign you. To choose your own reviewer, enter their email — they need the 5-Model plan with reviewer availability on. Assignment applies right away.

What can a reviewer do?

Reviewers open Governance → Review Queue. Each flagged run shows the claim or question, consensus score, model verdicts, where models agree or disagree, and governance reasons. They can approve, block with a comment, or request changes.

What is the Audit Log?

The Audit Log lists your own review actions: who approved or blocked a run, when, and the comment. Run owners see the outcome on their History view when you complete a review.

What triggers a review?

Your governance policy decides. Typical flags include consensus below a threshold, weak evidence quality, model failures, sensitive-domain detection (legal, medical, financial), and certain claim verdicts (e.g. disputed or unverifiable). Thresholds are configurable where your account has policy access.
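A governance policy covering those triggers might be expressed along these lines; the keys below are illustrative, and the real options live in the Governance Dashboard.

```python
# Hypothetical policy shape mirroring the trigger types listed above.
governance_policy = {
    "flag_if_consensus_below": 60,
    "flag_if_evidence_quality": ["weak"],
    "flag_on_model_failures": True,
    "flag_sensitive_domains": ["legal", "medical", "financial"],
    "flag_verdicts": ["disputed", "unverifiable"],
}
```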

Do I need a reviewer to use ConvergePanel?

No. Reviewer assignment is optional. All plans get consensus scoring and governance status on results. Peer review and the full governance dashboard are on the 5-Model plan if you want that workflow.

Plans & limits

What's included in the free plan?

Eight runs per calendar month and up to two models per run. Both research and claim runs count as one run each. Video verification is not included on free. Audit trail views and JSON copy/download are included where the product exposes them for your results.

What do paid plans add?

More models per run (up to five on the full plan), higher monthly run limits, video verification with its own monthly allowance (each video also uses one panel run), and longer history retention on paid tiers. The 5-Model plan adds governance dashboards, peer review, and the review audit log. See the pricing page for current numbers.

Do claims count against my monthly limit?

Yes. Each claim run increments usage the same way a research panel run does.

Troubleshooting

A model shows "parse error" — what happened?

Sometimes a model returns text we cannot parse into the expected JSON shape (for example extra prose or markdown around the payload). That model is marked as a parse error; other models still count. The consensus score reflects the reduced usable set.
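A common way to handle this failure mode, shown only as an illustration and not as ConvergePanel's actual parser, is to pull the JSON object out of a reply that may be wrapped in prose or markdown fences:

```python
# Illustrative parser: extract a JSON payload from a noisy model reply.
import json
import re

def parse_model_reply(text: str) -> dict | None:
    """Return the parsed payload, or None to mark the model as a parse error."""
    match = re.search(r"\{.*\}", text, re.DOTALL)   # grab the outermost braces
    if not match:
        return None
    try:
        return json.loads(match.group(0))
    except json.JSONDecodeError:
        return None
```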

My results seem slow — is that normal?

Yes. Each run fans out to multiple providers in parallel. Most complete in roughly 15–45 seconds; heavy prompts or slow endpoints can approach a minute.