VaaniNetra

Human-in-the-Loop Annotations

Review, correct, and export training data from the three-layer compliance pipeline.

Demo Scenario — Three-Layer Architecture in Action

This page demonstrates VaaniNetra's three-layer validation architecture — a key differentiator that transforms automated compliance checking into a continuously improving system:

Layer 1: Auto Extraction

The system extracts entities (CIN, DIN, amounts), classifies sections (13 regulatory types), and runs 132 compliance rules — all with confidence scores.
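The Layer 1 output described above can be pictured as a list of predictions, each carrying a confidence score that drives routing later in the pipeline. A minimal sketch; the `Extraction` shape, field names, and sample values are illustrative, not the system's actual schema:

```python
from dataclasses import dataclass

@dataclass
class Extraction:
    """One auto-extracted item with its model confidence (hypothetical shape)."""
    field: str         # e.g. "CIN", "DIN", "amount"
    value: str
    confidence: float  # 0.0-1.0, consumed by the review-routing step

# Illustrative Layer 1 output for a single document
layer1_output = [
    Extraction("CIN", "U12345MH2020PTC000000", 0.93),
    Extraction("DIN", "00000001", 0.62),  # low confidence -> review candidate
]
```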

Layer 2: LLM Reasoning

Gemini 2.0 Flash refines predictions using RAG retrieval from 408 vectors across regulations and NFRA precedent orders, generating evidence spans and explanations for each finding.
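At its core, the retrieval step ranks stored vectors by similarity to the query embedding. A toy sketch of that ranking with cosine similarity over a three-entry in-memory index; the real system's 408-vector store, its embedding model, and the `top_k` helper name are assumptions here:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec, index, k=3):
    """Return the k most similar (doc_id, score) pairs from a vector index."""
    scored = [(doc_id, cosine(query_vec, vec)) for doc_id, vec in index.items()]
    return sorted(scored, key=lambda t: t[1], reverse=True)[:k]

# Toy stand-in for the regulation / NFRA-precedent vector store
index = {
    "reg_1": [1.0, 0.0],
    "reg_2": [0.6, 0.8],
    "nfra_1": [0.0, 1.0],
}
hits = top_k([1.0, 0.1], index, k=2)
```

The retrieved passages would then be packed into the LLM prompt as grounding context.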

Layer 3: Human Review

Low-confidence items (<70% for entities, <75% for compliance) are routed here for expert review. Every correction becomes training data for fine-tuning.

How to Demo

  1. Upload a document → extraction pipeline runs → low-confidence items appear in the queue below.
  2. Click "Review" on any task → see the auto prediction alongside the document context.
  3. Agree or correct — click "Agree with Auto" to validate, or edit the JSON and submit your correction.
  4. Export training data — all corrections can be exported as JSONL (for fine-tuning), few-shot prompts, or HuggingFace format.
  5. Check calibration stats — the Stats tab shows agreement rate, telling us how well-calibrated our confidence scores are.
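The JSONL export in step 4 amounts to serializing each correction as one JSON object per line. A minimal sketch; the record fields shown are hypothetical, not the exporter's actual schema:

```python
import json

# Illustrative correction records (field names are assumptions)
corrections = [
    {"task": "entity", "auto": "U12345", "human": "U12345MH2020PTC000000"},
    {"task": "compliance", "auto": "non_compliant", "human": "compliant"},
]

def to_jsonl(records):
    """Serialize records as JSON Lines: one training example per line."""
    return "\n".join(json.dumps(r, ensure_ascii=False) for r in records)

jsonl_text = to_jsonl(corrections)
```

Each line of the output parses independently, which is what fine-tuning toolchains expect from a JSONL file.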
Requirements demonstrated:

  - Human-in-the-Loop Validation
  - Continuous Improvement Pipeline
  - Training Data Generation
  - Confidence Calibration