Hadrian is experimental alpha software. Do not use in production.

Chat Modes

Advanced multi-model interaction modes for synthesized, chained, debated, and collaborative conversations

Hadrian's chat UI supports multiple conversation modes that define how selected models interact when responding to prompts. These modes enable sophisticated multi-model workflows beyond simple parallel responses.

Overview

Chat modes are organized into five phases based on complexity:

| Phase | Modes | Description |
| --- | --- | --- |
| 1. Core | Multiple, Chained, Routed | Basic parallel and sequential patterns |
| 2. Synthesis | Synthesized, Refined, Critiqued | Combining and improving responses |
| 3. Competitive | Elected, Tournament, Consensus | Voting and agreement-based selection |
| 4. Advanced | Debated, Council, Hierarchical | Complex multi-model orchestration |
| 5. Experimental | Scattershot, Explainer, Confidence | Parameter variations and special modes |

All modes work with model instances, allowing the same model to participate multiple times with different settings (e.g., "GPT-4 Creative" vs "GPT-4 Precise").
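A minimal sketch of what a model instance might look like (the type and field names here are illustrative, not Hadrian's actual internals): the same underlying model appears twice with different labels and sampling parameters.

```typescript
// Hypothetical instance shape: instances are identified separately from
// the underlying model, so one model can participate multiple times.
interface ModelParameters {
  temperature?: number;
  top_p?: number;
}

interface ModelInstance {
  id: string;       // unique per instance, not per model
  model: string;    // underlying model identifier
  label: string;    // display name shown in the UI
  params: ModelParameters;
}

const instances: ModelInstance[] = [
  { id: "i1", model: "gpt-4", label: "GPT-4 Creative", params: { temperature: 1.2 } },
  { id: "i2", model: "gpt-4", label: "GPT-4 Precise", params: { temperature: 0.0 } },
];
```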


Phase 1: Core Modes

Multiple (Default)

Each model responds independently in parallel. This is the default mode.

| Attribute | Value |
| --- | --- |
| Min models | 1 |
| Flow | All models respond simultaneously |
| Output | All responses displayed side-by-side |

Use cases: Getting diverse perspectives, comparing model outputs, maximum throughput.


Chained

Models respond sequentially, each building on previous responses.

| Attribute | Value |
| --- | --- |
| Min models | 2 |
| Flow | Model 1 → Model 2 → Model 3 → ... |
| Output | Each response shows chain position |

Configuration:

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| chainOrder | string[] | Selection order | Custom model sequence |

Flow:

  1. First model responds to original prompt
  2. Each subsequent model sees all previous responses
  3. Instruction: "Build upon, refine, or improve the previous response(s)"

Use cases: Iterative refinement, building on ideas, progressive deepening.
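The flow above can be sketched as a prompt builder: each chain participant sees the original prompt, all previous responses, and the build-upon instruction. The function name and context format are assumptions for illustration.

```typescript
// Sketch: assemble the prompt each chain participant receives.
function buildChainPrompt(
  originalPrompt: string,
  previousResponses: { label: string; text: string }[],
): string {
  // The first model in the chain sees only the original prompt.
  if (previousResponses.length === 0) return originalPrompt;
  const history = previousResponses
    .map((r, i) => `Response ${i + 1} (${r.label}):\n${r.text}`)
    .join("\n\n");
  return (
    `${originalPrompt}\n\n${history}\n\n` +
    `Build upon, refine, or improve the previous response(s).`
  );
}
```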


Routed

A router model selects which model should respond based on the prompt.

| Attribute | Value |
| --- | --- |
| Min models | 2 |
| Flow | Router analyzes → Selects best model → Selected model responds |
| Output | Single response with routing reasoning |

Configuration:

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| routerInstanceId | string | First instance | Which instance routes |
| routingPrompt | string | Built-in | Custom routing logic |

Flow:

  1. Router model analyzes the prompt (temperature=0 for deterministic selection)
  2. Router selects the best target model with reasoning
  3. Selected model responds to original prompt

Use cases: Optimal model selection, cost efficiency, specialized routing.
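One way to handle the router's selection step defensively is sketched below. The JSON decision shape is an assumption; Hadrian's built-in routing prompt may use a different format, but routers can name nonexistent targets, so a fallback is worth modeling.

```typescript
// Hypothetical shape of a routing decision returned by the router model.
interface RoutingDecision {
  target: string;    // instance the router chose
  reasoning: string; // why it chose that instance
}

function parseRoutingDecision(raw: string, validTargets: string[]): RoutingDecision {
  const decision = JSON.parse(raw) as RoutingDecision;
  if (!validTargets.includes(decision.target)) {
    // Fall back to the first candidate if the router named an unknown model.
    return { target: validTargets[0], reasoning: "fallback: unknown target" };
  }
  return decision;
}
```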


Phase 2: Synthesis Modes

Synthesized

All models respond, then a synthesizer combines the results.

| Attribute | Value |
| --- | --- |
| Min models | 2 |
| Flow | Parallel responses → Synthesizer combines |
| Output | Single synthesized response with source metadata |

Configuration:

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| synthesizerInstanceId | string | First instance | Which instance synthesizes |
| synthesisPrompt | string | Built-in | Custom synthesis instruction |

Flow:

  1. Gathering phase: All non-synthesizer models respond in parallel
  2. Synthesis phase: Synthesizer reads all responses and creates unified answer

Use cases: Creating definitive answers, combining expert views, resolving contradictions.


Refined

Models take turns improving a response through multiple rounds.

| Attribute | Value |
| --- | --- |
| Min models | 2 |
| Flow | Initial → Refine → Refine → ... → Final |
| Output | Final refined response with full history |

Configuration:

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| refinementRounds | number | 2 | Number of refinement cycles |
| refinementPrompt | string | Built-in | Custom refinement instruction |

Flow:

  1. First model generates initial response
  2. Each subsequent model refines the previous response
  3. Process cycles through models for N rounds (round-robin)
  4. Returns final refined response

Use cases: Progressive quality improvement, error correction, iterative enhancement.
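The round-robin cycling in step 3 reduces to simple modular arithmetic. This is a sketch of the scheduling only; the exact number of total steps per round is an assumption.

```typescript
// Sketch: which instance acts at each step of a round-robin refinement.
// Step 0 is the initial response; later steps cycle through the instances.
function refinerAt(step: number, instanceCount: number): number {
  return step % instanceCount;
}

// With 3 instances, steps 0..6 visit instances in order, wrapping around:
const order = Array.from({ length: 7 }, (_, step) => refinerAt(step, 3));
```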


Critiqued

One model responds, others critique, then the original revises.

| Attribute | Value |
| --- | --- |
| Min models | 2 |
| Flow | Initial response → Parallel critiques → Revision |
| Output | Revised response with critique history |

Configuration:

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| primaryInstanceId | string | First instance | Initial responder |
| critiquePrompt | string | Built-in | Custom critique instruction |

Flow:

  1. Primary model generates initial response
  2. All other models provide critiques in parallel
  3. Primary model revises based on all critiques received

Use cases: Quality assurance, peer review, error detection.


Phase 3: Competitive Modes

Elected

Models vote democratically to select the best response.

| Attribute | Value |
| --- | --- |
| Min models | 3 |
| Flow | All respond → All vote → Winner selected |
| Output | Winning response with vote counts |

Configuration:

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| votingPrompt | string | Built-in | Custom voting criteria |

Flow:

  1. Responding phase: All models respond in parallel
  2. Voting phase: All models vote on which response is best
  3. Winner determined by vote counts

Use cases: Selecting best solution, consensus quality assurance, democratic selection.
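The winner-determination step is a plain vote tally. A minimal sketch, assuming ties break toward the earliest-cast vote (tie-breaking in Hadrian itself is not documented here):

```typescript
// Sketch: count votes per candidate and pick the highest total.
function tallyVotes(votes: string[]): { winner: string; counts: Record<string, number> } {
  const counts: Record<string, number> = {};
  for (const v of votes) counts[v] = (counts[v] ?? 0) + 1;
  let winner = votes[0];
  for (const v of votes) {
    if (counts[v] > counts[winner]) winner = v;
  }
  return { winner, counts };
}
```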


Tournament

Models compete in elimination brackets with judge-based comparisons.

| Attribute | Value |
| --- | --- |
| Min models | 4 |
| Flow | All respond → Bracket matches → Winner advances → Final winner |
| Output | Tournament winner with bracket history |

Configuration:

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| tournamentBracket | string[][] | Auto-generated | Custom bracket structure |

Flow:

  1. Generating phase: All models respond in parallel
  2. Competing phase: Models paired into brackets
  3. Judge model compares each pair, selects winner
  4. Winners advance to next round (bye for odd numbers)
  5. Process repeats until one winner remains

Use cases: Finding best response through elimination, multi-criteria competition.
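The bracket-pairing step, including the bye for odd counts, can be sketched as follows (an illustration of one plausible auto-generated bracket, not Hadrian's exact pairing logic):

```typescript
// Sketch: pair contestants into matches for one round; an unpaired
// contestant receives a bye and advances automatically.
function makeRound<T>(contestants: T[]): { matches: [T, T][]; bye?: T } {
  const matches: [T, T][] = [];
  for (let i = 0; i + 1 < contestants.length; i += 2) {
    matches.push([contestants[i], contestants[i + 1]]);
  }
  const bye =
    contestants.length % 2 === 1 ? contestants[contestants.length - 1] : undefined;
  return { matches, bye };
}
```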


Consensus

Models revise their responses until they reach agreement.

| Attribute | Value |
| --- | --- |
| Min models | 2 |
| Flow | Round 0 → Revise → Revise → ... → Consensus |
| Output | Final consensus response with agreement score |

Configuration:

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| maxConsensusRounds | number | 5 | Maximum iterations |
| consensusThreshold | number | 0.8 | Agreement threshold (0-1) |
| consensusPrompt | string | Built-in | Custom consensus instruction |

Flow:

  1. All models respond in parallel (round 0)
  2. Each model sees all responses, provides revised response
  3. System measures similarity/agreement between responses
  4. Process repeats until threshold reached or max rounds

Use cases: Reaching agreement, finding common ground, collaborative refinement.
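Step 3 needs some similarity measure to compare against `consensusThreshold`. The metric Hadrian uses is not specified here; one simple stand-in is Jaccard overlap of word sets, averaged over all response pairs:

```typescript
// Illustrative agreement measure: Jaccard word-set overlap, averaged
// over all pairs of responses. Returns a score in [0, 1].
function jaccard(a: string, b: string): number {
  const wa = new Set(a.toLowerCase().split(/\s+/).filter(Boolean));
  const wb = new Set(b.toLowerCase().split(/\s+/).filter(Boolean));
  if (wa.size === 0 && wb.size === 0) return 1;
  let inter = 0;
  for (const w of wa) if (wb.has(w)) inter++;
  return inter / (wa.size + wb.size - inter);
}

function agreement(responses: string[]): number {
  let total = 0;
  let pairs = 0;
  for (let i = 0; i < responses.length; i++) {
    for (let j = i + 1; j < responses.length; j++) {
      total += jaccard(responses[i], responses[j]);
      pairs++;
    }
  }
  return pairs ? total / pairs : 1;
}
```

The loop would then stop once `agreement(responses) >= consensusThreshold` or `maxConsensusRounds` is reached.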


Phase 4: Advanced Modes

Debated

Models argue different positions back-and-forth, then summarize.

| Attribute | Value |
| --- | --- |
| Min models | 2 |
| Flow | Opening arguments → Rebuttals → ... → Summary |
| Output | Balanced summary with full debate transcript |

Configuration:

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| debateRounds | number | 3 | Number of rebuttal rounds |
| debatePrompt | string | Built-in | Custom debate instruction |
| synthesizerInstanceId | string | First instance | Summarizer model |

Flow:

  1. Assign positions (pro/con by default) to models
  2. Round 0: Each model presents opening argument from their position
  3. Rounds 1-N: Each model responds to opposing arguments (rebuttal)
  4. Summarizer synthesizes debate into balanced conclusion

Use cases: Exploring tradeoffs, multi-perspective analysis, arguments & counterarguments.
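The default pro/con assignment in step 1 can be sketched as a simple alternation across instances (an assumption about how positions are distributed when more than two models debate):

```typescript
// Sketch: alternate pro/con positions across debate participants.
function assignPositions(instanceIds: string[]): Record<string, "pro" | "con"> {
  const positions: Record<string, "pro" | "con"> = {};
  instanceIds.forEach((id, i) => {
    positions[id] = i % 2 === 0 ? "pro" : "con";
  });
  return positions;
}
```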


Council

Models discuss from assigned roles/perspectives, then synthesize.

| Attribute | Value |
| --- | --- |
| Min models | 2 |
| Flow | Role perspectives → Discussion rounds → Synthesis |
| Output | Comprehensive response with all perspectives |

Configuration:

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| councilRoles | Record<string, string> | Auto-assigned | Manual role assignments |
| councilAutoAssignRoles | boolean | false | Let first model assign roles |
| councilPrompt | string | Built-in | Custom council instruction |

Default roles: Technical Expert, Business Analyst, User Advocate, Risk Assessor, Innovation Specialist

Flow:

  1. Assign roles (manually or auto-assigned based on query)
  2. Round 0: Each model presents initial perspective from their role
  3. Rounds 1-N: Each model responds to other perspectives (discussion)
  4. Synthesizer combines all perspectives into comprehensive response

Use cases: Multi-stakeholder perspectives, domain expertise integration, holistic analysis.


Hierarchical

A coordinator decomposes tasks and delegates to worker models.

| Attribute | Value |
| --- | --- |
| Min models | 2 |
| Flow | Decompose → Parallel workers → Synthesize |
| Output | Synthesized response with subtask results |

Configuration:

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| coordinatorInstanceId | string | First instance | Task coordinator |
| hierarchicalWorkerPrompt | string | Built-in | Custom worker instruction |

Flow:

  1. Decomposition: Coordinator analyzes prompt, creates subtask list with assignments
  2. Execution: Worker models complete assigned subtasks in parallel
  3. Synthesis: Coordinator combines all worker results into final response

Use cases: Complex task breakdown, specialized workers, divide-and-conquer.
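The decomposition step implies the coordinator emits a machine-readable subtask list. The JSON shape below is an assumption for illustration; the sketch also reassigns any subtask whose worker name the coordinator invented:

```typescript
// Hypothetical subtask shape emitted by the coordinator.
interface Subtask {
  description: string;
  assignedTo: string; // worker instance id
}

function parseSubtasks(raw: string, workerIds: string[]): Subtask[] {
  const tasks = JSON.parse(raw) as Subtask[];
  // Reassign subtasks pointing at unknown workers round-robin.
  return tasks.map((t, i) =>
    workerIds.includes(t.assignedTo)
      ? t
      : { ...t, assignedTo: workerIds[i % workerIds.length] },
  );
}
```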


Phase 5: Experimental Modes

Scattershot

Run the same model multiple times with different parameter variations.

| Attribute | Value |
| --- | --- |
| Min models | 1 |
| Flow | Same prompt → Multiple parameter sets → Compare |
| Output | All variations displayed with parameter labels |

Configuration:

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| parameterVariations | ModelParameters[] | Built-in defaults | Custom parameter sets |

Default variations:

  • temp=0.0 (deterministic)
  • temp=0.5 (balanced)
  • temp=1.0 (creative)
  • temp=1.5, top_p=0.9 (very creative)

Use cases: Parameter tuning, content variation generation, creative sampling.
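The default variations above, expressed as a `parameterVariations` value (the `ModelParameters` interface name comes from the config table; its exact fields are an assumption):

```typescript
interface ModelParameters {
  temperature: number;
  top_p?: number;
}

// The four built-in default parameter sets listed above.
const defaultVariations: ModelParameters[] = [
  { temperature: 0.0 },             // deterministic
  { temperature: 0.5 },             // balanced
  { temperature: 1.0 },             // creative
  { temperature: 1.5, top_p: 0.9 }, // very creative
];
```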


Explainer

Generate explanations at different audience levels.

| Attribute | Value |
| --- | --- |
| Min models | 1 |
| Flow | Expert level → Intermediate → Beginner → ... |
| Output | All explanations with audience labels |

Configuration:

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| audienceLevels | string[] | ["expert", "intermediate", "beginner"] | Target audiences |

Available levels: expert, intermediate, beginner, child, non-technical

Flow:

  1. First instance explains at first audience level (e.g., expert)
  2. Subsequent instances adapt/simplify for remaining levels
  3. Instances cycle through levels if fewer instances than levels

Use cases: Multi-audience explanations, progressive disclosure, accessibility.
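The cycling in step 3 can be sketched as a level-to-instance mapping: each audience level is handled by an instance chosen round-robin, so a level list longer than the instance list wraps around. The function name is illustrative.

```typescript
// Sketch: assign each audience level to an instance, cycling through
// instances when there are fewer instances than levels.
function assignLevels(
  instanceIds: string[],
  levels: string[],
): { level: string; instance: string }[] {
  return levels.map((level, i) => ({
    level,
    instance: instanceIds[i % instanceIds.length],
  }));
}
```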


Confidence-Weighted

Models provide confidence scores, synthesizer weights responses accordingly.

| Attribute | Value |
| --- | --- |
| Min models | 2 |
| Flow | Responses with confidence → Weighted synthesis |
| Output | Synthesized response weighted by confidence |

Configuration:

| Option | Type | Default | Description |
| --- | --- | --- | --- |
| synthesizerInstanceId | string | First instance | Synthesizer model |
| confidencePrompt | string | Built-in | Confidence response template |
| confidenceThreshold | number | 0 | Minimum confidence to include |

Flow:

  1. All non-synthesizer models respond with self-assessed confidence (0-1)
  2. Responses include CONFIDENCE: [score] marker
  3. Synthesizer combines responses, weighting by confidence scores

Use cases: Uncertainty-aware synthesis, reliable information prioritization.
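Steps 2 and 3 imply parsing the `CONFIDENCE: [score]` marker and dropping low-confidence responses before synthesis. A minimal sketch (the exact marker format and the treatment of a missing marker are assumptions):

```typescript
// Sketch: extract the self-assessed confidence from a response.
// Responses without a parseable marker are treated as confidence 0.
function parseConfidence(response: string): number {
  const m = response.match(/CONFIDENCE:\s*\[?([01](?:\.\d+)?)\]?/);
  return m ? parseFloat(m[1]) : 0;
}

// Sketch: keep only responses at or above confidenceThreshold.
function filterByConfidence(
  responses: string[],
  threshold: number,
): { text: string; confidence: number }[] {
  return responses
    .map((text) => ({ text, confidence: parseConfidence(text) }))
    .filter((r) => r.confidence >= threshold);
}
```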


Mode Selection Guide

| Scenario | Recommended Mode |
| --- | --- |
| Quick comparison | Multiple |
| Iterative improvement | Refined, Chained |
| Best answer from many | Synthesized, Elected |
| Explore tradeoffs | Debated |
| Multi-stakeholder view | Council |
| Complex task | Hierarchical |
| Find optimal settings | Scattershot |
| Different audiences | Explainer |
| Cost-efficient routing | Routed |
| Quality assurance | Critiqued |
