TL;DR: We used to burn entire afternoons debating product decisions. Now we spend 20 minutes aligning on context, then let everyone explore solutions individually (both with and without AI). The result? Decisions in hours instead of weeks, engineers who understand product strategy and product managers who grasp technical constraints. Most importantly: we ship faster and we get to increase AI literacy in the company!
Product decisions can consume meeting after meeting: different disciplines defending their perspectives, talking past each other, and eventually reaching compromise solutions that satisfy no one. Engineers focus on technical constraints, designers on user experience, product managers on business impact - all valid concerns, but hard to reconcile in real-time group discussion.
We've replaced this with a simple pattern: brief context alignment meetings followed by individual analysis and research. Teams independently converge on similar solutions through completely different reasoning paths. The unexpected outcome: this hasn't just accelerated our decision-making, it has also made us better at working together and increased our product velocity.
Consensus Convergence describes the shift from group debates to structured async decision-making, anchored by shared context and individual exploration. Instead of trying to reach consensus through discussion, we align on the problem and constraints, then pursue independent research and AI conversations before converging on compatible solutions.
The counterintuitive insight: teams agree more easily when they explore solutions separately (with AI) than when they debate face-to-face. The shared context ensures everyone's solving the same problem, while individual AI conversations allow each discipline to understand the decision through their own analytical lens.
This was the decision that started our entire approach. We were stuck in a meeting loop — 3+ meetings, 6+ hours total, circular discussions about which of our 12 crypto index methodologies to maintain vs retire. Engineers arguing about infrastructure costs, product pushing for client flexibility, UX citing industry standards.
Looking for a solution, we tried something different: a 25-minute alignment meeting where we focused on crafting an unbiased prompt. This was critical — with prompt engineering, you can make AI say whatever you want it to. The most important part wasn't agreeing on the context, but ensuring our prompt wouldn't lead the AI toward any predetermined conclusion.
Given these 5 specific methodologies (we shared the actual methodology documents), our current implementation timeline constraints, thousands of dollars in support costs, and our user segment analysis showing who actually cares about methodological differences — the core question became: "What can we do to increase our velocity and lower our cost without impacting users?" Crucially, we spent time ensuring this question didn't bias toward consolidation or any specific solution.
Individual AI Conversations (2 hours async on Slack):
Result: Saved thousands of dollars a month, freed engineering capacity, decision made in 2.5 hours total instead of weeks of meetings. Most importantly, this success convinced us to apply the same pattern to future decisions.
Six months and many successful decisions later, this process has become invaluable. Just last week, we needed to decide on the optimal UI pattern for our React Native CoinDesk mobile app. The process that once took us weeks now flows naturally.
The Old Way: 2+ meetings debating endless scroll vs pagination. Mobile team pushing for modern UX patterns, backend arguing for query efficiency, product asking for user behavior data that didn't exist yet.
The New Way: a 20-minute context alignment session that produced the following shared prompt for market cap rankings of 5,000+ assets:
We are shipping a React-Native CoinDesk mobile app targeting both crypto natives and traditional traders/investors migrating from equities, forex, and commodities. The app lists 50-4,000 crypto assets ranked by live market-cap—similar to how Bloomberg Terminal, TradingView, or Interactive Brokers display large asset universes. Prices and 24h % changes tick every second, but vertical rank order updates only on explicit user refresh (mimicking traditional market data terminals where rank stability is crucial for professional decision-making).
UI patterns to evaluate
Infinite scroll – loads next batch as user nears end (true lazy-loading)
Numbered pagination with in-page virtual scroll (~50 rows per page) – user flips with "Next/Page X"; keeps exactly one page in memory, rows re-sort only on pull-to-refresh; adjacent pages pre-fetched for navigation speed (closer to traditional terminal/desktop trading platform UX).
Analyse from a user-experience angle, considering both crypto-native users and traditional traders accustomed to TradingView, Bloomberg Terminal, Interactive Brokers, TD Ameritrade, or E*TRADE interfaces. For each pattern discuss:
Visual stability – flicker, scroll-position drift, row jumping (critical for traders used to stable terminal interfaces)
Interaction reliability – tap accuracy, selection integrity, back/deep-link behavior, accessibility
Resource impact – battery, CPU/GPU, memory, bandwidth (especially important for day traders on mobile)
Cognitive load – staying oriented at rank #N, share/bookmark/return paths, mental model alignment with traditional trading platforms
Professional workflow integration – screenshot sharing, rank referencing in communications, watchlist building patterns familiar from traditional platforms
Deliverables
SWOT table for each pattern.
Edge-cases/failure modes bullet list.
Competitive review – compare against leading crypto apps (Binance, Coinbase Pro, CoinMarketCap, Kraken Pro) AND traditional trading platforms (TradingView mobile, Interactive Brokers mobile, Bloomberg Mobile, Schwab mobile) to identify UX patterns that successfully bridge both worlds.
Reasoned recommendation – prioritize the pattern that best serves both audiences: crypto natives expecting modern mobile UX and traditional traders expecting terminal-like reliability and reference stability. Focus on which approach will most effectively convert traditional traders to crypto while maintaining crypto-native user satisfaction.
Please keep the focus on UX consequences, not implementation details.
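Although our prompt deliberately steered the AIs away from implementation details, a minimal sketch helps make the second pattern concrete for readers. This is an illustrative TypeScript/React Native sketch, not our production code: the endpoint URL, the `Asset` shape, and the `fetchPage` helper are assumptions. It shows the behaviour the prompt describes: exactly one page of ~50 rows in memory, rank order frozen until pull-to-refresh, and the adjacent page prefetched for fast "Next/Page X" flips.

```tsx
// Illustrative sketch of pattern 2: numbered pagination with in-page
// virtual scroll. Endpoint, types, and page size are assumptions.
import React, { useCallback, useEffect, useState } from 'react';
import { Button, FlatList, RefreshControl, Text, View } from 'react-native';

type Asset = {
  id: string;
  rank: number;
  symbol: string;
  price: number;
  change24h: number;
};

const PAGE_SIZE = 50; // one "terminal page" kept in memory at a time

// Hypothetical fetcher: returns one page of assets ranked by market cap.
async function fetchPage(page: number): Promise<Asset[]> {
  const res = await fetch(
    `https://api.example.com/assets?page=${page}&limit=${PAGE_SIZE}`,
  );
  return res.json();
}

export function RankedAssetList() {
  const [page, setPage] = useState(1);
  const [rows, setRows] = useState<Asset[]>([]);
  const [refreshing, setRefreshing] = useState(false);

  // Loading a page replaces the in-memory rows wholesale; rank order is
  // frozen until the user flips pages or explicitly pulls to refresh.
  const loadPage = useCallback(async (p: number) => {
    setRows(await fetchPage(p));
    void fetchPage(p + 1).catch(() => {}); // warm the cache for the next page
  }, []);

  useEffect(() => {
    void loadPage(page);
  }, [page, loadPage]);

  // Pull-to-refresh is the only in-page gesture that re-sorts ranks.
  const onRefresh = useCallback(async () => {
    setRefreshing(true);
    await loadPage(page);
    setRefreshing(false);
  }, [page, loadPage]);

  return (
    <View style={{ flex: 1 }}>
      <FlatList
        data={rows}
        keyExtractor={(a) => a.id}
        refreshControl={
          <RefreshControl refreshing={refreshing} onRefresh={onRefresh} />
        }
        renderItem={({ item }) => (
          <Text>
            {`#${item.rank} ${item.symbol} $${item.price.toFixed(2)} (${item.change24h.toFixed(2)}%)`}
          </Text>
        )}
      />
      <Button title={`Next / Page ${page + 1}`} onPress={() => setPage(page + 1)} />
    </View>
  );
}
```

Keeping one page in memory and re-sorting only on explicit refresh is what gives traditional traders the rank stability they expect, while the prefetch keeps page flips feeling as fast as infinite scroll.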
Individual AI Explorations (90 minutes async):
Result: Still pending, as this meeting took place only a few days ago. (We'll share updates in our next blog!)
Beyond faster decisions, this approach transformed how we work together:
When the team starts from the same rigorously neutral prompt, each AI explores the problem space through its own “super-power” lens—systems analysis, scenario design, performance pragmatism, or rapid fact-gathering. Those lenses illuminate different facets of the same constraints, so recommendations naturally cluster around the optimal zone instead of diverging. The result: multiple independent rationales that cross-validate one another and give the team high confidence to “agree and commit.”
This convergence isn't coincidental; it suggests that when context is properly defined and unbiased, different analytical approaches naturally identify similar optimal solutions within the established constraints.
The transformation in our development speed has been dramatic, but it didn't happen overnight. Six months ago, identifying a product problem meant scheduling a discovery meeting for the following week. By the time we started building, the original problem had sometimes evolved or been deprioritized entirely.
Recently, when we decided to add real-time price streaming to our asset rankings, we had context alignment done and three AI-explored solutions posted on Slack within 4 hours. The engineer understood the UX implications of different scrolling patterns with live data updates, the UX designer grasped the database query implications of streaming data, and the product manager researched how our competitors handled similar features.
Crucially, the competitive analysis revealed that most crypto apps suffered from crashes and reliability issues around real-time features, which reinforced our core USP: stability, accuracy, and timeliness. Since we were already touching the toplist infrastructure for streaming, we decided to improve the UX simultaneously while keeping things simple and efficient. We started implementation the next morning with complete confidence in our approach and a clear strategy to differentiate on reliability.
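For readers curious what "prices tick, ranks stay put" looks like in code, here is a hedged TypeScript sketch of the streaming behaviour we converged on. The WebSocket URL and tick format are hypothetical placeholders; the design point is that each tick patches price fields in place and never touches `rank`, so the vertical order only changes on an explicit user refresh.

```ts
// Sketch of live price updates that preserve rank stability.
// The stream URL and tick format are illustrative assumptions.
type PriceTick = { id: string; price: number; change24h: number };

type Row = {
  id: string;
  rank: number;
  symbol: string;
  price: number;
  change24h: number;
};

export function attachPriceStream(
  rows: Map<string, Row>,           // current page, keyed by asset id
  onRowUpdate: (row: Row) => void,  // e.g. re-render a single list row
): () => void {
  const ws = new WebSocket('wss://stream.example.com/prices');

  ws.onmessage = (event) => {
    const tick: PriceTick = JSON.parse(String(event.data));
    const row = rows.get(tick.id);
    if (!row) return; // drop ticks for assets outside the current page

    // Patch price fields in place; rank is deliberately untouched, so
    // the vertical order stays stable until the user refreshes.
    row.price = tick.price;
    row.change24h = tick.change24h;
    onRowUpdate(row);
  };

  return () => ws.close(); // detach on unmount or page flip
}
```

Closing the socket on unmount or page flip keeps battery and bandwidth costs bounded, one of the resource concerns our UX prompt had already flagged.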
This velocity change cascades through everything we build. For our upcoming Q2 roadmap, instead of the usual three full-day planning sessions that still leave everyone uncertain about technical feasibility, we're planning to apply the same context alignment approach. Each feature will be pre-explored by relevant team members with AI, so we'll arrive at planning with solutions rather than just problems.
We expect engineering estimates to become more accurate because product requirements will be clearer from the start. UX designs should require fewer iterations because they'll be grounded in technical constraints upfront.
The psychological shift has been just as important as the time savings. There's genuine excitement about alignment sessions because everyone knows we'll walk away with clarity and direction. Engineers report higher job satisfaction from spending 70% of their time coding instead of 40-50%. Product managers can focus on strategy and user research rather than facilitating endless requirement debates. The UX team has more time for user testing and design iteration because they're not constantly reworking solutions based on last-minute constraint discoveries.
Most importantly, this approach scales with team growth. When we hired two new engineers last week, they could understand our past decisions by reading the AI conversation threads rather than requiring lengthy knowledge transfer sessions. When we expanded into options trading exchange metadata features, the same context-alignment pattern worked perfectly for a domain none of us had deep expertise in. The AI conversations have become both our institutional memory and our onboarding documentation.
The process is straightforward but requires discipline:
Context Template:
Context: [Problem background, current state, and technical environment]
Options to Evaluate: [Specific alternatives being considered with detailed descriptions]
Analysis Framework: [What perspective to take, what factors to consider]
Key Evaluation Areas: [Specific dimensions for assessment]
Research Required: [External benchmarking, competitive analysis, industry standards needed]
Deliverables: [Expected outputs including analysis format, research components, and final recommendation structure]
Personal Tips:
Most product debates aren't clashes over fundamental priorities; they're arguments about how to get to an already-shared goal. Consensus Convergence turns those skirmishes into rapid alignment: a 20-minute session codifies the problem, success criteria, and evaluation framework in a neutral prompt; team members then explore solutions independently with AI and naturally gravitate toward overlapping recommendations. Because the constraints are co-authored, ownership is real and execution starts with genuine commitment instead of half-hearted compliance.
Where this approach can stumble:
Consensus Convergence isn't about replacing human judgment with AI recommendations — it's about using AI to enable better collaboration within shared constraints. We've discovered that consensus emerges more naturally when people can explore solutions individually with AI partners, rather than defending positions in group settings.
This approach fundamentally shifts how we use our expertise within the team. Instead of burning expert colleagues' time during the learning and exploration phase, we use AI as individual research assistants to rapidly understand domains outside our core competencies. The engineer uses Claude to grasp UX implications, the designer uses GPT-4 to understand technical constraints, the product manager uses Gemini to research competitive landscapes. Only after this individual knowledge-building phase do we engage each other's expertise — but now for validation, refinement, and edge case identification rather than basic education.
The AI isn't running our company or making our decisions. It's accelerating each team member's ability to understand the full problem space before we collaborate. When the engineer posts their AI conversation about UX patterns, they're not saying "Claude decided this" — they're saying "I used Claude to research industry standards and user behavior patterns, here's what I learned, and here's my informed recommendation." The difference is crucial: expert human judgment backed by AI-accelerated research, not AI judgment validated by human approval.
The most valuable outcome isn't just faster decisions; it's the improved team dynamics and higher product velocity. Engineers understand product strategy better, product managers grasp technical constraints, and everyone spends more time building instead of debating. We've moved from "let's schedule a meeting to discuss this" to "let's align on context and explore with AI," and the decisions that once took weeks now take hours. That's the kind of convergence that scales teams and products.
Consensus Convergence (n.) — The phenomenon where teams replace lengthy product debates by agreeing on shared context and prompts in a single meeting, then pursuing individual AI conversations that independently arrive at similar conclusions — leading to faster decisions, better cross-team understanding, and higher product velocity.