Problem Statement
5PM Agency's existing matchmaking engine required significant modernisation to meet growing performance demands. The platform, which helps professionals discover and prioritise valuable business relationships, suffered from high query latency, limited scalability, and opaque match results. Its rigid, rule-based workflows could not adapt to diverse query intents, could not select retrieval strategies dynamically based on context, and offered no transparent reasoning for why two profiles matched.
Proposed Solution & Architecture
We designed and implemented a multi-agent AI matchmaking platform on a serverless, cloud-native architecture using AWS managed services, with Amazon Bedrock Agents at its core. The solution employs a supervisor-to-specialist agent orchestration pattern where autonomous AI agents collaborate to process queries through multi-step reasoning without human intervention.
Agent Orchestration Layer
A Supervisor Agent powered by Amazon Bedrock autonomously classifies incoming query intent, determines the optimal processing strategy, and routes requests to specialised agents. AWS Step Functions provides the orchestration backbone, coordinating agent interactions while maintaining session context and enforcing execution policies.
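For illustration, the routing step can be sketched as below. The intent labels and agent names are simplified stand-ins, and the keyword heuristic is a placeholder for the Bedrock foundation-model classification the Supervisor Agent actually performs:

```python
# Illustrative specialist registry; these names are simplified stand-ins,
# not the production agent identifiers.
SPECIALISTS = {
    "profile_search": "QueryIntelligenceAgent",
    "similarity_match": "SearchOrchestrationAgent",
    "explain_match": "MatchScoringExplainabilityAgent",
}

def classify_intent(query: str) -> str:
    """Stand-in for the Bedrock model call that classifies query intent.
    In production this is an LLM prompt, not keyword matching."""
    q = query.lower()
    if "why" in q or "explain" in q:
        return "explain_match"
    if "similar to" in q:
        return "similarity_match"
    return "profile_search"

def route(query: str) -> str:
    """Return the specialist agent that should handle the query."""
    return SPECIALISTS[classify_intent(query)]
```

In the deployed system, AWS Step Functions invokes the chosen specialist and carries the session context forward between steps.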
Query Intelligence Agent
Autonomously parses natural-language queries, performs chain-of-thought reasoning to extract structured attributes (industry, skills, geography, intent signals), and selects the optimal embedding strategy based on query complexity.
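The target output of this step can be sketched as a structured schema. The lookup tables and parsing logic below are illustrative placeholders for the chain-of-thought LLM extraction; only the shape of the result reflects the design:

```python
import re
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class QueryAttributes:
    """Structured attributes extracted from a natural-language query."""
    industry: Optional[str] = None
    skills: list = field(default_factory=list)
    geography: Optional[str] = None
    intent: str = "discover"

# Illustrative vocabularies; production extraction is model-driven.
KNOWN_INDUSTRIES = {"fintech", "healthcare", "logistics"}
KNOWN_SKILLS = {"python", "sales", "marketing"}
KNOWN_CITIES = {"london", "berlin", "singapore"}

def extract_attributes(query: str) -> QueryAttributes:
    """Stand-in for the chain-of-thought extraction prompt: simple set
    lookups that illustrate the target schema, not the real parser."""
    tokens = set(re.findall(r"[a-z]+", query.lower()))
    attrs = QueryAttributes()
    attrs.industry = next(iter(tokens & KNOWN_INDUSTRIES), None)
    attrs.skills = sorted(tokens & KNOWN_SKILLS)
    attrs.geography = next(iter(tokens & KNOWN_CITIES), None)
    if "hire" in tokens or "hiring" in tokens:
        attrs.intent = "recruit"
    return attrs
```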
Search Orchestration Agent
Autonomously determines the retrieval strategy by evaluating query characteristics, selecting between vector similarity search, metadata-filtered retrieval, or hybrid approaches via Amazon OpenSearch Service.
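A minimal sketch of the strategy decision and the resulting OpenSearch request bodies, under assumed decision rules (the production policy is agent-driven) and an assumed k-NN field named `embedding`:

```python
def select_strategy(filters: dict, text: str) -> str:
    """Assumed decision rules, not the production policy: structured
    filters alone -> metadata search; free text alone -> vector k-NN;
    both -> hybrid."""
    if filters and text:
        return "hybrid"
    if filters:
        return "metadata"
    return "vector"

def build_opensearch_body(strategy: str, embedding: list,
                          filters: dict, k: int = 10) -> dict:
    """Sketch of the OpenSearch request body for each strategy."""
    knn = {"knn": {"embedding": {"vector": embedding, "k": k}}}
    term_filters = [{"term": {f: v}} for f, v in filters.items()]
    if strategy == "vector":
        return {"size": k, "query": knn}
    if strategy == "metadata":
        return {"size": k, "query": {"bool": {"filter": term_filters}}}
    # Hybrid: k-NN scoring constrained by metadata filters.
    return {"size": k,
            "query": {"bool": {"filter": term_filters, "must": [knn]}}}
```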
Match Scoring & Explainability Agent
Applies a weighted multi-factor scoring model combining semantic similarity, intent alignment, and engagement likelihood — generating human-readable "why it matched" explanations.
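The weighted combination can be sketched as follows; the weights and explanation strings are illustrative assumptions, not the tuned production values:

```python
# Assumed factor weights; the production model's weights are tuned.
WEIGHTS = {"semantic": 0.5, "intent": 0.3, "engagement": 0.2}

def score_match(factors: dict) -> tuple:
    """Combine per-factor scores (each in [0, 1]) into a weighted total
    plus a human-readable 'why it matched' explanation drawn from the
    highest-contributing factor."""
    total = sum(WEIGHTS[name] * factors[name] for name in WEIGHTS)
    top = max(WEIGHTS, key=lambda name: WEIGHTS[name] * factors[name])
    reasons = {
        "semantic": "strong profile similarity to your query",
        "intent": "goals closely aligned with your stated intent",
        "engagement": "high likelihood of a response",
    }
    return round(total, 3), f"Matched mainly due to {reasons[top]}."
```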
Personalisation & Feedback Agent
Monitors user interaction signals stored in Amazon DynamoDB and autonomously triggers adaptive re-ranking through persona clustering via DynamoDB Streams.
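The stream consumer can be sketched as a Lambda-style handler that folds interaction signals into per-profile re-ranking boosts. Signal weights and attribute names (`profileId`, `signal`) are illustrative assumptions; the record envelope follows the standard DynamoDB Streams event shape:

```python
# Assumed signal weights for adaptive re-ranking boosts.
SIGNAL_BOOST = {"view": 0.1, "click": 0.3, "message": 0.6}

def handler(event: dict, context=None) -> dict:
    """Sketch of the DynamoDB Streams consumer: accumulate engagement
    signals from INSERT records into per-profile boost scores."""
    boosts = {}
    for record in event.get("Records", []):
        if record.get("eventName") != "INSERT":
            continue
        image = record["dynamodb"]["NewImage"]
        profile = image["profileId"]["S"]
        signal = image["signal"]["S"]
        boosts[profile] = boosts.get(profile, 0.0) + SIGNAL_BOOST.get(signal, 0.0)
    return boosts
```

In production, persona clustering runs downstream of these boosts; the handler above shows only the signal-aggregation step.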
Knowledge Base & RAG
Amazon Bedrock Knowledge Bases with Amazon OpenSearch Service and Amazon Titan Embeddings provide retrieval-augmented generation, grounding responses in verified professional profile data stored in Amazon S3.
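For illustration, the grounded-retrieval call can be expressed as a request payload for boto3's `bedrock-agent-runtime` `retrieve_and_generate` operation. The knowledge base ID and model ARN below are placeholders:

```python
def build_rag_request(query: str, kb_id: str, model_arn: str) -> dict:
    """Keyword arguments for the retrieve_and_generate call; kb_id and
    model_arn are placeholders to be supplied from configuration."""
    return {
        "input": {"text": query},
        "retrieveAndGenerateConfiguration": {
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": kb_id,
                "modelArn": model_arn,
            },
        },
    }

# In production (not executed here):
# client = boto3.client("bedrock-agent-runtime")
# response = client.retrieve_and_generate(**build_rag_request(query, kb_id, model_arn))
```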
AWS Services & Technologies
Architecture Diagrams


What We Delivered
- Designed a supervisor-to-specialist multi-agent orchestration pattern using Amazon Bedrock Agents
- Built a Query Intelligence Agent for autonomous natural-language parsing with chain-of-thought reasoning
- Deployed a Search Orchestration Agent with dynamic strategy selection across vector, metadata, and hybrid retrieval via Amazon OpenSearch Service
- Created a Match Scoring & Explainability Agent generating human-readable "why it matched" explanations
- Implemented a Personalisation & Feedback Agent using Amazon DynamoDB Streams for adaptive re-ranking
- Configured Amazon Bedrock Knowledge Bases with RAG for grounded responses from verified profile data
- Applied Bedrock Guardrails across all agents for content safety, denied topic filtering, and PII detection
Outcomes & Success Metrics
- The matchmaking engine was rebuilt as an autonomous multi-agent system on AWS serverless and managed services, cutting infrastructure provisioning time by up to 70% and operational overhead by 35–50%.
- Query response latency reduced by approximately 60%, with average response times consistently below 300 ms under normal load and under 500 ms during peak traffic.
- Agent routing accuracy exceeds 95% across all test scenarios, with the Supervisor Agent reliably classifying query intent.
- RAG retrieval accuracy exceeds 92%, with agents consistently grounding match explanations in verified profile data.
Lessons Learned
- Designing the multi-agent orchestration pattern required careful consideration of agent autonomy boundaries — each specialist agent needed sufficient independence while the Supervisor Agent maintained overall workflow coherence.
- Integrating Amazon Bedrock foundation models for intent parsing required iterative prompt engineering with chain-of-thought reasoning techniques to accurately convert natural-language queries into structured attributes.
- Dynamic strategy selection with Amazon OpenSearch Service highlighted the importance of proper embedding dimensionality and index configuration for optimal millisecond-latency retrieval.
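As a concrete example of that last point, an OpenSearch k-NN index mapping sized for Amazon Titan Embeddings (1,536 dimensions for Titan Text Embeddings G1) might look like the sketch below; the HNSW method settings are illustrative starting points, not the tuned production values:

```python
# Sketch of an OpenSearch k-NN index body; method parameters are
# illustrative, and the filterable fields shown are examples.
INDEX_BODY = {
    "settings": {"index.knn": True},
    "mappings": {
        "properties": {
            "embedding": {
                "type": "knn_vector",
                "dimension": 1536,  # must match the Titan embedding size
                "method": {
                    "name": "hnsw",
                    "space_type": "cosinesimil",
                    "engine": "nmslib",
                },
            },
            "industry": {"type": "keyword"},
            "geography": {"type": "keyword"},
        }
    },
}
```

Mismatched dimensionality between the embedding model and the index is a silent source of both errors and degraded recall, which is why it surfaced as a lesson here.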