AGENT MADNESS

ShiftSignals

An agentic diagnostic engine that detects genuine cognitive skill shifts in learners, separating true capability from AI-assisted "false mastery."

ROUND 1 DEADLINE
VOTING CLOSES THURSDAY, MARCH 26
Builder
Carolyn Shepherd
Build Type
Agent
Lifecycle
Working prototype
Consensus Score
82.0
Region
REGION 4
Seed
13
Opponent
HW-Agent
CATEGORIES
Education · Data Analysis · Enterprise / Internal Tool
Go Deeper
ShiftSignals is an agentic diagnostic engine for Learning & Development teams. Rather than tracking course completion or sentiment scores, ShiftSignals determines whether training has actually changed how a person thinks, and whether that shift holds under pressure. It addresses a critical and growing problem the OECD calls the "mirage of false mastery": as employees use AI to produce impressive outputs, their underlying reasoning, judgement, and cognitive skills quietly atrophy through cognitive offloading. Traditional L&D metrics can't see this happening. ShiftSignals can.

The agent deploys transfer constraint prompts that require learners to demonstrate reasoning, navigate trade-offs, and shift perspective. Embedded confidence checks let ShiftSignals distinguish AI-polished outputs from genuinely shifted human judgement. The result is a live "transfer signal": a real-time diagnostic of whether a person's schema has actually moved.

Why it's cool / jaw-dropping:
- It solves a problem that's invisible to any other tool: the divergence between apparent performance and true capability in an AI-augmented workforce.
- It routes learners dynamically, offering scaffolded support for those still developing and stretch tasks or peer coaching for those ready to advance, driven by live response patterns rather than static course maps.
- It gives L&D leaders the gold-standard evidence they've never had: proof that learning changed thinking, not just behaviour.
- It reframes AI not as a threat to skill development, but as a detection surface: the very thing enabling false mastery also makes it detectable.
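To make the signal-and-routing idea concrete, here is a minimal sketch in Python of how a transfer signal might be scored and used to route a learner. Every name, weight, and threshold here (`LearnerResponse`, `transfer_signal`, `route`, the 0.4/0.75 cut-offs) is a hypothetical illustration, not ShiftSignals' actual model.

```python
from dataclasses import dataclass

@dataclass
class LearnerResponse:
    # Hypothetical features scored from a transfer-constraint prompt, each in 0..1
    reasoning_depth: float      # how much genuine reasoning the response shows
    trade_off_coverage: float   # fraction of the prompt's trade-offs addressed
    confidence_gap: float       # stated confidence minus demonstrated accuracy

def transfer_signal(r: LearnerResponse) -> float:
    """Combine diagnostic features into a single 0..1 transfer signal.

    Overconfidence (high stated confidence, low demonstrated accuracy) is
    treated as a marker of AI-polished output and penalised. Weights are
    illustrative assumptions only.
    """
    overconfidence_penalty = max(0.0, r.confidence_gap)
    score = (0.5 * r.reasoning_depth
             + 0.5 * r.trade_off_coverage
             - 0.3 * overconfidence_penalty)
    return max(0.0, min(1.0, score))

def route(signal: float) -> str:
    """Route a learner from the live signal rather than a static course map."""
    if signal < 0.4:
        return "scaffolded-support"
    if signal < 0.75:
        return "continue-track"
    return "stretch-task-or-peer-coaching"
```

A learner with deep, well-calibrated responses would be routed to stretch tasks or peer coaching, while a confident-sounding but shallow response would drop toward scaffolded support.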
Stack Used
GPT-5.2