Use Case #28: Quality Score Redesign
Rethinking how content quality is measured, scored, and displayed.
William Welsh
Author
The content quality score was supposed to help writers improve. Instead, they ignored it.
"What does 73% even mean?"
Fair question.
The Problem
The score was a black box. Multiple factors weighted somehow. No clear path to improvement. Writers either ignored it or gamed specific factors without improving actual quality.
The Research
Claude analyzed 6 months of content data: scores, human editor ratings, performance metrics. Found correlations. Or rather, found the lack of them.
The Findings
The existing score correlated weakly with actual quality. Human editors gave some 60-score articles A+ ratings. Some 95-score articles were mediocre.
The score was measuring the wrong things: keyword density was weighted too heavily, readability formulas punished complex topics, and length requirements encouraged padding.
The Redesign
New scoring system designed around what actually predicted quality:
Structure (30%) - Clear hierarchy, logical flow, proper sections. Measurable through heading analysis.
Completeness (30%) - Covers the topic adequately. Measured against topic models for that content type.
Readability (20%) - Appropriate for target audience. Not one-size-fits-all formula.
Engagement Signals (20%) - Based on actual reader behavior from historical data.
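The weights above come straight from the redesign; how they combine is a plain weighted sum. A minimal sketch (the per-dimension scoring functions themselves are assumed to exist elsewhere):

```python
# Weighted composite of the four dimensions described above.
# Weights are from the post; each dimension score is assumed to be 0-100.
WEIGHTS = {"structure": 0.30, "completeness": 0.30,
           "readability": 0.20, "engagement": 0.20}

def composite_score(dims: dict[str, float]) -> float:
    """Combine per-dimension scores (each 0-100) into one 0-100 composite."""
    assert set(dims) == set(WEIGHTS), "all four dimensions required"
    return sum(WEIGHTS[name] * score for name, score in dims.items())

example = {"structure": 80, "completeness": 60,
           "readability": 90, "engagement": 70}
print(composite_score(example))  # 0.3*80 + 0.3*60 + 0.2*90 + 0.2*70 = 74.0
```

Keeping the per-dimension scores around, rather than collapsing them immediately, is what makes the new UI possible.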
The UI
Old: Single percentage. "Your score: 73%"
New: Four-dimension breakdown. Clear explanations. Specific improvement suggestions. "Add a section on pricing to improve completeness."
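The breakdown-plus-suggestion pattern can be sketched simply: show each dimension, then attach a suggestion to the weakest one. The suggestion strings here are illustrative stand-ins; the real system generates content-specific advice.

```python
# Sketch of the four-dimension breakdown with a targeted suggestion.
# Suggestion text is hypothetical; real suggestions are content-specific.
SUGGESTIONS = {
    "structure": "Break long sections under clearer headings.",
    "completeness": "Add a section on pricing to improve completeness.",
    "readability": "Shorten long sentences for your target audience.",
    "engagement": "Open with the reader's problem, not background.",
}

def breakdown(dims: dict[str, float]) -> str:
    """Render per-dimension scores plus a suggestion for the weakest one."""
    lines = [f"{name.title()}: {score:.0f}/100" for name, score in dims.items()]
    weakest = min(dims, key=dims.get)
    lines.append(f"Suggestion: {SUGGESTIONS[weakest]}")
    return "\n".join(lines)

print(breakdown({"structure": 80, "completeness": 60,
                 "readability": 90, "engagement": 70}))
```

The difference from the old UI is that the weakest dimension points at a concrete action instead of leaving the writer to reverse-engineer a percentage.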
The Result
Writers started using the score. Content quality improved. Human editor ratings increased 15% over 2 months.
Quality score redesign for ContentEngine, December 2025.
William Welsh
Building AI-powered systems and sharing what I learn along the way. Founder at Tech Integration Labs.