Grounded answers about Signals.
Signals is designed to replace top-down judgment with structured, peer-led insight. These answers highlight how that intent shows up in practice.
What's the difference between Signals and stack ranking?
Stack ranking sorts people against each other to defend a hierarchy or force a distribution. Signals never compares individuals side by side. Instead, it builds a qualitative and quantitative mirror for each person — grounded in their team, their peers, and their own reflection.
Rather than collapsing contribution into a competitive score, Signals documents the evidence behind contribution so growth conversations stay contextual, fair, and accountable.
What typically goes wrong with stack ranking and how is it mitigated with Signals?
Both hierarchical and “flat” stack ranking share a deeper flaw: they tie ratings to compensation decisions and treat the numbers as final judgments. Whether the pressure comes from a manager or from peers, the incentives reward gaming and silence real feedback. Signals deliberately separates insight from pay. It surfaces structured evidence so the organization can understand contribution before any downstream decision is made.
Surveys run in clearly defined cycles, guided by shared prompts and scaling language. Every response must include context, and admins review comments for tone and substance. The result is a grounded set of signals — inputs for thoughtful conversation, not a scoreboard that decides someone’s livelihood. No single person or clique controls the outcome.
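The two structural rules above — ratings on the -5 to +5 scale and mandatory context on every response — can be sketched as a simple validator. This is a hedged illustration only: the field names (`rating`, `context`) and the `SignalResponse` shape are assumptions for the example, not Signals' actual schema.

```python
from dataclasses import dataclass

@dataclass
class SignalResponse:
    # Hypothetical fields; Signals' real data model may differ.
    rating: int    # -5..+5, where the center means "met expectations"
    context: str   # every response must include context

def validate(resp: SignalResponse) -> list[str]:
    """Return a list of problems; an empty list means the response is acceptable."""
    problems = []
    if not -5 <= resp.rating <= 5:
        problems.append("rating must be between -5 and +5")
    if not resp.context.strip():
        problems.append("context is required for every response")
    return problems

print(validate(SignalResponse(rating=3, context="Unblocked the release twice.")))  # []
print(validate(SignalResponse(rating=7, context="")))  # two problems reported
```

The point of the sketch is that a number is never accepted on its own: the evidence travels with the score.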
Why is the rating scale from -5 to +5?
The -5 to +5 scale is symmetric by design. It captures both the direction and the magnitude of how someone exceeded or fell short of expectations. This structure encourages emotional honesty and resists inflation — positive feedback has room to celebrate meaningful over-performance, while constructive signals can show where support is needed.
The center of the scale is labeled "met expectations" rather than presented as a bare zero, reinforcing that steady, reliable contribution is valued. Clear labels at every step keep the conversation grounded in evidence rather than anxiety about rankings.

Do people see individual -5 to +5 ratings?
Once a cycle is published, each metric shows four rating views: the overall aggregate, your own (self) rating, the aggregate from your team, and the aggregate from peers (non-team colleagues) who rated you. You never see a single teammate’s score — only how each relationship group experienced your impact.
This keeps the focus on patterns and evidence rather than guessing who left which number. It also surfaces when perspectives diverge (self vs. team vs. peers) so follow-up conversations stay rooted in the work instead of speculation.
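The grouping described above can be sketched as a small aggregation step. The group names (`self`, `team`, `peer`) and the input shape are assumptions made for the example — the product's real pipeline is not documented here — but the sketch shows the key property: only group averages are exposed, never an individual rater's number.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical input: (rater_group, rating) pairs for one metric.
ratings = [
    ("self", 1),
    ("team", 2), ("team", 0), ("team", 3),
    ("peer", -1), ("peer", 2),
]

def publish_views(ratings):
    """Aggregate ratings per relationship group, plus an overall average.

    Individual scores never leave this function; only group means do.
    """
    by_group = defaultdict(list)
    for group, rating in ratings:
        by_group[group].append(rating)
    views = {group: mean(values) for group, values in by_group.items()}
    views["overall"] = mean(rating for _, rating in ratings)
    return views

print(publish_views(ratings))
```

Divergence between views (e.g. a high self rating next to a low peer aggregate) is exactly the kind of pattern the published results are meant to surface for follow-up conversation.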
How much time should I expect to spend on the survey?
Plan for roughly 5–15 minutes per person you’re evaluating. Most of that time goes into writing useful examples and recognition, not clicking around the interface.
Signals exists because feedback and recognition are often left to chance. The structure keeps the process lightweight while making sure tensions and needs are actually addressed. Role-based recognition can look long, but it is meant to be used surgically — highlight the few strengths or focus areas that matter most right now, not every single accountability.
How are survey responses used?
Responses are inputs into an ongoing conversation about contribution, growth, and support. Ratings help teams see directional patterns, while comments capture the evidence behind those signals. They do not map 1:1 to compensation decisions or performance rankings.
Administrators and leaders use the insights to spot system-level needs — for example, uneven workloads, missing recognition, or coaching opportunities. Any downstream decisions must reference the underlying context, not just a number pulled from the survey.
Who can see my feedback?
You always see your own responses. Team members and peers view the results that relate to them once administrators publish the cycle. Admin reviewers can see submissions earlier to safeguard tone and clarity.
There is no public leaderboard. Access follows roles: participants see what they need in order to act, and admins steward the overall health of the process.
Are survey comments anonymous?
No. Signals makes authorship visible on purpose. Accountability encourages constructive, specific feedback and keeps dialogue open. While ratings aggregate, the stories stay attributed so people can follow up with questions or appreciation.
What happens after a survey closes?
Administrators review responses for tone, flag anything harmful, and resolve questions with the authors. Once the review is done, results can be published so individuals and teams can process the insights together.
Shared language from the survey — the scale, prompts, and recognition structure — stays available so follow-up conversations stay anchored in the same context.
How does role-based recognition work?
Each role lists accountabilities to make expectations visible. During a survey you can point to specific accountabilities where someone excels or needs focus. You are not expected to fill every line item — use it to spotlight the moments that best represent their contribution right now.
This keeps recognition grounded in actual work rather than vague praise and gives the recipient something actionable to build on.
What if I have concerns about the process?
Reach out to your Signals administrators if something feels off. They can pause a survey, remove unhelpful feedback, or adjust assignments. Signals treats every response as part of a living system — tensions feed improvements rather than being ignored.
If you are not sure who owns Signals in your organization, contact Joakim.