Rolling out a 360 feedback process across a large group of leaders can be a powerful way to accelerate development and cultural change, but the logistics can be overwhelming, particularly when it comes to managing potentially hundreds of evaluators at a time.
One of the most common questions even experienced HR leaders ask is, “How do we avoid overwhelming evaluators with lots of surveys?”
If you are planning to implement the Hogan 360 at scale, there are several practical considerations that can help keep the process manageable, fair, and meaningful for participants.
Quick Guide to Terms
- Subject: The leader who is participating in the 360 & asking colleagues to evaluate them
- Evaluators: The colleagues, typically manager, peers, and direct reports providing feedback
- Manager: The person(s) the Subject reports to
- Nomination Process: The steps followed to choose which evaluators are asked to provide feedback by completing the Hogan 360 Survey.
Clearly Communicate Evaluator Time Commitment
One of the most important starting points is to clearly communicate to evaluators the time required to provide feedback. Given the variety of 360s on the market, leaders may be accustomed to assessments that take an hour or more to provide feedback for each subject.
The Hogan 360 is designed to be both concise and meaningful, and typically takes around 15 to 20 minutes to complete per subject. Even for busy leaders, this translates into a manageable amount of time over a typical two week window.
For example, if a Manager is asked to provide feedback for eight people, this equates to approximately 60-80 minutes per week across the two-week window.
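As a quick sanity check, the per-week estimate above can be reproduced from the stated figures (15 to 20 minutes per survey, eight subjects, a two-week window); the numbers below are only the article's worked example, not fixed limits.

```python
# Rough evaluator workload estimate for a two-week survey window.
# Figures from the article: 15-20 minutes per Hogan 360 survey.
minutes_per_survey_low = 15
minutes_per_survey_high = 20
subjects = 8        # number of people this evaluator rates
window_weeks = 2    # length of the survey window

weekly_low = minutes_per_survey_low * subjects / window_weeks
weekly_high = minutes_per_survey_high * subjects / window_weeks

print(f"Approx. {weekly_low:.0f}-{weekly_high:.0f} minutes per week")
```

Running the same calculation with your own survey count and window length gives a realistic figure to include in evaluator communications.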
PBC Best Practice: Subjects should email their evaluators to explain why they are completing the process and ask them to provide feedback. Provide a template that also includes information on how long the survey will take, to help set expectations and increase the likelihood of evaluators providing feedback.
Provide Clear Guidance on Evaluator Nomination
The process for selecting evaluators is one of the most important factors in the quality and credibility of 360 feedback data. You can ensure the success of your program by giving Subjects clear guidance on who should or should not be included in the process based on roles and organisational structure.
- Direct Reports: All immediate direct reports should generally be included. However, reporting structures are not always straightforward, and in some organisations participants may incorrectly include indirect reports. Defining what your organisation considers a “direct report” can help reduce the amount of overlap across leaders.
- Peers: Peers are typically defined as those who sit within the same team and report to the same manager. However, that definition may not always suit the context. In high-volume projects, it may be more practical to focus on peers with whom the participant interacts most regularly, rather than applying a rigid structural definition.
- Manager(s): For Subjects who are eager to know how they are perceived by senior leadership, there is often a temptation to include their manager’s manager. While this input may be seen as valuable, it can become impractical in large-scale programmes. Unless senior leaders are highly committed to the process and willing to dedicate substantial time, this category is often best excluded. When not excluded, HR should review the nominations to ensure that no senior leader is asked to provide feedback for more than eight subjects.
- Additional Stakeholders: Where organisations choose to include additional stakeholders, it can be useful to set a recommended limit, such as no more than three. This group is often highly valuable for more senior or cross-functional roles, but without limits the category can quickly become too broad and difficult to manage.
PBC Best Practice: Subjects should nominate people they genuinely work closely with, even, and especially, when those relationships involve regular challenge or conflict. Staggering the process or managing volume should not create an incentive for people to cherry-pick only favourable raters.
Consider an HR Review Before Finalising Rater Lists
In larger programmes, it can be very helpful for HR to review evaluator lists before they are finalised. This additional step can reduce unnecessary overlap, identify gaps, and ensure greater consistency across the process.
HR may also be well placed to suggest more relevant alternatives where nominations appear misaligned or overly repetitive. In complex organisations, this can improve both efficiency and the overall quality of the feedback pool.
PBC Best Practice: The 360 Evaluator Upload Form makes it easy to see all nominations in one place before submitting your order.
Consider Using Cohorts to Improve Time Management
From a project management perspective, HR leaders can choose between running one large 360 project or splitting the rollout into multiple smaller waves.
Running multiple smaller projects can make it easier to manage staggered participation. This approach is often more practical when trying to balance leader workload and reduce survey fatigue.
However, there are trade-offs. Tracking timelines and completion rates across multiple project logins can be more complex, and if the organisation later wants to aggregate reporting across projects, additional costs may apply depending on the pricing structure.
There is no universal right answer here, but the choice should be made with both operational ease and reporting requirements in mind.
PBC Best Practice: Don’t go it alone. Work with your 360 Advisor or Consultant to determine the structure and support model that meets your organisation’s needs.
Communicate Clearly About Data Use and Access
In any 360 process, especially one with high visibility, trust is critical. Participants need to feel confident not only in the process itself, but also in how their data will be used and who will have access to it.
The project lead should communicate very clearly and explicitly about confidentiality, reporting, access permissions, and the intended use of results. Ambiguity in this area can quickly undermine confidence and reduce openness in responses.
If the organisation has experience running 360 processes, participants may already have some sense of what to expect. Even so, assumptions should never replace clear communication. Reconfirming expectations helps build confidence and psychological safety.
PBC Best Practice: Designate a leader to be accessible if Subjects or Evaluators have privacy concerns, and ensure that all involved in the process know who that is.
Use 360 Data in Offsites Carefully and Transparently
Incorporating 360 feedback into team or leadership offsites can be highly powerful and enjoyable when handled with care.
Used well, 360 data at the team or cohort level can support rich reflection, shared learning, and meaningful development conversations. It can add depth to offsite discussions and help leaders engage more honestly with their own impact and growth areas.
Data can be provided in a de-identified way that focuses on average trends across the group, rather than scores from any one individual. The Hogan 360 offers several group reporting options.
However, transparency is essential. Participants should know in advance if and how their data may be referenced in an offsite setting. Nobody should feel caught off guard, surprised, or misled. Giving people clear notice helps preserve trust and ensures the process remains psychologically safe.
PBC Best Practice: Ask offsite participants to come prepared to share their top 3 Strengths & Opportunities (just the item, not the score). Create breakout groups to support shared Development Areas or learn from complementary strengths.
Final Thoughts
A large-scale 360 rollout is a highly visible leadership intervention that can strengthen self-awareness, improve leadership effectiveness, and support development in a meaningful way for an influential group of leaders. Although the logistics can be intimidating, thoughtful process design can make large group 360s run just as smoothly as small projects.
Keeping the workload manageable, clarifying nomination guidelines, reviewing rater lists, choosing the right project structure, and communicating transparently all contribute to a better experience and stronger outcomes. When organisations also think carefully about how results are shared and used, they create the conditions for 360 feedback to be both credible and developmentally valuable.
*This article is authored by Kate Modic, Associate Director at PBC.