Among the many goals we wanted to accomplish with our new Performance Review module was making it easy to implement. To keep the lift low for our customers, we launched with a set of default questions.
The initial default questions were drawn from our own experience, interviews with HR leaders we know, and our best guess at what would be needed. They were meant to be a starting point that could be edited and would showcase the options available. A shortcut to value.
Lessons We Learned
Nothing beats customer feedback for showing you the gaps in your thinking or execution. We always assumed that we would revisit our defaults, but what we learned influenced more than just the questions.
It turns out that how you set up your assessment questions has a huge impact. As we get ready to run end-of-year cycles with customers, we sat down to document the lessons learned and the changes made. Video and transcript below.
A few of the main things:
- Question type or format influences responses significantly
- A big scale is overwhelming for many and causes angst
- Questions do double duty but need to have a primary purpose
- Differences between responses to the same question provide the most insight
Running performance reviews is a balancing act: asking enough questions to drive insight and valuable conversations, while keeping things simple enough to reduce time and general angst. The process is also weighed down by everyone's past experiences of being rated or ranked, be it grades in school or experiences at past companies.
Ultimately what we learned is that there is a lot of opportunity to guide and support managers in the setup. You are not designing solely to get a response. Taking the time to think through how the information will be used pays dividends in the fidelity of responses and the weight the organization gives them.
Also tell people how you are going to use the data. It will go a long way to reducing the stress around the process.
We went out to market with the first release, V1 of our performance reviews. We learned a whole bunch of stuff. We talked to a whole bunch of people, and now we're coming back to talk about the choices we're making and the updates we're making to the assessment questions. Also the reporting: how this impacts the reporting and why we're doing it that way. A lot of this is really to capture our own thinking and be able to share it, but also because we are verbal communicators, and this is an easy way for us to document what's going on and capture our thought process, because we've learned a ton through this.
Where we are is inside the setup screen for performance reviews, and what we're talking about today is the questions. Among the many things we're trying to do, reduce the friction of the performance review process, improve the fidelity of the reporting, and make it possible for you to run it more frequently, all things we think are important to a continuous feedback culture and a high-performing culture, is to provide default questions that will get you quality information for the review and for the conversation people are having, as an employee and as a manager. We also want to populate reports where you can actually look at the context of where all this performance is happening, do some talent strategy planning, and evaluate the whole system, to improve the quality of the conversations managers and individual contributors are having and make it easy for HR to do that. And to deliver reports that can drive real action in strategy.
So we went out with an initial set of questions as a kind of 'let's give it a shot,' and we are now coming back to revisit those because there were maybe some gaps. We were probably asking too many questions in some cases, the wrong questions for the wrong audience in some cases, and maybe not enough distinction between them. Is that fair? Well, I think our default questions in V1, or V0, were really just examples to illustrate how the tool works, not 'hey, these are the recommended ones you should go out to your employees with and ask,' right? Because I think we knew the people who were going to use version one were definitely going to change all of these things.
And they did, a little bit. But something unexpected happened throughout that process: we had a couple of people jump into that V1 and want to deploy something very quickly. So, yes, I think they took some of these defaults as recommendations instead of as illustrations or examples. Now that we've gone through a couple of those cycles, it's just time to look back and say, 'Hey, let's change the defaults so that we're more comfortable recommending them, now that people are treating them as recommendations.'
I would say even among the people that changed them, I can think of at least one case where they added a few of the defaults back in. Where they were migrating an existing process or set of inputs, they realized there was an opportunity there, and they picked a couple of ours up. So that was some good validation too, even if not a ton of thought originally went into them. The other input is that we started getting a ton more data through our reporting. What we're looking at is a talent review grid, and so now we also have the opportunity to look at the interplay between the questions that get asked and the reports that get created, and make sure that we are setting our customers up to answer the most important questions and to have the best data they can to make decisions.
Until we had a whole bunch of people do that, we maybe didn't even see the potential behind what we were building. I think that's fair. And on top of that, what were their needs coming into it? We had a couple of those. In some cases we're now looking at the reporting and working backward. Then there are other things our customers have approached the first couple sets of questions with, like: how are we facilitating the right conversations, the right feedback between people? Sometimes it's not just about what happens in a report, and sometimes it's not just about how the features work in an app. To them, a lot of it is how do we maintain a high frequency of the right type of feedback, whether it's positive or negative or constructive. They want to make sure those things are flowing.
So, I think there's a lot of different angles. Maybe the last thing we learned is that there's not really just one outcome of a performance review. Some companies really care about that feedback sharing happening. Some companies really care about a stack ranking of employees. Some care about this as an input to comp. Some care about this as a talent analysis they're going to keep up to date. Some almost treat it as a proxy for culture and how they're doing from an engagement standpoint. A lot of different things all being forced into one moment; I think that's why there are so many differing opinions. I agree the weight placed on this moment in time feels intense. And certainly, a lot of what we were doing is trying to reduce that intensity, whether it's in the ease of the configuration, the ease of filling it out, or the use of AI. In many, many ways, we're trying to smooth that out. So even though there's a lot going on and this information could be used a lot of different ways, let's not make it feel heavy, and let's make sure people can do it without being exhausted just in the execution, such that they then skip the really important conversations or never review the data. That was something we certainly heard in our initial market research: just getting to the end of running a review cycle almost left people spent, and that's maybe not even the most important part of it.
Okay, so default questions are one good way we can help customers get a head start. One quick note before I get into this: you'll see here I have two questions sitting in this peer question review that say, 'I see a manager.' That's because, again as part of our learning, we are adding a review type where individual contributors can review their managers. That'll be out really, really shortly. In the meantime, just to get us set up for this initial run at the new questions, I've put them here, just so you know.
Let’s go through the main questions that we have in here and why maybe they are the way they are, and then we can talk about maybe the reporting outputs from it. Anything you want to say before we dig into that? No, I think that’s it. Yeah, I think we’ll go through the three, four types and go from there.
The first column here, these are the questions that a manager is going to answer about their employee. Because we’re using this data to facilitate a conversation, we’re actually using it for a couple of conversations. One, we’re using it for the conversation between that employee and that manager. We’re also using it as a way to help facilitate a conversation for a management team or for a leadership team. So this first question we’ve got in here, the ‘job offer reaction’ is really meant to kind of document what is the impact of potentially losing this person. So how might you react if they had another job offer? In this, you can see we go into the editor; there’s lots of options. We’re going to look at all the other types as we go. This is a simple drop-down, and we’ve got a couple of things in here. We’re not asking you to have notes because really this is just us trying to document a point in time, what’s the impact to the business if this person were to consider leaving. It’s not included in the written feedback. This is not meant to be something to be talked about. And this is input that we want available for reports but isn’t necessarily part of the review cycle.
This combination of things, where we're feeding two sets of conversations, means we need to look at the ability to limit access to the answers. Everything else works the same; it's a drop-down. I think many people may have seen this question before. But this really is asking the manager to reflect on: what is my risk here? I think that's something we want to make sure we're thinking about when we do these reviews. The reviews are partly an employee experience touchpoint for the person you're working with, and you certainly want to treat them that way. But they're also a chance to look at how we are staffed toward our outcomes. Do we have confidence we've got the right people in the right seats? Some of that conversation has more to do with the business or the system you're managing than with that one person. So in some ways, you do have to think about the two types of questions and the two types of conversations that come out of this review.
The job offer reaction question is a good example of how it allows you to ask an all-encompassing question in a very different light than you normally would. As a leader, if you ever say, 'Should we get rid of this person?' most people are going to say no, right? Most people are not going to want to let someone go. But the highest-performing leaders are so hard on talent; they're always recruiting, always thinking that
there’s a significantly better person somewhere out there. This question gives managers a way to express that, and it allows them to be way more objective about the other questions related to job requirements, performance, and so on. It gives them an out in that dilemma of rating an employee who may not be meeting job requirements but is valuable in other ways.
Moving on to the ‘potential’ question, this is largely about a reporting outcome or a talent assessment or planning exercise rather than part of the conversation you’re having with the employee. It’s an opportunity to assess if you’re building a team for the future, understanding what the opportunity looks like a year or two from now. It’s a way to draft your entire management layer into the exercise of developing future leadership. This question gives managers a way to express how positively they feel about an employee’s potential while still being honest about their current performance.
The ‘job requirements’ question is about assessing whether the employee is meeting the job requirements. It’s a way for managers to express how positively they feel about the person but still be objective about their performance. It’s a good way to force the acknowledgment of unspoken assumptions and can be a basis for a training and development plan.
Maybe job requirements weren't met, but impact or contribution to the business was still massive. Or maybe they are absolutely doing exactly what's asked of them, but it's a limited impact or contribution to the business. They don't always pair one to one. This is an opportunity to account for those places where the context of the situation didn't allow them to do their job as described, but they were still making an impact. Or to identify when people are making a huge impact, but maybe not in the job they're sitting in. So, I like the bifurcation of these two. Again, we're giving people avenues to express where performance or impact to the business is independent of the job description. Having them be different calls out different types of things that you value. You may value some things they do that aren't directly in their job description, which is super common in a startup, for sure.
Yeah, there's definitely a difference sometimes between getting your tasks done and exceeding your goals. Yeah, for sure. We did use to have goals in here; that's a good change to point out. In our first round, the two questions of job requirements and performance were almost a little too close to each other in definition. There also wasn't a lot of help text, maybe is what I would call it, to coach people toward a good decision. We used to have goals be a major part of performance, and what I learned in the first round is that goals are loaded language. Also, not everybody you might be asking about this will know exactly what their goals are. We also have goals as a whole section within the performance review, where the goals you've been tracking are literally detailed. So, not differentiated enough; that's why we removed goals as a main identifier here. In most cases this question would correlate with your goals, but language matters, and goals, I think, are best measured when you're seeing progress week over week. Looking back, you're going to get a huge recency bias when evaluating whether somebody met their goals, because we're just not that great at remembering the detail over the past six months. So let the goals you're tracking week over week take care of that. Again, there are lots of reasons why goals may or may not be met, and performance is a separate opportunity to comment on how somebody's contributing.
The last two. So, this engagement one, I made a case for actually removing it from the manager line because I thought self-reported engagement is probably the only thing that really matters. But enough people challenged me on the idea that this is actually another opportunity to get a sense of the disparity between what a manager is observing and what an employee is experiencing. So we kept it there. Maybe in this case, Brandon, we've got those three different types of questions; this one is a slider. In our V1 rollout, I think we had every question as a slider, and as you saw, I just had a whole bunch that were levels. Why did we do them as sliders, and why have we maybe changed our tune on that? I think the sliders initially started with the intent of asking how much this person deviates from the mean. So if you think about your team or your company, what's engagement at your company? Is this person more engaged or less engaged than that? Knowing that most people are going to sit at the average, is this person averaging higher or lower? Because of that, this question type starts right in the middle. You can drag it in either direction; usually right is more positive, and left is more negative. What we found was that managers and employees didn't necessarily treat it that way. No one ever wants to drag things more negative; I think that's a harder mental barrier to get over. So they really scored people from 50% to 100%: bad would be 51%, really good would be 100%. And that skewed everything up a little more positive. There are still definitely benefits to this question type. It's sort of fun, it's easy, it's a little off the cuff. But in some ways, I think it can also stress people out.
What we liked about the levels as we worked on them and rolled them out is you get what feels like the specificity of a drop-down, in that you're choosing a one out of five, two out of five, three out of five, whatever. But you also get a little granularity within a level: yeah, this person's a three out of five, but they're almost a two, and this person is a three out of five, but they're almost a four. Sometimes it's nice to differentiate between those two people.
On your team, yeah. The scale felt almost too big for people. So on a question of performance or job requirements, there was a lot of hand-wringing around exactly where on the scale to land, because it just felt like too big of a scale and it was putting too much load on the people answering it. So, I think we definitely learned that the type of question you ask influences people's ability to answer it quickly.
We had thought initially that the slider would make it simpler; it looks so easy, you just slide it around. We were, in fact, wrong. The levels have been adopted more readily by folks, and you can kind of see why. It feels like a progression where you can actually move someone up, and it's a little easier to get a more distributed set of responses. So, yeah, that's why we ended up changing that.
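To make the difference concrete, here is a minimal sketch of the two formats discussed above. This is purely illustrative: the class names, the 0-to-100 scoring, and the "position within a level" idea are assumptions for demonstration, not the product's actual data model.

```python
from dataclasses import dataclass

@dataclass
class SliderQuestion:
    """Starts at the midpoint; respondent drags left (negative) or right (positive)."""
    prompt: str

    def score(self, position: float) -> float:
        # position in [0.0, 1.0]; in practice answers clustered in [0.5, 1.0],
        # which is the upward skew described above
        return position * 100

@dataclass
class LevelsQuestion:
    """A fixed set of labeled levels, each with a little room inside it."""
    prompt: str
    levels: list  # e.g. ["Below expectations", ..., "Far exceeds"]

    def score(self, level: int, within: float = 0.5) -> float:
        # level is 1-based; `within` places the answer inside the level's band,
        # so "a three out of five, but almost a four" is score(3, within=0.9)
        band = 100 / len(self.levels)
        return (level - 1) * band + within * band

q = LevelsQuestion("Meets job requirements?", ["1", "2", "3", "4", "5"])
almost_a_four = q.score(3, within=0.9)  # still a three, but near the top of its band
solid_three = q.score(3, within=0.5)    # a mid-band three scores lower
```

The point of the sketch is that levels keep the choice coarse (pick one of five) while preserving sub-level granularity for reporting, which is the combination described above.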
I've kept it on the engagement question, largely because that's not something that steps up through a series of improvements. To me, it's a lighter way to ask. Again, as Brandon said, a little fun. But as we've shown you, you can edit all of them; it's up to you. We're recommending this also for the contrast: it just feels different than the other questions, and it is different than the other questions. So that's why I've left it that way.
The last one is display of values. Same thing; I put it in this format because there isn't really a one-to-five for it. Values have a more subjective feel. Although we do have customers with lots of detailed text, so they're using levels to get way more help text. Yeah, I like the sliders when it feels a little more subjective and levels when it feels a little more objective.
The other thing we changed is whether you display the numbers; it's a 100-point rating scale. We removed that on all the defaults because once you translate into numbers, people start spinning on whether it's a 92 or a 93, or a 54 or a 58, when that distinction doesn't necessarily matter much. It was causing a lot of consternation for lots of people. So it's easier just to take it away. You're still going to get the essence of what you're trying to do; you don't need granularity down at that level.
All right, so those are the questions, and that's actually the biggest set. We've repeated the same questions for the employee in those four areas, just removing those first two questions that were, again, not necessarily for the written review. Here is the opportunity for self-reflection from the employees against those things that are important. They're the same questions so they can be compared with their manager's, and there's a real opportunity, especially in the manager assessment, for that to be incorporated.
We saw lots of great examples where the AI had drawn specific attention to where things were different, where maybe an employee is a little less confident than their manager in their abilities, where there’s an opportunity to align around where things are. So, having the exact same questions, obviously phrased differently, gives a huge opportunity to produce a kind of guide for that written review that’ll help it. Also, for the reporting, it gives you lots of options to compare.
One thing I like to look at, we can certainly go there, is performance against engagement, looking first at the manager, but then looking at self-reported engagement, because that’s the one that’s going to tell you that maybe they’re awesome but they have checked out, and that’s a real risk factor. So, I like that ability to use those details.
Anything else specific to the employee questions, Brennan?
I think once you go into the detail of the questions, it feels repetitive. Anything new in there? No, I think the main thing is forcing that reflection from the employee: how do you want to bring that up with your manager, and what discussions should come from it. You called it out. The data you get out of this lets you do all the comparing and contrasting, both what the manager is saying and what the employee is saying, what the averages are, and how the averages move between these questions. Same goes for peers and all the other inputs.
The other piece is that the AI gets that data, like you mentioned, so when you do go into the written review, it can be cognizant of the fact that the person might be a little more self-conscious or have lower self-confidence in certain areas than what the manager would otherwise say, and try to adjust. So, I think these are helpful data points to calibrate the AI and make the feedback higher quality. And I think that's probably where we saw the peer questions provide the most value as well. There are only the two questions. There were a couple of considerations for the peer defaults we were looking at. Many people want to do peer evaluations or real 360s, but it can be super painful.
So, we've made some improvements around the selection. We do the auto-selection for you based on a whole bunch of criteria: who's on their team, who's in their department, and who they're meeting with. We take care of that pain, as opposed to a multi-step process of requests and approvals and all that kind of stuff. So that's the first thing: let's make it really easy for people to run peer assessments and 360s. Let's make it light on the peer so they're not overwhelmed, because reviews can come with a perception of being really time-consuming. We want to reduce that time while still getting really high-quality output and fidelity. By having a couple of peer questions and being able to calibrate employee performance across all three angles, we definitely saw a number of examples where this was a really useful piece of data in a calibration conversation, where a manager was maybe not seeing everything and had a very different opinion of the performance than the peers did.
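The auto-selection described above (team, department, meeting overlap) can be sketched as a simple scoring pass. Everything here, the field names, the weights, the cap on meeting counts, is an assumption for illustration; the real product logic is not public in this post.

```python
def auto_select_peers(subject, people, meetings_with, n=3):
    """Return the top-n peer names for `subject`.

    people: list of {"name", "team", "department"} dicts (hypothetical fields).
    meetings_with: dict mapping a person's name to how often they meet the subject.
    """
    def score(p):
        s = 0
        if p["team"] == subject["team"]:
            s += 3          # same team weighs most: closest working relationship
        if p["department"] == subject["department"]:
            s += 2
        s += min(meetings_with.get(p["name"], 0), 5)  # cap meeting influence
        return s

    candidates = [p for p in people if p["name"] != subject["name"]]
    candidates.sort(key=score, reverse=True)  # stable sort keeps ties in input order
    return [p["name"] for p in candidates[:n]]

subject = {"name": "Ana", "team": "Growth", "department": "Marketing"}
people = [
    subject,
    {"name": "Bo", "team": "Growth", "department": "Marketing"},
    {"name": "Cy", "team": "Data", "department": "Eng"},
    {"name": "Di", "team": "Data", "department": "Marketing"},
]
meetings = {"Cy": 10, "Di": 1}
top = auto_select_peers(subject, people, meetings, n=2)
```

Even a crude heuristic like this removes the request-and-approval loop: the manager gets a sensible default list to confirm rather than a blank form to fill in.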
So, to me, this is a really great way to make sure you're challenging what might be a narrow or biased view of someone, by having enough inputs to balance it out. There were lots of cases where everybody said exactly the same thing, but even then there's value in that opportunity to force the manager to reflect.
You know, we talk a lot about employees having this self-reflection, but even managers need to reflect on, 'Am I seeing the whole picture?' The other thing for me is, as you move up in an organization, you have way less observable time with exactly what's going on with each of your managers. You're probably not in the same room with them or observing their day-to-day tasks nearly as much as when you were a manager at the front lines. So, depending on how many degrees of abstraction you have, you may have less and less time where you've observed that employee, or, sorry, that peer. Peer information might be some of the best information that you get, so I think for lots of people, this is a good source of detail. Any comments that peers are putting in really round it out and help support a thorough, balanced conversation with that employee. Hopefully, it addresses some of the bias that might come in.
Yeah, I completely agree. I think the peer feedback, in some cases, can be the most helpful. As you mentioned, as you sort of move up in an organization, you’re getting less visibility into people, and your time spent with them can be pretty short. So, if you’re getting feedback from others in the org that this person needs to improve on XYZ, and you’re talking to that person, they seem to be doing fine on XYZ, sometimes that peer feedback is the difference-maker in actually helping get them the help and coaching they need.
And you know, to be honest, that's the best data, I think, that you can feed to the AI. Not because the AI is going to do something magical with it, but because it will do everything you would do if you had unlimited time. That's right. So, imagine you have seven direct reports, and for each of those seven direct reports, you're going to get three peers to provide feedback. Now you have 21 things to read before you can even go and do your review as a manager. Most managers I've talked to about this don't do that background work. They may sometimes scan through it, but they're never going to study it. The AI, on the other hand, will. It's got unlimited willpower to go through, read, analyze, and do all those things. What's so neat is that it was really eye-opening for me: as we were doing our own reviews, I was reflecting on, 'Wow, the AI has pulled out some themes from peer feedback, from the differences in scores, and made a couple of almost EQ-level observations about this person that maybe went over my head because I wasn't really studying for this test.' And because it's there, because it pointed it out, I glance over and look at it, and I'm like, 'You know what? I think maybe this thing's right. Okay, that can be a really productive conversation that otherwise wouldn't have happened.' So I think the peer questions can be such a huge treasure trove of insight that you otherwise can't get.
Yeah, I love the addition. I also love the AI bringing attention to things that maybe you knew but hadn't given enough attention to really reflect on. It creates that pause without you having to do any extra work. So now you've got better insights in less time. What could be better?
The last category is the ability to have individual contributors actually rate their managers. As I said in the opening, this is going to be its own review type, similar to these, coming within the next week or so, but I wanted to share the questions we're going to start with. This is the first time we're having default questions for the individual contributor reviewing their manager; this will be the V1 of these questions. And, again, I wanted to limit them: as we add more review types and more questions, we're adding more weight to the process. So we're being really conscious that we want high-quality, actionable insight out of this, driving the right conversations and providing the right view into the data, without making the process any longer or harder.
So I have chosen two categories. One is around career development. Part of the manager's role is helping this person develop in their career. Are they doing that? How effective are they at it? Are they just casually having career conversations, maybe only when prompted, or are they actually driving those conversations and working collaboratively on a development plan? How supported in career development is this person? That was one category I wanted to capture because it is certainly something a lot of people talk about. We work a lot with startups, and career development is a big part of why you often join a startup, because there's so much opportunity. So I wanted to make sure we captured that important piece of the manager's role, one that too often doesn't get the attention it deserves.
I wanted it in the reviews so that there's an opportunity for the manager of managers to pause and look at whether this is something their team is doing, and doing regularly. That conversation happens there, but you can also look at the impact on people's performance based on whether their managers are having career conversations. Is that actually affecting whether people are getting results, whether they're engaged? What is the impact of the manager's attitude toward these activities on the results you're getting as a business? It's not just about how employees experience being supported; it's also whether that has an impact, one way or the other, on the results they can produce today. So I really wanted to make sure that got captured. Anything more on that, Brennan?
I think these questions, the employee giving upward feedback to their manager, are in many cases going to depend on your culture and your company. If your company is one where you want people to stick around for a long time and you do have career opportunities for them, you have to make sure people are having those conversations, discussing it, and aware of it, right? Otherwise, it's no wonder someone might take that other offer. So, I think it's going to depend a little bit on what stage your company's at, for sure, and what things you culturally value.
You want to make sure you're hitting on those. And these ones can change too; I can imagine them evolving as the company becomes really good at certain things. And I feel like you're telling me that this V1 is going to get changed. The other ones feel a little more like V2s, whereas these feel newer. Fair enough; I don't think they're bad ones. I can just imagine we're going to learn about them. Once the majority of the company gets some of these things in place, it might feel like a steady state, and maybe not as interesting as it was the first three or four cycles you took somebody through.
Yeah, the other thing for me was a little harder to picture. I also wanted to anticipate the idea that you'd want to look at distinctly manager conversations, or manager skills, and capture them in some way. This was one of them: are they actually managing their people and having career conversations? Not just to make sure the employee experience is strong, but because when you move from individual contributor to manager, there is distinct work involved, things you never did before.
But also, are they thinking about their team? Because if you're not having career conversations, you don't know what's going on in your team or who might be ready for a change, and you're not actively managing the team you're responsible for to make sure you have the team you need in place. Career conversations are also a little bit of talent planning. Maybe it's a stretch to get all of that from one question, but that was part of what I wanted to do, because more than anything, and particularly with the reporting we've built, I want to make sure we're helping managers do the manager things. It's always been our mission to help people be better managers, so some of this is pushing the tools down to them, along with some of the accountability for taking control of their own fate when it comes to their team. Are they doing the kinds of things that will make them the most successful? As a manager, how your team performs is what makes you successful.
So actively managing your talent and the strategy around that team is really important. Maybe that's a bit too much burden to put on one question, but that was my thought process. The other question was about how you're being supported in your current role. Are you getting valuable feedback? Not just any feedback; maybe you're getting a ton of feedback, but if it's micromanage-type feedback, that's not that helpful. Are you getting valuable feedback from your manager? Support where you need it? The autonomy to learn and grow? New challenges that actually help you in your current role? It's the distinction between getting the help you need to do the job you have and contribute to the company, versus getting the support you need to think about your career and move through it. That's the set of questions we're going to start with.
I think that's it for questions. It's a lot of questions, but it goes by quick. I do want to spend a couple of seconds on the reporting, because there's a strong relationship between the two. When we came back to do V2 of these questions, one of the things we wanted to make sure we did was provide the right inputs into the reporting. One gap in V1 was that we never asked a question about potential, which, for anybody who has done a nine-box, is a gap, because that is in fact what you would typically find in a nine-box. So we're also looking at these questions from the perspective of whether they can help you manage through the whole context, the whole system you're managing and working in. On the reporting itself, there's lots written about the nine-box, lots of information out there.
We're not going to define for you what each of these boxes should mean at your company; that's where some really solid work can be done with your HR business partners. But I do want to start bringing the idea of looking at your team, and everybody on it, from this perspective to every manager, so these reports are available to any manager at any level. Likely you're going to have your true talent assessment, talent review, or calibration conversations at a leadership level, but it's very possible for any manager to look at it. I didn't learn this skill until a little bit later in my career, but calibration has been one of the most valuable practices I've ever participated in. So I think more people doing it is a good thing, especially at startups where HR is going to be a small department for quite a while; the ability for managers to take part in that is amazing. It also gets everybody thinking about making sure you're ready for the next six months.
So the first report is the classic nine-box: potential against job requirements. That's going to help you understand your distribution. Here we've got a pretty good one: a bunch of core players, maybe a few where we need to look at whether some sort of intervention is needed, and maybe Tisha here is our next rising star or the next person who might be in leadership. In addition to looking at where everybody sits, what the composition is, and whether I need to think about bringing in more managers, I also wanted to make sure you had an opportunity to look at risk. Because we've given you access to any of the questions you've asked, whether they're our defaults or your custom questions, you can look at them in a bunch of different ways. For example, you might look at employee performance from the manager's perspective against self-reported engagement: here are my high performers who are really under-engaged. Is that a risk? Is there a problem there for me?
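To make the nine-box idea concrete, here is a minimal sketch of how the mapping works: two 1–3 scores, one for performance against job requirements and one for potential, index into nine cells. The labels and scales here are illustrative assumptions, not our actual schema or the terms in the product.

```python
# Hypothetical nine-box sketch: a (performance, potential) pair on 1-3
# scales indexes into one of nine cells. Labels are illustrative only.
NINE_BOX = {
    (1, 1): "Underperformer", (2, 1): "Solid performer", (3, 1): "High performer",
    (1, 2): "Inconsistent",   (2, 2): "Core player",     (3, 2): "High impact",
    (1, 3): "Rough diamond",  (2, 3): "Emerging star",   (3, 3): "Rising star",
}

def nine_box(performance: int, potential: int) -> str:
    """Map 1-3 performance and potential scores to a nine-box label."""
    return NINE_BOX[(performance, potential)]

# Example team (names and scores are made up for illustration).
team = {"Tisha": (3, 3), "Sam": (2, 2)}
for name, (perf, pot) in team.items():
    print(name, "->", nine_box(perf, pot))
```

The point of the grid is exactly the distribution question above: counting how many people land in each cell tells you whether you have mostly core players, any cells that need intervention, and who your potential rising stars are.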
So we're starting to give you a quick and easy way to look at different dimensions of your team: potential, succession planning, risk factors, and other views like these. Building that into this process is, I think, as valuable as thinking about how the questions will facilitate the conversation you're trying to drive.
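The risk view mentioned above, manager-rated performance crossed with self-reported engagement, can be sketched the same way. The field names, scales, and thresholds below are assumptions for illustration, not the product's data model.

```python
# Illustrative retention-risk sketch: flag people whose manager-rated
# performance is high but whose self-reported engagement is low.
# Field names and 1-3 scales are assumptions, not a real schema.
def flag_retention_risks(responses, perf_min=3, engagement_max=2):
    """Return names with performance >= perf_min and engagement <= engagement_max."""
    return [
        r["name"]
        for r in responses
        if r["performance"] >= perf_min and r["engagement"] <= engagement_max
    ]

responses = [
    {"name": "Tisha", "performance": 3, "engagement": 1},
    {"name": "Sam", "performance": 2, "engagement": 3},
]
print(flag_retention_risks(responses))  # prints ['Tisha']
```

Each cross of two question dimensions is just a different filter like this one, which is why opening up default and custom questions to the same reporting makes so many views possible.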