Productivity Imposterism

Mayank Kejriwal
8 min read · Mar 13, 2022

We’ve all heard of imposter syndrome, defined by Wikipedia as “a psychological pattern in which an individual doubts their skills, talents, or accomplishments and has a persistent internalized fear of being exposed as a fraud.” Indeed, by some estimates, nearly 70% of individuals will experience signs of imposter syndrome at least once in their lives. The ailment, if we may call it that, is particularly hard on students and people trying to acclimate to new professional environments, but it can also surface in romantic relationships, workplaces, and the classroom. Early evidence suggested it was more prevalent in women, but recent evidence suggests it may be spread more equally.

This article is not about imposter syndrome in the ordinary sense, or its impact on productivity. I am instead focusing on a related ‘syndrome’ that (I believe) is far more commonplace in a knowledge-based economy, namely productivity imposter syndrome or, pithily, productivity imposterism. I named this on impulse, because it seemed like a good description, but I would not be surprised if there is a more official, evocative, or even dryly clinical term for it. Let’s agree that a rose is a rose by any other name, and use productivity imposterism for the moment to discuss the substance of the matter.

Productivity imposterism occurs when you feel that you haven’t been productive enough, even though the symptoms suggest otherwise. The biggest symptom might be a sense of cognitive fatigue, even burnout, but without the accompanying sense of having accomplished something. But can we honestly say that cognitive fatigue constitutes evidence of productivity? Why not point to a work product as evidence, just as your achievements serve as evidence refuting your sense of (vanilla) imposterism?

Root of the Problem: Continuous Inputs vs. Discontinuous Outputs

The answer may not surprise many of you working in knowledge-intensive fields. Work products, or ‘output’, arrive as singularities: discontinuous emergences. The ‘input’, by contrast, is continuous, unless you’re lazing around waiting for a lightbulb to go off. At this point, it is helpful to review how productivity (or more precisely, the productivity ratio) is actually defined:

What is productivity? The productivity ratio is simply the value of output divided by the hours of input. But how can we really measure it when input is continuous and output is a singular event?

For the software engineers and web developers out there, suppose your company asks you to write a piece of code X that does Y. You may be asked to produce a webpage that has certain features and is able to guide users in a well-defined manner. You start writing the code thinking it will be a breeze. The output seems continuous; you spend 10 hours (say) writing code, and when you open up the webpage locally, the output is there for all to see. We could measure it in terms of the percentage of the webpage completed. If we imagine the finished webpage is worth $50,000 to the company (a number drawn up by someone above your pay grade, no doubt), and you have seemingly finished 60% of the work within 10 hours, then your productivity ratio thus far, measured as dollars of output per hour of input, is 0.6*50,000/10 = $3,000/hour. That’s amazing! (at least, for the organization; I’ll betcha you are not getting paid close to $3,000 per hour).

Okay, so your company’s happy with you; in fact, they decide to expand the free cafeteria menu with gourmet pizzas, offer free ride-sharing, and a whole host of other ‘free’ things they hadn’t offered before. I may be behind the times on the perks, but bear with me. Your company can afford it: you are generating far more output per hour of input than what you are getting paid, directly or indirectly. To put it more simply, they are doing what companies are meant to do: make profits. And they expect you to sustain, if not grow, your productivity, an expectation conveyed to you in subliminal ways.

But now comes the glitch, inevitable in any software project of significance. You believe you are almost done, but something isn’t right. You think it’s a minor problem, and tell your manager that, but the problem ends up taking over a week, and everyone is frustrated at the end of the ordeal. The company, and your manager, suddenly aren’t too happy with you. You’re so disheartened by the whole thing that you can’t even be sure you’ve completed 100% of the task, whatever that means.

To continue the math, suppose each workday is counted as an 8-hour day, and that it took you 5 days to get the task to 90% completion. In other words, you completed 90–60 = 30% of the task in 8*5 = 40 hours. Your productivity ratio over this ‘glitchy’ period is now 0.3*50,000/40 = $375/hour. This is probably still above what you’re getting paid, but factor in those perks, the Silicon Valley rents for the office, and other indirect costs, and those profits aren’t looking so good. Of course, you are only one employee and this is just one (relatively tiny) project. But scale it up, and things can go wrong very quickly for any organization that depends on its human capital and the productivity per employee.
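The arithmetic in this running example is easy to verify. Here is a minimal sketch in Python, using the article’s hypothetical numbers (the $50,000 valuation, hours, and completion percentages are all invented for the thought experiment):

```python
# Back-of-the-envelope productivity ratios from the webpage example.
# All figures are the article's hypotheticals, not real data.

TASK_VALUE = 50_000  # dollars the finished webpage is worth to the company

def productivity_ratio(fraction_completed, hours):
    """Dollars of output produced per hour of input."""
    return fraction_completed * TASK_VALUE / hours

# Phase 1: 60% of the task done in 10 hours.
phase1 = productivity_ratio(0.60, 10)             # $3,000/hour

# Phase 2 (the 'glitchy' week): 90% - 60% = 30% more, in 5 * 8 = 40 hours.
phase2 = productivity_ratio(0.90 - 0.60, 5 * 8)   # $375/hour

# Blended rate over all 50 hours worked so far.
blended = productivity_ratio(0.90, 10 + 40)       # $900/hour

print(phase1, phase2, blended)
```

Note how the blended rate ($900/hour) still looks respectable; it is only when the ‘glitchy’ period is measured in isolation that the apparent productivity collapses.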

Despite being nothing more than a thought experiment, or at best an anecdote, this simple example can be used to explain certain interesting workplace phenomena. First, you’ve probably heard the rule of thumb that 80% of the work is finished in 20% of the time. In a nutshell, that is the point the example was trying to make. But the full consequences of this observation haven’t been mapped out in depth. We feel burned out and de-motivated not because of the 80% but because of the other 20%, especially if we happen to be in the unfortunate position of dealing with that 20% (and appearing unproductive on paper). We certainly don’t feel good when we seem to be the unproductive one, and our colleague is the productive one, even though we’re working just as hard (maybe more). We can’t explain or quantify it. During these moments, when productivity imposterism takes its toll, we feel insecure and unmotivated, and that has a snowballing effect. If emotions are contagious, and they can be, others can get infected by what is perceived as ‘negativity’.

In high-tech, or indeed any knowledge-based industry, 80% isn’t good enough unless you’re just getting bootstrapped as a startup. Even 99.99% might not be good enough if your reputation is built on reliability and ubiquity, as with cloud computing companies or a consumer-facing service like a social network or the Google search engine. So unless you’re very crafty at getting away with not doing the last 20%, you will likely face some version of productivity imposterism in your career.

A More Extreme Thought Experiment: Building a Time Machine

In the sciences, productivity imposterism can morph into outright frustration. Let’s take an extreme but illuminating example as our second thought experiment: you’re a physicist trying to build a time machine. You have some ideas, but you need to think through them; in some cases, work things out on paper, and in yet other cases, design experiments that may or may not succeed in the way you hope. For the sake of simplicity, we will assume you’re actually a retired physicist who can’t stop thinking about new physics (true physicists really can’t).

Suppose that you’ve designed ten experiments, each of which will take 6 months to do. You have a strong feeling one of these will lead to the breakthrough, but you don’t know which one. This is not unlike real scientific research, with many dead ends. If you are very lucky, you will have solved the time travel problem in the first six months. But probability suggests it will likely take you around three years, and that’s assuming you stick with it and are certain one of the experiments will work. Productivity imposterism may well eliminate, or weaken, dogged determination after a few ‘unproductive’ months, and in real life, one never knows that anything will really work, any more than a startup founder can say with certainty that one of her 10 great ideas will become the next Google.

A relevant, but flawed, analogy here from the investing world is putting your money in lots of startups. Most will fail, but a few (or perhaps even one) really will become the next Google, Uber, Meta, what have you. Yes, there is scope for talent and it’s not pure luck. Not all VCs are equally successful, and some seem to have a knack for picking winners. And VCs have a nice mathematical option to deal with this problem: a hefty discount rate when calculating the expected present value of their portfolio. This is also true in academia: we write many papers and proposals, and each has a different risk/reward profile. Papers carry less ‘structural’ risk in that we can always try again if one gets rejected, or mitigate the odds of getting scooped by presenting preliminary versions at workshops or publishing a working version on a preprint server like arXiv.
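To make the “hefty discount rate” remark concrete, here is a toy present-value calculation. Every figure (ten $1M bets, one $50M exit after 7 years, a 30% discount rate) is invented for illustration, not drawn from real portfolio data:

```python
# Toy illustration of discounting a future exit back to today's dollars.
# All figures are invented for illustration.

def present_value(cash_flow, years, discount_rate):
    """Discount a single future cash flow back to the present."""
    return cash_flow / (1 + discount_rate) ** years

# Ten $1M bets; suppose one exits for $50M after 7 years and the rest fail.
invested = 10 * 1_000_000
exit_today = present_value(50_000_000, years=7, discount_rate=0.30)

# Even a $50M exit is worth only about $8M today at a 30% discount rate,
# slightly less than the $10M put in -- which is why VCs need outsized,
# improbable winners rather than merely good outcomes.
print(round(exit_today), round(exit_today / invested, 2))
```

The steep discount rate is doing exactly what the article describes: it bakes the long, uncertain wait into the valuation up front, so that a portfolio of mostly-failures can still be priced rationally.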

These solutions may work for papers and money investments, but our hypothetical web designer or physicist cannot invest in 10 different solutions at once unless they can clone themselves. The serial nature of the task, with long stretches of seemingly unproductive work punctuated by irregular spikes of success, is what makes productivity imposterism such a nefarious modern problem.

So what should the Modern Organization do?

If productivity imposterism is real, and the grapevine certainly suggests it is, what can or should the modern organization do about it? One problem is that organizations tend to be unwilling to even recognize these kinds of problems. The ‘business as usual’ mantra is surprisingly pervasive in human and organizational life, despite all the ruckus about continuous disruption and innovation. There is inertia in changing things, especially when the motivation for change is not staring us in the face. When it is, as with COVID-19, change is rapid and even sticky. If nothing else, COVID-19 showed us how fast norms can change, and how rapidly business can acclimatize itself to those norms if it chooses to.

Unfortunately, productivity imposterism is not one of those phenomena that is staring companies directly in the face. It is a subtle problem, one that no one talks about too directly. The only real symptom, confounded by other causes, is worker burnout, but it is far easier for companies to blame worker burnout on a whole bunch of vague causes, and to prescribe equally inept solutions.

Getting rid of perverse incentives, or at bare minimum, interpreting output correctly, is the most concrete step that is doable in the near to medium term. I’m speaking here of eliminating metrics such as ‘lines of code written’, which still abound, although it looks like the door is already starting to close on them. The Great Resignation may have had something to do with it. But we should not forget that tight labor markets don’t last forever, and it doesn’t help when companies ask for the world even in entry-level positions.

In the long run, as our economy becomes ever more knowledge-intensive and creator-driven, human capital is the ultimate source of competitive advantage. So the modern organization should develop and adopt best practices for measuring and rewarding productivity, if only to serve its own best interests: the organization that attracts mediocrity, rather than the truly productive, through a system of perverse incentives and adverse selection, will go extinct eventually. Indeed, I suspect this explains why some organizations have persistently toxic cultures, despite turnover of entire departments. Sometimes, the processes, not the people, of the organization are the sources of systemic toxicity.

We need a science of productivity that is designed for the twenty-first century organization and its workers. And it behooves all of us to educate ourselves on what it means to be productive in a high-risk, high-innovation environment.


Mayank Kejriwal

I am a research assistant professor at the University of Southern California, with expertise at the intersection of Artificial Intelligence and society.