Last October, I wrote a column about the use of generative AI in producing professional services. I pondered whether others’ knowledge that AI was used to produce a service – such as legal work, consulting, or creative work – would devalue that service. My hypothesis was that a negative perception surrounding the use of AI tools could impact what people would be willing to pay for the work. Recently, I learned about a new research study that addresses a closely related question, which I’ve paraphrased for this month’s column…
How does the use of AI impact people’s perceptions of you?
In other words: What do people think about you and the work you do if they know you are using AI? Spoiler alert – it’s not flattering. The study is titled Evidence of a social evaluation penalty for using AI (Reif, Larrick, and Soll, 2025).
The Social Evaluation Penalty
The research involved over 4,400 participants and brings an AI lens to a longstanding set of questions in social psychology centered on attribution theory. Attribution theory seeks to understand the causes we assign to someone’s behavior. For example, if you miss the bus, is it because of the weather (situational – external) or because you failed to set your alarm (dispositional – internal)? Or, if you ask for help with a task, is it because you are lacking in some way – perhaps incompetent or lazy? Or is it normal for everybody to ask for help in this circumstance or with this particular task? It turns out that we tend to lean toward negative – dispositional – explanations when attributing behavior. Asking for help at work projects an image of deficiency that can harm a person’s professional image over time. There is a longstanding body of research around this theory, but the researchers sought to examine it in a slightly new light:
“Are people who use AI actually evaluated less favorably than people who receive other forms of assistance at work?” (Reif et al.)
This study seeks to understand what happens when AI is deployed as the helper.
The AI Crutch
The researchers conducted four interrelated studies to test their hypothesis and isolate the impact of AI relative to other, non-AI forms of assistance. The bottom line is revealed in their title – there is evidence of a social penalty for using AI. Workers using AI were seen as less competent, less motivated, and less diligent. These findings present a dilemma for people who want to use AI and also for companies attempting to move toward AI adoption.
One interesting finding is that the source of assistance matters to the attributions people make. The research showed that AI use intensified the negative judgment relative to asking a co-worker or using non-AI tools. This is because AI is seen as providing greater assistance, leading to this paradoxical finding:
“People who achieve productivity gains through AI might paradoxically be perceived as less competent or motivated.” (Reif et al.)
Another, more nuanced finding: if the task for which AI is being used is seen as a good “fit” for AI, the social penalty is offset. This is especially true when the person making the value judgment also uses AI. To put this idea of “task fit” into a relatable example: if someone used a calculator to multiply 153 x 88, most people would likely not think poorly of that person. But if they reached for a calculator to multiply 153 x 10, that’s another story! Since AI is new, we’re still navigating the “task fit” conversation. The study contains more details on how the researchers conceived of “task fit” in their work.
Staying Out of the Social Penalty Box
These research findings jibe with results from another recently released report, The GenAI Divide: State of AI in Business 2025, from MIT’s Project NANDA.
The headline from that report, which was covered extensively in the news, was that 95% of generative AI pilot projects fail. One might draw the (incorrect) conclusion from that headline that people are not interested in using generative AI. However, the report also highlights enormous shadow AI use, with workers from over 90% of the companies surveyed reporting that they use AI regularly.
“Our research uncovered a thriving ‘shadow AI economy’ where employees use personal ChatGPT accounts, Claude subscriptions, and other consumer tools to automate significant portions of their jobs, often without IT knowledge or approval.” (MIT NANDA, p. 8)
There are many reasons why people might not choose to use the officially sanctioned tools, including the conclusions in the State of AI report itself, which generally sum up as: the official tools suck and were poorly deployed.
However, I will posit another reason people turn to shadow AI – keeping their AI use secret, because if others know you’re using it, you put yourself in the social evaluation penalty box. This may be a harder barrier to adoption than tool quality, and overcoming it would take more than the technical improvements the State of AI report recommends. It would require addressing the social perceptions raised by Reif et al. and changing workplace culture and norms. As they note:
“People care about how their actions will be perceived by others: people may choose not to use AI – or not to disclose their use of AI – if they expect to incur a social penalty. We propose that this social evaluation penalty is an overlooked barrier to AI adoption.” (Reif et al.)
Organizations that are serious about AI adoption would be wise to pay attention to these findings and explore ways to reduce the social evaluation penalty.
Send Me Your Questions!
I would love to hear about your data dilemmas or AI ethics questions and quandaries. You can send me a note at [email protected] or connect with me on LinkedIn. I will keep all inquiries confidential and remove any potentially sensitive information – so please feel free to keep things high level and anonymous as well.
This column is not legal advice. The information provided is strictly for educational purposes. AI and data regulation is an evolving area and anyone with specific questions should seek advice from a legal professional.

