I’ve seen a lot of a certain gimmick recently. Maybe you have seen it too. Someone posts a written piece, and then they tell you that ChatGPT — the hottest new flavor of AI — wrote a portion of it. Well, this ain’t one of those pieces. I’d like to think that an AI chatbot just can’t capture that certain something that I bring to the table. That combination of often shabby humor and occasionally interesting insight that is just so…human. I mean, there’s no freaking way that AI would write an article about the economic concept of elasticity that references socks. AI is too smart for that. And no one’s ever accused me of being too smart for anything. I take pride in that fact.
I also take pride in having a job. When I hear of a new technology, the first thing that I think is: “whose job is this thing going to take?” For example, when I saw this robot dog, I put my real dog Willow on notice. So, naturally, when journalists started advertising that some AI chatbot could do their jobs, I started wondering what other jobs it could do.
(By the way, if you want to buy a book about economic inequality written by a human (I swear!) you can give mine a try.)
What Can AI Do?
My (limited) understanding is that an AI bot like ChatGPT is trained to mimic human language. At first, humans drive the training. A prompt is selected from a database, and a person inputs a desired answer. For example, imagine the prompt was “Are Economists fun people?” Then, the trainer might input an ideal answer of: “In general, Economists are not fun people.”
In the next step, a prompt is again sampled. Except this time, the bot itself produces some results. A person then ranks these results, so that the bot can evaluate how it did. Imagine the prompt “Are Economists fun people?” were selected in this second step. The bot might output two responses: 1) “Economists are not fun;” and 2) “Fun, Economists are not.” The trainer would then rank the first choice as the best because it sounds normal. The second choice sounds like it was written by Yoda. These “rewards” — or points for good responses — are then generalized to other cases. So, if the bot got a question like “Are the Cliffs of Moher pretty?” it would say “The Cliffs of Moher are pretty,” instead of “Pretty, the Cliffs of Moher are.”
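The ranking step above can be sketched in a few lines of code. This is a toy illustration of the idea, not OpenAI’s actual training code; the function name and the reward values are my own inventions for the example.

```python
# Toy sketch of turning a human ranking into rewards (illustrative only;
# real RLHF training is far more involved than this).

def rank_to_rewards(responses, human_ranking):
    """Convert a human's ranking of responses into reward scores.

    `human_ranking` lists indices into `responses`, best first.
    Higher-ranked responses receive higher rewards.
    """
    n = len(responses)
    rewards = {}
    for rank, idx in enumerate(human_ranking):
        rewards[responses[idx]] = n - 1 - rank  # best gets n-1, worst gets 0
    return rewards

prompt = "Are Economists fun people?"
responses = ["Economists are not fun.", "Fun, Economists are not."]
# The trainer prefers the first, normal-sounding response over the Yoda one.
rewards = rank_to_rewards(responses, human_ranking=[0, 1])
print(rewards)  # {'Economists are not fun.': 1, 'Fun, Economists are not.': 0}
```

The bot then learns to favor responses that would have earned higher rewards, which is how the preference generalizes to prompts like the one about the Cliffs of Moher.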
This training ultimately makes ChatGPT good at producing responses that sound human. Since computers can remember many, many things, an AI bot like ChatGPT can write human responses to many, many prompts. And, it can respond with much longer prose than I indicated above. After all, if the prompt is “Write a 5-page essay on the level of fun that Economists possess,” the ideal training response would also be 5 pages long. Indeed, AI responses aren’t limited to just prose. The bot can be trained in any “language,” and so can even help with computer coding. In other words, ChatGPT can do a lot.
What Can’t AI Do?
OK, so AI can write like a human in response to tons of different queries. Impressive. But, my (again, limited) understanding is that the bot is trained on some set of inputs, and it cannot go beyond these inputs. For example, ChatGPT is not connected to the internet. So, ChatGPT cannot just scrape the internet every day and “learn” new things. Indeed, ChatGPT is currently limited to things that it learned before 2021. For example, when I asked ChatGPT about the 2022 Grammys I got this response: “The 2022 Grammy Awards nominees have not been announced yet as the award ceremony is usually held in January of the next year. The knowledge cutoff for me is 2021, so I do not have the most recent information on the nominees for the 2022 Grammys.” So, ChatGPT isn’t going to adapt immediately to new trends.
ChatGPT also isn’t going to go beyond what you ask it. So, if you were a manager at a large company and wanted someone to write a two-page memo on the candy industry, ChatGPT could likely get you started. But, it isn’t going to point out that an opportunity exists in marshmallow manufacturing unless someone else noticed it pre-2021. And, if someone noticed it pre-2021, you are probably already too late. A person, on the other hand, could notice the opportunity and then think critically about whether that avenue still existed.
ChatGPT also can’t help you with complex social interactions. For example, in your imaginary manager role, you may have an employee who ignores your request to write that two-page memo. You may wonder how to work with this employee. Well, I asked ChatGPT what to do, and I got something resembling a self-help article that I could just as easily have gotten from Google. Have a look at the exchange below:
I don’t know about you, but if someone asked me how to deal with such an employee, I wouldn’t just list a bunch of steps. After I said “I’m sorry you’re struggling,” my first question would be: “What is the employee doing wrong?” Only after this question could I hope to give specific advice. The AI speaks in the sort of generalities that work in the articles it is trained on. The bot is clearly helpful, but falls well short of a person.
So, AI clearly has some limitations. What jobs are in ChatGPT’s wheelhouse? What jobs lie well outside its expertise? Let’s go to the data.
Which Jobs Might be Vulnerable to AI?
To see which jobs ChatGPT could threaten, I turned to a really cool database, the O*Net. Among many other things, the O*Net collects data from employees, employers, and experts on the skill content of occupations. These skills include basic ones, like reading, speaking, writing, and math. And, the database also includes more complicated skills, like coordination, managing finances, or negotiation.
To identify occupations vulnerable to ChatGPT, I first focused on writing — the skill this form of AI most clearly possesses. I identified occupations in the top 25 percent of all jobs with respect to the importance of writing. The jobs range from Copywriter to Marriage and Family Therapist to CEO.
Then, I identified two skills that I do not think ChatGPT possesses. The first was critical thinking, which O*Net defines as: “using logic and reasoning to identify the strengths and weaknesses of alternative solutions, conclusions, or approaches to problems.” The second is social perceptiveness, which O*Net defines as: “being aware of others’ reactions and understanding why they react as they do.” For each of these two measures I created something called a Z-score. The Z-score is negative when an occupation is below average in that skill, and positive when it is above average.
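The Z-score is just a value’s distance from the average, measured in standard deviations. A minimal sketch of the calculation, using made-up skill scores rather than actual O*Net values:

```python
# Minimal Z-score sketch. The skill scores below are invented for
# illustration; they are not real O*Net measurements.
from statistics import mean, stdev

def z_scores(values):
    """Standardize values: negative = below average, positive = above average."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

# Hypothetical critical-thinking importance scores for five occupations
critical_thinking = [3.1, 4.4, 2.8, 3.9, 3.3]
print([round(z, 2) for z in z_scores(critical_thinking)])
```

An occupation scoring above the average across all jobs gets a positive Z-score, and one scoring below gets a negative one, which is what puts each dot on one side or the other of the axes in the figure below.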
The figure below plots each of the occupations where writing is important on a 2×2 grid. Occupations in the bottom left are most vulnerable to ChatGPT. These occupations rely on writing, but use below-average critical thinking and social perceptiveness. Occupations in the top right are least vulnerable. And, those in the top left or bottom right use an above-average amount of one skill or the other. I have also color-coded the dots based on pay. Red is the lowest-paying 25 percent, orange the next lowest-paying, grey the second highest-paying, and green the highest-paying 25 percent.
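The quadrant logic amounts to checking the signs of the two Z-scores. Here is a toy version; the occupations and scores are placeholders I made up, not the actual data behind the figure:

```python
# Toy version of the 2x2 classification in the figure. The Z-scores
# attached to each job are hypothetical, for illustration only.

def quadrant(ct_z, sp_z):
    """Classify a job by its critical-thinking (ct_z) and
    social-perceptiveness (sp_z) Z-scores."""
    if ct_z < 0 and sp_z < 0:
        return "most vulnerable"                      # bottom left
    if ct_z >= 0 and sp_z >= 0:
        return "least vulnerable"                     # top right
    if ct_z >= 0:
        return "protected by critical thinking"       # bottom right
    return "protected by social perceptiveness"       # top left

jobs = {
    "Proofreader": (-0.8, -0.5),       # hypothetical scores
    "Chief Executive": (1.2, 1.0),     # hypothetical scores
    "Family Therapist": (-0.2, 1.5),   # hypothetical scores
    "Physicist": (1.8, -0.3),          # hypothetical scores
}
for job, (ct, sp) in jobs.items():
    print(f"{job}: {quadrant(ct, sp)}")
```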
Figure. Jobs Organized by Above- or Below-Average Critical Thinking (X-axis) and Social Perceptiveness (Y-axis)
If you look at the top right quadrant, you can see that many of the least vulnerable jobs are also the highest-paying jobs (those with green dots). Chief Executives are safe because, while ChatGPT can do the writing part of their job, their high levels of critical thinking and social perceptiveness are protective. By contrast, in the bottom left quadrant, many of the most vulnerable jobs are in either the lowest-paying or second lowest-paying groups. Proofreaders and copywriters are more vulnerable because the writing that they do is more easily replicated by ChatGPT.
Of course, the relationship between pay and protection isn’t uniform. Indeed, many of the lowest-paying jobs are protected by their high degree of social perceptiveness. The top left quadrant is filled with lower-paying jobs. A job like Marriage and Family Therapist pays a little less than $50,000 a year at the median, but is unlikely to be replaced by ChatGPT anytime soon because of its high degree of required social perceptiveness. In the bottom right quadrant, highly paid physicists are likely also safe, not because of their social graces, but because their job requires a high degree of critical thinking.
In a way, the figure above points to something positive. Jobs across the income spectrum require skills that give human workers an advantage over AI like ChatGPT. For now, I suspect that many workers will find ChatGPT somewhere between an idle curiosity and a helpful tool.
In the longer run, I’m sure that these sorts of technologies will displace some workers. The tendency in our capitalist society is to view these replacements as necessary destruction in the interest of enhanced productivity. And, on average, I have no doubt that these technological transitions end up benefitting many. Still, it’s worth remembering that for the people likely to be affected first — people who on average already make less — the transition may be anything but easy. Just ask those middle-income workers affected by earlier versions of automation. Even though we live in a world with chatting AI bots, their real wages are stuck in the decade before the internet even existed.