Future of Work

Cappelli’s Column: You’re Not Falling Behind GenAI

Have you noticed that when pundits say that everyone is using large language models (LLMs) like ChatGPT, they never say how people are using them? I had that experience in a meeting where I asked people who said they were using ChatGPT exactly what they were doing, and it turned out they were using it as a search engine: Ask it for a piece of information, and it will pull it up. Nothing wrong with that; it's arguably better than a regular internet search that pulls up several sources that might answer your question somewhere in them. But it's also not revolutionary.

I did a poll of the participants in MIT's Work/24 online conference, a group we might think of as more techy than average, asking whether they were actually using any LLMs at work. I thought the results were surprisingly modest. About half said individual employees were playing with the tools, a small number, it seems to me, given that LLMs are now built into browsers like Microsoft Edge: You could be using one without even trying. Only 13% said they were using LLMs for some tasks, and only 1% said they were using them enough to take over a job. This is hardly consistent with the hyperbolic view that the technology is revolutionizing the workplace and that all the smart companies are using it.

Why the big disconnect between the continued reporting about how transformative these tools are and the great difficulty in pointing to many cases where they are actually being used? The answer is that the initial hype comes from the people who build the tools. For them, these innovations are considerable because they represent big developments in the work they do. They are also focused on what these tools are in principle capable of doing, under the right circumstances, where resources aren't an issue. They aren't asking whether there is an obvious demand for what the tools can do, whether it is cost-effective to use them, and whether they are much better than what we are doing now.

Let’s think about driverless cars. Can they work? Yes, I believe they could operate safely and effectively. But how many people actually want them? Will sports car drivers and off-road fans want a robot that drives the car for them? For those who do, are they willing to pay the tens of thousands of dollars more needed to install the navigation systems, plus the fees for the support systems those cars require? Will insurance companies be able and willing to cover them? Will the public sector be willing to pay for the infrastructure they need? Not yet, and it’s not clear when any of this will happen, despite claims from the 2010s that driverless cars would have taken over by now.

Much the same thing happened with machine learning, which I believe is an extremely useful and focused tool that makes much better predictions than we can otherwise make. But we weren’t thinking about all the data needed to build the models, the database management resources needed to make that happen, or the fact that people don’t necessarily want algorithms making decisions for them (like whom to hire, for example).

With LLMs like ChatGPT, what have we been missing? One issue is whether there really is demand for what they can do. Sure, they can take over simple correspondence, say, for businesses dealing with customers. But no one is writing that correspondence now; it consists of form letters cleared in advance by legal departments. Yes, LLMs could make for smarter and more capable chatbots to answer customer questions. But companies don’t necessarily want solutions that customers like better; they want cheaper solutions. Chatbots are good enough now, and companies don’t want to spend the considerable programming time to make them more capable.

These tools are very good at providing summaries of information on topics you don’t already know. But how often does your job require that? The people who do that type of work frequently—journalists, researchers, professors—are, not surprisingly, the people saying how great LLMs are. LLMs can be very useful when you need such summaries, but even then, we still need someone to check what they say to make sure it isn’t crazy, as the responses sometimes appear to be.

Where LLMs can perhaps be most useful is in actually making sense of data, especially in our own organizations, where we are drowning in it without being able to learn anything from it. But to do that, we need to have the data already organized, cleaned of problems, and in place—a big database management task and the same constraint that is holding back machine learning.

So where does this leave us? A main goal of business leaders seems to be getting these models to cut head count. That is a big task. Search engines were arguably a bigger and more useful innovation at their start than LLMs are now, and it is very difficult to point to situations where they took over jobs. Reducing head count requires taking over individual tasks, and where we see that happening, it is driven by subject matter experts who really know what the tasks require, working with someone who has a general sense of what LLMs with the right data could do. It’s not IT departments and programmers doing this. The most common new tool is more and better chatbots to answer more questions from employees. Is that an improvement? Sure. Does it cut jobs? No. In some cases, it just improves existing bots; in others, it answers questions that were not any individual’s responsibility to address. Good for the organization, good for effectiveness, but it doesn’t cut head count.

In short, progress using LLMs is painstaking work deep in the weeds of reorganizing tasks. It’s not a magic wand, and the fact that it isn’t revolutionizing your workplace is to be expected.

Peter Cappelli is the George W. Taylor Professor of Management and director of the Center for Human Resources at the Wharton School.
