What Workers Say About Workplace AI: In Conversation With PAI’s Stephanie Bell

Advances in AI have the potential to radically change how we work. What those changes look like in practice, however, remains to be seen. Will the jobs of the future be easier or just more specialized? More creative or increasingly tedious?

Stephanie Bell, Research Scientist, PAI

Important clues can be found in the workplaces where AI systems are already being deployed. Partnership on AI’s (PAI) Stephanie Bell is currently concluding an international study of on-the-job experiences with AI, in which workers shared their stories through journals and interviews. As a research scientist with PAI’s AI, Labor, and the Economy Program, she hopes this work can help guide us to a future where everyone shares the benefits of AI — and that includes both employers and employees.

According to Stephanie, companies that don’t get input from their frontline workers when designing and implementing AI systems are “disempowering some of their best sources of knowledge and insight.”

“In addition to losing out on all of the business benefits that you could be seeing from the ideas themselves,” she said, “you’re also creating a culture that people are probably less excited to participate in.”

We recently caught up with Stephanie to learn more about what AI in the workplace looks like in practice and how it could be improved. Below is our conversation, edited for length and clarity.

I feel like when we talk about labor and automation, timeframes often seem to be the focus. “What is five years from now going to look like? What about 10 years?” Is that framework the wrong way to think about how automation and AI are changing labor?

I think the big question that I’m always grappling with is not just “How quickly is my job going to get taken by a machine or by a robot?” or “How quickly is somebody else’s job going to get replaced?” but also “How is that job, in the meantime, going to change as a result of our attempts to try and incorporate these new technologies into our workplaces and into our society?”

There are some folks who are obviously very dedicated to the idea of full replacement of human labor with machine and algorithmic labor. I think, for some people, that’s a utopia. But the process of how we get there, if we ever get there, is one that puts a lot of those technologies in direct contact with us as people in our workplaces — and really changes what it looks like for each of us to do our work.

Can you give some examples of how AI systems are already changing how we work?

Yeah, absolutely. We actually just did some research with three different groups of workers: call center workers based in India, data annotators based in sub-Saharan Africa who put together the datasets needed to build machine learning models, and warehouse workers in the United States. All three of those groups are seeing AI technologies in their workplaces and are expected to work with them on a daily basis.

In cutting-edge warehouses, for instance, you see robots that move around the warehouse and can bring you objects. You previously would have had to use your own two feet to go track those objects down, covering five, 10, or even 20 miles in long shifts, bending and lifting every 30 to 40 seconds. And now you’re standing in the same space while robots keep bringing you new things.

And in the call centers, previously you were the person who was kind of on the spot to deal with whatever the person on the other end of the phone line was trying to get your help with. You decided how to best relate to the caller, and were in charge of figuring out what they needed and how to make it happen. Now, there’s this whole suite of software that almost “listens in” to these calls and says, “You’re talking too quickly, you sound too emotional in this conversation, you didn’t give the proper greeting,” and then prompts you with what the software thinks might be the proper resolution to a given customer complaint after hearing some key words and listening to your responses in the conversation.
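
To give a rough sense of the kinds of checks software like this might run, here is a toy sketch in Python. Every detail in it (the thresholds, the greeting rule, the field names) is an illustrative assumption, not any real product’s logic.

```python
# A toy sketch of rule-based call monitoring. All thresholds, rules,
# and field names are illustrative assumptions, not a real product's.
from dataclasses import dataclass

@dataclass
class CallSegment:
    speaker: str            # "agent" or "customer"
    text: str               # transcript of this utterance
    duration_seconds: float

def check_segment(segment: CallSegment) -> list[str]:
    """Return human-readable flags for one agent utterance."""
    flags = []
    words = segment.text.split()

    # Pace check: flag speech faster than ~180 words per minute.
    wpm = len(words) / (segment.duration_seconds / 60)
    if wpm > 180:
        flags.append(f"Talking too quickly ({wpm:.0f} wpm)")

    # Script check: flag a missing standard greeting.
    if "thank you for calling" not in segment.text.lower():
        flags.append("Standard greeting not detected")

    return flags

print(check_segment(CallSegment("agent", "Hi what do you need", 1.2)))
# -> ['Talking too quickly (250 wpm)', 'Standard greeting not detected']
```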

And for the data annotators, to say a little bit more about what their job looked like, they would review driving footage frame-by-frame and draw bounding boxes around specific objects. Boxes that basically say, “This is a stoplight. This is a car. This is a cat in the middle of the road.” This annotated data is used to train self-driving car systems. It’s a much more advanced and specialized version of what you and I do when we complete CAPTCHAs on websites to prove that we’re not robots.
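
To picture the output of that work, here is a minimal sketch of what one annotated frame might look like as data. The field names and pixel format are assumptions for illustration, not any particular labeling platform’s schema.

```python
# A minimal sketch of one annotated frame. The field names and pixel
# format are assumptions for illustration, not a real platform's schema.
from dataclasses import dataclass

@dataclass
class BoundingBox:
    label: str   # e.g. "stoplight", "car", "cat"
    x_min: int   # pixel coordinates of the box's corners
    y_min: int
    x_max: int
    y_max: int

# One frame of driving footage: every relevant object gets a labeled box.
frame_0 = [
    BoundingBox("stoplight", x_min=412, y_min=60,  x_max=440, y_max=130),
    BoundingBox("car",       x_min=100, y_min=300, x_max=260, y_max=420),
    BoundingBox("cat",       x_min=310, y_min=390, x_max=350, y_max=430),
]
```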

For these data annotators, there’s new machine learning technology that takes all of that data that they helped build previously, and says, “Okay, I will make reasonable projections of what the next frame in a video will look like after we’ve already identified the objects in the first frame of the video.” It then automatically draws a box around, say, a stop sign in the next 500 frames of the video.
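
And here is a toy illustration of that propagation step: project each labeled box forward under an assumed constant per-frame motion. Real systems use learned trackers rather than anything this simple, which is exactly why annotators still have to clean up the results. Boxes here are plain tuples so the sketch stands alone.

```python
# Toy label propagation: auto-draw each box in later frames by sliding
# it along an assumed constant per-frame motion. Real systems use
# learned trackers; this simple model is only for illustration.

def propagate_frames(boxes, motion, n_frames):
    """Project (label, x0, y0, x1, y1) boxes forward n_frames."""
    frames = []
    for _ in range(n_frames):
        next_boxes = []
        for label, x0, y0, x1, y1 in boxes:
            dx, dy = motion.get(label, (0, 0))  # pixels moved per frame
            next_boxes.append((label, x0 + dx, y0 + dy, x1 + dx, y1 + dy))
        boxes = next_boxes
        frames.append(boxes)
    return frames

# A stationary stop sign keeps its box; a car moving right drifts
# 12 pixels per frame, so its auto-drawn boxes degrade over time and
# eventually need a human correction.
frame_0 = [("stop sign", 412, 60, 440, 130), ("car", 100, 300, 260, 420)]
later = propagate_frames(frame_0, motion={"car": (12, 0)}, n_frames=500)
print(later[-1][1])  # -> ('car', 6100, 300, 6260, 420): way off-frame
```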

Pretty uniformly, the data annotators said, “I think it’s great. It saves me a ton of time, but it also has this other set of flaws.” Some of these flaws had to do with the operation of the algorithm and its accuracy. But some of the annotators flagged that it took away some of the artistry of their previous job. It shifted the work from this craft-based thing into almost editorial work or, in the words of more than one person I spoke with, into just cleaning up the messes of this algorithm that tries to do the job you used to do, but much less accurately.

Those sound like really interesting cases at the frontier of human/AI cooperation in the workplace. Why were you talking to these people?

I take as a starting point for my research that both technology and the economy are only useful so far as they benefit humanity. Put like that, it’s honestly a pretty banal statement that I think a lot of people would agree with. But the further you get into any given technology development or broader economic process, the easier it is to get sidetracked by technological advances, or the latest quarterly profits, or annual GDP numbers, and lose track of those ultimate goals. And one of the things that has become increasingly clear to me is that a lot of the ways that artificial intelligence is being developed and deployed at present are decoupled from what would generate broadly shared prosperity. The financial gains, it seems, are concentrated in the hands of a relatively small number of companies and owners.

And there are real harms on the other side of that exchange, where if you automate a job, or if you radically reduce job quality, there’s a person at the other end who holds that job. In the cases of some of these industries, millions of people hold that job. And what do we get for all of these changes that we’re making in pursuit of these kinds of technological goals? In some instances, we don’t get very much at all. We don’t get much of what we’d be able to identify as genuine productivity increases with benefits that could be spread across the rest of humanity. You probably get some pretty handsome profits for the companies who came up with the products, but you also get very clear, real, human harms on the other side of that. These harms come in the form of fewer jobs, lower pay for work that requires less specialization than before, increases in physical and mental stress on the job, and decreases in autonomy, privacy, and dignity.

What surprised you the most when talking to these groups of workers?

I think the one that came through most fully for me, honestly, is how beneficial quite a few of the workers found these technologies in their jobs. I think it’s fair to say, loosely, there’s one camp that’s very excited about the benefits of these technologies, particularly to businesses. And then there’s another camp that’s rightly very concerned about the harms to the workers that interact with them. There hasn’t been as much focus on what the workers think about these technologies and their purported benefits. I heard a lot of references, unprompted, from workers about how much they appreciated some of those benefits, like increased efficiency, accuracy, or ease in completing their work. Of course, it wasn’t a wholly positive story. They also said that those benefits came with real trade-offs. High physical and mental stress, for instance, from AI-driven performance targets designed to push workers ever harder. And higher degrees of frustration when the technology wasn’t doing the job as well as they could, meaning they had to fix the problem and sometimes be blamed for those failures anyway. I think their reflections on the ways that these transformations are happening offer some really useful insights for the industry, potential “win-win” directions for these technologies.

The big point worth emphasizing here, one that often gets overlooked when talking about technology — and especially when talking about AI — is that humans are still in charge of all of the relevant decisions. Humans decide what to build, how to use it, when to use it. Those choices define the impacts of technology more than anything that we could consider to be inherent to AI or AI in workplaces. In my research, I heard from workers using the same AI technologies in different workplaces. Whether their experiences with them were positive or negative often came down to managerial choices. Things like choosing to use AI assessments to coach workers versus using them as inputs for performance evaluation decisions. Or using AI to give workers additional information they could draw on to make their own decision versus taking that decision away from them and assigning it to an algorithm. Workers are being impacted by the decisions of executives, managers, and AI developers here. The workplace AI product itself is just the medium for those decisions. And they can be made in ways that harm workers (through ignorance or indifference) or they can be made in ways that help and benefit workers.

“There hasn’t been as much focus on what the workers think about these technologies and their purported benefits.”

What would these “win-wins” look like in practice? What would you like to see?

So I think, for starters, bringing workers into the process when creating a new technology product — from design to implementation — is a real opportunity to gain potentially revenue-generating or cost-saving insights from your frontline. At the same time, including those workers increases their own investment in that technology, their likelihood of adopting it, and their ability to give useful feedback on how well it fits into their current workflows: all of these very tactical things that really affect how much value you’re able to capture as a business from any new technology.

I think the second point is that as companies are thinking about the types of products that they want to create — and use these really powerful technologies to build — they should be asking what tasks are they focusing on and why. There’s some recent research out from a team based at MIT and Duke that I think is really compelling and aligns with a lot of what I was seeing in the field, which is that people are much more likely to adopt something if it gives them the opportunity to continue to do their core activities. For the folks I was talking with, the more these technologies intruded on the aspects of their job that they found satisfying, the less they appreciated those technologies. As opposed to something that augmented their ability to accomplish their core tasks: their efficiency, their speed, how accurately they could do something.

There are a lot of accuracy technologies built into the smartest warehouses these days, and many of the warehouse workers I spoke with appreciated the technologies that made it harder for them to get something wrong, which is great. Everybody loves not screwing up — myself included. So how can we use these insights about where workers actually want and appreciate help? I think it points away from a brute force approach, like, “Let’s take all of this off your plate and automate it,” or, “How can we use this monitoring tool to wring maximum productivity from you?” and toward something like, “How do we help you — on your own terms — do this in a more efficient, more powerful, and (frankly, from the company’s perspective) more profitable fashion?”

So Stephanie, what’s next for this work?

So, the reason we’ve been doing this isn’t just a grand philosophical experiment on “How do we think about AI and its various interfaces with humanity in the context of the workplace?” We’re aiming to create a set of specific, targeted, actionable commitments for companies who are willing to take this issue seriously. We’ve been referring to them as “the shared prosperity targets.” And the goal is to do something for artificial intelligence’s impact on labor that is similar to climate change commitments like net-zero emissions targets. A helpful universal metric that companies can either assess themselves against or bring in some kind of third-party assessor to evaluate. These targets will basically measure the impact of new technologies on the workforce: Are they leading us towards more access to better jobs or are they leading us away from that?

What might it look like if companies don’t commit to targets like these, if we just maintain the status quo?

I think a lot of optimists in this arena have discussed the idea that algorithms and robots are going to take some of the most “dull, dirty and dangerous” jobs. Take the stuff that we find boring and aren’t interested in and, as a result, leave all of us humans these jobs that are creative and empathetic and allow us to apply our human discernment. But the way that I observed this playing out right now, in these places with really high degrees of interaction with automation and AI systems, is that the combination of AI technology and managerial decisions in these jobs is sending them in the opposite direction.

To go back to our data annotators in sub-Saharan Africa, they went from having this craft-based task where they could take pride in their ownership and quality from start to finish to jobs where they give algorithmic outputs a “thumbs up” or make some adjustments to fix them. If we go back to our warehouses, the people who used to be roaming the warehouse and seeing how the whole thing functioned are now in the same spot all day getting stuff brought to them by all of the robots. And so their universe of problem-solving and ability to make process contributions has shrunk radically. Now they only see what’s at their station. And the call center workers who previously had free rein to figure out how to empathize with a customer who might be really frustrated with something that the company did, they now have to follow this real-time script and it almost turns them into this robot that just makes the customer even angrier sometimes. And that’s on top of the basic worker safety concerns, or privacy concerns created by the ways some workplaces use these technologies.

If someone wants to stay updated on this work, the shared prosperity targets and your research, how can they do that?

First, we always love people signing up for our email list. I’m sure that you’ll drop a nice link into this interview for us.

Second, if you’re reading this and thinking, “Hmm, this makes an awful lot of sense. I wonder what this would look like in my workplace?” we’re always keen to find people who are up for helping us test and refine some of the ideas that we’re working on, so please reach out. Especially if you’re somebody who is building workplace AI technology, working with it as a worker, or using it (or considering using it) in your organization.

Also, later this year we’ll be releasing a report on the research I discussed. And we’d love for people to read the report, give us feedback, and spread the word.