
Instead of stealing jobs, what if A.I. just tells us how to do them better?

In the early part of the twentieth century, a management consultant and mechanical engineer named Frederick Taylor published a book titled The Principles of Scientific Management. Workplace inefficiency, Taylor argued, was one of the greatest crimes in America; it robbed workers and employers alike of the prosperity they deserved. For example, Taylor noted the “deliberate loafing” the bricklayers’ union forced on its workers at the time by limiting them to just 275 bricks per day when working on a city contract, and 375 per day on private work. Taylor had other ideas. In the interest of efficiency, he believed that every single act performed by a workforce could be studied and modified to make it more efficient, “as though it were a physical law like the Law of Gravity.”

Others took up Taylor’s dream of an efficient, almost mechanised workforce. Contemporaries Frank and Lillian Gilbreth studied the science of bricklaying, introducing ambidexterity and special scaffolds designed to reduce lifting. The optimal number of motions bricklayers were told to perform was pared down to between two and five depending on the job, and new measures were introduced to keep track of the number of bricks an individual laid — to both incentivize workers and reduce wastage.

It’s now possible to offer workers real-time feedback in a way no human manager ever could.

Like many management theories, Taylorism had its moment in the sun, before being replaced. Today, however, its fundamental ideas are enjoying a surprising resurgence. Aided by the plethora of smart sensors and the latest advances in artificial intelligence, it’s now possible to monitor workers more closely than ever, and offer them real-time feedback in a way that no (human) manager ever could.

A recent study from the University of Waterloo showed how motion sensors and A.I. can be used to extract insights from expert bricklayers by equipping them with sensor suits while they worked to build a concrete wall. The study discovered that master masons don’t necessarily follow the standard ergonomic rules taught to novices. Instead, they employ movements (such as swinging, rather than lifting, blocks) that enable them to work twice as fast with half the effort.

“As we all know, [an] ageing workforce is a threat to the national economy,” researcher Abdullatif Alwasel told Digital Trends. “In highly physical work, such as masonry, the problem lies in the nature of work. Masonry is highly physical and repetitive work: two major factors that are known to cause musculoskeletal injuries. However, when this kind of work is done in an ergonomically safe way, it doesn’t cause injuries. This is apparent through the percentage of injuries in expert workers versus novice or less experienced workers. [Our team’s work] looks at using A.I. to extract safe postures that expert workers use to perform work safely and effectively as a first step towards creating a training tool for novice workers to graduate safe and effective masons and to decrease the number of injuries in the trade.”


Alwasel describes the team’s current work as a “first step.” By the end of the project, however, they hope to develop a real-time feedback system that alerts workers whenever they use the wrong posture. Thanks to the miniaturization of components, it’s not out of the question that such a sensor suit could one day be used on construction sites across America. As in Taylor’s dream, both workers and employers stand to benefit from the enhanced levels of efficiency.
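To make the idea concrete, here is a minimal sketch of what such a real-time posture-feedback loop might look like. Everything in it is an illustrative assumption — the joint names, the safe-angle thresholds, and the alert logic are invented for the example, not taken from the Waterloo team’s actual system.

```python
# Hypothetical posture-feedback loop: compare joint angles from a sensor
# suit against assumed safe ranges and flag anything outside them.

SAFE_RANGES = {
    "trunk_flexion": (0, 20),   # degrees; illustrative threshold only
    "knee_flexion": (0, 60),
}

def check_posture(reading):
    """Return a list of alerts for any joint angle outside its safe range."""
    alerts = []
    for joint, angle in reading.items():
        low, high = SAFE_RANGES.get(joint, (float("-inf"), float("inf")))
        if not (low <= angle <= high):
            alerts.append(f"{joint}: {angle} deg outside safe range {low}-{high}")
    return alerts

# Simulated stream of sensor readings: a safe lift, then bending from the back.
stream = [
    {"trunk_flexion": 12, "knee_flexion": 45},
    {"trunk_flexion": 38, "knee_flexion": 10},
]

for reading in stream:
    alerts = check_posture(reading)
    print("OK" if not alerts else "; ".join(alerts))
```

A production system would of course derive its safe ranges from the expert masons’ data rather than hard-coded guesses, and would deliver alerts haptically rather than on a console.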

“Our next step is to find out whether the concept of expert safe workers applies to other trades that have similar situations,” Alwasel said. “I think commercialization is a final step that has to be done to make use of this technology and we are looking for ways to do that.”

Objects that nudge back

It should be noted that the classical concept of Taylorism is not always viewed favorably. Critics point out that it robbed individuals of their autonomy, that it made jobs more rote and repetitive, that it could harm workers’ wellbeing by pushing them to work at unsafe speeds, and that it assumed speed and efficiency were the ultimate goal of… well, everything, really.

As with so much of modern technology, a lot depends on what we gain versus what we lose.

It’s difficult to criticize a project like the University of Waterloo’s, which is focused on reducing injuries among the workforce. But this same neo-Taylorist approach can be seen throughout the tech sector. In Amazon’s warehouses, product pickers (or “fulfillment associates”) are given handheld devices that reveal where individual products are located and, via a routing algorithm, tell them the shortest possible journey to get there. The devices also collect constant, real-time streams of data concerning how fast employees walk and complete individual orders, thereby quantifying productivity. Quoted in an article for the Daily Mail, a warehouse manager described workers as “sort of like a robot, but in human form.” Similar technology is increasingly used in warehouses (not just Amazon’s) around the world.
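A routing algorithm like the one those handheld devices use can be sketched in a few lines. The version below is a deliberately simple greedy heuristic — visit the nearest unpicked item next, measuring walking distance as Manhattan distance between aisle coordinates — and is an illustration of the general idea, not Amazon’s actual algorithm.

```python
# Illustrative picker-routing heuristic: greedily walk to the nearest
# remaining pick location on a warehouse grid.

def manhattan(a, b):
    """Walking distance between two (aisle, shelf) grid coordinates."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def greedy_route(start, items):
    """Order pick locations by repeatedly moving to the nearest one."""
    route, here, remaining = [], start, list(items)
    while remaining:
        nxt = min(remaining, key=lambda loc: manhattan(here, loc))
        route.append(nxt)
        remaining.remove(nxt)
        here = nxt
    return route

picks = [(5, 2), (1, 1), (4, 8)]
print(greedy_route((0, 0), picks))  # → [(1, 1), (5, 2), (4, 8)]
```

Real warehouse systems solve a harder version of this problem (it is essentially the traveling salesman problem, with aisle layouts as constraints), but even this toy version shows how a device can both direct a worker and, as a side effect, measure how closely the worker’s pace matches the computed route.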

It’s not just Amazon, either. A company called CourseSmart creates study aids that let teachers see whether their students are skipping pages in their textbooks, failing to highlight passages or take notes, or simply not studying. This information, even when it concerns students’ time outside of lessons, can be fed back to teachers. The dean of a university business school described the service to the New York Times as “Big Brother, sort of, but with a good intent.” The idea is to find out exactly which practices produce good students, and then nudge them in that direction.

These “nudges” form an increasingly large part of our lives. Rather than the subtle nudges of previous “dumb” objects (for example, the disposability of a plastic cup, which starts disintegrating after a few uses and therefore encourages you to throw it away), today’s smart technology means we can be given constant feedback on everything from our posture to which route to take to the bathroom for a quicker toilet break to how best to study. Autonomous technology challenges the autonomy of individuals.


Whether that’s a bad thing or not depends a whole lot on your perspective. In Against Autonomy, philosopher Sarah Conly argues that we should “save people from themselves.” It’s part of a larger argument that may begin with technology that modifies how you work, continue to the banning of cigarettes and excessively sized meals, and perhaps even extend to curbing how much of your paycheck you spend without setting aside proper savings.

There are no easy answers here. As with so much of modern technology (news feeds that show us only articles they think will be of interest, smart speakers in the home, user data exchanged for “free” services, etc.), a lot depends on what we gain versus what we lose. We might be very willing to have a smart exoskeleton that tells us how not to damage our backs when lifting heavy bricks. We may be less so if we feel that our humanity is minimized by the neverending push toward efficiency.

What’s not in question is whether the tools now exist to help make this neo-Taylorism a reality. They most certainly do. Now we need to work out how best to use them. To paraphrase chaos-theory mathematician Dr. Ian Malcolm (also known as Jeff Goldblum’s character in Jurassic Park), we’ve been so preoccupied with whether or not we could achieve these things, we haven’t necessarily thought enough about whether we should.

Luke Dormehl
Former Digital Trends Contributor
I'm a UK-based tech writer covering Cool Tech at Digital Trends. I've also written for Fast Company, Wired, the Guardian…