
Women with Byte: Vivienne Ming’s plan to solve ‘messy human problems’ with A.I.


Building A.I. is one thing. Actually putting it to use for the betterment of humanity is another entirely. Vivienne Ming does both.

As a theoretical neuroscientist and A.I. expert, Ming is the founder of Socos Lab, an incubator that works to find solutions to what she calls “messy human problems” — issues in areas like education and mental health, where problems don’t always have clear-cut solutions.


“Most of what we do is about solving some very grounded real-world problems — often with very little data,” Ming tells Digital Trends.

The company’s latest pursuit? Identifying when people are starting to show subtle signs of stress and anxiety. To do so, it’s developing A.I. that can analyze speech patterns, eye movements, and other biometric indicators to determine if a person is feeling stressed or anxious — even if they themselves aren’t aware of it. “We’re going to build a model of you and try to understand what the stressors are in your life and ways that you might be able to intervene,” Ming says.
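Conceptually, what Ming describes resembles a common pattern in affective computing: extract biometric features and train a classifier to predict a stress label. The sketch below is purely illustrative, with synthetic data and invented feature names; it does not reflect Socos Lab’s actual models, and only shows the basic shape of such a system.

```python
# Hypothetical illustration only: a toy classifier that maps made-up biometric
# features (speech-pitch variability, blink rate, heart rate) to a stress label.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=0)

# Synthetic, invented data: one row per observation, with a binary "stressed"
# label and three features whose averages shift when a person is stressed.
n = 500
stressed = rng.integers(0, 2, size=n)               # 0 = calm, 1 = stressed
pitch_var = rng.normal(1.0 + 0.5 * stressed, 0.3)   # speech pitch variability
blink_rate = rng.normal(15 + 8 * stressed, 4.0)     # blinks per minute
heart_rate = rng.normal(70 + 12 * stressed, 6.0)    # beats per minute
X = np.column_stack([pitch_var, blink_rate, heart_rate])

X_train, X_test, y_train, y_test = train_test_split(X, stressed, random_state=0)

# A plain logistic-regression model stands in for whatever a real system would
# use; it simply learns to map the biometric features to the stress label.
clf = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```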

What would such a system be used for? According to Ming, a stress-spotting A.I. program could be used by universities, businesses, and community health organizations to monitor the mental health of groups of people in those organizations. “You could see that some new policy at your company or some change at your university was driving stress — the kind of stress that predicts long-term health consequences,” she says.

However, Ming also acknowledges the potential for this tech to be used unethically, and underscores that she doesn’t want anybody using Socos technology as a tool for secretly or invasively monitoring people’s mental health. “I have a background in that space, and bad things happen when companies know what’s going on with individuals,” she explains.

Making a better person | Vivienne Ming | TEDxBerkeley

Taking a stand

Ming speaks from experience. Once upon a time, she served as the chief scientist at Gild, a company created with the goal of using artificial intelligence to eliminate bias in the hiring process. Unfortunately, somewhere down the line, Ming reportedly discovered that companies were using the software in ways that actually made bias worse. Worse still, rather than seeing this as a problem, clients were asking Gild for new tools that would amplify bias in the hiring process even further.

Ming left the company when she found that her opposition to creating such tools was a problem for other executives there. There are many ways that A.I. created for benevolent reasons can be abused, and Ming says anyone working in the field should be ready to stand up against that and, if necessary, leave a company if things start moving in that direction.

“You might say, ‘I would never allow someone to abuse this system,’ and then your startup is about to run out of money or, on the flip side, someone’s offering you a billion dollars,” Ming says. “I mean literally offering you a billion dollars to do it. Now it’s not so easy to say no. If you’re not willing to walk away from situations like that, you shouldn’t put yourself in those situations in the first place.”

Obviously, an A.I. system that can determine how people are feeling could be abused if it ends up in the wrong hands. Imagine your employer or the government using your innermost feelings against you. Ming says she’s working hard to make sure her A.I. is built so that it can only be used to benefit people, and warns that powerful A.I. systems can become “authoritarian tech” when things go wrong.

“For me, the two biggest concerns are the intentional and unintentional abuse of these systems,” Ming says.

A.I. as a force for good

Importantly, Ming isn’t just a believer in the virtues of artificial intelligence. She also makes a strong case for why fears about A.I. are often overblown. Alarmists warn that A.I. could become hyperintelligent and take over the world in the not-too-distant future, but Ming says she’s not worried about that at all. In her eyes, the A.I. we have today doesn’t resemble anything close to the A.I. apocalypse scenario that’s bandied about in science fiction.

“We haven’t invented anything that could become so smart that it would begin to think like us, much less become a superintelligence,” Ming says. “This is a mediocre metaphor, but it’s like we’ve invented these astonishing savants that can play Go vastly better than the best human in the world, but they don’t actually understand anything about Go. It just knows that if there’s a Go board in front of it, do these things and it wins. That’s it.”

The Future of Human Potential | Dr. Vivienne Ming | SingularityU South Africa

Another common A.I. fear is that it’s going to take away jobs and render humanity obsolete in the workforce. Ming doesn’t buy this one either — at least not completely. While she acknowledges that certain highly routine jobs will likely be automated in the future, she also points out that automation has huge potential to create new jobs. What remains to be seen is whether people will be ready for them. Ming says she’s not confident many people are properly equipped for the next generation of work, and argues that we should focus on using A.I. to help prepare them.

“How do you take highly creative people, that know how to do things, and then massively increase their productivity in the economy by just turning them loose from all of the constraints of the experience that existed before?” Ming asks. “A.I. will create lots of jobs. Our responsibility is how many people we make ready for those jobs. Right now, I would be shocked if it’s more than one or two percent of the labor force.”

Original ideas

So how do we do that? Education is part of it — though Ming thinks just getting a good education isn’t going to be enough when A.I. starts majorly influencing the world of work. She says we need to start teaching people how to utilize these new tools in creative and unique ways, rather than just teaching them how things function.

“The professional middle class is about to get blindsided. They were promised that if they got a good education, they’d have a job for the rest of their lives,” Ming says. “I’m telling you that if we can keep that promise 10 years from now, I would be stunned.”

Learning to code is great, Ming says, but plenty of people already know how to code, and A.I. itself is writing more and more of it. What will set people apart in the future is not what they know how to do but what original ideas they bring to the table.

The future, as Ming sees it, lies in helping people pinpoint exactly what that is.

Thor Benson
Former Digital Trends Contributor