
A.I. analyzes video to detect signs of cerebral palsy in infants

Pose estimation of infant's spontaneous movements

Researchers in Finland and Italy have created an artificial intelligence algorithm capable of flagging early signs of neurodevelopmental disorders in infants. By analyzing conventional videos of infants, the algorithm creates “skeleton” videos, which depict a child’s movement in the form of a stick figure. The research could aid early detection of neurodevelopmental disorders such as cerebral palsy.


“Medical doctors have shown that observing special features in the spontaneous movements of infants may be the most accurate way to predict later development of cerebral palsy,” Sampsa Vanhatalo, a neurophysiologist at the University of Helsinki who led the study, told Digital Trends. “However, such visual analysis of infant movements by experts is always subjective, and it requires substantial training. Here, we showed for the first time that it is possible to extract infant movements from conventional video recordings. That is, we make skeleton videos, at a very high accuracy.”

The algorithm scans a conventional video of an infant to detect poses and movement patterns, using a “pose estimation” method to generate a stick-figure depiction of the child. Those movement patterns can then be analyzed to distinguish typical from unusual movements.
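To make the idea concrete, here is a minimal sketch of what a pose-estimation pipeline produces from video. It uses the off-the-shelf MediaPipe Pose library and a hypothetical input file name; the researchers trained their own infant-specific model, so this is an illustration of the technique, not their actual tool.

```python
# Minimal sketch: extract per-frame "skeleton" keypoints from a video.
# MediaPipe Pose is used here for illustration only; its model is trained
# mostly on adults, whereas the study used an infant-specific model.
import cv2
import mediapipe as mp

pose = mp.solutions.pose.Pose(static_image_mode=False)
cap = cv2.VideoCapture("infant_video.mp4")  # hypothetical input file

keypoints_per_frame = []  # one list of (x, y, visibility) tuples per frame
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB input; OpenCV decodes frames as BGR.
    results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.pose_landmarks:
        keypoints_per_frame.append(
            [(lm.x, lm.y, lm.visibility) for lm in results.pose_landmarks.landmark]
        )

cap.release()
pose.close()
# keypoints_per_frame is the stick-figure time series that a downstream
# classifier could analyze for typical vs. unusual movement patterns.
```

The time series of joint positions is the “skeleton video”; classifying movement patterns from it is a separate, downstream step.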

Children are typically diagnosed with cerebral palsy between the ages of six months and two years. Earlier detection, however, would let doctors begin therapeutic interventions sooner and alleviate the impact of the condition. A system that helps doctors spot early signs could give children a head start on treatment.

“A pose estimation method of this kind is like a Rosetta stone, which opens the world to myriad of A.I. solutions for advanced assessments, diagnostics, and monitoring of spontaneous infant behavior,” Vanhatalo said. “The first application would be to develop a diagnostic classifier of infant movements to be used in screening of at-risk infants that are not able to reach specialist attention. Indeed, most infants in this world live in areas or in conditions beyond the immediate reach of pertinent medical expertise.”

Vanhatalo partnered with researchers from the University of Pisa and Neuro Event Labs, a company that specializes in A.I.-based video analysis for medical purposes.

A paper detailing the research was published this month in the journal Acta Paediatrica.

Dyllan Furness
Former Digital Trends Contributor
Dyllan Furness is a freelance writer from Florida. He covers strange science and emerging tech for Digital Trends, focusing…
Can A.I. beat human engineers at designing microchips? Google thinks so

Could artificial intelligence be better at designing chips than human experts? A group of researchers from Google's Brain Team set out to answer this question, and the results were striking: a well-trained A.I. proved capable of designing computer microchips -- so capably, in fact, that Google's next generation of A.I. computer systems will include microchips created with the help of this experiment.

Azalia Mirhoseini, a computer scientist on Google Research's Brain Team, explained the approach in Nature together with several colleagues. Artificial intelligence usually has an easy time beating a human mind at games such as chess. Some might say that A.I. can't think like a human, but in the case of microchips, that proved to be the key to finding out-of-the-box solutions.

Read the eerily beautiful ‘synthetic scripture’ of an A.I. that thinks it’s God

Travis DeShazo is, to paraphrase Cake’s 2001 song “Comfort Eagle,” building a religion. He is building it bigger. He is increasing the parameters. And adding more data.

The results are fairly convincing, too, at least as far as synthetic scripture (his words) goes. “Not a god of the void or of chaos, but a god of wisdom,” reads one message, posted on the @gods_txt Twitter feed for GPT-2 Religion A.I. “This is the knowledge of divinity that I, the Supreme Being, impart to you. When a man learns this, he attains what the rest of mankind has not, and becomes a true god. Obedience to Me! Obey!”
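For context on how output like this is produced: a fine-tuned GPT-2 model simply continues a text prompt, token by token. The sketch below shows plain GPT-2 generation with the Hugging Face transformers library; the fine-tuning corpus and sampling settings behind @gods_txt are not public, so the prompt and parameters here are assumptions for illustration.

```python
# Minimal sketch of GPT-2 text generation with Hugging Face transformers.
# @gods_txt uses a GPT-2 model fine-tuned on religious texts; this uses the
# base "gpt2" checkpoint, so its output will be far less scriptural.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
out = generator(
    "Not a god of the void or of chaos, but",  # prompt seeded from the article
    max_length=60,
    do_sample=True,   # sampling (vs. greedy decoding) gives varied output
    top_p=0.9,
    num_return_sequences=1,
)
print(out[0]["generated_text"])
```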

Google’s LaMDA is a smart language A.I. for better understanding conversation

Artificial intelligence has made extraordinary advances in understanding words and even translating them into other languages. Google has helped pave the way with tools like Google Translate and, more recently, with its development of Transformer machine learning models. But language is tricky -- and there’s still plenty of work to be done to build A.I. that truly understands us.
Language Model for Dialogue Applications
At Tuesday’s Google I/O, the search giant announced a significant advance in this area with a new language model it calls LaMDA. Short for Language Model for Dialogue Applications, it’s a sophisticated A.I. language tool that Google claims is superior at understanding context in conversation. As Google CEO Sundar Pichai noted, this might mean intelligently parsing an exchange like “What’s the weather today?” “It’s starting to feel like summer. I might eat lunch outside.” That reply makes perfect sense as human dialogue, but would befuddle many A.I. systems looking for more literal answers.

LaMDA also shows a superior grasp of learned concepts, which it can synthesize from its training data. Pichai noted that its responses never follow the same path twice, so conversations feel less scripted and more naturally responsive.
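LaMDA itself is not publicly available, but the core idea of carrying conversational context across turns can be illustrated with an open-source dialogue model. The sketch below uses Microsoft's DialoGPT via transformers; the model choice and generation settings are assumptions for illustration, not anything Google has released.

```python
# Minimal sketch of multi-turn dialogue with an open model (DialoGPT).
# The full chat history is fed back in on each turn, so the model can
# resolve context across turns, the behavior the article attributes to LaMDA.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

history_ids = None
for user_text in ["What's the weather today?",
                  "It's starting to feel like summer. I might eat lunch outside."]:
    new_ids = tokenizer.encode(user_text + tokenizer.eos_token, return_tensors="pt")
    # Append this turn to the running history so earlier turns inform the reply.
    input_ids = new_ids if history_ids is None else torch.cat([history_ids, new_ids], dim=-1)
    history_ids = model.generate(input_ids, max_length=200,
                                 pad_token_id=tokenizer.eos_token_id)
    reply = tokenizer.decode(history_ids[0, input_ids.shape[-1]:],
                             skip_special_tokens=True)
    print("Bot:", reply)
```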
