AI News: MIT researchers teach AI to spot depression

Can Artificial Intelligence Detect Depression in a Person’s Voice?

The notion that artificial intelligence could help predict whether a person is suffering from depression is potentially a big step forward, albeit one that raises questions about how it might be used.

More importantly, the model that MIT researcher Tuka Alhanai and fellow scientist Mohammad Ghassemi developed was able to recognize depression with a relatively high degree of accuracy by analyzing how people speak, rather than their specific responses to a clinician’s questions.

The potential benefit, Alhanai notes, is that this type of neural network approach could one day be used to evaluate a person’s more natural conversations outside a formal, structured interview with a clinician.

That could be helpful in encouraging people to seek professional help when they otherwise might not, due to cost, distance or simply a lack of awareness that something’s wrong.

Spotting patterns

The model focused on audio, video and transcripts from 142 interviews of patients, about 30 percent of whom had been diagnosed with depression by clinicians.

Specifically, it used a technique called sequence modeling, in which sequences of text and audio data from both depressed and non-depressed people were fed into the model.
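To make that idea concrete, here is a minimal sketch of how such a sequence model might be structured, assuming each interview is reduced to a sequence of acoustic feature vectors and a sequence of word embeddings. The feature dimensions, layer sizes and fusion strategy are illustrative assumptions, not the researchers’ actual architecture.

```python
# A minimal sketch of sequence modeling for this task, not the authors' code.
import torch
import torch.nn as nn

class DepressionSequenceModel(nn.Module):
    def __init__(self, audio_dim=40, text_dim=100, hidden=64):
        super().__init__()
        # Separate recurrent encoders for the audio and text sequences.
        self.audio_lstm = nn.LSTM(audio_dim, hidden, batch_first=True)
        self.text_lstm = nn.LSTM(text_dim, hidden, batch_first=True)
        # Fuse the final hidden states and predict a single depression score.
        self.classifier = nn.Sequential(
            nn.Linear(2 * hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, audio_seq, text_seq):
        # audio_seq: (batch, frames, audio_dim); text_seq: (batch, words, text_dim)
        _, (audio_h, _) = self.audio_lstm(audio_seq)
        _, (text_h, _) = self.text_lstm(text_seq)
        fused = torch.cat([audio_h[-1], text_h[-1]], dim=-1)
        return self.classifier(fused)  # raw logit; apply sigmoid for a probability

# Toy usage: one "interview" of 200 audio frames and 50 words, random features.
model = DepressionSequenceModel()
score = torch.sigmoid(model(torch.randn(1, 200, 40), torch.randn(1, 50, 100)))
```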

The researchers also found that the model needed considerably more data to predict depression solely from how a voice sounded, as opposed to what words a person used.

The researchers also say they want to better understand which specific patterns in the raw data the model identified as indicative of depression.

We know of the placebo and nocebo effects in medicine, in which blinded patients given sugar pills experience a medicine’s positive or negative effects because they expect those effects.

But if the patient is speaking at home into their phone, maybe recording a daily diary, and the machine detects a change, it may signal to the patient that they should contact the doctor.
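As a rough illustration of that kind of monitoring, and not part of the MIT work, the sketch below assumes the model produces one score per daily diary entry and flags the user when their recent scores drift well above their own baseline; the window lengths and threshold are arbitrary assumptions.

```python
# Hypothetical change detection over daily depression scores, for illustration only.
from statistics import mean, stdev

def should_flag(daily_scores, baseline_days=30, recent_days=7, threshold=2.0):
    """Return True if the last `recent_days` of scores sit more than `threshold`
    standard deviations above the user's own earlier baseline."""
    if len(daily_scores) < baseline_days + recent_days:
        return False  # not enough history to establish a baseline
    baseline = daily_scores[-(baseline_days + recent_days):-recent_days]
    recent = daily_scores[-recent_days:]
    spread = stdev(baseline) or 1e-6  # avoid dividing by zero on a flat baseline
    return (mean(recent) - mean(baseline)) / spread > threshold
```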

Artificial intelligence senses people through walls

X-ray vision has long seemed like a far-fetched sci-fi fantasy, but over the last decade a team led by Professor Dina Katabi from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) has continually gotten us closer to seeing through walls.

The researchers use a neural network to analyze radio signals that bounce off people’s bodies, and can then create a dynamic stick figure that walks, stops, sits, and moves its limbs as the person performs those actions.
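The sketch below shows the general shape of such a system, not the team’s actual architecture: a small convolutional network maps radio-reflection heatmaps to one confidence map per body joint, and the stick figure is drawn by connecting the peaks of those maps. The input format, layer sizes and keypoint count are assumptions for illustration.

```python
# A minimal sketch of mapping radio reflections to body keypoints, not RF-Pose itself.
import torch
import torch.nn as nn

N_KEYPOINTS = 14  # e.g. head, shoulders, elbows, wrists, hips, knees, ankles

class RFToPose(nn.Module):
    def __init__(self, rf_channels=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(rf_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            # One output channel per keypoint: a spatial confidence map for each joint.
            nn.Conv2d(64, N_KEYPOINTS, kernel_size=1),
        )

    def forward(self, rf_frames):
        # rf_frames: (batch, rf_channels, height, width) radio-reflection maps
        return self.net(rf_frames)

def keypoints_from_maps(conf_maps):
    # Turn each confidence map into a joint location by taking its peak.
    batch, k, h, w = conf_maps.shape
    flat = conf_maps.view(batch, k, -1).argmax(dim=-1)
    rows = torch.div(flat, w, rounding_mode="floor")
    cols = flat % w
    return torch.stack([rows, cols], dim=-1)  # (batch, keypoints, [row, col])
```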

“We’ve seen that monitoring patients’ walking speed and ability to do basic activities on their own gives health care providers a window into their lives that they didn’t have before, which could be meaningful for a whole range of diseases,” says Katabi, who co-wrote a new paper about the project.

“A key advantage of our approach is that patients do not have to wear sensors or remember to charge their devices.”

Besides health care, the team says that RF-Pose could also be used for new classes of video games where players move around the house, or even in search-and-rescue missions to help locate survivors.

A neural network trained to identify cats, for example, requires that people look at a big dataset of images and label each one as either “cat” or “not cat.” Radio signals, meanwhile, can’t be easily labeled by humans.
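One common way around that labeling problem, and the approach the through-wall result below implies, is cross-modal supervision: during training a synchronized camera watches the same scene, a vision-based pose estimator labels each camera frame, and the radio network learns to reproduce those labels from RF data alone. The sketch below illustrates such a training step; `vision_teacher` and `rf_student` are hypothetical placeholders, not real APIs.

```python
# A hedged sketch of cross-modal (teacher-student) supervision, for illustration only.
import torch
import torch.nn.functional as F

def training_step(rf_student, vision_teacher, rf_frames, camera_frames, optimizer):
    # The teacher's keypoint confidence maps act as labels, so no human ever has to
    # annotate the radio data directly.
    with torch.no_grad():
        target_maps = vision_teacher(camera_frames)

    pred_maps = rf_student(rf_frames)          # predictions from radio signals only
    loss = F.mse_loss(pred_maps, target_maps)  # match the teacher's confidence maps

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```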

Since cameras can’t see through walls, the network was never explicitly trained on data from the other side of a wall, which made it particularly surprising to the MIT team that the network could generalize to handle through-wall movement.