Transcript
00:00Hello. Hello. Hello. Hello.
00:04A look. A word. Hello.
00:07A first impression.
00:09And straightaway we know: I can trust her, but not him.
00:13She's lying. And so is he.
00:16But he seems reliable.
00:18Hello.
00:20The level of trustworthiness that you perceive in another person's face,
00:23even when they're complete strangers,
00:25can predict criminal sentencing decisions,
00:27up to and including capital punishment.
00:29It can predict hiring decisions.
00:33A brief glance at someone's face or the sound of their voice
00:36can affect our decisions.
00:39Gathering information from facial and vocal cues
00:43has been fundamental to social interactions.
00:46Language has only been around for tens of thousands of years,
00:50which, in evolutionary terms, is no more than a blink of the eye.
00:57First impressions can be alluring, but often deceptive.
01:01I'm looking forward to tomorrow.
01:03Our face and voice reveal a lot about us.
01:06Our mood, our disposition, our health.
01:09They point to what's going on inside us,
01:11and the cues they give can even be interpreted by artificial intelligence.
01:17Science fiction has been predicting this development for ages,
01:21but it's still hard to fathom,
01:23and it leaves us feeling sceptical and uneasy,
01:26because we're not used to it.
01:43We encounter strangers every day.
01:46A new face, an unfamiliar voice.
01:49Both unique and distinct.
01:53They express our individuality,
01:55but they also help us decide whether we like the person
01:59and whether we accept or reject their advances.
02:03Decisions we make instantly.
02:09With just 100 milliseconds of exposure,
02:12people already make up their mind
02:14about trustworthiness and competence and dominance,
02:16but that making up their mind takes, you know,
02:19several hundred milliseconds,
02:21and people need only a very quick glance.
02:23There are certain facial features, even in a static photograph,
02:27that convey levels of intelligence,
02:30and that can lead to judgements and biased decisions.
02:36John Freeman is looking at what happens in our brain
02:39after a short glance at someone's face.
02:42His theory: many of these instantaneous decisions
02:45are based on learned stereotypes.
02:49The same applies to voices.
02:52Pascal Belin continues to find evidence
02:55that we associate certain emotions and traits
02:58with how someone's voice sounds.
03:02We see voices as a type of auditory face.
03:06We need just one word to form an opinion.
03:09A voice like this...
03:13..is seen as inspiring and confident by most people,
03:17whereas this one leaves the listener thinking
03:21they wouldn't trust him with their money.
03:23Hello.
03:24Hello.
03:25Hello.
03:26Hello.
03:27What does science say?
03:29Do we all see the same thing when we look at someone's face?
03:33John Freeman uses a special morphing program
03:36to get a more accurate answer.
03:40He can alter gender, age, mood and character traits subtly.
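The morphing idea can be pictured as simple vector arithmetic: a face is encoded as numbers, and a trait axis is added in small steps. A minimal Python sketch, not Freeman's actual software; the encoding and the trait axis below are invented for illustration:

```python
# A toy face-morphing sketch: faces as coordinate vectors, traits as axes.
import numpy as np

def morph(face, trait_axis, strength):
    """Shift a face vector along a trait axis; small strengths = subtle changes."""
    return face + strength * trait_axis

# Hypothetical 6-number "face" encoding and an arbitrary trustworthiness axis.
neutral_face = np.array([0.0, 1.0, 0.5, 0.5, 1.0, 0.0])
trust_axis = np.array([0.1, -0.2, 0.0, 0.3, -0.1, 0.2])
trust_axis /= np.linalg.norm(trust_axis)  # unit length

for strength in (-0.5, 0.0, 0.5):  # untrustworthy, neutral, trustworthy
    print(strength, morph(neutral_face, trust_axis, strength))
```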
03:48If you ask hundreds of different subjects
03:51to judge the trustworthiness of these individual faces,
03:55you'll find that they generally agree
03:57in terms of being highly correlated with one another.
04:01So the same faces appear trustworthy
04:03or relatively untrustworthy across the board generally.
04:08Although we're all different,
04:10the result is surprisingly similar for everyone.
04:13At least if we're asked if someone is trustworthy.
04:17The first impression is when we decide
04:19who we want to communicate with, cooperate with
04:22or form a close relationship with.
04:28Is it surprising that people have these kinds of unconscious tendencies
04:33despite humans being such rational creatures?
04:36I would say not really, right?
04:38When we think evolutionarily,
04:40in terms of our evolutionary past,
04:43before we had verbal language as non-human primates,
04:47non-verbal communication and using facial appearance,
04:52using cues of the face, voice and body were really critical
04:56for survival, for the maintenance of resources,
04:59for building social groups.
05:01It can be attributed to our evolution.
05:05Making instant decisions about who is friend or foe
05:08greatly increased our chances of survival.
05:17As pack animals, we've always formed communities.
05:20Long before language played a role,
05:22humans developed a keen sense of how those around them felt.
05:28And being able to read the room is a huge advantage.
05:31If someone in the group is scared,
05:33your own life may be in danger too.
05:36If someone is seething with rage, you placate them or run.
05:43Our brains are still wired the same way today.
05:46As soon as we encounter someone new,
05:48we immediately attempt to establish whether they are with us or against us.
05:56But to what extent do these first impressions actually alter our behaviour?
06:01The evidence shows that they have a strong impact.
06:07They predict all sorts of downstream social outcomes
06:10and real-world consequences.
06:12And so, you know, the findings like faces that appear more competent
06:16are more likely to be elected to senator and governor positions
06:20in the United States,
06:22and even presidential candidates are more likely to win in the US.
06:28Competent-looking managers and attractive people are paid more,
06:32and defendants who look untrustworthy are given longer sentences.
06:39But what about our voices?
06:41We can hear fine nuances of confidence, dominance and competence too,
06:46even if they have little in common with the speaker's actual personality.
06:56Beyond words, a voice also conveys emotions
07:00and can even bring things to life.
07:06Puppets become real figures we relate to like other human beings.
07:17It illustrates how we instinctively attribute a human personality
07:21to anything that has a voice.
07:24With a puppet, for example,
07:27changing body cues and changing vocal cues
07:31alter the perception of the emotion
07:33and of the person's, or the puppet's, intentions.
07:38Oh, yes.
07:39Look me in the eyes.
07:40Just a little bit.
07:43Generosity.
07:44Sympathetic.
07:45Ambivalent.
07:48Our brains create real people from the voices they hear,
07:53even when the people aren't real.
07:57Once you give a machine a voice,
08:00it gets a personality as if it were human.
08:04It's an automatic reflex.
08:10And research shows that people's feelings change
08:13if their computer, car or coffee machine has a voice.
08:19The vocal acoustics we give machines
08:21could even determine how we interact with them.
08:27Furhat, wake up.
08:29Wake up.
08:30How can I help you?
08:32What can you do?
08:33You can, for example, ask me to introduce myself or to chat a little.
08:37Can you introduce yourself?
08:39I'm a Furhat robot,
08:40a social robot built to interact with people
08:42in the same way you interact with each other.
08:45So I can smile.
08:47And nod.
08:48Gabriel Skantze is one of Furhat's creators.
08:55A first prototype of the robot was launched in 2011.
08:59I looked a bit more crude back then
09:01with cables sticking out from my head.
09:04They came up with the idea to cover the cables with a fur hat.
09:07And that, ladies and gentlemen,
09:09is where the name Furhat comes from.
09:11I don't really need my fur hat anymore.
09:14I look pretty as I am, don't you think?
09:16I don't know where the original interest comes from, really.
09:19I think it's a very fascinating idea
09:22of creating an agent that interacts like a human
09:27and behaves like a human.
09:29It's fascinating in its own right,
09:31but it's also, again, back to the idea that if we can do that,
09:35we start to get a better understanding of how we as humans work.
09:40In the future, Gabriel Skantze wants Furhat
09:43to behave like a human during a conversation.
09:46But as soon as scientists try to transfer our human behaviours to machines,
09:51it quickly becomes apparent how complex our behaviours are.
09:59Today, Furhat is supposed to make small talk.
10:02The robot searches the web on its own for responses.
10:10What do you mean by that?
10:15You are quite stupid.
10:18That's a pretty rude thing to say, Furhat.
10:22So I have no idea what the robot will say next,
10:25so it's a surprise for me what it says,
10:28and it's a bit fascinating to see how the conversation unfolds.
10:34Although the conversation takes some unexpected turns,
10:37Furhat has already mastered the basics.
10:40When to speak, where his conversation partner is looking
10:45and how much eye contact is appropriate.
10:48The scientists have programmed Furhat with a whole range of emotional facial cues.
10:55However, the finer differences we express with our facial expressions and voices
10:59are proving trickier.
11:01So as humans, for example, we have these micro-expressions,
11:04so my eyes move a little bit all the time,
11:07I make small movements with my face,
11:10and we want the robot to have those small movements also,
11:13otherwise it looks very robotic and not very human-like.
11:17So we think that the face is extremely important,
11:20and the way we give feedback to each other and everything
11:23is expressed through the face,
11:25but also through the voice and the tone of our voice and so on.
11:30That's why it's so difficult for Furhat to react appropriately.
11:34The same word or the same sentence can come across very differently
11:38depending on the mood, the occasion or the person we're talking to.
11:44Unfortunately, there's no user manual for humans that Furhat can learn from.
11:49Not yet, anyway.
11:53There's plenty of cases where a face can be identical and the same features,
11:59but the context, the body and the voice
12:01dramatically changes how we understand that person.
12:04There's all sorts of different kinds of cues
12:07in terms of intonation, pitch contour, formant characteristics
12:12that change how we perceive other people's voices,
12:15the emotions that they're feeling, their intentions.
12:21How do we read moods?
12:23Marc Swerts is researching how tiny movements in our facial muscles
12:27can influence our communication.
12:33The eyebrows, cheeks, lips and chin
12:36all contribute to forming very different types of smiles.
12:43It's very subtle because it has to do with micro-expressions
12:46that you see around the eye region or the mouth region.
12:51Or you can fake a smile, like if I do this in a very fake manner,
12:56you can see that the person is pretending to be happy
12:59or being cynical or sarcastic, but it's not revealing
13:04what his or her true sentiments or emotions are.
13:10And it's not only smiling,
13:12it's also in the very subtle movements of eyebrows,
13:16the very subtle movements of blinking.
13:21A recent US TV show focused on body language.
13:25Dr Lightman was the protagonist of the crime show Lie To Me.
13:29He was an expert in micro-expressions,
13:32who believed facial expressions could expose lies and suppressed emotions.
13:37Huge scone.
13:39Shame and shame.
13:41Contempt.
13:43These expressions are universal.
13:47Can we really identify every single emotion just by practising?
13:52Some scientists think so.
13:54Apparently, all we need to do
13:56is consciously notice each millisecond-long facial expression.
14:00The results are used in market research
14:03to find out which commercials are most effective.
14:06Specially trained security teams at airports
14:09also analyse facial cues to spot potential terrorists.
14:16But is it really that easy to tell when criminals are lying?
14:20Hollywood wants us to think so.
14:2343 muscles combine to produce a possibility of 10,000 expressions.
14:27Now, if you learn them all, you don't need a polygraph.
14:31How much did we spend on this damn project?
14:33But the scientific world takes a slightly dimmer view.
14:36In real life, it's often much harder to do.
14:41So, for instance, there are these claims that from your micro-expressions
14:44you can see whether someone is lying or not,
14:46but that's close to impossible.
14:49So most of the time when people lie about something,
14:53you're close to chance level at guessing
14:56whether or not someone is speaking the truth.
14:58Ironically speaking, if your lie becomes more important,
15:02like if I'm lying about something which really matters,
15:05like I have to hide something, it's called the pink elephant effect,
15:09your cues to lying become clearer for the other person.
15:15So the more you try your best not to show that you're lying,
15:20the more likely it is that people will see that you're lying.
15:26How easy is it to tell when someone is lying?
15:29Marc Swerts is looking to children aged five and over for the answer.
15:36The children are asked to tell the prince in a computer game the truth.
15:42But lie to the dragon.
15:49They're supposed to help the prince hide from the dragon.
15:52Cameras and microphones record the children's behaviour
15:55in an attempt to find any differences.
16:00After recording numerous children,
16:02the results highlight signs that point to lying.
16:12When you look at the face, when they're being truthful,
16:16they have a very open and spontaneous kind of expression.
16:22When they're lying and they have the impression
16:26that they're being watched and being observed,
16:29you see that they have this sense of, oh, I'm being observed.
16:32And you can tell also from facial expressions around the mouth area,
16:36which is a more marked kind of expression than in, say, the truthful condition.
16:43And there's something about the voice.
16:45So when they're being truthful, they have a very soft, normal, warm voice.
16:50When they're lying, they tend to be a little bit more...
16:53using a creaky voice, like talking a little bit like this.
16:57But not every child showed the same cues,
17:00so it's not a reliable way to tell if they are telling the truth or not.
17:14Generally, we're much better at controlling our facial cues than our vocal cues.
17:20Every sound we produce is created by over 100 muscles all working closely together.
17:28Emotions alter muscular tension, which impacts the tone of our voices.
17:38Everything is controlled by different parts of the brain.
17:42The muscles in the chest and abdomen that create the required air pressure.
17:46Muscles in the tongue, lips and face that vary the voice.
17:50And, of course, the larynx and vocal cords.
17:54The faster they vibrate, when we become excited, for example, the higher the pitch.
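That vibration rate is the fundamental frequency, and it can be estimated straight from a recording. A minimal sketch using the librosa library; the file name is hypothetical:

```python
# Estimate how fast the vocal cords vibrate (fundamental frequency, in Hz).
import numpy as np
import librosa

y, sr = librosa.load("hello.wav", sr=16000)      # hypothetical recording
f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr)    # vibrations per second
print(f"median pitch: {np.median(f0):.0f} Hz")   # higher when excited
```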
17:59But does everyone hear the same thing when a stranger talks to us?
18:03Do we all come to the same conclusion in deciding if someone is trustworthy,
18:07extroverted or willing to try new things?
18:10Piyun Shuler is conducting research using a range of different voices.
18:22Although the group doesn't agree on everything, the data shows some clear tendencies.
18:29Artificial intelligence is being used to help identify them.
18:34My theory is that if a human can hear something, a computer can pick up on it, too.
18:43But it becomes a little spooky when we go beyond what a human can spot.
18:50We're now trying to assess whether the speaker has COVID-19 or not.
18:54Plus for yes and minus for no.
18:58We've got one vote for positive.
19:01It was negative.
19:03Here's the next voice.
19:11We now have three positives and one negative.
19:14I'm going to say that the speaker has COVID-19.
19:17The next voice is negative.
19:20We now have three positives and one negative.
19:23I'm going to say positive.
19:25Maurice?
19:26Yes, that's right.
19:28Diagnosing COVID by simply listening to someone's voice sounds risky,
19:32at least when we rely on the human ear.
19:35At the start of the pandemic,
19:37Björn Schuller trained an artificial intelligence on a range of voices.
19:41Is a more accurate diagnosis now possible?
19:44This is the asymptomatic negative case.
19:57And this is the symptomatic positive case.
20:11What we can see quite clearly on the right here are the overtones.
20:18There are lots more signals we can use too,
20:21like the uncontrolled vibration of the vocal cords
20:24that leads to irregularities in the stimuli,
20:27a certain throatiness, breathlessness that causes longer speech breaks.
20:41All these things give the computer enough examples to reach a decision
20:45and distinguish it from asthma or a cold.
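The side-by-side comparison described here can be reproduced with standard tools: compute a spectrogram of each recording so the overtones and irregularities become visible. A minimal sketch with made-up file names, not the researchers' own software:

```python
# Plot spectrograms of a negative and a positive case for visual comparison.
import numpy as np
import librosa
import librosa.display
import matplotlib.pyplot as plt

fig, axes = plt.subplots(1, 2, sharey=True, figsize=(10, 4))
cases = [("negative_case.wav", "asymptomatic negative"),
         ("positive_case.wav", "symptomatic positive")]
for ax, (path, title) in zip(axes, cases):
    y, sr = librosa.load(path, sr=16000)
    S = librosa.amplitude_to_db(np.abs(librosa.stft(y)), ref=np.max)
    librosa.display.specshow(S, sr=sr, x_axis="time", y_axis="hz", ax=ax)
    ax.set_title(title)
plt.show()
```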
20:55At least 85% of the diagnoses made by artificial intelligence were correct.
21:00What's more, computers can also identify ADHD, Parkinson's,
21:05Alzheimer's and depression by analysing voices.
21:09Anything that goes wrong in the body or brain impacts our voices.
21:16To make a diagnosis,
21:18artificial intelligence looks at up to 6,000 different vocal cues.
21:26The new technology could allow diagnoses
21:29to be made more easily and earlier.
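In outline, such a system reduces each recording to a vector of acoustic features and trains a classifier on labelled examples. A minimal sketch, assuming labelled recordings exist locally; it is not the system shown in the film, and a handful of features stands in for the thousands of cues mentioned:

```python
# Toy voice-screening pipeline: acoustic features -> random forest.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def vocal_features(path):
    """Summarise one recording as a fixed-length feature vector."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)  # timbre
    zcr = librosa.feature.zero_crossing_rate(y)         # noisiness/breathiness
    rms = librosa.feature.rms(y=y)                      # loudness contour
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1),
                           [zcr.mean(), rms.std()]])

# Hypothetical labelled file lists; replace with real data.
positive = ["voice_pos_01.wav", "voice_pos_02.wav"]
negative = ["voice_neg_01.wav", "voice_neg_02.wav"]

X = np.array([vocal_features(p) for p in positive + negative])
y = np.array([1] * len(positive) + [0] * len(negative))

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print(clf.predict([vocal_features("unknown_speaker.wav")]))  # 1 = positive
```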
21:34Every word we say reveals more about us than we realise.
21:39And as listeners, we are influenced by the person speaking to us.
21:44Subconsciously, we relate to the person speaking.
21:47We internalise their anxiety, uncertainty, excitement or happiness
21:52as if it were our own.
21:56It's a type of synchronisation that connects two people through mimicry.
22:05But in general, mimicry is something that we do observe a lot
22:09in normal kind of conversations.
22:13And it's reflected in various aspects of our communication,
22:19from the words we use, the syntax we use,
22:23the prosody we produce, so the intonation and the tempo,
22:26but also the non-verbal communication, for instance, smiling behaviour.
22:31The closer the relationship or desire for a relationship,
22:34the more intense our subconscious mimicry becomes.
22:39We also mimic more strongly when we want to be liked.
22:43A smile is the clearest signal.
22:47The smiling person triggers something,
22:49often triggers something of happiness in yourself.
22:52Like if you see a smiling person, you sometimes start to smile yourself.
22:57And so I don't know, maybe one of the attractive features of the Mona Lisa
23:01has exactly to do with that.
23:03There's something intriguing, something attractive
23:06about the painting because she elicits smiles, she elicits happiness.
23:13We allow ourselves to be influenced by someone else's mood.
23:17Marc Swerts wanted to take a closer look.
23:21In this experiment, the speaker is describing something to her audience.
23:25Her manner is animated and she smiles frequently.
23:32Her audience reacts similarly.
23:35They smile back, nod in agreement and give positive feedback.
23:44But what happens when the same speaker repeats the process, but more seriously?
23:49Her listeners also look more earnest.
23:52They appear to concentrate more and their reactions are more constrained.
23:59Synchronization signals empathy and interest in the other person.
24:04If our communication is successful, we tune into them more closely.
24:08And it's not just our facial cues that sync, it's our voices too.
24:20Trying to express an emotion vocally that we're not feeling is nearly impossible.
24:24So what transforms a voice into an instrument that can appeal to,
24:28persuade or motivate other people?
24:34Oliver Niebuhr has carried out numerous case studies and all have the same outcome.
24:40It's not what we say that counts, it's how we say it.
24:49The voice is an extremely complex, multilayered signal.
24:53We all have information that we want to impart,
24:56but absorbing it is hard work from a cognitive point of view.
25:01So we need to work on how we present it.
25:03Emphasizing words can send a clear signal indicating which parts of the conversation are important.
25:09We need to consider short pauses too.
25:14In fact, people who communicate like this are regarded as more likable.
25:20So it's all about how we use our voices to package the content.
25:25The phonetician is performing a short test.
25:28How well can his co-worker present a text he's reading from for the first time?
25:36Artificial intelligence is again used for analysis.
25:41The computer splits the voice into 16 parameters,
25:44including speed, rhythm, melody, volume and pauses,
25:50and calculates a score between 1 and 100 for acoustic appeal. This first attempt scores 47.5.
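The program's 16 parameters and their weighting aren't public, so as a rough illustration, here is how a few prosodic measurements could be mapped onto a 1-100 scale; the weights are made up:

```python
# Toy "acoustic appeal" score from three prosodic parameters.
import numpy as np
import librosa

def prosody_score(path):
    y, sr = librosa.load(path, sr=16000)

    # Pitch variation in octaves (listeners reportedly favour ~2 octaves).
    f0, _, _ = librosa.pyin(y, fmin=60, fmax=400, sr=sr)
    f0 = f0[~np.isnan(f0)]
    pitch_range = np.log2(f0.max() / f0.min()) if f0.size else 0.0

    rms = librosa.feature.rms(y=y)[0]
    pause_ratio = float((rms < 0.1 * rms.max()).mean())    # share of quiet frames
    loudness_var = float(rms.std() / (rms.mean() + 1e-9))  # loud/quiet switching

    raw = (0.5 * min(pitch_range / 2.0, 1.0)      # arbitrary illustrative
           + 0.25 * min(pause_ratio / 0.2, 1.0)   # weights, each capped at 1
           + 0.25 * min(loudness_var, 1.0))
    return 1 + 99 * raw                           # map onto a 1-100 scale

print(prosody_score("speech_sample.wav"))  # hypothetical recording
```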
25:59Our ears prefer a varied pitch, usually around two octaves.
26:03Charismatic speakers often switch between loud and quiet voices.
26:09It makes the voice sound more melodic.
26:17For centuries, this principle has been used by populists and demagogues to capture the attention of audiences.
26:23Because even ancient civilizations understood that the only way to motivate and inspire people
26:29is to get them to listen.
26:31Public speaking, projecting and modulating your voice to achieve the best result,
26:35was taught in classical antiquity.
26:38Now it's become a lost skill.
26:45It's possible to train your voice to convey information effectively.
27:00It's no different to learning new vocabulary or grammar.
27:08Oliver Niebuhr has developed a computer training program.
27:15The principle is fairly basic.
27:17The upper and lower lines show the pitch.
27:20Circles in different colors and size represent speed, volume and pauses.
27:36Users are shown what they can improve in real time.
27:42After one day of training, the speaker tries again.
27:46...and paycheck errors by 90 percent, saving both your time and your money.
27:50And it scores 12 points higher than the previous day.
27:55The main improvements are in pitch variation.
28:00His score has soared from 34.5 to 73.3.
28:04More than double his previous attempt.
28:10It's a clear improvement.
28:13Other voices are so seductive that we can lose ourselves in them,
28:18as shown in this experiment.
28:24Some drivers were given instructions by this voice.
28:31And others by this slightly less engaging voice.
28:43What the drivers didn't know was that halfway through the experiment,
28:46the sat-nav started giving incorrect directions.
28:51And it got progressively worse and worse.
28:53We wanted to see at what point the drivers would quit.
28:58We were able to show that the more expressive, the more convincing voice
29:02kept drivers following the wrong route for longer,
29:05despite it going against their better judgment.
29:09We had to call them, explain it was just a test and ask them to come back.
29:20An engaging voice plays a key role when it comes to flirting or meeting a romantic partner.
29:26Hello, I'm Willi.
29:29Hello Willi, I'm Cordula.
29:33I'm looking for a woman, for the long term.
29:37Who's attractive? Who do we desire?
29:41We make snap judgments when it comes to one of life's most important decisions,
29:46quickly and irrationally.
29:50Snap judgments, I've always thought they're really fascinating.
29:53It's sort of how we determine who we want to date,
29:57who we want to be friends with, who we don't want to be friends with.
30:00And we don't have a lot of introspective access to those feelings.
30:04What drives that?
30:06We usually encounter people and we like them or we don't like them.
30:10And we have a good sense of that, we get along with them.
30:13And these things determine all of our behaviour.
30:16Are we all born with a universal code?
30:20Is it nature or nurture that allows us to interpret character traits
30:24and read emotions through facial and vocal cues?
30:27One thing is certain, we react to these cues from a very young age.
30:35Just like our animal ancestors.
30:46Obviously primates never developed verbal language like humans.
30:53But just like us, they communicate vocally and they know how to interpret the signals.
31:05Scientists have always assumed that there are huge differences
31:10between verbal humans and non-verbal primates.
31:14But research shows that the auditory cortex in humans and other primates is more similar than expected.
31:20Whether verbal or non-verbal, it makes little difference to how the brain processes these signals.
31:26Pascal Belin has tested humans and primates using functional magnetic resonance imaging, or fMRI.
31:34The results show that both groups react to their own species' voice in the same parts of the brain.
31:43Our ancestors also probably used these areas 20 million years ago.
31:48Primates process vocal cues in the same way that humans do, even without language.
32:06The brain's architecture has changed slightly in humans to process language.
32:14But the mechanisms have stayed the same in other species for anything beyond language:
32:19identity, emotions, personality.
32:26Research into how primates interpret facial cues shows similar results.
32:31Again, similar brain structures to humans are activated in the primates.
32:39Does that mean that we are born with the ability to understand facial and vocal cues?
32:45Joris is 10 months old and is getting ready for an experiment.
32:53Sarah Jessen wants to measure Joris' brainwaves to see how he reacts to unfamiliar faces.
32:59Can he already judge who's trustworthy and untrustworthy?
33:05We carried out research where we showed babies a range of faces for 50 milliseconds.
33:10That's only a twentieth of a second.
33:12It's so quick we assumed that babies wouldn't even register the faces.
33:18However, we identified activity in the brain that proved that babies had not only registered the faces,
33:25but had even made a decision about their facial expressions.
33:29But is it innate or learned?
33:38We don't believe that a baby is born with the ability to judge whether a face is trustworthy or not.
33:45It's more likely to be a combination of learning processes and facial expressions.
33:52Over the first few months, faces and voices are a baby's most important learning resources.
33:58Parents intuitively use pronounced facial cues,
34:01empathize with the baby's facial expressions,
34:21emphasize certain words and exaggerate.
34:26This captures the baby's attention and allows them to recognize emotions more easily.
34:31By six months, babies can already differentiate between happiness, fear, sadness and anger.
34:43Just like a child, Furhat is learning to understand us better
34:46and practicing how to behave in a conversation.
34:49He analyzes the movements and gaze of his conversation partner using a camera.
34:58Furhat has to know whether I am talking to Furhat or my colleague here.
35:02So that's quite tricky and we call that a multi-party interaction,
35:06where there are more than two people talking.
35:09And one of the ways of handling this is that we track the head pose of the users.
35:15So here we can see that the camera has detected us two here.
35:20And it can also recognize our faces.
35:22So if I turn around and look back, it will see that I'm the same person still.
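A crude stand-in for this behaviour can be built from off-the-shelf detectors: a frontal face suggests the user is facing the robot, a profile face suggests they have turned to their neighbour. A minimal sketch with OpenCV's bundled Haar cascades, not Furhat's actual tracker, which estimates full head pose:

```python
# Guess whether a user is addressing the robot from one camera frame.
import cv2

frontal = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
profile = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_profileface.xml")

cap = cv2.VideoCapture(0)                 # the robot's camera
ok, frame = cap.read()
cap.release()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    facing_me = frontal.detectMultiScale(gray, 1.1, 5)
    turned_away = profile.detectMultiScale(gray, 1.1, 5)
    if len(facing_me) > 0:
        print("Someone is facing me - they are probably addressing me.")
    elif len(turned_away) > 0:
        print("Users are facing each other - better stay quiet.")
```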
35:28It's time to play a game with Furhat.
35:31The team takes turns drawing a shape, while the other players guess what it is.
35:37Could it be a star? No.
35:39Is it a flower? Yeah, it's a flower.
35:43Got it, so let's go.
35:45My guess is feather. No.
35:51Is it a worm? I know, it is snake.
35:53It is, yes.
35:56This is a good one.
35:59A boat.
36:02Keep guessing.
36:05Furhat will look and sound more like a human over time.
36:08But he won't be identical.
36:11Studies show we prefer a clear distinction between humans and robots,
36:15or we find it too creepy.
36:19The boundaries are already blurred in the media.
36:22After 40 years, ABBA is back on stage.
36:26Not in real life, of course, but as avatars.
36:30It's like time has simply passed the group of 70-year-olds by.
36:34Only their voices are still real.
36:41We're at the start of a huge technological shift.
36:46We are often asked to record actors' voices.
36:51I predict that over the next few years,
36:53these real-life voice recordings will become superfluous.
36:59Because we'll be able to create new, synthetic voices using scientific principles.
37:06I don't know what you're talking about, Hal.
37:10I know that you and Frank were planning to disconnect me.
37:14And I'm afraid that's something I cannot allow to happen.
37:17Artificial intelligence making independent decisions
37:20was still a pipe dream in director Stanley Kubrick's day.
37:26But today, we live with it.
37:28And maybe that's a good thing.
37:34Because AI doesn't make emotional decisions or succumb to alluring voices.
37:40It's neutral and impartial.
37:42And everything we're not. Right?
37:45If you program an AI,
37:47and it sees that African-Americans are linked with hostility and crime
37:52in media depictions on TV, and those are inputs into the AI,
37:57the AI is going to pick up on that and act accordingly.
38:01So AI is more like us than we realize.
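The mechanism is easy to demonstrate: a statistical model learns whatever associations dominate its input. A minimal sketch in which co-occurrence counts over a deliberately fabricated toy corpus stand in for a trained model's learned associations:

```python
# Count which words co-occur: skewed input yields skewed "knowledge".
from collections import Counter
from itertools import combinations

corpus = [  # fabricated, deliberately biased "media" snippets
    "group_a character shown committing crime on tv",
    "group_a character shown as hostile on tv",
    "group_b character shown as friendly on tv",
]

pairs = Counter()
for sentence in corpus:
    for a, b in combinations(sorted(set(sentence.split())), 2):
        pairs[(a, b)] += 1

def association(x, y):
    return pairs[tuple(sorted((x, y)))]

for group in ("group_a", "group_b"):
    for trait in ("crime", "hostile", "friendly"):
        print(group, trait, association(group, trait))
# The skewed counts are exactly what a statistical model would pick up.
```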
38:04In Lie to Me, we're shown how some faces are automatically linked to stereotypes.
38:09What are you guys up to?
38:10We're testing for a racial bias.
38:12It's like the racist.
38:13They're all racist.
38:15Yeah, 80% of people who take this test are biased.
38:18We're just looking for the guy who takes the lie.
38:19And science shows that subconscious bias directly impacts our perceptions.
38:26They leave a lot of collateral damage in the brain.
38:29It's not just a stereotype living in sort of a filing cabinet in the brain.
38:34They're changing approach and avoidance tendencies,
38:37behavioral tendencies, motor tendencies, visual tendencies, auditory tendencies.
38:41John Freeman is looking at exactly what happens using fMRI.
38:45The test group is shown several different faces.
38:49As well as the part of the brain responsible for facial recognition,
38:52other areas that process social and emotional information are also activated.
38:59These areas memorize bias and personality traits.
39:04To provide a rapid response, our brains make fast predictions.
39:10We register what we perceive to be most probable.
39:18This can often be adaptive, right?
39:20If you walk into a restaurant, you expect to see chairs and tables and a waiter, etc.
39:25You're not going to waste a lot of metabolic resources, the brain's time,
39:29the visual system's resources in processing every single object in that space.
39:34You generate a bunch of hypotheses.
39:36You know what a restaurant is.
39:38And you kind of run with those hypotheses.
39:40And you use expectations to fill in the gaps that the brain is too lazy
39:46to figure out itself and doesn't want to waste its resources on.
39:51So are we merely at the mercy of these mechanisms?
39:54John Freeman refuses to accept this theory.
39:57He's carrying out research to find out how we learn these stereotypes
40:01and whether we can unlearn them.
40:05He shows the test group a range of faces linked to specific character traits.
40:10So given all that, we wanted to explore our capacity
40:14to rapidly acquire completely novel facial stereotypes out of thin air.
40:19People that have a wide salient width, which is the nose bridge on the face,
40:25and it's a cue that really has nothing to do with anything interesting.
40:30It's just simply how wide the bridge of the nose is.
40:34So it's an arbitrary facial feature.
40:38And 80% of the time, we're pairing this wide nasal bridge with trustworthy behaviors.
40:45So now they see completely new faces, not the ones that they had previously learned about,
40:50and that have wide and narrow nasal bridges, that arbitrary facial feature.
40:55And indeed, what we found was that on a variety of different measures,
40:58more conscious, less conscious, that people are applying these stereotypes.
41:03They are automatically activating these stereotypes without their conscious awareness
41:08from just a couple minutes of learning.
41:14Our brains are highly flexible when it comes to stereotyping.
41:18We can learn to evaluate faces differently, at least over the short term.
41:23John is now looking into a training method that works long term.
41:27The same principle applies to voices.
41:34Ultimately, the more we know about these mechanisms,
41:37the less susceptible we are to being deceived by first impressions.
41:44First impressions. Fascinating, sometimes deceptive, but always a special, even magical moment.
41:52And maybe the start of a new relationship,
41:54where we discover so much more than what we saw at first sight.