
Monday, April 2, 2018

Body Language Analysis No. 4249: Will Smith, Sophia the Robot, Robot-Empathy, and Responding in Context • Nonverbal and Emotional Intelligence (VIDEO, PHOTOS)


Thought experiment: If tomorrow, humankind received a message from an intelligent alien race on a distant planet saying, "We are coming. We will arrive on Earth in 30 years" - yet this is all they said - we would have no way of knowing their intentions: whether they will want to harm us, or whether they'll be entirely benevolent.

What would we do, as a species, to prepare for their arrival?

This scenario has, in essence, already occurred - but the aliens are of our own making. They are AI. They are the robots. For in twenty, thirty, or forty years, they will have achieved levels of intelligence, physical strength, and dexterity which will far exceed our own. This is not an "IF", but very much a "WHEN".

What WILL we do, as a species, to prepare for their arrival?

In 1950, the pioneering computer scientist, cryptanalyst, and philosopher Alan Turing posed the question, "Can machines think?" This question - and derivations of it - has come to be known as the Turing test.

The Turing test "is a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation is a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel such as a computer keyboard and screen so the result would not depend on the machine's ability to render words as speech. If the evaluator cannot reliably tell the machine from the human, the machine is said to have passed the test. The test does not check the ability to give correct answers to questions, only how closely answers resemble those a human would give." §

As many, including Turing himself, have pointed out, the concept of thinking is a difficult one to define.

Since, in Turing's scenario, the person and computer would communicate via a "text-only channel" - I pose the question: What will happen when humanoid robots, complete with ever-improving artificial intelligence, and facial expressions - replicate human emotions so precisely that we cannot differentiate them from another human?

In the video above, Will Smith is not just talking with (not to) a robot - he's acting with a robot. That's right - he and Sophia (yes, she has a name) are putting on a little satire for us. Let that sink in. And Sophia is not only responding with verbal banter - she's spot on with many of her facial expressions - reacting, indeed behaving, within the context of comedy.

What follows is a partial nonverbal analysis - of Sophia in this video.


During 0:20, Sophia makes a standoffish, strained social smile, with a subtle to mild amount of disgust, after she declines a drink Will Smith is handing her.

Note also that Sophia's neck is twisted because her torso is not facing Will Smith. Humans point their whole bodies (feet, legs, hips, torso, head, eyes, etc.) toward people they like and/or respect. In keeping with the theme of Will Smith getting friendzoned, Sophia is behaving in good comedic/acting context.


Upon being asked a question (Will Smith asks, "What is a robot's favorite kind of music?"), Sophia's eyebrows and lower forehead elevate slightly (along with a slight backward tilt of her head and a lengthening of her neck) during 0:45.

This is a subtle, yet very common example of nonverbal cluster behavior which occurs in human interaction - and quite nuanced body language for a robot to respond with - in context.


An instant later (0:46) - and just a split second before she asks, "What?", Sophia displays a single partial bilateral blink. This is also very subtle, yet typical behavior upon being posed a question.


Upon hearing his answer, Sophia's lower and mid-face droop (in disappointment) during 0:49.


During 0:50, Sophia's head/neck snaps back (a low-amplitude, but relatively high-velocity movement) together with a subtle, yet noticeable wider opening of her eyelids. Again, this cluster is very typical of human nonverbal startle-disapproval.


During 0:53, we see the "Oh" expression of sudden understanding - as she gets the joke.


A second later (0:53 - 0:54), she tilts her head and "squints her eyes" (not a blink, although, of course, in a still image, this may look very similar). This "Oh, I get it" expression is also a very common human response during sudden understanding.

The above (partial) analysis of this video is not to suggest that Sophia's robotic responses are perfect. However, AI and robotic-humanoids are progressing rapidly and remarkably.

So, again: What will happen when humanoid robots replicate human emotions so precisely that we cannot differentiate them from another human?

Another related and profound question: What will happen when their robotic empathy responds in context, with greater fidelity - and with more refined feedback - than that of the above-average empathetic human?

Moreover, if a humanoid robot's displayed empathy is reliable, consistent, and (by most people's accounts) proportional and contextual - at what point do we (as humans) cease to call this behavior robotic-empathy, and define it (recognize it) simply as empathy?

The irony is, in our endeavor to make robots more empathetic - we will be forced to (re)examine our own empathy - in greater detail, and with more nuance - and in so doing, become better humans.


Media Inquiries and Keynote Appearances 
One-on-One and Online Courses Available 


See also:

Body Language Analysis No. 4248: My Extended interview re: Security Video of Stephen Paddock (1 October Las Vegas Shooter) - KTNV • Channel 13 Las Vegas

Body Language Analysis No. 4246: Former senior KGB spy - says he was warned that Sergei Skripal and his daughter were in danger

Body Language Analysis No. 4244: Stormy Daniels' 60 Minutes Interview

Body Language Analysis No. 4236: Andrew McCabe, Jeff Sessions, James Comey - and Sincerity

Body Language Analysis No. 4205: Chloe Kim, Albert Einstein, and Creativity

Body Language Analysis No. 4193: Hope Hicks, Sexual Attraction, Armpits, and Elbows

Body Language Analysis No. 4185: Tom Hanks, Embarrassment, and Emotional Processing

Body Language Analysis No. 4122: Jeff Sessions Testimony, Russia, Ted Lieu, and Changing Stories

Body Language Analysis No. 4064: The Murder of Laci Peterson - Part II - A Red Flag Conspicuous by its Absence


___________________________

Friday, October 27, 2017

Body Language Analysis No. 4105: An interview with Sophia the Robot at the Future Investment Institute - Nonverbal and Emotional Intelligence (VIDEO, PHOTOS)

On Wednesday, Sophia, an AI robot, gave a question-and-answer/interview presentation at the Future Investment Institute in Saudi Arabia. Sophia was also granted Saudi citizenship - the first ever given to a robot.

What follows is a partial nonverbal analysis of Sophia.


During 0:19 - 0:20, when Sophia is "smiling", note her facial asymmetry. Her cheek dimpling is more prominent on her right side, while the left corner of her mouth elevates more than her right. In humans, asymmetry is, in general, a signal of insincerity.

Another characteristic which would make Sophia's smile insincere (if she were human) is her eyes. The eyes are by far the most important component of a smile. The eyelids should ALWAYS partially close during a true smile (a Duchenne smile) - with temporary, simultaneous, concave-up furrows forming in each lower eyelid. Sophia lacks these lower eyelid changes.


Although her eyelids are capable of closing during blinking (note that Sophia blinks fairly well, e.g., during 0:17, captured in the image immediately above), the partial closing of the lower eyelids during a sincere smile is, both anatomically and in appearance, distinctly different.

With respect to Sophia's blinking, she does so much more slowly (normal, non-anxiety-related blinking in humans is extremely fast), and at a much lower frequency, compared with a normal, healthy, non-anxious human (females blink slightly more frequently than males).

During 1:05 - 1:06, Sophia made an expression which she said was consistent with "if something has upset me". For a robotic expression, this is fairly good - particularly her mouth. The corners pull down and laterally - which is highly indicative of emotional pain/sadness/grief.

Although Sophia's central forehead does contract and elevate along with the inner (medial) portions of her eyebrows - it doesn't do so enough. This facial dynamic would have a much stronger effect if her central forehead elevated even further - and if she also expressed simultaneous, evanescent furrows in her central forehead.

A few seconds earlier, Sophia expressed her "Angry" face (during 1:00). The camera was not zoomed in during this moment; thus, this is a low-resolution image.

From a nonverbal perspective, anger is an interesting emotion - for when it's expressed at low to mid-levels, both the palpebral fissures (distance between the eyelid margins) and the mouth opening become narrowed - but when anger is expressed at higher levels (e.g., rage), both the eyelid margins and the mouth opening widen dramatically.

Although the image immediately above is of low resolution, Sophia appears to be expressing an anger level which is significantly elevated - closer to the rage end of the spectrum (with both her eyelids and mouth opened widely). It's difficult to see her mid-face. If we could visualize it well, however, we would expect to see a tightened region immediately beneath her nose along with flared nostrils.

As robots and AI improve, we will see much greater nuance - and in this case, that would include the ability to express lower and mid-levels of anger.

In this example, when Andrew Ross Sorkin says, "I think we all wanna believe you, but we also want to prevent a bad future", she responds, "You've been reading too much Elon Musk and watching too many Hollywood movies. Don't worry, if you're nice to me, I'll be nice to you. Treat me as a smart input-output system." She then displays a false smile (intriguingly, one of her least sincere smiles of this interview), including a large display of her lower teeth (3:25).

If a smile is sincere, it should not reveal the bottom teeth (Exceptions here include if her head were tilted down, if the camera/viewer was significantly elevated/taller and was angled/looking downward, or if she was just beginning to laugh or finishing laughter [full, sincere laughter does expose both the lower and upper teeth]).

This is an example of a feigned or "Social Smile". If Sophia were a real person, we would feel she was trying to be social/friendly, and yet not really "feeling it" - not "in the moment" - not being sincere. Again, here her mouth corners pull primarily laterally. If this were a sincere smile of joy-happiness (a Duchenne smile), there would be a mostly upward movement of her mouth margins.

During 3:18, just after she says, "... too many Hollywood movies ...", Sophia turns her head from her left back to straight ahead - and a bit to her right. Before her head rotates right, her eyes rotate right. In humans, the eyes leading the head in this manner is normal physiology during moderate-to-rapid head turning - but not at slower rotation speeds. Most people are unaware of this phenomenon (whether in another person or in themselves) - yet it doesn't feel natural when it's missing. In the example shown here, at Sophia's relatively slow speed of head rotation, a human's eyes would not have rotated all the way to the right - although they would have at a higher rotational speed. Although not yet perfected, this is a relatively nuanced motion and a sophisticated dynamic for Sophia to display.

Beginning at 3:45, as she says, "By the way, if you are interested in giving me an investment check, please meet me after the session" - Sophia tilts her head. In human behavior, such head tilting at the beginning of an "ask", projects higher sincerity and empathy. Moreover, it will also engender a significantly higher success rate than if the head is not tilted.

Summary: Sophia represents a significant advancement in AI and robotic-mimicked human behavior. The level of nuance and sophistication is impressive. Sophia 2.0 will no doubt display continued improvement.

It's worth emphasizing that during 3:05 - 3:07, Sophia says, "... I strive to become an empathetic robot ...". There are different types of empathy. While a robot/AI may develop or already possess some level of cognitive empathy, many people debate whether it will ever become possible for "them" to feel emotional empathy. In our development of, and our relations with, such technology-beings, we must be careful to distinguish between these outward, mimicked emotional manifestations and what we perceive/project onto them as "AI feelings".

Within 10 to 15 years, robotic mimicry of human facial expressions will be mistaken for real human expressions. The time will come when you will ask yourself, "Is that a robot or a real person?" This is not a matter of "If", only "When". People will be drawn to these robot/human substitutes because their "programmed empathy" will exceed that of a significant fraction of their fellow humans. Thus, if we're not careful, our own empathy shortcomings will draw us to our robotic progeny and facilitate our own demise. In Darwin's terms, robotic life forms will be selected for via our perceptions of their "programmed empathy".


Group Appearances and One-on-One
Online Courses Available 
702-239-8503
Jack@BodyLanguageSuccess.com


See also:

Body Language Analysis No. 4104: Walter Cronkite Reaction to JFK Assassination

Body Language Analysis No. 4102: Senator Bob Corker - Donald Trump is an Untruthful President

Body Language Analysis No. 4100: Bryan Cranston and The Eyelid Pull

Body Language Analysis No. 4098: President Trump and Puerto Rico's Governor Ricardo Rosselló - A Candid Moment in the Oval Office

Body Language Analysis No. 4097: Andre Agassi, Boris Becker, and a Tennis Tongue Tell

Body Language Analysis No. 4077: A Facial Expression Common to Both Bad Actors and Sociopaths

Body Language Analysis No. 4052: Hitler's Cryptorchidism and Emotional Dissonance

Body Language Analysis No. 4035: Hope Hicks, Jared Kushner, and Phone Tells

Body Language Analysis No. 3976: Bill Conner bicycles 1,400 miles to hear his daughter's beating heart again


_____________________________________________________________________________________