Laughter and art are no longer the exclusive realm of humans, as artificial intelligence systems around the world increasingly blur the human-machine divide.
Soon after an AI-generated artwork provoked an uproar in the US by winning a fine art prize in a state competition, Japanese scientists announced that they had successfully taught a robot called Erica a “shared-laughter” AI system.
Laughter, like eye contact and gestures, is a natural part of conversation, which is why the team from Kyoto University decided to build a nuanced sense of humour into Erica’s repertoire of skills.
Her cues and training data were based on 80 dialogues from speed-dating scenarios, and the Japanese android was trained to distinguish between subtler social laughs and explicit, mirthful ones, and to respond appropriately. And she got it: more than 80% of the time, Erica responded correctly with a quiet chuckle or a burst of laugh-out-loud hilarity.
“We think that one of the important functions of conversational AI is empathy,” says the lead author Dr Koji Inoue, from the Kyoto University department of intelligence science and technology.
“Conversation is, of course, multimodal, not just responding correctly. So we decided that one way a robot can empathise with users is to share their laughter, which you cannot do with a text-based chatbot.”
The scientists designed three subsystems: one to detect laughter, a second to decide whether to laugh, and a third to choose which type of laughter to use in response to the initial human laugh.
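The three-subsystem pipeline could be sketched roughly as follows. This is a minimal, hypothetical illustration of the architecture the article describes, not the researchers' actual implementation: the function names, the toy threshold rules, and the `laugh_energy` feature are all assumptions for the sake of the example (the real system was trained on speed-dating dialogue data).

```python
from dataclasses import dataclass


@dataclass
class LaughEvent:
    is_laugh: bool    # did subsystem 1 detect a human laugh?
    intensity: float  # assumed 0.0 (quiet) .. 1.0 (loud) feature


def detect_laughter(audio_features: dict) -> LaughEvent:
    """Subsystem 1: detect whether the user laughed (toy threshold rule)."""
    energy = audio_features.get("laugh_energy", 0.0)
    return LaughEvent(is_laugh=energy > 0.2, intensity=energy)


def should_respond(event: LaughEvent) -> bool:
    """Subsystem 2: decide whether to laugh back at all."""
    return event.is_laugh


def choose_laugh_type(event: LaughEvent) -> str:
    """Subsystem 3: pick the response type (social vs mirthful)."""
    return "mirthful" if event.intensity > 0.6 else "social"


def respond(audio_features: dict) -> str:
    """Run the full pipeline and return the chosen laugh response."""
    event = detect_laughter(audio_features)
    if not should_respond(event):
        return "no_laughter"
    return choose_laugh_type(event)


print(respond({"laugh_energy": 0.8}))  # mirthful
print(respond({"laugh_energy": 0.3}))  # social
print(respond({"laugh_energy": 0.1}))  # no_laughter
```

The key design point the sketch preserves is the separation of concerns: detection, the decision to respond, and the choice of laugh type are independent stages, which is what lets the researchers evaluate "social-only", "mirthful-only" and combined shared-laughter behaviours separately.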
“Our biggest challenge was identifying the actual cases of shared laughter [to use in this exercise], which isn’t easy, because as you know, most laughter is actually not shared at all,” Inoue says.
“We had to carefully categorise exactly which laughs we could use for our analysis and not just assume that any laugh can be responded to.”
Amateur actresses remotely controlled Erica as she interacted with students from Kyoto University during this training and, when her funny bone was fully developed, more than 130 people listened to her respond to four short dialogues.
In scenario one, she could only utter social laughter; in scenario two, she produced mirthful laughter; and in the third scenario she combined both types of laughter (shared laughter).
Baseline models in which she never laughed (no laughter) or uttered a social laugh in reaction to every human laugh (all laughter) were also implemented.

The “shared-laughter” system came out tops when people assessed how well Erica performed on “empathy, naturalness, human-likeness and understanding” in each scenario.
Inoue says this shows the need for a combined system to detect and respond with “proper laughing behaviour”, as demonstrated in their paper in the journal Frontiers in Robotics and AI.
“There are many other laughing functions and types [beyond mirth and social] which need to be considered, and this is not an easy task. We haven’t even attempted to model unshared laughs, though they are the most common,” Inoue says — reflecting on how far they have yet to go.
“We do not think this is an easy problem, and it may well take more than 10 to 20 years before we can finally have a casual chat with a robot like we would with a friend.”