How to improve robots' social skills for interaction?

As robots grow sophisticated enough to take on more and more tasks normally reserved for humans, one thing they haven't quite mastered is social skills.

That could change in the future with the help of new technologies like those recently developed by researchers at MIT. A team at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) has developed a robotics framework, based on computer models, that incorporates certain social interactions, helping robots communicate with one another more effectively.

The technology can also help the machines learn to perform these social behaviors on their own, using a simulated environment that produces realistic and predictable social interactions between machines.

While the technology is intended to help robots communicate better with each other, one day it could help them have smoother and safer interactions with humans, explains Boris Katz, lead researcher and head of CSAIL’s InfoLab Group and a member of the Center for Brains, Minds and Machines (CBMM).

“Robots will be living in our world soon enough and they really need to learn how to communicate with us on human terms,” he said in a press statement. “They need to understand when it’s time for them to help and when it’s time for them to see what they can do to prevent something from happening.”

The framework

Specifically, the environment the researchers developed is one in which robots pursue physical and social goals as they move across a two-dimensional grid, the team explained.

Each physical goal relates to the environment itself, while each social goal involves guessing what another robot is trying to do and then acting on that guess, the researchers said. In this environment, a robot watches its companion, infers what task it wants to accomplish, and then helps or hinders that robot based on its own goals.

For example, a robot’s physical goal might be to navigate to a tree at a particular point on the grid. Another robot may try to guess what that robot will do next — like water the tree — and then act in a way that helps or hinders that goal, depending on its own goals.
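As a rough illustration of that inference step, here is a minimal Python sketch of an observer guessing which target another robot is heading for on a small grid. The grid, the candidate targets, and the distance-based rule are assumptions made for the example, not the team's code.

```python
# A toy 5x5 grid world: one robot moves toward the tree, and an observer
# guesses its goal from the path it has taken so far. All names and numbers
# here are illustrative assumptions, not the researchers' implementation.

def manhattan(a, b):
    """Grid (Manhattan) distance between two cells."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def infer_goal(observed_path, candidate_goals):
    """Guess the goal whose distance has shrunk the most along the observed path."""
    start, end = observed_path[0], observed_path[-1]
    return max(candidate_goals,
               key=lambda g: manhattan(start, g) - manhattan(end, g))

# The watched robot has moved diagonally from (0, 0) toward (2, 2).
path = [(0, 0), (1, 1), (2, 2)]
targets = {"tree": (4, 4), "well": (0, 4)}

guess = infer_goal(path, list(targets.values()))
print("inferred goal:", guess)  # (4, 4), i.e. the tree
```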

“We’ve opened up a new mathematical framework for how to model social interaction between two agents,” said Ravi Tejwani, a research assistant at CSAIL, in a press statement. “Our formulation allows the planner to discover the ‘how’; we specify the ‘what’ in terms of what social interactions mean mathematically.”

Reward-based system

In the system they've created, the researchers use their model to specify a robot's physical goals, its social goals, and how much emphasis it should place on one over the other.

The researchers defined three types of robots in the framework: a level 0 robot that has only physical goals and cannot reason socially; a level 1 robot with physical and social goals that assumes all other robots have only physical goals; and a level 2 robot that assumes other robots have both social and physical goals.

The model rewards a robot for actions it takes that bring it closer to achieving its goals. When a robot tries to help another robot, it adjusts its reward to match that of its companion; if it tries to hinder, it adjusts its reward to be the opposite.
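One way to picture that adjustment, assuming the simple linear weighting the description suggests, is a total reward that adds a scaled copy of the companion's estimated reward to the robot's own physical reward:

```python
# A hedged sketch of the help/hinder adjustment: total reward is the robot's
# own physical reward plus a scaled copy of its companion's estimated reward.
# The linear weighting is an assumption, not the paper's exact formulation.

def total_reward(own_physical, companion_estimated, social_weight):
    """
    social_weight  0 -> purely physical reasoning (a "level 0" robot)
    social_weight +1 -> helping: share in the companion's reward
    social_weight -1 -> hindering: reward is the opposite of the companion's
    """
    return own_physical + social_weight * companion_estimated

print(total_reward(1.0, 0.5, 0))   # 1.0 (indifferent to the other robot)
print(total_reward(1.0, 0.5, +1))  # 1.5 (helper)
print(total_reward(1.0, 0.5, -1))  # 0.5 (hinderer)
```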

The system uses a scheduling algorithm that decides which actions the robot should take, continuously updating the reward to guide the robot toward accomplishing its mix of physical and social goals.
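In the same hedged spirit, a planning step might look something like the sketch below, where a robot greedily rescores its candidate moves at each step using that combined reward. This is an illustration of the idea, not the authors' scheduling algorithm.

```python
# A one-step greedy planner: at every step the robot rescores its possible
# moves with a combined physical-plus-social reward and takes the best one.
# The scoring rule and the one-step horizon are illustrative assumptions.

MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)]

def closeness(pos, goal):
    """Negative Manhattan distance: larger means closer to the goal."""
    return -(abs(pos[0] - goal[0]) + abs(pos[1] - goal[1]))

def plan_step(pos, own_goal, companion_goal, social_weight):
    """Pick the move that maximizes physical progress plus the social term."""
    candidates = [(pos[0] + dr, pos[1] + dc) for dr, dc in MOVES]
    return max(candidates,
               key=lambda p: closeness(p, own_goal)
               + social_weight * closeness(p, companion_goal))

# A mildly helpful robot (social_weight = 0.5) heads for its own goal while
# leaning toward the goal it believes its companion is pursuing.
pos = (2, 0)
for _ in range(6):
    pos = plan_step(pos, own_goal=(4, 0), companion_goal=(4, 4), social_weight=0.5)
print(pos)  # ends at its own goal, (4, 0)
```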

Future progress

While the system currently supports only robot-to-robot interactions, it could one day lead to smoother and more positive human-robot interactions, Katz said.

“This is very early work and we’re just scratching the surface, but I feel like this is the first very serious effort to understand what it means for humans and machines to interact socially,” he said in a press statement. A paper on the team’s work is available online.

The researchers plan to continue developing a more sophisticated system, with 3D agents in an environment that allows many more types of interactions, such as manipulating household objects, they said. They also plan to modify their model to include environments where actions can fail.

The team also wants to incorporate a neural network-based robot planner into the model, one that learns from experience and performs faster. Finally, the researchers want to run an experiment collecting data on the features people use to determine whether two robots are interacting socially, to further advance the technology, they said.

Elizabeth Montalbano is a freelance writer who has written about technology and culture for over 20 years. She has lived and worked as a professional journalist in Phoenix, San Francisco and New York City. In her spare time she enjoys surfing, travelling, music, yoga and cooking.
