AI Assistant Monitors Teamwork to Promote Effective Collaboration | MIT News

During a research cruise around Hawaii in 2018, Yuening Zhang SM ’19, PhD ’24 saw the challenges of running a tight ship. The careful coordination required to map underwater terrain could create a stressful environment for team members, who sometimes held differing views on which tasks to complete as conditions changed unexpectedly. During these voyages, Zhang considered how a robotic companion could have helped her and her crewmates accomplish their goals more efficiently.

Six years later, as a research assistant in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), Zhang developed what might be considered a missing piece: an AI assistant that communicates with team members to align roles and achieve a common goal. In a paper presented at the International Conference on Robotics and Automation (ICRA) and published in IEEE Xplore on August 8, she and her colleagues present a system that can oversee a team of both human and AI agents, intervening when needed to potentially increase the effectiveness of teamwork in domains such as search-and-rescue missions, medical procedures, and strategic video games.

The CSAIL-led group has developed a theory of mind model for AI agents that represents how humans think about and understand each other’s potential plans of action when working together on a task. By observing the actions of its fellow agents, this new team coordinator can infer their plans and their understanding of each other from a prior set of beliefs. When their plans are incompatible, the AI helper intervenes by aligning their beliefs about each other, instructing their actions, and asking questions as needed.
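As a rough illustration of the intervention step described above, a coordinator can compare each agent's belief about a teammate's plan against that teammate's actual plan and send a corrective message on mismatch. The class names, fields, and message format below are invented for illustration; this is a minimal sketch, not the paper's actual system:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """A team member with a plan and beliefs about teammates' plans."""
    name: str
    plan: str                                      # task this agent intends to do
    beliefs: dict = field(default_factory=dict)    # name -> believed plan of teammate

class Coordinator:
    """Observes agents, detects mismatched beliefs, and intervenes."""

    def align(self, agents):
        messages = []
        for a in agents:
            for b in agents:
                if a is b:
                    continue
                believed = a.beliefs.get(b.name)
                if believed is not None and believed != b.plan:
                    # a's belief about b's plan is wrong: correct it.
                    messages.append(f"{a.name}: note that {b.name} is doing '{b.plan}'")
                    a.beliefs[b.name] = b.plan
        return messages

# Alice wrongly believes Bob is covering the same room she is.
alice = Agent("Alice", plan="search room 2", beliefs={"Bob": "search room 2"})
bob = Agent("Bob", plan="search room 3")
msgs = Coordinator().align([alice, bob])
```

Here the coordinator sends Alice a single corrective message and updates her belief, so the two agents no longer duplicate effort in room 2.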

For example, when a team of rescuers is in the field triaging victims, they must make decisions based on their beliefs about each other’s roles and progress. This type of epistemic planning could be enhanced by CSAIL’s software, which can send messages about what each agent plans to do or has done to ensure task completion and avoid duplication of effort. In this case, the AI helper might intervene to communicate that an agent has already gone to a certain room, or that none of the agents are covering a certain area with potential victims.

“Our work takes into account the sentiment that ‘I believe that you believe what someone else believes,’” says Zhang, now a researcher at Mobi Systems. “Imagine you’re working on a team and you ask yourself, ‘What exactly is this person doing? What am I going to do? Does he know what I’m going to do?’ We model how different team members understand the overarching plan and communicate what they need to accomplish to help achieve their team’s overall goal.”

AI comes to the rescue

Even with a sophisticated plan, both human and robotic agents will become confused and even make mistakes if their roles are unclear. This plight is especially acute in search and rescue missions, where the goal may be to locate someone in danger despite limited time and a large area to scan. Fortunately, communications technology, augmented by the new robotic assistant, could potentially inform search parties about what each group is doing and where they are searching. In turn, the agents could navigate their terrain more efficiently.

This type of task organization can help with other high-stakes scenarios, such as surgeries. In these cases, the nurse must first transport the patient to the operating room, after which the anesthesiologist puts the patient to sleep before the surgeons begin the surgery. During surgery, the team must constantly monitor the patient’s condition and respond dynamically to the actions of each colleague. To ensure that each activity within the procedure remains well-organized, the AI team coordinator can oversee and intervene if there is confusion about any of these tasks.

Effective teamwork is also integral to video games like “Valorant,” where players coordinate who should attack and defend against another team online. In these scenarios, an AI assistant could appear on-screen to alert individual users where they’ve misinterpreted what tasks they need to complete.

Before leading the development of this model, Zhang designed EPike, a computational model that can act as a team member. In a 3D simulation program, this algorithm controlled a robotic agent that had to match a container to the drink a human had chosen. However rational and sophisticated these AI-simulated bots may be, there are cases where their misconceptions about their human partners or the task at hand limit them. The new AI coordinator can correct the agents’ beliefs when necessary to head off potential problems, and in this case it consistently intervened, sending messages to the robot about the human’s true intentions to ensure that it correctly matched the container.

“In our work on human-robot collaboration over the years, we’ve been both humbled and inspired by how flexible human partners can be,” says Brian C. Williams, an MIT professor of aeronautics and astronautics, a CSAIL fellow, and lead author of the study. “Consider a young couple with children, working together to get their kids breakfast and off to school. If one parent sees their partner serving breakfast and still in their bathrobe, the parent knows to quickly shower and get the kids off to school, without having to say a word. Good partners are well-informed about each other’s beliefs and goals, and our work on epistemic planning aims to capture this style of reasoning.”

The researchers’ method combines probabilistic reasoning with recursive mental modeling of the agents, allowing the AI assistant to make risk-based decisions. They also focused on modeling agents’ understanding of plans and actions, which could complement previous work on modeling beliefs about the current world or environment. The AI assistant currently infers agents’ beliefs from a given prior over possible beliefs, but the MIT group envisions applying machine learning techniques to generate new hypotheses on the fly. To apply the system to real-world tasks, they also aim to incorporate richer plan representations and further reduce the computational cost.
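Inferring an agent's plan from observed actions under a fixed prior, as described above, can be sketched as simple Bayesian updating over a set of plan hypotheses. The plan names, priors, and likelihood values below are invented for illustration and are not taken from the paper:

```python
def infer_plan(prior, likelihoods, observations):
    """Posterior over plans: P(plan | obs) ∝ P(plan) · Π P(obs | plan)."""
    posterior = dict(prior)
    for obs in observations:
        for plan in posterior:
            # Unseen observations get a small floor probability.
            posterior[plan] *= likelihoods[plan].get(obs, 1e-6)
    total = sum(posterior.values())
    return {plan: p / total for plan, p in posterior.items()}

# Two hypothetical plans a searcher might be following, with
# made-up likelihoods of each observable movement under each plan.
prior = {"search_east": 0.5, "search_west": 0.5}
likelihoods = {
    "search_east": {"moved_east": 0.9, "moved_west": 0.1},
    "search_west": {"moved_east": 0.1, "moved_west": 0.9},
}

# After watching the agent move east twice, the coordinator is
# fairly confident it is executing the east-search plan.
posterior = infer_plan(prior, likelihoods, ["moved_east", "moved_east"])
```

A coordinator built this way could trigger an intervention only when the posterior over a teammate's plan conflicts with what other agents believe, which is one way to realize the risk-based decisions the method aims for.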

Dynamic Object Language Labs President Paul Robertson, Johns Hopkins University Assistant Professor Tianmin Shu, and former CSAIL Fellow Sungkweon Hong PhD ’23 join Zhang and Williams on the paper. Their work was supported in part by the U.S. Defense Advanced Research Projects Agency’s (DARPA) Artificial Social Intelligence for Successful Teams (ASIST) program.