Research Focus: Week of August 12, 2024

Welcome to Research Focus, a series of blog posts highlighting key publications, events, code/datasets, hires, and other milestones from the research community at Microsoft.


Register now for the Research Forum on September 3

Discover what the future holds for AI at the Microsoft Research Forum, an event series exploring recent research developments, bold new ideas, and important discussions with the global research community.

In Episode 4, learn about the latest multimodal AI models, cutting-edge benchmarks for AI evaluation and model self-improvement, and a whole new kind of computer for AI inference and hard optimization. Discover how these research breakthroughs and more can help improve everything from weather forecasting to materials design.

Your one-time registration will give you access to our live chat with researchers on the day of the event and additional resources to help you delve deeper into the research.

Episode 4 airs on Tuesday, September 3 at 9:00pm Pacific Time.



Towards effective AI support for developers: a survey of desires and concerns

Talking to customers provides important insight into their challenges and what they like. This helps identify innovative and creative ways to solve problems, without creating new ones, and guards against disrupting workflows that customers value. Yet many AI-powered developer tools are currently built without consulting developers.

In a recent paper, Towards Effective AI Support for Developers: A Survey of Desires and Concerns, Microsoft researchers explore developers’ perspectives on integrating AI into their workflows. This study reveals developers’ top desires for AI assistance, along with their top concerns. The findings from this extensive survey of 791 Microsoft developers help researchers identify key areas where AI can improve productivity and how to address developer concerns. The findings provide actionable insights for product teams and leaders to create AI tools that truly support the needs of developers.


SuperBench: Improving cloud AI infrastructure reliability with proactive validation

Cloud service providers have long built redundancy into their hardware to ensure the availability of their cloud infrastructure. For AI workloads, however, these redundancies can inadvertently mask incremental performance degradation, also known as “gray failures.” The result is degraded end-to-end performance and concealed performance issues, which complicate root-cause analysis of outages and regressions.

In a recent paper, SuperBench: Improving Cloud AI Infrastructure Reliability with Proactive Validation, Microsoft researchers and Azure cloud engineers introduce a proactive validation system for AI infrastructure that mitigates hidden degradation caused by hardware redundancies. The paper, which won a best paper award at USENIX ATC, outlines SuperBench’s comprehensive benchmark suite, capable of evaluating individual hardware components and representing most real-world AI workloads. SuperBench includes a validator, which learns benchmark criteria to clearly identify defective components, and a selector, which balances validation time against the penalty of missed issues, choosing the optimal time to run validation with a tailored subset of benchmarks. Testbed evaluation and simulation show that SuperBench can increase the mean time between incidents by up to 22.61x. SuperBench has been deployed in Azure production, validating hundreds of thousands of GPUs over the past two years.
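The selector’s core tradeoff can be sketched in a few lines. This is an illustrative toy, not the SuperBench implementation: the benchmark names, runtimes, detection probabilities, and the greedy policy below are all hypothetical, chosen only to show how a selector might weigh validation time against the expected penalty of an undetected fault.

```python
def select_benchmarks(benchmarks, issue_penalty):
    """Greedily add a benchmark while the expected penalty it avoids
    exceeds its runtime cost. Each benchmark is a tuple of
    (name, runtime, fault_detection_probability)."""
    chosen = []
    p_miss = 1.0  # probability a fault evades every chosen benchmark
    # Consider the most cost-efficient benchmarks first.
    for name, runtime, p_detect in sorted(
        benchmarks, key=lambda b: b[2] / b[1], reverse=True
    ):
        avoided_penalty = p_miss * p_detect * issue_penalty
        if avoided_penalty > runtime:
            chosen.append(name)
            p_miss *= 1.0 - p_detect
    return chosen, p_miss

# Hypothetical candidates: (name, runtime in minutes, detection probability).
candidates = [
    ("gemm", 5.0, 0.6),
    ("nccl-allreduce", 20.0, 0.8),
    ("disk-io", 30.0, 0.1),
]
chosen, residual_risk = select_benchmarks(candidates, issue_penalty=100.0)
# Cheap, high-yield benchmarks are selected; the slow, low-yield one is
# skipped because its runtime exceeds the penalty it would avoid.
```

Under these made-up numbers, the selector keeps the two benchmarks whose expected payoff outweighs their runtime and drops the third, mirroring the paper’s idea of running a tailored subset rather than the full suite.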


Virtual Voices: Exploring Individual Differences in Written and Verbal Meeting Participation

A key component of team performance is participation among group members, and workplace meetings provide a common stage for it. With the shift to remote work, however, many meetings are now held virtually. In these meetings, chat offers an alternative channel of participation, letting participants contribute to the conversation synchronously in writing.

In a recent paper, Virtual Voices: Exploring Individual Differences in Written and Verbal Meeting Participation, Microsoft researchers and external colleagues investigate factors that influence participation in virtual meetings, drawing on individual differences (status characteristics theory), perceptions of psychological safety, and group communication. The results, published in the Journal of Vocational Behavior, reveal nuances of gender (self-identified) and work level. Women participated more via chat, while men participated more verbally, as measured by meeting telemetry. Men at the highest work levels contributed the most verbally to virtual meetings, while women at the highest work levels used chat the most. Among the types of chats sent, women used emoji reactions more often than men, and men sent more attachments than women. Results also showed that psychological safety moderated the relationship between work level and overall chat participation: employees at lower work levels who perceived high psychological safety sent more chats than their counterparts. The study offers insight into communication patterns and the impact of psychological safety on participation in technology-mediated spaces.
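A moderation effect like the one described can be made concrete with a small worked example. The data below are entirely hypothetical (not the study’s), constructed only to show what it means for psychological safety to moderate the link between work level and chat participation: the effect of safety on chats sent differs across work levels.

```python
# Hypothetical records: (work_level, psych_safety, chats_sent) per employee.
records = [
    ("low", "high", 14), ("low", "high", 12), ("low", "high", 13),
    ("low", "low",   5), ("low", "low",   4), ("low", "low",   6),
    ("high", "high", 8), ("high", "high", 9), ("high", "high", 7),
    ("high", "low",  7), ("high", "low",  8), ("high", "low",  6),
]

def mean_chats(work, safety):
    vals = [c for w, s, c in records if w == work and s == safety]
    return sum(vals) / len(vals)

# Simple-slope contrast: the effect of psychological safety on chat volume,
# computed separately at each work level.
safety_effect_low = mean_chats("low", "high") - mean_chats("low", "low")
safety_effect_high = mean_chats("high", "high") - mean_chats("high", "low")

# A nonzero difference between the two contrasts is the signature of
# moderation: safety matters much more for lower-level employees here.
interaction = safety_effect_low - safety_effect_high
```

In this toy dataset the safety effect is large at the low work level and small at the high one, so the interaction contrast is positive, which is the same qualitative pattern the paper reports for lower-level employees with high perceived psychological safety.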