Research to advance study of leader effectiveness in online settings

Man at computer leads online meeting
Friday, May 3, 2024
Interdisciplinary leadership science team receives substantial federal grant.

A Center for Leadership Science interdisciplinary research team has received a $692,881 federal grant to discover new ways to enhance leader and team effectiveness in online environments, drawing on artificial intelligence technologies and expertise from diverse fields. The three-year award comes from the U.S. Army Research Institute for the Behavioral and Social Sciences.

"With the rise of online work and virtual teamwork, effective training of leaders is increasingly important. Yet the current tools and approaches to evaluate leaders, such as 360-degree assessments, can be vulnerable to subjective bias," said CLS Co-Director George Banks, who is the project’s leader. The CLS is based in the Belk College of Business and includes faculty from several colleges and the School of Data Science.

"Even as work in online contexts has expanded, the field of leadership development has not adapted to new settings and the changing needs of a wide array of leaders and emerging leaders. We have little understanding of effective leader behaviors in virtual settings, and there are limited tools to train aspiring leaders to work effectively with teams in online settings," said Banks, chair of the Department of Management and faculty in the interdisciplinary Organizational Science Ph.D. Program.

Along with Banks, team researchers are Scott Tonidandel, management professor and director of the organizational science program; Wenwen Dou, computer science professor and co-director of the Ribarsky Center for Visualization; and Eric Heggestad, psychological science and organizational science professor and incoming associate provost for faculty affairs. All are faculty with the CLS. Students will also be involved in conducting the research.

The new research is expected to benefit the U.S. Army, which faces a continuous challenge in developing effective leaders, while also providing usable, science-based tools and insights for businesses and other organizations, particularly in the Charlotte region. In addition to the new grant, the work has been supported by Charlotte’s interdisciplinary School of Data Science and a Truist grant.

The project proposes to develop a machine learning algorithm capable of recognizing four categories of verbal leader behaviors in virtual meetings. The team wants to enhance the understanding of how to effectively evaluate leaders in a virtual setting by focusing specifically on leader behaviors while considering multiple categories of leader behaviors simultaneously.
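The announcement does not name the four behavior categories or the team's modeling approach. Purely as an illustration of the kind of utterance-level scoring such a system might perform, the following toy Python sketch classifies transcript utterances into hypothetical categories using keyword matching; the category names, keywords, and logic are assumptions for demonstration only, not the project's actual taxonomy or method.

```python
# Toy sketch: utterance-level scoring of leader speech in a meeting transcript.
# The categories and keywords below are hypothetical illustrations, not the
# project's taxonomy; the real system would use a trained ML model.

CATEGORY_KEYWORDS = {
    "task_direction": {"deadline", "assign", "goal", "plan"},
    "relational_support": {"thanks", "appreciate", "great", "welcome"},
    "information_seeking": {"why", "how", "what", "thoughts"},
    "boundary_spanning": {"client", "stakeholder", "external", "partner"},
}

def score_utterance(utterance: str) -> dict:
    """Count keyword hits per category for one transcript utterance."""
    words = set(utterance.lower().split())
    return {cat: len(words & kws) for cat, kws in CATEGORY_KEYWORDS.items()}

def classify(utterance: str) -> str:
    """Return the highest-scoring category, or 'other' if nothing matches."""
    scores = score_utterance(utterance)
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "other"
```

A production system would replace the keyword lookup with a machine learning model trained on annotated meeting transcripts, which is the gap the project's first study is designed to address.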

More efficient ways to deliver training on an ongoing basis are needed. An algorithm that can accurately score leader behaviors in virtual communications would give leaders more timely feedback, enabling them to adapt their behavior in real time.

The Charlotte team’s approach is intended to substantially reduce bias in leader evaluations by focusing on what a leader actually says, called leader signals, as opposed to solely relying on subjective evaluations of those behaviors.

"While researchers have studied leadership, the field has a limited understanding of how followers respond to specific behaviors by leaders, particularly in team contexts or virtual environments," Banks said. "Current leadership research has studied behaviors in what we commonly call leadership styles, such as ethical or transformational leadership. Another gap exists in the understanding of how leaders from varied demographics, such as gender, use certain leader behaviors, and how effective they are."

While the new research will start with analysis of verbal leader behaviors during meetings, using AI tools including machine learning algorithms, the team also sees future research possibilities for communications via email, instant messaging, social media and other media.

Three specific studies are planned: a data science study to create the algorithmic model needed to score leader behaviors; a validation study to evaluate the ability to generalize the algorithmic model; and a third study to determine the extent to which leaders and teams benefit from feedback provided by the model.