The Legal and Ethical Minefield of A.I.-Driven Employee Surveillance

The same technologies marketed as objective performance tools may quietly amplify bias, erode privacy and manipulate worker behavior at scale.

When we wrote about the legal and ethical implications of A.I. in hiring in 2019, we focused on the assessment of job candidates before they had been hired or consented to an ongoing relationship with an organization. What has become increasingly clear since then is that the far more consequential, and far less scrutinized, deployment of A.I. may be happening deep within the employment relationship itself. 

Over the past few years, digital innovations and advances in A.I. have turbocharged data capture in remote work, producing a new generation of workplace monitoring, performance analytics and employee profiling tools. Many of these technologies promise to help organizations improve productivity, identify high-potential talent, reduce unwanted turnover and allocate compensation more efficiently. The pitch is compelling: why rely on the inevitably subjective judgment of a manager who observes an employee for a few hours a week when you can have an A.I. system that synthesizes thousands of behavioral data points continuously and in real time?

But this power asymmetry between organizations armed with sophisticated predictive tools and employees who are largely unaware of how they are being profiled raises profound ethical and legal questions that the business community has not yet adequately considered or confronted. Whether they know it or not, most people have now been subject to “surveillance pricing” as consumers. Consider the airline that offers a specific fare bundle because loyalty-program data signals you are likely to buy it, or the website that charges more for infant formula because an algorithm has sensed the desperation of a new parent. The same logic, applied to the employment relationship, produces what labor advocates and researchers have begun to call “surveillance wages”: a system in which pay is set not by an employee’s performance or market value, but by formulas that use personal data—often collected without the employee’s knowledge or consent—to identify the minimum compensation she will accept before looking elsewhere. This is only the beginning.

To be sure, performance management has always been imperfect. Alan Colquitt’s research, cited in Next Generation Performance Management, consistently shows that performance ratings tell us nearly as much about the rater as about the person being rated, reflecting idiosyncratic biases, attribution errors and halo effects as much as actual job performance. Organizations have long recognized this problem and invested in calibration sessions, 360-degree feedback systems and structured rating scales in an attempt to reduce subjectivity. Now, A.I. promises to replace biased human judgment with objective, data-driven evaluation, but the transition from bias-laden human assessment to algorithm-driven appraisal introduces its own set of distortions. The added danger is that those distortions are invisible, self-reinforcing and cloaked in the authority of “objective” data science.

Before examining the specific temptations that employers will face and what can be done to address them, it is worth noting that the legal framework governing the employment relationship was not designed with these tools in mind. Employment law in the United States rests on a foundation of statutes like the Americans with Disabilities Act (ADA), Title VII of the Civil Rights Act, the Age Discrimination in Employment Act (ADEA), the National Labor Relations Act (NLRA) and an increasingly active patchwork of state privacy laws that were drafted to govern the conduct of human decision-makers, not algorithmic systems trained on behavioral data. As A.I. becomes a substitute for managerial judgment, the legal protections these statutes were designed to afford employees may be quietly circumvented. 

What temptations will companies face in using A.I. to monitor and evaluate employee performance? 

The first and most straightforward temptation is to use A.I. to monitor and evaluate employee behavior in ways that go far beyond what any manager could observe directly. Modern workplace monitoring tools can log keystrokes, track mouse movements and active screen time, analyze email and messaging patterns for sentiment and engagement signals, flag extended periods of inactivity, transcribe and interpret video calls and track an employee’s physical location through mobile phones or badge swipe data. Productivity platforms increasingly use machine learning to synthesize these digital outputs into a single performance score that is fed, often invisibly, into compensation, promotion and termination decisions. This is no longer a niche practice: a 2022 New York Times examination found that eight of the ten largest American companies surveil their employees with tracking software, while global demand for employee monitoring tools increased 65 percent between 2019 and 2022—a figure that has only grown as remote and hybrid work normalized continuous digital observation. 

Microsoft’s Viva Insights platform, deployed across thousands of enterprises globally, tracks employees’ email response times, meeting attendance, focus hours and collaboration patterns, synthesizing these into dashboards visible to managers and HR. Commercial monitoring vendors such as Teramind and Hubstaff offer SaaS tools enabling any employer to log keystrokes, take random screenshots and generate per-employee productivity scores; Teramind’s platform additionally analyzes email content and web browsing behavior for “insider threat” detection. 

Additionally, Amazon’s algorithmic management system in its warehouse operations tracks worker activity to the second via its “Time Off Task” (TOT) system: employees who accumulate more than 30 minutes of inactivity receive automated warnings, and those exceeding two hours face automatic termination workflows, entirely without manager involvement. In January 2024, France’s data protection authority (CNIL) fined Amazon €32 million for this “excessively intrusive” surveillance system. 

The problem is that such systems measure activity, not performance. And they do so in ways that can systematically disadvantage employees with disabilities, caregiving responsibilities or non-traditional work styles. An employee who processes information slowly due to a learning disability, who takes frequent short breaks to manage anxiety or who thinks best in extended periods of offline focus may score poorly on an A.I. system calibrated on the behavioral signatures of historically top-rated employees. These top-rated employees also may have been rated highly due to factors unrelated to their actual contribution, such as gender, race or similarity to their supervisors. If historical performance ratings are biased, and the research suggests they frequently are, then training an A.I. model on those ratings will simply launder and amplify those biases at scale, with the additional complication that the resulting discrimination becomes harder to detect and challenge. 

There is also a more insidious temptation: to use A.I.-generated performance profiles not merely to evaluate employees but to categorize them in ways that invisibly shape how they are managed, communicated with and developed over time. If an algorithm flags an employee as “low potential” or “high flight risk,” that categorization may subtly recalibrate every subsequent interaction she has with the organization, reducing the developmental investment she receives, limiting her access to stretch assignments and potentially creating a self-fulfilling prophecy of disengagement and exit. The A.I. doesn’t terminate the employee directly, but it reorganizes the environment around her until she leaves of her own accord. Under the ADA, an employer cannot take adverse action against an employee because it perceives her to have a disability or impairment. But if an A.I. system, trained on behavioral patterns correlated with depression, anxiety or ADHD, flags that employee for reduced investment, the legal and ethical boundaries become deeply blurred. This all means that A.I. is now not only empowered to identify or infer disabilities or disadvantages, but in some sense to create them. 

What temptations will companies face in using employees’ personal data to profile and manipulate compensation? 

A second, and more recently visible, temptation involves the use of data that extends far beyond the four corners of the employment relationship to calibrate an employer’s leverage over individual workers. This includes consumer data such as spending patterns and subscription services, which can reveal whether an employee is living paycheck to paycheck or has a financial cushion. It may include real estate records indicating the…
