We live in a surveillance society. Our every preference, inquiry, whim, desire, relationship, and fear can be seen, recorded, and monetized by thousands of prying corporate eyes. Researchers and policymakers are only just beginning to map the contours of this new economy—and reckon with its implications for equity, democracy, freedom, power, and autonomy.

For consumers, the digital age presents a devil’s bargain: in exchange for basically unfettered access to our personal data, massive corporations like Amazon, Google, and Facebook give us unprecedented connectivity, convenience, personalization, and innovation. Scholars have exposed the dangers and illusions of this bargain: the corrosion of personal liberty, the accumulation of monopoly power, the threat of digital redlining,1 predatory ad-targeting,2 and the reification of class and racial stratification.3 But less well understood is the way data—its collection, aggregation, and use—is changing the balance of power in the workplace.

This report offers some preliminary research and observations on what we call the “datafication of employment.” Our thesis is that data-mining techniques innovated in the consumer realm have moved into the workplace. Firms that have made a fortune selling and speculating on data acquired from consumers in the digital economy are now increasingly doing the same with data generated by workers. Not only does this corporate surveillance enable a pernicious form of rent-seeking—in which companies generate huge profits by packaging and selling worker data in a marketplace hidden from workers’ eyes—but it also opens the door to an extreme informational asymmetry in the workplace that threatens to give employers nearly total control over every aspect of employment.

The report begins with an explanation of how a regime of ubiquitous consumer surveillance came about, and how it morphed into worker surveillance and the datafication of employment. The report then offers principles for action for policymakers and advocates seeking to respond to the harmful effects of this new surveillance economy. The final section concludes with a look forward at where the surveillance economy is going, and how researchers, labor organizers, and privacy advocates should prepare for this changing landscape.

The Data Gold Rush

The collection of consumer data over the past two decades has enabled a rent-seeking bonanza, giving rise to Silicon Valley as we know it today—massive monopoly tech firms and super-wealthy financiers surrounded by a chaotic churn of heavily leveraged startups. The datafication of employment augurs an acceleration of these forces.

In the digital era, data is treated as a commodity whose value is divorced from the labor required to generate it. Thus, data extraction—from workers and consumers—provides a stream of capital whose value is infinitely speculatable. Returns on that speculatable capital concentrate in the hands of owners, with minimal if any downward redistribution.

Google offered consumers a product whose commercial purpose (mass data collection) was all but orthogonal to its front-end use (search). Likewise, the service provided by Uber’s workers (car service) is entirely secondary to—and much less profitable than—the data they produce while providing it (a total mesh of city transportation logistics). Search and ridesharing aren’t the goals for these services; the goal is data—specifically, the packaging of data as a salable commodity and a resource upon which investors can speculate.

Crucially, data collection and analysis also provide firms with feedback mechanisms that allow them to iteratively hone their extraction processes. By constantly surveilling us, for example, Amazon gets better at recommending products to us, Facebook at monopolizing our attention, and Google at analyzing our preferences, desires, and fears. As consumer data extraction constrains consumer choice and reifies inequities, data extraction in the workplace undermines workers’ freedom and autonomy and deprives them of (even more) profit generated by their labor.

For the most part, these processes remain opaque—at least for most of us. The digital economy is a one-way mirror: we freely expose ourselves to powerful corporations, while they sell and manipulate the minute pixels of our identities in ways we’ll never know or imagine. A 2008 study found it would take 250 working hours to read every privacy policy one encounters in a given year4—policies that are themselves written in legalese barely comprehensible to an educated person. As platforms and apps have proliferated, that hour count is likely much higher today. The content of those policies typically guarantees that users have no right to know (much less control) how their data is used. In the Wild West of datafied employment, transparency is even rarer. Most workers have scarcely an inkling that their data is being mined and exploited to generate profit for their employers.

In all, ubiquitous corporate surveillance creates a closed circle. Working people are surveilled as consumers and as workers—when they check Facebook in the morning, when they sit down at their desks, when they get home to shop online for a car loan. The data consumers and workers generate in consumption and in work generates profit for Silicon Valley firms and enables them to more efficiently extract data in the future. Rent-seeking via data accumulation is extremely lucrative for shareholders (who derive profit without paying labor), but it deprives workers of compensation for the wealth they produce and concentrates wealth at the very top. The algorithmic means by which this system is fortified constrain and coerce workers and consumers alike.

How Surveillance Capitalism Paved the Way for the Datafication of Employment

A couple of years ago, Shoshana Zuboff coined the term “surveillance capitalism” to describe the business models of major technology companies such as Google and Facebook—the monetization of data generated by constant software surveillance.5 In her article “The Secrets of Surveillance Capitalism,” Zuboff outlines the development of a tech economy driven by targeted ad sales, which relied upon user data in order to better match products with their potential buyers. This business model shaped the practices and infrastructure of the companies that thrived during this era, prioritizing product designs that extracted the most data possible from each user over designs that protected user privacy or fulfilled the digital-age promise of free and open access to information, unfettered by gatekeepers.

The advent of smartphones and the ubiquity of social media allowed for unprecedented amounts of unstructured, personal data to be collected, and these massive datasets in turn laid the groundwork for the artificial intelligence boom. If data could be used to predict purchasing behavior, automated systems could use this data to anticipate needs in real time and respond to our inquiries by better understanding (and mimicking) human brains. Though this use of artificial intelligence is still in its infancy, the constant and increasing use of search engines, social media, and smart devices installed in homes provides the necessary data not only to improve product functionality, but also to sharpen products’ extractive capabilities: the more data people give, the better the machines get at finding ways to extract even more.

With all this data floating around, a niche industry of data brokers emerged. These companies aggregate, analyze, and package user data and sell it to companies seeking to increase sales to targeted demographics. They also help create “user profiles,” which follow users around as they browse the web, so that ads for certain kinds of products show up in that user’s search results and on social media. For the affluent consumer, targeted ads are a convenience at best—providing a more personalized experience that mirrors their preferences6—and a mild annoyance at worst (for example, seeing ubiquitous ads for a pair of boots that have already been purchased). But targeted advertising has a darker side. Ads tailored to specific demographics can reinforce bias and exacerbate inequalities. A 2015 Carnegie Mellon study, for example, found that Google was more likely to show ads for high-income jobs to men than to women.7 A 2013 Harvard study found that ads for arrest record databases were significantly more likely to appear on searches for “black-sounding names.”8 For-profit colleges—including those with well-documented records of targeting poor and minority students—have been among Google’s biggest advertisers.9

Digital profiling increasingly conditions access to economic opportunity, often filtering marginalized people toward products that exacerbate their marginalization. In 2013, the Senate Committee on Commerce, Science, and Transportation conducted a review of the data broker industry,10 uncovering myriad ways data collected online can be used to ghettoize and exploit the financially vulnerable. For example, Experian—a credit reporting company that sells consumer data for marketing purposes—offers a product called “ChoiceScore,” which “helps marketers identify and more effectively market to under-banked consumers.” Experian’s materials describe this enticing “untapped” market: “each year, under-banked consumers alone spend nearly $11 billion on non-traditional financial transactions like payday loans and check-cashing services.” Such consumers include “new legal immigrants, recent graduates, widows, those with a generation bias against the use of credit, followers of religions that historically have discouraged credit,” and “consumers with transitory lifestyles, such as military personnel.”11 [See Appendix A.]

Targeted “cultural marketing” of this kind is perfectly legal. But when this data finds its way into the hands of unscrupulous mortgage brokers or companies peddling predatory financial products, it can facilitate racially disparate economic harm. Payday loan businesses have made a fortune targeting poor minority communities, capturing financially stressed black and brown Americans in endless cycles of debt.12 In 2016, Google AdWords banned advertisements for payday loan services, in the hopes of protecting users from “deceptive or harmful financial products.”13 But lead generators and online payday loan companies still appear in Google search results.

In this way, targeted advertising opens and forecloses economic opportunity. Those identified as financially desperate receive ads for predatory loan products and for-profit colleges, while those identified as affluent are targeted for high-paying jobs and low-interest banking products. The architecture of the online world is akin to a bifurcated luxury hotel:14 the attractive amenities—an elegant dining room, a spa, a yoga studio—are locked behind doors that only privileged customers can access with their key cards, while the doors that beckon to other guests conceal lesser goods: a continental breakfast, treacherous workout machines, rigged slots. The hotel is built to conceal the divide between these two castes, neither one seeing what is available to the other. They pass each other by, oblivious to the guardrails on their movements,15 as they move from room to room.

Legal scholar Frank Pasquale calls this system the “scored society.”16 Companies and governments collect unprecedented amounts of data about us—our habits, our histories, our beliefs, our desires, our networks—and algorithms parse that data to assess our worthiness for jobs, for loans, for insurance, and for suspicion in the criminal justice system. The scored society doesn’t merely deprive its denizens of liberty and privacy; its harms are material and unevenly distributed. Reputational scores based on historical data reify the lopsided structure of American society, further advantaging the already advantaged and marginalizing the marginalized.17

Online marketers crunch our data to assess our desperation, restraint, and credulity, assigning us a score that follows us everywhere we browse. Increasingly, financial institutions and credit-raters are using the same techniques to assess risk.18

In 2015, Facebook patented a technology to help lenders discriminate against borrowers based on their social connections. “When an individual applies for a loan,” the patent reads, “the lender examines the credit ratings of members of the individual’s social network who are connected to the individual through authorized nodes. If the average credit rating of these members is at least a minimum credit score, the lender continues to process the loan application. Otherwise, the loan application is rejected.”
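
The rule described in the patent is simple enough to sketch in a few lines. The following is a minimal illustration of the quoted passage only—the scores, the threshold, and the policy for missing network data are hypothetical, not Facebook’s actual implementation:

    # Sketch of the loan-screening rule described in the patent excerpt above.
    # Scores, threshold, and the missing-data policy are hypothetical.
    def screen_application(connection_scores, minimum_credit_score):
        """Continue processing the loan only if the average credit rating of the
        applicant's connections meets the lender's minimum score."""
        if not connection_scores:
            return False  # one possible policy when no network data is available
        average = sum(connection_scores) / len(connection_scores)
        return average >= minimum_credit_score

    # The applicant's own finances never enter the decision.
    print(screen_application([580, 610, 700], 650))  # average 630 < 650 -> rejected

The asymmetry is stark: a financially responsible applicant can be rejected solely because of the neighborhood, family, or community reflected in their social graph.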

As tech journalist Susie Cagle warned, this technology, if widely adopted, would essentially bring back redlining, returning us to “an era where the demographics of your community determined your credit-worthiness.”19 In 2015, Fair Isaac Corporation (FICO)—a major credit scoring company—announced that it would begin assessing social media activity in evaluating risk.20 “If you look at how many times a person says ‘wasted’ in their profile,” FICO CEO Will Lansing told the Financial Times, “it has some value in predicting whether they’re going to repay their debt.”

Digital Surveillance Enters the Workplace

The modes of surveillance developed to track consumers have shaped the way they move through the economy—not just which ads they see, but which resources and services they have access to. Meanwhile, the same tactics are increasingly used to control and extract additional value in the workplace. Reputational scores don’t stop following people the moment they clock in; indeed, they may determine whether people get a job in the first place.

Data surveillance is a natural fit for the workplace, where strategies to monitor and manage worker output are nothing new.21 Surveillance and the managerial obsession with efficiency have been hallmarks of the American workplace for over a century. The theory of scientific management, also known as “Taylorism”—the application of scientific methods to organize workflows in order to maximize efficiency—was born alongside American capitalism itself. Indeed, historians have recently located the roots of Taylorism on the plantation,22 where innovative and exacting technologies of management were deployed to control enslaved people’s bodies and organize their labor.

Twenty-first-century surveillance technologies exponentially increase employers’ ability to monitor every aspect of workers’ lives, while companies’ profit-making and decision-making processes are obscured behind opaque and impenetrable algorithms. Employers claim that technology increases productivity while improving objectivity—a computer, the argument goes, doesn’t play favorites; it dispassionately identifies and rewards the best workers—but an examination of these processes reveals that this, too, is an illusion.

In an ideal free market, employers and workers would have equal power: employers would have the power to make hiring decisions, measure worker productivity, and fire those who don’t meet their standards, while workers would have the power to choose the most attractive employment opportunities, forcing companies to compete to attract and retain the best talent. However, as economists such as the Roosevelt Institute’s Marshall Steinbaum argue,23 labor market concentration over the past thirty years has dramatically reduced many workers’ ability to make those choices. The scales have tipped dramatically, putting almost all the power in the hands of employers, and the use of technology to create a panopticon-like culture of surveillance has made that imbalance almost impossible to rectify.

Labor and employment law is set up to carefully moderate that putative balance: it strives to protect workers and ensure economic stability without encumbering firms’ ability to manage human capital efficiently. Data-driven software and algorithmic decision-making, however, act as a force-multiplier for the power held by firms, with no balancing agent on the side of workers. Because of the sheer volume of data collected, and the lack of transparency about how that data is used to score and assess workers, labor forces are at a stark informational disadvantage, which reduces their bargaining, negotiating, or exit power in the modern economy. In addition, when systems built to exploit or exclude already marginalized or vulnerable populations are repurposed for workforce management, exponential harms can result.

Over the past two years, the sheer size of corporate investments in workplace data underscores how valuable this new tool may be. According to a 2017 report by Deloitte,24 71 percent of companies prioritize “people analytics” and HR data for recruitment and workforce management (even though the same report also states that only 8 percent say they generate usable data). Venture capital and Wall Street investors have pumped millions into the sector as well. In May 2018, the Japanese firm Recruit Holdings bought Glassdoor, a website where employees can post reviews of their companies, for $1.2 billion. Recruit Holdings also owns Indeed, the “world’s largest online jobs board,” and its COO Hisayuki Idekoba explained: “Glassdoor’s database of employer information and the job search capabilities of Indeed complement each other well. Glassdoor’s mission of helping people everywhere find jobs and companies they love is a great fit with Indeed’s goal of helping people get jobs.”

Meanwhile, Cornerstone OnDemand—a company used by many firms25 to sift through online job candidates—raised $300 million in its post-IPO equity round, with LinkedIn among its top investors. RedOwl, a cybersecurity analytics firm, was acquired by Forcepoint for $24 million last year. Telogis, a company that develops location-based software, attracted $141 million in funding before it was acquired by Verizon. And last year, Google launched Google for Jobs, an AI-driven jobs site that lets users search for jobs across nearly all platforms: Facebook, LinkedIn, Monster, CareerBuilder, and so on. On top of that, Google.org, the company’s philanthropy arm, invested $50 million in “future of work” initiatives last September, including skills-matching and job-training programs.

All of these companies own vast amounts of workplace data that enable them to control nearly every aspect of our working lives, from job search and hiring to performance reviews, productivity monitoring, behavioral analytics, job reviews, and career paths.

This model makes for excellent business: as companies hoard more data, and buy out competitors with data, they begin to monopolize both whatever service they’re providing and the investment that speculates on future uses of that data. A good example of this practice is WeWork. WeWork has bought up huge parcels of commercial real estate and would appear, externally, to be in the business of renting out office space. But its real business model is architectural data:26 the company tracks occupants of its office spaces to see how they move and how they use space, then uses that data to optimize and speed up construction of new spaces. Similarly, MoviePass—the app that functions as a subscription service to movie theaters—is owned not by a movie studio or theater chain, but by a data brokerage firm.27

The imbalance in power created by this huge corporate investment in the datafication of employment is felt by workers most in four areas: hiring; on-the-job monitoring and performance evaluation; productivity and wellness tracking; and exiting employment.

Hiring

The use of machine learning to sift through high volumes of potential job applicants is certainly not new—firms have used simple keyword searches of résumés and personality quizzes for years. But a new field of “people analytics” has emerged, in which software companies promise to surface the most qualified hires through psychological profiling based on thousands of data points related to where people live, their social media use, their personal relationships, and even which web browser they use. Each person’s unique profile, when matched with data supplied by third-party brokers, adds up to a score. The score is matched against an ideal established by the hiring firm, and the closer the match, the more eligible the candidate is for hire.

The problem with technologies like these is that they can reinforce existing biases and worsen structural inequities. For example, Cornerstone found that applicants who installed newer browsers on their computers—such as Chrome or Firefox—stayed at their jobs 15 percent longer than those who used default browsers that come pre-installed, such as Safari for Mac.28 So, if applicants are using a library computer because they don’t own a computer of their own—and thus are likely to be using a default browser—Cornerstone might score them lower for a job for which they’d otherwise be qualified. Cornerstone’s algorithm also favors lower commute times, thus biasing its process against applicants who might be willing to brave a longer commute for a chance at greater economic mobility.
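
To make the mechanism concrete, consider a toy scoring function—the features and weights below are invented for illustration and do not describe Cornerstone’s or any other vendor’s actual model—in which seemingly neutral signals quietly depress a candidate’s score:

    # Toy candidate-scoring sketch; features and weights are invented for illustration.
    weights = {
        "commute_minutes": -0.02,      # shorter commutes score higher, per the pattern above
        "uses_default_browser": -0.5,  # penalizes applicants on library or shared computers
        "social_media_accounts": 0.1,
        "years_at_last_job": 0.3,
    }

    def candidate_score(candidate):
        """Weighted sum of a candidate's features."""
        return sum(weights[k] * candidate[k] for k in weights)

    applicant_a = {"commute_minutes": 10, "uses_default_browser": 0,
                   "social_media_accounts": 3, "years_at_last_job": 4}
    applicant_b = {"commute_minutes": 55, "uses_default_browser": 1,
                   "social_media_accounts": 1, "years_at_last_job": 4}

    print(candidate_score(applicant_a))  # roughly  1.3 -> ranked higher
    print(candidate_score(applicant_b))  # roughly -0.3 -> ranked lower, for reasons the applicant never sees

Neither applicant can see these weights, and neither can contest them.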

Some companies have gone beyond correlational data in their screening of applicants: Technologies such as HireVue promise to ease the hiring burden by using video intelligence to score candidates. “The content of the verbal response, intonation, and nonverbal communication are just a few of the 20,000 data points we collect,” they say in their promotional material.29 “These data points are analyzed with our proprietary machine learning algorithms to accurately predict future job performance.”

While potential employers are required to inform applicants if they are conducting a credit check in advance of employment, they are not required to inform applicants about the use of additional data sources—such as those that firms may acquire from murky third-party brokers—to determine eligibility for hiring. Workers may be passed over for consideration because of small indicators in vast data sets, yet likely have no idea that either the indicators or the data sets exist, and no ability to contest the conclusions drawn from them. Algorithmic decisions made on the basis of these data points can easily become proxies for discrimination and bias.

And in fact, the Equal Employment Opportunity Commission (EEOC) has begun to consider the implications of these new, usually opaque technologies in light of anti-discrimination law. In October 2016, the EEOC held a public meeting at which a panel of industrial psychologists, attorneys, and labor economists explained how data-scraping technologies used in hiring could replicate bias. “Absent careful safeguards,” Berkman Klein Center fellow and law professor Dr. Ifeoma Ajunwa told the EEOC, “[big] data collection practices . . . could allow for demographic information and sensitive health and genetic information to be incorporated in big data analytics, which in turn influence employment decisions, thereby challenging the spirit of anti-discrimination laws such as Title VII, the Americans with Disabilities Act and the Genetic Information Non-Discrimination Act.”

Dr. Ajunwa has argued that hiring algorithms should be assessed and modified to prevent disparate impact under Title VII of the Civil Rights Act. With her co-authors, she proposes a technical solution that uses arithmetic means to evaluate an algorithm’s decision-making output against the EEOC’s existing rule of thumb for identifying disparate impact (that is, the four-fifths rule).30
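
The four-fifths rule itself is straightforward to operationalize. The sketch below—with hypothetical selection counts, and not the authors’ code—shows how an algorithm’s hiring outcomes could be screened for disparate impact:

    # Sketch of the EEOC four-fifths screen applied to a hiring algorithm's output.
    # The selection counts are hypothetical.
    def selection_rates(outcomes):
        """outcomes maps group -> (number selected, number of applicants)."""
        return {group: selected / total for group, (selected, total) in outcomes.items()}

    def four_fifths_check(outcomes, threshold=0.8):
        rates = selection_rates(outcomes)
        best = max(rates.values())
        # A group is flagged if its selection rate falls below 80 percent of the highest group's rate.
        return {group: (rate / best) >= threshold for group, rate in rates.items()}

    hypothetical = {"group_a": (60, 100), "group_b": (30, 80)}  # selection rates: 0.60 vs. 0.375
    print(four_fifths_check(hypothetical))  # group_b: 0.375 / 0.60 = 0.625 < 0.8 -> flagged

Running such a check, of course, presupposes access to the algorithm’s outcomes broken down by group—exactly the kind of transparency employers are currently under no obligation to provide.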

In September, Senator Kamala Harris wrote to the EEOC (as well as the FBI and the Federal Trade Commission) requesting that the commission “develop guidelines for employers on the fair use of facial analysis technologies and how this technology may violate anti-discrimination law.” She also asked the commission whether workers have filed any complaints of disparate impact from such technologies. The EEOC has not responded to the letter.

On-the-Job Monitoring and Performance Evaluation

Once hired, workers experience surveillance and data extraction from their employers in a variety of ways. One of the most common and insidious of these is algorithm-based decision-making, which is already used by employers across a range of industries to manage wage-setting, allocation of hours,31 and evaluation metrics related to hiring, promotions, and firing.32 Employers argue that these algorithms promote “objective” decisions,33 but in fact their opacity can make them all but unnavigable for workers attempting to understand the rules that govern their employment. Workers have essentially no influence over the way these systems are designed, minimal information on how the systems use data to make decisions, no access to or control over the data they generate in the systems, and no control over the way firms use this data. Recently, workers have accused employers of using these technologies to hide wrongdoing, such as wage theft committed by chipping away minutes through automatic time deductions.34

For people who access their jobs through platforms such as Uber, Handy, or Instacart, the entire work experience is mediated by algorithms that nudge them toward specific behaviors concerning work times or customer engagement, without ever establishing work rules outright.35 Many labor advocates and litigators have argued that the intent of data-driven suggestions is to muddy the relationship between workers and companies, enabling companies to classify workers as 1099 independent contractors rather than W-2 employees (full-time or part-time workers), on whom they would have to pay employment taxes. The companies argue that, because these workers have no direct manager, they’re not employees—but in these cases, the algorithm is often simply doing the work traditionally associated with a manager.

Advocates’ emphasis on employees’ tax status, while important, obscures a host of other problems that can stem from an employment experience governed entirely by algorithms. Without any straightforward rules or explanation of how the algorithms pay them, workers must engage in a costly trial-and-error process to understand how their behaviors affect their scores, and thus their take-home pay. Last year, workers for Instacart, the same-day grocery delivery service, were taken by surprise when the company suddenly added a “service fee” that customers mistook for an automatically included tip, causing a sudden drop in wages.36 These same Instacart couriers also start out with no clear information about which time periods are most lucrative to be available for shifts—they have to learn it, at their own economic risk, over time.

New York’s Taxi and Limousine Commission (TLC) has proposed a rule for Uber, Lyft, Via, and Juno that would establish a minimum take-home pay for drivers. The formula would also include a “utilization rate,” factoring in how much time drivers actually spend carrying passengers, which would require a company to pay more per ride if it underutilizes drivers. Notably, the TLC was only able to create its formula because Uber gave the commission access to driver data. Thus, as always, transparency is a prerequisite for effective regulation.
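
The mechanics of a utilization-adjusted pay floor can be sketched as follows—the per-mile and per-minute rates and the utilization figures below are placeholders, not the TLC’s proposed values: dividing a per-trip base by the company-wide utilization rate means that the less of drivers’ working time a company fills with paying passengers, the more it must pay per trip.

    # Illustrative utilization-adjusted per-trip pay floor; all rates are placeholders.
    def per_trip_minimum(miles, minutes, utilization, per_mile=1.00, per_minute=0.50):
        """Dividing by utilization raises the floor when drivers spend less of
        their working time actually carrying passengers."""
        return (per_mile * miles + per_minute * minutes) / utilization

    print(per_trip_minimum(miles=7.5, minutes=30, utilization=0.75))  # 30.0
    print(per_trip_minimum(miles=7.5, minutes=30, utilization=0.50))  # 45.0 when drivers sit idle more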

In traditional employment, workers are paid a predictable, standard (although sometimes low) rate of pay from day one, and have access to work rules through fellow employees, managers, or (when available) employee handbooks. They do not usually take on the economic risk of learning managers’ expectations through trial and error before they even learn their rate of pay. Workers employed through platforms are left in this limbo, and must perform the additional unpaid labor of piecing together ad hoc information about other workers’ experiences. Many workers use employer-based forums (such as Uber drivers’ forums37) or social media networks (such as Facebook groups) to help disseminate crucial38 information, but this carries its own risks. Uber and other companies monitor activity on their internal forums, and Facebook is free to monetize information shared on its platform and could potentially sell it to employers. In 2016, Google shut down a website that Amazon employees were using to discuss workplace issues (only to reopen it, with no explanation, after the media found out).39 This opens the door to a dangerous question: Will monopolistic tech companies create an even larger monopoly by banding together to prevent workers from using their products to organize?

It’s important that labor advocates address the impact of algorithmic management and decision-making in the workplace immediately, because its use is expanding rapidly to new sectors across our economy. Platforms such as ShiftPixy, Snag Work, Allwork, and Coople promise to gigify retail and hospitality work, and white-collar firms such as Bridgewater are investing in artificial intelligence systems that can replace middle management entirely.40 None of these companies is likely to start from scratch with wholly new platform software or data sets to manage their workforces. They will build upon what already exists, which is why it is critical to pay attention to the power asymmetries baked into existing workforce platforms. Workplace organizing and advocacy about data transparency with on-demand workers now has the potential to shape better outcomes for a much larger sector of workers in the future.

Productivity and Wellness Tracking

Another key strategy that employers use to monitor their workers is productivity and wellness tracking. The logistics41 and health care sectors, for example, have introduced various productivity apps that manage workers’ time spent on specific tasks, movement throughout their workspace, and the number of breaks they take.

Esther Kaplan’s 2015 piece “The Spy Who Fired Me”42 describes UPS’s use of telematics software to score workers’ performance against what the company’s drivers say are unrealistic time allotments for deliveries. The software doesn’t take into account fluid external conditions, such as traffic, the physical condition of the driver, or the location of delivery. While drivers are encouraged by the company to use safe lifting procedures, the time it takes to incorporate proper safety precautions often exceeds what’s allowed per stop. The result, Kaplan concludes, is that workers cut corners with their own health and safety, resulting in chronic physical injuries to their knees, shoulders, and backs.

Nurses also have their movements tracked, with electronic devices measuring the amount of time spent with patients and where nurses move throughout the hospital, without any consideration of the nurses’ own expertise on whether some patients may need more nursing time than others to achieve a healthy result.43 Warehouse workers at Amazon face even more invasive monitoring: Amazon recently won two patents for “haptic” wristbands that collect data on the performance of warehouse workers and correct workers’ performance through vibrations, while also notifying managers of how frequently workers take breaks, use the bathroom, or move through the building.44

The issue with this technology is not its mere existence—many workers can and would appreciate the use of productivity trackers to improve performance or manage their time. The problems arise from workers’ lack of input into, control over, or ability to contest the implementation of these devices. The devices score workers against an ideal of productivity that assumes consistent external conditions, while in reality many workdays include varying disruptions. As a result, workers are penalized for external factors over which they have little control and are pushed to make up for them through invisible sacrifices—to their health and personal time, for example.

While some workers experience monitoring of their physical behavior and rate of task completion, others must contend with near-constant behavioral analysis. For many white-collar workers in particular, software that collects interpersonal communications content, time management data, and location data has become the norm.45 Companies such as RedOwl and Humanyze collect keystrokes, searches, tone, and expression in email, as well as physical movements, to assess the potential risk that employees may pose to an organization, including whether employees may be likely to engage in workplace organizing, talk to reporters, or share sensitive workplace information. Such software can also be used to identify workers who are unlikely to protest wage stagnation or a decline in conditions, due to a combination of personal circumstances, economic liabilities, or emotional disposition that may surface in a firm’s analysis of behavioral data.46 Similar software positions itself more benignly, suggesting that it can track high performers through data analysis or target potential flight risks for incentive-based raises.

The seemingly benign use of data collection also extends to employee wellness programs. In their 2017 article “Limitless Worker Surveillance,”47 Ifeoma Ajunwa, Kate Crawford, and Jason Schultz describe the growth of the workplace wellness industry, which blossomed under incentives from the 2010 Affordable Care Act (ACA). Under the ACA, employers partner with wellness companies that help workers identify health risks, lose weight, eat more healthily, or stop smoking. These companies track workers’ progress through data collection via fitness trackers such as Fitbit, routine health assessments, and mandated viewing of videos about healthy lifestyles (complete with ads).

But that’s not the only thing these companies do. Dr. Ajunwa explains that, once employees join these programs, “companies can work with employee wellness firms that mine employee data to gain deep insights about a company’s employees—which prescription drugs they use, whether they vote, and when they stop filling their birth control prescriptions.” While participation in these programs is usually optional, employers use discounts on health insurance rates or other financial incentives to push workers to join. Workers do not have access to the data that these third-party firms collect on them, have no control over how it is used, and receive no share of any profit it generates. Workers function as a captive data set for these data brokerage firms, which extract massive amounts of information on employees as a condition of their employment.

Exiting Employment

Once workers have overcome hiring obstacles and survived employment under constant surveillance, they face additional challenges when attempting to exit their positions and seek new employment.

For workers on platforms such as Uber, Handy, and Instacart, one challenge lies in their ability to demonstrate their job performance to prospective future employers. These workers generate a constant stream of data about their productivity and performance, but they don’t have access to any of it—all of that information belongs to their employers. And without direct managers who can provide references, these workers are often trapped without a direct path to advancement or mobility.

Workers across industries face another dilemma: What happens to the data they generated for their employers? Everything from unfairly compiled productivity data to sensitive health data to flags on interpersonal communication can be stored, shared, and sold by employers and third-party firms. In nearly all cases, workers have no idea what that data contains, let alone what value it provides for their employers or how it may affect them in the future. Just as consumers provide two revenue streams to companies they patronize—through their money and their data—workers provide two services to employers: their data and their productivity. However, for consumers shopping in a competitive market, the cost of a given product or service is often reduced when the vendor is able to collect data, so consumers receive some measure of value for their contribution. Workers, on the other hand, are only ever compensated for their services—never their data.

Data extraction and surveillance capitalism already have an incalculable impact on a broad swath of workers, and the practices described in this section are only growing more common, more insidious, and more exploitative. If labor advocates and legislators hope to curtail these practices and restore the balance of power for workers, we must take decisive action as quickly as possible.

Constructing a Response to the Datafication of Employment

Ideally, the law will see its way to protecting individual rights in the digital age, but for workers suffering and losing pay right now, federal legislative solutions may be a case of too little too late. In “Limitless Worker Surveillance,” Ajunwa, Crawford, and Schultz make recommendations for potential legal remedies (a summary of which can be found in the appendix of this paper). Legal processes can be slow and cumbersome, however, and so in the rapidly evolving world of technology, we think it’s important to explore and enact innovative technical solutions that could be achieved through consumer pressure, labor agitation, or state-level legislation.

Our proposed solutions strive for radical transparency: workers have the right to know what data they’re contributing to their employers, what value that data generates, and how that data is being utilized. Responses to the phenomena described above should be guided by the following principles for action:

  1. Data use transparency. Workers should have a right to know how decisions related to their pay, mobility, and performance tracking are made. Employers should be required to expressly communicate these functions to workers.
  2. Data ownership. Workers should own any data related to their work, and should have the ability to transfer that data to new platforms in order to exercise their right to exit. Data ownership by workers should also include the right to negotiate over the value of the data to the company for which it is produced, and they should reasonably expect to share in the profits generated by that data.
  3. Freedom from fear. Surveillance technology in the workplace should be closely monitored and studied. Employers should be required to disclose any surveillance technology they are using and the data it collects. There should be limits on the use of that personal workplace data.
  4. Data broker regulation. The use of third-party data brokers in hiring and employment should be regulated. This could be done most sensibly under the Fair Credit Reporting Act, which enforces compliance with privacy protections and consumer transparency with regard to credit data.
  5. Anti-discrimination. Hiring algorithms should be analyzed for their potential to replicate illegal discriminatory practices. Tracking and enforcement regarding the use of these algorithms would most logically be done by the EEOC, but in any case, this effort will be largely dependent upon complete transparency (that is, access to the algorithms and data).

Possible Pathways

Our hope with this report is to spark discussion. We believe solutions to these problems will emerge through dialogue with those affected, through struggle in the workplace, and through greater investment from the progressive foundation community in tackling the “future of work” question from the angle of surveillance. For this reason, we do not offer a comprehensive policy agenda for combating surveillance capitalism—indeed, we don’t think anyone yet knows what that would be. Rather, we offer the following five pathways—signposts, really—as a means of turning the focus from diagnosis to remedy.

  1. Leverage user agreements. The proliferation of user agreements, terms of service, and various other contractual relationships embedded in the use of software creates a framework for negotiation. These should be investigated for ways in which they can be grounds for workplace bargaining. For this purpose, workers would be categorized as consumers or users—something that has been met with some resistance from labor allies who believe this would signal giving up on classification fights.
  2. Invest in a counter-force. The investment in organizations that hack, study, and reveal what’s behind workplace algorithms has been scant. But over the past year, Coworker.org has been building partnerships with engineers, coders, and tech activists who are increasingly questioning the social impact of the systems they have built. The social sector should compete with tech companies for talent who can understand the systems that drive access to the economy for so many workers. Furthermore, while the decision-making functions of algorithms and machine learning are invisible, the impacts on workers are not. As more technology is introduced into the workplace, people—specifically people between the ages of thirty-five and fifty—will be able to articulate the changes they are experiencing, and this will serve as a guide. Funders should respond to these changes by investing in the organizing infrastructure that enables ethical sharing of aggregate frontline workforce data for use by groups of workers to balance the information asymmetry between firms and employees.
  3. Co-opt data technology. The same network technologies that can aggregate and connect people to work can also connect people to one another. There is the potential to create new kinds of solidarities between workers and consumers—who often have similar experiences with the power asymmetry embedded in technologically mediated work. Advocates can help workers avoid the risk of having their communications monitored, shut down, or stolen by their employers or the tech platforms they utilize by creating secure, private tools that enable workers to communicate and organize on their own terms.
  4. Partner with tech companies. Worker organizations should pursue investment in automation and machine learning in contexts that benefit workers, such as improving worker safety and/or augmenting their ability to provide their services. For example, while some companies produce technology to monitor nurses’ activity and create potential risks to their health as they hurry through their duties, researchers in Japan have pioneered a robotic exoskeleton that enables nurses to lift patients safely, alleviating long-term injury risk and improving nurses’ productivity and health over their lifetimes. Labor advocates should pioneer innovative partnerships with tech companies and municipalities to help make work environments more healthy, safe, and productive.
  5. Engage in virtual organizing. Worker organizations and unions should prioritize engagement with technology in their strategy and organizing.48 Management has entered the twenty-first century and is using its innovations to constrain worker power; workers and their organizations must arm themselves as well.

The above pathways are far from exhaustive. Guided by the goal of restoring the balance of power for workers in a data-driven economy, we hope more thinkers, advocates, workers, and organizers will join this vital conversation.

Conclusion

Algorithmic and data-mining tools are promoted as means of transcending error, arbitrariness, and implicit bias in decision-making processes. By parsing more and more data—encompassing more and more realms of activity and experience—machine learning will become progressively more precise and predictive. “Automated decisions often come with an implicit, technophilic promise of accuracy and fairness,”49 say researchers from Data & Society. This is the utopian dream of algorithmic assessment: humans have unconscious bigotries that undermine rational and fair decision-making; machines do not.

The problem is, as Google research fellow Moritz Hardt has argued, “Machine learning is not, by default, fair or just in any meaningful way.” Rather, big data is a social mirror. Algorithms learn to make decisions based on historical instances of the same decision problem (that is, training data). “If the training data reflect existing social biases against a minority,” writes Hardt, “the algorithm is likely to incorporate these biases.”50 If past recruiters were biased against women, so too will be the algorithm. Even when race and gender are specifically excluded from the input data, writes Hardt, “[they] are typically redundantly encoded in any sufficiently rich feature space whether they are explicitly present or not.” Companies’ assurances that their algorithmic assessments ignore race, gender, and other protected statuses51 are meaningless. A good algorithm will isolate those variables anyway. Indeed, “inferring absent attributes from those that are present” is precisely what machine learning algorithms do best.
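
A small synthetic experiment—purely illustrative, and not drawn from Hardt’s work—makes the point about redundant encoding concrete: even when the protected attribute is excluded from the inputs, a model can often recover it from correlated features such as residential ZIP code.

    # Synthetic illustration of redundant encoding: the protected attribute is never
    # given to the model, yet it can be predicted from correlated proxies.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000
    protected = rng.integers(0, 2, n)                 # protected attribute, excluded from the features
    zip_code = 2 * protected + rng.integers(0, 3, n)  # residential segregation: ZIP tracks group
    income = rng.normal(50 + 10 * protected, 15, n)   # historical inequity leaks into income

    X = np.column_stack([zip_code, income])           # the "blind" feature set
    model = LogisticRegression(max_iter=1000).fit(X, protected)
    print("protected attribute recovered with accuracy:",
          round(model.score(X, protected), 2))        # well above the 0.5 chance rate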

“Just as neighborhoods can serve as a proxy for racial or ethnic identity,” a 2014 White House report warned, “there are new worries that big data technologies could be used to ‘digitally redline’ unwanted groups, either as customers, employees, tenants, or recipients of credit.”52 Data-mining, write Solon Barocas and Andrew Selbst, can “reproduce existing patterns of discrimination, inherit the prejudice of prior decision-makers, or simply reflect the widespread biases that persist in society.”53

Worse still, algorithms may serve to harden and exacerbate existing prejudices. Given the “aura . . . of impartiality that is imbued to algorithms,” disparate outcomes—such as a hiring algorithm which consistently scores white men higher than black women—may serve to confirm and provide cover for society’s preconceived racial assumptions.54 “I don’t think black women are any less deserving than white men,” the hiring manager might tell himself, “but the computer says so.” Instead of removing human prejudice from the equation, data-mining launders bias and reifies racial hierarchies. While automated decision-making systems “may reduce the impact of biased individuals,” writes Oscar Gandy, “they may also normalize the far more massive impacts of system-level biases and blind spots.”55

Artificial intelligence tools—in the absence of deliberate intervention—internalize society’s prejudices.56 Last year, researchers at Princeton and the University of Bath discovered that a language-learning AI, which trains on vast quantities of online text, was exhibiting signs of gender and racial bias. The AI was more likely to associate the words “female” and “woman” with the arts, the humanities, and the home, while “male” and “man” were associated with math and engineering professions. The program was more likely to associate white-sounding names with pleasant words like “freedom,” “happy,” and “miracle,” whereas black-sounding names were associated with unpleasant words like “filth,” “murder,” and “tragedy.”57
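
The researchers’ measure can be sketched with toy vectors—the three-dimensional embeddings below are invented for illustration; the actual study used word embeddings trained on large web corpora. A word’s bias is the gap between its average similarity to one attribute set and its average similarity to another.

    # Toy embedding-association sketch in the spirit of the study above;
    # these vectors are invented, not real word embeddings.
    import numpy as np

    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def association(word, pleasant, unpleasant):
        """Mean similarity to 'pleasant' terms minus mean similarity to 'unpleasant' terms."""
        return (np.mean([cos(word, p) for p in pleasant])
                - np.mean([cos(word, u) for u in unpleasant]))

    pleasant = [np.array([0.9, 0.1, 0.0]), np.array([0.8, 0.2, 0.1])]
    unpleasant = [np.array([0.0, 0.9, 0.2]), np.array([0.1, 0.8, 0.3])]
    name_a = np.array([0.7, 0.2, 0.1])  # sits near the "pleasant" cluster
    name_b = np.array([0.2, 0.7, 0.2])  # sits near the "unpleasant" cluster

    print(round(association(name_a, pleasant, unpleasant), 2))  # positive
    print(round(association(name_b, pleasant, unpleasant), 2))  # negative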

As long as machine learning is based on historical data, bigotry and systemic disadvantage will be baked in. In computer science, the rule is: “garbage in, garbage out”—programs are only as good as the data you feed them. Hamid Khan, the campaign coordinator of “Stop LAPD Spying,” has his own axiom for the age of predictive algorithms: “It’s racism in, racism out.”58

Workers’ lives are increasingly circumscribed and dictated by this sort of machine learning, trained on data extracted from their labor. In the absence of a radical, progressive movement to empower workers and consumers in the surveillance economy—and the datafied workplace—existing structures of racial and economic stratification will only become more severe.

The datafication of work wasn’t built in a day; neither will it be deconstructed in short order. This report doesn’t present all the answers—far from it—but we hope it gives some ideas on where to begin. For those hoping to understand and envision the “future of work” in the twenty-first century, surveillance must be a central lens.

Appendix A: Cultural Marketing

The data-broker industry lays bare the crass cultural and class logics used to identify vulnerable populations to target for certain products and services.

IXI Services, a division of Equifax, markets a “consumer segmentation” product called “Economic Cohorts,” which helps firms “identify top customer clusters, improve communications based on economic profiles, and target new clusters with desired economic potential and tendencies.” The product divides prospective consumer households into precise economic and social categories. Segments include “Living on Loans: Young Urban Single Parents,” “Credit Crunched: City Families,” “Retiring on Empty: Singles,” and “Tough Start: Young Single Parents.”59 Equifax’s promotional materials note that these “Economic Cohorts” can help firms “more effectively target ads toward a much smaller and more receptive household audience as opposed to the entire universe of online visitors.”60

Experian’s “Mosaic” segmentation product separates consumers into even more specific units, with profiles encompassing socioeconomics, race, geography, political views, vices, insecurities, and online behavior.61 The “Fragile families” segment is composed of recent “Asian and Hispanic” immigrants. “After having made the difficult decision to leave their home country and come to America,” the segment profile reads, “it’s not surprising that they say it’s important to seize opportunities in life. These folks are willing to take risks, confident that they’ll succeed.” The “Hard Times” segment, which Experian describes as “the poorest lifestyle segment in the nation,” is 40 percent African-American. “They do visit some websites frequently,” Experian notes, “especially those that deal with the arts, health, gambling, dating and religion.” They tend to be “moderates who support the Democratic Party.”

Mosaic segments only get more disturbingly detailed. The description of the “Enduring Hardships” cohort—in which “intact families are a rarity”—includes a troubling evaluation of its members’ financial literacy: “They get by with occasional loans and paying only with cash or money orders. . . . Many admit that they know little about finance, distrust banks and worry that carrying credit cards will result in identity theft.” The “Soul Survivors” are African-Americans with “materialistic aspirations despite their downscale standard of living.” And the “Rolling the Dice” cohort are “ever in search of opportunities to make extra money [and] like to gamble.” Experian adds, “Both credit and debit cards are popular in this segment—saving for the future is not.”

Appendix B: Legal Remedies

We are indebted to Ifeoma Ajunwa, Kate Crawford, and Jason Schultz and their article “Limitless Worker Surveillance” in California Law Review for their exploration of this topic. They suggest three types of increasingly inclusive laws to protect privacy:

  1. An Employee Health Information Privacy Act (EHIPA), which would specifically protect the most sensitive employee data, especially those that could arguably fall outside of HIPAA’s jurisdiction, such as wellness and other data related to health and one’s personhood;
  2. A less narrow Employee Privacy Protection Act (EPPA), which would focus on prohibiting specific workplace surveillance practices that extend outside of work-related locations or activities; and
  3. A comprehensive federal information privacy law, similar to approaches taken in the European Union, which would protect all individuals’ privacy to various degrees regardless of whether or not one is at work or elsewhere and without regard to the sensitivity of the data.

In addition, federal and municipal policymakers should enact “right to know” legislation as follows:

  • Workers should have a right to know how decisions related to their pay, mobility, and performance tracking are made. Employers should be required to expressly communicate these functions to workers.
  • Workers should own data related to their work and have the ability to transfer that data to new platforms in order to exercise their right to exit. Data ownership by workers should also include the right to negotiate over the value of the data to the company for which it is produced, and they should reasonably expect to share in the profits generated by that data.
  • Surveillance technology in the workplace should be closely monitored and studied. Employers should be required to disclose the surveillance technology they are using and the data it collects. There should be limits on the use of that personal workplace data.
  • The use of third-party data brokers in hiring and employment should be regulated under the Fair Credit Reporting Act, which enforces compliance with privacy protections and consumer transparency with regard to credit data.

Editorial note: HireVue has informed us that it does not use third-party sources, and that the only data HireVue analyzes is from video content from their video interviews. This report has been updated to reflect that.  

Notes

  1. Julianne Tveten, “Digital Redlining: How Internet Service Providers Promote Poverty,” TruthOut, December 14, 2016, https://truthout.org/articles/digital-redlining-how-internet-service-providers-promote-poverty/.
  2. Julia Angwin and Terry Parris Jr., “Facebook Lets Advertisers Exclude Users by Race,” ProPublica, October 28, 2016, https://www.propublica.org/article/facebook-lets-advertisers-exclude-users-by-race.
  3. Michelle Chen, “Is ‘Big Data’ Actually Reinforcing Social Inequalities?” The Nation, September 29, 2014, https://www.thenation.com/article/big-data-actually-reinforcing-social-inequalities/.
  4. Aleecia McDonald and Lorrie Faith Cranor, “The cost of reading privacy policies,” ISJLP 4 (2008): 543, https://kb.osu.edu/bitstream/handle/1811/72839/1/ISJLP_V4N3_543.pdf.
  5. Shoshana Zuboff, “The Secrets of Surveillance Capitalism,” Frankfurter Allgemeine, May 3, 2016, http://www.faz.net/aktuell/feuilleton/debatten/the-digital-debate/shoshana-zuboff-secrets-of-surveillance-capitalism-14103616.html.
  6. Katy Bachman, “Poll: Targeted Advertising Is Not the Bogeyman,” AdWeek, April 18, 2013, http://www.adweek.com/digital/poll-targeted-advertising-not-bogeyman-updated-148649/.
  7. Amit Datta et al., “Automated Experiments on Ad Privacy Settings: A Tale of Opacity, Choice, and Discrimination,” Proceedings on Privacy Enhancing Technologies 1 (2015): 92–112, http://www.andrew.cmu.edu/user/danupam/dtd-pets15.pdf.
  8. Such a correlation could have serious consequences: an employer Google-searching the name of a prospective hire, for example, would be more likely to see ads about arrest records for a black applicant than for a white one. Latanya Sweeney, “Discrimination in Online Ad Delivery,” Data Privacy Lab, January 28, 2013, https://dataprivacylab.org/projects/onlineads/1071-1.pdf.
  9. In 2012, Reuters reported that Google’s single biggest advertiser was the University of Phoenix, which was spending nearly $400,000 per day on online advertising. Other for-profits like Kaplan, DeVry, and ITT Tech were among Google’s top twenty-five ad-buyers. For-profit colleges built their fortunes targeting low-income people of color and veterans—those who can access the maximum amount of federal loans. In 2011, University of Phoenix and another for-profit, Ashford University, produced more black graduates than any other institute of higher education in America. Meanwhile, for-profit colleges have left their graduates with massive debts and substandard degrees. Ninety-six percent of for-profit students take out student loans; in 2012, students who attended for-profits accounted for 46 percent of all student loan defaults. And a 2013 Harvard University study found for-profit graduates have lower earnings and are more likely to be unemployed than those who attend far less expensive community colleges.

    As The Century Foundation’s Bob Shireman has written, for-profits have used “manipulative sales tactics, hired unqualified faculty, enrolled unprepared students, and hid their misdeeds through forced arbitration clauses, all while leaving students with crushing student loan debts and school executives with bulging bank accounts.” John G. Sperling, the founder of University of Phoenix, died at the age of 93 in 2014; he was a billionaire.

    See: Ananthalakshmi A, “U.S. for-profit colleges spend big on marketing while slashing other costs,” Reuters, November 28, 2012, http://www.reuters.com/article/net-us-forprofitcolleges-analysis-idUSBRE8AR0FJ20121128; Abby Jackson, “Guy who spent $37,000 on a computer-science degree can’t get a job at Best Buy’s Geek Squad,” Business Insider, April 14, 2015, http://www.businessinsider.com/profile-of-corinthian-student-michael-adorno-2015-4#ixzz3XJ82Dkmh; Yasmeen Qureshi, Sarah Gross, and Lisa Desai, “Screw U: How For-Profit Colleges Rip You Off,” Mother Jones, January 31, 2014, http://www.motherjones.com/politics/2014/01/for-profit-college-student-debt/4/; David Deming et al., “For-Profit Colleges,” Future of Children (Spring 2013): 137–63, https://dash.harvard.edu/bitstream/handle/1/12553738/11434354.pdf?sequence=1; Robert Shireman, “The For-Profit College Story: Scandal, Regulate, Forget, Repeat,” The Century Foundation, January 24, 2017, https://tcf.org/content/report/profit-college-story-scandal-regulate-forget-repeat/; and Elaine Woo, “John G. Sperling dies at 93; founder of University of Phoenix,” Los Angeles Times, August 25, 2014, http://www.latimes.com/local/obituaries/la-me-john-sperling-20140826-story.html.

  10. “A Review of the Data Broker Industry: Collection, Use, and Sale of Consumer Data for Marketing Purposes,” Committee on Commerce, Science, and Transportation, December 13, 2013, http://www.jdsupra.com/legalnews/congressional-report-on-the-ways-data-br-44301/.
  11. Experian, ChoiceScore: Improve Targeting and Customer Acquisition in the Untapped Under-banked Population (EXP002353). Cited in ibid.
  12. Brandon Coleman and Delvin Davis, “Report: Florida Payday Lending Law Traps Communities of Color in Endless Cycle of Debt,” Center for Responsible Lending, March 2016, http://www.responsiblelending.org/sites/default/files/nodes/files/research-publication/crl_perfect_storm_florida_mar2016.pdf.
  13. Christine Hauser, “Google to Ban All Payday Loan Ads,” New York Times, May 11, 2016, https://www.nytimes.com/2016/05/12/business/google-to-ban-all-payday-loan-ads.html.
  14. Melkorka Licea, “‘Poor door’ tenants of luxury tower reveal the financial apartheid within,” New York Post, January 17, 2016, https://nypost.com/2016/01/17/poor-door-tenants-reveal-luxury-towers-financial-apartheid/.
  15. Frederic Lardinois, “Humanyze Raises $4M to Help Businesses Better Understand Employee Productivity,” TechCrunch, May 5, 2016, https://techcrunch.com/2016/05/05/humanyze-raises-4m-to-help-businesses-better-understand-employee-productivity/.
  16. Danielle Keats Citron and Frank Pasquale, “The Scored Society: Due Process for Automated Predictions,” Washington Law Review 89 (2014): 1, https://digitalcommons.law.umaryland.edu/cgi/viewcontent.cgi?referer=https://scholar.google.com/&httpsredir=1&article=2435&context=fac_pubs.
  17. Solon Barocas and Andrew D. Selbst, “Big Data’s Disparate Impact,” California Law Review 104 (2016): 671, http://www.cs.yale.edu/homes/jf/BarocasSelbst.pdf.
  18. This idea underlies several new financial tech companies (“fin-tech”). Lenddo, which hopes to target borrowers in the developing world, allows financial institutions to “add a bit of code to their lending application workflow which then allows customers to opt in and share their social media.” LendUp, which advertises itself as “a better alternative to payday loans,” uses a variety of new data points to assess consumer risk. In addition to mining Twitter, Facebook, and LinkedIn, “LendUp looks at how quickly a user scrolls through the lender’s website. Users who jump to large loan amounts, without reading materials on the site, may be high-risk borrowers,” said LendUp CEO Sasha Orloff. “It’s like walking into a bank and screaming, ‘I need money now!’”

    These companies advertise themselves as making loans available to people who might not be approved by traditional risk assessments. “Commercial use of big—or, at least, bigger—data,” wrote LendUp co-founder and CTO Jake Rosenberg, “is one of few ways to enable access and choice for subprime customers who need financial services.” But a 2014 study by the National Consumer Law Center found that loans underwritten by LendUp—and similar fin-tech startups like ZestFinance and ThinkFinance—offer effective annual interest rates of 134 to 749 percent, not significantly better than your average brick-and-mortar payday loan storefront.

    See: Tom Groenfeldt, “Lenddo Creates Credit Scores Using Social Media,” Forbes, January 29, 2015, https://www.forbes.com/sites/tomgroenfeldt/2015/01/29/lenddo-creates-credit-scores-using-social-media/#3428d03a2fde; Persis Yu, Jillian McLaughlin, and Marina Levy, “Big Data: A Big Disappointment for Scoring Consumer Credit Risk,” National Consumer Law Center, March 2014, https://www.nclc.org/images/pdf/pr-reports/report-big-data.pdf; and Dia Kayyali, “Big Data and hidden cameras are emerging as dangerous weapons in the gentrification wars,” Quartz, August 23, 2016, https://qz.com/763900/surveillance-and-gentrification/.

  19. Susie Cagle, “Facebook wants to redline your friends list,” Pacific Standard, August 24, 2015, https://psmag.com/environment/mo-friends-mo-problems-might-have-to-defriend-joey-with-the-jet-ski-bankruptcy.
  20. Credit risk itself—even before the advent of more novel analytic approaches—has been criticized as a means of discrimination. Helen Ladd, “Evidence on Discrimination in Mortgage Lending,” Journal of Economic Perspectives 12, no. 2 (Spring 1998): 41–62, http://www.csus.edu/indiv/c/chalmersk/econ251fa12/evidenceofdiscriminationinmortgagelending.pdf.
  21. Alex Rosenblat, Tamara Kneese, and danah boyd, “Workplace Surveillance,” Data & Society Working Paper, prepared for the Future of Work project supported by Open Society Foundations, October 8, 2014, http://www.datasociety.net/pubs/fow/WorkplaceSurveillance.pdf.
  22. Katie Johnson, “The Messy Link Between Slave Owners and Modern Management,” Forbes, January 16, 2013, https://www.forbes.com/sites/hbsworkingknowledge/2013/01/16/the-messy-link-between-slave-owners-and-modern-management/#79ca56b6317f.
  23. Marshall Steinbaum, Eric Harris Bernstein, and John Sturm, “Powerless: How Lax Antitrust and Concentrated Market Power Rig the Economy against American Workers, Consumers, and the Community,” Roosevelt Institute, 2018, http://rooseveltinstitute.org/powerless/.
  24. Laurence Collins, David R. Fineman, and Akio Tsuchuda, “People analytics: Recalculating the route,” Deloitte 2017 Global Human Capital Trends, February 28, 2017, https://www2.deloitte.com/insights/us/en/focus/human-capital-trends/2017/people-analytics-in-hr.html.
  25. For a list of CornerstoneOnDemand clients, see their website: https://www.cornerstoneondemand.com/clients.
  26. “WeWork’s $20b Dream: The Lavishly Funded Start-Up That Could Disrupt Commercial Real Estate,” CBInsights Research Report, accessed June 3, 2018, https://www.cbinsights.com/research/report/wework-strategy-teardown/#data.
  27. Anthony D’Alessandro, “MoviePass’ Parent Company Helios and Matheson Ups Stake in Monthly Movie Ticket Service,” Deadline, February 16, 2018,
    https://deadline.com/2018/02/moviepass-parent-company-helios-and-matheson-ups-stake-in-monthly-movie-ticket-service-1202291584/.
  28. Kiley M. Belliveau, Leigh Ellen Gray, and Rebecca J. Wilson, “Busting the Black Box: Big Data Employment and Privacy,” Defense Counsel Journal 84, no. 3 (July 2017), https://www.iadclaw.org/publications-news/defensecounseljournal/busting-the-black-box-big-data-employment-and-privacy/.
  29. See HireVue website: https://www.hirevue.com/.
  30. Ifeoma Ajunwa, Sorelle Friedler, Carlos E. Scheidegger, and Suresh Venkatasubramanian, “Hiring by Algorithm: Predicting and Preventing Disparate Impact,” SSRN working paper, 2016, http://sorelle.friedler.net/papers/SSRN-id2746078.pdf.
  31. “Shift Change: ‘Just in Time’ Scheduling Creates Chaos for Workers,” NBC News In Plain Sight, May 2, 2014, https://www.nbcnews.com/feature/in-plain-sight/shift-change-just-in-time-scheduling-creates-chaos-workers-n95881.
  32. “There Will Be Little Privacy in the Workplace of the Future,” The Economist, March 28, 2018, https://www.economist.com/news/special-report/21739426-ai-will-make-workplaces-more-efficient-saferand-much-creepier-there-will-be-little.
  33. Josh Bersin, “People Analytics Is Here with a Vengeance,” Forbes, December 16, 2017, https://www.forbes.com/sites/joshbersin/2017/12/16/people-analytics-here-with-a-vengeance/#21fb89d532a1.
  34. Rachel Feintzeig, “Employees Say Time Tracking Systems Chip Away at Their Paychecks,” Wall Street Journal, May 20, 2018, https://www.wsj.com/articles/employees-say-time-tracking-systems-chip-away-at-their-paychecks-1526821201.
  35. Tom Simonite, “When Your Boss Is an Uber Algorithm,” Technology Review, December 1, 2015, https://www.technologyreview.com/s/543946/when-your-boss-is-an-uber-algorithm/.
  36. Jason Del Rey, “Instacart Is Playing a Game with Its Workers’ Pay—And Will Eventually Suffer for It,” Recode, February 20, 2017, https://www.recode.net/2017/2/20/14503128/instacart-service-fee-tips-controversy.
  37. Alex Rosenblat, “The Network Uber Drivers Built,” Fast Company, January 9, 2018, https://www.fastcompany.com/40501439/the-network-uber-drivers-built.
  38. See Instacart workers’ Facebook group: https://www.facebook.com/Instacartworkers/.
  39. Sydney Brownstone, “Google Shuts Down Amazon Unionization Website,” The Stranger, February 5, 2016, https://www.thestranger.com/blogs/slog/2016/02/05/23534490/google-shuts-down-amazon-unionization-website.
  40. Olivia Solon, “World’s Largest Hedge Fund to Replace Managers with Artificial Intelligence,” The Guardian, December 22, 2016, https://www.theguardian.com/technology/2016/dec/22/bridgewater-associates-ai-artificial-intelligence-management.
  41. Christophe Haubursin (research by Karen Levy), “Automation Is Coming for Truckers. But First, They’re Being Watched,” Vox, November 20, 2017, https://www.vox.com/videos/2017/11/20/16670266/trucking-eld-surveillance.
  42. Esther Kaplan, “The Spy Who Fired Me,” Harper’s, March 2015,
    https://harpers.org/archive/2015/03/the-spy-who-fired-me/.
  43. David F. Carr, “Florida Hospital Tracks Nurses’ Footsteps, Work Patterns,” Information Week, March 17, 2014, https://www.informationweek.com/healthcare/analytics/florida-hospital-tracks-nurses-footsteps-work-patterns/d/d-id/1127700.
  44. Ceylan Yeginsu, “If Workers Slack Off, the Wristband Will Know. (And Amazon Has a Patent for It.),” New York Times, February 1, 2018, https://www.nytimes.com/2018/02/01/technology/amazon-wristband-tracking-privacy.html.
  45. Kaveh Waddell, “The Algorithms That Tell Bosses How Employees Feel,” The Atlantic, September 29, 2016, http://www.theatlantic.com/technology/archive/2016/09/the-algorithms-that-tell-bosses-how-employees-feel/502064/.
  46. Nathan Newman, “UnMarginalizing Workers: How Big Data Drives Lower Wages and How Reframing Labor Law Can Restore Information Equality in the Workplace,” Information Law Institute-New York University/Social Science Research Network, 2016.
  47. Ifeoma Ajunwa, Kate Crawford, and Jason Schultz, “Limitless Worker Surveillance,” California Law Review 105 (2017): 735, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2746211.
  48. Mark Zuckerman, Richard Kahlenberg, and Moshe Marvit, “Virtual Labor Organizing,” The Century Foundation, June 9, 2015, https://tcf.org/content/report/virtual-labor-organizing/.
  49. Alex Rosenblat et al., “Data & Civil Rights: Criminal Justice Primer,” Data & Society Research Institute, The Leadership Conference, and Open Technology Institute, October 30, 2014, http://www.datacivilrights.org/pubs/2014-1030/CriminalJustice.pdf.
  50. Moritz Hardt, “How big data is unfair,” Medium, September 26, 2014, https://medium.com/@mrtz/how-big-data-is-unfair-9aa544d739de.
  51. “There are some data that Evolv simply won’t use, out of a concern that the information might lead to systematic bias against whole classes of people. . . . Marital status? Motherhood? Church membership? ‘Stuff like that,’ [Co-founder Jim] Meyerle said, ‘we just don’t touch.’” Don Peck, “They’re Watching You at Work,” The Atlantic, December 2013, https://www.theatlantic.com/magazine/archive/2013/12/theyre-watching-you-at-work/354681/.
  52. “Big Data: Seizing Opportunities, Preserving Values,” Executive Office of the President, May 1, 2014, https://journalistsresource.org/wp-content/uploads/2014/05/big_data_privacy_report_may_1_2014.pdf?x10677.
  53. Solon Barocas and Andrew Selbst, “Big Data’s Disparate Impact,” California Law Review 104 (September 2016), http://ssrn.com/abstract=2477899.
    Ajunwa et al., “Hiring by Algorithm,” http://friedler.net/papers/SSRN-id2746078.pdf.
  54. Ifeoma Ajunwa, Sorelle Friedler, Carlos Scheidegger, and Suresh Venkatasubramanian, “Hiring by Algorithm: Predicting and Preventing Disparate Impact,” April 1, 2016, http://friedler.net/papers/SSRN-id2746078.pdf.
  55. Oscar Gandy Jr., “Engaging Rational Discrimination: Exploring Reasons for Placing Regulatory Constraints on Decision Support Systems,” Ethics and Information Technology 12, no. 1 (2010): 37–39, https://link.springer.com/article/10.1007/s10676-009-9198-6.
  56. For a bold diagnosis of the justice issues associated with new artificial intelligence technology, see AI Now Institute’s newly released report. Meredith Whittaker, Kate Crawford, Roel Dobbe, Genevieve Fried, Elizabeth Kaziunas, Varoon Mathur, Sarah Myers West, Rashida Richardson, Jason Schultz, and Oscar Schwartz, “AI Now Report 2018,” AI Now Institute, December 2018, https://ainowinstitute.org/AI_Now_2018_Report.pdf.
  57. Aylin Caliskan-Islam et al., “Semantics derived automatically from language corpora necessarily contain human biases,” Princeton University, August 30, 2016, https://www.princeton.edu/~aylinc/papers/caliskan-islam_semantics.pdf.
  58. Stephen Buranyi, “Rise of the Racist Robots—How AI Is Learning All Our Worst Impulses,” The Guardian, August 8, 2017, https://www.theguardian.com/inequality/2017/aug/08/rise-of-the-racist-robots-how-ai-is-learning-all-our-worst-impulses?CMP=fb_gu.
  59. Equifax, “Economic Cohorts Segmentation Analysis,” provided to the City of Montrose, Colo., released in Montrose City Council minutes, October 20, 2014, http://www.cityofmontrose.org/ArchiveCenter/ViewFile/Item/2042.
  60. “Best Practices in Segmentation,” Equifax, https://assets.equifax.com/assets/usis/best_practices_in_segmentation_eBook.pdf.
  61. “Experian Mosaic USA Group and Segment Listing,” String Automotive Marketing, http://resources.stringautomotive.com/mosaic_profiles.