(by Andi Wilt, Colorado Law 3L)
When is the last time you Googled someone’s name? There are many reasons why you might have done that. You could have been trying to learn more about someone who was about to interview you for a position, or maybe you were about to interview them, or even deciding whom you were going to interview in the first place. An applicant’s online presence is important to many employers: one source indicates that 90% of executive recruiters say they do online research on applicants, and up to 70% of employers who have used LinkedIn say they have decided not to hire someone based on something they learned about the applicant online.
So what’s the problem with a little online research about a person? Professor Latanya Sweeney found that a Google search for a black-identifying name is 25 percent more likely to be accompanied by an arrest-related ad. Professor Sweeney explored the connection between “[b]lack-identifying” and “[w]hite-identifying” first names, which are “those for which a significant number of people have the name and the frequency is sufficiently higher in one race than another.”
These discriminatory search results can be very problematic. As Professor Sweeney explains:
Perhaps you are in competition for an award, an appointment, a promotion, or a new job, or maybe you are in a position of trust, such as a professor, a physician, a banker, a judge, a manager, or a volunteer, or perhaps you are completing a rental application, selling goods, applying for a loan, joining a social club, making new friends, dating, or engaged in any one of hundreds of circumstances for which an online searcher seeks to learn more about you. Appearing alongside your list of accomplishments is an advertisement implying you may have a criminal record, whether you actually have one or not. Worse, the ads don’t appear for your competitors.
Facebook’s Questionable Advertising Practices
In another example of discriminatory advertising practices, a recent ProPublica report revealed how Facebook allows advertisers to target, or exclude, users based on their race. ProPublica tested these practices by purchasing an ad under the housing category of Facebook’s advertising service that excluded users with an African-American, Asian American, or Hispanic “ethnic affinity.” Facebook approved the ad in 15 minutes—in seeming conflict with both Facebook’s policies prohibiting “use of targeting options to discriminate against, harass, provoke, or disparage users” and the Fair Housing Act, which prohibits discriminatory housing practices involving advertising.
When ProPublica asked Facebook why this option is available to advertisers, Steve Satterfield, the privacy and public policy manager at Facebook, said that “Facebook began offering the ‘Ethnic Affinity’ categories within the past two years as part of a ‘multicultural advertising’ effort.” He further stated, “‘Ethnic Affinity’ is not the same as race, which Facebook does not ask its members about. Facebook assigns members an ‘Ethnic Affinity’ based on pages and posts they have liked or engaged with on Facebook.”
Just because Facebook does not specifically ask users about their race does not mean that this information cannot be readily determined in other ways. For example, Ethnic Technologies recently developed software specifically for the purpose of “predicting an individual’s ethnic origins based on their full names, addresses and ZIP codes . . . build[ing] algorithms based on patterns in names from various ethnic groups.”
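To see how little machinery such an inference requires, consider a minimal sketch in Python. The actual Ethnic Technologies product is proprietary, so the data, group labels, and threshold below are invented for illustration only; the sketch simply applies Professor Sweeney’s idea that a name is “identifying” when its frequency is sufficiently higher in one group than another.

```python
# Hypothetical illustration only: the real software is proprietary, and the
# frequency table and threshold here are invented for this sketch.

# Toy table: fraction of each (fictional) group's population bearing the name.
NAME_FREQUENCIES = {
    "latisha": {"group_a": 0.020, "group_b": 0.001},
    "emily":   {"group_a": 0.002, "group_b": 0.015},
    "jordan":  {"group_a": 0.008, "group_b": 0.007},
}

def infer_group(first_name, ratio_threshold=5.0):
    """Return the group whose name frequency dominates the others by at
    least ratio_threshold, or None if the name is not strongly identifying."""
    freqs = NAME_FREQUENCIES.get(first_name.lower())
    if freqs is None:
        return None  # name not in the table: no inference possible
    ranked = sorted(freqs.items(), key=lambda kv: kv[1], reverse=True)
    top, runner_up = ranked[0], ranked[1]
    if runner_up[1] == 0 or top[1] / runner_up[1] >= ratio_threshold:
        return top[0]
    return None  # frequencies too close: name identifies no single group
```

On this toy data, “Latisha” is assigned to group_a (a 20:1 frequency ratio), while “Jordan” yields no inference because its frequencies are nearly equal. The point is that a user never states a protected characteristic; an ordinary lookup over public name-frequency statistics supplies it anyway.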
A First Amendment Connection
However troubling, the discriminatory ads at issue in both the Google search results and the Facebook ad-placement examples may be connected to First Amendment free speech protections. A court has recognized that Google’s search results and third-party ad placements are protected free speech under the First Amendment. Although that ruling arose out of a website owner’s complaint that Google was violating antitrust laws rather than anti-discrimination laws, the precedent it created may influence future court rulings regarding online discriminatory practices. Just as the First Amendment protects Google’s right to treat the complaining website owner unfavorably in its search results, courts may find that these First Amendment protections extend to discriminatory arrest-related ads, or to allowing advertisers to select, based on race, which of a company’s users may view an ad.
Should Google’s search results and advertisement placements be protected by the First Amendment? Should Facebook be able to claim First Amendment protection in allowing its advertisers to customize which users may view its ads based on characteristics such as race? If you answered yes to one of the above and no to the other, how are the scenarios different? And should companies such as Google and Facebook be able to correlate data on non-protected characteristics, such as your name and ZIP code, with protected characteristics, such as your race, in designing their algorithms for search results or advertisement options?