Amazon was founded on July 5, 1994, and launched its online store in 1995, letting people buy books from the comfort of their homes. Twenty-five years after its inception, Amazon now sells everything from taco holders shaped like dinosaurs to tongue brushes that humans can use to lick their cats. And you’d have to be living under a rock to not know about Amazon.
But what did people think of Amazon in its early days—the days before the tongue brushes? Today we’ve got a sample from the mid-90s before founder Jeff Bezos was a billionaire.
In November of 1995, Knight-Ridder distributed an article that was published in newspapers around the country explaining that you can find almost any book at this “Internet store” called Amazon.
There’s a big, new bookstore in town, and there’s a catch—you won’t find it on any Seattle street map. So if you want to wander down its aisles and peruse the selection, you’ll have to hook up to the Internet.
Of course, hooking up to the internet was a much more novel experience in 1995. But if you had a connection, and millions of Americans were getting online in the mid-90s, you had access to over 1 million titles.
The Knight-Ridder article noted a few things that might seem strange to people in the year 2019. First, you could pay by credit card online, or you could call a toll-free number and give your credit card number over the phone. You could even fax the credit card info if that was your thing. Second, shipping was $3 per order plus $0.95 per book. Today, Amazon has free shipping for all orders over $25 and for anyone who subscribes to the company’s Prime membership.
But what did people think of this new service on the so-called Information Superhighway? The first thing almost everyone mentioned was the impressively wide selection of books.
From the October 22, 1995 issue of the Tallahassee Democrat newspaper:
In a test of the company’s abilities, a search was made for a little-known John Steinbeck book, “The Sea of Cortez.” Within seconds, the Amazon.com search capabilities popped up the title as available.
It may seem ridiculously mundane these days, but being able to find a rare book took quite a bit more effort in the era before Amazon’s arrival. The best you could do was ask your local bookstore to order it for you, but if it was out of print, you might be out of luck. One of the truly revolutionary things about Amazon, at least from this nerd’s perspective, was the ability to find used books on the site.
The Wall Street Journal published an article about Amazon on May 16, 1996, describing “Jeffrey Bezos” as a “whiz-kid programmer on Wall Street” before he opened the online retailer. The people quoted in the article described the convenience of being able to order from anywhere, and said customers were incredibly loyal.
From the WSJ:
Mr. Bezos says 60% of his orders come from repeat customers. “It’s not in my nature to be hip, but Amazon is the finest bookstore I’ve ever been to,” says Don K. Pierstorff, a 60-year-old college professor in Costa Mesa, Calif., who says he has placed 12 orders during the past several months.
In the early days of Amazon, Bezos was also constantly noting that he wasn’t going to put traditional brick-and-mortar bookstores out of business.
“We are not really competing with physical bookstores,” Bezos told the Christian Science Monitor in September of 1996. “The key is that people like to get out of their houses. I still go to physical bookstores, and I’m not going to stop. I even buy books there. I like the tactile experience.”
People like to get out of their homes? Speak for yourself, Jeff. Sorry, speak for yourself Jeffrey.
By 1997, there were plenty of skeptics who thought that Amazon wouldn’t be able to stick around. The company went public on May 15, 1997, and the naysayers were quick to point out any perceived weakness in the company. George Colony from Forrester Research referred to the company as “Amazon.toast.” The Wall Street Journal ran with the headline “Amazon.bomb” in 1999 after the company’s stock tanked.
And Slate went with the headline “Amazon.Con” for an article on January 5, 1997 that was meant to ridicule how difficult Amazon was to use compared with your neighborhood bookstore. The byline for that Slate piece was shared by two writers, Jonathan Chait and Stephen Glass. Yes, the same Jonathan Chait who supported the “liberal case” for invading Iraq, and Stephen Glass, one of the most famous journalist hoaxers of all time—so famous, in fact, they even made a movie about him in 2003 called Shattered Glass.
What did these two great minds produce? Some zingers that would be considered lame by even elementary schoolyard standards:
In fact, Amazon’s “megawarehouse” in downtown Seattle contains just 200 or so titles. Any other book must be obtained from a wholesale distributor or the publisher. This is exactly what any traditional bookstore does when it doesn’t have a book in stock. The difference is that traditional bookstores start out with a lot more than 200 titles in stock. “Earth’s Biggest Bookstore”? More like “Earth’s Smallest.”
Another complaint from Chait and Glass was that ordering a book from Amazon took way too many steps:
After clicking your purchases into a “shopping cart,” you are directed to a “secure Netscape server” that will encrypt your credit-card information. After this is done, you are told: “Finalizing Your Order Is Easy.” Nothing could be further from the truth. Lower down in the verbiage, Amazon concedes, “Though we have tried hard to make this form easy to use, we know that it can be quite confusing the first time.” Amazon users have to page through screen after screen of details about shipping charges, refund rules, and disclaimers about availability and pricing. Then you are told to allow between three and seven days for delivery after your book leaves Amazon’s warehouse. “Upgrading to Next Day Air does NOT [their emphasis] mean you’ll get your order the next day.”
Total online time from when we accessed Amazon’s home page to when we completed the book order: 37 minutes and 12 seconds. It would be shorter once you got the hang of it.
You can’t please everyone, I suppose.
But Bezos has had the last laugh, it would seem. Not only is Bezos the wealthiest person in the world, with a net worth of over $155 billion; Amazon currently controls 42 percent of the dead-tree book market, 88.9 percent of the ebook market, and half of all online sales in the U.S. According to the latest numbers, Amazon controls 7.7 percent of all U.S. retail, online and off. And with its purchase of Whole Foods in 2017, it’s now the fifth-largest seller of groceries in the country. And, as of last year, Amazon Web Services controlled 40 percent of the cloud market.
In 2017, the Washington County Sheriff’s Office, just outside Portland, Oregon, wanted to find a man covered in dollar bills. The unidentified man, whose profile photo showed him lying on a bed covered in paper money, had been making concerning posts on Facebook, the sheriff’s office said. So the department ran the image through a powerful new facial recognition and analysis system built by Amazon, called Rekognition, comparing it with the department’s booking photos. According to Chris Adzima, a senior systems information analyst, the officers “found a close to 100% match.”
Adzima touted the efficacy of Rekognition in a guest blog post for Amazon. He had other examples of its usefulness, too: a suspect who was wanted for allegedly stealing from a hardware store, another who’d been captured on surveillance cameras using a credit card later reported as stolen. Overall, Adzima wrote, Rekognition represented “a powerful tool for identifying suspects for my agency. As the service improves, I hope to make it the standard for facial recognition in law enforcement.”
For the most part, that’s how Rekognition has been introduced and sold: as a wondrous new tool designed to keep the public safer; Amazon’s one-stop superpower for law enforcement agencies.
But superpowers tend to come with unintended consequences, and Rekognition, in particular, has some prodigious—and highly concerning—blind spots, especially around gender identity. A Jezebel investigation has found that Rekognition frequently misgenders trans, queer and nonbinary individuals. Furthermore, in a set of photos of explicitly nonbinary individuals, Rekognition misgendered all of them—a mistake that’s baked into the program’s design, since it measures gender as a binary. In itself, that’s a problem: it erases the existence of an already marginalized group of people, and, in doing so, creates a system that mirrors the myriad ways that nonbinary people are left out of basic societal structures. What’s more, as Rekognition becomes more widely used, among government agencies, police departments, researchers and tech companies, that oversight has the potential to spread.
This isn’t a new problem. As Vice wrote earlier this year, “automatic gender recognition,” or AGR, has long been baked into facial recognition and analysis programs, and it has virtually always been implemented in a way that erases trans and nonbinary people. Jezebel’s investigation shows that these same issues exist deep within Rekognition.
The program is designed around Amazon’s binary assumptions about gender identity, a limitation that becomes even more disturbing as Amazon’s software gets silently integrated into our lives. On Github, a platform where software developers maintain their code, there are 6,994 instances where Rekognition and gender are mentioned together. These projects represent a future where Rekognition’s technology—and its assumptions about gender—forms the backbone of other apps and programs; thousands of basic systems baked into society, all silently designed to flatten gender identity. (A spokesperson for Amazon initially agreed to speak to Jezebel about our research, then failed to respond to five follow-up emails and a phone call.)
If systems are not designed to include trans people, inclusion becomes an active struggle: individuals must actively fight to be included in things as basic as medical systems, legal systems or even bathrooms. This creates space for widespread explicit discrimination, which has (in, for example, the United States) resulted in widespread employment, housing and criminal justice inequalities, increased vulnerability to intimate partner abuse and, particularly for trans people of colour, increased vulnerability to potentially fatal state violence.
These harms have the potential to be even more insidious as the technology becomes embedded in our day-to-day lives. What happens when you can’t use a bathroom because an AI lock thinks that you shouldn’t be there? What happens to medical research or clinical drug trials when a dataset misgenders or omits thousands of people? And what happens when a cop looks at your license and your machine-predicted gender doesn’t match what they see? A world governed by these tools is one that erases entire populations. It’s a world where individuals have to conform to be seen.
When Amazon introduced Rekognition in November 2016, it was depicted as a fun search engine—a service meant to help users “detect objects, scenes, and faces in images.” The product was relatively non-controversial for its first two years of life, but that changed in May of 2018, when the ACLU of California revealed Rekognition was being sold to police departments as a fundamentally different—more serious, and more powerful—tool. Amazon had been aggressively marketing the product as a potent surveillance tool, the ACLU reported, suitable for both government agencies and private companies.
It’s not just Amazon, of course: as companies, governments, and technologists tout the business and security insights gleaned from machine intelligence, public debate over the rollout of such software has escalated—particularly as examples of gross misuse emerge. The Chinese government has used facial recognition to profile and track Uighurs, a persecuted, largely Muslim minority population. In the U.S., CBP continues to tout the efficacy of biometric surveillance both on the U.S.-Mexico border and within the country’s interior, where airports nationwide are beginning to roll out facial recognition-based check-in.
In April of this year, Microsoft’s president said the company had rejected a request from a law enforcement agency in California to install its facial recognition system in officers’ cars and body cameras due to concerns of false positives, particularly for women and people of color, since the system had largely been tested on photos of white men. Those concerns are proving to be well-founded: last year a study from MIT and the University of Toronto found that the technology tends to mistake women, especially those with dark skin, for men. Separately, an investigation by the ACLU found that Rekognition falsely matched 28 members of Congress with booking photos.
Meanwhile, though, facial recognition products continue to be scooped up by police departments and private companies, and used in opaque, unregulated, and increasingly bizarre ways. Amazon clearly wants to be the industry leader: the company’s shareholders recently voted down a proposal by activist investors to ban the sale of facial recognition to governments and government agencies.
“When we’re thinking about imperfect systems like facial recognition there are two distinct concerns,” Daniel Kahn Gillmor, a senior staff technologist for the ACLU’s Speech, Privacy and Technology Project, says. “One of them is that the systems might not be good enough, that they’ll sweep up the wrong people, that they’re biased against certain populations because they haven’t been trained on those populations, that they’re more likely to make mistakes or put people in certain categories based on biases that are built in.”
But on the other hand, Gillmor says, “If there are no technical problems with the machinery,” which might be the case in a decade or so, “we have another set of problems, because of the scale at which this technology can be deployed.” Facial recognition and analysis, Gillmor says, “provides a mechanism for large scale potentially long-lasting surveillance” at a scale, he says, “that humans have never really been able to do before.”
That’s why the technology is a concern, regardless of whether it’s extremely accurate or extremely inaccurate, according to Gillmor. “If it’s not good enough, it’s a problem,” he told us. “And it’s a problem if it is good enough. It’s always a problem.”
To understand how Amazon’s software analyzes trans and nonbinary individuals, we built a working model. First, though, we had to ask whether we could responsibly look into Rekognition’s gender analysis systems at all.
Researching corporate machine learning algorithms is fraught with ethical concerns. Because of their ability to optimize in real time, the act of testing a system inevitably ends up refining it. And by calling out flaws in its design, you may be tacitly condoning an update to the system rather than recommending that the entire premise of Automated Gender Recognition should be reconsidered.
And everything that we load into the program becomes part of Amazon’s large corpus of data. According to Amazon’s FAQ, Rekognition stores image and video inputs in order to improve the software; Rekognition data becomes Amazon’s training data. This means, for instance, that Happy Snap, a Rekognition-powered find-and-seek adventure mobile app designed for kids, is likely inadvertently training the Washington County Sheriff’s Office’s surveillance operations. (Happy Snap did not immediately return an email requesting comment.)
You can opt out of this default behavior, but the process isn’t exactly easy or clear. Before we felt comfortable using Rekognition to investigate its AGR, we wanted to ensure that our research would not inadvertently optimize a surveillance system that we’re not sure should have existed in the first place. To opt out, Jezebel contacted Amazon’s Technical Support, which in turn contacted the Rekognition engineers. After two weeks we received confirmation that the images we would use to test the AGR components of Rekognition would not be used to optimize the system. Only then did we feel comfortable starting our experiment.
Amazon provides developers with detailed documentation for how to build a facial analysis and recognition system using their Rekognition software—a troubling fact given how easily such technology can be weaponized. We used this documentation to build a version of Rekognition that compared the gender predictions and confidence thresholds across hundreds of photos of nonbinary, queer, and trans individuals with binary ones.
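The article doesn’t publish its harness, but the core call here is Rekognition’s DetectFaces operation. The sketch below is a minimal illustration using boto3’s real `detect_faces` API, not Jezebel’s actual code; the image path and region are placeholders, and `extract_gender` and `analyze_image` are hypothetical helpers.

```python
"""Minimal sketch (not Jezebel's actual harness) of reading Rekognition's
binary gender attribute from a facial-analysis response via boto3."""


def extract_gender(face_detail):
    # Each entry of the FaceDetails list returned by detect_faces with
    # Attributes=["ALL"] includes a Gender dict holding a binary Value
    # ("Male" or "Female") and a Confidence percentage -- the response
    # schema itself has no nonbinary option.
    gender = face_detail["Gender"]
    return gender["Value"], gender["Confidence"]


def analyze_image(path, region="us-east-1"):
    """Send one local image to Rekognition and print its gender predictions.
    Requires boto3 and configured AWS credentials; path/region are
    placeholders."""
    import boto3

    rek = boto3.client("rekognition", region_name=region)
    with open(path, "rb") as f:
        resp = rek.detect_faces(Image={"Bytes": f.read()},
                                Attributes=["ALL"])
    for face in resp["FaceDetails"]:
        value, confidence = extract_gender(face)
        print(f"{value}: {confidence:.1f}%")
```

Calling `analyze_image("photo.jpg")` would print one prediction per detected face; whatever the subject’s identity, the API can only ever answer “Male” or “Female.”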
Our nonbinary and trans dataset was sourced from the Gender Spectrum Collection, a stock photo library of trans and gender non-conforming individuals created by Broadly, a former Vice subsite focusing on gender and identity.
Zackary Drucker, the photographer and artist who shot the photos for Vice, explains to Jezebel that the stock photos were created “to fill a void of images of trans and nonbinary people in everyday life. Stock photographs have always neglected to include gender diverse people, and that has further perpetuated a world in which trans people are not seen existing in public life.”
For our purposes, the GSC also helped us draw a very direct comparison between how Rekognition treats gender-conforming versus non-conforming individuals. We sourced our binary dataset by using the captions from the Broadly photos to identify visually similar photos of gender-conforming individuals on Shutterstock. For instance, if a photo from Broadly’s dataset was captioned “a transmasculine person drinking a beer at a bar” we would scrape Shutterstock for photos of “a man drinking a beer at a bar.”
Of the 600 photos we analyzed, Rekognition was on average 10% more confident in its AGR scores on the Broadly dataset than on the Shutterstock dataset. In spite of those confidence scores, however, self-identified transmasculine and transfeminine individuals were misgendered far more frequently: misgendering occurred in 31% of the images containing a self-identified trans person, compared with just 4% of the images in the Shutterstock dataset of binary individuals.
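The aggregate comparison amounts to a simple tally. The snippet below is an illustrative reconstruction with toy data, not the article’s analysis code; the mapping from self-identified labels to Rekognition’s binary output is our assumption.

```python
"""Toy sketch of the misgendering-rate tally described above."""


def misgender_rate(records):
    # records: (self_identified_label, rekognition_prediction) pairs.
    # Assumed mapping: transmasculine -> "Male", transfeminine -> "Female".
    expected = {
        "man": "Male", "transmasculine": "Male",
        "woman": "Female", "transfeminine": "Female",
    }
    misses = sum(1 for label, pred in records if expected[label] != pred)
    return misses / len(records)


# Illustrative sample, not real study data: one miss out of four photos.
sample = [
    ("transfeminine", "Male"),    # misgendered
    ("transmasculine", "Male"),   # correct
    ("transfeminine", "Female"),  # correct
    ("transmasculine", "Male"),   # correct
]
print(misgender_rate(sample))  # 0.25
```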
Interestingly, misgendering was also inconsistent across individuals. Broadly’s dataset contains multiple photos of the same people; there were instances where Rekognition’s AGR correctly gendered a person it had previously misgendered.
Though our datasets were admittedly limited, the difference in how Rekognition performs on data featuring trans and nonbinary individuals is alarming. More concerning, however, is that Rekognition misgendered 100% of the explicitly nonbinary individuals in the Broadly dataset. This isn’t a matter of bad training data or a technical oversight, but of an engineering vocabulary that fails to address the population. That the software isn’t built with the capacity or vocabulary to treat gender as anything but binary suggests that Amazon’s engineers, for whatever reason, failed to see an entire population of humans as worthy of recognition.
Alyza Enriquez is a social producer at Vice and a nonbinary person, who participated in putting together the Gender Spectrum Collection and is featured in some of its photos. They weren’t surprised to learn that Rekognition doesn’t recognize nonbinary identities.
“It’s obviously disconcerting,” they told us. “When you talk about practical applications and using this technology with law enforcement, that feels like a dangerous precedent to set. But at the same time, I’m not surprised. We’re no longer shocked by people’s inability to recognize that nonbinary identity exists and that we don’t have to conform to some sort of category, and I don’t think people are there yet.”
Morgan Klaus Scheuerman is a PhD student in information science at the University of Colorado-Boulder. He and his collaborators have been conducting a large-scale analysis of how individuals of different gender identities are classified, correctly or otherwise, in AGR systems, particularly large-scale, cloud-based systems like Amazon and Microsoft.
Scheuerman’s research with his collaborators and his advisor, Jed Brubaker, an assistant professor in the Information Science department, found many of the same issues that we identified. They, too, struggled with whether it was ethical to test AGR systems at all. “We were going to analyze a specific system, and we decided not to,” Scheuerman says, “because their TOS said it was going to retain these images. We had to do the same tradeoff: what is the benefit of this research versus the impact it could have? Even if you can take your data back out, it’s probably embedded in the model already.”
In their lab, Brubaker and Scheuerman and their collaborators look at all kinds of areas of uncertainty, where tech systems struggle to respond adequately to the shades of grey in people’s lives. “We look at gender, death, breakups — scenarios where systems don’t understand our social lives appropriately,” Brubaker explains.
Those shades of grey are often disregarded outright in building new technologies. Scheuerman points out that AGR, with all of its problems, is the continuation of a vastly oversimplified classification system that’s existed for a very long time, one that sorts people into “male” and “female” and has very little room for any other identity.
“Even as people become more aware of differences in gendered experiences, there’s still this view that when that difference occurs, it’s an outlier,” Scheuerman says. “Or the term that engineers use, an ‘exception case,’ a ‘boundary case.’”
The “edge cases,” Scheuerman says, are often disregarded when building new technologies because, as he puts it, “they make things more technically difficult for people to accomplish. So we have this system whose main task is to identify gender. [S]tarting with a clear binary is technically the most feasible. But then when you say we want to include nonbinary, that makes the entire task obsolete. Basically, the data that you’re using then makes men and women no longer fall into a specific category.”
Brubaker, Scheuerman’s advisor, points out that AGR is also engineered towards how gender looks, not how people experience it.
“The systems have a strong bias towards how gender is presented, not how it’s experienced,” he says. “These are systems whose only way of understanding the world is through vision. They just see. That’s all they do. So if you think of gender as something that can only be seen, not self-reported, that’s a very narrow, particular, and in some cases very bizarre way to think about gender.”
Zackary Drucker, the photographer who shot the photos we used in testing Rekognition, said our use of the photos is “further evidence that the collection has utilitarian purpose in the world far beyond what we may have imagined. On the other hand, it’s so alarming that technology may be used to identify people.”
“There’s this element to trans and nonbinary people just throwing a wrench into the system that disproves its accuracy,” she added. “If this system is equally confident that one person is one gender or the other in different situations, that’s evidence that this technology is as false as the gender binary.”
In the past, Amazon has responded to critiques of its product by arguing that critics fail to understand the nuances of the tech. In a response to a New York Times article, for instance, Amazon stated: “Facial analysis and facial recognition are completely different in terms of the underlying technology and the data used to train them. Trying to use facial analysis to gauge the accuracy of facial recognition is ill-advised, as it’s not the intended algorithm for that purpose (as we state in our documentation).”
More specifically, facial analysis is meant to predict human attributes like age, emotion, or gender, whereas recognition is about matching two faces. Our investigation focused specifically on the gender predictions from Rekognition’s facial analysis algorithms. Automatic Gender Recognition (AGR) is the subfield of facial analysis that seeks to identify the gender of individuals from images or videos.
Researchers have historically built these systems by analyzing the frequency of a human voice, looking at the texture of skin, measuring facial alignment and proportions, and analyzing the shape of breasts. AGR systems, in other words, have been designed under the presumption that gender is physiologically rooted: that the body is the source of a binary gender, male or female.
This design shortcoming has real consequences, and the problems will only be compounded as these systems are added to other surveillance infrastructure.
This isn’t just about calling out the evident biases of Amazon’s engineers. The real concern here is that these biases are so deeply embedded into the very structure of an institution as powerful as Amazon, which deploys technology for our schools, businesses, and governments. As writer and educator Janus Rose points out, “If we allow these assumptions to be built into systems that control people’s access to things like healthcare, financial assistance or even bathrooms, the resulting technologies will gravely impact trans people’s ability to live in society.”
Rose points out that AGR has already been used to bizarre effect:
Evidence of this problem can already be found in the wild. In 2017, a Reddit user discovered a creepy advertisement at a pizza restaurant in Oslo, which used face recognition to spy on passers-by. When the ad crashed, its display revealed it was categorising people who walked past based on gender, ethnicity and age. If the algorithm determined you were male, the display would show an advertisement for pizza. If it read you as female, the ad would change to show a salad.
The problem is, in other words, both larger and more intimate than just how it might be used by police. We can envision, for instance, a world in which an AGR system is implemented in an office to prohibit someone of the “wrong” gender from entering a bathroom. With this technology, it’s entirely possible that a trans individual would be unable to use a single bathroom in the building.
We reached out to all the companies using Rekognition for facial analysis as listed on Amazon’s information page for the product. Only two got back to us in a meaningful way. One was Limbik, a startup that uses machine learning to help companies understand whether their videos are being watched, and by whom. They told us that Amazon’s binary gender settings posed a problem for them: “We have noticed this as an issue for us, as the better we can tag videos with proper tags the more accurate we can be with predictions and improvement recommendations. It would be best if we could get this type of information as it would help us categorize videos better and help with prediction.”
Without that information, Limbik added, they have to specify to customers what their analysis, using Rekognition, does and doesn’t do. “Since Rekognition only returns a binary value for gender, we have to make sure that, to customers, we specify that it is biological sex that is examined and not gender specifically and that it isn’t perfect. We have internal conversations about this issue and have discussed remedies but as we can have upwards of 1000 tags connected to a video coming from other Rekognition services, our internal tagging methods, manual human tagging and other methods, we haven’t found a good way to address this.”
Arguably, no one will find a “good way to address this,” or one that even remotely repairs the potential harms that Rekognition could do to vulnerable populations. That Amazon is pushing software like this at a time when trans rights are under full-on assault by the Trump administration indicates that Amazon, intentionally or not, is working against the interests of some of society’s most vulnerable members. This is not a theoretical concern. It’s an urgent one, and the dystopic implications of how it might be used are getting more real every day.
Correction: An earlier version of this post misidentified Os Keyes as a PhD. They are a PhD student.
Yesterday, as the video game industry’s attention was focused squarely on the final day of the E3 convention in Los Angeles, Amazon’s video game division quietly laid off dozens of employees.
Amazon Game Studios, which is currently developing the online games Crucible and New World, told affected employees on Thursday morning that they would have 60 days to look for new positions within Amazon, according to one person who was laid off. At the end of that buffer period, if they fail to find employment, they will receive severance packages.
Amazon also canceled some unannounced games, that person told Kotaku.
The company wouldn’t say exactly how many employees it laid off, but confirmed the news when reached by Kotaku today.
“Amazon Game Studios is reorganizing some of our teams to allow us to prioritize development of New World, Crucible, and new unannounced projects we’re excited to reveal in the future,” an Amazon spokesperson said in a statement. “These moves are the result of regular business planning cycles where we align resources to match evolving, long-range priorities. We’re working closely with all employees affected by these changes to assist them in finding new roles within Amazon. Amazon is deeply committed to games and continues to invest heavily in Amazon Game Studios, Twitch, Twitch Prime, AWS, our retail businesses, and other areas within Amazon.”
Amazon first began ramping up its video game division in 2014, although things haven’t gone well for Amazon Game Studios so far, with the company canceling its first game Breakaway back in 2017. Some of its prestige hires, like Far Cry 2 director Clint Hocking and Portal director Kim Swift, left the company before shipping any games, and Amazon’s massive investment in the Crytek engine to create Lumberyard has not made much of a splash.
While Amazon’s efforts to get into video games haven’t panned out, there’s one arena where the company has managed to release multiple titles: its own warehouses.
According to a report from The Washington Post, Amazon has a number of its warehouses outfitted with optional games playable from screens at workers’ desks. The games, which come with colorful names like Dragon Duel and CastleCrafter, are played by completing the tasks warehouse employees spend most of their time doing: pulling packages from shelves and sorting them into outgoing bins. As more tasks are completed, workers progress towards set goals that reward them with badges, points, or “Swag Bucks” that can be redeemed for company merch.
The ultimate goal seems to be gamifying the drudgery of working in an Amazon warehouse—where conditions have reportedly been abysmal for some time. As anyone familiar with video games knows, grinding for points, progress and rewards can go a long way towards making menial tasks more bearable—fun, even—but it’s only fun if it’s fair, and it’s only fair if players know how the system works. Which, the Post notes, isn’t necessarily the case, since there’s nothing stopping an employer from bumping up quotas without telling anyone in an attempt to up worker productivity.
Amazon has a well-documented history of treating laborers poorly, and Amazon leadership has an equally well-documented disinterest in using the company’s vast resources to significantly improve their circumstances. Games may be useful in improving worker happiness, but that’s only sustainable if workers’ needs elsewhere are being met. Otherwise, the games are as predatory as any loot box scam.
Fun fact: Snippets of your Alexa conversations may be heard and read by thousands of Amazon employees. According to recent reports, Amazon has an international team of employees who work to help Alexa better understand your many commands and develop new ways for the AI to interact with users. This requires them to listen to snippets of what your Echo speakers and other Alexa devices are recording. Sounds eerily familiar to us.
Not only are real people listening to you talk to (and around) Alexa, but the conversations they listen in on are being transcribed and annotated by Amazon’s employees. These transcriptions are then used to “teach” the Alexa AI to recognize more commands.
If you’re sketched out by this, we understand. Especially since what you say is only kind-of, sort-of associated with your account, as Bloomberg describes:
“A screenshot reviewed by Bloomberg shows that the recordings sent to the Alexa reviewers don’t provide a user’s full name and address but are associated with an account number, as well as the user’s first name and the device’s serial number.”
While you’ll never be able to stop Amazon employees from listening in on whatever you say to your Alexa, you can at least turn off any features that make this easier. For example:
1. Open the Alexa mobile app
2. Tap the Menu button in the upper-left of the screen
3. Go to Alexa Account > Alexa Privacy > Manage how your data improves Alexa
4. Turn off "Help develop new features" and "Use messages to improve transcriptions" for all profiles on your account
Bloomberg notes that Amazon's team might still analyze your Alexa recordings "by hand," but this at least opts you out of some facet of Amazon's voice study. The only real solution at this point is to ditch your Amazon devices altogether, but adjusting these privacy settings should help keep unnecessary third parties out of your business.
If you're already a Nintendo Switch Online customer, you can still take advantage of the deal. Instead of starting now, the free months will stack on top of the time you've already paid for. The deal is available through the end of September.
If you have a Nintendo Switch and Amazon Prime, it's pretty much a no-brainer.
Goodbye Big Five

Reporter Kashmir Hill spent six weeks blocking Amazon, Facebook, Google, Microsoft, and Apple from getting her money, data, and attention, using a custom-built VPN. Here's what happened.
Week 6: Blocking them all
A couple of months ago, I set out to answer the question of whether it’s possible to avoid the tech giants. Over the course of five weeks, I blocked Amazon, Facebook, Google, Microsoft, and Apple one at a time, to find out how to live in the modern age without each one.
To end my experiment, I’m going to see if I can survive blocking all five at once.
I'm not just boycotting their products: a technologist named Dhruv Mehrotra designed a special network tool that prevents my devices from communicating with the tech giants' servers, meaning that ads and analytics from Google won't work, Facebook can't track me across the internet, and websites hosted by Amazon Web Services, or AWS, hypothetically won't load.
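Mehrotra's tool was custom-built, and the article doesn't describe its internals. But the core idea is simple: refuse any connection whose destination address falls inside a cloud provider's published IP ranges (Amazon, for instance, publishes its ranges in a file called ip-ranges.json). Here is a minimal sketch of that idea; the CIDR ranges below are illustrative placeholders, not Amazon's real ranges.

```python
# Sketch of the blocking idea behind a "block the tech giants" network tool:
# resolve a hostname, then refuse to connect if its address falls inside a
# blocked CIDR range. The ranges here are placeholders for illustration only.
import ipaddress
import socket

BLOCKED_RANGES = [
    ipaddress.ip_network("52.0.0.0/8"),     # placeholder "cloud provider" range
    ipaddress.ip_network("34.192.0.0/10"),  # placeholder "cloud provider" range
]

def is_blocked(ip: str) -> bool:
    """Return True if the address falls inside any blocked CIDR range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in BLOCKED_RANGES)

def connect_if_allowed(host: str, port: int = 443):
    """Resolve the host and open a TCP connection only if it isn't blocked."""
    ip = socket.gethostbyname(host)
    if is_blocked(ip):
        raise ConnectionRefusedError(f"{host} ({ip}) resolves to a blocked range")
    return socket.create_connection((ip, port))
```

A real implementation would sit at the VPN or firewall level so every device on the network is covered, and would refresh the block list as providers update their published ranges.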
I am using a Linux laptop made by a company named Purism and a Nokia feature phone on which I am relearning the lost art of T9 texting.
I don’t think I could have done this cold turkey. I needed to wean myself off various services in the lead-up, like an alcoholic going through the 12 steps. The tech giants, while troubling in their accumulation of data, power, and societal control, do offer services that make our lives a hell of a lot easier.
Earlier in the experiment, for example, I realized I don’t know how to get in touch with people without the tech giants. Google, Apple, and Facebook provide my rolling Rolodex.
So in preparation for the week, I export all my contacts from Google, which amounts to a shocking 8,000 people. I have also whittled down the over 1,500 contacts in my iPhone to 143 people for my Nokia, or the number of people I actually talk to on a regular basis, which is incredibly close to Dunbar’s number.
I wind up placing a lot of phone calls this week, because texting is so annoying on the Nokia’s numbers-based keyboard. I find people often pick up on the first ring out of concern; they’re not used to getting calls from me.
On the first day of the block, I drive to work in silence because my rented Ford Fusion’s “SYNC” entertainment system is powered by Microsoft. Background noise in general disappears this week because YouTube, Apple Music, and our Echo are all banned—as are Netflix, Spotify, and Hulu, because they rely on AWS and the Google Cloud to get their content to users.
The silence causes my mind to wander more than usual. Sometimes this leads to ideas for my half-finished zombie novel or inspires a new question for investigation. But more often than not, I dwell on things I need to do.
Many of these things are a lot more challenging as a result of the experiment, such as when I record an interview with Alex Goldman of the podcast Reply All about Facebook and its privacy problems.
I live in California, and Alex is in New York; we would normally use Skype, but that’s owned by Microsoft, so instead we talk by phone and I record my end with a handheld Zoom recorder. That works fine, but when it comes time to send the 386 MB audio file to Alex, I realize I have no idea how to send a huge file over the internet.
My Gmail alternatives—ProtonMail and Riseup—tell me the file is too large; they tap out at 25 MB. Google Drive and Dropbox aren't options: Google Drive for obvious reasons, and Dropbox because it's hosted by Amazon's AWS and relies on Google for sign-in. Other file-sharing sites also rely on the tech giants for web hosting services.
Before resorting to putting the file on a thumb drive and dropping it in an IRL mailbox, I call up my tech freedom guru, Sean O'Brien, who heads Yale Law School's Privacy Lab. He also does marketing work for Purism, the company that makes my laptop. O'Brien tries to avoid tech giants in favor of open source technologies, so I figure he might be able to help.
O’Brien directs me first to Send.Firefox.com, an encrypted file-sharing service operated by Mozilla. But… it uses the Google Cloud, so it won’t load. O’Brien then sends me to Share.Riseup.net, a file-sharing service from the same radical tech collective that is hosting my personal email, but it only works for files up to 50 MB.
O'Brien's last suggestion is Onionshare, a tool for sharing files privately via the "dark web," i.e. the part of the web that's not crawled by Google and requires the Tor browser to get to. I know this one, actually. My friend Micah Lee, a technologist for The Intercept, made it. Unfortunately, when I go to Onionshare.org to download it, the website won't load.
“Hah, yes,” emails Micah when I ask about it. “Right now it’s hosted by AWS.”
As I encountered at the beginning of this experiment, Amazon’s most profitable business isn’t retail; it’s web hosting. Countless apps and websites rely on the digital infrastructure provided by AWS, and none of them are working for me this week.
Micah suggests I download it from GitHub, but that's owned by Microsoft. Thankfully, O'Brien tells me I can download the Onionshare program directly from Micah's server via the command line on my Linux computer. He has to walk me through it step by step, but it works. I run Onionshare and drop my file into it, which creates a temporary onion site; I send the URL to Alex so he can download the file via the Tor browser. Once he has it, I tell Onionshare to "stop sharing," which takes the onion site down and erases the file from the web.
(In the end, Alex doesn’t even wind up using my audio for Reply All’s year-end finale. Sigh.)
I realize that’s a long story about sharing one file, but it’s a nice summation of what online tasks are like this week. There are workarounds for services offered by the tech giants, but they take extra research to find and are often more difficult to use. I wind up in strange parts of the internet, using Ask.com (formerly known as Ask Jeeves) as my search engine, for example, after I ixnay Google.com and realize DuckDuckGo is hosted by AWS.
But Ask.com is not necessarily a great replacement: it’s owned by IAC, the media and dating company behemoth. I’ve just traded one huge corporation seeking to monetize my searches for another, less competent one.
Some strange things are delightful: I discover that my Nokia phone can play the radio, so when I go running, I listen to NPR instead of my usual go-tos: Spotify, a podcast, or an audiobook. I’m planning a trip to South Africa, and wind up in charming conversations with the travel agents I have to call for help; it’s more costly and less efficient to book via a travel agency, but it’s the only option because travel-booking websites aren’t working for me.
Something not delightful is my Nokia 3310’s camera; it takes terrible, dark photos. I have an old Canon point-and-shoot digital camera, but I find I don’t take many photos this week—because without Facebook and Instagram, I don’t have anywhere to share them.
Sometimes I just can’t find a digital replacement. Venmo won’t work without a smartphone, so I pay our babysitter in cash. I start using a physical calendar to keep track of my schedule. When it comes to getting around, Marble Maps is an option, but I’m confused by the interface, so I stick to places I know, and buy a physical map as a back-up.
"It's funny because Nokia used to have amazing navigation with Navteq," a technologist says to me one day when I'm talking about how hard driving is without mapping apps, "but then they sold themselves to Microsoft."
Fuck, I think, my Nokia 3310 might be made by Microsoft.
But it turns out, while Microsoft did buy Nokia’s mobile devices division for $7.2 billion in 2014, it sold Nokia’s “feature phone assets” two years later for a painful write-down, $350 million, to Foxconn (of Apple outsourcing fame) and to HMD Global, a Finnish firm helmed by a former Nokia executive. HMD Global now uses Nokia’s “intellectual property,” i.e. brand, to sell phones. Most “Nokia” phones are Android smartphones, but there’s a line of “classic” phones, including the 3310, which run an operating system called FeatureOS made by Foxconn.
My Nokia 3310 is not a tech giant phone, but it’s certainly tech giant adjacent.
To find out why HMD Global is still selling dumbphones, I call its Hong Kong-based chief product officer, Juho Sarvikas. Sarvikas tells me that the company thought the core market for "classic" phones would be in Asia and Africa, where smartphones are less prevalent, but he says the devices have done surprisingly well in America.
“Digital well-being is a concrete area now,” he says. “When you want to go into detox mode or if you want to be less connected, we want to be the company that has the toolkit for you.”
“So these phones are the nicotine patch for smartphone addiction,” I say.
He laughs, “I’ve never put it that way before, but yes.”
I had assumed that the phones were for parents who wanted their kids to have phones sans a pipeline to social media and apps.
“That too,” says Sarvikas.
Many people I talk to about this experiment liken it to digital veganism. Digital vegans reject certain technology services as unethical; they discriminate about the products they use and the data they consume and share, because information is power, and increasingly a handful of companies seem to have it all.
When I meet a full-time practitioner of the lifestyle, Daniel Kahn Gillmor, a technologist at the ACLU, I’m not totally surprised to discover he’s an actual vegan. I am surprised by the lengths to which he’s gone to avoid the tech giants: he doesn’t have a cellphone and prefers to pay for things with cash.
“My main concern is people being able to lead autonomous healthy lives that they have control over,” Gillmor tells me during a chat via Jitsi, an open-source video-conferencing service that will work on any web browser. There’s no proprietary app you have to download and it doesn’t require you to create an account.
Gillmor hosts his own email and avoids most social media networks (he makes exceptions for Github and Sourceforge, because he’s an open source developer who wants to share his code with others). He refers to joining social networks as being “bait” that lures other people into “surveillance traps.”
Gillmor thinks people will have better lives if they aren’t being data-mined and monetized by companies that increasingly control the flow of information.
“I have the capacity to make this choice. I know a lot of people would like to sign off but can’t for financial reasons or practical reasons,” he tells me. “I don’t want to come across as chastising people who don’t make this choice.”
And there are definitely costs to the choice. “How things are structured determines the decisions people can make socially,” he says. “Like you didn’t get invited to a party [via Facebook] because you chose not to be part of a surveillance economy.”
Gillmor teaches digital hygiene classes where he tries to get people to think about their privacy and security. He usually starts the class by asking people if they know when their phones are communicating with cell towers. “Most people say, ‘When I use it,’ but the answer is, ‘anytime it’s on,’” he says.
He wants people to think about their own data trails but also when they are creating data trails for other people, such as when a person uploads their contacts to a technology service—sharing information with the service that those contacts might not want shared.
“Once the data is out there, it can be misused in ways we don’t expect,” he says.
But he thinks it’s going to take more than actions by individuals. “We need to think of this as a collective action problem similar to how we think about the environment,” he says. “Our society is structured so that a lot of people are trapped. If you have to fill out your timesheet with an app only available on iPhone or Android, you better have one of those to get paid.”
Gillmor wants lawmakers to step in, but he also thinks it can be addressed technologically, by pushing for interoperable systems like we have for phone numbers and email. You can call anyone; you don’t need to use the same phone carrier as them. And you can take your phone number to a different carrier if you want (thanks to lawmaker intervention).
When companies can't lock us into proprietary ecosystems, we have more freedom. But that means Facebook would have to let a Pinterest user RSVP for an event on its site. And Apple would need to let you FaceTime an Android user.
The Amazon block continues to be the most challenging one for me.
My friend Katie is in town from New York; we have plans to meet for dinner one night at a restaurant near my house, an event marked on my physical calendar. On the morning we are to meet, I get an email from her to my Riseup account with the subject line, “What is happening.”
Katie had been sending me messages for days via Signal, but I hadn’t gotten them because Signal is hosted by AWS. When she didn’t hear from me, she sent an “ARE YOU GETTING MY TEXTS” email to Gmail, and got my away message directing her to my Riseup account.
I tell her dinner is still a go, but it’s a reminder of the costs of leaving these services. I can opt out, but people might not realize I’ve left, or might forget, even if they do know.
One day, I ask my husband, Trevor, who declined to do the block with me because he has “a real job,” what the hardest part of my experiment is for him. “I never know if you’re going to respond to my texts,” he says.
"What do you mean?" I ask. "What have I not responded to?"
“I sent you some messages on Signal,” Trevor says, having forgotten I am off it.
The block provides constant conversation fodder, and I find myself in conversations more often because, at social gatherings, I don’t have a smartphone to stare at.
An Ivy League professor tells me he regularly employs a Google blocker. “I had to disable it when I paid my taxes because they have Google Analytics on the IRS website,” he says. “It was kind of horrifying.”
People under 35 are intrigued by (and sometimes jealous of) life without a smartphone; people over 35 just seem nostalgic.
One night, I run into Internet Archive founder Brewster Kahle, who is delighted to hear about the block. “It’s hard to get away from technology,” he says. “A friend was just telling me about trying to get a TV that wasn’t smart and didn’t have a microphone. It was impossible. He wound up getting a 27-inch [computer] monitor.”
Sometimes we make the choice to bring technology into our lives, but sometimes it’s forced upon us. Television makers have turned their products into surveillance machines that collect what we watch and what we don’t watch and sometimes even what we say, and that’s just how most TVs come now.
This week, I stop watching TV altogether because we don’t have cable and internet TV isn’t an option. I hadn’t meant to make this experiment a “rejection of all technology”—but it happens despite my intentions.
I’m most frustrated by this with my phone. I would love to be using a tech-giant free smartphone, but they aren’t really commercially available yet. If you want one, you need to be technically savvy and install a custom operating system on special phone models. That will hopefully change soon, with commercial offerings on the horizon from Eelo and Purism.
In the past, I would have assumed that idealistic projects like these were doomed, but there seems to be a heightened awareness these days of the dystopia created by the tech giants. Everywhere I look, I see criticism of the Frightful Five.
The tech giants laid down all the basic infrastructure for our data to be trafficked. They got us to put our information into public profiles, to carry tracking devices in our pockets, and to download apps to those tracking devices that secretly siphon data from them.
It’s in the air. The tech giants were long revered for making the world more connected, making information more accessible, and making commerce easier and cheaper. Now, suddenly, they are the targets of anger for assisting the spread of propaganda and misinformation, making us dangerously dependent on their services, and turning our personal information into the currency of a surveillance economy.
The world is flawed, and, fairly or not, the tech titans are increasingly being blamed.
A new book about “surveillance capitalism” by Harvard Business School professor Shoshana Zuboff argues that the extreme mining and manipulation of our data for profit is making an inescapable panopticon the driver of our economy.
Zuboff's publicist sent me an advance copy as an e-book, and I've really been enjoying it, but I have to put it down this week because I can't read it on my Kindle. Instead, I'm reading a physical book—Henry David Thoreau's Walden, which I ordered from Barnes & Noble. It too is full of calls to re-immerse ourselves in the natural world and not get too caught up in the distractions of modern life.
But, because it was published in 1854, it warns people to get away from work and newspapers rather than smart devices and screens.
For ideas about what the government can do about all this, I call Lina Khan, a fellow at the Open Markets Institute who wrote a blockbuster paper on the need to regulate Amazon’s monopoly power. (At least it’s a blockbuster by academic standards.)
Khan is in New York doing an academic fellowship at Columbia University, where she is working on more papers. Khan doesn't have a Prime account and avoids Gmail. Right before I call her, I see a tweet from a video producer at the Washington Post who got bombarded with baby ads after her baby was stillborn.
“Please, Tech Companies, I implore you: If your algorithms are smart enough to realize that I was pregnant, or that I’ve given birth, then surely they can be smart enough to realize that my baby died, and advertise to me accordingly — or maybe, just maybe, not at all,” she wrote in yet another reminder that privacy invasions have real harms.
I recount the story to Khan at the beginning of our call and say that this type of anger seems to be on the rise.
“The tech companies’ own actions are prompting the tide to turn. It is a belated reckoning, but it seems to be a reckoning nonetheless,” she says. “Companies started monetizing user data far before most users even realized their data was valuable, let alone being collected by private actors. If users had been told that the price for access would be near-total surveillance, would they have agreed? Would companies have been forced to offer different business models?”
Khan thinks law enforcers need to get involved to keep these companies from using anti-competitive tactics to dominate the business landscape, as public officials did in the ‘90s against Microsoft.
“Several of the big tech firms have acquired rivals and inhibited competitors through predatory conduct,” she says, a topic that’s been in the news recently with the exposure of Facebook emails where CEO Mark Zuckerberg talks about cutting off then-viral video service Vine’s access to the Facebook social graph. “They have engaged in practices that, a few decades ago, were widely considered monopolistic. We need investigations by the Department of Justice, the Federal Trade Commission, or state attorneys general.”
Europe is on the case, its regulators fining Google and saying Facebook can’t combine users’ data from Facebook, WhatsApp, and Instagram without their consent. But antitrust regulators in the U.S. have stayed away from these companies because their services are cheap or free, so they’re perceived as pro-consumer, which is ultimately what regulators want to encourage. But how does that work when the “consumer” is what the company is selling?
An uncomfortable idea I keep coming up against this week is that, if we want to get away from monopolies and surveillance economies, we might need to rethink the assumption that everything on the internet should be free.
So when I try to create a fourth folder in ProtonMail to organize my email and it tells me that I need to upgrade from a free to a premium account to do so, I decide to fork over 48 euros (about $50) for the year. In return, I get a 5 GB email account that doesn’t have its contents scanned and monetized.
However, I'm well aware that not everyone has $50 to spare for something they can easily get for "free," so if that's the way things go, the rich will have privacy online and the poor (and most vulnerable) will have their data exploited.
The previous week, my 1-year-old, Ellev, started saying that Alexa is “scary” and “spooky,” concepts she learned while trick-or-treating. It’s not unreasonable; I can see how a disembodied voice that’s always there and always listening would be disconcerting to a toddler—or really any normal human being.
But this week, she keeps crying for Alexa, wanting her to play “Baby shark” and other music that is otherwise absent from our home. “I miss Alexa,” she says, and I feel terrible both for depriving her and for making her dependent on an AI at such a young age.
On the last day of the block, Trevor and I are flying to New York, and he’s begging me to end the experiment early so we can use the iPad to keep Ellev happy. However, I’m adamant about maintaining the blockade for the six-hour flight.
“I’m changing my seat to a different part of the plane,” Trevor warns, kiddingly.
Trevor charges the iPad up in case my will falters. But I hold strong. We read books with Ellev, doodle on a magnetic drawing board, sing songs, and play for at least an hour with sticky, flexible “Wizzle sticks” that come in her Alaska Airlines snack pack. She sleeps for the last hour and a half of the flight, something she doesn’t usually do if there is an iPad available.
That was Ellev’s 26th flight. In the taxi after we land, Trevor turns to me and says, “That’s the easiest flight we’ve ever had with her.”
We get to our Airbnb in Brooklyn, which I booked months before the experiment. (It should technically be banned because Airbnb is hosted by AWS.) There’s a lock box on the outside of the apartment building that I open with a four-digit code. Inside is a key that gets us into the building and the same four-digit code opens a digital lock on the apartment’s door. I had written down the address and code on a piece of paper knowing I wouldn’t be able to access the Airbnb website.
We get in with no problem. We're starving, so we head to a restaurant we passed in our taxi. Afterward, we need groceries, but Ellev is melting down, so I head back to the Airbnb while Trevor goes to shop. I get into the building with the key, but once Ellev and I climb four flights of stairs to the apartment, I realize I don't have the piece of paper with the door code on it—and I don't remember the code.
Ellev is crying and trying to turn the doorknob. I start to feel that desperate panic of an earlier age that nowadays accompanies a dying smartphone battery.
My laptop is inside the locked apartment. I use a password manager, stored on that laptop, to get into all my online accounts, so I couldn’t get into Airbnb on another computer even if I wanted to toss in the towel on the blockade.
A masochistic part of my brain reminds me that I am in this mess because I used a site hosted by AWS. I could have just booked a normal hotel room via the phone, and then I would be picking up a new key card at this very moment. Technology creates the problems that technology solves, and vice versa.
While soothing Ellev, I try a bunch of different combinations on the lock based on my vague recollection of what the four numbers are. One of them works. As soon as I get inside, I plug my iPhone into the charger, relieved I’ll resume using it the next day.
Critics of the big tech companies are often told, “If you don’t like the company, don’t use its products.” I did this experiment to find out if that is possible, and I found out that it’s not—with the exception of Apple.
These companies are unavoidable because they control internet infrastructure, online commerce, and information flows. Many of them specialize in tracking you around the web, whether you use their products or not. These companies started out selling books, offering search results, or showcasing college hotties, but they have expanded enormously and now touch almost every online interaction. These companies look a lot like modern monopolies.
Since the experiment ended, I’ve resumed using the tech giants’ services, but I use them less. I deliberately seek out alternatives to do what I can, as a consumer, not to help them monopolize the market.
But the experiment went beyond that for me; it made me reexamine the role of tech in my life more widely. It broke me of that modern bad habit of swiping through my phone looking for a distraction rather than engaging with the people around me or seeking stimulation in my real world environment.
I deleted time-wasting apps like Words With Friends and a Hearts app. I look at Instagram so much less often that I sometimes notice friends have tagged me in their stories only after the stories have passed their 24-hour expiration mark.
I turn my phone off around 9pm each night and don’t turn it back on until I really need it the next day. It took two weeks of using my “nicotine patch” dumb phone, but I eventually lost the urge to start my day by reaching for my smartphone on the bedside table.
My iPhone tells me in my weekly “Screentime” reports that my usage is down significantly, to under 2 hours per day. My phone feels less like an appendage and more like a tool I use when necessary. I still love using Google Maps or Waze when I’m driving to an unfamiliar place, texting far-away friends and family members, and sharing a beautiful photo on Instagram—but I have regained the ability to put my phone away.
I went through the digital equivalent of a juice cleanse. I hope I’m better than most dieters at staying healthy afterward, but I don’t want to be a digital vegan. I want to embrace a lifestyle of “slow Internet,” to be more discriminating about the technology I let into my life and think about the motives of the companies behind it. The tech giants are reshaping the world in good and bad ways; we can take the good and reject the bad.
I ask Trevor if he notices anything different about me since the experiment.
“You never know what time it is anymore,” he jokes, but it’s true. I look at my phone infrequently and there are rarely clocks around, personal devices apparently having made them obsolete. I am more in the moment, but less aware of the actual hour and minute.
This is easily solvable: I’ll get a watch. It definitely won’t be a smart one.