
The Lure of a Fully Randomized Life



Max Hawkins had started to feel trapped by his optimized life. Every weekday, he woke up at exactly 7 a.m. and grabbed a single-origin pour-over from the best café in his San Francisco neighborhood, at least according to Yelp. He got on his bike and rode 15 minutes and 37 seconds along the best possible route to Google, where he was a software engineer. He spent eight hours working, then met friends for a beer at a craft brewery or a hang in Mission Dolores Park. But despite his great job and charmed life, something felt off.

One afternoon at work, while reading an academic paper, he located the source of his ennui. The study, which tracked the movements of 100,000 anonymized mobile-phone users over six months, had found that human mobility is surprisingly predictable: Our days default to simple, repeatable patterns.

The engineer part of Max’s brain thought the research was pretty cool, but he also found it unsettling. “There was something very programmed about the way I was living,” he told me. If his movements were that predictable, where did that leave his free will?

That night, as he lay in bed, he started thinking about how the structure of people’s lives determines the outcomes of their lives. His life’s structure had become disconcertingly rigid. He didn’t like the sense that, day to day, he was reading a story he’d already read.

The following Friday, Max and a friend were planning to hang out at a bar that had recently opened, one with all the qualities Max usually looked for: good beer, soft lighting, nostalgic indie hits on the playlist. But he couldn’t get the human-mobility study off his mind. The new hip bar is exactly where a computer would expect me to go, he thought. So he decided to design an algorithm to help him break from his routine.

Max had long been fascinated by how to infuse randomness into his work. (In college, he had learned to make computer-generated art, and often tried to inject a sense of serendipity into otherwise rigid coding projects.) So while others might have sought out variety by, say, trying a new restaurant, Max created an app.

The program allowed Max to call an Uber to take him to a surprise location in the city, known only to the driver. In what was perhaps a sign from the universe, his first attempt took him and his friend to the ER at San Francisco General Hospital. (They ended up going to a bar around the corner and had a great time.)
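The article doesn't show Max's code, but the core idea is simple: sample a uniformly random point within the city and hand it to the ride-hailing service. A minimal sketch in Python, with an approximate (illustrative, not exact) bounding box for San Francisco and hypothetical function names:

```python
import random

# Rough bounding box for San Francisco (approximate values, for illustration only)
SF_BOUNDS = {"lat": (37.70, 37.81), "lng": (-122.52, -122.36)}

def random_destination(bounds=SF_BOUNDS):
    """Sample a uniformly random point inside the city's bounding box."""
    lat = random.uniform(*bounds["lat"])
    lng = random.uniform(*bounds["lng"])
    return lat, lng  # these coordinates would be passed to the ride-hailing API

lat, lng = random_destination()
```

A real version would also need to check that the sampled point is reachable by car, which is presumably part of why the driver, not the rider, was the only one who knew the destination.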

Though Max had been living in San Francisco for years, his continued trials with the random ride generator brought him to places in the city he hadn’t known existed: a leather bar in the Castro, San Francisco State University’s planetarium, a bowling alley on a side of town he had never visited. His experiments were like uncertainty exposure therapy—and they became a bit of an obsession. He decided to apply the same process to other decisions in his life, building half a dozen apps to randomize the restaurants where he ate, the music he listened to, and even the tattoos he got. (He now has two geometric stick figures permanently etched on his chest.) Soon, Max was outsourcing as many decisions as possible to his army of randomization algorithms. “In choosing randomly,” he said, “I found freedom.”

Yet as I learned about Max’s experiments, I wasn’t so sure. Was ceding his life decisions to a computer algorithm actually a source of freedom—or a different kind of trap?


Humans have long designed mechanisms to outsource their decisions to chance: dropping sticks, flipping coins, rolling dice. And social-science research suggests that even if a person ends up making their own decision, aids such as these can help. In one 2019 study using coin flips, researchers from the University of Basel, in Switzerland, found that participants followed the counsel of the coin or used their reaction to the result as a window into their true preference. The action helped them make up their mind.

If you’re anything like me, the idea of surrendering your life choices to something like a six-sided plastic cube is terrifying. Though “The dice made me do it” could, at times, be a convenient excuse, my hesitance to relinquish control would outweigh any potential for serendipitous delight. (In this way I am, I suppose, very different from Max.) But although making decisions randomly might seem like the ultimate act of the unknown, Michel Dugas, a psychology professor at the Université du Québec en Outaouais, in Canada, who specializes in uncertainty, told me that he isn’t so sure.

In the 1990s, Dugas created a scale to measure an individual’s capacity to withstand ambiguity and uncertainty; he coined the phrase “intolerance of uncertainty” as an explanation for many of his patients’ anxiety disorders. “When people are highly intolerant of uncertainty, they exhibit one of two behaviors: They either seek information or become impulsive,” he said. “Imagine you’re looking to buy a new pair of jeans. If you’re extremely intolerant of uncertainty, you may either try on every pair of jeans in the store or buy the one in the window.” Dugas doesn’t see random decision making as an indication of one’s superior uncertainty tolerance—rather, he believes it’s more likely to be another form of avoidance. By outsourcing your decision to chance, you are effectively dodging any responsibility for the result.

Another way of looking at this is through the explore-exploit trade-off, a concept from theoretical computer science. Say you’re an engineer in charge of writing code that chooses the next song that Spotify plays. The algorithm can “exploit” a user’s preferences by playing a song they are likely to enjoy, based on past data, or it can “explore” a person’s preferences by playing something different.

Exploiting is generally seen as the safe option, as the program bases its recommendation on what a user seems to like. However, this understanding of someone’s preferences can be incomplete or misleading. When an algorithm exploits, it risks missing out on a better option or failing to adapt to a changing environment. Anyone who has repeatedly played a song until they no longer enjoy it understands this conundrum.

Exploring, by contrast, comes with uncertainty. If the algorithm suggests a song that strays too far from a person’s typical tastes, it risks driving them away. But exploration is also how the system learns what people like. A playlist that relies too much on exploitation will eventually bore the listener, whereas the delight of an unexpected song might be what keeps them engaged. That said, seeking novelty can also have diminishing returns. Striking the right balance between exploiting the known and exploring the unknown is crucial for the sustainability of any system, our own life included.
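One standard way engineers strike this balance is an "epsilon-greedy" policy: with a small probability the system explores a random option, and otherwise it exploits the best-known one. A minimal sketch in Python, with invented rating numbers for illustration (this is a common textbook approach, not necessarily how Spotify's actual recommender works):

```python
import random

def epsilon_greedy(estimates, epsilon=0.1):
    """Return the index of the next song to play.

    With probability epsilon, explore: pick any song at random.
    Otherwise, exploit: pick the song with the highest estimated rating.
    """
    if random.random() < epsilon:
        return random.randrange(len(estimates))  # explore
    return max(range(len(estimates)), key=lambda i: estimates[i])  # exploit

# Hypothetical running estimates of how much a listener enjoys each song
estimates = [0.9, 0.4, 0.7]
choice = epsilon_greedy(estimates, epsilon=0.1)
```

Tuning epsilon is the whole game: set it to zero and the playlist calcifies into the listener's past; set it too high and every other song is a gamble.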


In 2015, Max left his job at Google and went all in on randomized living. He gave up his apartment in San Francisco and wrote an algorithm to recommend different places to live around the world within his budget. He figured he would live one to two months in each place, before packing up and rolling the proverbial dice once more. His first move was to Ho Chi Minh City, Vietnam, on a one-way ticket. He would maintain a nomadic lifestyle for more than two years.

He also went to random gatherings. On one particular Saturday in Berlin, he attended 14 events, including a baby-photography meetup, an intro course on European truck driving, and a get-together at a sauna where all attendees lathered themselves with honey. On the whole, the hosts of these events were very welcoming. Max didn’t show up to a new environment and say, “The algorithm made me.” Instead, he approached each experience open to what it might teach him: He showed up curious, and his hosts responded in kind.

After a few years of living nomadically, Max returned to the States, but he continued his experiments with randomness. At the start of the coronavirus pandemic, Max and his then-girlfriend, now-wife, decided to take a road trip across the U.S., letting the algorithm decide their stops. The couple went all over—from Mesa, Arizona, to London, Kentucky. After months of this, the algorithm sent them to Williamston, a rural swamp town in North Carolina’s Inner Banks region. Williamston was the home of a prisoner-of-war camp during World War II and later the site of freedom rallies in 1963. But by 2021, when Max and his girlfriend arrived, it was primarily a farming community.

While they wandered the town’s historic streets, Max was struck by a new sense of the futility of his own experiment. What are we even doing here? he wondered. In Williamston, they had no family, no friends—­not even a random Facebook event to attend. Max had realized that there might be a cost to randomizing his life, and the stop in Williamston laid it bare. “When you live randomly, you create lots of noise, but that noise doesn’t really move in any particular direction,” he said. “I realized I was seeing all this newness but wasn’t building toward anything.”

There is no fixed level at which we ought to explore or exploit; it varies from person to person and will change over time and circumstances. As the computer-science researchers Brian Christian and Tom Griffiths write in their book Algorithms to Live By, “Life is a balance between novelty and tradition, between the latest and the greatest, between taking risks and savoring what we know and love.” A 20-something who is still trying to refine their tastes might explore more, whereas an octogenarian, who has a keen sense of who they are and what they like, might exploit what they know.

You might not think that taking an alternative route to work or visiting that restaurant that you’ve walked by a million times will fundamentally change who you are, but people benefit from exploration in at least a couple of ways. For one, exploring helps us find our tastes. If you always order the same dish at a restaurant, you’ll never know if there might be another one down the menu that you like better. But research has also shown that exploring exposes people to the type of low-­risk situations that build their tolerance for uncertainty. Trying a new exercise class or talking to a stranger in a relatively safe environment can make you more comfortable with uncertain situations in the future.

After Williamston, Max and his partner decided to make changes and put down roots. They signed a lease on a house in Los Angeles. But settling down did not mean that Max had abandoned his attempt to infuse more randomness into his life. He found a middle ground where he could take advantage of the benefits of a predictable routine without locking himself into more and more algorithmic sameness. Intrigued, I flew to L.A. to see what he meant.

We agreed to meet for dinner at a restaurant selected by Max’s algorithm. “It chose Oki-Dog, a legendary punk hangout,” he texted me. “The food is…pretty bad.” As I arrived, I felt the butterflies you might feel before a blind date. When I entered the run-down hot-dog joint, the guy behind the counter delivered some bad news: They were closing early.

A moment later, a man in a long-sleeved graphic T-shirt, purple pants, and wire-rimmed glasses approached—this was Max. I remembered how he had told me about another algorithm he had written to send him a random clothing item from Amazon each month. I wondered whether the pants were part of his bounty. “Looks like the restaurant is closed,” I said. “No sweat,” he replied, with the nonchalance of someone used to pivoting. He prompted his app to pick another spot.

Ten minutes later, we were seated at a Chinese restaurant called Genghis Cohen. “Are you down to order randomly?” Max asked as he whipped out his phone. I recalled that, according to a few of Max’s friends I’d spoken with, he also liked to ask the waitstaff which dish people ordered the least, and then order it. Ordering randomly seemed preferable to me. “Sure,” I said.

Max opened his phone’s calculator, which he had customized to include a button that would generate a random number. He divided the menu into sections that corresponded to different numbers, and soon enough, the algorithm had selected two dishes for us: curry chicken wings and a vegetable soup. They wouldn’t have been my first choices, but the first rule of randomized living is “Thou shalt obey thy computer.”
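Max's calculator trick amounts to uniform sampling without replacement, which Python's standard library does in one call. A sketch with an invented menu (not Genghis Cohen's actual one):

```python
import random

# A hypothetical menu, numbered the way Max divided his into sections
menu = [
    "curry chicken wings",
    "vegetable soup",
    "kung pao chicken",
    "mapo tofu",
    "scallion pancakes",
]

# Pick two distinct dishes uniformly at random, as the calculator button would
dishes = random.sample(menu, k=2)
```

Sampling without replacement matters here: rolling the same number twice would otherwise leave you with one dish instead of two.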

Between slurps of surprisingly delicious soup, I asked Max what he’d learned from his experiments over the years. “I gained an appreciation for just how easily my life could be different,” he said. “A lot of people get very invested in the arc of their lives, but it made me realize how many aspects of my identity were based on arbitrary circumstances.”

As I listened to Max’s stories of visiting yoga classes in Mumbai and preschools in Dubai, I wondered how much of his lifestyle was performative versus authentic. Was he too committed to the bit? But the more I talked to Max, the more I was impressed by his level of self-awareness. He hadn’t just been pursuing novelty for novelty’s sake. He was genuinely passionate about getting outside his bubble. Surrendering to the computer had given him the courage to sample the lives of the many people he might have been. “When you have a fixed plan, a fixed identity, a fixed routine,” Max said, “it’s easy to become trapped in a prison of your preferences.” I loved that phrase—“prison of your preferences”—because it perfectly captured the hollowness of a life that feels too expected, like a bag of chips engineered for your taste buds that somehow fails to satisfy.

Max told me that he isn’t sure how much he’ll continue randomizing his life. He and his wife plan to have a baby, and small children, he knows, thrive on routine. But even though he probably won’t pick up and move every month, he’ll probably continue to find ways to infuse his life with small doses of serendipity.

When I first learned about Max’s experiment, I thought he had found a convenient way to dodge taking responsibility for his decisions. Sorry, the computer made me do it. But I came to see that no matter where the algorithm sent him, Max had cultivated an admirable equanimity about where he ended up. He’d traded the security of knowing exactly where he was going for the serenity of being present wherever he arrived.


This article was adapted from Simone Stolzoff’s new book, How to Not Know: The Value of Uncertainty in a World that Demands Answers.


When you buy a book using a link on this page, we receive a commission. Thank you for supporting The Atlantic.


