The Cloister and the Starship
How to protect intellect and imagination from artificial intelligence
This past weekend, I took my daughter and her best friend to remote Tory Island, a defiant swath of land and rock around 9 miles off the coast of County Donegal, Ireland. Home to around 140 full-time residents, the island’s isolation makes it a blissful retreat from the hustle and bustle of modern life. Tory’s rugged beauty and promise of solitude have attracted numerous artists over the years, including Derek Hill (1916–2000), whose painting hut still stands overlooking the roiling Atlantic. As we meandered around the island and searched for puffins on its windswept cliffs, I found myself thinking about quiet and stillness, reflecting on how grounding it can be to step away, even briefly, from the ubiquitous technology of twenty-first-century life.
Back in the early days of personal computing, technology was widely viewed as a pathway to the future, a valuable workplace tool that enhanced our productivity and expanded our access to information. But over the past decade and a half, as technology has increasingly come to reshape our attention, behavior, and even values, that optimistic vision of ever-smarter tools and ever-greater efficiency has morphed into a pessimistic reality of dependency. We have transitioned from using technology with purpose in the workplace to being passively controlled by it at all times, which has especially significant consequences for those of us engaged in creative fields.
While on Tory, I found myself thinking a lot about the unintended consequences of the technology we use—but also about how each new wave of technology, from the widespread adoption of PCs, to the Internet, to smartphones and now artificial intelligence, has forced increasingly abrupt periods of social and economic adjustment. It took years for society to grasp the implications of the personal computer and the Internet, but less time to adjust to smartphones. Now, AI adoption is happening almost overnight, evolving faster than our cultural, social, and psychological frameworks can keep up. It feels imperative to step back and reassess.
I. The iPhone and a Tale of Unintended Consequences
When Steve Jobs launched the first iPhone back in 2007, he called it “a revolutionary and magical product”—and it was, representing a thrilling promise of connectivity, empowerment, and possibility. Apple sold 270,000 iPhones domestically in the first week after launch, and sales rapidly rocketed into the millions as consumers embraced the idea of devices that were more than just phones; they were miniature computers, cameras, music players, GPS systems, and internet portals all rolled into one. They promised liberation: access to information anytime, anywhere. They held enormous potential for creativity and productivity, but they were small enough to fit in a back pocket.
For young people, in particular, smartphones promised greater independence and reach. Teens could express themselves, find communities, and be seen beyond the confines of their neighborhoods and schools. Smartphones enabled greater civic engagement, too, giving young people the ability to organize, mobilize, and raise awareness. But with all these promises and possibilities came a raft of unintended consequences that became clearer after 2012, the first year in which a majority of American teenagers owned smartphones.
Steve Jobs, who dropped out of Reed College after one semester, continued to live on campus and audit classes there, including Shakespeare, calligraphy, and modern dance. After co-founding Apple at age 21, he centered his business philosophy around the intersection of technology and the liberal arts, with a deep belief in simple, user-focused, elegant design. He had a vision for how technology should serve people, not just functionally, but emotionally and aesthetically as well. He died just four years after launching the iPhone, and would most likely have recoiled in horror from the idea that his “magical and revolutionary product” could become, for many, an addictive obsession.
But that, sadly, is what happened. A growing body of research has linked excessive smartphone use with rising rates of anxiety, depression, and loneliness among adolescents. Social media, initially seen as a social equalizer, has become a space where users curate idealized versions of their lives to mask real-life feelings of anxiety, emptiness, and low self-esteem. Additionally, phones are hijacking our focus, leading people to spend hours scrolling, not out of enjoyment, but from compulsive behavior patterns. All the while, this constant stimulation reduces the time and space available for reflection, boredom, and face-to-face interaction, all crucial aspects of healthy mental development and emotional resilience.
To make matters worse, the time spent on phones and other devices dramatically increased during the COVID-19 pandemic. Amid lockdowns and physical distancing, people of all ages turned to devices not just for productivity, but also for entertainment and connection. This reliance on screens blurred boundaries between personal and professional life, and led time on devices to skyrocket: Adults’ screentime increased by 51 percent during those years while children’s increased by 67 percent. While the screentime of adults has since returned almost to pre-pandemic levels, the screentime of children and teens has remained worryingly high. For young people, it would seem that the pandemic simply established a new threshold of “normal.”
With Apple now manufacturing 200 million iPhones a year, and over 90 percent of American teenagers owning one—in addition to their AirPods, iPads, smartwatches, laptops, game consoles, and other connected gadgets—parents, educators, and psychologists alike have begun to worry about the long-term impact of raising children in this hyperconnected environment. Some irrefutable realities have emerged. As Jonathan Haidt illustrates in his groundbreaking book The Anxious Generation, all this technology is fundamentally changing how young people form identities, build relationships, and spend their time; their constant electronic engagement is also eroding patience, self-regulation, and the ability to engage deeply with complex ideas. Years ago, literature professors began to report that they no longer assigned lengthy novels because students could no longer read them; recently, the chair of Georgetown’s English department told the Atlantic that his students had trouble staying focused on even a sonnet.
Faced with growing concerns, some parents and schools took deliberate steps to create tech-free environments for children. These efforts have included device-free classrooms, phone bans during school hours, tech-free summer camps, and digital sabbaticals at home. Following Covid, a group of Brooklyn teens formed a “Luddite Club,” abandoning their phones and taking to the woods with books and guitars; the New York Times profiled the group, whose founders have since formed a nonprofit and launched branches around the country. The goal of these efforts was to restore focus, creativity, and real-world social interaction, giving kids the chance to develop their personalities, intellects, and social circles without the constant pull of digital distraction, and ideally fostering healthier habits and a more balanced relationship with technology.
II. More Unintended Consequences: AI and the Dismantling of Education
Just as parents, educators, and policymakers were making some headway in addressing the screentime crisis, generative AI became widely available in late 2022, bringing a new wave of challenges. Now, instead of passively scrolling, people are engaging with intelligent systems that can chat with them, entertain them, do their reading, do their homework assignments, provide emotional support, and even make their life decisions. Having a chatbot always on hand for “deep” conversations can become emotionally addictive, leading some people to marry their AI chatbots. The efforts we made in helping kids unplug and reclaim focus are now being tested by technology that’s smarter, faster, and far more immersive than ever before.
Probably the most serious manifestation thus far is the destruction not only of high-school and college learning, but also of the philosophy of education and cognitive development that underpins it, which goes all the way back to Plato and before. Consider Hua Hsu’s recent essay in the New Yorker, “What Happens After A.I. Destroys College Writing?” for which the author interviewed current undergraduates to gauge the impact of AI on their studies. The students were initially coy when the author approached them over email, claiming that they used AI only for tasks such as organizing notes. When interviewed in person, they became much more voluble and candid. “Any type of writing in life, I use A.I.,” one student said, admitting he even used it for texting girls. He later acknowledged having used Anthropic’s Claude to generate two humanities papers in under an hour. Despite getting A- and B+ grades on the papers, he emailed Hsu that “I didn’t retain anything,” admitting “I couldn’t tell you the thesis for either paper hahhahaha.”
One could perhaps suggest that the small sample of students Hsu interviewed are the exception, not the norm. Professors do not want to believe that their students are engaged in wholesale cheating, or that the time they spend “grading” is being devoted to material churned out by ChatGPT. Parents do not want to imagine that their children are not getting the college education they are paying for—sometimes at great expense—or, indeed, any meaningful education at all. But the blunt reality is that AI use has now become the norm among students. Writing for New York Magazine, James D. Walsh sums up the new reality in his essay “Everyone Is Cheating Their Way Through College”:
Two and a half years [after the launch of ChatGPT], students at large state schools, the Ivies, liberal-arts schools in New England, universities abroad, professional schools, and community colleges are relying on AI to ease their way through every facet of their education. Generative-AI chatbots—ChatGPT but also Google’s Gemini, Anthropic’s Claude, Microsoft’s Copilot, and others—take their notes during class, devise their study guides and practice tests, summarize novels and textbooks, and brainstorm, outline, and draft their essays. STEM students are using AI to automate their research and data analyses and to sail through dense coding and debugging assignments. “College is just how well I can use ChatGPT at this point,” a student in Utah recently captioned a video of herself copy-and-pasting a chapter from her Genocide and Mass Atrocity textbook into ChatGPT.
Maybe it’s time to scrap those college bumper stickers. It doesn’t help that many professors haven’t taken the threat of generative AI seriously, instead dismissing it as a “stochastic parrot” that remixes language without any understanding. But this attitude ignores both the rapid improvement in AI writing over the past two and a half years and students’ rapid adoption of AI systems to complete all of their college assignments in record time. This undermines all pedagogical goals—especially those centered on problem-solving, original thinking, development of an argument, and rhetorical and expressive skill—leading to a situation where universities are now rubber-stamping degrees earned largely through the use of chatbots. Of course, by devaluing the college degree (any qualification issued after 2022 is now suspect) and by depriving themselves of learning skills the hard way, students are only sabotaging their prospects in the long run—but few 18- and 19-year-olds fully understand the repercussions.
Some software tools promise to detect the use of generative AI in academic assignments, but these have proven to be highly unreliable in practice. In tests, AI detection software has attributed wholly AI-generated essays to human authors while claiming that the Book of Genesis was almost certainly written by ChatGPT. Such scattershot results make accusations of cheating easy to dispute, giving plausible deniability to any accused student. Again, the combination of AI’s rapidly improving capabilities, academia’s lack of preparedness for its ethical and practical implications, and the lack of any definitive way of proving whether an assignment was completed by AI sounds the death knell of traditional college. These conditions do not seem likely to change anytime soon. Reform in academia is a glacial process, while AI development is currently progressing at lightning-fast speed; and the faster AI advances, the more capable it becomes of evading the tools designed to detect it.
III. A Radical Solution: Niall Ferguson’s “Cloister” and “Starship”
In his Times essay, “AI’s great brain robbery — and how universities can fight back,” Niall Ferguson proposes a radical reimagining of higher education through a return to what he calls the “cloister,” a device-free, monastic-style learning environment designed to shield students from the cognitive erosion he (correctly) believes is being caused by generative AI. In this cloister, students would surrender all their electronics and engage with education in its most traditional form: reading printed books, participating in in-depth discussions, writing essays and solving math problems by hand, and undergoing rigorous oral and written examinations. By cutting off access to digital devices and AI during this part of the day, the cloister aims to foster intellectual discipline, deep comprehension, and the development of critical thinking skills that are now being undermined by easy access to AI tools.
Importantly, the proposal does not advocate for a complete rejection of AI. Rather, it seeks to shield students from unintended consequences by separating spaces of foundational learning from those where advanced technologies are used. Ferguson proposes that students would spend approximately seven hours each day immersed in focused study and interaction with peers and educators, while the remaining time—referred to as the “starship”—would be available for engaging with AI and other modern technology. The idea is to cultivate students who are comfortable using AI but who can do so with discernment and originality, grounded in a solid base of human-acquired knowledge and skills. This two-part structure reflects the reality that true education requires learning how to think independently, ask meaningful questions, and grapple with complexity without digital shortcuts.
To implement this vision, Ferguson suggests significant changes to university admissions practices. Candidates would need to demonstrate not only academic potential but also the capacity for self-discipline, resilience, and a willingness to embrace a more demanding educational model. The best students would thrive in the rigor of the cloister while also being adept in the high-tech landscape of the starship. The author admits that such a transformation is unlikely to occur at existing universities, which are deeply invested in their current systems, but he sees a unique opportunity at the University of Austin, where he serves as a founding trustee. There, he intends to push for this model to be adopted from the outset of the next academic year.
The aim of the cloister is not nostalgia for pre-digital education, but the preservation of human intellect in the face of rapid technological change. By creating an AI-free space during the critical stages of learning, the model seeks to prevent a future in which reliance on machines leads to widespread cognitive atrophy and loss of creativity. It is, Ferguson argues, a necessary response to the threat of “pseudo intelligence” overwhelming actual human intelligence. And with the bottom falling out of the job market for recent graduates—for the first time, the unemployment rate of Americans aged 22 to 27 with a college degree is higher than the national average—some may find this new path attractive.
IV. The Creative Imperative
Although Ferguson proposes the concept of the cloister as a safeguard for education, its relevance extends far beyond the classroom, and particularly into the realm of creative work. Creative disciplines, including screenwriting, fiction, poetry, and even visual storytelling, face similar challenges to those confronting academia. As generative AI tools have become increasingly capable of generating ideas and producing passable scripts, stories, and art in seconds, there is a growing risk that the human imagination will also atrophy under the weight of convenience. In this context, the cloister can be understood not just as a space for intellectual rigor, but as a sanctuary for creativity itself.
Screenwriting, for instance, demands more than structural competence or clever dialogue—it requires insight into the human condition, the ability to convey emotional truth, and a distinctive voice. These are qualities that cannot be synthesized from predictive models or crowd-trained language patterns. If writers come to rely too heavily on AI, the result may be technically sound but artistically hollow; indeed, new research from the Wharton School at the University of Pennsylvania suggests that AI enhances the quality of an individual’s creative ideas but also causes groups to converge on similar concepts, diminishing creative diversity and true innovation. The cloister model offers a countermeasure: a protected environment in which the writer must wrestle with blank pages, flawed ideas, and evolving drafts without digital scaffolding. In this solitude, originality is shaped by struggle, contemplation, and the slow burn of authentic creative labor.
Moreover, just as Ferguson suggests students need to develop the art of Fragestellung—the formulation of deep, purposeful questions—so too must screenwriters cultivate the habit of creative inquiry. What makes a character’s decision believable? Why does a story matter now? What truth is being dramatized? These are questions that cannot be meaningfully answered by AI. They require silence, discomfort, reflection—conditions best fostered in a cloistered space where the creative mind is free from the relentless influence of digital suggestion and algorithmic patterning.
Importantly, this approach does not imply rejecting AI altogether. Just as the starship exists alongside the cloister in Ferguson’s educational model, the creative process might be structured to alternate between cloistered ideation and AI-assisted execution. For example, a screenwriter might use cloistered time to write the first draft by hand, then use AI tools later to check pacing, formatting, or to experiment with alternate endings. The key is sequence and intention: the core creative decisions must originate in the mind, not the machine.
Reclaiming the creative process from AI’s encroachment is not about preserving tradition for its own sake; it is about protecting the fragile conditions under which true creativity arises. Applied to the arts, the cloister becomes a radical but necessary framework for nurturing the kind of work that moves, provokes, and endures in an age when content is abundant but depth is rare. Tory Island functioned as such a cloister for Derek Hill and other artists who worked there—and such a model may be the only way to ensure that human creativity remains vital and irreplaceable in the years ahead.
Again, a rich meal of ideas to ponder.
We live in perilous times. We have way too much daily input to process thoughtfully. Modern humans have lost a motivating purpose. We are pre-programmed to be goal oriented and to gain satisfaction from making progress toward that goal. We need to do something that makes us contribute to something more enduring than ourselves.
An animal living in the wild is driven to sustain itself with food and its species through reproduction. When an animal has avoided immediate threats and eaten its fill, it rests, sleeps or plays. We turn to our phones.
Most of us humans are over-fed and have an excess of un-purposed time. Compare our “free time” with that of a hunter-gatherer or a servant in a prior century. We now seem to feel a need to be “busy” when there is no purpose to fulfill. The amount of medication taken to alter our sense of wellbeing is proof that we don’t know how to relax and be comfortable with our own company even when we have the time. Relaxation that is physically restorative is essential. But mental time to process the life being lived is necessary, too. Fill that time with your phone-life and your intellectual “in-box” will be overwhelmed and shut down, leaving you in an unprocessed emotional wilderness. Drugs dull the effect, but don’t fix the primary problem.
I hope we will collectively start experimenting with ways to awaken the potential of the human brain rather than dull it by having AI do the thinking for us down to and including our emotional relationships. AI can easily be your best enabler.