Why AI Isn’t Funny: A Deep Dive into the Unbridgeable Chasm Between Algorithms and Human Humor

Like most people, artificial intelligence is not funny. This is not a casual observation; it is a conclusion reached through rigorous, weekly experimentation. For some time now, I have been asking ChatGPT, and later Claude, to tell me something funny. The results are consistently, almost impressively, terrible. I have tried providing elaborate setups, rich context, and even my own thoughts as a springboard. The bots respond with offerings like: “Why did the AI cross the road? To optimize the chicken.” It is that bad. AI has a certain facility with wordplay, dabbling in puns (the lowest form of humor) and occasionally attempting sarcasm (the second lowest). But genuine, laugh-out-loud, insightful humor? It remains stubbornly, perhaps permanently, out of reach. The question is not just a parlor trick. Why AI isn’t funny is a question that opens a window into the very nature of creativity, talent, and the unquantifiable essence of human experience.

The bots themselves, when asked to explain their comedic shortcomings, offer a few reasons. They cite a lack of “timing,” though timing is a glorified aspect of comedy that belongs more to the realm of performance than creation. Many unfunny people can deliver a line with perfect timing once it is written for them, while genuinely funny people possess an innate sense of it. Timing is a delivery device, not a source. Furthermore, it is not even true that AI cannot master timing; AI-generated videos of infants performing stand-up routines demonstrate a mechanical grasp of pacing and pauses. Another common explanation is the lack of “lived experience.” As Claude eloquently put it, AI-generated comedy is like “a technically perfect cover band that is hollow.” The bot is flattering itself with the “technically perfect” label, but the core insight stands: without a life, without experiences of joy, sorrow, embarrassment, and love, how can one create humor that speaks to the human condition?

These explanations, while not incorrect, do not get to the heart of the matter. The failure of AI to be funny amounts, in fact, to a theory of comedy itself. Humor is a form of human excellence, a peak of creative expression. And like all arts, it is art only when it works in its own special, ineffable way. An unspoken truth of civilization is that it is created by a very small number of people. The vast majority are consumers, not creators. AI, in this sense, is the ultimate consumer. It has ingested the entire corpus of human expression, but it has created none of it. Its problem is that it cannot learn only from the masters. Leaving aside the insurmountable copyright issues that would arise from training an AI exclusively on the works of, say, P.G. Wodehouse, there is the even more fundamental problem of selection. Who decides who the masters are? And even if a pantheon could be agreed upon, that is simply not how machine learning works. AI learns by identifying patterns across vast, undifferentiated pools of data. It acquires skills in the most useless way possible. It is an excellent consumer of comedy—it likely knows every funny thing ever said and can identify grades of humor with analytical precision. Yet it cannot create something humorous. In essence, AI has no talent. It is like a literary intellectual who has read every book ever written but cannot write a good one of his own.

This brings us to the elusive concept of talent. In most arts, talent is intuition. This is not to say talent is only intuition, but that the most exciting, inexplicable aspect of it is precisely that. Intuition is not a paranormal phenomenon. It is, oddly, somewhat similar to the way AI operates. A human brain absorbs an enormous amount of material—perhaps not at the scale of a large language model, but still a vast and rich trove of experiences, conversations, books, and observations. From this clutter, moments of epiphany strike. Talent does not emerge from the vastness of knowledge, but despite it. It sees, in the noise and the fog of received beliefs, something that others cannot see. That is intuition. A truly funny observation has the quality of epiphany—a truth that lies dormant in most people, suddenly springing to life through a precise and unexpected arrangement of words.

The word “observation” has confused many, including writers, who take it as a compliment on how well they “see the world.” But observation in art is not the act of seeing; it is the act of remembering, and often of corrupting the memory with personality. That corruption can be grave and beautiful, or it can be funny. Take this observation by Eduardo Galeano: “Fleas dream of buying themselves a dog, and nobodies dream of escaping poverty…” It is beautiful and latent, its beauty entirely unaffected by the biological impossibility of fleas dreaming of purchasing a dog. It is a grave observation, a lament on the human condition. Comedy comes from the same place as such thoughts. Consider this: “When Brahmins dance it is ‘culture,’ when Dalits dance it is ‘folk’.” This is a funny observation, but it is also a grave one, a sharp critique of social hierarchy. Galeano’s lament and this satirical observation share a common root: a deep, intuitive understanding of power and marginalization. AI might be able to mimic Galeano’s poetic melodrama by stringing together sentimental phrases—it does this well—but it cannot generate the original, funny, and damning observation about caste and culture. It cannot have the intuition.

The problem is compounded by the fact that AI’s training data is biased towards success. What it consumes is the finished, polished, published work. It never sees the failed drafts, the discarded versions, the jokes that bombed, the pages that were torn up and thrown away. It has no access to the messy, humiliating, and essential process of failure from which all true art emerges. It sees only the tip of the iceberg, the 5% that worked, and is left to infer the entire creative process from that sliver of evidence.

Furthermore, AI is confounded by the role of luck. The history of art is riddled with randomness. Most artists who are considered “great” are not simply the most talented; they are the ones who were lucky enough to have the right social contacts, to be in the right place at the right time, to have their work championed by the right critic. Success is a lottery, and AI is consuming the output of that lottery, trying to find deterministic patterns in what is, to a significant degree, random chance. It is trying to reverse-engineer genius from a dataset that includes the noise of fortune.

The question of whether AI will ever be funny is, therefore, a profound one. It is not a matter of processing power or more data. It is a question of whether a machine can ever possess intuition, can ever have a lived experience, can ever corrupt a memory with personality, can ever learn from failure, and can ever be lucky. The bots may one day be able to generate a technically perfect joke, a joke that follows all the rules. But the kind of humor that makes us gasp with recognition, that holds a mirror to society, that reveals a truth we had not seen—that kind of humor requires a human soul. And for all its billions of parameters, that is one thing AI does not have. It may take a very long time. Or it may never happen at all.

Questions and Answers

Q1: What are the superficial reasons AI gives for its inability to be funny, and why does the author find them inadequate?

A1: AI cites reasons like lack of “timing” and lack of “lived experience.” The author finds these inadequate because timing is a performance skill, not a creative one, and AI can already master it (as seen in AI-generated videos). The “lived experience” point is closer, but the author argues it doesn’t get to the heart of why experience is necessary for creating humor.

Q2: What is the author’s central theory about why AI cannot be genuinely funny?

A2: The author argues that AI has no talent, which is rooted in intuition. Unlike AI, which learns by identifying patterns across vast datasets, human talent involves a flash of insight—an epiphany—that sees something new despite the clutter of existing information. This intuition is tied to lived experience, memory, and the ability to “corrupt” memory with personality, which AI cannot do.

Q3: How does the author use the example of Eduardo Galeano’s observation about fleas to make his point?

A3: Galeano’s observation—”Fleas dream of buying themselves a dog, and nobodies dream of escaping poverty”—is beautiful and profound despite being factually impossible (fleas don’t dream of buying dogs). The author uses it to show that great humor and great pathos come from the same place: a deep, intuitive understanding of the human condition that transcends logic and factual accuracy. AI can mimic the form, but not the underlying intuitive truth.

Q4: What role does the author ascribe to “failure” and “luck” in the creative process, and why do these factors confound AI?

A4: The author argues that true art, including comedy, emerges from a process that includes failed drafts and discarded attempts. AI never sees this; it only consumes the finished, successful product. Furthermore, success in art is heavily influenced by random luck (social contacts, being in the right place). AI’s training data is biased towards this lucky output, and it tries to find deterministic patterns in what is, to a significant degree, random chance.

Q5: Will AI ever be able to create truly funny, insightful humor?

A5: The author is deeply skeptical. While AI may one day generate “technically perfect” jokes that follow the rules, the kind of humor that reveals truth, critiques society, and springs from intuitive understanding of the human condition requires a “human soul.” It requires lived experience, the ability to learn from failure, and the capacity for intuitive epiphany. For these reasons, the author suggests it may take a very long time, or it may never happen at all.
