Synthetic Future

In Conversation with Bill Bain

By Trudy Hall • Artwork from Adobe Stock via Cosmos

Bill Bain took his first class in artificial intelligence in the autumn of 1977 at Yale University. He had arrived at Yale intending to major in mathematics, a choice he soon came to regret, and after watching friends take introductory courses in computer science, he decided to jump into that field instead. He loved it and began taking AI courses to explore other areas of computation.

At that time, Roger Schank, a pioneer in the field, had established the AI Lab at Yale as a research and teaching hub alongside the major centers at MIT, Stanford, and Carnegie Mellon, with other emerging centers in various stages of early growth. Schank prided himself on graduating PhDs who would go on to establish or augment AI centers at other universities, including UC Berkeley, the University of Illinois, Georgia Tech, and more. He was outspoken and controversial about early AI, most notably in the academic rivalry he provoked with Noam Chomsky at MIT over what Schank saw as the foolishness of asserting that humans understand language largely through the power of syntax, while leaving the mechanisms for describing meaning, semantics, to relatively unknown processes. He also clashed with the Stanford contingent led by Ed Feigenbaum, which reduced human reasoning to systems of codified rules that were unable to learn.

Bain’s first introductory AI course was taught by one of Schank’s early PhD students, Wendy Lehnert, who became a mentor during his undergraduate years. Their focus areas were various forms of natural language processing, including memory models and language generation. Bain worked on multiple aspects of NLP for ten years starting in 1978, and during that time he watched a parade of intellectuals come through Yale. Among them was Marvin Minsky, one of the four primary founders of AI from the 1956 Dartmouth conference. Bain also points to Minsky’s book Perceptrons as having had profound effects on what would become the AI best known today, including large language models.

Doug Lenat came through Yale a couple of times as well. His work on CYC was a large-scale example of symbolic AI, the universe the lab worked in at the time. Bain notes that symbolic AI is not obsolete today and believes it may yet prove useful in sharpening areas of weakness that have become familiar in generative AI, such as hallucinations and gaps in reasoning. At the same time, he describes the painstaking work Lenat set out to do, representing all human knowledge in digital form, as nothing less than a Tower of Babel, costly and endless. Douglas Hofstadter, the author of Gödel, Escher, Bach, was another visitor Bain recalls, brilliant but at the time more of an observer and thinker than a doer.

That first class with Lehnert led Bain to more AI coursework, culminating in his acceptance into a special program during his senior year in which he undertook a single NLP project for the full year with no other class requirements. He wrote what was essentially a master’s thesis showing multiple parsing techniques for a Q&A system designed to give advice on how to plant and raise organic vegetables. After graduating, he took a year off to work and then returned to Yale for graduate school, with Roger Schank as his advisor. This was the period during which Bain met leading figures in the field and ultimately decided not to pursue a career in academia. His dissertation focused on case-based reasoning, using memory models of prior experiences to help navigate complex situations, and he modeled how judges make sentencing decisions in criminal cases.

Following graduate school, Bain went to work for Schank’s local company, Cognitive Systems, which applied techniques from the AI Lab at Yale to commercial applications. The company produced language translation capabilities, far less sophisticated than what is available today given the highly manual work that symbolic representations required, along with NLP Q&A systems for selling banking and insurance products at Citibank and travel products for American Express. Bain would eventually work at American Express as a member of the Advanced Technology team, expanding beyond AI into other emerging fields including image processing, automated software development, and early applications for the then-emerging personal computer market. He recalls meeting Steve Jobs during that period, as well as Scott McNealy, the co-founder and CEO of Sun Microsystems, whose hardware and networking platforms in the early 1990s powered the rise of the internet.

When considering what is most compelling today, Bain points to the delivery of advanced technologies on mobile devices and how mind-numbing games, some of which he loves, transformed society even before the impacts of AI. He describes sitting in public spaces today and witnessing a nearly complete meltdown of socialization in contexts that would have been unthinkable to prior generations. People, he says, are more isolated than at any point in his lifetime, and many remain in bubbles that admit only the people they know or choose to interact with, such as baristas.

Bain warns that layering AI onto this landscape will probably strengthen these bubbles, further institutionalizing isolation. He also warns that the ability to digitally mimic everything and everyone could have catastrophic impacts over time as people need to be on guard online about whom and what to trust. If financial and other systems become riddled with fakes and frauds, he suggests livelihoods might be at stake. Even so, he points to a possible silver lining, that these conditions could force people to actually deal with each other face to face, something he notes you cannot spoof.


Having witnessed the Schank–Chomsky debates firsthand, how do you assess that divide in retrospect? Do contemporary large language models implicitly side with one position over the other, or do they represent a third approach that neither camp fully anticipated?

More of a third approach, definitely not anticipated. Chomsky was just dead wrong about people having some kind of syntactic part of their brain that guides language understanding. Schank’s position emphasized the ability to codify meaning, which today’s models infer from vast troves of information, though it’s debatable how much they “understand.” One of the hardest problems for symbolic AI was that systems could be built for highly focused knowledge domains but would not generalize to other areas. In contrast, syntax is broad but devoid of meaning. LLMs are broad overall, with training sets that can run into billions of data sources, so they can seem to be “experts” on just about anything.

Your dissertation and earlier work treated memory as a structured record of experience rather than a passive store of information. How do you see today’s AI systems misunderstanding or oversimplifying “memory,” and what consequences does that have for judgment, context retention, and error propagation?

The concepts of memory and structured records of experience are quite different between the two eras. In the symbolic AI world, these structures had to be hand-coded and provided to systems. Very few were able to “learn” in the sense that they could add new memory instances, let alone new structures. Newer AI capabilities don’t necessarily encode new experiences, but they can sharpen their accuracy with access to more specific data, which provides context more relevant than what might be scraped into generically built LLMs. Many companies working with AI tools today follow this approach, tuning systems for their own specific purposes rather than relying on generic ones. These steps can reduce the negative consequences for judgment, context retention, and error propagation.

You were present during the early commercialization of AI through banking, insurance, and travel applications. How did the shift from academic research to enterprise deployment reshape what AI systems were allowed to do, and where do you see parallels with the current platform-driven AI ecosystem?

The shift from research to real-world applications was previously constrained by technology speed and the need for manual encoding of symbolic representations. Even the commercialization of CYC never led to any sort of exponential growth in scale. Today’s systems benefit from vastly faster processor speeds, as well as entirely different hardware architectures and topologies capable of accessing and processing vast amounts of data. Anywhere there’s manual intervention today is pretty much a bottleneck; manual encoding used to be fundamental to early AI, although it was a bottleneck then as well. Consider that some of the early generative AI work focused on natural language processing, which then rapidly expanded to image generation due to the configurations of technologies used. That would never have happened with symbolic AI, which was largely built to fit a single purpose.

The truth is that then and now are quite different. The jump from academic to commercial work back then required bringing academics into the commercial world. That dynamic doesn’t exist in anything close to the same form today, when non-academics are driving the spread of widely available AI tools.

You describe contemporary public life as marked by unprecedented social withdrawal and isolation. Do you see this primarily as a consequence of interface design choices, economic incentives, cognitive overload, or longer-term cultural adaptation, and which of those do you believe AI is most likely to intensify?

It’s pretty much all of the above, plus portability. Steve Jobs took the mobile phone concept and turned it into a personal computing device that’s no longer tethered to a wall socket. What started as a phone is probably now least used as one.

A parallel exists with television for kids today. Close family members have two boys, ages four and five, for whom they maintain clear rules about TV watching. While the boys like old-style cartoons, every time new CGI-style cartoons come on, they look like zombies, glued to the set. If a parent turns off the show or interrupts their watching, they become extremely upset. It seems like their brains get weirdly rewired.

A form of that rewiring seems rampant among adults as well. Look at how people behave in any context in which they are waiting. Ninety percent or more are glued to their phones in settings where conversations were far more common in the past. It’s easy to tune out the world while flying on a plane, between listening to music privately and playing a game. Dating used to involve meeting people face to face in one place or another; now it is dramatically dominated by online offerings that filter possible dates or love interests. Influencers are usually no longer people you know, but people with millions of followers.

In one area after another, the digital world has overtaken the realities of the physical and social worlds that my generation, and all those before it, grew up in and understood. That leaves new imprints on how kids grow up now. Social media means that whatever identity or privacy you wish to have is not necessarily what you get. Cyberbullying did not exist when I was young. Today it can be fatal. A kid who just wanted to stay invisible used to be able to make that happen. It’s not possible anymore, which can reinforce isolation even further.

Australia, where I live now, recently became the first major country to ban social media for kids under sixteen. It will be interesting to see how that goes.

As digital systems become increasingly capable of mimicking people, institutions, and signals of authority, how do you think trust will be reorganized? Do you expect meaningful technical safeguards to emerge, or do you think the response will be largely social, behavioral, or structural rather than computational?

As of now, I don’t see any meaningful answers to maintaining the levels of trust we need.

Systems of trust will have to adapt, although that will be easier said than done in many regards. Consider that journalists used to be fired when they made up stories or cited fake sources. Fabrication seems to be routine now, and it’s next to impossible to keep up with validating what’s real versus not. “Don’t click on a link in an email that’s spam” seems like an obvious warning, but it’s getting harder for people to discern and might become next to impossible. Even computational defenses, such as encryption and digital validation, are under attack or soon could be, such as with the rise of quantum computing, which, if successful at scale, could break some of the most sophisticated encryption capabilities that exist.

The ability to swamp our information feeds with fakery has only been growing and is now more accessible by more people than ever before. Shortly after generative AI was first launched, I attended a conference where an academic presented a talk about how to spot fake news. The only problem with his pitch was that the techniques he proposed amounted to using a bow and arrow against a nuclear weapon. Nothing about his approaches was scalable or usable by thousands or millions of people at the same time.

Possibly the only thing governments could do to bring order to the growing chaos would be draconian, such as clamping down on information pathways to control which sources remain accessible. In the United States, this would be seen as censorship, but choosing not to do something drastic could lead to financial system meltdowns.


What Bain’s account ultimately reveals is not a story about artificial intelligence advancing toward understanding, but about human systems retreating from responsibility. The technical story has changed dramatically since the 1970s; the structural one has not. We continue to build systems that scale faster than judgment, optimize fluency over comprehension, and reward simulation where verification is hardest. The surprise of contemporary AI is not that it bypassed symbolic reasoning or syntactic theory, but that it made those debates partially irrelevant. Meaning is no longer engineered or inferred in any principled sense; it is approximated at scale. That approximation works well until it doesn’t. When failure occurs, it propagates outward not as a bug but as a believable narrative, indistinguishable from truth without grounding.

Bain’s deeper concern is not technological collapse but social atrophy. As interfaces absorb more functions of mediation (dating, waiting, learning, deciding), people lose rehearsal in the unscripted encounters that once established trust. Systems trained on existing behavior inevitably reinforce withdrawal and the erosion of shared context. If there is leverage left, it lies outside the model. Trust will not be restored by better pattern recognition alone, nor by countermeasures that assume bad signals can be filtered faster than they can be generated. The remaining anchors are structural and behavioral, including proximity, accountability, and limits. These are qualities machines struggle to reproduce because they are inefficient. In a landscape saturated with mimicry, the rare skill may be discernment: knowing when to rely on automation and when presence itself is the only reliable proof. The future he sketches is not one of replacement but of reckoning, a return to boundaries that technology cannot cross and responsibilities it cannot absorb.
