(Re)discovered Lem by chance, like the best things are discovered. A tweet from Bratton that referenced this foundational text from '64 resonated with me, echoing Jorge Carrión's podcast, Solaris, which Piscitelli brought back to our attention in December 2019 before the summer course on the series Westworld. Now, with this text in hand (or on screen), I was captivated: it wasn't just science; it was philosophy disguised as space adventure. Summa Technologiae can be seen as a futurological treatise that anticipated debates we are only just beginning to take seriously in the 21st century. Artificial intelligence, virtual reality, nanotechnology, bioengineering, search engines. It was all there, written by a guy in communist Poland in the sixties, without access to the internet (because it didn't exist), without smartphones, without any of the things we take for granted today.
The most amazing book you have never heard of. Stanislaw Lem's Summa Technologiae from 1964 https://t.co/WWWXepq7W2
— Benjamin Bratton (@bratton) December 6, 2025
For Lem, intelligence as a cosmic phenomenon was one of the great unanswered questions. Is intelligence an accident or a necessity in the universe? Lem wasn't satisfied with the easy answer of anthropocentrism: that we are special, unique, the pinnacle of creation. For him, the emergence of intelligence was a statistical problem. If we think of the universe as a gigantic laboratory testing all possible combinations of matter and energy, intelligence would just be one of those combinations that had to eventually appear. But here's the twist: Lem wondered if that intelligence necessarily had to resemble ours. Could there be forms of consciousness so radically different that we wouldn't even recognize them as such? Lem posed this question 60 years ago, the same one we debate today when talking about non-human artificial intelligences. And he did so from a philosophically uncomfortable position: accepting that perhaps we might never recognize or communicate with other forms of intelligence, even if they were right in front of us.
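Lem's statistical framing can be made concrete with a back-of-the-envelope calculation (my illustration, not anything from the book): if each planetary "experiment" has some tiny independent probability of producing intelligence, then with enough trials its appearance somewhere becomes nearly certain.

```python
# Toy illustration of the statistical argument (the probability p is an
# arbitrary assumption, chosen only to show the shape of the curve).
p = 1e-9  # hypothetical chance that a single "experiment" yields intelligence

for n in (10**6, 10**9, 10**12):
    # P(at least one success in n independent trials) = 1 - (1 - p)^n
    at_least_once = 1 - (1 - p) ** n
    print(f"{n:>15,d} trials -> P(intelligence somewhere) = {at_least_once:.6f}")
```

With a billion trials the probability is already about 0.63; with a trillion it is indistinguishable from certainty. The point is not the numbers but the structure: rarity per trial is compatible with near-inevitability overall.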
Artificial is a word we use too lightly. We often oppose the "natural" to the "artificial" as if they were airtight categories, when in reality they are merely linguistic conventions. Lem dedicated a large part of Summa Technologiae to dismantling this distinction. For him, technology was not an "aberration" of nature but its continuation by other means. Biological evolution and technical evolution followed the same logic: trial and error, selection, adaptation, extinction. The only thing that changed was the speed and the agent. Where nature took millions of years to "design" an eye, technology could create a microscope in decades. But both processes responded to the same homeostatic drive: the attempt of living systems to maintain balance in the face of the universe's entropic chaos. The big difference, Lem said, was that biological evolution was blind, while technical evolution could (in theory) be conscious and directed. Although, and this is crucial, Lem also warned that technology had a life of its own, developing according to gradients that no one fully controlled. It was and is a self-organized process where humans are just one component among many.
He then developed a fascinating concept he called "extended homeostasis." Basically, the entire history of civilization could be understood as a cybernetic process of expanding the homeostatic range. Simple organisms could only maintain their internal balance under very specific conditions. More complex ones (like us) developed mechanisms to partially detach from the environment: temperature regulation, immune systems, brains that plan. Technology was the next step: tools that allowed us to survive in hostile environments, accumulate resources, predict threats. But Lem went further: he wondered if we would eventually reach a point where we could regulate not only our immediate environment (the climate of a city, for example) but also phenomena on a planetary or even stellar scale. What would happen, he asked, if we could ever control the nuclear fusion of stars? Or design entire ecosystems from scratch? These questions, which sounded like pure science fiction in 1964, are now on the agenda of geoengineering and terraforming. Lem wasn't predicting the future: he was identifying the deep logic that governs all complex systems.
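The cybernetic idea behind this paragraph can be sketched as a toy feedback loop (an illustration of the concept, not a model from the book): a corrective response repeatedly shrinks the gap between the system's state and its set point, so disturbances are absorbed rather than accumulated.

```python
# A toy homeostat: keep an internal variable near a set point despite
# external shocks. All numbers here are arbitrary, chosen for illustration.
def regulate(temp, set_point=37.0, gain=0.5, disturbances=()):
    """Apply each disturbance, then a proportional corrective response."""
    for d in disturbances:
        temp += d                          # environment pushes the system off balance
        temp += gain * (set_point - temp)  # feedback shrinks the remaining error
    return temp

# With regulation the state stays near 37.0 (34.625);
# without it, the same shocks would drag it down to 27.0.
print(regulate(37.0, disturbances=[-5, -3, -2]))
```

Lem's "extended homeostasis" is this same loop pushed outward: first the body, then the city, then (speculatively) the planet and the star become the regulated variable.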
Lem's method for thinking about the future was peculiar. He didn't try to guess what specific inventions would appear (although he got several right), but rather to identify what types of problems would arise and what types of solutions were structurally possible. It was more an exercise in logic than prophecy. For example, he didn't predict YouTube, but he did anticipate that information would become so abundant that we would need automated systems to filter and organize it. He called this "ariadnology," referencing the thread of Ariadne from Greek mythology: the need for guides to help us not get lost in the maze of data. What are the recommendation algorithms of Netflix, Spotify, or Google if not applied ariadnology? Lem also understood that every technological solution generated new problems. Writing solved the problem of transmitting knowledge but created the problem of mass literacy. The printing press democratized access to books but created information overload. The internet connected us globally but fragmented us culturally. Lem was neither pessimistic nor optimistic: he was realistic. He knew that technology would neither save us nor condemn us; it would simply change us in ways we couldn't fully anticipate.
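As a toy illustration of "ariadnology" (my sketch; Lem of course specified no algorithm), here is the simplest possible thread through the maze: rank documents by their overlap with a user's interest profile, the seed idea behind every modern filtering and recommendation system.

```python
# Minimal content filter: order documents by how many of the user's
# interest terms they contain. A deliberately naive sketch of the idea.
def rank(documents, interests):
    """Return documents sorted by descending overlap with the interest set."""
    def score(doc):
        return len(set(doc.lower().split()) & interests)
    return sorted(documents, key=score, reverse=True)

docs = [
    "cybernetics and systems theory",
    "recipes for summer salads",
    "evolutionary biology and information theory",
]
for doc in rank(docs, interests={"cybernetics", "information", "theory"}):
    print(doc)
```

Real recommenders replace the keyword overlap with learned embeddings and behavioral signals, but the structure is the same: an automated guide deciding what, out of too much, you get to see.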
Content and form were, for Lem, two sides of the same evolutionary coin. Biological evolution had a fascinating characteristic: it first created the "hardware" (the body, the organs, the tissues), and then that hardware conditioned what kind of "software" (behaviors, instincts) could run. Birds have wings, therefore they fly. But technical evolution could do something more flexible: change both hardware and software simultaneously, or even create software without specific hardware. An app can run on millions of different devices. This plasticity was, for Lem, both an advantage and a danger. An advantage because it allowed us to adapt rapidly to new contexts. A danger because we could create technologies whose consequences we didn't understand. Lem used the example of nuclear weapons: they were designed for a specific purpose (to win World War II) but created a completely new problem (the possibility of total extinction of humanity). The content (destroying an enemy city) had overflowed the form (a bomb) and created something unprecedented: a species capable of committing planetary suicide. For Lem, this decoupling between intention and consequence was the central drama of the technological condition.
By analyzing the limits of knowledge, Lem arrived at uncomfortable conclusions. One of his most provocative theses was that eventually science would encounter insurmountable barriers, not due to a lack of ingenuity but because of structural limitations. There were phenomena that were too complex to be understood by minds like ours. Lem used a brilliant analogy: asking a dog to understand quantum mechanics. It's not that the dog is stupid; it's that its brain isn't wired to process that level of abstraction. What made us think we were better equipped to understand the universe in all its complexity? Hence, the author proposed that sooner or later we would need to extend our cognitive capacities. Not only with education or books but with technology that literally amplified our minds. This is what he called "intellectronics": the symbiosis between biological brains and artificial information processing systems. It sounds like cyberpunk, but Lem was thinking about it in the sixties, decades before brain-computer interfaces or the neural implants that Neuralink talks about today existed. And he thought of it not as a transhumanist whim but as an epistemological necessity: either we enhance our cognitive capacities, or we stagnate.
Lem's phantasmagorias have a technical name in Summa: phantomatics. This is probably the wildest and most visionary concept in the book. Lem asked: what would happen if we could create completely synthetic realities, indistinguishable from the "real"? He wasn't talking about movies or video games (which already existed in primitive forms) but something more radical: total sensory experiences, simulated worlds so convincing that living them would be identical to living in the physical world. Lem distinguished between "peripheral phantomatics" (deceiving the senses from the outside, like 3D cinema does) and "central phantomatics" (directly intervening in the brain to generate experiences). The first was limited; the second, potentially infinite. If you can hack the brain, you can create any experience: flying, being immortal, living a thousand lives in one. Lem saw this as inevitable: once we had the technology, we would use it. But he also saw the dangers: what would happen to shared reality if everyone could live in their own personalized simulation? What sense did the material world have if we could create infinitely more pleasurable mental worlds? These questions, which we discuss today in relation to virtual reality, the metaverse, and brain interfaces, Lem posed six decades ago. And he had no comforting answers.
There are several fulfilled prophecies in Summa Technologiae. Lem anticipated search engines (his "ariadnology"), virtual reality (phantomatics), artificial intelligence (intellectronics), nanotechnology (which he called "molecular engineering"), and even something resembling social networks (he spoke of "information networks" where we would all be simultaneously consumers and producers of content). But the most impressive thing is not that he got specific technologies right but that he correctly identified the underlying dynamics. For example, he foresaw that information would become the most valuable resource long before Google or Facebook existed. He understood that automation would not only replace manual labor but also mental work. He anticipated that the most serious ethical problems would come not from the technology itself but from its unequal use: some would have access to cognitive enhancements, others would not; some could live in paradise-like synthetic realities, while others would be trapped in material misery. Lem was not utopian: he knew that technology did not distribute its benefits equitably and that, in fact, it tended to amplify existing inequalities. He also knew there was no turning back. Once you invent something, you can't un-invent it. The question was not whether we would develop AI, virtual reality, or bioengineering, but when and how.
The technologies of the present confirm many of Lem's intuitions. We live in a world where algorithms make decisions that affect our lives (who banks grant credit to, what news we see, who we match with on dating apps). We are developing brain-computer interfaces. We are starting to edit genomes with CRISPR. Virtual reality is no longer science fiction. All of this was, in some way, in Summa. But the most unsettling thing is that we are also experiencing the problems Lem anticipated. Information overload is real: there is too much content, too many news stories, too many options, and our brains are not prepared to process it. Technological inequality is real: there are countries with access to cutting-edge technologies and countries that barely have basic connectivity. The dependence on systems we do not understand is real: how many of us know how the Instagram algorithm or the operating system of our phone works? Lem was right: technology was making us more powerful but also more vulnerable. And the worst part is that there is no "pilot" directing all of this. Technology evolves according to its own logic, like a river flowing where gravity tells it to, indifferent to which cities it floods in the process.
There is only one step from fiction to reality when you read Lem. His novels (Solaris, The Cyberiad, The Futurological Congress) were conceptual laboratories where he explored ideas that he later developed technically in Summa. But Summa is not fiction: it is hardcore philosophy of technology, with references to cybernetics, thermodynamics, systems theory, and evolutionary biology. Lem was not a dabbler; he had studied medicine, knew the science of his time, and voraciously read everything that came his way. His method was rigorous: he started from solid scientific premises and extrapolated them to their most extreme logical consequences. Some accused him of being too speculative, but over time it has been shown that his speculations were well-founded. The problem, in any case, was not that he was too imaginative but that reality turned out to be as strange as he had anticipated. Today, when we discuss whether AIs can be conscious, whether we should regulate genetic editing, whether virtual reality can replace physical experiences, we are having exactly the conversations Lem proposed in 1964. Only now they are urgent, because technology is already here.
Lem remains relevant not only for his insights but for his method. He taught us to think of technology not as a collection of gadgets but as an evolutionary process with its own dynamics. He showed us that the important questions are not technical but philosophical: what do we want to do with the power that technology gives us? How do we prevent it from destroying us? Can we consciously direct our technical evolution or are we doomed to be swept along by it? Lem had no definitive answers, but he had brilliant questions. And in a world where technology advances faster than our ability to reflect on it, questions matter as much as answers. Summa Technologiae is an uncomfortable book because it forces us to confront our own precariousness: we are a very powerful species, but not a very wise one. We have developed tools that can alter the planet and perhaps the solar system, but we continue to operate with brains designed for the African savanna of the Pleistocene. Lem knew this, and his work is an attempt to give us a map to navigate that contradiction.
Reading Summa Technologiae today is a strange experience. On one hand, you realize that many of the "novelties" of 21st-century technology were already conceptually laid out in 1964. On the other hand, you understand that Lem was not a prophet but simply someone who took seriously the implications of what science already knew in his time. The difference between Lem and us is not that he saw the future; it's that we, with the future upon us, still don't see it clearly. We continue to think of technology as something optional, reversible, controllable, when in reality it is constitutive of who we are as a species. Lem understood this 60 years ago. We are just beginning to digest it. And while these are the same questions we ask ourselves today, the sense of urgency is now palpable, and we don't have another 60 years to think about the answers.