PLATFORM / CARPO Mario, The Post-Digital Will Be Even More Digital (2018)

The cycle of digital innovation in architecture is far from over.

Book presentations, or book launches, are holdovers from ages long past. One could argue that the same applies to books in print themselves; but we still read and write books, never mind in which shape and form, while I do not see many reasons to keep presenting them in brick-and-mortar bookshops, or similar venues. Friends in the publishing industry tell me that a single tweet, or a successful hashtag on Instagram, can sell more copies than a book launch—and at a lesser cost, for sure. Besides, one of the most baffling aspects of book launches is that, traditionally—and I remember this was already the case when I was a student—a significant fraction of the public in attendance tends to be viscerally and vocally hostile to the topic of the book being presented. Why readers who dislike a book as a plain matter of principle would take the time to read it in full and then vent their anger at its author, I cannot tell; but this is to say that, having published a book last fall titled The Second Digital Turn: Design Beyond Intelligence, I had plenty of opportunities over the last few months to glean a vast repertoire of technophobic commonplaces. Chief among them, notable for its sheer outlandishness, was the objection that digital innovation had by now fully run its course: having adapted to, and adopted, some new tools and technologies, architects had moved on, free at last to get back to things that really matter to them (whatever they might be).

As I only recently found out, that argument is also the theoretical mainstay of the so-called post-digital movement, advocating the nonchalant use of technologies old and new in the pursuit of loftier architectural aims. No architect would object to that, evidently—but the PoDig project is predicated on at least one false premise. The cycle of digital innovation is far from over. Yes, we have gotten used to email, word processing, and Photoshop; but we find the mere prospect of reliable driverless cars threatening and apocalyptic, we resent the predictive capabilities of the search engines and social media that we nonetheless keep feeding with content, and when a computer wins against the best human champions in a game of chess, or Go, we see that as the harbinger of some epochal decline of our civilization—or simply as the end of the world. In fact, most applications of artificial intelligence, even the most pedestrian, still arouse deep feelings, often of discomfort, alarm, and disbelief. And rightly so, as there are many reasons to be worried or excited by artificial intelligence right now. But no matter how high or low I look, the one thing I cannot find is indifference. No one seems to be arguing for technological nonchalance right now: absolutely no one—except, apparently, a few architects. So forgive me for assuming, for lack of a better explanation, that the PoDig argument may be disingenuous—a ruse de guerre engineered by a lobby of good old techno-bashers in disguise, of whom the design professions never fail to generate steady cohorts, with only marginal variations in quality and quantity over time.

As for artificial intelligence itself, the source of so much hype and fear and loathing today—that is far from being a novelty. The term was already widely used in the 1950s and ’60s, when the pioneers of cybernetics thought that electronic brains should imitate the way we think, and replicate the formal logics of the human mind. That project failed, spectacularly, in the sense that it never produced any usable result, and artificial intelligence was soon relegated to the dustbin of technical history. For almost two score and a few years—let’s say from the mid-1970s to more or less now—the term “artificial intelligence” was simply forgotten. If it is revived now, almost as spectacularly as it was once jettisoned, it is because AI today, or something akin to it, has started working surprisingly well. Unlike vintage AI of the cybernetic age, however, today’s AI is not even trying to imitate the logic of the human mind. To the contrary, advanced electronic computation can now solve apparently intractable problems—problems we could not solve in any other way—precisely because computers appear to have developed their own logic, their own scientific method, and their own way of thinking, which is quite different from ours. Computers do not think the way we think due to a simple but drastic structural difference between our mind and theirs: unlike computers, we were never hard-wired for big data. What we today call “big data” means, simply, data too big for us to manage—but which computers can manage just fine. 

It follows from the above that computers can notate, calculate, and fabricate buildings, for example, quite differently from the way we always have. Think of geometrical notations—the measurement of the position in space of all the parts of a building, which we used to draw in plans, elevations, and sections. No human designer could conceive of a building made of, say, four gazillion different particles, each one individually notated in space—because no human mind could take in, and take on, that much information. This is why our (human) notations tend to simplify buildings, converting the messy complexity of nature into leaner geometrical figures, which we can more easily draw with lines, or script with math. Computers need none of that. If a given problem can be better solved by the robotic assembly of four gazillion different and minuscule 3D-printed particles, they can go for it. Ditto for structural engineering, where computers can optimize any given structure by simply trying, sequentially, four gazillion different solutions—among so many, it doesn't take any degree of intelligence, either natural or artificial, to find one that will do the job, and solve the problem at hand. But we (humans) cannot work that way, because it would take forever.
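
To make that brute-force logic concrete, here is a minimal, purely illustrative sketch in Python. Nothing of the sort appears in the essay itself, and every figure in it (the load, the span, the allowable stress, the density, the toy beam formulas) is an assumption chosen only for illustration. It samples a large number of candidate beam sections at random and keeps the lightest one that satisfies the stress spec: search by sheer number of trials rather than by insight.

    # Minimal sketch of search-by-sheer-number-of-trials: sample many candidate
    # rectangular beam sections and keep the lightest one that carries the load.
    # All figures below are illustrative assumptions, not engineering advice.
    import random

    LOAD_N = 10_000.0        # point load at midspan, newtons (assumed)
    SPAN_M = 4.0             # simply supported span, metres (assumed)
    ALLOW_STRESS_PA = 10e6   # allowable bending stress, pascals (assumed)
    DENSITY_KG_M3 = 500.0    # material density, kg/m^3 (assumed)

    def bending_stress(width_m, depth_m):
        # Simply supported beam, point load at midspan:
        # M = P*L/4, S = b*h^2/6, sigma = M/S.
        moment = LOAD_N * SPAN_M / 4.0
        section_modulus = width_m * depth_m ** 2 / 6.0
        return moment / section_modulus

    def weight_kg(width_m, depth_m):
        return width_m * depth_m * SPAN_M * DENSITY_KG_M3

    best = None
    for _ in range(1_000_000):               # a very large number of blind trials
        w = random.uniform(0.02, 0.30)       # candidate width, metres
        d = random.uniform(0.02, 0.60)       # candidate depth, metres
        if bending_stress(w, d) <= ALLOW_STRESS_PA and (
                best is None or weight_kg(w, d) < weight_kg(*best)):
            best = (w, d)                    # keep the lightest admissible section so far

    print("lightest admissible section (width x depth, m):", best)

Scale this up from one million trials to "four gazillion," and from two parameters to the position of every particle in a structure, and you have the kind of dumb but effective search that only machines can afford, and that no human notation could ever keep up with.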

Evidently, buildings conceived, calculated, and built that way tend to look very different from anything we ever designed. They also tend to be a better fit to specs (i.e., stronger or lighter or cheaper or whatever we choose to optimize for), because that's the spirit of the game—that's where computation outsmarts us. That does not seem to me a prospect that architects should treat with benign neglect. We already know what the first digital turn was—that's history. But we can already figure out what the second digital turn is going to be. The first digital turn was about bits and atoms. The next is going to be about bits and neurons. There is more digital after the digital, whether we like it or not.
