I. Shape Machine Symposium Introductions
Nancey Green Leigh
Welcome, everyone! I'm Nancey Green Leigh, and I am the Associate Dean for Research in the College of Design and a Professor in the School of City and Regional Planning. The details of the Shape Machine Symposium are really new to me, so I asked Thanos to explain some things, which I want to share with you - it was excellent tutoring.
I want to welcome you to this symposium, which is really quite exciting, not just for the Shape Computation Lab, but for the College of Design as well. And besides mastering the Shape Machine, which is a lot of what the symposium is about, Professor Economou and his team apparently have mastered the prediction of weather, because they ended up with an absolutely beautiful day. It's just an amazing spring day, one of those perfect days that causes people to fall in love with the City of Atlanta, despite the pollen, which you can hear affecting my voice. Our city is known as the city of trees, and the most beautiful trees and bushes are in bloom right now. So, I hope you'll enjoy those, those of you who have come from far away. And if you'll bear with me, because I think it's really cool, we understand we have 91 registrants for this event. And you're coming from all over the world! So, in random order, I'll read the list of what we have so far. And if we've missed anyone, let us know. You're from Turkey, Trinidad and Tobago, Germany, Greece, Serbia, Wales, Austria, Korea, Chile, Japan, Taiwan, Canada, Singapore, and of course, the little U.S. of A. - that's really an amazing geographic diversity represented here, and I think it's very exciting!
So, let me share just a little bit of the history of the research that led to holding this symposium. The Shape Machine has been the Shape Computation Lab's major focus for the past three years. The first attempt to develop this technology began a decade ago, when Thomas Grasl, who is here today, obtained his master's degree at Georgia Tech under Thanos and then went on to doctoral studies at TU Vienna. While his doctoral work differs in significant ways from the approach taken to create the Shape Machine, it provided an initial legacy project and the state-of-the-art knowledge that helped to develop this solution. In turn, this led to the AI EDAM (Artificial Intelligence for Engineering Design, Analysis and Manufacturing) special issue on shape grammar interpreters, which was co-edited by Professor Economou and published in 2018. And as Professor Economou has shared - and I'm going to start saying Thanos because it's a big mouthful - it led to the belief in his lab that this seemingly unattainable technology could be realized.
Today, the core team working with Thanos on the Shape Machine includes three doctoral students - and hopefully you'll stand up and raise your hand. Kurt Hong, who has four degrees and is working on his fifth - he beat my record by one; Heather Ligler, who has her Bachelor of Architecture and M.Sc.Arch. and is a licensed architect with significant practice experience; and James Park, who has a B.Sc. in Architecture and a Master of Architecture and is an instructor in the core computational media and modeling courses in the School of Architecture.
The current team started looking at the problem literally from scratch in 2016 and from there pursued two very different approaches. The first focused on the development of the data representation and shape recognition algorithms. The second focused on developing a uniform characterization of shape as an interplay of visible spatial elements and hyperplane arrangements. The first part resulted in the Shape Machine, and was led in the lab by Kurt Hong. The second part, the Shape Signature, was generously supported by Josephine Yu, who is an Associate Professor at the School of Mathematics at Georgia Tech, and her PhD students, including Cvetelina Hill, who will also speak at the symposium, and two undergraduate students: May Cai and Nicholas Liao.
The Shape Machine project is sponsored by an NSF I-Corps Sites grant for a research-based University startup, working closely with Jeff Garbers at Georgia Tech's VentureLab. The technology is currently undergoing intellectual property review, and may ultimately result in as many as 14 patents. Thanos considers the cross-disciplinary connection between designers, engineers, and mathematicians as a key part of the success story of the progress to date.
In closing, I hope you'll find this symposium to be an exciting opportunity to explore the possibilities of the Shape Machine. Welcome to Atlanta.
I am glad that Nancey did that, so that I don't have to explain what they have been doing! I'm Scott Marble, the Chair of the School of Architecture at Georgia Tech. First, I want to thank Nancey for organizing all the research in the College. She does an amazing job, and this symposium is the result of one of the grant programs that she's put together to bring all of the research across the College, not just within the School of Architecture, together. And it's really been very helpful in getting more people to talk to each other.
I also want to reflect on the audience here. From what I understand, it seems that everybody of relevance in the history of shape grammars has been pointed out - it’s significant that you all are here today. And you know, I think it must be really rewarding and just a great thing to see at least three or four generations of thought in this area all in one place - many different threads, many different directions that this discourse has taken with different people, and I think it's really quite exciting to bring that focus to this symposium. It's really big.
It's awesome and exciting for me that Georgia Tech has been part of this discussion. It is hopefully becoming a leader and a new center of gravity for where this can go, for the promise that this thought has. It’s great to be part of that and to be sponsoring this discussion to get everybody together to talk. And it's really a unique format that we are talking about. It is not the typical format where everybody gives presentations. It is really intended to be very interactive, very reflective of where this work goes.
So, there are a couple of things about the group that I've been pleased and really thrilled to see over the three years that I've been here. When I first got here, Thanos and I spent a lot of time together, a lot of late nights over drinks talking about things. But in the last year and a half, there's been a real shift in what I've seen. And it's one of those things where, you know, I walked by his lab two, three times a week, and there's a group of four people that are literally tethered together around this table, talking intensely. And they don't know if anybody is walking by behind the glass wall, but everybody sees them. But, I don't think they see anybody else because they're so focused on what they're doing.
I would walk home in the evening and I’d see them kind of walking in a group, then having dinner together at nine o'clock at night. They don't even acknowledge me because they’re so engaged, but they're great guys. So, there's clearly a spirit of intense excitement about what they do, and it's been great to watch. I'm very happy and proud of what they're doing. I've seen the demo and it's very, very exciting. So, I just want to welcome everybody. I think this is going to be a great day. I want to congratulate you guys on all the work you are doing and the point you have brought the work to at this moment, especially Thanos.
Thank you Nancey and Scott for your kind welcome and introductions! I am Thanos Economou, Professor in the School of Architecture at Georgia Tech, Director of the Shape Computation Lab and the happy leader and partner of the Shape Computation Group – the group of PhD students that you will see today going over the ins and outs of the Shape Machine technology – the reason for this symposium. Before we start, I would like to thank the Office of the Dean of the College of Design, and specifically Dean French and Nancey Green Leigh, our Associate Dean for Research, for supporting this event, allowing us to invite key people in the field, and making this happen. I would also like to thank the architectural offices of John Portman and Associates and Perkins and Will for sponsoring the reception at the end of the day and the lunch, respectively. It speaks to my heart to see that our event here is supported by academia and the profession alike. I would also like to thank the Office of the Arts at Georgia Tech for sponsoring the exhibition that will conclude the event today at the Cohen Gallery at the School of Architecture. This exhibition showcases four projects, all related to the work that we will be seeing and discussing throughout the symposium.
But most of all, I would like to thank our six guests who agreed to come and join our panel today: George Stiny, Professor of Design and Computation at MIT; Chris Earl, Professor of Design at the Open University, United Kingdom; Terry Knight, Professor of Design and Computation at MIT; Ulrich Flemming, Professor of Architecture at Carnegie Mellon University; Alison McKay, Professor of Design Systems at the University of Leeds, United Kingdom; and Lars Spuybroek, Professor of Architecture at the Georgia Institute of Technology. I will properly introduce our six respondents in the second half of the day, before the focus workshop sessions begin. But clearly, none of that would matter if you all were not here, and I do know that some of you come from far away - Nancey has talked about this. Thank you all for joining us; I only hope the presentations and the discussion afterwards will reward all of the effort that you have put in to join us today!
A few words about what you will be seeing today: you will see two closely interrelated projects. Both projects started at the same time three years ago, when we decided in the lab to take on the problem of shape recognition from two vantage points. We did not know whether these two trajectories would converge, but we were confident that this parallel inquiry would only reinforce both pursuits. The first project focused on the development of data representation and shape recognition algorithms, resulting in the Shape Machine, a general solution for the recognition of shapes consisting of lines and arcs and their combinations under isometries, similarities, affinities and perspectivities - in fact, the complete Erlanger program of transformations. This first project has been led in the lab by Kurt Hong, PhD student and a formidable electrical engineer - and an architect too. The second project focused on the development of a uniform characterization of shape and ended up in the Shape Signature, a unique symbolic description of shapes consisting of lines. The questions we asked were: How many shapes consist of two lines? What do they look like? How many consist of three lines? What are their boundaries? How do the reduction rules of shape grammar theory help to enumerate them? Is this underlying representation necessary for a fully parametric Shape Machine in the future? This inquiry led to a model consisting of a counterpoint of visible spatial elements and underlying carrier lines modelled by hyperplane arrangements and matroid theory. Clearly, we needed help with all of that, and whenever I asked around at Georgia Tech, everybody pointed to Josephine Yu, Associate Professor at the School of Mathematics at Georgia Tech and a guru in algebraic geometry and combinatorics.
Josephine joined the group with Cvetelina Hill, her PhD student and a speaker later today, and two undergraduate students, May Cai and Nicholas Liao, and yes, we got the numbers right. It has been a magical collaboration! More on all this later this morning too!
The symposium is structured like a design review: we will present the work, then the panel and the audience will respond. In the morning session we will present the Shape Machine in a live, interactive demo, and we will run some sample computations to give a sense of the range of the technology's expressiveness. We will follow this demo with four talks exploring various aspects of our project, from the underlying representations in the Shape Machine and the Shape Signature to full-blown applications in design research. Kurt Hong, PhD student at the School of Architecture, will kick off these brief presentations with his talk on Shape Machine and Interpreters, exploring the state of the art in the field of shape grammar interpreters. Cvetelina Hill, PhD student at the School of Mathematics, will follow with her talk on Shape Machine and Shape Signature, discussing a new formal description of shape that we have been working on at the Shape Computation Lab. James Park, PhD student at the School of Architecture, will continue with his talk on Shape Machine and Parametrics, exploring how the Shape Signature can be implemented in the Shape Machine to produce a completely new way of representing parametric shapes in shape computations. And finally, Heather Ligler, PhD student at the School of Architecture, will conclude the morning session with her talk on Shape Machine and Architectural Theory, discussing her formal analysis of John Portman’s house, Entelechy I, with the Shape Machine and the ways this new technology can begin to support formal studies in architectural theory discourse. In addition to these four presentations, we will have a brief show-and-tell by Professor Ying Yu, visiting professor at the School of Civil Engineering at Georgia Tech and an expert in origami engineering design.
Ying saw our first demos of the Shape Machine this past fall; she got excited and wanted to try the technology in the origami design field, and we look forward to seeing the first results of her work.
In the afternoon session we will switch to workshop mode to explore possible relations of the technology discussed today with current discourse in design. We will have six focus groups, led by our guest respondents, each running in two one-hour sessions. The Shape Grammars and Shape Machine group will be led by George Stiny; the Design Education and Shape Machine group will be led by Terry Knight; the Design Research and Shape Machine group will be led by Chris Earl; the Design Synthesis and Shape Machine group will be led by Alison McKay; the Interpreters and Shape Machine group will be led by Ulrich Flemming; and the Architectural Theory and Shape Machine group will be led by Lars Spuybroek. We will talk in more detail about the focus groups in the afternoon session.
In the last session, and hopefully the most exciting of the day, we will open up the floor for a final discussion. The six respondents will give a brief overview of the findings of their groups to start discussion about the possible ramifications of the work presented today. The idea is to have an open-ended discussion that will hopefully produce even more questions than the ones that are currently structuring the events in this symposium.
But let’s leave all these aside for later and turn now to the main reason why we are here - the demo of the Shape Machine!
II. Shape Machine Symposium Workshop
Welcome back, everyone, for our second session. In this second half, the charge is to reflect upon the work that was presented in the morning and to speculate on the effects of this technology on a variety of fronts pertaining to design. This second half will be led by six professors - theoreticians and leading experts in their fields - invited to lead focus workshop groups and engage in a general discussion with the audience afterwards.
Our panel consists of George Stiny, Professor of Design and Computation at MIT; Terry Knight, Professor of Design and Computation at MIT; Chris Earl, Professor of Design at the Open University, United Kingdom; Ulrich Flemming, Professor of Architecture at Carnegie Mellon University; Alison McKay, Professor of Design Systems at the University of Leeds, United Kingdom, and Lars Spuybroek, Professor of Architecture at Georgia Institute of Technology. I promised I would properly introduce them today and now is the time to do so.
George Stiny is a theorist of design and computation and Professor of Computation in the Department of Architecture at MIT. He co-created the concept of shape grammars with James Gips in the late 1960s and has been exploring their sweep in art and design ever since. He is the author of Pictorial and Formal Aspects of Shape and Shape Grammars; Algorithmic Aesthetics: Computer Models for Criticism and Design in the Arts (with J. Gips); and Shape: Talking about Seeing and Doing. Currently he is working on his fourth book, on how shape grammars include S. T. Coleridge’s famous distinction between fancy and imagination, and Oscar Wilde’s critical formula to see things as in themselves they really are not. Stiny was educated at MIT and UCLA. He has taught at the University of Sydney, the Royal College of Art (London), and the Open University. He was on the faculty at UCLA for fifteen years before joining the MIT Department of Architecture in 1996.
Terry Knight is William and Emma Rogers Professor of Design and Computation in the Department of Architecture at the Massachusetts Institute of Technology. She conducts research and teaches in the area of computational design, with an emphasis on the theory and application of shape grammars. Her book, Transformations in Design, is a well-known introduction to the field of shape grammars, and she has published extensively on shape grammar-related topics in design research journals. Her recent research is in the new area of Computational Making, where she is exploring the incorporation of material, sensory, improvisational, and temporal aspects of making things into grammars. She currently serves on the editorial boards of the Journal of Mathematics and the Arts, Design Science, Design Studies, and ArchiDoct. She is co-editor of the Routledge book series, Design, Technology and Society. She holds a BFA from the Nova Scotia College of Art and Design, and an MA and PhD in Architecture from the University of California, Los Angeles.
Chris Earl is Professor of Design in the Department of Engineering and Innovation, and Faculty of Mathematics, Computing and Technology at the Open University, UK. He conducts research in Design, CAD, Robotics and Manufacturing Systems (with RC and EC funding), Design Computation and Design Processes. He teaches courses in Design, Manufacturing Systems and Engineering in UG and MSc programs. He previously taught at Newcastle University (Engineering Design Centre and Faculty of Engineering, 1991-2000) and Bristol Polytechnic (Faculty of Engineering, 1985-1990). He holds a PhD in Design from the Open University.
Ulrich Flemming is Professor Emeritus at the School of Architecture, Carnegie Mellon University. He has written extensively on generative design in architecture and engineering. Areas of expertise include theories of form generation in architecture, generative design systems and applications of formal grammars to architectural design; integrated design systems; knowledge-based and case-based design; design databases; design system interfaces; and human/computer interaction (HCI) in design. Research projects at CMU include software development for Integrated Building Design Environment (IBDE), Human-Computer Interaction in CAD, Computer-Assisted Layout Generation, and Software Environment to Support the Early Phases in Building Design (SEED). He holds a PhD from the Technical University Berlin, Germany.
Alison McKay is Professor of Design Systems in the School of Mechanical Engineering at the University of Leeds and director of the Socio-Technical Centre, a multidisciplinary research center based in the Leeds University Business School. Her research focuses on socio-technical aspects of engineering design systems and the networks of organizations that both develop and deliver products to market. Areas of expertise include engineering design and engineering information systems, design descriptions, supply chain innovation, product development systems, enterprise engineering, and socio-technical systems. She is a Fellow of the IMechE and member of the Design Society. She holds a PhD from the University of Leeds, UK.
Lars Spuybroek is Professor of Architecture at the Georgia Institute of Technology in Atlanta, where he teaches design methodology and aesthetic theory. As an architect, he built the HtwoOexpo water pavilion, the Maison Folie in Lille, France, and large electronic public artworks such as the D-tower and Son-O-House in the Netherlands. His works have been exhibited at various Venice Biennales, the Victoria & Albert, and the Centre Pompidou, and are part of the collections of the FRAC in Orléans and the CCA in Montreal. More than 400 articles have been written about his architectural work. Over the last ten years, Spuybroek has turned his focus to writing and teaching. He is the author of The Architecture of Continuity; Research and Design: The Architecture of Variation; Research and Design: Textile Tectonics; and The Sympathy of Things. He is currently working on a book for Bloomsbury entitled Grace and Gravity: Architecture of the Figure.
Our guests will lead six focus groups to explore relations, possibilities, and interfaces of the Shape Machine on various fronts pertaining to design research, practice and education. These six fronts are: Shape Grammars, Design Education, Design Research, Design Synthesis, Shape Grammar Interpreters, and Architectural and Design Theory. The focus group on Shape Grammars and Shape Machine will be led by George Stiny. The group on Design Education and Shape Machine will be led by Terry Knight. The group on Design Research and Shape Machine will be led by Chris Earl. The group on Design Synthesis and Shape Machine will be led by Alison McKay. The group on Shape Grammar Interpreters and Shape Machine will be led by Ulrich Flemming. The group on Architectural Theory and Shape Machine will be led by Lars Spuybroek. To spice things up, we have decided to split the workshop into two sessions, each lasting one hour, to give each of you the opportunity to interact with two guests during the workshop. I am sure that most of you would like to sit at more than one table to engage in different topics and work more closely with our guests. I hope the rotation between two tables will give our guests the opportunity to pose their questions in two different settings.
At the end of the workshop, we will take a brief break for coffee and then resume for the last session of the day, where the panel will briefly discuss the findings from their focus groups and open up the floor for a general discussion with the audience. I am very much looking forward to hearing what you come up with in your groups, and even more to the discussion afterwards. Several people here have already reached out to me with very exciting ideas, and I would like to see how they play out in this discussion.
[Focus Group Workshop Sessions and Break]
Shape Grammars and Shape Machine: George Stiny
Design Education and Shape Machine: Terry Knight
Design Research and Shape Machine: Chris Earl
Design Synthesis and Shape Machine: Alison McKay
Interpreters and Shape Machine: Ulrich Flemming
Architectural Theory and Shape Machine: Lars Spuybroek
III. Shape Machine Symposium Final Discussion
Welcome back! The chairs and the tables are all back in the morning format and we are ready for the last session of the day. I hope you all enjoyed the discussion in your two sessions - I personally wish I could have been at each one of them! Alright, we have about an hour and a half, and I'm looking forward to our discussion! We can perhaps start with the inventor of shape grammars, George.
I'm happy to start. I'd like to say, in the spirit of shape grammars: I have no position. And that truly is the whole point of schemas and rules and shapes - that you can change your mind and move around. What is most impressive about the demonstration we saw this morning is that Thanos and his group have finally put together a machine that actually allows you to put things together with reduction rules and then do embedding again. I thought that was very impressive, and certainly the enthusiasm of his students and researchers in the group only bodes well for the future.
My basic position is that I'd like to encourage everybody to remain positionless, not with respect to shape grammars, but with respect to what you do when you apply them to design. It's that positionless-ness that leads to creativity and imagination. And certainly it is the kind of thing that you pick up when you read people like Oscar Wilde and S.T. Coleridge - people who are really trying to look at what imagination is in literature and pictures. The interesting thing is that it also comes up in mathematics with people like von Neumann, who was looking at the limits of calculating and what you can do and what you can't do.
So, again to positionless-ness.
I’m just trying to do a little report because the two sessions that we had were very different, probably because there was no leader.
I opened the first session with what I thought was an important question: “What is shape? And how does shape differ from form, thing, entity, or figure?” Obviously, there’s a whole philosophical range of entities. Since I’m from theory, I take a philosophical point of view, where the notion of a thing is highly defined. And form, too, has an enormous history. But shape, hardly.
I was asking the question, “What makes a shape?” And that’s where the trouble starts, because I got a lot of answers saying, “Shape is what you see in a thing” - almost gestalt-ish notions of shape, which, at the time of Gestalt theory, meant a multiplicity. So, the question then arises whether that is a multiple entity. And then it’s a question of how that thing exists by itself. Is it really dependent on our perception, or does a shape exist by itself?
Then, we asked the question of parts and wholes: “Are shapes made of elements?” Because if there’s a grammar, there must be a grammar of elements that make up a shape - and then, obviously, these elements are shapes too. So, you get levels of shapes - mini-shapes and a maxi-shape. Then the question is really how that thing exists. It’s really an ontological question: not just a phenomenological question of how we perceive that thing, but of how it exists by itself.
There was a little conclusion, in the sense that each thing, or each shape, exists on two levels: one is its geometry and the other is its organization. Kant already said that for things to be real they need to have a schema. But he saw that schema as exclusively geometric, like Plato did. This brought us to the question of how things are organized as a multiplicity. Of course there are many topological, non-geometrical or parametric answers to geometry, meaning that there must be some kind of systematicity that allows elements to combine in multiple ways, not just rigidly geometric ones. The elements we draw are like continuous lines, but they must somehow be related by non-continuous lines, by dashed lines, making the elements relate to one another to actually create shapes. In short, I have issues with the notion of shape and also with how it relates to thing, form, entity, figure, etc. Shape seems like a very innocent term, but I don’t think it is.
The other discussion was very interesting in the sense that it started with the question of the sketch: “What is a sketch? And how does a sketch differ from a drawing and from a doodle?” Clearly, a sketch goes beyond mere doodling; it is more than automatic drawing, but it is not a drawing yet. How can shape grammars help in reading sketches so that they become drawings, so that they find direction? The issue is then between possibility and potential, because when you sketch you try to find a direction, and if your shape grammar software is reading your sketch, it gives you all the possibilities. Like with Thanos’s triangles made up of three lines: it can give all 509 of them, but maybe I’m only interested in five. There’s a whole range of possibilities. The real question is: what is possibility versus potential, or tendency? In that sense the questions of sketch and shape run parallel to those of form and multiplicity. That’s the summary.
I didn't really take notes, so if I forget something important or completely misrepresent something that was discussed, please, I encourage the members of the two workshops not to be shy and to speak up. First of all, a big compliment on the interpreter that we have seen. In terms of the interface, it shows how intuitiveness can be combined with precision: by defining the left-hand side immediately, by taking it out of the existing shape, and then defining the right-hand side by manipulating it - always related to a clear point of reference. I might also add that the two shape grammar interpreters that I ever implemented - admittedly under extremely rigid time constraints - did not have this important characteristic that to me every shape grammar interpreter in principle should have: namely, that the left- and right-hand sides are syntactically identical, such that you can apply each rule backwards as well as forwards. In the extreme, you have a finished shape and you can test whether it is canonical by applying enough rules backwards and checking whether you actually end up with the initial shape, or not. How feasible this is, I don’t know, but just to give you an idea of what backwards rule application means: whereas the shape grammar interpreters that I implemented had to use existing shells that had matching within them - because we did not have enough time to write matching from scratch - it always turned out that there is no syntactical equivalence between the left- and right-hand sides. The left-hand side typically is a collection of predicates that have to be satisfied and that at the same time help fill in the parameters with real values given the current shape. And the right-hand side is a procedure: it just tells you what to do with the parameters that you have collected. So, I am very pleased to see a shape grammar interpreter that gets this right.
We got very interested in the question of how signatures are used in the interpreter for matching. Apparently, this is work in progress, and we did not get a definite answer on that, but it seems to be a very interesting avenue to pursue. We also wanted to bring into the discussion the idea of a manifest design space that gives the designer who uses the system some idea of where he or she actually is, rather than always seeing just the existing state of the design. But, as I said, we never got into that. I’ll leave it at that and hand it over to my friend Chris.
Thank you, Ulrich. I'll just make a couple of quite brief remarks. I apologize to my colleagues beforehand; I’ll fill in the gaps later. One of the things that has intrigued me is the categories of applications in design for which you might use a grammar or an interpreter. One category is essentially about exploration: you take rules and you look at what might happen. And you might constrain that in all sorts of ways, but essentially that's an exploratory mode. It's not fundamentally very interesting, but certainly the tool that we've seen this morning does that very well. The second category, which is rather more difficult - and probably seen in the discussion around engineering, mechanical engineering, and building - is one where you have, let's say, two sets of rules which are ostensibly making the same object, the same design. Two separate descriptions, two sets of rules which make the same thing, and then you end up with an activity that isn't really addressed by the interpreter, which is around resolving the differences between those descriptions. So that's a matter of the meeting of two sets of rules ostensibly describing the same thing. And the third category in which you might use the rules in design - and I’ll hand it over to Terry because she’s certainly the person who knows about all this - is the way that one set of rules to make something is transformed into another set of rules. And that is a sort of signature, creative act that seems to be quite important in design.
So, first: one set of rules; second: two sets of different rules generating difference and requiring resolution; and third: one set of rules transformed into another, creatively. Those seem to be, just on reflection, the categories of activity - ways that the grammar or sets of rules might be used in design. I think it's those things which this Shape Machine needs to address.
Before I say anything about our task to talk about the Shape Machine and education, I would like to publicly thank Thanos for bringing all these people together today. It is really overwhelming. I've told a number of people that it is personally overwhelming for me to see over thirty years of my students, here, in one place.
Thank you very much for putting this together, you deserve a round of applause! And Thanos was one of our first students at UCLA, congratulations!
Okay, so we were tasked with talking about the Shape Machine and education. And about ten minutes after the second session concluded, we discovered that we had, in fact, not talked about the Shape Machine at all. I'm sorry about that, but we did talk about software and we did have some interesting discussions. First, about using shape grammar software in education. We talked about the benefits and drawbacks of working with shape grammars through software versus working with shape grammars by hand. And those of you who know me, know that when I teach my classes I insist that students do everything by hand. My position is that in order to really understand how shape grammars work and the benefits of shape grammars, you need to apply rules by hand and you need to do it slowly.
That is my position and it was quickly countered by other people at the table who talked about the benefits of the software. So we had some interesting discussions about when and how and where to introduce shape grammar software in an educational process.
Andrew Li was at our table, and Andrew has had a lot of experience teaching shape grammars with software and being very aware of the need to have software interfaces that are designer-friendly. We often talk, when we talk about implementations, about the back end and the stuff that you can't see, the technical parts, but none of that actually becomes usable unless we have nice interfaces. I was very impressed with what you showed this morning in terms of the interface and the real-time demonstrations of how the Shape Machine worked - that was very impressive. We talked about analog versus digital and doing things fast and doing things slowly - the benefits, again, and the weaknesses of those two approaches. From there, our conversations were pretty free-ranging.
We had some discussion about generative design. The idea of being able to generate multiple possibilities and very large design spaces - and the implications of that in terms of teaching. If you have really huge design spaces generated by a rule-based approach, how is that presented to students? And what do they learn from that? And again, we had a discussion about different ways of searching through a design space generated by rule-based design. From there, we talked about the differences between generative design and parametric design, which is very popular and has actually helped the shape grammar community in advocating for rule-based design. When I first started teaching, parametric design was not popular - at all. We introduced rule-based design and this notion of variational design and we were met with, "Boo hiss, we don't want to know about multiple possibilities, there's only one good possibility and that's it." But since parametric design has been on the rise, it has indirectly helped boost the shape grammar community. Of course, we know that shape grammars are much more powerful than parametric design, and that calls for more powerful software.
We also talked about the trade-offs, or the balance between predictability and unpredictability, when you're using generative design or shape grammars, either analog or digital. When you're using shape grammars, you don't want to get anything and everything; you want a certain range of possibilities that are meaningful - not totally predictable, and not totally unpredictable. We had some interesting discussions around that. And then we talked about how one introduces shape grammars to different levels of architectural design students, from undergraduate to graduate. And the notion, at least at the undergraduate level, of introducing rule-based design implicitly and not even mentioning shape grammars. Don't even use the word shape rule, just the notion of general rules. What do you see in a design? How does it repeat? And, can you replicate this? Can you communicate the idea to your neighbor through some means? - so the idea of rules is introduced implicitly versus explicitly. We talked about explicit instruction in shape grammars and how one might do that, and about what happens at the graduate level as well.
We also talked, getting back to what Lars was talking about, about the difference between shapes and things. We had some discussion of that, but from quite a different angle. We talked about grammars that work with shapes, but where, in the process of shape computation, material manifestations of the shapes are made. Reflecting on those material manifestations can then go back and guide the transformation or regeneration of designs with shapes. We can go back and forth between abstract shapes and real, physical material manifestations of the shapes in the world. The benefit of doing that, in terms of teaching shape grammars, is that we actually see the physical results of shape computations, as opposed to just having them on paper or on screens with software.
Ok, I'll leave it at that. I'm sure I left out a lot.
Our question was, "How does this technology affect design synthesis?" The first thing we started to talk about was how the interface allowed you to sketch. So, in a way, it showed us a tool for doing design that accommodated human behavior and preferences, rather than a tool that one has to learn to use because it does things in its own way. From that, we were talking about how we might pitch it to designers. The view was that designers want to feel like they're designing. So, turning up and saying, "We've got something that will get rid of you and automate you," isn't going to succeed. Really, our discussion was around pitching it as something that can improve people's creativity, because it means people don't waste time doing things that the machine can do.
In terms of design synthesis, we asked, “What kinds of things can we design?” It could be shapes, rules, designs - we went through solution spaces - so your grid this morning, Heather, was a solution space. In the UK, we’re doing some stuff that builds materials, which is another kind of solution space. Then, we went on to think about things that aren't shapes - things that are represented by shapes, but actually aren't shapes. That took us through to new ways of analyzing, interpreting, which I think Chris mentioned - being able to take a design and then using rules to superimpose on it, say different manufacturing processes. Companies spend a lot of money doing this, but if you can automate that, it would save a lot of time and speed up the development process.
And then, in terms of the technology - well, it seemed that there were parts of the Shape Machine that we thought could be used and exploited quite quickly. The search method and being able to find subshapes could be applied in all kinds of places. The other thing was the cleaning of DXF files; quite a few people were excited about being able to do that. Then there was the idea that you could have a design system where designers didn't need to know or care about what the representation was, because they are literally just working with their sketches. Another thing that was of interest to people was this idea of using AI in design synthesis - applying it to designs, but also to rules and grammars and the sort of metadata behind what people are designing with.
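As an editorial aside: the kind of drawing clean-up mentioned here - collapsing redundant, overlapping strokes into maximal elements so the stored representation stops mattering - can be sketched in a few lines. This is purely illustrative and not the Shape Machine's implementation; the function name `clean_segments` and the one-dimensional simplification (intervals on a single shared carrier line) are my own assumptions.

```python
def clean_segments(segments):
    """Merge overlapping collinear intervals into maximal segments.

    `segments` is a list of (x0, x1) pairs on one shared carrier line;
    a full 2-D cleaner would first group segments by their carrier line.
    """
    # Normalize each pair to (low, high) and sort by starting point.
    intervals = sorted(tuple(sorted(s)) for s in segments)
    merged = []
    for lo, hi in intervals:
        if merged and lo <= merged[-1][1]:   # overlaps or touches the last one
            merged[-1] = (merged[-1][0], max(merged[-1][1], hi))
        else:
            merged.append((lo, hi))
    return merged

# Three messy, overlapping strokes collapse into one maximal segment:
print(clean_segments([(0, 2), (1, 3), (3, 5)]))  # [(0, 5)]
```

The same idea, lifted to full 2-D lines under a maximal-element representation, is what lets a tool treat a drawing independently of how its strokes happen to be stored in the file.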
We were talking about the visual computing demo with Shape Machine, the sorting of numbers and letters. It seemed to us that that has a lot of potential. You could imagine doing visual computations where, actually, people can't see what's being computed. If something visual like that works at scale, it would lead to a need for new kinds of computation infrastructures which would run visual computation processes more efficiently. And then, finally, if I look at the whole question, "What does this technology and design synthesis bring together?" - we had some conversations around capturing design intent, functionality, different aspects of design that go with shape - but there's more scope to talk about that.
Thank you all, lots of things to go through! I have to say, because of the setting here in the room - with the weight of the shape grammar panel all on this side and you, Lars, being the outsider in this group - I would like to start with you and ask what you think about the Shape Machine.
Well, I enjoy the notion of visual computation, I think it’s an enormous invention. I’m just in the dark on how it works.
Well, your studios are all about finding these bits and pieces in shapes - foregrounding them, redrawing them, reworking them to create new things.
Basically, I don't use the word shape. I use the word figure, which is slightly different, I think, but I'm not sure if I should make that argument. A figure is - like shape - a line, a very happy line, and when it comes close to another line it responds, not just by moving but also by changing its form. It's very Ruskinian, very Hogarthian - it's like S-lines, C-lines, J-lines, meaning there's a range of figures, and each figure contains again a range of variations. It's very Gothic by nature. These lines sniff at each other, they have behavior like wolves in a pack, and then they team up. As far as I am concerned, that's a shape, but I call it configuring. Lines are active agents; they act on each other and interact with each other.
In the case of shape grammar, I'm never sure if the element is moved by the grammar. Is the grammar on the outside of the element, or does it follow the internal movement of the element? As in the case of your triangulated set of three lines: is there a triangle in Heaven that tells these three lines that they are actually part of an invisible triangle? I'm trying to say it as Greek as I can. There's this Heavenly Triangle that orders these three lines, these godforsaken lines that don't know anything, that are just there. They don't have behavior or sniff at each other; they are just told by this divinity that they are part of this secret thing called a triangle.
That's why my issue with shape grammar starts, because it relates the lines by an external ordering mechanism, not by the internal behavior of those elements. It's an external, geometrical analysis, not a very parametrical one.
I’m trying to think, what is shape? How is it different from pattern, from form? Can you guys position that more?
I think you’re overthinking it.
Well, that’s sort of my job.
Not that what you’re thinking is uninteresting - that’s not a critique of what you’re saying, but in terms of visual computation I think that you are overthinking it.
Visual computing is computing – it’s algorithmic, it’s process-oriented.
Yeah, but it’s using visual, spatial entities as opposed to text or lines of code.
So, the visual aspect is the one looking at the screen, it’s not internal to the figural component?
Well, no it’s not necessarily on the screen. Originally, it was on paper – just drawings on paper.
Yeah, but you need a second agent to see. The visual component is with the agent.
The agent is you. The agent is the user or the designer.
That’s what I mean – so, the agency is not in the elements, it’s in the person looking?
Yeah, absolutely. So, you have some visual, spatial entities and the user is the designer – like you do in the studio. The designer is looking at things and saying, “Oh, I see this and I want to do that with it.”
Yes, but the argument is really that when these elements have internal behavior, they actually see each other without me seeing them.
That’s my position.
There is nothing without the human perceiving.
Really? We’ve been on Earth for like 5 million years, and the Earth is more than a thousand times older …
No, I don’t want to get into a philosophical thing.
Well, there was perception before us humans roamed the planet … and mountains had shapes.
What really helped me connect shape grammars to what was going on in an architectural process was when one of my colleagues at Carnegie Mellon kept saying to students, “You have got to let the drawing talk to you. You’ve got to have a conversation with the drawing.” What he was saying was that you’ve got to look at what you’ve done with fresh eyes every time you look at it – you’ve got to let that drawing suggest to you the things to do – and that’s exactly what shape grammars try to formalize. The grammar system enables the viewer, which may be the computer.
I totally get that, it’s super interesting.
My point is – it’s nothing more than that.
I’m just wondering if there’s so much multiplicity in that shape, how do you then draw it as such? There’s probably a million ways of looking at a square, but if there’s multiplicity in that thing, then maybe you could draw it differently. It means there must be a loop between the multiplicity of you seeing, and the one who generates, wouldn’t that be the argument? The argument is that if there’s variability in the perception of it, how does that variability become generative and actually create a set of shapes?
Now you’ve got it, it’s a choice. Choice has got to be enabled.
Well, I think it needs something.
What do you mean, choice?
Well, that variability … Let me try to explain with an example. There’s this beautiful diagram of Ruskin, it’s called “Abstract Lines,” and there are lines of leaf margins, or ballistics, or the contour of a glacier, basically these are all shapes made by movements. These are action lines, they aren’t so much abstract lines, but lines abstracted from material existence that however remain sensual. That’s how he analyzes those lines - not as they rest, but as still active. What’s interesting there is that the variability is actually part of the element, not just in his reading of it, his perception, but it’s actually the line itself that’s variable. Of course, they are curves, so they are very good at variability. In fact, it’s the reason why they have taken on the shape of curves, Leibniz told us that.
There’s an issue of how variability itself becomes generative in design. It doesn’t have to be a “curve.”
Well, you can do that with a rule.
Yeah, of course it’s rule-based, and rules are sets and sets are multiplicities, little blocks of variables, not singular shapes.
Yeah, it’s rule-based, so whatever it is you see in that square or in that line, then you can turn that into a rule and say, “I see this and I’m going to do that with it because I see it in this particular way.”
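As an editorial aside, the "I see this and I'm going to do that with it" loop can be sketched in miniature. The sketch below is a deliberate toy, not shape-grammar machinery proper: shapes here are finite point sets and only translations are tried, whereas real shape grammars compute with lines (with embedding) under the full set of Euclidean transformations. All names are illustrative.

```python
def apply_rule(design, lhs, rhs):
    """Apply rule lhs -> rhs to `design` under the first translation found."""
    design, lhs, rhs = set(design), set(lhs), set(rhs)
    anchor = next(iter(lhs))
    for p in design:                                   # candidate image of anchor
        dx, dy = p[0] - anchor[0], p[1] - anchor[1]
        image = {(x + dx, y + dy) for x, y in lhs}
        if image <= design:                            # lhs embeds here
            new_rhs = {(x + dx, y + dy) for x, y in rhs}
            return (design - image) | new_rhs          # subtract lhs, add rhs
    return design                                      # rule does not apply

# Rule: a single point {(0, 0)} becomes a pair {(0, 0), (1, 0)}.
result = apply_rule({(5, 5)}, {(0, 0)}, {(0, 0), (1, 0)})
print(sorted(result))  # [(5, 5), (6, 5)]
```

The "seeing" is the search for an embedding; the "doing" is the subtract-and-add. Everything interesting in the panel's discussion - multiple readings, choice, ambiguity - lives in the fact that more than one embedding is usually available.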
I just want to add something about the role of shape grammars in design. It would be a complete misunderstanding to think shape grammars are something that is handed to a designer. I think they ultimately are designed by the designers themselves, because they have some idea and want to explore the implications of that idea - either for fun or in the context of a given project. I want to throw in a name which, to my surprise, I have never heard mentioned in any piece of literature on shape grammars, and that is Harold Cohen, who wrote, decades ago, a little production system that produces drawings like he would do by hand.
I actually bought one of his drawings for ten bucks. He had an exhibition, and there was a big printer - very primitive by today's standards - that would print these. I have a drawing that has the typical Harold Cohen shapes, these organic shapes.
Aaron is the name of the program.
In this particular drawing, his program suddenly takes off into the upper right corner and draws a little cloud into the middle of nowhere, and then it takes off and draws something else. I'm sure Cohen was totally surprised by why and how this happened. Of course, it was extremely interesting, and he sold it as a drawing to me, signed.
So, shape grammars should be seen as a tool that is under the designers’ control, not something handed to him or her from some higher authority.
One of the places I’ve used shape grammars is in teaching students. What I see is that they’ve got trouble using a CAD system - and the problem isn’t that they can’t use the CAD system, it’s that they don’t actually know what they want to define. You can’t design with a CAD system, because you’ve only got the entities you’ve typed into it.
How do you teach students to do design synthesis? God alone knows, you just go off and get inspired or something.
Actually, having a shape grammar where you can compute some designs just helps inspire students, but to me the things that the shape grammar is operating on are just passive things that you see. Having anything to look at is good if you’ve got nothing, and it just takes people off on paths. That’s certainly the feedback I’ve had from students: that having anything that acts as a seed to set them off is what they need to create new things.
I’ve never thought of the things that I’m looking at as being things that have a behavior themselves, they’re just stuff that people see. Were you suggesting that the things have got behavior?
So, they might have behavior, but that would just be an interpretation.
We were discussing how Rem Koolhaas won the Seattle Library. There was a little competition for the clients, who wanted to see how the architects worked. He started with a large piece of paper – I guess he knew what he was doing – so he’s like cutting this piece and folding it. It was the 90s, so he was “folding.” The trick was of course that the client sees floors, because he’s holding sheets like they are floors, but he gets the bending and the folding for free because it’s paper. That’s what the paper wants – it’s not drawn, he’s acting by holding those levels of paper separately and then the paper is doing the folding. So that’s added by the material, not by human perception. That’s what I expect from a machine, that there’s something I do and the machine gives me more - it adds something to it or puts me on the wrong foot or … I don’t want to be in control.
I want only to be a bit in control, like when I’m cooking. Well, when I’m cooking, I’m not very controlled, but you know: you add elements, you warm them up, they mobilize and then they take their own direction.
That’s very good because shape grammars are just like cooking, they really are.
Yeah, they cook?
Yes, it’s spatial cookery.
Ok, I’m part of the group then.
There’s the same amount of control and unpredictability and magic and bringing things to life. The ingredients for some recipe are dead as far as I’m concerned, it’s the cook that brings the stuff to life and it’s the same with shapes. They are passive until someone looks at them and brings them to life, animates them.
So shape is not the end product, but it’s the recipe?
No, it’s the computational process – it starts with the rules, like a recipe, and then the cook brings it to life. The user, the designer, takes those shapes and just makes magic out of them visually and spatially.
That’s where the perceptual and the tactile part comes into play.
Lars, the way that you actually talk about forming now, it reminds me of when we were all trained as architects. Back then, I remember my first day we were given a range of pencils - here is the H6, the H3, the H1, the B6. So, in theory, we had to understand how a line is drawn differently with an H6 versus a line drawn with a B6. I think that's what you are referring to - if I can try to make some connections here between the discussion. People would call "line" the line that is done by an H6 and they would call "line" the line that is done by the B3 or the B6 - but I suspect you would say, "No, these are not the same lines because there is a materiality involved. The way that the hand moves has to do with the type of pencil and the type of paper too." No? Are these your complaints?
I'm not sure they are complaints. I really want to know and figure out what is going on. No, I totally believe in grammar, I'm just wondering if shape is not a sort of defensive term, on purpose left very undefined. That could be fine, but it's still important to say what is in fact form and what is pattern, and what is a thing, and what is an object. How does shape differ from all those? - because I think it does need definition. Now I do understand that shape in that sense is unfinished, right? So somehow there's an incompleteness to shape - and it opens up and it allows for multiple readings, so it's not a finished thing.
But it does mean that if there's a range of readings of each state, that those readings don't go in all possible directions, but actually sort of team up and become family, right? - that there's some direction to that thing. I'm also thinking about Venturi’s contradiction. You get multiple shapes and these shapes have ambiguous readings of one another, but they don't exclude one another. They overlap, that's why it's ambiguous. Is it one house or two houses, for instance?
Or I'm thinking of the wall in Haus Müller from Adolf Loos where you see a wall and there’s a hole in it and you think, is it a window or is it two columns? That's very specific playing with the size of those things, but these two are family - it's not like that's one reading and that’s another reading. It means that this multiplicity somehow makes the thing richer, not just self-contradictory. I think that's quite important - that when there is incompleteness it opens up a thing perceptually that makes it richer. That all these variations are actually, still in your mind, somehow combinable and don’t exclude one another or select negatives in one another.
I want to take a stab at answering some of what you are provoking. I think that another way of phrasing what you're asking is that you're asking for a shape-oriented ontology. And that's kind of abstract when you just say, "shape." But before that, there is one that is more commonly understood, a material-oriented ontology, which we may call a bottom-up approach in architecture or design, where we say, "Here's a material, here is wood, what does wood want to be?" So instead of sculpting, maybe I can bend it - and I understand that as a constraint or a ruleset, which I can use to design the thing. So, I step into a framework and I can do shape grammars within that framework to explore. On the other hand, if we were to look at shapes as a broader term, I'll use the example of your Son-O-House. When I analyze that through mathematics, which you are very familiar with, I look at it and I say, "Well, this house is actually a cylinder. And by combining parts of cylinders and a sphere, you can create your own Son-O-House through a series of combinatorics, combining parts of shapes with mathematical DNA: sine and cosine." That's a type of shape grammar, but a much more combinatorial type. Shape grammars, to me, again, are much more about embedding and constantly searching for new opportunities.
So, the embedding is done by the designer? Did I get that right?
The embedding is done by the designer, but it's also about being consciously aware of what framework we choose to step into - and at what time. Because each framework we step into - whether it's the mathematical framework, the ecology of shapes, or the material framework, whether it's doing it by hand or using software - has constraints and limitations and bias. That doesn't mean we're not doing shape grammars; it just means that we have to be conscious about what framework we're stepping into, so that we can step out of it and do other things. But, I think, as in the example of the Son-O-House, yes, we can look at things in an object- or shape-oriented manner, but you then have to specify what kind of shapes you are looking at, just as we have to specify what materials we are going to look for. We can't just say, "Get all the materials"; we pick materials.
First of all, congratulations! It's a great piece of work. It looks like it addresses most of the key computational and mathematical issues - it has created a context to explore. My comments are on the Shape Machine itself. For me, the most valuable thing that I have seen today is the history - the vertical history, the sequence of rule applications. I see a lot of potential there. I am coming back to your comment about "What is the machine giving me back?" So, trying to answer that, at least my interpretation is: what if we have a real-time update of the design rules? We have a trail, a history of rules. Every time you change something, you trigger an automatic update, so we have a chain reaction in this thing. If you do so, it means that we can constantly update every single state of the design development, and that would be very handy for getting feedback from your own rules. You would see that you do something and it is not just one step talking to you; it is all of the steps talking to you at the same time. The only thing is, every time you are creating this history, there is an implicit decision to be made: apply this rule in this triangle, or the other triangle? In this corner, or the other corner, right? - and there are many of them.
So, my comment is more on the tool itself, the actual implementation, the entity that stores the dialogue about how to do that. There must be some binary tree, some data structure to store all of the history.
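As an editorial aside, the "chain reaction" idea - edit one step and every later state updates - can be sketched with the simplest possible data structure: a linear derivation that is replayed from the start. Everything here (the `Derivation` class, the stand-in integer "designs") is a hypothetical illustration, not the Shape Machine's actual storage; a production version would need the branching tree the speaker mentions, recording which match was chosen at each step.

```python
class Derivation:
    """A derivation stored as an initial design plus a list of rule steps."""

    def __init__(self, initial):
        self.initial = initial
        self.steps = []              # each step: a function state -> state

    def append(self, rule):
        self.steps.append(rule)

    def edit(self, i, rule):
        """Replace step i; every later state changes on the next replay."""
        self.steps[i] = rule

    def states(self):
        """Replay the whole history: the full trail of design states."""
        state = self.initial
        trail = [state]
        for rule in self.steps:
            state = rule(state)
            trail.append(state)
        return trail

d = Derivation(1)
d.append(lambda s: s + 1)    # stand-in rules acting on an integer "design"
d.append(lambda s: s * 10)
print(d.states())            # [1, 2, 20]
d.edit(0, lambda s: s + 2)   # change the first step...
print(d.states())            # [1, 3, 30]  ...and all later states update
```

Replay is the crudest possible propagation strategy; its virtue is that it makes the whole trail, not just the last state, respond to an edit, which is exactly the "all of the steps talking to you at the same time" behavior described above.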
What I saw that was really interesting was, and I am not a student of shape grammars, so I apologize - I am ignorant of shape grammars.
As everybody else here.
It seems like what you've done is that you've created a way to encode this atomic thing called shape in a way that's recognizable by the tool. And you can find instances of that shape in a larger context that we call a drawing. And you can find the inverse, of course, which is a very complicated thing. So the challenge, it seems, is whether your encoding system is clever enough to encode a meaningful shape - whether it's an electrical outlet, or a spiral stair, or a garden window. Is your encoding system robust enough to hold up when given a meaningful shape at the bottom end? And, at the top end, you have the entire internet full of drawings - and they are waiting to be examined - so this is machine learning. At the top end, this is big. It screams out to be treated with a machine learning algorithm that says: "Ok, I am just going to throw this huge library of meaningful shapes at this equally huge library of drawings." And I am going to try to put little bits of data on the drawings that say: "This was a building that burned down and people died because there were too many dead-end corridors." And it's like the dog and the cat - if you throw enough of this data at it, you will begin to find meaning from the bottom working up and from the top working down. So, on the one hand, you need to test your system to see if it is robust enough to handle meaningful shapes, and on the other, you need to look at developing a machine learning framework to plug it all in.
I think, Robert, of the two questions you raised, the first one, on meaningful shapes, leads to questions about how a shape is meaningful, or better, when a shape is meaningful. Even in the examples that we saw today, this is something that speaks to the way that we use the Shape Machine in the lab: the things that we try to find are, by themselves, quite meaningless. In the past, the kinds of shapes that we wanted to search for had to look good, they had to be meaningful somehow. And Heather showed that nicely - and we had other examples too. But the thing that we want, once it is embedded in a drawing, is quite important. If you take that same thing out of the drawing, you don't know what you are looking at. For example, the corner detail when we were lasso-ing the wall of the Mies van der Rohe. We were saying, "Find me all the corners and clean them up," which is like a drafting exercise. If you put that thing outside the context of the drawing, it looks like a letter P, or like a U-shape with an E, going back to George's "A plus E" paper. The point is that this seeming fragment of a shape, once put in the drawing, becomes absolutely meaningful. Meaning resides entirely in context.
The second question leads to far reaching implications: I had a discussion once with George, I was showing him the plan of Savannah, one of the most handsome cities here in Georgia and very close to Atlanta, and I was comparing it with one of the Chinese lattice squares in his ice-ray grammar, and this square motif happened to be identical with the basic division of the Savannah city plan. It was a delightful discussion we had that day looking at this little thing and how it could become, in different scales, a lattice ornament or an urban plan. That brings in all of this notion of design in its own right - beyond scales, beyond disciplines - and the notion of machine learning framework for shapes and drawings seems so appropriate.
I invite others to jump in, George, what will it take to have you here?
You are innocent?
Ho, ho, ho, I don’t believe that.
Let's talk about Oscar Wilde. I was reading George's paper on 'The Critic as Artist,' and I'm really interested in Oscar Wilde. And let's keep in mind that he was a Ruskinian …
… he was one of his teachers
Oh yes, he was one of the few teachers that made Wilde sweat. Oscar was actually pushing the wheelbarrow with the rubble of the road Ruskin's students were building at Hinksey, north of Oxford … Anyway, Wilde comes from Ruskin. And Ruskin comes from earlier movements, such as the picturesque, and Hogarth - that is, a larger Romantic movement where the notion of imperfection slowly enters aesthetics. In the 1750s-60s, there's this notion of cracking and breaking, of making things imperfect, but not in the way the earlier ruin was. The followers of the picturesque, such as Ruskin, were obsessed with the cottage. While the ruin is a whole that has been broken, the cottage is the reverse: it's made up of aggregating, active parts. Uvedale Price describes it as being aggregated piece by piece, room by room, over the years. It's built without an architect. This is important for many reasons; it's the start of Romanticism as an aesthetic of imperfection and process, but for Ruskin it meant a way of understanding the Gothic and teaching it. Now, with Oscar Wilde, something happens with that imperfection. It becomes far more extreme, in the sense that it goes from aesthetics to decadence - beauty becomes obsessed with death.
Well, yeah …
That’s like saying there's no difference.
Now, that's where I wanted to go, that’s pretty extreme.
So, at first he's an aestheticist. It's all beautiful, and it's chintz and china porcelain, and it's peacocks, blue velvet and purple velvet. And then, suddenly, he switches with Dorian Gray to this obsession with death and it's like beauty and death are now coexistent, right?
To put it in shape-grammar terms: things falling apart, that is, shapes falling apart are almost the same thing as shapes coming together at that point. And I think that's maybe where this whole thing becomes really interesting ... I don't mean decadent in the sense of, “Oh it smells like death or the sublime,” but it's really that you don't know if a thing is falling apart or if things are actually coming together.
I thought that was really the point where, I hoped, I was actually understanding what George wrote in his essay on Wilde - like, “Okay, I get his point, that is what a shape is” - you really cannot tell whether it is falling apart or coming together.
That’s a nice way to put it.
I think that's where Wilde wanted to be as well. The only issue is, of course, how the hell do you design with that? To actually enjoy things like that - to be Ruskin looking at a cracking wall in Venice and admiring its beauty - is different from actually designing it, no?
That's my question to all shape grammarians, how do you actually design with it, so that it starts to topple over? I think for Wilde this was really the sense of beauty: that a thing has its own sort of demise built-in. Do you have an answer to that?
Well, let me say that I like Coleridge and I like Wilde.
I especially like Coleridge when he talks about imagination and he makes a distinction between fancy and imagination. Fancy is very much combinatorial design and imagination is when things fuse and re-divide. If you look at Coleridge on imagination, it sounds exactly like shape grammars. The reason I bring Coleridge in is that Wilde wouldn't have been possible without Coleridge.
Wilde has an aesthetic principle called “the aesthetic spirit” - and it's essentially ‘the critic as artist.’ The idea is to “see things as in themselves they really aren't,” and I think that one nice example of that is Dorian Gray. My take on Dorian Gray isn't so much that it's about death and decadence and dying and perversions - funny Victorian sensibilities about different social issues - but it's about change. I think that's what Wilde is all about: that things are never the same when you look at them again. A sense of beauty is not so much that it's blue velvet or porcelain or that wallpaper or whatever Wilde talked about in this humorous way, but it was in realizing that beauty is the symbol of symbols. By that, he meant that beauty was something that you could put anything into.
From a shape grammar point of view, that's especially interesting, because Wilde talks about a beautiful form as being something that you can put anything into. Then, you go more than a hundred years later, and listen to John von Neumann talk about calculating. He essentially says, “Well, you know there are limits to calculating,” and it ends up in the Rorschach test, which is just a picture that you can put anything you want into it.
And the shape grammarist says, “Wow, this is really cool, I get to put Wilde and von Neumann together and they’re shape grammars,” which are a generalization of standard, discrete, combinatorial, calculating, Turing machines.
Once you do that, you're off and running - you get Coleridge, you get Wilde, you get von Neumann - you get all the things that come up in design in really exciting ways.
But, it's not anything. You cannot put anything in it, there's a range of things.
Well, you can put anything in it that fits.
That means any of its pieces, any of its parts - and most design prevents you from doing that.
There's an example I like to use, there's this guy who started out as a computer scientist and then became a philosopher and then a Dean … I like to call that the slippery slope.
… You can imagine what the top is and what the bottom is - and it definitely is a slippery slope.
He gives this example from object-oriented programming - he says, “I take two squares and I put them together and I get a little shape, it looks like a rectangle.” And he says, “We don't know how to do that because two squares are two objects - and objects combine - and they keep their objectivity, they’re objects.” And, of course, the shape grammarist says, “Hell, we solved that problem 25 years before you even knew it was a problem. That's because shapes fuse and the minute they fuse, you can do what people like Coleridge say - you can re-divide them and that's where imagination is.”
Coleridge took the idea, I think, probably from Kant, who thought imagination was the organ of the soul. I like that. It is your soul - and to realize that you can handle that with calculating, with shape grammars, is terribly exciting.
So, I agree with you. I'm as enthusiastic about Wilde and Coleridge as you are. I think our takes are slightly different - I'm not so worried about the death and decadence, but I do like the change and seeing things as in themselves they really aren't.
That's why I was position-less to begin with.
Thomas, what do you think hearing both of your professors here?
I think that it is a fascinating and interesting discussion, but we are not getting to the bottom of it.
There is no bottom.
Well, as a general comment, again, as Marcelo said, this is fantastic work. I have been watching this work develop over the last three years. I think this is a breakthrough for you, to achieve what you guys did - congratulations!
One of the things that I think is really interesting, and especially to address some of the issues that the panel are talking about, is - figuring and shape.
Well, maybe what you are doing here is allowing the intuition of the designer to have access to a kind of computational world. So, you know, Lars’ figuring world - good designers typically aren't really that great in computing. Let’s just be honest, the best designers typically don’t write great code and the best coders typically aren’t the best designers. I mean, it is controversial, but it’s there.
What you've done here is you’ve allowed a kind of interface to the world of the folks on the right and to the world of the folks on the left - the Lars-ians of the world. And I think that is actually what George is talking about relative to imagination, right - and fantasy, because you are allowing that kind of thinking to have access to this world, which they may not have had before. That’s really exciting.
One thing that has come up a lot in our discussions the last couple of months during our presentations to the community, is the kind of possibilities that this technology enables. People often look at this work with disbelief, they say, “Well, we do have precision modelers, and parametric modelers, we have Grasshopper models that can handle 600,000 lines or more, and BIM models, and new workflows between them and energy models, and more. What’s up with this technology, why are you showing these examples with the squares? What is it with these elementary models of geometry?”
… It is perplexing. Perhaps, to continue the analogy I used earlier today on the word processor and the geometry processor, it would be useful to rethink the path from spoken language to writing, to typography, to word processing. Plato tells the story of Socrates complaining that writing would make us forget. The fact that we have writing does not mean that we have more Odysseys or Iliads around … Clearly, the invention of the technology of writing enabled a different kind of engagement with story-telling. But does this invention make us create better stories? Did the invention of the technology of typography make us better writers, so to speak, because we can type? Now, does the invention of word-processing make us better writers?
The ability of seeing and doing through the Shape Machine certainly suggests to me, or to us in the lab, new ways to interact with digital geometry. Will it make us better writers, better creators, better designers? I have some doubts about that, but that's really what we will try to do. That’s what we are here to discuss.
Thanos, from my own experience, I would like to contradict you a little bit about word processing as it relates to the quality of writing. My writing has absolutely improved with the advent of word processing.
In the old days, I wrote the first draft of my paper in longhand, gave it to the secretary, she typed it, I had one round of corrections, and then I had to turn it in. With my word processor, I'm not kidding you, sometimes I massage a sentence five, six, seven times. I hear it in my head until it sounds really right. So, to me, word processing is a wonderful tool for writers who believe in revising their own work - and I believe that similar tools, like shape grammars, that make it much easier to revise and re-revise and discover things that we did not see before, may have the same effect. I am not saying that they are word processors, but …
For me, the step was LaTeX. Moving from word processing to a plain text editor to LaTeX - that gave me the ability to revise. It gave me the ability to revise all aspects of my writing: the wording, the layout of the page - the tools to be a master of the form. So that's an appeal for an ever more symbolic canvas for the things that you do.
I want to point out, just in general, that this is a marvelous intellectual experience - and thank you for that.
We are all blind, or most of us are. We are blind because we keep on using plural nouns in these discussions. A derivation sequence is a container for a plural noun, for plural shapes. The language of the grammar is a plural, it’s a container for plural. We are resolute in coming up with single noun interfaces. We look at one shape and shape rules operating over one shape. And we saw references from Heather today, saying, “After I derived four by hand, I derived a bunch by computer, but I only ever saw the bunch together happening in the end.”
For me, a fundamental thing missing here is the ability to do this in parallel. The ability to apply one rule to multiple shapes at the same time, and to control that through the selection of the multiples you want to apply it to. To pass rule sets to multiple shapes, to take them away from some shapes - it's working with multiple designs. Because if you watch a designer with a sketchbook, that's what they're doing. It's not a single sequence of sketches; rather, it is a pattern of sketches, and when the next move comes, references might come from any of the sketches on the ten pages before it. So, we are missing multiplicity in our interfaces. You talked about that, Marcelo. There is something implied, where you were saying that there is a relation in that multiplicity, there is a history, there is a derivation - and I think that when we limit ourselves to that kind of thinking, it hides the real problem. The real problem is that we need interfaces that allow us, in very, very flexible ways, to deal with multiples of our results at the same time.
By the way, that’s what I did.
Along the same lines, just as an example - we’ve been working at Perkins and Will on automation tools for the space planning of hospitals. For the representations and the multiplicity in that particular case study, we were collaborating with Nirvik Saha, a Ph.D. student here. Great work - we still have issues with the corridor, by the way.
You have, let's say, the contour line of the site, then setbacks, then the shape of the building. So, the shape of the building could be either literally the site setbacks, or any shape that fits in that new perimeter, right? That is one issue: this is one of the problems in one level of complexity. There is another level of complexity, and this, in that particular example of a hospital, is the distribution of each department: the doctors, the examination areas, the nurse's station, and so on - in the context of that plan. And then, a third layer of complexity is the distribution of the rooms themselves, right? And there are many of them, and they have certain rules that some of them like each other and they want to be together, and some of them don't want to be together. So, the adjacency is still a major issue. Actually, the adjacency can be applied at the corridor level but also at the room level. So, then you have at least three levels of complexity interacting, right?
From that notion of multiplicity, let's say, can we apply design rules on the three levels at the same time and deal with it? I mean, firing rules across the levels, because that questions the historical sequence of operations. So, to do this, we visually compute - we do that, right? But what about the repeating rules that happen all the time in design? So, I want this and that - can the machine deal with it? What do you have for me? Do we have, let's say, a nearest match somewhere? If you cannot do this, try that, for example.
I'm asking those questions from our current historical perspective, because nowadays lots of young designers are very familiar with conditional statements. People are trying to use machine learning to optimize and so on. There is a lot of exploration dealing with populations. So, once you have parametric modeling, the natural evil brother is parametric analysis, and applying all sorts of analyses to those populations. So, there are many interpretations. I see a lot of potential.
I want to go back to Ulrich's point, who correctly said that clearly this technology has enabled us to write better. But that was not exactly my point before, and I did not clarify it very well. I will have my second take on this - to say that composers, architects, designers, writers, thinkers always do an incredible job no matter the media at their disposal. Consider, for example, Beethoven’s manuscripts. You can see the struggle that he had composing imprinted in his manuscripts - writing his music, erasing, rewriting, and revising again and again. This, as opposed to Mozart's manuscripts, all pristine and impeccably written, without a typo, without a single edit. The point is that the specific technology - music notation, paper, and pencil - allowed Beethoven to write and edit his music - he didn't have any software, he was writing and erasing - and it is unclear, to me at least, how his music would have been had he not written and revised it on paper; Mozart’s presumably would be the same, even if he were using some other means to write it. And still, in both cases, masterpieces were written. So, the issue here is not really about individuals or geniuses, but rather about how a particular technology can raise the level of expressiveness in a medium, or enable better ways to discuss the work - and clearly for us academics, to see how we can better teach students. Do you see, Ulrich, what I’m trying to say here?
Geniuses you always have … It’s really about how do you raise the average level. I think that’s the aim of every studio.
Thank you for putting it better.
I think it's also, empirically, very plausible that the tools and medium of design influence the design itself. Without computers, the use of these fully freeform curves would not have been realizable. You could not have really represented them - and you could not have built them. That does not mean curved buildings are better or worse - they're just different. They're different because they were designed with different tools, and we have to accept that - I think that is exactly what is so exciting about it.
I'm pretty much convinced that the oral tradition, as represented by Homer, produced work that really differs fundamentally from work that was written down.
Oh yeah, that’s true.
I'm not an expert. But as far as I understand, fundamental components of Homer’s lines were “fillers” - standard attributes associated with the characters of the story, which the bard could recite by rote so that he had time to improvise the next line. When texts were written down, there was no longer a need for such devices. I think the same is true for music. I definitely believe it's true for architecture, which I know a little better.
That’s basically McLuhan's argument, right? Because he came to study with Havelock and Northrop Frye and all these people from studying oral culture - Walter Ong’s students, basically, and he was like reading the whole thing. Wasn’t it in the Gutenberg book? - Oral, and then, written, and printed?
You actually see the changes in the types of writing - it's not the same writing.
I also want to say one of the reasons that I love word processing is that I never learned to type with a typewriter.
Okay, other comments from our audience. Thomas, let’s come back to you - you have studied with both Lars and George, what do you make of all this?
Maybe visual computation and Lars’ approach to shape as agencies are two different ways of approaching the same thing. If it’s visual computing, I think the key question is, “Where do rules come from?” George would say, “Well, you do the thing you want - that's what it’s for, that’s why the formalism is so flexible.”
That's excellent, but I actually think it's not enough. If you look at the design literature, a lot of it is also just about constraining what you want to do. I think that's why we talk about concepts in architectural education, I would say - mainly to cut down the space of possible solutions.
Lars, on the other hand, is offering a method of how to make shapes do things. It's a methodology that suggests to you things you should do, things that you could do, things that could happen. In that sense, I would say it is more generative.
In the visual computation form, I can do everything - but I still need to do something. I would say in the shape grammar literature, there is not enough on that topic. Yes, you can do anything, but what should you do? - and that might come from what you can do. If, for example, you look at any of the rules of the John Portman house, it's not something that you would just write down. It's not something that comes to you.
It's much easier to look at the design and derive the rules, but the other way - to go forward, I think is more difficult, and it is there that more work needs to be done.
I just wanted to say one thing - that Alberti, in a little book On Sculpture, talked about where artistic ideas came from. You've suggested that they come from the artist's head or wherever they come from - that takes some thinking about, but Alberti was more practical. He said, "Well, you know, they come from clods of earth and tree trunks." And his whole idea was that you looked at a clod of earth and you saw a hamburger, or you looked at a tree trunk and you saw a picture of God or whatever it was - and you traced the outline, and you got a design.
Well the shape grammarist’s view, as to where designs come from, is that they come from looking at other buildings and other designs. And you pick this bit out, and you pick that bit out, and you embed things in ways that the original architect didn't know or hadn't thought about. And it makes for a very rich kind of interaction between designs and history - and designs over time, and how we make them, and how we do it.
It's also the kind of thing that someone like Harold Bloom talks about when he talks about his revisionary ratios in poetry. Poets have arguments with each other, not face to face, but in their poems and in poetry. And it's the same kind of thing: you look at something and you see it in a different way than the original author did it - and you either generalize it or you do something with it that leads to new poetry and new art.
And I think that's the way buildings work. I think that's the way paintings work. It's a really rich way of doing it. And it takes the burden of having to do things de novo, from absolute first principles that make no sense to anybody, and puts them in the culture, and puts them in the art, and puts them in the architecture, and puts them in the poetry - and that's where imagination comes in.
That's why Coleridge is important, because you can reconceive things - and that's where Wilde is important, in saying that you see things as, in themselves, they really aren't.
That's where the new art comes from. It's a nice way of thinking about originality and creativity that gets away from this lone genius, who operates in a vacuum and does stuff without knowing anything or seeing anything, I guess.
I just wanted to say that artists often pick out bits from other artists. I think an extremely important mechanism is misunderstanding.
Yeah, that’s right. It’s a basic principle.
It’s an art.
What is that? It's embedding.
I think Richard Meier can be very well understood as someone who misunderstood Le Corbusier!
It’s embedding. That's exactly what it’s all about.
I have a question about iterations.
When we teach in the studio, we ask students to produce one iteration per week. But there's always this discussion around how to describe a design iteration.
I was curious how you would describe it in the context of Shape Machine? And potentially how the panel could speculate about it, both for the sets of rules and for the designs?
I’m hesitant to answer that because we have not yet used the Shape Machine in a studio setting. That will be next year's goal. The work so far in courses was not as robust; we used it only for small exercises in the shape grammar class - very controlled experiments.
That worked very well. Iteration worked there because we had this series of successive experiments, where the requirement was to start from something X, produce something on the same thing, change something consciously, and produce something else, in three or four stages.
We did have that. But this was something very controlled, almost like a chemical biologist fooling around with things - not yet something at the level or the ambition of which you’re speaking.
Can I ask a question back? What is an iteration with a sketchbook?
That’s an open-ended question for me as well.
That’s the point: it’s an open-ended question here too.
I think it’s going back to the editing question, when do you stop producing something to look at it and edit? And these are not necessarily from encountering certain things, but when do you generate something enough - and then explore the same design space?
Wilde has a nice answer to that: it's when you get hungry and need to go have dinner.
I'm serious, that's exactly what he says.
The way we teach it, you can basically see after six weeks, all computer screens of students look like parking lots. They have hundreds of variations. I think variation is really the answer to iteration in the sense that you have to find something that's variable, then change the numbers in the variables, and then see how far you can push this.
It's basically a breeding technique, right? We often talk about breeding - it's like long-eared rabbits and short-eared rabbits - and you put those together, and then you put those together, and you put those together. Or tulips - I’m Dutch, so we have to talk about tulips - and then we get these variations. So iteration really means it's a stepwise generative procedure. You get rows and columns, matrices. You get first variation, and then you bring them together, and then you get the next row, and the next row. And then you see them be selected out.
You can do that - iteration - but, you do need a double mechanism.
Again, the argument is that you see both the form and the organization of the form. So you see what the parameters of the organization are, and then you can change the forms - and vary those.
I would say that the medium you are using deeply affects any of this.
So the sketchbook makes some things easy and some things quite difficult. It makes replication of a whole rather difficult, because it takes time. Whereas the parametric modeling system makes variations of the structure you've already built almost trivial, but makes us pause when we need to rebuild the structure.
Shape grammars have a very different logic - a very different sense of how they produce quick iteration. So the answer is tied up, somehow, in how every medium supports different affordances.
I was just wondering if, in this context with the Shape Machine where enumeration is now so easy, if iteration doesn't actually come in the designs themselves that come out of it, but in the way that we actually establish the rules to begin with? So iteration is in rules and not actually in results.
Well, it has to be.
It has to be, or else it can’t produce variation.
Well, variation is inherent to both systems, right?
Lars, when you work variation is inherent.
Sure, of course.
So to me, it seems iteration comes in with the rules that set up variation, not in what comes out itself.
Sure, but you need instantiations or else you can’t judge.
Well, that’s how you prove iteration.
You see me standing up, which means something here in terms of conventions. But the good news, or the bad news, is that we are not yet done - but we have a good thing coming up. We will wrap up the day at the School of Architecture at the Shape Machine Exhibition and the final reception.
Now it is 5:35 pm, and we need to go soon to the Cohen Gallery at the School of Architecture for the final event of the day, the closing of the Shape Machines exhibition sponsored by the Office of the Arts at Georgia Tech and the reception of the symposium sponsored by John Portman and Associates. The School is only 200 yards away but I suspect some people would like to stretch a bit. The reception starts at 6 o'clock and the caterers want us to be there promptly then so that they can take care of us. And we have alcohol too - so, it's all promising.
I hope you will enjoy the exhibition: it features projects foregrounding the underlying ideas of what you saw today, both on the Shape Machine and the Shape Signature, but with a different, evocative and playful twist. Just to give an idea of what you’ll see: an 80-foot continuous poster of the ShapeHaus project - it is but a small section of the 1-mile-long poster required to illustrate the complete catalog of the 317,000 possible shapes that consist of 4 lines! Clearly the Office of the Arts at Georgia Tech could not support such a project at the moment, but we still hope this small sample will help us make the case for plotting the complete catalog some other time around!
Altogether we will see four projects: The ShapeHaus: A pictorial enumeration of all 4-line shapes; PlayShape: An interactive construction of all 3-line shapes; AlphaShape: A card game based on the lattice of 3-line shapes in a triangular configuration; and ShapeAtlas: An encyclopedia of all shapes up to 4 lines in 496 volumes!
I wish Lionel March, my mentor at UCLA, could have seen this - he would have loved it. There is an interesting twist about it too. There are 496 books for a particular reason: it seems appropriate that the number of books needed to illustrate all the shapes we can make with four lines should itself be a perfect number - 496, the third perfect number!
It’s time to eat, Thanos.
Indeed it is! I invite you all to join us at the Shape Machines exhibition and reception, thank you!
IV. Postscript: Shape Machines Exhibition
One, two, three and four lines - fragments of visual thought, elementary shapes, grids, letters, symbols, notes, chords, all waiting to be combined and transformed in aesthetic or design inquiry. The work here explores these small visual worlds, brings them to light, and gazes on their familiarity or strangeness. In doing so, the project bridges design, cognition, and discrete math - from symmetry, permutations, combinatorics, Burnside’s lemma, Pólya’s theorem, and matroids, to shape grammars. These small worlds are all built one upon the other in a contrapuntal fashion, recalling Klee’s counterpoint with lines in his Thinking Eye and Johann Joseph Fux’s Gradus ad Parnassum, the pedagogical treatise for teaching the art of counterpoint - or line against line.
The fundamental building block here is the spatial relation between two lines. There are 8 in all: one for two lines lying parallel to one another on the same underlying construction line; one for two lines lying parallel to one another on two different construction lines; one for two lines that cross over one another; one for two lines that meet at a T-intersection; one for two lines that project a T-intersection; one for two lines that touch to make a V-figure; and two for two lines that project a V-figure.
The ways this counterpoint with lines unfolds are uncanny. We all can recognize simple shapes and relations effortlessly, regardless of their specific geometric characteristics. For example, quadrilaterals of all sorts - squares, rectangles, rhombi, kites, parallelograms, trapezoids or what-have-you. After all, all quadrilaterals consist of four lines (edges) connected in four points (vertices). Still, if someone asks the question about how many shapes can be made from four straight line segments (or four curves for that matter), the answer is not straightforward.
The calculations presented in this work are telling: a line has an unlimited number of parts when it is in no spatial relation with any other line, up to 5 parts when it is in a spatial relation with another line, 13 parts with 2 lines, 25 parts with 3 lines, 41 parts with 4 lines, and so forth, following the sequence of centered square figurate numbers and their corresponding powers in higher dimensions for more elaborate combinations of numbers of parts per line. The very same parts combine one with another to create a visually staggering world rising from the singularity of the 1-line shape, to the 8 2-line shapes, to the 519 3-line shapes (among which is the triangle and its parametric variations), and to the 317,065 4-line shapes (among which is the square and its parametric variations).
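As a minimal sketch (in Python, not part of the exhibition's Mathematica work), the centered square figurate numbers cited above - the bound on the number of parts of a line standing in spatial relation with n other lines - can be generated with a one-line formula; the function name is illustrative only:

```python
def centered_square(n: int) -> int:
    """n-th centered square figurate number: 2n(n+1) + 1.

    Per the text, this bounds the parts of a line in spatial
    relation with n other lines (5 for 1 line, 13 for 2, ...).
    """
    return 2 * n * (n + 1) + 1

# First five terms of the sequence
print([centered_square(n) for n in range(5)])  # → [1, 5, 13, 25, 41]
```

The 0th term, 1, corresponds to the undivided line.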
The algebraic calculations of the figure inventories of the lines are done in Mathematica. The calculation of the chirotopes for the underlying matroids is done by hand. The specification of the shapes is modeled in Grasshopper and visualized in Rhino. The project has been produced at the Shape Computation Lab at the School of Architecture, College of Design with help from faculty, graduate and undergraduate students of the School of Mathematics, College of Sciences. The Shape Machines are presented at the Cohen Gallery at the School of Architecture from March 26 - April 11.
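As a small aside on the 496-volume ShapeAtlas: the number-theoretic nicety behind the volume count - a perfect number equals the sum of its proper divisors - can be checked with a quick sketch (the function name here is illustrative, not from the project):

```python
def is_perfect(n: int) -> bool:
    """True if n equals the sum of its proper divisors (a perfect number)."""
    return n > 1 and sum(d for d in range(1, n) if n % d == 0) == n

# Perfect numbers below 500: 1+2+4+8+16+31+62+124+248 = 496
print([n for n in range(2, 500) if is_perfect(n)])  # → [6, 28, 496]
```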
A lattice of 3-line shapes based on a Delta configuration
A pictorial enumeration of all 4-line shapes
An interactive construction of all 3-line shapes
A 496-volume publication of all shapes up to 4 lines
Project: Athanassios Economou
GT Arts Proposal: Heather Ligler
Mathematica Shape Signature: Josephine Yu, Cvetelina Hill
Python Shape Signature Script: James Park
Matroid Rank 3 Chirotope Calculation: May Cai, Nicholas Liao
AlphaShape 3-line lattices: Cvetelina Hill
PlayShape Interface design: Tzu-Chieh Kurt Hong
PlayShape programming: Tzu-Chieh Kurt Hong
PlayShape Machine Design: Nunggu Ahn, Andressa Martinez
PlayShape Machine Construction: Nunggu Ahn
PlayShape Machine Shop Drawings: Nunggu Ahn
PlayShape Machine Fabrication: Benjamin Tasistro-Hart, Nunggu Ahn, Jake Tompkins
ShapeHaus Poster Graphic Design: James Park
ShapeHaus Poster Print: Nunggu Ahn, Carl Dilcher
ShapeHaus Poster Installation: Nunggu Ahn, Tzu-Chieh Kurt Hong, Wen Yi Vincent Chang, Heather Ligler
ShapeAtlas Layout Design: Heather Ligler
ShapeMachines Exhibition Poster: Heather Ligler
ShapeAtlas Digital Installation: Perry Minyard, Paul Cook, Jeff Langston
ShapeMachines Finissage: Carmen Wagster
School of Architecture, College of Design
School of Mathematics, College of Sciences
Georgia Tech Arts Council
Georgia Tech Office of the Arts