As the world’s cities continue to grow to accommodate an ever-increasing global population, land is becoming increasingly scarce and expensive. However, open space is vital in urban areas. Green areas and healthy vegetation help buffer rising temperatures, sustain biodiversity, and contribute to citizens’ mental and physical health. One solution to open space challenges in urban environments may be more vertical development.
Several approaches have already been explored with mixed results. For example, while visually appealing, green facades or living green walls can cause structural damage to buildings due to the effects of root penetration and moisture retention. “Green buildings,” where plants and vegetation are dispersed throughout a building on planted terraces, are another approach; however, they have led to pest infestations or plant diseases due to the wrong selection of species or inadequate care for the plants.
“Semiramis,” named after an early Assyrian female ruler, is a new approach to architectural installations. Recently inaugurated at the latest Tech Cluster in Zug, Switzerland, with design and construction coordinated by Gramazio Kohler Research, it realizes the idea of a vertical garden through a freestanding structure composed of pagoda-like pods. These pods, or platforms, are stacked in irregular shapes and arrangements, enabling rain and sunlight to reach each pod equally and allowing plants and small animals to thrive independently of human interaction.
Beyond the human/nature interaction, Gramazio Kohler Research wanted to explore the collaboration between machines and humans using current trends in computational design and digital manufacturing. This exploration has resulted in different computer methodologies facilitating and expanding each project stage, from the AI-augmented design to the robotic assembly of the pods’ panels.
One current trend is parametric design, which has revolutionized the field of computational design by making it far easier to survey candidate solutions for the available space. By manipulating the design parameters, architects can better understand how changes in geometry impact pre-defined performance attributes such as cost, material usage, and exposure to environmental conditions. However, since these attributes commonly constrain the final design choice, architects still need to understand the relation between the performance attributes and the geometries, i.e., the design parameters.
Parametric models, which map input design parameters to the associated performance attributes, are becoming increasingly complex. They are composed of numerous steps and arguments, and they harbor non-linear relations and intricate inter-dependencies. As a result, gaining the intuition needed to manipulate the inputs successfully is also becoming harder, and the manual tuning this requires often resembles blind guessing. Furthermore, designers can linger on one pre-conceived or discovered strategy, developing a bias toward one type of solution while many others remain unexplored.
Through “Semiramis”, the Swiss Data Science Center has been responsible for implementing an AI methodology to assist architects during design exploration. The main objective is to invert the design paradigm of a parametric model: given some specific performance values, the model provides a set of geometries that approximately fulfill them. As we can observe in Fig. 1 - specifically for the vertical gardens - the inputs required by the parametric model are the “constellations” (i.e., the set of active supports that define each platform and the radii around each of them), and the “performance attributes” that condition the design choice (surface per platform and total sun and rain occlusion). The latter are parameters required by the landscape architect, as they provide information on the pods’ exposure to sun and rain, which is essential when selecting species and planning the correct care of the plants.
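To make the forward direction concrete, the sketch below stands in for a parametric model: it maps design parameters (here, just the radii of stacked circular platforms) to performance attributes (surface per platform and a total occlusion score). The function name, the disc geometry, and the occlusion proxy are all illustrative assumptions, not the project’s actual model.

```python
import numpy as np

def parametric_model(radii):
    """Toy forward map from design parameters (platform radii, top to
    bottom implied by order) to performance attributes."""
    radii = np.asarray(radii, dtype=float)
    surfaces = np.pi * radii ** 2          # surface per platform
    # Crude occlusion proxy: each platform is shaded by the largest
    # platform stacked above it (platforms later in the list are higher).
    occluded = np.array(
        [surfaces[i + 1:].max(initial=0.0) for i in range(len(radii))]
    )
    total_occlusion = occluded.sum() / surfaces.sum()
    return surfaces, total_occlusion

surfaces, occ = parametric_model([2.0, 1.5, 1.0])
```

Inverting this map is the hard part: many constellations of radii can yield similar surfaces and occlusions, which is exactly why a generative model is useful.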
This methodology utilizes an auto-encoder: a neural network architecture characterized by a bottleneck that aims to find a lower-dimensional latent representation \((Z)\) of the input, as represented in Fig. 2. As the input must be reconstructed at the output, this latent representation, or embedding, needs to capture the structure of the data and encode it without significant information loss. Furthermore, given that each input geometry \((W)\) is associated with a set of performance attributes \((X)\), we additionally enforce the reconstruction of these attributes in the latent space. Once the model is successfully trained, the two main elements of the auto-encoder play essential roles:
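The architecture can be sketched as follows: an encoder compresses a geometry \(W\) into a latent \(Z\), a decoder reconstructs \(W\), and the training loss ties the first latent dimensions to the attributes \(X\). This is a minimal numpy sketch with random, untrained weights; the dimensions, single-layer maps, and loss weighting are illustrative assumptions, and a real implementation would train by gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: dw geometry parameters, dx performance
# attributes, dz latent units (the first dx of which are tied to X).
dw, dx, dz = 12, 3, 5

# Single-layer encoder/decoder with random (untrained) weights.
We = rng.normal(size=(dz, dw)) * 0.1
Wd = rng.normal(size=(dw, dz)) * 0.1

def encode(w):
    return np.tanh(We @ w)

def decode(z):
    return Wd @ z

def loss(w, x):
    z = encode(w)
    recon = np.mean((w - decode(z)) ** 2)  # reconstruct the geometry
    attr = np.mean((z[:dx] - x) ** 2)      # tie first latent dims to X
    return recon + attr

w = rng.normal(size=dw)    # a sample geometry
x = rng.uniform(size=dx)   # its performance attributes
total = loss(w, x)
```

Minimizing this combined loss over a training set is what forces the latent space to carry the performance attributes explicitly.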
The decoder enables the generation of new geometries \((W)\) based on requested values of performance attributes \((X)\). Considering that the inherent structure of the input data is captured in the latent representation \((Z)\), we can obtain new geometries \((W)\) by randomly sampling \((Z)\) conditioned on the requested \((X)\), and then passing the resulting vector through the trained decoder.
The encoder acts as a surrogate for the parametric model, allowing quick estimation of the performance values \((X)\) of newly designed geometries \((W)\), for example, to perform a post-selection of the best geometries.
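The two roles combine naturally into an inverse-design loop: sample latents conditioned on the requested attributes, decode them into geometries, then use the encoder as a surrogate to rank the candidates by how well they meet the request. The sketch below shows this loop with random stand-in weights; in practice the weights would come from the trained auto-encoder, and the dimensions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
dw, dx, dz = 12, 3, 5
# Stand-ins for trained weights (random here; learned in practice).
We = rng.normal(size=(dz, dw)) * 0.1
Wd = rng.normal(size=(dw, dz)) * 0.1
encode = lambda w: np.tanh(We @ w)
decode = lambda z: Wd @ z

def generate(x_req, n=50):
    """Sample latents conditioned on the requested attributes, decode
    them, and rank geometries by the surrogate's request error."""
    candidates = []
    for _ in range(n):
        z = rng.normal(size=dz)
        z[:dx] = x_req                      # condition on requested X
        w = decode(z)                       # candidate geometry
        x_hat = encode(w)[:dx]              # surrogate estimate of X
        err = np.abs(x_hat - x_req).mean()  # request error
        candidates.append((err, w))
    return sorted(candidates, key=lambda c: c[0])

best_err, best_w = generate(np.array([0.4, 0.2, 0.7]))[0]
```

Because both decoding and the surrogate evaluation are single forward passes, tens of candidates can be generated and ranked almost instantly.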
Once the auto-encoder model is fully trained, a set of tools and visualizations lets the designer interact with it. The intention here is not to provide a methodology that automatically outputs the most efficient solution, but rather a human-in-the-loop approach. Through this “conversation”, the user can gain a richer understanding of the design problem at hand, identify performance attribute requests that lead to unattainable geometries, and discover yet unthought-of solutions among those generated by the model. In short, the AI model is intended to augment human intuition.
Different strategies have been implemented to further ensure unconstrained interaction. The focus has been on the possibility of underspecifying the requested performance attributes. The designer can, for example, specify only a value of rain occlusion and find out which values of surface and sun occlusion best align with this request. Or they can enforce a small upper platform (for structural reasons) plus some value of the total surface, and the AI model will provide geometries where the areas of the remaining platforms are adapted while the upper platform stays fixed. Since the model needs less than five seconds to generate tens of geometries for a set of requested performance attributes, the implemented solution grants the user the freedom to intuitively and interactively explore a suite of solutions, deepening their understanding of the problem.
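Underspecification fits the conditional-sampling picture directly: only the latent dimensions corresponding to specified attributes are pinned, and the rest are sampled freely. A minimal sketch, where an unspecified attribute is marked with `NaN` (a representation chosen here for illustration, not necessarily the project’s):

```python
import numpy as np

rng = np.random.default_rng(2)
dx, dz = 3, 5
# Request only the first attribute (say, rain occlusion); leave the
# others unspecified and let sampling fill them in.
x_req = np.array([0.6, np.nan, np.nan])

def sample_latent(x_req):
    z = rng.normal(size=dz)
    mask = ~np.isnan(x_req)
    z[:dx][mask] = x_req[mask]  # pin only the specified attributes
    return z

z = sample_latent(x_req)
```

Decoding many such latents then reveals which values of the free attributes tend to co-occur with the pinned one.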
Additional tools have also been provided to enable a more comprehensive exploration of the solution space. In Fig. 3, we show the results of an inquiry based on the distribution of two performance attributes: rain occlusion and total area. The grey heatmap represents the distribution of these performance attributes over the training set. Even though the training set was generated by randomly sampling the design parameter space, the distribution clearly shows where total area and rain occlusion are better represented. Requests in that region lead to more precise geometries, as depicted by the black diamonds that closely approximate requests B, C, and E (red dots). But the user is free to explore a priori infeasible combinations of performance attributes, such as G. In this case, the AI model still provides designs, but pushes them toward attainable values of rain occlusion and total area, as the original request is unachievable: low rain occlusion (scattered platforms) and large total area (big platforms stacked vertically) are obviously contradictory requirements. More interestingly, the user can explore areas on the fringes of the distribution, as in A, D, and F. Surprisingly, the model can generate feasible and accurate solutions there, demonstrating that, even though these geometries are under-represented in the training set, they can correspond to perfectly possible realizations of the structure.
To carry out a more exhaustive exploration of the feasibility and accuracy of solutions, we can leverage the surrogate model to analyze the complete attribute space. As a measure of goodness, we compute the request error for each newly generated geometry: the difference between the requested value of \(X\) and the value estimated by the encoder, which in principle should approximate the requested performance. In Fig. 4, we present a heatmap depicting the mean request error in rain occlusion incurred when requesting different combinations of rain and sun occlusion. These two performance attributes are clearly correlated, leading to the highest error in the bottom right corner. But this plot can also help the designer discover areas that might intuitively seem unfeasible. For example, for a high rain occlusion of 64% and a much lower sun occlusion of 21.7%, we can still obtain geometries that satisfy these requirements with errors below 5%. Users can repeat this study for all the different combinations of parameters to decipher which areas are worth exploring, accelerating the exploration process.
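A heatmap like the one in Fig. 4 can be built by sweeping a grid of attribute requests and averaging the request error of the geometries generated for each. The sketch below uses random stand-in weights and a coarse grid; weights, dimensions, and grid resolution are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
dw, dx, dz = 12, 2, 5  # two attributes here: rain and sun occlusion
# Stand-ins for trained weights (random here; learned in practice).
We = rng.normal(size=(dz, dw)) * 0.1
Wd = rng.normal(size=(dw, dz)) * 0.1
encode = lambda w: np.tanh(We @ w)
decode = lambda z: Wd @ z

def mean_request_error(x_req, n=20):
    """Average |requested - surrogate estimate| over n generated geometries."""
    errs = []
    for _ in range(n):
        z = rng.normal(size=dz)
        z[:dx] = x_req
        errs.append(np.abs(encode(decode(z))[:dx] - x_req).mean())
    return np.mean(errs)

# Sweep a grid of (rain, sun) occlusion requests to build the heatmap.
grid = np.linspace(0.0, 1.0, 5)
heatmap = np.array(
    [[mean_request_error(np.array([r, s])) for s in grid] for r in grid]
)
```

Regions of the grid with low mean error are the ones worth exploring; high-error cells flag contradictory attribute combinations before any manual tuning is attempted.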
Using these visualization tools and an array of other means, the AI model can help the designer recognize the most suitable ranges for the performance attributes while ensuring they fit project-specific requirements. At this point, in traditional parametric modeling, choosing specific performance values would require the designer to tweak the design parameters, guided mainly by intuition and previous experience, until the desired performances are achieved. It is a laborious process, as illustrated in this video:
In contrast, the implemented methodology allows the decoder to be interrogated quickly: given some performance attributes, it generates any number of requested geometries while providing information on the accuracy of these designs, as exemplified in the following video:
The designer can now quickly move back and forth between the selection of interesting and feasible performance values and the qualitative evaluation of the generated geometries to incorporate other non-quantifiable criteria, such as aesthetics.
An architect without prior knowledge of the system used the implemented model during the early design stages. Throughout the process, she was impressed with its interactivity, its future potential, and its ability to generate models. By exploring the space of solutions and the accuracy of the generated geometries, she gained a better understanding of the problem, and she valued the variety and versatility of designs like those depicted in Fig. 5.
Based on the architect’s feedback, we concluded that by freeing designers from the need to manually fine-tune many different (but similarly performing) designs, the AI pipeline saved time and effort, allowing them to explore viable solutions more broadly before deciding upon one.
Following the final design selection, Gramazio Kohler’s project also involved automating many other tasks, such as the precise cutting of the wooden beams forming the structure, their assembly, and the use of industrial robotic arms to create the basic structure of the pods. Finally, each independent platform was assembled in their facilities, transported, and fully mounted at the Tech Cluster in Zug. We were present during the inauguration of this magnificent and revolutionary architectural installation, and it looks terrific!
Semiramis is just the first proof-of-concept for a far-reaching project we are carrying out in collaboration with Gramazio Kohler Research. However, we believe the same approach for generative design can be generalized and applied to all architecture, engineering, and construction projects where parametric models are used. To that end, the project’s next phase is to build a general toolbox that will allow designers to better understand their compositions and generate new designs conditioned on some performance requirements, augmenting the designer’s intuition.
Luis is originally from Spain, where he completed his bachelor’s degree in electrical engineering and his M.Sc. in signal theory and communications, both at the University of Seville. During his Ph.D., he focused on machine learning methods, specifically message-passing techniques for channel coding and Bayesian methods for channel equalization. He carried out his Ph.D. between the University of Seville and the University Carlos III in Madrid, also spending time at EPFL, Switzerland, and Bell Labs, USA, where he worked on advanced techniques for optical channel coding. After completing his Ph.D. in 2013, he moved to the Luxembourg Centre for Systems Biomedicine, where he shifted his interest to neuroscience, neuroimaging, and the life sciences, and to the application of machine learning techniques in these fields. During his four and a half years there as a postdoc, he worked as a data scientist on many different problems, encompassing topics such as microscopy image analysis, neuroimaging, and single-cell gene expression analysis. He joined the SDSC in April 2018.