December 3, 2007

Research Paper

0.proto-manifesto

The basic assumption is that if you continue to read this paper you will remain in the same environment (quite likely shifting your position just a little bit). The paper, together with the abstract, has ...... words, which means that the average reader (with English as a second language) is going to read it in .......h ...........m. If you are in a metro carriage or waiting for a plane, you will most likely have to leave before finishing, but if you are at home, for .......h ........m you are likely to enjoy the calmness of your living room or your studio.
Wikipedia gets re-edited ....... times an hour and there are approximately ....... active contributors. It means that it will get re-edited ..... times while you are reading. With time your house, which we will from now on call the tangible environment, will gain the ability of re-edition. The re-edition will take place based on an ethics similar to that of Wikipedia re-edition. Your current house will be a momentary outcome of the constant process of re-edition executed by collective intelligence.
Let us see if it could really happen.

1. Virtual and possible
Virtual is the condition with no predefined solutions. It is what one might do, but will not necessarily do under any given circumstances. …
2.1 tangible environment

The idea is to use physical objects (sometimes even tools we use every day) in order to interact with computers. This concept means that using technologies like computer vision, tracking devices, touch screens etc., the system is able to know how the user is manipulating a collection of physical objects, and is then able to translate these actions into events in the computer interface. There is a basic paper you should read in order to understand this concept; its author is one of the fathers of TUI (Hiroshi Ishii).

We see the physical space as the tangible environment. Architecture that is entirely physical is the tangible environment, but for architecture that is partially physical, the tangible environment is just one side of it. Maybe even one atom (the rest are the bits). The tangible environment augmented with technology is now called performative architecture.

2.2 performative

(ONL) In performative architecture it is performance that moves the attention away from the static object and towards a complex and dynamic plane of relations. It is an effect that transforms culture: architecture is becoming a cultural production. In performative architecture technology and culture are not separate elements; they become integrated and react to each other.


2.3 platform

The platform is the communication space between the real and the possible or virtual. The interface is the means of communication with a device capable of computing.
The final platform of communication is always the pre-designed software that processes the inputs. Different proposals vary depending on the source of the input.
When exhibited at the Centre Pompidou, Muscle NSA reacted to people walking by it. The exhibition visitors could also reprogram its configuration on a screen literally connected to it. Two of the input-gathering methods were applied: the tracing one (physical) and the on-the-site interface (software-interface based, but not enabled by the web).
In Decoi’s wall the singer who performs in front of it is an actuator at the same time. The platform receives the tracing input only.
“Negotiate my Boundary!” has different levels to which the idea of the performative is applied. One of them is the skin. The performance of the skin is pre-set for every user, depending on two factors. One of them is the visibility from the other side, the other is the ability to go through the skin. The skin opens and closes in real time depending on the user’s position. It is another example of the tracing input.
The other performative system in RAMTV’s proposal is “the genotype”. “What is referred to as one household is treated as something other than a single genotype, or even a combination of several other genotypes”. The momentary state of the genotype is an outcome of two factors: the pre-designed location (which depends on the site access, sun exposure etc.) and the “stock-exchange” negotiation process. The actual form of the genotype (and therefore a household) is an outcome of a carefully researched web-based community interface that enables mass customization by fulfilling individual needs. The input comes from the web interface, but the actual negotiation takes place only within the limits of the pre-set matrix of 1-meter cubes.

conclusion:
The negotiation process in the RAMTV design is an attempt to enable the bottom-up reconfiguration of the tangible environment. The negotiation borders are, however, pre-designed to fit in the 1-meter grid. They remain pre-designed. It turns out that contemporary performative architecture platforms, no matter whether the input comes from an on-the-site interface, a web interface or tracing, connect the real with the possible only.

3. How we became cyborgs

While reading “Me++” you have the feeling that every part of our everyday life (or rather every kind of network) is transforming from a static, “bulky” form towards a portable form attached to the body, or at least one very close to it. Both blocks of ice in a drink and communication became personal. Mitchell claims that through this process we witness the dawn of the dematerialization era, where information will be fully freed from place.
While wrapping our bodies with both tangible and virtual networks, we become cyborgs. It is an iteration of the post-human idea, but at this point we already understand that there is no need to “literally take carbon to zero”.
Architecture might be one of the disciplines that still remain in the realm of bulky, unprogrammable devices. Architecture, as we understand it today, remains in the realm of the pre-dematerialization era, and is therefore ill-adapted to the needs of contemporary cyborgs. Architecture needs to understand the cyborg condition and recreate the design process focusing on the possibilities enabled by the cyborg. Architecture needs an update.


4.2 Prosumerism and customer centricity

In the customer-centricity approach companies decide on the basics and customers get to choose the more detailed settings of the product they are planning to purchase. This kind of approach has already been introduced by many companies. The customization available on Dell’s or Apple’s websites, the famous Nike trainers that can change colors, the M&M platform for inscribing your girlfriend’s nickname on the candy and ONL’s Variomatic all bear the same quality of the possible only. Customers get to decide between already fully pre-designed products. The promise of mass customization is therefore limited only to the pre-designed solutions, which makes the array of available products more diverse than in a traditional producer-consumer relation, but still far away from a truly personally suited product.
In 2003 Don Tapscott coined the term prosumerism in order to describe how “the gap between producers and consumers is blurring”. Tapscott and Williams give Second Life as an example of a community where prosumerism is the leading market force. The participants create goods of any kind, using only their creativity, in order to sell them for Linden dollars, which are worth real money. An interesting issue is that the regulations, which need to arise when any kind of commerce enters the scene, are created mostly in a bottom-up manner, pretty much in the way the Wikipedia community removes the spoilers of its content from its ranks. Second Life not only enables, but demands the creativity of its users in order to maintain itself and grow, but it is a fully virtual environment.
Prosumerism is also possible with regard to tangibles. Customers not only answer a limited set of questions regarding the future product; they self-organize to create their own products. One example is how prosumers hacked the iPod. The iPod hack is available on the net, and once you install it the iPod’s functions become much wider – Podzilla can also be installed and the iPod becomes a pocket Linux environment. Companies, as usual when facing the new collective intelligence in action, can try to fight those attempts, or can use them for their own purposes.
If we can think about performative architecture as a code of behavior, then we can hack the code. If we hacked the ONL Muscle tower and reprogrammed it, we would act like prosumers. The question is whether Muscle would then bear the quality of negotiation between the real and the virtual, not only the possible.


here are some other thoughts on the subject:

Analysis of the existing modes of collective intelligence in action in the production of intangibles:
The information technology revolution has introduced many radically new contexts – social, economic or political. A question to be answered is whether it has introduced a new spatial context. “To find something out, or to get something done in the city, you now have a choice. You can navigate the brick-and-mortar half in a time-honoured way, or increasingly, you can switch to its electronic twin.” (Mitchell) But the electronic twin is not merely a mirrored condition of the physical. It provides new possibilities of communication, interaction and production. The swarm existing in the electronic twin is now being referred to as collective intelligence – an intelligence that is capable of introducing its own paradigmatic shift on the scale of the industrial age or the information technology revolution. It has already delivered new means of production, among which peer production and prosumerism are of particular interest to us.

peer producing:
Several examples prove that peer production works outside of the realm of software design, but Paul Duguid[1] points out that certain rules have to be fulfilled in order for it to work in a specific environment. The first rule indicated by Duguid is that “peer production projects constantly change. What is flawed today may be flawless tomorrow”. Wales himself provides an answer to that, claiming that content on Wikipedia goes through a “Darwinian process of evolution”. An article at Wikipedia is changed on average twenty times during its editing procedure. The quality does not rise linearly, but rather evolves in an evolutionary manner, as the article receives inputs from people with different backgrounds. Wales therefore sees the constant flux of information as a positive factor in shaping the final output.
It is the purest form of producing goods and services. In many communities the work is voluntary and nonmonetary. Wikipedia is an example of peer production where people may join the community and create an information base. People become a part of the community because they want to; they may decide to what extent they share their knowledge – self-selection. Big companies might learn how to use the potential of their workers – “People just self-select to do projects where they have expertise and interest”.
[1] Paul Duguid, “Limits of Self-Organisation: Peer Production and Laws of Quality”

About the Materiality:

Role Of Computers
It is a commonly held fallacy that a computer is a thing or a tool such as a hammer or a jigsaw. Many thinkers have clarified before that the computer is an environment which contains thousands of tools. A computer is a place where one finds all sorts of magical materials and means to play and produce.
Within a short space of time the computer has become a widely accepted feature of architecture, both in the design process and in the everyday operation of buildings, and we are constantly aware that the computer's introduction into architecture will eventually have far-reaching consequences. After all, the current revolution is not just about the computer as a tool but about its role and effect on the form of architecture and thinking.

The digital revolution is affecting not only the way we produce drawings, but also the way we think about architecture. Such expressionistic, neo-baroque forms would have been unthinkable without higher technology, which allows for customization at a massive scale.

The following are three different ways architects conceive a design work in soft materials:



1. Working with Solids completely bounded by planar surfaces.



Eisenman’s work, well until the last couple of years, has been concerned with solid geometry and transformations of solid geometry. His transformation techniques included subjecting solids to the logic of surface deformations such as Bezier curves. His Columbus Convention Center is a good example of this.

2. Polynomial Surfaces (Splines)


Clues to Gehry’s imagination (Fisher Center for the Performing Arts at Bard College, Annandale, USA, 2003) are evident in his recent sketches, where one finds a field of flowing and complexly interwoven curves with no definite boundary definition. Compare these to his earlier sketches, where the boundary conditions and tectonics are much more Euclidean and physical (for brevity, illustrations could not be included here). Gehry now thinks in splines. For his purposes, Gehry quite extensively uses CATIA’s surface modeling module.


3. Blobs (Isomorphic Polysurfaces)

Popularized by Greg Lynn (1998), blobs were originally developed for the study of complex molecules. The parameters that define blobs are such things as mutual gravity (weight), extent of influence (threshold) and form type (ellipsoid etcetera). At the level of imagination, these modalities of definition lead to works that are distinctly different from solids or surfaces. Greg Lynn’s work is imagined in Blobs but defined and constructed using Solids. His first built work, the Korean Presbyterian Church, is much more exciting as an idea than as a built reality from this viewpoint. However, it is perhaps the first to use and popularize the softerial, Blob. In a way, Blobs were his medium.


Imagining, defining and constructing with softerials becomes a definitely more exciting, rewarding and lucrative activity. In such a world, softerials play a greater role than does brick-and-mortar architecture. Once the difference between medium and building vanishes, the medium becomes the material out of which buildings are made. Solids, surfaces and blobs are three softerials that have begun to transform the way we imagine, define and build a world that really matters.


Entropic and organic structures by François Roche

François Roche was born in 1961 in Paris. He obtained his architecture degree from UPA n° 3 in Versailles in 1987. His partner Stéphanie Lavaux was born in 1966 in La Réunion, and she left the French National Fine Arts School (ENSBA) in 1990.
The exhibition catalogue explained that “The architecture of R&Sie François Roche/Stéphanie Lavaux is inseparable from the environment; one might speak of a kind of furtive architecture. In his projects François Roche attempts to refrain from radically modifying the territory, seeking a form of dialogue with it that is entropic and organic. He is currently undertaking a critical experiment with new morphing technologies to prompt architectural “scenarios” of cartographic distortion, substitution, and territorial mutations.”
Although R&Sie has not done much actual building until now, their ideas and influence have been of considerable importance.



1. “Terra Incognita”


One of their latest works is called “Terra Incognita”.
http://www.new-territories.com/terraincognita2.htm

HOW FLATNESS IS REVERSIBLE / Terra Incognita is an island which only recently appeared on the Antarctic continent, as a result of climatic global warming and the subsequent merging of ices. Here the ice-white albino penguin can also be found. Their difference of color produces their rejection from the penguin colony. In the icy winter wind, their loneliness becomes the main factor of their progressive freezing and their unpredictable death.
On their homepage they state that:
“Our installation unfolded as a series of surfaces over 200 m², made from honeycomb aluminium. Its form was generated using a parametric script, and was manufactured by controlling milling water jets through the computational script. The single surface was then stretched and sheared by counterweights consisting of different volumes of water that represent different volumes of melted Antarctic ice. Amidst this unstable balance stood a robotic albino penguin whose only gesture was the occasional blinking of an eyelid. The more the water in the counterweights evaporates in the climate conditions of the indoor museum, the more the artificial aluminium island re-defines its own flatness.

This process talks about the instability and indeterminism of biotopes. It talks about how the warming of the climate is deeply introducing conditions of uncertainty.
The Odyssey is a story in three steps:
- the report on the real situation of Antarctica
- the scanning of the territories emerging from the melting of the snow
- a re-development of the process of instability in two places / the Tate and the MAM”

Click here in order to see the movie done on this project:
http://www.new-territories.com/terra%20essai%208.mov




2. “I have heard about…”

“I have heard about…” is considered to be a flat, fat, urban growing experiment.
http://www.new-territories.com/I

R&Sie state that “I’ve heard about something that builds up only through multiple, heterogeneous and contradictory scenarios, something that rejects even the idea of a possible prediction about its form of growth or future typology.
Something shapeless grafted onto existing tissue, something that needs no vanishing point to justify itself but instead welcomes a quivering existence immersed in a real-time vibratory state, here and now.
Tangled, intertwined, it seems to be a city, or rather a fragment of a city.
Its inhabitants are immunized because they are both vectors and protectors of this complexity.
The multiplicity of its interwoven experiences and forms is matched by the apparent simplicity of its mechanisms.
The urban form no longer depends on the arbitrary decisions or control over its emergence exercised by a few, but rather the ensemble of its individual contingencies. It simultaneously subsumes premises, consequences and the ensemble of induced perturbations, in a ceaseless interaction. Its laws are consubstantial with the place itself, with no work of memory.
Many different stimuli have contributed to the emergence of “I’ve heard about,” and they are continually reloaded. Its existence is inextricably linked to the end of the grand narratives, the objective recognition of climatic changes, a suspicion of all morality (even ecological), to the vibration of social phenomena and the urgent need to renew the democratic mechanisms. Fiction is its reality principle: What you have before your eyes conforms to the truth of the urban condition of “I’ve heard about”.
What moral law or social contract could extract us from this reality, prevent us from living there or protect us from it? No, the residence protocol of “I’ve heard about” cannot cancel the risk of being in this world. The inhabitants draw sustenance from the present, with no time lag. The form of the territorial structure draws its sustenance directly from the present time.

Made of invaginations and knotted geometries, life forms are embedded within it. Its growth is artificial and synthetic, owing nothing to chaos and the formlessness of nature. It is based on very real processes that generate the raw materials and operating modes of its evolution.
The public sphere is everywhere, like a pulsating organism driven by postulates that are mutually contradictory and nonetheless true. The rumours and scenarios that carry the seeds of its future mutations negotiate with the vibratory time of new territories.
It is impossible to name all the elements “I’ve heard about” comprises or to perceive it in its totality, because it belongs to the many, the multitude. Only fragments can be extracted from it.
The world is terrifying when it’s intelligible, when it clings to some semblance of predictability, when it seeks to preserve a false coherence. In “I’ve heard about,” it is what is not there that defines it, that guarantees its readability, its social and territorial fragility and its indetermination.”

In order to see the movie done by R&Sie on this project click here:
http://www.new-territories.com/videos/film_robot/film_robot.htm


Emergent Form

1.1 Emergent Form: definition and purpose

Looking at the definition of emergent form, we can find it related to many different fields and sciences, from computation and design to biology and mathematics.

Emergent phenomena are the result of interactions between elements of a system over time, often being unexpected results of simple interactions between simple components. An emergent property or behavior is shown when a number of simple agents operate in an environment, forming complex behaviors as a system that the components themselves do not have. For instance, consider water (H2O): hydrogen (H) and oxygen (O) are extremely light gaseous substances at room temperature, while water, the effect of their combination, is a heavy liquid. Liquidity is therefore one of the emergent properties of the hydrogen/oxygen system. There is nothing about the property of liquidity – its wetness, hydraulic dynamics, Brownian motion, and potential for heat exchange – that can be predicted by examining the properties of either H or O.

When it comes to architecture, such processes are used to create forms based on structural pattern formation and emergent behavior. This way of production is part of a larger contemporary movement in architecture referred to by Detlef Mertins in 2004 as ‘Bioconstructivism’, where biology, mathematics, and engineering combine to produce an architecture characterized by its variability and performance. This is not something unknown in nature, of course. Nature is filled with variation and complexity that architecture has only started to explore. There are differences between architecture and biology, as in nature it is all about iteration, mutation, and feedback through fitness testing in order to produce both elegant and durable species and formations, but it is exactly the study of the process of random mutation and natural selection in nature that provides a model for how a dynamic feedback between excesses and efficiencies can create innovation and elegance in the design process.
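As a toy illustration of that mutation-and-selection feedback (a minimal sketch with an invented fitness target, not any of the tools named below), the loop can be written in a few lines of Python:

import random

# Toy mutation-and-selection loop: candidate "designs" are just lists of
# numbers, and fitness rewards closeness to an assumed target profile.

TARGET = [0.2, 0.8, 0.5, 0.9]   # invented target, purely for illustration

def fitness(candidate):
    # Higher is better: negative squared distance to the target profile.
    return -sum((c - t) ** 2 for c, t in zip(candidate, TARGET))

def mutate(candidate, amount=0.1):
    # Return a slightly perturbed copy of a candidate (random mutation).
    return [c + random.uniform(-amount, amount) for c in candidate]

population = [[random.random() for _ in TARGET] for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)                  # selection
    survivors = population[:10]
    offspring = [mutate(random.choice(survivors)) for _ in range(10)]
    population = survivors + offspring                          # feedback loop

best = max(population, key=fitness)
print("best candidate:", [round(x, 2) for x in best])

Each generation keeps the fitter half and breeds mutated copies of it, which is the "excess and efficiency" feedback described above in its barest form.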

Another part of this research is called Biomimicry, also known as Bionics. By that we mean the application of methods and systems found in nature to the study and design of engineering systems and modern technology. The transfer of technology between life forms and synthetic constructs is desirable because evolutionary pressure typically forces natural systems to become highly efficient as well as formally elegant. Biomimetics can be relevant to architecture in terms of design, systems, and processes and can refer to both morphological and behavioral characteristics.

1.2 Tools

These kinds of generative processes are carried out through the use of evolutionary algorithms. The regeneration of complex formal and behavioral patterns exhibited by organisms in the laboratory has been enabled by non-linear dynamics and computation, using both generative and analytical algorithms and design techniques. Such algorithms have been applied to an ever-increasing variety of design domains, in which they have achieved human-competitive results on small design problems. In order to improve the applicability of such systems, fundamental research must be undertaken to discover how to construct increasingly sophisticated designs. Many different design tools and pieces of software have been developed during the past years in aid of this study. A great proportion of them are the outcome of research carried out in the Emergent Design Group at MIT, such as GENR8, Weaver, Agency, germZ and Moss. One can also find the widely used TopSolid and Generative Components, and others such as RhinoScript, MAXScript, MEL scripting, Perl and Processing.

One of the most commonly used pieces of software is GENR8. GENR8 is a plug-in for Alias|Wavefront's 3D design tool Maya and it was developed by the Emergent Design Group at MIT in 2001. The Emergent Design Group was an interdisciplinary group that developed new ideas in architecture by bringing together researchers in Artificial Intelligence and architects. The purpose of this innovative surface design tool was to provide architects with access to creative surface design by giving them influence over generative processes. As they explain, a generative process is the activity of iteratively executing some encoding that creates and then modifies an artifact. So, when creating this tool they chose the aspect that was most intriguing and of use to architects, which is modeling cellular growth interacting with an environment. GENR8 is a design tool that combines several powerful growth languages with evolutionary search. The software combines 3D map L-systems, extended to an abstract physical environment, with Grammatical Evolution. Evolutionary Algorithms (EA) typically adapt 'on-line', but GENR8 is designed to accommodate the back-and-forth exchange of control between user and tool during on-line evolutionary adaptation. Users may interrupt, intervene and then resume GENR8. This allows for interactive design evaluation and computational multi-criteria search. The investigative software is written in C++ as a plug-in to Alias|Wavefront Maya. The technical power beneath GENR8 has more than one component: evolutionary search and HEMLS (Hemberg Extended Map L-Systems). A HEMLS, the generative process, is interpreted by GENR8 to generate a surface. GENR8 uses evolutionary search to discover its own HEMLS that adaptively evolve towards surfaces with features the user has specified.
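GENR8 itself is a C++ Maya plug-in and its map L-systems are far richer, but the core idea of a growth grammar can be sketched with a minimal, purely illustrative deterministic L-system rewriter in Python (the rules below are invented, not GENR8's actual grammar):

def rewrite(axiom, rules, generations):
    # Apply the production rules to every symbol, once per generation.
    state = axiom
    for _ in range(generations):
        state = "".join(rules.get(symbol, symbol) for symbol in state)
    return state

# Hypothetical turtle-style alphabet: F = grow forward, + / - = turn, [ ] = branch.
rules = {"F": "F[+F]F[-F]F"}
for generation in range(3):
    print(generation, rewrite("F", rules, generation))

Each rewriting pass makes the string (and the branching structure it encodes) grow; in GENR8 it is an evolutionary search, not the user, that breeds the grammar towards surfaces with the desired features.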

MEL scripting, on the other hand, uses the Maya Embedded Language, a scripting language used to simplify tasks in Autodesk's 3D graphics software Maya. Through MEL one can achieve most tasks that can be done through Maya's GUI, as well as certain others that the GUI doesn't offer. MEL gives the opportunity to accelerate complicated or repetitive tasks and it also allows users to share a specific set of commands with other users.


2.1 Emergent Form Practitioners

The people behind much of this research are not only researching form itself but all of the social, economic and environmentally conscious branches of emergent culture that are part of what is emerging from the newly possible fields of practice in engineering, building technology and construction. Peter Testa, Tom Wiscombe and Martin Hemberg are just some of the architects/engineers/computer programmers/economists/etc. who are starting to drive this part of architecture that is being called emergent.

Peter Testa is a researcher in the field of emergent design, not only as part of the Emergent Design Group but as a practicing architect. Currently he is working on a carbon-skin, solid-state tower that would be the lightest and strongest building of its type. This building is currently more of a design theory than an actual design proposal, but it potentially has the ability to create a shift in the way that building technology and material manufacturing are currently recognized and used. He is proposing that this building could be completely manufactured on site with essentially the use of two materials and the robots to actually do the construction. This process would entail the use of wood, which is readily available from renewable forests, and carbon fiber, which would be quite expensive (but less so in this process); he claims that the high costs of carbon fiber would be offset enough by the use of wood and the manufacturing process to make this project not only feasible but cheap in comparison to a similar building constructed using traditional building techniques. This process is described as having robots on site which essentially lay in wood and weave/cure an interwoven surface of carbon fiber over the wood, similar to the way that a hockey stick or a ladder is made. This practice would be cheap, efficient and most of all environmentally friendly.

Martin Hemberg is a bioengineer and scientist interested in the organization of biological systems. He has recently published a paper on the properties of stochastic genetic oscillators, in which the chemical master equation is used as a starting point in an investigation into the difference in a time series between the chemical master equation and the stochastic differential equation. Martin Hemberg is essentially the architect who designed the program GENR8 as his master's thesis at Imperial College London in association with MIT and the Architectural Association. This program is based on the ideals of evolving plant structures and behaves as if it were growing a surface that strives for optimization and efficiency, similar to a plant. This type of thinking and this type of influence have rarely been seen in the past as part of the architecture and building community. This view on architecture has the potential, and seems to be heading in the direction, of creating the possibility for super-efficient, strong and lightweight building skins. Add to that the idea that this type of system will not only be seen in radical and rare buildings but in an increasing number of buildings that are more along the lines of an average project.

Tom Wiscombe in a way is combining the ideas of both Hemberg and Testa and implementing them in a more technical, engineering-based approach which utilizes the type of research that Hemberg does with an implementation more similar to that of Testa. He has developed a computer program that can look at the various forces acting on a system and can reduce the system to the most efficient yet stable system possible. This results in unique, unconventional systems that would be impossible to produce without the advent of computer-aided manufacturing. He bases his theory around the idea of emergence itself (the name of his firm as well as the name of his theory): that with the sum of many parts there can be a whole that would otherwise have been impossible. This theory also revolves around the idea of bioengineering and the discovery of the mathematical systems put in place that are able to create the forms that are only found in nature (for now).

ADDITIONAL REFERENCES

http://projects.csail.mit.edu/emergentDesign/genr8/index.html

http://projects.csail.mit.edu/emergentDesign/genr8/hemberg_chap8.pdf



FORMS AND TECTONICS OF CELLULAR AGGREGATION

“The idea that innovation, whether scientific, technological or architectural, is a by-product of artistic chance or a result of singular genius is anachronistic. Complexity theory reveals that innovation – the creation of the new – is the direct result of bottom-up evolutionary processes. Architecture is just beginning to engage the concept,” says architect Tom Wiscombe, whose practice (http://www.emergentarchitecture.com/) essentially engages the idea of evolutionary or emergent design. Tom, along with Peter Testa and Marcelo Spina, instructed the spring studio 06 at SCI-Arc, whose title was ‘On Forms and Tectonics of Cellular Aggregation’; the objective of the studio was to explore the generative procedures, tectonic and spatial qualities, and constructive and assembly processes necessary to produce and deploy emergent forms of cellular aggregation.

Taking this as our starting point, we would like to conduct the research under the same title but in a theoretical rather than an applied way. The objective of our research will be finding ‘Cellular Aggregations’ that exist in nature and representing their forms and tectonics by means of diagrams – for example ‘Slime Mould’, which shows characteristics of both animal-like and plant-like behavioral patterns. The research can therefore involve the morphological study of the cellular patterns of the slime mould with regard to its physiological changes. However, the aim of the research will not be biological accuracy but diagrammatically representing the flexibility, efficiency and robustness present in these systems.
The research will do a simultaneous reading between two aspects: the first dealing with the nature and understanding of emergent practice in architecture, and the second essentially dealing with the study and analysis of a few cellular aggregations present in nature. With this research we are hoping to find some principles which can be algorithmically coded to arrive at some surprising end results.






PART I (TO BE ELABORATED IN MORE DETAIL)

EMERGENCE: THE IDEA OF EMERGENCE - EMERGENCE TALKS ABOUT THE COLLECTIVE BEHAVIOR OF A SYSTEM OF ORGANIZATION WHERE THE WHOLE IS DIFFERENT FROM THE PARTS AND EXHIBITS BEHAVIORS AND PROPERTIES WHICH ARE NOT PREDICTABLE BY THE OBSERVATION OF THE PARTS. EMERGENCE EXISTS IN NATURE IN MACRO AS WELL AS IN MICRO BIOLOGICAL SYSTEMS; FOR EXAMPLE ANT COLONIES, FLOCKING OF BIRDS, BEE SWARMING OR SLIME MOULD SHOW EMERGENT BEHAVIOR.
STEVEN JOHNSON IN HIS BOOK EMERGENCE TALKS ABOUT 5 SIMPLE RULES WHICH AN ANT FOLLOWS IN SWARM BEHAVIOR:

1. MORE IS DIFFERENT
2. IGNORANCE IS USEFUL
3. ENCOURAGE RANDOM ENCOUNTERS
4. LOOK FOR PATTERNS IN THE SIGNS
5. PAY ATTENTION TO YOUR NEIGHBOURS

THIS SHOWS THAT EMERGENT SYSTEMS ESSENTIALLY INCORPORATE PRINCIPLES OF SELF-ORGANIZATION, STIGMERGY, FLEXIBILITY AND ROBUSTNESS.
ARCHITECTURE HAS JUST STARTED UNDERSTANDING THE IDEA OF EMERGENCE, WITH ITS BOTTOM-UP APPROACH AS AN ALTERNATIVE TO THE TOP-DOWN APPROACH. THE BOTTOM-UP APPROACH INVOLVES CELLULAR ORGANIZATION WHERE EACH CELL CAN BE CONNECTED TO THE NEIGHBOURING CELL BY MEANS OF MATHEMATICAL PRINCIPLES DERIVED FROM THE LOGICS OF BIOLOGICAL SYSTEMS. THIS INDUCES FLEXIBILITY WITHIN THE ORGANIZATION. WHAT DO THE IDEAS OF FLEXIBILITY AND ROBUSTNESS MEAN TO ARCHITECTURE? EACH ELEMENT HAS THE POSSIBILITY TO ADAPT TO CHANGES IN THE NEIGHBOURHOOD AND RECONSTRUCT ITSELF. OBJECTS ARE LINKED WITH OTHER OBJECTS BY RELATIONS. THESE RELATIONS CAN BE LOGICAL OR GEOMETRICAL. THE SELF-ASSEMBLY OF CELLULAR AGGREGATION CAN PLAY AN IMPORTANT ROLE IN ARTIFICIAL SYSTEMS, FOR EXAMPLE ARTIFICIAL RESOURCE DISTRIBUTION NETWORKS SUCH AS A TRANSPORTATION SYSTEM OR A UTILITY GRID, WHICH AS OF NOW DEPEND UPON A TOP-DOWN DESIGN PARADIGM LAID OUT BY THE BEST ANALYTIC METHOD AVAILABLE. MODIFICATIONS ARE MADE IN AN AD-HOC MANNER. A TOP-DOWN MANAGEMENT APPROACH TO A COMPLEX SYSTEM BECOMES MEANINGLESS. SO IT BECOMES IMPORTANT TO STUDY THE IDEA OF EMERGENCE AND THEREFORE CELLULAR AGGREGATION, AND THE MORPHOLOGICAL PRINCIPLES BEHIND THEM.
ONE EXAMPLE IS OF SLIME MOULD.
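As a loose toy illustration of these bottom-up rules (a sketch with invented parameters, not a model of slime mould or any particular organism), a few agents that only deposit a trace and follow the strongest trace among their immediate neighbours will cluster with no global plan:

import random

# Toy stigmergy sketch: agents deposit a trace on a grid and preferentially
# move to the neighbouring cell with the strongest trace. The clustering that
# appears is written into no single agent's rule.

SIZE, AGENTS, STEPS, EVAPORATION = 20, 30, 200, 0.95

grid = [[0.0] * SIZE for _ in range(SIZE)]
agents = [(random.randrange(SIZE), random.randrange(SIZE)) for _ in range(AGENTS)]

def neighbours(x, y):
    # The eight cells around (x, y), wrapping at the edges of the grid.
    return [((x + dx) % SIZE, (y + dy) % SIZE)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

for _ in range(STEPS):
    for i, (x, y) in enumerate(agents):
        grid[x][y] += 1.0                                # deposit a trace
        options = neighbours(x, y)
        if random.random() < 0.1:
            agents[i] = random.choice(options)           # occasional random move
        else:
            agents[i] = max(options, key=lambda c: grid[c[0]][c[1]])
    grid = [[cell * EVAPORATION for cell in row] for row in grid]   # evaporation

print(AGENTS, "agents now occupy", len(set(agents)), "distinct cells")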





PART II (EXAMPLES OF CELLULAR AGGREGATION: SLIME MOULD, LICHEN / EXAMPLE OF CELLULAR GROWTH: MOLLUSCAN SHELL)

SLIME MOULD: DESCRIBING SLIME MOULD BY ITS MORPHOGENESIS AND DETAILING OUT THE PARADIGMS BEHIND ITS SWARM BEHAVIOR.
HOW CAN IT BE USED IN DIFFERENT FIELDS? A GOOD EXAMPLE IS SLIMEBOT (ROBOTICS)
ANOTHER EXAMPLE IS ‘LICHEN’: COLONIES OF SYMBIOTIC ALGAE AND FUNGI WHICH GROW ON ROCKS, FORMING INTERESTING AND INTRICATE GROWTH PATTERNS. THEIR MORPHOGENESIS CAN BE STUDIED BASED ON THE AVAILABLE LITERATURE.



IMAGE OF LICHEN ON ROCK FORMATIONS

History Unwired





Developed in 2005, this project was a first-ever mix of mobile video, animation, audio, and Bluetooth locative technologies in the tourism sector. The tour takes visitors around the neighbourhood of Castello, guided by the voices of Venetian citizens who depict a particularly local experience of art and craft, history and folklore, public and private spaces.

Their philosophy is to develop "content-driven technology". That is, instead of creating filler for new technology, they are developing innovative stories and adapting the technology to those stories. Thus they have formed a relationship with Dell, Motorola and MIT Media Lab that allows them to develop software and features in Smartphones that arise from storytelling needs and human interaction with mobile media.

● Bluetooth: they are using the location-sensing ability of Bluetooth beacons to trigger interactive art along the course and to reward prudent exploration of private spaces. The plot, path, and tone of the content evolve according to the individual’s footsteps (a rough sketch of this triggering idea follows this list).

● AGPS (Assisted Global Positioning System): they have developed several location-specific "media clouds" along the bustling Via Garibaldi, a key point in the tale of Castello's evolution after World War II. AGPS, available on 3G phone networks, can sense the general location of the walker and load a sound-video collage as they move down the street. (This feature was only modelled for the 2005 experiment.)

● Flash Video: they are using the sophisticated web browsers on these phones to display Flash content and seamlessly link to video content.

● Thermochromics: they have installed two interactive art pieces along the course that activate in the presence of Bluetooth. Panels in the facade of an abandoned greenhouse have been covered with black thermochromic ink. When walkers pass by the greenhouse, a circuit is tripped by the Bluetooth in their devices and the panels are heated to reveal a growing plant form in the facade. They also coated hanging laundry with thermochromic ink, and wires in the laundry heat up to reveal the outline of the landscape you are viewing.
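A rough sketch of the beacon-triggering idea described above (all names and addresses below are invented; the actual History Unwired software is not reproduced here):

# Hypothetical sketch of beacon-triggered content: when a walker's phone
# detects a known Bluetooth beacon, the matching clip or art piece is activated.

KNOWN_BEACONS = {
    "00:11:22:33:44:55": "greenhouse_plant_animation",
    "66:77:88:99:AA:BB": "via_garibaldi_sound_collage",
}

def on_scan(discovered_addresses, play):
    # Call `play` for every discovered beacon that has content assigned to it.
    for address in discovered_addresses:
        clip = KNOWN_BEACONS.get(address)
        if clip is not None:
            play(clip)

# Example: a scan returns one known and one unknown device.
on_scan(["00:11:22:33:44:55", "CC:DD:EE:FF:00:11"], play=print)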

Venice has a lot of “surface tension.” The place looks like a movie set and many tourists marvel at the rich history and intimate details of its monuments. However, overcrowding has taken some of the magic from Venice’s main monuments (500,000 tourists in 1960 and approximately 14 million tourists in 2003). History Unwired will use mapping and multimedia as a Trojan horse to give some depth to these wonders and lead tourists to encounters with unexplored monuments, historical figures, and neighbourhoods.








December 2, 2007

We found different examples in which we can detect the fusion of translated signals applied in architecture. Different kinds of sensors convert a signal from one form of energy to another, from one form of information to another, enabling the whole system to become interactive.

Son-O-House
The Son-O-House is a public pavilion that is both an architectural and a sound installation, allowing people not just to hear sound in a musical structure, but also to participate in the composition of the sound… This permanent installation creates an interaction between the sound, the architecture and the visitors. To create the specific experience the architects collaborated with Edwin van der Heide, an artist who continuously experiments with sound, exploring the creation of interacting and learning environments. The particular system of sounds is based on moiré effects of interference of closely related frequencies. 23 sensors are positioned at strategic spots to indirectly influence the music. These sensors detect the presence, activity and the approximate location of the visitors. This information is analyzed and quantified in a growing database and is used to control the nature of the sound. Therefore the visitors are challenged to re-interpret their relationship with the environment. As a visitor one does not influence the sound directly, as is so often the case; one influences the real-time composition itself that generates the sounds. In this way the sound environment of the Son-O-House is in continuous evolution. The score is an evolutionary memoryscape that develops with the traced behavior of the actual bodies in the space.
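As a rough sketch of the indirect-influence idea (a toy model, not Edwin van der Heide's actual composition system; all parameters are invented), sensor activations could be accumulated in a growing database and mapped to composition parameters like this:

import random

# Toy sketch: 23 sensors register presence; the accumulated history, not any
# single visitor, drives hypothetical parameters of the real-time composition.

NUM_SENSORS = 23
activity_history = [0] * NUM_SENSORS      # grows over time as visitors move around

def register_visit(sensor_id):
    # A sensor has detected presence near its location.
    activity_history[sensor_id] += 1

def composition_parameters():
    # Map accumulated activity to invented sound parameters.
    total = sum(activity_history) or 1
    weights = [count / total for count in activity_history]
    return {
        "base_frequency_hz": 110 + 440 * max(weights),    # busiest spot raises the pitch
        "density": sum(1 for w in weights if w > 0.05),   # how many zones are "alive"
    }

for _ in range(200):                        # simulate visitors wandering for a while
    register_visit(random.randrange(NUM_SENSORS))
print(composition_parameters())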
Galleria West Shopping Centre in Seoul, Korea - 2002-2004
Designed by Ben van Berkel from UN Studio architects and Arup Lighting, the Galleria West has a perpetually changing, light-reactive and computer-programmable facade that behaves like a giant video screen.
The shopping centre’s façade works like a large low-res television, with each LED fixture acting as one pixel. It is the control system that converts and transmits data to the 40,000-square-foot screen that most sets this project apart. “This is the first time the user doesn't need lighting programming skills,” explains Van der Heide. “You can create animations using any software that you are comfortable with, and just upload it to a server. Once the data on the server is converted into a proprietary protocol based on TCP/IP, it then travels over 32 DMX lines (or universes), which control 512 channels each, to deliver the many commands that 'dress' the façade. The system can also be connected to and programmed wirelessly from a laptop on the street, for example.” [1]
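As a rough sketch of the addressing arithmetic implied by those figures (the real protocol and channel layout are proprietary; the per-fixture channel count below is an assumption), each LED “pixel” can be mapped to a DMX universe and channel:

# Rough sketch of DMX-style addressing (hypothetical layout): with 32 universes
# of 512 channels each and an assumed 3 channels (R, G, B) per LED fixture, a
# frame of pixel colours can be flattened into per-universe channel values.

CHANNELS_PER_UNIVERSE = 512
CHANNELS_PER_FIXTURE = 3   # assumed RGB; the real fixtures may differ

def fixture_address(fixture_index):
    # Return (universe, first_channel) for a fixture, counting from zero.
    absolute = fixture_index * CHANNELS_PER_FIXTURE
    return absolute // CHANNELS_PER_UNIVERSE, absolute % CHANNELS_PER_UNIVERSE

def frame_to_universes(frame_rgb):
    # Scatter a list of (r, g, b) tuples into {universe: {channel: value}}.
    universes = {}
    for i, colour in enumerate(frame_rgb):
        universe, channel = fixture_address(i)
        slots = universes.setdefault(universe, {})
        for offset, value in enumerate(colour):
            slots[channel + offset] = value
    return universes

# Example: 4330 fixtures at full white span 26 of the 32 available universes.
frame = [(255, 255, 255)] * 4330
print(len(frame_to_universes(frame)))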
The facade is made up of 4330 glass discs, each 850 mm in diameter, treated with a special iridescent foil which causes constant changes. The discs are programmed to generate up to 16 million colors, showing astounding displays in every imaginable shade. Beneath each one is a polyester dichroic light filter creating a range of colors that change depending on the position of the sun. The filter separates the different wavelengths of light, absorbing some and reflecting others depending on the angle at which the light hits it. Also fitted behind the glass discs are LED fittings that come into their own at night and are programmed to emit a sequence of colors and patterns between sunset and sunrise.
The building makes a complete transformation during the day and evening. The colour of the façade changes, depending on the position of the sun and the viewing position. During the day, it reflects the subtleties of natural light on the dichroic glass discs. At other times the building can even become a giant billboard, its pixels feeding text or images around the entire external structure. At night a special lighting scheme illuminates the discs by reflecting the dynamics of the weather conditions that happened during the day.

references:
http://www.unstudio.com/projects/year/2004/1/141
http://www.bdonline.co.uk/story.asp?sectioncode=453&storycode=3051692
http://www.arup.com/netherlands/newsitem.cfm?pageid=6693
http://www.archlighting.com/industry-news.asp?sectionID=1312&articleID=454081

[1] http://www.bdonline.co.uk/story.asp?sectioncode=453&storycode=3051692

Smart floors
One of the surfaces that humans naturally interact with most is the floor. Gravity keeps us in direct contact with it, which is why Robert J. Orr and Gregory D. Abowd asked why not make it an interactive surface, or "smart floor". Their main goals were two of the most important ones in ubiquitous computing: identifying and locating a user.
The way this smart floor works is by having measurement cells located at each corner of a floor tile, with the tile resting on them. These systems working together can measure the force of the user's foot (the ground reaction force, GRF) as the user walks around the space, and this information can be stored in a system through a network as a signature of each individual user, so that the user can be recognized by the system just by matching the information. Once the system recognizes the user, tracking this individual becomes really easy.
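A minimal sketch of the signature-matching idea (the feature set and numbers below are invented; Orr and Abowd's actual GRF features are richer):

# Toy footstep-signature matching: each user is represented by a stored
# ground-reaction-force profile, and a new step is attributed to the closest one.

def distance(profile_a, profile_b):
    # Simple Euclidean distance between two equally sampled GRF profiles.
    return sum((a - b) ** 2 for a, b in zip(profile_a, profile_b)) ** 0.5

def identify(step_profile, known_profiles):
    # Return the name of the user whose stored profile is closest to this step.
    return min(known_profiles, key=lambda name: distance(step_profile, known_profiles[name]))

# Hypothetical stored signatures (force samples over the duration of a step).
known = {
    "ana":   [0.2, 0.9, 0.6, 0.8, 0.3],
    "marco": [0.4, 1.1, 0.5, 1.0, 0.2],
}
print(identify([0.25, 0.95, 0.55, 0.85, 0.3], known))   # -> "ana"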

Intelligent bathrooms
On March 30, 2005, two large Japanese companies, Yamato House and TOTO, met to create a user-interactive system called "the intelligent rest room", with the main goal of family health, integrating intelligent devices into a normal bathroom space.
The idea of this integration is to have more control over human body health care by adding measurement sensors all around the bathroom space and objects, measuring sugar levels, blood pressure, body fat percentage and weight.
All this daily analysis can be sent to the family computer or even to the doctor's office through a wireless network; once on the computer, the health-analysis software that comes with the system will analyze the results and advise on health care, diet or exercises to do. All the results are displayed on a digital screen located on the bathroom's wall, and at the same time this information is stored as medical history.

Research Beginnings on the Finite Element Method

Specifically, the Finite Element Method or Finite Element Analysis is a system for taking a complex problem and separating it into parts. From these smaller parts you can derive approximate solutions for each element and then combine the solutions to form an overall solution for the problem. The overall accuracy of FEA depends on the number of elements the problem was divided into, the assumptions made about the individual elements to derive a mathematical solution, and how the isolated elements were amalgamated into a coherent result.

Depending on the complexity of the problem there are steps for the finite element method to follow so as to achieve the desired result; the more complex the problem, the more steps in the method. It is important to remember that much of the finite element method can be defined in simple one-dimensional or two-dimensional mechanical physics if the elements are divided properly.

The first step is Idealization, or taking the problem and reducing the entire system to a simplified physics model. In other words, taking the question and relating it to an already developed system of physics and mathematics.

The second step is Finite Element Discretization: decomposing the question into the required number of elements to gain an accurate solution. Essentially this part is taking the mathematical or physics model that is used to represent the question and partitioning it into separate, more manageable parts.

Local Approximation, or the Discrete Solution, is the third step. This is the mathematical portion of solving the individual parts by the sum of their forces (in mechanical physics models).

The final step is Amalgamation. This is taking all of the individual solutions and forming them into a single cohesive, overall solution. In other words, this is the assembly of all of the parts to answer the question.

It is important to note that there is a give-and-take relationship between steps one and two, as well as steps two and three. If the mathematical model assumed to represent a part of the question is incorrect, then the entire solution will also be incorrect; therefore it is important to be extremely precise in the suppositions made from step one to step two. On the other hand it is not always possible to know the specific number of parts you need, and the FE discretization step might need to be returned to on multiple occasions to gather the correct information needed. There is a certain amount of “guess and check” involved with the finite element method to gain a faithful solution.
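A minimal one-dimensional sketch of the whole cycle, assuming two linear spring elements in series with one end fixed (values invented), shows how element stiffnesses are amalgamated into a global system and solved:

# Minimal 1D FEM sketch: two linear spring elements in series, node 0 fixed,
# a unit force applied at node 2. The element stiffness matrices are assembled
# ("amalgamated") into a global matrix, the boundary condition is applied, and
# the reduced system is solved for the nodal displacements.

k1, k2 = 100.0, 200.0        # assumed element stiffnesses [N/m]
F = [0.0, 0.0, 1.0]          # external force only at the free end [N]

# Global stiffness matrix for nodes 0-1-2 (assembly of the two 2x2 element matrices).
K = [[ k1,   -k1,      0.0],
     [-k1,  k1 + k2,  -k2 ],
     [ 0.0,  -k2,      k2 ]]

# Apply the boundary condition u0 = 0 by reducing to the free degrees of freedom.
K_red = [[K[1][1], K[1][2]],
         [K[2][1], K[2][2]]]
F_red = [F[1], F[2]]

# Solve the 2x2 system directly (Cramer's rule is enough at this size).
det = K_red[0][0] * K_red[1][1] - K_red[0][1] * K_red[1][0]
u1 = (F_red[0] * K_red[1][1] - K_red[0][1] * F_red[1]) / det
u2 = (K_red[0][0] * F_red[1] - F_red[0] * K_red[1][0]) / det
print(u1, u2)   # expected: 1/k1 = 0.01 and 1/k1 + 1/k2 = 0.015

With more elements only the assembly step grows; the idealize, discretize, solve and amalgamate cycle stays the same.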



FEM opens a wide range of possibilities for architects and designers to analyze their projects before realization. Using software based on FEM one can predict how a particular form will work and behave under the impact of loads. One piece of software based on FEM is Abaqus.
The Abaqus suite consists of Abaqus/Standard, Abaqus/Explicit and Abaqus/CAE. Abaqus/Standard is applied to static, low-speed dynamic, or steady-state transport analysis, while Abaqus/Explicit may be applied to those portions of the analysis where a high-speed, nonlinear, transient response dominates the solution. Using Abaqus/CAE one can create geometry, import CAD models for meshing, or integrate geometry-based meshes that don't have associated CAD geometry.

The software is used by engineers working in the fields of aerospace & defense, automotive & transportation, industrial design such as furniture and packaging (including both the design and the production process), high-tech, industrial equipment, the services industry, shipbuilding, the power, process & petroleum industry, life sciences, and of course the field of architecture and construction.

The process of analysis using the Abaqus software is divided into three parts:
Phase 1 preprocessor, Phase 2 processor, Phase 3 postprocessor. All phases are described below.


Phase 1 preprocessor
The object of the preprocessor is to define the discrete model.
First the geometry one wants to analyze, prepared in 3D (imported from other software as an .stl or .igs file, or created in Abaqus), is simplified to a physical model. To get the discrete model one has to define all the data for the physical model, such as the mesh definition, material data and loads. Those decisions influence the precision and the time taken for the calculations of phase 2.
Finally one gets the input file (.inp) – a text file – which contains the numerical description of the model (a schematic sketch of this idea follows the list below).

The model is defined by:
• geometry: defined by a mesh based on the finite elements
The Abaqus library lets the user choose from 200-300 kinds of elements which will create the mesh of the analyzed surface. It is possible to change the size and number of elements, that is, the density of the mesh.

• element section properties: complementary information about the geometry

• material data

• loads and boundary conditions

Two typical loads are the concentrated load (force [N]), which defines the force acting on a particular point of the mesh, and the distributed load (pressure [Pa]), which defines the pressure on an area of the mesh.

Boundary conditions define the degrees of freedom of the geometry. Each point of the geometry has six degrees of freedom – three translations and three rotations, about the x, y and z axes.

• kind of analysis: static (in Abaqus/Standard) or dynamic (in Abaqus/Explicit)
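A schematic sketch of what such a "numerical description" contains (the field names below are invented for illustration; the real Abaqus .inp keyword format is different and much richer):

# Schematic stand-in for the idea of an input file: the discrete model is just
# structured data (nodes, elements, material, boundary conditions, loads,
# analysis type) dumped to a plain-text file for the processor phase.

model = {
    "nodes": {1: (0.0, 0.0), 2: (1.0, 0.0), 3: (1.0, 1.0)},      # geometry
    "elements": {1: ("truss", [1, 2]), 2: ("truss", [2, 3])},     # mesh
    "material": {"name": "steel", "youngs_modulus": 210e9},       # material data
    "boundary": {1: "fixed"},                                     # boundary conditions
    "loads": {3: ("concentrated", [0.0, -1000.0])},               # loads [N]
    "analysis": "static",
}

def write_input(model, path):
    # Dump the model description as a plain-text file, one section per key.
    with open(path, "w") as handle:
        for section, data in model.items():
            handle.write(f"** {section}\n{data}\n")

write_input(model, "model_description.txt")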


Phase 2 processor
This is the phase of calculations based on the input file. The adequate procedures are activated and the task is accomplished.
The program informs the user of any problem or mistake in the input file. Some typical mistakes are a "comma" instead of a "dot" or an "o" instead of a "zero".
Depending on the complexity of the analysis, the processor phase can take from a few seconds up to several hours.
The outcome of the processor is written as a text file or binary file.

Phase 3 postprocessor
This is the final part. It transforms the result of the calculations into visual output such as pictures or animations. Abaqus/CAE also offers comprehensive visualization options which enable users to interpret and communicate the results of any Abaqus analysis.
The postprocessor part is really important for communication between the engineer and the architect or designer, and for communication between them and the client.

Gehry Production House

Gehry Technologies is a consulting group composed of architects and engineers, with experience in advanced geometric constructability and construction processes gained over the years across a wide range of projects at all stages of development, from schematic design through engineering and construction administration. It provides the knowledge required to apply advanced Digital Project tools and integrated process methodologies to architectural projects and construction.

Design Processes

Parametric Design: The design processes of Gehry Technologies start from digitizing the design concepts and making parametric models of them. Digital Project is built on Dassault Systèmes' parametric-associative V5 design technology. These models can be reconfigured as per requirements and can be modified at various stages of the growth of the design configuration. The design issues are carefully handled so that at no stage in the conversion from informal design inputs to the formal outcome is the main idea compromised. The parametric models are efficient in their data structure, which is the outcome of a systematic approach, and can also provide turnkey model solutions directly to the client.








Geometry Solutions: Complex geometries and non-linearity are the components of the architecture of the digital age, and formally putting forward the geometrical expression which clearly states the design intentions and analyses is a primary prerequisite for the successful completion of the project. Developing geometric solutions for designs, both as pre-rationalized geometric systems and as the post-rationalization of complex designs into constructible form, is the underlying principle of these solutions, and how these vary between different projects makes the task far from routine.









Constructability Assessment: The process of fabricating complex geometries precisely for the construction process is a tedious job. For this, the constructability assessment deals with the analyses, reports and information about the constructional components of the project so that the design solutions can be realized within the project parameters. Materials, formwork, design complexities and fabrication processes are looked into, with critical guidance for the project's achievement.









Visualization and Simulation: 3D visualization of the project at various stages of construction and design acts as a guiding tool for design managers and engineers on site.

Construction Simulation: A wide variety of building process (4D) simulation techniques, including construction sequencing and assembly, make the construction process easier.
Visualization: Rendering, fly-through animation and other visual representation support is available as a service to end-user and project teams.
Visual Analysis: These are techniques such as Clash Detection and Dynamic Sectioning which allow users to systematically analyze, coordinate and track the resolution of design conflicts.









Knowledge Capture: the process of capturing trends and patterns for automation, repetition or customized development of reusable components, ranging from schematic to very detailed design components. This helps increase the efficiency of the projects. Optimization helps bring the costs of the project down.










Tools Development (R&D): As the project goes on, specific software tools such as macros and plug-ins are needed on top of the existing tools for particular operations. These could vary from special modeling, analysis and project automation to data interoperability tools. This acts as an automatic research and development exercise which gives you an edge over others and helps you perform tasks better. Collaboration with experts from the software field makes this a bilateral process leading to better tools.




www.gehrytechnologies.com

Recombinant Architecture



Recombinant Architecture examines the deep cultural impact of biotechnologies, including genetic, genomic and transgenic engineering, on the architectural imagination. Recombinant architecture is multiple, and Benjamin Bratton divides it into three different indexes:

a. Algorithmic Bio-morphology, the conception of architectonic forms in the image of genetic, biomorphic corporeality (architecture as physiognomic index of the posthuman),
b. Post bodies, the deliberate fashioning of recombinant bodily forms (genomic entities in the image of architecture) and
c. Genomic spatial systems, the application of artificial biomaterials in the construction of the built environment (architecture as the result of genomic design) - from bodies to buildings and back again.


Algorithmic Bio-morphology
the conception of architectonic forms in the image of genetic, biomorphic corporeality
(architecture as physiognomic index of the posthuman).

Genetic architecture elaborates the epistemic centrality of a now genomically self-conscious body as a methodological index of structural investigation. The genetic body is considered to name and contain multiple and incongruous animate forms to be given architectural expansion. Each one of those is a figurative principle that could be used so as to extend purely biological processes into more comprehensive bio-technical systems.

According to Karl Chu: "Genetic space is the domain of the set of possible worlds generated and mitigated by the machinic phylum over time. This is the zone of emission radiating out from the decompression of reality, a supercritical explosion of genetic algorithms latent with the capacity to exfoliate out into genetic space. This is not a passive receptacle but an active evolutionary space endowed with dynamical properties and behavior of the epigenetic landscape." In his theory of hyperzoic space, laws of physics that ordinate the play between genotype, phenotype and environment, are themselves evolving, and are condensations of multiple manifest and virtual modulations of genetic-algorithmic enunciation.

Greg Lynn’s Embryological House is considered by Benjamin Bratton “likely the most publicly appreciated genetic architectural project”. It re-imagines dwelling according to genetic form as a first principle of iterative animation. The House adjusts itself, reacting to and anticipating sunlight and other environmental variables according to the data it receives. Bratton believes that not only the Embryological House, but Genetic Architecture itself, remains beholden to traditional architectural problematics. The House is a genetic metaphor in architecture: although bodily forms and human morphologies have been used, it remains allegorical of genetic processes. As he comments: “It is undecided whether Embryological House is yet genetic architecture, or rather still architecture about genetics.”
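Since both Chu's "genetic space" and Lynn's iterative variations invoke genetic algorithms, a minimal sketch may clarify what such an algorithm actually does. The example below evolves a vector of hypothetical shape parameters toward an arbitrary environmental "fitness" target; the parameter names, target values and fitness function are illustrative assumptions, not a reconstruction of either project.

import random

# Minimal genetic algorithm: each "genome" is a list of shape parameters
# (e.g. curvature, aperture ratio, shell thickness); fitness rewards genomes
# close to a hypothetical environmental target. Illustrative only.
TARGET = [0.7, 0.3, 0.5]          # assumed ideal parameter values

def fitness(genome):
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    return [min(1.0, max(0.0, g + random.uniform(-rate, rate))) for g in genome]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(pop_size=30, generations=50):
    population = [[random.random() for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

print([round(x, 2) for x in evolve()])   # converges near TARGET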


Post bodies
the deliberate fashioning of recombinant bodily forms
(genomic entities in the image of architecture).

Recombinant architecture looks to the figure of the artificially designed body (genomically, surgically or otherwise realized) as a cyborgian measure of both structure and inhabitant, while genetic architecture infers or applies genetic grammars in the moment of creating formal architecture. The body is the first architecture: the habitat that precedes habitation. Architecture looks toward the body for its telos, its image of unified singularity, its continuous historicity. “The condition of embodiment and its material poetics of scale, temperature, solidity and pliability, reproducibility and singularity have located the horizon of design from Vitruvius to Virilio.” (Benjamin Bratton)

Bodies are now imaged as genomic territories, as cities of DNA events, because they are sliced into component subvariables and statistical predispositions. Bodies could be considered not only as the first architecture, but also as the first digital architecture: DNA is a digital code which produces forms, and the bodily forms produced are themselves architectonic in the highest order. Like all the other naturally occurring architectures, these genomic manifestations are incredibly perfect as they are, and yet available for modification.

Bodies could be considered as machines, and machines as bodies; therefore they can be used for new design practices and modifications. A spatial example could be the ear-mouse: in 1995 Dr. Joseph Vacanti, a transplant surgeon at Harvard, cultured a working human ear under the skin of a mouse, which was then removed without harming the mouse. Additionally, extreme body modification and plastic surgery could be considered as “a deliberate renovation of the first habitat (of the Self), and of the public production of performative space (of the singular Other)” (Benjamin Bratton). Although in the field of primary mechanics the ultramodern Body is a highly recombinant form, the ultimate realization of genomic digital auto-fabrication, this is unlikely to happen for legal and ethical reasons.

Recombinant architecture understands the primary figure of bio-materiality, the body, as itself an architectural event, and therefore re-designs the built environment both as and with artificially derived biomaterials. “As ever, buildings become bodies only as bodies become buildings.” Because it looks at architecture as genetic bodies, it looks at genetic bodies as architecture.


Genomic spatial systems
The application of artificial biomaterials in the construction of the built environment
(architecture as the result of genomic design)

A daily growing database of structural biomaterials and genetically or genomically designed fabric systems is nowadays widely explored and finds many applications in medicine, agriculture, the military and even conceptual art. At the same time, the application of genetic material engineering to the design of physical habitats quite often collapses the literal gaps between body and architecture.

A first conclusion for creating durable human habitats might be simply the replacement of traditional materials with new artificial biomaterials in the formation of traditional forms, spaces and programs (box, room, dwelling, house). But some architects are not satisfied with 'biomorphic chairs', nor even chairs made of genomically designed materials, and try to redefine the shape of architecture created out of biomaterials. As Benjamin H. Bratton of SCI_Arc describes in his article “The Premise of Recombinant Architecture: One”, “recombinant architecture” gives the premise “to explode the sitting-machine into new bodies of spatial narrative, new modes of habitat-circuit, new questions, and not just new answers. This redefinition of program 'from the DNA out' will undoubtedly result in several recognizable forms. Buildings, like bodies, have membranes, and the vocabularies of 'skin' should only become more pronounced. Buildings, like bodies, have orifices, and the materialities of interiorization/ exteriorization should likewise become further pronounced, even as bodily-programmatic conventions based on them (kitchen/ bathroom, for example) mutate beyond recognition.”

But the form of architecture based on biomaterials will most probably be an outcome of the way these materials are used, which will in turn depend on their specific characteristics. So far biotechnology research has focused mostly on medicine and agriculture, reflecting science's interest in fulfilling the fundamental needs of humanity. As a result, most funding currently goes into modifying native plants into improved food crops and into the search for miracle drugs, industries hoping for the fastest returns. That might be why there is so far no specific research into biomaterials that could be applied to the construction of human habitats.

http://www.nettime.org/Lists-Archives/nettime-l-0304/msg00011.html
http://www.rizoma.net/interna.php?id=151&secao=anarquitextura

Technology molding existing materials




In this search for the impact technology has had on traditional materials in recent history, we found that three materials in particular have been the most extensively used in construction throughout the world: concrete, brick and wood are the most representative and the most used.

“Concrete, the solid that forms at room temperature from mixing a grey powder (mainly Portland cement) with water and aggregates, is the most widely used material on Earth. Current estimates of world cement manufacture are around 1.7 billion tons/year, enough to produce well over 6 cu km of concrete per year, or at least 1 cu m per person. The demand is rising: conservative estimates predict a cement demand of 3.5 to 5 billion tons/year in 2050.”[1]
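A rough back-of-the-envelope check of the quoted figures can be made under two assumed values that are not from the source: cement is roughly 10-15% of concrete by mass, and concrete has a density of about 2.4 t per cubic metre (world population of about 6.5 billion around 2006 is also assumed).

# Back-of-the-envelope check of the quoted concrete figures.
# Assumptions (not from the source): cement is ~10-15% of concrete mass,
# concrete density ~2.4 t/m^3, world population ~6.5 billion (2006).
cement_t_per_year = 1.7e9

for cement_fraction in (0.15, 0.10):
    concrete_t = cement_t_per_year / cement_fraction
    concrete_m3 = concrete_t / 2.4
    print(f"{concrete_m3 / 1e9:.1f} km^3/year,",
          f"{concrete_m3 / 6.5e9:.1f} m^3 per person")

# Gives roughly 4.7-7.1 km^3 per year and 0.7-1.1 m^3 per person,
# the same order of magnitude as the quoted "well over 6 cu km" and
# "at least 1 cu m per person".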

Given these figures, we know that concrete is one of the materials whose relation with technology most urgently needs attention, to improve its performance, capabilities, possibilities and relation with the environment. So in order to improve this product, there have been a series of advances and applications that have made concrete more useful and advanced than ever before.

Basically, advances in concrete happen in a few specific areas:

-Material performance (structural)
-Material innovation (mixing with other materials to define new uses or new applications)
-Material impact (environment)

We have several examples in which concrete receives new treatments which allow it to perform in ways we have never seen before:

-Light Transmitting Concrete
-Concrete as a Display
-Bendable Concrete



In addition to these specific applications of concrete, there are a number of additives and chemicals which, specifically engineered, can solve construction problems directly linked to the architectural project. We were very interested in one building in particular, conceived by Zaha Hadid and engineered by AKT: the Science Centre in Wolfsburg, Germany. What becomes really interesting is that the concrete design and pouring were specifically devised to solve the complexity of the project.

Volumetrically, the building is structured in such a way that it maintains a large degree of transparency and porosity on the ground, since the main volume, the exhibition-scope, is raised, covering an outdoor public plaza with a variety of commercial and cultural functions that reside in the structural concrete cones.

An artificial crater-like landscape is developed inside the open exhibition space, allowing diagonal views to the different levels of the exhibition-scope, while protruding volumes accommodate other functions of the science centre. A glazed, public, wormhole-like extension of the existing bridge flows through the building, allowing views to and from the exhibition space.

The building consists of a basement car park out of which rise 10 reinforced concrete cones, flaring out to support the main exhibition space, two stories above. Each cone is of a different geometric shape, and they all change shape as they rise. Four of the cones continue through the exhibition concourse to support the steel framed, metal-clad roof. The cone walls are inclined up to 45°, which blurs the boundaries between walls and floors.

AKT treated the whole building as a single entity and analyzed it for gravity loads, thermal loads and shrinkage in one model. Although the basic construction method is traditional, the engineers specified concrete with a self-compacting admixture for the cone walls and parts of the concourse slab. There were two main reasons: the height of the pours and the inclination of some walls. The external walls of the cones are only 300 mm thick, and since they are heavily reinforced, a self-compacting admixture had to be used because it would have been impossible to compact the concrete with a traditional poker.
Since the structure was designed as a single entity, and the cones and slab are so dependent on each other for support, the whole structure had to be propped until the entire concourse slab had been poured.

[1] Concrete, New and Improved, by Prof. Franz-Josef Ulm, adapted from a speech at MIT Family Weekend, Oct. 13, 2006. http://cee.mit.edu/index.pl?id=20581

MIT Instant House project


The Instant House project, developed by Marcel Botha and Lawrence D. Sass at MIT's Department of Architecture, studies how digital design and fabrication can be utilized within an urgent-housing environment. Specifically conceived as a relief effort for natural disaster areas, refugee camps or any other improvised emergency human habitat, they propose a system that is both rapidly deployable and scalable, while fostering a large degree of individuality within the newly rebuilt community.

Botha and Sass intend to create an atypical solution in large quantities for emergency, transitional and developing contexts, while giving personal ownership to the end user, through generative computational methods and CNC fabrication techniques. The Instant House ships as a flat-packed structure ready for implementation. A generative system that mechanizes the interaction between user, designer and fabrication attempts to deploy customized dwellings effectively without incurring a cost premium. The process is not intended to proliferate merely cosmetic change but, more importantly, structural and spatial variation.

Past examples of generative methods have tended to produce house designs as spaces and forms only. The Instant House combines concepts of prefabricated low-cost design systems with those based on shape and a system for digital fabrication. The Instant House process produces a customized, habitable, mono-material plywood structure. Various joint types that sustain their assembly through friction connect each component of the system, eliminating the need for nails, screws or glue. The process is divided into five stages: shape design, design development, evaluation, fabrication and construction.
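As a rough sketch of what one such generative step could look like, the following Python fragment derives a flat-pack cut list of plywood wall panels from a few user parameters (footprint, height and panel module). The dimensions, sheet size and panel naming are hypothetical illustrations, not the actual Instant House rule set.

import math

# Hypothetical generative step for a flat-packed plywood house:
# from a footprint and a panel module, derive a CNC cut list.
SHEET = (2.44, 1.22)           # assumed plywood sheet size in metres

def wall_panels(width, depth, height, module=1.2):
    """Split the four walls into plywood panels of at most `module` width."""
    panels = []
    for wall, length in (("front", width), ("back", width),
                         ("left", depth), ("right", depth)):
        n = math.ceil(length / module)
        for i in range(n):
            w = min(module, length - i * module)
            panels.append((f"{wall}_{i}", round(w, 2), height))
    return panels

def sheets_needed(panels):
    """Crude estimate: total panel area over sheet area, ignoring nesting."""
    area = sum(w * h for _, w, h in panels)
    return math.ceil(area / (SHEET[0] * SHEET[1]))

cutlist = wall_panels(width=6.0, depth=4.8, height=2.4)
print(len(cutlist), "panels,", sheets_needed(cutlist), "sheets")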

Metropolis Magazine – Living for Tomorrow: The Instant House
Kolarevic, B.: 2003, Architecture in the Digital Age – Design and Manufacturing, Spon Press, New York.


REJECTING MATERIALITY: IN-FORMING FORMS

DTA Group 01 Javier Olmeda, Maite Bravo, Luis Odiaga

PAPER OUTLINE – DEC 2/07

5 QUESTIONS:
WHAT
WHY
FOR WHOM
HOW
WHEN


1. WHAT: [Objective of research] Are architects able to use novel (innovative) ways of dealing with digital interfaces in order to explore space with such an intuitive approach?

2. WHY: Conception and development of innovative design-process interfaces.

3. FOR WHOM: Architects, artists, digital tech, people dealing with responsive environments.

4. HOW: [Methodology]

a. Study of REACT table as the development process to identify needs, processes and solutions.
i. Interview with creators (if possible);
ii. Explanation of REACT table in terms of structure, functionality, complexity;
iii. Experiential interaction with table to understand processes and to gain insight on creative/individual input.

b. Research of 3 projects that incorporate tangibles as method of design to identify:
i. Inputs/outputs to determine variables: Logic.
ii. Systems of combinations: Media.
iii. Outputs: Final reaction/effect.

Projects suggested:
a. Reaction pollution
b. Enric Ruiz (Cloud 9) IM Museum
c. ????
d. Electronics/Technology: Interview with Victor Vina to identify:
i. sensors available to be used in architecture,
ii. amount of energy required,
iii. input/outputs.

5. WHEN: Digital era today.

6. Conclusion:

a. Interactivity, re-contextualized in current terms: the era of consumption becomes productive, a society of services, not products. Architecture as a multi-sensorial experience? Will digital media be able to modify concepts of space? Open doors to acknowledging human experiences (sound, light) and energy efficiencies/interactivity with the natural world?

b. End of architecture as we understand it, disappearance as an activity, transforming into modifiable surfaces and platforms of design; cut/paste architect; master-builder as organizer of processes and delegates; architect will orchestrate several layers of information/ fields with a general knowledge (NOT specific driven).

c. CONS: Reliance on technology ignores/leaves things out; the utopian ideal that technology can solve the problems of humanity; will it change quality of life? Pride in human knowledge and technology? Will robotics, genetics, etc. produce god-like humans? Can technology produce better architecture? Sustainability of programmable surfaces; dialogue between logic- and intuition-driven design? The emotional side of architecture?

HYPERBODIES: Complex Adaptive Dynamic Multi-Agent Systems (CADMAS) as Self-Sufficient Sustainable Environments of Inhabitance (SS SEI)



Draft outline and Progress

INTRODUCTION:

SELF-SUFFICIENT SUSTAINABLE ENVIRONMENTS OF INHABITANCE (SS SEI)
The argument of building sustainability could be simply defined as the act of producing buildings that can be maintained in their environments indefinitely. Various studies have been produced in response to the visible signs of the strain that increases in human population and energy consumption are placing on the environment. These visible effects (global warming and the increased natural disasters said to be attributed to it), and the resulting global research, political summits, discussions and agreements, suggest that sustainability should be central to future architectural strategies. It is not the intention of this research to outline those discussions or pass judgment on the aims of governments, developers or architects in their achievement (or not) of sustainable goals. Discussions about new forms of architecture must, however, consider sustainability as integral to proposed processes, theories and strategies for them to be applicable to the future. This paper attempts to search for new approaches to sustainability utilizing more dynamic approaches to the environment.

How sustainability is achieved in current approaches to sustainable design is, however, somewhere between science and intuitive knowledge. It could be argued that technological systems hold the greatest potential for 'bridging' this gap and creating buildings that are more intelligent and responsive to their environments. Intelligent facade systems (such as Jean Nouvel's Arab World Institute or Sir Norman Foster's Reichstag) achieve a basic level of interactivity between building and environment, but only on a basic 'single interface' level. There is a programmed input (track the sun) and a single output (move to block the sun). In terms of the complexity of interface systems, these examples are simple and do not allow for any future adaptability, or for interaction between the whole, its parts, other systems and ...
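The 'single interface' character of such facades can be made concrete with a small sketch: one measured input (sun elevation) drives one actuated output (louvre rotation), with no interaction between parts and no adaptation over time. The sensor and actuator calls are placeholders, not the control system of either building.

import time

# Single input -> single output: the whole "intelligence" of a basic
# responsive facade, reduced to one programmed rule. Sensor/actuator
# functions are placeholders for whatever hardware a real installation uses.
def read_sun_elevation():          # placeholder sensor
    return 35.0                    # degrees above the horizon

def set_louvre_angle(degrees):     # placeholder actuator
    print(f"louvres -> {degrees:.1f} deg")

def control_loop(cycles=3, interval=1.0):
    for _ in range(cycles):
        sun = read_sun_elevation()
        # one rule: block direct sun by mirroring its elevation
        set_louvre_angle(90.0 - sun if sun > 0 else 0.0)
        time.sleep(interval)

control_loop()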


COMPLEX ADAPTIVE SYSTEMS (CAS)

Characteristics
• Auto-Organisation
• Auto-Reproduction (genetic re-coding)

Key Examples
• Kas Oosterhuis

Kas Oosterhuis understands architectural projects as hyperbodies in terms of communication and responsive actions, and defines swarm architecture as architecture fed on data generated by social transactions. This kind of architecture could be understood as a hive mind of the new transformation economy, and it also has the capability to react in real time. For Oosterhuis, architecture becomes the science of fluid, dynamic structures and environments running in real time. He develops research projects that explore practical possibilities, using parametric and genetic design principles to build real, physical inhabitance. This collaborative work, based on parametric and associative tools, makes buildings pro-active hyperbodies shaped as prototypes for fluid, dynamic structures that achieve real-time environments. Auto-organization, established through feedback relationships between people and buildings, is one idea that runs through Kas Oosterhuis's work. Nonetheless, auto-reproduction and self-sufficiency in terms of sustainability are missing in his buildings. In our opinion, linking both auto-organization and auto-reproduction would be the right choice to generate CADMAS (Complex Adaptive Dynamic Multi-Agent Systems) that can be used to figure out sustainable and responsive strategies for building contemporary inhabitance.
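A hedged sketch of the kind of multi-agent behaviour CADMAS implies: each building component is an agent that adjusts its state from its local neighbours and an environmental signal, so that a global organisation emerges without any central controller. The state variable ("opening") and the update rule are illustrative assumptions, not Oosterhuis's actual software.

import random

# Each facade component is an agent with an "opening" state in 0..1.
# Every step it averages with its neighbours (auto-organisation) and nudges
# toward a local environmental demand (e.g. light level). Illustrative only.
def step(openings, demand, coupling=0.3, response=0.2):
    n = len(openings)
    new = []
    for i, o in enumerate(openings):
        neighbours = (openings[(i - 1) % n] + openings[(i + 1) % n]) / 2
        o += coupling * (neighbours - o)          # align with neighbours
        o += response * (demand[i] - o)           # react to the environment
        new.append(min(1.0, max(0.0, o)))
    return new

openings = [random.random() for _ in range(12)]
demand = [0.8 if i < 6 else 0.2 for i in range(12)]   # sunny vs shaded side
for _ in range(50):
    openings = step(openings, demand)
print([round(o, 2) for o in openings])   # smoothly graded between the two zones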


Technologies
• Embedded Systems or Ubiquitous Computing
MIT Media Lab


= HYPERBODIES:
(Other definitions of Hyperbodies)
(The goal of what we are researching here)
To combine the ideas of CAS and the auto-reproducing and auto-organizational systems that can be seen in the work of Kas Oosterhuis and other architects, but focusing on new ways of producing sustainable solutions (of program and usage) rather than simple bio-climatic systems.