Friday, August 30, 2013

Future Computing Lab


8/23/2013

Of the labs we have toured, I think computer vision is the easiest for architects to understand. They deal with space, the study of space, and the way people use space. Their motivations for study are roughly similar to ours, and the relevance of their research is immediately recognizable from an architectural perspective. While they are not interested in the form-making aspect of design, I think their work has the greatest overlap with the study of architecture, and through that mutual understanding it gives us the greatest insight into the motivations of computational study.

First, I’d like to address the process of spatial study using computer vision, and then move on to the larger implications and motivations. One of the greatest challenges is the technology behind the scenes. Each research project is limited by the type and amount of information that can be gathered. The computer cannot easily identify the qualities of a person that are readily available to a human observer, so studies focus on positional qualities and the time spent at any one location. I don’t think this is really a bad thing from a spatial-study perspective, since the path people take through a space is much more interesting than what color shirt they are wearing.

From an architectural perspective it’s interesting because throughout the design process we can, but do not have to, think about the way people will inhabit a space. Whereas the FCL studies only the way people move through space, architects can take multiple inspirations to flesh out a design. They can draw from precedent, from general design guidelines, or even take a form-making approach, where the inspiration is not inhabitation at all but rather the overall configuration of space. In this way, the research is a focused study of one piece of spatial design that seeks a more scientific approach to our understanding of space.

This seems to be a common theme among the labs we visited. Each topic, from a computational perspective, is about creating a technological framework through which graphics, interaction, or spaces can be measured and analyzed. From a simplistic perspective, it would seem that the research encroaches on much of the work designers have traditionally done. However, computers cannot understand the abstract work that designers do, and all of this research is not about replacing designers, but about getting computers to a point where they can understand these things.

Thursday, August 29, 2013

Computer Vision

8-23-13

            Dr. Souvenir and his assistants have been working on developing various methods of computer vision, the means by which computers detect and analyze their environments.  His team has focused specifically on human motion: the ways in which computers can identify the nuances of how people move through a space and the interactions that occur when two or more people meet.  The goal of this research is to detect patterns and combine this behavior recognition with anthropology as a way to analyze human behavior.  As of now, computer vision is capable of person detection and person tracking, but has difficulty with more specific levels of interaction.  These may be low-level interactions such as a handshake or a hug, or higher-level activity recognition such as a group of people in a meeting.
            The computer vision team uses camera arrays set up in a space to calculate these interactions and movements.  The basic method of determining where a person is in a room is to build a background image and then search each new frame for changes.  A greater number of cameras in an area yields more accurate results because of the added information.  The cameras triangulate an object’s position and examine it through a series of horizontal slices, which allows the system to determine the height of an object or person introduced into the space based on which slices it occupies.  The difficult part of analyzing people is all of the subtlety that goes into reading body language.  Because body language is such a key part of human communication, the next step in computer vision is to create a way to encode human activity and analyze it for recognizable patterns.
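The background-image step described above can be sketched in a few lines. This is a minimal illustration with made-up frame data and an arbitrary threshold, not the lab’s actual pipeline:

```python
import numpy as np

def build_background(frames):
    """Estimate a static background as the per-pixel median of many frames."""
    return np.median(np.stack(frames), axis=0)

def foreground_mask(frame, background, threshold=30):
    """Flag pixels that differ from the background by more than the threshold."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    return diff > threshold
```

A person entering the room shows up as a connected blob of flagged pixels; tracking over time and triangulation across cameras then build on masks like this.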

            The project has a long way to go, but it has a great foundation to build from. Analyzing the way people move through space gives anthropologists and architects a large amount of information to work with.  This data can help architects design spaces in a more responsive manner and aid anthropologists in finding behavioral patterns in a group of people that they may not have noticed otherwise.

Friday, August 23, 2013

Daylighting


Dale Brentrup
8/16/13


Daylighting has always struck me. Through all the years it’s been taught in school, it has seemed like an optimization problem. We are taught to use one type of shading device, and from there we use one of the many pieces of software available to us to optimize its position and orientation. From this perspective, daylighting is not a very interesting topic for designers. If the only goal is optimization, then there is no place for the designer, only a set of rules to follow. On the whole, given a finite number of distribution techniques and an infinite amount of computational power, daylighting should be considered an optimization problem.



My first rethinking of this stance came from the reading. It’s not that the article laid out the decision-making process from a looser perspective; it came from the use of genetic algorithms. The problem, in my mind, came from the decision-making process used in the algorithm. Such an algorithm is used when the final solution is not clearly defined computationally. Instead, the focus of a genetic algorithm is choosing a direction for the optimization, changing the weight of each variable, so that the computer can make calculations around the desires of the designer. If a genetic algorithm were a viable way of solving daylighting problems, then daylighting could not be a pure optimization problem.



By recognizing that the problem of daylighting is about optimizing a set of design decisions, and not just about adding more shading devices onto a façade, I can give a much better description of the process. It seems very close to the decision-making process the visualization lab uses, where an established set of proven solutions can be optimized, but the primary work is choosing the decisions that best achieve the goal. Thinking about the daylighting problem computationally, the designer’s role is choosing a combination of shading and reflecting techniques, and then optimizing each part of the daylighting system.  This is why you see so many different approaches to the issue of shading: daylighting isn’t inherently an expressive design approach, and thus we must have different ways of achieving the same goal. If we didn’t, and the ONLY way to solve daylighting were through optimization, there would be no expressive qualities, and no place for the designer.

Thursday, August 22, 2013


Ideas Seminar: Dale Brentrup
Date: August 16, 2013

Professor Dale Brentrup’s research focuses on the development of simulation systems for the optimization of daylight, energy, and envelope system performance. He talked about examples of daylighting optimization through the use of a genetic algorithm to reduce electricity consumption, and about developing work on the visualization of thermal performance. Many studies show the importance of daylight in office and residential buildings: natural lighting has a positive impact on mental and physical abilities as well as on work productivity. Sunlight, as a renewable energy source, can be used to improve energy efficiency through the integration of lighting control systems in office buildings. The relationship between light and architecture is therefore inseparable.

Demand for environmentally friendly buildings is growing, and architects are considering how to maximize and optimize the natural lighting entering a building in order to save energy. Many studies show that daylighting is an important function of building design and depends on factors such as a building’s location and orientation, size, shape, window size, and even specified materials. Because architects make these decisions, they largely determine a building’s daylighting performance, and their decisions have significant energy-consumption consequences. The goal of the research is to investigate the application of genetic algorithms to provide visual and optimized data for assessing the systemic energy-efficiency impact of whole-building systems in relation to global climate change. The study utilizes climate-based simulations for each design combination that affects thermal and luminous performance. This work is enabling a movement toward regional market transformation in energy efficiency, carbon-sustainable building design practice, and appropriate technology through the application of common methods and verification instrumentation.

In future years, energy prices will rise and drive up the costs of living and of doing business, so developing sustainable energy is really important. I think studies of daylighting performance evaluation will continue to advance in order to find solutions to daylighting design problems that are better than those found using traditional methods. Architects, engineers, and lighting designers can now perform dynamic climate-based daylighting simulations, made possible by the development of related algorithms and software.


Wednesday, August 21, 2013

Dale Brentrup and Daylighting

8-16-13

Dale Brentrup’s research focuses on daylighting and on utilizing this design methodology to reduce the energy consumption of new architecture.  Buildings consume about 40% of the energy produced, and of that, about 20% is devoted to lighting.  By reducing this load, designers can create more environmentally friendly and cost-efficient buildings.  Unfortunately, the key parts of the design process, where a building’s orientation and utilization of daylight are largely determined, are also the most expedited.  This results in hasty daylighting analysis, if any at all, and is detrimental to the building’s energy efficiency in regards to its daylight harvesting.
Dale’s goal is to implement a genetic algorithm for daylighting in order to account for the short amount of time designers give to daylighting in the design process.  Daylighting analysis relies on taking luminance data from various control points in a given space.  Through computer programs such as Radiance, these points receive data from the sky vault, and the amount of light reaching each control point can be calculated based on simulated weather conditions, date, and time of day. From there, daylighting analysis utilizes techniques such as manipulating the ceiling slope, shading elements, and the window size and height to distribute daylight evenly throughout the space. With the genetic algorithm, the computer creates various daylighting scenarios for a space, selects the most efficient, and builds from that until an optimal lighting condition is reached.  This process speeds up the analysis through automation and can quickly give the designer optimal dimensions and measurements for lighting elements such as light shelves, ceiling height, and ceiling slope.
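The select-and-rebuild loop described above can be sketched as a toy genetic algorithm. The fitness function here is a made-up stand-in for a real Radiance run, and the variables (light-shelf depth, ceiling slope, window height) and their “ideal” values are purely illustrative assumptions:

```python
import random

def fitness(candidate):
    """Stand-in for a daylight simulation: score a (shelf depth, ceiling
    slope, window height) candidate by its closeness to hypothetical ideals."""
    shelf, slope, window = candidate
    return -((shelf - 0.6) ** 2 + 0.01 * (slope - 10.0) ** 2 + (window - 2.1) ** 2)

def mutate(candidate, rate=0.1):
    """Randomly perturb each variable to explore nearby designs."""
    return tuple(g + random.uniform(-rate, rate) for g in candidate)

def evolve(pop_size=20, generations=50):
    """Keep the best half of each generation and mutate it to form the next."""
    pop = [(random.uniform(0, 1), random.uniform(0, 20), random.uniform(1, 3))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]                    # selection
        pop = survivors + [mutate(s) for s in survivors]    # variation
    return max(pop, key=fitness)
```

In a real workflow the fitness call would invoke the simulation for each candidate, which is exactly why the automation matters: the computer, not the designer, churns through the scenarios.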

Unfortunately, there’s no way to determine the absolute efficiency until the systems are implemented in the built environment, simply because nature can behave very differently from a computer simulation.  What the daylighting lab gives us is a good representation of the general nature of daylight specific to an area.  It allows us to prepare for conditions that have been previously recorded and averaged so that, as designers, we can design for the trends we have already seen.

Friday, August 16, 2013

Computing In Place, VAI Roma, and Architecture in the Modern Age


Eric Sauda
8/15/2013

Throughout this presentation it became clear why DArts has always had such a strong relationship with the visualization department. In many ways the logic used to formulate and solve problems is very similar. Both rely on organizing large systems of knowledge to solve problems, and both deal with problems that may or may not loom large in people’s minds. Thus the problem-solving logic, especially when dealing with the more abstract problems, seems to be about making the problem smaller and smaller so that it is more manageable. It’s less about solving the whole problem, and more about solving, or identifying potential solutions to, parts of the problem.

Having been involved with the Computing in Place project for the past year and a half, without ever focusing on the problem-solving procedure I have been applying, I think I will start there. Unlike Architecture in the Media Age, the problem, or direction, of the project is fairly clear: to find the place of large-scale media displays in an architectural context. It sets up a path that can take many branches and detours as each interactive technique is approached and experimented with. When progress has been exhausted, we can go back to the beginning and re-examine things, or begin exploring an entirely new mode of interaction. The project ultimately tries to put media within an architectural framework, and explores the multiple ways to accomplish that goal.

While at first glance Computing in Place and Architecture in the Media Age seemed to be very different projects, I don’t think they are. Computing in Place tries to find a place for media in architecture, while Architecture in the Media Age tries to place architecture in the realm of media. The latter project is defined quite a bit more ambiguously than Computing in Place, but I think that is necessary, since we do not usually think of architecture as a source of media. They both explore the relationship between media and architecture, and discuss the increasing blurring of the two. Both are about diagramming, or setting up a network of issues pertaining to the architecture-media relationship, and focus less on trying to create a grand theory that ties the two together. They are set up as parallel projects that should give a greater understanding about the middle ground between the two topics.

Eric Sauda

8-15-13

            Eric Sauda does not run one particular type of research; instead he works in collaboration with multiple people at once to bring several sets of ideas to fruition simultaneously.  Eric has had his hands in many different disciplines regarding the interaction of architecture and computer science.  The first is Computing in Place, which deals with the way display screens have become a ubiquitous part of architecture today.   The second is Computer Vision, which addresses how computing can be used to analyze the movements of people through a space in an anthropological manner.  Lastly, the VAIRoma project addresses how people can access data about Rome through visual analytics and develop new methods for studying Roman history.
            Computing in Place deals with the display screen and its place in architecture today.  These screens are a static part of buildings and in most cases are little more than electronic billboards.  Computing in Place seeks to discover ways to make them much more than that. Because these screens are relatively new, no particular element has united the way they are presented to the public.  Computing in Place combines knowledge of the architectural environment, knowledge of place from an anthropological perspective, and knowledge of display interaction to further the use and interactivity of these screens.  By studying these elements, one can determine how people interact with a screen in a certain area and find ways to increase the level of interactivity, giving the user a more meaningful experience.
            Computer Vision shares elements with the Computing in Place project in that it attempts to analyze people in an anthropological sense to get a better idea of how to design a space around human encounters.  The project looks at the way people move through a space and interact with each other in order to find specific patterns.  By gathering this data, we can learn to predict how people will interact in a given space based on the way they congregate, and therefore enhance a person’s experience of the space through design.  That design may come through digital methods, such as adding a display informed by the data gathered and the understanding from Computing in Place, or through a physical change to the space itself via installation or renovation.  Further, this could provide precedent for future designs of similar spaces.
            Finally, VAIRoma looks at computing and architecture in a strictly informatic sense.  Visually representing the enormous amount of data available on the Roman Empire is a daunting task, but when complete it will give architects and historians easy access to the information through its layers.  Seeing this vast amount of information presented in such a manner will allow new connections to be made and offer insight to those studying the field.

            All of these studies seek to merge different fields in a way that is mutually beneficial to everyone involved. Evaluating the various projects will depend on their ease of use and effectiveness across the disciplines, which can only be established through use and testing.

Thursday, August 15, 2013


Ideas Seminar – Eric Sauda
Date: August 15, 2013

Professor Eric Sauda started with a brief introduction and a precise description of his research area. Initially, he talked about issues related to urban visualization from the perspectives of urban design and computer visualization. He stresses interactive visualization, a new form of communication that provides presentations allowing viewers to interact with information in order to construct their own understanding of it. Most people are not experts in a given research area, so we should try to communicate in ways they can understand. To that end, the project enables the development of new computational technologies to view data, to use interaction capabilities, and to better communicate the results.

The project presents urban visualization in Charlotte. There are many systems for displaying large collections of data for urban study; however, most rely on statistical charts and manual calculations to understand relationships in the urban environment. Furthermore, these systems often limit the user’s perspective on the data, constraining the user’s spatial understanding and cognition of the region being viewed. He suggests instead a 3D view of the urban model paired with a separate but integrated information-visualization view that displays multiple disparate dimensions of the urban data, allowing the user to understand the urban environment both spatially and cognitively at a glance. The urban visualization system contains features for interacting with urban data, enhancing users’ ability to understand the urban model.

I thought the lecture was extremely interesting: not like the traditional talks full of jargon, but focused on improving interactive visualization. As many researchers have pointed out, computing today is moving in a number of new directions. In this sense, I can understand how helpful this interactive urban visualization tool will be in developing and providing an intuitive understanding of urban data and learning algorithms. For future work, he is trying to enable a user to gain a sense of urban legibility that includes both geometric form and the flow of information and goods. I suspect that doing this will continue to require a great deal of borrowing of techniques and ideas from artificial intelligence and other areas of computer science.


Jefferson Ellinger

8-14-13

            Jefferson showed us a variety of projects that he and his firm E/Ye Design have worked on.  Their design process and concept are along the same lines as Chris Beorkrem’s approach to parametric design, mainly in the way they use conventional construction methods in conjunction with customized parts.  Jefferson described this process as “critically engaging normative construction methods.” Additionally, E/Ye Design takes an ecological approach to design alongside the parametric approach.  They use tools such as Vasari and Fluent to analyze their buildings for efficiency after designing them with parametric scripting tools such as Grasshopper or Generative Components. It’s this combination that lends specific forms and spatial uniqueness to E/Ye Design’s projects.
            E/Ye Design’s earlier work involved parametrics at a small scale. By examining construction methods and material limitations, they determined that a simple ruled-surface strategy would be an effective way to create interesting spaces.  This way, conventional construction methods could be used, making assembly of the designed project relatively straightforward for someone unfamiliar with the design.  The ruled surface allowed for pushing and pulling that created an interesting space in which to interact.  They discovered that by manipulating this type of form, they could activate certain areas of the space they were designing, whether a conference room or a PS1 competition entry. This process could continue to be applied at small scales, but became less effective as the scale of the structure increased.  For larger buildings, programming and scripting became the design methods of choice, while still implementing the pushing and pinching inherent in the ruled-surface strategy.  At the larger scale, these moves weren’t simply for defining or activating space as they were at the smaller scale, but instead served as ecologically mindful design elements.  These activations would funnel air for passive ventilation or capture the sun to control heating passively.
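The ruled-surface idea mentioned here, straight lines connecting two boundary curves to give the pushing and pulling, can be sketched outside Grasshopper as a simple point sampler. The boundary curves below are invented examples, not an E/Ye Design definition:

```python
import math

def ruled_surface(curve_a, curve_b, u_steps=10, v_steps=10):
    """Sample a ruled surface: each row of points lies on a straight ruling
    line joining curve_a(u) to curve_b(u), for u swept along the edges."""
    grid = []
    for i in range(u_steps + 1):
        u = i / u_steps
        a, b = curve_a(u), curve_b(u)
        row = [tuple(a[k] + (j / v_steps) * (b[k] - a[k]) for k in range(3))
               for j in range(v_steps + 1)]
        grid.append(row)
    return grid

# Example edges: a flat bottom curve and a top curve "pushed" in z.
bottom = lambda u: (u, 0.0, 0.0)
top = lambda u: (u, 1.0, 0.3 * math.sin(math.pi * u))
```

Because every ruling is a straight line, the surface can be built from straight members or flat sheet stock, which is exactly what makes the strategy friendly to conventional construction.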

            Because some of these projects are being constructed, we can evaluate the effectiveness of the design based on real-world results.  Obviously, testing in climate simulations can tell us a fair amount about how a building would behave in a given location, but actually constructing and inhabiting the building can shed light on future designs and methods of construction.