Tuesday, October 20, 2009

Maria Wong

I am available anytime from Monday next week.
Those who need equipment for both the test run and the final presentation, please let us (the management people, I reckon: me, Robert, Dane, Tom, and William?) know before the day and we'll try to book it. It would be good if you could contact us a couple of days beforehand.
thanks!

-m-

Monday, October 19, 2009

Joseph Vozzo

Hi Guys,

My name is Joseph Vozzo. I was in the Presentation role but I have swapped with Michael Paraska, so I am now in Documentation. I will be free to meet up any day, as long as it is after 6:30pm, because I have to drive up after work. For your reference my number is 0405627834. My presentation was for seminar weeks 2+3.

Test run

So I'll start it =)

I'm available next week:
Monday after 12
Tuesday between classes
Wednesday all day

We really can't afford to do it any later than that.
Cheers to everyone who posts their availability for a test run!
-Lauren

Thursday, October 15, 2009

Wk 1 - Anamorphism

SUMMARY

Anamorphosis is a process which allows the exterior world to be viewed from different viewpoints.

Throughout history, humanity has used different visual models to produce images. The camera obscura was the dominant visual model from the 1500s to the 1700s. Due to its structural principles and its monocular way of viewing the exterior world, the model collapsed in the early 1800s.

The next visual model, proposed around the 1820s, was that of autonomous vision. This saw the introduction of linear perspective and vanishing points. Autonomous vision brought about new techniques of ‘power’: because of the different ways of observing the exterior world, the observer could now manipulate and control the object in different ways, creating an accurate image of the ‘seen’.

All the readings display a certain visual ‘truth’ regarding the observer and the seen. Because of factors concerning verisimilar perspective and the view of the observer, a ‘correct’ representation of the outside world has been desired throughout history. Visual models have been developed, rejected and embraced by society, and today’s models focus on producing accurate images of the exterior world with the help of linear perspective. This has created a conventional way of viewing which, to most of humanity, is the ‘normal’ and sensible way to observe an object.

Anamorphism creates a kind of escape from this normality. It occurs when the observer moves position, not the object, which removes the observer from the space and reinserts them with a different take on the exterior world. Because the image is morphed, e.g. onto a cylindrical plane, the viewer can only comprehend it by distorting their own body until the image becomes ‘normal’. Anamorphism allows the viewer to see things from another perspective and, to some, another world. This type of removal can be seen in Robert Lazzarini’s skulls and in the experiential explanation Hansen deploys. It rejects the authoritative hold that the visual techniques of linear perspective have on society, creating a different space, like a distorted universe for one to experience.

Wednesday, October 14, 2009

Oliver Petrie - Week 6 Digital Camera Technology

Digital Camera Technology

The introduction of microchips has enabled a range of new technologies, and has also replaced a variety of technologies that used tubes or valves, such as those in cameras. Digital cameras have replaced film still cameras, made webcams possible and are gradually replacing analogue CCTV systems.

While most still cameras today are digital, many large-budget movies are still shot on analogue film, with many arguing that film remains superior to digital methods in terms of quality. This, however, is starting to change as digital technology becomes better at reproducing images.

One of the reasons quality is an issue for digital cameras stems from the underlying problem of capturing light.


The majority of digital cameras use an array with four sensor sites per pixel block: one red, one blue and two that capture green light (owing to human sensitivity to green-yellow light), an arrangement known as the Bayer filter. The method can be disputed because it isn't technically accurate, being biased toward one part of the colour spectrum. The basic problem in this model is: how do you split a square pixel into three equal colours? Many similar arrays have been created with this technique, some with a white pixel to measure intensity, and other advanced patterns that introduce different colours or array shapes.
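As a rough illustration (my own sketch, not from the readings), the snippet below subsamples a full-colour image into the single-channel RGGB mosaic that a Bayer-filtered sensor actually records; note that half of the samples end up green, matching the bias described above.

```python
import numpy as np

def bayer_mosaic(rgb):
    """Subsample an (H, W, 3) image into a single-channel RGGB Bayer
    mosaic, keeping one colour sample per sensor site (H, W even)."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w), dtype=rgb.dtype)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red at even row, even col
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green at even row, odd col
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green at odd row, even col
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue at odd row, odd col
    return mosaic
```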

Another method, known as the Foveon sensor, uses three colour-sensitive layers in a stacked arrangement, each layer capturing just the wavelengths of light it is designed for and allowing other wavelengths to pass to the layers below. This method has its own problems, arising from light leaking between layers and, of course, cost, as each sensor is in essence a microchip.
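A toy model of the layered idea (the depth figures below are illustrative guesses, not real silicon data): shorter wavelengths are absorbed nearer the surface, so the fraction of each colour caught by a layer falls out of a simple exponential absorption law.

```python
import math

# Illustrative 1/e absorption depths in silicon, in micrometres; real
# values depend on the sensor and are only loosely like these numbers.
ABSORPTION_DEPTH_UM = {"blue": 0.2, "green": 0.6, "red": 2.0}

def fraction_absorbed(top_um, bottom_um, colour):
    """Fraction of incoming light of one colour absorbed by a layer
    spanning the given depths, using Beer-Lambert style attenuation."""
    d = ABSORPTION_DEPTH_UM[colour]
    return math.exp(-top_um / d) - math.exp(-bottom_um / d)

# The shallow layer catches most of the blue but little of the red:
print(fraction_absorbed(0.0, 0.3, "blue"))  # ~0.78
print(fraction_absorbed(0.0, 0.3, "red"))   # ~0.14
```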
There is also a wide variety of other issues that come into play when assessing the quality of a digital camera: how much light can be captured, how many pixels there are, the size of the sensor, and the calculations used to transform that information into an image.


Human Vision

To understand how to adapt digital cameras for use in architecture, we must also understand on some level how our vision works: it depends on multiple systems of brain function that decipher colour, motion and shade to form an image. How we perceive the world relies heavily on how that data is processed in conjunction with memory, natural assumptions and approximations. Evolutionary change and interaction with the world have driven these cognitive effects, many of which we do not fully understand. We unintentionally assume that objects are lit from above and that objects have mass rather than being hollow. We group objects and colours together, and our vision is biased toward particular colours over others: the sensitivity of human vision peaks in the green-yellow wavelengths (shifting toward the blue-green at night).

Camouflage

We have become accustomed to seeing the world through the internet, and will often sacrifice a real-world experience to instead view it over the internet, as with site visits. The lack of privacy that comes with this technology creates a desire to hide or disconnect from it. CCTV cameras use some infra-red wavelengths of light to enhance the image at low lighting levels. Concepts have been invented that use a method of camouflage in which bright infra-red LEDs overload the sensors of CCTV cameras. This is one example of something humans would not physically see in reality, but which CCTV cameras would pick up. The method can camouflage an object from security cameras, but could also be used to draw attention to something in the digital world that would remain seemingly anonymous to us in reality.


Invisibility and thoughts

Architecture, in a traditional sense, deals with built form rather than with digital reproductions or creations. The line between real and digital is becoming increasingly blurred. Should architecture, then, try to combat this move toward digital experiences over real ones, or embrace the digital world as another, totally different experience? Can the digital realm become the primary experience of a project?

This throws into question what a real experience in architecture is. As we have seen with the Blur Building by Diller and Scofidio, the building envelope separating interior and exterior can be broken down, creating a totally new experience. In 2007, scientists were able to create an artificial electromagnetic meta-material with a negative index of refraction for visible light, meaning the material could bend light around itself, in essence becoming invisible to humans. Although this happened at a microscopic scale, it opens up a range of possibilities for both the physical and the digital world. In decades to come it may be possible, in theory, to create a building that shows up on some cameras but is invisible in reality.

If architecture deals with real experiences, then it may be in our interest to hide buildings from digital views, camouflaging them in order to draw a greater audience to seek the real experience. On the other hand, we could use the digital view to seek another dimension to architecture.

Monday, October 12, 2009

Nikola Kovac: Week 6 Camera Technology, Resolution and Scale

A digital camera works by focusing light through the lens onto a digital sensor. The sensor can be either a CCD or a CMOS sensor and is usually coupled with a colour filter array placed over it. The colour filter array filters the light by wavelength range before it hits the sensor; the filtered samples then undergo a demosaicing process to arrive at a final image.

A standard RGB colour filter array is the Bayer filter. As John Savard points out in ‘Colour Filter Array Designs’, the Bayer array is 50% green, 25% red and 25% blue. These are the three primary colours of our visual system, and the weighting takes into account that green is the dominant colour component contributing to spatial resolution. The RGB colour model is additive in nature, meaning that colours are either placed close to each other or follow each other in quick succession to produce the different colours of the gamut. When red, green and blue are added at equal full intensities, they produce white. Red and green combine to make yellow, green and blue make cyan, and blue and red make magenta; this is the basis of the CMY colour model. CMY is a subtractive colour model in that it subtracts wavelengths from white to produce colour.
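The additive/subtractive relationship is easy to state in code. This little sketch (my own, with channels normalised to the 0 to 1 range) shows that CMY is simply white minus RGB, which is why equal red, green and blue make white and red plus green makes yellow.

```python
def rgb_to_cmy(r, g, b):
    """Convert additive RGB (0..1) to subtractive CMY: ink = white - light."""
    return (1.0 - r, 1.0 - g, 1.0 - b)

assert rgb_to_cmy(1.0, 1.0, 1.0) == (0.0, 0.0, 0.0)  # full RGB = white = no ink
assert rgb_to_cmy(1.0, 1.0, 0.0) == (0.0, 0.0, 1.0)  # red + green = yellow ink only
assert rgb_to_cmy(0.0, 1.0, 1.0) == (1.0, 0.0, 0.0)  # green + blue = cyan ink only
```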

There are colour filter arrays based on the CYYM model and the CYGM model, just to name a few. As Savard highlights in his piece, there are endless possible combinations for how a colour filter array can be organised; if someone were able to produce all the filters and you tried them, you would get anywhere from slight to very different variations in their output. The Foveon is another popular sensor, using three stacked active-pixel sensor layers: the wavelengths of different colours are absorbed at different depths, so each layer receives the full share of its colour rather than the fraction a mosaic array would give it.

The interesting thing about this is the repercussions it has for how colour is received, and how the urban environment is then viewed when the filtering is altered. Many web camera websites provide codes to unlock a camera's raw mode. By doing this you get to experience the raw Bayer filter effect before the hardware tunes it to a finer output. The Logitech website warns that if you do this you will “lose all the benefits of the many days that Logitech engineers spent on tuning the hardware. You have been warned.”

Alternatively, many at-home astronomers using digital viewing scopes have created a range of freeware apps for altering the state of your webcam's filtering. Open-source software like AVIRaw lets you experiment with the outputs created by reconfiguring the Bayer pattern between its GR, RG, BG and GB orderings.
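To make the GR/RG/BG/GB experiment concrete, here is a minimal sketch (my reconstruction, not AVIRaw's code) of a nearest-neighbour demosaic in which the chosen ordering simply shifts which position in the 2x2 tile is treated as red; decoding a raw frame with the wrong ordering visibly swaps the colour channels.

```python
import numpy as np

# Each ordering names where the red sample sits inside the 2x2 tile.
BAYER_OFFSETS = {"RG": (0, 0), "GR": (0, 1), "GB": (1, 0), "BG": (1, 1)}

def naive_demosaic(mosaic, ordering="RG"):
    """Nearest-neighbour demosaic of an (H, W) raw Bayer frame, H and W even."""
    r0, c0 = BAYER_OFFSETS[ordering]
    h, w = mosaic.shape
    up = lambda plane: np.repeat(np.repeat(plane, 2, axis=0), 2, axis=1)
    rgb = np.zeros((h, w, 3), dtype=float)
    rgb[..., 0] = up(mosaic[r0::2, c0::2])          # red sites
    rgb[..., 2] = up(mosaic[1 - r0::2, 1 - c0::2])  # blue sits diagonal to red
    greens = (mosaic[r0::2, 1 - c0::2].astype(float)
              + mosaic[1 - r0::2, c0::2]) / 2       # average the two greens
    rgb[..., 1] = up(greens)
    return rgb
```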

In terms of architectural applications, and how this can shift the way we as designers work or push interactive architecture beyond just a media façade, I was interested in the Bamboostic project by the Hyperbody team at TU Delft.

It’s an interactive installation influenced by CCTV surveillance, and it demonstrates some possible architectural applications of security web cameras and their sensitivity to different colours. In one room there is an aquarium with a goldfish. The goldfish’s movements are tracked by a webcam equipped with Max/MSP software, which detects groupings of pixels in a similar colour range. In this case the goldfish stands out in the aquarium, and the computer maps patterns of movement based on the colour orange. In another room, people are encouraged to enter a bamboo forest, where another camera equipped with Virtools tracks movements based on pixel colour changes, which occur when someone wearing a shirt of a colour different to the floor enters the space. It then calculates the average coordinates of those changing pixel patterns in the space.
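A stripped-down sketch of that tracking step (my own reconstruction in Python, not the Hyperbody team's Max/MSP patch; the orange thresholds are illustrative guesses): threshold the frame for pixels in an orange range and take the average coordinates of the matching cluster.

```python
import numpy as np

def track_orange(frame, lo=(180, 60, 0), hi=(255, 170, 90)):
    """frame: (H, W, 3) uint8 RGB image. Returns the (row, col) centroid
    of pixels whose values fall inside the orange range, or None."""
    lo, hi = np.array(lo), np.array(hi)
    mask = np.all((frame >= lo) & (frame <= hi), axis=-1)
    rows, cols = np.nonzero(mask)
    if rows.size == 0:
        return None  # no orange cluster (no goldfish) in view
    return rows.mean(), cols.mean()  # average coordinates of the cluster
```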

The vector output of the fish and the vector output created by the movement of people together cause “different attractors at different strengths, and creates unpredictable swarm behaviour”, making the muscles of the bamboo poles contract and move towards and away from the vector pathways in the system.

A similar example is the Chameleon Chair Lounge exhibit. It was an art installation piece, but it shows the potential of web cameras recognising and responding to pixel changes. The chairs detect colour changes in pixel clusters, calculate the average colour change in each cluster, identify it within the Lab colour space, and project that light back from within the chair. This has some interesting architectural possibilities for façade treatments once the technology develops further.
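The sensing step the chairs are described as performing might look something like this sketch (my guess at the approach, using scikit-image for the Lab conversion): average the colour of a detected pixel cluster and express it in Lab space, which separates lightness (L) from the colour axes (a, b).

```python
import numpy as np
from skimage import color  # rgb2lab converts sRGB in 0..1 to CIELAB

def cluster_mean_lab(frame, mask):
    """frame: (H, W, 3) float RGB in 0..1; mask: (H, W) bool cluster.
    Returns the cluster's average colour as (L, a, b)."""
    mean_rgb = frame[mask].mean(axis=0)           # average RGB of the cluster
    lab = color.rgb2lab(mean_rgb[None, None, :])  # convert a 1x1 "image"
    return tuple(lab[0, 0])
```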

I did a bit more research on the web camera as a viewing port for anamorphic display within urban spaces, and I stumbled across something seemingly silly that is nonetheless gathering quite a bit of hype.

What you are seeing here is augmented reality. The way it works is very similar to how a green screen works in movies: in reality there is just a planar symbol printed on a piece of paper, but seen through the lens of a web camera it becomes something entirely different. Papervision3D is an open-source library for Flash CS3, and what it does is quite simple. Using Maya, 3ds Max or even Google SketchUp, you generate a model. Once you have done that, you bring the Papervision3D source code into Flash. The code tells Flash and the web camera to target their vision on an image symbol; the camera uses the symbol as the reference point of an x-y-z coordinate plane, so through the viewing port of the web camera you experience an augmented reality. Flash then projects the model, as video, onto your target.
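Papervision3D itself is ActionScript, but the core trick, recovering the marker's coordinate frame from its four printed corners, can be sketched with OpenCV's ArUco markers in Python (a modern stand-in, not the Papervision code; camera_matrix and dist_coeffs are assumed to come from a prior calibration).

```python
import cv2
import numpy as np

# A printed ArUco marker plays the role of the paper symbol: its detected
# corners anchor an x/y/z frame that virtual geometry can be rendered into.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary)

def marker_pose(gray, camera_matrix, dist_coeffs, side_m=0.05):
    """Return (rvec, tvec), the marker's pose in camera space, or None."""
    corners, ids, _ = detector.detectMarkers(gray)
    if ids is None:
        return None
    s = side_m / 2  # the marker's corners in its own frame, in metres
    object_pts = np.array([[-s, s, 0], [s, s, 0],
                           [s, -s, 0], [-s, -s, 0]], dtype=np.float32)
    image_pts = corners[0].reshape(4, 2)
    ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts,
                                  camera_matrix, dist_coeffs)
    return (rvec, tvec) if ok else None
```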

The manipulation of what the web camera does has a lot of implications for how we view the urban environment. You can view this world through a webcam, but you can also view it on an iPhone, so that if you were to scan a space with AR glyphs hidden through it, you would see a new world of hidden geometries. There is even a range of AR media goggles, which people have now used to create some really crazy worlds within worlds.

Architecturally speaking, through the use of projectors acting as windows, you could create some interesting optical illusions and spatial challenges.

In discussing the possibilities of shading, Ramachandran, in his ‘2-D or not 2-D’, questions why evolution granted some species countershaded colouring. He questions traditional Gestalt theories by suggesting that there may be a deeper evolutionary cause for why our visual system associates shading with light sources and, ultimately, three-dimensionality. He explains how the brain justifies shading patterns.

A simple example he details in his piece shows a shaded circle that can be seen as either convex or concave; it cannot be perceived as both simultaneously, and we have an idea of where the light source is coming from. With a field of circles shaded in mixed directions, however, we cannot identify a single light source and have to assume individually whether each circle is convex or concave.

He goes on to talk about our natural assumption that light comes from above: in one of his figures, the shapes in the bottom panel read as holes while those on the top read as convex. Ramachandran describes studies in which subjects tilted their heads by as little as 15 to 20 degrees and their visual interpretation was determined by the shading rather than by retinal or gravitational cues.
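Ramachandran's demonstration is easy to reproduce: a flat disc with a vertical luminance gradient reads as a bump when it is brighter on top and as a dent when flipped. The sketch below (my own; the gradient values are arbitrary) draws the two cases side by side.

```python
import numpy as np
import matplotlib.pyplot as plt

# Build a flat grey field containing one disc with a top-to-bottom gradient.
y, x = np.mgrid[-1:1:200j, -1:1:200j]
disc = x**2 + y**2 <= 0.8
shaded = np.where(disc, 0.5 - 0.4 * y, 0.5)  # brighter toward the top

fig, axes = plt.subplots(1, 2)
axes[0].imshow(shaded, cmap="gray", vmin=0, vmax=1)        # reads as convex
axes[1].imshow(shaded[::-1], cmap="gray", vmin=0, vmax=1)  # flipped: concave
for ax in axes:
    ax.set_axis_off()
plt.show()
```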

I was interested in what architectural applications I could draw from this theory. One example is the building projection RUBIK 555. What we see is merely a projection onto an existing building, but the ideas of countershading that Ramachandran highlights are quite visible: our visual system infers light sources, which gives the images three-dimensionality. As the animation progresses, certain elements appear convex and some concave, and at varying times the rotation of the shadows changes, yet to us it always has a sensible three-dimensionality despite the absurdity of moving concrete panels.

Saturday, October 10, 2009

Tom Chan: Week 6 Camera Technology, Resolution and Scale

The human eye renders colour differently to how a camera captures it. The eye contains three separate types of colour receptor that respond to red, green or blue wavelengths.
A digital camera is designed to capture and express colour in the same fashion the human eye does, usually through a silicon-based image sensor array. Unfortunately, silicon doesn't perceive colour exactly the way we do: it is more sensitive to near infra-red wavelengths and poor at detecting light at the blue end of the spectrum.
A variety of techniques are therefore used to adjust and correct this perception. There is the expensive and difficult multi-sensor camera approach, which uses three separate RGB colour components to create a full-colour image.
The second technique is the widely used single-sensor Bayer camera. Due to the techniques involved, however, image quality can be poor, as there can be a lot of colour crosstalk in the colour perception process.
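One common correction step, sketched below with made-up numbers (a real matrix is measured per sensor during calibration), multiplies the raw RGB triplet by a 3x3 colour-correction matrix whose negative off-diagonal terms cancel crosstalk; each row sums to one so that white stays white.

```python
import numpy as np

# Illustrative colour-correction matrix: diagonal terms boost each channel,
# negative off-diagonals subtract the crosstalk leaking in from the others.
CCM = np.array([[ 1.6, -0.4, -0.2],
                [-0.3,  1.5, -0.2],
                [-0.1, -0.5,  1.6]])

def correct_colour(raw_rgb):
    """raw_rgb: (..., 3) linear sensor values in 0..1 -> corrected RGB."""
    return np.clip(raw_rgb @ CCM.T, 0.0, 1.0)

print(correct_colour(np.array([1.0, 1.0, 1.0])))  # white maps to white
```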

Currently, most web cameras use the single-sensor Bayer technology. As internet cameras become more numerous and abundant, a method of controlling the façade emerges: we can begin to explore manipulating not the camera technology itself, but exploiting what the camera perceives.

As internet camera technology is usually similar and cheap, it is possible to identify its weaknesses easily and design to exploit the colour and shape sensing, thereby hiding or distorting the image.

By viewing the internet camera as the ‘predator’ and the façade as ‘prey’, we can start using the principle of countershading to influence façade shapes that conceal or distort the façade.

This would mean exploiting an internet camera's vantage point, whether from a cityside cam or Google Street View; from a fixed vantage point, many optical illusions could be deployed for various effects. Michael Bach discusses the illusion of size constancy, especially when distance information is not available. We instantly read flat two-dimensional renderings of objects as three-dimensional, so through an image such as the view from an internet camera, the opportunities to exploit facades and create façade adjustments are numerous.

In The Artful Eye, Ramachandran discusses the relationship between depth perception and the role that shadows play in helping us determine the solidity and shape of objects. The visual system generally assumes that there is only a single source of light, that the light comes from above like the sun, and that objects are usually convex, that is, protruding. This is useful for determining the size, shape and depth of complex, unfamiliar objects.
Therefore, when a series of shaded ‘eggs’ is produced, those with shading on the bottom half are seen as convex, and those with shading on the top half are seen as concave.
Visual tests can be run using the effect of shadow to change the three-dimensional reading of a façade.

Felice Varini is an artist who predominantly creates perspectival artwork. Using the rules of perspective and exploiting the vantage point, Varini paints objects that seem to float in the air.

As internet cameras develop better technology and become practically omniscient in society, it becomes harder to effect change between camera and façade, especially through colour (the relationship between viewer and viewed). However, given the fixed vantage point of a web camera, the tool of illusion remains a viable option, whether as a strategic tool or a whimsical aesthetic addition to the façade.


As a thought
Also, given the technology used in internet cameras, and again as a way of controlling the image of the façade by exploiting the weaknesses of the peering electronic eye, one could theoretically design a façade to appear outside an internet camera's visual perceptive range. During the brightest hours a building could literally disappear by reflecting light in the near-IR range, so that the internet camera fails to perceive and collect the light source, rendering the building imperceptible. However, with the advent of the digital signal processing (DSP) camera and back-light compensation (BLC), bright image areas and low-light areas that could once only be perceived by high-end cameras with digital back-light masking capabilities can now be processed quite easily by any camera with DSP compensation.


Bibliography
Bach, M. and Poloschek, C. 2006, ‘Optical Illusions’, Visual Neuroscience, www.michaelbach.de/ot/index.html

Fraser, B. 1996, ‘Color’, Adobe Magazine, www.adobe.com/products/adobemag/archive/pdfs/9611febf.pdf

Kruegle, H. 2007, CCTV Surveillance: Analog and Digital Video Practices and Technology, Elsevier Butterworth-Heinemann, Amsterdam.

Ramachandran, V. S. 1995, ‘2-D or not 2-D – that is the question’, in Gregory, R. L., Harris, J. and Heard, P. (eds), The Artful Eye, Oxford University Press, Oxford, pp. 249-267.

Savard, J., ‘Colour Filter Array Designs’, http://www.quadibloc.com/other/cfaint.htm

Whalen, M., ‘Capturing Colour’, http://www.definitionmagazine.com/issue_pdfs/def32/sensors.pdf