Monday, October 12, 2009

Nikola Kovac: Week 6 Camera Technology, Resolution and Scale

A digital camera works by focusing light through the lens onto a digital sensor. The sensor can be either a CCD or a CMOS sensor and is usually coupled with a colour filter array placed over it. The colour filter array filters the incoming light by wavelength range before it hits the sensor, and the resulting raw data then undergoes a demosaicing process to produce the final image.
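
To make the mosaic-then-demosaic pipeline concrete, here is a minimal Python/NumPy sketch. The RGGB layout and the crude 3x3 bilinear interpolation are my own simplifying assumptions; real cameras use far more sophisticated demosaicing.

```python
import numpy as np
from scipy.signal import convolve2d

def bayer_mosaic(rgb):
    """Simulate an RGGB colour filter array: keep only one colour sample per pixel."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w), dtype=np.float32)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red at even rows, even columns
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue at odd rows, odd columns
    return mosaic

def demosaic_bilinear(mosaic):
    """Very crude demosaic: average each channel's known samples over a 3x3 window."""
    h, w = mosaic.shape
    masks = np.zeros((h, w, 3), dtype=np.float32)
    masks[0::2, 0::2, 0] = 1  # red sample sites
    masks[0::2, 1::2, 1] = 1  # green sample sites
    masks[1::2, 0::2, 1] = 1
    masks[1::2, 1::2, 2] = 1  # blue sample sites
    kernel = np.ones((3, 3), dtype=np.float32)
    out = np.zeros((h, w, 3), dtype=np.float32)
    for c in range(3):
        samples = mosaic * masks[..., c]
        known_sum = convolve2d(samples, kernel, mode="same")
        known_count = convolve2d(masks[..., c], kernel, mode="same")
        out[..., c] = known_sum / np.maximum(known_count, 1e-6)
    return out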

The standard RGB colour filter array is the Bayer filter. As John Savard points out in Colour Filter Array Designs, the Bayer array is 50% green, 25% red and 25% blue. These are the three primary colours of our visual system, and the extra green reflects the fact that green is the dominant component contributing to spatial resolution. The RGB colour model is additive, which means that colours are either placed close to each other or flashed in quick succession so that their light sums to produce the different colours of the gamut. When red, green and blue are added at equal intensities they produce white. Red and green combine to make yellow, green and blue make cyan, and blue and red make magenta, and this is the basis of the CMY colour model. CMY is a subtractive colour model in that it subtracts wavelengths from white to produce colour.
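
A tiny NumPy sketch makes the additive/subtractive distinction concrete; the 0 to 1 values are simply illustrative intensities.

```python
import numpy as np

red   = np.array([1.0, 0.0, 0.0])
green = np.array([0.0, 1.0, 0.0])
blue  = np.array([0.0, 0.0, 1.0])

# Additive mixing: light sources summing together.
print(red + green)           # [1, 1, 0] -> yellow
print(green + blue)          # [0, 1, 1] -> cyan
print(blue + red)            # [1, 0, 1] -> magenta
print(red + green + blue)    # [1, 1, 1] -> white

# Subtractive (CMY) view: each ink removes its complement from white,
# and stacking all three filters removes everything, leaving black.
white = np.ones(3)
cyan, magenta, yellow = white - red, white - green, white - blue
print(cyan * magenta * yellow)   # [0, 0, 0] -> black
```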

There are also colour filter arrays based on the CYYM and CYGM models, just to name a few. As Savard highlights in his piece, there are endless possible combinations for how a colour filter array can be organised; if someone were able to produce all of these filters and you tried them, you would get anything from slight to very different variations in their output. The Foveon is another popular sensor design: it uses three stacked active pixel sensors, and because different wavelengths are absorbed at different depths, each layer captures the full colour information at every pixel rather than the fraction a filter array would pass.

The interesting thing about this is the repercussions it has for how colour is received, and for how the urban environment is then viewed when the filtering is altered. Many webcam manufacturers' websites provide codes to unlock the camera's raw mode. By doing this you get to experience the raw Bayer-filtered output before the hardware refines it into a finished image. The Logitech website warns that if you do this you will “lose all the benefits of the many days that Logitech engineers spent on tuning the hardware. You have been warned.”

Alternatively, many at-home astronomers using digital viewing scopes have created a range of freeware applications to alter the state of your webcam's filtering. Open-source software like AVIRaw lets you experiment with the outputs created by reconfiguring the assumed Bayer pattern from GR to RG to BG to GB.
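
The same experiment can be sketched with OpenCV rather than AVIRaw itself, assuming you have already captured a raw, single-channel Bayer frame to a file (the filename and pattern labels below are placeholders; OpenCV's Bayer naming conventions differ between tools).

```python
import cv2

# A raw Bayer frame is a single-channel image: one colour sample per pixel.
raw = cv2.imread("raw_frame.png", cv2.IMREAD_GRAYSCALE)

# Demosaic the same data under different assumed Bayer orderings.
# Choosing the "wrong" ordering swaps or tints the colours, which is exactly
# the kind of reconfiguration tools like AVIRaw let you play with.
patterns = {
    "GR": cv2.COLOR_BayerGR2BGR,
    "RG": cv2.COLOR_BayerRG2BGR,
    "BG": cv2.COLOR_BayerBG2BGR,
    "GB": cv2.COLOR_BayerGB2BGR,
}
for name, code in patterns.items():
    cv2.imwrite(f"demosaic_{name}.png", cv2.cvtColor(raw, code))
```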

In terms of architectural applications, and how this can shift the way we as designers work or push interactive architecture beyond just a media façade, I was interested in the Bamboostic project by the Hyperbody group at TU Delft.

It’s an interactive installation influenced by CCTV surveillance, and it suggests some possible architectural applications of security webcams and their sensitivity to different colours. In one room there is an aquarium with a goldfish. The goldfish’s movements are tracked by a webcam driven by Max/MSP software, which detects groupings of pixels in a similar colour range. In this case the goldfish stands out against the aquarium, and the computer maps patterns of movement based on the colour orange. In another room people are encouraged to enter a bamboo forest, and another camera, driven by Virtools, tracks movement based on changes in pixel colour. These occur when someone wearing a shirt in a colour different from that of the floor enters the space. The system then calculates the average coordinates of those changing pixel patterns in the space.
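
The installation itself runs on Max/MSP and Virtools, but the underlying idea, thresholding a colour range and averaging the coordinates of the matching pixels, can be sketched in a few lines of Python/OpenCV. The HSV range used for "orange" below is an assumption.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture(0)   # the webcam watching the aquarium
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Keep only pixels inside an (assumed) orange hue/saturation/value range.
    mask = cv2.inRange(hsv, (5, 100, 100), (25, 255, 255))
    ys, xs = np.nonzero(mask)
    if len(xs) > 0:
        # Average coordinates of the matching pixel cluster: the "goldfish position".
        cx, cy = xs.mean(), ys.mean()
        cv2.circle(frame, (int(cx), int(cy)), 10, (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```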

Using the vector output of the fish and the vector output created by the movement of people, the system produces “different attractors at different strengths, and creates unpredictable swarm behaviour”, causing the muscles of the bamboo poles to contract and move the poles towards and away from the vector pathways in the system.
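
As a minimal sketch of how several attractors of different strengths could be blended into one pull on a pole (the inverse-square weighting here is my own assumption, not Hyperbody's actual algorithm):

```python
import numpy as np

def resultant_pull(pole_pos, attractors):
    """attractors: list of (position, strength) pairs; returns the net pull vector."""
    pull = np.zeros(2)
    for pos, strength in attractors:
        offset = np.asarray(pos, dtype=float) - pole_pos
        dist = np.linalg.norm(offset) + 1e-6
        pull += strength * offset / dist**2   # nearer and stronger attractors dominate
    return pull

# e.g. the goldfish centroid and a visitor's centroid, at different strengths
print(resultant_pull(np.array([0.0, 0.0]),
                     [((1.0, 2.0), 0.8), ((-3.0, 1.0), 0.3)]))
```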

A similar example is the Chameleon Chair Lounge exhibit. It was an art installation, but it shows the potential of webcams for recognising and responding to pixel changes. The chairs detect colour changes in clusters of pixels, calculate the average colour of each cluster, identify it within the Lab colour space, and project that colour back as light from within the chair. This has some interesting architectural possibilities for façade treatments once the technology develops further.
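
A rough sketch of that pipeline with OpenCV: average a pixel cluster and express the result in the Lab colour space. The frame file and region of interest below are placeholders.

```python
import cv2
import numpy as np

frame = cv2.imread("frame.png")        # a captured webcam frame (placeholder)
roi = frame[100:200, 150:250]          # assumed pixel cluster around the sitter

# Average colour of the cluster, then express it in the Lab colour space.
mean_bgr = roi.reshape(-1, 3).mean(axis=0).astype(np.uint8)
mean_lab = cv2.cvtColor(mean_bgr.reshape(1, 1, 3), cv2.COLOR_BGR2Lab)[0, 0]
print("average BGR:", mean_bgr, "-> Lab:", mean_lab)
# The chair would then drive its internal light towards this Lab value.
```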

I did a bit more research on the webcam as a viewing port for anamorphic display within urban spaces, and I stumbled across something seemingly silly that is nonetheless gathering quite a bit of hype.

So what you are seeing here is augmented reality. The way this works is loosely similar to how a green screen works in film. In reality it is just a planar marker printed on a piece of paper; however, when we see it through the lens of a webcam we experience something entirely different. Papervision3D is an open-source library for Flash CS3, and it is quite simple in what it does. Using Maya, 3ds Max or even Google SketchUp you generate a model. Once you have done that, you bring the Papervision3D code into Flash. The code tells Flash and the webcam to target an image symbol; the camera uses that symbol as the reference point of an xyz coordinate plane, so through the viewing port of the webcam you are experiencing an augmented reality. Flash then renders the model and projects it onto your target in the video.
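
The blog's example uses Papervision3D in Flash, but the core step, treating the printed symbol as the origin of an xyz coordinate frame, can be sketched with OpenCV's solvePnP. This assumes the four corners of the marker have already been detected in the webcam image and that the camera intrinsics are known; all the numbers below are placeholders.

```python
import cv2
import numpy as np

# The printed marker defines the world frame: a 10 cm square on the z=0 plane.
marker_3d = np.array([[0.0, 0.0, 0], [0.1, 0.0, 0], [0.1, 0.1, 0], [0.0, 0.1, 0]],
                     dtype=np.float32)

# Assumed: the marker's corners as found in the webcam image (pixel coordinates).
corners_2d = np.array([[320, 240], [420, 235], [425, 335], [318, 340]],
                      dtype=np.float32)

# Assumed pinhole intrinsics: focal length and principal point in pixels.
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float32)
dist = np.zeros(5)

# Recover the camera pose relative to the marker: this is the coordinate frame
# the virtual model gets placed in.
ok, rvec, tvec = cv2.solvePnP(marker_3d, corners_2d, K, dist)
if ok:
    # Project a virtual point hovering above the marker back into the image.
    pt, _ = cv2.projectPoints(np.array([[0.05, 0.05, -0.1]], dtype=np.float32),
                              rvec, tvec, K, dist)
    print("draw the model around pixel", pt.ravel())
```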

The manipulation of what the webcam does has a lot of implications for how we view the urban environment. You can view this world through a webcam, but you can also view it on the iPhone, so that if you were to scan a space with AR glyphs hidden throughout it, you would see a new world of hidden geometries. There are even AR media goggles on the market now, which people have used to create some really crazy worlds within worlds.

Architecturally speaking, through the use of projectors acting as windows, you could create some interesting optical illusions and spatial challenges.

In discussing the possibilities of shading, Ramachandran, in his ‘2D or Not 2D’, asks why evolution granted some species the ability to shade (countershade) themselves. He questions traditional Gestalt accounts by suggesting that there may be a deeper evolutionary reason why our visual system associates shading with light sources, and ultimately with three-dimensionality, and he explains how the brain justifies the shading patterns it sees.

A simple example he details in his piece shows an image on the left that can be read as either convex or concave: it cannot be perceived as both simultaneously, and we have a clear sense of where the light source is coming from. With the image on the right, however, we cannot identify a single light source, and we have to decide individually whether each circle is convex or concave.
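
This kind of stimulus is easy to reproduce: a disc shaded light at the top and dark at the bottom tends to read as a bump under overhead light, and the flipped gradient tends to read as a dent. A minimal NumPy sketch of such a disc (sizes and values are arbitrary):

```python
import numpy as np

def shaded_disc(size=101, light_on_top=True):
    """A grey image of a disc with a vertical brightness gradient."""
    y, x = np.mgrid[0:size, 0:size]
    c = (size - 1) / 2
    inside = (x - c) ** 2 + (y - c) ** 2 <= c ** 2
    gradient = 1.0 - y / (size - 1)       # bright at the top, dark at the bottom
    if not light_on_top:
        gradient = 1.0 - gradient          # flipped gradient
    img = np.full((size, size), 0.5)       # mid-grey background
    img[inside] = gradient[inside]
    return img

bump = shaded_disc(light_on_top=True)    # typically perceived as convex
dent = shaded_disc(light_on_top=False)   # typically perceived as concave
```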

He goes on to talk about our natural assumption that light comes from above. In this image he explains that the shapes in the bottom panel read as holes while those in the top read as convex. Ramachandran describes studies in which subjects tilted their heads by even 15 to 20 degrees, and their sense of depth was found to be determined by the shading rather than by retinal or gravitational cues.

I was interested in what architectural applications I could draw from this theory. This is a projection-mapping piece, 555 KUBIK. What we see here is merely a projection onto an existing building, but the ideas of countershading that Ramachandran highlights are quite visible: our visual system infers light sources, which gives these images their three-dimensionality. As the animation progresses certain elements appear convex and others concave, and at various times the direction of the shadows changes, yet to us it always has a sensible three-dimensionality despite the absurdity of moving concrete panels.
