Showing posts with label canberra. Show all posts

Saturday, December 12, 2009

Data Walks - a #climatedata proposal

In response to the UK Met Office's recent data release and Manuel Lima's call for visualisations, there's been a flurry of #climatedata activity in the last couple of days, including some revealing visualisations. Though I'm looking forward to playing with the data myself, this isn't a post about visualisation. It's a simpler proposal for a way to make the data tangible.


Global warming is ultimately a question about change in a single measurement - temperature - over time. One way or another, it can be boiled down to a line graph. How best to make that line tangible? Visualisation is great, but how else could we feel those changes, especially over time? One way would be to walk the data. We could make a kind of giant line graph, in the form of a path or road, then walk from 1850 to 2009. According to the Met Office's graph - remixed above with a picture of my local landscape - this would be a fairly undulating journey, but the last half especially would be a distinct and noticeable climb. Building this path at a walkable scale seems like hard work though. It would be much easier to use the paths we already have. So, here's a recipe for a #climatedata walk:

  1. Make a graph. There are all kinds of options here. The Met Office graph shows global difference from a long-term (1961-1990) average. You could for example use local data only, or use raw average temperatures rather than difference from average. You would also need to select a year range from the data - want to walk the whole century or just post-WW2? All the data choices should be made clear to any walkers.
  2. Fit to landscape. This is the tricky part. The idea would be to find a walkable route with changes in elevation that fit your line graph well. Finding a perfect fit will be very difficult, but finding an OK fit should be possible. This will involve some scaling questions: how long will the walk be, and how much elevation will it cover? Accessibility, ergonomics, experience design, affect - lots of juicy design decisions here. One crude but easy fitting procedure would be to begin with a route, find its elevation profile, scale the graph so its start and end points match the route's start and end, then note the points where the path and the graph intersect. Maybe some GIS / maps people could help with software tools here for route finding and fitting?
  3. Tick marks. Walk the route and mark it out in order to make the whole thing legible. Mark out years or decades, as well as temperature variation (elevation). One option for paths with an imperfect fit would be to notate the difference between the path and the graph at certain points, as well as points where the path and the graph intersect.
  4. Walk. Again you can imagine many ways to do this, ranging from big organised public walks, to smaller private ones. Of course walking often leads to talking - and in a different way to, say, looking at a graph.
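For the curious, here's a rough Python sketch of the crude fitting procedure from step 2. All the numbers and function names here are made up for illustration; a real attempt would pull the anomaly series from the Met Office data and the elevation profile from a mapping service.

```python
# Hypothetical sketch of the "crude but easy" fit: scale a temperature
# series so its first and last values line up with the start and end
# elevations of a route, then find where the graph crosses the profile.

def scale_to_route(anomalies, start_elev, end_elev):
    """Linearly map the series so anomalies[0] -> start_elev and
    anomalies[-1] -> end_elev."""
    a0, a1 = anomalies[0], anomalies[-1]
    if a1 == a0:
        raise ValueError("series must change between its endpoints")
    span = (end_elev - start_elev) / (a1 - a0)
    scaled = [start_elev + (a - a0) * span for a in anomalies]
    scaled[-1] = float(end_elev)   # pin the endpoint against float drift
    return scaled

def crossings(graph, profile):
    """Indices where the scaled graph crosses the elevation profile
    (sign change of the difference) - candidate tick-mark points."""
    diff = [g - p for g, p in zip(graph, profile)]
    return [i for i in range(1, len(diff)) if diff[i - 1] * diff[i] < 0]

anomalies = [-0.4, -0.3, -0.1, 0.2, 0.5]   # toy 5-point anomaly series
profile = [600, 610, 604, 626, 645]        # toy route elevations (m)
graph = scale_to_route(anomalies, profile[0], profile[-1])
print([round(g) for g in graph])           # [600, 605, 615, 630, 645]
print(crossings(graph, profile))           # [2]
```

Notating the difference between `graph` and `profile` at each tick would give you step 3's annotations for an imperfect fit.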
I should emphasise that I haven't even tried this, yet, but I hope to - Canberrans, if you're interested in helping organise a walk here, let me know. Wherever you are, if you do try it, let me know - also feel free to adapt / refine / repurpose the procedure. Could be fun, even informative - at the very least, you'll walk up a hill.


Wednesday, April 15, 2009

Master of Digital Design / Grow Your Own Logotype

Over the past year or so I've been working on a major new offering here at UC. So, I'm delighted to finally launch the new Master of Digital Design online. This course will offer something quite unique in the Australian context: a trans-disciplinary coursework Masters focused on digital practice for designers and creative practitioners of all sorts. The key practical approaches are generative techniques, data visualisation and design, and physical computing; and we'll be using these to address three core themes or questions: the urban, the public, and the sustainable.

As readers of this blog will know, these themes and approaches are right in line with my own research and creative interests; so frankly, I'm thrilled to be leading this course. Teaching with me will be a crew of talented designers, artists and researchers including Stephen Barrass, Sam Hinton and Geoff Hinchcliffe. Finally, we'll be drawing on the wisdom and experience of an international advisory panel whose work exemplifies what we mean by digital design - a practice that engages deeply, and critically, with digital processes, digital materials, and digital contexts: Karsten Schmidt, Rory Hyde, Nervous System, Anthony Burke and foAM.


The course launch has also provided a great excuse (er, opportunity) to play with some ideas around generative branding and marketing. I've been tinkering with this logotype for ages; it uses the same basic algorithm as Limits to Growth but artificially constrains the growth to a letterform (in the guise of a hidden bitmap image). Lately I've extended the logotype into a little generative marketing gadget; a Processing applet that lets you grow endless variations, and receive the results as a PDF file, attached to an email. The aim is to provide a little taste of the power - and pleasure - of generative design.

Behind the scenes this project was yet another demonstration of the brilliance of Processing and its community. The key technical challenge was the upload-and-email functionality. Seltar's "save to web" hack provided the template; upload image data over HTTP, and have a PHP script catch and save the file. From there it was relatively straightforward to have PHP generate the email, with the help of the PEAR Mail_Mime package. The final step was uploading a PDF, rather than a bitmap. This seemed impossible, because the built-in PDF library needs to write a local file, which means the extra annoyance of a signed applet. I posted a query on the Processing forums and within 24 hours PhiLho saved me with a solution that extends the PDF class to allow access to the PDF data as a byte array, without first saving the file. Amazing: thank you! Add the super-useful ControlP5 for the UI sliders and buttons, and the whole thing is built on, in and with free, open-source software. Again, a demonstration of why digital design is such an exciting field of practice right now.


Sunday, March 15, 2009

Watching the Street (Navigator) / citySCENE

Vague Terrain 13: citySCENE has just launched. As editor Greg J. Smith writes:

This issue of Vague Terrain is founded on two notions - that the city is a stage set for intervention and an engine for representation.
The collection expands out from this premise in multiple directions: carto-mashups, projection-bombing, sound walks, psychogeographic imaging and ubicomp experiments. Early highlights for me included Crisis Fronts' Cognitive Maps and Database Urbanisms, which presents some impressive work on data visualisation and generative models as urban mapping strategies (below: Case Study: Los Angeles). Overall, on a first look, this collection is incredibly rich. It shows that a creative, wired-up, critical urbanism is not just a wistful aspiration of the technorati, but a real practice.


Having said all that, it's a privilege to be a part of this collection. My contribution is Watching the Street (Navigator), a browsable visualisation of a single day of images from the Watching the Street dataset. It tests out the hunch that these time-lapse slit-scans can be used to read real patterns in the urban environment - that they are (or can be) more than just suggestive abstractions. It uses a simple interface to display both a single source frame, and a correlated slit-scan visualisation, with image-space and time-space sharing an axis, a bit like a slide rule. Greg Smith called it an "urban viewfinder", which sums the intention up nicely.


Playing with the navigator for a while seems to confirm that hunch. The composites reveal temporal patterns in the environment, but not the spatial context that allows us to identify their causes; the source frames show that spatial context, but not the change over time. Reading the two against each other involves chains and cycles of discovery, analysis and inference. These might be open-ended (spatiotemporal browsing) or more directed. What time do the sandwich-boards go out? How long does the delivery truck stay?

Building the navigator presented some interesting technical challenges: mainly, how to make a web-friendly interface to 1440 source frames (240 x 320) and 480 slit-scan composites (720 x 320). That adds up to about 75 MB of JPEGs. Processing 1.0 came to the rescue, with its new built-in dynamic image loader. requestImage() pulls in an image from a given URL, on cue, without bringing the whole applet to a grinding halt; it provides some basic feedback on the state of that image - whether it's loading, loaded, or un-loadable. I also blundered into two other useful lessons: how to use the applet "base" parameter, and how to manage Java's local cache, which kept throwing up earlier versions of the applet during testing.
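The slide-rule coupling between the two views boils down to some index arithmetic. Here's a hypothetical Python sketch of it - the frame counts are from the post (1440 one-per-minute frames, 480 composites of 720 px), but the assumption that each composite covers three consecutive minutes is my own, purely for illustration:

```python
# Map a minute of the day to its source frame and to the (composite,
# column) holding that minute's slit - the shared axis that lets the
# frame view and the slit-scan view act like a slide rule.

FRAMES = 1440                  # one source frame per minute
COMPOSITES = 480               # slit-scan composites
COMPOSITE_W = 720              # pixels per composite

MINUTES_PER_COMPOSITE = FRAMES // COMPOSITES            # 3 (assumed layout)
COLS_PER_MINUTE = COMPOSITE_W // MINUTES_PER_COMPOSITE  # 240

def locate(minute):
    """(source frame, composite index, first column of that minute)."""
    comp = minute // MINUTES_PER_COMPOSITE
    col = (minute % MINUTES_PER_COMPOSITE) * COLS_PER_MINUTE
    return minute, comp, col

print(locate(0))     # (0, 0, 0)
print(locate(755))   # 12:35pm -> (755, 251, 480)
```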

Having made a lean, mean, browser-friendly version, I'm now thinking of adapting the navigator into a full-screen, offline app, with the whole eight-day dataset, and perhaps some tools for annotation and intra-day comparison. Best of all would be a long-term installation; a sort of urban space-time observatory, watching the street but also opening it up to ongoing interpretation. If you'd like it running in your foyer, let me know.


Friday, January 16, 2009

JCSMR Curls

This post is (belated) documentation of a project I worked on in 2007-8, creating an audio-responsive generative system for a permanent installation for the Jackie Chan Science Centre (yes, that Jackie Chan) at the John Curtin School of Medical Research, on the ANU campus. Along with some Processing-related nitty gritty, you'll find some broader reflections on generative systems and the design process. For less process and more product, skip straight to the generative applets (and turn on your sound input).

In mid 2007 my colleague Stephen Barrass and I were approached by Thylacine, a Canberra company specialising in urban art, industrial and exhibition design. Caolan Mitchell and Alexandra Gillespie were designing a new permanent exhibition, the first stage of the new Jackie Chan Science Centre, housed in a new building - a razor-sharp piece of contemporary architecture (below) by Melbourne firm Lyons. Instead of just bolting a display case and a few plaques to the wall, Mitchell and Gillespie (wonderfully) proposed a design that hinged on a dynamic generative motif - a system that would ebb and flow with its own life cycles, and echo the spiral / helix DNA structures central to the School's work, and already embedded in the building's architecture.


My initial sketches (below) took the spiral motif fairly literally, drawing vertical helices and varying their width with a combination of mouse movement and a simple sin function - the results reminded me of the beautiful spiral egg cases of the Port Jackson Shark. At that stage we were talking about the possibility of projecting back onto the facade of the building, which has big vertical glass panels; this structure informed the vertical format. I made a quick video mockup of the form on the facade - which was incredibly easy, thanks to the robust, adaptable, extendable goodness of Processing (a recurring theme in the process to come).


These sketches meet the simplest criteria of the brief (spiral forms) but do nothing about the more interesting (and difficult) ones: cycles of birth, growth and death, and dynamics over multiple time scales. Over the next couple of months I developed two or three different approaches to this goal.

The phyllotaxis model blogged earlier was one attempt. Spurred on by the hardcore a-life skills of Jon McCormack and co. at CEMA, I built a system in which phyllotactic spirals self-organised spontaneously. Because, as Jon puts it, anyone can draw a spiral; what you really want is a system out of which spirals emerge! The model worked, but I had trouble figuring out how phyllotactic spiral forms might meaningfully die or reproduce. Also, by that stage I had two other systems that seemed more promising.

From the early stages I wanted to make the system respond to environmental audio. The installation would be in a public foyer with plenty of pedestrian traffic, so audio promised a way to tap in to the building's rhythms of activity at long time scales, as well as convey an instantaneous sense of live interaction. In the two most developed sketches audio plays a key role in the life cycle of the system.

One sketch moved into 2D, and started with a pre-existing model for growth, by way of the Eden growth algorithm (this system would later be adapted again into Limits to Growth). I had already been playing with an "off-lattice" Eden-like system where circular cells could grow at any angle to their parent (rather than the square grid of the original Eden model). This system also made it easy to vary the radius of those cells individually. The next step was to couple live audio to the system; following a physical metaphor, frequency is mapped to cell size, so that larger cells respond to low frequency bands, and smaller cells to high frequencies. Incoming sound adds to a cell's energy parameter; this energy gradually decays over time in the absence of sound. Cell reproduction, logically enough, is conditional on energy.


The result is that cells which are best "tuned" for the current audio spectrum will accumulate more energy, and so are more likely to reproduce, spawning a neighbour whose size (and thus tuning) is similar to, but not the same as, their own; so over time the system generates a range of different cell sizes, but only the well-tuned survive. The rest die, which in the best artificial life tradition, means they just go away - no mess, no fuss. In the image below cells are rendered with stroke thickness mapped to energy level. The curves and branches pop out of rules sprinkled lightly with random(), resulting in a loose take on the spiral motif, which is probably the weak point in this sketch. I still think it has potential - nightclub videowall, anyone? Try the live applet over here (adjust your audio input levels to control the growth / death balance).
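That loop - band energy in, decay, reproduce when flush, vanish when starved - is compact enough to sketch. This is my own Python reconstruction of the logic described above, not the installation code; all the constants are invented:

```python
import random

# Each cell is tuned by its size to one frequency band; band energy
# feeds the cell, energy decays each step, and reproduction (with a
# slightly mutated size) is conditional on accumulated energy.

DECAY = 0.9
REPRO_THRESHOLD = 5.0
REPRO_COST = 3.0
DEATH_THRESHOLD = 0.1

def step(cells, spectrum, rng):
    """cells: list of {'size': band index, 'energy': float}.
    spectrum: per-band audio energy for this frame."""
    born = []
    for c in cells:
        c['energy'] = c['energy'] * DECAY + spectrum[c['size']]
        if c['energy'] > REPRO_THRESHOLD:
            c['energy'] -= REPRO_COST
            child_size = min(len(spectrum) - 1,
                             max(0, c['size'] + rng.choice([-1, 0, 1])))
            born.append({'size': child_size, 'energy': 1.0})
    # death, a-life style: starved cells just go away - no mess, no fuss
    return [c for c in cells if c['energy'] > DEATH_THRESHOLD] + born

rng = random.Random(1)
cells = [{'size': 2, 'energy': 1.0}]
spectrum = [0.0, 0.0, 4.0, 0.0]   # all the sound energy in band 2
for _ in range(20):
    cells = step(cells, spectrum, rng)
print(len(cells))   # population has grown; well-tuned cells dominate
```

Run it with a different spectrum and the population shifts toward whichever sizes are "in tune" - the selection dynamic described above.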


The third model takes this approach to energy and reproduction - about the simplest possible a-life simulation - and folds it back into the helical structures of the first sketches. In this world an individual is a 3d helix, built from simple line segments. Again each individual is tuned to a frequency band, which supplies energy for growth; but here "growth" means adding segments to the helix, extending its length. Individuals can "reproduce", given enough energy, but here reproducing means spawning a whole new helix, with a slightly mutated frequency band. All the helices grow from the same origin point - they form a colony, something like a clump of grass.


This sketch went through many variants and iterations over the next month or so; in retrospect the process of working to a brief, within a design team, pushed this system further than I ever would have taken it myself. At the same time I was testing the system against my own critical position; I've argued earlier that the generative model matters, not just for its generativity but the entities and relations it involves.


From that perspective this system was full of holes. Death was arbitrary: just a timer measuring a fixed life-span. "Growth" was a misnomer: the number of segments was simply a rolling average of the energy in the curl's frequency band, so the curls were really no more than slow-motion level meters. Taking the organic / metabolic analogy more seriously, I worked out a better solution. An organism needs a certain amount of energy just to function; and the bigger the organism, the more energy it needs. If it gets more than it needs, then it can grow; if it gets less than it needs, for long enough, it will die. So this is a simple metabolic logic that can link growth, energy and death. Translated into the world of the curls: for each time step, every curl has an energy threshold, which is proportional to its size (in line segments); if the spectral energy in its band is far enough over that threshold, it adds a segment - like adding a new cell to its body; if the energy is under that threshold, it doesn't grow; and if it remains in stasis for too long, it dies. Funnily enough, the behaviour that results is only subtly different to the simple windowed average. Does the model really matter, in that case? It does for me at least; if and how it matters for others is another question.
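The metabolic rule reduces to a few lines. Here it is as a Python sketch (the constants are made up; the actual curls use Processing and real spectral data):

```python
# A curl's energy threshold is proportional to its length in segments;
# surplus energy adds a segment, a long enough stasis kills it.

COST_PER_SEGMENT = 0.5   # energy needed per time step, per segment
GROWTH_MARGIN = 1.0      # how far over threshold before adding a segment
MAX_STASIS = 10          # steps without growth before death

def update_curl(curl, band_energy):
    """curl: {'segments': int, 'stasis': int, 'alive': bool}"""
    if not curl['alive']:
        return curl
    threshold = curl['segments'] * COST_PER_SEGMENT
    if band_energy >= threshold + GROWTH_MARGIN:
        curl['segments'] += 1    # grow: add a "cell" to the body
        curl['stasis'] = 0
    else:
        curl['stasis'] += 1      # not enough energy to grow
        if curl['stasis'] > MAX_STASIS:
            curl['alive'] = False
    return curl

curl = {'segments': 1, 'stasis': 0, 'alive': True}
for _ in range(5):
    update_curl(curl, band_energy=10.0)   # plenty of energy: grows
print(curl['segments'])                   # 6
for _ in range(20):
    update_curl(curl, band_energy=0.0)    # starved: stasis, then death
print(curl['alive'])                      # False
```

Note how the bigger the curl gets, the more energy it needs just to keep growing - the core of the metabolic logic.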


Next, the curls developed a more complex life-cycle - credit to Alex Gillespie for urging me in this direction. In line with the grass analogy, curls grow a "seed" at their tip when they are in stasis; when they die, that seed is released into the world. Like real seeds, these can lie dormant indefinitely before being revived - here, by a burst of energy in their specific frequency band. After several iterations, the seed form settled on a circle that gradually grows spikes, all the while being blown back "down" the world (against the direction of growth) by audio energy (below). As well as adding graphic variety, seeds change the system's overall dynamics. Unlike spawned curls, seeds are genetically identical to their "parent" - attributes such as frequency band are passed on unaltered. Because each individual can make only one seed, that seed is a way for the curl to go dormant in lean times; if it gets another burst of energy, it can be reborn. The curls demo applet demonstrates this best (again, adjust your audio input and make some noise).


A few technical notes. One big lesson here was the power of transform-based geometry. Each curl is a sequence of line segments whose length relates to frequency band (lower tuned curls have longer segments); each segment is tilted (rotateZ), then translated along the x axis to the correct spot. A sine function is used to modulate the radius of each curl along its length; frequency band factors in here too; this radius is expressed as a y axis translation. Then the segment is rotated around the x axis, to give depth. I iterate this a few hundred times to get one curl, and repeat this process up to twenty times to draw the whole world - each curl has its own parameters for tilt, x rotation increment, and frequency band.
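For readers who haven't played with transform-based geometry: the idea is that you never compute helix coordinates directly; each segment just applies a small rotation and translation on top of the accumulated transform. Here's a stripped-down Python reconstruction of that chain (my own approximation of the description above, with a 3x3 rotation matrix standing in for Processing's transform stack, and arbitrary parameters):

```python
import math

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def rot_z(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def rot_x(t):
    c, s = math.cos(t), math.sin(t)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def apply(m, v):
    return tuple(sum(m[i][k] * v[k] for k in range(3)) for i in range(3))

def curl_points(n, seg_len, tilt, x_rot_inc, radius, freq):
    """Vertex positions for one curl of n segments: each step is a
    rotateZ tilt plus a rotateX for depth, then a translation whose
    y component is the sine-modulated radius."""
    pts = []
    origin = (0.0, 0.0, 0.0)
    frame = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]   # accumulated rotation
    for i in range(n):
        frame = mat_mul(frame, rot_z(tilt))
        frame = mat_mul(frame, rot_x(x_rot_inc))
        y = radius * math.sin(freq * i)          # sine-modulated radius
        step = apply(frame, (seg_len, y, 0.0))
        origin = tuple(o + s for o, s in zip(origin, step))
        pts.append(origin)
    return pts

pts = curl_points(n=100, seg_len=2.0, tilt=0.05, x_rot_inc=0.1,
                  radius=0.5, freq=0.3)
print(len(pts))   # 100 vertices tracing a tilted, twisting curl
```

In Processing you'd get the frame accumulation for free from pushMatrix() / rotateZ() / translate(), which is exactly why this style of geometry is so pleasant there.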

In the live applet audio energy ripples up the curls, from base to tip. This was added to reinforce the liveness of the system and add some rapid, moment-by-moment change. It was implemented very simply. I used a (Java) ArrayList to create a stack of audio level values; at each time step, the current audio level value is added at the head of the list, and the ArrayList politely shuffles all the other values along. So each segment's length is a combination of three values; the base segment length, a function to taper the curl towards the tip, and the buffered audio level.
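The level-buffer trick in miniature (a sketch, with Processing's ArrayList swapped for a Python deque and made-up taper constants): push the current audio level at the head, let the old values shuffle toward the tip, and read each segment's extra length from its slot in the buffer.

```python
from collections import deque

N_SEGMENTS = 8
levels = deque([0.0] * N_SEGMENTS, maxlen=N_SEGMENTS)

def tick(current_level):
    levels.appendleft(current_level)   # head of the list = base of the curl
    # each segment's length: base length * taper + buffered audio level
    return [1.0 * (1 - i / N_SEGMENTS) + levels[i]
            for i in range(N_SEGMENTS)]

tick(5.0)            # a burst of sound arrives at the base...
lengths = tick(0.0)  # ...one step later it has moved one segment up
print(lengths[1])    # 5.875 - the burst now sits at segment 1
```

Each time step the burst moves one slot further along, which is exactly the base-to-tip ripple.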


The graphics are all drawn with OpenGL - following flight404 I dabbled with GL blend modes, specifically additive blending, to get that luminous quality. The other key visual device here is the smearing caused by redrawing with a translucent rect(); instead of erasing the previous frame completely this fades it before overlaying the new frame. It's an easy trick that I've used before. But as Tom Carden explains, in OpenGL it leaves traces of previous frames. I discovered this firsthand when Alex and Caolan asked whether we could lose the "ghosts." I was baffled: on my dim old Powerbook screen, I simply hadn't seen them. Eventually, by juggling alpha values, I could reduce the "ghosts" to almost black (1) against the completely black (0) background - but no lower. Finally I just set the initial background to (1) instead of (0), and the ghosts were gone.
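Why the ghosts persist can be shown in toy arithmetic (a sketch of the effect, not the actual OpenGL pipeline - the alpha here is invented, so the floor value differs from the (1) in my case): fading with a translucent black rect computes roughly v' = v * (1 - alpha) each frame, and because the framebuffer rounds to an integer each time, the value stalls at a small non-zero floor instead of ever reaching zero.

```python
def fade(v, alpha, frames):
    """Repeatedly fade an 8-bit channel value toward black."""
    for _ in range(frames):
        v = round(v * (1 - alpha))   # integer framebuffer rounds each frame
    return v

print(fade(255, 0.05, 200))   # stalls at a small non-zero value, not 0
```

Hence the fix: if the residue can't fall below the floor, start the background at the floor value and it becomes invisible.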


The adaptability of Processing came through again when it came to realising the installation. The final spec was a single long custom-made display case, with three small, inset LCD panels. These screens would run slide shows expanding on the exhibition content, but also feature the generative graphics when idle; the case itself would also integrate the curls as a graphic motif. For the case graphics, I sent Thylacine an applet that output a PDF snapshot on a key press; they could generate the graphics as required, then import the files directly into their layout.

The screens posed some extra challenges. The initial idea was to have the screens switch between a Powerpoint slideshow, and the curls applet; but making this happen without window frames and other visual clutter was impossible. In the end it was easier to build a simple slide player into the applet: it reads in images from an external folder, allowing JCSMR to author and update the slideshow content independently.

So to wrap up the Processing rave: it provided a single integrated development and delivery tool for a project spanning print, screen, audio, interaction, animation and even content management. Being able to burrow straight through to Java is powerful. Development was seamlessly cross-platform; the whole thing was developed on a Mac, and now runs happily on a single Windows PC with three (modest) OpenGL video cards. The installation has run daily for over six months, without a hitch (touch wood).

Some installation shots below, though it's hard to photograph, being a glass-fronted cabinet in a bright foyer - reflection city. I'll add some better shots when I can get them. If you're in Canberra, drop in to the JCSMR - worth it for the building alone - and see it in person.





And very finally, photographic proof of the Jackie Chan connection - image from The Age.


Monday, October 27, 2008

Dorkbot CBR at Manuka CCAS


Dorkbot Canberra's inaugural group show opens Thursday November 6th at Canberra Contemporary Artspace Manuka. It's a great, super diverse lineup, including wearables, data art, solar power, generative grunge, drawing machines and audiovisuals. I'll be showing a big crop of prints from Limits to Growth, as well as doing a kind of urban version of Watching the Sky, gathering images from the street. Here's the full press release.


Wednesday, July 30, 2008

The Visible Archive

I signed the contracts this morning on a research project that I'm really excited about: a grant from the National Archives of Australia to develop interactive visualisations of their collection. That collection has over nine million items, grouped into some thirty thousand series (or sets); it's basically all of the Federal government's paperwork, but also includes photographs, AV material and other stuff. You can search the collection via the Archives site - and access digital copies of the original records in some cases.

The Visible Archive aims to do what the search interface doesn't: provide a sense of context and orientation, revealing structures and relations within the collection. The visualisations should be useful for both archivists and archive users; and the techniques developed should also be useful for other archives and collections.


The idea seems to have some currency - you may have seen Lev Manovich recently announce a project on Visualizing Cultural Patterns, working with collaborators including Noah Wardrip-Fruin.

Read more and follow the project at its own, freshly minted blog. And if you have any pointers to other related work in the visualisation of cultural datasets, especially archives, please send them along.


Friday, April 11, 2008

Wanted: Research Students (A Message from my Sponsor)

I've kept my academic day job out of this blog until now; but that's really a false distinction since the work presented here is largely supported by my employer. So with that in mind, a message from my sponsor - and actually, from me too.

I'm looking for research students! My research interests are pretty well represented by this blog, and visualised in the tag cloud: criticism, theory and practice in computational media, data practices, generative art, a-life art, experimental sound and music, digital culture in general. Together with my colleagues Stephen Barrass and Sam Hinton, we span internet history and theory, gaming, sonification, AR, perceptual approaches to HCI, and wearables. With our collective track record and mix of specialisations, we're one of the best groups in the country for this kind of work. What's more, our new Faculty of Design and Creative Practice now combines media arts with architecture, landscape architecture, cultural heritage, industrial design and graphic design, so there's a vast field of crossovers there. All our research programs encourage practice-led research, and thesis forms that combine writing with creative projects.

If this sounds like you, and you're interested in stand-alone Honours, Masters by Research or PhD study, get in touch.

We now return you to our scheduled programming.


Thursday, June 28, 2007

Dataesthetics - Close to Home

The data from the 2006 Australian census has just been released. In the last day or two the media have run the usual kind of headline stories - in which specific bits of data or comparisons are extracted, spun and narrativised; nationally, there's been some focus on increasing debt (and income); locally Canberrans have been portrayed as richer, more wired and more generous with their time than everyone else. This process of top-down public storytelling dominates our understanding of this kind of data - but perhaps that will change, because now the whole dataset is available online, for free. It's buried a few steps in, and yes it's in a proprietary (Excel) format, but it's all there for the munging.

I started browsing some data from my suburb, and focused on numbers of kids per mother per age group. It's coarse-grained data but evocative - birth rates suggest a lot about a society. Comparisons suburb by suburb also hint at distinct demographic patterns. I put together a quick visualisation, a stacked area graph (inspired in part by Lee Byron's beautiful last.fm vis). Another reference was the Japanese tradition of Koinobori, the carp pennants that celebrate Boys' (now Children's) Day. So, here are some statistical pennants - suburban emblems that encode demographic data. Maybe we could fly them at the shops, or individuals could annotate them by marking their own place in the local profile. It's fun to play amateur demographer (read on) but the point here is really proof of concept; if I can do this, so can lots and lots of others, and that's interesting in itself.


Each form shows the number of children per woman; the wide end is zero, the narrow end is six or more. So in all the pennants the initial dip shows the difference between the number of women without children, and women with one child; then more women with two kids, fewer with three and so on. The thicker tail visible in the second pennant shows a larger number of women with lots of kids. The bands in each pennant show age groups, with youngest at the top. Most young women have no kids - not a great surprise - but the forms also show older women with larger families, and the relative distribution of children by mother's age group, and how this varies with suburb. The bottom-most pennant comes from an old, wealthy suburb: lots of older women with two and three kids. Pennant two is from a semi-rural town, with a more even distribution of children through the age bands; pennant three is from a new suburb, with wide bands of small, relatively young families. Colours are arbitrary, for the moment.
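The geometry behind a pennant is just a stacked profile. Here's a Python sketch with hypothetical numbers (not the actual census figures): each age band contributes the count of women with 0, 1, 2, ... children, and the pennant's outline at each position is the stacked total, tapering toward the "six or more" end.

```python
# rows: age bands (youngest first); columns: women with 0..4+ children
counts = [
    [120, 30, 10, 2, 0],    # e.g. 15-24
    [60, 80, 70, 20, 5],    # e.g. 25-34
    [40, 50, 90, 40, 10],   # e.g. 35-44
]

def pennant_outline(rows):
    """Total width of the pennant at each children-count position."""
    return [sum(band[i] for band in rows) for i in range(len(rows[0]))]

def band_offsets(rows):
    """Cumulative stack offsets, youngest band drawn at the top."""
    offsets, running = [], [0] * len(rows[0])
    for band in rows:
        offsets.append(list(running))
        running = [r + b for r, b in zip(running, band)]
    return offsets

print(pennant_outline(counts))   # [220, 160, 170, 62, 15]
```

Note the initial dip (220 down to 160) followed by a bump - the no-children vs one-child vs two-children pattern described above.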

For more demographic data art see also Jason Salavon's American Varietal project, commissioned by the US Census Bureau.


Wednesday, June 20, 2007

ACMC07 - Warren Burt and Sebastian Tomczak

This year's Australasian Computer Music Conference is here in Canberra, hosted by Alistair Riddell at the CNMA. Though ironically I could only get there for the first day, here are a couple of choice morsels.

The opening keynote by Warren Burt took on the conference theme - "trans" - and delivered a dense core sample of transdisciplinarity in music, from the ancient Greeks to the West Coast musical avant-garde of the 70s, through to the present. Many of Burt's projects look fresh all over again - he's been doing audiovisual synthesis, sonification of complex systems, and bio-collaboration since back in the day. He also made some great points about the role of the avant-garde in transforming cultural systems, rather than just "playing new music in the same old venue" (a mistake he attributed to Richard Wagner and the Sex Pistols, among others). During the 70s and 80s Burt was involved with the Clifton Hill Community Music Centre, an experiment in new social contexts for music, affordable technology and anarchic DIY.

Meanwhile back in the present, Alex Thorogood presented some nifty hardware hacks splicing an Arduino board with the innards of a cheap MP3 player, for his Chatter and Listening sound sculpture project. Hardware of the day though was Sebastian Tomczak's amazing Toriton Plus, a homebrew controller based on lasers, photocells and water. I'll spare you a lengthy description, except to say that it's much more beautiful in live performance - here's the video.


Friday, February 16, 2007

Jonathan McCabe - Very Cellular Automata

A new year, and another exhibition from Jonathan McCabe at Canberra gallery/cafe The Front. The show, Travelling Wave, was shared with painter Luke Nilsen; it included some collaborative canvases, with Nilsen painting over McCabe's digital patterns, and new works from McCabe's Butterfly Origami and Nervous States processes. But also on the (very crowded) walls were images from a new McCabe process, based on cellular automata. In themselves the images are chunks of psychedelic maximalism, similar to McCabe's earlier work. But once again the real hook here is the mind bending and unusually rich generative process.

The generative system involves four linked cellular automata - think of them as layers. "Linked" because at each time step, a cell's state depends both on its neighbours in that layer, and on the states of the corresponding cells in the other three layers. Something like a three-dimensional CA, but not quite; the four layers influence each other through a weighted network of sixteen connections (a bit like a neural net). The pixels in the output image use three of the four CA layers for their red, green and blue values. (The images here show the full image on the left, and a 1:1 detail on the right.)



As in a conventional CA, each cell looks to its neighbours to determine its future state. This is a "totalistic" CA, which means each cell simply sums the values of its neighbours, then changes its state based on a table of transition rules. Now for the really good part: each cell also uses its recent history as an "offset" into that transition table. In other words, the past states of a cell transform the rules that cell is using. The result is a riot of feedback cycles operating between state sequences and rulesets; stable or periodically oscillating regions form, bounded by membrane-like surfaces where these feedback cycles compete. Structures even form inside the membranes - rule/state networks that can only exist between other zones.
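To make the mechanism concrete, here's a minimal 1D reimagining in Python - my own sketch from the description above, not McCabe's code (his is 2D, and the coupling weights and rule table here are arbitrary): four linked totalistic CAs, where each cell sums its neighbours plus the corresponding cells in the other layers, then uses its recent history as an offset into the transition table.

```python
import random

N_STATES = 4
WIDTH = 32
LAYERS = 4

rng = random.Random(42)
rules = [rng.randrange(N_STATES) for _ in range(64)]    # transition table
weights = [[rng.choice([0, 1]) for _ in range(LAYERS)]  # layer coupling net
           for _ in range(LAYERS)]

def step(grid, history):
    """grid[l][x]: cell state; history[l][x]: running sum of past
    states, used as the rule-table offset."""
    new = [[0] * WIDTH for _ in range(LAYERS)]
    for l in range(LAYERS):
        for x in range(WIDTH):
            total = grid[l][(x - 1) % WIDTH] + grid[l][(x + 1) % WIDTH]
            for m in range(LAYERS):          # cross-layer influence
                total += weights[l][m] * grid[m][x]
            offset = history[l][x]           # past states shift the rules
            new[l][x] = rules[(total + offset) % len(rules)]
            history[l][x] = (history[l][x] + new[l][x]) % len(rules)
    return new

grid = [[rng.randrange(N_STATES) for _ in range(WIDTH)]
        for _ in range(LAYERS)]
history = [[0] * WIDTH for _ in range(LAYERS)]
for _ in range(10):
    grid = step(grid, history)
print(len(grid), len(grid[0]))   # 4 32
```

The key line is the `(total + offset)` lookup: the same neighbourhood sum lands on different rules depending on the cell's past, which is where the competing "rule/state networks" come from.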



The images reinforce the biological analogy, but philosophically (ontologically?) this system is even wilder. It's inspiring to see a formal system where "the rules" are local and variable, rather than global and static. The way things are (or have just been) controls the rules that determine how things will be next - that much is historical relativism I suppose. But here we see regions of self-perpetuating but incompatible realities, competing for space - even states of being that only emerge where two or more reality-attractors meet. An ecology of ontologies, if you will.

McCabe has put up a page with lots of these images, including full resolution (2048x2048) jpegs.

Read More...

Thursday, November 30, 2006

Abstract Microecologies - Pierre Proske

A rare treat last night: some generative art in Canberra. The event was a one-night show at the Front gallery from Australian artist Pierre Proske, presenting the results of a residency at the ANU's Department of Archaeology and Natural History. Proske was embedded with the Palaeoworks group, who do palaeo- and archaeo-botany, mainly by way of using microscopes to look at ancient pollen.

The resulting works use micro-botanical images as poetic and aesthetic materials to reflect on the residency itself. In one series they're used to texture Superformula shapes, creating hyper-layered clouds that seem organically lumpy and mathematically crisp at the same time. In another series Proske used portraits of his Palaeoworks hosts to "seed" accumulations of tinted micro-blobs; they play on the edges of abstraction, at the same time evoking (for me at least) some big ideas about identity, multiplicity, and symbiogenesis.
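For anyone curious about the shapes themselves: the Superformula (Gielis's generalisation of the superellipse) gives a radius for each angle, and sweeping the angle traces the lumpy-but-crisp outlines Proske textures. The parameter values below are arbitrary examples, not taken from his work.

```python
import numpy as np

def superformula(theta, m=6, a=1.0, b=1.0, n1=0.3, n2=0.3, n3=0.3):
    """Gielis superformula: radius as a function of angle theta."""
    t = m * theta / 4.0
    return (np.abs(np.cos(t) / a) ** n2
            + np.abs(np.sin(t) / b) ** n3) ** (-1.0 / n1)

theta = np.linspace(0, 2 * np.pi, 512)
r = superformula(theta)
x, y = r * np.cos(theta), r * np.sin(theta)  # outline points to fill/texture
```

Varying m changes the rotational symmetry, while the n exponents push the form between star-like spikes and soft blobs.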



Proske's blog of the residency is a wealth of detail. His previous work is worth checking out too - the Intelligent Fridge Poetry Magnets (pdf) attracted widespread attention earlier this year; and apparently they may yet appear on a home appliance near you...

Read More...

Wednesday, August 16, 2006

Jonathan McCabe - Nervous States

Canberra artist Jonathan McCabe is currently showing some digital prints at the Front gallery in Lyneham - the show is called Nervous States, ostensibly referring to the neural net behind the generative process... but it seems to have much wider implications just at the moment, too. I wrote about McCabe's Butterfly Origami Method on generator.x a while ago, and was impressed by the elegance of the generative mechanism and the visual richness of the results. Nervous States is just as elegant, and visually psychedelic, but uses a completely different generative approach.


As with the Butterfly Origami images, there's a sense of materiality here... which is paradoxical, considering the abstraction of the generative techniques. Each image is essentially a visualisation of the output state of a small neural network. The X and Y coordinates correspond to two variables in the connections of the network; the colour of the pixel at that point is a representation of the network's behaviour for those parameters. So the image is a map of system states; coherent colours show areas of relative stability or gradual change; edges show sharp jumps in the output; marbled swirls show complex oscillations.
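The "map of system states" idea is easy to sketch: sweep two connection weights across the image axes, run a tiny recurrent network at each point, and colour the pixel by how the network settles. The two-neuron network, its biases, and the behaviour measure below are all invented; only the parameter-sweep structure follows the description above.

```python
import numpy as np

def behaviour(w1, w2, steps=200, tail=50):
    """Summarise the long-run behaviour of a toy 2-neuron recurrent net.

    w1, w2 play the role of the image's X and Y axes (two connection
    weights); the rest of the network is an invented stand-in.
    """
    a, b = 0.1, -0.1
    trace = []
    for _ in range(steps):
        a, b = np.tanh(w1 * b + 0.5), np.tanh(w2 * a - 0.5)
        trace.append(a)
    tail_vals = np.array(trace[-tail:])
    # Near 0: settled to a fixed point; larger: oscillation or chaos
    return float(tail_vals.max() - tail_vals.min())

# A coarse greyscale "state map" over a patch of parameter space
ws = np.linspace(-3, 3, 32)
img = np.array([[behaviour(w1, w2) for w1 in ws] for w2 in ws])
```

Even at this toy scale, the map shows the qualitative features described above: flat regions where the dynamics are stable, and sharp boundaries where the behaviour changes character.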

Technically, this work is pushing the edges in several ways. To select images from the vast range that the system can produce, McCabe first uses an automated analysis based on variation in the image at three levels of scale: the software varies the weighting of the inter-neuron connections, and selects images (maps) with the most variation. However, this automated process still generated 6000 candidate images, which McCabe then whittled down to nine for this exhibition.
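One plausible reading of that selection criterion: score each candidate by its local variation at several scales (downsampling by block averaging, then measuring how much adjacent values differ), and keep the highest scorers. This is a guess at the spirit of the method, not McCabe's actual analysis; the scales and the gradient measure are invented.

```python
import numpy as np

def variation_score(img, scales=(1, 4, 16)):
    """Score an image by summed local variation at several scales."""
    score = 0.0
    for s in scales:
        # Crop to a multiple of s, then block-average down by factor s
        h, w = (img.shape[0] // s) * s, (img.shape[1] // s) * s
        small = img[:h, :w].reshape(h // s, s, w // s, s).mean(axis=(1, 3))
        # Mean absolute difference between adjacent cells, both directions
        score += np.abs(np.diff(small, axis=0)).mean()
        score += np.abs(np.diff(small, axis=1)).mean()
    return score

rng = np.random.default_rng(2)
candidates = [rng.random((64, 64)) for _ in range(5)]
best = max(candidates, key=variation_score)  # keep the most varied image
```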

Generating these images at very high resolutions is a hefty computational task. The solution for McCabe was to make use of the parallel-processing grunt available on the video card. Using the Brook language from the Stanford Graphics Lab, the images are rendered using the parallel pixel processors on an nVidia graphics card.


This work also makes me wonder about communication, meaning and generative art. As McCabe explains them, and in the context of the "nervous" metaphor, the generative system is poetic in itself; the images can be read in that context, as mysterious maps of complex dynamics - or they can function on a more "retinal" level, as sheer visual stimulus - or perhaps both. But how comprehensible is the generative system for a wide audience? Does it matter? Understanding the images as state maps, rather than physical (or even simulated physical) traces and gestures, is a considerable leap of abstraction. And at a time when open-source tools are drawing more and more artists and designers to generative techniques, McCabe's work issues a similar challenge: underneath the initial hurdle of learning to code is the conceptual work of understanding, designing and visualising generative systems, and it's those systems that (I'd say) are at the core of the work.

Read More...