Showing posts with label code.

Sunday, August 23, 2009

Tiny Sketching

As a kind of test pattern to fill the current break in transmission, here are my contributions to Tiny Sketch, an OpenProcessing / Rhizome competition (open until mid-September) for Processing sketches under 200 characters.

In Bit Sunset I just load the pixels[] array, pick a random block of pixels, and add a large number to their values. This process throws up some surprising results: the colour values gradually increase, then start pushing into the alpha bits of the ARGB integer; once the alpha bits fill, it settles into a palette of pinks and greens that are gradually smashed into pixel-dust.
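
To see the arithmetic at work, here is a minimal plain-Java sketch of the idea (the class and names are mine, not the 200-character original): a pixel is a 32-bit ARGB integer, so repeatedly adding a large constant makes carries ripple from the blue channel through green and red and finally into alpha.

```java
public class BitSunsetDemo {
    // Unpack an ARGB int into its four 8-bit channels: {alpha, red, green, blue}.
    static int[] channels(int argb) {
        return new int[] {
            (argb >>> 24) & 0xFF,  // alpha
            (argb >>> 16) & 0xFF,  // red
            (argb >>> 8) & 0xFF,   // green
            argb & 0xFF            // blue
        };
    }

    public static void main(String[] args) {
        int pixel = 0xFF000000;   // opaque black, as a raw ARGB int
        int increment = 100_000;  // a "large number", as in the sketch
        for (int i = 0; i < 5; i++) {
            int[] c = channels(pixel);
            System.out.printf("a=%d r=%d g=%d b=%d%n", c[0], c[1], c[2], c[3]);
            pixel += increment;   // carries ripple from blue toward alpha
        }
    }
}
```

In Processing the same addition would simply run over entries of the pixels[] array inside draw().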


Albers Clock was an attempt to slow the pace of Tiny Sketch even further; it visualises the current time in the form of an Albers square, with three colours, one each for hour, minute and second. I also like that it creates an image that is synchronous (within timezones, at least), unlike the asynchronous, individualised runtimes of most sketches.
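
The mapping from time to colour could look something like the following plain-Java sketch. To be clear, the actual colour scheme of the original isn't described in the post; scaling each time unit onto a 0-255 level for its nested square is a hypothetical mapping of mine.

```java
import java.time.LocalTime;

public class AlbersClockDemo {
    // Scale a time unit onto a 0..255 colour level (hypothetical mapping;
    // the original sketch's exact colour scheme isn't given in the post).
    static int level(int value, int max) {
        return value * 255 / max;
    }

    public static void main(String[] args) {
        LocalTime now = LocalTime.now();
        int outer  = level(now.getHour(), 23);   // outermost square: hour
        int middle = level(now.getMinute(), 59); // middle square: minute
        int inner  = level(now.getSecond(), 59); // innermost square: second
        // Three nested squares, Albers-style; here we just report the levels.
        System.out.printf("hour=%d minute=%d second=%d%n", outer, middle, inner);
    }
}
```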


There are dozens of amazing sketches in this collection - it's a fascinating microcosm (in every sense) of the current Processing / generative / code art scene. Given the tight constraints it's not surprising to see some demoscene virtuosity in the code - like Martin Schneider's Sandbox, a physical simulation painting app in 200 characters. There is also some classic software art conceptualism and reflexivity - like Jerome St Clair's Joy Division cover and Kyle MacDonald's Except. Great to see projects like this - and OpenProcessing itself - reviving applet culture in an open source, web2.0-flavoured way.


Tuesday, May 20, 2008

Draw a Straight Line...

Instruction Set is an embryonic open software project with a simple process, gathering different code implementations of a given "instruction." The format reminds me of two Whitney Artport projects - CODeDOC (2002), and Casey Reas' {Software} Structures (2004). It's good to see this approach being updated and opened out for a wider community.


The initial instruction was La Monte Young's wonderful Composition 1960 #10: "Draw a straight line and follow it." Implementations range from the abstract and conceptual to the more performative, in languages from Python and JavaScript to SuperCollider and Processing; web2.0 nerds like me will appreciate markluffel's Twitter version. Anyhow, I've just posted a belated implementation of "Draw a straight line..." (screengrab above). Nothing amazing, more just filling in a gap and solving a pragmatic problem - how to wring some generative juice out of the instructions - by manipulating the space, rather than the line.
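
For anyone tempted to contribute, the instruction admits even a deliberately literal reading. Here's one such Java sketch of my own (not any of the posted implementations): fix a direction, then step a point along it and record where it goes.

```java
public class FollowLine {
    // "Draw a straight line and follow it": step a point along a fixed
    // direction vector and record the path (a deliberately literal reading).
    static double[][] follow(double dx, double dy, int steps) {
        double[][] path = new double[steps][2];
        double x = 0, y = 0;
        for (int i = 0; i < steps; i++) {
            path[i][0] = x;
            path[i][1] = y;
            x += dx;  // follow the line one step
            y += dy;
        }
        return path;
    }

    public static void main(String[] args) {
        double[][] path = follow(1.0, 0.5, 5);
        for (double[] p : path) System.out.printf("(%.1f, %.1f)%n", p[0], p[1]);
    }
}
```

The interesting implementations, of course, start from exactly this kind of dumb literalism and then bend one of its assumptions - the direction, the stepping, or (as in my version) the space itself.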

That follow up post on transmateriality and hardware practice is coming soon, really. Off-task productivity is an amazing thing.


Thursday, March 27, 2008

Self-Organised Phyllotaxis

Like Mr Smith, I'm being a bad host, but trust me, there's some good stuff in the works. Meantime, like Smith, here's something else entirely. In this case it's a little generative sketch I recently dusted off, some source code, and a side observation about Processing culture on the web.

While at CEMA last year I was working on a project with spirals as a kind of required element. I was talking to Jon McCormack about this, when he said something like "Oh, anyone can code up a spiral. What you want to do is make a system where spirals emerge." This is a classic a-life approach, of course, but to me it also seemed technically daunting. Jon pointed me to Ball's The Self-Made Tapestry as well as to the literature on spiral phyllotaxis, a fundamental structure in plant morphogenesis. Douady and Couder published a brilliant paper on this topic in 1995 [pdf], so I set about implementing their model.

[Image: lotus phyllotaxis]
It's a beautiful thing - buds, or "primordia", are spawned by a central ring of "base" points. Douady and Couder show that you can create phyllotactic spirals with a model where primordia inhibit the budding process in their neighbourhood; the result is that when a primordium forms, the next one to emerge will pop out some distance away. By simply changing the growth rate and the inhibition threshold, you get a variety of self-organised spirals, but also other less predictable complex systems traits.
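
The core loop of the model can be sketched quite compactly. Here's a plain-Java version of the inhibition idea (the constants and the inverse-square falloff are illustrative choices of mine, not the values from Douady and Couder's paper): each new primordium appears on the generative ring at the angle where the summed inhibition from existing primordia is lowest, and existing primordia drift outward between steps.

```java
import java.util.ArrayList;
import java.util.List;

public class PhyllotaxisDemo {
    static final double RING = 1.0;    // radius of the generative ring
    static final double GROWTH = 0.05; // radial drift per step

    record Primordium(double angle, double radius) {}

    // Total inhibition felt at a candidate angle on the ring;
    // falloff with squared distance is an illustrative choice.
    static double inhibition(List<Primordium> buds, double angle) {
        double total = 0;
        double x = RING * Math.cos(angle), y = RING * Math.sin(angle);
        for (Primordium p : buds) {
            double px = p.radius() * Math.cos(p.angle());
            double py = p.radius() * Math.sin(p.angle());
            double d2 = (x - px) * (x - px) + (y - py) * (y - py);
            total += 1.0 / d2;
        }
        return total;
    }

    static List<Primordium> grow(int steps) {
        List<Primordium> buds = new ArrayList<>();
        for (int s = 0; s < steps; s++) {
            // Scan the ring for the least-inhibited angle.
            double bestAngle = 0, bestScore = Double.MAX_VALUE;
            for (int i = 0; i < 360; i++) {
                double a = Math.toRadians(i);
                double score = buds.isEmpty() ? 0 : inhibition(buds, a);
                if (score < bestScore) { bestScore = score; bestAngle = a; }
            }
            // Existing primordia drift outward, then the new bud appears.
            buds.replaceAll(p -> new Primordium(p.angle(), p.radius() + GROWTH));
            buds.add(new Primordium(bestAngle, RING));
        }
        return buds;
    }

    public static void main(String[] args) {
        for (Primordium p : grow(8)) {
            System.out.printf("angle=%.1f deg, r=%.2f%n",
                Math.toDegrees(p.angle()), p.radius());
        }
    }
}
```

With the right growth rate and falloff, successive bud angles settle toward the familiar phyllotactic divergence angles; with this toy parameterisation the second bud simply appears opposite the first, and the pattern self-organises from there.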

As it turned out I didn't use this for the "spiral" project - more on that soon - but rediscovered it recently when I was asked to reproduce an old drawing of a sort of abstract lotus-flower structure. In the image above the bases are invisible, and the primordia are drawn as circles that expand over time - instant lotus generator (more images).

Have a play with the applet, or just grab the Processing source. Let me know if you use it, too.

Which leads me to a side point. What's become of the applets-on-the-web side of the Processing community? Maybe it's just me, but it seems to be diminishing; instead there's tons of (web-compressed) video, with relatively sparse documentation and source. Is it because of the increased interest in using Processing for generative motion graphics (and other exotic, large scale, non-applet-friendly things)? Maybe I'm over-reliant on ProcessingBlogs, which now seems to be all Vimeo, all the time. Any thoughts?


Thursday, November 29, 2007

Murray McKeich - Generative Gothic

While I'm in Melbourne I've been trying to catch up with some of the artists who have made this town a new media hotspot over the past decade or so. I recently met up with Murray McKeich, an artist who came to prominence in the 90s here through the amazing (and sadly departed) Australian magazine 21C. He's refined a signature style, imaging urban detritus on a flatbed scanner, then grafting those elements together into surreal-gothic hybrids. In the hyped-up, Wired-style 90s, McKeich's images in 21C were startling; artefacts from a far more unsettling future.

In recent years, as I mentioned a while back on Generator.x, McKeich has discovered generative art. His approach, and the resulting work, are interesting in part because they are so different to a lot of what goes under that banner at the moment. At the core of his practice is a kind of heresy I find really appealing: McKeich doesn't code. He cheerfully admits to being "hopeless" at programming, and has no inclination to start. I had to fight the urge to talk him around, the way I do with students sometimes, leading them gently into the joys of Processing. But I had a feeling it would be futile, and besides, the processes McKeich has devised are coding, of a sort, and they are working beautifully.

Having accumulated a massive library of scanned-in source material, and discovering Photoshop's actions, McKeich began to experiment with automated processes; macros that would randomly pull source files into large multi-layer compositions. Hierarchies and groups of layers and sources provided a mix of control and randomness. He ran batch processes that would output many thousands of stills, then hand-picked the best to form very large image sets, with works like A Thousand Pictures of Footscray and DVN (detail above). The artist's motivation here, he insists, is pragmatic, not conceptual; he's interested in the specifics of an image, the moment of its impact, not in process for its own sake. For him generative techniques are essentially a matter of externalising aspects of his own process; computational studio assistants.


McKeich's next step was his discovery of After Effects, which he describes as "a superior imaging tool" to Photoshop - even for stills. With a more procedural approach, nested compositions, and powerful automation, After Effects remains his generative platform of choice. As a kind of bonus, it produces video. McKeich's procedural motion graphics use his signature palette of materials, but feel lighter, more ephemeral. In Maddern Square (above) we seem to skirt the edge of some dense conglomerate of street flotsam which is forever dissolving into itself.


Most recently McKeich has begun a new line of work - in one sense another brilliant heresy in the super-abstract context of generative art. He's been making faces, or rather, zombies. pzombie comes from "philosophical zombie," a term for a hypothetical non-conscious human in a cognitive science thought-experiment. Like digital Golems, McKeich's pzombies (above - hi res) are cooked up from junk and grime, articulated by recursive coils of After Effects scripting. Smoke and mirrors, in a sense, but they have that visual impact McKeich is after; he shows them in large groups, which adds to the uncanny effect. The apparently infinite variety of these faces makes them both more intriguing and more unsettling than the usual science fiction clone-armies. While the artist might deny it, there's a conceptual hook here too; who are these portraits of, after all? Aren't these zombies the faces of those studio assistants, who work tirelessly through the night, those macros within macros? This time, they've been rendering themselves.


Friday, October 13, 2006

Live Coding: Program / Process Semantics

Andrew Sorensen, live coder and author of Impromptu, sent me some interesting comments about the previous post on data, code and performance. In particular he pointed me to a concept that seems very useful in unpacking live coding practice. Computer / cognitive scientist and philosopher Brian Cantwell Smith outlines the concepts of program and process semantics. Program semantics refers to the relation between the program code (what Cantwell Smith calls "passive text") and the resulting process or behaviour - what the code does when it's compiled or interpreted and run. Process semantics refers to the relation between that process or behaviour, and the "task domain" or "subject matter" of the software - the context in which it is applied. There's a two-stage chain of meaning: 1. the meaning of the code with respect to the computational process; and 2. the meaning of the process with respect to the context it operates in.


For Sorensen "the addition of program semantics to live performance is the primary reason to program in real time." Live coding transforms the two-stage chain that Cantwell Smith describes, because it introduces the program semantics into the task domain. In the modified version of Cantwell Smith's diagram above it's marked B. This is important: it's not just the displaying of the program code, but the playing out of the relationship between code and process / behaviour. In a live coding performance where we see code and hear sound, we are presented with the task of understanding the relation between the two - inferring or mapping the generative relation between code and sonic output. The specific relation may have a semantics of its own in the "task domain" of performance - it's not simply what you do, but also how you do it. There's also the semantics of the code itself, in relation to the task domain (A): this seems less interesting to me, but related to the code-focused theory around software art - see for example Inke Arns' paper (Runme 2004).
