Make a piece of software that evolves images that are similar to images captured through a camera. The images should keep evolving continuously, so that they keep adapting to changing camera input. What ‘similar’ means is up to you: it can be any aspect of the image you can think of, as long as you can implement it in software and as long as it is based on a higher-level interpretation of the two images. For inspiration, you can relate this interpretation to a work of art or to one of the concepts about perception that will be discussed in class. Your definition of similarity is one of the creative aspects of this assignment; a pixel-by-pixel comparison of the two images will not be considered sufficient.
Your programme should consist of three parts:
- a part that draws images on the basis of a string of characters (comparable to DNA),
- a part that evolves these strings, and
- a part that compares the drawn images with camera input and judges how similar they are.
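The three parts above can be sketched as one loop. The following Python sketch is illustrative only: all names, the alphabet, and the selection scheme are placeholder assumptions, and the `draw` and `similarity` bodies are stubs you would replace with your own image synthesis and comparison.

```python
import random

ALPHABET = "FLRUD"  # hypothetical drawing 'genes'

def draw(genome):
    """Placeholder: turn a genome string into an image (here just a list of ops)."""
    return [ch for ch in genome]

def similarity(image, camera_frame):
    """Placeholder: return a score in [0, 1] (replace with your own comparison)."""
    return random.random()

def mutate(genome):
    """Randomly replace one character of the genome."""
    i = random.randrange(len(genome))
    return genome[:i] + random.choice(ALPHABET) + genome[i + 1:]

def evolve(population, camera_frame):
    """One generation: score against the camera, keep the best half, refill with mutants."""
    scored = sorted(population, key=lambda g: similarity(draw(g), camera_frame), reverse=True)
    survivors = scored[:len(scored) // 2]
    children = [mutate(random.choice(survivors)) for _ in survivors]
    return survivors + children
```

Because the camera input changes, the loop never converges for good: re-scoring every generation against the current frame is what keeps the images adapting.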
You can do this assignment on your own or with one other person.
an evolved image by artists’ duo Driessen en Verstappen in the LUMC building.
To help you with this assignment, three smaller assignments will be given during the first four lab sessions of this year’s course. These assignments lead up to the final assignment. They have to be completed within a week after the lab day in question and emailed to Joey van der Bie, the student assistant for this course.
The small assignments are:
Make a ‘turtle-graphics’ system that translates a string of characters into an image. You should define an ‘alphabet’ of at least five different characters that each specify some kind of drawing or move operation. The classical turtle system is a point of departure: you are invited to think of other ways to generate an image from a sequence of basic operations.
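As an illustration of the idea (not a required alphabet or approach), here is a minimal Python interpreter for an invented five-character alphabet; it returns line segments rather than drawing to a screen, so the drawing backend is left open:

```python
import math

# Hypothetical alphabet: F = draw forward, M = move without drawing,
# L/R = turn left/right by 45 degrees, H = halve the step size.
STEP, TURN = 10.0, math.radians(45)

def interpret(genome):
    """Translate a genome string into a list of ((x1, y1), (x2, y2)) line segments."""
    x, y, angle, step = 0.0, 0.0, 0.0, STEP
    segments = []
    for ch in genome:
        if ch == "F":
            nx, ny = x + step * math.cos(angle), y + step * math.sin(angle)
            segments.append(((x, y), (nx, ny)))
            x, y = nx, ny
        elif ch == "M":
            x, y = x + step * math.cos(angle), y + step * math.sin(angle)
        elif ch == "L":
            angle += TURN
        elif ch == "R":
            angle -= TURN
        elif ch == "H":
            step /= 2
    return segments
```

For example, `interpret("FLF")` yields two segments, the first from the origin to (10, 0); a character like `H` shows that operations need not move the turtle at all.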
Make a camera-based fader. A user should be able to control a value by moving in front of a camera. You are free to choose to what aspect of the stream of images your software reacts.
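One common choice of aspect to react to is the amount of motion between consecutive frames. The sketch below is a hedged illustration of that mechanics in plain Python: frames are assumed to be grayscale lists of rows of 0–255 values, and getting those frames from an actual camera (e.g. jit.grab in Jitter or the video library in Processing) is left out.

```python
def motion_value(prev_frame, frame):
    """Map the mean absolute difference between two grayscale frames
    (lists of rows of 0-255 values) to a fader value in [0, 1]."""
    total, count = 0, 0
    for prev_row, row in zip(prev_frame, frame):
        for a, b in zip(prev_row, row):
            total += abs(a - b)
            count += 1
    return (total / count) / 255.0 if count else 0.0
```

Identical frames give 0.0 (no motion) and maximally different frames give 1.0; in practice you would smooth this value over time before using it as a fader.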
Make a system that compares two images and gives a value between 0 and 1 as an indication of how similar the two images are. What ‘similar’ means is up to you: it can be any aspect of the image you can think of, as long as you can implement it in software. It should also be based on a higher-level interpretation of the two images; a pixel-by-pixel comparison of the two images will not be considered sufficient. You can compare camera input with a stored image or with an image produced by your turtle system.
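To make the mechanics concrete, here is one possible direction sketched in Python: comparing coarse brightness histograms by histogram intersection. A global brightness histogram is only a small step above pixel comparison, so treat this as a starting point for your own, more interpretive measure, not as a sufficient answer. Frames are again assumed to be grayscale lists of rows of 0–255 values.

```python
def brightness_histogram(frame, bins=8):
    """Coarse brightness distribution of a grayscale frame, normalized to sum to 1."""
    counts = [0] * bins
    n = 0
    for row in frame:
        for v in row:
            counts[min(v * bins // 256, bins - 1)] += 1
            n += 1
    return [c / n for c in counts]

def histogram_similarity(frame_a, frame_b, bins=8):
    """Return a value in [0, 1]: 1.0 for identical brightness distributions."""
    ha = brightness_histogram(frame_a, bins)
    hb = brightness_histogram(frame_b, bins)
    # Histogram intersection: sum of per-bin minima of the normalized histograms.
    return sum(min(a, b) for a, b in zip(ha, hb))
```

Note that this measure ignores where brightness occurs in the image, which is exactly the kind of deliberate abstraction a higher-level comparison can make.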
This assignment was inspired by the way artists and engineers have been using Evolutionary Algorithms to evolve solutions to problems. After Richard Dawkins described his ‘Biomorph’ programme in 1986, artists like William Latham, Karl Sims and Steven Rooke started using simulated evolution to evolve images. A more recent example (and a different use of EAs) is the evolved sculptures and images by Driessen en Verstappen.
- Mitchell Whitelaw, “Metacreation, Art and Artificial Life”, MIT press, Cambridge (MA), 2004.
- Peter J. Bentley, David W. Corne (ed.), “Creative Evolutionary Systems”, Morgan Kaufmann, San Francisco, 2002.
an image breeder by Karl Sims from 1991
goal of the assignment:
On a practical level the assignment is a vehicle to familiarize you with elementary image analysis and synthesis techniques. It is also a way to ask you to reflect on the different ways to think about images that are discussed and shown in class. Can you make a system for this assignment that embodies a way of seeing?
- often you don’t need very sophisticated computer-vision algorithms to get good results; clever use of simple analysis is more elegant and more interesting. Object tracking and recognition are easy for humans but hard for computers, so it is advisable to think of an approach in which knowledge of real-world objects is not necessary.
- the most interesting results are to be expected when your image analysis is based on very different techniques than your image synthesis.
a view of the breeder interface of ‘Evolver’ by Driessen en Verstappen
For this assignment you are encouraged to use the Max/Msp/Jitter environment. Like Msp, Jitter is an addition to the original Max graphical programming language. Jitter is a collection of objects for matrix manipulations, designed to work with real-time video.
It is possible to use another programming environment, as long as the following requirements are met:
- the programming environment should be powerful enough to do the assignment in
- you should be able to figure out your implementation without the student assistant. Joey can provide assistance with Jitter, Processing, Open Frameworks, C++ and perhaps more if you ask him kindly.
- it should be possible for you to share both a working version of your software and a readable (annotated) version of the source that enables me to figure out your approach. It is this last point that limits the choice of programming environment the most, since I do not have the time to install complicated packages. If you use Jitter or Processing you are fine; please contact me if you want to use something else.
Other possible environments that are relatively friendly towards graphics and real-time video include Pd/Gem (similar to Max/Msp/Jitter, but free, open source and less coherent), vvvv (free, Windows only), Processing (free, based on Java), Nodebox (free, Mac only, open source, based on Python), Open Frameworks (free, in prerelease, based on C++), or of course plain C, C++, Objective-C or any language of your choice.
Joey van der Bie will be the assistant for this course and he will be available to help you with questions relating to Max/Msp/Jitter and Processing. He will give introductions to the small assignments and explain the objects that are necessary to program them in Max/Msp/Jitter.
Towards the end of the course Joost Rekveld will be available to answer questions and to discuss approaches and ideas relating to the assignment. He will also provide a simple implementation of an evolutionary algorithm in Jitter, so that students can concentrate on the image-related part of their assignment.
presentation and grades:
The assignment should result in a short paper, a working piece of software, a short video, and a short presentation.
The paper should clearly explain the concept and the principles behind the implementation. It should briefly relate the kind of image analysis and image synthesis you chose to some of the concepts discussed in class. The paper should be between 2 and 4 pages long, and sent to me via email ( joost.rekveld (at) interfaculty (dot) nl ) in either .doc or .pdf format. Please use the IEEE guidelines to format your paper.
The software should be available for download with all that is necessary to make it function on Windows or Mac. If you work in Jitter, please export your project as a collective (which will be cross-platform if you use no third-party externals).
The video should demonstrate successful use of the software; please put it online somewhere and post the URL on the forum.
During the presentation on the 2nd of June the video should be shown and the software should be working and demonstrated. The concept and the principles behind the implementation should also be explained. I should have the paper and the software by Wednesday the 16th of June, 10am.