Everyday graphic

Not sure how practical it is, but I'll try to upload a new graphic every day .. to start with they'll probably be quite basic 🙂 Here is the first, some moving granular thingies..


Shoebot code:

import random
from math import sin, cos

size(800, 800)

def draw():
    scale(1, 1)
    fill(0.1, 0.2, 0.3)

    for y in range(0, HEIGHT, 80):
        wiggle = sin(FRAME * 0.1)
        xs = 2.0 + (cos(y * 0.1) + sin(y) * 8.1)

        # fade from yellow to transparent red as we move down the screen
        distance = 1.0 / HEIGHT * y
        fill(1.0, 1.0 - distance, 0, distance)
        for x in range(0, 60):
            # scroll each row horizontally, wrapping around at the edges
            xpos = ((xs * FRAME - x * 40) % (WIDTH + 40)) - 20
            circle(xpos, y + (wiggle * random.random() * 20.0), 20 + (wiggle * 2.0) * distance * 8.0)

        #xs = -sin(y) * 4.0
        #for x in range(0, 60):
        #    circle(WIDTH-xs * WIDTH - FRAME + x * 40, y+40, 20)

To run this, install shoebot and type: sbot -w granuals.sbot

Belated post on Libre Graphics Meeting 2014

Here is my very late post on LGM 2014! Back in April I went to Leipzig for my first in-the-flesh meeting of Shoebot devs .. I met up with Ricardo to collaborate on a Shoebot workshop. To make it more fun, we hadn't decided what to do it on! At the anarchic OSP (Open Source Publishing) house we came up with a plan to get people making examples for Shoebot. Luckily Ricardo had done a lot of this sort of thing before, so he did most of the talking, then I showed off some bots - evolution, spirals and also the livecoding work. Overall the workshop seemed to go over well; we got a bunch of examples, and there was even a plugin for Sublime Text!

Things that came out of the workshop:

- People want an integrated editor - this is OK, since the IDE still exists.
- Differences between the Nodebox/Shoebot API and Cairo are not always intuitive.
- Livecoding is cool!
- Shoebot 2 ... or something else? Going forward, it might be best to take the Nodebox approach and build something new based on these lessons; I'm not sure exactly what yet. What is the most intuitive API, and how can we stay close to standard APIs?

Non Shoebot Stuff

Outside of the workshops and talks there was plenty of time to drink and chat - apart from talking the head off of one of the MyPaint guys, I learned quite a lot about OSP from Sarah Magnan and Brendan Howell .. which made me regret missing many of their talks, including Brendan's on the screenless office. Leipzig was a really great city to visit; the venue for LGM was particularly impressive, being inside an old church that the East Germans repurposed as a university. Importantly for me, I learned about the "kebab circle" - the ring of gentrification moving from the inside of the city outwards (beyond which you can still buy kebabs). With any luck I'll be able to make it to LGM next year and meet everyone again.

Shoebot experiment – Perlin noise..

Perlin noise is pretty cool - you can use it to generate realistic-looking clouds or mountains. Here's a bot for Shoebot I made a while back that uses Perlin noise to generate some nice circles. You'll need Shoebot and the "noise" library installed into your environment for it to work:

# pip install noise

Then to run:

sbot -w perlin-circlescape1.bot

Here's a video of them in action - see below for the code.
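The original bot's source is below the break and uses the `noise` library (`noise.pnoise1` and friends); as a rough illustration of the idea, here's a minimal, hypothetical pure-Python sketch of smooth 1-D value noise - random values at integer points, blended with a Perlin-style fade curve - driving a row of circle sizes. The function names here are my own, not from the original bot:

```python
from math import floor
import random

def value_noise_1d(x, seed=42):
    """Smooth 1-D value noise: deterministic random values at integer
    points, blended with a smoothstep curve (Perlin-style fade)."""
    x0 = floor(x)
    t = x - x0

    def lattice(i):
        # deterministic pseudo-random value in [-1, 1] at lattice point i
        return random.Random(i * 1000003 + seed).uniform(-1.0, 1.0)

    # smoothstep fade: zero slope at the lattice points, so no visible kinks
    fade = t * t * (3.0 - 2.0 * t)
    return lattice(x0) * (1.0 - fade) + lattice(x0 + 1) * fade

# a row of circle radii that vary smoothly, circlescape-style
sizes = [20 + 10 * value_noise_1d(i * 0.1) for i in range(50)]
```

In a Shoebot bot you'd feed `FRAME` into the noise input each frame to get circles that drift smoothly rather than jitter.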

Moving things in shoebot – simple particles…

OK, part 3 - now for something fun: extending parts 1 + 2 into a simple particle system.

"Particles" generally means a lot of things moving around (the particles) and a way to generate them (an "emitter").

Here we're going to take the code from the previous two parts and add a couple of things to make a basic particle system.
Note: Shoebot isn't the fastest, but we do get nice-looking results.

Here's a video of our arrows as particles (arrowsplosion!):
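The bot itself isn't shown in this excerpt, but the emitter/particle split described above can be sketched roughly like this (a minimal pure-Python version with made-up names; in Shoebot, `emitter.update()` would run once per `draw()` call and each particle would be drawn as an arrow):

```python
import random

class Particle:
    """One moving thing: a position, a velocity and a remaining lifetime."""
    def __init__(self, x, y, vx, vy, life=60):
        self.x, self.y = x, y
        self.vx, self.vy = vx, vy
        self.life = life

    def update(self):
        self.x += self.vx
        self.y += self.vy
        self.life -= 1

class Emitter:
    """Spawns particles at a point with random velocities, drops dead ones."""
    def __init__(self, x, y, rate=5):
        self.x, self.y = x, y
        self.rate = rate
        self.particles = []

    def update(self):
        # spawn a few new particles each frame
        for _ in range(self.rate):
            self.particles.append(Particle(
                self.x, self.y,
                random.uniform(-2, 2), random.uniform(-2, 2)))
        for p in self.particles:
            p.update()
        # keep only the particles that are still alive
        self.particles = [p for p in self.particles if p.life > 0]

emitter = Emitter(400, 400)
for frame in range(10):   # in Shoebot this loop is the draw() callback
    emitter.update()
```

The list-rebuild at the end is the simplest way to cull dead particles; for large counts you'd want a pool instead.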


Moving things in shoebot, adding different behaviours..

In my last post we made an arrow move around the screen; in this post we'll extend things so it's easy to make many things move around the screen.

This will make the code a little more complex, but as usual it makes things simpler later on.


This Python code runs in Shoebot; planar.py is used to handle the coordinates.

At the end we'll have two arrows: a blue one controlled with the keyboard, and a pink one that moves on its own:
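The post's code isn't included in this excerpt, but the structure it describes - many movers, each driven by its own behaviour - can be sketched like this (hypothetical names, using plain attributes rather than planar.py's vectors):

```python
class Mover:
    """A thing on screen, driven by a behaviour function each frame."""
    def __init__(self, x, y, behaviour):
        self.x, self.y = x, y
        self.behaviour = behaviour   # called as behaviour(mover) every update

    def update(self):
        self.behaviour(self)

def drift_right(mover):
    # autonomous behaviour: a constant drift, like the pink arrow
    mover.x += 1.5

def keyboard(keys_down):
    # behaviour closed over the current key state, like the blue arrow
    def behave(mover):
        if 'left' in keys_down:
            mover.x -= 2
        if 'right' in keys_down:
            mover.x += 2
    return behave

keys = {'right'}
arrows = [Mover(100, 100, keyboard(keys)), Mover(100, 200, drift_right)]
for _ in range(10):          # one Shoebot draw() call per iteration
    for arrow in arrows:
        arrow.update()
```

Separating "what it is" (Mover) from "how it moves" (the behaviour function) is what makes adding a third or fourth arrow trivial later.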


Natural movement using polar coordinates in shoebot

Here's a little shoebot bot to experiment with natural movement.

This uses polar coordinates to decide the direction and velocity of an arrow on the screen.

Polar coordinates mean we can give an object a sense of 'forward', 'back', 'left' and 'right'.

The code below works on the current version of Shoebot, with planar.py handling the directions and velocity.
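The bot's source isn't shown in this excerpt; as a rough sketch of the polar idea - a heading plus a speed, converted to x/y movement each frame - here's a minimal pure-Python version (my own names, without planar.py):

```python
from math import sin, cos, radians

class Arrow:
    """A position plus a heading (degrees) and speed: polar-style movement."""
    def __init__(self, x=0.0, y=0.0, heading=0.0, speed=0.0):
        self.x, self.y = x, y
        self.heading = heading
        self.speed = speed

    def turn(self, degrees):
        # 'left' and 'right' are just changes to the heading
        self.heading = (self.heading + degrees) % 360

    def forward(self, amount):
        # 'forward' accelerates along whatever direction we're facing
        self.speed += amount

    def update(self):
        # convert polar (heading, speed) into cartesian movement
        self.x += cos(radians(self.heading)) * self.speed
        self.y += sin(radians(self.heading)) * self.speed

a = Arrow()
a.forward(5)   # speed up
a.update()     # heading 0: moves straight along +x
a.turn(90)     # turn (downwards, in screen coordinates)
a.update()
```

The payoff is natural-looking steering: small random nudges to `heading` give wandering, curving paths instead of jittery cartesian ones.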


Simple python spectrograph with shoebot

Seeing "Realtime FFT Graph of Audio WAV File or Microphone Input with Python..." on python.reddit.com reminded me of one I'd built in Python with Shoebot.

While it works OK, I feel like I'm missing a higher-level audio library (especially having seen Minim, for C++ and Java).



To run it in shoebot:

sbot -w audiobot.bot


# Major library imports
import atexit
import pyaudio
from numpy import zeros, short, fromstring, array
from numpy.fft import fft

# samples read per chunk (not shown in the original; 512 gives 256 usable bins)
NUM_SAMPLES = 512


def setup():
    size(350, 260)

_stream = None

def read_fft():
    global _stream
    pa = None

    def cleanup_audio():
        # stop and close the stream when the bot exits
        if _stream:
            _stream.stop_stream()
            _stream.close()
            pa.terminate()

    if _stream is None:
        pa = pyaudio.PyAudio()
        _stream = pa.open(format=pyaudio.paInt16, channels=1,
                          rate=44100,  # sample rate (missing from the original)
                          input=True, frames_per_buffer=NUM_SAMPLES)
        atexit.register(cleanup_audio)

    audio_data = fromstring(_stream.read(NUM_SAMPLES), dtype=short)
    normalized_data = audio_data / 32768.0

    # drop the DC bin; only the first half of the FFT is meaningful
    return fft(normalized_data)[1:1 + NUM_SAMPLES // 2]

def flatten_fft(scale=1.0):
    '''Produces a nicer graph, I'm not sure if this is correct'''
    for i, v in enumerate(read_fft()):
        yield scale * (i * v) / NUM_SAMPLES

def triple(audio):
    '''return bass/mid/treble'''
    c = audio.copy()
    c.resize(3, len(c) // 3)
    return c

def draw():
    '''Draw 3 different colour graphs'''
    audio = array(list(flatten_fft(scale=80)))
    bass, mid, treble = triple(audio)

    colours = (0.5, 1.0, 0.5), (1, 1, 0), (1, 0.2, 0.5)

    fill(0, 0, 1)
    rect(0, 0, WIDTH, 400)
    translate(50, 200)

    for spectrum, col in zip((bass, mid, treble), colours):
        fill(*col)  # one colour per band
        for i, s in enumerate(spectrum):
            rect(i, 0, 1, -abs(s))
            translate(i, 0)

Processing with Jython and Nodebox/Shoebot libraries

Update: 26/April/2010

Problems I was having with incomplete images have been fixed in the current version of the web library, available in Shoebot's Mercurial repository.

Using Processing from Jython is a promising idea, so I took the base from this post on backspaces.net, where they explained how to use Jython, and built on it a little.

This is great, as Shoebot/Nodebox have great libraries for data manipulation, while Processing is more focused on graphics.

The result is the attached Netbeans project, which demonstrates using the Nodebox web library and drawing with Processing.


The glue code is in slowcessing.py.

There's a special version of PApplet (PJApplet), and 'pj_frame', which can put this in a JFrame.

The other method, 'shoebot_imports', adds the Shoebot imports to the library path.

In case anybody doesn't want to download the whole project, here's the code:


from slowcessing import PJApplet, pj_frame, shoebot_imports
from processing.opengl import *

import web
import thread

class ImageQueue(list):
    '''Download images in the background and add them to a list'''
    def __init__(self, search, size):
        self._search = search
        self._image_size = size
        thread.start_new_thread(self._get_images, ())

    def _image_downloaded(self, path):
        # (assumed completion - the original body was truncated)
        # load the downloaded file as a PImage and queue it for drawing
        p = PJApplet()
        self.append(p.loadImage(path))

    def _get_images(self):
        for image in self._search:
            image.download(self._image_size, asynchronous=False)

class WebTest (PJApplet):
  def setup(self):
    self.size(400, 400, self.P3D)
    self.images = ImageQueue(web.morguefile.search("sweets", max=1), size='small')

  def draw(self):
    y = (self.height * 0.2) - self.mouseY * (len(self.images) * 0.58)
    for image in self.images:
        self.image(image, 20, y)
        y += image.height

if __name__ == '__main__':
    pj_frame(WebTest)

from javax.swing import JFrame

from processing.core import PApplet

class PJApplet(PApplet):
    # rqd due to PApplet's using frameRate and frameRate(n) etc.
    def getField(self, name):
        return PApplet.getDeclaredField(name).get(self)

def pj_frame(pj_applet, **kwargs):
    from time import sleep

    frame = JFrame(kwargs.get('title', 'Slowcessing'))
    frame.defaultCloseOperation = kwargs.get('defaultCloseOperation', JFrame.EXIT_ON_CLOSE)
    frame.resizable = kwargs.get('resizable', False)

    panel = pj_applet()
    frame.add(panel)
    panel.init()

    # wait until the applet has worked out its real size
    while panel.defaultSize and not panel.finished:
        sleep(0.01)

    frame.pack()
    frame.visible = 1

    return frame

def shoebot_imports():
    '''Allow import of the shoebot libraries'''
    ##APP = 'shoebot'
    import sys
    DIR = sys.prefix + '/share/shoebot/locale'
    ##locale.setlocale(locale.LC_ALL, '')
    ##gettext.bindtextdomain(APP, DIR)
    ##_ = gettext.gettext

    LIB_DIR = sys.prefix + '/share/shoebot/lib'
    sys.path.append(LIB_DIR)

There are some things I couldn't work out:

The callback to say that images have been downloaded happens before the whole file is available, for this reason there are grey parts on the images on the first run.

Nodebox web...

While I did manage to fix things to get this working in Jython and get Morguefile working, I had a lot of trouble understanding what was going on here.

Cheers to Tom De Smedt for fixing the areas of nodebox-web that I couldn't 🙂


Some parts of PApplet to do with image loading seem to be static, which may also explain problems I was getting with reentrancy.


If you want to have a go, you'll need to:

- Install Netbeans 6.8
- Install Jython (2.5 or higher) by installing the Netbeans Python module
- Add Jython to the path (if using Netbeans, its copy is where Netbeans is installed)
- Get nodebox-web by downloading Shoebot and install it with: jython setup.py install
- In Netbeans, add all the jars in the processing\lib folder to the Jython classpath, plus opengl\library\opengl.jar
- Download the PythonOnProcessing project (tested on Netbeans 6.8)