Mark Cetilia is a sound / media artist working at the nexus of analog and digital technologies. Exploring the possibilities of generative systems in art, design, and sound practice, Cetilia's work is an exercise in carefully controlled chaos. He is a member of the media art group Redux, recipients of a Creative Capital grant in Emerging Fields, and the electroacoustic ensemble Mem1, described by The Grove Dictionary of American Music as “a complex cybernetic entity” whose “evolving, custom-built systems are as important an aspect of the duo's achievements as their ever-innovative sound.”

Cetilia holds a Ph.D. (Computer Music + Multimedia) from Brown University, where he has served as a Visiting Assistant Professor, and an MFA (Digital + Media) from the Rhode Island School of Design, where he is a Critic. He is also a Co-Director and Instructor for the Media Lab at Community MusicWorks, a non-profit organization that provides free instruments and lessons to children from the West and South sides of Providence, where he teaches fundamental skills for making electronic music and media art to 12–15 year olds.

Cetilia’s work has been screened / installed at such institutions as the Institute of Contemporary Arts (London), the Ben-Ari Museum of Contemporary Art (Bat-Yam), Oboro (Montréal), and O’ (Milan); he has performed widely at venues including Café OTO (London), the Borealis Festival (Bergen, NO), STEIM (Amsterdam), Los Angeles Contemporary Exhibitions, and Roulette (NYC). His sound works have been published by Interval Recordings, Radical Matters, Dragon’s Eye Recordings, Farmacia 901, YDLMIER, and Estuary Ltd., which he runs with his partner Laura.




I am interested in creating a new interface for use in live audio/visual performances and fixed media works. The interface will utilize the Bela platform in conjunction with a BeagleBone Black board and eight channels of controls, each of which will include a Vestax IF-37 input fader, a slide potentiometer with center detent, an endless rotary encoder with push button, and a number of potentiometers; the mappings for these controls will be dynamically reconfigurable using the Open Sound Control protocol.
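The dynamic remapping might work along these lines (a minimal sketch in Python rather than Bela's C++ API; all OSC addresses, parameter names, and the ControlRouter class are hypothetical, invented purely for illustration):

```python
# Hypothetical sketch of dynamically reconfigurable control mappings.
# Incoming OSC-style messages either carry control data ("/ch/1/fader")
# or remap a control to a new synthesis parameter ("/remap").

class ControlRouter:
    def __init__(self):
        # default mapping: OSC address -> synthesis parameter name
        self.mappings = {"/ch/1/fader": "amp", "/ch/1/encoder": "freq"}
        self.params = {}

    def handle(self, address, *args):
        if address == "/remap":
            # e.g. ("/remap", "/ch/1/fader", "reverb_mix")
            source, target = args
            self.mappings[source] = target
        elif address in self.mappings:
            # route the control value to whatever it currently drives
            self.params[self.mappings[address]] = args[0]

router = ControlRouter()
router.handle("/ch/1/fader", 0.8)              # fader drives "amp"
router.handle("/remap", "/ch/1/fader", "reverb_mix")
router.handle("/ch/1/fader", 0.25)             # same fader now drives "reverb_mix"
```

The point of routing everything through a mapping table is that a single OSC message can repurpose any physical control mid-performance without touching the synthesis code.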

This interface will be used to control parameters in custom software used for the creation of synthesized sound and manipulation of 3D models using SuperCollider, Max/Jitter, and/or OpenFrameworks. The 3D models themselves will be created / manipulated programmatically using tools such as OpenSCAD and Rhino / Grasshopper. They will be of ambiguous origin and functionality, but should be capable of existing within both physical and virtual worlds, and are to be of interest due to their modular design and combinatorial possibilities.



Fraktur v.4.2 is the current version of my performance system, which has evolved over the past 15+ years into a hybrid analog / digital, audio / visual platform comprising Eurorack-standard analog synthesis modules in conjunction with custom hardware and software.

The digital audio portion of the system runs on a Mac Mini via custom software written in SuperCollider, and signals are passed bidirectionally between SuperCollider and the analog synthesis modules via ADAT optical cables using a MOTU 828 mk3 audio interface. The digital video portion of the system runs on a Macbook Pro via custom software written either in Max/Jitter or OpenFrameworks (depending on the specific project), and video signals are passed bidirectionally between this software and the analog domain via a BlackMagic UltraStudio Express, Canopus ADVC-110, and various HDMI / VGA to Component video scalers / converters.

The video software is currently self-contained and requires no external controls other than a stereo audio output, which is analyzed and the resulting data used to drive various processes. The audio software is controlled using a Vestax VCM-600 USB MIDI controller, custom software running on an iPhone written using OpenFrameworks, and a handful of knobs, switches, and slide potentiometers with center detents taken from Technics 1200 turntables, the outputs of which are passed to SuperCollider via an Arduino using the Simple Message System for serial communication.

This configuration yields some wonderful results, but is a bit unwieldy for touring / performances, so a primary goal of my project is to simplify and consolidate a number of these features into a more portable system that can be expanded as the needs of a given set dictate.



The MXPD-1 controller was the result of work undertaken at STEIM (Amsterdam, NL) and Kunstenaarslogies (Amersfoort, NL) in 2008, for use with a much simpler performance system; that interface used a keypad to reconfigure the controls dynamically. My final project for Fab Academy will utilize a number of Vestax IF-37 faders, as found in the MXPD-1, and will take a similar approach to re-configurability, using commands sent to Bela over Open Sound Control.



My current artistic practice is grounded in research into the history and development of analog computers, video synthesizers and image processors, the ways these technologies helped shape the work of video art pioneers during the late 1960s and 1970s, and how this work has shaped our contemporary media landscape.

This research is not strictly a logocentric endeavor, but is deeply rooted in experience-based learning and the production of new artistic works. Such works often take the shape of improvised audio / visual performances as well as fixed media works that act as documentation of fleeting moments: audio/visual "Processes" such as this one, and still image "Sequences" created using generative and evolving systems with no fixed duration.



Excerpt of live networked performance at Tele-present // Tele-musik // Tele-vision (Alfred University, 11.09.2016).



This website is hand-coded using HTML5 and CSS. The style sheets were created using HTML5 Boilerplate v5.0 as a starting point; the background animation is made using p5.js; all other motion is created using a combination of CSS tricks and jQuery.

Each week’s assignment is placed into a separate article, which is allowed to scroll using the overflow-y: scroll CSS property. Each article is then placed in a section with overflow-y set to hidden, thus constraining the actual height of the site to the height of the browser; the overflow-x attribute of the section is set to auto, allowing for scrolling whenever the site is wider than the browser window. In order for this to work correctly, the height of both the html and body elements must be set to 100%, and the position of the section must be absolute.

Each article is initially set to 50% opacity by setting its color to rgba(255,255,255,0.5) (i.e. 100% red, green, and blue, but 50% opaque) and its background to rgba(0,0,0,0.5) (i.e. 0% red, green, and blue, that is to say black, but 50% opaque). Upon hovering, a given article is made to look more “solid” by setting its color to rgba(255,255,255,1) (i.e. 100% on all channels: red, green, blue, and alpha) and its background to rgba(0,0,0,0.75) (75% opaque, which allows a peek through to the background while still giving a feeling of “solidity” and “focus” to the current section).

Similar differences in opacity have been applied to anchor elements (used in this case as hyperlinks), whose attributes are animated solely using a 0.5s ease-in transition that changes the values of the corresponding background and opacity (as opposed to the articles, which are animated via jQuery calls, as explained below). In order to demonstrate user focus on images, they are given a medium grey one-pixel outline (offset inwards by one pixel so that the outline does not interfere with the overall dimensions of the image); when they are hovered over, the outline changes to solid white. All of these changes to the HTML5 Boilerplate template may be found within the style.css file, under the section labeled Mark Cetilia 2017.
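Condensed into a fragment, the scheme described above looks roughly like this (an approximate reconstruction with simplified selectors, not a copy of the actual declarations in style.css):

```css
/* Approximate reconstruction; the real rules live in style.css
   under the "Mark Cetilia 2017" section. */
html, body { height: 100%; }

section {
  position: absolute;
  overflow-y: hidden; /* constrain the site to the browser height */
  overflow-x: auto;   /* allow horizontal scrolling when wider */
}

article {
  overflow-y: scroll;
  color: rgba(255, 255, 255, 0.5); /* 50% opaque white text */
  background: rgba(0, 0, 0, 0.5);  /* 50% opaque black backdrop */
  pointer-events: none;            /* re-enabled by jQuery after fade-in */
}

/* anchors animate via CSS alone; article opacity is driven by jQuery */
a { transition: background 0.5s ease-in, opacity 0.5s ease-in; }

img { outline: 1px solid #888; outline-offset: -1px; }
img:hover { outline-color: #fff; }
```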

In order to generate the animation by which each article appears in the site, I used the jQuery animate function within animation.js, called upon $(window).load, iterating over each article using the jQuery call $("article").each(function(index) {...}); and creating an anonymous function that passes its instance number on to the code contained therein. Each instance then waits 350ms times its index before beginning its animation cycle, eases in to 50% opacity over 500ms using a sine function, and upon completing this animation, enables all pointer-events (which are disabled by default in the CSS, using pointer-events: none). At the same time, the article is given another set of animation instructions: namely, to animate its opacity to 100% when hovered over, and back down to 50% when not. This animation cycle could have been scripted using built-in CSS transitions, but in this case, I find the easeInSine function provided by jQuery to be more engaging than the ease-in function available via CSS alone.

The animation.js file is also responsible for allowing users to “preview” images full-screen, by dynamically adding and removing HTML content to a span named preview at the end of this (index.html) file. When a user clicks on an image, the preview function found in animation.js is triggered (having been added using a call to $("img").each(function(index) { ... }); on $(window).load, as seen previously in animating all of the article elements). This preview function not only writes the image to the preview span, but also moves the fill span within which the image is nested to a z-index of 3: above all the other content of the page. It also animates the opacity of this element to 100% using a 1000ms easeInSine function. If anything in this element is clicked upon (the 66.6% opaque background or the image itself) or any key is pressed, a kill function is called, which animates the fill span back to 0% opacity, then moves it to a z-index of -2, behind the content of the website.
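The staggered fade-in can be sketched as follows (a sketch, not the actual contents of animation.js: the staggerDelay helper is mine, and the easeInSine easing assumes a jQuery easing plugin is loaded alongside jQuery itself):

```javascript
// The per-article delay is a pure function, so the timing logic
// can be exercised even outside the browser.
function staggerDelay(index, stepMs) {
  // each article waits 350 ms times its position before fading in
  return index * stepMs;
}

// Guarded so this file can be loaded without a DOM / jQuery present.
if (typeof $ !== "undefined") {
  $(window).on("load", function () {
    $("article").each(function (index) {
      $(this)
        .delay(staggerDelay(index, 350))
        .animate({ opacity: 0.5 }, 500, "easeInSine", function () {
          // fade-in complete: make the article interactive
          $(this).css("pointer-events", "auto");
        });
    });
  });
}
```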

The animated background of the page is created in logo.js, using a 3D model whose view is obstructed by an opaque black box at the clipping plane, causing sections of the model to continually undulate into and out of view, appearing and disappearing as its position on the z-axis is slowly modulated based on a slowly incrementing / decrementing timer. The color of the model changes based on the position of three lights: a medium intensity red pointLight set at (500,0,200); a blue directionalLight set at a position of 1 on the z-axis, with positions on the x- and y-axes determined by the position of the user’s mouse on the screen; and a green pointLight positioned at (0, 0, 300). The positions of these lights all change based on the position of the mouse on the y-axis, and the blue and green lights are furthermore rotated on their x- and y-axes based on the position of the user’s mouse on the screen. The model itself is rotated on the x- and y-axes based on the position of the mouse on the screen; the black box, however, remains stationary, its size and position only being impacted by the size of the canvas itself. It is important to note that this entire 3D image is being generated within an HTML5 canvas using WEBGL; the canvas itself is set at a fixed position of 0,0 (left and top positions) at a z-index of -1 (behind the site content). In order to ensure that the canvas always fills the screen, a windowResized function may be found in logo.js which calls p5.js’s resizeCanvas accordingly.

This site is maintained using git: git pull updates the copy of the repository on my local computer with any changes that have been made to the remote repository; git add * stages any files that have changed on my local machine; git commit -m "some message" stores these changes locally with a note as to what they might be; and git push updates the remote repository with the commits that were just made.


This week, I created design prototypes for the new interface, and modeled them to scale in Rhino after initial sketches made using Adobe Illustrator.

To get started, I first measured the components I will be using, then laid out a rough footprint for a single channel in Illustrator. I began by making three rows and two columns of 0.5" circles, beginning at 0.25" down and 0.25" to the right of the top left corner of the page, and spaced 0.25" apart on each axis, to represent six potentiometers.

I then made a 1.5" x 4" rectangle to represent the volume slider at 0.25" up and 0.25" right of the bottom left corner of the page.

Next, I made a 1.4444" x 0.2361" rectangle to represent the center-detent horizontal fader, and using Illustrator’s Smart Guides (under View -> Smart Guides), centered it over the volume slider using the Align palette with even space on the y-axis between the bottom of the pots and the top of the volume slider.

I then grouped all of these objects, and copied the group, using the Paste in Front command (Edit -> Paste in Front), and the Move command (Object -> Transform -> Move) to duplicate the group and move it along the x-axis 1.5 inches eight times.
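The same layout can be checked numerically (dimensions in inches, taken from the steps above; the variable names are mine):

```python
# Layout of one channel: 3 rows x 2 columns of 0.5" pot circles,
# starting 0.25" in from the top left corner, spaced 0.25" apart.
POT_DIAMETER = 0.5
MARGIN = 0.25   # offset from the page corner
GAP = 0.25      # spacing between pots on each axis

# top-left corners of the six potentiometer circles
pots = [(MARGIN + col * (POT_DIAMETER + GAP),
         MARGIN + row * (POT_DIAMETER + GAP))
        for row in range(3) for col in range(2)]

# overall channel width: margin + pot + gap + pot = the 1.5" pitch
# used when duplicating the channel along the x-axis
channel_pitch = MARGIN + 2 * POT_DIAMETER + GAP
print(channel_pitch)  # 1.5
```

The margin, two pot diameters, and the gap between columns add up to exactly the 1.5" move distance, which is why the duplicated channels tile with consistent spacing.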

Having determined the dimensions and spacing in Illustrator, I began rendering each of the components to scale in Rhino. To start, I took on the D-shaft potentiometer. This is a fairly simple shape, which I created by combining a series of cylindrical forms for the base, ridges, and shaft, with a box at the dimensions of the actual potentiometer’s footprint. To create the D-style shaft, I simply made a box, aligned it with the shaft, and used Rhino’s BooleanDifference Tool (under Transform).

I then created the knob for the vertical fader, first creating a box roughly half the desired depth of the knob (but at the desired width and height), then selecting the extrusion edges at the top of the box and bringing them in towards the center of the object, thus creating a trapezoidal solid. I then duplicated and mirrored this form across the z-axis in the Front view using the Mirror Tool (under Transform), then moved the mirrored sides slightly apart, added a rectangular solid between them, and used Rhino’s BooleanUnion Tool to combine the resulting forms. Next, I made the cutout area of the faceplate for the fader by creating two cylinders and a rectangular solid, which I combined to make one object using the BooleanUnion Tool. I then made the faceplate itself using the Box Tool, and used the BooleanDifference Tool to remove the cutout area from the faceplate.

Next, I duplicated and resized the knob from the vertical fader for use in the horizontal slider, and scaled it accordingly. I then made a faceplate for the horizontal slider using the BooleanDifference Tool on two Box objects.

Once I had modeled the components, I then placed them according to the plans developed in Illustrator, again beginning with a single channel, then duplicating that channel eight times. Once I had each channel in place, I created the enclosure, which is a simple Box with rounded edges. In order to round the edges of the enclosure, I used the FilletEdge Tool (under Solid Tools -> Fillet Edge), selecting all edges one at a time, and using the “Rolling Ball” Ball Type.

After the enclosure and components had been completed, I began experimenting with texturing using Rhino’s Material Editor, deciding upon the Black Leather Material for a backdrop, Black Matte Material for the faceplates, Default Material for the various knobs, and a modified version of the Black Matte Material (the only modification being that the color used is 25% lighter than the solid black of the Black Matte Material’s default color).

This was my first time really digging deep into Rhino, and while there is still a lot left to learn, I am starting to know my way around and am feeling a bit more confident with the interface and the platform in general. Next up: Grasshopper and perhaps rhino-python.



Made a toolkit for press-fit construction of simple rectilinear forms in Grasshopper featuring adjustable parameters for Notch Width, Material Thickness, Kerf, and Offset Amount (labeled in the Grasshopper script as “Fudge Amount” A and B).

A user may create any rectilinear shape in Rhino or Grasshopper, and the press-fit construction kit will automatically create a series of notches on each of the sides, allowing for a vast array of possible objects.

All parameters are made available to the user through GUI elements, and may be changed dynamically at any point, allowing for quick changes to be made based on materials and the laser cutter used, as well as easy experimentation in finding the perfect “Fudge Amount” values to ensure perfect press fit connections.

At the root of my Grasshopper script are the controls which are passed to each of two identical “clusters.” The top cluster generates the outgoing tabs, while the bottom cluster generates those that are inset into the form (though this may be changed by reversing the toggle with the inlet for each of the adjoining pieces).

Within each of these clusters, there are a number of other clusters, which we will call “Child 1A,” “Child 1B,” “Child 2A,” “Child 2B,” “Child 3A,” “Child 3B,” “Child 4A,” and “Child 4B”: a pair for each of the Left, Right, Top, and Bottom walls.

The “A” children are used to determine the origins for each of the planes associated with the tabs that are to be added, as well as the amount by which these tabs are to be offset from their origins.

The “B” children are used to actually create the tabs themselves, and are offset using the information passed in by the “A” children.

The tabs on the Right side of the rectilinear form are offset to the opposite side of the plane’s origin point from those on the Left, which requires an additional transformation.

The tabs on the Top and Bottom sides of the rectilinear form must be oriented in a horizontal direction, but otherwise function similarly to the clusters associated with generating the Left and Right tabs.

Back at the root of the Grasshopper script, the output of the cluster used to generate outgoing tabs is combined with the original geometry using the Insert Items component in conjunction with the Region Union component (which acts much like the BooleanUnion Tool in Rhino), and the resulting geometry offset on the y-axis. On the other side of the equation, the inward-facing tabs are cut out of the original geometry using the Region Difference component (again, roughly equivalent to the BooleanDifference Tool in Rhino), and the resulting geometry offset on the y-axis.

In order to create the box for this week’s assignment, I used this press-fit construction kit to quickly run through numerous iterations of the joint, experimenting with these values until I had a secure connection. As shown in the image above, I set the Material Thickness to 0.1026", the Kerf to 0.01", the Notch Width to 0.5", and Offset Amounts A and B to 0.002".
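As a sanity check on those numbers, here is one plausible way the parameters combine (the actual Grasshopper definition may differ; the function and its formula are my own illustration):

```python
# A press-fit slot is drawn narrower than the material because the
# laser's kerf removes extra material from every cut; the fudge amount
# then widens it slightly, a value found through trial and error.
def slot_width(material_thickness, kerf, fudge):
    return material_thickness - kerf + fudge

# values from the run described above (all in inches)
print(round(slot_width(0.1026, 0.01, 0.002), 4))  # 0.0946
```

Running several joints while sweeping only the fudge amount is much faster than re-measuring the cardboard for every iteration, which is the point of exposing these as sliders.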

To set up the laser-cutting job, I first turned on the Red Diode Pointer and pressed X/Y Off, moved the lens assembly to the top left corner of the bed, and pressed Set Home to establish the Home position of the bed. I then set the height of the bed by flipping down the Manual Focus Gauge on the Lens Carriage Head, and raised the bed until the Manual Focus Gauge touched the top surface of the cardboard and lifted up very slightly. I then brought my .pdf file into CorelDRAW, ensured that the lines were all Hairline weight and 100% red, and hit “Print.” In the Epilog dialog, I set the Resolution to 600 DPI and the Vector Settings to 100% Speed, 70% Power, and 2500Hz Freq.

I also vinyl-cut a sticker to cover the Apple logo on my laptop with the letter “m” from my group Mem1’s logo (special thanks to Marco Pinsker for the logo design). The file used consists of a circle, which delineates the outside of the sticker, enclosing the letter “m,” which will be cut out to take advantage of the mirrored surface of the Apple logo beneath for a second “ink.” Neither object was given a fill color; both were outlined in solid red with a 0.5 point weight.

In order to set up the job, I first loaded the roll of vinyl into the machine, pulled it through the back of the vinyl cutter, and ensured both that the rollers were in the areas of the bed delineated by white lines, and that the material covered the optical sensor. I then lifted the lever to lock the sheet into place, and told the cutter that I was using a roll rather than a sheet. I was careful to place my vector graphic at the bottom of the page to avoid wasting material, and sent the job to the Roland GX-24 vinyl cutter by hitting Print. Once the vinyl had been cut, I cut my shape out of the sheet with a bit of area around it, then pulled off the vinyl on the outside of my shape. I then laid the shape on a table and applied transfer tape to the top (cutting the transfer tape with an X-Acto knife), carefully tacking it down smoothly to avoid any bubbles, and burnishing the transfer tape to ensure that the vinyl was firmly bound to it. I then pulled the transfer tape back, making sure that the vinyl was completely adhered, and placed the resulting decal on my laptop smoothly and evenly, again burnishing it in place, then peeled off the transfer tape, making sure that the vinyl did not lift up in the process.



Milled, hand-soldered, and programmed the FabISP in-circuit programmer. I was able to flash the board but did not use it to program any other boards.

In order to mill the board, I first applied carpet tape to the back of a 4x6" copper-coated phenolic paper board, and set the zero position on the z axis of the milling machine by slowly nudging the end mill down until it touched the board, letting off a tiny bit of dust.

For milling the traces, I used the Fab Modules’ “mill traces (1/64)” settings (diameter = 0.4mm, offsets = 4, overlap = 0.5, error = 1.1, intensity = 0.5, z = -0.1 mm, speed = 4mm/s, jog = 1.0mm) and a 1/64" end mill.

For cutting out the board, I used the Fab Modules’ “cutout board (1/32)” settings (diameter = 0.79mm, offsets = 1, overlap = 0.5, error = 1.1, top intensity = 0.5, top z = -0.6mm, bot intensity = 0.5, bot z = -1.7mm, cut depth = 0.6mm, speed = 4mm/s, jog = 1.0mm) and a 1/32" end mill.

In order to populate the board, I used the light from an LED magnifying lamp and Optivisor headband with 10 diopters (3.5x magnification at 4") to see everything clearly. For each component, I added a small bit of solder to one side, then pushed the components into place using a set of precision tweezers. Once the first side was tacked into place, I then soldered the other side properly, then went back to the first side and applied a tiny bit of solder to ensure a clean connection.

Once I had populated the board, I connected it to the lab’s Linux box both over USB (for power) and via an AVRISP mkII programmer, and began programming the board.

First, I downloaded the firmware from the class site, and edited the Makefile to ensure that it worked with the AtTiny84 chips we are using in our lab.

Then, I compiled the firmware using the make clean command,

used the make hex command,

set the fuses so that the board would use the external clock using make fuse

and programmed the board to be an ISP using the make program command.

This step also rewrote the fuses.

I then disconnected my board from the Linux box and reconnected it using only the USB cable, and confirmed that the Linux machine could see it using lsusb.

Finally, I removed the 0Ω resistor and solder bridge from my fabISP so that it can be used to program other boards.



As a group, we printed the Test your 3D printer! model by ctrlV, which we printed using Cura with a Layer Height of 0.06mm, Shell Wall Thickness of 0.8mm, Shell Top/Bottom Thickness of 0.8mm, Infill Density of 20%, Print Speed of 60mm/s, Travel Speed of 120mm/s, and Retraction, Print Cooling, and Support (placed Everywhere) enabled; the Build Plate Adhesion Type was set to Brim with an 8.0mm Brim Width, and the Print Sequence was set to All at Once. These settings resulted in a fairly decent print, though some details were lost in translation (especially the high peaks of the cone and pyramid shapes). It might be helpful to take it a little slower, and try adjusting the temperature settings for a more accurate print.

We also made a 3D scan of my head using the Sense 3D scanner and Sense software. In order to get a full 3D model, we used a (human-scale) Lazy Susan, which I stood on, and was rotated upon while being scanned. We didn't completely capture the top of my head on the first scan, so we ran it again, being sure to angle the scanner up and over on each pass to get full coverage.

The rendering seen above was made using Weaverbird's Sierpinski Triangles component in Grasshopper.

Finally, I modeled a Mobius ring in Rhino, after a tutorial in Michael van der Kley's Working with Rhinoceros 5.0. In order to create a Mobius ring in Rhino, you simply create a rectangular solid beam, then use the “Twist” Tool (under Deformation Tools) to twist the beam 360 degrees over its length.

Once you have created the twisted beam, you then create a circle, and use the “Flow Along Curve” tool (with “Stretch” enabled) to convert the twisted beam into a ring.

I then 3D printed it using AS220's Ultimaker 2 3D printer using Cura with a Layer Height of 0.06mm, Shell Wall Thickness of 0.8mm, Shell Top/Bottom Thickness of 0.8mm, Infill Density of 20%, Print Speed of 60mm/s, Travel Speed of 120mm/s, and Retraction, Print Cooling, and Support (placed Everywhere) enabled; the Build Plate Adhesion Type was set to Brim with an 8.0mm Brim Width, and the Print Sequence was set to All at Once.



Designed a “hello-ftdi” board in EAGLE, first creating the schematic, based on the Introduction to EAGLE tutorial, including the addition of a button and LED. Once I created the schematic, I then set about arranging the parts & routing the wires in EAGLE’s Board view.

I intentionally didn’t use auto-routing or peek at the final version shown in the tutorial, to force myself to learn how to do this whole process from scratch. This naturally made things a lot trickier than it perhaps should have been, and I ended up having to use a handful of 0Ω resistors to route wires through.

Once I had completed routing the wires, I used EAGLE’s DRC tool to ensure that there was a space of at least 16mil between all Wires and Pads, and exported the board as a 1000dpi monochrome .png image file.

I then milled and populated the board. I was able to successfully set the fuses on the board using the “Burn Bootloader” command in the Arduino IDE and upload a test sketch to the board. For some reason (running OS X 10.12 on a late 2016 15" MacBook Pro), I am only able to do this using an AVR MKII ISP; no other ISP (including any of the FabISPs in our lab) seemed to work. Since the AVR MKII ISPs have been discontinued, I picked up a $13 clone from Amazon, which seems to work perfectly.

For milling the traces, I used the Fab Modules’ “mill traces (1/64)” settings (diameter = 0.4mm, offsets = 4, overlap = 0.5, error = 1.1, intensity = 0.5, z = -0.1 mm, speed = 4mm/s, jog = 1.0mm) and a 1/64" end mill.

For cutting out the board, I used the Fab Modules’ “cutout board (1/32)” settings (diameter = 0.79mm, offsets = 1, overlap = 0.5, error = 1.1, top intensity = 0.5, top z = -0.6mm, bot intensity = 0.5, bot z = -1.7mm, cut depth = 0.6mm, speed = 4mm/s, jog = 1.0mm) and a 1/32" end mill.



Made a “big” Mobius ring based on the model I created in Week 5 using Bowerbird's BB Waffle component in Grasshopper, and cut it using a Shopbot PRSstandard CNC router.

The process of designing the waffled ring was fairly simple: I began with the Mobius ring file from Week 5, and opened Grasshopper. In Grasshopper, I converted the mesh to a Brep using the Brep Mesh component. I then sent the output to the BB Waffle component, joined the U and V slices using the Brep Join component, extruded each of the slices a given distance (all set using a Number Slider component), and baked the result to preview the shape.

In order to make the slices into a usable 2D file, the slices and their corresponding Section Planes had to be passed from the BB Waffle component to an Orient component, which in turn makes sure they are placed on the surface of an XY plane.

Once I had all the sections positioned on a flat plane, I then baked them and exported the resulting 2D image as an Illustrator file, and opened the file. In Illustrator, I scaled the pieces to fit perfectly on a 4x8' sheet of plywood (rearranging the pieces slightly to get the most surface area out of the sheet).

To set up the zero position on the Z axis, I used the automatic zero plate method, and set the zero positions for the X and Y axes to the top right corner of the bed.

I used VCarve Pro to generate the toolpaths for the Shopbot with the “1/4" Up-cut (52-910)” setting with a Spindle Speed of 1200rpm, Feed rate of 2.5 inches/sec, and Plunge Rate of 0.5 inches/sec, in conjunction with a 1/4" end mill.

In order to set up the job, I first added drill holes in VCarve to the middle of each slice (thus avoiding the necessity of cutting out tabs by hand at the end of the job), and tacked down the full sheet to the bed of the CNC.

Once the machine was set up, I did a couple of passes on two of the pieces to test the fit of the joint, beginning with the process of cutting out the drill holes, and adding a screw to each hole to ensure that the piece would not fly out when it was cut.

In order to get the correct fit, I changed the Allowance variable in the Machine Vectors tab of VCarve Pro. Once the fit was snug, I cut the screw holes for the whole job, and added all the screws, then began cutting the slices, labeling each one to ensure that I knew which piece went where.

It is worth noting that our Shopbot is “open loop” (i.e. it does not provide any feedback from the axes), so I had to keep an eye on the machine the whole time and re-run a couple of pieces at the end of the job.

Once all the slices were cut, I removed each one with its label intact, and began the physical construction process. With over 50 tightly-spaced pieces, putting the ring together was a laborious process that involved starting from the middle of each of the sides of the ring and moving inwards.



Used the “Burn Bootloader” command in the Arduino IDE to set the fuses on my AtTiny84 so that it looks for an external 20MHz clock. In order to test the functionality of my AtTiny84 board, I wanted to accomplish two objectives: First, I wanted to make sure that the LED could indeed be turned on and off. Then, I wanted to make sure that the value of the button was being received; I decided that this could be best demonstrated with the blinking of the on-board LED.

The AtTiny84 board I made in Week 6 hosted a button on Pin 10 (via Analog to Digital Converter #3, known in the Arduino IDE as pin 3), and an LED on Pin 6 (which is one of three pins on the AtTiny84 that allow for Pulse Width Modulation, known in the Arduino IDE as pin 7).

The code for testing the LED using the Arduino IDE was simple: First, pin 7 had to be set as an output in the setup() function. Then, the LED could be turned on and off by sending it HIGH and LOW values using the digitalWrite() function, with a short pause between each command via the delay() function. In order for the LED to keep blinking as long as the board has power, these calls must happen within the loop() function.

Testing the button was not much more difficult: Again, pin 7 had to be set as an output. Pin 3 (where the button is located), however, had to be set as an input using the internal pullup resistors (via the INPUT_PULLUP preprocessor macro). This all happens, as before, in the setup() function. This time, the loop() function obtains the value of the button, which reads 1 when the button is open and 0 when it is pressed. Given that the button input is pulled high using the internal pullup resistor, the value has to be inverted in order to work as expected (lit when the button is pressed down, off when no contact is made). This is accomplished by subtracting the incoming value from 1 (1 - 0 = 1 and 1 - 1 = 0).

In order to program the AtTiny with the code above, I used the “Upload” button within the Arduino IDE, making sure that the following settings were established under the “Tools” menu: Board: "AtTiny24/44/84", Processor: "AtTiny84", Clock: "External 20 MHz", Programmer: "USBTinyISP".

The AtTiny data sheet is an exhaustive treatise that covers the pin configuration of the AtTiny chips, their architecture, and example code for programming these boards. I plan to refer to it as a reference guide rather than trying to memorize its contents, and cannot claim encyclopedic knowledge of it, but from reading through the data sheet I have gleaned some important information.

First of all, it is important to note that the AtTiny has two ports: Port A and Port B. Port A is an 8-bit bi-directional I/O port, while Port B is a 4-bit bi-directional I/O port; both host internal pull-up resistors that may be selected for each bit.

In the pinout above, you can see that physical pin 1 is host to the supply voltage (labeled VCC), while physical pin 14 is host to ground (labeled GND). The pins associated with Port A may be found beginning at physical pin 13 (PA0), and descending to physical pin 6 (PA7). The pins associated with Port B, however, begin at physical pin 2 (PB0), ascending to physical pin 5; it is worth noting that PB3 (physical pin 4) and PB2 (physical pin 5) are in reverse physical order from what one might expect.

I cannot claim to understand each possible use case for the pins accessible via the AtTiny, but there are a number of cases which are immediately noteworthy to me. First, all pins associated with Port A host an ADC (analog to digital converter), allowing external voltages to be sampled and used to drive additional processes in the microcontroller; as previously mentioned, these pins all host internal pull-up resistors, which allows for noise-free operation (avoiding “floating” values) without the need for additional resistors to be added in an external circuit.

Furthermore, all pins may be set high or low using bit arithmetic. To turn on every pin on Port A, one may simply write PORTA = 0xFF; that is to say, the port’s eight pins are set with a single 8-bit write. If we wanted to set just pin 2 high and leave all other pins at their current values, we could write PORTA |= (1<<2), while to set the same pin low, we would write PORTA &= ~(1<<2). It is equally important to note that pins PA5 (physical pin 8), PA6 (physical pin 7), PA7 (physical pin 6), and PB2 (physical pin 5) are also available for use as Phase Correct Pulse Width Modulators with variable PWM periods. That is to say, these pins may be turned on and off at varying rates such that their outputs approximate analog values: if the PWM period consists mostly of “low” signals, it will approximate a low value (giving a dim glow to an LED, for example), while if it consists mostly of “high” signals, it will approximate a high value (in this case, providing a much brighter light from the same LED).
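As a host-side illustration of these operations, a plain uint8_t can stand in for the real PORTA register (so this is a sketch of the bit arithmetic rather than actual AVR code):

```c
#include <stdint.h>
#include <assert.h>

/* A uint8_t stands in for the 8-bit PORTA register. */
static uint8_t porta = 0x00;

static void set_all_high(void)  { porta = 0xFF; }                 /* all 8 pins high */
static void set_pin_high(int n) { porta |= (uint8_t)(1u << n); }  /* one pin high, others untouched */
static void set_pin_low(int n)  { porta &= (uint8_t)~(1u << n); } /* one pin low, others untouched */
```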

Other important pins of note include physical pin 9 (PA4), physical pin 8 (PA5), and physical pin 7 (PA6), which may be used respectively as SCK, MISO, and MOSI within the SPI (Serial Peripheral Interface) bus protocol; PA4 and PA6 may also be used respectively as SCL and SDA within the I2C protocol.

Though there are myriad additional uses for the pins (detailed in the “Alternate Port Functions” section of the data sheet), the final pin I wish to make note of here is physical pin 4 (PB3), which may be used to reset the microcontroller if held low for a period specified by the table above.

The AVR architecture used in the AtTiny chips is a Harvard architecture, which provides “separate memories and buses for program and data”; while one instruction is being executed, the next is pre-fetched from program memory, allowing instructions to execute with predictable, single-clock-cycle timing.

As a further test, I replicated the functionality of both programs shown above in C, without the use of the Arduino libraries, though I still used the Arduino IDE to burn the programs to the board, as elsewhere in this class.

In order to replicate this functionality, I began from Neil’s hello.arduino.328P.blink.c script, and modified it to work with my board, using PORT A rather than PORT B, with pins PA7 (physical pin 6 on the AtTiny84) for the LED and PA3 (physical pin 10 on the AtTiny84) for the button. In order to determine if the button is pressed or not, I use the bit_is_clear function from the avr library.



We spent a great deal of time deliberating about possible directions for our group project, and finally decided to build a 3-axis string-guided gimbal loosely inspired by the SkyCam. My primary contributions to this week's assignment centered around installing and testing the device, while Evan built the gimbal, Jose constructed a simplified pulley structure, and Eddie focused on documentation.

In heading towards an electronically-driven iteration of the machine, I helped begin making the interconnects and cables for communication between the boards. I also took the opportunity to start digging further into Python, a language I have only ever brushed the surface of, as I want a deeper understanding of its conceptual underpinnings; the Gestalt Nodes we will be using for the final version of the machine are controlled using Python.

For more information about the progress of this project, please visit the group project page.


Made a site-specific flicker device using a KYOTTO KB20C02A solid state relay + the AtTiny84 board I milled in Week 8.

For this to work, I repurposed the ground and MISO pins on the SPI header of the AtTiny84 board to directly drive the relay. I chose the MISO pin for the relay because it was easily accessible via the SPI header, requiring only jumper cables from the 6-pin header to Input 2 (+3~32VDC) of the relay, while ground is connected to Input 1 of the relay.

The code for the AtTiny84 board is fairly straightforward: in the setup() function, the relay pin is set as an output, while in the loop() function, a counter (named “current,” instantiated with a value of 0) is tested against a value of 255. Once it reaches 255, it is decremented by 1 on each pass of the loop() function until it reaches 0, at which point it begins incrementing by 1 on each pass. Once this value has been established, the result is written to the relay pin, and held for a given number of milliseconds (in this case, 2).
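The up-and-down ramp can be modeled as a small state machine; this is a hypothetical host-side sketch of the logic (the actual sketch writes each value to the relay pin and delays 2ms per pass):

```c
#include <assert.h>

/* Ramp `current` up to 255, then back down to 0, one step per loop() pass. */
static int current = 0;
static int direction = 1;   /* +1 while rising, -1 while falling */

static int next_ramp_value(void)
{
    if (current >= 255)     direction = -1;
    else if (current <= 0)  direction = 1;
    current += direction;
    return current;         /* in the real sketch, written to the relay pin */
}
```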

In order to control a flickering light using the relay, I cut the power side of a 2-prong AC extension cord, and fed it through the relay, ensuring that when the relay is switched off, no power flows from the outlet though the ground connection is maintained. I then connected a small incandescent clamp light to the extension cord.

It is worth noting that this specific solid state relay only switches at zero crossings (see attached data sheet), so modulating the width of the pulses sent to the relay to simulate analog signals of varying voltage (“PWM”ing the output from the pin) results in an aesthetically pleasing flicker pattern that changes in real time based on the irregularities inherent in the electrical signals.



My contribution towards our group project this week centered around getting the Gestalt Nodes up and running. This involved constructing node connector / adapter cables, downloading and installing the PyGestalt code and testing the boards, as shown on this page.

Given my background and interest in sound, once I got the boards up and running, I became fascinated with the sounds they made, and thought that it might be interesting to treat each motor as an instrument, setting them against one another playing polyrhythms. A common way of generating polyrhythms in electronic music today is known as "Euclidean" sequencing, and is based on an algorithm by E. Bjorklund found in the paper The Theory of Rep-Rate Pattern Generation in the SNS Timing Systems and popularized by Godfried Toussaint in such writings as his 2013 book The Geometry of Musical Rhythm: What Makes a "Good" Rhythm Good?

This algorithm was recently (and conveniently) implemented in Python by Brian House, so it seemed only natural that we create Euclidean sequences with the Gestalt nodes. One of our boards was faulty, so I began my initial tests on the bench with the two working boards / stepper motors.
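For reference, the maximally even patterns this family of algorithms produces can also be generated with a compact arithmetic formulation; note that this is not Bjorklund's recursive procedure itself, and its output may be a rotation of what his algorithm returns:

```c
#include <assert.h>

/* Fill pattern[0..steps-1] with a maximally even distribution of
 * `pulses` onsets: the same family of rhythms Bjorklund's algorithm
 * produces, though possibly rotated relative to its output. */
static void euclid(int pulses, int steps, int *pattern)
{
    for (int i = 0; i < steps; i++)
        pattern[i] = ((i * pulses) % steps) < pulses;  /* 1 = onset, 0 = rest */
}
```

For example, euclid(3, 8, …) yields the familiar “tresillo” pattern 10010010.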

The problem with the third Gestalt Node turned out to be a faulty RS-485 transceiver IC, which handles communication between the boards. Once the IC was replaced, we were able to successfully define all three boards and begin testing various ways of controlling all three motors.

The first task was to implement the Euclidean sequencer in three-dimensional space, which resulted in something of a spasmodic dance.

The next task was something of a “hello world” involving sending the gimbal up and down repeatedly in three-dimensional space by spooling each axis the same amount simultaneously.

And finally, random positions within a limited range of motion were sent to each motor, such that unique patterns of movement could unfold.

For more information about this project, please visit the group project page.



Made a mold for a potential cassette insert for my Apathy and Steel project. The Apathy and Steel logo is fairly complicated, so finding a way to machine the wax required reconsidering the design a bit. First, the logo needed to be reduced to its most essential elements.

To start, the letterform grid had to be simplified to a single circle with diagonal axes. Then instead of distinguishing the grid and letters through color or fill vs. stroke, each of the elements needed to be separated by height.

Since the Fab modules allow height to be defined by a greyscale value, this was easily accomplished by assigning each section of the logo to a different color in Adobe Illustrator, without having to create a heightmap or 3D model.

I then split the logo into two passes: one rough cut to set the depth for the entire insert (and a border that surrounds it so that the logo is not being pressed against the case by the cassette itself), and another to take care of the detail.

After the wax had been cut, I then created a silicone mold using Smooth-On OOMOO 30 and let it set.

I then made a small barrier surrounding the area to be cast using four pieces of hardwood.

Finally, I mixed Hydrocal White Gypsum Cement with water until it was about as thick as a bowl of melted ice cream, and poured it into the mold.

As the mold began to set, I gently rocked it to try to remove any air bubbles, and then left it alone to harden.

Once the hydrocal had completely hardened, I flipped the mold over and removed the cast.

I am pleased with how the resulting cast came out (it has a nice “weathered” look that naturally complements the Roman type used in the logo), but I poured it a bit thicker than I had anticipated, which made the cast too thick to fit into a standard cassette case. If I were to make a two-part mold, the thickness of the resulting object could be determined not by the depth of my pour but by the mold itself, which would eliminate any further such concerns.

For milling the wax, I first did one pass for the outside area with the Fab Modules’ “wax rough cut (1/8)” settings as a starting point but with a 1/16" (1.5875mm) end mill, a cut depth of 0.25mm and a total depth of 1/16" (diameter = 1.5875mm, offsets = -1, overlap = 0.25, error = 1.5, top intensity = 1, top z = 0mm, bot intensity = 1, bot z = -1.5875mm, cut depth = 0.25mm, speed = 20mm/s, jog = 1.0mm).

I then did another pass for the inside area, again with the Fab Modules’ “wax rough cut (1/8)” settings as a starting point but with a 1/32" end mill and a total depth of 1/16" (diameter = 0.79mm, offsets = -1, overlap = 0.25, error = 1.5, top intensity = 1, top z = 0mm, bot intensity = 1, bot z = -1.5875mm, cut depth = 1mm, speed = 20mm/s, jog = 1.0mm).




This week, I focused on getting an endless rotary encoder (ordered from Amazon) working with the AtTiny84. To do this, I constructed a breakout board using EAGLE, and programmed the AtTiny84 using the Arduino IDE.

Unlike traditional potentiometers, rotary encoders are not limited in their range of motion, but may turn continuously in either direction, producing 2-bit Gray codes (named after Frank Gray, who patented this binary numeral system) that relay which direction they are being turned. When one pin changes state, sampling the outputs of both pins reveals the direction: if both pins hold the same value, the rotary encoder has been turned counterclockwise; if they hold opposite values, it has been turned clockwise.

Decoding the Gray Code was accomplished by setting two of the AtTiny84’s pins, rot_a and rot_b, to accept inputs (with their pullup resistors enabled) in the setup() function. In the loop() function, a variable called rot_a_state is set based on the input of rot_a using the digitalRead() function. If this variable is different from the value stored in a variable called rot_a_state_prev during the last pass through the loop, then we know that someone has turned the knob one way or the other. If rot_a and rot_b now hold opposite values, the knob has been turned clockwise, so we increment a variable called rot_enc_val by 0.5 (since rot_a changes on both sides of each knob turn, this allows us to see when the knob has been turned partway). If they hold the same value, the knob has been turned counter-clockwise, and rot_enc_val should be decremented by 0.5.
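The decoding logic can be sketched in plain C, counting in integer half-steps (divide by two for whole detents). Variable names follow the prose, and which direction counts up ultimately depends on how the encoder is wired:

```c
#include <assert.h>

/* Called once per loop() pass with fresh reads of the two encoder pins.
 * Returns the running count in half-steps (divide by 2 for whole steps).
 * Which comparison counts as "clockwise" depends on the wiring. */
static int half_steps = 0;
static int rot_a_prev = 1;   /* pulled high at rest */

static int encoder_update(int rot_a, int rot_b)
{
    if (rot_a != rot_a_prev) {             /* pin A changed: the knob moved  */
        if (rot_a != rot_b) half_steps++;  /* opposite values: clockwise     */
        else                half_steps--;  /* same values: counterclockwise  */
        rot_a_prev = rot_a;
    }
    return half_steps;
}
```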

More information may be found on rotary encoders via How to Mechatronics, Hobbytronics, and the Arduino Playground.

Testing the momentary switch on the rotary encoder is simply a matter of checking a digitalRead on a pin labeled rot_switch. In order to keep from streaming data unnecessarily, the switch’s values are only sent when the corresponding variable changes (as with the rotary encoder, the value is stored in a state variable known as rot_switch_state, and stored for the next pass through the loop as rot_switch_state_prev). Since we are using pull-up resistors, the value of the switch must be inverted by subtracting it from 1, as we saw in the case of the button in Week 6.
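The send-on-change pattern can be sketched as follows (a hypothetical host-side model; the real sketch sends the changed value over serial rather than returning it):

```c
#include <assert.h>

/* Report the switch only when its (inverted) value changes, to avoid
 * streaming redundant data. Returns -1 when nothing should be sent. */
static int rot_switch_state_prev = -1;   /* force a send on the first read */

static int switch_event(int raw_reading)
{
    int rot_switch_state = 1 - raw_reading;   /* invert the pulled-up input */
    if (rot_switch_state == rot_switch_state_prev)
        return -1;                            /* no change: send nothing    */
    rot_switch_state_prev = rot_switch_state;
    return rot_switch_state;                  /* changed: send this value   */
}
```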

My personal iteration fuses strategies from the aforementioned tutorials while using the pull-up resistors built into the AtTiny84 to simplify the circuit. It senses “half-steps”: it outputs whole numbers once the encoder has been turned completely to the next step, and half-increments while a turn has begun but not yet finished.

In making this board, I sought to create a reusable board with a minimal footprint that could easily be adapted and folded into other projects (specifically, my final project, as I am planning for it to include a rotary encoder). Given that the AtTiny84 board I made in Week 6 made all the necessary pins accessible via the SPI header (ground and three Analog to Digital Converter pins), I chose to focus on a design that could be reused in multiple scenarios, and for the purposes of this week’s assignment, interfaced via headers and jumper cables.

It is worth noting that the current version of EAGLE will export images at twice the size you specify if you are using an Apple computer with a Retina display and do not have “Open in Low Resolution” enabled. This is now the second time I have learned this lesson; hopefully I will not have to learn it again.

For milling the traces, I used the Fab Modules’ “mill traces (1/64)” settings (diameter = 0.4mm, offsets = 4, overlap = 0.5, error = 1.1, intensity = 0.5, z = -0.1 mm, speed = 4mm/s, jog = 1.0mm) and a 1/64" end mill.

For cutting out the board, I used the Fab Modules’ “cutout board (1/32)” settings (diameter = 0.79mm, offsets = 1, overlap = 0.5, error = 1.1, top intensity = 0.5, top z = -0.6mm, bot intensity = 0.5, bot z = -1.7mm, cut depth = 0.6mm, speed = 4mm/s, jog = 1.0mm) and a 1/32" end mill.



For this week's assignment, I made a mold for acoustic tiling out of machinable foam using Rhino, the ShopBot and PartWorks 3D. In terms of the workflow for the big CNC, I used carpet tape to laminate two sheets of 1-inch polystyrene foam and to fixture it to the bed. As in Week 7, I used the zero plate method to ensure that the height on the Z axis was correct. However, instead of setting the zero positions for X and Y axes to the top right side of the bed, I set the zero to the center of the bed, and placed my foam in the center of the bed. This is much easier for 3D cutting (especially if doing 2- or 4-sided 3D, which I plan to experiment with in the future).

To make the mold, I first modeled the shape in Rhino, then inverted it so that I could make a one-part mold with the surface of the cast at the bottom of the mold. I then cast the mold using a composite of burlap and Super Sap CLR Epoxy resin (2 parts epoxy to 1 part ONF Hardener). After coating the foam in plastic wrap (to act as a mold release), the resin was applied to the burlap using a plastic applicator.

The burlap was then formed to the surface of the foam and another layer of plastic applied to act as a release agent for the other side of the piece. Once this layer of plastic was in place, I poked a number of holes into the plastic and added batting above the plastic to allow the excess resin to seep into the batting, placed everything in a vacuum bag, sealed the bag shut and vacuumed all the air out of the bag.

I used VCarve Pro to generate the toolpaths for the Shopbot, with the “1/2" End Mill (Drill Mill 0.5 inches)” setting with a Spindle Speed of 1200rpm, Feed rate of 1.5 inches/sec, and Plunge Rate of 0.25 inches/sec, in conjunction with a 1/2" end mill.

I am pleased with how the final result came out (see above), and I can certainly see many potential uses for both working with composite materials and milling in three dimensions in my future.



For this week's assignment, I made a network board for RGB LEDs using the I2C bus.

For milling the traces, I used the Fab Modules’ “mill traces (1/64)” settings (diameter = 0.4mm, offsets = 4, overlap = 0.5, error = 1.1, intensity = 0.5, z = -0.1 mm, speed = 4mm/s, jog = 1.0mm) and a 1/64" end mill, but with a slight alteration: I had to set the diameter to 3.6 instead of the default 0.4 in order to cut the .

For cutting out the board, I used the Fab Modules’ “cutout board (1/32)” settings (diameter = 0.79mm, offsets = 1, overlap = 0.5, error = 1.1, top intensity = 0.5, top z = -0.6mm, bot intensity = 0.5, bot z = -1.7mm, cut depth = 0.6mm, speed = 4mm/s, jog = 1.0mm) and a 1/32" end mill.

Through an intensive testing process, I discovered that my boards had a handful of errors: First, the Master board did not have a pull-up resistor on the SCL line. Second, the Slave boards did have a pull-up resistor on the SDA line (I know that the bus needs one, but am not certain whether it is harmful for every board on the channel to have one; in testing, I tried both options, with no change in results). Finally, my board design included connections from MISO to the I2C header rather than from SCL to the I2C header. I made the necessary alterations to the boards, and proceeded with my testing process.

Unfortunately, I was unable to communicate between the boards, even though I made what I believe to be a fairly valiant effort. Since the AtTiny84 does not feature hardware I2C as many other microcontrollers do, it is not capable of running the Arduino Wire library, so I needed to use software I2C via the TinyWire library. TinyWire was not developed for the AtTiny84, but has been ported as the terribly-documented and separately-developed TinyWireM and TinyWireS libraries, which seem to have been left incomplete: see, for example, the “TODO (by others!)” notes in the TinyWireS.h file, and, more importantly, the fact that USI_TWI_Master.c, used in TinyWireM, is only made to work with a 1MHz clock, a fact that only became clear after digging into the code of a dependency. However, even after setting the fuses on both the Master and Slave AtTiny84 boards to accommodate the use of the 1MHz internal clock via the Arduino IDE, I was still not able to send or receive data.

At this point, I began digging around the internet and discovered that Sungeun Lee, a prior Fab Academy student, had run into similar problems on an AtTiny45, as seen here; they apparently resolved them by removing any calls to SoftwareSerial, which is incapable of running on the 1MHz clock. Removing such calls from my code and setting up an oscilloscope yielded some useful information.

Even though I am unable to communicate between boards for some reason, I can indeed see that, as per the AVR310 Application Note, the TWI Address is indeed being sent on both SDA and SCL lines, but is hanging up at ACK, which should go low but is staying high, and the Data never follows. While I had hoped to get data from one AtTiny84 to another, seeing that the address is indeed sent and received across the I2C bus is at least something of a consolation prize.

The code I ended up with on the sender boils down to two primary events: setting three pins on the sender to random values between 0 and 255; these pins correspond to the red, green, and blue channels of the RGB LED. This works perfectly (on both boards), as shown in the picture below; this picture also details the specific means of physically connecting the boards using 6-pin IDC cables.

Using TinyWireM, the AtTiny84 then proceeds to begin a transmission. In order to create an event that should be easy to recognize on the oscilloscope, I then had TinyWireM send a value of 0x01, delay for 500 milliseconds, send a value of 0x80, delay another 500 milliseconds, then repeat 9 more times. After sending this data, the transmission is terminated using TinyWireM’s endTransmission() method.

On the receiving end, when a value of 0x01 is received, the RGB LED is set to turn red (by sending a HIGH value to the red LED pin and LOW to the green and blue LED pins). When a value of 0x80 is received, the RGB LED is set to turn green (by sending a HIGH value to the green LED pin and LOW to the red and blue LED pins).
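The receiver's dispatch reduces to a simple mapping, sketched here as a pure function (the enum names are hypothetical; the real code drives the three LED pins directly):

```c
#include <assert.h>

enum { LED_OFF, LED_RED, LED_GREEN };

/* Map the two command bytes described above to an LED state; any other
 * byte leaves the LED alone (LED_OFF is used here purely as a stand-in). */
static int led_for_command(unsigned char cmd)
{
    if (cmd == 0x01) return LED_RED;    /* red HIGH, green and blue LOW */
    if (cmd == 0x80) return LED_GREEN;  /* green HIGH, red and blue LOW */
    return LED_OFF;
}
```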

This board was programmed, as in all of my assignments, using the Arduino IDE. While I am disappointed that I was unable to get I2C running on the AtTiny84, I look forward to working with the protocol on other microcontrollers that feature hardware I2C.

In order to complete this week‘s assignment, I set up a network using SoftwareSerial. To begin, I repurposed a pair of the I2C boards by first removing the pull-up resistor on the master SDA line (Arduino pin 6 / physical pin 7), which I planned to use for the communication bus: TX on one board, and RX on the other.

In order to make the success of communication between the boards crystal clear, I created a script that simply changes the brightness of the red channel of the LED on the sender to a random value between 0 and 255 every 500 milliseconds. The script begins with a call to include the SoftwareSerial header file, and then creates an instance of the SoftwareSerial class called mySerial, with the receive on pin 0 and the transmit on pin 6, as previously mentioned. In the setup() function, mySerial is initialized at a rate of 9600 baud, the RGB LED’s pins are set to outputs, and the green and blue channels are set to 255. Due to the polarity of the LED / circuit used on these boards, this actually corresponds to turning the green and blue channels completely off. In the main loop() function, a random number r is generated, then sent both to the board’s LED, and as a decimal value to the other board. Finally, the delay function is called with a value of 500, creating a 500ms pause between successive values.

The receiving side simply echoes the value of the sender’s red channel. In order to do this, we again include the SoftwareSerial header file, and create an instance of the SoftwareSerial class called mySerial, this time with the receive on pin 6 and the transmit on pin 1. In the setup() function, mySerial is initialized at a rate of 9600 baud, the RGB LED’s pins are set to outputs, and the green and blue channels are set to 255. The main loop() function first checks to see if the SoftwareSerial instance is indeed available. If so, it is used to gather data from the receive line and parse the data as an integer. If the resulting integer does not equal zero (as it may in the case of invalid data), the value is written to the red channel of the LED.
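The nonzero guard can be modeled in plain C using atoi(), which, like Arduino's parseInt(), returns 0 for invalid input (the function name here is hypothetical):

```c
#include <stdlib.h>
#include <assert.h>

/* Convert an ASCII decimal value and treat zero -- which is also what
 * invalid data parses to -- as "no update". Returns -1 when the value
 * should be ignored, otherwise the value to write to the red channel. */
static int red_channel_update(const char *buf)
{
    int value = atoi(buf);          /* 0 on invalid data, like parseInt() */
    if (value == 0) return -1;      /* invalid (or genuinely zero): skip  */
    return value;                   /* write this to the red LED channel  */
}
```

One side effect of this guard, as in the original script, is that a legitimate value of 0 is also discarded.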


I am proud to say that this network protocol (simple as it may be at this point) does in fact work flawlessly, as you may see in the video above. I look forward to expanding upon this protocol, and building more elaborate networks in future projects.




For this week's assignment, I interfaced my endless rotary encoder board from Week 13 with Processing.py using the serial library and animated obj files of the acoustic tiles I created for Week 14 via two separate strategies selected via the encoder‘s momentary switch. For both of these animation strategies, the values generated by the rotary encoder are used to determine the speed and direction of rotation of each of the shapes, resulting in a simple, dedicated controller for the real-time performance of a specific set of minimal computer graphics.

I used Modern Device’s BUB II USB to TTL serial adapter to interface between the computer and rotary encoder / AtTiny84 board, which hosted the code as seen in Week 13.

The Processing.py code on the computer uses the serial library to receive commands from the AtTiny84 board. Upon receiving a command, it calls the serialEvent function, which splits the string based on colons, and removes any extra whitespace. If the beginning of the resulting array is the string “count,” a variable named ct is set to a tenth of the corresponding incoming value. This value is used to set the speed of rotation of a number of 3D models that have been loaded in using Processing.py’s loadShape() function. If, however, the beginning of the resulting array is the string “switch,” a variable named sw is set to either 0 or 255, accordingly. At the same time, a multidimensional array called matrix is set to a value between 0 and 2. This value is used in the draw() loop to determine the direction of rotation and choose between two possible 3D models selected while the switch is depressed.
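The colon-splitting logic can be sketched as follows (the real handler is Processing.py; this C model covers only the “count” branch, and the names are hypothetical):

```c
#include <string.h>
#include <stdlib.h>
#include <assert.h>

/* Parse a "count:NN"-style message. On a "count" message, *ct is set to
 * a tenth of the incoming value, mirroring the serialEvent handler
 * described above; returns 1 if the message was handled, 0 otherwise. */
static int handle_message(const char *msg, float *ct)
{
    const char *colon = strchr(msg, ':');
    if (colon == NULL) return 0;                          /* no separator */
    if (strncmp(msg, "count", (size_t)(colon - msg)) == 0) {
        *ct = (float)atoi(colon + 1) / 10.0f;             /* scale by 1/10 */
        return 1;
    }
    return 0;   /* the real code also handles "switch" messages here */
}
```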

The draw() loop itself is used to define a number of visual attributes, including lighting color (which contributes greatly to the difference between the switch-enabled and -disabled states). Throughout the draw() loop, calls to pushMatrix() and popMatrix() allow transformations to be applied to individual objects without changing the rest of the scene, which is accomplished through iteration over the array elements.

Each of the 3D models in the multidimensional array is positioned on the screen using the translate() function, and by default (i.e. if the switch is not depressed), rotated at a rate based on its position on both axes. This rate is determined by multiplying its position on the x axis by its position on the y axis, and then adding 1, to ensure that all of the models are moving. By this logic, the speed of rotation gets faster along a diagonal axis, with the slowest-moving model found at the far top left end of the screen, and the fastest-moving found at the far bottom right end of the screen.

If the switch is depressed, each model moves to the left slightly, and may jump forward or backward up to 25 units per frame. If the random number associated with the position of a 3D model in the matrix is greater than 1, the model will rotate counterclockwise, the specular color of the object will be reassigned every frame, and the 3D model used will be the "bottom.obj" file; if it is less than 1, the model will (also) rotate counterclockwise and the specular color of the object will be reassigned every frame, but the 3D model used will be the "top.obj" file.



I am planning to move forward with my original intention of creating an interface for use in improvised audio/visual performances and fixed media works. It will utilize the Bela platform in conjunction with a BeagleBone Black board and a bank of controls, including a Vestax IF-37 input fader, a slide potentiometer with center detent, an endless rotary encoder with push button, and five rotary potentiometers, as well as an AtTiny84 board that will collect the inputs and pass them to Bela via a serial connection.

However, instead of using this interface as a control surface for software running on an external computer, I am interested in making a simple chaotic synthesis engine in SuperCollider that will run on the BeagleBone itself, thus allowing a compact, self-contained and easily reconfigurable performance instrument.

For my final project, I will create a single channel (out of an eventual eight, as shown above) as the first arm in a spiral development cycle. This version of the instrument will require the creation of an enclosure, a set of boards (which I will mill using the Modela milling machine) for input of control signals + remapping / communication to the Bela over the serial bus, and software written in SuperCollider, to generate sound based on the controls in live performances.

My project is built upon a number of existing technologies: the Bela platform, SuperCollider scripting language / environment, Arduino libraries and IDE, BeagleBone Black hardware, and Modern Device’s BUB II USB to TTL serial adapter. In terms of parts, I have included a complete list of components and materials with pricing and where to source them in the “Files” section below.

I plan to create the boards in EAGLE, and mill them by June 4, developing the enclosure in tandem. Once the boards are milled, I will laser cut and construct the enclosure, checking that the components will fit both the design of the control surface and the boards, prior to populating them. By June 11, I will have the boards fully populated and enclosure built, and move on to working on the SuperCollider code. This will be complete by June 12, and I will document the working system on June 13, so that I may present it on June 14.



As mentioned in Week 17, my project is built upon a number of existing technologies: the Bela platform, SuperCollider scripting language / environment, Arduino libraries and IDE, BeagleBone Black hardware, and Modern Device’s BUB II USB to TTL serial adapter. I have made publicly available all code and design files used for this iteration of the project, both through this webpage, as well as my personal portfolio site; future iterations upon this platform will be announced within the Systems section of my portfolio (specific project names and locations TBD).

The licenses for the hardware and software of Entangle XM are as follows:

The Entangle XM software is distributed under the GNU Lesser General Public License (LGPL 3.0), available here: https://www.gnu.org/licenses/lgpl-3.0.txt

The Entangle XM hardware designs are released under a Creative Commons Attribution-ShareAlike 3.0 Unported license (CC BY-SA 3.0). Details here: https://creativecommons.org/licenses/by-sa/3.0/


Made good progress towards my final project this week. First, I measured the components I would be using for my final project but did not have Eagle library parts for, and began my own Eagle library to house them. Given that I could not locate data sheets for either the rotary encoder or the center-detent horizontal fader I am using for my final project, this was a somewhat laborious process involving laser-printing mockups to scale, and pushing the components through the paper to determine slight differences in positioning (at increments below the measuring capability of the set of calipers available to me) to ensure an exact fit for the parts on the board. It is worth noting that Eagle will allow you to print directly from your library, so there is no need to place your part in a board, export an image, and print that image (I discovered this somewhat later than might have been optimal).

Once I had the measurements properly established on paper, I milled a board with each of the components spaced appropriately, both to confirm that my measurements were correct and to get a feel for how much finger room would be available on a populated board with knobs installed. Both the spacing between knobs and the locations of holes in the component board required minor changes before moving forward.

Having addressed these issues, I then designed the boards for both the ATtiny84 and the components, and milled them.

For milling the traces, I used the Fab Modules’ “mill traces (1/64)” settings (diameter = 0.4mm, offsets = 4, overlap = 0.5, error = 1.1, intensity = 0.5, z = -0.1 mm, speed = 4mm/s, jog = 1.0mm) and a 1/64" end mill.

For cutting out the boards, I used the Fab Modules’ “cutout board (1/32)” settings (diameter = 0.79mm, offsets = 1, overlap = 0.5, error = 1.1, top intensity = 0.5, top z = -0.6mm, bot intensity = 0.5, bot z = -1.7mm, cut depth = 0.6mm, speed = 4mm/s, jog = 1.0mm) and a 1/32" end mill.

I also developed a preliminary version of the synthesis engine for my final project using SuperCollider running on my laptop, in conjunction with the endless rotary encoder / ATtiny84 board from Week 14 and a Korg nanoKONTROL2.

Part of this process included testing how parameters would be mapped from the physical control surface to the synthesis engine, with the knowledge that the ranges would simply need to be expanded from the standard MIDI range (0–127) to the range available via serial (0–1023). I also tested the BUB II USB-to-TTL serial adapter on the Bela platform in conjunction with the endless rotary encoder / ATtiny84 board from Week 14, to ensure that this specific mode of serial communication would work effectively, which it did. I discovered that MIDI is not currently supported within SuperCollider on the Bela platform, though there is a workaround involving installing a kernel with support for virmidi (see this thread for more details). Given that I do not plan to use MIDI in my final project, I decided to hold off on this for now and focus on the tasks at hand.
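The actual remapping lives in my SuperCollider code, but the underlying operation is simple linear rescaling; a minimal Python sketch (the `rescale` name and its defaults are my own, not part of the project code) looks like this:

```python
def rescale(value, in_min=0, in_max=127, out_min=0, out_max=1023):
    """Linearly rescale a control value from one range to another,
    e.g. from the 7-bit MIDI range (0-127) used by the nanoKONTROL2
    to the 10-bit range (0-1023) arriving over serial."""
    return (value - in_min) * (out_max - out_min) // (in_max - in_min) + out_min

print(rescale(0))    # -> 0
print(rescale(127))  # -> 1023
```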

Moving forward, I will need to populate both the component and microcontroller boards; port my SuperCollider code to the Bela platform, switching the controls from MIDI to serial and remapping them accordingly; and design and laser-cut the enclosure. Given that I am set to present on Wednesday, June 14, I do not have a lot of time left for these tasks, but I am confident that I will meet the deadline. Through this process, I have learned a great deal about making custom parts for my Eagle library, designing interfaces that are comfortable rather than physically cramped, and designing separate boards and interconnects for components and processors.




My final project is an electronic instrument entitled Entangle XM. It features a 2-operator chaotic synthesis engine based on ring modulation techniques, with feedback via a variable-length delay line (0–8 seconds) and digital wavefolding for additional signal complexity.
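The engine itself is written in SuperCollider, but the basic signal flow — two sine oscillators ring-modulated together, fed through a delay line with feedback, then wavefolded — can be sketched per-sample in Python. All names, constants, and the exact folding algorithm below are illustrative assumptions, not the actual implementation:

```python
import math

def wavefold(x, threshold=1.0):
    """Simple digital wavefolder: reflect the signal back whenever it
    exceeds the threshold, adding harmonics as the input level rises."""
    while abs(x) > threshold:
        x = math.copysign(2 * threshold, x) - x
    return x

def render(n_samples, sr=44100, f1=220.0, f2=137.0,
           delay_s=0.25, feedback=0.5, drive=1.5):
    """Illustrative 2-operator ring-mod voice with a feedback delay line
    and wavefolding (parameter names and values are hypothetical)."""
    delay = [0.0] * int(delay_s * sr)  # circular delay buffer
    idx = 0
    out = []
    for n in range(n_samples):
        t = n / sr
        # ring modulation: multiply the two oscillators
        ring = math.sin(2 * math.pi * f1 * t) * math.sin(2 * math.pi * f2 * t)
        # mix in the delayed signal, then fold for additional complexity
        y = wavefold(drive * (ring + feedback * delay[idx]))
        delay[idx] = y                 # feed the output back into the delay line
        idx = (idx + 1) % len(delay)
        out.append(y)
    return out
```

With the feedback path writing the folded output back into the delay line, small parameter changes can push the system into the kind of chaotic behavior the instrument is built around.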

The controls include the frequencies of oscillators 1 + 2 and the delay time (via 3 linear potentiometers), panning (via a center-detent horizontal slider), feedback amount + universal bypass (via a rotary encoder + switch), and volume (via a Vestax IF-37 input fader).

The software for this instrument is written in SuperCollider (cf. entangle_xm.scd) and runs on the Bela platform (Xenomai Linux on a BeagleBone Black board) for ease of updating via the browser-based IDE (over USB) and ultra-low latency (~100μs).

The input devices communicate with SuperCollider via serial messages sent from an ATtiny84 microcontroller programmed using the Arduino IDE (cf. entangle_xm.ino). I designed and milled both the component and microcontroller boards, each with a 10-pin header, to allow for signal flow between the boards and for future development of additional input boards (cf. entangle_xm-components.brd, entangle_xm-components.sch, entangle_xm-microcontroller.brd, and entangle_xm-microcontroller.sch).
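Since each 10-bit reading will not fit in a single byte, the serial messages need some framing. The real protocol is defined in entangle_xm.ino; a hypothetical Python sketch of one plausible scheme — a channel byte followed by the value split into two 7-bit halves — would be:

```python
def encode(channel, value):
    """Pack one controller reading into a 3-byte serial message:
    a channel byte plus the 10-bit value split across two 7-bit bytes.
    (Hypothetical framing -- the actual protocol lives in entangle_xm.ino.)"""
    assert 0 <= value <= 1023
    return bytes([channel, value >> 7, value & 0x7F])

def decode(msg):
    """Reverse of encode(): recover (channel, value) from 3 bytes."""
    channel, hi, lo = msg
    return channel, (hi << 7) | lo
```

Keeping the value bytes to 7 bits leaves the high bit free, which schemes like this often reserve for resynchronizing the byte stream after a dropped byte.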

The enclosure was laser cut out of 1/8" hardwood and made using press-fit construction techniques (cf. entangle_xm.pdf), which allows easy access to the circuits and hassle-free maintenance and upgradability—no tools necessary.

The 3.5mm TRS (stereo) output and integrated speaker enable performances to take place anywhere, powered by an external rechargeable 5V battery such as an Anker PowerCore 5000mAh portable charger.

The component board features a number of specialized components, which must be sourced from a variety of manufacturers (cf. entangle_xm.xlsx).

Having completed the first pass in the spiral development cycle of this new performance system, I have learned a number of things that I can leverage in the next iteration. First, I will make sure that the wiring for the header connecting the component board to the microcontroller board is reversed, since I am using through-hole components (though I also learned from the evaluation that I could simply have reversed the order of the wires on the IDC connector and re-crimped it, rather than soldering each wire to a separate pin on a female header).

Moving forward, I am also considering replacing the center-detent horizontal faders with full-length Technics 1200-style pitch controls, adding a bit more spacing between knobs, and replacing the Alpha 9mm linear potentiometers used for setting oscillator frequency with 10-turn knobs and precision potentiometers. I am also not completely sold on the necessity / utility of the endless rotary encoders in the continued development of this system, and may replace them with SPDT (on-off-on) switches.

I had initially planned to make one board that would host all eight channels of controls. I now plan to make one board that can be duplicated eight (or more) times, one for each channel. I will begin by refining the schematics and board design based on the lessons learned from this iteration, then milling and populating one instance to ensure that I am completely satisfied with the response and tactile feel of all of the controls. Once this has been accomplished, I will send the design out for manufacture by a board house like OSH Park.

I plan to move forward with a design roughly equivalent to the 3D model shown above, but may refine the design of the enclosure to include an aluminum frontpanel with etched graphics, fine hardwood end cheeks, and custom knobs.

As an initial test towards these ends, I 3D-printed a knob for the rotary encoder I am using in the current iteration of my project; its author released the plans under a Creative Commons license and made them freely available via Thingiverse, both as an .stl file and as an OpenSCAD file. I am interested in using this as a baseline for future experimentation.

Given that it fits my rotary encoder perfectly and that the code behind it is easily accessible, I am very curious to explore a number of sculptural possibilities for custom knobs using OpenSCAD, experimenting with different materials through service bureaus like Shapeways, Ponoko, and Additively.

While I am primarily invested in the continued creation and development of a personal platform with components that are aligned with my history and experience as a performer, there are also implications for commercial development of this platform using more readily available components and less expensive hardware such as the Raspberry Pi Zero. Both of these possibilities are tremendously exciting to me, and I look forward to taking this project into the future.