Tuesday, 23 June 2015

What's wrong with the way things are?

Daniel Kos

Whenever I tell people that I'm working on a brand new development platform called ioL, the response I hear most often is 'why?', 'what's wrong with the way things are now?', or even 'why would you want to re-invent the wheel?' In recent years, computers have gone from being niche novelties to mainstream devices that everyone keeps in their pockets. The industry seems to be going from strength to strength. So here's a post about negatives. I'm going to spend the next several paragraphs complaining about the state of computing today, and discussing where I believe the room for improvement lies. You've been warned.

There's no longevity in code

Consider the direction operating systems have been headed lately, and the way in which software distribution has become increasingly centralised. Now ask yourself: where is the fun in writing an app (or getting into programming for the first time), when you can't easily share your work with your friends without first publishing it in a tightly-regulated app store? Where is the incentive to create a masterpiece, when the life-span of a typical app is limited to the current OS version it supports? Large software corporations can afford to keep updating applications so that they continue to work on new system releases. The works of independent developers are simply forgotten. In the eyes of the software industry, the art of coding has been devalued as expendable, to the point that it is now typically outsourced.
You might ask why anyone would be interested in running old applications. How do traditional media formats compare? Would movies such as Star Wars have become enduring classics if they had required regular revisions to keep them watchable? Perhaps that was a bad example. What about Citizen Kane? Books, movies, songs, even radio plays endure and become classics because they are permanent works. Interactive works (no less culturally significant) don't endure, because they are typically not built on firm foundations. Although writers of software engineering textbooks invariably advise against tightly coupling your code modules to dependencies that change outside your control, this is often the only way to do things on modern operating systems.
There are some exceptions: interactive works that have endured. The first ever website, created in 1991, is still online, and still functions perfectly in a modern web browser (even if it does look a little dated). Websites endure because they are not coupled to a particular web browser implementation. Instead, a web server outputs the page in an intermediary language (HTML) consisting of text-based content punctuated by formatting instructions. A web browser doesn't need to interface directly with a web site's implementation. It only needs to interpret the HTML code well enough to produce a good-enough result. The use of an intermediary layer of interpretation has worked wonders in the database world too. Rather than attempting to mesh with the gears of a database system directly, today's business information systems send their queries in Structured Query Language (SQL), to be interpreted by the database system. This kind of approach has to be the way forward.

There's no variety either

Modern operating systems typically lock you into writing your app in a particular programming language. If you're writing for Android it's Java. If you're writing for iOS it's Objective-C. The locked-down nature of these systems is understandable: they are feature-rich systems, and it would seem that an app needs to be tightly coupled to the host system's programming interface in order to integrate well.
Early operating systems were far more rudimentary. A program's user interface was a bunch of green fixed-width characters on a dark-grey phosphorescent screen. As the program produced new output, old output would scroll off the top of the screen. User interaction was done by waiting for the user to type a string of characters and press the enter key. The user experience wasn't much, but the upshot was simplicity. The elegant nature of a simple computing model led to programs that were robust and easy to test. The only standard facility a programming language had to provide was the ability to 'print' a string of characters to an output stream (the terminal screen) and read characters from an input stream (the user's keyboard). There was a plethora of competing programming languages to choose from, representing different coding paradigms suitable for different kinds of applications, and different coders' preferences. The coder's choice of language rarely mattered to the end user. Innovation in programming languages was a happening thing. The locked-down nature of modern operating systems has put a freeze on that kind of innovation.
Today's operating systems still support the so-called standard-input and standard-output streams, though their use has remained relegated to providing the old style of input and output via a rudimentary terminal window. All programming languages continue to support standard input and output, even though the terminal output is now often hidden from view. There is still a variety of programming languages to choose from (for those who are happy to write text-based programs that run in a terminal window), and novices are often taught to write text-based terminal programs first, because the standard input/output paradigm remains an easy model to grasp. Of course, they are then discouraged by the fact that their programs don't look anything like 'real' graphical apps.
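For what it's worth, the paradigm really is that easy to grasp. A complete interactive program under this model is just a couple of lines; here's a sketch in Python, though almost any language would look much the same:

    # A complete program under the standard input/output model: read from
    # the input stream, write to the output stream. Nothing else required.
    name = input("What is your name? ")   # reads a line from standard input
    print("Hello, " + name + "!")         # writes a line to standard output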

It's not all about fun or games

The writing of robust computer software free of interoperability issues remains one of the great unsolved problems of software engineering. Even today, multi-million-dollar software engineering projects go massively over time or over budget because of unforeseen regression faults in the software, prompted by tightly-coupled dependencies in the host system. It can often become cheaper to retire systems than to maintain them (even before they are complete), because the relationships between different code modules and dependencies simply become too complicated to work out.
Programming education is more accessible than ever before, but perceived barriers to entry imposed by system vendors drive enthusiastic novices away. Modern operating systems are increasingly geared for users to consume rather than produce. Tightly-coupled dependencies imposed by system vendors discourage the development of new programming languages (is object-oriented programming as far as we go?), and the inherent lack of a firm foundation on which to create an enduring work can destroy a programmer's motivation to build a quality product.
Some of the problems I have touched on may seem daunting. You might even argue that many of them are necessary side-effects of the direction computing has taken. However, the success of the decoupled nature of the web has demonstrated that such problems are certainly solvable.

Tuesday, 30 September 2014

Coming Soon: Viewshare

We're busy developing a new website called Viewshare and, well, we're excited.

Viewshare is a website that lets you create virtual tours of physical spaces by taking multiple photos from different angles and linking them together.

Think of it as a sort of D.I.Y. Google Streetview. You assemble the spaces yourself. You can show off intimate places that are meaningful to you, or cool public places (like your best friend's bar) that you want people to visit in real life.

Stay tuned!

Tuesday, 22 July 2014

Deprecate at Your Peril!

You're building something great. You're investing a lot of time and effort into writing the code. Question:

What kind of platform would you rather develop for? One which has a bit of a learning curve at the start, but then you'll be able to keep using it, depending on it, and adding new features to your application, for years to come? Or would you choose a platform where features your application depends on might be taken out or changed in the next version, requiring you to spend time re-writing your code in a few years just so your application won't stop working when the new version of the platform comes out?

Enter the concept of stability

'Stability' is a misunderstood term that gets thrown around a lot in IT. 'Is this OS stable?' People use it to mean something reliable and well-built. Something that won't crash easily.

In academic software engineering, stability has a different, but related, definition. Stable means you can build on it, because it isn't subject to change underneath your application between versions. That's important, because if you have a mature code-base that depends on a particular API, and the API's fundamental interface changes between versions, it creates a moving-target problem. It means you have to periodically modify your application's function calls just to keep up. Not only does that rob you of time you could otherwise have spent adding new features; it can require major surgery on your application, at the risk of introducing regression faults or breaking your application's original internal architecture. (The new version of the API might require you to adopt a new, 'improved' usage paradigm that your code wasn't originally designed around, leaving your code less elegant for future maintainers.) In the real world, where enterprise applications have incredibly complicated and mature code-bases that are often not even well understood by the people who are paid to maintain them, this is a real problem.

And if it's bad for large enterprises, it's worse for independent developers, who will often abandon their work when the platform no longer runs it rather than continue to maintain it indefinitely. In contrast to film and literature (non-interactive forms of entertainment), where classic works may endure for centuries, think of the cultural loss in the countless computer games that have been forgotten simply because it is no longer possible to run them.

Examples of stable platforms

Programming languages like C and C++ have been formally standardised by official standards bodies. Although new features get added in each new revision of the language, the standard remains backward compatible. While these languages mightn't be perfect, standardisation means that you can depend on them to be a fixed point in your project.

Recently, in response to concerns from governments and large organisations that documents archived as Microsoft Word files might be rendered inaccessible in a decade's time (and in response to the existence of the already-standardised Open Document formats), Microsoft went to a lot of trouble to get their XML-based Office formats officially standardised. Microsoft's DOCX format might leave a little to be desired in terms of compatibility, but at least they made the effort.

The X Window System, version 11, is one of the most stable pieces of software out there. It's the graphical front-end used by almost every Linux distribution and almost every BSD distribution, and it's even provided as a compatibility layer in Apple's OS X. And it's been at version 11 since the 1980s. The API is horrible to program for (people rarely work with the library directly anymore), and it provides several features that are now redundant because people have come up with better ways of doing things. But that doesn't matter. What matters is that it's always there behind the scenes, and it's reached a level of stability that makes it dependable and means it will continue being used for years to come.

Why we're giving up on OpenID

We had high hopes for OpenID. The vision was that you would sign up for an OpenID account through any OpenID provider, and you would be able to use that account to log into any website that also followed the OpenID standard. Rather than having to create a separate account for every website, you'd only need one or two accounts for all websites. Individual website owners wouldn't need to worry about securing your credentials as these would be held and authenticated by OpenID providers instead.

Companies like Google, Yahoo, Microsoft, even AOL, adopted the OpenID standard. We set up an OpenID login system on our website. We wouldn't need to deal with account security at our end; we could simply allow people to use their existing OpenID account (such as a Google account) to log in, without even having to sign up to our website separately. The system was simple to implement and seemed to work well. There were potential security vulnerabilities, but no really fatal flaws that couldn't be fixed.

Then something changed. The OpenID Foundation announced that they didn't believe in OpenID anymore, and released a new, improved, and very different system called OpenID Connect instead. A website called MyOpenID, which provided OpenID accounts for people who didn't want to sign up with larger companies like Google, announced that it was shutting down for good. Websites like Flickr and Facebook announced that they were moving away from OpenID and would no longer be accepting third-party login credentials.

Fortunately for us, our OpenID login facility was never more than experimental. Had we been serving customers through it, those customers could have found themselves locked out of accounts that no longer authenticated, unable to access their purchases. All because the OpenID Foundation decided that pursuing a new 'easier to use' system was more important than preserving the functionality that existing websites were already depending on.

Why the PHP Group is making a mistake

PHP is a programming language that's commonly used to generate dynamic websites on the server. MySQL is a database system that is often used hand-in-hand with PHP to store and access a website's data. (Shameless plug: for people who want a simple web content management system without the mystery of a MySQL database, there's always FolderCMS.)

A few years ago, the PHP Group announced that the MySQL functions in PHP were being deprecated. This means 'we're going to get rid of them, so stop using them.' In their place, there would be two new, but somewhat different, MySQL function sets to choose from. This was, and is, a controversial move. A lot of very popular websites rely on PHP's established MySQL functionality, and PHP owes a lot of its popularity to its ability to interface easily with MySQL. Why were they doing this? Their own website's FAQ isn't very clear:

Why is the MySQL extension (ext/mysql) that I've been using for over 10 years discouraged from use? Is it deprecated? ...

The old API should not be used, and one day it will be deprecated and eventually removed from PHP. It is a popular extension so this will be a slow process, but you are strongly encouraged to write all new code with either mysqli or PDO_MySQL.

That isn't really a justification; it's just the question re-worded into the format of an answer. There are several threads on StackOverflow, where the question has been repeatedly asked, which provide some more substantial answers. One is that the old functions are potentially dangerous for beginners who don't know that they are supposed to validate and sanitise user input before sending it into an SQL query. Another is a belief that developers should be moving away from text-based SQL queries and towards pre-compiled queries. This provides a performance boost. On the other hand, it represents a significant move away from the usage paradigm that made SQL popular in the first place. SQL is a database language that has become universal because, like the HTML that powers the web, data is transmitted in a well-established, human-readable language that is not coupled to system-dependent function bindings or compiled code. You send a text-based query to the database engine and receive a reply. It doesn't need to be complicated.
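To make the injection concern concrete, here's a minimal sketch of the two styles using Python's built-in sqlite3 module; PHP's mysqli and PDO_MySQL offer prepared statements that work along the same lines:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()
    cur.execute("CREATE TABLE users (name TEXT)")
    cur.execute("INSERT INTO users VALUES ('Alice')")

    user_input = "Alice' OR '1'='1"  # hostile input a beginner might not expect

    # Dangerous: splicing raw input into the query text invites SQL injection.
    # query = "SELECT * FROM users WHERE name = '" + user_input + "'"

    # Safer: a parameterised query keeps the query text and the data separate,
    # so the hostile string above matches nothing instead of everything.
    cur.execute("SELECT * FROM users WHERE name = ?", (user_input,))
    print(cur.fetchall())   # [] -- the input is treated as plain data

Note that this still sends a text-based query; parameterisation doesn't require abandoning SQL as a human-readable intermediary.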

Tuesday, 6 May 2014

Getting new heating or air conditioning? Insist on a model that works with third-party thermostats.

There are a lot of heating and cooling appliances on the market, and most of them come with horrible thermostats. You know the type: it requires separate batteries, has a calculator-style LCD display that's difficult to read in low light, and lacks intuitive features.

Some manufacturers (particularly for split system air conditioning units) lock you into using the manufacturer's thermostat. Many don't.

Get ready for the future

In the future, all your home's heating and cooling will be co-ordinated by a small server appliance, situated out of the way in a closet or inside your home's data cabinet next to your broadband router. It won't consume much power, maybe around $2 worth of electricity per year. But it will run your heating and cooling appliances in a way that saves you money.

It will also make your heating/cooling system more practical. Instead of having to squint at a dedicated LCD display to access a limited set of functions, you'll be able to access a full set of features through your smart phone and PC: phones and computers are designed to allow for far better human-computer interaction than a dedicated thermostat could ever have.

Here's a phone screenshot showing the web interface for a prototype device we're working on, called THERMOSERVER:

Here you have an adaptive interface, where you can customise the panels on display, scroll down to access other features, program behavioural policies, and, if you share your house with other people, specify restrictions to prevent them from running up your energy bill too high.

If you want a wall-mounted controller too, that's easy to do. Just purchase an inexpensive low-spec tablet computer next time your supermarket has them on special, plug it into a power socket, and stick it on your wall. If your house is cabled with ethernet sockets or your router is nearby, just add an ethernet dongle and you don't even need WiFi.
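To give a flavour of how little machinery such an appliance needs, here's a purely illustrative sketch of a minimal status endpoint in Python. The names and layout are made up for illustration; this is not THERMOSERVER's actual interface:

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Hypothetical appliance state; a real unit would persist this and
    # enforce per-user restrictions before applying any changes.
    state = {"target_c": 21.0, "mode": "heat"}

    class ThermoHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Any phone, PC, or wall tablet on the network can poll this.
            body = json.dumps(state).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

    HTTPServer(("", 8080), ThermoHandler).serve_forever()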

How a thermostat works

A typical central heating system with a wired thermostat on your wall works on a simple principle. There are two wires between the heating unit and your thermostat that carry a low voltage (e.g. 25 VAC). When your thermostat detects that heat is required (based on the temperature you've set and the ambient temperature in the room), it joins the wires to complete the circuit. When the room gets warm enough, it breaks the circuit again.

Cooling systems work the same way in reverse.
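In software terms, that's just a loop with a little hysteresis (a dead band), so the relay doesn't chatter on and off around the set point. Here's a sketch in Python; the sensor and relay functions are stand-ins for a real hardware interface:

    import random
    import time

    TARGET_C = 21.0
    DEAD_BAND = 0.5   # hysteresis, so we don't chatter around the set point

    def read_room_temperature():
        # Stand-in for a real sensor read (hypothetical hardware interface).
        return 20.0 + random.uniform(-2.0, 2.0)

    def set_heating_relay(on):
        # Stand-in for the relay that joins or breaks the two thermostat wires.
        print("heating", "ON" if on else "OFF")

    heating_on = False
    while True:
        temp = read_room_temperature()
        if temp < TARGET_C - DEAD_BAND:
            heating_on = True        # too cold: complete the circuit
        elif temp > TARGET_C + DEAD_BAND:
            heating_on = False       # warm enough: break the circuit
        set_heating_relay(heating_on)
        time.sleep(30)               # re-check every 30 seconds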

THERMOSERVER works using electromechanical relays to tell heating or cooling appliances whether they are required. This approach was chosen because it's common among third-party thermostats.

Some appliances use modulating thermostats, which don't just say 'yes' or 'no'; they also communicate a power level that indicates how close the ambient temperature is to the target temperature (e.g. low, medium, or high). Such devices will typically let you flick an internal switch into 'non-modulating mode', allowing you to control them with third-party thermostats. For example, a Baxi hydronic heating system boiler comes with a wired modulating thermostat that can be moved from its original location on a wall into a special receptacle inside the boiler unit itself if control via a third-party thermostat is required. In this case, the instructions say to switch the unit into non-modulating mode.

The efficiency benefits of running a system in modulating mode are debatable, and many high-efficiency heating units don't provide the capability at all. As the temperature inside your house nears your desired target temperature, a modulating heating system will switch to a reduced setting and keep running for slightly longer in order to reach the target cut-off temperature. A non-modulating system will simply reach the target temperature sooner. The laws of thermodynamics state that the same amount of energy is required in either case.

Insist on Interoperability

Some devices will only work with a supplied wireless remote control that is cumbersome to use and easy to misplace or step on.

Before you buy, always ask the manufacturer whether the appliance can be wired up to a third-party (and non-modulating) thermostat. For a dedicated heater or cooler this will be a simple 2-wire connection. For a combination heating/cooling unit (such as a reverse-cycle air conditioner), this ideally means one pair of wires for heating and another pair for cooling. Ask the salesperson too. If the answer is no, tell them it's a deal-breaker.

That way, you'll avoid locking yourself in to a single manufacturer's proprietary control system, and you'll have heaps of flexibility in the years to come.

Monday, 28 April 2014

Nice in theory, but...

Every so often someone comes out with The Next Big Thing. And they market the theory behind it, people say 'hey, that makes sense', and people start buying the product.

But what if the theory is flawed or incomplete? What do you do with an invention that doesn't really work, because the idea wasn't properly thought through? Often it takes something going out of style before people look back at a flawed product and ask 'what were we thinking?'

Ergonomic keyboards

A good example is a computing product that came out of the 90s: the ergonomic Microsoft Natural keyboard, which splits the keyboard layout into left and right sections that are angled and tilted to point in the direction of your elbows. The idea was based on the fact that your forearms form an inverted V when you type. By putting a kink in the middle of the keyboard, you wouldn't need to kink your wrists in order to have your hands positioned over the keys with your fingers pointing directly forward.

The obvious problem: nobody actually types with their hands positioned that way. Human fingers are different lengths; your little finger is significantly shorter than your index finger, and there's an approximate gradation in length between them. That means that when you position your fingers in the home position to type on an ordinary keyboard, your hands form an inverted V too. Using a Microsoft Natural keyboard actually forces you to either kink your wrists the other way, or spread your elbows out further than you normally would.

There's a keyboard on the market that's been in continuous production since the 1980s. It's known as the Model M, and is now sold as the Unicomp Classic. It has the same straight layout as any cheap keyboard, yet enjoys a bit of a following among writers and programmers as a comfortable keyboard to type on. The difference is in the internal mechanism of the keys.

A typical keyboard registers keystrokes on a 'membrane' under the keys. The membrane consists of two layers of plastic, with screen-printed electrical traces running across them, that are kept slightly separated from each other by an intermediate layer which has holes where the keys are. When you press a key, you flatten a silicone dome, which pushes the two membrane layers into contact with each other and completes the circuit in that spot. When you release the key, the silicone dome springs back into shape, and the key pops up.

A Model M also uses a membrane arrangement, but rather than having a rubber dome, it has a spring and hammer mechanism under each key. When a key gets two thirds of the way down, the spring buckles, causing its base to pivot and a hammer to strike the membrane. At this point, you hear a loud click and the resistance beneath your finger disappears. From here, your finger muscles (which are actually located further up your arm) instinctively relax as the key hits the bottom, so you avoid straining your tendons.

Old > new ?

How can a thirty-year-old keyboard design possibly be better than something you get with a new computer today? Well, the Model M was designed by IBM in the heat of the computer wars of the 80s. IBM invested a lot of resources into developing it, and it wasn't a cheap keyboard to manufacture. The reason was that Apple computers were all sold with rubber-dome keyboards. Selling a computer with a higher quality keyboard that didn't feel cheap to type on gave IBM a competitive advantage in the world of business computing, at a time when a lot of personal computers on the market must have seemed (to serious business people) like toys.

So the question 'what were we thinking?' goes both ways. Sub-optimal design often falls out of favour over time, but a lot of good design gets forgotten too. Design priorities change, and the original vision gets neglected. It's important for designers in today's world not only to create new visions of the future, but also to look back and understand and appreciate what the vision used to be. Today's computing devices have evolved out of (and bear remnants of) a history of changing design visions, so understanding them is certainly worthwhile.

Freer than Linux?

Linux is getting a lot of attention right now.

Android, arguably the hottest OS right now, is powered by the Linux kernel behind the scenes. Desktop distributions like Ubuntu and Mint are gaining in popularity at the expense of the traditional inflexible (but easily manageable) paradigm of all PCs running Windows. As far as driver support among PC manufacturers goes, Linux comes second only to Windows. Linux works across multiple architectures. Broadband routers run it. Smart TVs run it. Even our upcoming ARM-based embedded home automation operating system, ThermOS, is built on GNU/Linux underneath.

So. What about BSD?

BSD?

Like GNU/Linux, BSD is based on the Unix operating system that came out in the early 1970s. It aims at the same POSIX standard for Unix compatibility as Linux, which means Linux applications are pretty much source-compatible with BSD. On the desktop, the two main distributions on which others are based are FreeBSD (the more popular branch) and OpenBSD (a slightly more ideologically-driven branch, with a heavy focus on security). PC-BSD is a user-friendly distribution based on FreeBSD (in much the same way as Ubuntu is based on Debian in the Linux sphere).

BSD operating system distributions are solid products, with a decades-long track record of legendary reliability. Many Linux programs can be made to run on BSD, and the computing experience feels a little more responsive and robust than Linux does. The OpenBSD community even prides itself on regularly and proactively auditing the codebase to weed out potential issues before they become problems. OpenBSD has had only a handful of security vulnerabilities over the course of its entire history; a point that gets prominent mention on their website. If there's anything wrong with BSD, it's that the community isn't big enough for things like driver support to get the attention they deserve. So why is nobody using it?

Nobody's using it?

On the contrary, BSD is a lot more popular than you might think. Apple's Darwin operating system (better known for its consumer branches: iOS and Mac OS X) is Apple's own BSD distribution, and borrows heavily from the FreeBSD branch. There are literally hundreds of millions of Apple devices out there running BSD. If you open up a terminal window on a Mac, the command line experience is not all that different to what you get on a typical GNU/Linux system.

Now that we've introduced BSD, we can get to the crux of this post: both the Linux and BSD communities are driven by the ideal of free software, but they differ drastically in terms of what freedom means in the software world.

A different licensing philosophy

GNU/Linux is based on the GNU General Public License (GPL), a multi-page document that requires the publication of all source code used in a piece of software, and that you freely allow others to modify your work to make it their own and redistribute it as they wish. It's a 'viral' license in the sense that if you use someone else's GPL'd code in your work, then you must distribute your work under the same license, so that others can continue to alter and modify that code within your program.

BSD takes a different philosophy. The BSD license runs to a few short paragraphs rather than pages; it states that the code is free but comes without warranty, and it imposes no restrictions on how you re-use or re-purpose a program's source code, usually as long as you retain the copyright notice.

Both approaches have their merits. The GPL is designed to encourage the continual development of free software by preventing people from poaching the free work of others without giving back, and it's perfectly sensible that people who write free software would want to license their work in this way. The GPL isn't designed to be 'popular'; it's primarily geared to serving communities of free software developers. It doesn't necessarily work for game developers who want to make money out of their work without the risk that others will 'steal', modify, and redistribute it. There aren't too many commercial games available as native Linux applications. (That's not because Linux applications are actually required to be released in source form; it's more that the source code is often needed for compatibility reasons, to recompile packages for different Linux distributions. Android apps aren't native Linux applications, as they run in a virtual machine that sits on top of the system stack.)

The BSD license is designed to encourage people to use something however they wish, without any of the compliance hassles or limitations of the GPL. Members of various BSD developer communities, notably OpenBSD, have even taken the drastic step of re-writing freely available GPL-licensed utilities from scratch, to free them of the restrictions imposed by the GPL. It's meant to be a more pragmatic and popular approach, giving developers free rein to do what they want without drowning in licensing clauses. The obvious question, then, is why BSD developer and user communities have remained relatively small, despite the enormous benefits they have brought to companies like Apple.

Friday, 22 November 2013

Modifying our cheap CNC machine

As mentioned in a previous post, our CNC 3020 milling machine came with a little room for improvement. In this post we discuss some modifications we made to make our machine more useful.

Adding limit switches

Limit switches allow the CNC mill to home itself to a repeatable zero position. They also prevent the machine from going outside its permitted range of motion and crashing into 'hard' limits.

For most CNC machines, it's common to provide two limit switches for each of the X and Y axes (for maximum and minimum limits) and a single limit switch on the Z axis for the upper limit only. The 'safe' lower Z limit depends on the kinds of materials you have clamped onto the platform and the kind of cutting bit you happen to be using at the time. It's not the sort of thing you can simply guard with a limit switch.

Limit switches are cheap to obtain online. They are simple microswitches with a lever arm and roller attached to help actuate the switch with a reasonable amount of precision. We just superglued them onto the aluminium frame of the machine in positions where they would be tripped just before the machine would hit a hard limit. With a bit of creativity, you can find positions to locate the switches that sacrifice very little of the machine's range of motion.


The X limit switches will trip if the carriage moves too far to the left or right. The Z-maximum limit switch is mounted on the carriage and will trip if the spindle rises too far up. Small dowel pieces help give the superglue more surface area to bond to.


This Y-minimum limit switch will trip if the gantry moves too close to the front of the platform.

We wired both X-axis switches and both Y-axis switches in series, using the 'Normally Closed' connectors on the switches. This is the recommended arrangement because a broken wire opens the circuit just as a tripped switch does, so faults in the limit switch connections show up immediately rather than going unnoticed. We made the connections using light-duty stranded-core bell wire. We'll have to wait and see how durable this wire is in the long run, but it's very flexible and thin enough to fit into the existing cable trunking fairly easily.

For the other end of the connections, we were lucky. Although the CNC 3020 controller box doesn't provide any inputs for limit switches, there are holes on the circuit board inside where you can solder on a pin header for the X, Y and Z limits. What's more, after we soldered on a 6-pin header, we found these pins to be fully functional. They simply map onto three otherwise-unused parallel port input pins.


The PCB from the controller box after soldering a strip of 6 header pins into a bank of 6 vacant holes labelled 'LIMIT'. Presumably, a more expensive model would have come with those header pins already there to begin with.
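For the curious, here's a rough sketch of how software can see those limit inputs, by reading the parallel port status register directly (Linux, root required). The bit assignment below is an assumption for illustration; in practice, CNC control software such as LinuxCNC reads these pins for you:

    # Read the parallel port status register (base address + 1) via /dev/port.
    # For a port at 0x378, the status register sits at 0x379. Requires root.
    STATUS_REG = 0x379

    with open("/dev/port", "rb") as port:
        port.seek(STATUS_REG)
        status = port.read(1)[0]
        # Which status bit carries which limit input depends on the controller's
        # wiring; bit 4 (parallel port pin 13) is just an assumed example.
        x_limit_ok = bool(status & 0x10)
        print("X limit circuit closed" if x_limit_ok else "X limit tripped")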

We drilled holes in the back of the controller box and ran the limit pins to female banana connectors. We chose banana connectors because they're versatile: you can use them as binding posts for bare wire, or you can terminate the wires properly by adding banana plugs. (We also added a grounding post on the back for possible equipotential bonding to the CNC platform, which might be overkill...)


Banana socket binding posts added to the back of the controller box.


Ribbon cable linking banana sockets to LIMIT pins. (GND post not yet connected.)

Keeping dust out of the controller box

The controller box has a fan inside for circulating air around the heatsink, with an unfortunate side-effect: milling dust is sucked into the unit from the nearby milling platform, where it collects on the circuit boards inside. This isn't a huge problem when milling wood, as sawdust isn't conductive. It becomes a problem if we start milling aluminium or copper. Moving the controller box further away from the milling platform isn't an option; the cables are too short.

Our solution was a combination of filters and ducting. For the front and side vents, we cut filters out of kitchen scouring pads (the type that come in flat sheets). For the price, these make excellent dust filters. We hooked them over the air intake grilles with office staples.

For the bottom air intake (the most critical, as it pulls air across a large heatsink), there wasn't enough clearance between the controller box and the bench to add a filter without severely limiting airflow and risking overheating. Instead, we added sponge strips around the bottom of the controller box on three sides, so that air could only enter from the left-hand side – away from the milling platform. The sponges had the added bonus of lifting the controller box slightly, improving airflow to the bottom vent.

The rear vent does not need filtering as it's the exhaust vent for the fan, and should repel dust when the unit is turned on.