Tuesday, 22 July 2014

Deprecate at Your Peril!

You're building something great. You're investing a lot of time and effort into writing the code. Question:

What kind of platform would you rather develop for? One which has a bit of a learning curve at the start, but then you'll be able to keep using it, depending on it, and adding new features to your application, for years to come? Or would you choose a platform where features your application depends on might be taken out or changed in the next version, requiring you to spend time re-writing your code in a few years just so your application won't stop working when the new version of the platform comes out?

Enter the concept of stability

'Stability' is a misunderstood term that gets thrown around a lot in IT. 'Is this OS stable?' People use it to mean something reliable and well-built. Something that won't crash easily.

In academic software engineering, stability has a different, but related, definition. Stable means you can build on it, because it isn't subject to change underneath your application between versions. That's important, because if you have a mature code-base that depends on a particular API, and the API's fundamental interface changes between versions, it creates a moving-target problem: you have to periodically modify your application's function calls just to keep up. Not only does that rob you of time you could otherwise have spent adding new features; it can require major surgery on your application, at the risk of introducing regression faults. It can also break the original internal architectural design of your application (the new version of the API might require you to adopt a new, 'improved' usage paradigm that your code wasn't originally designed around), making your code less elegant for future maintainers. In the real world, where enterprise applications have incredibly complicated and mature code-bases that are often not even well understood by the people who are paid to maintain them, this is a real problem.

And if it's bad for large enterprises, it's worse for independent developers, who will often abandon their work when the platform no longer runs it rather than maintain it indefinitely. In contrast to film and literature (non-interactive forms of entertainment), where classic works may endure for centuries, think of the cultural loss represented by the countless computer games that have been forgotten simply because it is no longer possible to run them.

Examples of stable platforms

Programming languages like C and C++ have been formally standardised by official standards bodies. Although new features get added in each new revision of the language, the standard remains backward compatible. While these languages mightn't be perfect, standardisation means that you can depend on them to be a fixed point in your project.

Recently, in response to concerns by governments and large organisations that documents archived as Microsoft Word files might be rendered inaccessible in a decade's time, and to the existence of the already-standardised OpenDocument formats, Microsoft went to a lot of trouble to get their XML-based Office formats officially standardised. Microsoft's DOCX format might leave a little to be desired in terms of compatibility, but at least they made the effort.

The X Window System, version 11, is one of the most stable pieces of software out there. It's the graphical front-end used by almost every Linux distribution, almost every BSD distribution, and is even provided as a compatibility layer in Apple's OS X. And it's been at version 11 since the 1980s. The API is horrible to program for (people rarely work with the library directly any-more), and it provides several features that are now redundant because people have come up with better ways of doing things. But that doesn't matter. What matters is that it's always there behind the scenes, and it's reached a level of stability that makes it dependable and means it will continue being used for years to come.

Why we're giving up on OpenID

We had high hopes for OpenID. The vision was that you would sign up for an OpenID account through any OpenID provider, and you would be able to use that account to log into any website that also followed the OpenID standard. Rather than having to create a separate account for every website, you'd only need one or two accounts for all websites. Individual website owners wouldn't need to worry about securing your credentials as these would be held and authenticated by OpenID providers instead.

Companies like Google, Yahoo, Microsoft, even AOL, adopted the OpenID standard. We set up an OpenID login system on our website. We wouldn't need to deal with account security at our end; we could simply allow people to use their existing OpenID account (such as a Google account) to log in without even having to sign up to our website separately. The system was simple to implement and seemed to work well. There were potential security vulnerabilities, but no fatal flaws that couldn't be fixed.

Then something changed. The OpenID Foundation announced that they didn't believe in OpenID anymore, and released a new, improved and very different system called OpenID Connect instead. A website called MyOpenID, which provided OpenID accounts for people who didn't want to sign up with larger companies like Google, announced that it was shutting down for good. Websites like Flickr and Facebook announced that they were moving away from OpenID and would no longer be accepting third-party login credentials.

Fortunately for us, our OpenID login facility was never more than experimental. Had we been serving customers through it, those customers could have found themselves locked out of accounts that no longer authenticated and unable to access their purchases. All because the OpenID Foundation decided that pursuing a new 'easier to use' system was more important than preserving the functionality that existing websites were already depending on.

Why the PHP Group is making a mistake

PHP is a programming language that's commonly used to generate dynamic websites on the server. MySQL is a database system that is often used hand-in-hand with PHP to store and retrieve a website's data. (Shameless plug: for people who want a simple web content management system without the mystery of a MySQL database, there's always FolderCMS.)

A few years ago, the PHP Group announced that the MySQL functions in PHP were being deprecated. This means 'we're going to get rid of them, so stop using them.' In their place, there would be two new, but somewhat different, MySQL function sets to choose from. This was, and is, a controversial move. A lot of very popular websites rely on PHP's established MySQL functionality, and PHP owes a lot of its popularity to its ability to interface easily with MySQL. Why were they doing this? Their own website's FAQ isn't very clear:

Why is the MySQL extension (ext/mysql) that I've been using for over 10 years discouraged from use? Is it deprecated? ...

The old API should not be used, and one day it will be deprecated and eventually removed from PHP. It is a popular extension so this will be a slow process, but you are strongly encouraged to write all new code with either mysqli or PDO_MySQL.

That isn't really a justification; it's just the question re-worded into the format of an answer. There are several threads on Stack Overflow, where the question has been repeatedly asked, which provide some more substantial answers. One is that the old functions are potentially dangerous for beginners who don't know that they are supposed to validate and sanitise user input before sending it into an SQL query. Another is a belief that developers should be moving away from text-based SQL queries and towards pre-compiled ('prepared') queries. This provides a performance boost. On the other hand, it represents a significant move away from the usage paradigm that made SQL popular in the first place. SQL is a database language that has become universal because, like the HTML that powers the web, data is exchanged in a well-established, human-readable language which is not coupled to system-dependent function bindings or compiled code. You send a text-based query to the database engine and receive a reply. It doesn't need to be complicated.
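
To make the difference between the old and new interfaces concrete, here's a minimal sketch; the connection details and the products table are invented for illustration:

    <?php
    // The deprecated ext/mysql API: the query is assembled as plain text,
    // so user input has to be escaped by hand or it becomes an SQL
    // injection vector. (Hypothetical credentials and table.)
    $link = mysql_connect('localhost', 'dbuser', 'dbpass');
    mysql_select_db('shop', $link);
    $name = mysql_real_escape_string($_GET['name'], $link);
    $result = mysql_query("SELECT price FROM products WHERE name = '$name'", $link);
    $row = mysql_fetch_assoc($result);

    // One of the sanctioned replacements, PDO_MySQL: the query is prepared
    // once, and the user input is bound as a parameter, so no hand-escaping
    // is needed (and the engine can pre-compile the statement).
    $pdo = new PDO('mysql:host=localhost;dbname=shop', 'dbuser', 'dbpass');
    $stmt = $pdo->prepare('SELECT price FROM products WHERE name = ?');
    $stmt->execute(array($_GET['name']));
    $price = $stmt->fetchColumn();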

Tuesday, 6 May 2014

Getting new heating or air conditioning? Insist on a model that works with third-party thermostats.

There are a lot of heating and cooling appliances on the market, and most of them come with horrible thermostats. You know the type: it requires separate batteries, has a calculator-style LCD display that's difficult to read in low light, and lacks intuitive features.

Some manufacturers (particularly for split system air conditioning units) lock you into using the manufacturer's thermostat. Many don't.

Get ready for the future

In the future, all your home's heating and cooling will be co-ordinated by a small server appliance, situated out of the way in a closet or inside your home's data cabinet next to your broadband router. It won't consume much power, maybe around $2 worth of electricity per year. But it will run your heating and cooling appliances in a way that saves you money.

It will also make your heating/cooling system more practical. Instead of having to squint at a dedicated LCD display to access a limited set of functions, you'll be able to access a full set of features through your smart phone and PC: phones and computers are designed to allow for far better human-computer interaction than a dedicated thermostat could ever have.

Here's a phone screenshot showing the web interface for a prototype device we're working on, called THERMOSERVER:

Here you have an adaptive interface, where you can customise the panels on display, scroll down to access other features, program behavioural policies, and, if you share your house with other people, specify restrictions to prevent them from running your energy bill up too high.

If you want a wall-mounted controller too, that's easy to do. Just purchase an inexpensive low-spec tablet computer next time your supermarket has them on special, plug it into a power socket and stick it on your wall. If your house is cabled with ethernet sockets or your router is nearby, just add an ethernet dongle and you don't even need WiFi.

How a thermostat works

A typical central heating system with a wired thermostat on your wall works on a simple principle. There are two wires between the heating unit and your thermostat, which carry a low voltage (e.g. 25VAC). When your thermostat detects that heat is required (based on the temperature you've set and the ambient temperature in the room), it joins the wires to complete the circuit. When the room gets warm enough, it breaks the circuit again.

Cooling systems work the same way in reverse.
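
In software terms, the rule behind that circuit is simple hysteresis. Here's a minimal sketch in PHP; the function and the half-degree deadband are our own illustration, not THERMOSERVER's actual code:

    <?php
    // Illustrative only: not THERMOSERVER's actual code.
    // Decide whether the heating relay should be closed (circuit complete).
    // A small deadband around the target stops the relay chattering on and
    // off every time the reading wobbles by a fraction of a degree.
    function heat_demand($ambient, $target, $currentlyOn)
    {
        $deadband = 0.5; // degrees either side of the target

        if ($ambient < $target - $deadband) {
            return true;  // too cold: close the circuit and call for heat
        }
        if ($ambient > $target + $deadband) {
            return false; // warm enough: break the circuit
        }
        return $currentlyOn; // inside the deadband: leave the relay as it is
    }

Cooling is the mirror image: swap the comparisons, and the relay calls for cooling when the room gets too warm.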

THERMOSERVER works using electromechanical relays to tell heating or cooling appliances whether they are required. This approach was chosen because it's common among third-party thermostats.

Some appliances use modulating thermostats, which don't just say 'yes' or 'no'; they also communicate a power level that indicates how close the ambient temperature is to the target temperature (e.g. low, medium, or high). Such devices will typically allow you to flick an internal switch to 'non-modulating mode', allowing you to control them with third-party thermostats. For example, a Baxi hydronic heating system boiler comes with a wired modulating thermostat that can be moved from its original location on a wall into a special receptacle inside the boiler unit itself if control via a third-party thermostat is required. In this case, the instructions say to switch the unit into non-modulating mode.

The efficiency benefits of running a system in modulating mode are debatable, and many high-efficiency heating units don't provide the capability at all. As the temperature inside your house nears your desired target temperature, a modulating heating system will switch to a reduced setting and keep running for slightly longer in order to reach the target cut-off temperature. A non-modulating system will simply reach the target temperature sooner. The laws of thermodynamics state that the same amount of energy is required in either case.

Insist on Interoperability

Some devices will only work with a supplied wireless remote control that is cumbersome to use and easy to misplace or step on.

Before you buy, always ask the manufacturer whether the appliance can be wired up to a third-party (and non-modulating) thermostat. For a dedicated heater or cooler, this will be a simple 2-wire connection. For a combination heating/cooling unit (such as a reverse-cycle air conditioner), it ideally means one pair of wires for heating and another pair for cooling. Ask the salesperson too. If the answer is no, tell them it's a deal-breaker.

That way, you'll avoid locking yourself into a single manufacturer's proprietary control system, and you'll have heaps of flexibility in the years to come.

Monday, 28 April 2014

Nice in theory, but...

Every so often someone comes out with The Next Big Thing. And they market the theory behind it, people say 'hey, that makes sense', and people start buying the product.

But what if the theory is flawed or incomplete? What do you do with an invention that doesn't really work, because the idea wasn't properly thought through? Often it takes something going out of style before people look back at a flawed product and ask 'what were we thinking?'

Ergonomic keyboards

A good example is a computing product that came out of the 90s: the ergonomic Microsoft Natural keyboard, which splits the keyboard layout into left and right sections that are angled and tilted to point in the direction of your elbows. The idea was based on the fact that your forearms form an inverted V when you type. By putting a kink in the middle of the keyboard, you wouldn't need to kink your wrists in order to have your hands positioned over the keys with your fingers pointing directly forward.

The obvious problem: nobody actually types with their hands positioned that way. Human fingers are different lengths; your little finger is significantly shorter than your index finger, and there's an approximate gradation in length between them. That means that when you position your fingers in the home position to type on an ordinary keyboard, your hands form an inverted V too. Using a Microsoft Natural keyboard actually forces you to either kink your wrists the other way, or spread your elbows out further than you normally would.

There's a keyboard on the market that's been in continuous production since the 1980s. It's known as the Model M, and is now sold as the Unicomp Classic. It has the same straight layout as any cheap keyboard, yet enjoys a bit of a following among writers and programmers as a comfortable keyboard to type on. The difference is in the internal mechanism of the keys.

Now, a typical keyboard registers keystrokes on a 'membrane' under the keys. The membrane consists of two layers of plastic, with screen-printed electrical traces running across them, that are kept slightly separated from each other by an intermediate layer which has holes where the keys are. When you press a key, you flatten a silicone dome, which pushes the two membrane layers into contact with each other and completes the circuit in that spot. When you release the key, the silicone dome springs back into shape, and the key pops up.

A Model M also uses a membrane arrangement, but rather than a rubber dome, it has a spring and hammer mechanism under each key. When a key gets two thirds of the way down, the spring buckles, causing its base to pivot and a hammer to strike the membrane. At this point, you hear a loud click and the resistance beneath your finger disappears. From here, your finger muscles (which are actually located further up your arm) instinctively relax as the key hits the bottom, so you avoid straining your tendons.

Old > new ?

How can a thirty-year-old keyboard design possibly be better than something you get with a new computer today? Well, the Model M was designed by IBM in the heat of the computer wars of the 80s. IBM invested a lot of resources into developing it, and it wasn't a cheap keyboard to manufacture. The reason was that Apple computers were all sold with rubber-dome keyboards. Selling a computer with a higher-quality keyboard that didn't feel cheap to type on gave IBM a competitive advantage in the world of business computing, at a time when a lot of personal computers on the market must have seemed (to serious business people) like toys.

So the question 'what were we thinking?' goes both ways. Sub-optimal design often falls out of favour over time, but a lot of good design gets forgotten too. Design priorities change, and the original vision gets neglected. It's important for designers in today's world not only to create new visions of the future, but also to look back to understand and appreciate what the vision used to be. Today's computing devices have evolved out of (and bear remnants of) a history of changing design visions, so understanding that history is certainly worthwhile.

Freer than Linux?

Linux is getting a lot of attention right now.

Android, arguably the hottest OS right now, is powered by the Linux kernel behind the scenes. Desktop distributions like Ubuntu and Mint are gaining in popularity at the expense of the traditional inflexible (but easily manageable) paradigm of all PCs running Windows. As far as driver support among PC manufacturers goes, Linux comes second only to Windows. Linux works across multiple architectures. Broadband routers run it. Smart TVs run it. Even our upcoming ARM-based embedded home automation operating system, ThermOS, is built on GNU/Linux underneath.

So. What about BSD?

BSD?

Like GNU/Linux, BSD is descended from the Unix operating system that came out in the early 1970s. It targets the same POSIX standard for Unix compatibility as Linux, which means Linux applications are pretty much source-compatible with BSD. On the desktop, the two main distributions that others are based on are FreeBSD (the more popular branch) and OpenBSD (a slightly more ideologically-driven branch, with a heavy focus on security). PC-BSD is a user-friendly distribution based on FreeBSD (in much the same way as Ubuntu is based on Debian in the Linux sphere).

BSD operating system distributions are solid products, with a track record spanning decades of legendary reliability. Many Linux programs can be made to run on BSD, and the computing experience feels a little more responsive and robust than Linux does. The OpenBSD community even prides itself on regularly and proactively auditing the codebase to weed out potential issues before they become problems. OpenBSD has had only a handful of security vulnerabilities over the course of its entire history, a point that gets prominent mention on their website. If there's anything wrong with BSD, it's that the community isn't big enough for things like driver support to get the attention they deserve. So why is nobody using it?

Nobody's using it?

On the contrary, BSD is a lot more popular than you might think. Apple's Darwin operating system (better known for its consumer branches: iOS and OS X) is Apple's own BSD distribution, and borrows heavily from the FreeBSD branch. There are literally hundreds of millions of Apple devices out there running BSD. If you open up a terminal window on a Mac, the command-line experience is not all that different to what you get on a typical GNU/Linux system.

Now that we've introduced BSD, we can get to the crux of this post: both the Linux and BSD communities are driven by the ideal of free software, but they differ drastically in terms of what freedom means in the software world.

A different licensing philosophy

GNU/Linux is based on the GNU General Public License (GPL), a multi-page document that requires overt publication of all source code used in a piece of software, and that you freely allow others to modify your work to make it their own, and redistribute it as they wish. It's a 'viral' license in that if you use someone else's GPL'd code in your work, then you must distribute your work under the same license, so that others can continue to alter and modify that code within your program.

BSD takes on a different philosophy. The BSD license is limited to a few short paragraphs, rather than being pages long, and states that the code is free but without warranty, and does not impose restrictions on how you re-use or re-purpose a program's source code, usually as long as you retain the copyright notice.

Both approaches have their merits. The GPL is designed to encourage the continual development of free software by preventing people from poaching the free work of others without giving back, and it's perfectly sensible that people who write free software would want to license their work in this way. The GPL isn't designed to be 'popular'; it's primarily geared to serving communities of free software developers. It doesn't necessarily work for game developers who want to make money out of their work without the risk that others will 'steal', modify, and redistribute it. There aren't too many commercial games available as native Linux applications. That's not because Linux applications are actually required to be released in source form, but more because the source code is often needed for compatibility reasons, so that packages can be recompiled for different Linux distributions. (Android apps aren't native Linux applications, as they run through a virtual machine that sits on top of the system stack.)

The BSD license is designed to encourage people to use something however they wish, without any of the compliance hassles or limitations of the GPL. Members of various BSD developer communities, notably OpenBSD, have even taken the drastic step of re-writing freely available GPL-licensed utilities from scratch, to free them of the restrictions imposed by the GPL. It's meant to be a more pragmatic and popular approach, allowing developers free rein to do what they want without drowning in licensing clauses. The obvious question, then, is why BSD developer and user communities have remained relatively small, despite the enormous benefits they have brought to companies like Apple.

Friday, 22 November 2013

Modifying our cheap CNC machine

As mentioned in a previous post, our CNC 3020 milling machine came with a little room for improvement. In this post we discuss some modifications we made to make our machine more useful.

Adding limit switches

Limit switches allow the CNC mill to home itself to a repeatable zero position. They also prevent the machine from going outside its permitted range of motion and crashing into 'hard' limits.

For most CNC machines, it's common to provide two limit switches for each of the X and Y axes (for maximum and minimum limits) and a single limit switch on the Z axis for the upper limit only. The 'safe' lower Z limit depends on the kinds of materials you have clamped onto the platform and the kind of cutting bit you happen to be using at the time. It's not the sort of thing you can simply guard with a limit switch.

Limit switches are cheap to obtain online. They are simple microswitches with a lever arm and roller attached to help actuate the switch with a reasonable amount of precision. We just superglued them onto the aluminium frame of the machine in positions where they would be tripped just before the machine would hit a hard limit. With a bit of creativity, you can find positions to locate the switches that sacrifice very little of the machine's range of motion.


The X limit switches will trip if the carriage moves too far to the left or right. The Z-maximum limit switch is mounted on the carriage and will trip if the spindle rises too far up. Small dowel pieces help give the superglue more surface area to bond to.


This Y-minimum limit switch will trip if the gantry moves closer to the front of the platform.

We wired both X axis and both Y axis switches in series, using the 'Normally Closed' connectors on the switches. This is the recommended option because faults in the limit switch connections show up immediately: a broken wire or loose connection opens the circuit just as a tripped switch would, so it can't go unnoticed. We made the connections using light-duty stranded-core bell wire. We'll have to wait and see how durable this wire is in the long run, but it's very flexible and thin enough to fit into the existing cable trunking fairly easily.

For the other end of the connections, we were lucky. Although the CNC 3020 controller box doesn't provide any inputs for limit switches, there are holes on the circuit board inside where you can solder on a pin header for X, Y and Z limits. What's more, after we soldered on a 6-pin header, we found these pins to be fully functional. They simply map onto three otherwise-unused parallel port input pins.


The PCB from the controller box after soldering a strip of 6 header pins into a bank of 6 vacant holes labelled 'LIMIT'. Presumably, a more expensive model would have come with those header pins already there to begin with.

We drilled holes in the back of the controller box and ran the limit pins to female banana connectors. We chose banana connectors because they're versatile: you can use them as binding posts for bare wire, or you can terminate the wires properly by adding banana plugs. (We also added a grounding post on the back for possible equipotential bonding to the CNC platform, which might be overkill...)


Banana socket binding posts added to the back of the controller box.


Ribbon cable linking banana sockets to LIMIT pins. (GND post not yet connected.)

Keeping dust out of the controller box

The controller box has a fan inside for circulating air around the heatsink, with an unfortunate side-effect. Milling dust is sucked into the unit from the nearby milling platform, where it collects on the circuit boards inside. This isn't a huge problem when milling wood as sawdust isn't conductive. It becomes a problem if we start milling aluminium or copper. Moving the controller box further away from the milling platform isn't an option; the cables are too short.

Our solution was a combination of filters and ducting. For the front and side vents, we cut filters out of kitchen scouring pads (the type that come in flat sheets). For the price, these make excellent dust filters. We hooked them over the air intake grilles with office staples.

For the bottom air intake (the most critical, as it pulls air across a large heatsink), there wasn't enough bottom clearance between the controller box and the bench to add a filter without severely limiting airflow, with the risk of overheating. Instead, we added sponge strips around the bottom of the controller box on three sides so that air could only enter from the left hand side – away from the milling platform. The sponges had the added bonus of lifting the controller box slightly, improving airflow to the bottom vent.

The rear vent does not need filtering as it's the exhaust vent for the fan, and should repel dust when the unit is turned on.

Wednesday, 23 October 2013

Beware of Dark Patterns!

Dark patterns are increasingly-common user interface elements designed to trick unsuspecting users into selecting an undesirable option, such as installing an unwanted app or signing up to an unwanted service.

Although designers go out of their way to provide a sleek user interface for hooking you in, you may not always find an equivalent feature that lets you undo your mistake afterwards.

A typical example

The following screenshot comes from an installer stub program which the user must run in order to install video editor software downloaded from a popular website:

Most software will require you to accept a license agreement before you are allowed to install it. In the above screenshot, the user will probably assume that the green 'accept' button is the only way to go forward with installation.

Reading the screen more carefully reveals that it is actually referring to a different, completely unrelated program: one that alters the user's web browser settings in order to display advertisements. In this case, the correct way to proceed with installing the video editor, without the unwanted extra software, is actually to click the greyed-out 'decline' button.

By accepted user-interface convention (as specified by Microsoft and Apple, among others), a greyed-out button denotes an invalid option that cannot be selected by the user. Here, however, the 'decline' button is a perfectly valid option, even though it has been given the cosmetic appearance of a greyed-out button. It has been greyed out purely to deter the user from clicking on it, even though it is probably the preferred option for the majority of savvy users.

Why do dark patterns exist?

Organisations typically have a financial incentive to persuade users to install a particular program or buy a particular service. In the above case, an organisation would receive a commission for each user who clicks the 'accept' button. There are always incentives for a company to improve sales, and deceptive sales tactics are hardly anything new.

Why does it matter?

If you own a computer, you have the right to be in informed control over what runs on it.

Unwanted applications are a security risk. If you have sensitive data on your computer, you probably don't want to allow strange applications to install themselves and assume free rein over your files.

Unwanted applications deprive you of full use of the computer you paid for. If you perform computationally-intensive tasks, you probably don't want to have your system's performance and reliability compromised by a poorly-coded application you didn't realise you'd installed.

Finally, in an age of email viruses spread through a combination of shady social-engineering tactics and uninformed users, it really isn't a good look when otherwise-reputable organisations engage in the same tactics.

What to do about dark patterns

Ideally, you should never trust someone with your business, or your web traffic, if they do not treat you – or your computer – with respect.

If you stumble across a dark pattern on a website you're using (for example, a website that tries to trick you into adding unwanted items to your order, or one that doesn't tell you what the 'catch' is until you've already invested time in filling out your details), stop using that website and find an alternative.

If you find a dark pattern in an app you've downloaded, immediately delete the app from your computer. If the app is distributed by a reputable organisation, consider sending an email to the distributor to let them know what is going on.

More information

There's an excellent guide to dark patterns, along with a collection of examples, over at darkpatterns.org.

Tuesday, 8 October 2013

Getting a CNC mill

We've just got hold of a cheap CNC milling machine, which ought to come in handy when building hardware prototypes.

What's a CNC mill?

Everyone's heard of 3D printers. A CNC mill is a bit like a 3D printer in reverse. Instead of building up an object by adding material in layers, a CNC mill starts out with a solid piece of material (a block of wood, aluminium, etc.) and carves away unwanted material until you're left with the finished part. The quality is higher than with 3D printing. When Apple sells a high-end product that was 'carved from a single piece of aluminium', they're talking about CNC milling.

Unlike 3D printers, CNC milling machines haven't started to become 'consumerised' yet. You pretty much have to know the machine inside out in order to use it. Fortunately, they aren't particularly complicated machines, and we do appreciate the fundamental simplicity. There are two main bits to understand:

  • A spindle holds an engraving bit. It spins the engraving bit at high speed (like a drill) to carve out material.
  • The spindle can be moved around in three dimensions (X, Y, Z axes) to different co-ordinates by a program script (called G-code; see the short example after this list) running on a connected PC. When you move the spinning engraving bit through a path where a piece of wood happens to be, the wood is carved away (unless you go too deep too quickly, or forget to spin up the spindle first, in which case you're likely to stall the machine or break something).
    The machine has a set of three motors, one for each axis of motion. Each motor turns a long screw that moves the spindle along its respective axis.
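
As a taste of what a G-code script looks like, here's a minimal hand-written fragment; the co-ordinates and feed rates are made-up values for illustration (and remember that on this machine the spindle itself is switched on manually):

    G21          (use millimetres)
    G90          (absolute co-ordinates)
    G0 Z5        (lift the bit clear of the work)
    G0 X10 Y10   (rapid move to the start position)
    G1 Z-1 F60   (plunge 1mm into the material, slowly)
    G1 X50 F300  (cut a straight 40mm channel along the X axis)
    G0 Z5        (lift clear again)
    M2           (end of program)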

What did we go with?

We went with a YOOCNC 3020 machine. This machine is sold on eBay under a variety of brand names (just search for CNC 3020), and is just about as cheap as you can get a new machine without building your own from scratch. By all accounts, it's a machine loved by amateurs, hated by professionals, and in need of the odd modification to overcome its inevitable design flaws. We're new to all this, so we haven't been spoiled enough to demand anything better.

Setting up the machine

The CNC machine is controlled by a PC through the parallel port interface that many motherboards no longer include (although there are cheap PCI cards available on eBay that will add a parallel port to almost any motherboard). The PC must provide the correct impulses in real time, or else the CNC machine stutters, or the PC application that controls it gets out of sync with the machine's actual position. That usually means setting aside a PC that's dedicated exclusively to CNC work.

There's a special Ubuntu-based operating system called LinuxCNC (formerly known as EMC2), which provides software for controlling a CNC machine, running on a customised Linux kernel optimised for the real-time demands of machine control.

For controlling the machine, I put together a PC with a 1.5GHz Via C7-D processor, 1GB of RAM, and an old 4.3GB Quantum Bigfoot hard drive that cost around $900 in 1996. (I used an older version of LinuxCNC/EMC2 because the new version is based on a more bloated version of Ubuntu.) The Via C7 line of CPUs came out a few years ago as a competitor to the Intel Atom. Despite mediocre performance in many desktop and gaming applications, they provide great real-time characteristics, which makes them well-suited to industrial applications.

Limitations of the machine

On this particular machine, the spindle control is all-manual. You have to flick a switch to turn it on, and slowly rotate a variable control to take it up to full speed. (More expensive machines allow the spindle rotation to be controlled by the computer.) The machine comes with a sticker attached warning you that you'll blow one of the internal fuses if you spin it up too fast:

The emergency stop button on the front panel is purely a software input to the PC through one of the parallel port pins. If you have it set up correctly, it'll instruct the PC to stop moving the machine along the X, Y and Z axes, but it won't cut the power to the rotating spindle. (Looking inside the controller box, though, there is an unused pin header on the power supply board for an emergency stop input for the spindle. Perhaps with a bit of work, the controller box could be modified to incorporate the spindle into the emergency stop.)

A problem we noticed fairly early on is that some engraving dust tends to get sucked into the controller box by the inbuilt cooling fan. We'll discuss this later in another blog post.

Another issue is that there are no limit switches or inputs on the controller box. What does this mean? Well, a CNC machine always knows how to move, say, 52.4mm along the X axis from its current position, but it has no way of knowing its absolute position, or how close it is to slamming into the end of its range of motion and stalling one of the motors (or, in the case of this machine, pinching the Z axis motor cable). Without limit switches, you have to zero (or 'touch-off') the spindle position relative to your work-piece each time you use it. With limit switches, the PC software can automatically home the machine to a default position each time you use it, and automatically ensure that the spindle doesn't go outside an allowed safe range of motion.

The good news is, it is possible to add limit switches to this particular machine: the PCB inside the controller box provides the capability, although there's a bit of soldering required. We'll cover this in another blog post later.

Finally, the documentation leaves a bit to be desired. Parameters are provided for a Windows program called Mach3, but EMC2 requires a few extra parameters. We found we had to start with what was given and work backwards to determine the rest.