Tuesday 30 September 2014

Coming Soon: Viewshare

We're busy developing a new website called Viewshare and, well, we're excited.

Viewshare is a website that lets you create virtual tours of physical spaces by taking multiple photos from different angles and linking them together.

Think of it as a sort of D.I.Y. Google Street View. You assemble the spaces yourself. You can show off intimate places that are meaningful to you, or cool public places (like your best friend's bar) that you want people to visit in real life.

Stay tuned!

Tuesday 22 July 2014

Deprecate at Your Peril!

You're building something great. You're investing a lot of time and effort into writing the code. Question:

What kind of platform would you rather develop for? One with a bit of a learning curve at the start, but which you can keep using, depending on, and adding new features on top of for years to come? Or one where features your application depends on might be removed or changed in the next version, forcing you to spend time re-writing your code a few years down the track just so your application won't stop working when the new version of the platform comes out?

Enter the concept of stability

'Stability' is a misunderstood term that gets thrown around a lot in IT. 'Is this OS stable?' People use it to mean something reliable and well-built. Something that won't crash easily.

In academic software engineering, stability has a different, but related, definition. Stable means you can build on it, because it isn't subject to change underneath your application between versions. That matters, because if you have a mature code-base that depends on a particular API, and the API's fundamental interface changes between versions, you have a moving-target problem: you have to periodically modify your application's function calls just to keep up.

Not only does that rob you of time you could otherwise have spent adding new features, it can require major surgery on your application, at the risk of introducing regression faults or breaking your application's original internal architecture (the new version of the API might require you to adopt a new, 'improved' usage paradigm that your code wasn't originally designed around), making your code less elegant for future maintainers. In the real world, where enterprise applications have incredibly complicated and mature code-bases that are often not even well understood by the people who are paid to maintain them, this is a real problem.

And if it's bad for large enterprises, it's worse for independent developers, who will often abandon their work when the platform no longer runs it rather than maintain it indefinitely. In contrast to film and literature (non-interactive forms of entertainment), where classic works may endure for centuries, think of the cultural loss represented by the countless computer games that have been forgotten simply because it is no longer possible to run them.
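One partial defence against the moving-target problem, at least for application developers, is to wrap an unstable third-party API behind a thin interface of your own, so that when the platform changes, only the wrapper needs rewriting rather than hundreds of call sites. Here's a minimal sketch in PHP; fetch_record_v1() and fetch_record_v2() are hypothetical vendor functions standing in for an API whose interface changed between versions:

    <?php
    // All application code calls this wrapper instead of the vendor API
    // directly. If the vendor removes fetch_record_v1() in a new version,
    // only this one function needs updating.
    function app_fetch_record($id)
    {
        if (function_exists('fetch_record_v2')) {
            // Hypothetical new interface: takes an extra options array.
            return fetch_record_v2($id, array('timeout' => 5));
        }
        // Hypothetical old interface.
        return fetch_record_v1($id);
    }

It doesn't remove the need to keep up with the platform, but it confines the damage to one place.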

Examples of stable platforms

Programming languages like C and C++ have been formally standardised by international standards bodies. Although new features get added in each revision of the language, the standard remains backward compatible. While these languages mightn't be perfect, standardisation means that you can depend on them to be a fixed point in your project.

Recently, in response to concerns from governments and large organisations that documents archived as Microsoft Word files might be rendered inaccessible in a decade's time, and prompted by the existence of the already-standardised OpenDocument formats, Microsoft went to a lot of trouble to get its XML-based Office formats officially standardised. Microsoft's DOCX format might leave a little to be desired in terms of compatibility, but at least they made the effort.

The X Window System, version 11, is one of the most stable pieces of software out there. It's the graphical front-end used by almost every Linux distribution and almost every BSD distribution, and it's even provided as a compatibility layer in Apple's OS X. It's been at version 11 since the 1980s. The API is horrible to program for (people rarely work with the library directly anymore), and it provides several features that are now redundant because people have come up with better ways of doing things. But that doesn't matter. What matters is that it's always there behind the scenes, and it's reached a level of stability that makes it dependable and means it will continue being used for years to come.

Why we're giving up on OpenID

We had high hopes for OpenID. The vision was that you would sign up for an OpenID account through any OpenID provider, and you would be able to use that account to log into any website that also followed the OpenID standard. Rather than having to create a separate account for every website, you'd only need one or two accounts for all websites. Individual website owners wouldn't need to worry about securing your credentials as these would be held and authenticated by OpenID providers instead.

Companies like Google, Yahoo, Microsoft, even AOL, adopted the OpenID standard. We set up an OpenID login system on our website. We wouldn't need to deal with account security at our end; we could simply allow people to use their existing OpenID account (such as a Google account) to log in, without their even having to sign up to our website separately. The system was simple to implement and seemed to work well. There were potential security vulnerabilities, but no really fatal flaws that couldn't be fixed.
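To give a sense of how simple it was, here's roughly what a relying-party login looked like using the third-party LightOpenID library (a sketch for illustration, not our production code; treat the details as assumptions):

    <?php
    require 'openid.php'; // LightOpenID: a small, single-file OpenID library

    $openid = new LightOpenID('www.example.com'); // your site's host name

    if (!$openid->mode) {
        // Step 1: send the visitor to their provider to authenticate.
        $openid->identity = 'https://www.google.com/accounts/o8/id'; // Google's OpenID 2.0 endpoint
        header('Location: ' . $openid->authUrl());
    } elseif ($openid->validate()) {
        // Step 2: the provider has vouched for this identity; log the user in.
        echo 'Logged in as ' . htmlspecialchars($openid->identity);
    } else {
        echo 'Login failed or was cancelled.';
    }

The website never sees a password; it simply trusts the provider's signed assertion.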

Then something changed. The OpenID Foundation announced that they didn't believe in OpenID anymore, and released a new, improved, and very different system called OpenID Connect instead. A website called MyOpenID, which provided OpenID accounts for people who didn't want to sign up with larger companies like Google, announced that it was shutting down for good. Websites like Flickr and Facebook announced that they were moving away from OpenID and would no longer be accepting third-party login credentials.

Fortunately for us, our OpenID login facility was never more than experimental. Had we been serving customers through it, those customers could have found themselves locked out of accounts that no longer authenticated, unable to access their purchases. All because the OpenID Foundation decided that pursuing a new 'easier to use' system was more important than preserving the functionality that existing websites were already depending on.

Why the PHP Group is making a mistake

PHP is a programming language that's commonly used to generate dynamic web pages on the server. MySQL is a database system that is often used hand-in-hand with PHP to store and retrieve a website's data. (Shameless plug: for people who want a simple web content management system without the mystery of a MySQL database, there's always FolderCMS.)

A few years ago, the PHP Group announced that the MySQL functions in PHP were being deprecated. This means 'we're going to get rid of them, so stop using them.' In their place, there would be two newer, but somewhat different, MySQL interfaces to choose from: mysqli and PDO. This was, and is, a controversial move. A lot of very popular websites rely on PHP's established MySQL functionality, and PHP owes a lot of its popularity to its ability to interface easily with MySQL. Why were they doing this? Their own website's FAQ isn't very clear:

Why is the MySQL extension (ext/mysql) that I've been using for over 10 years discouraged from use? Is it deprecated? ...

The old API should not be used, and one day it will be deprecated and eventually removed from PHP. It is a popular extension so this will be a slow process, but you are strongly encouraged to write all new code with either mysqli or PDO_MySQL.

That isn't really a justification; it's just the question re-worded into the format of an answer. There are several threads on Stack Overflow, where the question has been asked repeatedly, that provide more substantial answers. One is that the old functions are potentially dangerous for beginners who don't know that they are supposed to validate and sanitise user input before sending it into an SQL query. Another is a belief that developers should be moving away from text-based SQL queries and towards prepared, pre-compiled queries, which offer a performance boost.

On the other hand, this represents a significant move away from the usage paradigm that made SQL popular in the first place. SQL has become universal because, like the HTML that powers the web, data is exchanged in a well-established, human-readable language that isn't coupled to system-dependent function bindings or compiled code. You send a text-based query to the database engine and receive a reply. It doesn't need to be complicated.
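To make the change concrete, here's the same lookup written against the old extension and then with a PDO prepared statement (database, table, and column names are invented for illustration):

    <?php
    // Old, deprecated ext/mysql style: the query is one string, and it's
    // on the programmer to escape user input before interpolating it.
    $conn = mysql_connect('localhost', 'user', 'password');
    mysql_select_db('shop', $conn);
    $name = mysql_real_escape_string($_GET['name'], $conn);
    $result = mysql_query("SELECT price FROM products WHERE name = '$name'", $conn);
    while ($row = mysql_fetch_assoc($result)) {
        echo $row['price'], "\n";
    }

    // Newer PDO style with a prepared statement: the query template is
    // sent to the server separately from the data, so no manual escaping
    // is needed and injection is prevented.
    $pdo = new PDO('mysql:host=localhost;dbname=shop', 'user', 'password');
    $stmt = $pdo->prepare('SELECT price FROM products WHERE name = ?');
    $stmt->execute(array($_GET['name']));
    foreach ($stmt as $row) {
        echo $row['price'], "\n";
    }

The prepared-statement version is safer, but you can see how the query stops being a single self-contained piece of human-readable text.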

Tuesday 6 May 2014

Getting new heating or air conditioning? Insist on a model that works with third-party thermostats.

There are a lot of heating and cooling appliances on the market, and most of them come with horrible thermostats. You know the type: it requires its own batteries, has a calculator-style LCD display that's difficult to read in low light, and lacks intuitive features.

Some manufacturers (particularly for split system air conditioning units) lock you into using the manufacturer's thermostat. Many don't.

Get ready for the future

In the future, all your home's heating and cooling will be co-ordinated by a small server appliance, situated out of the way in a closet or inside your home's data cabinet next to your broadband router. It won't consume much power: a device drawing a watt or so around the clock uses under 10kWh per year, maybe around $2 worth of electricity at typical tariffs. But it will run your heating and cooling appliances in a way that saves you money.

It will also make your heating/cooling system more practical. Instead of having to squint at a dedicated LCD display to access a limited set of functions, you'll be able to access a full set of features through your smart phone and PC: phones and computers are designed to allow for far better human-computer interaction than a dedicated thermostat could ever have.

Here's a phone screenshot showing the web interface for a prototype device we're working on, called THERMOSERVER:

Here you have an adaptive interface, where you can customise the panels on display, scroll down to access other features, and program behavioural policies. And if you share your house with other people, you can specify restrictions to prevent them from running up your energy bill too high.
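As a sketch of what one of those restriction policies might look like under the bonnet (entirely hypothetical; the field names are invented for illustration and aren't THERMOSERVER's actual format), think of per-zone limits expressed as plain data:

    <?php
    // Hypothetical restriction policy data. None of these field names
    // come from the real product; they only illustrate the idea.
    $policies = array(
        'guest_room' => array(
            'max_heat_setpoint_c' => 22.0,      // heating can't be set above 22 degrees
            'min_cool_setpoint_c' => 24.0,      // cooling can't be set below 24 degrees
            'allowed_hours'       => array(7, 23), // appliance may only run 7am to 11pm
        ),
    );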

If you want a wall-mounted controller too, that's easy to do. Just purchase an inexpensive low-spec tablet computer next time your supermarket has them on special, plug it into a power socket, and stick it on your wall. If your house is cabled with ethernet sockets or your router is nearby, just add an ethernet dongle and you don't even need WiFi.

How a thermostat works

A typical central heating system with a wired thermostat on your wall works on a simple principle. There are two wires between the heating unit and your thermostat that carry a low voltage (e.g. 24VAC). When your thermostat detects that heat is required (based on the temperature you've set and the ambient temperature in the room), it joins the wires to complete the circuit. When the room gets warm enough, it breaks the circuit again.

Cooling systems work the same way in reverse.
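In software terms, the thermostat's decision-making is just an on/off (bang-bang) controller with a small deadband, so the relay doesn't chatter on and off around the setpoint. Here's a minimal sketch in PHP; read_room_temperature() and set_heating_relay() are hypothetical stand-ins for whatever sensor and relay interface the hardware provides:

    <?php
    // Bang-bang thermostat loop with hysteresis (illustrative sketch).
    $target   = 21.0;  // desired temperature, degrees C
    $deadband = 0.5;   // margin either side of the target
    $heating  = false;

    while (true) {
        $temp = read_room_temperature();      // hypothetical sensor read

        if (!$heating && $temp < $target - $deadband) {
            set_heating_relay(true);          // close the circuit: call for heat
            $heating = true;
        } elseif ($heating && $temp > $target + $deadband) {
            set_heating_relay(false);         // break the circuit: stop heating
            $heating = false;
        }

        sleep(10);                            // re-check every ten seconds
    }

A cooling loop is simply the mirror image of this.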

THERMOSERVER works using electromechanical relays to tell heating or cooling appliances whether they are required. This approach was chosen because it's common among third-party thermostats.

Some appliances use modulating thermostats, which don't just say 'yes' or 'no'; they also communicate a power level that indicates how close the ambient temperature is to the target temperature (e.g. low, medium, or high). Such devices will typically allow you to flick an internal switch to 'non-modulating mode', allowing you to control them with third-party thermostats. For example, a Baxi hydronic heating system boiler comes with a wired modulating thermostat that can be moved from its original location on a wall into a special receptacle inside the boiler unit itself if control via a third-party thermostat is required. In this case, the instructions say to switch the unit into non-modulating mode.

The efficiency benefits of running a system in modulating mode are debatable, and many high-efficiency heating units don't provide the capability at all. As the temperature inside your house nears your desired target temperature, a modulating heating system will switch to a reduced setting and keep running for slightly longer in order to reach the target cut-off temperature. A non-modulating system will simply reach the target temperature sooner. The laws of thermodynamics state that the same amount of energy is required in either case.

Insist on Interoperability

Some devices will only work with a supplied wireless remote control that is cumbersome to use and easy to misplace or step on.

Before you buy, always ask the manufacturer whether the appliance can be wired up to a third-party (and non-modulating) thermostat. For a dedicated heater or cooler this will be a simple 2-wire connection. For a combination heating/cooling unit (such as a reverse-cycle air conditioner), this ideally means one pair of wires for heating and another pair for cooling. Ask the salesperson too. If the answer is no, tell them it's a deal-breaker.

That way, you'll avoid locking yourself into a single manufacturer's proprietary control system, and you'll have heaps of flexibility in the years to come.

Monday 28 April 2014

Nice in theory, but...

Every so often someone comes out with The Next Big Thing. And they market the theory behind it, people say 'hey, that makes sense', and people start buying the product.

But what if the theory is flawed or incomplete? What do you do with an invention that doesn't really work, because the idea wasn't properly thought through? Often it takes something going out of style before people look back at a flawed product and ask 'what were we thinking?'

Ergonomic keyboards

A good example is a computing product that came out of the 90s: the ergonomic Microsoft Natural keyboard, which splits the key layout into left and right sections that are angled and tilted to point in the direction of your elbows. The idea was based on the fact that your forearms form an inverted V when you type. By putting a kink in the middle of the keyboard, you wouldn't need to kink your wrists in order to have your hands positioned over the keys with your fingers pointing directly forward.

The obvious problem: nobody actually types with their hands positioned that way. Human fingers are different lengths; your little finger is significantly shorter than your index finger, with an approximate gradation in length between them. That means that when you position your fingers in the home position on an ordinary keyboard, your hands form an inverted V too. Using a Microsoft Natural keyboard actually forces you to either kink your wrists the other way, or spread your elbows out further than you normally would.

There's a keyboard on the market that's been in continuous production since the 1980s. It's known as the Model M, and is now sold as the Unicomp Classic. It has the same straight layout as any cheap keyboard, yet enjoys a bit of a following among writers and programmers as a comfortable keyboard to type on. The difference is in the internal mechanism of the keys.

A typical keyboard registers keystrokes on a 'membrane' under the keys. The membrane consists of two layers of plastic with screen-printed electrical traces running across them, kept slightly separated from each other by an intermediate layer that has holes where the keys are. When you press a key, you flatten a silicone dome, which pushes the two membrane layers into contact with each other and completes the circuit at that spot. When you release the key, the silicone dome springs back into shape, and the key pops up.

A Model M also uses a membrane arrangement, but rather than a rubber dome, it has a spring-and-hammer mechanism under each key. When a key gets two thirds of the way down, the spring buckles, causing its base to pivot and a hammer to strike the membrane. At this point you hear a loud click and the resistance beneath your finger disappears. From there, your finger muscles (which are actually located further up your arm) instinctively relax as the key hits the bottom, so you avoid straining your tendons.

Old > new?

How can a thirty-year-old keyboard design possibly be better than something you get with a new computer today? Well, the Model M was designed by IBM in the heat of the computer wars of the 80s. IBM invested a lot of resources into developing it, and it wasn't a cheap keyboard to manufacture. The reason was that Apple computers were all sold with rubber-dome keyboards. Selling a computer with a higher-quality keyboard that didn't feel cheap to type on gave IBM a competitive advantage in the world of business computing, at a time when a lot of personal computers on the market must have seemed (to serious business people) like toys.

So the question 'what were we thinking?' goes both ways. Sub-optimal design often falls out of favour over time, but a lot of good design gets forgotten too. Design priorities change, and the original vision gets neglected. It's important for designers today not only to create new visions of the future, but also to look back and understand what the vision used to be. Today's computing devices have evolved out of (and bear remnants of) a history of changing design visions, so understanding that history is certainly worthwhile.

Freer than Linux?

Linux is getting a lot of attention right now.

Android, arguably the hottest OS right now, is powered by the Linux kernel behind the scenes. Desktop distributions like Ubuntu and Mint are gaining in popularity at the expense of the traditional inflexible (but easily manageable) paradigm of all PCs running Windows. As far as driver support among PC manufacturers goes, Linux comes second only to Windows. Linux works across multiple architectures. Broadband routers run it. Smart TVs run it. Even our upcoming ARM-based embedded home automation operating system, ThermOS, is built on GNU/Linux underneath.

So. What about BSD?

BSD?

Like GNU/Linux, BSD is based on the Unix operating system that came out in the early 1970s. It aims at the same POSIX standard for Unix compatibility as Linux, which means Linux applications are pretty much source-compatible with BSD. On the desktop, the two main distributions that others are based on are FreeBSD (the more popular branch) and OpenBSD (a slightly more ideologically-driven branch, with a heavy focus on security). PC-BSD is a user-friendly distribution based on FreeBSD (in much the same way as Ubuntu is based on Debian in the Linux sphere).

BSD operating system distributions are solid products, with a track record of legendary reliability spanning decades. Many Linux programs can be made to run on BSD, and the computing experience feels a little more responsive and robust than Linux does. The OpenBSD community even prides itself on regularly and proactively auditing the codebase to weed out potential issues before they become problems. OpenBSD has suffered only two remote holes in the default install over the course of its entire history; a point that gets prominent mention on their website. If there's anything wrong with BSD, it's that the community isn't big enough for things like driver support to get the attention they deserve. So why is nobody using it?

Nobody's using it?

On the contrary, BSD is a lot more popular than you might think. Apple's Darwin operating system (better known for its consumer branches, iOS and Mac OS X) is Apple's own BSD distribution, and borrows heavily from the FreeBSD branch. There are literally hundreds of millions of Apple devices out there running BSD. If you open up a terminal window on a Mac, the command-line experience is not all that different to what you get on a typical GNU/Linux system.

Now that we've introduced BSD, we can get to the crux of this post: both the Linux and BSD communities are driven by the ideal of free software, but they differ drastically in terms of what freedom means in the software world.

A different licensing philosophy

GNU/Linux is based on the GNU General Public License (GPL), a multi-page document that requires the source code of a distributed program to be made available, and that you freely allow others to modify your work to make it their own and redistribute it as they wish. It's a 'viral' license in that if you use someone else's GPL'd code in your work, then you must distribute your work under the same license, so that others can continue to alter and modify that code within your program.

BSD takes a different philosophy. The BSD license runs to a few short paragraphs rather than pages, states that the code is free but comes without warranty, and imposes essentially no restrictions on how you re-use or re-purpose a program's source code, usually as long as you retain the copyright notice.

Both approaches have their merits. The GPL is designed to encourage the continual development of free software by preventing people from poaching the free work of others without giving back, and it's perfectly sensible that people who write free software would want to license their work in this way. The GPL isn't designed to be 'popular'; it's primarily geared to serving communities of free software developers. It doesn't necessarily work for game developers who want to make money from their work without the risk that others will 'steal', modify, and redistribute it. There aren't many commercial games available as native Linux applications. That's not because Linux applications are actually required to be released in source form, but because the source code is often needed for compatibility reasons, to recompile packages for different Linux distributions. (Android apps aren't native Linux applications, as they run in a virtual machine that sits on top of the system stack.)

The BSD license is designed to encourage people to use something however they wish, without any of the compliance hassles or limitations of the GPL. Members of various BSD developer communities, notably OpenBSD, have even taken the drastic step of re-writing freely available GPL-licensed utilities from scratch, to free them of the restrictions imposed by the GPL. It's meant to be a more pragmatic approach, giving developers free rein to do what they want without drowning in licensing clauses. The obvious question, then, is why the BSD developer and user communities have remained relatively small, despite the enormous benefits they have brought to companies like Apple.