Bubble 2.0 – Why Not APIs?
Why wouldn’t a web service include an API (Application Programming Interface)? In my previous post I blogged about how a good API (a way for one program or service to request services from another) can lead to fame and fortune for the developers who are smart enough to include it. Google Maps is just one example of how making functionality available through an API results in a wealth of third-party services built on the API-supporting service. However, most software and many web services do not support APIs. Why?
The most common reason a service may not have an API is purely practical: APIs take time and work. Code has to be written and tested in order to implement an API. We’re all in a hurry to get a new service out. Most new services are built on a shoestring. An API may be a “nice to have” in the first release of a Web 2.0 (aka Bubble 2.0) service. Developing it can wait for the next release. And the next. And the next.
An API is also a commitment. If I publish an API for my gidget.tabulo.us service so that you can incorporate my dancing gidgets in your web site, you’ll be unhappy with me if I stop supporting the API or change it in some way in my next release so that your service suddenly breaks. Nerd ethics says that, if you hack into my site and somehow figure out how to make my gidgets dance, it’s your problem if my next release breaks your hack and the music stops. But, if I publish an API, you are entitled to rely on that API continuing to work as I published it. Third-party developers don’t want to build their services on shifting APIs. So, once I publish an API, I am committed to maintaining it in each future release of my service. Good reason to think twice about the costs and benefits of an API. In general, services which do publish APIs publish them a release or two after they have made the same functionality available directly to human users through a GUI (Graphical User Interface).
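One common way to honor that commitment while still evolving a service is to version the API, so that old clients keep working while new releases add features. A minimal sketch in Python; the gidget service, its endpoints, and its handlers are all hypothetical:

```python
# Hypothetical sketch: versioning lets a published API stay frozen
# while a new release adds capabilities alongside it.

def dance_v1(gidget_id):
    # The originally published behavior; third parties rely on this
    # exact response shape, so it must never change.
    return {"id": gidget_id, "dancing": True}

def dance_v2(gidget_id, tempo="waltz"):
    # The next release adds a tempo option without breaking v1 callers.
    return {"id": gidget_id, "dancing": True, "tempo": tempo}

# Each published version routes to its own handler; v1 is frozen.
HANDLERS = {
    "/api/v1/dance": dance_v1,
    "/api/v2/dance": dance_v2,
}

def handle(path, gidget_id):
    return HANDLERS[path](gidget_id)
```

A v1 caller still gets exactly the response it was promised, while new integrations can target v2. The cost, of course, is that every frozen version is code you must maintain and test forever.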
The second most common reason for NOT publishing APIs is that the developer of the software or the service wants to reserve all future exploitation of the API-less program or service for himself or herself. I think my dancing gidgets are so cool that I will be able to use them to create a more friendly search engine than Google; so I don’t want Google to use my APIs to incorporate the gidgets in its service. Or I have a huge investment in the database I built which contains the present location of every iguana on earth. I’m happy to have people look at this database because they buy iguana-related products from the accompanying ads; but I don’t want other services to be able to use an API to incorporate my iguana data so that they can also attract iguana-specific advertising.
Microsoft has often been accused of having a set of secret Windows APIs that developers of Microsoft Office Applications can use to enhance their applications. The alleged motive for this alleged behavior is to reserve the benefits of these APIs for Microsoft exploitation. However, Microsoft is also motivated to have third parties develop for Windows. As far as I know from my time at Microsoft, hiding APIs was NOT company policy. The point is, though, that it is not always a good business decision to allow third parties to incorporate all of the functionality or data that you are providing.
The third major reason for not building APIs into a service is to protect it from misuse. del.icio.us does not support APIs which would allow other services to tag web content without user involvement. Why? Simple: if such an API existed, it would be seized by the spam community to automatically and massively tag websites that want to call attention to themselves. For the same reason digg wants to be available only to humans, not to robots. No API for digging.
APIs can be dangerous. The rich sets of APIs available for Microsoft Outlook made it possible for viruses to raid address books and send infections disguised as email from a known human. Microsoft now requires human interaction before a program is allowed access to the address book through its API.
When no API is available, developers who want to drive the function of one service from another – for good or bad reason – will often try to do so through the human user interface. There is a joke that goes “on the Internet no one knows you’re a dog.” Well, on the Internet, no one knows you’re a computer, either. I can write a program which looks to another program as if it were a human. It appears to click a mouse and type in variables. It appears to “read” what comes back. But it is really another program and its function may well be malicious: it may be misappropriating your iguana data, stealing your gidget functionality, or spamming your folksonomy with phony tags for Viagra websites.
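That masquerade is easy to illustrate. A script can send the very same HTTP request a browser sends when a human fills in a form and clicks a button. A minimal sketch, assuming a hypothetical tagging endpoint and form fields; nothing on the receiving end distinguishes this POST from a real mouse click:

```python
# Sketch: a program posing as a human at a web form.
# The URL and form field names are hypothetical.
import urllib.parse
import urllib.request

# The same form data a browser would send when a person clicks "save".
form = urllib.parse.urlencode({
    "url": "http://example.com/my-viagra-site",
    "tags": "must-read popular cool",
}).encode()

req = urllib.request.Request(
    "http://tagging-service.example/save",  # hypothetical endpoint
    data=form,
    headers={"User-Agent": "Mozilla/5.0"},  # claims to be a browser
)

# urllib.request.urlopen(req) would submit it; to the server this is
# indistinguishable from a human's click, repeated as fast as a loop
# can run.
```

This is exactly why screen-scraping works as a substitute for a missing API, and exactly why it scales so well for spammers.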
Of course, defenses have been built – anti-APIs, if you will. Sign up for a digg account and you have to prove that you’re a human by picking wobbly letters out of irregular backgrounds – something humans can so far do better than machines. Many services require that each user have an email account and respond to a message sent to that account as part of the signup process. Behavior no human could produce, like thousands of requests a minute, is often rejected.
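That last defense, rejecting inhumanly fast traffic, is simple to sketch. Here is a minimal sliding-window rate limiter in Python; the limit of 30 requests per minute is an assumed ceiling for human-speed activity, not any service’s real policy:

```python
# Sketch of an anti-API defense: refuse any client that makes more
# requests per minute than a human plausibly could. LIMIT and WINDOW
# are illustrative values.
import time
from collections import defaultdict, deque

LIMIT = 30      # assumed maximum human-speed requests per window
WINDOW = 60.0   # window length in seconds

history = defaultdict(deque)  # client id -> timestamps of recent requests

def allow(client, now=None):
    """Return True if this request is within human-plausible limits."""
    now = time.time() if now is None else now
    q = history[client]
    while q and now - q[0] > WINDOW:
        q.popleft()           # forget requests older than the window
    if len(q) >= LIMIT:
        return False          # robot-speed traffic: rejected
    q.append(now)
    return True
```

A human clicking at a few requests a minute never notices the limiter; a script hammering the same form gets cut off after its first burst.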
Despite the costs and the dangers, my prediction is that the most successful Web 2.0 services will support rich APIs. Most services only meet a slice of user need. Most users won’t even know about most services. APIs allow services to be knit together into valuable aggregates. It is the best of these aggregates that will get the user attention they need in order for their linked-together services to prosper individually.