
Great Time to be a Nerd


My friend Jeff Pulver was just telling me that it’s a great time to be a nerd. Right on cue I walked past the poster above, which I think advertises a new TV show.

On the same theme, Mary asked the other day what all us nerds would’ve done for a living if there weren’t computers or other electronic devices for us to play with. It’s a very frightening thought. I thought I’d be a writer like my father but met my first computer, an IBM 7090 mainframe, in the computer lab at college when I was nineteen (1962) and found this was a much better way to make a living. Didn’t get around to writing fiction again until a couple of years ago when I wrote hackoff.com: an historic murder mystery set in the Internet bubble and rubble – not surprisingly, nerd Dom Montain is a major character of that book.

So what would we have done? Took some thought. Not many of us would have been gymnasts, super warriors, craftsmen (takes fine motor skills), or very good farmers.

We would’ve counted, added, subtracted, multiplied and divided. Before there were computers, there were lots of numbers that had to be crunched by hand. When you go over the George Washington Bridge or ascend the Empire State Building, remember that the only mathematical tools for all the engineering calculations were adding machines and slide rules (easy to use for a nerd); presumably the calculations were done at least twice.

All the accounts of everything had to be done by hand. All monthly statements manually totaled (at best on an adding machine).

So we would’ve been accountants or engineers or employed by such.

It’s much more fun to teach computers to do all that work (what programming is all about); we never have to do the same thing twice – at least until we reimplement it for a new computer or operating system. And we get great toys to play with.

Jeff’s right.

The Homeowners Own the Banks

There’s an old saying “If you owe the bank a hundred dollars, they own you. If you owe the bank a million, you own the bank.” The numbers need to be updated but the principle applies. The banks lent too much; they over-valued the housing stock that was the collateral; they made the same bet that the homeowners did: prices would keep going up. But the banks can’t afford to own all the houses whose mortgages are under water; so the homeowners own the banks.

The banks (by which I mean those who own the mortgages) should make a deal with the homeowners; they need each other. Naturally both the banks and the homeowners would like taxpayers to share some of the pain. There’s no good reason why taxpayers should get in the middle; but this is an election year so there are lots of bad reasons which may be compelling.

Bankers are suggesting that the federal government buy these mortgages at a discount; the bankers would take losses (which some of them have already booked); the feds would presumably go easy on the borrowers; people would keep their homes; and the downward spiral of housing prices MIGHT be stopped.

A few problems with this approach: 1) how would the government decide which homeowners “deserve” forbearance and which don’t? 2) what happens to people who can’t make any reasonable payments at all? 3) who decides what price the government pays for these mortgages? Overpaying means bailing out the lenders.

The prospect of a bailout is delaying the real negotiation and pain-sharing that would be the beginning of the end of this “crisis”.

Bank by bank, the banks should voluntarily set policies which let them reduce the principal due on some mortgages to less than the value of the collateral. The loans have already lost value along with the houses; “sharing” the loss in value with the homeowners increases the chances that the homeowners will be willing and able to stay and pay money to the bank. The banks don’t want the houses; they need to help their debtors work out of debt. Banks can also decide to forgo raising interest as much as they are entitled to by the mortgage contracts, all in the interest of getting paid as much as possible.
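The arithmetic behind that choice is simple enough to sketch. These numbers are entirely hypothetical, chosen only to illustrate why a write-down can beat foreclosure:

```javascript
// All numbers here are made up for illustration; none come from the post.
var owed = 300000;            // face amount of the mortgage
var houseValue = 220000;      // what the collateral is worth today
var foreclosureLoss = 0.25;   // assumed fraction lost to legal fees, vacancy, resale

// Foreclose: the bank ends up selling a discounted empty house.
var recoveryForeclose = houseValue * (1 - foreclosureLoss); // 165,000

// Write down: mark the principal to just under the collateral's value,
// so the owner has a reason to stay and keep paying.
var recoveryWriteDown = houseValue * 0.95; // ~209,000

console.log(recoveryForeclose < recoveryWriteDown); // true
```

With these made-up numbers the bank collects roughly $44,000 more by writing down than by foreclosing, before even counting the cost of carrying an empty house in a declining neighborhood.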

A second benefit of adjusting the principal to fit reality is that this strategy allows owners who’d rather have less house and less debt to sell. This is good news for the bank, which gets its (reduced) principal back; it’s good for the people who can reduce their debt burden by downsizing; and it’s good for the new owners who can presumably afford the joint because sub-prime mortgages are hard to get these days. As long as the face amount of the mortgage is more than the value of the house, the mortgagor is trapped since they have to come up with cash in order to sell.

Banks will still foreclose in cases where the collateral has enough value and/or there is no chance of getting repaid anything substantial. Banks will still collect as much as they can from the mortgagors. This isn’t a bailout; it’s a recognition of reality. This kind of workout happens all the time with corporate debt. It’s not a “moral hazard” as a government bailout would be because there’s no subsidy and plenty of pain for all. Still more people WILL get to keep their homes than if the banks keep trying to collect full freight. A huge advantage is the government doesn’t have to decide who’s worthy of being bailed out.

BTW, I know I’m oversimplifying. The mortgages are often NOT owned by the banks which service them. They seem, in fact, to be owned by investors all over the world. Those investors are already taking losses on the securities they bought. Nevertheless, the sooner a settlement is reached with as many mortgagors as can settle, the smaller the future losses.

There MAY actually be a role for government in making any regulatory changes necessary to allow these complex negotiations to happen. There may even be a role for us taxpayers if the FDIC insurance fund is not sufficient to cover insured deposits at banks which may turn out to be underwater once their mortgage investments are valued at what they’re worth.

Two years ago politicians were echoing the complaint that housing was so expensive it was unaffordable. Now prices are readjusting to reality. More people can actually afford houses (without assuming someone will buy them out before their mortgage interest rate resets); that’s not all bad. The people already in houses will have a better chance of staying in them if their mortgage principal is marked down to reality; and the banks’ll have a better chance of collecting money instead of empty houses in declining neighborhoods if they forget the amount they actually lent and only try to collect what the collateral is now worth.

Nerding Again: It's Hard

I’m back writing code. Despite the fact that I first programmed 45 years ago and used to be a top gun (if I say so myself), it’s hard getting re-started after a 17-year hiatus in which I’ve coded nothing more complex than some VBA in Excel macros, hacked a little HTML, and played a little with the OpenSocial APIs.

How I got in trouble: I promised Mary I’d write a little demo app that she could try out at Town Meeting next week to see if it’s a good way for the Vermont Telecommunications Authority (which she chairs) to get people to report on what broadband they actually use. This way the Authority and all Vermonters get a wikiMap that shows what access options really are and aren’t available and what is and isn’t used, and it can be kept up to date. If it works at Town Meeting, then they can package it up and put it on their website.

“It’d be cool,” I said, “to have this all be displayed as pins on a Google map. People should see their pin go on the map when they supply the data. I mean we’ve got to set a good example of the e-state.”

“I’ve got to be able to get the data in spreadsheet form,” Mary said. “Can you get it out that way?”


Well, it’s all not so easy. My plan was to use Google Mashup Editor. It does a lot of the UI work for you and it runs on Google servers – no worry about scalability. You can even save data on the Google servers.

Nope, that won’t work. The number of entries you can save from your app is too limited even for this application. Moreover, you can’t get the data OFF the Google servers except as you display it in your browser. “Of course,” you experienced young studs’ll say, “all you have to do is talk from your browser code to a server and the server’ll store your data for you. It’s just a little bit of PHP and mySQL and you’re in business.” There’s even an example of that in the Google documentation.

But I don’t have a server and I don’t have time to learn how to set one up and program it in the next week or even learn .NET which might or might not help. Later, if the prototype works and we need to deploy so everyone can use it in his or her own browser, then we can make this into a true client-server app.

So here are the things I had to learn:

  • All about how HTML stores things in the DOM (don’t ask if you don’t know).

  • Javascript (which isn’t Java, BTW). My last serious programming language was Pascal for the Mac and it WASN’T object-oriented like its successor C++. So I was more than a little weak on all this object model stuff but now I’ve got my classes, objects, methods, properties, functions, and constructors pretty well straightened out.

  • How Javascript interacts with the DOM.

  • More about XML – it’s the right way to get data into Mary’s spreadsheet and anywhere else we may want to put it.

  • How to read and write files in Visual Basic because that’s how I’m going to do it until I have a server (I know this’ll only work on Windows but that limitation’ll go away when the server comes).

  • How Visual Basic interacts with the DOM.

  • A development environment. Writing code in notepad got old quickly. Now I’m using Antechinus Javascript Editor. It’s been helpful so far and has a very good Javascript tutorial with it but I’ve got nothing to compare it to.

  • What an “HTA” is. It’s basically code written with web tools like HTML and Javascript that runs on your computer like an application. Turns out that’s what the prototype’ll be because it has to read and write files on my computer (remember, no server) and browsers don’t, in general, allow code that does that so that malicious web pages won’t steal or destroy local data.
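The object-model vocabulary in that list (constructors, properties, methods) took me the longest to straighten out, so here’s a minimal Javascript sketch. The names are invented for illustration; they’re not from the actual broadband app:

```javascript
// A constructor function: "new Report(...)" creates a fresh object
// and runs this code with "this" bound to the new object.
function Report(town, provider, speedMbps) {
  this.town = town;            // properties, set per instance
  this.provider = provider;
  this.speedMbps = speedMbps;
}

// A method, put on the prototype so all Report objects share one copy.
Report.prototype.describe = function () {
  return this.town + ": " + this.provider + " at " + this.speedMbps + " Mbps";
};

var r = new Report("Stowe", "DSL", 3);
console.log(r.describe()); // Stowe: DSL at 3 Mbps
```

Once that clicked, classes, objects, methods, and constructors stopped being mysterious; they’re all variations on this pattern.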

But, despite all my whining, it’s really a lot easier to learn new stuff than it used to be. Everything you need is online so you’re never without critical documentation. If you Google “xxx tutorial”, you inevitably find several good tutorials on xxx. The ones from W3 Schools are always good; they’re free; and there’s a good sandbox there to play in.

If you Google something like “Read Local Files in Browser”, you find someone on a forum or in a blog who’s answered the question for someone else. That’s how I found out about HTAs.

That’s all for now; I’ve got a deadline.

Japan’s Internet Access Satellite Is a Mistake

CNN reports that “Japan launched a rocket Saturday carrying a satellite that will test new technology that promises to deliver "super high-speed Internet" service to homes and businesses around the world… If the technology proves successful, subscribers with small dishes will connect to the Internet at speeds many times faster than what is now available over residential cable or DSL services… the Associated Press said the satellite would offer speeds of up to 1.2 gigabytes per second [nb. with a seventeen foot dish].”

Sounds good but it isn’t.

What neither the Associated Press nor CNN picked up is the altitude at which the satellite is intended to orbit, probably because they don’t understand why that’s important. Slashdot was a little more discerning, however: they picked up that it is intended to be geostationary (always appearing in the same spot in the sky so that antennae can be pointed at it). Physics (and the release from the Japan Aerospace Exploration Agency) tell us that a geostationary satellite must be 22,000 miles above the earth. Other laws of physics say that radio signals are going to take more than a tenth of a second to get there and the same time to get back; the universe apparently doesn’t allow faster speeds.

Not only does that mean that these satellites won’t be good for interactive gaming (as Slashdot points out) and that they’ll be terrible for VoIP; they also won’t work well for web browsing. That matters! A modern web page is built in many interactions between your computer and the host of the website (much more detail here); the minimum time for each of those interactions is half a second because the signal has to go up and down to get to the server and up and down to get back to you. Those half seconds don’t sound like much but they add up (this delay is called “latency”). If you use satellite, you know how slow page builds are and how many pages just break during the delay. Unfortunately, fast data rates don’t help when latency is the problem.
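The half-second figure falls straight out of the geometry. Here’s the back-of-the-envelope version, using the 22,000-mile altitude from the text and the speed of light:

```javascript
// Minimum signal delay for a geostationary satellite link.
var MILES_TO_KM = 1.609344;
var C_KM_PER_S = 299792.458;               // speed of light
var altitudeKm = 22000 * MILES_TO_KM;      // ~35,400 km

var oneHop = altitudeKm / C_KM_PER_S;      // ground to satellite, in seconds
// One browser-to-server interaction: up and down to reach the server,
// then up and down to get the answer back = four hops.
var interaction = 4 * oneHop;

console.log(oneHop.toFixed(3));      // ~0.118 s per hop
console.log(interaction.toFixed(3)); // ~0.472 s per interaction
```

A page that needs a dozen such interactions to build is stuck waiting over five seconds no matter how fat the pipe is.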

The satellite’ll be good for email; it’s a good backup to oceanic fiber that seems to be getting cut lately. It will NOT do what the Japanese Agency’s press release says: “…even in some areas where major ground infrastructure for the Internet is difficult to establish, people can enjoy the same level of Internet service as that in urban areas.” Cable, DSL, and even terrestrial wireless measure latency in milliseconds (thousandths of a second); latency is very often MORE important than bandwidth in determining the quality of Internet experience. Anyone who thinks geostationary satellites are an acceptable way to bring broadband to rural areas doesn’t understand how the modern web works.

WHO Doesn’t Like the Gates Foundation

From a recent article in the New York Times:

“The chief of malaria for the World Health Organization has complained that the growing dominance of malaria research by the Bill and Melinda Gates Foundation risks stifling a diversity of views among scientists and wiping out the world health agency’s policy-making function.

“In a memorandum, the malaria chief, Dr. Arata Kochi, complained to his boss, Dr. Margaret Chan, the director general of the W.H.O., that the foundation’s money, while crucial, could have ‘far-reaching, largely unintended consequences.’”

WHO is a UN organization, in case you didn’t know. UN agencies, even those that don’t do their job very well (most of them), don’t like to have their “policy-making function” wiped out.

Dr. Kochi, according to the NY Times article from which all facts following in this post come, says the Gates Foundation is forcing recipient scientists into “group think”. A Dr. Amir Attaran at the University of Ottawa agrees with Dr. Kochi but “…believed that scientists were not afraid of the foundation, but of its chief of malaria, Dr. Regina Rabinovich, whom he described as ‘autocratic.’”

The Gates Foundation has spent about $1.2 billion on malaria research. Before their involvement in 2000, less than $100 million a year was being spent on this and drug companies had pretty much abandoned the field. You’d think the Foundation would be entitled to have an opinion and influence policy since it pays the bills but not according to Dr. Kochi who complains that the foundation “even takes its vested interest to seeing the data it helped generate taken to policy.”

I had a lot of contact with Bill Gates, and Melinda and I were peers, when I worked at Microsoft. I’d much rather have them setting policy – particularly on how their money is spent – than UN bureaucrats. Yeah, Bill can be arrogant; yeah, people who work for him sometimes think aping the arrogance will also make them as brilliant and successful as he is. But, when Bill gets “hard core” about things, they get done. The things the Gates Foundation is doing need someone getting hard core about them. They need much more rigorous analysis and much better execution than the UN especially, but also many other governmental and non-governmental organizations, have shown themselves capable of.

The last few paragraphs of the Times story may well explain what the Gates Foundation has done to raise the ire of UN officials:

“There have been hints in recent months that the World Health Organization feels threatened by the growing power of the Gates Foundation. Some scientists have said privately that it is ‘creating its own W.H.O.’

“One oft-cited example is its $105 million grant to create the Institute for Health Metrics and Evaluation at the University of Washington. Its mission is to judge, for example, which treatments work or to rank countries’ health systems.

“These are core W.H.O. tasks, but the institute’s new director, Dr. Christopher J. L. Murray, formerly a health organization official, said a new path was needed because the United Nations agency came under pressure from member countries. He said his institute would be independent of that.”

Maybe we ought to find out what other UN functions the Gates Foundation would be willing to take over.

Planning for Failure

There have been a number of good posts lately on failure: Fred Wilson kicked it off with Failure Rates In Early Stage Venture Deals; fellow VC Seth Levine picked up the riff with Failure; Brad Feld says It’s Better to Fail Quickly and Mitchell Ashley says Fail Early, Fail Often. They’re all right, of course: failure happens; failure teaches lessons; failure builds character; and failure sucks. But none of them has posted the eight sure steps to failure.

Failure’s easy to achieve; all you have to do is plan for it. Here’s how:

  1. set only low goals; then you’ll fail even if you meet them.
  2. when you get behind schedule, revise the goals down; that’ll incent everyone not to break his or her neck trying to catch up.
  3. revisit goals frequently; you may have accidentally set them too high.
  4. don’t monitor progress; you might scare yourself.
  5. form a committee to revisit the goals; that’ll protect against undue enthusiasm and deflect blame (and responsibility).
  6. make sure you’re the first to cry “uncle”; then failure won’t be your fault.
  7. never revise the strategy for achieving goals; it’s much easier to move the goalposts.
  8. make sure everyone knows that you know the goals are unachievable; that’ll help assure that you’re right.

Very seriously, it’s better to get most of the way to an audacious goal than to achieve a slam dunk. On the other hand, failing at a slam dunk really sucks. If you want to be a success, you have to set your goals very high (and then, as I know well, often you’ll fail).

If you’re an entrepreneur, you have no choice. You have to achieve something audacious in some dimension (technology, marketing, financing, customer support, scale… something) or you aren’t going to distinguish yourself from those who are already entrenched and you’re going to be overrun by those who were braver.

If you’re dealing in technology in any type of organization, you also have no choice. Technology changes so quickly that anything that doesn’t get done at outrageous speed will be obsolete before it’s finished. That’s just the way it is.

You can’t just set a high goal and then bury your head in the sand, either. Chances are your plan to achieve any worthy goal will slip at one point or another. Then it’s time to revise the plan. That’s something you want to do very quickly. You may even find a few features you can trim from the goal to protect its essence.

Plans were made to be changed; goals were made to be met. Real failures come from giving in to fear of failure.

WiMAX vs. WiFi

In fact WiFi (technically standard 802.11) and WiMAX (802.16) don’t compete for broadband users or applications today. That’s partly because WiFi is widely deployed and WiMAX is still largely an unfulfilled promise and partly because the two protocols were designed for very different situations. However, if WiMAX is eventually widely deployed, there will be competition between them as last mile technologies.

Some people describe the difference between WiFi and WiMAX as analogous to the difference between a cordless phone and a mobile phone. WiFi, like a cordless phone, is primarily used to provide a connection within a limited area like a home or an office. WiMAX is used (or planned to be used) to provide broadband connectivity from some central location to most locations inside or outside within its service radius as well as to people passing through in cars. Just like mobile phone service, there are likely to be WiMAX dead spots within buildings.

From a techie POV, the analogy is apt at another level: WiFi, like cordless phones, operates in unlicensed spectrum (in fact cordless phones and WiFi can interfere with each other in the pitiful swatch of spectrum that’s been allocated to them). There are some implementations of WiMAX for unlicensed spectrum but most WiMAX development has been done on radios which operate on frequencies whose use requires a license.

Some more subversive types (they’re subversive so I can’t link to them) say that WiMAX is what you get when bellheads (not a nice term) try to reinvent WiFi the way they’d like it to be. It’s true that WiMAX is much more a command and control protocol than WiFi. Oversimplified, in a WiFi environment every device within reach of an access point shouts for attention whenever it’s got something to transmit. In that chaos, some signals tromp on other signals; the more powerful devices and those closer to the access point tend to get more than their share of airtime like the obnoxious kid who always has his hand up in the front of the class. In WiMAX, devices contend for initial attention but then are assigned times when they may ask to speak. The protocol allows the operator more control over the quality of service provided – bellheads like control.
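To make the shouting-versus-scheduling difference concrete, here’s a toy sketch in Javascript. It’s not a faithful model of either protocol, just the two ideas reduced to slot-picking:

```javascript
// WiFi-style: each station grabs a random slot; if two pick the same
// slot they collide and neither transmission gets through.
function contention(stations, slots, rand) {
  var picks = {};
  for (var i = 0; i < stations; i++) {
    var s = Math.floor(rand() * slots);
    picks[s] = (picks[s] || 0) + 1;
  }
  var delivered = 0;
  for (var k in picks) {
    if (picks[k] === 1) delivered++;  // only uncontested slots succeed
  }
  return delivered;
}

// WiMAX-style: the base station assigns each device its own slot,
// so nothing collides (up to capacity).
function scheduled(stations, slots) {
  return Math.min(stations, slots);
}

console.log(scheduled(10, 20));               // 10
console.log(contention(10, 20, Math.random)); // usually fewer than 10
```

The scheduled version never wastes a slot on a collision, but it needs a central brain to do the assigning; the contention version needs no coordination at all and still delivers most of the traffic when the air isn’t crowded.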

But it’s not clear that more control means better service than contentious chaos (I’m talking about technology but the same may apply to economies or bodies politic). The Internet and its routing algorithms are chaotic; the routers just throw away packets if they get too busy to handle them. Bellheads (and even smart people like Bob Metcalfe) were sure that design or lack thereof wouldn’t scale. They were wrong.

The same people said that voice would never work over the Internet – there’s no guarantee of quality, you see. They were wrong, although it’s taken a while to prove it. Now HD voice is available on the Internet but NOT on the traditional phone network (although it could be).

Lovers of an orderly environment and those who like to keep order were absolutely sure that WiFi couldn’t work once it became popular. Not only is it chaotic; it also operates in the uncontrolled environment of unlicensed frequencies along with cordless phones, bluetooth headsets, walkie-talkies and the occasional leaky microwave oven. But somehow it’s become near indispensable even in places where a city block full of access points contend for the scarce frequencies.

Net: I’m not convinced that WiMAX won’t suffer from its own orderliness. Did you ever fume leaving an event when an amateur cop (or a professional one) managed traffic into an endless snarl? Fact is, cars at low speed usually merge better without help than otherwise. Turns out that control comes at the expense of wasted capacity. The reason that the Internet or WiFi radios can work is that the computing power necessary to deal with chaos from the edge of the network is far cheaper and less subject to disruption or misallocation than the computing power (and communication) for central command and control.

WiMAX may be too well-controlled for its own good. Moreover, if it is used only in regulated spectrum where most frequencies are idle most of the time AND licenses for the frequencies have to be purchased, it will be even less efficient than if it could contend for unlicensed spectrum.

By the way, WiFi CAN operate at distances as great as WiMAX but there are two reasons why it doesn’t. One reason is that radios operating in the unlicensed frequencies are not allowed to be as powerful as those operated with licenses; less power means less distance. These regulations are based on the dated assumption that devices can’t regulate themselves – but the assumption MAY be correct over great enough distances. The second reason why WiFi access points don’t serve as wide an area as WiMAX access points are planned to do is the engineering belief that the problem of everybody shouting at once, even if it’s surmountable in a classroom, would be catastrophic in a larger arena. Maybe.

New licensed spectrum is being made available for WiMAX and other technologies NOT including WiFi - for example, the valuable 700MHz frequencies currently used by analog over-the-air TV. WiMAX could have a good run because it is allowed to operate in that efficient spectrum while WiFi will eventually run out of the pitifully little spectrum that’s been allocated to it. That’s policy and politics and not engineering but could still be a reason for WiMAX success.

Why WiMAX? is about the advantages of that technology.

Internet 2.0 is Open Spectrum is an argument against licensed spectrum.

Why WiMAX?


The chart above tells part of the “Why WiMAX” story. WiMAX is yet one more standard (IEEE 802.16) for the wireless transmission of data. The most recent rev, 802.16e, provides for very good mobile access to data. At speeds up to 75 mph you can receive at rates “up to” 1 megabit per second and, at slightly lower speeds, much higher data rates. So, unlike WiFi (802.11), WiMAX competes with various cellular technologies for use in a moving vehicle. Not only voice but receipt of traffic conditions, weather, and entertainment will be coming to your car soon over your Internet connection and WiMAX is a contender to be that mobile Internet connection.

But WiMAX is also a contender to be your fixed connection to the Internet – even if you have DSL or cable available. Clearwire has already deployed a very limited WiMAX network in the US which it has promised to expand rapidly. Sprint is piloting its planned national rollout in a couple of locations and, if bankers don’t stand in the way, plans widespread coverage. WiMAX speeds can be faster than most DSL and rival cable in some locations. But, more importantly, you can use the same account both for Internet access at home and while roaming – if it’s a WiMAX account. That’s not important now when there’s not a significant WiMAX network to roam on but will matter if and when there is a nationwide and global WiMAX rollout – assuming that does happen.

Perhaps the most significant fact about WiMAX is that it IS a public standard. Most wireless ISPs (WISPs) today deliver service over proprietary precursors to WiMAX. Because the transmission standards are proprietary to the radio manufacturers, a WISP which uses one brand of radios on its towers MUST use the same brand of radio at all its customers’ premises. With a standards-based protocol (and after testing and certification) radios from different manufacturers should interoperate freely and competition should quickly bring down the price of these radios. Note that WiFi cards and modems and USB dongles are very, very cheap BECAUSE WiFi is a standard. Moreover, WiFi gets built into laptops at almost no incremental cost. Cheap and built-in WiFi radios created the market for WiFi services at places like Starbucks which led to more demand for WiFi radios.

Learning from its success in pushing WiFi by building it into chipsets, Intel has an aggressive program to get OEM computer manufacturers to build WiMAX into laptops. Codenamed Echo Peak, the chipsets combine WiFi and WiMAX so that a single antenna and other components serve both – clearly Intel understands that users’ll be switching from WiFi to WiMAX and back for some time to come. And clearly Intel is committed to building a market for WiMAX as it did for WiFi; however, you can’t buy a laptop with WiMAX preinstalled today. “Should” be available 3rd quarter of this year.

Because WiMAX chips will be cheap and WiMAX mobility can be implemented on a low power budget, enthusiasts predict that it will quickly show up in cameras, phones, GPSes, and all kinds of other devices.

The chicken and egg problem, however, is that there still needs to be enough WiMAX coverage available to make such devices useful.

WiFi vs. WiMAX is more about the differences between the two protocols and a little about creative chaos.

Danger: Biofuels Cause Global Warming

The abstract of an article in a recent edition of Science Magazine says:

“Most prior studies have found that substituting biofuels for gasoline will reduce greenhouse gases because biofuels sequester carbon through the growth of the feedstock. These analyses have failed to count the carbon emissions that occur as farmers worldwide respond to higher prices and convert forest and grassland to new cropland to replace the grain (or cropland) diverted to biofuels. Using a worldwide agricultural model to estimate emissions from land use change, we found that corn-based ethanol, instead of producing a 20% savings, nearly doubles greenhouse emissions over 30 years and increases greenhouse gases for 167 years. Biofuels from switchgrass, if grown on U.S. corn lands, increase emissions by 50%. This result raises concerns about large biofuel mandates and highlights the value of using waste products.”

Europeans trying to comply with Kyoto mandates have proposed stipulating that biofuels used to meet their alternative fuel mandates cannot come from land that was previously rain forest. However, the study points out that such restrictions are window dressing. Food, like energy, is fungible. If European biofuels come only from existing agricultural land, the food crops formerly grown there will be grown somewhere else; there’s a good chance that somewhere else will be newly cleared.

A New York Times article by Elizabeth Rosenthal about the studies published in Science gives this example:

“…Previously, Midwestern farmers had alternated corn with soy in their fields, one year to the next. Now many grow only corn, meaning that soy has to be grown elsewhere.

“Increasingly, that elsewhere, Dr. Fargione said, is Brazil, on land that was previously forest or savanna. ‘Brazilian farmers are planting more of the world’s soybeans — and they’re deforesting the Amazon to do it,’ he said.”

The studies do point out that biofuels made from agricultural waste (technologies for which are being worked on but have not yet been made remotely economical) and biofuels from sugar – as made in Brazil and, inexplicably to me, not made in any quantity in Hawaii – would and do have a positive carbon impact.

Defenders of the subsidized biofuels industry are quick to point out that biofuels do help energy independence. On a global basis, use of farmland to “grow energy” diversifies energy sources – a good thing – and increases income to farmers in poor as well as wealthy areas – another good thing. On the other hand, diversion of cropland raises food prices.

The world economy isn’t as complex as the environment but it may be as chaotic and hard to model. Food prices and the amount of land under cultivation would both be going up now even without corn-based ethanol production because the huge number of people escaping poverty in India and China are using some of their new income to eat more and better – as well as to buy motorcycles and cars.

In the long term it seems foolish to use plants to convert sunlight to energy for fuel when solar collectors – after a huge capital outlay and with big infrastructure changes – yield one hundred times more energy per acre than growing corn. Moreover, some of the best places for solar generated electricity are not cropland because they are arid.

But now it seems that corny ethanol may not be a good short term solution either. Suppose, for example, we burn more coal even before we have a way to sequester or divert the atmospheric carbon dioxide produced. Even giving full credit to the most alarming predictions of carbon-based global warming, this may be environmentally more friendly than clearing a rain forest. You can stop burning the coal if you can’t sequester the CO2 or whenever replacement energy comes online; you can’t replant the rain forest. Hmm…

Some will argue reasonably that discrediting ethanol as a panacea is one more reason why conservation (aka less driving in smaller cars) is the only solution to the twin problems of energy independence and global warming. Trouble with that thinking is that the aforementioned newly unpoor aren’t going to forgo the pleasures of personal transportation which we have long enjoyed. We need more energy sources.

The math behind my claim that solar produces 100 times the yield of corn in net energy per acre at 1800 times the capital cost is here.
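The 100x ratio can be sketched with simple back-of-the-envelope arithmetic. The efficiencies below are illustrative round numbers I've chosen to match the post's stated ratio, not the post's actual figures (those are behind the link above): assume a photovoltaic system delivers roughly 15% of incident sunlight as electricity, while corn-to-ethanol captures on the order of 0.15% of incident sunlight as net fuel energy.

```python
# Back-of-the-envelope comparison of solar vs. corn-ethanol energy yield
# per unit of land. All numbers are illustrative assumptions.

insolation_kwh_per_m2_yr = 1700   # typical mid-latitude solar insolation
pv_system_efficiency = 0.15       # assumed: sunlight -> delivered electricity
corn_ethanol_efficiency = 0.0015  # assumed: sunlight -> net fuel energy

pv_yield = insolation_kwh_per_m2_yr * pv_system_efficiency
corn_yield = insolation_kwh_per_m2_yr * corn_ethanol_efficiency

print(f"PV:    {pv_yield:.0f} kWh per m2 per year")
print(f"Corn:  {corn_yield:.2f} kWh per m2 per year")
print(f"Ratio: {pv_yield / corn_yield:.0f}x")  # -> 100x under these assumptions
```

The point of the sketch is that the ratio is driven almost entirely by the efficiency gap between photovoltaics and photosynthesis-plus-fermentation; the insolation figure cancels out of the ratio entirely.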

Raising Money in Tough Times

The best time to SUCCEED in raising money is when money is hard to get - like now, for example. When capital is easy to get, your competitors (who are, of course, much less deserving) will have plenty of capital, too. Dumb spending or pricing by them may “force” you to do the same. In the end, easy capital may not give you any advantage at all and you pay for it with part of your company.

Capital is a coward; the sound of popping bubbles sends it burrowing under the mattress for safety. Your tech startup is not in any way related to the sub-prime housing market or to the imploding debt of leveraged buyouts. Nevertheless, if you go out for money now, you are searching for a spooked commodity. You may just be wasting your time. But, if you get the money, it puts you at a huge advantage to unfunded competitors. BTW, your competitors include everyone else jockeying for attention in the new product and service marketplace whether they compete directly with you or not.

There is a fundamental difference in what scared investors look at compared to greedy, bold investors.

Greedy, bold investors (which is what you have while bubbles inflate) don’t worry much about fundamentals; they are too busy making sure they get seats at the table – any table. That strategy actually works at the beginning of bubbles (and Ponzi schemes); those who get in AND OUT early get rich – they really do: that’s what attracts everyone else. Obviously this happens not only in high tech but also in residential real estate and tulip bulbs: it’s a fundamental part of the economic cycle.

But frightened investors (which is what you have when bubbles pop) are worried not only about the fundamentals of the company but also all the external things you, the entrepreneur, don’t control. What if your market implodes through no fault of yours? What if the time comes to raise your next round and, even though you’ve met and exceeded all your objectives, ALL the money in the world is in hiding? Oh, dear.

So, if you’ve decided to raise money now (or have no choice), you have to address these fears. Here are a few suggestions:

  1. Present a plan of reasonably achievable singles and doubles, not home runs. Remember you’re selling against fear, not to greed.
  2. Present a plan which is a believable projection of what you (either as a company or as principals) have already achieved.
  3. Instead of automatically assuming that another round of financing will be available at a reasonable price when you need it, have a Plan B which includes going forward with NO additional financing.
  4. Consider making Plan B above your Plan A. If the market opens up and the company has done as well as you think it will, you can always change your mind.
  5. Show the investors how the addition of their money to your already excellent company will create a virtually unassailable position vs. potential competitors.

There is money out there; it’s just hiding. There are venture funds which have commitments for funds they’d dearly like to put to work. Credit is cheap for the most creditworthy (which doesn’t usually include startups), because credit is unavailable for everyone else and banks have to put their money somewhere. The terms you’ll get now are not as good as the terms you can get when there’s more money than ideas; but the money may be worth much more if you get it.

Good luck.

See a related post by my friend VC Rob Shurtleff, Bridges to Nowhere, on the perils of funding rounds that are too small to cross a chasm with.

