Amy and I are huge supporters (and collectors) of the visual arts. The Denver Art Museum is in the process of building an extension to the museum which is a revolutionary piece of architecture. When completed in 2006, I predict it will do for Denver what the Guggenheim Bilbao did for Bilbao, Spain – namely transform Denver into one of the must-visit arts communities in the world.
I’ve been a member of the Denver Art Museum Technology Advisory Board for the past year, helping Bruce Wyman and his team think about how technology will be involved in the new art museum. Tonight, I hosted 30 folks in the Boulder/Denver high tech / venture capital community for a tour of the new building (under construction), a presentation from Bruce on what he’s up to, and some time with Daniel Libeskind, the amazing architect behind The Denver Art Museum’s Hamilton Building.
I am in love with Daniel. Amy and I got to have dinner with him last fall and I spent two hours watching the two of them rap about an incredible range of subjects. I love the architect brain – it’s a complex combination of artist and engineer – and Daniel epitomizes what is great about it. The Hamilton will be the first completed Libeskind building in the US, which is extremely cool for Denver. Daniel is deeply engaged with the Denver community, very committed to the incredible building he has designed, and full of energy and vision for what it will be when it’s completed.
As we were touring the building, Dave Jilk made the comment to me that this building could not have physically been built 5 years ago. Dave X – the site manager (I can’t remember his last name) – told us that they couldn’t have built it even 2 years ago, as the architectural design requires an incredible amount of computer technology, 3D visualization technology, and significant spatial placement technology (everything in all three dimensions is exactly where it is supposed to be – there is no room for any margin of error or the building won’t physically work). The detailed level of architectural design is unbelievable and simply awe inspiring.
The following anecdote will help, especially if you haven’t seen the building. Last night, Daniel gave a presentation to 1500 museum members about the new building. During Q&A, someone asked him how he felt about the fact that the 1,776-foot Freedom Tower now has a right angle in it (I won’t go into the story of the new World Trade Center site – Daniel is the master architect for the site – but, as one could expect, there are incredible politics and contentious issues at play in the development of the site). Daniel – who is known for avoiding right angles in his designs – said something like “It’s not that I have issues with right angles, it’s just that I think we live in a democracy and there are 359 other angles that should get their chance.”
Bruce did great tonight. His goal is to incorporate technology into the museum experience in a way that’s revolutionary – taking the best of what others have done, avoiding the typical traps and pitfalls, and trying to have the technology blend into the experience to be a complete part of it, rather than the appendage that technology often is in a museum setting. It’s super fun to work with a blank slate on a canvas such as the Hamilton Building – and it’s great to see our local tech community getting engaged in the project.
I’m obviously extremely psyched about the Hamilton Building on many levels, including the positive impact I believe it will have on the Denver arts community and our standing as an international destination for more than just skiing and the Avs (if they ever play again). Wow – tonight was fun.
Dean Karnazes, the writer and subject of Ultramarathon Man, is a wild man. As the leader of Team Dean (which has a nice support crew consisting of his family, but only one athlete), Karnazes describes in great detail several of his ultramarathon feats (feets?) including his first Western States 100, a failed Badwater Ultramarathon, the first marathon at the South Pole (and one of two people to run around the world naked – try to figure that out), and his first (and the first) solo effort of The Relay (199 mile relay race from Calistoga to Santa Cruz).
In the middle of the running stories and descriptions of his feet, his digestive challenges, his food intake (if you burn 600 calories an hour and you run for 48 hours, how do you manage to choke down 29,000 calories just to stay even? – see p. 280 of the book), he takes a crack at talking about how he does it, why he does it, what he eats, and whether or not he is sane. His philosophy is good, the running stories are awesome, and the motivational lift (yeah – I’ll be running a lot this week) is huge.
If you are a marathoner, you’ll love this book. If you want to be a marathoner, you need this book. If you are a soul searcher, you’ll enjoy this book.
Thanks Team Dean for bringing us Dean.
Last year at about this time I hosted a fundraiser for the Boulder Philharmonic and was the guest conductor. Prior to this I knew nothing about conducting and – while I like music – I have no real musical aptitude beyond singing in the shower, so it was both a fun time and a good learning experience for me. In addition to conducting, I was also the guest auctioneer for the live auction (which was much easier). Amy and I chose the Boulder Phil as our major philanthropic initiative last year and our efforts (among others) at the fundraiser resulted in over $130,000 of additional contributions, helping the Boulder Phil land in the black for the first time in a while.
I’m doing it again this year. As part of the auction last year, I auctioned off the opportunity to conduct the orchestra. Peter Johnson, who recently married my friend Carrie van Heyst, won the right to conduct (at Carrie’s prompting – I think she might have actually been the one bidding). Peter then roped me into co-conducting with him since I was “experienced” and it was apparently my fault that he was “getting” to do this.
So – if you are in Boulder on April 9th, come to i due Maestri to see Peter and me conduct (guaranteed to be entertaining), participate in the live and silent auction, and enjoy the beautiful new St. Julien Hotel in downtown Boulder while contributing to one of Boulder’s key arts groups. I’ll be the auctioneer again and you never know what I might auction off (last year it included a day with my parents – which was happily picked up by our friends Larry and Pat Nelson.)
I hope to see you there. Tickets can be bought online here.
There was plenty of buzz last week about the new company – Numenta – that Jeff Hawkins (inventor of Graffiti and the PalmPilot, Visor, and Treo products) and Donna Dubinsky (CEO of Palm and Handspring) have started. It was coincidental that I was reading Hawkins’ book – On Intelligence – which describes his theory of intelligence, the workings of the brain, and how he thinks it will lead to the creation of truly intelligent machines.
I haven’t spent any time studying neuroscience, the brain (my biggest effort was probably not very successfully grinding through the Scientific American issue on Better Brains), or any of the contemporary efforts at “next generation Artificial Intelligence” (I was at MIT in the 1980s during the peak of the last wave of AI research and subsequent commercialization attempts – I fondly remember being amazed at Symbolics – they are still around in a new incarnation called Symbolics Technology – Macsyma has been hard to kill off).
So – I don’t know much about brain research, theories of intelligence, the biology behind it, or much of anything else. As a result, I thought On Intelligence was superb. I don’t expect that it’s right (nor does Hawkins) – he’s clear that it’s a framework and a work in progress (as it should be). I found it extremely accessible, very provocative, and mostly internally consistent (which is important whenever you are trying to learn about something you know very little about – it can be wrong, but at least it hangs together in a way you can understand it.)
The book and theory are based on the work being done at the Redwood Neuroscience Institute, of which Hawkins is the founder and director. Beyond just doing research, part of RNI’s mission is to “encourage people to enter and pursue this field of research.” Hawkins is consistent in his message in the epilogue of his book where he says “I am suggesting we now have a new more promising path to follow. If you are in high school or college and this book makes you want to work on this technology, to build the first truly intelligent machines, to help start an industry, I encourage you to do so. Make it happen. One of the tricks of entrepreneurial success is that you must jump head first into a new field before it is one hundred percent clear you can be successful. Timing is important. If you jump too early, you struggle. If you wait until the uncertainty lifts, it’s too late. I strongly believe that now is the time to start designing and building cortical-like memory systems. This field will be immensely important both scientifically and commercially. The Intels and Microsofts of a new industry built on hierarchical memories will be started sometime within the next ten years. It is challenging doing new things, but it is always worth trying. I hope you will join me, along with others who take up the challenge, to create one of the greatest technologies the world has ever seen.”
Hawkins’ thoughts and writing are infused with his obvious entrepreneurial energy. He approaches things as an ultimate pragmatist (unlike so many scientists, his examples and analogies are extremely understandable – very reminiscent of Richard Feynman), as an outsider (he acknowledges that mainstream brain research has huge problems with many of the things he is saying), and as someone who recognizes that any fundamental breakthrough typically requires a paradigm shift in thinking about the specific domain.
If you are an entrepreneur who likes to challenge yourself intellectually with things you know nothing about, you’ll love this book. If you are a brain researcher or scientist, you’ll probably be frustrated, but it’ll stretch you in good ways. If you are a brain expert, you’ll probably hate it. In any case, it’ll be fun to watch what Hawkins, Dubinsky, Numenta, and RNI do next – remember, they’re the ones that brought you the PalmPilot / Handspring Treo based on the revolutionary notion that humans should learn to write differently (e.g. Graffiti), not the ones that brought you the Go Whatever or the Apple Newton, which assumed the computer should be able to recognize your handwriting.
A longtime friend, Shawn Broderick (the first employee at my first company, Feld Technologies, and founder/CEO of Genetic Anomalies – acquired by THQ), has started up a new company called MaxVox that’s working on some cool consumer VoIP-related stuff.
Shawn’s been studying the VoIP market to try to determine which demographics are actually driving the adoption of VoIP. MaxVox announced today that they have discovered that a much younger demographic than expected is actually driving VoIP adoption. They issued a press release discussing the “Wired Toddler” category and announcing the availability of their report Residential VoIP Adoption – The Pampers Segment – which costs $495 and will be available via their website on April 1.
I’m playing around with some of the characteristics of my Feedburner feed so I apologize in advance (and ask for your indulgence) if you get multiple copies of posts. Things should settle down in a couple of days.
If you are an avid follower of the TV series 24 (as Jason and I are), you’ll recognize that the next item in our term sheet series – Redemption at Option of Investors – has similar characteristics to the regular exchange Jack has with CTU:
CTU Director (any of them – Driscoll, Tony, Ryan, George, Michelle): “Jack – stand down – don’t go in there without backup.”
Jack: (Gruffly, in a hoarse voice) “I gotta go in – there’s no time to wait – if I don’t go, the world will end and my (current babe, hostage, daughter, partner) will die.”
CTU Director: (Mildly panicked) “Jack – wait – it’s too dangerous – I command you – wait.”
Jack: (Insolently) “I gotta go.” (Jack hangs up the phone).
Cut to clock ticking and commercial or teaser for scenes from next week.
Think of the discussion around redemption rights as this scene – utterly predictable and ultimately benign. Jack always goes in. Jack always stops the bad stuff – for the time being. Jack (or the bad guys) always creates a new problem. The CTU director always forgets that Jack disobeyed a direct order shortly after Jack is successful with his latest task.
You are Jack. Your investor is the CTU director. If you ask your CTU director “have you ever actually triggered redemption rights?” you will normally get some nervous fidgeting (“wait – it’s too dangerous”), a sheepish “no” followed by a confident “but we have to have them or we won’t do the deal!” (“I command you – wait.”)
Redemption rights usually look something like:
“Redemption at Option of Investors: At the election of the holders of at least a majority of the Series A Preferred, the Company shall redeem the outstanding Series A Preferred in three annual installments beginning on the [fifth] anniversary of the Closing. Such redemptions shall be at a purchase price equal to the Original Purchase Price plus declared and unpaid dividends.”
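To make the mechanics concrete, here’s a rough sketch (purely a hypothetical illustration, not legal language or anyone’s actual model) of how a clause like this would pay out, assuming a $5M Series A purchased at the Original Purchase Price with no declared-but-unpaid dividends:

```python
# Hypothetical illustration of the redemption mechanics quoted above -- the
# $5M number, the installment count, and the fifth-anniversary start date are
# all assumptions for the example.

def redemption_schedule(original_purchase_price, unpaid_dividends=0.0,
                        installments=3, first_anniversary=5):
    """Return (anniversary_year, payment) pairs for an investor-triggered
    redemption paid in equal annual installments."""
    total = original_purchase_price + unpaid_dividends
    per_installment = total / installments
    return [(first_anniversary + i, per_installment) for i in range(installments)]

if __name__ == "__main__":
    for year, payment in redemption_schedule(5_000_000):
        print(f"Year {year}: redeem ${payment:,.2f} of Series A Preferred")
```

In other words, a company facing this clause needs to come up with roughly a third of the original round in cash each year for three years – which ties directly into the rationale below.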
There is some rationale for redemption rights. First, there is the “fear” (on the VC’s part) that a company will become successful enough to be an on-going business, but not quite successful enough to go public or be acquired. In this case, redemption rights were invented to allow the investor a guaranteed exit path. However, any company that is around for a while as a going concern but is not an attractive IPO or acquisition candidate will generally not have the cash to pay out redemption rights.
The second reason for redemption rights pertains to the life span of venture funds. The average venture fund has a 10-year life span in which to conduct its business. If a VC makes an investment in year 5 of the fund, it might be important for that fund manager to secure redemption rights in order to have a liquidity path before his fund must wind down. As with the previous case, whether or not the company has the ability to pay is another matter.
Often, companies will claim that redemption rights create a liability on their balance sheet and can make certain business optics more difficult. In the past few years, accountants have begun to argue more strongly that redeemable preferred stock is a liability on the balance sheet, not an equity feature. Unless the redeemable preferred stock is mandatorily redeemable, this is not the case and most experienced accountants will be able to recognize the difference.
There is one form of redemption that we have seen in the past few years and view as overreaching – the adverse change redemption. We recommend you never agree to the following, which has recently crept into term sheets.
“Adverse Change Redemption: Should the Company experience a material adverse change to its prospects, business or financial position, the holders of at least a majority of the Series A Preferred shall have the option to commit the Company to immediately redeem the outstanding Series A Preferred. Such redemption shall be at a purchase price equal to the Original Purchase Price plus declared and unpaid dividends.”
This is just too vague, too punitive, and shifts an inappropriate amount of control to the investors based on an arbitrary judgment. If this term is being proposed and you are getting pushback on eliminating it, make sure you are speaking to a professional investor and not a loan shark.
In our experience – just like Jack’s behavior – redemption rights are well understood by the market and should not create a problem, except in a theoretical argument between lawyers or accountants.
If you thought it was challenging determining the quality of art these days (e.g. does it – or does it not – belong in MoMA), you apparently aren’t the only one. The British graffiti artist Banksy pulled off a nice prank recently, installing his own art in four top New York museums in one day. It “only” took MoMA three days to discover it.
I woke up this morning thinking about Plaxo and my computer. I’ve been thinking lately about the number of manual things that I do that my computer should take care of for me (when I say “my computer”, I mean “my personal computing infrastructure”, which goes well beyond an individual computer at this point and includes four desktop computers, a laptop, several servers, a Danger Sidekick, a Windows Media Player, a ton of software, all the data stored at various web-based services, lots of web-based applications automatically doing things for me, and several people (including “my IT guy Ross”) – all spread across at least four locations plus all the different places I travel to).
I’ve experienced numerous big advances in this over the years. A relatively recent example that many people can relate to is spam. I used to have to deal with spam manually – now I never see email spam because of Postini which automagically eliminates it all in the background. Email spam is gone from my life – it’s no longer something I have to interact with – whereas before Postini, I probably was manually deleting 200–300 spams a day and now – when I periodically check my spam filter online – it’s well over 1000 a day. I put this in the “magic” category where the solution was a binary experience – one day I had a huge problem, the next day it was gone. I especially notice this when I have to deal with Movable Type spam (both comment and trackback) – this is a “new” problem that hasn’t been solved yet (although tools are starting to appear to address it better).
I realized that in the past year, I’ve introduced a new set of computing tools into my daily life. A lot of this has been driven by the shift to web-based applications, but RSS and the way that I interact with real-time information have changed this as well. As I thought about this, I got aggravated with the number of things that I have to do to simply “interact” with my compute infrastructure. They range from the trivial (manually synchronizing my bookmarks across multiple computers – since I use Firefox, I can use the Bookmark Sync plug-in to keep the bookmarks across my four computers synced – but I still have to actually click on Sync when I make a change) to the more complex (dealing with all of the email-based meeting requests that I get – fortunately my assistant Wendy handles much of this, but it is clearly something my compute infrastructure should be smart enough to figure out.)
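For what it’s worth, even the bookmark chore could probably be scripted away. Here’s a minimal sketch (my own hypothetical hack – this is not how the Bookmark Sync plug-in actually works) that merges the URLs from a couple of exported Firefox bookmarks.html files into one de-duplicated list:

```python
# Hypothetical sketch: merge exported Firefox bookmarks.html files into one
# de-duplicated URL list. Not the Bookmark Sync plug-in's behavior -- just an
# illustration of the kind of chore the compute infrastructure should absorb.
import re
import sys

HREF_RE = re.compile(r'HREF="([^"]+)"', re.IGNORECASE)

def extract_urls(path):
    """Pull every HREF out of a Netscape-format bookmarks export."""
    with open(path, encoding="utf-8", errors="ignore") as f:
        return HREF_RE.findall(f.read())

def merge(paths):
    """Union the URLs from each file, preserving first-seen order."""
    seen, merged = set(), []
    for path in paths:
        for url in extract_urls(path):
            if url not in seen:
                seen.add(url)
                merged.append(url)
    return merged

if __name__ == "__main__":
    # Usage (hypothetical): python merge_bookmarks.py machine1.html machine2.html
    for url in merge(sys.argv[1:]):
        print(url)
```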
Twenty years from now, the way we interact with our compute infrastructure today will seem as archaic as the way we did it twenty years ago (when the Apple III and IBM PC AT were the innovations of the day and del c:\*.* was a big scary deal). So – I’ve started to think about how to take my compute infrastructure to the next level – especially with regard to “automating all the trivial shit that my computer should be smart enough to deal with for me.” I want my compute infrastructure to continually get smarter, do more effective things for me in the background, and free my time up to actually generate “content”.
Contact management is a great example of this. My core contact management database lives on my Exchange server and I access it through Outlook. After years of playing cut and paste from email when I wanted to add a new contact, I finally found a program (Anagram) that effectively sucks contact info out of email and puts it in my Outlook database. While this was a small step, it saves a huge amount of “stupid time” over the course of the rest of my life.
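I have no idea how Anagram actually parses a message, but the basic idea is easy to picture. A toy sketch (hypothetical and regex-based – nothing like the real product) that pulls a name, email address, and phone number out of a signature block might look like:

```python
# Toy sketch of "sucking contact info out of an email" -- a hypothetical,
# regex-based illustration, not how Anagram or Outlook actually do it.
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}")

def extract_contact(signature_text):
    """Return whatever contact fields the regexes can find."""
    email = EMAIL_RE.search(signature_text)
    phone = PHONE_RE.search(signature_text)
    # Crude guess: treat the first non-empty line as the person's name.
    lines = [line.strip() for line in signature_text.splitlines() if line.strip()]
    return {
        "name": lines[0] if lines else None,
        "email": email.group(0) if email else None,
        "phone": phone.group(0) if phone else None,
    }

if __name__ == "__main__":
    sig = "Jane Example\nAcme Widgets\njane@example.com\n(303) 555-0123"
    print(extract_contact(sig))
```

Even something this crude beats cut and paste; the real trick is making it run invisibly on every message.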
The concept of a web-based address book synchronizer has been around for a while (we even talked about it at Anyday.com – an online calendar company that I funded in the late 90’s that was bought by Palm in 1999 for $80m and promptly shut down one year later.) Until recently, all the approaches I had ever interacted with caused me more work than they saved as they generated lots of new email (spam). LinkedIn is a great example – I’ve got a nice LinkedIn profile and plenty of connections, but I’ve gotten minimal personal value out of it at this point and it’s generated hundreds (thousands?) of emails I’ve had to deal with (even if dealing with them is as simple as looking at the email and hitting delete). At some point, I blacklisted LinkedIn in my spam filter so I don’t have to see the emails, and every now and then I go onto LinkedIn and interact with it directly, but the value is low, so my interaction is low, etc.
When Plaxo first came out, it had the same problem – it was merely a gigantic spam generator. So – I eventually gave up, deleted it, and wiped out all my Plaxo data. I downloaded version 2 the other day to see if it was any better.
I have 3000 active contacts in my Outlook database. There are probably 500 core contacts that I communicate with regularly, another 500 that are my “house list” for mass emails I send out about various things I’m involved in that I want to invite folks to (and yes – I observe good email hygiene and give people a way to opt out of these things), another 1000 contacts that I know well enough that I’d recognize them if I ran into them on the street but don’t interact regularly, and 1000 randoms that I interact with transactionally.
I was positively stunned by what Plaxo did. After installing it, it connected 300 of my contacts (10% of my database) and automatically updated their information in the background (on the Plaxo servers). It synchronized this data with Outlook/Exchange in the background and kept track of what it did. It was flawless – handling typical thorny sync issues correctly (Outlook on multiple machines connected to the same Exchange data store has a whole series of classic sync issues that anyone who has ever dealt with sync knows about – another example of a “problem” that my computer should solve for me that continues to be an issue).
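I don’t know what Plaxo’s sync engine actually does under the hood, but the flavor of the problem is easy to sketch. A hypothetical field-level, last-write-wins merge between two copies of the same contact record – roughly the kind of decision a sync engine has to get right on every field of every record – might look like:

```python
# Hypothetical sketch of field-level, last-write-wins contact merging -- an
# illustration of the classic sync problem, not Plaxo's actual algorithm.
from datetime import datetime

def merge_contacts(copy_a, copy_b):
    """Each copy maps field name -> (value, last_modified datetime).
    For every field, keep whichever copy was edited most recently."""
    merged = {}
    for key in set(copy_a) | set(copy_b):
        value_a, time_a = copy_a.get(key, (None, datetime.min))
        value_b, time_b = copy_b.get(key, (None, datetime.min))
        merged[key] = (value_a, time_a) if time_a >= time_b else (value_b, time_b)
    return merged

if __name__ == "__main__":
    # Illustrative data only -- the addresses and dates are made up.
    outlook = {"email": ("brad@old.example.com", datetime(2005, 1, 1)),
               "phone": ("303-555-0100", datetime(2005, 3, 1))}
    plaxo   = {"email": ("brad@new.example.com", datetime(2005, 3, 10)),
               "phone": ("303-555-0100", datetime(2005, 2, 1))}
    print(merge_contacts(outlook, plaxo))
```

The hard part, of course, is that the real problem involves deletes, partial records, and three or more replicas that are all “authoritative” at different times.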
Now – independent of whether or not Plaxo is a good business (there are some clever new “monetization approaches” in version 2) – the software suddenly became part of my compute infrastructure. I’ve had it up and running for a couple of days on two machines and it has settled into the background, doing what I expect and continuing to help incrementally with managing my contact database. Like Anagram, it’s a relatively small application in the grand scheme of what I do every day, but it has suddenly improved the automation of one of the more annoying things I deal with regularly that adds no fundamental value to how I interact with my computer.