The Big Bandwidth Misconception

Over on Bram Cohen’s blog (of BitTorrent fame), Mark Cuban made a comment that reminded me of a pretty common misconception about the ISP business. Mark said, in reference to Bram and attempting to make money from BitTorrent:

Unfortunately for you, ISPs crack down on heavy bandwidth users, particularly uploaders and enforce their TOS.

By definition, seeders create upstream bandwidth. The ISPs dont want to see more upstream usage Bram, i know its a tough concept for you, but in the mind of the ISP, upstream use = bad. MOre upstream b/w use = more bad. Which in turn pushes them not to increase the bandwidth available to end users, but to evaluate where the upstream use is coming from and look at shutting it off and throttling it. Call me crazy, but that equates to a challenge for the BT universe.

This couldn’t be further from the truth. Upstream bandwidth is not a concern for most ISPs, especially your standard cable or DSL provider. Broadband ISPs have to order bandwidth symmetrically, in various speed increments, because all high-capacity circuits come in symmetric form. This means if you order a Gigabit Ethernet connection to an upstream provider, you’re actually buying 2 gigabits per second worth of bandwidth: one gigabit toward you and another toward your provider. Broadband ISPs sell bandwidth asymmetrically (the A in ADSL), which means they’re selling you something like 6 megabits down and 768k or less upstream. It’s well known that ISPs oversubscribe their upstream links, which is why peak times can see serious difficulty achieving the full downstream rate. But no matter how you do the math for an ISP that serves mostly bandwidth consumers, this still leaves large amounts of bandwidth free on the upstream side of the ISP’s egress connections. In fact, most broadband ISPs don’t provision large WANs to haul traffic back to central egress points, because it makes more sense to dump the traffic off at a peering point in the market, assuming you’re in a large metro where you can get cheap bandwidth. And because ISPs providing broadband to homes and businesses have such a one-way usage pattern, they cannot negotiate peering arrangements with other ISPs for an even trade of bandwidth, which means they’re always going to be paying for that bandwidth.
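To make that concrete, here’s a back-of-the-envelope sketch. The subscriber count and plan speeds are made-up illustrative numbers, not any real ISP’s figures; the point is just how asymmetric plans on a symmetric egress circuit leave upstream headroom:

```python
# Hypothetical numbers: 2,000 subscribers on a 6 Mbps down / 768 kbps up
# plan, sharing one symmetric 1 Gbps egress circuit (1 Gbps each direction).
SUBSCRIBERS = 2000
DOWN_MBPS = 6.0
UP_MBPS = 0.768
EGRESS_MBPS = 1000.0  # symmetric: 1000 Mbps each way

# Oversubscription ratio: total sold capacity vs. actual link capacity.
down_oversub = SUBSCRIBERS * DOWN_MBPS / EGRESS_MBPS  # 12x oversold
up_oversub = SUBSCRIBERS * UP_MBPS / EGRESS_MBPS      # ~1.5x oversold

print(f"downstream oversold {down_oversub:.1f}x, upstream oversold {up_oversub:.2f}x")
```

Even with every subscriber seeding flat out, the upstream side barely oversells the link, while the downstream side is oversold twelve times over, which is exactly where the congestion shows up.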

In the business, this is generally called Tier 2 bandwidth. Broadband ISPs will sell large hicap circuits to businesses largely interested in serving content, at prices much cheaper than you can buy Tier 1 bandwidth from a major internet provider (UUNet or the like). This also becomes very evident if you try to buy bandwidth from both a Tier 1 provider and a Tier 2 or Tier 3 provider, because the way BGP routing works makes it very difficult to load balance your traffic over to the Tier 2 provider. There are methods for overcoming this (AS-path prepending, sometimes called AS padding), but it still never works out quite the way you want.
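On the prepending point: among otherwise comparable routes, BGP prefers the one with the shortest AS path, so prepending extra copies of your own AS onto announcements out one link makes that path look longer and nudges inbound traffic toward the other. A toy sketch of just that comparison step, using documentation/private ASNs as stand-ins:

```python
def best_path(routes):
    """One step of BGP best-path selection: prefer the shortest AS path.

    Real BGP checks local-pref, origin, MED and more before this; treat
    it as only the tie-breaker that prepending manipulates.
    """
    return min(routes, key=len)

# Our network (private ASN 64512, a stand-in) announces the same prefix
# via two upstreams (documentation ASNs), prepending itself three extra
# times on the Tier 1 link to make that path look longer.
via_tier1 = [64496, 64512, 64512, 64512, 64512]  # 5 hops after prepending
via_tier2 = [64497, 64512]                       # 2 hops, no prepending

preferred = best_path([via_tier1, via_tier2])    # remote routers pick Tier 2
```

It’s a coarse lever, which is part of why it “never works out quite the way you want”: you can’t prepend your way past a remote network’s local-pref settings.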

So, in summary, ISPs selling bandwidth to users running BitTorrent don’t give a rat’s ass about the upstream. They’ve got loads to spare. What concerns them is that users who consume large amounts of upstream bandwidth are generally also heavy consumers of downstream, and those people are the ones causing oversubscription issues.

VMWare GSX Now Free

Joel Spolsky points me to a post by Mike Gunderloy about VMWare releasing their GSX Server product for free. This is outstanding news, and I plan to test it out at the office this week. I guess VMWare listened to my advice to Microsoft, which I wrote in May of last year. Virtualization technology is already free with Xen and Linux systems, it’s about time one of the commercial vendors released one of their lower-end products for people to use for free. Way to go VMWare!

Why would you want to kill exclusives?

I like Steve Rubel. I like Robert Scoble too. Robert’s dead wrong on this one, though. For some reason, bloggers seem to think that just because there are more of us and anyone can contribute to the conversation, somehow everything has to change. Not so.

Perfect example: we just did a major release of FireAnt about three weeks ago. We spent a lot of time on the product. The directory was over four months in development. There were test sites available to the public about a month prior to release. We seeded the release out to trusted videobloggers and our user groups to get feedback, but we asked all of them to remain quiet. They did. The reason? We wanted to give someone who had traction the exclusive on writing about the new release, so we’d get a bit of a bang with our release instead of a gradual dull thud. That exclusive fell to Mike Arrington of TechCrunch, and we were not disappointed. He got the exclusive, he was happy, his readers got the scoop the day it was released, and we got extended coverage in the blogosphere echo chamber because we gave a high-profile blogger the exclusive.

Steve groks it. I’m not sure why Chris and Robert seem to think everything has changed. I could have had the exclusive or given it to someone like my good friend Steve Garfield (whose readership/viewership is nothing to sneeze at), but why would I want to release something to my 200 readers and wait for it to maybe disseminate throughout the blogosphere when I can seed it to someone with a much larger and more influential readership? If we had given it to everyone all at once, we would have ended up with that dull thud I was talking about earlier. Somebody has to help control the noise, and a little bit of PR and marketing savvy can go a long way to doing that.

Dumbest quote of the year

This came across an email list I’m on recently:

Ning co-founder Marc Andreessen recently said…

Ideally we’ll never meet any of our customers. We actually had to take the sign down from our front door because one of our customers actually stopped in, uninvited, and said, “Hi, I love your service.” And we’re like, “why are you here?” And so down came the sign.

Drop-bys like that should only happen in sitcoms as far as I’m concerned… The consumer internet businesses in a sense are ideal businesses from the standpoint of never meeting your customers.

Only in the technology business would anyone be caught dead uttering such an utterly stupid statement, and even there it’s no less moronic. Your customers are your bread and butter. You should jump up and down if someone takes the time to stop by your office just to tell you how much they like you, and you should be just as excited if someone takes the time to tell you what you’re doing poorly, because it’s a chance to save a customer and make an advocate. This is something I’d expect to see on Rick Segal’s blog, in one of his infamous (at least to me) overheard dumb business conversations. I honestly can’t see anyone with this kind of attitude being successful in any business in the long run, technology or not.

The Local Web Experiment: Fort Smith, Arkansas

A while back, I wrote about what I’m calling the Local Web. The Local Web, in my mind, is a group (an infinite number of such groups is possible) whose members arrange their interconnectedness around a shared geographical point of reference, traditionally a Metropolitan Statistical Area, or MSA. The Local Web is already built in many of the larger cities, with directories and vertical search engines that let you search for stuff in major metropolitan areas, but a good percentage if not the majority of Americans live outside of a major metropolitan area. The connected netizens of those areas are being largely overlooked by the current major initiatives to create localized web experiences.

I’m starting an experiment in a town that should be the perfect size. My hometown is Fort Smith, Arkansas, a town of about 80,000 with about a quarter million in the MSA. There are billions of dollars of business done here every year, and many companies here ship worldwide. However, for doing business in town, most people still reach for the phone book. The reason for this, of course, is that you can spend days Googling around for information about Fort Smith businesses without finding much but spam sites. No one in this town has made a concerted effort to make sure information about the businesses people would like to deal with is easily found on the web.

So, I’m starting an experiment. I’m going to organize a blogger meetup to start. I’ve already found several local bloggers, and I’m going to find or create more. I’m going to organize them and attempt to get them to write about business and other activities (softball, church, whatever) that they do locally and where they do them. I’m going to try to incent people to create links from site to site across town and to make information more easily indexable by the search engines, so that when you search for something in the area you don’t end up at a spam site. We will be holding the meetings at Kirkham Systems of Fort Smith.

Once this is going strongly, the staff of Kirkham Systems and I are going to start showing the results to local businesses, convincing them they should have a website with a blog, and incenting them to link to the people they’re doing business with and to write about their experiences with them. The goal is to create an interconnected web of links focused on this geographical area, so that if you end up at the Kirkham Systems website you’ll find annotated links about the people we do business with, and when you end up there you can find the people they do business with.

If I’m right, by the time I’m done, Google will be a far more interesting resource for finding information about businesses, things and places in Fort Smith, Arkansas than any other resource, anywhere. This may seem boring to people who live on the coasts and can find a well-designed and well-organized website for even local businesses, but for the large portions of the country that have been ignored by businesses attempting to organize information for them on the web, I think this will be a large step forward. No one here understands or cares about this yet because they haven’t been educated as to what it can mean for their businesses, themselves and their community. My goal is to educate everyone here.

The Local Web is long overdue.

Some discussion in the comments

There’s some discussion going on in the comments of my last post. Check it out; I think we might be in for an interesting discussion.

My friend Raymond…

My friend Raymond is getting some attention for his love of OPML over on Dave Winer’s blog, here and here. As is typical of format geeks, there’s a debate in the comments on Raymond’s blog about why you’d use OPML over XHTML ordered and unordered lists. When are people going to realize that 99% of people don’t care? I’ve been involved in more format discussions than I care to remember, and in the end the reason RSS and OPML will become popular is that Dave Winer goes to the effort of developing tools rather than writing specifications and hoping someone will write tools for them. The format in the end doesn’t really matter much; it’s just a way to format data. There have been thousands over the years, and as long as everyone can read it, the rest is just syntax and semantics.
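For what it’s worth, the simplicity argument is easy to demonstrate: a bare-bones OPML reader is a handful of lines. This sketch uses a made-up two-entry blogroll inline and pulls out the feed addresses via the xmlUrl attribute that OPML subscription lists use:

```python
import xml.etree.ElementTree as ET

# A made-up blogroll; xmlUrl is the attribute OPML subscription
# lists use for the feed address.
opml = """<opml version="1.1">
  <head><title>Blogroll</title></head>
  <body>
    <outline text="Scripting News" xmlUrl="http://scripting.com/rss.xml"/>
    <outline text="Example Feed" xmlUrl="http://example.com/feed.xml"/>
  </body>
</opml>"""

root = ET.fromstring(opml)
feeds = [o.get("xmlUrl") for o in root.iter("outline") if o.get("xmlUrl")]
```

You could argue an XHTML list is just as easy to parse, and you’d be right. That’s the point: the syntax barely matters next to whether tools exist.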

Josh and I were having a debate the other day about whether using a pseudo-protocol like fireant:// is an acceptable solution for one-click subscribe in our aggregator. Most of the other aggregators are fighting over feed:// or some specific file format (like iTunes pcast files). Why should we worry about all that when all we want is to enable easy one-click subscribe for people who already have our software? Josh’s concern is that the geeks will be upset over our use of a protocol that’s not really a protocol (no need to remind people that feed:// isn’t a valid protocol either) instead of doing it through a file or some other method that’s more robust. Sorry, it works. The hooks are already there in the OS and the browser, so why not use them? Is it a hack? Yeah, so what? It works!
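The mechanics are dead simple: the browser hands the registered handler a fireant:// URL, and the application rewrites it into the real feed address before subscribing. FireAnt’s actual URL convention isn’t spelled out here, so the “strip the scheme, assume http” rule below is just an assumption to illustrate the pattern:

```python
def feed_url_from_pseudo(url, scheme="fireant"):
    """Rewrite a one-click-subscribe pseudo-protocol URL into a feed URL.

    The "strip the scheme, assume http" convention is an assumption for
    illustration, not FireAnt's documented behavior.
    """
    prefix = scheme + "://"
    if not url.startswith(prefix):
        raise ValueError("not a %s URL: %s" % (scheme, url))
    return "http://" + url[len(prefix):]

# The OS-registered handler would receive something like this from the
# browser and hand the rewritten URL to the subscription code:
feed = feed_url_from_pseudo("fireant://example.com/videos.rss")
```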

The same people who would be upset about us using fireant:// as a protocol are the ones who’d be upset that people are using OPML rather than XHTML-formatted unordered and ordered lists. Hello? Who fucking cares? The user cares that it works! We spend far too much time debating the merits of one format over another and a lot less time than we should making sure the software works for the end user. This is why Dave Winer continues to succeed at getting formats adopted: unlike the Atom folks, who have spent years making the most robust and best-documented format but still have no reference implementation, he ships working tools. Why is Microsoft Word the default format for exchanging documents and not OASIS? Because of the software people use. Why is RSS the preferred format for exchanging feed information? Because there was software that worked when the format was introduced that everyone could use as a reference implementation.

There’s also something to be said for simplicity. OPML and RSS are simple. Perhaps the specs are not complete and don’t cover all the use cases, but I can code something up to work with them in a matter of hours. I investigated the Atom Publishing Protocol, and it would take me a couple of days to do a full implementation. By contrast, I have done a full MetaWeblog implementation in a couple of hours.
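As a sense of scale for that “couple of hours” claim: the MetaWeblog API is just XML-RPC, so a standard library does nearly all the work. This sketch builds a metaWeblog.newPost request and unmarshals it again, which is essentially the whole wire format a client or server has to handle (the credentials and post content are obviously placeholders):

```python
import xmlrpc.client as xc

# metaWeblog.newPost(blogid, username, password, struct, publish) --
# everything here is placeholder data for illustration.
post = {"title": "Hello", "description": "<p>First post</p>"}
request = xc.dumps(("0", "user", "secret", post, True), "metaWeblog.newPost")

# A server does the inverse: unmarshal the call, dispatch on the method
# name, and hand the struct to the blog backend.
params, method = xc.loads(request)
```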

Dave Winer can be an ass, but I give him credit where credit is due. The people who spend so much time complaining about him are excellent at complaining and not so good at getting things done. For that, I look to Dave.

Solaris: “I’m not quite dead!”

It seems the pronouncements (including my own) of Solaris’ death were a bit premature, according to David Berlind. I agree. In terms of performance and stability, Solaris is definitely the 800-pound gorilla over Linux, and now that it’s essentially free on x86 (it has been for a while, although making it open source definitely makes this much clearer), I see no reason not to deploy it over Linux. I’ll definitely be exploring this as an option for some of my new deployments, especially with ZFS out in the wild now.

Paul Graham Strikes Again

I love reading Paul Graham. He’s a must-read for anyone in the technology business, especially anyone who is either involved with or considering a startup. His latest essay, "How to Fund a Startup", is an excellent read and very timely for me. However, I do take exception to his third note, which states:

[3] If “near you” doesn’t mean the Bay Area, Boston, or Seattle, consider moving. It’s not a coincidence you haven’t heard of many startups from Philadelphia.

If the Internet has done anything, it’s changed the dynamics of the workplace such that it doesn’t really matter where you’re located anymore. I’m still highly in favor of having a physical office and a place where people can collaborate, but as long as you can find your core team where you’re located, you can fill in the rest from wherever in the world you choose. I lived in Arkansas for 23 years. I’ve lived in Seattle for 2. I’ve met a lot of technology people out here, and a lot of them are very smart, but I also know a lot of incredibly smart technology people back home in Arkansas. I’d take anyone’s bet that I could start a software company and develop software on par with any California company from my home state (perhaps with some work filled in from overseas or out of state, but that’s what this whole Internet thing is about). If we’re still thinking that all successful startup technology companies need to be located in one of those three places, things really haven’t changed. I’m very disappointed in Paul, because of all the people I respect and read on a regular basis, I would have thought he’d think differently. I guess old prejudices never die.

Finally Found It!

I’ve been searching for this for a year and a half.  My laptop runs like shit.  I’ve always meant to come back around and find out exactly what’s wrong with it.  It’s not incredibly high CPU usage, it’s not swapping, it’s not disk I/O.  But I went and grabbed Process Explorer from SysInternals today, and I can see that when I’m docked with a USB mouse plugged in, my hardware interrupts go through the roof and the DPCs (deferred procedure calls) take up 30-40% of my CPU.  This is obviously not right.  I’ve updated everything I can think of from Dell’s website and still have the same problem.  I’m now installing Service Pack 2 to see if that helps, but I’m going to be fucking ecstatic when I can finally dump this piece of shit Dell laptop for one I buy and set up myself.  I will have no problem saying goodbye and good riddance to Cingular’s corporate IT and their insistence on buying poor hardware and loading it up with crappy software that kills the performance of my PC.

Update: For anyone who happens to come across this later via a search engine, Windows XP SP2 fixed the problems I was having with a USB mouse plugged into the Dock USB controller causing 30-40% processor usage via the “Deferred Procedure Calls” or DPCs.