
Net Neutrality does not make the Internet fair

Posted in Economics, Geek, Tech Industry on April 25th, 2014 by leodirac – Comments Off

A lot of people are concerned that the FCC will “destroy the Internet” (or something similarly hyperbolic) by loosening so-called net neutrality rules.  But the fact is that today’s system is a lot less stable than you realize, and legislating net neutrality would actually make it even more precarious.

Net neutrality does not at all guarantee that traffic moves in a “fair” way across the Internet.  “Fairness” is an extremely fuzzy concept, as any economist will tell you, and there are more bits that want to move through the Internet than there are pipes to carry them.  So how does the system work at all right now?  The answer today is not that ISPs are nice to each other.  It’s mostly carefully designed protocols, and the complex algorithms built into them.  Most important are the transmission control protocol’s congestion avoidance algorithms (that’s the TCP part of TCP/IP, which carries almost all content on the Internet today).  These algorithms try very hard to maximize the bandwidth used to deliver your content without overwhelming the network, and to do so in a fair way.  Researchers have spent decades optimizing them for fairness.  This research, much more than agreements between ISPs, is what makes the Internet work well today.

How does TCP congestion avoidance work?  As an analogy, consider a crowded party with lots of conversations going on.  This is like the Internet.  The people at the party are computers: laptops and phones and servers.  The conversations are content moving around between them.  TCP includes an agreement that you always start a conversation at a moderate volume, and if you’re being heard perfectly, you can talk louder.  But as soon as your conversation gets mixed up with somebody else’s (packet loss), you quiet down.  It’s a feedback mechanism that results in conversations being quiet when the room is full, but when it’s empty you can yell and get your point across really fast.  It’s fair and efficient.  The analogy is far from perfect, but it works for this discussion.
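For the technically curious, here is a minimal Python sketch of the additive-increase/multiplicative-decrease (AIMD) feedback loop at the heart of TCP congestion avoidance.  To be clear, this is a toy model of the idea described above, not real TCP: actual implementations like Reno or CUBIC add slow start, timeouts, and much more, and the capacity number here is invented.

# Toy model of TCP's additive-increase/multiplicative-decrease (AIMD) rule.
# The "window" is how loudly we're talking; a loss means we got drowned out.

LINK_CAPACITY = 25  # made-up number of packets per round trip the link absorbs

def aimd(rounds=60):
    window = 1.0
    for rtt in range(rounds):
        lost = window > LINK_CAPACITY
        if lost:
            window /= 2   # multiplicative decrease: quiet down quickly
        else:
            window += 1   # additive increase: talk a little louder
        print(f"rtt {rtt:2d}: window = {window:5.1f}" + ("  (loss)" if lost else ""))

aimd()

Run it and you’ll see the window saw-tooth around the link capacity, which is roughly the picture for a single well-behaved TCP flow.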

Now consider a little thought experiment.  What if a hypothetical content provider created a new protocol instead of TCP (let’s call it GooseCast) to shove content down the pipes as fast as possible, without any regard for fairness?  By analogy, GooseCast servers would just yell at the top of their lungs regardless of how full the room was.  They would effectively drown out the polite TCP servers.  GooseCast content would get delivered faster than TCP content, because the TCP servers would keep getting quieter, trying to avoid messing up other people’s conversations, whereas GooseCast would just keep yelling.  Clearly this would be a problem.
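The thought experiment is easy to simulate.  Below is a little Python sketch (entirely my own invention, like GooseCast itself, with made-up numbers) in which a polite AIMD flow shares a fixed-capacity link with a flow that always sends at full blast and ignores loss.  The polite flow’s share of the link collapses toward zero within a handful of rounds.

# Toy simulation: a polite AIMD flow sharing a link with "GooseCast",
# a hypothetical flow that ignores congestion and always sends at full rate.
# All numbers are invented for illustration.

CAPACITY = 100.0    # packets per round the link can carry
GOOSE_RATE = 100.0  # GooseCast offers this much every round, no matter what

def share_link(rounds=12):
    tcp_window = 10.0
    for rnd in range(rounds):
        offered = tcp_window + GOOSE_RATE
        overloaded = offered > CAPACITY
        # Each flow gets capacity in proportion to what it offers.
        scale = min(1.0, CAPACITY / offered)
        print(f"round {rnd:2d}: TCP delivers {tcp_window * scale:5.1f}, "
              f"GooseCast delivers {GOOSE_RATE * scale:5.1f}")
        # Only the polite flow backs off when packets are lost.
        tcp_window = tcp_window / 2 if overloaded else tcp_window + 1

share_link()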

Since GooseCast content would reach customers faster, with higher bandwidth and quality than TCP, many content providers would switch to it.  In fact we’d probably see an arms race develop: GooseCast2 would be even more aggressive than GooseCast, and then FatGooseCast, etc.  Good old-fashioned web browsing using classic HTTP over TCP would lose out, as would anybody slow to change or trying to be polite and fair.  Some content providers would probably stand on principle and stick with polite protocols, but many wouldn’t.

Non-evil ISPs (color me naive, but yes, I think they exist) would naturally try to protect their users who want equal access to all content.  They might prioritize network traffic using good old-fashioned TCP to ensure that content was delivered fairly.  This would mean explicitly deprioritizing content that used aggressive protocols like GooseCast.  But what if net neutrality rules were in full effect?  Then ISPs would have their hands tied, legally unable to prioritize the polite content over the impolite content.  This sounds like exactly the world that net neutrality advocates want to avoid.  Well, guess what: it’s the world we already live in today!  We are protected from this bad outcome only by convention, not legislation.

If you think this possibility is remote and implausible, you need look no further than this sensationalistic article in MIT Technology Review from 2012.  These folks made an “amazing” new protocol which boosted wireless download speeds in a big city from 0.5 Mbps to 16 Mbps.  Wow!  Amazing breakthrough!  How?  It’s pretty much exactly GooseCast: they just drown out everybody else.  There is no magic “algebra” that can extract 30x more bandwidth from existing cellular networks with a new protocol.  But you can hog the network and frustrate everybody else connected to the same cell tower as you.  Fortunately for all of us, the company promoting this technology appears to be going nowhere; I’m hoping that’s because the scientists who worked on it are more realistic and honest about what they’ve developed than the reporter who wrote about it.  But I hope you appreciate that the GooseCast scenario I describe here is not just hypothetical.

So what is keeping Internet content delivery fair?  Mostly protocols, and to a lesser extent agreements.  If we legislate the agreements, then we are truly at the mercy of the protocols.  And I think we can all agree that we don’t want the government telling tech companies what protocols they can and can’t use to move data around on the Internet.

Fairness on the Internet is not nearly as simple as you’d like it to be.  Legislation would be easily defeated by innovation, and between the two, innovation moves much faster.

Making software architecture choices analytically with CodeTrend

Posted in Democratization of Information, Geek, Software Engineering on September 3rd, 2013 by leodirac – 1 Comment

Modern software gets assembled from parts as much as it gets built from scratch.  It used to be you just picked your operating system and programming language and went to it.  Nowadays you need to pick your data store, your development tools, your framework and its plugins and all sorts of libraries independently.  These choices are difficult and important.

A big part of the difficulty is even knowing what choices are available.  All too often the decision is made entirely based on what a key developer has used recently.  This is important, don’t get me wrong — if your current team isn’t productive, the project will not go well.  But if your current team happens to be experts in something that nobody else in the world uses, you might be heading for a dead-end.

These choices really matter too.  Anybody who’s been in the industry has run into so-called “legacy codebases,” a term that nominally just means old, but in practice gets applied to any piece of software that is no longer considered “good” for whatever reason.  Some very old codebases are still doing great, while some become “legacy” less than a year after birth.  Legacy codebases are also typically hard to maintain, and require very expensive investments to replace.

For these reasons, I think that among the most critical choices in a software project is the choice of technologies it will be based on.  Despite their importance, these choices are rarely considered very carefully.  The trade-offs are difficult to categorize, and thus get dismissed all too quickly as subjective, and thus inappropriate for rigorous analysis.  There is definitely a strong subjective component, which is why personal experience is so important, but there are analytical ways to look at these choices too.

A few years back I wrote a popular article comparing two web development frameworks I was considering using: Django or Ruby on Rails.  In it I argue that popularity is a critical measure of any software technology for many reasons.  The more people are using a technology, the better it will be.  People using it means questions will already be asked and answered on the googles.  It means more bugs will have already been found and fixed, and more features will have already been added.  For open source software the mechanisms for this are obvious, but the same results tend to happen with closed source systems assuming the organization maintaining the code is rational.  It also means it will be easier to hire people who know how it works.  Fortunately, popularity of software is relatively easy to measure analytically.

I have spent a lot of time researching these issues before making technology choices, and realized that this manual process is wasteful.  To that end, I have started an open source project to simplify the systematic comparison of software technologies.  It’s called CodeTrend, and you can start researching right now at codetrend.org.  It’s all open source and the data are Creative Commons licensed.  You can start adding to the data today by categorizing technologies, and if you know Ruby on Rails, I’d love help adding features.

As an example of it in use, here’s a comparison of the aforementioned web frameworks, counting the number of posts on everybody’s favorite developer Q&A site, StackOverflow:

http://www.codetrend.org/compare/django/vs/ruby-on-rails

This shows that they are both quite popular and growing, but Rails clearly has more activity than Django.  (For the record, I recognize and explicitly dismiss the counter-argument that Rails might be more confusing or worse documented, leading to more questions.  Questions always come up during use, whether good / bad / easy / complex / rtfm / brain-buster / whatever.)
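If you want to sanity-check numbers like these yourself, the public StackExchange API can report question counts per tag.  Here is a rough Python sketch; the endpoint and parameters reflect my reading of the public API docs, and CodeTrend’s own data pipeline may work differently:

# Count StackOverflow questions per tag, a rough proxy for popularity.
# Sketch only: based on my reading of the public StackExchange API.
import requests

API = "https://api.stackexchange.com/2.3/questions"

def question_count(tag):
    resp = requests.get(API, params={
        "tagged": tag,
        "site": "stackoverflow",
        "filter": "total",  # built-in filter that returns only the count
    })
    resp.raise_for_status()
    return resp.json()["total"]

for tag in ("django", "ruby-on-rails"):
    print(tag, question_count(tag))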

I also like to think of CodeTrend as filling a niche that StackOverflow has chosen to ignore.  Questions on SO that ask “which is better __ or __” get quickly closed as inappropriate.  They keep coming up, and are very useful resources, even though they’re against the rules.  I hope CodeTrend will someday be able to fill that need explicitly.  There’s a ton of work to do to make that possible, so if you’d like to help out, I’d sure appreciate it.  Together we can provide a resource for the entire industry.

Paul Dirac’s PhD Thesis

Posted in Physics, Science on July 29th, 2013 by leodirac – 1 Comment

My grandfather’s PhD thesis has recently found its way onto the Internet.  You can view a PDF of it here, courtesy of Florida State University:

http://www.lib.fsu.edu/files/pdfs/dirac_1926_dissertation.pdf

This fascinating document is significant in the history of science.  Its two-word title, “Quantum Mechanics,” demonstrates how fundamental it was in opening up a new branch of science.  For those of you who have written doctoral theses, imagine if the title of your thesis were exactly the title of a required undergraduate class.

The document’s journey to the Internet was slow.  It had been sitting in my mother’s cluttered house for decades before she passed it along to Graham Farmelo, who delivered it to FSU, which scanned it and published it online.  Now it has a permanent home in their Dirac Collection at the Dirac Science Library.

The first thing you’ll notice about the document is that it is entirely hand-written.  It doesn’t take long to realize that in 1926 this was the only practical option for a document of this type.  Type-setting was technologically possible at that time, of course, even for documents with complex mathematical formulae like this one.  But the cost of preparing a document that way was huge, and it was thus only done for works expected to be broadly distributed.  Then, as today, the primary audience for a typical PhD thesis was the handful of professors guiding the doctoral student.  At the time, nobody knew that just 6 years later Dirac would be honored with the Lucasian Professorship of Mathematics, a highly prestigious academic post once held by Isaac Newton and until recently by Stephen Hawking.  So of course this document was hand-written.

The thesis is also visually wonderful.  There are scribbles in margins, neat parts and sloppy parts, crossed-out sections, derivations, question marks of uncertainty, hand-drawn graphs, arrows of rearrangement, and torn sheets of paper.  The document is difficult to follow, and its completeness is not obvious: the table of contents does not seem to line up with the contents, and the pages which do have numbers aren’t even in order.  Its utility to science is clearly eclipsed by later works, but seeing aspects of the original thought process laid out in both pen and pencil invites so many questions.  What is that sudoku-like grid of numbers in the top margin of page 25?  Why is there a hand-drawn candle on a page otherwise filled with equations going in different directions?  Somebody well-versed in the topic would be more qualified to speculate than I am.  Obviously I’m biased, but I find it a joy to browse.

I am personally deeply grateful to Dr. Farmelo for his work to preserve my family’s history, while recognizing that our personal gains are entirely tangential to the vastly more important scholarly efforts which motivate him.  Dr. Farmelo is himself a physicist by training, and has become a family friend while researching the most recent biography of Paul Dirac, appropriately titled The Strangest Man.  This excellent book follows my grandfather’s life in great detail, describing both personal and scientific aspects.  It provides great insight into a man who cared a lot more about equations than people, and was thus able to make incredible contributions to science.  Some have assumed that I would take offense at Farmelo’s conclusion that my grandfather was probably autistic.  Quite the contrary — I appreciate his boldness in offering a straightforward explanation of the famously odd behavioral patterns which have inspired generations of jokes, and that still have ripples in my own life today.

Sorry for the downtime – we got hacked

Posted in Electronic Security, Geek, Hacks on March 11th, 2012 by leodirac – 2 Comments

My apologies that the blog has been down for the last few days.  Some hackers got into my PHP and inserted some malware onto the blog.  A helpful reader alerted me to the problem within hours of it happening, and I quickly turned the whole site off to prevent spreading malware.  It took me a few days to find the time to gain enough confidence that I understood what happened so that I could safely turn the site back on.  I won’t detail everything I did to lock the server down, but I’m pretty sure it’s safe now.  If you see anything amiss, though, please contact me right away!

In the interest of keeping the Internet safe, I’ll share what I found.  Dan Hill has a pretty good description of the problem (or at least a very similar one) on his blog.  I know another friend who got hit in a similar manner.  They both have their sites hosted on DreamHost, as do I.  So it certainly could have been a result of the recent hacking there, but from what I saw, there are hints it is just an exploit of an insecure WordPress plugin.  In particular, the attack came in through Google Analytics for WordPress by joostdevalk (v 3.2.5).  Somehow the plugin directory had world-writable (0777) permissions on it, and a couple of rogue files were in there, including one called ainslieturing.php, which is pure virus (as opposed to a modified file that was originally there and useful) and apparently the code which attaches the virus to all the other PHP files on the site.  The virus was triggered by a POST to the ainslieturing.php page from IP 176.65.163.29, which might be somewhere in Germany.  Curiously, at the time of this writing, the exact phrase “ainslieturing.php” does not appear anywhere on the web, which is part of my motivation for documenting what happened.

Dissecting the ainslieturing file took a bit more work.  It was extra-obfuscated.  The code does the same thing of eval’ing a base64_decode’d string, but in a way where the string “base64_decode” never shows up in the source (example source).  Presumably this is to make it harder to detect when somebody is trying to clean up the mess.  For example, it avoids the simple sed fix posted on Dan Hill’s blog.  Additionally, the base64-encoded code appears written to evade simple virus filters, because it is shuffled before evaluation by a key (143 in my case) which can be easily modified (example source).  The inner code is a PHP script which lets the attacker run arbitrary code on the server, or upload arbitrary files.  Interestingly, the whole thing is password protected, requiring the attacker to present a password with MD5 signature “ca3f717a5e53f4ce47b9062cfbfb2458”.  (Anybody feel like reversing that?)  If you want to check your files to see if any of them have the double-obfuscated code, this will find them (and perhaps some false positives too):

grep ".x62.x61.x73.x65" * -R -l 2> /dev/null

Once ainslieturing was triggered, the rough symptom was that a bunch of code got inserted at the top of many of WordPress’s PHP files, lightly obfuscated through eval-base64-decode.  The virus code when de-obfuscated looks like this.  I haven’t bothered to fully understand it, but similar code has infected other people’s servers, with minor variations.  In particular, the code fetches some instructions from URLs which are doubly obfuscated, but resolve to domains in Poland or Russia: many *.rr.nu domains with hoperjoper.ru as the nameserver, and in particular the throw-away domain ens122zzzddazz.com.  (Please be careful with these URLs: DO NOT JUST TYPE THEM INTO YOUR BROWSER.  Use wget and look at the files that come back.)  If you operate any blacklists, feel free to add these domains to them.

The …dazz domain in particular has a whois record which is not private:

Registrant:
Dan Brown [email protected] +022.824460528 +022.824460528
ENO
Aleje Ujazdowskie 20-44
Warszawa,Warszawa,AF 00540

So, Dan, if you actually exist, you either have some explaining to do, or your domain has been completely taken over.  If any of my readers are traveling to Warsaw, Poland and feeling intrepid, feel free to drop by Dan’s office and let me know what you find.

That’s all for now.  If you have anything relevant to add to the situation, please leave a comment.

How fast is college tuition rising?

Posted in Education on January 23rd, 2012 by leodirac – 1 Comment

Many are concerned about the rapidly rising cost of higher education.  Recently this problem has gained a lot of attention, being somewhat integrated into the #occupy platform (insofar as there is one), and leading to abusive pepper spraying.  The problem is that college tuition costs are rising far faster than inflation, putting higher education out of reach for many Americans.

But this problem is not at all new.  Tuition has been outpacing inflation for decades.  The College Board’s statistics show that tuition has increased faster than inflation almost every year going back to 1958.  On average it has outpaced inflation by about 2.8% per year.


[Chart: tuition increases vs. inflation since 1958 (raw data)]

With all the recent discussion about how unsustainable health care costs are, it’s very telling to note that the cost of higher education has been rising faster than health care for the last 30 years.  (Ref: freakonomics, seeking alpha.)

You might say this is all water under the bridge or sunk costs or what have you.  The important question is how fast will college tuition go up in the future? Of course, nobody knows for sure.  Past performance is no guarantee of future results, etc.  Some folks who pay attention to this think it will continue to go up about 6%/yr in the future, although long-term averages are more like 7%/yr or 8%/yr.  So something in that range is a reasonable guess.
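To get a feel for what those rates mean, compound growth does all the work.  Here is a quick Python sketch; the rates are the ones cited above, but the $40,000 starting tuition is purely illustrative:

# Project tuition under constant annual growth. Illustrative only:
# the starting tuition is invented, and the future rate is anyone's guess.

def project(tuition, annual_rate, years):
    return tuition * (1 + annual_rate) ** years

START = 40_000  # hypothetical tuition today, dollars per year
for rate in (0.06, 0.07, 0.08):
    print(f"{rate:.0%}/yr: ${project(START, rate, 18):,.0f} in 18 years")

At 7%/yr, tuition roughly doubles every decade, which is why small differences in the assumed rate matter so much over the span of a childhood.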

Why is tuition going up so fast? That’s a great question which I won’t address in detail here.  But briefly: higher education is a good whose price is strongly influenced by market forces, supply and demand, and demand must be increasing in order to sustain the rising prices.  Another important factor is the unusual way that education is financed, which distorts prices.  Also, many think we’re currently in a “bubble” in which higher education is overpriced.  I don’t subscribe to that point of view, but more on that later.

Burning Man is not Home

Posted in Burning Man, Community, Societal Values on September 4th, 2011 by leodirac – 7 Comments

[Photo: The Man, c. 2006]

“Welcome home” is the standard greeting people hear when they first arrive at Black Rock City, the city which is Burning Man.  For many return visitors, this phrase embodies why they keep coming back to endure the long travel and harsh dusty conditions.  Black Rock City (BRC) feels like home in a way they can’t find anywhere else.  Although I understand this sentiment, I think this is a really unfortunate way to live your life.  How sad to have a home that does not exist 51 weeks out of the year.

To be clear, I understand that it is a wonderful feeling to find a home if you haven’t known one before.  In 1997 during my first visit to Burning Man, I felt like Gonzo in Muppets from Space when he (spoiler alert!) first meets his extended family.  His unique appearance had made him feel utterly alone, until a spaceship full of Gonzo-looking aliens landed on earth and explained that he was one of them.  The realization that he was not a freak outcast but part of a vibrant community is the same that many first experience at Burning Man.  I first experienced this sense of inclusion there, and it has undoubtedly transformed my life for the better.  It is a deeply powerful experience that continues to be extremely important for a great many people.  But why does it need to be rooted in a wasteland in Nevada?  Why not bring that feeling to your real home?

My challenge to everybody who considers Burning Man their home is this: How can you bring what you love about Burning Man into the other 51 weeks of your year?  What is so immutable about your regular life that you can only feel comfortable 2% of the time?  Is that dusty place really so special that you cannot bring its culture home in a sustainable way?  From personal experience, I think not.  It might take years, but you really can take the things you love about Burning Man back to your regular life.  Let’s go through some of the features of BRC that many people find wonderful and discuss how to recreate them in the real world.

At Burning Man, I get to spend lots of time with my friends

One of the simplest pleasures of That Thing In The Desert is that you get to spend an entire week hanging out with your friends.  Vacations are great, right?  Well here’s an idea: go on a camping trip with your friends closer to home.  Or how about arranging a weekly gathering to play board games or cook dinner together?  Creating sustainable community activities is completely possible at home.

Or convince your friends to go somewhere new for a vacation.  Sure, BRC is a wonderfully amazingly different place (at least the first several times you go), but so is much of Africa or Asia.  It’s not like a trip to BRC is cheap either — on average people spend over $2,000 for the whole thing (ref: BRC Census).  Compare that to a plane ticket across the globe.

If you really like being around your friends all the time, how about actually moving into a house with them?  That’s what I did.  It’s called co-housing, and it’s awesome.  Every morning when I get up and every evening for dinner I see my good friends milling about living their lives, and we enrich each other.  I highly recommend it.  If that’s too intense for you, figure out how you and your closest can live within walking distance of each other.  It takes years for neighborhoods to coalesce, but when it works it’s wonderful.

At Burning Man, I’m surrounded by cool art

If this is an excuse for why you can’t feel at home in your regular home life, the irony is thick.  First tabulate how much time and money your camp expended on your last vacation in the desert.  Break that down into the part that was spent on personal comforts (i.e. making BRC more like home) and the part that was spent creating cool art for others to experience.  Now try harnessing all that creative brainpower which went into your project, and divert it towards doing something awesome for your local community.  A few quick ideas: a mural or sculpture in your neighborhood or a new community P-patch or a collective third place for your friends.

Sure it’s a different kind of challenge.  Most cities have more rules about modifying your surroundings than Black Rock City.  But as the years go on, the differences are shrinking.  BRC has strict fire codes and (less strict) building codes, and as the community expands, increasingly restrictive community decency standards.  You can always put up your own Jiffy Lube sculpture in your back yard.

At Burning Man, I can be myself

“Radical self expression” has been one of Burning Man’s philosophies from the beginning.  The ability to be yourself in your normal life seems on the surface like it really should be easy, but is often extremely hard.  What’s preventing you from being yourself?  Often it’s social inertia.  People who expect you to act a certain way — a way that maybe you’re tired of and want to move on from.  If this is the case for you, I’ll offer some bold advice: try spending less time with those people, and more time with people who reinforce the version of yourself you prefer.

If, on the other hand, you enjoy being somebody different only while you’re in the desert, then you have a harder choice to make.  Is that other person who you really want to be?  Perhaps they’re just a costume you enjoy wearing, like for Halloween.  But if that other person has a real home, and you are living as an outsider, then this choice bears consideration.

At Burning Man, strangers are friendly and awesome

This one can be hard, especially for people living in certain cities.  After my first burn, my campmate and I decided to try to bring some of the playa attitude back to Los Angeles.  We attempted what we later termed “attack smiles” because their effect on sidewalk passersby was the exact opposite of what we hoped.  Within a year we both left LA for friendlier pastures.  So in the “tough choices” department, moving is always an option.  You might not feel at home because your home isn’t a very friendly place.  But I wouldn’t jump to that conclusion too quickly.

It might be cliche, but scientific research has shown that good moods spread through social networks.  Happiness is contagious.  Especially amongst friends.  So spend more time with your friends and friends of friends, and bring that same energy you bring to the desert.  Build community. (This is the simplest, strongest advice I can give.) Bring the cultural principles that you love into your 98%-of-the-year community.  It’ll take a lot of work over time.  But I bet your friends will be on board to help, and the end goal is absolutely worth the effort.

Mac ‘n’ Cheese Cupcakes

Posted in Cooking, Hacks, Humor on August 3rd, 2011 by leodirac – Comments Off
[Photo: mac ’n’ cheese cupcakes]

My housemate Ellery created these mac ‘n’ cheese cupcakes for dinner the other night. The frosting is mashed potatoes, and they’re topped with a cherry tomato. Inside is a meatless meaty macaroni and cheese combination surrounded by a savory dough. They were super fun and tasty.

I can take very little credit for these beyond the photo. And helping to consume them. But it’s a great example of why I love living with fun creative people! I’ve heard many requests for the recipe — stay tuned! It’s not mine to share, but when Ellery writes it down I’ll be sure to let you know. (And update this page.)

Co-housing: Picking your housemates

Posted in Co-housing, Community, Seattle on July 23rd, 2011 by leodirac – Comments Off

So you’ve found some folks you think you might want to live with.  Or maybe they’re awesome friends whom you’re super excited to live with.  Either way, before signing a lease (or a mortgage!) it’s important to do your due diligence and try to figure out how well you’ll get along living together.

If it’s somebody you don’t know very well, the need might seem obvious.  But if it’s an old friend, I posit it’s even more important to check your homie-compatibility index.  Being friends and being good housemates are not the same thing.  When considering co-housing, probably the most important thing is picking the right people to live with.  My very wise housemate Heater developed this list of discussion topics to go over with potential roommates.

  • Communication style
  • Occupancy dates
  • Noise
  • Guests
  • Parties
  • Food
  • Regular meetings
  • Use of the Common Spaces
  • Substances
  • Nudity
  • Sex
  • Scheduling use of space
  • Cleanliness
  • Utilities
  • Methods of rent
  • Parking and neighbors
  • Rooms
  • Pets
  • Kids
  • Temperature
  • Decor
  • Chores

We recommend scheduling 2-3 hours of uninterrupted time together to discuss everything on this list.  It takes a while to talk about everything!  Discuss each topic, and write down your expectations for how a household should work.  This forms an informal social contract that you can refer back to.  Make note of differences of opinion.  Decide how you’ll deal with them, or recognize that the barriers to a happy house are too large.

Google+ and Facebook’s natural monopoly in social networks

Posted in Analysis, Economics, Facebook, Google, Microsoft, Tech Industry on July 17th, 2011 by leodirac – 2 Comments

[Image: Google+ and Facebook]

Natural monopolies occur when it is economically favorable to have a single standard vendor for a product or service. In these situations, monopolies tend to appear and maintain themselves naturally. When I say “economically favorable” I mean in the aggregate: the entire economy operates more efficiently because of the standard. This is unusual for a monopoly; usually monopolies get in the way of theoretically efficient capitalism because their power distorts competition. The monopolist will often create friction in the market by, say, charging unreasonably high prices. The strange thing about a natural monopoly is that even with a powerful monopolist in place, most people (not all, of course!) are better off.

I’m going to give two examples of natural monopolies in high tech. They are not the perfect examples used in textbooks, but I think they are illustrative, and offer valuable lessons.

Natural Monopoly of Operating Systems

Operating systems are a good example of a natural monopoly. As much as we all value choice as a driver of innovation, the plain truth is that almost everybody is better off if there is a standard operating system upon which higher-level applications can be built. Application developers benefit because they have a single clear platform upon which to build. If there were two or three dominant operating systems, application vendors would need to build a separate version of their application for each one in order to reach consumers, which is considerably more effort. Similarly, the standard benefits consumers because they have a single choice which gives them the benefit of all the applications written on it.

Gates & Allen understood this long before most, which prompted them to drop out of school and pursue Microsoft with vigor. Windows succeeded in creating such a natural monopoly, enabling a rich ecosystem of third-party software vendors (ISVs in MS parlance) to create value for consumers without needing to worry about what chipset underlies the graphics card or network adapter their customers’ computers. In this way, Microsoft enabled the creation of value for PC customers and wealth for ISVs, and the monopoly persists in a form to this day.

But all is not rosy in this world. Other companies want to sell operating systems. People want choice. Once entrenched, the monopolist has a tendency to make choices which benefit the monopolist more than the consumer — Microsoft continues to exhibit this behavior even as their monopoly power fades. In classic natural monopolies like utilities, explicit regulation controls the monopolist’s abuse. With Windows, a combination of limited government intervention and competitive innovation ultimately limited their influence.

Social networks as natural monopolies

Online social networks also exhibit properties of a natural monopoly. A well built social networking service like Facebook creates tremendous economic opportunities. Particularly if the service exposes its valuable social graph data through an API that other services can use. Almost any online service can be made more compelling by incorporating social graph data. The existence of a publicly usable social graph dataset provides an economic boost to the entire tech sector.

This boost tends to create a winner-take-all situation.  When third-party services rely on a social API service, they reinforce consumers’ use of that service.  Third parties’ lives are easier when there is a single standard, because they only need to code to a single API in order to gain the benefits of the social graph.  Here the analogy to operating systems is clear.  The social network provides a platform upon which others can create value.  The value creation process is easier if there is a single standard social network upon which to build. These characteristics make the social networking monopoly natural.

A behavioral characteristic of social networking sites’ users also helps create a monopoly. People enjoy the benefits of having their social network defined online, but they do not enjoy the effort of defining it. We geeks (everybody reading this and probably most of your friends) are willing to spend hours organizing our friends into circles or searching for people we know to connect with them. Some of us even enjoy it. But for most normal people this very quickly becomes a boring waste of time, especially if they’ve already done it once or twice on different websites.  Most people are not willing to maintain multiple social networks. Once they are invested in one, the barrier to switching is quite high.

Implications for Google+ in competing with Facebook

Facebook’s dominance is rapidly approaching monopoly levels.  They have crossed the tipping point and are fast on their way to becoming the de-facto standard for social graph data, if they aren’t already.  The nature of social networks as supporting a natural monopoly means that Facebook’s rise will be reinforced more strongly than it would be otherwise.  When considering Facebook’s dominance, we readers must remember our place in the ecosystem as geeks.  We and our friends are the innovators and early adopters who are far more willing to try the new thing, because we see intrinsic value in progress, and are far less perturbed by unrefined products.  The fact that recently Facebook’s fastest growing demographic was women over 55 shows that the service has crossed Moore’s chasm and now appeals to the majority of people.  As industry insiders, it’s easy for us to forget the bubble we live in: just because everybody we know uses something doesn’t mean it will ever actually take off and be popular with non-geeks.  But Facebook is clearly on a path to provide a dominant monopolistic standard for social networking data.

Breaking this monopoly would be difficult for Google even without the advantages of a natural monopoly.  People’s natural laziness also makes a third social network (after Facebook and Twitter) unlikely to succeed.  So on the face of it, Google’s got a very tough road ahead.  It’s tempting to declare G+ dead on arrival because of these intrinsic forces, but there are other reasons why I think they actually have a decent shot.  But I’ll save that analysis for another story.

Ignite video on Advanced Co-Housing Techniques

Posted in Co-housing, Community, Ego, Seattle on June 26th, 2011 by leodirac – 1 Comment

My Ignite talk from April on Advanced Co-Housing Techniques has been posted.  This is my best 5-minute summary on the joys of living with friends, and some techniques for making it work.  For some deeper thoughts than what I could fit into those 5 minutes, check out the community section here.