
Making software architecture choices analytically with CodeTrend

Posted in Democratization of Information, Geek, Software Engineering on September 3rd, 2013 by leodirac – 1 Comment

Modern software gets assembled from parts as much as it gets built from scratch.  It used to be you just picked your operating system and programming language and went to it.  Nowadays you need to pick your data store, your development tools, your framework and its plugins and all sorts of libraries independently.  These choices are difficult and important.

A big part of the difficulty is even knowing what choices are available.  All too often the decision is made entirely based on what a key developer has used recently.  This is important, don’t get me wrong — if your current team isn’t productive, the project will not go well.  But if your current team happens to be experts in something that nobody else in the world uses, you might be heading for a dead-end.

These choices really matter too.  Anybody who’s been in the industry has run into so-called “legacy codebases” which is a term that literally means old, but in fact gets applied to any piece of software that is no longer considered “good” for whatever reason.  Some very old codebases are still doing great.  But some become “legacy” less than a year after birth.  Another common feature of legacy codebases is that they’re hard to maintain, and require very expensive investments to replace.

For these reasons, I think that one of the most critical choices in a software project is the choice of technologies upon which it will be based.  Despite the importance of these choices, rarely are they considered very carefully.  The trade-offs are difficult to categorize, and thus get dismissed all too quickly as subjective, and thus inappropriate for strict analysis.  There is definitely a strong subjective component, which is why personal experience is so important, but there are analytical ways to look at the choices too.

A few years back I wrote a popular article comparing two web development frameworks I was considering using: Django or Ruby on Rails.  In it I argue that popularity is a critical measure of any software technology for many reasons.  The more people are using a technology, the better it will be.  People using it means questions will already be asked and answered on the googles.  It means more bugs will have already been found and fixed, and more features will have already been added.  For open source software the mechanisms for this are obvious, but the same results tend to happen with closed source systems assuming the organization maintaining the code is rational.  It also means it will be easier to hire people who know how it works.  Fortunately, popularity of software is relatively easy to measure analytically.

I have spent a lot of time researching these issues before making technology choices and realized that this manual process is wasteful.  To that end, I have started an open source project to simplify the systematic comparison of software technologies.  It’s called CodeTrend, and you can start researching with it right now.  It’s all open source and the data are creative-commons licensed.  You can start adding to the data today by categorizing technologies, and if you know Ruby on Rails, I’d love help adding features.

As an example of it in use, here’s a comparison of the aforementioned web frameworks counting number of posts on everybody’s favorite developer Q&A site StackOverflow:

This shows that they are both quite popular and growing, but Rails clearly has more activity than Django.  (For the record, I recognize and explicitly dismiss the counter-argument that Rails might be more confusing or worse documented, leading to more questions — questions always come up during use, good / bad / easy / complex / rtfm / brain-buster / whatever.)
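For the curious, counts like these can be pulled programmatically.  Here is a minimal Python sketch of the kind of query involved.  It parses the JSON shape that the StackExchange API returns for a per-tag question count; the numbers below are made up for illustration, and CodeTrend's real pipeline may work differently:

```python
import json

# The StackExchange API can return a bare question count for a tag via a
# query like: questions?tagged=ruby-on-rails&site=stackoverflow&filter=total
# The responses below are fabricated samples in that {"total": N} shape;
# real numbers will differ.
SAMPLE_RESPONSES = {
    "ruby-on-rails": '{"total": 140000}',
    "django": '{"total": 95000}',
}

def question_count(tag):
    """Parse the total question count for a tag from a cached API response."""
    return json.loads(SAMPLE_RESPONSES[tag])["total"]

counts = {tag: question_count(tag) for tag in SAMPLE_RESPONSES}
winner = max(counts, key=counts.get)
print(winner, counts[winner])
```

Run against the live API (politely, with rate limiting), a series of such counts taken over time gives exactly the kind of trend line shown above.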

I also like to think of CodeTrend as filling a niche that StackOverflow has chosen to ignore.  Questions on SO that ask “which is better __ or __” get quickly closed as inappropriate.  They keep coming up, and are very useful resources, even though they’re against the rules.  I hope CodeTrend will someday be able to fill that need explicitly.  There’s a ton of work to do to make that possible, so if you’d like to help out, I’d sure appreciate it.  Together we can provide a resource for the entire industry.

Paul Dirac’s PhD Thesis

Posted in Physics, Science on July 29th, 2013 by leodirac – 1 Comment

My grandfather’s PhD thesis has recently found its way onto the Internet.  You can view a PDF of it here, courtesy of Florida State University:

This fascinating document is significant in the history of science.  Its two-word title, “Quantum Mechanics”, demonstrates how fundamental it was in opening up a new branch of science.  For those of you who have written doctoral theses, imagine if the title of your thesis was exactly the title of a required undergraduate class.

The document’s journey to the Internet was slow.  It had been sitting in my mother’s cluttered house for decades, before she passed it along to Graham Farmelo, who delivered it to FSU, who scanned it and published it online.  Now it has a permanent home in their Dirac Collection at the Dirac Science Library.

The first thing you’ll notice about the document is that it is entirely hand-written.  It doesn’t take long to realize that in 1926 this was the only practical option for a document of this type.  Of course type-setting was technologically possible at that time, even for documents with complex mathematical formulae like this.  But the cost of preparing a document in this manner was huge, and thus only done for works that were expected to be broadly distributed.  Then as today, the primary audience for a typical PhD thesis was the handful of professors guiding the doctoral student.  At the time, nobody knew that just 6 years later Dirac would be honored with the Lucasian Professorship of Mathematics, a highly prestigious academic post once held by Isaac Newton and until recently by Stephen Hawking.  So of course this document was hand-written.

The thesis is also visually wonderful.  There are scribbles in margins, neat parts and sloppy parts, crossed-out sections, derivations, question marks of uncertainty, hand-drawn graphs, arrows of re-arrangements, and torn sheets of paper.  The document is difficult to follow, and its completeness is not obvious — the table of contents does not seem to line up with the contents, and the pages which do have numbers aren’t even in order.  Its utility to science is clearly eclipsed by later works, but seeing aspects of the original thought process laid out in both pen and pencil invites so many questions.  What is that sudoku-like grid of numbers in the top margin of page 25?  Why is there a hand-drawn candle on a page otherwise filled with equations going in different directions?  Somebody who is well-versed in the topic would be more qualified to speculate than me.  Obviously I’m biased, but I find it a joy to browse.

I am personally deeply grateful to Dr. Farmelo for his work to preserve my family’s history, while recognizing that our personal gains are entirely tangential to the vastly more important scholarly efforts which motivate him.  Dr. Farmelo is himself a physicist by training, and has become a family friend while researching the most recent biography of Paul Dirac, appropriately titled The Strangest Man.  This excellent book follows my grandfather’s life in great detail, describing both personal and scientific aspects.  It provides great insight into a man who cared a lot more about equations than people, and was thus able to make incredible contributions to science.  Some have assumed that I would take offense at Farmelo’s conclusion that my grandfather was probably autistic.  Quite the contrary — I appreciate his boldness in offering a straightforward explanation of the famously odd behavioral patterns which have inspired generations of jokes, and that still have ripples in my own life today.

Sorry for the downtime – we got hacked

Posted in Electronic Security, Geek, Hacks on March 11th, 2012 by leodirac – 2 Comments

My apologies that the blog has been down for the last few days.  Some hackers got into my PHP and inserted some malware onto the blog.  A helpful reader alerted me to the problem within hours of it happening, and I quickly turned the whole site off to prevent spreading malware.  It took me a few days to find the time to gain enough confidence that I understood what happened so that I could safely turn the site back on.  I won’t detail everything I did to lock the server down, but I’m pretty sure it’s safe now.  But if you see anything amiss, please contact me right away!

In the interest of keeping the internet safe, I’ll share what I found.  Dan Hill has a pretty good description of the problem on his blog, or at least of a very similar one.  I know another friend who got hit in a similar manner.  They all have their sites hosted on DreamHost, as I do.  So it certainly could have been a result of the recent hacking there, but from what I saw, there are hints it is just an exploit of an insecure WordPress plugin.  In particular, the attack came in through Google Analytics for WordPress by joostdevalk (v 3.2.5).  Somehow the plugin directory had global-write (0777) permissions on it, and a couple of rogue files were there, including one called ainslieturing.php which is pure virus (as opposed to a modified file that was originally there and useful) and apparently the code which attaches the virus to all the other PHP files in the site.  The virus was triggered by a POST to the ainslieturing.php page from an IP address which might be somewhere in Germany. Curiously, at the time of this writing, the exact phrase “ainslieturing.php” does not appear anywhere on the web, which is part of my motivation for documenting what happened.

Dissecting the ainslieturing file took a bit more work.  It was extra-obfuscated.  The code does the same thing of eval’ing a base64_decode’d string, but it does it in a way where the string “base64_decode” never shows up in the source (example source).  Presumably this is to make it harder to detect when somebody is trying to clean up the mess.  For example, this avoids the simple sed fix posted on Dan Hill’s blog.  Additionally, the base64-encoded code appears written to avoid simple virus filters, because it is shuffled before evaluation by a key (143 in my case) which can be easily modified (example source).  The inner code is a PHP script which lets the attacker run arbitrary code on the server, or upload arbitrary files.  Interestingly, the whole thing is password protected, requiring the attacker to present a password with MD5 signature “ca3f717a5e53f4ce47b9062cfbfb2458”.  (Anybody feel like reversing that?)  If you want to check your files to see if any of them have the double-obfuscated code, this will find them (and perhaps some false positives too):

grep -R -l -F '\x62\x61\x73\x65' . 2> /dev/null
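To make the trick concrete, here is a toy Python model of this kind of obfuscation: the payload is base64-encoded and then the characters are shuffled by a key-seeded permutation, so the tell-tale base64 text never appears intact in the file.  This is my own illustrative reconstruction of the idea, not the actual PHP from the attack:

```python
import base64
import random

def obfuscate(code, key):
    """Base64-encode the payload, then shuffle the characters with a
    key-seeded permutation so no recognizable substring survives."""
    chars = list(base64.b64encode(code.encode()).decode())
    order = list(range(len(chars)))
    random.Random(key).shuffle(order)
    return "".join(chars[i] for i in order)

def deobfuscate(blob, key):
    """Recompute the same permutation from the key, invert it, then decode."""
    order = list(range(len(blob)))
    random.Random(key).shuffle(order)
    restored = [""] * len(blob)
    for src, ch in zip(order, blob):
        restored[src] = ch
    return base64.b64decode("".join(restored)).decode()

payload = "eval($_POST['cmd']);"  # a stand-in, not the real inner code
blob = obfuscate(payload, 143)    # 143 was the key in my infection
```

Changing the key changes the shuffle completely, which is presumably why the malware author made it so easy to modify.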

Once ainslieturing was triggered, the rough symptoms were that a bunch of code got inserted at the top of many of WordPress’s PHP files, lightly obfuscated through eval-base64-decode.  The virus code when de-obfuscated looks like this.  I haven’t bothered to fully understand it, but similar code has infected other people’s servers, with minor variations.  In particular, the code fetches some instructions from URLs which are doubly obfuscated, but resolve to domains in Poland or Russia.  Many of the domains share the same nameserver, and in particular there is the throw-away …dazz domain.  (Please be careful with these URLs — DO NOT JUST TYPE THEM INTO YOUR BROWSER.  Use wget and look at the files that come back.)  If you operate any blacklists, feel free to add these domains to them.
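If grep isn’t convenient, the same check can be scripted.  This Python sketch walks a directory tree looking for the two signatures I saw: the plain eval-base64-decode header, and the hex-escaped form used by the dropper.  The signature list comes only from my own infection, so expect to extend it, and review every match by hand before deleting anything:

```python
import os

# Signatures from this particular attack: "\x62\x61\x73\x65" is the PHP
# hex-escape spelling of "base", as used by the double-obfuscated dropper.
SIGNATURES = [b"eval(base64_decode(", b"\\x62\\x61\\x73\\x65"]

def find_suspect_files(root):
    """Return paths of .php files under root containing any known signature."""
    hits = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(".php"):
                continue
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                data = f.read()
            if any(sig in data for sig in SIGNATURES):
                hits.append(path)
    return sorted(hits)
```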

The …dazz domain in particular has a whois record which is not private:

Dan Brown [email protected] +022.824460528 +022.824460528
Aleje Ujazdowskie 20-44
Warszawa,Warszawa,AF 00540

So, Dan, if you actually exist, you either have some explaining to do, or your domain has been completely taken over.  If any of my readers are traveling to Warsaw, Poland and feeling intrepid, feel free to drop by Dan’s office and let me know what you find.

That’s all for now.  If you have anything relevant to add to the situation, please leave a comment.

How fast is college tuition rising?

Posted in Education on January 23rd, 2012 by leodirac – 1 Comment

Many are concerned about the rapidly rising cost of higher education.  Recently this problem has gained a lot of attention, being somewhat integrated into the #occupy platform (insofar as there is one), and leading to abusive pepper spraying.  The problem is that college tuition costs are rising far faster than inflation, putting it out of reach of many Americans.

But this problem is not at all new.  Tuition has been outpacing inflation for decades.  The College Board’s statistics show that tuition has increased faster than inflation almost every year going back to 1958.  On average it has outpaced inflation by about 2.8% per year.

(raw data)

With all the recent discussion about how unsustainable health care costs are, it’s very telling to note that the cost of higher education has been rising faster than health care for the last 30 years.  (Ref: freakonomics, seeking alpha.)

You might say this is all water under the bridge or sunk costs or what have you.  The important question is how fast will college tuition go up in the future? Of course, nobody knows for sure.  Past performance is no guarantee of future results, etc.  Some folks who pay attention to this think it will continue to go up about 6%/yr in the future, although long-term averages are more like 7%/yr or 8%/yr.  So something in that range is a reasonable guess.
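To put those percentages in perspective, here is a quick compounding calculation.  The $10,000 starting figure is arbitrary, chosen only to make the arithmetic easy to follow:

```python
def future_tuition(today, annual_rate, years):
    """Project a tuition bill forward at a constant annual growth rate."""
    return today * (1 + annual_rate) ** years

# What a $10,000/yr bill today might look like when a newborn starts college.
for rate in (0.06, 0.07, 0.08):
    cost = future_tuition(10_000, rate, 18)
    print(f"{rate:.0%}/yr -> ${cost:,.0f} in 18 years")
```

At 7%/yr the bill roughly doubles every decade, which matches the rule of 72 (72/7 ≈ 10 years).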

Why is tuition going up so fast? That’s a great question which I won’t go into in detail here.  But briefly: higher education is a good whose price is influenced strongly by market forces — supply and demand.  Demand must be increasing to keep up with the rising costs.  Another important factor is the unusual way that education is financed, which distorts prices.  Also, many think we’re currently in a “bubble” in which higher education is overpriced.  I don’t subscribe to that point of view, but more on that later…

Burning Man is not Home

Posted in Burning Man, Community, Societal Values on September 4th, 2011 by leodirac – 7 Comments

The Man, c. 2006

“Welcome home” is the standard greeting people hear when they first arrive at Black Rock City, the city which is Burning Man.  For many return visitors, this phrase embodies why they keep coming back to endure the long travel and harsh dusty conditions.  Black Rock City (BRC) feels like home in a way they can’t find anywhere else.  Although I understand this sentiment, I think this is a really unfortunate way to live your life.  How sad to have a home that does not exist 51 weeks out of the year.

To be clear, I understand that it is a wonderful feeling to find a home if you haven’t known one before.  In 1997 during my first visit to Burning Man, I felt like Gonzo in Muppets from Space when he (spoiler alert!) first meets his extended family.  His unique appearance had made him feel utterly alone, until a spaceship full of Gonzo-looking aliens landed on earth and explained that he was one of them.  The realization that he was not a freak outcast but part of a vibrant community is the same that many first experience at Burning Man.  I first experienced this sense of inclusion there, and it has undoubtedly transformed my life for the better.  It is a deeply powerful experience that continues to be extremely important for a great many people.  But why does it need to be rooted in a wasteland in Nevada?  Why not bring that feeling to your real home?

My challenge to everybody who considers Burning Man their home is this: How can you bring what you love about Burning Man into the other 51 weeks of your year? What is so immutable about your regular life that you can only feel comfortable 2% of the time?  Is that dusty place really so special that you cannot bring its culture home in a sustainable way?  From personal experience, I think not.  It might take years, but you really can take the things you love about Burning Man back to your regular life. Let’s go through some of the features of BRC that many people find wonderful and discuss how to recreate them in the real world.

At Burning Man, I get to spend lots of time with my friends

One of the simplest pleasures of That Thing In The Desert is that you get to spend an entire week hanging out with your friends.  Vacations are great, right?  Well here’s an idea: go on a camping trip with your friends closer to home.  Or how about arranging a weekly gathering to play board games or cook dinner together?  Creating sustainable community activities is completely possible at home.

Or convince your friends to go somewhere new for a vacation.  Sure, BRC is a wonderfully amazingly different place (at least the first several times you go), but so is much of Africa or Asia.  It’s not like a trip to BRC is cheap either — on average people spend over $2,000 for the whole thing (ref: BRC Census).  Compare that to a plane ticket across the globe.

If you really like being around your friends all the time, how about actually moving into a house with them?  That’s what I did.  It’s called co-housing, and it’s awesome.  Every morning when I get up and every evening for dinner I see my good friends milling about living their lives, and we enrich each other.  I highly recommend it.  If that’s too intense for you, figure out how you and your closest can live within walking distance of each other.  It takes years for neighborhoods to coalesce, but when it works it’s wonderful.

At Burning Man, I’m surrounded by cool art

If this is an excuse for why you can’t feel at home in your regular home life, the irony is thick.  First tabulate how much time and money your camp expended on your last vacation in the desert.  Break that down into the part that was spent on personal comforts (i.e. making BRC more like home) and the part that was spent creating cool art for others to experience.  Now try harnessing all that creative brainpower which went into your project, and divert it towards doing something awesome for your local community.  A few quick ideas: a mural or sculpture in your neighborhood or a new community P-patch or a collective third place for your friends.

Sure it’s a different kind of challenge.  Most cities have more rules about modifying your surroundings than Black Rock City.  But as the years go on, the differences are shrinking.  BRC has strict fire codes and (less strict) building codes, and as the community expands, increasingly restrictive community decency standards.  You can always put up your own Jiffy Lube sculpture in your back yard.

At Burning Man, I can be myself

“Radical self expression” has been one of Burning Man’s philosophies from the beginning.  The ability to be yourself in your normal life seems on the surface like it really should be easy, but is often extremely hard.  What’s preventing you from being yourself?  Often it’s social inertia.  People who expect you to act a certain way — a way that maybe you’re tired of and want to move on from.  If this is the case for you, I’ll offer some bold advice: try spending less time with those people, and more time with people who reinforce the version of yourself you prefer.

If on the other hand you enjoy being somebody different only while you’re in the desert, then you have a harder choice to make.  Is that other person who you really want to be?  Perhaps they’re just a costume you enjoy wearing like for Halloween.  But if that other person has a real home, and you are living as an outsider, then this choice bears consideration.

At Burning Man, strangers are friendly and awesome

This one can be hard, especially for people living in certain cities.  After my first burn, my campmate and I decided to try to bring some of the playa attitude back to Los Angeles.  We attempted what we later termed “attack smiles” because their effect on sidewalk passersby was the exact opposite of what we hoped.  Within a year we both left LA for friendlier pastures.  So in the “tough choices” department, moving is always an option.  You might not feel at home because your home isn’t a very friendly place.  But I wouldn’t jump to that conclusion too quickly.

It might be cliche, but scientific research has shown that good moods spread through social networks.  Happiness is contagious.  Especially amongst friends.  So spend more time with your friends and friends of friends, and bring that same energy you bring to the desert.  Build community. (This is the simplest, strongest advice I can give.) Bring the cultural principles that you love into your 98%-of-the-year community.  It’ll take a lot of work over time.  But I bet your friends will be on board to help, and the end goal is absolutely worth the effort.

Mac ‘n’ Cheese Cupcakes

Posted in Cooking, Hacks, Humor on August 3rd, 2011 by leodirac – Comments Off
mac n cheese cupcakes

My housemate Ellery created these mac ‘n’ cheese cupcakes for dinner the other night. The frosting is mashed potatoes, and they’re topped with a cherry tomato. Inside is a meatless meaty macaroni and cheese combination surrounded by a savory dough. They were super fun and tasty.

I can take very little credit for these beyond the photo. And helping to consume them. But it’s a great example of why I love living with fun creative people! I’ve heard many requests for the recipe — stay tuned! It’s not mine to share, but when Ellery writes it down I’ll be sure to let you know. (And update this page.)

Co-housing: Picking your housemates

Posted in Co-housing, Community, Seattle on July 23rd, 2011 by leodirac – Comments Off

So you’ve found some folks you think you might want to live with.  Or maybe they’re awesome friends whom you’re super excited to live with.  Either way, before signing a lease (or a mortgage!) it’s important to do your due diligence and try to figure out how well you’ll get along living together.

If it’s somebody you don’t know very well, the need might seem obvious.  But if it’s an old friend, I posit it’s even more important to check your homie-compatibility index.  Being friends and being good housemates are not the same thing.  When considering co-housing, probably the most important thing is picking the right people to live with.  My very wise housemate Heater developed this list of discussion topics to go over with potential roommates.

  • Communication style
  • Occupancy dates
  • Noise
  • Guests
  • Parties
  • Food
  • Regular meetings
  • Use of the Common Spaces
  • Substances
  • Nudity
  • Sex
  • Scheduling use of space
  • Cleanliness
  • Utilities
  • Methods of rent
  • Parking and neighbors
  • Rooms
  • Pets
  • Kids
  • Temperature
  • Decor
  • Chores

We recommend scheduling 2-3 hours of uninterrupted time together to discuss everything on this list.  It takes a while to talk about everything!  Discuss each topic, and write down your expectations for how a household should work.  This forms an informal social contract that you can refer back to.  Make note of differences of opinion.  Decide how you’ll deal with them, or recognize that the barriers to a happy house are too large.

Google+ and Facebook’s natural monopoly in social networks

Posted in Analysis, Economics, Facebook, Google, Microsoft, Tech Industry on July 17th, 2011 by leodirac – 2 Comments

Natural monopolies occur when it is economically favorable to have a single standard vendor for a product or service. In these situations, monopolies tend to appear and maintain themselves naturally. When I say “economically favorable” I mean in the aggregate — the entire economy operates more efficiently because of the standard. This is unusual for a monopoly — usually monopolies get in the way of theoretically ideal, efficient capitalism because their power distorts competition. The monopolist will often create friction in the market by, say, charging unreasonably high prices. The strange thing about a natural monopoly is that even with a powerful monopolist in place, most people (not all, of course!) are better off.

I’m going to give two examples of natural monopolies in high tech. They are not the perfect examples used in textbooks, but I think they are illustrative, and offer valuable lessons.

Natural Monopoly of Operating Systems

Operating systems are a good example of a natural monopoly. As much as we all value choice as a driver of innovation, the plain truth is that almost everybody is better off if there is a standard operating system upon which higher-level applications can be built. Application developers benefit because they have a single clear platform upon which to build. If there were two or three dominant operating systems, application vendors would need to build a separate version of their application for each one in order to reach consumers, which is considerably more effort. Similarly, the standard benefits consumers because they have a single choice which gives them the benefit of all the applications written on it.

Gates & Allen understood this long before most, which prompted them to drop out of school and pursue Microsoft with vigor. Windows succeeded in creating such a natural monopoly, enabling a rich ecosystem of third-party software vendors (ISVs in MS parlance) to create value for consumers without needing to worry about what chipset underlies the graphics card or network adapter in their customers’ computers. In this way, Microsoft enabled the creation of value for PC customers and wealth for ISVs, and the monopoly persists in a form to this day.

But all is not rosy in this world. Other companies want to sell operating systems. People want choice. Once entrenched, the monopolist has a tendency to make choices which benefit the monopolist more than the consumer — Microsoft continues to exhibit this behavior even as their monopoly power fades. In classic natural monopolies like utilities, explicit regulation controls the monopolist’s abuse. With Windows, a combination of limited government intervention and competitive innovation ultimately limited their influence.

Social networks as natural monopolies

Online social networks also exhibit properties of a natural monopoly. A well built social networking service like Facebook creates tremendous economic opportunities. Particularly if the service exposes its valuable social graph data through an API that other services can use. Almost any online service can be made more compelling by incorporating social graph data. The existence of a publicly usable social graph dataset provides an economic boost to the entire tech sector.

This boost tends to create a winner-take-all situation.  When third-party services rely on a social API service, they reinforce consumers’ use of that service.  Third parties’ lives are easier when there is a single standard, because they only need to code to a single API in order to gain the benefits of the social graph.  Here the analogy to operating systems is clear.  The social network provides a platform upon which others can create value.  The value creation process is easier if there is a single standard social network upon which to build. These characteristics make the social networking monopoly natural.

A behavioral characteristic of social networking sites’ users also helps create a monopoly. People enjoy the benefits of having their social network defined online, but they do not enjoy the effort of defining it. We geeks (everybody reading this and probably most of your friends) are willing to spend hours organizing our friends into circles or searching for people we know to connect with them. Some of us even enjoy it. But for most normal people this very quickly becomes a boring waste of time, especially if they’ve already done this once or twice on different websites.  Most people are not willing to maintain multiple social networks. Once they are invested in one, the barrier to switching is quite high.

Implications for Google+ in competing with Facebook

Facebook’s dominance is rapidly approaching monopoly levels.  They have crossed the tipping point where they are fast on their way to becoming the de-facto standard for social graph data, if they haven’t already.  The nature of social networks as supporting a natural monopoly means that Facebook’s rise will be supported more strongly than it would be otherwise.  When considering Facebook’s dominance, we readers must remember our place in the ecosystem as geeks.  We and our friends are the innovators and early adopters who are far more willing to try the new thing, because we see intrinsic value in progress, and are far less perturbed by unrefined products.  The fact that recently Facebook’s fastest growing demographic was women over 55 shows that the service has crossed Moore’s chasm and now appeals to the majority of people.  As industry insiders, it’s easy for us to forget the bubble we live in — just because everybody we know uses something doesn’t mean it will ever actually take off and be popular with non-geeks.  But Facebook is clearly on a path to provide a dominant monopolistic standard for social networking data.

Breaking this monopoly would be difficult for Google even without the advantages of a natural monopoly.  People’s natural laziness makes a third social network (after Facebook and Twitter) unlikely to succeed as well.  So on the face of it, Google‘s got a very tough road ahead.  It’s tempting to declare G+ dead on arrival because of these intrinsic forces, but there are other reasons why I think they actually have a decent shot.  But I’ll save that analysis for another story.

Ignite video on Advanced Co-Housing Techniques

Posted in Co-housing, Community, Ego, Seattle on June 26th, 2011 by leodirac – 1 Comment

My Ignite talk from April on Advanced Co-Housing Techniques has been posted.  This is my best 5-minute summary on the joys of living with friends, and some techniques for making it work.  For some deeper thoughts than what I could fit into those 5 minutes, check out the community section here.

Macbook Crashes, Kernel Panics and coping with an Apple “Genius”

Posted in Analysis, Apple, Gadgets, Geek, Hacks, Hardware on May 14th, 2011 by leodirac – 8 Comments

So your Mac is crashing a lot, and after a trip to the “Genius Bar”, you’re starting to think maybe that “genius” you talked to is anything but.  Is this where you are?  If so, join the club, because that’s exactly what I’ve been going through recently.  My MacBook Pro would regularly go black without warning, and the only way I could get its attention again was to hold the power button for ten seconds.  Often it crashed while the screen saver was running, or when I was switching between desktop Spaces, or any other time.  And it was a thorough and complete crash — no warning, no recovery.

It was quite a chore to get Apple to admit that the cause was a hardware problem, and fix it.  But I finally succeeded, so I thought I’d share some of my experiences.  I’ll explain what a Kernel Panic is, how they sometimes can be caused by faulty software but often indicate hardware problems, how they differ from other kinds of crashes, and provide a guide on how to read a Mac OS X kernel panic report.

Dealing with the “Genius” Bar staff

“Genius” is what Apple calls its first tier of technical support.  I find the brand unfortunate and insulting for everybody involved.  There is no intelligence test required to work as a “genius” — just some minimal training on how to follow Apple customer service scripts like an obedient robot.  Knowing Apple, I wouldn’t be surprised if the “Genius” staff are required to follow these scripts verbatim and face not only termination but punitive lawsuits for deviating from the party line.  Keep this in mind when dealing with them.  Also know that they have some discretion in the outcome of your visit, but the discretion exists within guidelines that they cannot control.

Here are some tips, from my limited experience, on getting past the “genius”.  Print out your kernel panic reports and bring them in.  The more the better.  Highlight the relevant parts.  I’m not sure if bringing a bad attitude with you helps or not — they want to make their customers happy, but they don’t like having their “genius” title challenged with logic.  I also recommend persistence.  Following their stupid advice and showing them that it did no good will help.  I’m not sure if understanding what’s going on will.  But if you’d like to understand more about why your Mac is crashing, read on…

Kernel panics and hardware failures vs regular software failures

There are two basic ways your Mac can crash.  First, an application might lock up on you and become unresponsive.  You get the spinning beachball of death, and eventually have to Force Quit the application, losing whatever work you hadn’t saved.  This kind of user-mode failure is very common with buggy software.  If the beachball is getting you down, the problem is almost certainly caused by bad software, not by a hardware problem.  In OS 9 and before, this kind of failure could take down your entire machine, but since the introduction of the BSD kernel in OS X, the system is designed to let one application fail while protecting all the others.

Sometimes, though, your entire Mac will crash hard.  Without warning your system displays a full-screen message saying “You need to restart your computer. Hold down the Power button for several seconds or press the Restart button.” in several languages.  This is OS X’s last-ditch attempt to tell you something about what happened before it goes completely tits-up.  It’s formally known as a kernel panic.  Sometimes the system is so screwed it can’t even get that error message onto the screen before it dies.

Kernel panics indicate a serious problem, either with the computer’s hardware, or the low-level software in the operating system. In fact there are only three things that can cause a kernel panic:

  1. Faulty hardware causes a problem that the OS doesn’t know how to deal with
  2. A bug in OS X itself
  3. A bug in an OS plugin called a kernel extension or kext

Firstly, if the hardware itself has problems, kernel panics are a common way those problems manifest.  Similarly, if the operating system itself has any bugs, they could take down the entire system.  The third option could be caused by third-party software, while the first two are entirely Apple’s responsibility.  So when it comes to dealing with the “Genius” behind the bar, the first two are fairly straightforward: if you’re seeing this problem a lot, and nobody else is, then it’s probably a hardware problem, and they should replace your hardware.

Here’s a thought experiment I tried unsuccessfully with the Apple “geniuses” I had to deal with: Imagine you have a hundred Macs all running the same software, and one of them crashes periodically, but the other 99 don’t.  Would you classify that Mac as having a hardware problem or a software problem?  In my case, the genius insisted that it was a software problem.  In fact he claimed he was certain that if I uninstalled Adobe Flash, the problem would be fixed.  Read on, and you’ll learn how the kernel panic reports themselves show that this explanation is impossible.

Understanding and interpreting Kernel Panic reports

First a bit about what a Kernel Panic is.  Very simply, it’s when something unexpected goes wrong in the operating system kernel.  What’s the kernel?  The kernel is the lowest level of the operating system — the part that’s closest to the hardware.  In modern operating systems, there’s a fairly arbitrary line between what functionality lives in the kernel and what functionality lives in user space.  The key difference is that when something goes wrong with software in user space, you get a beachball on the app, but the system survives.  When something goes wrong in the kernel, you get a kernel panic, and the whole system goes bye-bye fast.

So it’s critical that any code running in kernel space be ultra reliable.  You don’t change kernel code quickly or lightly, and you test the hell out of it before you release it.  But code runs faster in the kernel, so most modern operating systems put important things like networking and graphics into the kernel.  The BSD kernel which powers OS X allows the installation of “kernel extensions” or “kexts” which add functionality.  More about these soon.  But suffice to say that when anything goes wrong with any kext, it’s a big problem, because there’s nothing to fall back on (e.g. you can’t display an error dialog if the problem is with the display system), so the system’s reaction is called a panic.  Thus “kernel panic.”

Immediately after a KP, your computer does two things: it stores a bunch of information to help diagnose what caused the problem, and it puts up the error screen, if it can.  When you reboot, your computer asks if you want to send the KP report to Apple.  You should do this.  The smarter of the “genius” staff can look these reports up and see that your Mac is actually crashing, but they’ll admit that the contents are too technical for a mere “genius” to understand.  Well, I’m going to explain what the reports contain and what it means about what’s wrong with your computer.

Here’s a typical crash report from my computer.  In my case, these panics weren’t even accompanied by the “restart your computer message” because as I’ll explain, the problem originated in the graphics system.  My computer just suddenly went black and non-responsive.  I’ve highlighted a few key sections for explanation below.

Interval Since Last Panic Report:  420 sec
Panics Since Last Report:          1
Anonymous UUID:                    8A09F455-1039-4696-8479-xxxxxxxxxxxx
Thu Apr 21 09:00:51 2011
panic(cpu 3 caller 0x9cdc8f): NVRM[0/1:0:0]: Read Error 0x00000100: CFG 0xffffffff 0xffffffff 0xffffffff, BAR0 0xc0000000 0xa734e000 0x0a5480a2, D0, P2/4
Backtrace (CPU 3), Frame : Return Address (4 potential args on stack)
0xbc001728 : 0x21b510 (0x5d9514 0xbc00175c 0x223978 0x0)
0xbc001778 : 0x9cdc8f (0xbe323c 0xc53840 0xbf23cc 0x0)
0xbc001818 : 0xae85d3 (0xe0cfc04 0xe5c9004 0x100 0xb83de000)
0xbc001868 : 0xadf5cc (0xe5c9004 0x100 0xbc001898 0x9bd76c)
0xbc001898 : 0x16c8965 (0xe5c9004 0x100 0x438004ee 0x28)
0xbc0019d8 : 0xb07250 (0xe5c9004 0xe5ca004 0x0 0x0)
0xbc001a18 : 0x9d6e23 (0xe5c9004 0xe5ca004 0x0 0x0)
0xbc001ab8 : 0x9d3502 (0x0 0x9 0x0 0x0)
0xbc001c68 : 0x9d4aa0 (0x0 0x600d600d 0x704a 0xbc001c98)
0xbc001d38 : 0xc89217 (0xbc001d58 0x0 0x98 0x2a358d)
0xbc001df8 : 0xc8ec1d (0xe8e5404 0x0 0x98 0x45e8d022)
0xbc001f18 : 0xc8f0b4 (0xe8e5404 0x124b6204 0x6d39d1c0 0x0)
0xbc001f78 : 0xc8f39f (0xe8e5404 0x124b6204 0x6d39d1c0 0xbc0021e0)
0xbc002028 : 0xca3691 (0xe8e5404 0x1f80d8e8 0xbc00239c 0xbc0021e0)
0xbc002298 : 0xc84d09 (0x6d0b7000 0x1f80d8e8 0xbc00239c 0x0)
0xbc0023f8 : 0xc84f47 (0x6d0c6000 0x1f80d800 0x1 0x0)
0xbc002428 : 0xc87a04 (0x6d0c6000 0x1f80d800 0x0 0x97c6c4fc)
0xbc002468 : 0xca9d40 (0x6d0c6000 0x1f80d800 0x6d09f274 0x140)
0xbc0024f8 : 0xc9b5a9 (0xde94bc0 0x1f80d800 0x0 0x1)
0xbc002558 : 0xc9b810 (0x6d09f000 0x6d09f77c 0x1f80d800 0x0)
0xbc0025a8 : 0xc9bce4 (0x6d09f000 0x6d09f77c 0xbc0028cc 0xbc00286c)
0xbc0028e8 : 0xc98aaf (0x6d09f000 0x6d09f77c 0x1 0x0)
0xbc002908 : 0xc605a1 (0x6d09f000 0x6d09f77c 0x1956a580 0x0)
0xbc002938 : 0xc9a572 (0x6d09f000 0xbc002a7c 0xbc002968 0x5046b1)
0xbc002978 : 0xc648de (0x6d09f000 0xbc002a7c 0x0 0xc000401)
0xbc002ab8 : 0xc9dee6 (0x6d09f000 0x0 0xbc002bcc 0xbc002bc8)
0xbc002b68 : 0xc60c93 (0x6d09f000 0x0 0xbc002bcc 0xbc002bc8)
0xbc002be8 : 0x56a738 (0x6d09f000 0x0 0xbc002e3c 0xbc002c74)
0xbc002c38 : 0x56afd7 (0xcef020 0x6d09f000 0x129bab88 0x1)
0xbc002c88 : 0x56b88b (0x6d09f000 0x10 0xbc002cd0 0x0)
0xbc002da8 : 0x285be0 (0x6d09f000 0x10 0x129bab88 0x1)
0xbc003e58 : 0x21d8be (0x129bab60 0x1ec235a0 0x1fd7e8 0x5f43)
      Backtrace continues...

      Kernel Extensions in backtrace (with dependencies):
         com.apple.GeForce(…)@…
         com.apple.nvidia.nv50hal(…)@…
         com.apple.NVDAResman(…)@…->0xd0afff
            dependency: com.apple.iokit.IOGraphicsFamily(…)@…

BSD process name corresponding to current thread: kernel_task

Mac OS version:
Kernel version:
Darwin Kernel Version 10.7.0: Sat Jan 29 15:17:16 PST 2011; root:xnu-1504.9.37~1/RELEASE_I386
System model name: MacBookPro6,2 (Mac-F22586C8)
System uptime in nanoseconds: 35829130822125

unloaded kexts: 1.6.3 (addr 0xbc1e5000, size 0x53248) - last unloaded 12216461868115

loaded kexts:
com.parallels.kext.prl_vnic 6.0 11992.625164
com.parallels.kext.prl_netbridge 6.0 11992.625164
com.parallels.kext.prl_usb_connect 6.0 11992.625164
com.parallels.kext.prl_hid_hook 6.0 11992.625164
com.parallels.kext.prl_hypervisor 6.0 11992.625164
[… a long list of com.apple.* kexts followed here; their names were lost in formatting, leaving only version numbers …]

The first line is fairly clear — how long has your system been running since its last crash?  If this is less than an hour, as it was for my computer, then your machine is completely FUBAR.  Less than a day and you’ve still got a seriously unstable computer.  (Hint for any “genius” that might be reading this article: take the number of seconds, divide it by 60 using the Calculator app on your store-issued-iPad, and that will give you the number of minutes.  Divide that new smaller number by 60 again to get an even smaller number which is hours.  If you can figure out how to get to number of days by yourself, it’s time to apply for the “Genius Lead” job.)
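For anyone who would rather not do the division on a store-issued iPad, the arithmetic above is a three-line function.  A quick sketch in Python:

```python
def describe_interval(seconds):
    """Convert a panic-report interval in seconds to a human-readable string."""
    minutes, secs = divmod(seconds, 60)   # 60 seconds per minute
    hours, minutes = divmod(minutes, 60)  # 60 minutes per hour
    days, hours = divmod(hours, 24)       # 24 hours per day
    return f"{days}d {hours}h {minutes}m {secs}s"

# The 420-second interval from my report:
print(describe_interval(420))  # 0d 0h 7m 0s
```

Seven minutes between panics.  Not a healthy machine.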

The Anonymous UUID is an effectively random code that allows Apple to look up the crash reports for your computer when you go into the store.  Then there’s the date.  Straightforward.

The line which starts “panic” is the closest thing you’ll find to a concise explanation of what went wrong.  In all likelihood this will be a jumble of words and numbers that make no sense, but it’s a great string to Google.  If you’re having a hardware problem, this message will probably stay about the same with each KP.  Googling my error message “NVRM[0/1:0:0]: Read Error 0x00000100” turns up a bunch of people with similar problems — computer going black without warning, often while playing World of Warcraft.
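If you want to pull that line out of a saved report automatically, here’s a simple sketch (the `panic_summary` helper is an illustrative name of my own, not any Apple tool):

```python
def panic_summary(report_text):
    """Return the message portion of the panic(...) line from a kernel
    panic report, or None if the text contains no such line."""
    for line in report_text.splitlines():
        if line.startswith("panic("):
            # Drop the "panic(cpu N caller 0x...): " prefix; the message
            # after the colon is the part worth googling.
            return line.split("): ", 1)[-1]
    return None

report = """Panics Since Last Report: 1
panic(cpu 3 caller 0x9cdc8f): NVRM[0/1:0:0]: Read Error 0x00000100: CFG ...
Backtrace (CPU 3), Frame : Return Address
"""
print(panic_summary(report))
```

Paste the result straight into a search engine and see who else is suffering with you.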

The next section titled “backtrace” is worthless unless you’re actually diving into the source code that caused the problem.  Skip over it.  But the section after it is extremely interesting and relatively easy to interpret.

The section titled “Kernel Extensions in backtrace (with dependencies)” actually tells you what part of the system failed.  Read this one closely and try to make sense of it.  In the case of my example, there are three kernel extensions involved in the crash.  They are called “com.apple.GeForce” and “com.apple.nvidia.nv50hal” and “com.apple.NVDAResman”.  The first one is fairly obvious — GeForce is the kind of graphics chip in the MacBook.  The second one is also pretty clear — Nvidia is the company that makes GeForce, and nv50hal I would guess means “Nvidia 5.0 Hardware Abstraction Layer” or something similar.  I’m not sure what NVDAResman is, but looking down a bit I see it’s related to “IOGraphicsFamily”.  This paints a really clear picture that the failure is in the graphics system.  Moreover, since every line here starts with “com.apple” we know the failure is entirely in code written by Apple.  There is no third-party software involved in this crash.
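The reasoning here boils down to one mechanical check: does every kext named in the backtrace belong to Apple?  A sketch, using the kext names from my report:

```python
# If every kernel extension in the backtrace starts with "com.apple.",
# no third-party code was involved in the crash.
backtrace_kexts = [
    "com.apple.GeForce",
    "com.apple.nvidia.nv50hal",
    "com.apple.NVDAResman",
]

third_party = [k for k in backtrace_kexts if not k.startswith("com.apple.")]
if third_party:
    print("Third-party code implicated:", third_party)
else:
    print("Every kext in the backtrace is Apple's own code.")
```

An empty `third_party` list is exactly the argument the “genius” refused to hear.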

For my particular crash, it’s important to know something about the graphics hardware of these MacBooks, since all the evidence points to the graphics hardware.  This generation of MacBook has two graphics chips — a faster one from Nvidia, and a more battery-friendly one from Intel.  The Nvidia chip, which is apparently having problems, is always used when the computer has an external monitor plugged in, or when something fancy is happening on the built-in screen.  A nice utility called gfxCardStatus can help you untangle this complexity, and will definitely give you a leg up on the “genius.”

The following line starting with “BSD process name” can also be important.  This will sometimes tell you which user-level app originated the call into the kernel which failed.  In my case it was “kernel_task” which provides no additional information.

The next section gives some basic info about the Mac — hardware and OS versions.  What follows is a complete list of kernel extensions (kexts) installed.  This gives you a bit more ammo in dealing with the “genius” who is probably ignoring you at this point anyway.  You can look through this list and see everything that might possibly contribute to a kernel panic.  In my case, the only software modules that aren’t from Apple are some drivers from Parallels for running my Windows virtual machine.  So the only reasons my Mac might kernel panic are because of a hardware problem, a bug in OS X itself, or something going wrong with Parallels.  Understanding this should, in theory, be very helpful when talking to your local neighborhood “genius” but unfortunately they are simple bots that only run scripts authored in Cupertino and are not permitted to listen to logic.
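The same filter applies to the loaded-kexts list.  Here’s a sketch that parses the section text and returns the non-Apple bundle IDs (the `com.apple.driver.AppleHDA` entry is just a representative Apple kext for illustration):

```python
def third_party_kexts(loaded_section):
    """Given the text of the 'loaded kexts:' section of a panic report,
    return the bundle IDs that don't belong to Apple -- the only
    third-party candidates for causing a kernel panic."""
    suspects = []
    for line in loaded_section.splitlines():
        parts = line.split()
        # Each entry starts with a reverse-DNS bundle ID, e.g. com.vendor.name
        if parts and "." in parts[0] and not parts[0].startswith("com.apple."):
            suspects.append(parts[0])
    return suspects

section = """com.parallels.kext.prl_vnic 6.0 11992.625164
com.parallels.kext.prl_hypervisor 6.0 11992.625164
com.apple.driver.AppleHDA 2.0.5f14
"""
print(third_party_kexts(section))  # only the Parallels drivers survive the filter
```

In my case that leaves just the Parallels drivers, which narrows the possible causes to hardware, OS X itself, or Parallels.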

Apple’s Propaganda about Flash

When the “genius” told me my Mac’s problem was that I had Adobe Flash installed, I just laughed at first.  Flash is installed on something like 97% of desktop computers, and very few of them regularly turn themselves off for no reason.   Moreover, the kernel panic report lists every piece of software that could possibly contribute to the kernel panic, and neither the word “flash” nor “adobe” appear anywhere in the list.  But then I realized he wasn’t joking.

Apple’s ongoing arguments with Adobe over Flash are well publicized.  The root of the issue, in very brief summary, is that Apple sees Adobe’s Flash as a strategic threat to their incredibly profitable iPhone platform.  The poor “genius” I’m stuck with has become a pawn in Apple’s PR battle, throwing himself on the grenade of propaganda just to spread FUD about Flash.  I tried reasoning with him, explaining that Adobe’s software doesn’t run in the kernel, and therefore cannot cause a kernel panic.  The job of the kernel is to protect users from badly written software crashing the whole machine. But he would not budge.  I imagined a “genius” script which read as follows:

Mac is crashing…

1. Run hardware diagnostic tests.

2. Address any identified hardware problems.

3. If hardware tests come back clean, tell customer that the problem (whatever it is) is caused by Flash.  Tell them to uninstall it, and see if that helps.

Here I imagine the Dantesque trap of the rare “genius” who actually understands how OS X works: “I’m telling the customer something which is impossible on its face, and he knows it.  He’s arguing with me, telling me I’m being stupid.  But I signed a contract with Apple saying I would defame Adobe, and deviating from this contract will bring the wrath of Steve’s legal team down on me.  I just have to smile and say things like ‘yeah, that’s the really strange thing about this particular software problem — it only affects certain computers.  But it’s definitely caused by Flash.’”

One might reason that Flash could cause kernel panics because it makes more extensive use of the graphics system than other applications.  But in this case, Flash isn’t the actual problem.  Flash merely exposes the underlying problem, as would any software that works the graphics system hard.  That explains all the people with the same problem as mine who play World of Warcraft.  If the “genius” advice ever works, it’s just because Flash is the most graphics-intensive software that many people use on their Macs.  The actual problem is still either a bug in OS X, or a hardware problem.

Consider the advice not to use Flash on your Mac by analogy to a car.  (A high-end MacBook actually costs as much as some cars.)  Imagine that your car sometimes just turned its engine off while you were in the middle of driving it: catastrophic failure with no warning or apparent reason.  You go to the dealership, and they can’t find anything wrong with it, but they ask: do you ever listen to electronic music?  Well, yes, sometimes.  That’s the problem!  It’s the electronic music that is causing your car to malfunction.  So stop listening to it, and the problem will be fixed.  Umm, what?  The closest thing to the truth, by analogy, would be that any bass-heavy music (read: graphics-intensive application) is stressing out some weak connection in the electronics.  But because the car dealership is owned by the local philharmonic, they blame it on that awful music the kids listen to, using your misfortune and their incompetence to push an unrelated political agenda.

It’s an interesting glimpse into how Apple is using their retail presence to advance a strategic PR goal.  Evidence that Apple has grown up as a company to the point where their own motives are more important than doing what actually helps customers.  *sigh*  At least I got my MacBook fixed.