
Brain Simulation Tactics and Complexity Estimates

Ray Kurzweil recently predicted that we’d be able to reverse engineer the human brain by 2020.  He makes an argument that a brain simulator would need about a million lines of code:

Here’s how that math works, Kurzweil explains: The design of the brain is in the genome. The human genome has three billion base pairs or six billion bits, which is about 800 million bytes before compression, he says. Eliminating redundancies and applying loss-less compression, that information can be compressed into about 50 million bytes, according to Kurzweil.

About half of that is the brain, which comes down to 25 million bytes, or a million lines of code.

This reasoning is IMHO flawed and overly optimistic.  It’s an interesting idea to compare the complexity of these two systems by comparing their bit representations.  I think the idea has merit at a very rough level — that is, I think you can compare the complexity of a genome to the complexity of a piece of software on a rough order-of-magnitude scale.  The biggest flaw in Kurzweil’s argument is that he magically throws in a factor of 16x improvement in his favor by saying the genome can be “compressed.”  Well, software executables can be compressed too, a fact that Kurzweil conveniently ignores.  So I’d follow his reasoning to say that a human brain simulator probably needs about 10 – 100 million lines of code.  (I’m deliberately including 0 significant digits here to indicate the roughness of this approximation.)  This puts a human brain simulator on par with some of the world’s most sophisticated software projects so far, which seems about right, at least to an order of magnitude or so.
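The arithmetic here is easy to sanity-check. The sketch below redoes both versions of the math; note that the 2-bits-per-base-pair encoding and the ~25-bytes-per-line-of-code conversion are assumptions implied by Kurzweil’s figures, not numbers he states directly.

```python
# Back-of-the-envelope check of Kurzweil's estimate, and the same
# reasoning without his 16x "compression" step.  All figures are rough.

base_pairs = 3_000_000_000            # human genome: ~3 billion base pairs
bits = base_pairs * 2                 # 2 bits per base pair -> 6 billion bits
raw_bytes = bits / 8                  # -> 750 million bytes ("about 800 MB")

# Kurzweil's path: lossless compression to ~50 MB, half of it "brain",
# and an implied ~25 bytes per line of code to reach 1 million lines.
compressed_bytes = 50_000_000
bytes_per_line = 25                   # implied by 25 MB -> 1M lines
kurzweil_lines = (compressed_bytes / 2) / bytes_per_line   # ~1 million

# Skipping the compression step (executables compress too) lands
# squarely in the 10-100 million line range argued above.
uncompressed_lines = (raw_bytes / 2) / bytes_per_line      # ~15 million

print(f"Kurzweil: ~{kurzweil_lines / 1e6:.0f}M lines")
print(f"Without compression: ~{uncompressed_lines / 1e6:.0f}M lines")
```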

Strong reactions

PZ Myers published a wrathful condemnation of Kurzweil’s argument titled “Ray Kurzweil does not understand the brain.”  If you sift through the name-calling you see that Myers assumes a specific tactic in building the brain simulator: starting with the human genome and deriving the brain’s functionality from it.  This strategy will certainly work, once we have solved the protein-folding problem and, more generally, have the ability to do quantum chemical simulations of kilogram-sized masses of organic chemicals.  Which is to say it’s theoretically possible (we might be living in a software simulation of our universe for all we know), but completely intractable with current technology.  For comparison, our best quantum chemical simulations currently top out at maybe a dozen atoms, even when pushed hard.  So being able to simulate an entire kilogram of organic matter is nowhere in sight.

Tactics for simulation

I agree with Myers that we are nowhere near being able to interpret the genome well enough to understand how it makes a brain.  But we probably don’t need to in order to simulate a brain.  By analogy, consider the Super Nintendo (SNES) Emulator, which is another kind of simulator many of us have experience with.

SNES emulators let you play all the old Nintendo games but on a modern computer instead of original SNES hardware.  Let’s say somebody handed you a box and a stack of cartridges and told you to build a Nintendo simulator.  What would you do?  Well, clearly you could open up the SNES box and reverse engineer the circuit boards to figure out all the wiring.  You’d probably figure out that the CPU was important — a variant on the 65816, which was essentially the 16-bit version of the 6502 some of us grew up with in our Commodore 64s and Atari 800s.  So you could (theoretically) crack open the 65816 CPU chip itself, put it through an electron microscope and understand every transistor it used to interpret the instructions. In this way you could reliably create an emulator which completely replicated every aspect of the SNES. Such a simulation would replicate all of its bugs, timing quirks and everything, but it would work and be extremely expensive to simulate.

This is analogous to the tactic PZ Myers seems to be assuming Kurzweil would take to simulating a human brain. But Kurzweil would actually start at a much higher level of abstraction.  Simulating every protein in every neuron is like building an SNES emulator by simulating every transistor in the original Nintendo’s hardware. The key to getting those SNES games to work does not lie in replicating the design of the CPU which interprets the instructions.  The key is figuring out how to run those instructions on modern hardware.  By moving up through levels of abstraction, we can simulate the system much more cheaply and easily, although there’s a chance edge-case behavior won’t be captured properly.  (What if our world is a simulation and we bump into the edge-cases?)
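The cost gap between these two levels of abstraction can be made concrete with a toy example. The sketch below emulates a single 8-bit ADD two ways: at the instruction level (one host operation) and at the gate level (a simulated ripple-carry adder). It’s an illustration of the general idea only, not a fragment of any real 65816 emulator.

```python
# Two ways to "emulate" an 8-bit ADD, illustrating levels of abstraction.

def add_high_level(a, b):
    """Instruction-level emulation: just use the host CPU's adder."""
    return (a + b) & 0xFF

def add_gate_level(a, b):
    """Gate-level emulation: simulate a ripple-carry adder one bit
    at a time, the way the original silicon computes it."""
    result, carry = 0, 0
    for i in range(8):
        x = (a >> i) & 1
        y = (b >> i) & 1
        s = x ^ y ^ carry                    # sum bit (two XOR gates)
        carry = (x & y) | (carry & (x ^ y))  # carry-out (AND/OR gates)
        result |= s << i
    return result
```

Both functions give identical answers, but the gate-level version does roughly eight times the work for one instruction, and a real gate-level model would simulate thousands of transistors per cycle. That is the brute-force tradeoff: perfect fidelity, at enormous computational cost.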

Similarly, the key to simulating a human brain anytime soon does not lie in understanding every chemical pathway in human neurons.  (Although if we did understand neurons at that level, we would have a great head start at simulating a brain.)  Success in simulating a human brain will come from recognizing higher levels of abstraction in neuronal function.  We have known for a very long time that neurons communicate by “firing” electrical signals which are transmitted chemically at synapses.  The details of these behaviors are complex and determined by a great many interdependent chemical systems, but it seems highly likely that we can replicate the key behaviors of human neurons at this level of abstraction without needing to understand all the machinery underneath.  If we can replicate the firing behavior of neurons in sufficient detail, we don’t care what the underlying proteins are doing.  The key question, of course, is what counts as “sufficient detail.”  I expect that is the question researchers who are genuinely interested in reverse-engineering the brain will actually focus their attention on.
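As a concrete example of what “replicating firing behavior” might mean at its very simplest, the classic leaky integrate-and-fire model treats a neuron as nothing but a leaky accumulator with a threshold. The parameter values below are made up purely for illustration; real research models (Hodgkin–Huxley, Izhikevich, and friends) capture far more detail.

```python
# A leaky integrate-and-fire neuron: one of the simplest abstractions
# that captures "firing" without modeling any underlying chemistry.
# Threshold and leak values here are illustrative, not biologically fitted.

def simulate_lif(inputs, threshold=1.0, leak=0.9):
    """Return the time steps at which the neuron fires, given a
    sequence of input currents (abstract units)."""
    v = 0.0              # membrane potential
    spikes = []
    for t, current in enumerate(inputs):
        v = v * leak + current   # integrate input, leak toward rest
        if v >= threshold:       # threshold crossing -> the neuron "fires"
            spikes.append(t)
            v = 0.0              # reset after the spike
    return spikes
```

Even this crude model reproduces the qualitative behavior that matters at this level of abstraction — stronger input makes the neuron fire more often — with no proteins anywhere in sight.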

Once we can simulate the firing behavior of neurons, simulating a brain becomes much more of an engineering problem than a scientific one.  Still, it’s going to be a massive engineering challenge, and gathering the input data will probably require a bunch of new science.  Then the philosophers can debate the meaning of free will if our brains are Turing-complete.

  1. Tomek says:

    [quote]So you could (theoretically) crack open the 65816 CPU chip itself, put it through an electron microscope and understand every transistor it used to interpret the instructions. In this way you could reliably create an emulator which completely replicated every aspect of the SNES. Such a simulation would replicate all of its bugs, timing quirks and everything, but it would work and be extremely expensive to simulate.[/quote]

    Such an emulator exists, although it’s slow. Simulating a brain that way won’t work unless it’s built with lightspeed neurons.

  2. Joe Hunkins says:

    Thanks for the clarification, Leo! I haven’t been to the Bay Area meetings either, but they look interesting, and I was very impressed with Monica Anderson’s presentation at a conference a few years back – she seems to be a key contact for that group.

    Excellent blog insights as usual – keep the AI conversations going. The stakes are *so high* I’m always surprised how few folks are discussing (and funding) the likely advent of self awareness in machines.

  3. leodirac says:

    Hi Joe, thanks for the comment.

    I meant “expensive.” Sorry that sentence was rough and typed quickly. Expensive in terms of the amount of processing power required on the host machine to simulate every logic gate of the original processor. Much more costly than just interpreting the compiled game with the host processor’s instruction set. It’s inexpensive in terms of the amount of science/engineering work to understand what’s actually going on. It’s a brute force approach to solving the problem, i.e. I don’t need to understand what’s going on, I’ll just throw gobs of computing power at the problem.

    Thanks for the link to the Bay Area AI group — I’d like to check that out in person when I’m down there some time.

  4. Joe Hunkins says:

    Leo this is a great post, especially the nice demonstration that reverse engineering a human brain may eliminate many seemingly insurmountable modelling problems of replicating activity at the molecular level. One wonders if the SyNAPSE project (algorithmic intelligence) may run into problems that would be better solved by the Blue Brain approach of reverse engineering a human brain, but luckily both are moving forward and appear to be making fairly good progress.

    Also interesting along these lines is the work of (former Googler) Monica Anderson who is making the case that the routines that drive thought and consciousness – obviously derived from evolutionary pressures – may be much simpler than many assume. An example is the act of a dog “catching a frisbee” which could be modelled using extensive advanced mathematics but is more likely the result of much simpler information processing.

    If she’s right, it’ll probably make it a lot easier to tame the mess we call consciousness because a system would only need to replicate much higher order functionality to become “aware”.

    Typo? “INexpensive”? Such a simulation would replicate all of its bugs, timing quirks and everything, but it would work and be extremely expensive to simulate.

  5. daedalus2u says:

    Ramez, essentially every mitochondrion has the same DNA that codes for the same proteins that perform the same function in every single eukaryote. Plants have a few extra, but animals have essentially all the same mitochondrial DNA with only a handful of exceptions.

    Mitochondria have a couple thousand proteins. Only 13 are coded for by mitochondrial DNA. The same 13 in just about every eukaryote. Why those 13 and only those 13? Good question.

    The “migration” of genes from mitochondria to the nucleus happened pretty fast and a very long time ago (and underwent a code change, because the coding schemes feeding the mitochondrial ribosome and the eukaryote ribosome are different and incompatible). Mitochondrial SOD is pretty clearly bacterially derived, because it has high homology with bacterial SOD. But it is coded only by nuclear DNA in all organisms that have mitochondria.

    If your simulation doesn’t have the fidelity to distinguish between states of sober and drunk, how do you know you have simulated a sober state and not a drunk state? How do you know you have simulated a sane individual and not one that is insane?

  6. Ramez Naam says:

    I’ll dig up some references on genetically derived code being extremely difficult to parse.

    I do think we should be working towards whole brain emulation. My comments aren’t meant to discourage that. I just think Kurzweil is hand waving away a lot of complexity. It’s possible that the complexity he’s ignoring isn’t necessary to simulate the brain, but it’s also possible that it is, and so far as I can tell no one has the evidence to say conclusively how much complexity we need.

    While I look for a citation of genetically derived code being convoluted, here’s an example off the top of my head. We’ve all heard of the mitochondrial genome, right? Mitochondria have their own genes that are responsible for creating the proteins that mitochondria are made of.

    Except… over time, many of those genes have migrated from the mitochondria into the nuclear genome. The modularity of the mitochondria has been violated. Maybe this has happened because DNA in the nucleus has better error correction, or maybe the proteins serve a dual purpose for the rest of the cell, or maybe some other reason. But eukaryotic cells in this case started out with some clear modularity and then short-circuited it to achieve some other goals.

    That’s a relatively mild example. I’m sure there are many others.

  7. Thanks for pointing me to your blog post. I’m @ferrouswheel on twitter, developer on … I’ve been stewing about PZ Myers’ post and the various condescending remarks to non-biologists in the comments (which is strange, since I have a PhD in ecology and did molecular biology in undergrad… along with CS!). However, you’ve essentially covered the points I intended to make, so now I can just point people here.

    I still personally think that simulating the brain is a relatively silly way to create an artificial mind, with a bucket of ethical issues waiting at the fruition point, but it’s certainly one potential route.

  8. SudarshanP says:

    We may never achieve what he says in 10 years, but “faith” in his ideas would push a number of adventurers to strap on wings and try to fly.

    From 1904 to 1969, could anyone have predicted the leap from first flight to the moon landing? From 1969 to 2010, could anyone have predicted the laziness of mankind, or the leap from the transistor to the iPad? Predictions go wrong both ways. IBM’s prediction of a world market for maybe five machines, and Gates’s “640K” memory statement, are also worth remembering.

    We may not even achieve the obvious sometimes, and sometimes we may shock ourselves. Guys like Kurzweil, through their image of authority, inspire young minds. Alexander never reached the ends of the earth. At his time it was possibly a silly endeavor. But today we know the earth is round, and we can go around it and beyond it.

    If you are a geek with a passion to DO something, have “faith” in Kurzweil… You will do stuff that matters, whether Kurzweil is right or not. If you are a government spending tax dollars, pretend Kurzweil does not exist!!!

  9. leodirac says:

    The key question is what is lost from a simulation if you base it on higher levels of abstraction. If it can think and reason but not get drunk, does that make it useless? Absolutely not! In terms of advancing the science, we should start working on the things that are most easily achievable. We should not give up just because there are hints that the easy approach won’t give us everything we hope for.

    The argument that systems designed by genetic algorithms tend not to have levels of encapsulation is a strong one. I’d like to read more about that. Any references? Right away I can see some counter-arguments. In a sense, human physiology has clear layers of encapsulation in that the different body systems (circulatory, immune, etc) operate fairly independently of each other. On a smaller scale, mitochondria and DNA are solid and powerful layers of encapsulation between responsibilities.

    Generalizing, understanding these levels of abstraction is what advances science most quickly. Also, the cleaner the separation between the layers, the easier it is for us to recognize them. The fact that the brain remains so opaque indicates that additional abstraction layers beyond the ones we already understand (neurons, firing, synapses, neurotransmitters) are either subtle or not there, which lends weight to the argument that we can’t rely on them. “Subtle” might mean that the hackers of evolution did not respect them, and crossed over them in order to make expedient changes — exactly what we’d expect if their intrinsic value (beyond design cleanliness) was not strong enough. It’s all very fertile ground for scientific advance.

  10. Maximus says:

    I think Ramez is right. The idea of abstracting brain functions is borrowed from computing — and that’s where Kurzweil seems to go wrong. He is deeply attached to the brain-as-computer metaphor. The truth is, brains don’t work all that much like electronic computers. It appears that the biochemistry of the medium is essential to the characteristics of neurological functioning, down at least to the molecular level.

    Actually, it may go deeper: some have theorized that subatomic quantum-mechanical interactions are key. There’s been some rather loose, New Agey speculation along these lines, but the fact is we have neither proven nor disproven that quantum effects play an important role in mental processes.

    It’s one thing to abstract the functions of software when the general principles of the construction of the hardware are understood — which is the case for any electronic computer. It’s another thing to try to abstract the functions of neural “programs” when we have very little clue about how the underlying wetware works. This, I think, is why Kurzweil is just bloviating.

  11. Michael Tyka says:

    I like the drunk argument. Although you could analyse the effect and make a built-in simulator of it instead. The question is how far do you take that? How realistic is your upload going to be?

    You could make the same argument and say you have to simulate down to the atom level, because otherwise the upload would not experience the same things as a real human under conditions of Alzheimer’s, or poisoning, or radiation, or magnetic fields, etc.

    Which indicates to me that an upload will never (in a sense by definition) be a perfect simulation and one will have to decide where to draw the line.

    Personally, I think we ought to embrace the fact that artificial beings are going to be different, rather than trying so hard to simulate us old-style flesh-powered monkeys.

  12. Ramez Naam says:

    Agreed that the big question in simulating the brain is what level of detail or abstraction is required.

    The answer is that we just don’t know.

    One thing to consider is that algorithms derived via genetic means tend to be far far messier and far more dependent on tiny details than systems that are built top down. The SNES emulator can work at a high level of abstraction because humans intentionally put various layers of abstraction in place (with convenient interfaces) as ways to build computers and software on top of them.

    It’s not clear that evolution has really done this. Genetic Algorithm-derived code tends to be nearly impossible to parse for humans. It’s not nice and neat and modular. Overall behavior can depend upon tiny details.

    So, from that perspective, we don’t know how far we’re going to have to go in humans. If the level is down to individual synapses (and almost everyone in the field with the exception of Kurzweil agrees that that is a bare minimum level of detail) then it is still many orders of magnitude harder than Kurzweil’s estimates.

    Here’s my favorite argument for why you need to go even deeper than neurons and synapses and model individual receptors: Can your upload get drunk? Because to do so it arguably needs a modeling level that is down to receptors.

    To go back to Kurzweil’s argument of the genome’s size – something he misses there is that the genome doesn’t create the brain directly. It does so through an unpacking process that involves the machinery of the cell and a developmental process of interaction with the environment. The genome isn’t even a compressed version of the brain – it’s a seed that grows into the information content of the brain.

  1. […] later on. This means that for any external system to an individual brain (e.g. a brain simulation if such a thing is possible), it is impossible to completely predict the behaviour of that brain… eventually the […]