Monday, November 11, 2013

Nanobots: Using nature to beat nature

Emulating nature.. yet again!

Inspired by the melanophore cell, which controls color in fish, scientists at Oxford University and Warwick University* have developed self-assembling transport networks powered by motors controlled by DNA -- the genetic instructions found in nearly all known life.

The system constructs its own network of tracks spanning tens of micrometers in length -- less than the thickness of a sheet of paper -- and then it transports cargo across the network, and even dismantles the tracks.

How the system works

The system is modeled after melanophore cells, which work by controlling the location of pigment in them. The structure of the cell is a network of spokes like a bicycle wheel, and it works by having motor proteins transport pigment in the network, either concentrating it in the central hub or spreading it throughout the network. Concentrating pigment in the center makes the cells lighter as the surrounding space is left empty and transparent.

The system developed by the Oxford University team is similar, and is built from DNA and a motor protein known as kinesin. Powered by ATP -- the molecule cells use to carry chemical energy -- kinesins move along the micro-tracks carrying control modules made from short strands of DNA.

"DNA is an excellent building block for constructing synthetic molecular systems, as we can program it to do whatever we need," said Adam Wollman, who conducted the research at Oxford University's Department of Physics. "We design the chemical structures of the DNA strands to control how they interact with each other. The shuttles can be used to either carry cargo or deliver signals to tell other shuttles what to do."

Wollman added, "We first use assemblers to arrange the track into 'spokes', triggered by the introduction of ATP. We then send in shuttles with fluorescent green cargo which spread out across the track, covering it evenly. When we add more ATP, the shuttles all cluster in the centre of the track where the spokes meet. Next, we send signal shuttles along the tracks to tell the cargo-carrying shuttles to release the fluorescent cargo into the environment, where it disperses. We can also send shuttles programmed with 'dismantle' signals to the central hub, telling the tracks to break up."

How else could this system be used to cure disease?

This demonstration used fluorescent green dyes as cargo, but the same methods could be applied to other molecules -- including life-saving ones!

More broadly, using DNA to control motor proteins will enable the development of more sophisticated self-assembling systems for a wide variety of applications.

Many diseases are prime targets. Some diseases are the result of an overproduction or underproduction of a molecule within certain cells, which means these diseases could be "solved" by adjusting the amount of that problematic molecule in those cells.

Take, for example, the autoimmune disease known as Wegener's granulomatosis, which affects blood vessels. In it, a class of molecules known as reactive oxygen intermediates is overproduced. The job of these molecules is to destroy foreign particles like harmful bacteria, but when they are overproduced, they begin to indiscriminately destroy the body they are supposed to be protecting.

So imagine a nanobot system that measures and then adjusts the amount of that problematic molecule within the cells that produce it, namely macrophages and neutrophils.

The same approach could "solve" rheumatoid arthritis, a related autoimmune disease in which reactive oxygen intermediates and other toxic molecules are made by overactive macrophages and neutrophils invading the joints. The toxic molecules contribute to inflammation, observed as warmth and swelling, and participate in damage to the joint.

By controlling the amount of these problematic molecules within the macrophages and neutrophils, we could keep them from destroying what the immune system is supposed to be protecting: your body's own tissue.


Such a system might even be able to "solve" HIV/AIDS. HIV works like a trojan horse: it gets into T-cells, a type of white blood cell, and "tricks" each cell into creating a DNA copy of the virus's RNA and splicing that foreign DNA into one of its own 46 chromosomes, turning the cell into a "brainwashed" traitor. The traitor cell then produces copies of the virus until it erupts, and those new viruses go on to repeat the same thing in other T-cells. This continues until there are so few T-cells left that the person's immune system is too debilitated to defend against even the common cold.

Now, what we've recently learned about why some people carry HIV yet live for decades without much trouble is that there are two immune responses to the viral invasion: one effective at stopping the virus, the other much less so. If we discovered the biochemical process that causes the more effective response, and could control the molecules of that process (or the molecules that initiate it), then we could make a person's T-cells fight the virus better. And that's something a nanobot system could possibly do in the near future.


Such a system might even be able to help with cancer -- all types of it. Cancer is a disease in which the part of a cell's genetic code responsible for cell division gets damaged and isn't repaired. When cell division is working properly, the cell goes through its normal life cycle and divides into two cells when it's ready. But when the genes governing that process mutate, the cell divides erratically, and the resulting new cells carry the same mutation, producing still more erratically-dividing cells that no longer fill the function they are supposed to.

Now, cells have a natural defense against mutations in general. They repair DNA constantly, which is necessary because mutations occur so frequently -- on the order of a million times per cell per day. Sometimes these mutations don't get fixed, which is what allows new genes to enter the gene pool, and is what makes genetic evolution possible. Most mutations turn out to be harmless, but sometimes they aren't. The ones that occur in the genes that control cell division are among the bad ones, because they can cause cancer.

Fortunately there is another failsafe system to help catch these types of mistakes. What this system does is to initiate the cell's self-destruct mechanism, also known as apoptosis. Unfortunately, even this system doesn't catch all the mistakes, and it's these cases that result in cancer.

The potential here is in the DNA repair system. To stave off cancer, the rate of DNA repair needs to exceed the rate of mutation. The normal situation is that mutations occur but are corrected faster than they arise. The correcting mechanism is there as a defense against too many mutations, so if it isn't keeping up with the mutation rate, then it's only a matter of time before cells become cancerous.** This is why physicians suggest limiting activities that increase the mutation rate, like sunbathing and smoking cigarettes. Another way would be to increase the rate of DNA repair itself, by adjusting the precursor molecules that drive the repair mechanism, allowing the rate of genetic corrections to beat the rate of mutations and helping to prevent cancer before it even starts. And that's something a nanobot system could possibly do in the near future.
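The rate argument can be put in toy-model form (my own illustration with arbitrary numbers, not a biological simulation): unrepaired damage only accumulates when mutations outpace repairs.

```python
def unrepaired_after(days, mutations_per_day, repairs_per_day):
    """Net unrepaired lesions after `days`, floored at zero each day."""
    backlog = 0
    for _ in range(days):
        backlog = max(0, backlog + mutations_per_day - repairs_per_day)
    return backlog

# Repair keeps up: no backlog ever forms.
print(unrepaired_after(365, 1_000_000, 1_000_100))   # 0
# Repair lags by a mere 0.01%: damage accumulates steadily.
print(unrepaired_after(365, 1_000_000, 999_900))     # 36500
```

The point of the toy model is that the sign of the gap matters far more than its size: any persistent shortfall in repair, however small, compounds over time.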

There's so much potential here. Just imagine how many more lives would be saved! And imagine the cost savings.

One day these diseases will exist only as pictures and stories in history books and museums. 

Using nature to beat nature! That's the power of ideas! That's the power of the human mind!


* The work is published in Nature Nanotechnology and was supported by the Engineering and Physical Sciences Research Council and the Biotechnology and Biological Sciences Research Council.

** Browner, WS; Kahn, AJ; Ziv, E; Reiner, AP; Oshima, J; Cawthon, RM; Hsueh, WC; Cummings, SR. (2004). "The genetics of human longevity." Am J Med 117 (11): 851–60. doi:10.1016/j.amjmed.2004.06.033. PMID 15589490.

Saturday, November 9, 2013

What's plan B for energy? Hint: It involves fungus.

Today the world is heavily dependent on fossil fuels to meet its energy demands, and that demand is increasing fast. And since fossil fuels won't last forever, we need a plan B. Experts say that we have 50 to 100 years of fossil fuel left to extract from the ground. So I wonder if there is substantial progress towards a promising plan B.

Figure 1. World Energy Consumption by Source*
In 2012, 87% of global demand was met with fossil fuel, which is down from 95% in 1965. Sounds like we're making some progress huh? Not according to the measurable end result. During that same period, annual global energy consumption has almost tripled from 200 to 550 exajoules. So global fossil fuel consumption went from 190 to 480 exajoules -- more than doubling our rate of fossil fuel consumption since 1965.
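The arithmetic is easy to check (the exajoule figures are the rounded estimates quoted above):

```python
# The article's figures: total world energy use and the fossil fuel share.
total_1965, share_1965 = 200, 0.95   # exajoules, fossil share in 1965
total_2012, share_2012 = 550, 0.87   # exajoules, fossil share in 2012

fossil_1965 = total_1965 * share_1965   # about 190 EJ
fossil_2012 = total_2012 * share_2012   # about 478.5 EJ

print(fossil_1965, fossil_2012)
print(fossil_2012 / fossil_1965)        # about 2.5: more than double
```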

But running out of fossil fuels is not the only reason we need a plan B.

It's important for us to have energy independence from other countries. Self-sufficiency is important for many reasons, one of which is that we wouldn't need to protect our interests in those countries, since we wouldn't have any!

Another reason for a plan B is to have a fuel that doesn't result in a net addition of carbon dioxide (CO2) to the atmosphere, since that would have much less climate-changing effects than compared to using fossil fuels. That's why fuels made from plants are possible contenders. To clarify, engines running on biofuels also emit CO2 just like those running on fossil fuels, but because plants are the raw material for biofuels, and because they need CO2 to grow, the use of biofuels does not result in a net addition of CO2 to the atmosphere -- instead it just recycles what was already there. The use of fossil fuels, on the other hand, releases carbon that has been stored underground for millions of years, and those emissions do result in a net addition of CO2 into the atmosphere.

Ideal restrictions for a plan B

Besides reasons for needing a plan B, we also have some ideal restrictions. One is that we shouldn't create fuels derived from food, because that creates more competition for the food, driving up demand and therefore the price of that food and everything made from it -- which, for the world's poor, means starvation.

We saw this happen recently with ethanol, which is made from corn. The price of corn skyrocketed in 2012 because the US government forced fuel producers to make a certain percentage of their fuel from biomaterial, and since the only viable biofuel today is corn-based ethanol, that created more competition for corn, resulting in higher corn prices.**

So what's the holdup on plan B? Mainly that we haven't found another fuel that is as cheap to deliver to consumers as fossil fuels are. That's another ideal restriction: there's no point in a fuel that costs more to deliver, because people wouldn't want to buy it. Nor should we force people to buy alternative fuels, or force companies to limit fossil fuel production, because that too results in starvation for the poor -- cheap fuel is necessary for making and transporting food, so getting rid of cheap fuel drives up the price of all food worldwide.

But this may change very soon with a new testing probe created by chemists and colleagues at the Department of Energy's Pacific Northwest National Laboratory (PNNL), which was published in October in the journal Molecular BioSystems.*** The team created a test that should turbocharge their efforts to create a blend of enzymes potent enough to transform tough biomaterials like corn stalks, switchgrass, and wood chips into fuel, cheap enough to compete with fossil fuels.

It's possible to make this fuel today, but the current process makes the resulting biofuel too costly compared to fossil fuels. This new testing probe opens up the possibility of compressing laboratory research that today takes months into days, accelerating the path toward a process that does compete with fossil fuel on price. So we are making huge progress -- it's just not visible to consumers yet.

Introducing the fungus Trichoderma reesei

This is the fungus T. reesei.
Many of today's efforts to create biofuels revolve around the fungus Trichoderma reesei, which introduced itself to US troops during World War II by chewing through their tents in the Pacific theater. Seventy years later, T. reesei is a star in the world of biofuels because of its ability to produce enzymes able to digest molecules like lignocellulose, a long-chain polymer, the tough structural material that holds plants together.

The breakdown of large polymers into smaller ones that can then be further converted to fuel is the final step in the effort to make cheap fuels from plants and other biomaterials. Biomaterials are full of chemical energy stored in carbon bonds, and can be converted into cheap fuel, if only scientists can find a way to cheaply free the compounds that store the energy from lignocellulose.

T. reesei digests biomaterials by cutting through the chemical "wrapping" like a person with scissors cuts through a tightly wrapped ribbon around a gift, freeing the inner contents. The fungus makes dozens of cutting enzymes, each of which cuts different parts of the wrapping. PNNL's Aaron Wright and other chemists are trying to combine and improve upon the most effective enzymes to create a potent chemical cocktail -- a mix of enzymes that accomplishes the task efficiently enough to bring the price of biofuel down to that of fossil fuels.

To assess the effectiveness of mixtures of these enzymes, scientists must either measure the overall performance of the mixture, or they must test the component enzymes one at a time to see how each reacts to different conditions like temperature, pressure and pH.

The testing probe

Wright's team developed a way to measure the activity of each of the ingredients simultaneously, as well as the mixture overall. So instead of needing to run a series of experiments each focusing on a separate enzyme, the team runs one experiment and tracks how each of dozens of enzymes reacts to changing conditions.

A series of experiments detailing the activity of 30 enzymes, for instance, now might be accomplished in a day or two with the new technology, compared to several months using today's methods.
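The scale of the time savings is easy to sketch; the count of test conditions below is a hypothetical number, purely for illustration:

```python
enzymes = 30
conditions = 5                           # hypothetical temperature/pH combinations

serial_runs = enzymes * conditions       # one experiment per enzyme per condition
multiplexed_runs = conditions            # all 30 enzymes tracked in each run

print(serial_runs, multiplexed_runs)     # 150 experiments shrink to 5
```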

The key to the work is a chemical probe the team created to monitor the activity of many enzymes at once. The heart of the system, known as activity-based protein profiling, is a chemical probe that binds to glycoside hydrolases and gives off information indicating the effectiveness of each of those enzymes in real time, effectively allowing scientists to track an entire enzyme cocktail in a single experiment.

So we are definitely making huge progress and it's because of advances in technology.

And it's because of human innovation. It's because of new ideas!

Fossil fuel is great. It's the best fuel we have today. And we definitely need better fuels, but we don't have them yet. So what we need is new ideas for better fuels, not government restriction on existing fuels because that would cause worse problems than it purports to solve.


* Figure 1. World Energy Consumption by Source, Based on Vaclav Smil estimates from Energy Transitions: History, Requirements and Prospects together with BP Statistical Data for 1965 and subsequent.
** To clarify, there was also a US mandate requiring ethanol to be produced as a percentage of the US gasoline supply.  
*** Reference: Lindsey N. Anderson, David E. Culley, Beth A. Hofstad, Lacie M. Chauvigné-Hines, Erika M. Zink, Samuel O. Purvine, Richard D. Smith, Stephen J. Callister, Jon M. Magnuson and Aaron T. Wright, Activity-based protein profiling of secreted cellulolytic enzyme activity dynamics in Trichoderma reesei QM6a, NG14, and RUT-C30, Molecular BioSystems, Oct. 9, 2013, DOI: 10.1039/c3mb70333a.

Thursday, November 7, 2013

A Step Towards Human-type AI

Mankind emulating nature.. again!

Computers are getting ever more powerful, but the human brain is still far more efficient than even our most powerful supercomputers, both in terms of energy consumption and computation. So it makes sense to try to create computers that emulate the human brain. But what parts should we be trying to emulate?

Well one fundamental difference between today's computers and the human brain is the functionality of their smallest components, the computer's transistor and the brain's neuron.

The transistor is based on a binary system, meaning that it can only produce two distinct values, which we dub 0 and 1, while the neuron doesn't have this two-value limitation: it can produce a signal of any strength within a continuous range.

So why does this fundamental difference between today's computer and the brain matter? 

How do computers work?

Let's start with how computers work. All of a computer's functionality is based on a binary system, so if you want to program some math into it, you have to deal with the limitation that it can't exactly represent numbers like 1/3.

Now you might think it sufficient to use an estimated value, say 0.3333, instead of the actual value 1/3, but this doesn't work well: each calculation produces a small error, and when billions of calculations are performed to produce the computer's intended result, those small errors add up to a lot.
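Here's a quick sketch of that accumulation in Python (the `fractions` module supplies the exact target to compare against):

```python
from fractions import Fraction

# Add the truncated value 0.3333 a million times and compare with the
# exact answer 1,000,000 * (1/3).  The per-term truncation error of
# about 0.0000333 compounds into an absolute error of roughly 33.
approx = 0.0
for _ in range(1_000_000):
    approx += 0.3333

exact = float(Fraction(1, 3) * 1_000_000)
error = abs(approx - exact)
print(error)   # roughly 33.3
```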

So you might say that we can use more transistors per number, allowing for more precision, but this doesn't work well either, because it ignores how the error grows in these calculations. Instead of growing linearly -- proportionally with the number of calculations -- the error can grow geometrically, meaning that with every iteration of computation the error multiplies on itself instead of adding to itself. The error can grow to the point of being hundreds of times larger than the actual value, making the estimate completely useless.
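Geometric error growth can be shown with a minimal example (my own illustration): if x is exactly 1/3, the map x -> 10x - 3 leaves it unchanged forever, but the float representation of 1/3 is off by roughly 1e-16, and every iteration multiplies that error by ten:

```python
# If x were exactly 1/3, then 10*x - 3 would equal 1/3 forever.
# But float 1/3 carries a tiny representation error, and each
# iteration multiplies that error by 10 -- geometric growth.
x = 1/3
for _ in range(20):
    x = 10 * x - 3
print(x)   # nowhere near 1/3: the initial error has grown by about 10**20
```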

To solve this error problem, we use a field of mathematics known as numerical analysis -- with roots in Babylonian approximation methods from around 1800-1600 BC -- which deals with creating algorithms that reduce the error in numerical calculations to a manageable level (by making the error grow linearly instead of geometrically).
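A classic textbook example of such an algorithm (a standard numerical-analysis illustration, not something from this article): the integrals I(n) of x^n * e^(x-1) over [0, 1] satisfy I(n) = 1 - n*I(n-1). Run forward, that recurrence multiplies error by n at every step; rearranged as I(n-1) = (1 - I(n))/n and run backward, it divides the error by n instead, so even a crude starting guess converges:

```python
# Backward recurrence: start from a crude guess for I(50) and work down.
# Each step divides the inherited error by n, so it shrinks to nothing.
I = 0.0                      # rough guess; the true I(50) is only about 0.02
for n in range(50, 20, -1):
    I = (1 - I) / n          # I(n-1) computed from I(n)
print(I)                     # approximately 0.0455, the true I(20)
```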

You might say that this would work well, and you'd be right, at least for the applications we see in daily life. For example, a digital TV receives a video feed at 30 frames per second (fps) while delivering video to its screen at, say, 60 fps. The TV must create 30 extra frames per second that didn't come from the feed. It does so with numerical algorithms that interpolate new frames from the existing ones in the video feed. This interpolation produces some error, but because the algorithms do pretty well at minimizing it, most people don't notice the tiny errors that remain, and that's good enough.
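A minimal sketch of the idea (treating a "frame" as a short list of pixel brightness values; real interpolation is far more sophisticated):

```python
# To turn a 30 fps feed into 60 fps output, insert a synthetic frame
# halfway between each pair of real frames.
def midpoint_frame(frame_a, frame_b):
    """Linearly interpolate a frame halfway between two real frames."""
    return [(a + b) / 2 for a, b in zip(frame_a, frame_b)]

f0 = [10, 20, 30]   # real frame at t = 0
f1 = [20, 40, 10]   # real frame 1/30 s later
print(midpoint_frame(f0, f1))   # [15.0, 30.0, 20.0] -- the inserted frame
```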

But consider the new field of computerized facial recognition. Today's systems can take hours to perform a job that humans can do in fractions of a second. So maybe if we built computer chips whose smallest component can generate any values instead of being limited to just 0 and 1, we would be one step closer to creating a human-type artificial intelligence, also known as artificial general intelligence (AGI).

New Invention: Artificial synapse

And this step may have already been achieved. Materials scientists at the Harvard School of Engineering and Applied Sciences (SEAS) have created an artificial synapse that works much like the synapses in the human brain.* And actually it mimics the brain's synapse in more ways than I've already described.

The human brain has upwards of 86 billion neurons, each connected to upwards of 10,000 other neurons. The neurons connect to each other by extending axons to the receiving ends of neurons, called dendrites. At the meeting between an axon and a dendrite is what we call the synapse. These synapses continuously adapt to stimuli, strengthening some connections, and weakening others. On a physiological level this is what we know as learning, and it enables the kind of rapid, highly efficient computational processes that make Siri and Blue Gene seem pretty stupid.

How the synapse works

What this new brain-inspired device does is control the flow of information in a circuit while also physically adapting to changing signals. This physical adaptation mimics what our brains do, which is to improve or weaken an axon's conductance by adding or removing an electrically insulating material known as myelin around the axon.

"The transistor we've demonstrated is really an analog to the synapse in our brains," says co-lead author Jian Shi, a postdoctoral fellow at SEAS. "Each time a neuron initiates an action and another neuron reacts, the synapse between them increases the strength of its connection. And the faster the neurons spike each time, the stronger the synaptic connection. Essentially, it memorizes the action between the neurons."

The artificial device uses oxygen ions to achieve the same plasticity that calcium ions provide in the biological version. When a voltage is applied, these ions slip in and out of the crystal lattice of a thin film of samarium nickelate 80 nanometers thick, about 1/1000th the width of a strand of hair. The film acts as the synaptic channel between two platinum "axon" and "dendrite" terminals. The varying concentration of ions in the nickelate raises or lowers its conductance -- and just as in a human brain synapse, that conductance correlates with the strength of the connection.
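The behavior Shi describes can be caricatured in a few lines (a toy model of my own, not the paper's physics): a "conductance" that strengthens with each paired spike, and strengthens more when the spikes come faster:

```python
def strengthen(conductance, spike_interval_ms, rate=0.5):
    """Boost the connection; shorter spike intervals give a bigger boost."""
    return conductance + rate / spike_interval_ms

slow = fast = 1.0
for _ in range(5):
    slow = strengthen(slow, spike_interval_ms=10.0)  # leisurely spiking
    fast = strengthen(fast, spike_interval_ms=2.0)   # rapid spiking

print(slow, fast)   # the rapidly spiking pair ends up with the stronger synapse
```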

Structurally, the device consists of the nickelate semiconductor sandwiched between two platinum electrodes and adjacent to a small pocket of ionic liquid, which is where the oxygen ions come from. An external circuit multiplexer converts the time delay into a magnitude of voltage, which it applies to the ionic liquid, creating an electric field that drives ions either into or out of the nickelate. The entire device, just a few hundred microns long -- about the width of 10 strands of hair -- is embedded in a silicon chip.

What's next?

So this synaptic device brings us one step closer to emulating the architecture of the human brain. But to be clear, there is another fundamental difference between today's computers and the human brain that I expect will be more difficult to solve. The architecture of today's computers is designed to allow for programming algorithms onto it -- in other words, installing software on it. But the architecture of the human brain allows for programming itself. And more importantly, unlike today's computers, nobody can program a person's brain except that person himself.

One possibility for finding answers here is to understand the neuron's ability to extend its axons to meet other neurons and create connections, as this seems to be the next most fundamental difference between the brain and today's computers.

But to be clear, emulating the architecture of the human brain isn't necessary to achieve human-type AI. We should be able to emulate the human brain's ability to program itself without emulating its physical architecture.** In this view, the key to human-type AI will be found in the field of philosophy, not computer science.


* Shi, Jian, et al. "A correlated nickelate synaptic transistor." Nature Communications (2013): Web. 11 Nov 2013.

** Deutsch, David. "Creative Blocks." Aeon Magazine, 12 Oct 2013: Web. 20 Nov 2013.

Wednesday, November 6, 2013

Predicting that Botox would lead to new pain medication?

Recent research has led to the discovery of a new molecule to alleviate pain, and interestingly it's something that was created from Botox.

Professor Bazbek Davletov together with a team of scientists from 11 research institutes, using a new way of joining and rebuilding molecules, created and characterized a new molecule able to alleviate hypersensitivity to inflammatory pain. By using elements of Clostridium botulinum and Clostridium tetani neurotoxins, commonly known as Botox and tetanus toxin respectively, the scientists developed a molecule with new biomedical properties without unwanted toxic effects.

Professor Davletov said, "Currently painkillers relieve lingering pain only temporarily and often have unwanted side effects. A single injection of the new molecule at the site of pain could potentially relieve pain for many months in humans and this now needs to be tested. We hope that the engineered molecule could improve the quality of life for those people who suffer from chronic pain. We are now negotiating transfer of the technology to a major pharmaceutical company."

How awesome is that? A technology in one industry leads to a new technology in a completely different and much more important industry!

Now I'm no advocate of using Botox, but I sure am an advocate of capitalism! People should produce and buy to their heart's content, regardless of whether some people think that using certain products is not in anyone's best interest.

The truth is that we can't reasonably predict which technologies today will lead to which technologies tomorrow.

Consider how this would work in a communist society. Botox could never be invented, since it wouldn't be seen as in the best interest of the collective. The state would put a restriction on the research and production of cosmetic enhancing technologies like Botox. As a consequence, this involuntary restriction on free trade acts as a barrier to the creation of this new awesome life-enhancing technology.

And analogous to technologies, we can't reasonably predict which goals today will lead to which goals tomorrow.

Consider how this would work in a traditional (authoritarian) family. The father, if he believes that Botox is not in his teenagers' best interest, would not allow his teenagers to buy Botox, but this is wrong for the same reason as with communism. Forcing a choice on a person won't help him learn why that choice is in his best interest. Only persuasion can work. A person will change his mind, about what his goals should be, either voluntarily, or not at all. And forcing a choice on him is actually a barrier to him learning why that choice is in his best interest. So force is a barrier to changing one's goals.

How will a person learn to replace his current goals with better ones if his parents don't allow him to have his own goals? It doesn't work. And that's why freedom is the only way it can work.

Governments often do not know what is best for their citizens, just as parents often do not know what is best for their children. So governments should not set themselves up as authorities controlling the actions of their citizens, just as parents should not do that to their children.

Capitalism is the only way that works! Freedom is the only way!

So yay capitalism! Yay freedom!


For more on capitalism, see Elliot Temple's essay on capitalism.

For more on parenting, see my essay on parenting.

Paper reference: "Synthetic Self-Assembling Clostridial Chimera for Modulation of Sensory Functions," Bioconjugate Chemistry, DOI: 10.1021/bc4003103.

Tuesday, November 5, 2013

Can gaming be good for science? Yes it can!

Recently, computation research using the power of off-the-shelf computer gaming technology has unveiled secrets in the human carbohydrate bar-code.

BBSRC-funded researchers at the University of Manchester's Institute of Biotechnology used the gaming technology to capture previously unobservable atomic movements. The research is helping to chart one of nature's most complex entities known as glycomes -- the entire complement of carbohydrates within a cell.

This novel repurposing of existing technology provides a new way to learn about these vital biomolecules, which play a role in everything from neuronal development, inflammation and cell structure, to disease pathology and blood clotting.

Understanding the shapes of major biological molecules has revolutionized industries like drug development and medical diagnostics, but the shape of complex carbohydrates has been largely ignored, leaving unseen a huge area of opportunity. And gaming technology is helping us see the previously unseen.

The research provides a new view of these biochemical barcodes and presents new opportunities in the science of carbohydrates, such as designing drugs or biomaterials that mimic carbohydrate shape.

Dr. Ben Sattelle from the Faculty of Life Sciences said, "Carbohydrate activity stems from 3D-shape, but the link between carbohydrate sequence and function remains unclear. Sequence-function relationships are rapidly being deciphered and it is now essential to be able to interpret these data in terms of molecular 3D-structure, as has been the case for proteins and the DNA double-helix."

He added, "By using technology designed for computer games, we have been able to investigate the previously unseen movements of carbohydrates at an atomic scale and over longer timescales than before. The insights relate carbohydrate sequence to molecular shape and provide information that will be vital for many industries."

Modeling carbohydrate motions in water is computationally demanding. Previously, researchers using conventional software on central processing unit (CPU) based computers were restricted to simulating nanosecond timescales -- a few billionths of a second. But by using the extra computational power of graphics processing units (GPUs), commonly used in games to produce moving images, the team from Manchester achieved simulations ranging from one microsecond (roughly the duration of a strobe flash) to twenty-five microseconds -- a few millionths of a second. So we are now able to "watch" these molecules on a timescale 25,000 times longer than was previously possible!

And to think that this is all possible because of a technology that was created to meet a consumer demand in a completely different industry -- entertainment.

Now this is not uncommon. It happens all the time. Technologies created to meet the demands of one industry are often repurposed to meet the demands of other industries. It's all part of how technology evolves.

And this applies not just to technologies, but also to ways of thinking. We repurpose technologies for other fields analogously to how we repurpose ways of thinking for other goals.

Consider how we play games. A child whose goal is to beat a game, or win a gaming competition, is learning techniques to excel at that game that he'll end up using later for other goals in his life. One thing that gamers learn is a better attitude towards mistakes. A gamer learns that mistakes are a necessary part of gaming, and so he welcomes the knowledge that he made a mistake. He learns that the important thing is to try to learn from his mistakes, to identify them and to create ways of preventing himself from making those mistakes going forward. This is a way of thinking that leads to a lot of progress, and more importantly, sustained progress!

Having a bad attitude towards mistakes is a barrier to progress. So gamers repurpose this way of thinking to other goals in their lives analogously to how these researchers have repurposed computer gaming technology for the field of biochemistry.

So progress in one field (or goal) often leads to progress in other fields (or goals).

That's the power of human innovation! That's the power of the human mind!


Author: Rami Rustom

Monday, November 4, 2013

We're Not in Medieval Times Anymore!

Lasers that heal? Sounds like sci-fi, right? Well it is, or at least it started that way in the minds of the creators of Star Trek IV: The Voyage Home.

The film sees a team of time travelers going back to the past -- that's our time -- to bring back a pair of humpback whales in an effort to communicate with an alien probe that is threatening to destroy Earth. At one point Dr. McCoy -- who is only familiar with twenty-third century medicine -- is shocked to find a twentieth century physician about to drill into Chekov's head, and he exclaims, "[We're] dealing with Medievalism here."

I remember thinking that that kind of technology wouldn't be invented until long after I'm dead, but I was wrong. Still, I predicted better than the many people who assume that "crazy" sci-fi ideas like that will never become reality.

So what is this new technology? It's a laser that could cure certain brain diseases by selectively destroying the proteins that cause them. And lasers can do their work without anyone drilling into the patient's head!

Researchers at Chalmers University of Technology in Sweden, together with researchers at the Polish Wroclaw University of Technology, have made a discovery that may lead to the curing of many brain diseases through phototherapy.

The researchers discovered that it is possible to distinguish the aggregations of proteins believed to cause the diseases from the well-functioning proteins in the body. And they did it using a multi-photon laser technique.

These diseases arise when amyloid beta proteins aggregate in large amounts and begin to inhibit proper cellular processes. Different proteins create different kinds of amyloids, but the amyloids generally share the same structure. This makes them different from the well-functioning proteins in the body, which is what allows the multi-photon laser technique to distinguish them.

"Nobody has talked about using only light to treat these diseases until now. This is a totally new approach and we believe that this might become a breakthrough in the research of diseases such as Alzheimer's, Parkinson's and Creutzfeldt-Jakob disease [the so-called 'mad cow' disease]. We have found a totally new way of discovering these structures using just laser light", says Piotr Hanczyc at Chalmers University of Technology.

The theory is that if the protein aggregates are removed, then the disease is cured. The problem until now has been to detect and remove the harmful proteins.

The researchers hope that photoacoustic therapy, which is already used for tomography -- and which takes advantage of the photoacoustic effect discovered in 1880 by Alexander Graham Bell -- may be used to remove the malfunctioning proteins. Today amyloid protein aggregates are treated with chemicals, both for detection as well as removal. These chemicals are highly toxic and so they are harmful for those treated.

But with this new laser technology, the chemical treatment would be unnecessary. Nor would drilling in heads be necessary for removing the harmful proteins. Due to this discovery it might be possible to remove the harmful protein without even touching the surrounding tissue.

No surgery means far less cost and higher success rates.

Reducing cost and saving lives! Thanks to technology! Thanks to human innovation!

Sunday, November 3, 2013

Why the gender gap on physics assessments?

Researchers are stuck on the question of why women consistently score lower than men on assessments of conceptual understanding of physics. Previous studies claimed to have found the "smoking gun" that would account for the differences, but the authors of a new synthesis of that past research argue that no consistent pattern can be found.

"These tests have been very important in the history of physics education reform," said S. B. McKagan, who co-authored the new analysis. Past studies have shown that students in classrooms using interactive techniques get significantly higher scores on these tests than do students in more traditional lecture settings; "these results have inspired a lot of people to change the way that they teach," said McKagan. But several studies had also reported that women's scores on these tests are typically lower than men's. Lead author Madsen said, "We set out to determine whether there is a gender gap on these concept inventories, and if so, what causes it."

But what problem are these researchers arguing over anyway? Do they think that men and women, on average, should understand physics equally? This raises another question: do they think these tests accurately measure understanding of physics? To keep this essay short, I'll ignore the question of whether the tests measure what their creators claim they measure. That still leaves the question: why should men and women understand physics equally?

To illustrate that this is the wrong question, consider two individuals, one that loves physics and doesn't particularly like art and another that isn't fond of physics but loves art. Should these two people be expected to learn physics equally? Should they be expected to learn how to draw equally?

Interest drives learning. Everybody knows this, yet somehow these researchers ignore it. When somebody is interested in a subject, he spends a lot of time thinking about it, enjoyably. But without interest, a person won't think much about the subject, and trying to do so in spite of the lack of interest is not enjoyable at all. It's painstaking. And that's precisely why lack of interest is a barrier to learning.

So why are these researchers thinking that lumping all women together and all men together is the correct way to figure out what's going on here? Do they think that interest in physics by women on average should be equal to interest in physics by men on average? Does that even make sense? I think these researchers are completely ignoring the concept that interest drives learning, and that differences in interest cause differences in learning. So then the question is: why are there differences in interest between men and women?

To be clear, even if we restrict the study only to people who claim to be interested in physics, that doesn't mean that they are all interested equally. Interest is not a 0 or 1 phenomenon. It's not "I am interested" or "I'm not interested." It comes in degrees. And more importantly, interest is not associated with fields of study; it's associated with specific ideas. So a person might be interested in how mechanical things work, but not interested in how light works, both of which are physics. This means that self-proclamations of interest in physics are not useful indicators of interest in specific ideas within the field of physics.

Further, a person's interests compete with his other interests. So a person might be interested in seemingly everything, but because he's interested in so many things, he finds it tough to focus on any one thing for very long. But as soon as this person satisfies some of his other interests, he might come full circle back to older interests that he hasn't focused on in many years.

Back to the question of differences in interest between men and women, I'd like to consider a more primary question: why make this arbitrary division by gender? Why not divide by race? Should we expect that all races on average should have equal interest in physics? What about dividing by culture? I suspect that these researchers would think that dividing by race or culture wouldn't make sense because there are huge differences in background knowledge among the groups. But then that raises the question: why should we think that men and women in US schools, who theoretically receive the same educational opportunities on average, should have the same background knowledge? Well, men and women don't share the same background knowledge. Boys and girls are raised differently by their parents, and society treats them differently, so girls grow up with different background knowledge than boys. And it's these differences in background knowledge that result in differences in interest, which then result in differences in learning.

Just consider the two hypothetical individuals from before. One loves physics and the other loves art. The question is: Why do they love different things? Is it that there are differences in genes between them that cause differences in interest? Or are the interests learned?

Even if genes are a factor, isn't it possible for the X chromosome and the Y chromosome to contain some genes that affect interest in things? And if this is the case, then what would the researchers be looking for exactly? If it's possible that men's greater interest in physics is due to a gene on the Y chromosome, which women do not have, then whatever those researchers are looking for could easily be drowned out by this gender-specific gene difference. So even if some data "emerged" from the analysis, there would be no way to know whether that identified variable is the cause or whether the cause is actually a gender-specific gene. So why do the researchers think that the answer they are looking for would emerge from the analysis?

If it's a genetics issue, then the researchers are looking in the wrong places. And if it's not a genetics issue, well then we should be talking about the cultural differences between men and women as a factor in why they learn physics differently. And again, analogous to the genetic question, if it's a cultural issue, then researchers are looking in the wrong places again. Women and men are part of different subcultures.

So what's going on here? Why are these researchers looking in the wrong places? What are they doing wrong? Well this has already been answered decades ago by Karl Popper, a philosopher of science. He explained that many scientists aren't doing science. Doing science means creating a testable theory, and then testing that theory. What these researchers are doing instead is sifting through data looking for theories to "emerge", and never actually doing any tests that could possibly rule out a theory. It's a problem of scientific methodology.

Popper taught us that not all science is being done right. We need to be selective in figuring out what is good science and what just looks like science. To address this dilemma, he created his Line of Demarcation to separate science from non-science. In summary: a theory is scientific if and only if it can (in principle) be ruled out by experiment. So that means that if a theory cannot be ruled out by experiment, then it's not scientific -- instead it is scientism, stuff that looks like science but isn't because there is no way to rule out (by experiment) the theories being hypothesized.

So the way to test whether or not a theory is scientific is to ask yourself, "what would it take to make this theory false?" If the answer is nothing, then it isn't science.

These researchers are assuming that there is a scientifically-measurable difference between men and women that should account for the dissimilar testing results. They are sifting through data hoping for the theory (the "smoking gun") to jump out at them. But this is backwards. Where is the part about creating an experiment that could rule out the theory? It's not there. They aren't even thinking about it. This is not science. It is scientism.

For more on the Line of Demarcation, see _Conjectures and Refutations_ (Chapter 11: The Demarcation Between Science and Metaphysics), by Karl Popper, or see the more recent and easier to read _The Beginning of Infinity_ (Chapter 1: The Reach of Explanations), by David Deutsch.



Reference: A. Madsen, S. B. McKagan, and E. C. Sayre, "The gender gap on concept inventories in physics: what is consistent, what is inconsistent, and what factors influence the gap?", Physical Review Special Topics -- Physics Education Research.