Laboratory robotics and high-throughput research

This essay was written by Oliver de Peyer and was first published in the 2008 Mill Hill Essays.

Every scientist has had that moment of what could be termed reverse epiphany; the moment when you realise that your dream experiment, the one that will bring you world adulation and success, actually involves hundreds of little test tubes of colourless liquid. And they’ve all got to be added to each other in the right order, the right amounts, and often at low temperatures. As you shiver in a walk-in fridge with your coat on, your pipette in hand, carefully syringing drops of colourless something from one container to another, it has occurred to many of us – is there a better way?

Go back in time one hundred and eighty years or so to the origins of computing. If John Herschel and Charles Babbage had been molecular biologists instead of mathematicians, Herschel would have inspired Babbage to invent the laboratory robot instead of the computer. All those little test tubes rolling around on his bench. Dozens and dozens of the little blighters. “I wish to God I could perform this pipetting by the power of steam!” Herschel said – well, almost. The point is, science has been here before. Multiple tasks to do, very fiddly, and very boring. Surely there are better things to do with our time? Like thinking up the experiments in the first place and analysing the results for instance, instead of wasting so much time just pipetting them?

Welcome to the world of laboratory robotics. Why not use a robot arm to move the test tubes around, and other robots to pipette the liquid into and out of them? It sounds easy enough and you can dramatically increase the number of experiments going through your laboratory. Hence the alternative name for this endeavour: high throughput research.

Major pharmaceutical companies such as GSK and Pfizer are at the vanguard of high throughput research. They have vast refrigerated warehouses of chemicals and equipment, often with no human access at all. Mighty robot arms trundle along rails and shafts, busily picking and stacking samples. But does it work? Well, yes, some of the time. There are currently two streams of high throughput work: one is High Throughput Screening and the other is what I would call High Throughput Research.

Traditionally, drug development companies have developed new drugs by collecting large libraries of potential drugs and then screening them one by one – in other words, pipetting them onto a biological sample of interest. So, for instance, if you had a cancer cell line growing in a test tube, you could test your library until you found a compound that made the cancer cells die. This is High Throughput Screening, or HTS.

The problem is that there are practical limits as to which chemical compounds can be useful as drugs – for instance they have to be soluble in the bloodstream. In fact, there are arguably only about 30,000 compounds that can be used as the basis for new drugs. Pharmaceutical companies try to add different side chains and so forth to these, and indeed some HTS libraries now contain more than a quarter of a million compounds. Starkly however, if you don’t get a “hit” from your basic library then you are probably flogging a dead horse; all the easy-to-find drugs have already been discovered!

Contrast this with other major drugs recently developed, such as Herceptin. This is an antibody that targets the HER2 protein that is overabundant in cells in some breast cancers. Herceptin was developed as a result of careful research, not blindly screening a compound library. Researchers carefully studied HER2, identified its key role in breast cancer, and then found a way to make an antibody to attack it.

The important thing to note is that it is an antibody; a huge, specialised protein from the immune system, dedicated to rooting out and destroying its target – in this case, HER2. You’d never, ever find it by doing conventional high throughput screening.

Drug discovery will come to rely increasingly on Herceptin-type drugs instead of compounds found in high throughput screening. Robots can be used to do the research to find the next Herceptin – high throughput research, as opposed to high throughput screening. In other words, you can do very many different experiments, rather than doing the same one over and over again – as would be the case for a compound screen.

With a suitably flexible robot, you can do just about any laboratory experiment – you can analyse genes, grow cells, crystallise proteins – all on the same equipment. Laboratories of the future will have a higher ratio of robots to researchers, allowing people to conduct large numbers of experiments automatically. In this way, researchers can utilise their most important asset – their brains – instead of having to pipette all the time. A robot can pipette much quicker, and it can run in batch mode – researchers can drop off their samples, and the robot then sets up the experiments one by one – working all through the night if need be.

Of course, it sounds great in principle but in practice there are many pitfalls. Scientists need an entirely new mindset in the robot-based laboratory, in order to guide their experiments around the crevasses of robotic reverse epiphany. For instance, as much as possible should be automated to avoid creating a manual bottleneck or you may end up spending hours doing the one non-automatable step by hand.

You must also stop thinking in terms of individual experiments and instead think of performing ninety-six at once. Researchers use a standard microplate format, a microplate being a rack of little well-shaped containers, arranged in eight rows of twelve, in a moulded piece of plastic about the size of the palm of your hand. Many laboratory robots are set up to handle only microplates, so it is a bit of a waste to do only one experiment and use only one of the ninety-six wells. You need to find ninety-five other useful experiments to run at the same time.
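If it helps to see what thinking in plates rather than tubes looks like in practice, here is a minimal sketch in Python – with entirely made-up sample names – that maps each of the ninety-six wells of a standard eight-by-twelve plate to an experiment:

```python
# Minimal sketch: mapping between well names (A1..H12) and positions on a
# standard 96-well microplate, as a way of planning 96 experiments at once.
# The 8 x 12 layout is the standard format; the sample list is illustrative.

import string

ROWS, COLS = 8, 12  # standard 96-well layout

def well_name(index):
    """Convert a 0-based well index (0..95, filled row by row) to a name like 'A1'."""
    row, col = divmod(index, COLS)
    return f"{string.ascii_uppercase[row]}{col + 1}"

# Plan a full plate: pair each of 96 hypothetical samples with a well.
samples = [f"sample_{i:02d}" for i in range(ROWS * COLS)]
plate_layout = {well_name(i): s for i, s in enumerate(samples)}

print(plate_layout["A1"], plate_layout["H12"])  # first and last wells on the plate
```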

Robots can pipette easily and accurately and many different kits are available in microplate form to do various interesting experiments. Just about any molecular biology experiment can be carried out on a microplate, with special chemicals and filters designed into the wells themselves. There are some very clever microplate designers, some of whom probably find it hard to leave their work in the laboratory and have eight by twelve spice racks at home in their kitchen as well.

Following on from this though, the hardware is as important as the biochemistry. Mechanical reliability is very important. If your robot arm drops a microplate, you could lose days of work and thousands of pounds’ worth of samples. This is distressing for the researcher who is used to dropping, say, only one test tube at a time, which can usually be thrown in the bin before the boss notices.

You also need to have what I call “ownership of the code” – if you pay somebody else to program your robot for you, then you risk having a lot of mysterious software that you don’t fully understand and can’t change. Being able to program your own robots means you can make them do whatever you want and makes fixing problems easier. With so many samples running on your robot, you must also have a good computer database to store and analyse all the data. Imagine, say, a hundred microplates going through your robot in one day and all the results – that’s almost ten thousand individual wells – spewing out on a printer. Then imagine having to go through all those reams and reams of paper for the one well that you’re looking for. Believe me, databases are easier. But again, it’s not the sort of thing you’d ever have needed before when pipetting your test tubes one by one the old-fashioned way.
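As a rough illustration of what such a database could look like at its very simplest, here is a hedged sketch using Python’s built-in sqlite3 module; the table layout, column names and “hit” threshold are invented for illustration, not taken from any particular robot’s software:

```python
# A minimal sketch of a results database for plate-based experiments,
# using Python's built-in sqlite3. Table and column names are illustrative.

import sqlite3

conn = sqlite3.connect("screen_results.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS well_results (
        plate_id  TEXT,
        well      TEXT,   -- e.g. 'A1'
        sample    TEXT,
        readout   REAL,   -- whatever the assay measures
        PRIMARY KEY (plate_id, well)
    )
""")

# Store one (hypothetical) reading...
conn.execute("INSERT OR REPLACE INTO well_results VALUES (?, ?, ?, ?)",
             ("plate_042", "C7", "sample_31", 0.83))
conn.commit()

# ...and find the one well you care about without leafing through printouts.
hits = conn.execute(
    "SELECT plate_id, well, sample FROM well_results WHERE readout > 0.8"
).fetchall()
print(hits)
```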

So, the ideal high throughput researcher is a skilled biochemist, a robot programmer, a database programmer and also has some engineering skills! Which is just as well because everything’s about to get even more complex in the world of lab automation.

At some point, a particularly cunning microplate designer must have thought: Why stop at 96 wells? If we make the wells half the size and twice as close together, then you can have four times as many, or 384 wells on a microplate. Do the same again, and you have 1536 wells. In fact, people have even tried one further round of the same trick to make a 6144-well plate, but by then the wells are so incredibly tiny that it is very difficult to pipette into and out of them, even with a robot.
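The well counts follow directly from that halving trick – halving the well size and pitch in both directions quadruples the count each time. A few lines of Python make the progression explicit:

```python
# Worked arithmetic for the well-density scaling described above: each round
# halves the well size and spacing in both plate dimensions, so the well
# count goes up by a factor of four.

wells = 96
for generation in range(4):
    print(f"{wells} wells per plate")
    wells *= 4  # twice as dense in each of the two plate dimensions

# Prints 96, 384, 1536, 6144 - the last being the format that is already
# too small to pipette into reliably, even with a robot.
```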

So, why not do away with the robot altogether? The laboratory robots described are doing nothing except pipetting liquid from one well to another. All the interesting stuff is actually going on in the wells themselves. If so many different molecular biology experiments have already been designed for microplate wells, why not link the wells together so that, for instance, one chemical reaction occurs in one well, and then this well’s content is passed to another well where another reaction occurs, and so forth? This way, you no longer need a pipetting robot to move samples between wells, so the entire experiment can become much, much smaller – with potentially hundreds of thousands of microscopic wells fitted into a square centimetre. A further ramification is that, because the wells can be so small, the amounts of material required – cells, genes, proteins, whatever – are very much smaller, and cheaper, as well.

This is exactly equivalent to looking at the difference between a soldered electronic circuit board and the same circuit made as a tiny silicon chip instead. Just as the silicon chip ushered in the era of microelectronics, so is lab automation now turning to the new field of microfluidics. In many cases, even the fabrication processes are the same; for instance, silicon chips can be made with tiny channels etched into them, along which fluids – and even living cells – can flow. Other materials used in microfluidic devices include elastomers, rubber-like materials which again can be manufactured with microscopic wells and channels within them. Just as we have got used to talking about the silicon chip, so now researchers are beginning to refer to microfluidic chips, or even the Holy Grail, the “lab-on-a-chip”.

Microfluidics brings with it a whole new set of challenges. The volumes and dimensions involved are so small that conventional physics no longer applies – or at least not the “common sense” physics we are used to in the world around us. A conventional pipetting robot might be capable of pipetting a few microlitres – that’s a few thousandths of a millilitre. Microfluidic chips might have wells that only hold a few nanolitres – a thousand times less! And all this in a well perhaps only a hundred micrometres across – a tenth of a millimetre.

So, for instance, if you want to grow living cells in a microfluidic chip, then they are likely to be quite constrained by the walls of the well around them – many cells are several micrometres across and so will almost completely fill the well. There is no guarantee that the cell will function normally since many cells actually grow freely suspended in liquid or conversely are completely surrounded by other cells.

Pumping fluids around a microfluidic chip is a headache as well. You can’t make a little propeller to whirl around, for instance. However, if you use elastomers, you can make deformable ridges that protrude into the channel when pressurised. If you have several of these in series along a channel, then you can deform the ridges in a wave-like pattern, one after another, hence pushing fluids along the channel – this is actually similar to how an inkjet printer works. With silicon devices, you can also coax droplets along channels using varying electrostatic fields, the circuitry for which is manufactured into the chip below the etched channels – a combined microfluidic and microelectronic chip. Maybe the microelectronic route will be more elegant in the future, since otherwise you will need various pumps and pressure lines clustered around the chip to pressurise the various ridges and so forth. Unless some way can be found to miniaturise all this, it makes for a distinctly non-microfluidic set-up, and is in fact about as large as a “normal” lab robot – even though the microfluidic chip is only a tiny component at its centre. But let’s put that down to growing pains for now, like the joke about the man with a digital watch who needed to pull the batteries for it behind him on a handcart.
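For readers who like to see the wave written down, here is a toy sketch of the kind of actuation sequence described above, with three ridges pressurised in turn so that the closed region travels along the channel. The set_pressure function is a hypothetical placeholder for whatever valve controller actually drives the external pressure lines:

```python
# Toy sketch of wave-like (peristaltic) actuation of three elastomer ridges
# along a channel. Pressurising them in this repeating sequence makes the
# pinched-off region travel in one direction, dragging fluid with it.
# set_pressure() is a placeholder for real hardware I/O, not a real API.

import time

# Each step states which of the three ridges are pressed down (True = closed).
PERISTALTIC_SEQUENCE = [
    (True,  False, False),
    (True,  True,  False),
    (False, True,  True),
    (False, False, True),
]

def set_pressure(ridge_index, closed):
    # Placeholder: a real controller would switch a solenoid valve here.
    print(f"ridge {ridge_index}: {'closed' if closed else 'open'}")

def pump(cycles=2, step_time=0.01):
    for _ in range(cycles):
        for step in PERISTALTIC_SEQUENCE:
            for i, closed in enumerate(step):
                set_pressure(i, closed)
            time.sleep(step_time)  # give the elastomer time to deform

pump()
```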

It’s one thing to get the fluid flowing: the next challenge is mixing it, if you want to combine two chemicals to set up a reaction, for instance. A microfluidic chip is too small to be subject to effects like turbulence or convection, which are only noticeable on a larger scale. So if you direct two streams of different chemicals into a well to try and mix them, they will, left to themselves, form two discrete unmixed areas in your well instead. There will only be a very gradual mixing of the two chemicals at the boundary between them, due to diffusion alone. Actually, this can be used to your advantage if diffusion is all you want; for instance, a leading application is for crystallising proteins. Good quality protein crystals are invaluable since they can be used in a technique called X-ray diffraction that reveals the precise atomic structure of the protein involved. It can be very difficult to find exactly the right chemical conditions to coax a protein to crystallise. Microfluidic chips offer a way forward, since the slow diffusion between two chemicals in a well provides a gradient of differing concentrations of each either side of the boundary. Maybe somewhere within this gradient of concentrations is exactly the right combination to crystallise your protein of interest.
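To put rough numbers on how gradual diffusion-only mixing is, the usual back-of-the-envelope relation is t ≈ L²/2D. The sketch below assumes an illustrative diffusion coefficient for a protein in water, so the timescales are indicative only; the point is the quadratic dependence on distance:

```python
# Rough scaling estimate for diffusion-only mixing, t ~ L^2 / (2D).
# D below is an assumed, typical order-of-magnitude value for a protein in
# water, used purely for illustration.

D_PROTEIN = 1e-10  # m^2/s, assumed illustrative value

def diffusion_time(distance_m, D=D_PROTEIN):
    """Characteristic time for molecules to diffuse a given distance."""
    return distance_m ** 2 / (2 * D)

for distance_um in (10, 100, 1000):
    t = diffusion_time(distance_um * 1e-6)
    print(f"{distance_um:>5} um  ->  ~{t:.1f} s")

# Because the time grows with the square of the distance, a concentration
# gradient develops gently over seconds to minutes across a microscopic
# interface, rather than the hours or days it would take across a test tube.
```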

More conventional mixing can, however, be achieved with some imagination. For instance, there is an elastomer chip where samples can be made to chase their own tails – essentially pumped around and around in a loop by wave-like ridges until the front end of the slug of liquid being pumped catches up with its own rear, mixing the sample effectively. This is part of a very elaborate microfluidic device that checks whether proteins are soluble in particular chemicals. It mixes proteins and chemicals together as they rush around the loop, and if the protein is insoluble then the loop in the chip goes cloudy when viewed under a microscope. Wave-like ridge pumps on feeder channels are activated to flush out the loop, and then used to pump fresh combinations of proteins and chemicals into the loop to be mixed, the whole cycle taking a second or so each time. If you find that a protein is soluble in a particular solution of chemicals, you could then use this for a crystallisation experiment, for instance.

Microfluidics is still in its infancy, limited by commercial factors. In a crowded biochemical marketplace, you need to commercialise an invention quickly to sell it to researchers. So, if someone invents a chip for crystallisation, that is about as far as they go. If I wanted to, for instance, stage fifteen different experiments one after the other, then what I’d like to do is design a chip with fifteen different microfluidic wells, linked together by appropriate valves and pumps, with suitable ancillary wells feeding in different chemicals and solvents and so forth as necessary. Reaction 1 would take place in chamber 1, and this would then be pumped to chamber 2 for reaction 2, and then to chamber 3 and so on. To make this a reality, research institutes will probably need their own microfluidics fabrication labs in the future.

Another strong reason for going for fully integrated, lab-on-a-chip devices is that current devices are not really microfluidic once one considers the comparatively huge size of the plumbing needed to support them. For instance, you might have a 10 nanolitre well on a chip, but it is probably perched awkwardly on the end of a small piece of rubber tubing that you have to pipette into in the conventional way. You might only need to drip, say, a microlitre of sample into the tubing to feed the microfluidic well. Still, you have used up a microlitre of your valuable sample, be it a promising protein or whatever, to feed only 10 nanolitres into the microfluidic well. In other words, you’ve wasted 990 nanolitres (a thousand nanolitres to a microlitre, remember) or 99% of your sample – it’s lost in the rubber tube as “dead volume”. You might as well have done your experiment on a “normal” lab robot, which is happy handling microlitre samples all the time. Of course, if you’re clever, you will try to use some of that 990 nanolitres for something else – maybe several hundred other microfluidic wells doing other interesting experiments with the same protein – but then you start hitting the problem I described in the last paragraph: most chips just aren’t this complex yet.
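The dead-volume arithmetic is easy to check; spelled out in a few lines of Python, using the same example figures as above:

```python
# The dead-volume arithmetic from the text: 1 microlitre is pipetted into
# the feeder tubing, but only 10 nanolitres reaches the microfluidic well.

pipetted_nl = 1000   # 1 microlitre = 1000 nanolitres
used_nl = 10         # volume that actually reaches the well on the chip

dead_volume_nl = pipetted_nl - used_nl
wasted_fraction = dead_volume_nl / pipetted_nl

print(f"dead volume: {dead_volume_nl} nl ({wasted_fraction:.0%} of the sample)")
# -> dead volume: 990 nl (99% of the sample)
```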

In the future, researchers will be able to use the microfluidic equivalents of Ford’s Model T Factory, which took sand, coal, iron ore and water in at one end and rolled Model T cars out the other. How about a microfluidic chip which you load up with a few microlitres each of the four DNA nucleotides for instance? It could then spend all day synthesizing different genes. Of course, to round it off, it could then tap off the synthesized genes and put them straight into living cells to make genetically modified organisms – all in the same chip. And if there was some way of analyzing the cells on the same chip as well…. You get the idea.

Soon laboratory robotics and microfluidics will be routine. We will wonder how we ever did without them.
