Making better, faster, cheaper antibodies
Brian Kay calls himself a protein engineer.
“I’m a technologist, an inventor,” says the professor and former head of biological sciences. He is focused on improving a decades-old mainstay of biotechnology that has already spawned billion-dollar blockbuster drugs and diagnostics, yet remains far from optimal.
The body’s immune system produces antibodies to fight invading bacteria and viruses. Scientists use these protein molecules in a wide variety of applications to detect or fish out other biomolecules from a complex mixture, because an antibody binds its target, usually also a protein, very selectively and very tightly.
Clinicians and researchers use antibodies as probes, Kay said, “a marker of where something is and how much.” In the clinical lab, antibodies have long been used to detect molecules produced in pregnancy or leaking from heart cells after a heart attack. Researchers use them to localize a molecule of interest to a particular tissue or organ.
Historically, the antibodies used as lab reagents were produced not by technicians, but by rabbits. An animal injected with the target molecule would produce the cognate antibody in its serum. Over 40 years ago, scientists learned to produce “monoclonal” antibodies in the lab by fusing an antibody-producing immune cell to a proliferating cancer cell.
But even today, Kay and some of his colleagues estimate, fewer than half of the several thousand commercially available antibodies recognize only their specified target molecules. In a commentary last year in the journal Nature, they contended that poorly characterized antibodies result in a waste of materials, time and money in biomedical research that amounts to $350 million annually in the U.S. alone.
Kay is “part of a growing global movement” to improve, characterize and standardize antibodies as reagents.
“We need to be able to make them better, faster, cheaper,” he said, in order to create “a toolset, a platform technology, that will have broad application.”