Camera chip provides superfine 3-D resolution

Imagine you need to have an almost exact copy of an object. Now imagine that you can just pull your smartphone out of your pocket, take a snapshot with its integrated 3-D imager, send it to your 3-D printer and, within minutes, you have reproduced a replica accurate to within microns of the original object. This feat may soon be possible because of a new, tiny high-resolution 3-D imager developed at Caltech.

Any time you want to make an exact copy of an object with a 3-D printer, the first step is to produce a high-resolution scan of the object with a 3-D camera that measures its height, width, and depth. Such 3-D imaging has been around for decades, but the most sensitive systems generally are too large and expensive to be used in consumer applications.

A cheap, compact yet highly accurate new device known as a nanophotonic coherent imager (NCI) promises to change that. Using an inexpensive silicon chip less than a millimeter square in size, the NCI provides the highest depth-measurement accuracy of any such nanophotonic 3-D imaging device.

The work, done in the laboratory of Ali Hajimiri, the Thomas G. Myers Professor of Electrical Engineering in the Division of Engineering and Applied Science, is described in Optics Express.

Inhibitor for abnormal protein points way to more selective cancer drugs

Nowhere is the adage "form follows function" more true than in the folded chain of amino acids that makes up a single protein macromolecule. But proteins are very sensitive to errors in their genetic blueprints. One single-letter DNA "misspelling" (called a point mutation) can alter a protein's structure or electric charge distribution enough to render it ineffective or even deleterious.

Unfortunately, cells containing abnormal proteins generally coexist alongside those containing the normal (or "wild") type, and telling them apart requires a high degree of molecular specificity. This is a particular concern in the case of cancer-causing proteins.

"With present technologies, developing a drug that will target only the mutant version of a protein is difficult," notes Blake Farrow, a graduate student in materials science at Caltech and a Howard Hughes Medical Institute Fellow. "Most anticancer agents indiscriminately attack both mutant and healthy proteins and tissues."

Combing through terahertz waves

Light can come in many frequencies, only a small fraction of which can be seen by humans. Between the invisible low-frequency radio waves used by cell phones and the high frequencies associated with infrared light lies a fairly wide swath of the electromagnetic spectrum occupied by what are called terahertz, or sometimes submillimeter, waves. Exploitation of these waves could lead to many new applications in fields ranging from medical imaging to astronomy, but terahertz waves have proven tricky to produce and study in the laboratory. Now, Caltech chemists have created a device that generates and detects terahertz waves over a wide spectral range with extreme precision, allowing it to be used as an unparalleled tool for measuring terahertz waves.

“Freezing a bullet” to find clues to ribosome assembly process

Ribosomes are vital to the function of all living cells. Using the genetic information from RNA, these large molecular complexes build proteins by linking amino acids together in a specific order. Scientists have known for more than half a century that these cellular machines are themselves made up of about 80 different proteins, called ribosomal proteins, along with several RNA molecules, and that these components are added in a particular sequence to construct new ribosomes. But no one has known the mechanism that controls that process.

Now researchers from Caltech and Heidelberg Univ. have combined their expertise to track a ribosomal protein in yeast all the way from its synthesis in the cytoplasm, the cellular compartment surrounding the nucleus of a cell, to its incorporation into a developing ribosome within the nucleus. In so doing, they have identified a new chaperone protein, known as Acl4, that ushers a specific ribosomal protein through the construction process and a new regulatory mechanism that likely occurs in all eukaryotic cells.

Tracking photosynthesis from space

Watching plants perform photosynthesis from space sounds like a futuristic proposal, but a new application of data from NASA's Orbiting Carbon Observatory-2 (OCO-2) satellite may enable scientists to do just that. The new technique, which allows researchers to analyze plant productivity from far above Earth, will provide a clearer picture of the global carbon cycle and may one day help researchers determine the best regional farming practices and even spot early signs of drought.

When plants are alive and healthy, they engage in photosynthesis, absorbing sunlight and carbon dioxide to produce food for the plant, and generating oxygen as a by-product. But photosynthesis does more than keep plants alive. On a global scale, the process takes up some of the man-made emissions of atmospheric carbon dioxide—a greenhouse gas that traps the sun's heat in Earth's atmosphere—meaning that plants also have an important role in mitigating climate change.

Yeast protein network could provide insights into obesity

A team of biologists and a mathematician have identified and characterized a network composed of 94 proteins that work together to regulate fat storage in yeast.

"Removal of any one of the proteins results in an increase in cellular fat content, which is analogous to obesity," says study coauthor Bader Al-Anzi, a research scientist at Caltech.

The findings, detailed in PLOS Computational Biology, suggest that yeast could serve as a valuable test organism for studying human obesity.

"Many of the proteins we identified have mammalian counterparts, but detailed examinations of their roles in humans have been challenging," says Al-Anzi. "The obesity research field would benefit greatly if a single-cell model organism such as yeast could be used—one that can be analyzed using easy, fast, and affordable methods."

Electrical control of quantum bits in silicon paves the way to large quantum computers

A Univ. of New South Wales (UNSW)-led research team has encoded quantum information in silicon using simple electrical pulses for the first time, bringing the construction of affordable large-scale quantum computers one step closer to reality.

Lead researcher UNSW Assoc. Prof. Andrea Morello, from the School of Electrical Engineering and Telecommunications, said his team had successfully realized a new control method for future quantum computers.

The findings were published in Science Advances.

Unlike conventional computers that store data on transistors and hard drives, quantum computers encode data in the quantum states of microscopic objects called qubits.

The UNSW team, which is affiliated with the ARC Centre of Excellence for Quantum Computation & Communication Technology, was first in the world to demonstrate single-atom spin qubits in silicon, reported in Nature in 2012 and 2013.


Team tightens bounds on quantum information “speed limit”

If you're designing a new computer, you want it to solve problems as fast as possible. Just how fast is possible is an open question when it comes to quantum computers, but physicists at NIST have narrowed the theoretical limits for where that "speed limit" is. The research implies that quantum processors will work more slowly than some research has suggested.

The work offers a better description of how quickly information can travel within a system built of quantum particles such as a group of individual atoms. Engineers will need to know this to build quantum computers, which will have vastly different designs and be able to solve certain problems much more easily than the computers of today. While the new finding does not give an exact speed for how fast information will be able to travel in these as-yet-unbuilt computers—a longstanding question—it does place a far tighter constraint on where this speed limit could be.



Quantum computers will store data in a particle's quantum states—one of which is its spin, the property that confers magnetism. A quantum processor could suspend many particles in space in close proximity, and computing would involve moving data from particle to particle. Just as one magnet affects another, the spin of one particle influences its neighbor's, making quantum data transfer possible, but a big question is just how fast this influence can work.

The NIST team's findings advance a line of research that stretches back to the 1970s, when scientists discovered a limit on how quickly information could travel if a suspended particle only could communicate directly with its next-door neighbors. Since then, technology advanced to the point where scientists could investigate whether a particle might directly influence others that are more distant, a potential advantage. By 2005, theoretical studies incorporating this idea had increased the speed limit dramatically.

"Those results implied a quantum computer might be able to operate really fast, much faster than anyone had thought possible," says NIST's Michael Foss-Feig. "But over the next decade, no one saw any evidence that the information could actually travel that quickly."

Physicists exploring this aspect of the quantum world often line up several particles and watch how fast changing the spin of the first particle affects the one farthest down the line—a bit like standing up a row of dominoes and knocking the first one down to see how long the chain reaction takes. The team looked at years of others' research and, because the dominoes never seemed to fall as fast as the 2005 prediction suggested, they developed a new mathematical proof that reveals a much tighter limit on how fast quantum information can propagate.

"The tighter a constraint we have, the better, because it means we'll have more realistic expectations of what quantum computers can do," says Foss-Feig.

The limit, their proof indicates, is far closer to the speed limits suggested by the 1970s result.

The proof addresses the rate at which entanglement propagates across quantum systems. Entanglement—the weird linkage of quantum information between two distant particles—is important, because the more quickly particles grow entangled with one another, the faster they can share data. The 2005 results indicated that even if the interaction strength decays quickly with distance, the time needed for entanglement to propagate through a system grows only logarithmically with its size, implying that a system could become entangled very quickly. The team's work, however, shows that propagation time grows as a power of the system's size, meaning that while quantum computers may be able to solve problems that ordinary computers find devilishly complex, their processors will not be speed demons.
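
To get a feel for the difference between those two scalings, here is a small, purely illustrative Python sketch (not taken from the NIST work); the exponent and the system sizes are arbitrary placeholders chosen only to show how quickly a power-law bound outpaces a logarithmic one.

```python
# Illustrative comparison of the two bounds on entanglement propagation time.
# The functional forms follow the article's description; constants are made up.
import math

def t_logarithmic(size):
    # 2005-style bound: propagation time grows only logarithmically with size
    return math.log(size)

def t_power_law(size, exponent=0.5):
    # newer, tighter bound: propagation time grows as a power of the size
    return size ** exponent

for size in (10, 1_000, 1_000_000):
    print(f"size {size:>9,}: log bound ~ {t_logarithmic(size):6.1f}, "
          f"power-law bound ~ {t_power_law(size):8.1f}")
```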

Graphics in reverse

Most recent advances in artificial intelligence—such as mobile apps that convert speech to text—are the result of machine learning, in which computers are turned loose on huge data sets to look for patterns.

To make machine-learning applications easier to build, computer scientists have begun developing so-called probabilistic programming languages, which let researchers mix and match machine-learning techniques that have worked well in other contexts. In 2013, the U.S. Defense Advanced Research Projects Agency (DARPA), an incubator of cutting-edge technology, launched a four-year program to fund probabilistic-programming research.



At the Computer Vision and Pattern Recognition conference in June, Massachusetts Institute of Technology (MIT) researchers will demonstrate that on some standard computer-vision tasks, short programs—less than 50 lines long—written in a probabilistic programming language are competitive with conventional systems with thousands of lines of code.

“This is the first time that we’re introducing probabilistic programming in the vision area,” says Tejas Kulkarni, an MIT graduate student in brain and cognitive sciences and first author on the new paper. “The whole hope is to write very flexible models, both generative and discriminative models, as short probabilistic code, and then not do anything else. General-purpose inference schemes solve the problems.”

By the standards of conventional computer programs, those “models” can seem absurdly vague. One of the tasks that the researchers investigate, for instance, is constructing a 3-D model of a human face from 2-D images. Their program describes the principal features of the face as being two symmetrically distributed objects (eyes) with two more centrally positioned objects beneath them (the nose and mouth). It requires a little work to translate that description into the syntax of the probabilistic programming language, but at that point, the model is complete. Feed the program enough examples of 2-D images and their corresponding 3-D models, and it will figure out the rest for itself.
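
As a rough illustration of what such a program looks like in practice, here is a minimal Python sketch of the propose-render-compare idea; it is not written in the team's Picture language, and the toy renderer, parameter names and sampling-based inference below are hypothetical stand-ins rather than the authors' model.

```python
# Minimal "inverse graphics" sketch: propose scene parameters, render a crude
# face (two eyes, a nose, a mouth), and keep the proposal whose rendering best
# matches the observed image. Everything here is an illustrative toy.
import numpy as np

rng = np.random.default_rng(0)

def render(params, size=32):
    """Render a crude 'face' as bright pixels for eyes, nose and mouth."""
    img = np.zeros((size, size))
    cx, cy, spacing = params["cx"], params["cy"], params["eye_spacing"]
    for dx, dy in [(-spacing, -4), (spacing, -4), (0, 2), (0, 8)]:
        x, y = int(cx + dx), int(cy + dy)
        if 0 <= x < size and 0 <= y < size:
            img[y, x] = 1.0
    return img

def propose():
    """Draw candidate scene parameters from a simple prior."""
    return {"cx": rng.uniform(8, 24), "cy": rng.uniform(8, 24),
            "eye_spacing": rng.uniform(3, 8)}

def log_likelihood(observed, rendered, noise=0.2):
    """Gaussian pixel-wise likelihood of the observation given a rendering."""
    return -np.sum((observed - rendered) ** 2) / (2 * noise ** 2)

# 'Observed' image produced from hidden ground-truth parameters.
truth = {"cx": 16.0, "cy": 14.0, "eye_spacing": 5.0}
observed = render(truth)

# Generic inference: keep the best of many prior samples.
best = max((propose() for _ in range(5000)),
           key=lambda p: log_likelihood(observed, render(p)))
print("recovered parameters:", best)
```

A real probabilistic programming system replaces the brute-force search at the end with general-purpose inference algorithms, which is exactly the division of labor Kulkarni describes.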

“When you think about probabilistic programs, you think very intuitively when you’re modeling,” Kulkarni says. “You don’t think mathematically. It’s a very different style of modeling.”

Joining Kulkarni on the paper are his adviser, professor of brain and cognitive sciences Josh Tenenbaum; Vikash Mansinghka, a research scientist in MIT’s Dept. of Brain and Cognitive Sciences; and Pushmeet Kohli of Microsoft Research Cambridge. For their experiments, they created a probabilistic programming language they call Picture, which is an extension of Julia, another language developed at MIT.

What’s old is new
The new work, Kulkarni says, revives an idea known as inverse graphics, which dates from the infancy of artificial-intelligence research. Even though their computers were painfully slow by today’s standards, the artificial intelligence pioneers saw that graphics programs would soon be able to synthesize realistic images by calculating the way in which light reflected off of virtual objects. This is, essentially, how Pixar makes movies.

Some researchers, like then-MIT graduate student Larry Roberts, argued that deducing objects’ three-dimensional shapes from visual information was simply the same problem in reverse. But a given color patch in a visual image can, in principle, be produced by light of any color, coming from any direction, reflecting off of a surface of the right color with the right orientation. Calculating the color value of the pixels in a single frame of “Toy Story” is a huge computation, but it’s deterministic: All the variables are known. Inferring shape, on the other hand, is probabilistic: It means canvassing lots of rival possibilities and selecting the one that seems most likely.

That kind of inference is exactly what probabilistic programming languages are designed to do. Kulkarni and his colleagues considered four different problems in computer vision, each of which involves inferring the three-dimensional shape of an object from 2-D information. On some tasks, their simple programs actually outperformed prior systems. The error rate of the program that estimated human poses, for example, was between 50 and 80% lower than that of its predecessors.

Learning to learn
In a probabilistic programming language, the heavy lifting is done by the inference algorithm—the algorithm that continuously readjusts probabilities on the basis of new pieces of training data. In that respect, Kulkarni and his colleagues had the advantage of decades of machine-learning research. Built into Picture are several different inference algorithms that have fared well on computer-vision tasks. Time permitting, it can try all of them out on any given problem, to see which works best.

Moreover, Kulkarni says, Picture is designed so that its inference algorithms can themselves benefit from machine learning, modifying themselves as they go to emphasize strategies that seem to lead to good results. “Using learning to improve inference will be task-specific, but probabilistic programming may alleviate re-writing code across different problems,” he says. “The code can be generic if the learning machinery is powerful enough to learn different strategies for different tasks.”

Advances in molecular electronics

Scientists at the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) and the Univ. of Konstanz are working on storing and processing information on the level of single molecules to create the smallest possible components that will combine autonomously to form a circuit. As recently reported in Advanced Science, the researchers can switch on the current flow through a single molecule for the first time with the help of light.

Dr. Artur Erbe, physicist at the HZDR, is convinced that in the future molecular electronics will open the door for novel and increasingly smaller—while also more energy efficient—components or sensors: "Single molecules are currently the smallest imaginable components capable of being integrated into a processor." Scientists have yet to succeed, however, in tailoring a molecule so that it conducts an electrical current that can be selectively switched on and off.

This requires a molecule in which an otherwise strong bond between individual atoms dissolves in one location—and forms again precisely when energy is pumped into the structure. Dr. Jannic Wolf, chemist at the Univ. of Konstanz, discovered through complex experiments that a particular diarylethene compound is an eligible candidate. The advantages of this molecule, approximately three nanometers in size, are that it rotates very little when a point in its structure opens and that it possesses two nanowires that can be used as contacts. The diarylethene is an insulator when open and becomes a conductor when closed. It thus exhibits two distinct physical states, a switching behavior that the scientists from Konstanz and Dresden were able to demonstrate with certainty, for the first time in a single molecule, in numerous reproducible measurements.



A computer from a test-tube
A special feature of this molecular electronics work is that it takes place in a fluid within a test-tube, where the molecules are contacted in solution. In order to ascertain what effects the solution conditions have on the switching process, it was therefore necessary to systematically test various solvents. The diarylethene needs to be attached to electrodes at the ends of its nanowires so that the current can flow. "We developed a nanotechnology at the HZDR that relies on extremely thin tips made of very few gold atoms. We stretch the switchable diarylethene compound between them," explains Dr. Erbe.

When a beam of light then hits the molecule, it switches from its open to its closed state, resulting in a flowing current. "For the first time ever we could switch on a single contacted molecule and prove that this precise molecule, on which we had directed the light beam, becomes a conductor," says Dr. Erbe, pleased with the results. "We have also characterized the molecular switching mechanism in extremely high detail, which is why I believe that we have succeeded in making an important step toward a genuine molecular electronic component."

Switching off, however, does not yet work with the contacted diarylethene, but the physicist is confident: "Our colleagues from the HZDR theory group are computing how precisely the molecule must rotate so that the current is interrupted. Together with the chemists from Konstanz, we will then be able to adapt the molecule's design and synthesis accordingly." However, a great deal of patience is required because it's a matter of basic research. Contacting the diarylethene molecule using electron-beam lithography and carrying out the subsequent measurements alone took three long years. Approximately ten years ago, a working group at the Univ. of Groningen in the Netherlands had already managed to construct a switch that could interrupt the current. That off-switch also worked in only one direction, but what couldn't be proven at the time with certainty was that the change in conductivity was bound to a single molecule.

Nano-electronics in Dresden
One area of research focus in Dresden is what is known as self-organization. "DNA molecules are, for instance, able to arrange themselves into structures without any outside assistance. If we succeed in constructing logical switches from self-organizing molecules, then computers of the future will come from test-tubes," Dr. Erbe predicts. The enormous advantages of this new technology are obvious: billion-euro manufacturing plants of the kind necessary for producing today's microelectronics could become a thing of the past. The advantages lie not only in production but also in operating the new molecular components, as both will require very little energy.

With the Helmholtz Research School NANONET, the conditions for investigating and developing the molecular electronics of tomorrow are quite positive in Dresden. In addition to the HZDR, the Technische Universität Dresden, Leibniz-Institute of Polymer Research Dresden (IPF), the Fraunhofer Institute for Ceramic Technology and Systems (IKTS) and the NaMLab gGmbH all participate in running the structured doctoral program.

Breaking Down Barriers: Streamlining Data Management to Boost Knowledge Sharing

Research in the pharmaceutical and industrial sciences has become increasingly global, multidisciplinary and data-intensive. This is made clear by the evolution in patent approvals, which can also be considered a reliable measure of innovation in these industries. Innovation itself, of course, is a cumulative effect, which requires access to multiple fragments of knowledge from disparate sources and exchange of technology and ideas.

While the benefits of innovation in such a competitive environment are clear, investment in research is primarily influenced by the strategic behavior of companies and a deeper understanding of the importance of market share. Patents and publications help to establish corporate reputation, allowing for controlled technology transfer through strategic joint ventures and raising barriers that prevent competitors from eroding market share.



The relationship between technological processes, innovation and economic growth has changed over time, as innovation and technological advancement became increasingly important for sustained economic performance. This change was largely driven by globalization, with concurrent flows of information, technology, capital and services and resources across the world, and was manifested by the rising investment in market-oriented research, a surge in patenting driven by rapid innovation across all technology fields and a broad investment in the services sectors.

For those who fund the research, the sharing—and therefore efficient use—of data is a high priority. This keeps the “knowledge management” cogs turning, helping organizations to create, acquire, disseminate and leverage knowledge in order to retain competitive advantage. In R&D, this process increasingly requires researchers to externalize and exchange information to increase the productivity and profitability of the organization. This growing emphasis on knowledge sharing is a significant step-change in the way research is carried out—and presents new challenges to the R&D ecosystem.

Moving beyond the “paper prison”
Although efficient knowledge management and sharing is seen as key to increasing the productivity and profitability of organizations, a number of potential barriers can exist within an organization—primarily created by factors such as hierarchy, motivation, flexibility and the transparency of its communication systems.

Many researchers are familiar with the challenges of data storage, given that important research may be archived across paper notebooks, computers, external hard drives and corporate IT systems. Although document management systems provide enterprise storage capability for IP compliance, they often fail to capture the tacit knowledge of the researcher—and crucially, the context of how and why the data was created. The introduction of electronic laboratory notebooks has helped to overcome this by providing an environment that allows the researcher to capture the experimental design process, together with the data and conclusions, as the experiment is conducted.

Addressing the human factors
Trust is an important influence on an individual’s willingness to share knowledge. Employees may believe they are in competition with each other, and that the act of sharing knowledge may cause them to lose power and influence in the organization. Employees may also be unwilling to share information unless they are sure their knowledge is safe from misuse, or unless they are certain about the results. Traditionally, such information may have been controlled by visibility of, and access to, the paper notebook where the information was stored. In an electronic laboratory notebook, private areas can be created to hide data from public view until an experiment has been completed and the results have been validated. Equally, these protected areas may be created to protect sensitive data, or to segment in-house research from that conducted by a contract research organization.

Human capital is an important component of the innovation process, and requires a deeper understanding of the soft skills of teamwork and inter-personal relationships. Employees’ communication skills and knowledge transfer are thus positively influenced by the level of interaction within the organization (shaped by the opportunity, distance and visibility of the channels of interaction), but may be equally challenged by a “know-it-all” attitude, a poor ability to comprehend the information being exchanged or a fear of receiving negative criticism.

Tackling the infrastructure obstacles
From an organizational perspective, barriers may also exist due to language and cultural differences—particularly prevalent in global organizations—or where inherent differences in culture exist because of successive mergers and acquisitions. One of the key challenges of a successful merger is reconciling and adopting a new organizational culture, and this may take managers years to develop and implement effectively.

Organizations can help to overcome this by creating a recognition system to reward employees for sharing information, or by accrediting those whose work contributes to new patents and publications. By enabling information exchange sessions between remote teams, an open culture of knowledge sharing may be established. The organization itself needs to recognize that it’s cheaper to re-use information from both successful and failed research than it is to repeat the work of someone else. In a paper notebook world, it’s almost impossible to identify what has been done by a co-worker at a foreign site. With text mining of documents, however, an electronic data repository offers users a simple way to search by keyword for data analogous to their own research aims.

Technology also forms a key part of the knowledge management infrastructure, along with the employee resource and the processes of data capture. It forms the backbone of intra-organizational knowledge sharing, particularly where multiple research sites exist in different geographic locations. By connecting sites, the research operation becomes decentralized, although potential technology barriers may result from a lack of integration between information systems, together with a disconnect between employees’ expectations of the technology and what it’s capable of delivering. Additionally, researchers now often work with a multitude of systems and instruments, and it’s important to recognize that not all users have the same degree of capability or access to these. Although some of these barriers can be overcome through education or formal training, users may simply suffer from slow network speeds between sites, which can hinder system adoption and a willingness to search for prior research remotely.

Clearly, the R&D process is evolving. Firms must now manage and share knowledge, and deal with an evolving set of associated challenges in doing so—but these can be overcome. An open corporate culture, coupled with effective data management tools, helps to break the communication barrier by linking researchers across different geographies and business units. This ensures researchers are able to collaborate effectively and reuse existing data, to seed new discoveries and keep science moving forward.

Deadline Extended for 2015 R&D 100 Award Entries

The editors of R&D Magazine have announced a deadline extension for the 2015 R&D 100 Awards entry process until May 18, 2015.

The R&D 100 Awards have a 50-plus-year history of honoring the 100 most technologically significant products of the year. Past winners have included sophisticated testing equipment, innovative new materials, chemistry breakthroughs, biomedical products, consumer items, high-energy physics instruments and more. The R&D 100 Awards span industry, academia and government-sponsored research.

This year we have made the entry form shorter and simpler than last year’s already overhauled version. That means fewer questions and more time to enter your products.

Register Now

What products qualify?
Any new technical product or process that was first available for purchase or licensing between January 1, 2014, and March 31, 2015, is eligible for the 2015 awards. This includes manufacturing processes such as machining, open source software, new types of materials or chemicals and consumer-level products such as cameras. Proof-of-concepts and early-stage prototypes don’t qualify, however; the submitted entry must be in working, marketable condition.



For more information on the 2015 R&D 100 Award entry process, please visit www.rd100awards.com.

Testing brain activity to identify cybersecurity threats

The old adage that a chain is only as strong as its weakest link certainly applies to the risk organizations face in defending against cybersecurity threats. Employees can pose a danger just as damaging as a hacker.

Iowa State Univ. researchers are working to better understand these internal threats by getting inside the minds of employees who put their company at risk. To do that, they measured brain activity to identify what might motivate an employee to violate company policy and sell or trade sensitive information. The study found that self-control is a significant factor.

Researchers defined a security violation as any unauthorized access to confidential data, which could include copying, transferring or selling that information to a third party for personal gain. In the study, published in the Journal of Management Information Systems, Qing Hu, Union Pacific Professor in Information Systems, and his colleagues found that people with low self-control spent less time considering the consequences of major security violations.




“What we can tell from this current study is that there are differences. The low self-control people and the high self-control people have different brain reactions when they are looking at security scenarios,” Hu said. “If employees have low self-control to start with, they might be more tempted to commit a security violation, if the situation presents itself.”

The study, the first of its kind, used EEG to measure brain activity and examined how people would react in a series of security scenarios. Researchers found people with high self-control took longer to contemplate high-risk situations. Instead of seeing opportunity, or instant reward, it’s possible they thought about how their actions might damage their career or lead to possible criminal charges, Hu said.

For the study, researchers surveyed 350 undergraduate students to identify those with high and low self-control. A total of 40 students—from both the high and low ends of the spectrum—were then asked to do further testing in the Neuroscience Research Lab at ISU’s College of Business. They were given a series of security scenarios, ranging from minor to major violations, and had to decide how to respond while researchers measured their brain activity. Robert West, a professor of psychology, analyzed the results.

“When people are deliberating these decisions, we see activity in the prefrontal cortex that is related to risky decision making, working memory and evaluation of reward versus punishment,” West said. “People with low self-control were faster to make decisions for the major violation scenarios. It really seems like they were not thinking about it as much.”

The findings reflect characteristics of self-control in criminology, in which individuals with low self-control act impulsively and make riskier decisions. However, with traditional research methods and techniques, researchers could not determine if the low self-control group was more likely to act based on immediate gain, without considering the long-term loss, as compared to the high self-control group.

It’s possible that social desirability bias, or the tendency to act in a way that is viewed as desirable, masked the true intentions of participants. With neuroscience methods and techniques, the results are more reliable and provide a better understanding of human decision making in various circumstances, researchers said.

What does this mean for business?
The number of security violations grew to nearly 43 million last year, up from almost 29 million in 2013, according to The Global State of Information Security® Survey 2015. The survey found employees, current and former, were the top-cited offenders. Not all employee security breaches were malicious or intentional, but those that were deliberate created significant risk for organizations around the world. This highlights the need for organizations to focus internally to protect sensitive information.

Laura Smarandescu, an assistant professor of marketing, has used psychological methods in prior studies to gain a better understanding of an individual’s thought process. She says this study could help businesses determine which employees should have access to sensitive information.

“A questionnaire measuring impulsivity for individuals in critical positions may be one of the screening mechanisms businesses could use,” Smarandescu said.

Other studies on human behavior recommend implementing comprehensive policies and procedures, training for employees and clear, swift sanctions against security misconduct to deter future violations. However, in regard to low self-control, traditional training may not cut it, Hu said.

“Training is good, but it may not be as effective as believed. If self-control is part of the brain structure, that means once you’ve developed certain characteristics, it’s very difficult to change,” Hu said.

Putting a new spin on computing memory

Ever since computers have been small enough to be fixtures on desks and laps, their central processing has functioned something like an atomic Etch A Sketch, with electromagnetic fields pushing data bits into place to encode data. Unfortunately, the same drawbacks and perils of the mechanical sketch board have been just as pervasive in computing: making a change often requires starting from the beginning, and dropping the device could wipe out the memory altogether. As computers continue to shrink—moving from desks and laps to hands and wrists—memory has to become smaller, stable and more energy conscious. A group of researchers from Drexel Univ.’s College of Engineering is trying to do just that with help from a new class of materials, whose magnetism can essentially be controlled by the flick of a switch.

The team, led by Mitra Taheri, PhD, Hoeganaes associate professor in the College of Engineering and head of the Dynamic Characterization Group in the Dept. of Materials Science and Engineering, is searching for a deeper understanding of materials that are used in spintronic data storage. Spintronics, short for “spin transport electronics,” is a field that seeks to harness the natural spin of electrons to control a material’s magnetic properties. For an application like computing memory, in which magnetism is a key element, understanding and manipulating the power of spintronics could unlock many new possibilities.



Current computer data storage takes one of two main forms: hard drives or random access memories (RAM). You can think of a hard drive kind of like a record or CD player, where data is stored on one piece of material—a hard disk—and accessed by a magnetic read head, which is the computer’s equivalent of the record player’s needle or the CD player’s laser. RAM stores data by encoding it in binary patterns of electrical charges called bits. An external electric field nudges electrons into or out of capacitors to create the charge pattern and encode the data.

To store data in either type of memory device we must apply an external magnetic or electric field—either to read or write the data bits. And generating these fields draws quite a bit of energy. In a desktop computer that might go unnoticed, but in a handheld device or a laptop, quality is based, in large part, on how long the battery lasts.

Spintronic memory is an attractive alternative to hard drives and RAM because the material could essentially rewrite itself to store data. Eliminating the need for a large external magnetic field or a read head would make the device less power-intensive and more rugged because it has fewer moving parts.

“It’s the difference between a pre-whiteout typewriter and the first word processor,” said Steven Spurgeon, PhD, an alumnus whose doctoral work contributed to the team’s recently published research in Nature Communications. “The old method required you to move a read head over a bit and apply a strong magnetic field, while the newer one lets you insert data anywhere on the fly. Spintronics could be an excellent, non-destructive alternative to current hard drive and RAM devices and one that saves a great deal of battery life.”

While spintronic materials have been used in sensors and as part of hard drive read heads since the early 2000s, they have only recently been explored for direct use in memories. Taheri’s group is closely examining the physical principles behind spintronics at the atomic scale to look for materials that could be used in memory devices.

“We're trying to develop a framework to understand how the many parameters—structure, chemistry, magnetism and electronic properties—are related to each other,” said Taheri, who is the principal investigator on the research program, funded by the National Science Foundation and the Office of Naval Research. “We're peering into these properties at the atomic scale and probing them locally, in contrast to many previous studies. This is an important step toward more predictive and far-reaching use of spintronics.”

Theoretically, spintronic storage could encode data by tuning electron spins with help from a special, polarized electrical current running through the material. The binary pattern is then created by the “up” or “down” spin of the electrons, rather than their presence “in” or “out” of a capacitor.

To better understand how this phenomenon occurs, the team took a closer look at structure, chemistry and magnetism in a layered thin-film oxide material, synthesized by researchers at the Univ. of Illinois at Urbana-Champaign, that has shown promise for use in spintronic data storage.

The researchers used advanced scanning transmission electron microscopy, electron energy loss spectroscopy and other high-resolution techniques to observe the material’s behavior at the intersections of the layers, finding that parts of it are unevenly electrically polarized—or ferroelectric.

“Our methodology revealed that polarization varies throughout the material—it is not uniform,” said Spurgeon, who is now a postdoctoral research associate at Pacific Northwest National Laboratory. “This is quite significant for spintronic applications because it suggests how the magnetic properties of the material can be tuned locally. This discovery would not have been possible without our team’s local characterization strategy.”

They also used quantum mechanical calculations to model and simulate different charge states in order to explain the behavior of the structures that they observed using microscopy. These models helped the team uncover the key links between the structure and chemistry of the material and its magnetic properties.

“Electronic devices are continually shrinking,” Taheri said. “Understanding these materials at the atomic scale will allow us to control their properties, reduce power consumption and increase storage densities. Our overarching goal is to engineer materials from the atomic scale all the way up to the macroscale in a predictable way. This work is a step toward that end.”

Enron becomes unlikely data source for computer science researchers

Computer science researchers have turned to unlikely sources - including Enron - for assembling huge collections of spreadsheets that can be used to study how people use this software. The goal is for the data to facilitate research to make spreadsheets more useful.

"We study spreadsheets because spreadsheet software is used to track everything from corporate earnings to employee benefits, and even simple errors can cost organizations millions of dollars," says Emerson Murphy-Hill, an assistant professor of computer science at NC State and co-author of two new papers on the work.

However, there are relatively few public collections of spreadsheet data available for research purposes. For example, the collection currently used by most researchers consists of approximately 4,500 spreadsheets.



But researchers are now making two new collections available - one has 15,000 spreadsheets and the other has more than 249,000.

"In addition, we are publishing a technique that other researchers can use to collect additional spreadsheet data," Murphy-Hill says.

The 15,000 spreadsheet collection consists entirely of spreadsheets collected from internal Enron emails, which were made public after the emails were subpoenaed by prosecutors.

"Our focus is on how users interact with spreadsheets," Murphy-Hill says. "And these spreadsheets actually tell us a lot about how users represent and manipulate data."

To assemble the second set of spreadsheets, called Fuse, the researchers developed their own technique to identify and extract spreadsheets from an online archive of over 5 billion webpages. Using their technique, the researchers collected 249,376 spreadsheets - including spreadsheets made as recently as 2014.
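
The paper's pipeline is not reproduced here, but a hypothetical Python sketch of the kind of filtering step such a crawl involves might look like the following; the record format, MIME types checked and example URLs are illustrative assumptions, not the actual Fuse code.

```python
# Hypothetical filter: keep web-archive records whose URL extension or declared
# MIME type suggests a spreadsheet. Illustrative only; not the Fuse pipeline.
SPREADSHEET_TYPES = {
    "application/vnd.ms-excel",
    "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
}
SPREADSHEET_EXTENSIONS = (".xls", ".xlsx")

def looks_like_spreadsheet(url, content_type):
    """Cheap heuristic based on file extension and declared MIME type."""
    return (content_type in SPREADSHEET_TYPES
            or url.lower().endswith(SPREADSHEET_EXTENSIONS))

# Example (url, content_type) pairs as they might appear in an archive index.
records = [
    ("http://example.com/budget.xlsx",
     "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet"),
    ("http://example.com/index.html", "text/html"),
]
print([url for url, ctype in records if looks_like_spreadsheet(url, ctype)])
```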

"Fuse used cloud infrastructure to search through billions of webpages to identify and extract the spreadsheets we write about in this paper," says Titus Barik, a Ph.D. student at NC State, researcher at ABB Corporate Research, and lead author of the paper on Fuse. "Commodity cloud computing is incredibly exciting - searching those pages would take about seven years of continuous computation on a single computer, but the economies of scale with cloud computing allowed us to accomplish this with Fuse in only a few days."

"And the fact that Fuse includes recent spreadsheets is a significant advantage over other spreadsheet collections, because the information is more up-to-date and reflects changes in Excel and other spreadsheet software," Murphy-Hill says.

"Fuse is also more reproducible than other spreadsheet collections," says Kevin Lubick, a Ph.D. student at NC State and co-author of a paper about Fuse. "Reproducibility is the cornerstone of good scientific research, but many existing spreadsheet collections are difficult to reproduce. Our technique can be used by anyone, and they'll get the same results we get. But the results will also include any new spreadsheets made available since the last time the program was run."

Computer scientists speed up mine detection

Computer scientists at the Univ. of California, San Diego, have combined sophisticated computer vision algorithms and a brain-computer interface to find mines in sonar images of the ocean floor. The study shows that the new method speeds detection up considerably, when compared to existing methods—mainly visual inspection by a mine detection expert.

“Computer vision and human vision each have their specific strengths, which combine to work well together,” said Ryan Kastner, a professor of computer science at the Jacobs School of Engineering at UC San Diego. “For instance, computers are very good at finding subtle, but mathematically precise patterns while people have the ability to reason about things in a more holistic manner, to see the big picture. We show here that there is great potential to combine these approaches to improve performance.”


Researchers worked with the U.S. Navy’s Space and Naval Warfare Systems Center Pacific (SSC Pacific) in San Diego to collect a dataset of 450 sonar images containing 150 inert, bright-orange mines placed in test fields in San Diego Bay. The images were collected with an underwater vehicle equipped with sonar. In addition, researchers trained their computer vision algorithms on a dataset of 975 images of mine-like objects.

In the study, researchers first showed six subjects a complete dataset, before it had been screened by computer vision algorithms. Then they ran the image dataset through mine-detection computer vision algorithms they developed, which flagged images that most likely included mines. They then showed the results to subjects outfitted with an electroencephalogram (EEG) system, programmed to detect brain activity that showed subjects reacted to an image because it contained a salient feature—likely a mine. Subjects detected mines much faster when the images had already been processed by the algorithms. Computer scientists published their results in the IEEE Journal of Oceanic Engineering.

The algorithms are what’s known as a series of classifiers, working in succession to improve speed and accuracy. The classifiers are designed to capture changes in pixel intensity between neighboring regions of an image. The system’s goal is to detect 99.5% of true positives while letting through only 50% of false positives during each pass through a classifier. As a result, the true-positive rate remains high, while false positives decrease with each pass.
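
An arrangement like this is often called a classifier cascade, and a back-of-the-envelope Python sketch (not the researchers’ code) shows why chaining stages with those per-pass rates works: detections stay near 100% while false alarms are roughly halved at every pass. The stage counts below are illustrative.

```python
# Cumulative effect of chaining classifier stages with the per-pass rates
# quoted in the article (99.5% true positives kept, 50% false positives kept).
TP_PER_STAGE = 0.995
FP_PER_STAGE = 0.50

def cascade_rates(num_stages):
    """Overall detection and false-positive rates after num_stages passes."""
    return TP_PER_STAGE ** num_stages, FP_PER_STAGE ** num_stages

for n in (1, 5, 10):
    tp, fp = cascade_rates(n)
    print(f"{n:2d} stages: detect {tp:.3f} of mines, keep {fp:.4f} of false alarms")
```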

Researchers took several versions of the dataset generated by the classifiers and ran them by six subjects outfitted with the EEG gear, which had first been calibrated for each subject. It turns out that subjects performed best on the dataset containing the most conservative results generated by the computer vision algorithms. They sifted through a total of 3,400 image chips sized at 100 by 50 pixels. Each chip was shown to the subject for only one-fifth of a second (0.2 sec)—just enough for the EEG-related algorithms to determine whether a subject’s brain signals showed that they saw anything of interest.

All subjects performed better than when shown the full set of images without the benefit of prescreening by computer vision algorithms. Some subjects also performed better than the computer vision algorithms on their own.

“Human perception can do things that we can’t come close to doing with computer vision,” said Chris Barngrover, who earned a computer science PhD in Kastner’s research group and is currently working at SSC Pacific. “But computer vision doesn’t get tired or stressed. So it seemed natural for us to combine the two.”

New chip architecture may provide foundation for quantum computer

Quantum computers are in theory capable of simulating the interactions of molecules at a level of detail far beyond the capabilities of even the largest supercomputers today. Such simulations could revolutionize chemistry, biology and material science, but the development of quantum computers has been limited by the ability to increase the number of quantum bits, or qubits, that encode, store and access large amounts of data.

In a paper appearing in the Journal of Applied Physics, a team of researchers at the Georgia Tech Research Institute and Honeywell International has demonstrated a new device that allows more electrodes to be placed on a chip—an important step that could help increase qubit densities and bring us one step closer to a quantum computer that can simulate molecules or perform other algorithms of interest.



"To write down the quantum state of a system of just 300 qubits, you would need 2^300 numbers, roughly the number of protons in the known universe, so no amount of Moore's Law scaling will ever make it possible for a classical computer to process that many numbers," said Nicholas Guise, who led the research. "This is why it's impossible to fully simulate even a modest sized quantum system, let alone something like chemistry of complex molecules, unless we can build a quantum computer to do it."

While existing computers use classical bits of information, quantum computers use "quantum bits" or qubits to store information. Classical bits use either a 0 or 1, but a qubit, exploiting a weird quantum property called superposition, can actually be in both 0 and 1 simultaneously, allowing much more information to be encoded. Since qubits can be correlated with each other in a way that classical bits cannot, they allow a new sort of massively parallel computation, but only if many qubits at a time can be produced and controlled. The challenge that the field has faced is scaling this technology up, much like moving from the first transistors to the first computers.
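
A quick, illustrative Python calculation makes the scaling Guise describes concrete: writing down an n-qubit state takes 2^n complex amplitudes, so the classical memory needed grows exponentially. The 16-bytes-per-amplitude figure below is an assumption (two 64-bit floats per complex number).

```python
# Back-of-the-envelope memory cost of storing a full n-qubit state vector.
def amplitudes(n_qubits):
    """Number of complex amplitudes needed to describe an n-qubit state."""
    return 2.0 ** n_qubits

for n in (10, 50, 300):
    amps = amplitudes(n)
    bytes_needed = amps * 16  # assumed 16 bytes per complex amplitude
    print(f"{n:3d} qubits -> {amps:.3e} amplitudes, ~{bytes_needed:.3e} bytes")
```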

Creating the building blocks for quantum computing
One leading qubit candidate is individual ions trapped inside a vacuum chamber and manipulated with lasers. The scalability of current trap architectures is limited because the connections for the electrodes needed to generate the trapping fields are made at the edge of the chip, and their number is therefore limited by the chip perimeter.

The GTRI/Honeywell approach uses new microfabrication techniques that allow more electrodes to fit onto the chip while preserving the laser access needed.

The team's design borrows ideas from a type of packaging called a ball grid array (BGA) that is used to mount integrated circuits. The ball grid array's key feature is that it can bring electrical signals directly from the backside of the mount to the surface, thus increasing the potential density of electrical connections.

The researchers also freed up more chip space by replacing area-intensive surface or edge capacitors with trench capacitors and strategically moving wire connections.

The space-saving moves allowed tight focusing of an addressing laser beam for fast operations on single qubits. Despite early difficulties bonding the chips, a solution was developed in collaboration with Honeywell, and the device was trapping ions from the very first day.

The team was excited with the results. "Ions are very sensitive to stray electric fields and other noise sources, and a few microns of the wrong material in the wrong place can ruin a trap. But when we ran the BGA trap through a series of benchmarking tests we were pleasantly surprised that it performed at least as well as all our previous traps," Guise said.

Working with trapped ion qubits currently requires a room full of bulky equipment and several graduate students to make it all run properly, so the researchers say much work remains to be done to shrink the technology. The BGA project demonstrated that it's possible to fit more and more electrodes on a surface trap chip while wiring them from the back of the chip in a compact and extensible way. However, there are a host of engineering challenges that still need to be addressed to turn this into a miniaturized, robust and nicely packaged system that would enable quantum computing, the researchers say.

In the meantime, these advances have applications beyond quantum computing. "We all hope that someday quantum computers will fulfill their vast promise, and this research gets us one step closer to that," Guise said. "But another reason that we work on such difficult problems is that it forces us to come up with solutions that may be useful elsewhere. For example, microfabrication techniques like those demonstrated here for ion traps are also very relevant for making miniature atomic devices like sensors, magnetometers and chip-scale atomic clocks."

The next step in DNA computing

Conventional silicon-based computing, which has advanced by leaps and bounds in recent decades, is pushing against its practical limits. DNA computing could help take the digital era to the next level. Scientists are now reporting progress toward that goal with the development of a novel DNA-based GPS. They describe their advance in The Journal of Physical Chemistry B.

Jian-Jun Shu and colleagues note that Moore’s law, which marked its 50th anniversary in April, posited that the number of transistors on a computer chip would double every year. This doubling has enabled smartphone and tablet technology that has revolutionized computing, but continuing the pattern will come with high costs. In search of a more affordable way forward, scientists are exploring the use of DNA for its programmability, fast processing speeds and tiny size. So far, they have been able to store and process information with the genetic material and perform basic computing tasks. Shu’s team set out to take the next step.



The researchers built a programmable DNA-based processor that performs two computing tasks at the same time. On a map of six locations and multiple possible paths, it calculated the shortest routes between two different starting points and two destinations. The researchers say that in addition to cost- and time-savings over other DNA-based computers, their system could help scientists understand how the brain’s “internal GPS” works.
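
For comparison, the task itself is easy to state classically. The short Python sketch below runs Dijkstra's algorithm on a hypothetical six-location map to find shortest routes from two starting points to two destinations; the paper's actual map and distances are not reproduced here.

```python
# Classical shortest-route baseline on a made-up six-node map (Dijkstra).
import heapq

graph = {  # node -> list of (neighbor, distance); all edges are illustrative
    "A": [("B", 2), ("C", 5)],
    "B": [("A", 2), ("C", 1), ("D", 4)],
    "C": [("A", 5), ("B", 1), ("E", 3)],
    "D": [("B", 4), ("F", 2)],
    "E": [("C", 3), ("F", 1)],
    "F": [("D", 2), ("E", 1)],
}

def shortest_path(start, goal):
    """Dijkstra's algorithm: returns (total distance, path)."""
    queue, seen = [(0, start, [start])], set()
    while queue:
        dist, node, path = heapq.heappop(queue)
        if node == goal:
            return dist, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, weight in graph[node]:
            if neighbor not in seen:
                heapq.heappush(queue, (dist + weight, neighbor, path + [neighbor]))
    return float("inf"), []

# Two starting points and two destinations, as in the article's experiment.
for start, goal in [("A", "F"), ("B", "E")]:
    print(start, "->", goal, shortest_path(start, goal))
```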

Digitizing neurons

Supercomputing resources at the U.S. Dept. of Energy (DOE)’s Oak Ridge National Laboratory (ORNL) will support a new initiative designed to advance how scientists digitally reconstruct and analyze individual neurons in the human brain. Led by the Allen Institute for Brain Science, the BigNeuron project aims to create a common platform for analyzing the 3-D structure of neurons. Mapping the complex structures of individual neurons, which can contain thousands of branches, is a labor-intensive and time-consuming process when done by hand.

 
BigNeuron’s goal is to streamline this process of neuronal reconstruction—converting two-dimensional microscope images of neurons into 3-D digital models. “Neuronal reconstruction is a huge challenge for this field,” said ORNL’s Arvind Ramanathan. “Unless you understand how these different nerve endings are connected to each other, you’re not going to make any sense of how the brain is functioning.”

Digital algorithms could help automate the process, but researchers worldwide use different approaches to collect images, manage data and create their models. The BigNeuron collaborators hope to standardize the process and identify which algorithms are best suited for different neuron types, which would accelerate scientists’ attempts to map every neuron in the brain. The human brain contains nearly 100 billion neurons.

ORNL’s Titan, the second most powerful supercomputer in the world, will allow scientists to gauge which algorithms are most effective at reconstruction and to tune the codes to take advantage of high-performance computers. “By bench-testing, we’ll get an idea of which ones tend to perform better than others,” Ramanathan said. “If Titan were to help even one of these algorithms to run faster or better, then I think that would be a huge win.”

In a series of BigNeuron workshops, participants will contribute neuron reconstruction algorithms and datasets to a common software platform. ORNL will provide a supporting framework through its computing and data management resources, including the lab’s Health Data Sciences Institute, a multidisciplinary initiative designed to examine these kinds of complex, heterogeneous datasets. “Neuroscience imaging represents a unique type of dataset that typically requires supercomputing,” Ramanathan said. “The computers will be used for what they do best, which is massive amounts of computation in a short amount of time. Plus, hosting these very large and complex datasets is at the heart of what we do every day.”

Ramanathan also hopes ORNL’s involvement in the initiative will further integrate the high-performance computing and brain science communities. Although supercomputing is used for image reconstruction in applications such as satellite imagery, neuroscience presents unique challenges. “Brain science is very specialized; you can’t take an existing algorithm and make it work with brain data,” he said. “We want to show that Titan can handle all these types of datasets.”

Scientists anticipate that mapping the neuronal connections in an entire brain could provide a wealth of insights in medicine, but Ramanathan notes that BigNeuron is only an initial step in that direction. The project aims to lay the groundwork to enable these future studies. “The biological implications are huge,” said Ramanathan. “If it works on a healthy human brain, then you can do these analyses on a diseased human brain, on patients with Alzheimer’s or Parkinson’s for instance, to try to understand how the wiring is different. That could lead to many different avenues and hopefully drive the future of medicine.

“First we need to build the basics, the tools of the trade. Because these systems are so complex and so important, the community is trying to do this as accurately and systematically as possible,” he said.

Computing at the speed of light

Univ. of Utah engineers have taken a step forward in creating the next generation of computers and mobile devices capable of speeds millions of times faster than current machines.

The Utah engineers have developed an ultracompact beamsplitter—the smallest on record—for dividing light waves into two separate channels of information. The device brings researchers closer to producing silicon photonic chips that compute and shuttle data with light instead of electrons. Electrical and computer engineering associate professor Rajesh Menon and colleagues describe their invention in Nature Photonics.

Silicon photonics could significantly increase the power and speed of machines such as supercomputers, data center servers and the specialized computers that direct autonomous cars and drones with collision detection. Eventually, the technology could reach home computers and mobile devices and improve applications from gaming to video streaming.

“Light is the fastest thing you can use to transmit information,” says Menon. “But that information has to be converted to electrons when it comes into your laptop. In that conversion, you’re slowing things down. The vision is to do everything in light.”

Photons of light carry information over the Internet through fiber-optic networks. But once a data stream reaches a home or office destination, the photons of light must be converted to electrons before a router or computer can handle the information. That bottleneck could be eliminated if the data stream remained as light within computer processors.

“With all light, computing can eventually be millions of times faster,” says Menon.

To help do that, the U engineers created a much smaller form of a polarization beamsplitter (which looks somewhat like a barcode) on top of a silicon chip that can split guided incoming light into its two components. Before, such a beamsplitter was over 100 by 100 microns. Thanks to a new algorithm for designing the splitter, Menon’s team has shrunk it to 2.4 by 2.4 microns, or one-fiftieth the width of a human hair and close to the limit of what is physically possible.
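The article does not name the design algorithm, but ultracompact pixelated devices of this kind are often produced by inverse design: the footprint is divided into a grid of nanoscale pixels that are either etched or left as silicon, and a search keeps any single-pixel change that improves a simulated figure of merit. The sketch below illustrates one common variant, a direct-binary-search loop, under those assumptions; the `figure_of_merit` function is a hypothetical placeholder for an electromagnetic solver and is not the team's actual design code.

```python
import random

# Hypothetical stand-in for the physics: a real design loop would call an
# electromagnetic solver to score how cleanly the two polarizations are
# separated. Here we simply reward similarity to a fixed random target so
# the sketch runs end to end.
_rng = random.Random(0)
N_PIXELS = 20 * 20                      # a 20 x 20 grid of nanoscale pixels
_TARGET = [_rng.randint(0, 1) for _ in range(N_PIXELS)]

def figure_of_merit(pattern):
    """Placeholder score in [0, 1]; higher means 'better splitting'."""
    return sum(p == t for p, t in zip(pattern, _TARGET)) / N_PIXELS

def direct_binary_search(sweeps=5):
    """Flip one pixel at a time (etched <-> unetched silicon), keeping a
    change only if it improves the figure of merit; repeat for several
    sweeps over the whole grid."""
    pattern = [_rng.randint(0, 1) for _ in range(N_PIXELS)]
    best = figure_of_merit(pattern)
    for _ in range(sweeps):
        for i in range(N_PIXELS):
            pattern[i] ^= 1             # toggle this pixel
            score = figure_of_merit(pattern)
            if score > best:
                best = score            # keep the improvement
            else:
                pattern[i] ^= 1         # revert the flip
    return pattern, best

if __name__ == "__main__":
    _, best = direct_binary_search()
    print(f"best figure of merit after search: {best:.2f}")
```

A real design run would score every candidate pattern with a full electromagnetic simulation rather than the toy target used here, which is what makes this kind of optimization computationally demanding.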

The beamsplitter would be just one of a multitude of passive devices placed on a silicon chip to direct light waves in different ways. By shrinking them down in size, researchers will be able to cram millions of these devices on a single chip.

Potential advantages go beyond processing speed. The Utah team’s design would be cheap to produce because it uses existing fabrication techniques for creating silicon chips. And because photonic chips shuttle photons instead of electrons, mobile devices such as smartphones or tablets built with this technology would consume less power, have longer battery life and generate less heat than existing mobile devices.

The first supercomputers using silicon photonics—already under development at companies such as Intel and IBM—will use hybrid processors that remain partly electronic. Menon believes his beamsplitter could be used in those computers in about three years. Data centers that require faster connections between computers also could implement the technology soon, he says.

A foundation for quantum computing

Quantum computers are in theory capable of simulating the interactions of molecules at a level of detail far beyond the capabilities of even the largest supercomputers today. Such simulations could revolutionize chemistry, biology and materials science, but the development of quantum computers has been limited by the difficulty of increasing the number of quantum bits, or qubits, that encode, store and access large amounts of data.
In a paper published in the Journal of Applied Physics, a team of researchers at the Georgia Tech Research Institute (GTRI) and Honeywell International has demonstrated a new device that allows more electrodes to be placed on a chip—an important step that could help increase qubit densities and bring us one step closer to a quantum computer that can simulate molecules or perform other algorithms of interest.

"To write down the quantum state of a system of just 300 qubits, you would need 2^300 numbers, roughly the number of protons in the known universe, so no amount of Moore's Law scaling will ever make it possible for a classical computer to process that many numbers," said Nicholas Guise, a GTRI research scientist who led the research. "This is why it's impossible to fully simulate even a modest sized quantum system, let alone something like chemistry of complex molecules, unless we can build a quantum computer to do it."
While existing computers use classical bits of information, quantum computers use "quantum bits" or qubits to store information. Classical bits use either a 0 or 1, but a qubit, exploiting a weird quantum property called superposition, can actually be in both 0 and 1 simultaneously, allowing much more information to be encoded. Since qubits can be correlated with each other in a way that classical bits cannot, they allow a new sort of massively parallel computation, but only if many qubits at a time can be produced and controlled. The challenge that the field has faced is scaling this technology up, much like moving from the first transistors to the first computers.
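The scaling Guise describes follows from how multi-qubit states are written down: an n-qubit state is a vector of 2^n complex amplitudes, so the memory needed doubles with every added qubit. The short numpy sketch below illustrates that bookkeeping; it is a generic textbook illustration, not code connected to the GTRI device.

```python
import numpy as np

# Single-qubit basis states |0> and |1> as 2-component complex vectors.
ZERO = np.array([1, 0], dtype=complex)
ONE = np.array([0, 1], dtype=complex)

# A superposition: equal parts |0> and |1> (these are amplitudes, not
# probabilities; the probabilities are their squared magnitudes).
plus = (ZERO + ONE) / np.sqrt(2)

def n_qubit_state(single_qubit_states):
    """Combine single-qubit states with the tensor (Kronecker) product;
    the result holds 2**n amplitudes for n qubits."""
    state = np.array([1], dtype=complex)
    for q in single_qubit_states:
        state = np.kron(state, q)
    return state

for n in (1, 2, 10, 20):
    print(f"{n:2d} qubits -> {n_qubit_state([plus] * n).size} amplitudes")

# For 300 qubits the count is 2**300 -- far too many to store classically.
print(f"300 qubits -> {float(2**300):.3e} amplitudes")
```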
One leading qubit candidate is the individual ion, trapped inside a vacuum chamber and manipulated with lasers. The scalability of current trap architectures is limited because the electrical connections for the electrodes that generate the trapping fields are made at the edge of the chip, so their number is constrained by the chip perimeter.
The GTRI/Honeywell approach uses new microfabrication techniques that allow more electrodes to fit onto the chip while preserving the laser access needed.
The team's design borrows ideas from a type of packaging called a ball grid array (BGA) that is used to mount integrated circuits. The ball grid array's key feature is that it can bring electrical signals directly from the backside of the mount to the surface, thus increasing the potential density of electrical connections.
The researchers also freed up more chip space by replacing area-intensive surface or edge capacitors with trench capacitors and strategically moving wire connections.
The space-saving moves allowed tight focusing of an addressing laser beam for fast operations on single qubits. Despite early difficulties bonding the chips, a solution was developed in collaboration with Honeywell, and the device was trapping ions from the very first day.
The team was excited by the results. "Ions are very sensitive to stray electric fields and other noise sources, and a few microns of the wrong material in the wrong place can ruin a trap. But when we ran the BGA trap through a series of benchmarking tests, we were pleasantly surprised that it performed at least as well as all our previous traps," Guise said.
Working with trapped ion qubits currently requires a room full of bulky equipment and several graduate students to make it all run properly, so the researchers say much work remains to be done to shrink the technology. The BGA project demonstrated that it's possible to fit more and more electrodes on a surface trap chip while wiring them from the back of the chip in a compact and extensible way. However, there are a host of engineering challenges that still need to be addressed to turn this into a miniaturized, robust and nicely packaged system that would enable quantum computing, the researchers say.
In the meantime, these advances have applications beyond quantum computing. "We all hope that someday quantum computers will fulfill their vast promise, and this research gets us one step closer to that," Guise said. "But another reason that we work on such difficult problems is that it forces us to come up with solutions that may be useful elsewhere. For example, microfabrication techniques like those demonstrated here for ion traps are also very relevant for making miniature atomic devices like sensors, magnetometers and chip-scale atomic clocks."

Entangled photons unlock super-sensitive characterization of quantum tech

A new protocol for estimating unknown optical processes, called unitary operations, with precision enhanced by the unique properties of quantum mechanics has been demonstrated by scientists and engineers from the Univ. of Bristol and the Centre for Quantum Technologies in Singapore. The work could lead to both better sensors for medical research and new approaches to benchmark the performance of ultra-powerful quantum computers.
History tells us the ability to measure parameters and sense phenomena with increasing precision leads to dramatic advances in identifying new phenomena in science and improving the performance of technology: famous examples include x-ray imaging, magnetic resonance imaging (MRI), interferometry and the scanning-tunneling microscope.

Scientists' understanding of how to engineer and control quantum systems to vastly expand the limits of measurement and sensing is growing rapidly. This area, known as quantum metrology, promises to open up methods radically different from the current state of the art in sensing.
In this new study, the researchers re-directed the sensing power of quantum mechanics back on itself to characterize, with increased precision, unknown quantum processes that can include individual components used to build quantum computers. This ability is becoming more and more important as quantum technologies move closer to real applications.
Dr. Xiao-Qi Zhou of Bristol's School of Physics said: "A really exciting problem is characterizing unknown quantum processes using a technique called quantum process tomography. You can think of this as a problem where a quantum object, maybe a photonic circuit or an atomic system, is locked in a box. We can send quantum states in and we can measure the quantum states that come out. Our challenge is to correctly identify what is in the box. This is a difficult problem in quantum mechanics and it is a highly active area of research because its solution is needed to enable us to test quantum computers as they grow in size and complexity."
One major shortcoming of quantum process tomography is that precision using standard techniques is limited by a type of noise known as 'shot noise'. By borrowing techniques from quantum metrology, the researchers were able to demonstrate precision beyond the shot noise limit. They expect their protocol can also be applied to build more sophisticated sensors that identify molecules and chemicals more precisely by observing how they interact with quantum states of light.
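For context, in quantum metrology the shot-noise (standard quantum) limit on estimating a phase with N independent photons scales as 1/sqrt(N), while entangled probes such as N00N states can in principle approach the Heisenberg limit of 1/N. The snippet below simply compares those two textbook scalings; it is illustrative only and is not a model of the Bristol experiment.

```python
import math

def shot_noise_limit(n_photons):
    """Phase uncertainty with N independent (unentangled) photons: 1/sqrt(N)."""
    return 1.0 / math.sqrt(n_photons)

def heisenberg_limit(n_photons):
    """Best-case uncertainty with N entangled photons (e.g. a N00N state): 1/N."""
    return 1.0 / n_photons

for n in (1, 4, 16, 100):
    print(f"N={n:3d}: shot-noise limit {shot_noise_limit(n):.3f}, "
          f"entangled limit {heisenberg_limit(n):.3f}")
```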
Co-author Rebecca Whittaker, a PhD student in Bristol's Centre for Quantum Photonics said: "The optical process we measured here can be used to manipulate quantum bits of information in a quantum computer but they can also occur in nature. For example, our setup could be used to measure how the polarization of light is rotated by a sample. We could then infer properties of that sample with better precision.
"Increasing measurement precision is particularly important for probing light-sensitive samples where we want to get as much information as we can before our probe light damages or causes alterations to the sample. We feel this will have a big impact on the tools used in medical research."
The researchers' protocol relies on generating multiple photons in an entangled state, and this study demonstrates that they can reconstruct rotations that act on the polarization of light.
