Early stage radiation damage in materials quantified for first time
06 June 2012
Nuclear radiation produces highly energetic ions that can penetrate large distances within matter, often leaving an accumulation of damage sites along their path through the material. During this process the energetic ions gradually slow down as they lose energy to friction with the material's electrons. Like a speedboat moving through a calm body of water, the passage of a fast ion creates a disturbance in the electron density in the shape of a wake.

Now, Lawrence Livermore National Laboratory researchers have for the first time simulated and quantified the early stages of the radiation damage that will occur in a given material. "A full understanding of the early stages of the radiation damage process provides knowledge and tools to manipulate them to our advantage," said Alfredo Correa, a Lawrence Fellow in the Quantum Simulations Group at Lawrence Livermore National Laboratory.
Correa, along with colleagues Alfredo Caro from Los Alamos National Laboratory, Jorge Kohanoff from the UK, and Emilio Artacho and Daniel Sánchez-Portal from Spain, has directly simulated this quantum friction of the electrons in a real material for the first time.
The team simulated the passage of a fast proton through crystalline aluminium. By accounting for the energy absorbed by the electrons and the magnitude of the impulse given to the aluminium atoms, the team was able to predict the rate at which the proton is stopped and the amount of momentum transferred. This is a precise atomistic simulation of the deposited energy and momentum, which is ultimately responsible for the damage that is produced in the material.
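The central quantity in such a calculation is the electronic stopping power, the energy deposited in the electrons per unit distance travelled by the projectile. As a rough illustration of how that number could be extracted from a first-principles run, the Python sketch below fits a straight line to a hypothetical trace of electronic energy versus proton path length; the arrays, values and function names are illustrative assumptions, not the team's actual data or code.

```python
# Minimal sketch: estimating electronic stopping power from a simulation trace.
# Assumes a hypothetical output of total electronic energy E(x) sampled at
# successive projectile positions x (both arrays are illustrative, not real data).

import numpy as np

def stopping_power(positions_ang, electronic_energy_ev):
    """Return the average electronic stopping power dE/dx in eV/Angstrom."""
    # Fit E(x) ~ S*x + const; the slope S is the energy transferred to the
    # electrons per unit distance travelled by the projectile.
    slope, _intercept = np.polyfit(positions_ang, electronic_energy_ev, 1)
    return slope

# Illustrative numbers only: a proton depositing roughly 4 eV per Angstrom.
x = np.linspace(0.0, 20.0, 200)                      # path length (Angstrom)
energy = 4.0 * x + 0.3 * np.random.default_rng(0).standard_normal(x.size)
print(f"Estimated stopping power: {stopping_power(x, energy):.2f} eV/Angstrom")
```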
The new method opens up the possibility of predicting the effect of radiation on a wide range of complex materials. The research applies not only to materials for nuclear applications but also to materials for the space industry and to processing techniques based on lasers and highly energetic ions. In biology and medicine, it may also contribute to understanding the effects of radiation on living tissue, in both damage and therapeutic processes.
The illustration above shows a model of the electronic wake (blue surfaces) generated by an energetic proton (red sphere) travelling in an aluminium crystal (yellow spheres). The resulting change in electronic density is responsible for the modification of chemical bonds between the atoms and consequently for a change in their interactions (image courtesy of Lawrence Livermore National Laboratory).
The research is highlighted on the cover of the May 25 issue of Physical Review Letters.
Simulating nuclear weapon performance
Meanwhile, researchers at Purdue University in Indiana, USA, and high-performance computing experts at Lawrence Livermore National Laboratory are perfecting simulations that show a nuclear weapon's performance in precise molecular detail. These tools are becoming critical for national defence because international treaties forbid the detonation of nuclear test weapons.
The simulations, which are needed to certify nuclear weapons more efficiently, may require 100,000 machines, a scale that is essential to reveal molecular-scale reactions taking place over milliseconds. The same types of simulations could also be used in areas such as climate modelling and studying the dynamic changes in a protein's shape.
Such highly complex jobs must be split into many processes that execute in parallel on separate machines in large computer clusters, said Saurabh Bagchi, an associate professor in Purdue University's School of Electrical and Computer Engineering. "Due to natural faults in the execution environment there is a high likelihood that some processing element will have an error during the application's execution, resulting in corrupted memory or failed communication between machines," Bagchi said. "There are bottlenecks in terms of communication and computation."
The longer a simulation runs before such a glitch is detected, the more these errors compound, which can cause the simulation to stall or crash altogether.
"We are particularly concerned with errors that corrupt data silently, possibly generating incorrect results with no indication that the error has occurred," said Bronis R. de Supinski, co-leader of the ASC Application Development Environment Performance Team at Lawrence Livermore. "Errors that significantly reduce system performance are also a major concern since the systems on which the simulations run are very expensive."
The researchers have developed automated methods to detect a glitch soon after it occurs.
"You want the system to automatically pinpoint when and in what machine the error took place and also the part of the code that was involved," Bagchi said. "Then, a developer can come in, look at it and fix the problem."
One bottleneck arises from the fact that data are streaming to a central server.
"Streaming data to a central server works fine for a hundred machines, but it can't keep up when you are streaming data from a thousand machines," said Purdue doctoral student Ignacio Laguna, who worked with Lawrence Livermore computer scientists. "We've eliminated this central brain, so we no longer have that bottleneck."
Each machine in the supercomputer cluster contains several cores, or processors, and each core might run one "process" during simulations. The researchers created an automated method for "clustering," or grouping the large number of processes into a smaller number of "equivalence classes" with similar traits. Grouping the processes into equivalence classes makes it possible to quickly detect and pinpoint problems.
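A minimal sketch of the equivalence-class idea follows, assuming each process reports a simple behavioural signature (here a hypothetical pair of current routine and progress counter); processes sharing a signature fall into one class, and very small classes point to the ranks worth inspecting. The signatures and threshold are illustrative assumptions, not the actual traits used by the researchers.

```python
# Minimal sketch: group processes that report the same behavioural signature
# into equivalence classes, then flag tiny classes as likely outliers.

from collections import defaultdict

def group_into_classes(process_states):
    """Map each distinct (routine, progress) signature to the ranks reporting it."""
    classes = defaultdict(list)
    for rank, signature in process_states.items():
        classes[signature].append(rank)
    return classes

def suspicious_classes(classes, threshold=2):
    """Classes with very few members often point at the misbehaving processes."""
    return {sig: ranks for sig, ranks in classes.items() if len(ranks) <= threshold}

# Illustrative data: ranks 0-6 behave alike, rank 7 is stuck in a different routine.
states = {r: ("exchange_halo", 120) for r in range(7)}
states[7] = ("mpi_wait", 87)

print(suspicious_classes(group_into_classes(states)))  # -> {('mpi_wait', 87): [7]}
```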
"The recent breakthrough was to be able to scale up the clustering so that it works with a large supercomputer," Bagchi said.
Lawrence Livermore computer scientist Todd Gamblin came up with the scalable clustering approach.
A lingering bottleneck in using the simulations is related to a procedure called checkpointing, or periodically storing data to prevent its loss in case a machine or application crashes. The information is saved in a file called a checkpoint and stored in a parallel file system separate from the machines on which the application runs.
"The problem is that when you scale up to 10,000 machines, this parallel file system bogs down," Bagchi said. "It's about 10 times too much activity for the system to handle, and this mismatch will just become worse because we are continuing to create faster and faster computers."
Doctoral student Tanzima Zerin and Rudolf Eigenmann, a professor of electrical and computer engineering, along with Bagchi, led work to develop a method for compressing the checkpoints, similar to the compression of data for images.
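The compression scheme itself is not described in this article; as a stand-in, the sketch below applies simple lossless zlib compression to a hypothetical checkpoint array before it is written out. It illustrates the bandwidth saving that checkpoint compression targets, even though the researchers' image-style method would differ.

```python
# Minimal sketch of checkpoint compression, assuming a hypothetical checkpoint
# laid out as a flat array of doubles. Not the Purdue/Livermore scheme; it only
# illustrates shrinking the data before it hits the parallel file system.

import zlib
import numpy as np

def write_checkpoint(path, state):
    """Compress the simulation state and write it as a checkpoint file."""
    raw = state.astype(np.float64).tobytes()
    with open(path, "wb") as f:
        f.write(zlib.compress(raw, level=6))

def read_checkpoint(path, shape):
    """Restore the state array from a compressed checkpoint file."""
    with open(path, "rb") as f:
        raw = zlib.decompress(f.read())
    return np.frombuffer(raw, dtype=np.float64).reshape(shape)

# Illustrative state: a smooth field compresses far better than random noise.
state = np.sin(np.linspace(0, 8 * np.pi, 1_000_000)).round(3)
write_checkpoint("ckpt.bin", state)
restored = read_checkpoint("ckpt.bin", state.shape)
assert np.array_equal(state, restored)   # lossless round trip
```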
"We're beginning to solve the checkpointing problem," Bagchi said. "It's not completely solved, but we are getting there."
The checkpointing bottleneck must be solved in order for researchers to create supercomputers capable of "exascale computing," or 1,000 quadrillion operations per second.
"It's the Holy Grail of supercomputing," Bagchi said.