Thursday, May 28, 2015

The Future Fabric of Data Analysis

When subatomic particles smash together at the Large Hadron Collider in Switzerland, they create showers of new particles whose signatures are recorded by four detectors. The LHC captures 5 trillion bits of data — more information than all of the world’s libraries combined — every second. After the judicious application of filtering algorithms, more than 99 percent of those data are discarded, but the four experiments still produce a whopping 25 petabytes (25×10¹⁵ bytes) of data per year that must be stored and analyzed. That is a scale far beyond the computing resources of any single facility, so the LHC scientists rely on a vast computing grid of 160 data centers around the world, a distributed network that is capable of transferring as much as 10 gigabytes per second at peak performance.
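As a rough sanity check, the figures quoted above fit together with simple arithmetic. The short Python sketch below uses only the numbers from this paragraph (5 trillion bits per second raw, 25 petabytes stored per year, 10 gigabytes per second peak transfer); treating the raw rate as continuous year-round is an assumption, so the raw-volume estimate is an upper bound.

# Back-of-the-envelope check of the data-volume figures quoted above.
# All inputs come from the article; assuming the 5 Tb/s rate runs
# continuously year-round overstates the true raw volume.

RAW_BITS_PER_SEC = 5e12             # "5 trillion bits ... every second"
STORED_BYTES_PER_YEAR = 25e15       # "25 petabytes of data per year"
PEAK_TRANSFER_BYTES_PER_SEC = 10e9  # "10 gigabytes per second at peak"

SECONDS_PER_YEAR = 365 * 24 * 3600

raw_bytes_per_year = RAW_BITS_PER_SEC / 8 * SECONDS_PER_YEAR
kept_fraction = STORED_BYTES_PER_YEAR / raw_bytes_per_year

print(f"raw data per year: {raw_bytes_per_year:.2e} bytes")
print(f"fraction kept after filtering: {kept_fraction:.3%}")   # well under 1 percent

# Time to move one year's stored output at peak grid speed:
days = STORED_BYTES_PER_YEAR / PEAK_TRANSFER_BYTES_PER_SEC / 86400
print(f"shipping 25 PB at 10 GB/s takes about {days:.0f} days")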

The LHC’s approach to its big data problem reflects just how dramatically the nature of computing has changed over the last decade. Since Intel co-founder Gordon E. Moore first defined it in 1965, the so-called Moore’s law — which predicts that the number of transistors on integrated circuits will double every two years — has dominated the computer industry. While that growth rate has proved remarkably resilient, for now, at least, “Moore’s law has basically crapped out; the transistors have gotten as small as people know how to make them economically with existing technologies,” said Scott Aaronson, a theoretical computer scientist at the Massachusetts Institute of Technology.


Instead, since 2005, many of the gains in computing power have come from adding more parallelism via multiple cores, with multiple levels of memory. The preferred architecture no longer features a single central processing unit (CPU) augmented with random access memory (RAM) and a hard drive for long-term storage. Even the big, centralized parallel supercomputers that dominated the 1980s and 1990s are giving way to distributed data centers and cloud computing, often networked across many organizations and vast geographical distances.

These days, “People talk about a computing fabric,” said Stanford University electrical engineer Stephen Boyd. These changes in computer architecture translate into the need for a different computational approach when it comes to handling big data, which is not only grander in scope than the large data sets of yore but also intrinsically different from them.

The demand for ever-faster processors, while important, isn’t the primary focus anymore. “Processing speed has been completely irrelevant for five years,” Boyd said. “The challenge is not how to solve problems with a single, ultra-fast processor, but how to solve them with 100,000 slower processors.” Aaronson points out that many problems in big data can’t be adequately addressed by simply adding more parallel processing. These problems are “more sequential, where each step depends on the outcome of the preceding step,” he said. “Sometimes, you can split up the work among a bunch of processors, but other times, that’s harder to do.” And often the software isn’t written to take full advantage of the extra processors. “If you hire 20 people to do something, will it happen 20 times faster?” Aaronson said. “Usually not.”
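Aaronson’s point about sequential steps is often quantified with Amdahl’s law, which the article does not name but which captures the same intuition: any inherently serial fraction of a job caps the speedup no matter how many processors are added. A minimal sketch, assuming a 5 percent serial portion purely for illustration:

# Amdahl's law: best-case speedup when serial_fraction of the work
# cannot be parallelized. The 5% serial fraction below is an assumed,
# illustrative value, not a figure from the article.

def amdahl_speedup(serial_fraction: float, n_workers: int) -> float:
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_workers)

for workers in (20, 1_000, 100_000):
    print(workers, round(amdahl_speedup(0.05, workers), 1))

# Output: 20 workers give only about a 10x speedup, and even 100,000
# workers top out near 20x, which is why hiring "20 people" rarely
# makes a job 20 times faster.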

Researchers also face challenges in integrating very differently structured data sets, as well as the difficulty of moving large amounts of data efficiently through a highly distributed network.

Those issues will become more pronounced as the size and complexity of data sets continue to grow faster than computing resources, according to California Institute of Technology physicist Harvey Newman, whose team developed the LHC’s grid of data centers and trans-Atlantic network. He estimates that if current trends hold, the computational needs of big data analysis will place considerable strain on the computing fabric. “It requires us to think about a different kind of system,” he said.

Strange Object Found -- "The Long-Sought 'Missing Link' That Creates a Neutron Star or Black Hole"

[Image: the core-collapse supernova remnant Cassiopeia A]
The object, called Supernova 2012ap (SN 2012ap), is what astronomers term a core-collapse supernova. This type of blast occurs when the nuclear fusion reactions at the core of a very massive star can no longer provide the energy needed to hold up the core against the weight of the star's outer layers. The core then collapses catastrophically into a superdense neutron star or a black hole. The rest of the star's material is blasted into space in a supernova explosion.
Astronomers using the National Science Foundation's Very Large Array (VLA) have found a long-sought "missing link" between supernova explosions that generate gamma-ray bursts (GRBs) and those that don't. The scientists found that a stellar explosion seen in 2012 has many characteristics expected of one that generates a powerful burst of gamma rays, yet no such burst occurred.
"This is a striking result that provides a key insight about the mechanism underlying these explosions," said Sayan Chakraborti, of the Harvard-Smithsonian Center for Astrophysics (CfA). "This object fills in a gap between GRBs and other supernovae of this type, showing us that a wide range of activity is possible in such blasts," he added.
The most common type of such a supernova blasts the star's material outward in a nearly-spherical bubble that expands rapidly, but at speeds far less than that of light. These explosions produce no burst of gamma rays.
In a small percentage of cases, the infalling material is drawn into a short-lived swirling disk surrounding the new neutron star or black hole. This accretion disk generates jets of material that move outward from the disk's poles at speeds approaching that of light. This combination of a swirling disk and its jets is called an "engine," and this type of explosion produces gamma-ray bursts.
The new research shows, however, that not all "engine-driven" supernova explosions produce gamma-ray bursts.
"This supernova had jets moving at nearly the speed of light, and those jets were quickly slowed down, just like the jets we see in gamma-ray bursts," said Alicia Soderberg, also of CfA.

[Image: SN 2012ap]
 An earlier supernova seen in 2009 also had fast jets, but its jets expanded freely, without experiencing the slowdown characteristic of those that generate gamma-ray bursts. The free expansion of the 2009 object, the scientists said, is more like what is seen in supernova explosions with no engine, and probably indicates that its jet contained a large percentage of heavy particles, as opposed to the lighter particles in gamma-ray-burst jets. The heavy particles more easily make their way through the material surrounding the star.
"What we see is that there is a wide diversity in the engines in this type of supernova explosion," Chakraborti said. "Those with strong engines and lighter particles produce gamma-ray bursts, and those with weaker engines and heavier particles don't," he added.
"This object shows that the nature of the engine plays a central role in determining the characteristics of this type of supernova explosion," Soderberg said.
Chakraborti and Soderberg worked with an international team of scientists from five continents. In addition to the VLA, they also used data from the Giant Metrewave Radio Telescope (GMRT) in India and the InterPlanetary Network (IPN) of spacecraft equipped with GRB detectors. The team, led by Chakraborti, is reporting its work in a paper accepted to the Astrophysical Journal. Other papers, led by co-authors Raffaella Margutti and Dan Milisavljevic, report on the X-ray and optical follow-up of SN 2012ap using a suite of space- and ground-based facilities.
In 2007, NASA’s Spitzer Space Telescope found the infrared signature of silica (sand) in the core-collapse supernova remnant Cassiopeia A, shown at the top of the page. The light from this exploding star first reached Earth in the 1600s. The cyan dot just off center is all that remains of the star that exploded. Image credit: NASA/JPL-Caltech/O. Krause (Steward Observatory)
Researchers from Washington University in St. Louis report finding tiny grains of silica, which they believe came from a core-collapse supernova, in primitive meteorites.
The Daily Galaxy via NRAO

Emergence of Spacetime --"Built by Quantum Entanglement"

 "It was known that quantum entanglement is related to deep issues in the unification of general relativity and quantum mechanics, such as the black hole information paradox and the firewall paradox," says Hirosi Ooguri, a Principal Investigator at the University of Tokyo's Kavli IPMU. "Our paper sheds new light on the relation between quantum entanglement and the microscopic structure of spacetime by explicit calculations. The interface between quantum gravity and information science is becoming increasingly important for both fields."

A collaboration of physicists and a mathematician has made a significant step toward unifying general relativity and quantum mechanics by explaining how spacetime emerges from quantum entanglement in a more fundamental theory. The paper announcing the discovery by Ooguri, with Caltech mathematician Matilde Marcolli and graduate students Jennifer Lin and Bogdan Stoica, will be published in Physical Review Letters.
Physicists and mathematicians have long sought a Theory of Everything (ToE) that unifies general relativity and quantum mechanics. General relativity explains gravity and large-scale phenomena such as the dynamics of stars and galaxies in the universe, while quantum mechanics explains microscopic phenomena from the subatomic to molecular scales.
The holographic principle is widely regarded as an essential feature of a successful Theory of Everything. It states that gravity in a three-dimensional volume can be described by quantum mechanics on a two-dimensional surface surrounding the volume. In particular, the three dimensions of the volume should emerge from the two dimensions of the surface. However, understanding the precise mechanism for the emergence of the volume from the surface has been elusive.
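The article does not write the relation down, but the standard quantitative statement behind the idea that entanglement builds geometry is the Ryu-Takayanagi formula, quoted here only as background context rather than as a result of the new paper: the entanglement entropy of a region A of the boundary surface is set by the area of the minimal surface in the bulk volume anchored on A.

% Ryu-Takayanagi relation (standard holographic formula, quoted for context;
% the specific calculation in the Lin-Marcolli-Ooguri-Stoica paper is not reproduced here)
\begin{equation}
  S_A = \frac{\mathrm{Area}(\gamma_A)}{4 G_N}
\end{equation}
% S_A: entanglement entropy of boundary region A
% \gamma_A: minimal bulk surface anchored on the boundary of A
% G_N: Newton's constant (units with \hbar = c = 1)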