AI News, Applying machine learning to the universe's mysteries
- On Sunday, June 3, 2018
The researchers trained neural networks, layered computational models that function as a kind of collective digital brain, to analyze and interpret images of the simulated particle debris left over from the collisions.
The images used in this study, relevant to particle-collider nuclear physics experiments at Brookhaven National Laboratory's Relativistic Heavy Ion Collider and CERN's Large Hadron Collider, recreate the conditions of a subatomic particle 'soup': a superhot fluid state known as the quark-gluon plasma, believed to have existed just millionths of a second after the birth of the universe.
These collisions are believed to liberate particles inside the atoms' nuclei, forming a fleeting, subatomic-scale fireball that breaks down even protons and neutrons into a free-floating form of their typically bound-up building blocks: quarks and gluons.
Researchers hope to pin down the precise conditions under which this quark-gluon plasma forms, such as how much energy is packed in, and its temperature and pressure as it transitions into a fluid state. Learning those conditions would yield new insights about its component particles of matter and their properties, and about the universe's formative stages.
'We thought: if we have some visual scientific data, maybe we can get an abstract concept or valuable physical information from it,' said Wang, who added, 'With this type of machine learning, we are trying to identify a certain pattern or correlation of patterns that is a unique signature of the equation of state.'
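The approach Wang describes, scanning an image of simulated collision output for patterns that signal a particular equation of state, is essentially image classification with a convolutional network. The sketch below is a minimal, hypothetical illustration of that idea in plain NumPy: a single convolution filter, a ReLU, global average pooling, and a logistic score that could stand in for the probability of one candidate equation of state. The image contents, layer sizes, and class labels here are assumptions for illustration, not the actual network or data used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(img, kernel):
    """Valid 2-D cross-correlation of a single-channel image with one filter."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def forward(img, kernel, weight, bias):
    """Conv -> ReLU -> global average pool -> logistic score in (0, 1)."""
    feat = np.maximum(conv2d(img, kernel), 0.0)  # ReLU nonlinearity
    pooled = feat.mean()                         # global average pooling
    score = pooled * weight + bias               # linear classification head
    return 1.0 / (1.0 + np.exp(-score))          # sigmoid: class probability

# Hypothetical stand-in for one simulated collision "image", e.g. particle
# density binned over momentum and azimuthal angle.
image = rng.random((16, 16))
kernel = rng.standard_normal((3, 3))
prob = forward(image, kernel, weight=2.0, bias=-0.5)
print(prob)  # a probability between 0 and 1
```

In practice the study's networks would have many filters and layers, and the weights would be learned from large sets of labeled simulations rather than drawn at random; this sketch only shows the shape of the computation.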
When researchers employed an array of GPUs (graphics processing units, chips originally created to enhance video game effects that have since found a wide variety of uses) working in parallel, they cut the analysis time down to about 20 minutes per image.
Discussions are already underway to apply the machine learning tools to data from actual heavy-ion collision experiments, and the simulated results should be helpful in training neural networks to interpret the real data.
- On Thursday, February 27, 2020
Reactive Systems and Microservices: Michael Mahoney
Computationally-Intensive Machine Learning at the Tera-Scale. Michael W. Mahoney, UC Berkeley. One of the important aspects about recent work in deep ...
Inferring Regulatory Networks from Experimental Morphological Phenotypes: A Computational Method...
Inferring Regulatory Networks from Experimental Morphological Phenotypes: A Computational Method Reverse-Engineers Planarian Regeneration. Daniel ...
Data Innovation Day: Visualizing Public Opinion, Ken Goldberg
Ken Goldberg, UC Berkeley, Engineering, CITRIS.
2018 Isaac Asimov Memorial Debate: Artificial Intelligence
Isaac Asimov's famous Three Laws of Robotics might be seen as early safeguards for our reliance on artificial intelligence, but as Alexa guides our homes and ...
Hilary Putnam on the Philosophy of Science (1977)
In this program, world-renowned author and professor Bryan Magee and Hilary Putnam of Harvard examine current philosophical thought that dismisses the ...
Topological quantum computing with Majorana Fermions
Research in quantum computing has offered many important new physical insights as well as the potential of exponentially increasing the computational power ...
Microscopy: High Throughput Microscopy (Jan Ellenberg)
Determining the genes involved in different cellular processes is essential for understanding how cells function. However, with 23,000 protein-coding genes in ...
Doing Science at Berkeley
First Lecture Series: Physical Science. Presented by Professor (and Dean) Bob Jacobsen on August 16, 2017, to a group of new freshmen and transfer students ...