Tuesday, January 23, 2018

Emergent stories

Steve Blundell has written a very nice article
Emergence, causation and storytelling: condensed matter physics and the limitations of the human mind

The article is lucid, creative, and stimulating.
He explores some issues that are of particular interest to philosophers, such as the difference between "weak" and "strong" emergence, which are sometimes called "epistemological" and "ontological" emergence, respectively.

Part of his argument is based on the fact that human minds are finite and constrained by the physical world and that "information is physical". Unlike the philosophers, he argues that emergence always has both an ontological and an epistemological character.

To illustrate his arguments Steve uses several beautiful examples.

Storytelling.
"To work, stories have to be succinct, told well, have a point and express some truth."
This is to accommodate the physical limitations of the human mind.

Number theory.
The integers are defined by the rules of a very simple algebra. Yet rich phenomena emerge, such as the statistics of the zeros of the Riemann zeta function [which encode the asymptotic distribution of the prime numbers] being described by random matrix theory.
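As a concrete aside on the "asymptotic distribution of the prime numbers" (my own illustration, not from the article), here is a minimal Python sketch comparing the prime-counting function pi(x) with the simple estimate x/ln(x) from the prime number theorem.

```python
import math

def sieve(n):
    """Sieve of Eratosthenes: is_prime[i] is True if i is prime, for i <= n."""
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            is_prime[p * p::p] = [False] * len(range(p * p, n + 1, p))
    return is_prime

N = 10 ** 6
is_prime = sieve(N)

# Compare the prime-counting function pi(x) with the estimate x / ln(x)
count = 0
checkpoints = {10 ** k for k in range(2, 7)}
for x in range(2, N + 1):
    count += is_prime[x]
    if x in checkpoints:
        print(f"x = {x:>8d}   pi(x) = {count:>6d}   x/ln(x) = {x / math.log(x):>9.0f}")
```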

Conway's game of life.
He considers "scattering" and the creation and destruction of "objects" such as spaceships, "Canada geese" (shown below), and "pulsars".


How emergence comes into play is described by the figure below.

This reminds me a bit of particle physics experiments. New entities emerge from the underlying rules encoded in the Standard Model.
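For anyone who wants to watch the emergence for themselves, here is a minimal Python/numpy sketch (my own aside, not from the article) of the game of life rules, started from a glider, the simplest "spaceship". The propagating object that emerges is not obvious from the update rule alone.

```python
import numpy as np

def step(grid):
    """One update of Conway's game of life with periodic boundaries."""
    # Sum of the eight neighbours via shifted copies of the grid
    nbrs = sum(np.roll(np.roll(grid, dx, axis=0), dy, axis=1)
               for dx in (-1, 0, 1) for dy in (-1, 0, 1)
               if (dx, dy) != (0, 0))
    survive = (grid == 1) & ((nbrs == 2) | (nbrs == 3))
    birth = (grid == 0) & (nbrs == 3)
    return (survive | birth).astype(int)

# Start from a glider, the simplest "spaceship"
grid = np.zeros((10, 10), dtype=int)
for r, c in [(1, 2), (2, 3), (3, 1), (3, 2), (3, 3)]:
    grid[r, c] = 1

for generation in range(5):
    print(f"generation {generation}:")
    print(grid, "\n")
    grid = step(grid)
```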

Spin ice.
Emergent gauge field and magnetic monopoles.
This is also discussed as an example of emergence in a 2016 article by Rehn and Moessner.

Friday, January 19, 2018

Observation of renormalised quasi-particle excitations

A central concept of quantum many-body theory is that of coherent quasi-particles. Their key property is a well-defined relationship between energy and momentum (a dispersion relation). Prior to the rise of ARPES (Angle-Resolved Photo-Emission Spectroscopy) over the past three decades, the existence of electronic quasi-particles was only inferred indirectly.

A very nice paper just appeared which shows a new way of measuring quasi-particle excitations in a
strongly correlated electron system. Furthermore, the experimental results are compared quantitatively to state-of-the-art theory, showing several subtle many-body effects.

Coherent band excitations in CePd3: A comparison of neutron scattering and ab initio theory 
Eugene A. Goremychkin, Hyowon Park, Raymond Osborn, Stephan Rosenkranz, John-Paul Castellan, Victor R. Fanelli, Andrew D. Christianson, Matthew B. Stone, Eric D. Bauer, Kenneth J. McClellan, Darrin D. Byler, Jon M. Lawrence

The mixed valence compound studied is of particular interest because, with increasing temperature, it exhibits a crossover from a Fermi liquid with coherent quasi-particle excitations to a bad metal with incoherent excitations.

The figure below shows a colour intensity plot of the dynamical magnetic susceptibility
at a fixed energy omega, as a function of the wavevector Q. The top three panels are from DFT+DMFT (Density Functional Theory + Dynamical Mean-Field Theory) calculations.

The bottom three panels are the corresponding results from inelastic neutron scattering.
A and B [D and E] are both at omega=35 meV and in two different momentum planes. C [F] is at omega=55 meV.
The crucial signature of coherence (i.e. dispersive quasi-particles) is the shift of the maxima from the G and R points at 35 meV to the M and X points at 55 meV.

It should be stressed that these dispersing excitations are not due to single (charged) quasi-particles, but rather to spin excitations, which are particle-hole excitations.

The figure below shows how the dispersion [coherence] disappears as the temperature is increased from 6 K (top) to 300 K (bottom). The solid lines are theoretical curves.
The figure below shows that the irreducible vertex corrections associated with the particle-hole excitations are crucial for quantitative agreement between theory and experiment. The top (bottom) panel shows the calculation with (without) vertex corrections.
The correction has two effects: First, it smooths out some of the fine structure in the energy dependence of the spectra while broadly preserving both the Q variation and the overall energy scale; and second, it produces a strong enhancement of the intensity that is both energy and temperature dependent, for example, by a factor of ~6.5 at omega = 60 meV at 100 K. This shows that the Q dependence of the scattering is predominantly determined by the one-electron joint density of states, as expected for band transitions, whereas the overall intensity is amplified by the strong electron correlations. 
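To unpack what "the one-electron joint density of states" means here, the following is a toy sketch (my own illustration, not the DFT+DMFT calculation of the paper): the imaginary part of the bare particle-hole bubble (the Lindhard susceptibility, i.e. no vertex corrections) for a hypothetical half-filled tight-binding band on a square lattice. The Q and omega dependence comes entirely from the band dispersion.

```python
import numpy as np

# Toy model: 2D square-lattice tight-binding band at half filling.
# All parameters (hopping t, temperature T, broadening eta, mesh size)
# are illustrative choices, not those of CePd3.
t, T, eta, N = 1.0, 0.05, 0.05, 64

k = np.linspace(-np.pi, np.pi, N, endpoint=False)
KX, KY = np.meshgrid(k, k)

def band(kx, ky):
    """Tight-binding dispersion; the chemical potential sits at zero (half filling)."""
    return -2.0 * t * (np.cos(kx) + np.cos(ky))

def fermi(e):
    return 1.0 / (np.exp(e / T) + 1.0)

def im_chi0(Q, omega):
    """Imaginary part of the bare particle-hole bubble (Lindhard function):
    Im chi0(Q, omega) ~ sum_k [f(e_k) - f(e_{k+Q})] delta(omega - (e_{k+Q} - e_k)),
    with a Lorentzian-broadened delta function and no vertex corrections."""
    ek, ekq = band(KX, KY), band(KX + Q[0], KY + Q[1])
    delta = (eta / np.pi) / ((omega - (ekq - ek)) ** 2 + eta ** 2)
    return np.mean((fermi(ek) - fermi(ekq)) * delta)

# Scan Q along the zone diagonal at fixed energy: the structure in Q
# reflects the joint density of states of the band.
omega = 1.0
for q in np.linspace(0.0, np.pi, 9):
    print(f"Q = ({q:.2f}, {q:.2f})   Im chi0 = {im_chi0((q, q), omega):.4f}")
```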
This landmark study is only possible due to recent parallel advances in theory, computation, and experiment. On the theory side, it is not just DMFT itself but also the inclusion of particle-hole interactions (vertex corrections) within DMFT.
On the computational side, it is new DMFT algorithms and increasing computer speed. On the experimental side, it is pulsed neutron sources and improvements in the sensitivity and the spatial and energy resolution of neutron detectors.

Monday, January 15, 2018

The emergence of BS in universities

The Chronicle of Higher Education has an excellent (but depressing) article, Higher Education is Drowning in BS, by Christian Smith.

In both scope and eloquence, this article goes far beyond my post, The rise of BS in science and academia. Furthermore, as a sociologist, Smith argues that one of the challenges is to think about the problem in collective (dare I say emergent!) terms, rather than just individualistic terms.
Essential to realize in all of this is that most of the BS is produced not by pernicious individuals, but instead by complex dysfunctions in institutional systems. It is easy to be a really good academic or administrator and still actively contribute to the BS. So we need to think not individualistically, but systemically, about culture, institutions, and political economies. Pointing fingers at individual schools and people is not helpful here. Sociological analysis of systems and their consequences is.
Smith also spells out the broader moral and political implications of the problems.

In the end, a key issue is that there are many competing ideas and interests concerning what a university is actually for. That ultimately comes from different values and world views, leading to different ethical, moral, and political perspectives. Nevertheless, the core mission should be clear: it is thinking about the world and training students to think.

I agree with Smith,
BS is the failure of leaders in higher education to champion the liberal-arts ideal — that college should challenge, develop, and transform students’ minds and hearts so they can lead good, flourishing, and socially productive lives — and their stampeding into the "practical" enterprise of producing specialized workers to feed The Economy.
Aside: One interesting feature of the comments on the article is how much the problem of students using cell phones in class gets discussed.

I thank Mike Karim for bringing the article to my attention.

Wednesday, January 10, 2018

Should we be concerned about irreproducible results in condensed matter physics?

The problem of the irreproducibility of many results in psychology and medical research is getting a lot of attention. There is even a Wikipedia page about the Replication Crisis. In the USA the National Academies have just launched a study of the problem.

This naturally raises the question of how big the problem is in physics and chemistry.

One survey showed that many chemists and physicists could not reproduce results of others. 

My anecdotal experience is that, for both experiments and computer simulations, there is a serious problem. Colleagues will often tell me privately that they cannot reproduce the published results of others. Furthermore, this seems to be a particular problem for "high impact" results published in luxury journals. A concrete example is the case of USOs [Unidentified Superconducting Objects]. Here is just one specific case.

A recent paper looks at the problem for the case of a basic measurement in a very popular class of materials.

How Reproducible Are Isotherm Measurements in Metal–Organic Frameworks? 
 Jongwoo Park, Joshua D. Howe, and David S. Sholl
We show that for the well-studied case of CO2 adsorption there are only 15 of the thousands of known MOFs for which enough experiments have been reported to allow strong conclusions to be drawn about the reproducibility of these measurements.
Unlike most university press releases [which are too often full of misleading hype] the one from Georgia Tech associated with this paper is actually quite informative and worth reading.

A paper worth reading is that by John Ioannidis, "Why most published research findings are false", as it contains some nice basic statistical arguments as to why people should be publishing null results. He also makes the provocative statement:
The hotter a scientific field (with more scientific teams involved) the less likely the research findings are to be true.
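For readers who have not seen the paper, here is a minimal sketch of the kind of statistical argument Ioannidis makes: the positive predictive value (the probability that a claimed positive finding is actually true) in terms of the pre-study odds R, the statistical power, and the significance level alpha. The scenario parameters below are my own illustrative choices.

```python
def positive_predictive_value(R, power, alpha):
    """
    Probability that a statistically significant finding is true, given
    pre-study odds R (true relationships : null relationships), statistical
    power (1 - beta), and significance threshold alpha.
    PPV = power * R / (power * R + alpha)   [Ioannidis, PLoS Medicine 2005]
    """
    return power * R / (power * R + alpha)

# Illustrative scenarios (the parameter values are my own choices)
scenarios = [
    ("well-powered study, plausible hypothesis", 0.5, 0.8, 0.05),
    ("well-powered study, long-shot hypothesis", 0.05, 0.8, 0.05),
    ("underpowered study, long-shot hypothesis", 0.05, 0.2, 0.05),
]
for label, R, power, alpha in scenarios:
    print(f"{label}: PPV = {positive_predictive_value(R, power, alpha):.2f}")
```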
I thank Sheri Kim and David Sholl for stimulating this post.

How serious do you think this problem is? What are the best ways to address the problem?

Friday, December 22, 2017

Postdoc available for strongly correlated electron systems

Ben Powell and I have just advertised for a new postdoc position to work with us at the University of Queensland on strongly correlated electron systems.

The full ad is here and people should apply before January 28 through that link.

Monday, December 18, 2017

Are UK universities heading over the cliff?

The Institute of Advanced Study at Durham University has organised a public lecture series, "The Future of the University." The motivation is worthy.
In the face of this rapidly changing landscape, urging instant adaptive response, it is too easy to discount fundamental questions. What is the university now for? What is it, what can it be, what should it be? Are the visions of Humboldt and Newman still valid? If so, how?
The poster is a bit bizarre. How should it be interpreted?


Sadly, it is hard for me to even imagine such a public event happening in Australia.

Last week one of the lectures was given by Peter Coveney, a theoretical chemist at University College London, on funding for science. His abstract is a bit of a rant, with some choice words.
Funding of research in U.K. universities has been changed beyond recognition by the introduction of the so-called "full economic cost model". The net result of this has been the halving of the number of grants funded and the top slicing of up to 50% and beyond of those that are funded straight to the institution, not the grant holder. Overall, there is less research being performed. Is it of higher quality because the overheads are used to provide a first rate environment in which to conduct the research?  
We shall trace the pathway of the indirect costs within U.K. universities and look at where these sizeable sums of money have ended up.  
The full economic cost model is so attractive to management inside research led U.K. universities that the blueprint is applied willy-nilly to assess the activities of academics, and the value of their research, regardless of where their funding is coming from. We shall illustrate the black hole into which universities have fallen as senior managers seek to exploit these side products of modern scientific research in U.K. Meta activities such as HEFCE's REF consume unconscionable quantities of academics' time, determine university IT and other policies, in the hope of attracting ever more income, but have done little to assist with the prosecution of more and better science. Indeed, it may be argued that they have had the opposite effect.  
Innovation, the impact on the economy resulting from U.K. universities' activities, shows few signs of lifting off. We shall explore the reasons for this; they reside in a wilful confusion of universities' roles as public institutions with the overwhelming desire to run them as businesses. Despite the egregious failure of market capitalism in 2008, their management cadres simply cannot stop themselves wanting to ape the private sector.
Some of the talk material is in a short article in the Times Higher Education Supplement. The slides for the talk are here.  I thank Timothee Joset, who attended the talk, for bringing it to my attention.

Thursday, December 14, 2017

Statistical properties of networks

Today I am giving a cake meeting talk about something a bit different. Over the past year or so I have tried to learn something about "complexity theory", including networks. Here is some of what I have learnt and found interesting. The most useful (i.e. accessible) article I found was a 2008 Physics Today article, The physics of networks by Mark Newman.


The degree of a node, denoted k, is equal to the number of edges connected to that node. A useful quantity for describing real-world networks is the probability distribution P(k), i.e. the probability that a randomly chosen node has degree k.

Analysis of data from a wide range of networks, from the internet to protein interaction networks, finds that this distribution has a power-law form,

P(k) ~ k^(-gamma),

which holds over almost four orders of magnitude.
A network with such a degree distribution is known as a scale-free network; the exponent gamma is typically between 2 and 3.
This power law is significant for several reasons.
First, it is in contrast to a random network, for which P(k) would be a Poisson distribution, which decays rapidly at large k, with a scale set by the mean value of k.
Second, if the exponent is less than three, then the variance of k diverges in the limit of an infinite size network, reflecting large fluctuations in the degree.
Third, the "fat tail" means there is a significant probability of "hubs", i.e. nodes that are connected to a large number of other nodes. This reflects the significant spatial heterogeneity of real-world networks. This property has a significant effect on others properties of the network, as I discuss below.

An important open question is whether real-world networks self-organise, in some sense, to produce these scale-free properties.

A question we do know the answer to is: what happens to the connectivity of the network when some of the nodes are removed?
It depends crucially on the network's degree distribution P(k).
Consider two different node removal strategies.
a. Remove nodes at random.
b. Deliberately target high degree nodes for removal.
It turns out that a random network is equally resilient to both "attacks".
In contrast, a scale-free network is resilient to a. but particularly susceptible to b.
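Here is a minimal simulation of the two removal strategies (again my own sketch, with arbitrary sizes and removal fractions), tracking the fraction of the original nodes left in the largest connected component.

```python
import random
import networkx as nx

def attack(g, fraction, targeted):
    """Remove a fraction of nodes (highest-degree first, or at random) and
    return the fraction of the original nodes left in the largest component."""
    n0 = g.number_of_nodes()
    g = g.copy()
    k = int(fraction * n0)
    if targeted:
        victims = [node for node, _ in
                   sorted(g.degree(), key=lambda pair: pair[1], reverse=True)[:k]]
    else:
        victims = random.sample(list(g.nodes()), k)
    g.remove_nodes_from(victims)
    return max(len(c) for c in nx.connected_components(g)) / n0

n = 5000
er = nx.fast_gnp_random_graph(n, 6 / (n - 1), seed=2)   # random network
ba = nx.barabasi_albert_graph(n, 3, seed=2)             # scale-free network

for label, g in [("random (ER)", er), ("scale-free (BA)", ba)]:
    print(f"{label}: giant component after removing 20% of nodes "
          f"at random = {attack(g, 0.2, targeted=False):.2f}, "
          f"by degree = {attack(g, 0.2, targeted=True):.2f}")
```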

Suppose you want to stop the spread of a disease on a social network. What is the best "vaccination" strategy? For a scale-free network, it is to target the highest-degree nodes, in the hope of producing "herd immunity".

It's a small world!
This involves the popular notion of "six degrees of separation". If you take two random people on earth, then on average one only has to go about six steps of "friend of a friend of a friend ..." to connect them. Many find this surprising, but it arises because the number of people you can reach grows exponentially with the number of steps: if each person has of order 40 acquaintances, then 40^6 is already several billion, comparable to the population of the planet.
Newman states that a more surprising result is that people are good at finding the short paths, and Kleinberg showed that the effect only works if the social network has a special form.
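As a rough check of the exponential-growth argument, the following sketch (arbitrary parameters, not Kleinberg's model) computes the average shortest path length in a random graph and compares it with the estimate ln(N)/ln(<k>).

```python
import math
import networkx as nx

N, mean_degree = 2000, 10   # arbitrary illustrative choices

g = nx.fast_gnp_random_graph(N, mean_degree / (N - 1), seed=3)
# Work on the largest connected component (a few nodes may be isolated)
giant = g.subgraph(max(nx.connected_components(g), key=len))

actual = nx.average_shortest_path_length(giant)
estimate = math.log(N) / math.log(mean_degree)   # "degrees of separation" estimate
print(f"average path length = {actual:.2f}   ln(N)/ln(<k>) = {estimate:.2f}")
```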

How does one identify "communities" in networks?
A quantitative method is discussed in this paper and applied to several social, linguistic, and biological networks.

A topic which is both intellectually fascinating and of great practical significance is the spread of disease on a network.

This is what epidemiology is all about. However, until recently, almost all mathematical models assumed spatial homogeneity, i.e. that every individual was equally likely to be infected. In reality, it depends on how many other individuals they interact with, i.e. the structure of the social network.

The crucial parameter for understanding whether an epidemic will occur turns out to be not the mean degree but the mean squared degree. Newman argues:
Consider a hub in a social network: a person having, say, a hundred contacts. If that person gets sick with a certain disease, then he has a hundred times as many opportunities to pass the disease on to others as a person with only one contact. However, the person with a hundred contacts is also more likely to get sick in the first place because he has a hundred people to catch the disease from. Thus such a person is both a hundred times more likely to get the disease and a hundred times more likely to pass it on, and hence 10 000 times more effective at spreading the disease than is the person with only one contact.
I found the following 2015 Reviews of Modern Physics article helpful.
Epidemic processes in complex networks 
Romualdo Pastor-Satorras, Claudio Castellano, Piet Van Mieghem, Alessandro Vespignani

They consider different models for the spread of disease. A key parameter is the infection rate lambda, which is the ratio of the transition rates for infection and recovery. One of the basic models is the SIR [susceptible-infectious-recovered] model, proposed in 1927 by Kermack and McKendrick [cited almost 5000 times!]; it was one of the first mathematical models of epidemiology.
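As a reminder of what the simplest (well-mixed, network-free) version looks like, here is a sketch integrating the standard SIR equations, dS/dt = -beta*S*I, dI/dt = beta*S*I - gamma*I, dR/dt = gamma*I, with scipy; the rates beta and gamma are illustrative choices, and lambda corresponds to the ratio beta/gamma.

```python
import numpy as np
from scipy.integrate import solve_ivp

beta, gamma = 0.3, 0.1   # infection and recovery rates (illustrative values)
# Here lambda = beta / gamma = 3, i.e. well above the threshold of the well-mixed model.

def sir(t, y):
    """Well-mixed SIR equations for the population fractions S, I, R."""
    S, I, R = y
    return [-beta * S * I, beta * S * I - gamma * I, gamma * I]

sol = solve_ivp(sir, (0, 150), [0.999, 0.001, 0.0], dense_output=True)
for t in np.linspace(0, 150, 7):
    S, I, R = sol.sol(t)
    print(f"t = {t:5.1f}   S = {S:.3f}   I = {I:.3f}   R = {R:.3f}")
```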

Behaviour is much richer (and more realistic) if one considers models on a complex network. Then one can observe "phase transitions" and critical behaviour. In the figure below rho is the fraction of infected individuals.

In a 2001 PRL [cited more than 4000 times!] it was shown using a "degree-based mean-field theory" that the critical value of lambda is given by lambda_c = <k>/<k^2>, the ratio of the mean degree to the mean of the squared degree.
In a scale-free network the second moment <k^2> diverges and so there is no epidemic threshold, i.e. an epidemic can occur for an infinitesimally small infection rate.
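To see how this works out in practice, here is a sketch (my own illustration, with arbitrary sizes) that evaluates lambda_c = <k>/<k^2> numerically for random and scale-free networks of increasing size; for the Barabási–Albert model the threshold decreases only slowly with size, while for the random graph it stays roughly constant.

```python
import numpy as np
import networkx as nx

def epidemic_threshold(g):
    """Degree-based mean-field estimate lambda_c = <k> / <k^2>."""
    k = np.array([d for _, d in g.degree()], dtype=float)
    return k.mean() / np.mean(k ** 2)

for n in (10 ** 3, 10 ** 4, 10 ** 5):
    er = nx.fast_gnp_random_graph(n, 6 / (n - 1), seed=4)   # Poisson degrees
    ba = nx.barabasi_albert_graph(n, 3, seed=4)             # scale-free
    print(f"N = {n:>6d}   lambda_c (random) = {epidemic_threshold(er):.3f}   "
          f"lambda_c (scale-free) = {epidemic_threshold(ba):.3f}")
```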

The review is helpful, but I would have liked more discussion of real data and of practical (e.g. public policy) implications. This field has significant potential because, due to internet and mobile phone usage, a lot more data about social networks is being produced.