Efficient information processing can be easier than economists usually think

Austin Frakt for The Upshot (my own highlights):

A great deal of the decrease in deaths from heart attacks over the past two decades can be attributed to specific medical technologies like stents and drugs that break open arterial blood clots. But a study by health economists at Harvard, M.I.T., Columbia and the University of Chicago showed that heart attack survival gains from patients selecting better hospitals were significant, about half as large as those from breakthrough technologies. […]

Rather than clinical quality, which is hard to perceive, patients may be more directly attuned to how satisfied they, or their friends and family, are with care. That’s something they can more immediately experience and is more readily shared.

Fortunately, most studies show that patient satisfaction and clinical measures of quality are aligned. For example, patient satisfaction is associated with lower rates of hospital readmissions, heart attack mortality and other heart attack outcomes, as well as better surgical quality.

This story is a striking illustration that simple heuristics such as "following other people's satisfaction" can be surprisingly good (and cheap) at processing information. In other words, processing information in a relevant way does not necessarily require sophisticated individuals with allegedly high cognitive skills – like homo economicus.

It's also (and to me, very) important to note that these heuristics are collective heuristics: it's not a single individual who processes information for the whole group. Rather, virtually every member of the group does a little bit of information processing, and what actually matters is the aggregated result. One patient with a bad experience in a given hospital is statistical noise: mistakes can happen. Many patients reporting bad experiences with that hospital probably means the hospital is actually not that good: a lot of mistakes means something is not going well.
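To make this aggregation idea concrete, here is a minimal toy simulation of my own (not from the study): each hospital has a true quality level, each patient only observes a very noisy version of it, and yet the average of many noisy experiences tracks true quality quite well.

```python
import numpy as np

rng = np.random.default_rng(42)

n_hospitals = 20
n_patients = 500     # patients reporting per hospital (assumed number)
noise_sd = 2.0       # individual experiences are assumed to be very noisy

# True (unobserved) quality of each hospital
true_quality = rng.normal(loc=0.0, scale=1.0, size=n_hospitals)

# Each patient observes quality plus a lot of idiosyncratic noise
experiences = true_quality[:, None] + rng.normal(0.0, noise_sd, size=(n_hospitals, n_patients))

# A single patient's report is only weakly informative...
single_report = experiences[:, 0]
print("correlation, one report vs true quality:",
      np.corrcoef(single_report, true_quality)[0, 1].round(2))

# ...but the aggregated word-of-mouth signal is very informative
aggregated = experiences.mean(axis=1)
print("correlation, average of 500 reports vs true quality:",
      np.corrcoef(aggregated, true_quality)[0, 1].round(2))
```

Of course this is only a cartoon of the mechanism: it assumes patients' errors are independent and unbiased, which real word-of-mouth certainly isn't.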

Source: The Life-Changing Magic of Choosing the Right Hospital – The New York Times

Are economists actually “boring”?

Saw this on Twitter today:

Before retweeting it, I decided to check the source, mainly to be sure it isn't a hoax. Well, it is not a hoax, but the author himself wrote: "This no peer review, and I wouldn't describe this as a "study" in anything other than the most colloquial sense of the word".

On the source website, the RateMyProfessor.com reviews can be sorted into positive and negative ones. It made me wonder about the context in which the word was written: "boring" in "this class was boring" doesn't mean the same thing as "boring" in "this class was everything but boring" or "I expected a boring class but in the end it wasn't". Can we control for that? Short answer: we can't. At least, not easily with this graph. Again, the author's own words.

Reasonably, all this graph shows is that students use the word "boring" more often when they write reviews of economics classes on RateMyProfessor.com. To say what? Hard to tell. And is RateMyProfessor.com data generalisable? Naturally, I wouldn't trust this kind of website – where control is very limited, leading to very noisy data. For instance, are the students leaving reviews there representative of all students? We don't know.

In the end, the most striking thing about this tweet is the kind of debate it ignited – especially on Facebook. I have to admit it is a bit disturbing to see many researchers (i.e. people with a PhD and all) not checking the source at all, yet still trying to figure out why economics would be boring, with arguments like "economics uses math for ideological reasons; this graph is another proof of that". I mean, aren't researchers among the best in the world at checking anything before trusting it? For someone claiming that economics is an ideology, that's not a very scientific way to assess things!

Anyway, this is still a funny tweet, and I guess I’ll eventually retweet it. As an economist I may be “boring“, but I also display some basic form of self-mockery. Better than nothing I guess…

Is most published research wrong? Or why incentives actually matter in science

Doing science is hard. Like, really hard. Because even if you do your best and follow all the best practices, there is still a good chance your results will be "wrong" – because of type I errors.

But worse than that, incentives to get published can deter researchers from following these best practices. If, like me, you're an economist, you won't be surprised by that – humans respond to incentives. Social psychologist Brian Nosek summarised it well:

There is no cost to getting things wrong. The cost is not getting them published.

This 12-minute video by Veritasium is one of the clearest explanations I know of type I errors and how incentives actually shape scientific discoveries. A must-watch!
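To make the type I error point concrete, here is a minimal back-of-the-envelope sketch (the numbers are illustrative assumptions of mine, not the video's): if only a small fraction of the hypotheses researchers test are actually true, then even with the conventional 5% significance threshold and decent statistical power, a sizeable share of "significant" findings are false positives.

```python
# Back-of-the-envelope: share of false positives among "significant" results.
# All three numbers below are illustrative assumptions.
prior_true = 0.10   # fraction of tested hypotheses that are actually true
power = 0.80        # P(significant result | hypothesis is true)
alpha = 0.05        # P(significant result | hypothesis is false), i.e. the type I error rate

true_positives = prior_true * power           # 0.08 of all tests
false_positives = (1 - prior_true) * alpha    # 0.045 of all tests

share_false = false_positives / (true_positives + false_positives)
print(f"Share of significant findings that are false: {share_false:.0%}")  # ~36%
```

And that is before publication bias and p-hacking, which is exactly where the incentives discussed in the video come in.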

Why start an academic blog?

There is a growing trend in academia claiming that academic blogging is important, both to strengthen the societal impact of research and to improve the quality of research itself 1.

I'm aware that social sciences in general, and economics in particular, are not that well understood by policy designers – hence the importance of working on a better societal impact. But my primary goal with this blog is not to talk to an audience of non-specialists. Instead, I explicitly target my peers. These Notes, then, will likely be technical – at least, when jargon is called for, I will not avoid it. What I want is very clear: to improve my research, both by writing down ideas and thoughts, and by expanding my network of both people and knowledge 2.

The reason why I kinda need to start this blog now is pretty clear to me: I've recently decided to dive into complexity in economics, and this is a very new, hard field 3. I strongly believe that complexity economics can dramatically expand the way we, economists, model (and think about) society, by allowing us to design simple non-equilibrium models. But this is only a promise (and a huge bet 4), and a lot of work is needed to make this promise real. This blog is clearly here to help – by allowing me to connect with other researchers working in complexity economics (or agent-based models in economics, or agent-based computational economics), to clarify my thoughts and to keep a record of them.

A last word about my English: I'm a native French speaker (and writer), and I've learnt to write (and speak and understand and read) English on the job. So I apologize in advance for any linguistic mistakes I may make.

Oh, I almost forgot: do not hesitate to interact with me and the blog. You can write a comment on any post (this one and the future ones), and you can also follow me on Twitter and/or like the Facebook page. Any feedback or discussion will of course be deeply appreciated!