Exasperated by speed cameras

This week the government announced that the plan to remove the signs announcing speed cameras has been put on hold. Only 13 signs have been removed out of roughly 1,000. The decision follows an appeal by members of parliament whose constituents had expressed their “deep exasperation” at the removal of these signs and, more generally, at automated enforcement.

Fortunately, road safety is one of the topics most thoroughly analyzed by public authorities. So, numbers in hand, what is the right decision?

Are speed cameras, as some say, merely a way to bring money into the State's coffers? Or do speed cameras save lives?

The role of speed in road accidents

The theory linking speed, the number of accidents and the number of casualties is Nilsson's model, established in the early 1980s. To simplify: if the average speed increases by 1%, the number of accidents increases by 2% and the number of deaths by 4%. The model has held up empirically, which is why it is still used 30 years later.

That said, when this model talks about speed going up or down, it means the average speed with the same distribution; that is, there will still be just as many people driving 10% faster than average, 20% faster, and so on.
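As a back-of-the-envelope illustration (a sketch only, with hypothetical baseline figures), the simplified exponents quoted above can be written in a few lines of Python:

```python
# Minimal sketch of Nilsson's power model: accidents scale with the
# square of the mean-speed ratio, deaths with its fourth power.
def nilsson(v_before, v_after, accidents_before, deaths_before):
    ratio = v_after / v_before
    return accidents_before * ratio ** 2, deaths_before * ratio ** 4

# A 1% increase in average speed, with hypothetical baselines of
# 1,000 accidents and 100 deaths:
accidents, deaths = nilsson(100, 101, 1000, 100)
print(round(accidents, 1), round(deaths, 1))  # 1020.1 104.1
```

The 1% example recovers the +2% and +4% figures quoted above.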

Now, what exactly do we know about these excessive speeds, precisely the ones that result in fines?

According to La sécurité routière en France – BILAN 2009, page 168:

Speed is a particular accident factor in that it is almost always present in a collision, whether inappropriate for the circumstances or outright excessive.

Speeding and speed cameras

But have speed cameras been effective against excessive speeds? Here again we can check, numbers in hand.

The introduction of speed cameras increased the number of speed checks enormously: from about 1,350,000 in 2002 to nearly 10 million today. Over the same period, the proportion of vehicles driving more than 10 km/h over the speed limit dropped from about 40% to 10%, and from 5% to 0.5% for those driving more than 30 km/h over the limit.
The MPs' letter notes that accidents have been halved in 20 years while traffic has grown by 80%. It would be more accurate to say that road deaths have been halved… following the introduction of automated speed cameras.

The cost of accidents

A recurring exercise in road-safety reports is putting a figure on what road accidents cost society. Readers interested in the details can consult the Bilan 2009 de la Sécurité Routière, page 35 and following.

The unit and total costs are as follows:

Category                   Number   Unit cost (€)   Total cost (€)
Deaths                      4,273       1,254,474    5,360,367,402
Serious injuries           33,323         135,526    4,516,132,898
Light injuries             57,611           5,421      312,309,231
Injury accidents           72,315           6,526      471,927,690
Damage-only accidents   1,997,140           6,526   13,033,335,640
Total                                               23,694,072,861

The total cost of road accidents thus came to 23.7 billion euros in 2009. That is roughly one accident every 15 seconds.

In return, the State collected €212,000,000 in fines (budget line “Contrôle de la circulation et du stationnement routiers”).

That is… 112 times less.
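The arithmetic in this section can be double-checked in a few lines of Python, using the counts and unit costs from the table above:

```python
# Counts and unit costs (EUR) from the 2009 road-safety figures.
items = {
    "deaths": (4_273, 1_254_474),
    "serious injuries": (33_323, 135_526),
    "light injuries": (57_611, 5_421),
    "injury accidents": (72_315, 6_526),
    "damage-only accidents": (1_997_140, 6_526),
}
total_cost = sum(count * unit for count, unit in items.values())
print(f"{total_cost:,} EUR")  # 23,694,072,861 EUR

# Roughly one accident every 15 seconds:
accidents = items["injury accidents"][0] + items["damage-only accidents"][0]
print(round(365.25 * 24 * 3600 / accidents, 1))  # 15.2

# Fine revenue compared with the total cost:
print(round(total_cost / 212_000_000))  # 112
```

The sum matches the reported 23.7 billion euros, and fine revenue is indeed about 112 times smaller.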

A statistical paradox

Even with dangerous driving, the probability of having an accident is relatively low. About 70,000 French people are injured each year, barely more than 1 in 1,000, and fewer than 7 in 100,000 die on the road each year (4,273 deaths for a population of about 65 million). However serious they are, these events are rare enough not to be part of the daily life of most French people or their circle.

The odds of being flashed, on the other hand, are much higher. With 10 million tickets per year for about 40 million drivers, each driver has roughly a one-in-four chance of getting a fine. For a group of 10 drivers at equal risk (family, colleagues…), there is about a 94% chance that at least one of them will get one.
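That group figure follows from a simple independence assumption (a simplification: 10 million tickets for 40 million drivers is 0.25 tickets per driver on average, not exactly a 25% probability per driver, since some drivers collect several):

```python
# Chance that at least one driver in a group of 10 gets flashed,
# assuming each independently has a 1-in-4 chance in a year.
p_ticket = 10_000_000 / 40_000_000  # 0.25
p_at_least_one = 1 - (1 - p_ticket) ** 10
print(f"{p_at_least_one:.1%}")  # 94.4%
```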

No wonder, then, that to the French the annoyance of these fines feels far more significant than the catastrophic consequences of a serious accident. Yet the consequences of speeding are much worse than the necessary evil of speed cameras. Rather than yielding to the sirens of demagogy, the government's role should be to defend the public interest and to explain its action to the French.

Better Life Index – a post mortem

Today, OECD launched Your Better Life Index, a project I had been involved in for months.

The short

I’m happy with the launch, the traffic has been really good, and there was some great coverage.

The three main lessons are:

  • Just because data is presented in an interactive way doesn't mean it's interesting. The project works because the design was well-suited to it. That design may not have worked in other contexts, and other tools may not have worked in this one.
  • The critical skill here was design. There are many able developers, but the added value of this project was the team's ability to invent a form that is unique, excellent and well-suited to the project.
  • Getting external specialists to develop the visualization was the critical decision. Not only did we save time and money, but the quality of the outcome is incomparable. What we ended up with was far beyond what we could have done ourselves.

The less short

Before going any further, I remind my kind readers that while OECD pays my bills, views are my own.

Early history of the project

OECD had been working on measuring progress for years, researching alternative ways to measure the economy. In 2008-2009 it got involved in the Stiglitz-Fitoussi-Sen commission, which came up with concrete recommendations on new indicators to develop. It was then that our communication managers started lobbying for a marketable OECD indicator, like the UN's Human Development Index or Transparency International's Corruption Perception Index.

The idea was to come up with some kind of Progress Index that we could communicate once a year or so. Problem: this went directly against the recommendations of the commission, which warned against an absolute, top-down ranking of countries.

Eventually, we came up with an idea. A ranking, yes, but not one definitive list established by experts. Rather, it would be a user’s index, where said user would get their own index, tailored to their preferences.

Still, the idea of publishing such an index met some resistance; some countries did not like the idea of being ranked… But at some point in 2010 those reservations were overcome and the idea was generally accepted.

Our initial mission

It was then that my bosses asked a colleague and me to start working on what such a tool could look like, and we began from the data we had at hand. I'll skip the details, but we first came up with something in Excel, a big aggregation of many indicators. It was far from perfect (and further still from the final result) but it got the internal conversation going.

Meanwhile, our statistician colleagues were working on new models to represent inequality and started collecting data for a book on a similar project, which will come out later this year (“How is Life?”). It made sense to join forces: we would use their data and their models, and develop an interactive tool while they wrote their book, each project supporting the other.

From prototypes to outsourcing

It wasn't clear then how the tool would be designed. Part of our job was to look at the many similar attempts online. We also cobbled together some interactive prototypes: I made one in Processing, my colleague one in Flash. Those models were quite close to what we had seen, really. Mine was textbook stuff, one single screen, linked bar charts. Quite basic, too.

I was convinced that to be marketable, our tool needed to be visually innovative. Different, yes, but not against the basic rules of infovis! No glossy 3D pie charts or that kind of thing. Unique, but in a good way. There was also some pressure to use the infovis tools OECD already had. We have one, for instance, which we had introduced for subnational statistics and which was really good for that; we have used it since then in other contexts, with mixed success. My opinion was that using that tool as-is on this project would bury it.

That's my first lesson here. Let me take a few steps back.

In 2005, the world of public statistics was shaken by the introduction of Gapminder. The way that tool presented statistics, and the huge success of the original TED talk – which attracted tens of millions of viewers – prompted every statistical office to consider using data visualization, or rather, in our words, to produce “dynamic charts”, as if the mere fact that Gapminder was interactive were the essence of its success. The bulk of such initiatives was neither interesting nor successful. While interactivity opens new possibilities, it is a means and certainly not an end in itself. Parenthesis closed.

At this stage, the logical conclusion was that we needed a new tool, developed from scratch and specifically suited to the project. Nothing less would give it the resonance we intended. My colleague lobbied our bosses, who took it to their bosses, and all the way to the Secretary-General of OECD. This went surprisingly well, and soon enough we were granted a generous budget and tasked with finding the right talent to build the tool.

Selecting talent

So our job shifted from creating a tool and campaigning for the project to writing specifications that external developers could understand. We had to “unwrite” our internal notes describing our prototypes and rewrite them more abstractly, describing functionality rather than how we thought we could implement it (i.e. “the user can select a country” rather than “when they click on a country, the middle pane changes to bla bla bla”).

Being a governmental organization, we also had to go through a formal call-for-tenders process, with a minimum number of bidders and an explicit decision process that could justify our choices.

This process was both very difficult and very interesting. Difficult because we had many very qualified applicants, and not only could we choose just one, but that choice had to be justified and vetted by our bosses, which would take time. And it was rewarding because each applicant took a different approach to the project and to the selection process. What heavily influenced the decision (nod to the second lesson I outlined) was whether the developers showed the potential to create something visually unique. We found that many people could answer, functionally, what we had asked. But the outcome probably wouldn't have matched the unspoken part of our specifications. We needed people who could take the project beyond technical considerations and imbue it with the creative spirit that would make it appealing to the widest audience.

Working with a developer

When we officially started to work with the selected developer – a joint effort by Moritz Stefaner and RauReif – some time had passed since we had introduced the project to them. When Moritz started presenting some visual research (which, by the way, has very little to do with the final site) I was really surprised by how different it was from what we had been working on. And that's my third lesson here.

We had become unable to start again from a blank sheet of paper and re-imagine the project from scratch. We were so conditioned by the other projects we had seen and by our past prototypes that we lacked that mental agility. That's a predicament that just can't affect an external team. Besides, even if we had had our developers' mastery of Flash or visual design (and we don't), we still had our normal jobs to do, meetings to attend and all kinds of office contingencies; we just couldn't have been that productive. Even with equivalent talent in-house, it would still have been more effective to outsource.

What I found most interesting in our developers' approach is that it underplayed the precision of the data. The scores of each country were not shown, nor were the components of that score. That level of detail was initially hidden, which produced a nice, simple initial view. But added complexity could be revealed by selecting information, following links and so on. At any time, the information load remained manageable.

Two things happened in a second phase. On one hand, Moritz had the brilliant idea of a flower. I instantly loved it, as did the colleagues who had worked with me since the start of the project. But it was a very hard sell to our management, who would have liked something more traditional. Yet that flower form was exactly what we were after: visually unique, a nice match with the theme of the project, aesthetically pleasing, an interesting construction, many possibilities of variation… Looking back, it's still not clear how we managed to impose an idea that almost every manager hated. The most surprising thing is that one month later everybody had accepted it as self-evident.

On the other hand, the written part of the web site, initially an afterthought, really gained momentum and importance, both in content and in design. Eventually the web site would become half of the project. What's interesting is that the project caters to all depths of attention: it takes 10 seconds to create an index, 1 minute to play with various hypotheses and share the results on social networks, but one could spend 10 more minutes reading the page of a country or a topic, and several hours checking the reference texts linked from those pages…

Closing thoughts

Fast forward to the launch. I just saw a note from Moritz saying that we got 60k unique visitors and 150k page views, about 12 hours after the site was launched (and, to be honest, it was down for a couple of those 12 hours, but things are fine now)! Those numbers are very promising.

When we started on this project we had an ambition for OECD. But beyond that, I hoped to convince our organization and others of the value of developing high-quality dataviz projects to support their messages. So I am really looking forward to seeing the similar projects this one might inspire.

An analysis of two New York Times interactive visualizations

In the field of information visualization, professing one's admiration for the work of the New York Times is not a very bold statement. My point, however, is that they are admired mostly for the wrong reason (excellence in visual design and aesthetics). By that, I don't mean that producing a visually pleasing experience is unimportant, but rather that the work of the NYT graphics team deserves even more praise for its conception than for its execution.

In the two examples I have chosen, I highlight aspects of their work that deserve to be emulated with more dedication than their trademark visual style.

The examples

You fix the budget

You fix the budget, New York Times, November 13th 2010

Those are: You Fix the Budget, published on November 13th, 2010, and the recent The Death of a Terrorist: A Turning Point?, published on May 3rd, 2011.

Death of a terrorist - a turning point

Death of a terrorist - a turning point, New York Times, May 3rd 2011

Putting the user in charge

In both examples the visualizations work by asking for the user's opinion in a very simple, non-intrusive manner. In the budget example the user can check or uncheck boxes. Each box is attached to a highly legible text that easily prompts a reaction. The title alone (e.g. “cut foreign aid in half”), always short and to the point, is enough for the user to take a position: agree (and check the box) or disagree. In a possible second phase, the user can read a more detailed description and see how much money a given measure would save.

All in all, the experience is non-directive and feels user-controlled. In typical information visualizations (say, Gapminder), even with many controls, the user is left in the spectator's seat: the data unfold, they can be presented differently, but the output cannot be changed. Here, conversely, we have a simulation: by capturing a certain number of key inputs from the user, it can produce different outcomes.

The same can be said of the Osama Bin Laden piece. The user simply positions their mood on a map; in one gesture they answer two questions. Then they can speak their mind. While this takes little energy from the user, the system collects, through this simple interaction, a very precise answer that can be aggregated with everyone else's.

Each user input affects the overall shape of the visualization. By using it, people naturally reshape it. Again, the question is non-directive (although, in all fairness, extreme positions do seem to be made more appealing by this presentation). There is no right or wrong answer. The authors of the visualization are not lecturing people on how they should feel or react to the event; likewise, they did not weigh too heavily on one side or the other of the political spectrum in the budget puzzle. I did feel a slight bias, but I think they did their best to be objective. By letting users experiment with the options at their disposal, they encourage them to form their own opinion.

The visualization reacts to me

So we've established that the user is in charge in both cases. The visualization reinforces that feeling by providing clear feedback when the user interacts with it, even if this is not the “end” of the experience. For instance, every box checked or unchecked causes mini-panels to rotate in the budget puzzle, evidence that something is happening, that the system is taking the user into account. Technically, these transitions are absolutely unnecessary, but they really support the idea that the user is in charge and that even the most innocuous input is taken into account.

This relates to me

When discussing budgets it's easy to get carried away in a swirl of millions, billions, and the like. This is why it is not uncommon to see, even in the most serious publications, writers who, by honest mistake, divide or multiply an economic indicator by a factor of a thousand or a million. It is not very effective to present such big numbers without a referent, especially to a non-specialist audience. I don't know what a billion dollars is; it's too abstract. A million people? That feels awfully like 2 million people, or 100,000 people, to me.

I think it is pointless to try to “educate” citizens in the hope that they will remember “important statistics” like GDP. Those large, abstract numbers don't relate to them, and they don't need them in their daily lives. That said, every citizen can make an informed decision based on their values if the facts are presented in a way that speaks to them. For instance, whether the Medicare budget should be cut by $10 billion per year is a difficult question. But whether the eligibility age should be raised to 68 frames the question in a way that does relate to users.

For the death-of-a-terrorist piece, my initial reaction was to look for the words of people in the same quadrant as me. Do they feel as I do? What about those in very different parts of the matrix? How do they put their feelings into words? I relate to both groups, differently, but in a way that interests me and encourages me to interact further. I also see that I am not part of the majority. That, again, tells me something based on my relationship with the visualization and the respondents. This relationship is enabled by the author, but again, not directed.

Going further – game mechanisms in visualization

Letting the user manipulate parameters that change not only how data is represented, but the data itself, is not unlike a videogame. Many games are really a layer of abstraction over an economic simulation, like Sim City or (gasp) Farmville. There is now ample research in gamification, the introduction of game mechanics into non-game contexts. Such mechanics can make visualizations more compelling and engaging for users and, by putting them in the right state of mind, can improve the transmission of ideas and opinions.