
Introduction

Readability refers to how easy it is to read and understand a piece of text. In this note, I examine the readability of Bank of Canada publications for the years 2015–17, using a common metric for this type of evaluation. In particular, I gauge whether the Bank’s publications are written at a level that makes them accessible to audiences beyond economists, market participants and other Bank watchers. Overall, I find that Bank publications are not as easy to read as news articles and other content that the Bank’s audiences would normally consume. Still, the Bank ranks well among its international peers.

Clarity is crucial

Clear communications are important for any public institution. They allow audiences to understand and evaluate public policy, which is important for accountability. In its 2017 annual report, the Bank said it is “committed to open and transparent communication with Canadians about its policies, actions and analysis” and that the goal is “to communicate with Canadians clearly and effectively, while providing a comprehensive account of the Bank’s activities.”1

For a central bank, clear communication is essential. Research has highlighted how macroeconomic outcomes can be improved by aligning households’ and firms’ expectations with a central bank’s policy objectives. It follows, then, that a central bank that communicates simply and with clarity should have more influence over those expectations and their alignment with the central bank’s objectives.

Readability can be measured

Readability is key to making communications clear and accessible to a broad audience. Readability formulas like the one used in this study provide an efficient, objective and consistent methodology to assess how easy—or how difficult—publications are to read. That said, these types of mechanical formulas have many shortcomings and offer only a partial evaluation. Robert Gunning, the developer of the Gunning Fog methodology used in this study, cautioned, “The Fog Index is a tool, not a rule. It is a warning system, not a formula for writing.”2

Readability formulas

At its broadest, the concept of readability includes not just the written text itself but also aspects such as the layout, visual elements and context. For example, the Bank’s web page for the new $10 polymer note is easy to read, with strong visual elements and interactive features that help the reader understand the content. Despite efforts by researchers, though, we currently don’t have tools that can measure the overall readability of a piece of content. We can only rely on subjective impressions informed by experience.

How formulas work

We do have tools that assess readability by using algorithms that measure the difficulty of a text through features such as the average length of words and sentences.

Researchers started creating these formulas in the 1920s to establish an easy and reliable way to assign texts to students, a major challenge at the time. They correlated the scores of their formulas with the results of reading comprehension tests. This allowed them to predict that a text with a certain score would be understood by a given share of students in a particular grade level.3

While many readability indexes are available, I use the Gunning Fog Index (GFI) in this study, because it’s the one most often recommended for advanced texts like those the Bank produces. Relative to the most popular formula—the Flesch–Kincaid Grade Level—the GFI applies a greater penalty to complex words.

Gunning Fog Index calculation

This is how the readability of a sample text is calculated:

$$ \text{GFI} = 0.4 \left[ \frac{\text{total words}}{\text{total sentences}} + 100 \left( \frac{\text{complex words}}{\text{total words}} \right) \right] $$

where the average sentence length and the share of complex words (words of three or more syllables) serve as proxies for difficulty and are weighted so that the formula correlates with reading scores.4
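As a purely hypothetical illustration (the numbers below are made up, not drawn from any Bank text), a 100-word passage split into 5 sentences and containing 15 complex words would score

$$ \text{GFI} = 0.4 \left[ \frac{100}{5} + 100 \left( \frac{15}{100} \right) \right] = 0.4\,(20 + 15) = 14, $$

which is conventionally read as requiring about 14 years of formal education.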

Choosing one formula over another is mostly a matter of preference. The most popular formulas are highly correlated and convey the same message.5 In other words, if one readability formula shows that a text is very easy to read, another formula will show the same.6

Two ways to interpret readability results

Reading ability

The most common way to understand readability scores is to focus on the reader’s ability. Simply put, a student reading at a grade 7 level will struggle to understand a text assessed at a grade 12 level. In some fields, such as pharmaceuticals and consumer products, writing for the entire literacy range can be a matter of life or death. In those cases, the norm is to target readability scores around grade 8 or lower.7

The Organisation for Economic Co-operation and Development conducts regular assessments of adult literacy, and the results for Canada give us a useful snapshot of the population. Chart 1 shows literacy skills for all Canadian adults, as well as literacy levels by education.

  • Only one adult in seven scores at the top level (4/5).
  • While education correlates with literacy skills, we shouldn’t assume that all people who have post-secondary education are highly literate.

People at level 0/1—17 per cent of the Canadian adult population—can readily understand short texts that use common words. It’s likely that most Bank publications would be challenging for this group. At the other end of the spectrum, people at level 4/5 (14 per cent of Canadian adults) are highly literate and can “integrate, interpret, or synthesise information from complex or lengthy texts.”8 People in this group would not struggle to read Bank publications. People at level 2 and level 3 are fully functional readers and account for 69 per cent of the population. Based on ability alone, this is the group that is most sensitive to readability.

Effort

The second way to understand readability scores is to focus on effort, a concept that applies to all readers. Even a person who can read at a grade 20 level will find a grade 10 text easier to read than a grade 18 text. Seeing readability through this lens forces us to consider whether readers will consider it worth their time and effort to read a text.

For the Bank, improving readability would make its publications accessible to more people, and would give those who can already understand the content an incentive to read more of it.

Methodology

Preparing the text

The advantages of readability formulas are their simplicity, efficiency and consistency. The main challenge in analyzing texts is that readability tools can be misled by bits of text, such as headers, that are included as part of the layout. To ensure accurate measurements across publications and organizations, I followed a consistent process for all documents:

  1. Import text.

    The raw text was pasted into a common word processor. Broken sentences were fixed, and unrelated elements, such as page numbers or repeated headers, were removed.

  2. Remove supporting content.

    All charts, tables, pictures and infographics were removed, as were footnotes.

  3. Remove extra text.

    Any text not part of the main sample was removed. For speeches, this included information about the venue, the host organization and so on. For news articles, captions and links to social media and related articles were eliminated.

Materials unrelated to my readability analysis were removed (cbc.ca)

Processing the text

At the end of this process, only the title, author, date and the text itself remained. In other words, how the text was packaged would not affect the readability results. I used a specialized website to generate readability scores for each sample of text.
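For readers who want to reproduce this kind of score themselves, the sketch below is a minimal Python approximation of the GFI calculation. It is not the specialized website used for this note, and its vowel-group syllable count is a rough stand-in for the formula’s complex-word rules, which also exclude proper nouns and some suffixed words; the function name and heuristics are mine.

```python
import re


def gunning_fog(text: str) -> float:
    """Rough Gunning Fog Index for a plain-text sample."""
    # Split into sentences on ., ! or ? and drop empty fragments.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    # Keep alphabetic tokens (drops numbers and stray punctuation).
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0

    def syllables(word: str) -> int:
        # Crude estimate: count groups of consecutive vowels.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    # "Complex" words approximated as three or more syllables.
    complex_words = [w for w in words if syllables(w) >= 3]
    avg_sentence_length = len(words) / len(sentences)
    pct_complex = 100 * len(complex_words) / len(words)
    return 0.4 * (avg_sentence_length + pct_complex)


sample = ("Clear communications are important for any public institution. "
          "They allow audiences to understand and evaluate public policy.")
print(round(gunning_fog(sample), 1))
```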

Consistency is key

Using this consistent process, I analyzed all Bank of Canada speeches and every Monetary Policy Report (MPR) and Financial System Review (FSR) published from 2015 to 2017, as well as all fixed announcement date (FAD) press releases for the years 2012–17.9

Setting a benchmark

Comparators

To set a fair level of readability that the Bank should target, I analyzed various texts on topics with the same degree of complexity as those the Bank writes about—for example, tariffs, forward guidance, tax policy and nuclear risks. These texts included

  • CBC reports
  • Globe and Mail articles
  • Bloomberg news stories
  • articles by some well-regarded business and economics journalists
  • speeches by politicians and public figures

Initial observations

Chart 2 shows the average readability scores for each source of sample texts.

In general, the authors, speakers and publications I studied have an average GFI of about grade 13 or lower.

Target

Based on these results, it seems reasonable to target a GFI of grade 13 or lower for Bank publications. In the next section, I show that although it will take some effort to reach that goal, it is attainable. (This note has a GFI of 12.3.)

Of course, this benchmark should apply only to publications aimed at a broad audience. For content written for subject matter experts, such as research papers, readability—as defined in this note—is less of a concern. One reason for this is that the technical terms in each field of study have very precise meanings and can’t be easily substituted. Another is that experts have to keep up with the research in their field and are more likely to continue reading even if the text is difficult.

Results

The readability scores for publications by the Bank and others are presented below.

Bank of Canada public speeches

For the 2015–17 period:

  • 14 of 72 speeches had a GFI of grade 13 or lower.
  • Nine of those were by the Governor and four by the Senior Deputy Governor.
  • Excluding two speeches on bank notes, the most readable speech of the past three years was “The Way Home: Reading the Economic Signs,” delivered by Governor Poloz in 2015, with a GFI of 11.7.

Media coverage of the Bank’s speeches

To test the idea that speeches with better readability (i.e., a lower GFI) are more appealing to the media than difficult ones, I calculated the correlation between readability and media coverage.10 If readability and media coverage were completely unrelated, we would expect a correlation close to 0. In contrast, if readability played some role, we would expect a negative correlation: the lower a speech’s GFI, the more coverage it should attract.

Table 1 shows a negative correlation between GFI and all types of media coverage. This is to be expected: reporters would be less inclined to write about texts that are difficult to understand than about easy ones.

Table 1: Correlation between readability scores and media coverage

| | Wire | Print | Web | Broadcast | French (all media types) | All |
|---|---|---|---|---|---|---|
| All GFI | -0.2 | -0.3 | -0.2 | -0.2 | -0.2 | -0.3 |
| Governor GFI | -0.3 | -0.5 | -0.4 | -0.3 | -0.2 | -0.4 |
| Rest of Governing Council GFI | -0.4 | -0.2 | -0.1 | -0.1 | -0.1 | -0.2 |
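For illustration, a correlation of this kind can be computed with a few lines of Python; the GFI scores and coverage counts below are hypothetical placeholders, not the Media Relations figures behind Table 1.

```python
import numpy as np

# Hypothetical placeholder data: one GFI score and one media-coverage
# count per speech (not the actual figures used in this note).
gfi_scores = [14.2, 15.8, 13.1, 16.4, 12.9]
coverage_counts = [42, 18, 55, 12, 61]

# Pearson correlation between readability and coverage. A negative value
# means more readable speeches (lower GFI) tend to get more coverage.
print(round(float(np.corrcoef(gfi_scores, coverage_counts)[0, 1]), 2))
```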

Comparators

To see how the Bank compares with its peers, I analyzed speeches by other central banks as well as the International Monetary Fund and the Bank for International Settlements.

Chart 4 shows that the readability of Bank of Canada speeches compares well with those of similar organizations.

The results reveal an interesting case study: the European Central Bank’s Sabine Lautenschläger. As Chart 5 shows, when she was appointed in 2014 her speeches were less readable (higher GFI) than the average Bank of Canada speech. Since 2016, the readability of her speeches has improved dramatically. Lautenschläger’s speeches show that it’s possible to talk about complex topics—in her case, banking supervision—in a way that more people can understand.

Topics

Intuitively, some topics are easier to write about than others. Chart 6 shows the readability of Bank of Canada speeches by topic.

Chart 6 highlights a couple of points. First, while some topics do appear inherently more complex than others, the variability of GFI among speeches on the same topic is often higher than that across topics. This is consistent with Lautenschläger’s speeches, whose readability improved even though the topics remained the same.

Second, as a very broad rule of thumb, it seems that concrete topics, such as jobs and regional economic developments, make for easier reading than abstract ones like policy frameworks and financial regulations.

Readability in similar settings

To see if the wide range of readability could be explained by the fact that some speeches are intended for a more sophisticated audience, I assessed speeches on the same topic and given to the same type of audience. In those situations, the text should, in principle, have a consistent level of readability. But as Chart 7 shows, readability varies widely even in similar settings.

Monetary Policy Report readability

Chart 8 shows the readability scores of the MPR.

The average GFI for the MPR between 2015 and 2017 is 16.1, which is above the grade 13 benchmark discussed earlier. As Chart 8 shows, the readability of the MPR has worsened since 2015. This is explained by increases of 8 per cent in both the average sentence length and the share of complex words.

Comparators

Other central banks have similar scores: the Bank of England’s Inflation Report—February 2018 had a GFI of 14.9, while the Federal Reserve’s Minutes of the Federal Open Market Committee (FOMC) scored 15.1 for January and 15.8 for December.

Table 2 shows that the MPR could achieve the same readability as the Bank of England’s inflation report by using fewer complex words. Writing shorter sentences would allow the Bank to match the Fed’s minutes.11

Table 2: Comparing Gunning Fog Index components

| | GFI | Percentage of complex words | Words per sentence |
|---|---|---|---|
| Bank of Canada | 16.1 | 19.6 | 20.6 |
| Bank of England | 14.9 | 16.3 | 21.0 |
| US Federal Reserve | 15.5 | 20.1 | 18.6 |

Note: GFI is Gunning Fog Index.
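As a quick check with the GFI formula, swapping the Bank of England’s complex-word share into the Bank of Canada’s row, or the Fed’s sentence length, roughly reproduces those publications’ scores:

$$ 0.4\,(20.6 + 16.3) \approx 14.8 \qquad \text{and} \qquad 0.4\,(18.6 + 19.6) \approx 15.3, $$

close to the Inflation Report’s 14.9 and the FOMC minutes’ 15.5, respectively.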

Financial System Review readability

Chart 9 shows the readability scores for the FSR. For this analysis, I included the text on risks and vulnerabilities (i.e., the “front end”) and the boxes, but not the reports at the back of the FSR or the summary tables.

Comparators

The Bank’s FSR averages a GFI of grade 17, similar to other central banks’ comparable publications. The Reserve Bank of Australia’s October 2017 Financial Stability Review had a GFI of 17.1, and the Reserve Bank of New Zealand’s November 2017 Financial Stability Review had a GFI of 16.

Readability of the Bank’s fixed announcement date press releases

Chart 10 shows the readability of the Bank’s interest rate announcements since 2012.

The readability of the FAD press releases improved significantly in 2014 and 2015 but has worsened somewhat since then.

Drivers of readability changes

As Table 3 shows, the 17 per cent improvement in the GFI from 2012 to 2015 was attributable to a 26 per cent drop in the share of complex words and a 6 per cent reduction in average sentence length. From 2015 to 2017, the average sentence length and the share of complex words both crept up by about 4 per cent, contributing about equally to the higher GFI score.

Table 3: Gunning Fog Index components for fixed announcement date press releases

| | Gunning Fog Index (average) | Percentage of complex words | Words per sentence |
|---|---|---|---|
| 2012 | 16.7 | 23.0 | 18.8 |
| 2013 | 16.1 | 20.8 | 19.5 |
| 2014 | 14.1 | 17.9 | 17.4 |
| 2015 | 13.9 | 17.1 | 17.7 |
| 2016 | 14.4 | 17.4 | 18.5 |
| 2017 | 14.5 | 17.8 | 18.4 |
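As a consistency check, applying the GFI formula to the components in Table 3 reproduces the reported scores and the size of the improvement:

$$ \text{GFI}_{2012} = 0.4\,(18.8 + 23.0) = 16.7, \qquad \text{GFI}_{2015} = 0.4\,(17.7 + 17.1) = 13.9, $$

a decline of $(16.7 - 13.9)/16.7 \approx 0.17$, or about 17 per cent.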

Comparators

On average, the Bank of Canada scores better than the Federal Reserve, whose FOMC statements had an average GFI of 18.4 between December 2017 and March 2018, and the Bank of England, whose four decisions between September 2017 and February 2018 had an average GFI of 17.4. The Reserve Bank of Australia’s policy statements appear to be more readable than the Bank of Canada’s; its decisions between September and December 2017 had an average GFI of 13.6.

French readability

This readability assessment would be incomplete without some analysis of the Bank’s French-language text.

Most Bank content is created in English and then translated into French, and Bank translators have to be faithful to the original text. My goal is to see how closely the readability of the translation aligns with that of the original. In other words, a grade 11 level English text should be at that same level in French.

Unfortunately, the tools I used to measure readability in English can’t be used for French texts.

Methodology

For this analysis, I used Scolarius, a tool created by Influence Communication to measure French readability. According to the firm’s website, Scolarius was inspired by four measures created for English. Scores are usually between 50 and 250, with higher scores indicating more difficult texts.

To gauge readability,

  1. I took several samples of French content from Bank publications (translated in-house by Bank translators), as well as the corresponding English text,
  2. I obtained Scolarius scores for the French samples and GFI scores for the corresponding English text,
  3. I computed a conversion factor between the two measures, and
  4. I followed the same process with samples of content created elsewhere and translated by non-Bank translators.

If the two measures are perfectly aligned, the conversion factor will be constant. (To give an analogy, kilograms and pounds are two units of measure for the mass of an object. If we know the mass in kilos, we can multiply by a conversion factor—2.2—to get the equivalent mass in pounds.) From Table 4 below, we can see that the conversion factor is consistently around 11.5. The two measures are also highly correlated, which suggests that they measure the same thing.
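Concretely, the conversion factor reported in Table 4 is simply the Scolarius score divided by the GFI of the English original. For the first speech in the table, for example:

$$ \text{conversion factor} = \frac{\text{Scolarius score}}{\text{GFI}} = \frac{148}{12.8} \approx 11.6 $$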

If the translators had made the text less readable in French, we would expect a conversion factor higher than 11.5, and vice versa.

Comparators

As Table 4 shows, the Bank’s average conversion factor (11.4) seems comparable to, or perhaps even a little better than, that of outside organizations (11.7). The table also shows that the correlation between the two scores is higher at the Bank (0.92) than at other organizations (0.82). This suggests that the Bank’s translators are more faithful to the original text, or that the texts are easier to translate.

Table 4: Assessing the readability of translations at the Bank and in other organizations

| Bank content | Scolarius score | GFI (English version) | Conversion factor |
|---|---|---|---|
| Trois choses qui m’empêchent de dormir la nuit (speech) | 148 | 12.8 | 11.6 |
| Le marché de l'emploi (speech) | 142 | 12.6 | 11.3 |
| Prévention des tensions (speech) | 213 | 19.6 | 10.9 |
| Dévoilement du nouveau billet de 10$ (speech) | 117 | 10.4 | 11.3 |
| Ancrer les attentes (speech) | 167 | 16.0 | 10.4 |
| État de la situation (speech) | 169 | 14.1 | 12.0 |
| FAD (interest rate announcement) | 191 | 14.5 | 13.2 |
| Histoire de la Banque (website) | 135 | 11.8 | 11.4 |
| Politique monétaire (website) | 182 | 16.5 | 11.0 |

Correlation (Scolarius vs. GFI): 0.92; average conversion factor: 11.4

| Content from other organizations | Scolarius score | GFI (English version) | Conversion factor |
|---|---|---|---|
| CCA text (press release) | 195 | 16.5 | 11.8 |
| Canadian Press text (article) | 153 | 13.3 | 11.5 |
| Public Services and Procurement Canada text (press release) | 191 | 15.3 | 12.5 |
| Minister Carla Qualtrough text (speech) | 174 | 12.7 | 13.7 |
| Prime minister speech in California (speech) | 116 | 9.9 | 11.7 |
| Prime minister speech at World Economic Forum (speech) | 123 | 11.7 | 10.5 |
| Speech by Superintendent of Financial Institutions Jeremy Rudin (speech) | 159 | 16.2 | 9.8 |

Correlation (Scolarius vs. GFI): 0.82; average conversion factor: 11.7

Note: GFI is Gunning Fog Index.

Conclusion

Keeping in mind that the readability formula used in this note has some shortcomings and can offer only a partial evaluation, I find that Bank publications are not as easy to read as they could be, relative to a reasonable benchmark.

Chart 11 shows the different combinations of sentence length and share of complex words that lead to the benchmark GFI of grade 13 (green line) and the distance the sampled Bank publications are from this line. The chart also lets us explore the different ways to achieve the benchmark. For example, the share of complex words in speeches could shrink to 14 per cent from the current 18 per cent, while leaving the average sentence length unchanged. Or the share of complex words could stay the same, with the average sentence length shortened to 15 words. And, of course, another option would be to reduce both the share of complex words and average sentence length.
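In terms of the GFI formula, the grade 13 line in Chart 11 corresponds to a simple condition: the average sentence length and the percentage of complex words must sum to 13/0.4 = 32.5.

$$ \text{words per sentence} + \text{per cent complex words} = \frac{13}{0.4} = 32.5 $$

Any pairing that sums to 32.5, such as 14 per cent complex words with 18.5-word sentences, meets the benchmark exactly.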

According to this approach to measuring readability, for speeches and FAD press releases the Bank could achieve a GFI of grade 13 by using simpler words and decreasing the number of words per sentence.

For the MPR and the FSR, on the other hand, more fundamental changes may be necessary to make them accessible to a broader audience.

Of course, there’s more to good writing than average sentence length and the share of complex words. The results of this note are preliminary and serve as a starting point for further research and discussion. Next steps could include deciding if the readability of Bank publications should be improved and, if so, how.

Notes

  1. Bank of Canada, Annual Report 2017 (Ottawa: March 26, 2018).
  2. As cited in J. Bogert, “In Defense of the Fog Index,” Business and Professional Quarterly 48, issue 2 (1985): 9–12.
  3. Each readability measure is calibrated on a given threshold. For example, the Flesch–Kincaid measure uses a 75 per cent threshold, while the Gunning Fog Index is set at 90 per cent. This means Flesch–Kincaid will return a lower grade level of readability than Gunning Fog for the same text.
  4. Another factor that is commonly used in readability formulas—such as the Dale–Chall formula—is word frequency. Over the years, researchers have looked at a wide range of measurable factors that affect readability, such as the type of sentence, the presence of prepositional phrases and the number of sentences per paragraph. Adding more variables requires much more effort and produces very small gains in accuracy. For an extensive survey of readability formulas, see W. H. DuBay, The Principles of Readability (Costa Mesa, California: Impact Information, 2004).
  5. For example, the GFI and the Flesch–Kincaid Grade Level have a correlation of 0.96 over the 72 Bank of Canada speeches in my study and of 0.99 over the 164 speeches by other organizations.
  6. According to DuBay, “What is important is not how the formulas agree or disagree on a particular text, but their degree of consistency in predicting difficulty over a range of graded texts.” (Op. cit.)
  7. For example, public health communications should aim for a grade 6 or 7 level, according to S. Roy, K. Phetxumphou, A. M. Dietrich, P. A. Estabrooks, W. You and B. M. Davy, “An Evaluation of the Readability of Drinking Water Quality Reports: A National Assessment,” Journal of Water and Health 13, no. 3 (2015): 645–652. The U.S. National Library of Medicine recommends aiming for a grade 7 or 8 level.
  8. For a more detailed explanation, see Organisation for Economic Co-operation and Development, “Key Facts About the Survey of Adult Skills (PIAAC),” available on the OECD’s website.
  9. I excluded MPR opening statements to Parliament because they are nearly identical to the MPR press conference opening statements.
  10. I used the numbers computed by the Media Relations team for its media analyses. The correlation could reflect a direct link between readability and media coverage, or an indirect one. For example, some topics might be, at once, more interesting to the media and easier to write about.
  11. The Fed’s minutes are similar to the MPR in that they give a comprehensive assessment of the economy; they are different in that they capture the discussions of the Federal Open Market Committee. For this study, I kept the attendance list in the Minutes, which would tend to yield a better readability score.

Acknowledgements

I thank Jeremy Harrison, Katherine Macklem and Bob Amano for their helpful guidance.

Disclaimer

Bank of Canada staff analytical notes are short articles on topics related to the current economic and financial situation. Written independently of the Governing Council, they may support or challenge prevailing policy orthodoxy. The opinions expressed in this document are solely those of the authors. Consequently, they do not necessarily reflect the official views of the Bank of Canada, and no responsibility for them should be attributed to the Bank.


DOI : https://doi.org/10.34989/san-2018-20