Ever find yourself scrolling through the latest round of FinAccess or FinScope and thinking, “But they only visited these respondents once. Did they really get the full picture?” Me too.
The Kenya Financial Diaries gives us some idea of the size of the underreporting bias that a one-time access survey might face, at least in Kenya. Our researchers registered all known financial devices on their fifth visit to each household, but before beginning to collect transactional data. Over time, devices that we missed, but respondents later revealed, were added to the database as well. By comparing when the respondent reported having started using the device to the date we first discovered it, we can tell which devices were being used but missed when our team first registered their portfolio of financial devices.
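The comparison described above can be sketched in a few lines. This is a minimal illustration with hypothetical device records and field names, not the study's actual data pipeline:

```python
from datetime import date

# Hypothetical device records: when the respondent says use of the
# device began, and when our team first entered it in the database.
devices = [
    {"name": "M-PESA account", "reported_start": date(2011, 3, 1),
     "first_recorded": date(2013, 1, 15)},
    {"name": "loan from brother", "reported_start": date(2012, 11, 5),
     "first_recorded": date(2012, 11, 5)},
]

# Date of the initial device roster (the "first ask"), assumed here.
registration_date = date(2012, 9, 10)

# A device was "missed" if it was already in use at the time of the
# initial roster but only entered the database afterwards.
missed = [d for d in devices
          if d["reported_start"] <= registration_date
          and d["first_recorded"] > registration_date]
```

Here the M-PESA account counts as missed (in use before registration, recorded only later), while the brother's loan, which began after registration, counts as a new device rather than a missed one.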
It’s quite possible that our first roster of financial devices was more complete than a FinAccess survey would be, since we asked on the fifth visit, after some relationship was already established. Still, it gives us a starting point to understand the size and nature of possible bias in one-time surveys.
So, what do we miss?
We missed 20% of financial devices (21% informal; 18% formal) when we first asked about them. This is the share of devices that were open when we started cash flow interviews but that we only learned about sometime after those Diaries interviews began.
We missed 49% of income sources on the first ask, but most of those we missed were irregular or infrequently used income sources.
Portfolios are quite dynamic. 41% of all registered financial devices in the study were actually started during the Diaries year. These were often relatively small changes, like a new loan from a brother, but it does point to frequent churn in portfolios.
Even though we missed 1 in 5 financial devices on our first query, this made very little difference to respondents’ placement in the access categories (“access strands”) used by FinAccess, such as the shift from “excluded” to “informally included.” Only 5% of respondents actually changed access strands because of this underreporting bias, and the share of the sample formally included changed by only two percentage points. Most formal accounts were picked up in the first ask. As we see in Figure 1, we missed 20% of formal savings accounts and 14% of formal checking accounts, compared to 35% of arrangements for saving in the house. Since the formal accounts we missed had little impact on access strands, they were likely often additional accounts held by individuals who had already reported one.
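The strand logic above can be illustrated with a stylized sketch. This is not the actual FinAccess strand definition, just a simplified hierarchy (excluded < informal < formal) showing how a missed device can, but often does not, shift a respondent's strand:

```python
# Simplified, hypothetical strand hierarchy (not the official FinAccess one).
STRAND_RANK = {"excluded": 0, "informal": 1, "formal": 2}

def access_strand(device_types):
    """Place a respondent in the highest strand any of their devices reaches."""
    if not device_types:
        return "excluded"
    return max(device_types, key=lambda t: STRAND_RANK[t])

# First-roster devices vs. the full roster once missed devices are added.
first_roster = ["informal"]
full_roster = ["informal", "formal"]  # a formal account was missed initially
```

In this example the missed formal account moves the respondent from the informal strand to the formal one; by contrast, a missed second formal account for someone already reporting a formal device would change nothing, which is consistent with the small strand shifts we observed.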
When it comes to saving in the house, many likely felt uncomfortable talking about immediately accessible cash to relative strangers. We are also unsurprised to see many informal savings and lending arrangements were missed. Often, respondents remembered these only when a payment was made or due, reminding them of the obligation. For example, the respondent wouldn’t really think about the loan they gave their neighbor until they received a repayment. They wouldn’t think about the money owed to the school as a debt until it needed to be paid.
While usage was too low to appear in Figure 1, we do not find evidence of systematic underreporting of formal debt in our data. Our sample of 298 households had only 53 instances of formal loans outstanding at the start of the project. We initially missed 15 of these, or 28%, putting the underreporting rate roughly in line with that for informal borrowing and lending. From our discussions with respondents, it did not appear they were embarrassed by or ashamed of formal borrowing; these loans were simply not top of mind in our first registration of devices.
The much bigger source of bias in initial access strands came from overreporting of financial devices, where a respondent claimed to “have” a device but then did not use it actively during the entire year of the study. Thirty-three percent of the devices reported in our first roster were used for zero or only one transaction over the entire year of the study. Many of these dormant devices were bank accounts (30% of bank accounts open at the start of the project were actually dormant), which has a big effect on our access strand calculations. Removing these dormant devices from consideration, 16% of our respondents shift access strands, and the share of respondents we would observe in the highest access strand, formal prudential access, falls from 42% down to 31% (Figure 2).
Figure 3 highlights this distinction, showing that formal savings were much more often overreported than missed. We also see high numbers of dormant informal loans, both given and received. What is going on here? These arrangements tend to be quite flexible. For many, they existed at the start of the study but were not used again. These can be quite low-transaction kinds of arrangements. (Recall our cutoff for “dormancy” was 0 or 1 transaction during the year, which gives space for a single repayment of an outstanding loan, but not installment-type payments or new borrowing.)
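The dormancy cutoff is easy to express directly. A minimal sketch, using hypothetical transaction counts per device over the study year:

```python
# Hypothetical transaction counts per device over the Diaries year.
transactions_per_device = {
    "bank account A": 0,    # reported as "had" but never used
    "bank account B": 14,
    "loan to neighbor": 1,  # a single repayment keeps it under the cutoff
    "ROSCA": 24,
}

# The study's dormancy cutoff: 0 or 1 transaction during the year.
dormant = {name for name, count in transactions_per_device.items()
           if count <= 1}
dormancy_rate = len(dormant) / len(transactions_per_device)
```

Under this cutoff, both the unused bank account and the loan with a single repayment count as dormant, while regularly transacting devices do not.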
So, what does this mean for surveys of access? Well, most certainly things are not always as they appear. However, when using FinAccess to estimate inclusion, our bias is likely to be toward overestimation of formal inclusion rather than the other way around. This is in part a definitional issue: how important is it whether a person “has” a device versus whether one “uses” that device? As attention shifts from mere “access” toward questions about more meaningful usage of financial devices, we need new categories, beyond account ownership, to track progress.