Block-based Inclusion Algorithm
The Missouri Census Data Center has multiple applications that require us to determine if a geographic
entity (such as a census tract, block group, or block) falls "within" a circular area. Our series of CAPS applications
is the most notable example of programs needing to make such determinations. Our usual way of handling
this is to use the internal point coordinates of the entity; we calculate the distance between this point and the center of the
circular area, and if this distance is less than or equal to the radius value then we include the entity. This is an all-or-nothing approach - the entity is either entirely included or entirely
excluded. For example, if we are doing a 5-mile radius of a point in downtown St. Louis then we look at all
census tracts in the area and identify all those to be included in our calculations. A tract that straddles the boundary of our 5-mile circle might contain a population of 5000 people, some of whom live within the circle and some outside. We make a not-so-educated guess that they should all be included or
excluded based on the location of the geographic internal point assigned by the Census Bureau. We count
on the assumption that there will be multiple tracts that straddle the area and that there will be a balancing of those being included and those being excluded. It is not a perfect algorithm, but it works fairly well, especially if you are using block-level entities (as we do with our caps10c application as well as in our geocorr applications). But in our CAPS applications that work with ACS data we cannot use block-level entities because there are no ACS data at the block level. There are data at the block group level, but not as many as there are at the census tract level. So, for example, in our caps16acs version of CAPS we use census tracts as the entities to be aggregated when the smallest circle needed is over 3 miles (or when the user specifies on the form to use tracts for smaller circles, presumably because they need to have some
data that are not available at the block group level). This can lead to pretty serious problems in trying to approximate, say, a 4-mile circle using census tracts. It is especially problematic in rural areas where the tracts can be very large. The block-based inclusion algorithm, described below, is our new approach to improving the way we select and process geographic entities for approximating circular areas.
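The traditional internal-point test described above can be sketched as follows. This is an illustration, not the actual MCDC code: the function names and record fields (intpt_lat, intpt_lon) are hypothetical, and distances are computed with the standard haversine formula.

```python
import math

def distance_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two lat-lon points (haversine)."""
    r = 3958.8  # mean Earth radius in miles
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2 +
         math.cos(math.radians(lat1)) * math.cos(math.radians(lat2)) *
         math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def include_entity(entity, center_lat, center_lon, radius_miles):
    """All-or-nothing test: include the entity iff its internal point
    lies within the circle of the given radius around the center."""
    d = distance_miles(entity["intpt_lat"], entity["intpt_lon"],
                       center_lat, center_lon)
    return d <= radius_miles
```

The entire tract (and all of its population) is counted or dropped based on this single point, which is exactly the coarseness the BBIA is designed to reduce.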
Summary of the BBIA Algorithm
The concept is simple enough. We have a "ground zero" location (latitude-longitude coordinates) and we
have a data set with, say, census tract level data that include internal point coordinates for the tract. What we
traditionally would do (at the MCDC web site) is look at tracts in the area and determine which ones to select for aggregation (to the n-mile circular area) based on the distance between the tract's internal point and the ground zero point (circle center). If we had data at the block level we could do this much better, since a typical tract comprises 10-30 blocks, which are obviously much smaller spatially.
While there are no ACS data at the block level, we do have 2010 decennial census data at the block level, including internal point coordinates and the 2010 population count for each block. We also have land and total area in square miles for each block. We use these block-level data to implement the BBIA as follows:
For each tract we begin by seeing if the tract has a chance of being in the circle using a bounding-box algorithm. (While this is not really a part of the BBIA per se, it is rather important because it very significantly reduces the amount of processing that has to be done.)
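A minimal sketch of such a bounding-box screen, assuming we carry a min/max lat-lon box for each tract (the field names here are hypothetical, not those of our actual data sets):

```python
import math

def might_intersect(tract, center_lat, center_lon, radius_miles):
    """Cheap screen: could this tract possibly touch the circle?

    Pads the tract's lat-lon bounding box by the radius converted to degrees
    (about 69.09 miles per degree of latitude; longitude degrees shrink by
    cos(latitude)) and checks whether the circle center falls in the padded box.
    """
    lat_pad = radius_miles / 69.09
    lon_pad = radius_miles / (69.09 * math.cos(math.radians(center_lat)))
    return (tract["min_lat"] - lat_pad <= center_lat <= tract["max_lat"] + lat_pad
            and tract["min_lon"] - lon_pad <= center_lon <= tract["max_lon"] + lon_pad)
```

Tracts that fail this cheap test are skipped entirely; only those that pass go on to the block-by-block processing.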
We look at each census block within the tract (or block group) and determine if the block's internal point is within our circle. If it is, we accumulate that block's 2010 population count, as well as its landsqmi and areasqmi (land and total areas in square miles). We also know the 2010 population count for the entire census tract (whenever we say "tract" here, we could just as easily say "block group"). After processing all the blocks in the tract we have an accumulated population count for the blocks that are "inside" the circle. We divide this "inside population" figure by the tract's total population (from the same source: 2010 SF1) to define an apportioning factor for the tract. What we now have is a pretty good approximation of what portion of the tract's population is within the circle.
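That block accumulation step might look like the following sketch. It assumes an in_circle(lat, lon) predicate (e.g. a haversine distance test against the radius) and hypothetical field names for the block records; the real processing lives in our SAS-based applications.

```python
def apportioning_factor(blocks, in_circle):
    """Core of the BBIA for one tract (or block group).

    Returns the fraction of the tract's 2010 population living in blocks
    whose internal points fall inside the circle, along with the accumulated
    inside population and inside land/total areas in square miles.
    """
    inside_pop = inside_land = inside_area = 0.0
    total_pop = 0.0
    for b in blocks:
        total_pop += b["pop2010"]
        if in_circle(b["intpt_lat"], b["intpt_lon"]):
            inside_pop += b["pop2010"]
            inside_land += b["landsqmi"]
            inside_area += b["areasqmi"]
    factor = inside_pop / total_pop if total_pop else 0.0
    return factor, inside_pop, inside_land, inside_area
```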
Note that if the tract is entirely within the circle then all of the block points will be also, so we would be summing all of the block pops and that would equal the tract total and the apportioning factor would be 1.0 (which happens a lot, especially with larger circles).
The apportioning factor is stored for use in the aggregation step. We take the ACS data at the tract level and aggregate them, weighted by the apportioning factors. This is a familiar algorithm that we have been using for decades, and for which we have macros that handle the tricky aspects of the method. We aggregate the two spatial area variables separately (without any apportioning) because apportioning spatial area based on population shares would be counterproductive: a larger population does not typically go with a larger spatial area. Many spatially large blocks have little (or even zero) population, and many spatially small blocks contain large populations (think Manhattan, NY).
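The aggregation step can be sketched as follows. The variable names (afact for the apportioning factor, landsqmi/areasqmi for the areas accumulated from the inside blocks) are illustrative; the production work is done by the macros mentioned above, and this sketch ignores the tricky aspects (medians, ratios, etc.) those macros handle.

```python
def aggregate_circle(tract_rows):
    """Aggregate tract-level estimates to the circle.

    Each row carries count-type estimates plus an 'afact' apportioning factor
    and the land/total areas already accumulated from the inside blocks.
    Counts are weighted by afact; the two area variables are summed as-is,
    with no apportioning.
    """
    totals = {}
    land = area = 0.0
    for row in tract_rows:
        land += row["landsqmi"]
        area += row["areasqmi"]
        for var, val in row.items():
            if var in ("afact", "landsqmi", "areasqmi"):
                continue
            totals[var] = totals.get(var, 0.0) + row["afact"] * val
    totals["landsqmi"], totals["areasqmi"] = land, area
    return totals
```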
That is the essence of the algorithm. Some interesting things that you will note regarding the algorithm as used with the caps16acs web app:
We get a 2010 census count as a byproduct of the method. In the caps16acs app we call this variable sf1pop.
We calculate total land area in square miles. We use this to calculate a population per square mile for the circles (variable is called Poppsqmi).
The apportioning is done based on the portion of the tract's population in 2010. That might not be such a good factor in certain cases, but in general it works very well. It works best for person-based ACS estimates in areas where there has not been any sudden growth or shrinkage since 2010. It works less well for housing-unit-based estimates, although housing units and populations generally tend to be highly correlated.
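The first two byproducts above amount to a simple sum and a ratio. As a small illustration (the variable names sf1pop and Poppsqmi follow the caps16acs app, but this helper is hypothetical, and we use the 2010 count for the density here purely for illustration):

```python
def circle_summary(inside_block_pops, land_sqmi):
    """2010 census count for the circle (sf1pop) and persons per square
    mile of land (Poppsqmi), from the accumulated inside-block values."""
    sf1pop = sum(inside_block_pops)
    poppsqmi = sf1pop / land_sqmi if land_sqmi else None
    return sf1pop, poppsqmi
```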