LIHTC Approval Disparities
(Nov. 16, 2015)
This page was created in connection with the preparation of an amicus curiae brief in Texas Department of Housing and Community Affairs, et al. v. The Inclusive Communities Project, Inc., Supreme Court No. 13-1371. The brief should eventually be available here.
In a case that would be styled Texas Department of Housing and Community Affairs, et al. v. The Inclusive Communities Project, Inc. in the Supreme Court, the district court found that the defendant housing authority’s procedures for approving applications for low income housing tax credits (LIHTC) caused an unjustified disparate impact in violation of the Fair Housing Act. It then required the defendant to develop a procedure with a less discriminatory effect. In doing so the district court cited the fact that the approval rate for applications in areas that were less than 10% white was 49.7%, compared with an approval rate of 37.4% for areas that were more than 90% white. That difference would seem to favor minority areas. But the court regarded the difference as contributing to a pattern of segregation, reflected in the fact that 92.29% of the LIHTC units were in areas that were less than 50% white.
The relationship of the perceived disparities is difficult to analyze. The success rate data involve a comparison of rates for areas that are less than 10% white with rates for areas that are more than 90% white. That those figures are limited to situations of very low and very high white representation diminishes their utility for revealing the exact relationship between minority representation in an area and the likelihood of application approval. Further, the absence of information on the number of units for which credits were sought in the different areas precludes an appraisal of the comparative contributions of (a) the number of units for which credits were sought in various areas and (b) the differences in approval rates among those areas to the perceived overrepresentation of units in certain areas. Implications of the number of units sought in different areas for appraisals of the size of the disparate impact believed to be caused by the procedures at issue are addressed below.[i]
Notwithstanding that the approval rate figures are limited to areas with extremely low and extremely high white representation, and that the number of units sought in each area is not provided, the approval rate figures are sufficient to illustrate some points concerning the unsoundness of standard measures for quantifying the strength of the forces causing outcome rates to differ, for determining whether those forces are stronger under one set of criteria than another, or for determining whether they are otherwise growing stronger or weaker over time. In doing so, for simplicity, and for lack of pertinent information to do otherwise, I treat the matter simply in terms of two areas, one of which is predominantly minority and one of which is predominantly white.
The lower court decisions do not indicate how the difference between the approval rates was measured. The most typical approach to measuring a disparity in outcomes of this type, at least in the rate ranges at issue, would be the relative difference between approval rates. The first row of Table A shows that the ratio of the approval rate in the higher minority area (in this case, the advantaged area with respect to likelihood of approval) to the approval rate in the lower minority area is 1.33. Since this ratio is above 1.25 (and, correspondingly, at .753 with the lower rate as the numerator, is below 80%), in an employment case the difference would be deemed to violate the 80% Rule of the Uniform Guidelines on Employee Selection Procedures. But, given the rates at issue, the strength of association reflected by these rates is considerably less than that reflected by the rates in the first row of Table 1 of “Race and Mortality Revisited,” Society (July/Aug. 2014) (used also as Table 1 of the referenced amicus brief), even though the Table 1 figures involve a smaller relative difference. The LIHTC approval rate figures reflect an estimated effect size (EES) of .314 standard deviations, whereas the referenced Table 1 figures from the Society article and brief were based on a difference of .50 standard deviations. The .314 figure reflects a situation where approximately 38% of the disadvantaged group’s distribution is above the mean for the advantaged group, whereas in the case of a .50 standard deviation difference, only 31% of the disadvantaged group would be above the mean for the advantaged group.
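For readers who wish to verify these figures, the following is a minimal sketch (in Python, using the scipy library; the code and variable names are my own illustration, not anything drawn from the record or the brief) of the probit-difference computation that underlies the EES values just quoted.

```python
# Sketch of the EES computation: the EES is the distance, in standard
# deviations, between the means of two normal distributions that would
# yield the two observed "pass" rates at a common cutoff.
from scipy.stats import norm

hm_rate, lm_rate = 0.497, 0.374   # approval rates cited by the district court

ees = norm.ppf(hm_rate) - norm.ppf(lm_rate)
print(round(ees, 3))              # 0.314

# Share of the disadvantaged group's distribution lying above the
# advantaged group's mean, for a given standard-deviation gap.
print(round(norm.sf(0.314), 3))   # ~0.377, i.e., about 38%
print(round(norm.sf(0.500), 3))   # ~0.309, i.e., about 31%
```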
Table A illustrates the way each of the standard measures of difference would be affected if, under the process examined in the decisions, different numbers of LIHTC units were available, or different numbers of such units were applied for, than was in fact the case. Although the actual number of applications is unknown, the illustration can be accomplished simply by inserting an arbitrary number of applications.
The first row of the table presents the situation described by the court, though with 1000 units arbitrarily substituted for the unknown number of units sought in both the high minority (HM) and low minority (LM) areas. The table also presents the four standard measures of difference between rates that are shown in Table 1 of the Society article and the brief. All subsequent rows reflect the same strength of the forces causing the outcome rates to differ (i.e., an EES of .31, a rounding of the .314 figure mentioned above). But the rows show the ways that changes in the numbers of LIHTC units sought in subsequent years would affect each standard measure of difference between outcome rates. Changes in application patterns have the same effects on these measures as changes in the number of units available to be awarded, though in the opposite direction. That is, increases in the number of units sought have the same effects on these measures as reciprocal proportionate reductions in the number of units available,[ii] while decreases in the number of units sought have the same effects as reciprocal proportionate increases in the number of units available.
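As a rough illustration of how rows of this kind can be generated (a sketch under my own assumptions, not the exact procedure used to produce the tables): hold the EES fixed, treat the total number of units awarded as approximately fixed at about 870 (see note [iii]), and solve for the pair of approval rates consistent with a given number of applications in each area. The helper names below (rates_for, measures) are hypothetical.

```python
from scipy.stats import norm
from scipy.optimize import brentq

def rates_for(ees, n_per_area, total_awarded=870.0):
    """Solve for (HM rate, LM rate) with a probit gap of `ees` such that
    the two areas together receive `total_awarded` units."""
    def excess(lm_rate):
        hm_rate = norm.cdf(norm.ppf(lm_rate) + ees)
        return n_per_area * (hm_rate + lm_rate) - total_awarded
    lm = brentq(excess, 1e-6, 1 - 1e-6)
    return norm.cdf(norm.ppf(lm) + ees), lm

def measures(hm, lm):
    """The four standard measures of difference shown in the tables."""
    return {"app_ratio": hm / lm,                  # HM/LM approval ratio
            "rej_ratio": (1 - lm) / (1 - hm),      # LM/HM rejection ratio
            "abs_diff": 100 * (hm - lm),           # percentage-point gap
            "odds_ratio": (hm / (1 - hm)) / (lm / (1 - lm))}

# Approximately reproduces Rows 1-3 of Table A (Row 1 differs slightly
# because the table's first row embeds the unrounded .314 EES).
for n in (1000, 2000, 500):
    hm, lm = rates_for(0.31, n)
    print(n, round(hm, 3), round(lm, 3), measures(hm, lm))
```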
Table A. Illustrations of the effect of changes in the number of LIHTC units applied for on standard measures of disparity in the face of no actual change in the strength of the forces causing the outcome rates to differ (EES = .31)
| Row No | EES | HM Units Req | LM Units Req | HM App Rt | LM App Rt | HM/LM App Ratio | LM/HM Rej Ratio | Abs Df (% pts) | Odds Ratio |
|--------|------|--------------|--------------|-----------|-----------|-----------------|-----------------|----------------|------------|
| 1 | 0.31 | 1000 | 1000 | 49.70% | 37.40% | 1.33 | 1.24 | 12.30 | 1.65 |
| 2 | 0.31 | 2000 | 2000 | 26.20% | 17.20% | 1.52 | 1.12 | 9.00 | 1.71 |
| 3 | 0.31 | 500 | 500 | 90.30% | 83.80% | 1.08 | 1.67 | 6.50 | 1.80 |
The second row of the table posits a situation where the number of units sought doubles in each area, an occurrence, like a reduction in available units, that reduces overall approval rates, but with the number of units awarded essentially unchanged.[iii] Correspondingly, the rate ratio for approval increased (potentially causing observers who rely on that measure mistakenly to believe that the process was exhibiting a greater disparate impact than previously). But the rate ratio for the adverse outcome decreased (potentially causing observers who rely on that measure mistakenly to believe that the process was exhibiting a smaller disparate impact than previously).
The third row posits a situation where the number of units sought in each area was reduced by 50%, an occurrence, like an increase in the number of units available, that increases overall approval rates. Correspondingly, observers who rely on either of the two relative differences would reach mistaken conclusions about changes in the disparate impact of the process that, in addition to being the opposite of one another, would be the opposite of those yielded by the corresponding measures in the case of the increase in the number of units sought.
It may warrant note here that in the case of the reduction in applications reflected in the third row, the rate ratio for approval has grown quite small whereas the rate ratio for rejection has increased considerably. That could prompt some persons who previously measured the disparate impact in terms of relative differences in the favorable outcome now to rely on relative differences in the adverse outcome. Such observers would find increasing disparities both in the Row 2 situation of increased applications and in the Row 3 situation of reduced applications.[iv]
Though I include the absolute difference and the odds ratio in the table for completeness, I do not discuss them save to note the following. In the case of both the increase in units sought (reduction in approval rates) reflected in Row 2 and the decrease in units sought (increase in approval rates) reflected in Row 3, the absolute difference declined (while the odds ratio increased in both cases). Observers attempting to divine the reason for that pattern should recognize that, had there been merely a small increase in approval rates, the absolute difference would have tended to increase between Row 1 and Row 3. But the increase in approval rates between Rows 1 and 3 was such as to go beyond the point where increases in approval rates tend to increase the absolute difference between rates and to move into the range where further increases in approval rates tend to decrease the absolute difference between rates. The situation might be compared to movement from Setting B to Setting D in Table 5 of the Society article (Table 2 of the amicus brief). At any rate, the potential for mistaken interpretations based on absolute differences or odds ratios when the number of units sought changes is greater than that suggested in the table, since smaller changes would cause them to change in one direction while larger changes would cause them to change in the opposite direction.
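The non-monotonic behavior of the absolute difference can be seen directly by sweeping the overall approval level while holding the EES at .31. The snippet below is again my own illustration, using the same probit framework as the sketch above.

```python
# With the EES fixed at .31, raise the LM approval rate and watch the
# absolute percentage-point difference first widen (as rates move toward
# the middle of the distribution) and then narrow (as they move past it).
from scipy.stats import norm

for lm in (0.10, 0.25, 0.374, 0.50, 0.70, 0.838, 0.95):
    hm = norm.cdf(norm.ppf(lm) + 0.31)
    print(f"LM {lm:.3f}  HM {hm:.3f}  abs diff {100*(hm-lm):.1f} pts")
# Output rises from ~6.6 pts to ~12.2 pts near the middle, then falls
# to ~2.5 pts at the high end.
```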
In reality, while there could presumably be substantial changes in the number of units sought in the two areas, there is no particular reason to believe that the proportionate size of the changes would be similar in the two areas, and they could certainly be very different. But it is unnecessary to explore those possibilities in order to make the key point – that the standard measures of difference between outcome rates are not useful for determining whether the forces causing the outcome rates to differ have changed over time, because those measures tend to change for reasons unrelated to whether the strength of those forces has changed.
In Table B the first row again presents the existing situation described in the opinions. The subsequent rows, however, reflect a situation where there has occurred a decrease in the strength of the forces causing the outcome rates to differ, here simulated by a decline in the EES from .31 to .26. The second row presents that situation with the same number of units sought as in the first row. It shows that, with the decline in the EES and no change in the number of units sought, all standard measures of difference decreased. The third and fourth rows then show the implications of an overall increase and an overall decrease in the number of units requested. In the former case (Row 3) the decline in overall approval rates was such as to cause the relative difference in approval rates to increase from that observed in the situation examined by the district court (Row 3 compared with Row 1), notwithstanding that the EES had declined. In the latter case (Row 4) the increase in overall approval rates was such as to cause the relative difference in rejection rates to increase (Row 4 compared with Row 1), notwithstanding that the EES had declined. Thus, in each of the two cases persons relying on one of the standard relative measures would draw a mistaken conclusion about the pattern of change in the strength of the forces causing the outcome rates to differ.
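Rows like those of Table B can be generated with the same sketch used above for Table A, simply lowering the EES parameter to .26 (and, for Table C further below, raising it to .36). Again, this is my illustration of the mechanics, reusing the hypothetical helpers defined earlier, not the exact routine used to prepare the tables.

```python
# Reusing the hypothetical rates_for() and measures() helpers from the
# Table A sketch, with the EES lowered from .31 to .26 (Table B);
# substituting 0.36 yields rows like those of Table C.
for n in (1000, 2000, 500):
    hm, lm = rates_for(0.26, n)
    print(n, f"{100*hm:.1f}%", f"{100*lm:.1f}%", measures(hm, lm))
```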
Table B. Illustrations of the effect of changes in the number of LIHTC units applied for on standard measures of disparity in the face of a decline in the strength of the forces causing the outcome rates to differ (EES drops from .31 to .26)
| Row No | EES | HM Units Req | LM Units Req | HM App Rt | LM App Rt | HM/LM App Ratio | LM/HM Rej Ratio | Abs Df (% pts) | Odds Ratio |
|--------|------|--------------|--------------|-----------|-----------|-----------------|-----------------|----------------|------------|
| 1 | 0.31 | 1000 | 1000 | 49.70% | 37.40% | 1.33 | 1.24 | 12.30 | 1.65 |
| 2 | 0.26 | 1000 | 1000 | 48.60% | 38.40% | 1.27 | 1.20 | 10.20 | 1.52 |
| 3 | 0.26 | 2000 | 2000 | 25.50% | 17.90% | 1.42 | 1.10 | 7.60 | 1.57 |
| 4 | 0.26 | 500 | 500 | 89.80% | 84.40% | 1.06 | 1.53 | 5.40 | 1.63 |
But the situation differs from that reflected in Table A, where the described mistaken conclusions about changes in the impact of the selection procedure occurred in the face of no actual change. Here, the described mistaken conclusions as to changes would be in the face of true changes in the opposite direction. Moreover, in no case, including the situation where the number of applications remained constant and all measures changed in the same direction (Row 2), did any measure that correctly identified the direction of change in the forces causing the rates to differ effectively quantify the size of that change. Only a measure like the EES could accomplish that.
Table C presents information similar to that in Table B. But rows 2 to 4, rather than reflecting a decline in the EES from .31 to .26, reflect an increase in the EES from .31 to .36. The observations made about mistaken perceptions of the direction of change with respect to Table B would apply here, save that it would be the relative difference in the adverse outcome that would give the mistaken impression about the change in disparity in the situation posited in Row 3, and the relative difference in the favorable outcome that would give the mistaken impression in the situation posited in Row 4.
Table C. Illustrations of the effect of changes in the number of units requested on standard measures of disparity in the face of an increase in the strength of the forces causing the outcome rates to differ (EES increases from .31 to .36)
| Row No | EES | HM Units Req | LM Units Req | HM App Rt | LM App Rt | Units Awarded | HM/LM App Ratio | LM/HM Rej Ratio | Abs Df (% pts) | Odds Ratio |
|--------|------|--------------|--------------|-----------|-----------|---------------|-----------------|-----------------|----------------|------------|
| 1 | 0.31 | 1000 | 1000 | 49.70% | 37.40% | 871.00 | 1.33 | 1.24 | 12.30 | 1.65 |
| 2 | 0.36 | 1000 | 1000 | 50.50% | 36.50% | 870.00 | 1.38 | 1.28 | 14.00 | 1.77 |
| 3 | 0.36 | 2000 | 2000 | 26.90% | 16.50% | 868.00 | 1.63 | 1.14 | 10.40 | 1.86 |
| 4 | 0.36 | 500 | 500 | 90.70% | 83.20% | 869.50 | 1.09 | 1.81 | 7.50 | 1.97 |
[i] The cited figure respecting the concentration of units in areas with less than 50% white populations fails to indicate the proportion of the population that lives in areas that are less than 50% white. Thus, these figures do not provide a basis for inferring that a higher proportion of LIHTC units are located in areas with less than 50% white populations than the proportion of the population that is located in such areas (though I assume that was so). But I do not here further address the implications of the patterns by which measures of concentration in this context tend to be affected by the prevalence of an outcome or by other factors unrelated to the strength of the forces causing the concentration, save to note the following point about the relationship of the approval rate element of the lower court decisions to the overrepresentation/segregation element of those decisions. If more units are sought for areas with high minority representation than for areas with low minority representation, the same approval rate in both areas would cause a higher proportion of units to be located in the former areas than in the latter areas (for example, with 2000 units sought in the former and 1000 in the latter, a 40% approval rate in both would place two-thirds of the awarded units in the high minority areas).
[ii] That is, doubling the number of units sought has the same effect as halving the number of units available.
[iii] The number of units awarded according to the figures in Row 1 would be 871. In order to accomplish the simulations in the succeeding rows of Table A, as well as the like rows of Tables B and C, it is necessary to modify that figure somewhat (with figures ranging from 871 to 868).
[iv] See the McKinsey Achievement Gap Study subpage of the Educational Disparities page regarding a situation where a study relied on the relative difference in the favorable outcome to measure disparities in reaching/failing to reach the advanced achievement level (where that would be the larger relative difference) but on the relative difference in the adverse outcome to measure disparities in reaching/failing to reach the basic level (where that would be the larger relative difference). Under such an approach, general improvements in education would tend to be found to reduce disparities in reaching/failing to reach the advanced level but to increase disparities in reaching/failing to reach the basic level. See also the discussion on the Immunization Disparities page of a study that measured immunization disparities in terms of relative differences in the favorable outcome with respect to receipt/nonreceipt of full immunization but in terms of relative differences in the adverse outcome with respect to receipt/nonreceipt of any immunization, without recognizing that general increases in immunization would tend to reduce the former relative difference but increase the latter.