In several of the categories, there are fewer than 10 sample cases. You made the claim that you could make an accurate assessment on race based on these sample cases. Within the very study, it says that you cannot make an accurate assessment because the sample sizes are too small. You have stated over and over that the BJS created some kind of weight that corrects for this. That's simply not true. You are clinging to that point because you feel it makes a point about race and crime, when no logical point can be inferred from such a small sample size. The authors of the study admit that.
Please peddle some more idiotic mumbo jumbo to stay the course.
So you are disputing the study? You just said you weren't disputing the study's claims. Does it say anywhere in the study the exact words, "you cannot make an accurate assessment because sample sizes are too small"?
Yes, they do have a methodology and weighting adjustments. All statistics have some probability of error.
http://www.bjs.gov/index.cfm?ty=dcdetail&iid=245#Methodology

Nonresponse and weighting adjustments
In 2013, 90,630 households and 160,040 persons age 12 or older were interviewed for the NCVS. Each household was interviewed twice during the year. The response rate was 84% for households and 88% for eligible persons. Victimizations that occurred outside of the United States are excluded. In 2013, less than 1% of the unweighted victimizations occurred outside of the United States and are excluded from analyses of NCVS data.
Estimates in NCVS reports typically use data from the 1993 to 2013 NCVS data files, weighted to produce annual estimates of victimization for persons age 12 or older living in U.S. households. Since the NCVS relies on a sample rather than a census of the entire U.S. population, weights are designed to inflate sample point estimates to known population totals and to compensate for survey nonresponse and other aspects of the sample design.
The NCVS data files include both person and household weights. Person weights provide an estimate of the population represented by each person in the sample. Household weights provide an estimate of the U.S. household population represented by each household in the sample. After proper adjustment, both household and person weights are also typically used to form the denominator in calculations of crime rates.
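To make the weighting idea concrete, here is a minimal sketch of how person weights turn sample responses into a population-level rate. The numbers and variable names are invented for illustration; this is not the Census Bureau's actual estimation code.

```python
# Illustrative only: each respondent carries a person weight that says roughly how
# many people in the U.S. population age 12 or older they represent. Weighted
# totals, not raw sample counts, go into the published rates.
respondents = [
    # (person_weight, violent_victimizations_reported)
    (1800.0, 0),
    (2100.0, 1),
    (1950.0, 0),
    (2200.0, 2),
]

weighted_victimizations = sum(w * v for w, v in respondents)
weighted_population = sum(w for w, _ in respondents)

# Rate per 1,000 persons age 12 or older, as presented in NCVS tables
rate_per_1000 = 1000 * weighted_victimizations / weighted_population
print(round(rate_per_1000, 1))
```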
Victimization weights used in analysis of NCVS data account for the number of persons present during an incident and for high-frequency repeat victimizations (or series victimizations). Series victimizations are similar in type but occur with such frequency that a victim is unable to recall each individual event or describe each event in detail. Survey procedures allow NCVS interviewers to identify and classify these similar victimizations as series victimizations and to collect detailed information on only the most recent incident in the series.
The weight counts series incidents as the actual number of incidents reported by the victim, up to a maximum of 10 incidents. Including series victimizations in national rates results in rather large increases in the level of violent victimization; however, trends in violence are generally similar regardless of whether series victimizations are included.
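A rough sketch of how that counting rule might be applied when tallying incidents; it simply illustrates the cap described above and is not BJS code.

```python
# Illustrative only: a series victimization contributes the number of incidents
# the victim reports, but never more than 10, to limit the effect of outliers.
SERIES_CAP = 10

def incident_count(reported_incidents: int, is_series: bool) -> int:
    """Return how many incidents a single NCVS report contributes to the totals."""
    if is_series:
        return min(reported_incidents, SERIES_CAP)
    return reported_incidents

print(incident_count(3, is_series=False))   # ordinary report -> 3
print(incident_count(25, is_series=True))   # high-frequency series -> capped at 10
```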
In 2013, series incidents accounted for about 1% of all victimizations and 4% of all violent victimizations. Weighting series incidents as the number of incidents up to a maximum of 10 incidents produces more reliable estimates of crime levels, while the cap at 10 minimizes the effect of extreme outliers on the rates. Additional information on the series enumeration is detailed in the report Methods for Counting High Frequency Repeat Victimizations in the National Crime Victimization Survey, NCJ 237308, BJS web, April 2012.

Standard error computations
When national estimates are derived from a sample, as with the NCVS, caution must be taken when comparing one estimate to another estimate or when comparing estimates over time. Although one estimate may be larger than another, estimates based on a sample have some degree of sampling error. The sampling error of an estimate depends on several factors, including the amount of variation in the responses, and the size of the sample. When the sampling error around an estimate is taken into account, the estimates that appear different may not be statistically different.
One measure of the sampling error associated with an estimate is the standard error. The standard error can vary from one estimate to the next. Generally, an estimate with a small standard error provides a more reliable approximation of the true value than an estimate with a large standard error. Estimates with relatively large standard errors are associated with less precision and reliability and should be interpreted with caution.
In order to generate standard errors around numbers and estimates from the NCVS, the Census Bureau produced generalized variance function (GVF) parameters for BJS. The GVFs take into account aspects of the NCVS complex sample design and represent the curve fitted to a selection of individual standard errors based on the Jackknife Repeated Replication technique. The GVF parameters are used to generate standard errors for each point estimate (such as counts, percentages, and rates) in reports using NCVS data.
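As a loose illustration of what a generalized variance function does, the sketch below uses a common GVF form, SE(x) = sqrt(a·x² + b·x), with made-up parameters a and b. The real NCVS parameters are published by BJS and differ by year and type of estimate, so treat this purely as an example of the mechanics.

```python
import math

# Illustrative only: a generalized variance function maps a point estimate x
# (e.g., a weighted count of victimizations) to an approximate standard error.
# The parameters a and b here are invented for the example; BJS publishes the
# actual NCVS parameters, which vary by year and estimate type.
def gvf_standard_error(x: float, a: float = -0.00001, b: float = 3725.0) -> float:
    return math.sqrt(max(a * x * x + b * x, 0.0))

estimated_count = 1_200_000  # hypothetical weighted count
print(round(gvf_standard_error(estimated_count)))
```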
BJS conducts tests to determine whether differences in estimated numbers and percentages in reports using NCVS data are statistically significant once sampling error is taken into account. Using statistical programs developed specifically for the NCVS, all comparisons in the text of reports are tested for significance. The Student’s t-statistic is the primary test procedure, which tests the difference between two sample estimates.
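The comparison described above can be sketched as: take the difference between two estimates, divide by the standard error of that difference, and compare the result to a critical value. The snippet below is a simplified version with made-up numbers and assumes the two estimates are independent; it is not the BJS production procedure.

```python
import math

# Illustrative only: test whether two NCVS-style point estimates differ
# significantly once sampling error is taken into account.
def significant_difference(est1, se1, est2, se2, critical_value=1.96):
    se_diff = math.sqrt(se1**2 + se2**2)   # standard error of the difference
    t_stat = (est1 - est2) / se_diff       # Student's t-style test statistic
    return abs(t_stat) > critical_value, t_stat

# Hypothetical rates per 1,000 persons with their standard errors
print(significant_difference(23.2, 1.6, 26.1, 1.7))
```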
Data users can use the estimates and the standard errors of the estimates provided in reports to generate a confidence interval around the estimate as a measure of the margin of error. The following example illustrates how standard errors can be used to generate confidence intervals:
According to the NCVS, in 2013, the violent victimization rate among persons age 12 or older was 23.2 per 1,000 persons (see table 1 in Criminal Victimization, 2013, NCJ 247648, September 2014). Using the GVFs, it was determined that the estimated victimization rate has a standard error of 1.6 (see appendix table 2 in Criminal Victimization, 2013, NCJ 247648, September 2014). A confidence interval around the estimate was generated by multiplying the standard error by ±1.96 (the t-score of a normal, two-tailed distribution that excludes 2.5% at either end of the distribution). Therefore, the 95% confidence interval around the 23.2 estimate from 2013 is 23.2 ± (1.6 × 1.96), or 20.0 to 26.3. In other words, if different samples using the same procedures were taken from the U.S. population in 2013, 95% of the time the violent victimization rate would fall between 20.0 and 26.3 per 1,000 persons.
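The arithmetic in that example can be reproduced directly; the slight difference in the lower bound versus the published 20.0 comes from rounding in the published standard error.

```python
# Reproduce the confidence-interval arithmetic from the example above.
rate = 23.2           # violent victimizations per 1,000 persons, 2013
standard_error = 1.6  # from the GVFs (appendix table 2, NCJ 247648)
z = 1.96              # two-tailed 95% critical value

margin = z * standard_error
lower, upper = rate - margin, rate + margin
print(f"95% CI: {lower:.1f} to {upper:.1f} per 1,000 persons")  # ~20.1 to 26.3
```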
BJS also calculates a coefficient of variation (CV) for all estimates, representing the ratio of the standard error to the estimate. CVs provide a measure of reliability and a means to compare the precision of estimates across measures with differing levels or metrics. In cases where the CV is greater than 50%, or the unweighted sample had 10 or fewer cases, the estimate is noted with a “!” symbol (Interpret data with caution. Estimate based on 10 or fewer sample cases, or the coefficient of variation is greater than 50%).
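The flagging rule in that last paragraph amounts to a simple check, sketched here with hypothetical inputs.

```python
# Illustrative only: flag an estimate as "interpret with caution" when the
# coefficient of variation exceeds 50% or the unweighted sample has 10 or fewer cases.
def needs_caution_flag(estimate: float, standard_error: float, unweighted_n: int) -> bool:
    cv = standard_error / estimate
    return cv > 0.5 or unweighted_n <= 10

print(needs_caution_flag(estimate=23.2, standard_error=1.6, unweighted_n=8))    # True: 10 or fewer cases
print(needs_caution_flag(estimate=23.2, standard_error=1.6, unweighted_n=150))  # False
```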