*Statistix* offers many association tests that can be used to examine the similarity or association among two or more variables.

The **Multinomial Test** is a goodness-of-fit test that measures how well the observed frequencies of mutually exclusive categories fit a hypothesized distribution.
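
Statistix runs this test from its menus; as a rough illustration of the computation, the chi-square approximation to a multinomial goodness-of-fit test can be sketched in Python with SciPy. The counts and the 9:3:3:1 Mendelian hypothesis below are hypothetical:

```python
from scipy.stats import chisquare

# Hypothetical observed counts for four mutually exclusive phenotype
# categories, tested against a hypothesized 9:3:3:1 Mendelian ratio.
observed = [152, 39, 53, 6]
n = sum(observed)
expected = [n * p for p in (9 / 16, 3 / 16, 3 / 16, 1 / 16)]

# Pearson chi-square goodness-of-fit statistic and p-value.
stat, p = chisquare(observed, f_exp=expected)
print(f"chi-square = {stat:.2f}, p = {p:.4f}")
```

Statistix may use an exact multinomial calculation rather than this large-sample approximation; the sketch only shows the idea of comparing observed counts with counts expected under the hypothesized distribution.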

The **Chi-Square Test** computes the traditional chi-square goodness-of-fit test for two-way tables. Two hypotheses can be examined with this test: the hypothesis of
independence, and the hypothesis of homogeneity.
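
As an illustrative sketch (not Statistix's own implementation), the same two-way-table chi-square statistic can be computed with SciPy's `chi2_contingency`; the table below is hypothetical:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x3 table: rows = two treatment groups,
# columns = three outcome classes.
table = np.array([[30, 14, 6],
                  [22, 18, 10]])

# Chi-square statistic, p-value, degrees of freedom, and the
# expected counts under the null hypothesis of independence.
stat, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {stat:.2f}, df = {dof}, p = {p:.4f}")
```

The same arithmetic serves both hypotheses; independence versus homogeneity is a matter of how the data were sampled (one sample cross-classified two ways, or separate samples compared across one classification).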

The **Kolmogorov-Smirnov Test** is useful for comparing the similarity of the distributions of samples from two populations. If there is an intrinsic ordering to the categories,
the Kolmogorov-Smirnov test is usually better than the chi-square test because it can exploit the information in the ordering while the chi-square analysis cannot.
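
For readers who want to see the two-sample comparison in code, SciPy's `ks_2samp` computes the same kind of statistic: the maximum distance between the two empirical distribution functions. The samples below are simulated, purely for illustration:

```python
import numpy as np
from scipy.stats import ks_2samp

# Simulated samples from two populations whose distributions
# differ by a location shift (hypothetical data).
rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, 80)
b = rng.normal(0.5, 1.0, 80)

# The KS statistic is the largest vertical gap between the two
# empirical cumulative distribution functions.
stat, p = ks_2samp(a, b)
print(f"D = {stat:.3f}, p = {p:.4f}")
```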

**McNemar’s Symmetry Test** is a goodness-of-fit test that’s often useful for measuring change. It’s used to analyze square contingency tables; often the rows represent
classifications before some event, while the columns represent the same classes after some event. Individuals may be in one class before the event but in another class after the
event. However, if the table is symmetric about the diagonal from the upper left to the lower right, there will be no net shift in the row and column proportions before and after.
McNemar’s test examines whether the table is in fact symmetric.
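
The symmetry statistic for a k by k table (Bowker's generalization of McNemar's test; the two coincide when k = 2) can be sketched directly, since it only compares each off-diagonal pair of cells. The before/after table below is hypothetical:

```python
import numpy as np
from scipy.stats import chi2

def symmetry_test(table):
    """Bowker's test of symmetry (McNemar's test for a 2x2 table).

    H0: for every pair i != j, the counts n_ij and n_ji are equal in
    expectation, i.e. the square table is symmetric about its diagonal.
    """
    t = np.asarray(table, dtype=float)
    k = t.shape[0]
    stat, df = 0.0, 0
    for i in range(k):
        for j in range(i + 1, k):
            if t[i, j] + t[j, i] > 0:
                stat += (t[i, j] - t[j, i]) ** 2 / (t[i, j] + t[j, i])
                df += 1
    return stat, df, chi2.sf(stat, df)

# Hypothetical 3x3 table: rows = class before the event,
# columns = class after the event, for the same individuals.
before_after = [[20, 5, 2],
                [3, 30, 4],
                [1, 6, 25]]
stat, df, p = symmetry_test(before_after)
print(f"symmetry chi-square = {stat:.3f}, df = {df}, p = {p:.4f}")
```

Note that the diagonal cells (individuals who did not change class) never enter the statistic; only the pattern of shifts matters.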

The **Two by Two Tables** procedure computes a variety of tests of association for two by two contingency tables. A typical two by two table arises when a number of
individuals are cross-classified by two dichotomous variables, such as treated-not treated and survived-died. The tests include Fisher’s exact test, the Pearson chi-square, the log odds
ratio, and others, along with standard errors.
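
Two of these statistics, Fisher's exact test and the log odds ratio with its large-sample standard error, can be sketched with SciPy and the standard formulas; the treated/survived counts below are hypothetical:

```python
import math
from scipy.stats import fisher_exact

# Hypothetical 2x2 table:
# rows = treated / not treated, columns = survived / died.
table = [[40, 10],
         [25, 25]]

# Fisher's exact test; the statistic is the sample odds ratio ad/bc.
odds_ratio, p = fisher_exact(table)

# Log odds ratio and its large-sample standard error
# sqrt(1/a + 1/b + 1/c + 1/d).
a, b = table[0]
c, d = table[1]
log_or = math.log((a * d) / (b * c))
se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
print(f"OR = {odds_ratio:.2f}, log OR = {log_or:.3f} (SE {se:.3f}), p = {p:.4f}")
```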

The **Log-Linear Models** procedure is a powerful tool for analyzing discrete multidimensional categorical data. Log-linear models are the discrete-data analogs of analysis of
variance. If a set of discrete data has more than two classifying variables, you may be tempted to analyze it as a series of two-way tables with traditional chi-square
tests. The danger of such an approach is that collapsing the data over some categorical variables can confound those variables with the remaining two. Log-linear models
allow all dimensions of a multidimensional contingency table to be treated simultaneously, and so avoid such potential confounding.
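
As a small illustration of treating all dimensions at once, the simplest log-linear model for a three-way table, mutual independence, can be fitted in closed form from the one-way margins, and assessed with the likelihood-ratio statistic G². This is only a sketch of the idea, with hypothetical counts; Statistix's procedure fits a much wider range of models:

```python
import numpy as np
from scipy.stats import chi2

# Hypothetical 2x2x2 table of counts cross-classified by three
# dichotomous variables A, B, and C.
table = np.array([[[20, 15],
                   [10, 12]],
                  [[18, 9],
                   [14, 22]]], dtype=float)
n = table.sum()

# Fitted counts under the mutual-independence model [A][B][C]:
# expected_ijk = (A-margin_i * B-margin_j * C-margin_k) / n^2.
a = table.sum(axis=(1, 2))
b = table.sum(axis=(0, 2))
c = table.sum(axis=(0, 1))
expected = np.einsum('i,j,k->ijk', a, b, c) / n ** 2

# Likelihood-ratio statistic G^2 = 2 * sum(obs * ln(obs / fitted)),
# compared to chi-square with IJK - I - J - K + 2 degrees of freedom.
g2 = 2 * np.sum(table * np.log(table / expected))
shape = table.shape
df = np.prod(shape) - sum(shape) + len(shape) - 1
p = chi2.sf(g2, df)
print(f"G^2 = {g2:.3f}, df = {df}, p = {p:.4f}")
```

More elaborate models (for example, allowing some pairwise associations) generally require iterative fitting, which is exactly the machinery a log-linear procedure automates.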

The **Spearman Rank Correlations** procedure produces nonparametric correlation coefficients that are suitable for examining the degree of association when the samples violate the
assumption of bivariate normality.
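
Since the coefficient depends only on ranks, a monotone but nonlinear relationship still scores near 1. SciPy's `spearmanr` gives an equivalent computation; the paired data below is hypothetical:

```python
from scipy.stats import spearmanr

# Hypothetical paired measurements with a monotone but clearly
# nonlinear relationship (one pair swapped to break perfect ordering).
x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [2, 1, 9, 20, 30, 55, 70, 100]

# Spearman's rho is the Pearson correlation of the ranks, so it does
# not require the samples to be bivariate normal.
rho, p = spearmanr(x, y)
print(f"rho = {rho:.4f}, p = {p:.4f}")
```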