Evaluating your Institution's Web Accessibility Efforts
Part 2: Evaluating the Product

Jon Whiting

Evaluating web content for accessibility requires an evaluation of both the process (your institutional work on web accessibility) and the product (the accessibility of the web content itself). This resource outlines how to prepare for and conduct an evaluation of your institution's web content. While it may be tempting to jump straight into an evaluation, following these steps can help make your accessibility reporting process more effective:

These steps build on one another. The purpose of the evaluation will influence the scope, which will in turn affect the sample size, etc. Each of these steps is explained in detail below.

Determine the Purpose of the Evaluation
Before structuring the evaluation, you should determine its overall purpose. This purpose will influence every other aspect of the evaluation. For example, if the purpose of the evaluation is to gather baseline data before accessibility initiatives occur, or if it is part of an annual benchmark of system-wide web accessibility, you would want to include a broad and representative sample of pages from across the institution. If the evaluation is intended to help educate or motivate developers, it might include more detailed information on the impact of specific accessibility issues found on those developers' pages. The results could then inform training and technical assistance tailored to their needs. Whatever your reason for engaging in an evaluation, be clear about what that purpose is.

Determine the Scope of the Evaluation
At some point, the scope of accessibility work in higher education must be addressed. For some institutions, the answer is everything under the "institution.edu" domain. Others find it helpful to limit their scope so that student-generated or alumni-generated content is not included. Finally, some units are interested in evaluating the accessibility of their web content even if the entire institution will not participate. It is vital that you understand what will be included in, and excluded from, your evaluation.

Determine the Sample
Once you have selected the scope, the next step is to determine the sample size and how you will gather that sample. The key issue here is obtaining a representative sample of your intended scope. Some institutions gather a sample of between 5% and 10% of all pages within their scope, determined by randomly selecting pages at varying levels of depth within the institution's site. While this may work for some, for others it could represent tens of thousands of pages. There is also a law of diminishing returns when evaluating web content: a shallow report identifying the same issue or issue type across hundreds of pages is almost always less effective than an in-depth evaluation of a much smaller sample of representative pages.

If you use this approach to determine the sample size, you will want to include high-profile pages as well as a representative sample of other pages. For example, suppose I would like to sample 100 pages from my institution (i.e., aligned with my scope). First, I would identify high-profile pages and those pages where I believe there are the greatest differences in design type and functionality. Here is my sample list:

If possible, identify the different development teams (or individual developers) that created the designs and content, and draw a sample from each. You can create a matrix of pages to evaluate, organized by creator, so you can track your strategic sampling. While it is not a perfect science, you can make strategic decisions to sample from across as many developers as possible. Then all that is left is to draw your sample from these pages and begin your evaluation.
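The sampling strategy described above can be sketched in a few lines of code. This is a minimal illustration, assuming a hypothetical page inventory grouped by development team (in practice this might come from a CMS export or a site crawl); the names and data are not from the article.

```python
import random

# Hypothetical inventory: page URLs grouped by the team that maintains them.
pages_by_team = {
    "central-web": ["https://example.edu/", "https://example.edu/admissions",
                    "https://example.edu/news"],
    "library":     ["https://example.edu/library", "https://example.edu/library/hours"],
    "athletics":   ["https://example.edu/athletics", "https://example.edu/athletics/tickets"],
}

def stratified_sample(pages_by_team, per_team=2, seed=42):
    """Draw up to `per_team` random pages from each team so that every
    development group is represented in the evaluation sample."""
    rng = random.Random(seed)  # a fixed seed makes the sample reproducible
    sample = []
    for team, pages in sorted(pages_by_team.items()):
        k = min(per_team, len(pages))
        sample.extend(rng.sample(pages, k))
    return sample

print(stratified_sample(pages_by_team))
```

A fixed random seed is used so the same sample can be regenerated later, which matters if the evaluation is repeated annually for benchmarking.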

If your institution has access to an automated accessibility evaluation tool, a broader evaluation of hundreds or even thousands of pages is possible as well. This can be helpful when monitoring pages that have already been addressed.
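To give a flavor of what an automated tool checks, here is a toy example using only Python's standard library: it flags `img` elements that lack an `alt` attribute, one of the most common machine-detectable accessibility errors. Real evaluation tools test far more than this; the sketch only illustrates the monitoring idea.

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Toy automated check: count <img> tags that lack an alt attribute.
    (An empty alt="" is valid for decorative images, so it is not flagged.)"""
    def __init__(self):
        super().__init__()
        self.missing_alt = 0

    def handle_starttag(self, tag, attrs):
        if tag == "img" and "alt" not in dict(attrs):
            self.missing_alt += 1

def count_missing_alt(html):
    checker = MissingAltChecker()
    checker.feed(html)
    return checker.missing_alt

page = ('<html><body><img src="logo.png">'
        '<img src="photo.jpg" alt="Campus quad"></body></html>')
print(count_missing_alt(page))  # prints 1
```

Run across hundreds of fetched pages, a check like this can flag regressions on pages that were previously repaired.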

Select a Standard for Evaluation
Before you engage in the evaluation, you need to know the standard against which you are evaluating content. Even if your institution has not adopted a technical standard for accessibility, it is important that you establish a standard for your site evaluation. WCAG 2.0 is typically the best standard for evaluation, even if your institution is currently using a dated technical standard like WCAG 1.0 or Section 508. For more information on this topic, see our blog post on Choosing a Technical Web Accessibility Standard. This does not mean you must structure or limit your evaluation according to a technical standard. Some accessibility issues, such as small text or empty headings, may not map to a specific checkpoint, but they could still be included in the report. It is important to note, however, that if your purpose is to use this information in a comparative manner (e.g., to look at trends over time), you must use the same standard each time. Otherwise you risk comparing apples and oranges.

If you are also evaluating conformance to a set standard, you may want to use a spreadsheet to record specific conformance and non-conformance (this is not always the same as pass/fail; a page can have an accessibility issue that could be improved without being a strict "failure"). The GOALS project has provided a sample spreadsheet for recording WCAG 2.0 Level AA conformance (which also requires conformance to all Level A criteria). Our partner WebAIM has created a Section 508 checklist and a WCAG 2.0 checklist that may be useful in evaluating your web content.
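If you prefer to generate such a conformance record programmatically rather than fill in a spreadsheet by hand, a small script can write per-page results to CSV. This is a hedged sketch, not the GOALS spreadsheet itself; the pages, success criteria, and result values are illustrative assumptions.

```python
import csv
import io

# Hypothetical per-page results: WCAG 2.0 success criterion -> "pass", "fail", or "n/a".
results = {
    "https://example.edu/":           {"1.1.1": "fail", "1.4.3": "pass", "2.4.6": "n/a"},
    "https://example.edu/admissions": {"1.1.1": "pass", "1.4.3": "fail", "2.4.6": "pass"},
}

criteria = ["1.1.1", "1.4.3", "2.4.6"]

# Write one row per page, one column per success criterion.
buffer = io.StringIO()  # swap in open("conformance.csv", "w") to save a file
writer = csv.writer(buffer)
writer.writerow(["page"] + criteria)
for page, checks in results.items():
    writer.writerow([page] + [checks.get(c, "") for c in criteria])

print(buffer.getvalue())
```

A grid like this makes it easy to spot both problem pages (rows with many failures) and systemic problems (columns with many failures).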

Determine the Report Format
Before you begin your evaluation, you need to know how you will summarize the results in ways that are meaningful for your purpose and your audience. Some institutions record a strict pass or fail for each page, but we typically recommend recording the types of errors that occurred on any given page. This format results in richer descriptions of accessibility barriers (e.g., "94% of pages included errors with form labels"). This type of information can then become actionable for your training team.
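Summary statistics like "94% of pages included errors with form labels" fall out directly from per-page error records. A minimal sketch, assuming hypothetical findings data (the pages and error-type names are illustrative, not from the article):

```python
from collections import Counter

# Hypothetical findings: for each sampled page, the set of error types observed.
findings = {
    "https://example.edu/":           {"missing alt text", "form labels"},
    "https://example.edu/admissions": {"form labels"},
    "https://example.edu/library":    {"low contrast", "form labels"},
    "https://example.edu/athletics":  set(),  # no errors found on this page
}

def error_rates(findings):
    """Percentage of sampled pages on which each error type appears."""
    counts = Counter(err for errors in findings.values() for err in errors)
    total = len(findings)
    return {err: round(100 * n / total) for err, n in counts.items()}

for err, pct in sorted(error_rates(findings).items()):
    print(f"{pct}% of pages included errors with {err}")
```

Because the rates are per error type rather than per page, a training team can see at a glance which topics (form labeling, contrast, alternative text) deserve the most attention.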

While every report differs based on the variables identified above, most reports contain some of the following sections:

Conduct the Evaluation
Now that all of the preliminary decisions have been made, it is time to evaluate the accessibility of your web content. Site evaluation usually includes several components:

As you evaluate each page, you should record your findings using the report template that you created in the previous step. Our partner WebAIM has created a single-page guide to testing for web accessibility that can help you through tool-assisted and manual accessibility evaluations.

Evaluating web content for accessibility does require a fairly deep understanding of accessible design principles. If there is no one in your institution with this level of expertise, there are several groups who offer web accessibility evaluation services.

Share Results and Plan for Improvement
Now that you have completed your evaluation, you will want to communicate the results to institutional stakeholders. This can take many forms. Some will email results in a simple paragraph to the institution's web accessibility committee. Others generate a report that is linked from the institution's assessment page and sent to the web accessibility committee, key administrators, and others on campus.

Regardless of its format, an accessibility evaluation is of little value unless it drives your institution to action, so a web accessibility evaluation is not complete without a plan for improvement. While the accessibility evaluation is usually conducted by a small team of individuals who focus on accessibility, this final plan should include other key decision makers on campus, such as the webmaster or IT representative.

This plan should include system-level improvements (e.g., additional training, clearer communication) as well as remediation of existing accessibility issues. The following factors should be used to help prioritize the order in which content will be repaired:
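One common way to operationalize prioritization is to score each issue and repair the highest-scoring items first. The scoring formula below is entirely a hypothetical assumption for illustration (weighting issue severity by page traffic); the article's actual prioritization factors should drive whatever scheme you adopt.

```python
# Hypothetical remediation queue. The fields and weights here are
# assumptions for illustration, not part of the source article.
issues = [
    {"page": "https://example.edu/", "issue": "missing form labels",
     "severity": 3, "monthly_visits": 50000},
    {"page": "https://example.edu/archive/2009", "issue": "low contrast",
     "severity": 2, "monthly_visits": 40},
]

def priority(issue):
    """Assumed scoring: severe issues on high-traffic pages come first."""
    return issue["severity"] * issue["monthly_visits"]

for item in sorted(issues, key=priority, reverse=True):
    print(item["page"], "-", item["issue"])
```

Even a rough score like this gives remediation work a defensible order, and the weights can be refined as the committee agrees on what matters most.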

For more information on this topic, read Prioritizing Remediation of Web Accessibility Issues by Karl Groves.