Indicator #4: Assessment
Ongoing assessment is necessary to ensure that your web accessibility plan is working and on track. Processes must be in place to measure progress, constituent satisfaction, and outcomes. This information is then used to help determine the sustainability of the current efforts and make improvements to the overall program.
Review teams might see evidence of assessment in a number of ways. Three benchmarks illustrate the assessment necessary for institution-wide web accessibility. Under each benchmark are examples of evidence that would support institutional claims of adherence to that particular benchmark; other evidence may also exist. Each example is followed by a list of questions that can be used to help determine the strength of the given evidence.
Benchmark A: Evaluation of Implementation Progress
It should be noted that not all examples below are required to point to evidence of this Benchmark. However, work across these examples shows added effort to evaluate implementation progress.
The collection and analysis of data or information on an institution's progress within the implementation process
- Is information collected on the institution's progress in implementing the accessibility plan?
- Are different components of the plan included in data collection and analysis?
- Scope?
- Benchmarking?
- Communication?
- Budget?
- Personnel?
- Training and Support?
- Timelines and Metrics?
- Outcomes?
- Is there documentation that progress is evaluated to determine if implementation is occurring at predicted levels?
- Is this evaluation used to identify problems in implementation?
- Is there evidence that issues found in evaluation are used to adjust and improve the plan?
- Is there evidence that mechanisms are in place to communicate findings and changes to the affected stakeholders?
- Are these mechanisms used consistently across groups?
Formal reports on the progress of the intended implementation plan
- Does the institution create formal reports on implementation progress?
- Do the reports review different components of the plan?
- Scope?
- Benchmarking?
- Communication?
- Budget?
- Personnel?
- Training and Support?
- Timelines and Metrics?
- Outcomes?
- Do the reports include information from a variety of sources representing a range of different viewpoints?
- Do the reports provide insight into the institution-wide process that may not be apparent when components are reviewed in isolation?
- Are the reports understandable?
- Do they communicate a useful picture of current progress?
- Do the reports provide information on any implementation issues found and actions taken?
- Do the reports discuss changes or edits made to the current plan?
- Are these reports used to make adjustments to the accessibility plan?
Informal summaries or communications on the progress of the implementation plan
- Is there evidence that mechanisms are in place to collect and track informal information on plan progress? For example:
- Emails
- Updates
- Unofficial reviews
- Feedback appraisals
- Webpage accessibility checks
- Is there documentation that informal information is used to identify potential issues or problems?
- Is there evidence that actions are taken to alleviate or resolve problems before they become critical?
Benchmark B: Evaluation of Web Accessibility Outcomes
It should be noted that not all examples below are required to point to evidence of this Benchmark. However, work across these examples adds strength to the evaluation.
The collection and analysis of institutional web accessibility data
- Is there documentation that evaluations and checks are scheduled to ensure that web accessibility outcomes meet expected levels for:
- The Institutional policy?
- The technical standard?
- Plan milestones?
- Is there evidence that there is a reasonable cycle and timeline for assessments?
- Is formative data collected?
- Is summative data collected?
- Is data collection ongoing?
- Is there documentation that key personnel are specifically assigned to oversee or conduct these assessments?
- Is there comprehensive documentation on how the accessibility of the institutional web is evaluated?
- How are accessibility checks performed?
- Automated checks?
- Manual checks?
- Both automated and manual? A mix of automated and manual checks is strongly recommended: automated evaluation tools alone do not provide an accurate picture, and manual checks alone limit the number of pages that can reasonably be sampled. (A minimal sketch of an automated check over a sampled set of pages appears after this list.)
- Is a reasonable sample evaluated?
- What percentage of pages on the institutional website are evaluated?
- Are enough pages sampled to provide an accurate picture of the institutional web?
- Are the sampled pages representative?
- Are different parts of the institution's website included in the sample?
- Are all page types specified in the policy and plan included in the evaluation schedule?
- How are sample webpages chosen?
- Are the sampled pages randomly selected?
- Are the pages to be evaluated set based on other criteria?
- Is the person/people responsible for accessibility checks specified?
- Do they have the appropriate qualifications and knowledge needed to conduct the assessments?
- Do pages that are found to be accessible continue to be checked over time to ensure that they remain accessible?
- Is there evidence that outcome collection strategies are reviewed and evaluated as technology and standards change over time?
- Are changes made to the strategy to ensure that outcome collection is in line with current standards and practices?
- Is there documentation that the results of evaluations are disseminated?
- Are the results disseminated to important stakeholders? (e.g., the institutional committee, those who must make content accessible, and those with disabilities)
- How widely are the results disseminated?
- Is there evidence that the results are used in meaningful and productive ways?
- To make adjustments to the plan?
- To identify areas/personnel requiring additional assistance or who may serve as support for others?
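For illustration only, the short Python sketch below shows one way the sampling and automated-check questions above could be operationalized: it draws a random sample of page URLs from a sitemap and flags images that lack alt text, one of the failures an automated tool can reliably catch. The sitemap location, sample size, and the single missing-alt criterion are assumptions made for this example, not a prescribed institutional method.

```python
# Illustrative sketch only: sample pages from a sitemap and run one simple
# automated accessibility check (images missing alt text). The sitemap URL
# and sample size are hypothetical; a real program would pair a fuller
# automated engine with manual, assistive-technology-based review.
import random
import xml.etree.ElementTree as ET

import requests
from bs4 import BeautifulSoup

SITEMAP_URL = "https://www.example.edu/sitemap.xml"  # assumed location
SAMPLE_SIZE = 25                                     # assumed sampling depth


def sampled_urls(sitemap_url: str, k: int) -> list[str]:
    """Return a random sample of page URLs listed in an XML sitemap."""
    xml = requests.get(sitemap_url, timeout=30).text
    ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
    urls = [loc.text for loc in ET.fromstring(xml).findall(".//sm:loc", ns)]
    return random.sample(urls, min(k, len(urls)))


def images_missing_alt(page_url: str) -> int:
    """Count <img> elements with no alt attribute (a common automated check)."""
    html = requests.get(page_url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    return sum(1 for img in soup.find_all("img") if not img.has_attr("alt"))


if __name__ == "__main__":
    for url in sampled_urls(SITEMAP_URL, SAMPLE_SIZE):
        failures = images_missing_alt(url)
        status = "OK" if failures == 0 else f"{failures} image(s) missing alt text"
        print(f"{url}: {status}")
```

Random sampling alone may under-represent the page types named in the policy and plan; stratified sampling by site section, plus manual and assistive-technology testing of the sampled pages, is commonly layered on top of a check like this.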
The development of institutional reports containing web accessibility data or summaries
- Does the institution create reports or summaries on outcome data?
- Do the reports include information about evaluations across all affected parts of the institutional website?
- Do the reports provide insight into the institution-wide process that may not be apparent when components are reviewed in isolation?
- Are the reports understandable?
- Do they communicate a useful picture of current progress?
- Are the reports disseminated to important stakeholders?
- Do the reports provide information on any implementation issues found and actions taken?
- Do the reports discuss changes or edits made to the current plan?
- Are the results used in meaningful and productive ways?
- To make adjustments to the plan?
- To identify areas/personnel requiring additional assistance or who may serve as support for others?
The creation of reports from external evaluations of web accessibility outcomes
- Is there documentation of reviews or accessibility audits by external reviewers?
- Is there documentation on who conducted these outside evaluations?
- Peer institutions?
- Web accessibility groups?
- Web standards specialists?
- Is there evidence that the results of the external reviews are in line with internal data collection?
- If not, are internal collection strategies reviewed and modified as necessary?
- Are the results included in institutional reports?
- Is there evidence that the results are disseminated to important stakeholders?
- Is there evidence that the results are used in meaningful and productive ways?
- To make adjustments to the plan?
- To identify areas/personnel requiring additional assistance or who may serve as support for others?
The collection and use of correspondence describing accessibility outcomes
- Is there documentation of mechanisms used to track correspondence between administrators, key personnel and stakeholders regarding accessibility data?
- Is there evidence that this correspondence is used:
- To help monitor progress to the accessibility plan?
- To identify potential issues or problems?
- Are actions taken to alleviate or resolve problems before they become critical?
- To identify areas/personnel requiring additional assistance or who may serve as support for others?
Benchmark C: Assessment Results Are Used To Improve Institutional Accessibility
It should be noted that not all examples below are required to point to evidence of this Benchmark. However, work across these examples shows a greater commitment to sustained improvement.
The development and use of reports that reflect data-based recommendations for change
- Are documents available that recommend changes or actions based on assessments and data collected?
- Note: These documents can be recorded in a range of formats, including reports, meeting minutes, or correspondence.
- Is there documentation that recommendations come from a variety of sources?
- Formal Reports?
- Informal Reports?
- Accessibility Audits?
- Outside Evaluations?
- Communications from key personnel?
- Is there evidence that a range of key personnel are involved in making recommendations?
- Are there indications that recommended changes or actions target the appropriate areas?
- Policy?
- Plan Components?
- Scope?
- Benchmarking?
- Communications?
- Budget?
- Personnel?
- Training and Support?
- Timelines and Metrics?
- Outcomes?
- Assessments?
- Process?
- Are recommended changes or actions prioritized and given timelines?
- Is there evidence that data are reviewed on an ongoing schedule and that new recommendation reports are developed as necessary?
Documentation that describes how data sources inform institutional efforts
- If the institution is in a phase before data collection has begun, or is between data cycles, is there documentation on how data sources will inform efforts once data is collected?
- Are priority issues specified?
- Are processes for likely issues outlined?
- Are contingencies for severe issues considered?
- Is there evidence that mechanisms are in place that can help serve as early warning indicators for critical aspects of the plan in advance of an assessment cycle?
- Is there documentation that information from these mechanisms is being tracked and used to prevent or mitigate potential problems?