An important step in the assessment process is choosing an appropriate method for collecting data. When considering how to assess your goals or outcomes, it can be helpful to start by thinking about answers to the following questions:
- What type of data do you need?
- Has someone already collected the information you are looking to gather?
- Can you access the existing data?
- Can you use the existing data?
- Is there potential for collaboration with another individual, program, or department?
- How can you best collect this data?
- How will you specifically use the information you collect?
The most important aspect of choosing a method is ensuring that the method will provide the evidence needed to determine the extent to which the goal or outcome was achieved. Decisions about which assessment methods to utilize should be based primarily on the data that is needed for the specific goals and outcomes being assessed, not on past data collection efforts or convenience.
Direct and Indirect Methods of Assessment
There are many assessment methods to consider, and they tend to fall into one of two categories: direct and indirect methods. When assessing student learning in particular, direct methods are often needed in order to accurately determine if students are achieving the outcome.
- Direct Method - Any process employed to gather data that requires participants to demonstrate their knowledge, behavior, or thought processes.
- Indirect Method - Any process employed to gather data that asks participants to reflect upon their knowledge, behaviors, or thought processes.
For example, if a department or program has identified effective oral communication as a learning goal or outcome, a direct assessment method involves observing and assessing students in the act of oral communication (e.g., via a presentation scored with a rubric). Asking students to indicate how effective they think they are at communicating orally (e.g., on a survey-like instrument with a rating scale) is an indirect method.
Direct Evidence of Student Learning
Sources of direct evidence of student learning fall into two primary categories: observation and artifact/document analysis. The former involves the student being present, whereas the latter involves a product of student work and does not require the student to be present. Here are some examples of each:
- Observation opportunities: performances, presentations, debates, group discussions.
- Artifact/document analysis opportunities: portfolios, research papers, exams/tests/quizzes, standardized tests of knowledge, reflection papers, lab reports, discussion board threads, art projects, conference posters.
The process for directly assessing learning in any of the above situations involves clear and explicit standards for performance on pre-determined dimensions of the learning outcome, often accomplished through the development and use of a rubric. For example, the learning outcome "Students in Research Methods will be able to document sources in the text and the corresponding reference list" could be assessed by randomly selecting papers from the course and using a rubric to determine the extent to which students are actually able to document sources. It is important to note that stand-alone grades, without thorough scoring criteria, are not considered a direct method of assessment due to the multiple factors that contribute to the assignment of grades.
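The sampling-and-scoring process described above can be sketched in code. This is a minimal illustration, not an official tool: the rubric dimensions, paper names, and rating scale below are hypothetical examples chosen for the source-documentation outcome.

```python
import random

# Hypothetical rubric dimensions for the source-documentation outcome;
# each paper is rated 1 (beginning) to 4 (exemplary) on each dimension.
RUBRIC_DIMENSIONS = ["in-text citation format", "reference list completeness"]

def sample_papers(paper_ids, sample_size, seed=0):
    """Randomly select a subset of course papers for rubric scoring."""
    rng = random.Random(seed)  # fixed seed so the sample is reproducible
    return rng.sample(paper_ids, min(sample_size, len(paper_ids)))

def summarize_scores(scores):
    """Average each rubric dimension across the sampled papers.

    `scores` maps paper id -> {dimension: rating}.
    """
    summary = {}
    for dim in RUBRIC_DIMENSIONS:
        ratings = [paper_scores[dim] for paper_scores in scores.values()]
        summary[dim] = sum(ratings) / len(ratings)
    return summary

papers = [f"paper_{i}" for i in range(1, 51)]
sampled = sample_papers(papers, sample_size=10)
# In practice, trained raters score each sampled paper with the rubric;
# illustrative placeholder ratings are used here.
scores = {p: {d: 3 for d in RUBRIC_DIMENSIONS} for p in sampled}
print(summarize_scores(scores))
```

The per-dimension averages indicate the extent to which students achieved each aspect of the outcome, which is the evidence a stand-alone grade cannot provide.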
Indirect Evidence of Student Learning
In addition to the sources of direct evidence, there are also other types of data that indirectly provide evidence of student learning. While data of this nature can be useful, it is important to note that direct evidence is needed to fully assess student learning outcomes.
Examples of indirect assessments include:
- Student participation rates
- Student, alumni, and employer satisfaction with learning
- Student and alumni perceptions of learning
- Retention and graduation rates
- Job placement rates of graduates
- Graduate school acceptance rates
Once methods have been discussed, it can be helpful (and ensure timeliness) to think about the assessment implementation plan for each method:
- What: What specific data do we need to collect?
- Who: Who is responsible for implementing the assessment?
- Whom: From whom are we collecting this data?
- When: When are we collecting this data? (i.e., What is the timeline for data collection?)
- How: How will we collect this data? (i.e., What resources will be used to collect the data?)
- Why: Why are we collecting this data? (i.e., What do we plan to do with it?)
The answers to these questions are often discussed as part of the assessment planning process and may be included in assessment plan documents.
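One way to keep these six questions together in a planning document is to treat each method as a structured record. The sketch below is only an illustration of that idea; the field names mirror the questions above, and the sample entry is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class AssessmentPlanEntry:
    """One row of an assessment implementation plan."""
    what: str   # What specific data do we need to collect?
    who: str    # Who is responsible for implementing the assessment?
    whom: str   # From whom are we collecting this data?
    when: str   # When are we collecting this data (timeline)?
    how: str    # How will we collect this data (resources/instrument)?
    why: str    # Why are we collecting this data (planned use)?

# Hypothetical example entry for an oral communication outcome.
entry = AssessmentPlanEntry(
    what="Rubric scores on oral presentations",
    who="Course instructor",
    whom="Students in the capstone course",
    when="End of spring semester",
    how="Department presentation rubric",
    why="Evaluate the oral communication outcome",
)
print(entry.what)
```

Filling in one such record per assessment method makes gaps in the plan (an unassigned "who," a missing timeline) easy to spot before data collection begins.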
Lisa Hunter, Ph.D., Associate Provost for Curriculum, Assessment, and Academic Support
810 Maytum Hall
State University of New York at Fredonia
Fredonia, NY 14063