RULE 1 - Begin with good questions
Program assessment must always be a process of active inquiry, not passive compliance with external drivers. Too often, I hear people ask, “What do I need to do for assessment?” That is the wrong question. I usually respond, “What do you want to learn?” True assessment begins and ends with good questions. These include things like “How well did my students perform at task X?”, “Was there a performance difference between groups A and B on this task?”, or “Did intervention Q affect student performance on the task?” Questions like these have the advantage of being SMART (specific, measurable, achievable, relevant, and time-bound), and they are more likely to lead to programmatic improvements.
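The second question, for example, can be answered with a straightforward two-sample comparison. Here is a minimal sketch in Python using only the standard library; the group names and rubric scores are invented for illustration, not real assessment data:

```python
from statistics import mean, variance

# Hypothetical rubric scores (0-20) on the same task for two course sections.
group_a = [14, 16, 12, 18, 15, 13, 17, 14]
group_b = [11, 13, 12, 10, 14, 12, 11, 13]

def welch_t(x, y):
    """Welch's two-sample t statistic and approximate degrees of freedom."""
    mx, my = mean(x), mean(y)
    vx, vy = variance(x), variance(y)   # sample variances (n - 1 denominator)
    nx, ny = len(x), len(y)
    se2 = vx / nx + vy / ny             # squared standard error of the difference
    t = (mx - my) / se2 ** 0.5
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = se2 ** 2 / ((vx / nx) ** 2 / (nx - 1) + (vy / ny) ** 2 / (ny - 1))
    return t, df

t, df = welch_t(group_a, group_b)
print(f"t = {t:.2f}, df = {df:.1f}")
```

In practice you would hand the t statistic and degrees of freedom to a statistics package for a p-value (e.g., `scipy.stats.ttest_ind(group_a, group_b, equal_var=False)` does the whole calculation), but the point is that the question itself dictates a concrete, answerable analysis.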
RULE 2 - Measure only what you are willing (or able) to change
Just because you can measure something does not mean that you should! It is tempting to measure what is “easy” rather than what is “important”. However, even the “easy” things can lead to sticky situations. I believe assessment results produce a moral obligation for curricular change and development. If a study finds a difference in performance between two groups, but the finding is never used to address that difference in any way, I have a big problem. What is the value of such an assessment? So I say, count the costs first. Before we ask whether there is a difference between X and Y, we should first ensure that resources are available to remediate any problems that may surface.
RULE 3 - Do not fear “failure”
This is a common stumbling block. I use scare quotes here because there is really never a failure in assessment. The data collected may or may not support our hypotheses; either way, we learn something from the study. Too many people are afraid that the results will make them look bad. We are assessing student learning, NOT instruction. Administrators and other stakeholders need to see that our students demonstrate the knowledge, skills, and abilities we expect. At Ferris, the results of assessment projects are not used to evaluate instructors for retention, promotion, or tenure. However, active participation in the assessment process may support faculty portfolios for those purposes.
RULE 4 - Make the process reproducible
If we are to move the science of learning assessment forward, we must work to make the process reproducible. We can achieve this with the five steps listed below:
Sample authentically—We must select the most reliable and valid measures available to address the question at hand. In most cases, we are assessing students at the course level, so the best measures are embedded course assignments. Faculty members are in the best position to identify the assignments that align with the outcomes. Therefore, it is essential that the faculty be engaged at this stage of any assessment project.
Analyze transparently—If we want others to trust, verify, and extend our work, we need to share. Not just our conclusions: we need to share our assumptions, our data, and our analysis code. FERPA mandates that we anonymize or de-identify our data before we publish it, but that rarely impairs anyone’s ability to computationally reproduce our work. Online repositories like GitHub, BitBucket, or the Open Science Framework (OSF) make sharing code easy (and they are free). Sharing code and data also builds trust with the faculty members who take part in an assessment: they can easily see what data are analyzed and how.
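De-identification before sharing can be as simple as replacing direct identifiers with salted one-way hashes. A minimal Python sketch follows; the record fields, sample values, and salt handling are illustrative assumptions, and a real project would follow the institution’s own FERPA guidance:

```python
import hashlib

# Hypothetical gradebook export for an embedded course assignment.
records = [
    {"student_id": "900123456", "name": "Pat Doe", "score": 17},
    {"student_id": "900654321", "name": "Sam Roe", "score": 12},
]

# The salt must stay secret and must never be published with the data;
# otherwise the hashes could be reversed by hashing every possible ID.
SALT = "replace-with-a-secret-value"

def deidentify(record):
    """Drop direct identifiers, keeping a salted one-way pseudonym."""
    digest = hashlib.sha256((SALT + record["student_id"]).encode()).hexdigest()
    return {"pseudo_id": digest[:12], "score": record["score"]}

shareable = [deidentify(r) for r in records]
print(shareable)
```

Because the same ID always hashes to the same pseudonym, analyses across assignments can still be linked, while the published file carries no names or ID numbers.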
Report publicly—Presenting an assessment report briefly at a single meeting is not sufficient to promote change. Many stakeholders are typically not present at the meeting, and people do not have enough time to thoughtfully engage with a report during a mere twenty-minute presentation. I think we should house all assessment reports online. In the vast majority of cases, this can be an externally facing web page; there are no deep, dark state secrets we need to protect. Rather, if more universities adopted this stance, we could all learn from each other and improve our programs even more quickly. Assessment needs to be seen as a legitimate form of scholarship.
Discuss objectively—Everyone’s voice and opinion are important when discussing assessment results and their proper interpretation. However, many faculty members and other stakeholders are reluctant to speak up during public presentations of assessment data. A more thorough and thoughtful discussion is possible using online discussion forums. We can design web-based reports that have associated comment sections. With this form of reporting, interested parties would have much longer (weeks or months) to study the report, consider the interpretations, and craft a meaningful response. In addition, the forums themselves would be evidence of stakeholder engagement in the assessment process.
Act deliberately—We need to make and document data-informed decisions. Nobody will value the assessment process if it is not an integral part of our decision-making processes. Once again, online reporting and discussion boards can facilitate this behavior. I believe all online assessment reports should have addenda. We can use these to document plans and action steps that follow from the reports. This is an easy way to show how we are “closing the loop”. The addenda would also reinforce the importance of assessment for program improvement.
If we followed these four rules for assessing programs, compliance (whether internally or externally mandated) would never be an issue. Please use the discussion forum below if you have any additional thoughts on the philosophy of assessment. Thanks.