What's the best way to approach testing your online course before its launch? And how can you reframe this crucial step to take it from tedious to totally manageable?
Testing your online course before you launch is a lot like brushing your teeth. It’s tempting to skip it, but that makes things a lot more painful (and expensive) down the road. Testing is an essential step before launching a successful course (or a chunk of the course, if you’re working iteratively). So what’s the best (and least painful) way to approach testing? Focusing on what it’s like to be a student taking the course.
1: Test from the Student Perspective
Testing a course involves going through it in its entirety. This means combing through every lesson and module, looking for missing or misplaced elements. It means confirming that all the components, such as audio, video, and handouts, appear as expected. It involves clicking on all the links to make sure they’re working and confirming navigation to ensure everything flows. Sound tedious? It is. Instead of thinking of testing as a chore, it’s helpful to reframe it with the student’s experience in mind. After all, students are the reason for building the course in the first place.
We want our students to feel engaged, gain knowledge, and feel supported during the learning process. If we fail at these, then our course has failed. Reframing testing as a way to increase the likelihood of student success gives purpose to our actions.
Testing is also more effective when conducted from the perspective of a student taking the course. Unlike the course development team, the student has a fresh set of eyes.
To adopt the student perspective, ask yourself questions such as:
What is confusing?
What is surprising?
Does anything seem inconsistent?
Is it clear what to do next?
Is it clear where to access help if needed?
Is it clear what is expected?
Does anything detract from learning the material?
Reflecting on the student experience as you navigate through the course will help uncover any issues that were previously overlooked.
2: Build a Detailed List of Issues that Affect Students
As you comb through the course, take detailed notes. Each time you encounter an issue, or “bug”, record the following:
Date and time the issue occurred
Where the issue occurred
What sequence of steps was followed to trigger the issue
What outcome was expected
What outcome was observed
Be as specific as possible in each bug report and include screenshots when visuals will help to document the issue.
Assign each bug a severity level: Low, Medium, High, or Critical. The severity level is a reflection of how much the student will be impacted by the bug. If the bug prevents the student from completing the course or causes confusion or frustration, its severity is higher. If the bug is less likely to be noticed or its impact is minimal, the severity is lower.
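If you prefer to keep your testing log in a spreadsheet or a small script rather than loose notes, the fields above map neatly onto a simple record. Here is a minimal sketch in Python; the field names and the example entry are illustrative, not prescriptive:

```python
from dataclasses import dataclass, field

@dataclass
class BugReport:
    """One testing-log entry, mirroring the fields listed above."""
    when: str                 # date and time the issue occurred
    where: str                # where in the course the issue occurred
    steps: list[str]          # sequence of steps followed to trigger the issue
    expected: str             # outcome that was expected
    observed: str             # outcome that was observed
    severity: str             # "Low", "Medium", "High", or "Critical"
    screenshots: list[str] = field(default_factory=list)  # optional visuals

# Hypothetical example entry
bug = BugReport(
    when="2024-05-01 10:32",
    where="Module 2, Lesson 3",
    steps=["Open the lesson", "Click the 'Download handout' link"],
    expected="Handout PDF opens",
    observed="Link returns a 404 error",
    severity="High",
)
```

Keeping every report in the same shape is what makes the next step, sorting by severity and impact, quick instead of painful.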
If you follow these best practices, your future self and other members of the course development team will thank you. A thorough bug report makes it easier to find and replicate the issue.
When you finish this process, you may find yourself staring at a long list of bugs. If so, don’t panic. Reflecting on our students will guide us once more.
3: Prioritize Bugs Based on their Impact on Students
It’s now time to decide which bugs to address and how quickly. Sorting bugs into priority categories takes practice. It’s tempting to fix every issue that has been identified! If you have unlimited time and an unlimited budget, go for it! More likely, you need to wrap up the course soon and are eager to launch. That means evaluating where to focus your efforts to maximize the return for students.
Go through the list of bugs and reflect on how much improvement is gained compared to the effort of fixing each bug. Then sort the bugs into three priority buckets. Fix now. Fix later. Fix never.
The “Fix Now” bugs are ones that are Critical or High in severity. These are bugs that students are likely to encounter and will have the most negative impact on the student experience. Ultimately, they’re the ones that will cause students to question the quality of the course and will damage its reputation. The risk of not fixing these bugs is immense.
Bugs to “Fix Later” are ones that are Medium in severity—they’re important, but still less critical than the Fix Now category. They should indeed be addressed but there is less harm in waiting to fix them. These can wait until the next update or until there’s a batch of bugs that makes sense to address together. Fixing bugs in this category gives the course a more polished feel.
Bugs in the “Fix Never” category are likely Low in severity. They’re unlikely to be noticed, or if they are, their overall impact on student learning is minimal. They may reflect an opinion instead of incorrect functioning. They may also require extensive effort to fix while providing little improvement or benefit to the overall course.
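Because each bucket follows directly from the severity level, the sorting rule above can be written down as a tiny function. This is a sketch, not the only way to triage; the example bug titles are made up:

```python
# Severity levels from the testing log: "Low", "Medium", "High", "Critical".
def triage(severity: str) -> str:
    """Sort a bug into one of the three priority buckets described above."""
    if severity in ("Critical", "High"):
        return "Fix Now"      # likely to damage the student experience
    if severity == "Medium":
        return "Fix Later"    # important, but can wait for the next update
    return "Fix Never"        # minimal impact; effort rarely worth it

# Hypothetical bug list
bugs = [
    ("Broken quiz link in Module 1", "Critical"),
    ("Typo in a handout footer", "Low"),
    ("Video caption lags behind the audio", "Medium"),
]
for title, severity in bugs:
    print(f"{triage(severity):9} | {title}")
```

A rule this simple is easy to revisit later: if student feedback shows a “Fix Never” bug causing real confusion, you bump its severity and the bucket changes with it.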
Note that the assigned bucket for each bug may change over time. If you’ve allocated a bug to Fix Never but then hear from students that it is a source of confusion, you may decide to reevaluate its priority. Being open to this sort of feedback is beneficial as it helps build your understanding of what students care about. If you get dozens of complaints about a particular issue, it probably needs to be addressed.
One of the biggest adjustments for me when I first started building online courses was learning how to decide what to let go of. When I put myself in the shoes of a student, it becomes much clearer what matters. Testing connects us to our students and lets us preview, and ultimately improve, their experience. When done well, it provides a clear roadmap for how to wrap up the course-building process—and increases the likelihood of a smooth launch and successful course!