How will I know if students have achieved the desired results? What will I accept as evidence of student understanding and proficiency?
In my line of work, which deals with remotely delivered, self-paced, simulation-based cybersecurity training, this stage is arguably the most difficult. Our simulations are authentic computers running in the cloud, incorporating a wide variety of software, and it can be immensely difficult to track learner activity in these environments. We often settle for a "flag" (a unique value that the learner would only encounter by correctly completing all of the required steps) or some other piece of information that the learner would only know if they had completed the task. These methods are serviceable, but as a learning designer, I often feel like we are settling for proxies for the genuine evidence of proficiency that we actually want to track and measure against.
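For concreteness, here is a minimal sketch of what that flag approach can look like. This is an illustrative assumption, not a description of our actual platform: the function names, the HMAC-based flag derivation, and the `SECRET_KEY` are all hypothetical. The idea is simply that each learner/lab pair gets its own flag, planted where only a learner who completed the steps would find it, and the submission is checked server-side.

```python
import hmac
import hashlib

# Hypothetical server-side secret; in practice this would come from a
# secrets manager, not source code.
SECRET_KEY = b"replace-with-a-real-server-side-secret"


def derive_flag(learner_id: str, lab_id: str) -> str:
    """Derive a per-learner, per-lab flag. This value is planted in the
    simulation environment so the learner only encounters it after
    completing the required steps."""
    digest = hmac.new(SECRET_KEY, f"{learner_id}:{lab_id}".encode(), hashlib.sha256)
    return f"FLAG{{{digest.hexdigest()[:24]}}}"


def verify_submission(learner_id: str, lab_id: str, submitted: str) -> bool:
    """Check a submitted flag against the expected value, using a
    constant-time comparison."""
    expected = derive_flag(learner_id, lab_id)
    return hmac.compare_digest(expected, submitted)


if __name__ == "__main__":
    flag = derive_flag("learner-42", "lab-linux-priv-esc")
    print("Planted flag:", flag)
    print("Valid submission?", verify_submission("learner-42", "lab-linux-priv-esc", flag))
    print("Wrong flag accepted?", verify_submission("learner-42", "lab-linux-priv-esc", "FLAG{nope}"))
```

Even in this sketch, the limitation is visible: the check confirms that the learner surfaced the right string, not how they got there, which is exactly the proxy problem described above.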