I was asked a really interesting question this morning. I’ll quote it verbatim:
how does assessment in the game works—or better yet, you were explaining what’s special about the assessment in the game, what would you tell the layman (i.e., someone who isn’t involved in serious gaming or academics or computer science or engineering)—that is, someone in small business.
For someone who has been leading the assessment effort at DISTIL for almost two years, this was surprisingly difficult to answer. It took me forever to understand what serious games are, then forever to make the link between learning and gaming, then to understand the requirement for assessment, and the distinction between learning-focused assessment and certification-focused assessment. Then came the phase of trying out different techniques to see which ones worked. For the record, I've implemented everything from asking the students their opinions, to sophisticated machine learning that can build models of the types of biases and 'plans' that the user employs during their problem solving.
As always, the correct level of engineering lies somewhere between the trivially simple and the unreasonably complex. In the last game I worked on, I employed the analytics capabilities that underlie the data-preparation phase preceding the application of machine learning classification models, without going so far as to build the actual statistical classifiers.
So what was the answer to the question? Well, here it is, in the required two sentences:
The assessment in the learning simulation employs sophisticated analytics that generate learning reports based on the actions that the student chooses to take in-game. This is a much more relevant criterion for assessment feedback than the traditional alternatives of asking the student their opinion (via smile sheets) or triggering canned reports based on the final outcome of the simulation session (which hide details on the efficacy of the choices made in-between).
The technology itself is based on modern analytics and event-identification methodology — the same insight-generating tools that have transformed the field of marketing and catapulted modern corporate giants (such as Google and Amazon.com) to great success by effectively enabling them to 'read the mind' of their customers based on their actions and thus serve them better.
So what exactly are analytics, and why have they allowed firms like Amazon and Google to prosper? An analogy is quite useful here, once again from the field of marketing. In the days when dinosaurs roamed the Earth, marketing efforts consisted of purchasing media space, whether it was a printed advert in the papers, a timeslot on TV, or a billboard on a hill near a major road. The hallmark of all three techniques was that you never actually knew whether the advertisement was seen, and even if you could approximate the number of exposures (the instances when the advertisement could potentially have been seen by a human nearby), you never knew the actual conversion rate (the instances when the person seeing the advertisement took the desired action, i.e. bought your product or gave your sales team a call).
Nowadays, on the web (and in other rich media), not only can you specifically measure every instance of your advertisement being seen (i.e. the exposure), but you can track every conversion event that takes place (i.e. the user coming to your page, interacting with your shopping cart, checking out, and making the final payment). You can also calculate the conversion ratio for each step, and employ closed-loop feedback to judge the impact of changes that you make to your advertisements, to your content and layout, and to the online sales process. All changes initiated by you will affect user behaviour and shift the key conversion and sales statistics.
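To make the idea concrete, here is a minimal sketch of computing per-step conversion ratios for such a funnel. The event names and counts are hypothetical, chosen purely for illustration:

```python
# Hypothetical online sales funnel: each step records how many users
# reached it, from ad exposure through to final payment.
funnel = [
    ("ad_seen", 10000),          # exposures
    ("page_visited", 800),
    ("cart_interaction", 240),
    ("checkout_started", 120),
    ("payment_completed", 90),
]

def conversion_ratios(steps):
    """Return the step-to-step conversion ratio for each adjacent pair."""
    ratios = {}
    for (prev_name, prev_count), (name, count) in zip(steps, steps[1:]):
        ratios[f"{prev_name} -> {name}"] = count / prev_count
    return ratios

for transition, ratio in conversion_ratios(funnel).items():
    print(f"{transition}: {ratio:.1%}")
```

Running the same calculation before and after a change to, say, the checkout page is exactly the closed-loop feedback described above: a shift in one ratio tells you which step your change affected.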
Surprisingly enough, the same techniques work remarkably well for judging the competency level of individuals in learning games. As long as you can identify the key events (akin to conversion events in online sales), you have useful checkpoints by which to judge the student's competency. Best of all, this can all be done without the active intervention of an instructor, which really boosts the relevance and applicability of eLearning and distance-learning platforms and frees the instructor from the day-to-day operational drudgery of assessment to focus on crafting better content and better assessments. Perhaps this was the reason that DISTIL received the best product award at DevLearn '08, the premier platform for judging the leading innovations in eLearning in North America.
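The checkpoint idea can be sketched in a few lines. Everything here — the event names, the checkpoints, and the weights — is a hypothetical illustration of the technique, not the actual scoring model used in the game:

```python
# Hypothetical sketch: judging competency from key in-game events,
# analogous to tracking conversion events in online sales.
# Checkpoint names and weights are illustrative assumptions.
CHECKPOINTS = {
    "consulted_stakeholders": 2.0,   # chose to gather input before acting
    "identified_root_cause": 3.0,
    "escalated_appropriately": 1.5,
    "rushed_decision": -2.0,         # negative weight for a poor choice
}

def score_session(event_log):
    """Sum checkpoint weights for each key event the student triggered."""
    score = 0.0
    hits = []
    for event in event_log:
        if event in CHECKPOINTS:
            score += CHECKPOINTS[event]
            hits.append(event)
    return score, hits

# Example session: the student's in-game choices, recorded as events.
# Unrecognised events ("opened_briefing") are simply ignored.
session = ["opened_briefing", "consulted_stakeholders",
           "identified_root_cause", "rushed_decision"]
score, hits = score_session(session)
```

The point of the sketch is that the report is driven by the choices the student actually made in-game, with no instructor in the loop — the instructor's effort goes into designing the checkpoints and weights, not into marking.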
This innovative form of assessment that we developed addressed a key pain point for eLearning instructors and instructional designers. Anyone who has taught courses, in academia or in industry, knows about the days wasted marking exams. For them, this automatic assessment is a welcome relief. It is also far less threatening for the student, and does not provoke the level of anxiety that formal exams elicit. Best of all, it directly supports criterion-referenced assessment, without devolving into a norm-based examination (as many other exams do), because the actual choices are tracked rather than the way those choices are expressed.