How to Design Mobile Game-Based Learning (#GBL, #mlearning) – Part II
This question has been asked for a long time: how can we make learning as engaging as playing games? If it were possible, we could expect higher and richer learning outcomes. As mobile learning becomes the new focus of education, we are still asking this question.
“Too often instructional design is about the content and not about the actions they need to occur. Game design is about action.” — Karl Kapp
Karl shared the slides and some resources from his presentation — “Thinking like a Game Designer” — at DevLearn 2013 here.
Thinking like a game designer
Here are the five elements of thinking like a game designer.
Give Learners Choices. Also related is to Create Multiple Levels of Entry into Your Instruction.
Game design is about action
(F : feature phone; S : smartphone; T : tablet)
- Send texts (F, S)
- Make calls (F, S)
- Take photos (F, S, T)
- Listen to music (F, S, T)
- Read books (F, S, T)
- Social networking (F, S, T)
- Web searches (F, S, T)
- Web browsing (F, S, T)
- Send MMS (S, T)
- Video calls (S, T)
- Record videos (S, T)
- Record audio (S, T)
- Watch online videos (S, T)
- Edit photos and videos (S, T)
- Edit documents (S, T)
- Use maps (S, T)
- Shop online (S, T)
- Install apps (S, T)
- Use geo-location positioning (S, T)
- Push notifications (S, T)
- Tagging and Scanning (S, T)
From a designer’s point of view, game mechanics can be built on the capabilities provided by a toolkit. Take ARLearn as an example (mentioned in this post; a paper with more details can be downloaded here). The toolkit facilitates game design through the following means.
Game design facilitator
Media artifacts definition
ARLearn implements a simple data model that enables the definition of several kinds of media artifacts, including multiple choice messages, video messages, and audio messages. Media artifacts are bound to a context that can be defined by a location and/or a timestamp. The context defines where or when in the game messages have to appear.
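As an illustration only (the class and field names here are my own, not ARLearn's actual data model), the idea of media artifacts bound to a location and/or timestamp context can be sketched in Python:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Context:
    """Where or when a message should appear: a location, a time, or both."""
    latitude: Optional[float] = None
    longitude: Optional[float] = None
    timestamp: Optional[float] = None  # seconds since game start

@dataclass
class MediaArtifact:
    """A game message bound to a context."""
    kind: str          # e.g. "multiple_choice", "video", "audio"
    content_url: str
    context: Context

# Example: an audio message that appears five minutes into the game
briefing = MediaArtifact(
    kind="audio",
    content_url="https://example.org/briefing.mp3",
    context=Context(timestamp=300.0),
)
```

Binding the context to the artifact, rather than hard-coding it into the game flow, is what lets the same message model serve both time-framed and location-based designs.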
Furthermore, a flexible dependency mechanism enables the author to define the game logic. Through dependencies, the author specifies conditions for making media artifacts appear or disappear. ARLearn is currently equipped with three kinds of dependencies:
- An action-based dependency refers to a game action.
- A time-based dependency specifies a time relative to another dependency.
- Combined dependencies (AND/OR) combine two or more dependencies. A game author can thus use expressions to specify multiple conditions that need to be fulfilled.
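The three dependency kinds can be sketched as follows. This is a hypothetical Python model of the idea, not ARLearn's implementation; every class and method name is an assumption. Each dependency reports the moment it became satisfied, which lets a time-based dependency anchor itself to another one:

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class ActionDependency:
    """Satisfied at the moment the player performed the action."""
    action: str
    def satisfied_at(self, log: Dict[str, float]) -> Optional[float]:
        return log.get(self.action)  # timestamp of the action, or None

@dataclass
class TimeDependency:
    """Satisfied a fixed offset (seconds) after another dependency."""
    base: object
    offset: float
    def satisfied_at(self, log: Dict[str, float]) -> Optional[float]:
        t = self.base.satisfied_at(log)
        return None if t is None else t + self.offset

@dataclass
class CombinedDependency:
    """AND: satisfied once all parts are; OR: once any part is."""
    op: str  # "AND" or "OR"
    parts: List[object]
    def satisfied_at(self, log: Dict[str, float]) -> Optional[float]:
        times = [p.satisfied_at(log) for p in self.parts]
        if self.op == "AND":
            return None if None in times else max(times)
        hits = [t for t in times if t is not None]
        return min(hits) if hits else None

# A hint appears 60 s after the player has BOTH scanned the entrance
# tag AND answered question 1:
rule = TimeDependency(
    base=CombinedDependency("AND", [ActionDependency("scan_entrance"),
                                    ActionDependency("answer_q1")]),
    offset=60.0,
)
log = {"scan_entrance": 10.0, "answer_q1": 45.0}
print(rule.satisfied_at(log))  # -> 105.0 (45.0 + 60.0)
```

Because combined dependencies nest, an author can compose arbitrary expressions over game actions without extending the engine.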
Notifications are a central element of ARLearn, both technically and conceptually. They facilitate the communication needed for different roles and contexts throughout the game.
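One minimal way to picture such role-aware communication is a publish/subscribe bus; this is a generic sketch, not ARLearn's actual notification mechanism:

```python
from collections import defaultdict
from typing import Callable, DefaultDict, List

class NotificationBus:
    """Routes game messages to subscribers by role (e.g. player, teacher)."""
    def __init__(self) -> None:
        self.subscribers: DefaultDict[str, List[Callable[[str], None]]] = defaultdict(list)

    def subscribe(self, role: str, callback: Callable[[str], None]) -> None:
        self.subscribers[role].append(callback)

    def publish(self, role: str, message: str) -> None:
        for cb in self.subscribers[role]:
            cb(message)

bus = NotificationBus()
received: List[str] = []
bus.subscribe("player", received.append)
bus.publish("player", "New clue unlocked near the fountain")
bus.publish("teacher", "Team A reached checkpoint 2")  # no player sees this
```

Keeping the routing role-based means the same game event can surface differently to players, facilitators, and observers.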
Tagging and Scanning
Allowing the user to scan a tag to reveal his location puts the player in control. The player actively decides to inform the system that he entered a room, and gets direct feedback that this action was registered.
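A toy sketch of this player-initiated location update (all names hypothetical, not ARLearn's API): the player scans a tag, the game state records the room, and the return value is the direct feedback the player sees.

```python
from typing import Dict

# Known tags and the rooms they mark (illustrative values)
ROOM_TAGS = {"qr:lab-204": "Chemistry Lab", "qr:hall-1": "Main Hall"}

def scan_tag(tag_id: str, game_state: Dict[str, str]) -> str:
    """Player-driven location update with immediate feedback."""
    room = ROOM_TAGS.get(tag_id)
    if room is None:
        return "Unknown tag - nothing happens."
    game_state["location"] = room
    return f"Registered: you entered the {room}."

state: Dict[str, str] = {}
print(scan_tag("qr:lab-204", state))  # -> Registered: you entered the Chemistry Lab.
```

The key design point from the paragraph above survives even in this sketch: the system never guesses the player's position; the player asserts it and is told the assertion was accepted.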
One of the ongoing developments in ARLearn is the integration of displays into a serious game, which can add many features to gameplay, including content display, ambient information, and a classroom response system.
This example demonstrates a fully-fledged mixed reality application platform, including support for field trips, serious gaming, augmented virtuality, and notification systems. Across the mixed reality spectrum, the essential elements – the story and the interactions with learners – remain present.
In most cases, however, mobile learning uses only some of these capabilities. Some learning designs are laid out on time frames; others are location-based. Most mobile learning games take place in a virtual environment only, yet even simple graphics and text can make a good game. You can find diagnostic tools in this “Mobile Learning Decision Path“.
In the next post, we’ll look into augmented reality.