This evaluation test provides a simple procedure for assessing the level of the trainees' production competences (didactic axis). The test allows each trainee to be evaluated individually before and after the training, and their progression during the training to be assessed by comparing the two tests. The test is based on the idea that the more competent the trainees are, the better they are able to describe a pedagogical sequence using appropriate terms and linking them together.
The trainees have to conceive (describe) a pedagogical sequence on a given topic. The more they are able to design a coherent sequence mobilizing objectives, relevant activities and relevant resources, the more competent they are considered.
The topic of the pedagogical sequence, as well as the targeted audience, is set by the trainer. Other elements may be specified by the trainer or left to the trainees' own judgement (see Methods below).
Method 1 – Free page for responding: “Describe a pedagogical sequence about [topic] addressed to [audience] in [extra relevant framing elements].” This method is especially interesting with presumed high-level trainees (for example, in-service teachers) or for highlighting the spontaneous approach they adopt. It is less interesting with presumed low-level trainees (for example, in-training future teachers) because they may not see what is expected of them.
Method 2 – Responding to formatted questions: “Describe your pedagogical sequence following the different items: (i) objectives, (ii) activities, (iii) resources, (iv) timing (etc.)” This method is especially interesting with presumed low-level trainees (for example, in-training future teachers). It is less interesting with presumed high-level trainees (for example, experienced in-service teachers) because the guided answers may not be sufficiently discriminating (all high-level trainees will reach the maximal level, making it impossible to highlight progression during the training).
Method 3 – Mixed approach: first use a free page for responding (Method 1 above), then ask more formatted questions (Method 2 above). This mixed method is especially interesting when you have no indication of the trainees' level. It is also highly discriminating between low- and high-level trainees. The main drawback is that answering the questionnaire takes longer, and the repetition between open and specific questions may disturb some trainees.
Method 4 – Mindmapping. The sequence is presented in a mindmap showing the alignment between objectives, tasks, resources, evaluation, and so on. The mindmap may be created with “traditional” tools (paper and pencil) or with specialized digital tools (like Mapmind, XMind, etc.). Using specialized software may be intimidating for some trainees unfamiliar with digital tools, but it makes sharing the mindmap on the Platform easier.
Each answer is individually coded using the following scale:
| Score | Indicators | Interpretation |
|---|---|---|
| 0 | The trainee is unable to describe a pedagogical sequence. His/her answer is incoherent and/or focuses on the content with no concern for methods and means (“I will say that”). | The trainee has no production (didactic) competence. |
| 1 | The trainee is able to describe a basic pedagogical sequence. His/her description contains a few keywords indicating that he/she distinguishes between content, objectives and method (“objective”, “method”, etc.). | The trainee has basic production (didactic) competences. |
| 2 | The trainee is able to describe a coherent pedagogical sequence. His/her description contains most keywords indicating that he/she masters the whole process: objectives, tasks, resources, evaluation, timing, etc. | The trainee has good production (didactic) competences. |
| 3 | The trainee is able to describe a coherent pedagogical sequence and to justify each element. His/her description contains all keywords indicating that he/she masters the whole process (objectives, tasks, resources, evaluation, timing, etc.) and makes links between these elements showing their coherence/alignment. | The trainee has excellent production (didactic) competences and may be viewed as an “expert” in these competences. |
The progression is evaluated by comparing the scores before and after the training. You should expect the score (individually and/or globally) to be higher at the end of the training than at the beginning.
It is recommended that the same evaluator score both the pre-training and post-training tests, to ensure that the interpretation of the scoring indicators remains consistent.
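The pre/post comparison described above can be sketched as a short script. This is a minimal illustration only; the trainee identifiers and scores are hypothetical, and the computation simply takes the difference between the two test scores on the 0–3 scale.

```python
# Minimal sketch of the pre/post progression comparison.
# Trainee identifiers and scores (0-3 rubric scale) are hypothetical examples.

pre_scores = {"trainee_A": 1, "trainee_B": 0, "trainee_C": 2}
post_scores = {"trainee_A": 2, "trainee_B": 2, "trainee_C": 3}

# Individual progression: post-training score minus pre-training score.
progression = {name: post_scores[name] - pre_scores[name] for name in pre_scores}

# Global progression: change in the group's mean score.
mean_pre = sum(pre_scores.values()) / len(pre_scores)
mean_post = sum(post_scores.values()) / len(post_scores)

print(progression)                     # per-trainee gains
print(round(mean_post - mean_pre, 2))  # overall gain for the group
```

A positive gain for each trainee (and for the group mean) is what you should expect after the training; a null or negative gain flags trainees, or a cohort, for whom the training did not produce measurable progression.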
Examples in eMEL
- Belgium – Mediatized images in context > Method 3
- Finland – TS2 Transcultural competences in media education > Method 1
- Italy – TS1 Digital storytelling as self-presentation and social/civic agent (didactic test)
- France – TS2 Images of Science in the Media (didactic test)