Prototype 1: Analysing competences – Media Literacy by systematic analysis

Overview

This evaluation test provides a simple procedure for assessing the level of the trainees' analysis competences (media literacy competences). The test allows you to evaluate each trainee individually before and after the training, and to assess their progression during the training by comparing the two tests. The test is based on the idea that the more competent the trainees are, the more they are able to analyse media objects spontaneously.

Task

The trainees have to analyse one (or several) media object(s). The more they are able to do so along varied and relevant dimensions, the more competent they are considered to be.

Simply provide the trainees with the media object(s) to be analysed.

Method 1 – Free analysis in an open questionnaire: “Analyse these media objects following several dimensions.” This method is especially interesting with presumed high-level trainees, or to highlight the spontaneous approach the trainees adopt. It is less suitable with presumed low-level trainees, because they may not see what is expected of them.

Method 2 – Structured analysis guided by detailed questions: “Analyse these media objects according to the following criteria: (i) the author of the document, (ii) the targeted audience, (iii) the way the message was fabricated, (iv) the effect on the receivers (etc.)”

This method is especially interesting with presumed low-level trainees. It is less suitable with presumed high-level trainees, because the guided answers may not be sufficiently discriminating (all high-level trainees may reach the maximal level, making it impossible to highlight any progression during the training).

Method 3 – Mixed approach: first ask a general open question (Method 1 above), then ask more specific and detailed questions (Method 2 above). This mixed method is especially useful when you have no indication of the trainees’ level. It also discriminates well between low- and high-level trainees. The main drawback is that answering the questionnaire takes longer, and the repetition between open and specific questions may confuse some trainees.

Interpretation

Each answer is individually coded with the following scale:

Level 0 – Criteria: the trainee is unable to analyse the media objects on any of the considered dimensions (or does so only with irrelevant dimensions: “I like it”, etc.). Interpretation: the trainee has no analysis competence.

Level 1 – Criteria: the trainee is able to analyse the media objects using only one relevant dimension (for example: targeted audience). Interpretation: the trainee has basic analysis competences.

Level 2 – Criteria: the trainee is able to analyse the media objects using several relevant dimensions (for example: targeted audience and sender’s intentions). Interpretation: the trainee has good analysis competences.

Level 3 – Criteria: the trainee is able to analyse the media objects using several relevant dimensions that are justified and organised in a coherent manner, and is able to make links between the considered dimensions (for example: targeted audience and sender’s intention, pointing out that the audience is chosen according to those intentions). Interpretation: the trainee has excellent analysis competences and may be viewed as an “expert” in these competences.


The progression is evaluated by comparing the scores before and after the training. You should expect the final score to be higher (individually and/or globally) at the end of the training than at the beginning.
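As an illustration only, the sketch below shows one possible way to tabulate individual and group progression once each answer has been coded on the 0–3 scale. The trainee names, scores, and the use of Python are assumptions made for this example; they are not part of the eMEL procedure.

```python
# Illustrative sketch only: recording 0-3 codings and computing progression
# per trainee and for the whole group. All names and values are hypothetical.
from statistics import mean

# Hypothetical pre- and post-training scores, coded with the 0-3 scale above.
pre_scores = {"trainee_A": 1, "trainee_B": 0, "trainee_C": 2}
post_scores = {"trainee_A": 2, "trainee_B": 2, "trainee_C": 3}

# Individual progression: post-training level minus pre-training level.
progression = {name: post_scores[name] - pre_scores[name] for name in pre_scores}

# Global progression: difference between the group averages.
global_progression = mean(post_scores.values()) - mean(pre_scores.values())

for name, delta in progression.items():
    print(f"{name}: {pre_scores[name]} -> {post_scores[name]} (progression {delta:+d})")
print(f"Group average progression: {global_progression:+.2f}")
```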

Extra comments

It is recommended that the media objects used in the pre-training and post-training tests be equivalent in difficulty. Do not use easier objects at the end of the training: this would distort the results.

It is recommended that the same evaluator do the scoring in the pre-training and post-training tests, to ensure that the interpretation of the scoring indicators remains constant.

Examples in eMEL