This evaluation test provides you with a simple procedure to evaluate trainees' level of competence in analysis (media literacy competences). The test allows you to evaluate each trainee individually before and after the training, and to assess progression during the training by comparing the two tests. The test is based on the idea that the more competent the trainees are, the better they are able to compare media objects spontaneously.
The trainees have to compare 2 (or more) media objects according to different criteria. The more they can make a reasoned and nuanced comparison involving a high number of relevant criteria, the more competent they are considered.
Just give your trainees 2 (or more) media objects and ask them to compare them.
Method 1 – Free comparison in an open questionnaire: “Compare these objects according to several criteria that you explain” or “What makes these objects similar/different, and why?” This method is especially interesting with presumably high-level trainees, or to highlight the spontaneous approach they adopt. It is less interesting with presumably low-level trainees, because they may not see what is expected.
Method 2 – Structured comparison guided by detailed questions: “Compare these objects according to the following criteria: (i) the author of the document, (ii) the targeted audience, (iii) the way the message was fabricated, (iv) the effect on the receivers (etc.)”
This method is especially interesting with presumably low-level trainees. It is less interesting with presumably high-level trainees, because the guided answers may not be sufficiently discriminating (all high-level trainees will reach the maximum level, making it impossible to highlight progression during the training).
Method 3 – Mixed approach: first ask a general open question (Method 1 above), then ask more specific and detailed questions (Method 2 above). This mixed method is especially interesting when you have no indication of the trainees' level. It also discriminates well between low- and high-level trainees. The main drawback is that answering the questionnaire takes longer, and the repetition between open and specific questions may disturb some trainees.
Method 4 – Mindmapping: the comparison is presented in a mindmap showing similarities and differences between the considered media objects. The mindmap may be created with “traditional” tools (paper and pencil) or with specialized digital tools (like Mapmind, XMind, etc.). Using specialized software may be intimidating for some trainees unfamiliar with digital tools, but it makes sharing the mindmap on the Platform easier.
Each answer is individually coded with the following scale:
| Score | Observed performance | Interpretation |
|---|---|---|
| 0 | The trainee is unable to compare the media objects, whatever the criteria considered (or can do so only with irrelevant criteria). | The trainee has no analysis competence. |
| 1 | The trainee is able to compare the media objects using only one relevant criterion (for example: targeted audience). | The trainee has basic analysis competences. |
| 2 | The trainee is able to compare the media objects using several relevant criteria (for example: targeted audience and sender's intentions). | The trainee has good analysis competences. |
| 3 | The trainee is able to compare the media objects using several relevant criteria that are justified and organised coherently, and can make links between criteria (for example: targeted audience and sender's intention, highlighting that the audience is chosen according to the sender's intentions). | The trainee has excellent analysis competences and may be viewed as an “expert” in these competences. |
Progression is evaluated by comparing the scores before and after the training. You should expect the final score to be higher (individually and/or globally) at the end of the training than at the beginning.
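As an illustration only, the score comparison described above can be sketched in a few lines of code. The function name, trainee names, and scores below are hypothetical examples, not part of the eMEL method; the only fixed element is the 0–3 scale from the coding table.

```python
# Illustrative sketch: compare pre- and post-training scores on the 0-3 scale.
# All names and data here are hypothetical examples.

def progression(pre_scores, post_scores):
    """Return each trainee's score change and the group's average change."""
    deltas = {name: post_scores[name] - pre_scores[name] for name in pre_scores}
    average = sum(deltas.values()) / len(deltas)
    return deltas, average

pre = {"Ana": 1, "Ben": 0, "Chloe": 2}   # scores before the training
post = {"Ana": 2, "Ben": 2, "Chloe": 3}  # scores after the training

deltas, avg = progression(pre, post)
print(deltas)          # {'Ana': 1, 'Ben': 2, 'Chloe': 1}
print(round(avg, 2))   # 1.33
```

A positive delta for each trainee, and a positive group average, is the expected outcome of an effective training.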
It is recommended that the objects used in the pre-training and post-training tests be of equivalent difficulty. Do not use easier objects at the end of the training: it would distort the results.
It is recommended that the same evaluator do the scoring in the pre-training and post-training tests, to ensure that the interpretation of the scoring indicators remains consistent.
Examples in eMEL
- Italy – TS1 Digital storytelling as self-presentation and social/civic agent (analysis test)
- France – TS2 Images of Science in the Media > close to Method 4 (mindmapping)