General Introduction to the e-MEL Evaluation Toolkit

Digital resources in the form of a ‘toolkit’ are always created in anticipation of certain kinds of user behaviours and needs. The e-MEL project resources are no different, but in every case the evaluation of those resources, and of the outcomes for representative types of users, is both essential and necessarily complex.

Created from a synthesis of media education and media literacy research and teaching experience across five countries, and aimed at a range of pre-service and in-service teachers and teacher educators, the e-MEL project has great importance and potential reach for media education, particularly at a time when it is so badly needed in the world: the era of ‘fake news’ and the massive manipulation of both corporate and social media for political ends.

In the toolkit you will find a range of training scenario evaluations which reflect widely differing needs and aspirations, different groupings and different stages of career. They range from media effects work through to production and beyond, and they lend themselves to both larger cohorts and smaller group scenarios. They present resources and experiences which span the range of digital and print engagement with media: broadcast and social media, film and animation, news and advertising content (and everything in between). Their function is to provide a library of tools and resources which can be taken directly and applied in the designated scenario.

The importance of evaluation for the toolkit resides in the fact that resource creation is an iterative process, particularly in the digital age, the era of ‘dynamic literacies’ and ‘third spaces’. Media education and its corollary discipline, media literacy, exist in the cultural moment, in the lived experience of users. The users of these media, whether consumers, producers or prosumers, occupy the same space as the teachers, teacher educators and the wider communities of practice among the learners; in fact they experience and live with media together, undifferentiated at the point of reception or production. The ways in which media are interpreted or apprehended across the different domains, however, depend on the Discourse in that setting, incorporating the nature of media in the curriculum, the place of education in relation to it, and the performative nature of the structures around it. To a very large extent they also depend on the experiences of the end user: their confidence, their previous experience with media and more.

As a result, the training scenarios in e-MEL have been thoroughly tested and mapped against the full range of need and experience, tracked by team members and synthesised into a report on their effectiveness across that range. They have been measured according to pre-test and post-test scores and qualitative statements of observation and recording, underpinned by expert discussion and analysis. No single training scenario in a given topic will provide the full answer to an issue of evaluation in media education or media literacy without a full understanding of context. For this reason the scenarios are all described in detail and referred to in the evaluation report.

It should be obvious, but it is also worth noting, that the scenarios exist within a learning environment based on a common, open-source platform, Moodle. So, in many ways, the scenarios are also tailored to the environment in which they sit. The pull that the virtual exerts over the material in such a platform is familiar to anyone who has learned online, but we must be aware that among the users of e-MEL are people for whom e-learning is itself new and represents a challenge. We are all still learning about the effects, or otherwise, of the platform on the context and content of the learning about media. The iterative process extends to the design of the environment, and all opinions are welcome.

From the literature on evaluation in e-learning we learn the importance of understanding what it means to use a medium to learn concepts, to practise skills and to exhibit certain dispositions. We must also be aware that these resources are available anytime, anywhere and on many devices, including mobile ones: tablets, laptops and phones. This changing, mobile nature of access is reflected in the way the e-MEL tasks are set up to be adaptable and flexible across platforms and contexts.

Finally, you could ask: why evaluate at all when there are so many variables involved? The answer is that each iteration contributes to the development of the whole resource. Detailed evaluation of the kind carried out by e-MEL is important because it creates the conditions for wider usability. The toolkit model is the best suited for this because, as Grainne Conole and Martin Oliver have written, toolkits are predicated on the assumption that they will be:

  • derived from an explicit theoretical framework;
  • easy-to-use for practitioners;
  • able to provide demonstrable benefit;
  • able to provide guidance, without being prescriptive;
  • adaptable to reflect the user’s practice and beliefs;
  • able to produce outputs that reflect the local context.
    (Conole and Oliver, 2002).[1]

These are the features, aims and drivers of the toolkit devised and developed by the e-MEL team. Feedback of the kind described above is most welcome as you explore the various resources, with the overarching aim of developing and enhancing media education and media literacy in the digital age, with all its many challenges.

[1] Conole, G. and Oliver, M. (2002). Embedding Theory into Learning Technology Practice with Toolkits. Journal of Interactive Media in Education, 2002(8). ISSN 1365-893X. www-jime.open.ac.uk/articles/10.5334/2002-8/