These guidelines are intended to provide ready-to-use tools to evaluate the effectiveness of your training and trainee satisfaction. We propose tools addressed to both trainers and trainees.
It is assumed that the trainers have already been appropriately engaged in the project and are familiar with the training scenarios and the e-learning environment. It is also assumed that the trainees have already been identified and that all the organizational aspects (e.g. reserving adequate space for face-to-face activities, or a preliminary check of trainees' ICT access) and institutional aspects (e.g. in a higher education context, official approval of the experimentation by the academic board) of involving them in the experimentation have already been dealt with. It is also taken for granted that, during the first meeting with trainees, the trainers will introduce the training, providing information about its contents, objectives, timeline and the online platform.
Section 1 – Aims and tools
The overall aim of this evaluation is to test the effectiveness of the training to develop media literacy/education competences. Within this overall framework, tools were created to evaluate the following dimensions, comparing the point of view of trainers and trainees:
- Effectiveness of training, e.g. to what extent was the training relevant and effective for the development of media literacy/media education competences? How did trainees self-evaluate their outcomes in terms of competences and products? What were the learning outcomes? What strengths and weaknesses emerged during the learning process?
- Quality of methods/resources/activities and tools of training, e.g. were the resources, activities and tools appropriate? How clear were the instructions? How adequate was the trainers’ support? What changes or improvements should be made?
- Sustainability of training, e.g. to what extent was the training sustainable, particularly in terms of management, workload, time, structure etc.?
- Usability, e.g. to what extent was the online learning environment usable? What were the technical difficulties?
- Satisfaction, e.g. from the trainees’ point of view, which activities were most/least enjoyable? Which activities were most/least interesting? What was the trainees’ perception of the importance of the topics and the consistency of the activities?
- Participation, e.g. to what extent did trainees interact with trainers? To what extent did they interact with each other?
- Transferability of competences, e.g. to what extent could the competences learnt during the experimentation be transferred to and re-used in trainees’ professional contexts?
1.2. Data collection tools
In order to evaluate the dimensions mentioned in the previous paragraph, a series of tools will be used, depending on the phase of the activity:
- a pre-survey, administered at the beginning, to collect general information about trainees, including their expectations and previous experiences with online learning. The survey is anonymous and can be administered online or on paper, depending on your organizational preferences. Completion time is about 15 minutes.
- a logbook that trainers will use during the process to take notes, unit by unit, about advancements, participation, interaction with the e-learning platform and so on. The logbook can be very useful for capturing trainers’ perceptions during the development of the training: it guides trainers in a reflective exercise on their teaching process, which can help them improve their practices.
- a post-survey that trainees answer at the end of the activity to provide a global evaluation of the training. The survey includes both closed and open questions. It is anonymous and can be administered online or on paper, depending on your organizational preferences. Completion time is about 15 minutes.
Further data on trainer–trainee and trainee–trainee interaction can be gathered from the online platform. We suggest considering the following types of data in your analysis:
- Access to resources (e.g. type of resources, number of visits)
- Trainer–trainee interaction (see statistics on chat, forum and direct messages; number of messages, etc.)
- Trainee–trainee interaction (see statistics on chat, forum and direct messages; number of messages, etc.)
- Topics covered in the discussion (e.g. technical issues, questions on the topic, clarification of activities and tasks)
- Typology of interaction (sharing, discussion, group collaboration)
- Participation of trainees (e.g. course completion, time on platform)
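The interaction data listed above can be summarized quite simply once exported from the platform. As a minimal sketch, assuming the platform can export each message as a record with sender role, receiver role and discussion topic (these field names are hypothetical; adapt them to your platform's actual export format):

```python
# Hypothetical sketch: summarizing exported platform interaction data.
# The record fields ("sender_role", "receiver_role", "topic") are
# assumptions, not a real platform API; adjust to your own export.
from collections import Counter

def summarise_interactions(messages):
    """Count messages by interaction type (trainer-trainee vs
    trainee-trainee) and by discussion topic."""
    pair_counts = Counter()   # ("trainee", "trainer") -> number of messages
    topic_counts = Counter()  # "technical issue" -> number of messages
    for msg in messages:
        # Sort the pair so trainer->trainee and trainee->trainer
        # count as the same interaction type.
        pair = tuple(sorted((msg["sender_role"], msg["receiver_role"])))
        pair_counts[pair] += 1
        topic_counts[msg["topic"]] += 1
    return pair_counts, topic_counts

# Example with made-up records:
messages = [
    {"sender_role": "trainer", "receiver_role": "trainee",
     "topic": "clarification of tasks"},
    {"sender_role": "trainee", "receiver_role": "trainee",
     "topic": "technical issue"},
    {"sender_role": "trainee", "receiver_role": "trainer",
     "topic": "question on topic"},
]
pairs, topics = summarise_interactions(messages)
print(pairs[("trainee", "trainer")])  # 2 trainer-trainee messages
print(topics["technical issue"])      # 1 message on technical issues
```

The same pattern extends to the other data types, e.g. counting visits per resource or messages per trainee as a rough participation indicator.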
Table 1 provides an overview of phases and tools.
Table 1. Training evaluation: phases and tools

Phase: Beginning – Tool: Pre-survey
– General information
– Previous online learning experiences

Phase: During – Tool: Logbook
Includes information on the process, focusing on:
– Quality of resources
– Interaction with the platform
– Challenges

Phase: During – Tool: Platform data
– Interaction trainers–trainees
– Trainees’ participation
– Trainees’ interaction
– Access to resources

Phase: End – Tool: Post-survey
– Quality of methods and resources