Using digital tools to solve pedagogical problems
| Site: | Loomen za stručna usavršavanja |
| Course: | Digital Technologies for Communication, Collaboration and Professional Development |
| Book: | Using digital tools to solve pedagogical problems |
| Printed by: | Guest (anonymous user) |
| Date: | Sunday, 22 February 2026, 6:22 PM |
Description
This activity will present topics related to the evaluation and monitoring of the impact of digital technologies.
1. Introduction
The introduction of digital technologies into higher education brings many opportunities, but also challenges. While digital tools can significantly enhance learning, collaboration and communication processes, their actual effectiveness should not be taken for granted. It is necessary to systematically monitor how they are used, to what extent they contribute to the achievement of pedagogical and organizational goals, and what their impact is on students and teachers.
Evaluation and monitoring of the impact of digital technologies are therefore a key component of modern education. They enable teachers and institutions to make evidence-based decisions, improve teaching practices, and ensure that investments in digital tools actually bring the desired results. In this process, it is important to consider different dimensions: from defining clear goals, through the selection of tools and methodologies, to interpreting data and involving all relevant stakeholders in the decision-making process.
2. Defining objectives
Defining objectives is the first and most important step in the process of evaluating the impact of digital technologies. If objectives are not clearly articulated, subsequent efforts may result in unclear findings or misinterpretations. In the context of higher education, objectives are often linked to pedagogical outcomes, teaching quality, student engagement, or the development of specific competencies. For example, an instructor who introduces Moodle as the primary learning management system may aim to increase active student participation in discussions and therefore set the following objective: “Increase the number of student posts in forums by 30% compared to the previous academic year.”
In addition to pedagogical objectives, institutions may also define organizational objectives. A higher education institution (HEI), for instance, may implement a new videoconferencing platform such as BigBlueButton with the goal of reducing licensing costs associated with commercial tools and increasing control over institutional data. In this case, the evaluation would measure not only student engagement but also economic efficiency, technical reliability, and instructor satisfaction.
It is essential to apply the SMART goal framework (Specific, Measurable, Achievable, Relevant, and Time-bound). Rather than setting a broad objective such as “improve students’ digital skills,” an instructor might define a more precise goal: “By the end of the semester, 80% of students enrolled in the Digital Competencies course will successfully use collaborative tools (e.g., Google Docs or Microsoft Teams) to write a joint term paper.”
Additional examples of objectives include increasing the frequency of participation in online discussions, reducing course dropout rates, improving success rates on online quizzes, or decreasing the time instructors spend on administrative tasks. Clearly defined objectives guide the selection of appropriate methodologies and tools, as well as the interpretation of collected data. Without such objectives, the evaluation process loses its purpose and fails to provide meaningful value to educational practice.
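As a simple illustration, a clearly formulated objective can be turned into an explicit check once the relevant data have been collected. The sketch below is a minimal example assuming that forum post counts for two academic years are already available; the function, the numbers, and the 30% target are purely illustrative and not taken from real course data.

```python
# Minimal sketch: checking a measurable objective against collected data.
# The counts and the 30% target are illustrative, not real course data.

def objective_met(previous_count: int, current_count: int, target_increase: float) -> bool:
    """Return True if the relative increase reaches the target (e.g. 0.30 for 30%)."""
    if previous_count == 0:
        return current_count > 0
    increase = (current_count - previous_count) / previous_count
    return increase >= target_increase

# Example: forum posts in the previous vs. the current academic year.
previous_posts = 420
current_posts = 570
print(objective_met(previous_posts, current_posts, target_increase=0.30))  # True (about a 35.7% increase)
```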
3. Data collection tools and platforms
Data collection forms the foundation of high-quality evaluation, and the selection of appropriate tools in higher education is crucial for ensuring the accuracy and reliability of results. Today, nearly every digital system used in universities records data on user interactions. Moodle, for example, serves as a central learning management system that enables the collection of information such as student enrollment numbers, time spent in courses, forum activity, assessment results, and activity completion status. These data can be exported as reports or integrated with learning analytics tools, such as the Moodle Analytics Dashboard.
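To make this concrete, the following sketch shows one possible way to summarize an exported Moodle course log with pandas. The file name and column names ("User full name", "Event name", "Time") are assumptions; the actual columns depend on the Moodle version and the report used, so they should be checked against the real export.

```python
# Sketch of summarizing an exported Moodle activity log with pandas.
# Assumes a CSV export with columns such as "User full name", "Event name"
# and "Time"; actual column names depend on the Moodle version and report used.
import pandas as pd

log = pd.read_csv("moodle_course_log.csv")          # hypothetical export file
log["Time"] = pd.to_datetime(log["Time"], dayfirst=True, errors="coerce")

# Events per student, and forum activity in particular.
events_per_student = log.groupby("User full name").size()
forum_events = log[log["Event name"].str.contains("forum", case=False, na=False)]
forum_posts_per_student = forum_events.groupby("User full name").size()

print(events_per_student.describe())
print(forum_posts_per_student.sort_values(ascending=False).head(10))
```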
In addition to learning management systems, survey tools (e.g., Google Forms, LimeSurvey, or institutional survey platforms) are widely used to gather data on student attitudes and satisfaction. For instance, after completing a course, students may be asked to complete a survey evaluating the quality of video lectures, the clarity of assignment instructions, and the usefulness of the feedback provided.
To monitor interaction in greater detail, tools that analyze data from videoconferencing platforms are increasingly being used. Systems such as Zoom or Microsoft Teams can record metrics including duration of participation, frequency of contributions, and use of features such as chat, polls, or reactions.
Alongside quantitative data, tools for collecting qualitative insights are equally important. Digital discussion forums, blogs, and reflective journals allow instructors to analyze the content of student contributions. Such qualitative analysis can help determine whether students are developing critical thinking, reflection, or argumentation skills.
As a practical example, an instructor at a university may combine Moodle reports, a Google Forms survey, and an analysis of student forum posts to gain a comprehensive understanding of the impact of introducing interactive video lessons. Using multiple tools ensures that the evaluation is balanced and captures diverse aspects of student engagement and learning experience.
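A hedged sketch of such triangulation is shown below: it assumes that a Moodle activity report and the survey responses have been exported separately and share a common student identifier. The file names and columns ("student_id", "forum_posts", "satisfaction_rating") are illustrative, not a prescribed format.

```python
# Sketch of combining two data sources for one evaluation, assuming both
# exports share a student identifier column ("student_id" is illustrative).
import pandas as pd

activity = pd.read_csv("moodle_activity_report.csv")   # e.g. time spent, forum posts
survey = pd.read_csv("course_survey_responses.csv")    # e.g. satisfaction ratings

combined = activity.merge(survey, on="student_id", how="inner")

# Compare reported satisfaction between more and less active students.
median_posts = combined["forum_posts"].median()
combined["active"] = combined["forum_posts"] > median_posts
print(combined.groupby("active")["satisfaction_rating"].mean())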
4. Monitoring and evaluation methodologies
Monitoring and evaluation methodologies in higher education must be adapted to the defined goals and the available resources. A combination of quantitative and qualitative methods is most often used in order to obtain a balanced and reliable picture.
Quantitative methods involve analyzing data from systems such as Moodle LMS, where a teacher can track activity completion rates, average quiz scores, log-in frequency, or the number of assignments submitted. For example, a teacher may notice that students who participate more frequently in online forums perform better on final exams. Such findings allow for conclusions to be drawn about the relationship between engagement and success.
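A minimal sketch of such a quantitative check is given below. It assumes a prepared per-student dataset with one row per student; the file and column names are assumptions, and a correlation of this kind indicates an association rather than a causal effect.

```python
# Sketch of a simple quantitative check: does forum participation relate to exam results?
# Column names are assumptions about a prepared per-student dataset.
import pandas as pd
from scipy.stats import pearsonr

data = pd.read_csv("per_student_summary.csv")   # hypothetical prepared file

r, p_value = pearsonr(data["forum_posts"], data["final_exam_score"])
print(f"Pearson r = {r:.2f}, p = {p_value:.3f}")
# A positive, statistically significant r would support (but not prove) a link
# between forum engagement and exam performance.
```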
Qualitative methods complement this picture. Interviews and focus groups with students can reveal reasons why some do not actively participate or why they prefer certain tools. For example, students may state that forums are not motivating enough, but that interactive tools such as Mentimeter encourage greater involvement. Content analysis of written reflections can further reveal perceptions of the usefulness of digital tools.
One of the modern methodologies is learning analytics, which uses advanced algorithms to predict student behavior. For example, analyzing patterns of logins and course activity can signal students at risk of dropping out of a course, giving teachers the opportunity to intervene in a timely manner.
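The sketch below illustrates the basic idea with a simple logistic regression model. The data file, feature names, and outcome label are assumptions; a real learning-analytics model would require validation, careful interpretation, and attention to data protection and ethical use.

```python
# Illustrative learning-analytics sketch: flagging students at risk of dropping out
# from simple activity features. Feature names and the file are assumptions;
# a real model would need validation and careful, ethical interpretation.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

data = pd.read_csv("student_activity_features.csv")   # hypothetical historical data
features = ["logins_per_week", "forum_posts", "assignments_submitted"]
X, y = data[features], data["dropped_out"]            # 1 = dropped out, 0 = completed

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Estimated probability of dropout; high values suggest early intervention.
at_risk = model.predict_proba(X_test)[:, 1]
print(at_risk[:10])
```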
Example from practice: in one course, a combination of quantitative data from the Moodle LMS and qualitative interviews with students was used to evaluate the effectiveness of newly introduced video materials. The quantitative data showed that students watched shorter video lessons more often, while the qualitative insights explained why: students found them clearer and easier to learn from. Such an integrated approach enables a more precise and meaningful interpretation.
5. Data processing and interpretation
Data processing and interpretation are perhaps the most sensitive stages of the evaluation process, as this is where raw data are transformed into meaningful insights. Digital tools used in higher education can generate vast amounts of data; however, without appropriate processing and interpretation, these data remain of limited value.
Quantitative data are typically processed using statistical tools and methods. For example, an instructor may analyze the relationship between time spent in the Moodle learning management system and students’ final exam performance. If the analysis shows that students who are more active in discussion forums achieve higher grades, this finding may be interpreted as evidence that digital interaction has a positive effect on learning outcomes.
Qualitative data require a different analytical approach. Student reflections posted in blogs or discussion forums, for instance, can be examined through thematic analysis, in which recurring patterns and categories are identified, such as satisfaction with digital tools, challenges in their use, or suggestions for improvement.
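Thematic analysis itself is an interpretive, human activity, but once a coding scheme exists, simple tools can help count how often themes recur. The sketch below is a deliberately simplified, keyword-based illustration; the themes, keywords, and example reflections are invented for demonstration only.

```python
# Very simplified sketch of coding student reflections into themes by keyword matching.
# Real thematic analysis is interpretive; this only illustrates how recurring
# categories might be counted once a coding scheme has been defined.
from collections import Counter

themes = {
    "satisfaction": ["useful", "helpful", "liked"],
    "challenges": ["difficult", "confusing", "problem"],
    "suggestions": ["should", "could", "suggest"],
}

reflections = [
    "The quizzes were useful for checking my progress.",
    "The forum was confusing at first and I had a problem finding threads.",
    "The course could include shorter videos.",
]

counts = Counter()
for text in reflections:
    lower = text.lower()
    for theme, keywords in themes.items():
        if any(word in lower for word in keywords):
            counts[theme] += 1

print(counts)   # e.g. Counter({'satisfaction': 1, 'challenges': 1, 'suggestions': 1})
```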
Interpretation must be conducted carefully and within the appropriate context. For example, a low frequency of logins does not necessarily indicate low engagement, as students may rely on alternative resources or work offline. For this reason, findings should always be considered alongside complementary data sources.
A practical example illustrates this point. In the evaluation of a course that incorporated interactive quizzes in Moodle, quantitative analysis revealed that students who completed the quizzes regularly achieved better results. However, qualitative analysis of student feedback showed that the quizzes also helped learners monitor their own progress, rather than serving solely as exam preparation. This combined interpretation provides instructors with valuable insights for improving course design.
In this way, data processing and interpretation extend beyond the mere presentation of statistics and contribute to a deeper understanding of the broader impact of digital technologies on teaching and learning.
6. Stakeholder engagement and decision-making
When evaluating digital technologies in higher education, it is essential to involve all relevant stakeholders, including students, teachers, administrators, and institutional decision-makers. Only through such inclusive evaluation can the results have practical value and lead to meaningful change.
Students are the primary users of digital technologies, and their experiences should therefore be at the center of the evaluation process. Involving students through surveys, focus groups, or reflective assignments ensures that their perspectives are heard and their needs better understood. For example, students may report that access to recorded lectures via the Moodle learning management system is beneficial, while also expressing a preference for shorter and more clearly structured recordings.
Teachers represent another key stakeholder group. They can provide valuable insights into how digital tools support or hinder teaching practices. If instructors report, for instance, that a platform such as BigBlueButton reduces the time required to organize online discussions, this feedback becomes an important input for institutional decision-making.
Faculty administrators and governing bodies also play a crucial role, as they are responsible for strategic decisions related to funding, infrastructure, and technical support. Evaluation results enable them to determine whether further investment is needed in specific platforms or in professional development for teaching staff.
A case study illustrates the importance of incorporating multiple perspectives. When a university department evaluated the introduction of Microsoft Teams as its primary communication tool, students emphasized that it significantly facilitated group work. Teachers, however, raised concerns about functional overlap with the Moodle LMS, while administrators focused on financial implications and GDPR compliance. By considering all stakeholder perspectives, the department reached a balanced decision: Microsoft Teams was retained as a collaboration tool, while Moodle remained the primary platform for teaching and learning.
7. Conclusion
Evaluation and monitoring of the impact of digital technologies are not one-off activities, but continuous processes that follow the dynamics of higher education. Their purpose is not only to prove that the technology works, but also to ensure that it brings real pedagogical and organizational value. With clearly defined objectives, systematic data collection, the application of combined methodologies and careful interpretation of the results, it is possible to gain relevant insights into the impact of digital tools.
More importantly, involving students, teachers, and administrators in the evaluation process ensures transparency and increases the likelihood of decisions being made that will be accepted and useful. In this way, evaluation becomes a tool for improving the quality of education, rather than a mere formality. Ultimately, only systematic monitoring of impact allows digital technologies to become a truly integrated and meaningful part of academic practice, rather than a passing trend or technical innovation without long-term impact.