To understand the core principles of successful monitoring and evaluation (M&E) of ICT-based education programs, I spoke with Daniel Light, an M&E expert at the Education Development Center (EDC). Light has evaluated EDC and USAID technology-related education programs for around twenty years. As he explains, ICT education programs are only effective to the extent that teachers use ICTs for learning activities and put students at the center. In other words, ICTs cannot add much to education unless teachers use the tools correctly.
Traditional evaluation of education programs focuses on easily quantifiable indicators, such as teacher and student attendance, and student test scores. Though these indicators are important, Light argues that the quality of teaching and learning is not fully captured in these statistics.
Instead, evaluation should draw on what researchers know about education quality, namely teachers’ pedagogical beliefs and practices. Teachers who rely on rote memorization and lectures are generally less effective than teachers who engage students in activities and adapt their lessons to particular students’ needs and interests.
Student-centered pedagogical beliefs are especially important in education programs that include ICTs. For example, computers are most likely to be effective tools when each student has access to one and a teacher directs its use. If students aren’t the ones controlling the mouse, much of the potential knowledge to be gained is lost; they need to direct their own learning.
Many development funders now require randomized controlled trials (RCTs) to evaluate the impact of their programs. Light argues there is a problem with this emphasis: RCTs measure specific behaviors, but education is inherently unpredictable in its outcomes, and technology is no different. Combined, ICT education programs produce many unexpected consequences. Many funders want to secure a particular impact, such as increased mathematics scores, by increasing students’ ICT usage. Light contends that ICT education programs can indeed improve mathematics scores, especially when designed to do so, but they will always have other impacts that cannot be foreseen before the program begins.
A better way to measure the impact of ICT education programs, says Light, involves a series of phases, each lasting about a year. The first phase should be exploratory: seeing what is actually happening in a program compared to what was originally planned. Because many development programs are designed by outsiders, implementations often drift down different pathways over time. After exploring the implementation, evaluators should fine-tune their methods, progressively tightening their measurements through group observations, participant observation, and focus groups. These methods inform the design of interview and survey questions, eventually allowing evaluators to measure particular behaviors in the population under study. RCTs are appropriate at this stage, since researchers will have identified the relevant behaviors through their observations and discussions with participants.
When used effectively, ICTs increase educational achievement and change teachers’ pedagogical beliefs and practices; they can shift teachers’ role from talking heads to activity facilitators. ICT programs can thus highlight the need for pedagogical change among teachers. When those lessons are applied to national education policy, they can bring about curriculum changes that affect all education practices, not just ICT programs.