Professional Development Workshop

Multi-dimensional Evaluation

for Influence and Transformation


The program committee has selected nine workshops for March 4: six half-day and three full-day sessions. They are designed for beginners, intermediate-level participants, or experienced professionals.

⚠ Important: Some workshops will take place at the FAO Headquarters, while others will be held at the IFAD offices.

FAO Headquarters

IFAD Offices

Half day (AM)

AI for Transformative Evaluation: Practical guide on Applications of AI in International Development

Fiona Kastel and Sanchi Lokhande (3ie)

(Place: FAO)

Audience level: Intermediate - general evaluation professionals & users

As artificial intelligence (AI) technology rapidly advances, it is reshaping how we collect, analyze, and interact with information in the international development space. AI tools have the potential to support evaluators as they work to generate insights that drive transformational change. This training provides an essential foundation for those looking to integrate AI in their work, equipping participants with the knowledge and skills to make informed decisions and leverage AI responsibly.

This training module serves as an introductory guide to leveraging AI for evaluation in international development. Participants are introduced to fundamental AI concepts and terminology and receive guidance on when and how to use different AI tools. Concepts covered include the basics of machine learning and generative AI, with examples of their use in various contexts, including geospatial analysis and evidence synthesis. We also provide an overview of existing ML- and generative-AI-based internal assistants and chatbots developed by MDBs and NGOs in the international development space, their use cases, and their differences. We will discuss the rapid evolution of this technology and considerations for effective application, with examples drawn from 3ie’s Development Evidence Portal.

Through an interactive breakout session, participants will have the opportunity to trial some common AI interfaces (e.g., ChatGPT, Claude, Bing) for a selected workflow-enhancing use case: summarization, brainstorming, or coding support. This hands-on component is designed to bridge theory and practice, allowing attendees to gain firsthand experience with AI, see how it can be adapted to meet their unique needs, and examine its limitations.

In addition to introducing tools, this training provides practical guidance on maximizing AI’s utility through effective prompting strategies. Participants will learn how to communicate with generative AI applications to achieve the most accurate and helpful results. 

The promise of AI to enhance workflows and increase evaluation capacity comes alongside certain challenges. This module addresses the critical challenges and limitations associated with AI use in development, including ethical concerns, data privacy issues, and the potential for misinformation. By understanding these constraints, participants will be better equipped to mitigate risks and make decisions that align with development goals and values.

This training is designed to empower participants with the knowledge to navigate the rapidly evolving world of AI and apply these tools thoughtfully and effectively in their work, setting a foundation for more impactful and ethical AI applications in development.

Participants will be able to:


Fiona Kastel leads the Data Innovations Group at 3ie and provides research, program management, and data analytics support for multiple programs and initiatives. She designs and conducts impact evaluations and supports evidence synthesis products, including conducting a geospatial impact evaluation of an agricultural intensification program in Niger and an impact evaluation of the European Bank for Reconstruction and Development’s COVID-19 Solidarity Package. Prior to joining 3ie, Fiona worked on projects studying crime and resilience in Trinidad and Tobago and education and upward mobility in the U.S. She has a Master’s in Public Affairs, specializing in Data Analysis, from Brown University and a Bachelor of Science in Quantitative Analysis of Markets and Organizations, with emphases in Cognitive Science and Finance, from the University of Utah.

Sanchi Lokhande provides research and business development support at 3ie. Prior to joining 3ie, Sanchi worked as a consultant with the Poverty and Equity Global Practice at the World Bank in Washington, DC. Her work involved generating evidence on gender indicators in the Western Balkan countries in order to shape policy recommendations for their European Union accession process. Sanchi has also supported 3ie's Swashakt program and worked as a mental health and gender specialist across several organizations in India. Sanchi holds a Master’s in International Development Policy from Georgetown University and a Master’s in Clinical Psychology from the Tata Institute of Social Sciences, Mumbai.

What do inclusive and participatory approaches to development mean for evaluation?

Isabel Rocha de Siqueira and Beatriz Teixeira (BRICS Policy Center/International Relations Institute, PUC-Rio)

(Place: IFAD)

Audience level: Intermediate - general evaluation professionals & users

This workshop invites participants to engage in a reflective and imaginative exercise, questioning what evaluation might look like when engaging with inclusive and participatory approaches to international development. It stems from the work of the Critical Approaches to Development Network (ACD-Rede), a network of academics, activists, and professionals from the South who engage critically with the theme of development.

ACD-Rede seeks to build a body of shared knowledge based on experiences and perspectives from the South, refocusing the debate on development with a view from the margins and building networks through a plural dialogue between academia, activism, and the professional development sector.

Evaluation plays a central role in international development programmes. By defining success through choices of metrics, benchmarks, and indicators, it shapes perceptions of what development should aim to achieve. Its role in accountability often reveals to whom development initiatives are accountable and which actors hold (and leverage) the most influence.

Moreover, by often prioritizing what can be measured, evaluation can inadvertently narrow the conceptual and operational scope of development, as the reliance on measurable outcomes can lead to a reductive understanding of development. Complex, nuanced, or intangible aspects may be sidelined if they cannot be easily quantified. This creates a risk that development is defined by what is measurable rather than by what is meaningful.

If measurability is a prerequisite for action, evaluation frameworks can end up stifling experimentation and constraining diverse development strategies. They also risk overlooking context-specific or culturally significant aspects of development. Therefore, by proposing an inclusive and participatory engagement with development practice, relying on the experiences of those who inhabit spaces of intervention and on the knowledge produced by members of historically marginalized communities, the activity aims to contribute to the reimagining of evaluation frameworks and, ultimately, to new narratives of development, marked by shifts in power dynamics and greater equality and inclusivity.

By the end of the workshop, participants should be able to:


Isabel Rocha de Siqueira is Director of the International Relations Institute (IRI), PUC-Rio, Brazil. She holds a PhD in International Relations (War Studies, King's College London), funded by CAPES, and an MSc in International Relations from IRI/PUC-Rio. Recent and current research initiatives include projects on territorial development and participatory methodologies; intersections of peace and development in the Global South and South-South Cooperation for peace and development (with UNOSSC); fragile states (with the g7+); datafication and development in the Global South; and digitalization, datafication, and decent work (FORD, within contributions to T20). She currently coordinates the Methodology Laboratory at IRI/PUC-Rio at the postgraduate level. She also founded and coordinates the Critical Approaches to Development Network (ACD-Rede) and is editor-in-chief of the Brazilian Journal of Social Sciences (RBCS, in Portuguese).

Beatriz Teixeira is a researcher at the BRICS Policy Center (Rio de Janeiro, Brazil) and a member of the Critical Approaches to Development Network. Her main research interest is the intersection of gender, data and development. She holds a master’s in international relations, an MBA in Sustainability and a degree in Law. With over 10 years of experience in MEAL, she works as Performance, Research and Consultation Manager for the West Berkshire Council, in Newbury, England. She managed an M&E observatory on the use of technology in public schools of Brazil and worked as M&E Specialist at the United Nations Institute for Training and Research (UNITAR), in Geneva, Switzerland. Her first book, “Caring about women and the role of quantification in family planning” will be published in Summer 2025 and is based on her thesis, which was awarded an Honorable Mention in the 10th Dissertation and Thesis Contest of the Brazilian Association of International Relations (ABRI). 

Evaluation Ecosystems: Building Culturally Relevant and Context-Adapted Methodologies for Transformational Impact (Joint workshop by evaluation offices of the NDB and IFAD)

Monica Lomena-Gelis, Kouessi Maximin Kodjo (IFAD-IEO), Henrique Pissaia (NDB-IEO) and Chao Sun (NDB)

(Place: IFAD)

Audience level: Intermediate to advanced - general evaluation professionals & users

In an increasingly interconnected and complex world, evaluation methodologies must evolve to be contextually relevant, culturally responsive, and capable of unpacking the complexities inherent in development operations, particularly in BRICS and other developing countries. This workshop, jointly conducted by the Independent Evaluation Offices of the New Development Bank (NDB) and the International Fund for Agricultural Development (IFAD), aims to provide participants with actionable tools and strategies to design evaluations that are adapted to specific socio-cultural and policy environments.

The session begins with an introductory presentation outlining the enabling factors, barriers, and strategies for building robust evaluation ecosystems. This is followed by the presentation of concrete case studies showcasing practical approaches for integrating cultural and contextual considerations in evaluations. Participants will engage in an interactive framework-building exercise, collaboratively designing evaluation models using conceptual guidelines provided by facilitators. The final segment focuses on group discussions, where participants receive feedback, refine their approaches, and enhance their strategies for demand-driven, context-adapted evaluations.

By leveraging the combined expertise of NDB and IFAD’s evaluation offices, the workshop will offer practical insights from both multilateral infrastructure development and rural agricultural development perspectives, providing a nuanced understanding of how evaluation frameworks can be adapted across different sectors and regions.

By the end of this half-day workshop, participants will have a deeper understanding of the critical role of culturally sensitive evaluation methodologies. They will gain practical tools to address barriers to evaluation ecosystem development and techniques to manage complexity in evaluation frameworks. This session is designed to empower evaluation practitioners, policymakers, and researchers with the knowledge and skills necessary to develop transformative, demand-driven evaluation models tailored to national and regional contexts.

Expected Learning Outcomes:


Kouessi Maximin Kodjo has been a Lead Evaluation Officer at the Independent Office of Evaluation of IFAD since 2017. He has over 25 years of experience in international technical cooperation, including program management and evaluation. Before joining IFAD, Max Kodjo served as a Monitoring and Evaluation Expert for the Technical Cooperation Programme of the International Atomic Energy Agency (IAEA) in Vienna, Austria, from 2011 to 2017. He was also the Monitoring and Evaluation Component Manager for the FAO - National Programme of Food Security (NPFS) in Abuja, Nigeria, from 2008 to 2011. Prior to these roles, he held several positions, including Senior Advisor for Research and Evaluation for international NGOs in Nairobi, Kenya, Lecturer of Agricultural Economics at the University of Abomey-Calavi in Benin, and Agricultural Economist and Team Leader for World Bank-funded projects in Benin. Maximin Kodjo holds a PhD in Agricultural Development Planning and Evaluation from Humboldt University of Berlin, Germany, a Master’s degree in Agricultural Economics, and a Bachelor’s degree in Agriculture and Horticulture from the Republic of Benin.

Monica Lomena-Gelis has been a Senior Evaluation Officer in the Independent Office of Evaluation (IOE) at the International Fund for Agricultural Development (IFAD) since 2019. She is an environmentalist and evaluator with twenty years of professional experience in development in Latin America and Africa. Before joining IFAD, she worked for the offices of evaluation of the Inter-American and African Development Banks, other United Nations agencies, NGOs, and the private sector. She has presented and provided training at several regional evaluation conferences. Monica has been actively engaged in various evaluation networks of the UN, the Evaluation Cooperation Group among development banks, and voluntary organizations of professional evaluators. She holds a Doctorate in Sustainability from the Polytechnic University of Catalunya, Spain, and a Master’s in Environment and Development from the University of East Anglia, United Kingdom.

Henrique Pissaia has been a Principal Professional Specialist in the Independent Evaluation Office (IEO) of the New Development Bank since March 2023. Henrique previously worked at the Brazilian Ministry of Planning and held positions on the Board of Directors and Governors in various Multilateral Development Banks and Funds, such as the AfDB, CAF, FONPLATA, IDB, IFAD, and others. He also worked at IFAD and FONPLATA, where he was Chief of Staff of the Executive Presidency and General Coordinator for Strategic Alliances. Henrique has extensive experience working on the entire project cycle, from inception to evaluation, and was a member of the Evaluation Committee of IFAD’s Executive Board. A Brazilian national, Henrique holds a PhD in International Economics from the University of International Business and Economics in Beijing, China, a Master’s in Law from the University of California, Berkeley, USA, and a Law degree from UNICURITIBA, Brazil. He has published various papers, books, and book chapters, and has given speeches on international development, ESG policies, international law, and international economics.

Chao Sun is a Senior Professional in evaluation methods. He began his career at the New Development Bank (NDB) in November 2018 as a member of the Internal Audit Department, where he focused on evaluating and improving the controls, business processes, risk management, and governance practices of NDB. Prior to that, he was a Senior Manager at PricewaterhouseCoopers, with around ten years of risk assurance and audit experience in both China and the United Kingdom. Chao is a Certified Public Accountant, a Certified Internal Auditor, and a BAR member of the Ministry of Justice of China. He graduated from Peking University with dual degrees in Law and Economics and obtained a Master’s degree in Finance from the London School of Economics and Political Science.

Half day (PM)

Evaluation in service of equity? Approaches and methods for equitable evaluation

Dr Steven Masvaure and Dr Taku Chirau (CLEAR AA)

(Place: FAO)

Audience level: Intermediate - general evaluation professionals & users

The World Health Organization (2015) defines equity as “the absence of avoidable or remediable differences among groups of people, whether those groups are defined socially, economically, demographically or geographically.” The goal of equity is to eliminate the unfair and avoidable circumstances that deprive people of their rights. Inequities generally arise when certain population groups are unfairly deprived of basic resources that are made available to other groups. A disparity is ‘unfair’ or ‘unjust’ when its cause is social context rather than biological factors. Equitable evaluation contends that conducting evaluations with an equity approach is more powerful, as evaluation is used as a tool for advancing equity. It emphasizes that context, culture, history, and beliefs shape the nature of evaluations, particularly in the diverse and often complex African reality. Furthermore, equitable evaluation can render power to the powerless, offer a voice to the silenced, and give presence to those treated as invisible. Despite the importance of equitable evaluation in the Global South, there are few approaches and methodologies that evaluators can draw on in their practice. This workshop will discuss several approaches and methodologies that evaluators can use to promote a just social order.

Evidence from various sources shows that inequality is prevalent on the African continent, hence the need to focus on evaluative solutions that address the structural issues contributing to different forms of inequality, such as economic, political, and social inequality. Despite a plethora of development interventions on the continent, a large proportion of its population still lacks access to the basic goods and services needed for survival. The effectiveness of developmental programmes in sub-Saharan Africa has been elusive, to the extent that minimal inroads have been made in addressing key challenges such as poverty, inequality and, currently, the effects of climate change. One is forced to ask: why do millions of people in Africa have limited access to clean water? Why are millions without food, medicine, education, or a political voice? Why do millions suffer from human rights abuses? The realities cut far deeper than just being poor.

Learning Objectives:


Dr Steven Masvaure is a Senior Evaluation Technical Specialist at the Centre for Learning on Evaluation and Results, based at the University of the Witwatersrand. He holds a PhD in Development Studies. Steven has more than 15 years of experience as a researcher and evaluator in the development sector across several African countries. He is an expert in strengthening country-led monitoring and evaluation systems in Anglophone Africa. His areas of interest include local government, climate change adaptation, food security, public employment programmes, social protection, and monitoring and evaluation. He has also worked as an evaluator in several African countries and published research papers on food security, transforming evaluation (Made in Africa Evaluation), adaptive management of climate change, and national evaluation systems. He is currently one of the editors of a book on Equitable Evaluations.

Dr Taku Chirau is a Deputy Director at the Centre for Learning on Evaluation and Results Anglophone Africa. Takunda has conducted research and several National Evaluation Capacity Development (NECD) activities through respective UNICEF country offices and the Eastern and Southern Africa Regional Office. His contributions include capacity strengthening and development through trainings, working with VOPEs, developing evaluation guidelines and evaluation plans/agendas, developing national evaluation plans and national monitoring and evaluation policies, and developing monitoring and evaluation capacity-strengthening strategies and plans, among others. He holds a PhD in Sociology and has written peer-reviewed journal articles and book chapters. He is currently one of the editors of a book on Equitable Evaluations.

Applying Foresight Thinking and Methods to Evaluation

Annette L Gardner, PhD  (ALGardner Consulting) and Steven Lichty, PhD (REAL Consulting Group)

(Place: IFAD)

Audience level: Intermediate - general evaluation professionals & users

The past few years have demonstrated that our economy, climate, politics, and social order can change much faster than in past decades. Our present, and certainly our future, will continue to be volatile, uncertain, complex, and ambiguous (VUCA). For our evaluation clients, it is no longer enough to reflect on the past and the present in program development. Evaluators must actively anticipate what may happen in the future and feed that information back into decision-making and evaluation planning. Foresight, or the ability to use futures methods to inform strategy and decision-making, provides a rigorous and proven set of tools to perceive, make sense of, and act upon ideas about the future. In this workshop, we both present and demonstrate foresight methods that are teachable in a one-day workshop and can be readily adapted to evaluation practice, including the Futures Wheel and Wind Tunneling with alternative scenarios.

In this workshop, the co-facilitators will use a three-part format combining lecture and group exercises. For the most part, the workshop is a series of exercises done in breakout groups of 2 to 5 people. To demonstrate the application of foresight tools, we will use the case of an evaluation of a program to build community gardens in urban areas. Throughout the workshop, we will map the two foresight methods, Futures Wheel and Wind Tunneling, onto a typical evaluation design and discuss how they can bolster the development and testing of a theory of change, data collection, informing strategy, and more. Last, we will use recent foresight evaluation cases to highlight specific foresight methods that lend themselves to inclusion in evaluations of foresight and non-foresight programs.

Part 1: Teach participants foresight fundamentals. Using a lecture and question-and-answer approach, we will lay the foundation for using foresight methods, including an explanation of what foresight is and its key methods. We will make the case for strengthening evaluator foresight in the face of great change, demonstrate the fit of foresight with evaluation, and show how it supports strategy and evaluation for transformational change.

Part 2: Demonstrate the use of the Futures Wheel to systematically explore the implications of factors that shape a program or policy. We have found that pairing up participants supports participation and learning the method. Using the community gardens evaluation case, participants will determine the first- and second-order consequences of the trend "increased demand for healthy, locally-grown produce." This exercise will deepen participant understanding of trends and their impacts, as well as provide a tool that evaluators can use with stakeholders in developing a theory of change.
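As a purely hypothetical illustration of how the output of a Futures Wheel exercise might be captured for later analysis, the trend and its first- and second-order consequences can be recorded as a nested structure (all consequence text below is invented for the sketch, not drawn from the workshop materials):

```python
# Hypothetical sketch: a Futures Wheel recorded as nested dictionaries,
# using the workshop's community-gardens trend as the hub.
futures_wheel = {
    "trend": "increased demand for healthy, locally-grown produce",
    "first_order": {
        # first-order consequence -> list of second-order consequences
        "more community garden applications": [
            "waiting lists for plots",
            "pressure to convert vacant lots",
        ],
        "higher prices at farmers' markets": [
            "equity concerns for low-income residents",
        ],
    },
}

def second_order(wheel):
    """Flatten all second-order consequences into one list of discussion prompts."""
    return [c for consequences in wheel["first_order"].values() for c in consequences]
```

A structure like this makes it easy to turn a completed wheel into prompts for the theory-of-change discussion that follows the exercise.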

Part 3: Demonstrate Wind Tunneling and how to use alternative scenarios of the year 2040 to test the fit and robustness of evaluation recommendations. We will use an immersive, ‘day in the life’ approach whereby participants step into a scenario and use its characteristics to consider whether a specific evaluation recommendation (scaling up the community gardens project from 5 to 50 gardens) will hold up under the scenario. Participants will answer questions about strengthening the robustness of the recommendation, such as: what strategies or actions will increase public and policymaker support for more community gardens?

Learning Outcomes:


Annette L Gardner, PhD, Principal at ALGardner Consulting, has 30 years of evaluation expertise, directing national, state, and local evaluations focused on health care reform, adoption of new models of care, and expanded advocacy capacity. A thought leader in advocacy and policy change evaluation, she co-authored the definitive book, Advocacy and Policy Change Evaluation: Theory and Practice, and recently co-edited a special issue of New Directions for Evaluation on foresight evaluation. A futurist, she led the Association of Professional Futurists Foresight Evaluation Task Force and Initiative to advance foresight practitioner evaluation capacity, including developing an online foresight evaluation toolkit housed on the BetterEvaluation website and a foresight evaluation guide.

Steven Lichty, PhD, is a co-founder and managing partner of REAL Consulting Group, a boutique firm focused on foresight, research, and evaluation in Nairobi, Kenya. He has 25 years’ experience in various sectors across Africa, Asia, Europe, and Latin America, including post-conflict, transitional, and fragile environments. He works with transformative foresight, futures thinking, systems mapping, theory of change, and other qualitative methods. He was a Rotary Peace Fellow and Futures Fellow with the Association of Professional Futurists (APF) and is currently leading its Foresight Evaluation Initiative. Steven holds a PhD in African Studies; his doctoral research examined religious pedagogies of political socialization in Kenya. He also has an MPhil in Futures Studies from Stellenbosch University. Steven is currently focused on the nexus of healing-centered wellbeing programs among marginalized groups and their long-term impact on the WHO’s concept of the Triple Dividend.

Role of Emerging Technologies in Development Evaluation: Global Developments and BRICS Perspectives

Dr. Srinivas Yanamandra  (PayTM)

(Place: IFAD)

Audience level: Intermediate - general evaluation professionals & users

The primary goal of this workshop is to enhance the technical capacity of development evaluation professionals in leveraging cutting-edge technologies across three dimensions:

Open Source Investigations: Digital Public Infrastructure (DPI) plays a vital role in the promotion of Open Data and the generation of High-Value Datasets (HVDs) by establishing systematic frameworks for data collection, management, and dissemination, useful for Open Source Investigations. The technology behind DPI, characterised by its modularity, API integration, and a strong focus on metadata, significantly enhances the accessibility and usability of Open Data. The Workshop delves into this synergy between DPI, Open Data, HVDs and development evaluation using practical case studies from India’s DPI framework and its relevance for evaluation of projects such as metro-rail financing systems or financing COVID-19 relief packages. 

Cyber resilience in development evaluation: Cyber resilience can be dissected into three components: (a) Vulnerability Assessment involves systematically scanning for security flaws such as misconfigured network settings, outdated software versions, and inadequate access controls (e.g., specific vulnerabilities in the energy sector may include the risk of cyberattacks on grid management systems). (b) Incident Response refers to the established protocols that organisations employ to detect, contain, and recover from cyber incidents. (c) Incorporating cyber insurance as part of a comprehensive risk management strategy against the potential fallout from cyber incidents. The Workshop discusses cyber resilience policy initiatives such as Brazil's PNCiber and CNCiber (national cyber security policy and the apex body to oversee the same), India's National Critical Information Infrastructure Protection Centre (NCIIPC), and China's Cybersecurity Law (notably Articles 31 and 38) and their intersection with development evaluation in sectors vital to economic stability, such as energy, finance, and telecommunications.

Sentiment analysis powered by AI tools: Sentiment analysis specifically leverages vast amounts of unstructured data from social media platforms, enabling the detection and extraction of public emotions and opinions about various projects, including public transport systems, water infrastructures, and social development initiatives. For instance, in the context of public transport projects, sentiment analysis can reveal how communities perceive changes in service quality, accessibility, and safety. Similarly, in the water sector, sentiment analysis can be particularly useful for identifying public concerns related to service reliability and water quality. For example, a spike in negative sentiment regarding water tastes or odours can prompt immediate investigations and corrective actions. The Workshop focuses on the importance of sentiment analysis for development evaluation assisted with dedicated technology tools. It will discuss how institutionalising the AI toolkits (similar to World Bank's Development Impact Evaluation (DIME) toolkits) can significantly enhance the development evaluation process.
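As a minimal, hypothetical sketch of the lexicon-based end of this technique (production systems would use trained language models and much larger vocabularies; all words, comments, and scores below are invented for illustration):

```python
# Hypothetical illustration: a tiny lexicon-based sentiment scorer for
# citizen feedback about a water project. The word lists and example
# comments are invented; they are not from any real evaluation dataset.
POSITIVE = {"clean", "reliable", "improved", "safe", "affordable"}
NEGATIVE = {"smell", "odour", "outage", "unsafe", "contaminated", "taste"}

def sentiment_score(comment: str) -> int:
    """Return (# positive words) - (# negative words) for one comment."""
    words = {w.strip(".,!?").lower() for w in comment.split()}
    return len(words & POSITIVE) - len(words & NEGATIVE)

comments = [
    "Water is clean and the supply is reliable now",
    "Strange odour and bad taste since last week",
]
scores = [sentiment_score(c) for c in comments]
# A sustained run of negative scores on water-quality terms could flag
# the kind of issue the paragraph above describes for investigation.
```

The same aggregation logic carries over when the per-comment scores come from an AI model instead of a word list: the evaluative signal is the trend in scores over time, not any single comment.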

Learning Outcomes:


Dr. Srinivas Yanamandra is the Group Head - Regulatory Affairs & Policy at PayTM, India's largest fintech payment company. Earlier, he led compliance functions at the BRICS New Development Bank (Shanghai) and at IDFC First and ICICI Banks (Mumbai). He is a Chartered Accountant, a Cost & Management Accountant, a Fellow of the International Compliance Association (UK), a Certified Anti-Money Laundering Specialist (USA), and a Certified Global Sanctions Specialist (CGSS), and holds a doctorate from the University of Manchester (UK). His development finance experience includes setting up the integrity/anti-corruption/compliance function at the NDB (2017-22), contributing to economic sanctions evaluation, proactive integrity assessments, participation in missions for non-sovereign projects, and the handling of project grievances. He is a TEDx speaker and recipient of the “Global Achiever Award 2019” from the Institute of Chartered Accountants of India.

Full day

Utilization-Focused Evaluation for Systems Transformation

Michael Quinn Patton and Charmagne Campbell-Patton (Utilization-Focused Evaluation)

(Place: FAO)

Audience level: Intermediate - general evaluation professionals & users

Utilization-focused evaluation has a fifty-year history. The 5th edition of the book (2022) addresses the challenges of adapting utilization-focused evaluation to support a more equitable and sustainable world.  This constitutes a major new direction for utilization-focused evaluation in the face of the climate emergency and related threats to the future of humanity (the polycrisis: multiple overlapping and mutually reinforcing crises). This workshop will engage participants in applying the principles of utilization-focused evaluation to evaluate systems transformation. That will include introducing a theory of transformation based on multiple, integrated theories of change facilitated by the utilization-focused evaluator.  The role of the evaluator is to support and influence transformational change through principles-focused developmental evaluation within a utilization-focused framework. Such a role requires transforming evaluation to evaluate transformation. The workshop will integrate theory and practice based on fundamental premises, extensive research on evaluation use and influence, and the 10 operating principles of utilization-focused evaluation. Participants will learn to use the Transformation Impact Matrix for tracking transformational trajectories and interconnections.

Each of the four modules will include a framework presentation of relevant premises, principles, and practices (20 minutes); questions and comments from participants (10 minutes); a small-group application exercise (20 minutes); and a debrief of the exercise to extract insights and lessons.

Goal:

Participants will understand and be able to apply utilization-focused evaluation principles to evaluate systems transformation.

Objectives: 


Michael Quinn Patton has 50+ years' experience as an evaluator, is a former President of the American Evaluation Association, and is the author of 8 major evaluation books, including the 5th edition of Utilization-Focused Evaluation. His books include Practical Evaluation, Creative Evaluation, Developmental Evaluation, Principles-Focused Evaluation, Facilitating Evaluation, Blue Marble (Global) Evaluation, and Getting to Maybe: How the World Is Changed. He received the Alva and Gunnar Myrdal Award for Outstanding Contributions to Useful and Practical Evaluation Practice, the Lazarsfeld Award for Lifelong Contributions to Evaluation Theory, and the Research on Evaluation Award, all from the American Evaluation Association. EvalYouth named him the recipient of its first Transformative Evaluator Award (2021). He regularly conducts training for The Evaluators' Institute and is a founding member and former Board Trustee of the International Evaluation Academy.

Charmagne Campbell-Patton is Director of Organizational Learning at Utilization-Focused Evaluation and serves as Program Leader and Coordinator for the Blue Marble Evaluation Network. She is co-author of the 5th edition of Utilization-Focused Evaluation and has recently co-presented with Michael Patton at the conferences of the American Evaluation Association and the European Evaluation Society. She was formerly evaluation director for World Savvy, an international educational initiative.

Scaling Impact: New Ways to Plan, Manage, and Evaluate Scaling

Dr. John Gargani (Gargani + Company) and Dr. Robert McLean (IDRC)

(Place: FAO)

Audience level: Basic (e.g., young and emerging evaluators)

In this workshop, participants will learn a new approach to scaling the social and environmental impacts of programs, policies, products, research, and investments. The approach is based on the book *Scaling Impact: Innovation for the Public Good* written by Robert McLean and John Gargani, and is grounded in their collaborations with social innovators in the Global South. The workshop goes beyond the book, reflecting the authors' most recent thinking, and challenges participants to adopt a new scaling mindset. We first introduce participants to the core concepts of the book. After each concept, participants practice what they learned by engaging in small-group, hands-on exercises drawn from their own professional settings. The workshop is intended as an introduction, and participants will be provided with free resources to continue their learning. Participants should have a basic understanding of evaluation, either as a practitioner or user. They should know what a logic model is and recognize that programs, policies, and products create impacts in complex environments. Participants may come from any field, sector, or functional role. Program designers, managers, and evaluators are welcome. By the end of the workshop, participants will be able to define impact, scaling, operational scale, and scaling impact; apply the four principles of scaling; understand that there are many ways to scale and know how to choose among them; articulate a scaling theory of change that conveys the logic of scaling and identifies scaling risks; and apply the dynamic evaluation systems model.


Objectives: 


Dr. John Gargani is an evaluator with over 30 years of experience. He served as President of the American Evaluation Association in 2016, coauthored the book *Scaling Impact: Innovation for the Public Good*, and directs evaluations around the world ranging from multi-site randomized controlled trials to early-stage innovation design. He spends most of his time conducting research, writing, speaking, and teaching on evaluation topics related to impact, scaling, value, and AI. He is an experienced teacher of graduate students (Claremont Graduate University and the University of Pennsylvania) and professionals around the world. He holds an MBA from the Wharton School at the University of Pennsylvania, an MS in Statistics from New York University, and a PhD from the University of California, Berkeley.

Dr. Robert McLean is a Senior Program Specialist in Policy and Evaluation at Canada's International Development Research Centre (IDRC) and a Fellow of the Integrated Knowledge Translation Research Network (IKTRN) at the Ottawa Hospital/University of Ottawa. His broad interest lies in understanding how human creativity can create a better world, and he has pursued it by working across the government, private, and NGO sectors. He publishes scientific research and invited commentary in venues ranging from Nature to the Stanford Social Innovation Review, and is coauthor of the book *Scaling Impact: Innovation for the Public Good*. He holds a Ph.D. from the Department of Medicine at Stellenbosch University, South Africa; an M.Sc. from the Global Development Institute of the University of Manchester, England; and undergraduate degrees from both Carleton University, Canada, and the University of KwaZulu-Natal, South Africa.

Evaluating Transformational Change: Tools and Examples from the Global South and North

Neha Sharma (Adaptation Fund), Tabitha Olang (GEAPP Africa), Númi Östlund (EBA), Kevin Moull and Thomas Wencker (DEval)

(Place: FAO)

Audience level: Basic (e.g., young and emerging evaluators)

Addressing global environmental and development challenges, including the Sustainable Development Goals (SDGs) and climate-resilient development, demands transformational changes that are just and inclusive, leaving no one behind. Stakeholders in both the Global South and North are actively working toward just transitions and impactful transformation. This workshop introduces participants to the essential tools and practical examples needed to evaluate and monitor transformational change in diverse contexts, drawing on both cutting-edge science and evaluation literature. 

The workshop opens with an overview of the need for transformational change, covering core concepts, definitions, and dimensions. Trainers will present insights from the latest scientific and evaluative research.

Participants will explore real-world applications of transformational change evaluations, including recent case studies from bilateral and multilateral development actors. Examples include an ex-ante evaluation of the transformative portfolio of Swedish climate ODA, an evaluation by German development cooperation, and a summative evaluation of CIF's Pilot Program for Climate Resilience that uses transformational change dimensions to assess progress. Hands-on examples will help participants define the dimensions of transformational change for their evaluation needs.

The workshop will provide an overview of the evaluative tools available to evaluate transformational change, including theories and signals of transformational change. Participants will work collaboratively to create evaluation designs tailored to their contexts, developing realistic or hypothetical approaches for assessing transformational change. 

The workshop will share ideas and lessons on how to use evaluative evidence on transformational change for decision making, strategy development, and adaptive management in organizations. By creating processes throughout the program cycle, evaluative work can influence decisions in real time, not just at the end of programs and projects.

Objectives: 


Neha Sharma leads Results-Based Management (RBM), Knowledge and Learning at the Adaptation Fund (AF), driving the organization's mission to help vulnerable communities in developing countries adapt to climate change. She has over fifteen years of experience in the climate, environment, and development sectors, and is an expert on evaluation methods, results measurement, and knowledge management. In past roles, Neha headed the Evaluation and Learning Unit at the Climate Investment Funds and advanced national and organizational evaluation capacity at the Independent Evaluation Group of the World Bank. She has contributed to evaluative research and policy engagement in organizations such as the Abdul Latif Jameel Poverty Action Lab. Neha holds a Master's degree in Public Administration from the Harvard Kennedy School and a Master's degree in Economics from Jawaharlal Nehru University. She is deeply committed to using evidence to shape strategy and drive transformative change.

Tabitha Olang leads the Monitoring, Evaluation and Learning functions in the Global Energy Alliance for People and Planet's Africa portfolio. Her work involves impact management, evaluation commissioning, and building the MEL capacity of the programmatic teams. She has over 9 years' experience in the renewable energy, energy access, energy transition, and development sectors.

Númi Östlund is Programme Manager at EBA, where he works as evaluator and evaluation commissioner. Publications include reports on climate mitigation, results reporting and Theories of Change. 

Kevin Moull is an Evaluator at the German Institute for Development Evaluation (DEval).

Thomas Wencker is a Senior Evaluator at the German Institute for Development Evaluation (DEval). He focuses on climate change mitigation, development cooperation in fragile contexts, and patterns of aid allocation. At DEval's Competence Centre for Methods, he specializes in applying machine learning to evaluations.