Improving Instructor Impact on Learning with Analytics
Each of us can recall an instructor who made learning engaging, relevant and impactful, inspiring us to apply what we learned. Unfortunately, each of us can also recall an instructor who failed in one or more of these areas.
BY ERIC A. SURFACE, Ph.D., AND REANNA P. HARMAN, Ph.D. | Originally Published in Training Industry Magazine
Instructors are force multipliers, reaching hundreds — if not thousands — of learners, impacting both their learning experience and motivation to transfer. So, how can we improve instructor impact on learning?
Learning and development (L&D) professionals use metrics and analytics to demonstrate program effectiveness and to make program management and improvement decisions. This approach can also be applied to manage, improve and develop instructors.
Instructor-focused formative evaluations and analytics are typically neglected, even though they can help improve instruction and, as a result, learning outcomes. The following example demonstrates how formative instructor evaluations and analytics can improve instruction and learning.
How Much Do Instructors Matter?
Over a decade ago, a client project presented an opportunity to explore how much instructors matter in the learning process. We frame this case study using the following two questions posed in the article, “Two Fundamental Questions L&D Stakeholders Should Answer to Improve Learning,” to explore a problem and guide evaluation and analytics:
- How well did I do?
- How can I do better?
Context:
In 2005, we investigated a gap between desired and actual learner skill proficiency in a job-required foreign language training course, which lasted 18-24 weeks and was the last phase in the training pipeline for U.S. Army Special Forces (SF). Note that selecting and training each candidate was a six-figure investment. Failing to achieve the proficiency standard for graduation meant a candidate was dropped from or recycled through the training pipeline, creating not only a monetary loss but also the loss or delayed deployment of a soldier with job-focused skills. Achieving a 100% graduation rate was critical, as this was during the height of operations in Iraq and Afghanistan.
Questions:
Program leaders asked themselves, “Is our training program meeting its proficiency and graduation objectives and producing the capabilities needed by operational units?” After evaluating its effectiveness, they determined it was not. Then, they asked, “How do we improve learning, graduation rates, and program effectiveness?”
Approach:
We helped program leaders answer the second question. Almost no data existed on diagnostic factors shown by research to impact learning. We decided on two strategies — analyzing archival learning outcome data and collecting survey data focused on diagnostic factors from future learners and instructors.
We had information about the training’s objectives, structure, stakeholders and context as well as learners’ class assignments and end-of-course (EOC) proficiency scores. This allowed us to determine how much individual and class-level characteristics impacted proficiency scores.
The nested structure of learning events provides the opportunity to explore sources of influence on outcomes, even in the absence of direct data on diagnostic factors associated with a level of analysis (e.g., class). For our client, each learner was nested within a class and each class within an event. Learners and classes were also nested within instructors, as each instructor taught multiple classes.
Results:
Our analyses provided evidence that instructors contributed strongly to learners’ success in developing proficiency. For example, instructors accounted for 42% of differences (i.e., variance) in learner reading proficiency scores.
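For readers who want to run a similar analysis, the sketch below shows one common way to estimate how much outcome variance sits between instructors: a random-intercept (mixed) model with learners nested in instructors, summarized as an intraclass correlation (ICC). The file and column names (`learner_outcomes.csv`, `instructor_id`, `reading_score`) are illustrative assumptions, not the client's actual data, and the modeling choice is a typical default rather than the exact analysis we ran.

```python
# Minimal sketch (assumed data layout): estimate the share of learner
# reading-score variance attributable to instructors.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("learner_outcomes.csv")  # hypothetical learner-level file

# Random-intercept model: learners nested within instructors
model = smf.mixedlm("reading_score ~ 1", data=df, groups=df["instructor_id"])
result = model.fit()

between_var = result.cov_re.iloc[0, 0]  # variance between instructors
within_var = result.scale               # residual (within-instructor) variance
icc = between_var / (between_var + within_var)
print(f"Share of variance attributable to instructors: {icc:.0%}")
```

An ICC of 0.42 from a model like this would correspond to the 42% figure reported above.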
So, Instructors Impact Learning. Now What?
Identifying instructors as a lever to improve learning outcomes gave us a diagnostic factor to focus on … but now what? No specific instructor data existed to guide the creation of diagnostic survey items or of interventions to improve instruction. We determined what factor needed improving, but we still had to determine how to improve it.
How Do Instructors Impact Learning?
Instructors impact learning directly through their decisions and actions in preparing and delivering training content and by interacting with learners. We defined and measured instructor performance — the decisions, actions or behaviors under instructors’ control related to their roles and objectives in the learning enterprise — not instructor characteristics.
Will Instructors Differ on Performance?
Since instructors were content subject matter experts (SMEs) with varying degrees of instructional experience, it was reasonable to assume there would be performance variability. Without instructor variability, this approach does not work.
Defining Instructor Performance:
We reviewed research to identify instructional behaviors empirically linked to learner outcomes that could be rated by learners, instructors and/or supervisors. We identified behaviors that fit into four performance domains:
- Learner Engagement
- Classroom Management
- Responsiveness to Learners
- Adapting to Learner Needs
Over the years, we identified additional performance domains, but these four remained relevant for instructor-led training (ILT). Training context and content, instructor effectiveness measure(s), instructional philosophy, and learner and instructor populations all impact what performance domains are relevant.
Measuring Instructor Performance:
We also developed and validated instructor performance metrics, which assessed key behaviors in the four domains. Then, we collected data multiple times during and at the end of the training for two complete cycles. The metrics performed as designed with excellent construct validity and reliability.
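As a hedged illustration of the reliability side of that validation, the sketch below computes Cronbach's alpha for one performance domain from learner ratings. The item columns, file name and 0.80 rule of thumb are assumptions for illustration, not the project's actual instrument or standard.

```python
# Minimal sketch (assumed data layout): internal consistency of one
# performance-domain scale built from hypothetical 1-5 rating items.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """items: rows = respondents, columns = items belonging to one scale."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

ratings = pd.read_csv("engagement_ratings.csv")  # hypothetical file
alpha = cronbach_alpha(ratings[["eng_1", "eng_2", "eng_3", "eng_4"]])
print(f"Learner Engagement scale alpha = {alpha:.2f}")
# A common (assumed) rule of thumb: alpha >= 0.80 before trusting the scale.
```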
Does Instructor Performance Impact Learning?
Performance ratings collected throughout the course, starting at the 25% course completion mark, significantly correlated with EOC outcomes. When we retrospectively compared the performance ratings of instructors who had taught high- and low-proficiency classes, instructors who taught high-proficiency classes had higher ratings on all items, across all time points; higher-performing instructors had higher-performing learners.
With such robust findings, we developed and piloted a feedback intervention. We distributed a feedback report to instructors with results from the 25% collection, offered guidance on interpreting its results and suggested improvement resources.
When we had data from four training cohorts (two with feedback, two without), we compared instructors who received feedback to those who did not. Instructors who received feedback improved their subsequent performance ratings, and their learners had higher EOC assessment scores.
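A minimal sketch of these two analyses, under assumed class-level data, is below: it correlates ratings collected at the 25% mark with EOC scores, then contrasts subsequent ratings for instructors who did and did not receive feedback. The column names and the Welch t-test are illustrative choices, not the exact analyses reported here.

```python
# Minimal sketch (assumed class-level data): relate ratings to outcomes and
# compare feedback vs. no-feedback instructors. Column names are hypothetical.
import pandas as pd
from scipy import stats

classes = pd.read_csv("class_level_data.csv")  # one row per class (hypothetical)

# 1) Do ratings at the 25% mark track end-of-course proficiency?
r, p = stats.pearsonr(classes["rating_25pct"], classes["eoc_score"])
print(f"Rating at 25% mark vs. EOC proficiency: r = {r:.2f}, p = {p:.3f}")

# 2) Did instructors who received feedback earn higher later ratings?
got = classes["got_feedback"].astype(bool)  # assumed True/False flag
fed, not_fed = classes.loc[got, "rating_post"], classes.loc[~got, "rating_post"]
t, p = stats.ttest_ind(fed, not_fed, equal_var=False)  # Welch's t-test
print(f"Feedback vs. none: mean diff = {fed.mean() - not_fed.mean():.2f}, "
      f"t = {t:.2f}, p = {p:.3f}")
```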
Intervention:
We implemented a formative evaluation and feedback program to deliver results and provide tools for reflection and improvement/development planning. The reports provided comparisons to help instructors determine if they needed to improve. Instructors used the report to guide conversations about development with supervisors. Supervisors used the reports to identify instructors for observation and coaching. The reports later transitioned to web-based dashboards.
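To make those comparisons concrete, here is a sketch of a per-instructor feedback summary that contrasts an instructor's mean rating in each of the four domains with the program-wide average. Everything in it beyond the four domain names (file, columns, wording) is an illustrative assumption, not the actual report or dashboard.

```python
# Minimal sketch (assumed data layout): per-instructor feedback summary
# comparing domain ratings with the program average.
import pandas as pd

DOMAINS = ["learner_engagement", "classroom_management",
           "responsiveness", "adapting_to_needs"]  # the four domains above

ratings = pd.read_csv("instructor_ratings.csv")  # one row per learner rating (hypothetical)
program_avg = ratings[DOMAINS].mean()
by_instructor = ratings.groupby("instructor_id")[DOMAINS].mean()

def feedback_report(instructor_id: str) -> str:
    lines = [f"Feedback report for {instructor_id} (vs. program average)"]
    for domain in DOMAINS:
        own, avg = by_instructor.loc[instructor_id, domain], program_avg[domain]
        note = "consider development resources" if own < avg else "at or above average"
        lines.append(f"  {domain}: {own:.2f} vs. {avg:.2f} -> {note}")
    return "\n".join(lines)

print(feedback_report("INSTR_007"))  # hypothetical instructor ID
```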
Are Instructors Still Relevant?
With so much focus on asynchronous, technology-delivered learning, it is understandable to question whether instructors and instructor-led training (ILT) are still relevant. The short answer is yes!
Approximately 67% of formal learning hours available in 2017 were instructor-led (53% traditional classroom, 9% virtual and 5% non-online remote classroom), according to ATD research. Training Industry research concurs, finding that, on average, companies deliver 64% of their training portfolios via ILT (39%) or virtual ILT (VILT; 25%). Other recent research (What Learners Want: Strategies for Training Delivery) found that 63% of learners participated in at least one ILT course, and 28% in at least one VILT course, in the past 12 months.
Training Industry research found that, over the next 12 months, 21% of companies plan to increase their use of ILT and 31% plan to increase their use of VILT, while only 10% and 8%, respectively, plan to decrease use. Thus, we see a place for ILT and VILT in training portfolios and a role for instructors into the foreseeable future.
Returning to our case study: we successfully answered the question, “How do we improve learning, graduation rates, and program effectiveness?” and provided a mechanism to use formative evaluation, analytics and feedback to drive improvement.
Over time, instructor performance and effectiveness increased, and variability in instructor performance decreased. Thus, the program’s effectiveness increased, producing more capability.
Are You Ready to Try This Approach?
Formative evaluation focused on levers, such as instructor performance, can drive continuous improvement and optimize the learning process and its outcomes. Every L&D program is different, so tailor the process as needed and let your findings guide its implementation. Before you get started, however, it is important to do the prep-work:
- Ask if the training program is meeting its objectives. Asking questions about effectiveness allows stakeholders to identify gaps between actual and desired outcomes linked to their roles and objectives. Prioritize outcomes desired by multiple stakeholders. If there are no gaps, stop. If stakeholders are satisfied with current performance, stop.
- Ask if there is opportunity for improvement. Then, determine if improvement is possible given the context, stakeholders’ cooperation and the outcome’s measurement. If not, stop.
- Develop questions related to improvement, such as “How can I impact the focal outcome?” or “What factors drive the focal outcome?” Training effectiveness research and models identify factors that typically influence learning outcomes. Statistical techniques can identify sources that influence outcome measures to narrow the candidates. Instructor performance is just one potential factor. Select factors to investigate that are easily measured.
- Develop and pilot metrics for the selected factors, choosing the most appropriate data sources, measurement methods and collection times to test the impact on the focal outcome. Determine if the metrics function as designed, meeting both validity and reliability standards. If not, repeat until they do.
- Collect and analyze data on these metrics along with learning outcomes. Determine if there is a relationship between the factor(s) and learning outcome(s). If not, stop.
- Determine if the factor is suitable to be used in an intervention. Is the factor actionable? Does the factor’s measurement occur before the focal outcome’s measurement? Is there time for a change in the factor to impact the outcome? Determine if the evidence supports use of the factor as an intervention (a minimal go/no-go sketch follows this list). If not, stop.
- Develop and implement an analytics intervention to improve the relevant factors and associated outcomes. Evaluate and adjust over time.
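The sketch below strings several of these checks together as a simple go/no-go gate: is the metric reliable enough, and is its relationship with the focal outcome strong enough to justify an intervention? The thresholds (0.80 for reliability, |r| >= 0.30 and p < .05 for the relationship) and all file and column names are illustrative assumptions, not prescribed standards.

```python
# Minimal sketch: a go/no-go gate over the prep-work checks above.
# All thresholds and column names are illustrative assumptions.
import pandas as pd
from scipy import stats

def factor_is_intervention_ready(pilot: pd.DataFrame, factor_col: str,
                                 outcome_col: str, reliability: float,
                                 min_alpha: float = 0.80,
                                 min_r: float = 0.30) -> bool:
    """Return True if the factor looks usable as an intervention lever."""
    if reliability < min_alpha:
        print("Stop: metric does not meet the reliability standard.")
        return False
    r, p = stats.pearsonr(pilot[factor_col], pilot[outcome_col])
    if abs(r) < min_r or p >= 0.05:
        print(f"Stop: weak factor-outcome relationship (r = {r:.2f}, p = {p:.3f}).")
        return False
    print(f"Proceed: r = {r:.2f}, p = {p:.3f}; design the feedback intervention.")
    return True

pilot = pd.read_csv("pilot_data.csv")  # hypothetical factor + outcome per class
factor_is_intervention_ready(pilot, "instructor_rating", "eoc_score", reliability=0.86)
```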
Final Thoughts
Our case demonstrates the “two questions” approach in driving evaluation, analytics and feedback practice. Specifically, it provides an example of how instructor performance was identified as a key lever impacting learning, and how instructor performance measurement, analytics and feedback were used to improve instruction and its impact on learning outcomes.
Instructors are impact multipliers through their influence on learners and need insights, tools and support to maximize their impact. Analytics help supervisors have timely performance conversations, coach instructors and provide support based on data and insights. Ultimately, analytics and development tools give instructors agency over their professional and career development. Timely, analytics-based feedback empowers instructors to adjust their practice in process, sharpen their craft and create more value for themselves, their learners and their employers.
Dr. Eric Surface is CEO and Dr. Reanna Harman is VP for Practice at ALPS Insights. They have 35 years of combined L&D and consulting experience. ALPS Insights provides L&D evaluation, analytics and insights through its software platform, ALPS Ibex™, as well as consulting and analytics services. Email Eric and Reanna.
RECENT INSIGHTS
Impact Evaluation: From Employee Training to Leadership Development
SIOP Annual Event: Sat, April 25, 12:30PM-1:20PM [Cancelled due to COVID-19 policies]
Drawing on the combined experience of a diverse panel of learning and development experts, this session will examine and discuss current practices and future opportunities in impact evaluation for a wide range of interventions, from employee training to leadership development programs. Panelists will share insights to help build value using evaluation data.
Create More Value With Your Learning Evaluation, Analytics, and Feedback (LEAF) Practice
TK2020 Event: Wed, February 5, 11:30AM – 12:00PM
Optimizing your LEAF practice is your best opportunity to improve learning and its impact. Less than half of organizations indicate evaluation helps them meet their learning and business goals. Data alone doesn’t create value. People acting on data create value. Our ALPS Ibex™ platform drives effective, purpose-driven evaluation empowering L&D stakeholders with insights and creating a culture of continuous improvement in the workplace. Examples demonstrate how using ALPS Ibex helps L&D stakeholders Act on Insights™ to drive improvement and impact.
Want More Value from Evaluation? AIM to Answer Two Questions
TK2020 Event: Thurs, February 6, 9:00AM – 10:00AM
While almost all learning is evaluated, less than half of organizations report that evaluation helps meet their learning and business goals. Data alone create no value. People acting on meaningful data within the L&D process create value. The Alignment and Impact Model (AIM) focuses evaluation on helping all stakeholders create value. AIM incorporates purpose, process, stakeholder roles, and two questions to guide evaluation design and focuses on maximizing learning, transfer, and impact. Examples demonstrate the fundamentals of AIM and how it can be implemented and used.
Create More Value With Your Learning Evaluation, Analytics, and Feedback (LEAF) Practice
TK2020 Event: Thurs, February 6, 10:15AM-10:45AM
Optimizing your LEAF practice is your best opportunity to improve learning and its impact. Less than half of organizations indicate evaluation helps them meet their learning and business goals. Data alone doesn’t create value. People acting on data create value. Our ALPS Ibex™ platform drives effective, purpose-driven evaluation empowering L&D stakeholders with insights and creating a culture of continuous improvement in the workplace. Examples demonstrate how using ALPS Ibex helps L&D stakeholders Act on Insights™ to drive improvement and impact.