2014 EAGER Grant PI Convening

EAGER K-12 STEM Education Indicators Principal Investigators Convene in Washington

Hard on the heels of the 39th Marine Corps Marathon, principal investigators (PIs) for the 14 EAGER grants that NSF awarded in support of a K-12 STEM Education Indicator System held a marathon two-day session of their own at SRI’s Washington offices (which are really in Rosslyn, right next to the start of the marathon).

NSF’s project officer for this work, Karen King, opened the meeting by explaining the purposes for calling the group together. The first goal was to give the PIs a deeper understanding of the broader policy context for the NSF Dear Colleague Letter they had responded to, as well as of ongoing efforts by NSF and SRI to build a framework and set of processes for collecting indicator data, as called for in the National Research Council’s 2013 report Monitoring Progress Toward Successful K-12 STEM Education. The second goal was to give the PIs an opportunity to share their research plans with one another and to identify opportunities for synergy across projects and with the ongoing work of the SRI project team.

Barbara Means welcomed attendees on behalf of SRI and described the K-12 STEM education indicators work as a refreshing example of inter-governmental communication and collaboration, involving Congress, the National Science Foundation, the National Research Council, and the National Center for Education Statistics. Jessica Mislevy, task leader for the SRI contract to help NSF implement the indicator system and produce reports for consideration as companion pieces to the Science and Engineering Indicators, provided an overview of SRI’s work with NSF and NCES over the past year. Members of the SRI team shared what has been learned so far about the availability of existing data and potentially suitable mechanisms for gathering new data for indicators in four areas:

  1. Access to Quality STEM Education
  2. Curriculum and Assessment
  3. Educator Capacity
  4. Policy and Funding Initiatives

Several invited commentators provided an external perspective on the emerging indicator data collection plans. Sharon Lynch (George Washington University) pointed out that if the access-to-quality-STEM-education indicator data are collected through NAEP and/or TIMSS questionnaires, they will cover Grade 4 only, which may not be representative of elementary school practice generally, as Grade 4 is often the only elementary year in which states assess science. Lynch also argued that the Indicator 1-3 access data should not be confined to elementary school, as originally conceived by the NRC committee, but should span K-12, and that Indicator 1 on specialized schools deserves a high priority. Lynch urged NSF to make sure the indicator system includes indicators that are forward-looking, not just measures of the status quo.

Drew Gitomer (Rutgers University) discussed the difficulties of trying to measure what really happens in classrooms in terms of STEM content and practices. He noted that research has developed some reliable classroom observation systems for mathematics, but that classroom observations are not really appropriate for an indicator system, not just because of their cost but also because only a small number of classes and instructional interactions can be sampled. Gitomer argued that the assignments and tests that teachers use in their classes are a better representation of what teachers expect of their students and that these artifacts can be sampled systematically and coded in ways that get at classroom content and practices.

EAGER Principal Investigators Enlighten Us in Just Six Minutes

Next up was the K-12 STEM Education Indicators Project version of a lightning round: each principal investigator was challenged to describe his or her project in six minutes using no more than six PowerPoint slides.

Jeanne Century (University of Chicago) set the pace, describing her project’s consensus-building approach to developing a systematic way to define various types of STEM schools and programs.

Rena Dorph (UC Berkeley Lawrence Hall of Science) explained how her project will tackle the issue of measuring the amount and kind of time devoted to elementary science instruction. Dorph pointed out that past efforts to measure the quantity of science instruction have been episodic and have not been related to achievement outcomes in ways that would allow analysts to explore the issue of how much time is adequate.

The next set of EAGER projects concerned the nature of the materials and instruction used in classrooms. April Gardner (Biological Sciences Curriculum Study) described how her project will convene stakeholders to develop a consensus around criteria for determining whether instructional materials align with the NGSS and a process for applying the criteria.

Morgan Polikoff (University of Southern California) presented his project’s plan to develop an automated approach to gathering textbook adoption information from local education agencies and to implement the approach in five states (CA, FL, IL, NY, and TX). Polikoff expects to have results available by June 2015.

Leland Cogan represented the project led by Bill Schmidt (Michigan State University) to test out more cost-effective approaches to getting detailed information about math content coverage in classrooms, comparing teacher reports to expert recommendations and to textbook coverage of content.

Laura Hamilton (RAND) is taking on the related problem of getting a representative sample of classroom coverage of CCSS-Math and NGSS practices. Hamilton pointed out the technical and practical limitations of commonly used approaches, such as surveys, teacher logs, and classroom observations, especially at a time when personalized learning approaches are becoming increasingly commonplace. Her project will be trying out novel approaches such as ecological pinging, in which teachers are prompted at random intervals to report their moment-to-moment activities.

Eric Banilower (Horizon Research) also will focus on classroom practices in his EAGER grant. Banilower learned from his prior work on the National Survey of Science and Mathematics Education that the CCSS “mathematical practices” are understood differently by different teachers. For the NGSS science and engineering practices, Banilower’s EAGER project will work with teachers to operationalize definitions and then validate survey questions based on those definitions through observations of instruction and teacher interviews.

The third cluster of EAGER projects is focused on educator capacity. Dan Goldhaber (University of Washington) described the first of these projects, which will link data from the teacher licensure test used in Washington State to student basic skills and content-area tests to ascertain the strength of the relationship between the two. If there is a strong positive relationship, then teacher licensure test scores might be a useful indicator of teachers’ content knowledge for teaching.

Heather Howell (Educational Testing Service) and Yvonne Lai (University of Nebraska-Lincoln) have a collaborative grant to work on developing better measures of mathematical knowledge for teaching at the secondary level. They conjecture that the distinction between math knowledge and knowledge of student mathematical difficulties that appears to be so important at the elementary level may play out differently in secondary mathematics teaching.

A very different approach to measuring mathematics knowledge for teaching was described by Nicole Kersting (University of Arizona). Kersting will be adapting her Classroom Video Analysis instrument to map onto the CCSS-M and will collect data from 50 teachers and their students to ascertain the revised instrument’s validity and reliability.

Jamie Mikeska (Educational Testing Service) described her project to develop a science-knowledge-for-teaching tool for the areas of ecosystems and Earth’s place in the universe as taught in the upper elementary grades. This project too will examine the distinction between content knowledge and content knowledge for teaching, using survey data from 150 teachers as well as cognitive interviews.

The final cluster of EAGER grants focuses on state and federal policy issues. Ellen Mandinach (WestEd) described her plan to examine state longitudinal data systems to identify items aligned with the K-12 STEM education indicators. Mandinach plans to report on this analysis in 2015, providing action steps and recommendations for leveraging the state data sets.

Rolf Blank (National Opinion Research Center) will be examining states’ science and mathematics assessment policies and instruments. His project will look at the alignment between the assessments that states administer and the performance expectations outlined in the NGSS and CCSS-M, in an attempt to answer the question of whether states are in fact responding to the call for higher mathematics and science education standards.

Joe Taylor (Biological Sciences Curriculum Study) wrapped up the individual project presentations by describing his project plan to produce baseline data on the extent to which federal agencies have funded the kinds of research called for in the NRC report. Taylor’s team will be reviewing project abstracts from NSF, IES, and NIH as well as examining published articles from 22 academic journals.

Partners Put the EAGER Projects in Context

After the rapid-fire presentations, participants were happy to have their lunch while SRI’s Vanessa Peters gave a tour of the alpha version of the website for the K-12 STEM Education Indicators Project. Meeting participants recommended that the website be targeted first to external national audiences of policymakers, researchers, and practitioners, with guidance and tips on how different audiences can use the site.

Joan Ferrini-Mundy, NSF’s Assistant Director for Education and Human Resources (EHR), described the national policy context for the K-12 STEM education indicators work. She directed the group’s attention to the federal STEM Education 5-Year Strategic Plan commissioned by Congress and produced by the Office of Science and Technology Policy (OSTP). This plan calls for 100,000 new K-12 STEM teachers by 2020, a 50% increase in the number of youth with authentic STEM experiences each year, and the graduation of 1 million more students with STEM degrees over a decade, while increasing the number of degrees earned by women and under-represented minorities. The plan tasks NSF and NIH with leading efforts to improve preparation for higher education among under-represented groups, building and using evidence-based approaches. Ferrini-Mundy views the K-12 STEM education indicator system as part of this process of improving STEM education across the nation. She also posed the question of how the indicator data could be used for local improvement. Ferrini-Mundy further noted that having the indicator system could provide a useful framework for evaluations of EHR grant portfolios such as DRK-12.

Jamie Deaton spoke next on behalf of the National Center for Education Statistics. Deaton described the new National Teacher and Principal Survey (NTPS), which will replace the Schools and Staffing Survey (SASS) in 2015. The plan is for the NTPS to be conducted every 2 years (in contrast to the 4-year cycle of the SASS) and to contain some rotating survey modules. It will be possible to link NTPS data to other data sets, including those from EDFacts and the Office for Civil Rights. Deaton said that the NTPS will contain items relevant to K-12 STEM education indicators 1, 2, 6, 7, and 11. He noted also that the TIMSS questionnaires contain items relevant to indicators 2, 5, 6, 7, and 8, and another TIMSS administration is scheduled for 2015. Finally, the NAEP Science Teacher Questionnaire will now use a survey question that asks teachers to write in the amount of time they spent on science instruction weekly, as recommended by the K-12 STEM Education Indicators Project. Deaton said that various NAEP questionnaires have items relevant to indicators 2, 3, 5, 6, 7, and 8.

Deaton described the process that NCES is required to go through to test new survey questions before large-scale administration. This includes cognitive labs as well as pilot and field testing, in a process that typically takes 3-4 years. The process can be shortened or skipped if a question has already undergone it for another NCES survey (as was the case for the science instructional time question added to the NAEP Science Questionnaire).

At the end of Day 1, meeting participants dug deeper into the various project plans in the course of a round robin poster exhibition.

K-12 STEM Education Indicators as a Catalyst for Improvement

The focus for the meeting’s second day was thinking about how to make sure the K-12 STEM education indicators data actually get used for education improvement. SRI’s Barbara Means urged the group to consider the distinction among measures for accountability, for theory development, and for improvement, as set forth in a recent paper by Yeager, Bryk, and colleagues. Means suggested that broad-scale indicator data should be presented as a baseline from which improvement could be measured, and that policymakers and practitioners could benefit from guidance on how they could collect more frequent and detailed data that would map to the national indicators. Means also urged the EAGER project researchers to begin engaging with policymakers and practitioners from the outset of their projects to make sure that their measures make sense to, and are valued by, those audiences.

A stakeholder panel then articulated policymaker perspectives on how the STEM education indicator data could be useful. Dave Evans, Executive Director of the National Science Teachers Association (NSTA), described his organization’s role in publishing “journals of practice” and books on science teaching, and in providing professional development for teachers. In his view, data on the amount of time spent in science instruction, implementation of the NGSS, and math and science content knowledge for teaching would be extremely useful in guiding decisions about what to publish and what to emphasize in professional learning activities.

Karen Cole, Initiatives Manager for Professional Development in STEM for the DC Public Schools, described her district’s recent growth and need for research-based insights. Cole expressed her district’s desire to be able to see the indicator data for districts similar to their own, not just at the national level. Cole also urged the EAGER researchers to become familiar with districts’ data systems so that they can design indicators that work with those systems. Cole offered the group a favorite aphorism — “Lead with Why”— that got picked up and used for the rest of the meeting.

Michael Lach, formerly with Chicago Public Schools, the U.S. Department of Education, and NSF, and now with the University of Chicago, pointed out that the people responsible for math and science in district and state education agencies typically have little interaction with the people responsible for school reform. Many district leaders come from a literacy background and don’t fully appreciate how math and science are different. Lach urged the collection of data for Indicator 13 on the number of full-time math and science staff in state offices. He noted that Illinois no longer has any full-time people with these responsibilities, and that information about what other states have would give state offices ammunition in going to their state legislatures.

Lach and Evans both urged the STEM indicators effort to agree on something that is good enough to start with and launch the work with stakeholders rather than trying to keep data close-held until an ideal dataset is available.

Drew Gitomer (Rutgers University) raised the question of how to prevent policymakers from focusing on a single indicator and maximizing it, with unintended negative consequences. Lach suggested that this potential pitfall is the reason to focus on multiple indicators and to promote sustained relationships among researchers, policymakers, and practitioners.

The remainder of the meeting was devoted to break-out groups working on four issues selected by the attendees:

  1. Tools and approaches to support states, districts, and schools in using indicator data for education improvement
  2. Ways in which the STEM indicator data can support efforts to improve equity in K-12 STEM education opportunities
  3. Ways to make sure that the findings of the EAGER grants and related STEM education indicator work are presented in a coherent way and help practitioners make sense of weak or mixed results
  4. Strategies for coordinating across EAGER projects with similar goals and methods

Group 1’s discussion included consideration of how STEM indicators 1-8 might be linked to early warning indicators that could be part of the data dashboards used in schools and districts. This would enable linking the indicator data to student achievement outcomes. The group encouraged EAGER projects to share the kinds of data they’re planning to collect with local leaders and to gauge those leaders’ reactions.

Group 2 members talked about the importance of taking an equity perspective on the data from all the indicators, while keeping in mind that the way that perspective is manifested may vary across indicators. The reason for looking at the extent to which instructional materials used in the classrooms of students from different backgrounds align with the NGSS, for example, is the assumption that a lack of opportunity to work with more advanced concepts contributes to achievement gaps. The Department of Education’s Equity Assistance Centers were identified as a resource for bridging between the STEM indicator data and equity improvement efforts.

Group 3 urged the researchers in this effort to declare their expected findings upfront to permit comparing actual findings with predictions. They pointed out that there will be limits to the generalizability of project findings and that these limitations should be stated clearly. A series of briefs, like those produced by the Carnegie Foundation for the Advancement of Teaching, could be an effective mechanism for disseminating a coherent, issue-oriented set of research findings.

Group 4 discussed useful types of communication across projects and ways to coordinate through mechanisms that are part of researchers’ everyday lives, such as joint panel presentations at conferences. The group endorsed the idea of a private website for the EAGER grantees and of SRI-hosted webinars for grantees on topics of mutual interest.

Finally, Sarah Kay McDonald, NSF’s Acting Division Director for DRL, closed this marathon two-day meeting with encouragement to continue the indicator-related research over the longer term, taking advantage of funding opportunities such as PRIME, CORE, and DRK-12.
