The Journal of Applied Instructional Design

Evaluating a Capstone Course

    Graduate Student Efficacy and Instructional Strategies to Accommodate Them
    DOI:10.51869/93ma
    Keywords: Graduate student research, Efficacy, Depth of knowledge, Graduate education, Program design
    This article investigates the depth of knowledge and sense of efficacy of graduate students in a capstone course. Grounded in literature on successful learning, high impact practices, and signature assessments, it contributes to the literature on students' perceived success in conducting graduate-level research and writing up that research. The aim of this study is to determine whether students enrolled in a capstone course increased their depth of knowledge of research processes and whether their efficacy in conducting research increased over the same period. The findings indicate a decrease in many areas of efficacy and the need for an additional course supporting students' research knowledge prior to the capstone. This work indicates that graduate students in service field programs--in the United States and internationally--require support yet often lack the knowledge to ask for the right kind of support. The instructional strategies and course design described here rely upon these results; this is a program evaluation research project.

    Introduction

    This study investigates an authentic assessment housed in the final course students take as they complete their master's degree in education. This course, the capstone, is specifically listed as a high impact practice and authentic assessment (Kuh, 2008). The capstone is a project and paper that requires students to integrate what they have learned across an entire program. In our program, this culmination results in a paper comparable to a master's thesis. In previous semesters, students in this course offered anecdotal feedback about their struggles with such a challenging course. Instructors described a lack of depth in the content areas of the capstone papers, and students indicated a lack of efficacy in completing a daunting project. A study was needed to determine how closely the anecdotal data mirrored formal data. Therefore, this study was a Scholarship of Teaching and Learning (SoTL) project to investigate the depth of knowledge and sense of efficacy reported by students in a capstone course (August & Dewar, 2010). SoTL is the systematic inquiry of student learning, often used in the humanities and most often at the university level (McKinney, 2012).

    This study was designed to determine the depth of knowledge students reported before and after completing the stand-alone capstone course. Additionally, determining students' sense of efficacy and concerns with the course was critical, as student perceptions of learning do not always align with their performance (Brown et al., 2014; Unrau & Beck, 2004). This study allowed for an in-depth look at what students experienced in completing a signature assignment as well as their performance based upon a program assessment rubric (Hazelkorn, 2015). Did students formally report feeling unprepared or overwhelmed by taking this course? Were students appreciative or intimidated by the amount of instructor feedback provided in this course? There is literature indicating that students often have a negative emotional response when receiving critical feedback (Taggart & Laughlin, 2017). This study was done with an intent to increase the student engagement that high impact practices provide (Sweat et al., 2013). Literature on growth mindsets indicates that if students do not feel confident in their ability to complete a challenging task or do not indicate depth of knowledge following completion of a task, they may not remain engaged (Brown et al., 2014). For example, did students perform at a high level even if they perceived their performance as needing improvement? This follows suggestions by some SoTL scholars to apply SoTL work to enhance curriculum and learning (McKinney, 2012). This research is not an exploration but is instead research to improve. The instruction in the course was part of what was being improved--a self-study and SoTL effort by the instructor/author/researcher to improve practice. However, it was also a study on how the program in which the capstone resides could be improved.

    The capstone project and paper is the sole focus of, and only assignment in, the course being investigated. This assignment aligns with research on "lasting learning": assignments requiring students to pursue rigorous activities beyond their comfort zone lead to learning that lasts (Brown et al., 2014). In this study, as will be noted in the findings, student learning was measured by a rubric based upon course standards. The results showed that students believed they had accomplished more than their assessment results indicated (Brown et al., 2014; Carlisle & Kruzich, 2013; Lambie et al., 2013). Additionally, students perceived they were performing beyond the level indicated by multiple assessments. Multiple studies note this as typical of graduate students (Christie et al., 2015; Lambie et al., 2013). This resulted in a lack of efficacy and a failure to develop a growth mindset; students believed they were performing quite well in the course and that their writing needed little improvement. When the faculty team met to look over the results, each instructor who taught the course shared what he or she had experienced over the previous several years. Based upon those experiences and this student data, the team decided to break the one course, described here, into two courses. This was done to allow more time to explicitly teach APA and research methods in course 1 and data collection, analysis, and writing in course 2.

    Problem Statement

    Students indicated difficulty in completing a signature assignment at the end of their master's program in education. In order to formally investigate this anecdotal information, a study was designed to investigate students' sense of efficacy in completing the assignment and their depth of knowledge before and after the course. This study is therefore an investigation into the depth of knowledge and sense of efficacy of students in a capstone course at the end of a master's degree program using pre- and post-surveys, midpoint open-ended survey questions, and observation. This study is based upon data from one course only and is intended to provide a description of the students’ learning as they completed a capstone course over one semester. Additionally, the study was designed to trace students’ changing sense of efficacy during the process. Lastly, this is a SoTL study that used qualitative inquiry in the form of a longitudinal panel study (Creswell, 2012). This methodology was most appropriate given the need to look in depth at a small population and in response to the following research questions:

    • Sub-question: Did the capstone course increase the depth of student knowledge with regard to formulating a statement of the problem and research questions based upon what they have learned in their program, followed by a review of literature and the execution and write-up of a research project?

    Literature Review

    As indicated by the problem statement, there was a question about how much students learned in their capstone course. The question concerned their depth of knowledge: how much did students know about conducting research when they left the program? There was also a question about their sense of efficacy. Upon completing a signature assignment (the capstone) requiring the integration of so many skills, did students feel more confident in their ability to design and execute a research project? Finally, there was a question of what was meant by a signature or authentic assessment. A signature assignment is listed as a high impact practice in the field of education. These are assignments that allow students to show depth of knowledge and critical thinking through some culminating paper, project, or portfolio (AAC&U, 2013). The term "authentic assessment" was first used by Wiggins (1990) in the field of education. Additionally, an authentic assessment is often a signature assignment; it is an assessment that allows students to show the depth of their knowledge (Koh, 2017). How did faculty know that the capstone (as designed) adhered to the norms of authentic assessment? These, therefore, became the themes of the literature review.

    Depth of Knowledge and Growth Mindset

    In order to impact student growth in the classroom setting, teachers and teacher candidates must have a deep understanding of their content (Harris & Sass, 2011; Hashweh, 1987; McConnell et al., 2013). Professional development in these content areas is widely cited as impactful in increasing the depth of content knowledge in teachers (Czerniak & Chiarelott, 1990; McConnell et al., 2013; Reeves, 2000; Stewart, 2014). When teachers return to university for a graduate degree, this can be an extended form of professional development. When a teacher pursues a degree in the field they are most often assigned to teach, they are seeking professional development in that content area.

    The specifics of what teachers should know, and how much of it they should know, have largely gone unquantified (Grossman & Richert, 1988). Pedagogical content knowledge is generally understood as the complex process of coming to terms with what a teacher must know to effectively teach his or her content area (Grossman & Richert, 1988; Shulman, 1987). The depth of content knowledge needed to successfully impact student learning has been of interest to policymakers since the passing of the No Child Left Behind Act of 2001 (Klein, 2015). The requirement of No Child Left Behind for "highly qualified teachers" (Hill et al., 2005) demanded that content knowledge be quantifiable. Quantifying that content knowledge has largely been done by investigating courses completed and degrees obtained (Hill et al., 2005). While this method provides administrators and policymakers with a big-picture view of how teachers are prepared, it does little to investigate how teachers use that content knowledge to impact student achievement. More recent studies suggest that how teachers apply content knowledge, as well as their knowledge of teaching pedagogy, matters more than the number of courses taken in their specific content areas (Darling-Hammond & Richardson, 2009; Darling-Hammond, 2012). Multiple studies show that knowing how to teach students in ways that they can learn--defined by using culturally relevant pedagogy--allows for a depth of knowledge that is more impactful on student achievement (Darling-Hammond, 2012; Ladson-Billings, 1998).

    Efficacy in Conducting Research

    Bandura (1997) situates self-efficacy within social-cognitive theory. The theory posits that self-efficacy is a perception of self, comprising "beliefs in one's capabilities to organize and execute the courses of action required to produce given attainments" (p. 3). For graduate students in a research course, this means that self-efficacy is needed to believe that completing a project is possible. This belief is based upon both their knowledge of and ability to execute the steps of the research process. Additionally, a large-scale review by van Dinther et al. (2011) presented the following finding:

    [E]nactive mastery experiences are stated as the most powerful source of creating a strong sense of efficacy. With regard to this… almost every study stresses the relevance of providing students with practical experiences, i.e. students performing a task while applying knowledge and skills within demanding situations. (p. 11)

    This review of self-efficacy among students in higher education investigated how to foster self-efficacy for challenging tasks; completing challenging tasks and providing authentic experiences emerged as critically important. This indicates that students must perform a task--in this case, the research project--and receive feedback on their performance to increase their efficacy.

    While limited studies have investigated what factors must be in place to foster graduate students' sense of self-efficacy in conducting research, several studies have linked undergraduate research experience with the decision to pursue graduate degrees with a research emphasis (Russell et al., 2007). Those studies indicate that students must experience success conducting research in order to pursue additional degrees where it is required. There are also investigations into the research environment--including program expectations and the amount of time spent learning about research prior to conducting any studies--and the connection between the environment and productivity (Phillips & Russell, 1994). These studies indicate that students who feel productive and successful conducting graduate-level research are in programs where they are supported by faculty and given ample time to conduct their projects. Finally, there is a connection between the immersion of graduate students in research and their self-efficacy in conducting research (Phillips & Russell, 1994; Russell et al., 2007).

    Methods

    The research questions led to a pre- and post-survey with identical questions. Thus, as mentioned previously, a longitudinal panel study (Creswell, 2012) was most appropriate. The panel study was created to study the same people over time while investigating some identified criteria. The identified criteria in this study are “depth of knowledge” and “sense of efficacy” as well as how these two criteria change over time. Therefore, the defining characteristic demanding a panel study was being enrolled in the capstone course in a specific semester--specifically, the last semester before program revisions were implemented. To supplement survey data, however, the researcher added open-ended questions and used observation for analysis. For observation, a reflexive journal was maintained by the researcher. All data were entered into Dedoose and will be described further in the analysis section of this paper. Additionally, a midpoint survey with open-ended questions about the students' process was included to determine if the pre- and post-surveys were measuring changes due to the course. That tool is also included in the analysis.

    Context of the Study

    This study examined graduate students' changing sense of efficacy in conducting research as well as their depth of knowledge about research in a capstone course. As mentioned before, the graduate program is a master's in education. The study took place at a regional university in the southeastern United States, and the participants were eight students enrolled in a capstone course during the spring semester of 2018. The participants consisted of seven females and one male: three white females under 30 years old, two white females over 30, one African American female under 30, one African American female over 30, and one white male over 30. The participants all had teaching experience, ranging from three to 20 years. The participants primarily lived within 50 miles of the university; however, one participant lived outside of the state where the study occurred. There was also a range of teaching responsibilities. Most (five participants) were teaching in a K-12 setting; of those five, two were teaching in an elementary school setting, and three were teaching middle school. Two of the participants were school administrators--one in an elementary school and one in a high school. The final participant was a middle school teacher who was working as a substitute teacher. This was an online program, so all observations were done during optional face-to-face meetings held online via Zoom. Additionally, all surveys were anonymous and were distributed using the course's Desire2Learn (D2L) platform. No IP addresses were collected, in order to maintain anonymity, and the Institutional Review Board approved this study. The surveys were collected at the beginning and end of the semester.

    One unique situational context is that this course was graded formatively until the final capstone was submitted during the final week of the course. The paper was submitted multiple times for feedback; each candidate had the opportunity to make continual revisions until final grades were submitted. It should be noted that the feedback was asset-based and incredibly detailed. Students were provided with an "estimated" grade: what they would have earned on the paper if it were submitted without revision. Students were then given repeated comments through the Comment and Track Changes features in Microsoft Word. This allowed students to focus on both the paper's strengths and areas needing improvement. It also allowed both the current instructor and future instructors to see all of the changes that students made to their papers and the feedback upon which those changes were based. In most cases, the paper submitted by the due date was the final submission--no additional edits were needed.

    Data Collection

    Data collection was guided by the study's research questions, including the following sub-question:

    • Sub-question: Did the capstone course increase the depth of knowledge of students with regard to formulating a statement of the problem and research questions based upon what they have learned in their program, followed by a review of literature and the execution and write-up of a research project?

    The survey questions were aligned with categories on the capstone rubric. The first survey addressed efficacy. It consisted of 15 questions, with five questions each devoted to students' confidence in their abilities related to research concepts, confidence in their knowledge of the research process, and confidence in their knowledge of APA format (Appendix A). This survey was administered in the first week of class and again during the final week of class. The survey included five Likert-scale options as possible responses, ranging from indicating "no knowledge" of the criteria to "mastery-level understanding" of the criteria (Appendix A).

    The second survey was administered identically to the first. It was linked in the D2L course in the first week of class and again in the final week of class. The survey used the same Likert-scale response options (Appendix B). The prompts in this survey were nearly identical to those in the efficacy survey (Appendix A); they asked about depth of knowledge. However, the efficacy survey included the term "confident" and phrased the prompts differently to address students' sense of efficacy in completing the process. The three sections were likewise devoted to research concepts, the research process, and APA formatting (Appendix B).

    At the midpoint of the semester, an open-ended questionnaire was linked through D2L as well. This allowed for anonymous candidate feedback on the process. The first question asked how students felt about completing a major research assignment (efficacy), and the second question asked how their depth of knowledge on completing a literature review had changed. These questions provided insight into the students' process; this was to determine if the course was responsible for the changes that might be seen when looking at the pre- and post-survey data.

    Analysis and Results

    Five out of eight students responded to the pre-surveys. Four out of eight students responded to the "Efficacy" post-survey, and three out of eight responded to the "Depth of Knowledge" post-survey. Each response was a forced single response, meaning that each participant provided only one response per question. The Likert responses were identical for each survey. The possible responses were:

    1. do not understand this concept/have never seen it before
    2. have seen/am somewhat familiar with this concept
    3. feel somewhat comfortable with this concept
    4. understand this concept and/or am able to utilize this skill consistently
    5. master level/I can teach this to others.

    The results are included in Table 1 and Table 2. Both tables are included below in order to emphasize the changes from pre- to post-survey.

    Table 1

    Efficacy Survey Results

    Item Pre-Results Post-Results
    Concepts
    Efficacy on choosing a problem related to content 4.0 4.75
    Efficacy on using scholarly literature appropriately 3.4 4.0
    Efficacy on using practitioner pieces appropriately 3.2 4.0
    Efficacy on connecting literature to problem of practice 3.6 3.75
    Efficacy on importance for students and the field 3.8 4.5
    Process
    Efficacy on pulling literature from multiple databases 4.0 4.75
    Efficacy on connecting literature to conceptual framework 3.4 4.5
    Efficacy on research questions addressing topic 3.8 4.5
    Efficacy on choosing a research method appropriate to answer research questions 3.6 4.25
    Efficacy in ensuring that capstone will reflect changes in thinking across the program 3.4 3.75
    APA
    Efficacy on locating sources and putting them in sections with APA headings 4.0 4.0
    Efficacy on citing appropriately throughout document 4.0 4.0
    Efficacy on evaluating sources prior to citing 3.6 4.0
    Efficacy on citing appropriately in reference section 3.8 3.75
    Efficacy on writing utilizing appropriate APA throughout 3.8 3.5

    The most significant finding illustrated by Table 1 is the decrease in efficacy in two areas. The discussion section of this paper will go into more detail, but "efficacy on citing appropriately in reference section" and "efficacy on writing utilizing appropriate APA throughout" were criteria where students felt more confident at the beginning of the course than they did at the end. Given that the study is grounded in what Brown et al. (2014) call "the science of successful learning," this is important. Graduate students (and all learners) often determine at the end of a course that they were poor judges of when they learned well and when they did not (Brown et al., 2014, p. 3).

    Table 2

    Depth of Knowledge Survey Results

    Item Pre-Results Post-Results
    Content
    I can identify a problem of practice or research topic in my content area. 3.8 3.67
    I can conduct a review of relevant literature. 3.4 3.67
    I can differentiate between scholarly and practitioner articles. 3.6 2.33
    I can connect appropriate literature to my research questions. 4.0 3.67
    I can utilize literature to refine/formulate research questions. 3.6 3.33
    Process
    I can utilize multiple databases to search for literature. 4.0 4.0
    I can construct a conceptual framework and problem statement. 3.4 3.0
    I can craft appropriate research questions. 3.2 3.0
    I can choose an appropriate methodology based upon my research questions. 3.6 2.67
    I can complete the research process. 3.2 2.67
    APA
    I can utilize headings effectively and according to APA. 3.4 3.33
    I can cite sources appropriately within the text of my paper. 3.8 3.33
    I can evaluate sources, including identifying any problems with the article. 3.4 3.33
    I can create a reference page using APA. 3.8 4.0
    I can evaluate an article and find APA errors. 3.2 2.67
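
    The per-item values reported in Tables 1 and 2 are simple means of the 1-5 Likert responses for each prompt. The following is a minimal sketch of how such pre/post comparisons could be reproduced; it is illustrative only and not part of the original study. The file names and the "item" and "response" column labels are hypothetical, and the sketch assumes each CSV row holds one participant's forced single response (1-5) to one survey item.

        import csv
        from collections import defaultdict

        def item_means(path):
            """Return {survey item: mean Likert response} for one survey export."""
            responses = defaultdict(list)
            with open(path, newline="") as f:
                for row in csv.DictReader(f):
                    # Each row is one participant's single 1-5 response to one item.
                    responses[row["item"]].append(int(row["response"]))
            return {item: sum(vals) / len(vals) for item, vals in responses.items()}

        pre = item_means("efficacy_pre.csv")    # e.g., 5 of 8 students responded
        post = item_means("efficacy_post.csv")  # e.g., 4 of 8 students responded

        # Report the pre-to-post change per item, mirroring the table layout;
        # negative changes flag the decreases discussed below.
        for item, pre_mean in pre.items():
            if item in post:
                print(f"{item}: {pre_mean:.2f} -> {post[item]:.2f} ({post[item] - pre_mean:+.2f})")

    Because the surveys were anonymous, comparisons of this kind are only possible at the group level; individual pre/post responses cannot be linked.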

    The Depth of Knowledge survey expanded upon the "Efficacy" survey. Aligned with the findings in Table 1 on being poor judges of personal learning, students' responses revealed many decreases from pre- to post-survey in their depth of knowledge. As noted in Brown et al. (2014), this is likely due to students' use of "calibration" (p. 210). By investigating their "illusions" about their learning and comparing those to their evaluations, students found that they were not as knowledgeable about various criteria as they believed when the course began (Bailey, 2006). This is demonstrated by students' lowered post-survey responses to the criteria listed (Appendix B). The extent to which the students doubted their depth of knowledge is discussed further in the discussion section below.

    In addition to the pre- and post-surveys, an open-ended midpoint survey was administered. Four out of eight students responded to that survey. The first question asked about students' confidence in completing a major research assignment at that point in the course (i.e., at the ten-week mark in a sixteen-week course). One respondent noted that he or she was surprised at how well he or she did on the literary analysis portion of the course, which referred to feedback from the instructor on the literature review section of the capstone paper. Another respondent noted that time management was critical to having confidence in completing a major research assignment: "[T]he past few semesters have been marked by the constant interference of life. But no matter how ahead you get, or fall behind, just keep pushing forward." Increases in confidence attributed to the relevance of the research to participants' teaching contexts appeared often in other responses.

    The second question asked how the students’ depth of knowledge in conducting a literature review had changed. At this point in the course, most students had only completed the literature review and had not done any data collection. Two respondents noted that their depth of knowledge had not changed but did not expound upon whether that meant they still knew very little or whether they felt comfortable with literature reviews prior to the course. A third respondent explained that staying focused was the key to doing well on literature reviews. The fourth respondent noted that while "getting used to writing again was a stumbling block," understanding how to find the literature was critical.

    Discussion

    The capstone course, specifically the process of conducting a research project, did not increase students' sense of efficacy in conducting research or their depth of knowledge of the research process in measurable ways. Generally, the observation data confirmed what was reported in the surveys. Students felt quite confident in their ability to complete a major research assignment and certainly felt confident in their ability to use APA formatting to write up their literature review and data processes when the course began. However, when asked about their depth of knowledge, their responses indicated a lesser depth than their efficacy scale indicated. Additionally, there were often decreases in ratings from the pre- to post-surveys, and where there was an increase, it was minimal. This corroborates previous findings that people are "usually overconfident in their estimated knowledge" (Lundeberg et al., 1994, p. 3). Additionally, the findings parallel previous studies on "calibration in confidence"--meaning that as a person learns, they calibrate their confidence to reflect their increasing knowledge. As students in the current study responded to the post-survey, they realized that, in many cases, they knew less than they had estimated on the pre-survey.

    The midpoint survey was also illuminating in multiple ways. First, it confirmed a note in the researcher's observation journal: there was some confusion on the part of at least two students about the formative grading system. One candidate noted in the midpoint survey that he or she "was surprised by how well [he or she] did on the literary analysis assignment." The literary analysis assignment referenced was likely the graded literature review portion of the paper. However, that portion of the assignment was for a completion grade only; there was substantial feedback on each submission requiring edits prior to the next submission. In future iterations of the course, even after it was revised into two courses--"Capstone 1" and "Capstone 2"--the researcher should begin offering even more detailed formative feedback. In addition to the comments on papers, which remain asset-based and focused on both strengths and areas of improvement, it is now also a course requirement to have two face-to-face (synchronous) digital conferences where they used to be optional. This allows a conversation about the graduate student's work and how it furthers his or her teaching goals.

    Another candidate noted that "middle school students are thankful for their school and lives." This certainly indicates a connection to the research topic and a sense of efficacy that the topic is impacting students. However, that same respondent noted that his or her depth of knowledge had not changed over the capstone course. This prompts the following question: If there is no increase in knowledge on how the research is impacting the students, is the change in efficacy warranted by knowledge of the research? The changes in the scores on the pre- and post-surveys indicate that it is not. One SoTL study indicates that this may be due to exposure; students who practice their craft repeatedly become more confident regardless of any change in their knowledge base (Gormally et al., 2009). A group faculty discussion also indicated that students were rushed in the original version. At that point there was only one course as opposed to both Capstone 1 and Capstone 2, and this meant that students were not able to take the time needed to process so much new information.

    Efficacy Surveys

    The Efficacy surveys were divided into three sections. The first section was generally about research concepts. Questions in this section asked students to rate their confidence level with certain concepts--such as choosing a problem related to their content or using scholarly literature appropriately (see Appendix A). In this first section, students' responses showed some increases in efficacy between the pre- and post-surveys. For example, candidate scores on using scholarly literature appropriately moved from an average of 3.4 (with 3.0 being somewhat comfortable with the concept) to 4.0 (with 4.0 indicating comfort and the ability to perform consistently in this area). This gain reflects the work students did compiling the review of literature and revising that section repeatedly prior to the final submission. The largest increases were in using practitioner pieces appropriately and in feeling confident that the research is important for students and the field; the first went from 3.2 to 4.0, and the latter went from 3.8 to 4.5. In both cases, the increases in scores are logical. Students likely knew little (if anything) about practitioner literature prior to the course, and all students engaged in research directly impacting their classroom or school.

    In each question in the process section of the efficacy survey, students also showed an increase from pre- to post-survey. Students indicated significant increases in confidence in response to two prompts in particular. The first, efficacy on connecting literature to the conceptual framework, increased from 3.4 to 4.5--the largest increase across all surveys. The second measured student efficacy on formulating research questions that addressed the research topic; students' average went from 3.8 to 4.5. In both cases, the increase indicates that students believed they were near mastery level upon completion of the capstone report. These results additionally indicate that research process instruction is a strength of the course design; each section of the paper was turned in as it was developed, and students received feedback with each revision. The process of formulating a question based upon a review of literature, choosing a methodology, determining the conceptual framework, utilizing APA to cite literature, collecting data, analyzing data, and writing up a report was clearly defined for students. Additionally, modules were devoted to the development of those sections.

    However, students' confidence may have been falsely inflated (Brown et al., 2014). Students in this course were observed using performance goals as opposed to learning goals. In setting a learning goal, students aim for learning and growth; in a performance goal, students focus on "validating or showing off . . . ability" (p. 180). For example, the APA manual was examined via a lecture, readings, and a quiz. APA was treated in the original version of the course as a drill of skills with a one-time assessment, whereas the refining of research questions and methodologies was repeatedly woven into later sections of the course. Based on the results of the pre- and post-surveys, it seems clear that this pedagogical approach failed. APA should be woven in as a process for students like other parts of the research process, especially considering that this citation format is updated every few years, requiring all researchers (novice and expert) to re-learn it.

    The final section of the survey was devoted to APA. Students across the college's graduate programs routinely struggled to master APA formatting in their papers, and the survey responses reflected that the students realized this. Two questions reflect no growth; one question reflects a very slight increase; two questions indicate a decrease in efficacy. The decreases measured student efficacy on citing appropriately in the reference section and on writing utilizing appropriate APA formatting throughout the paper. These also reflect the emotional response students sometimes have toward critical feedback (Taggart & Laughlin, 2017). For instance, in the pre-survey, students' responses indicated that they felt very confident in their APA knowledge, but as they received feedback throughout the course, their confidence decreased. In the new program model, Capstone 1 dedicates more time (three weeks) to ensuring that students know how to use APA to format, revise, and cite in their formal research.

    Depth of Knowledge Surveys and Growth Mindset

    The depth of knowledge surveys offer an interesting contrast to the efficacy survey. The three sections were quite similar. Prompts on the depth of knowledge survey did not include terms of confidence, but they did use the phrase "I can." The phrasing of prompts was often adjusted, and students certainly responded to those differences. Where students' responses reflected an increase in efficacy toward all five content prompts in the efficacy survey, the responses showed a decrease in depth of knowledge in four out of five of those same prompts. In response to "I can differentiate between scholarly and practitioner articles" (Table 2), students went from 3.6 (indicating familiarity) to 2.33 (indicating less familiarity). Yet in the efficacy survey, students indicated increases in confidence from the low 3s to 4.0 in using both scholarly and practitioner articles. This difference may be, in part, why the researcher did not see any changes in--or development of--a growth mindset during the course.

    Finally, responses to the APA section were a closer reflection of what was demonstrated in the efficacy survey. Students showed a decrease in response to four out of five prompts. The reference page was the only prompt where students' responses indicated an increase in their depth of knowledge; however, that increase was only from 3.8 to 4.0. This also mirrors what was included in the researcher's observation log. The students demonstrated, both verbally and in the drafts of their capstone papers, a thorough understanding of research concepts; they understood why and how to locate literature to support their topic and were able to connect their research questions to a conceptual framework and an appropriate methodology. Each draft and each optional conference also revealed students' increasing knowledge of the research process; the midpoint survey responses likewise indicated that practice led to higher quality in their writing. APA formatting was not discussed during the conferences, but it clearly should have been, since it was the area of weakness for most students. This change was implemented as an instructional improvement right away.

    This survey's responses mirror literature on "illusions of knowing" (Brown et al., 2014, p. 15). Students who are exposed to material repeatedly (e.g., students hearing certain academic terms repeatedly, as in the case of this research course) believe they have achieved mastery. When faced with critical, constructive feedback and forced to "reflect" and "calibrate" (p. 210), students realized that they still had much to learn to become competent in research. This growth mindset may be what students need to become more proficient beyond the course, and it explains why they showed a decrease from pre- to post-survey.

    Conclusions and Limitations

    This study affirms the literature and what students described: an additional course devoted solely to reviewing literature and APA formatting is needed. The study also provides information that can be used as points of comparison nationally and internationally, as well as information for other programs using authentic assessments such as a capstone course. Students in this study were not able to articulate increased depth of knowledge at the completion of the course and paper. Additionally, graduate students in programs of education may not know how to articulate what support they need to successfully complete assessments like thesis papers, capstones, or dissertations.

    Relatedly, a limitation of the study is that a pre- and post-survey is likely not the most effective way to determine what the course and project have done for students' efficacy and depth of knowledge (Hess et al., 2009). In future studies, a pre- and post-survey may still be administered, but interviews and a focus group would better determine causality. An interview would allow probing questions on why, for example, a candidate's sense of efficacy in using APA decreased over time. In order to use this study to improve teaching, using more open-ended questionnaires at multiple points throughout each semester is prudent. There should also be a voluntary focus group session at the midpoint of the course for students to give feedback on growth areas based upon their initial drafts. This may not only add to SoTL scholarship on authentic assessments and high impact practices (e.g., capstone projects and master's level research), but it may also improve students' satisfaction with the course and program.

    As self-efficacy is a predictor of graduate student success, this study adds to the literature encouraging the development of efficacy through appropriate supports (Huerta et al., 2017). As colleges of education--and service field graduate programs more generally--consider the needs of graduate students, providing supports that include content knowledge and knowledge of academic writing demands is critical (Brown et al., 2014; Darling-Hammond, 2012).

    References

    Association of American Colleges and Universities. (2013). High-impact educational practices: A brief overview. http://www.aacu.org/leap/hip.cfm

    August, S. E., & Dewar, J. (2010). SoTL and community enhance one another to create impact at Loyola Marymount University. Transformative Dialogues: Teaching & Learning Journal, 4(1), 1-15. https://edtechbooks.org/-pvC

    Bailey, J. G. (2006). Academics' motivation and self-efficacy for teaching and research. Higher Education Research & Development, 18(3), 343-359. https://edtechbooks.org/-QjR 

    Bandura, A. (1997). Self-efficacy: The exercise of control. Macmillan.

    Brown, P. C., Roediger, H. L., & McDaniel, M. A. (2014). Make it stick: The science of successful learning. The Belknap Press of Harvard University Press.

    Carlisle, S. K., & Kruzich, J. M. (2013). Increasing student evaluation capacity through a collaborative community-based program evaluation teaching model. Journal of the Scholarship of Teaching and Learning, 13(4), 85-102. https://edtechbooks.org/-FfrV 

    Christie, M., Grainger, P., Dahlgren, R., Call, K., Heck, D., & Simon, S. (2015). Improving the quality of assessment grading tools in master of education courses: A comparative case study in the scholarship of teaching and learning. Journal of the Scholarship of Teaching and Learning, 15(5), 22-35. https://edtechbooks.org/-zzv

    Creswell, J. (2012). Research design: Qualitative and quantitative approaches. Sage.

    Czerniak, C., & Chiarelott, L. (1990). Teacher education for effective science instruction – A social cognitive perspective. Journal of Teacher Education, 41(1), 49-54. https://doi.org/10.1177/002248719004100107

    Darling-Hammond, L., & Richardson, N. (2009). Research review/teacher learning: What matters. Educational Leadership, 66(5), 46-53. https://edtechbooks.org/-Gue 

    Darling-Hammond, L. (2012). Powerful teacher education: Lessons from exemplary programs. John Wiley & Sons.

    Gormally, C., Brickman, P., Hallar, B., & Armstrong, N. (2009). Effects of inquiry-based learning on students' science literacy skills and confidence. International Journal for the Scholarship of Teaching and Learning, 3(2). https://edtechbooks.org/-xSQ 

    Grossman, P. L., & Richert, A. E. (1988). Unacknowledged knowledge growth: A re-examination of the effects of teacher education. Teaching and Teacher Education, 4(1), 53-62. https://edtechbooks.org/-ntT 

    Hashweh, M. Z. (1987). Effects of subject-matter knowledge in the teaching of biology and physics. Teaching and Teacher Education, 3(2), 109-120. https://edtechbooks.org/-mNwK

    Harris, D. N., & Sass, T. R. (2011). Teacher training, teacher quality and student achievement. Journal of Public Economics, 95(7-8), 798-812. https://edtechbooks.org/-Pro 

    Hazelkorn, E. (2015). Rankings and the reshaping of higher education: The battle for world-class excellence. Springer.

    Hess, K. K., Jones, B. S., Carlock, D., & Walkup, J. R. (2009). Cognitive rigor: Blending the strengths of Bloom's Taxonomy and Webb's depth of knowledge to enhance classroom-level processes. Retrieved from ERIC. (ED517804).

    Hill, H. C., Rowan, B., & Ball, D. L. (2005). Effects of teachers' mathematical knowledge for teaching on student achievement. American Educational Research Journal, 42(2), 371-406. https://doi.org/10.3102/00028312042002371

    Huerta, M., Goodson, P., Beigi, M., & Chlup, D. (2017). Graduate students as academic writers: Writing anxiety, self-efficacy and emotional intelligence. Higher Education Research & Development, 36(4), 716-729. https://edtechbooks.org/-shnF 

    Klein, A. (2015). No Child Left Behind: An overview. Education Week. https://www.edweek.org/ew/section/multimedia/no-child-left-behind-overview-definition-summary.html

    Koh, K. H. (2017). Authentic assessment. In Oxford research encyclopedia of education. Oxford University Press.

    Kuh, G. D. (2008). High-impact educational practices: What they are, who has access to them, and why they matter. AAC&U.

    Ladson-Billings, G. (1998). Just what is critical race theory and what's it doing in a nice field like education? International Journal of Qualitative Studies in Education, 11(1), 7-24. https://edtechbooks.org/-Krpy 

    Lambie, G. W., Ieva, K. P., & Mullen, P. R. (2013). Graduate counseling students' learning, development, and retention of knowledge. Journal of the Scholarship of Teaching and Learning, 13(4), 54-67. https://edtechbooks.org/-DWta 

    Lundeberg, M. A., Fox, P. W., & Punccohar, J. (1994). Highly confident but wrong: Gender differences and similarities in confidence judgments. Journal of Educational Psychology, 86(1), 114. https://edtechbooks.org/-goao 

    McConnell, T. J., Parker, J. M., & Eberhardt, J. (2013). Assessing teachers' science content knowledge: A strategy for assessing depth of understanding. Journal of Science Teacher Education, 24(4), 717-743. https://edtechbooks.org/-akp 

    McKinney, K. (2012). Increasing the impact of SoTL: Two sometimes neglected opportunities. International Journal for the Scholarship of Teaching and Learning, 6(1). https://doi.org/10.20429/ijsotl.2012.060103

    Phillips, J. C., & Russell, R. K. (1994). Research self-efficacy, the research training environment, and research productivity among graduate students in counseling psychology. The Counseling Psychologist, 22(4), 628-641. https://edtechbooks.org/-eifa 

    Reeves, T. C. (2000). Enhancing the worth of instructional technology research through "design experiments" and other development research strategies. International Perspectives on Instructional Technology Research for the 21st Century, 27, 1-15. https://edtechbooks.org/-eCQs 

    Russell, S. H., Hancock, M. P., & McCullough, J. (2007). Benefits of undergraduate research experiences. Science, 316(5824), 548-549. https://edtechbooks.org/-PNRz

    Shulman, L. S. (1987). Knowledge and teaching: Foundations of the new reform. Harvard Educational Review, 57(1), 1-22. https://edtechbooks.org/-Jbm

    Stewart, C. (2014). Transforming professional development to professional learning. Journal of Adult Education, 42(1), 28. https://edtechbooks.org/-Smwv 

    Sweat, J., Jones, G., Han, S., & Wolfgram, S. M. (2013). How does high impact practice predict student engagement? A comparison of white and minority students. International Journal for the Scholarship of Teaching and Learning, 7(2). https://doi.org/10.20429/ijsotl.2013.070217

    Taggart, A. R., & Laughlin, M. (2017). Affect matters: When writing feedback leads to negative feeling. International Journal for the Scholarship of Teaching and Learning, 11(2). https://edtechbooks.org/-NZyC 

    Unrau, Y. A., & Beck, A. R. (2004). Increasing research self-efficacy among students in professional academic programs. Innovative Higher Education, 28(3), 187-204. https://edtechbooks.org/-VwH 

    Van Dinther, M., Dochy, F., & Segers, M. (2011). Factors affecting students' self-efficacy in higher education. Educational Research Review, 6(2), 95-108. https://edtechbooks.org/-xuN 

    Wiggins, G. (1990). The case for authentic assessment. Practical Assessment, Research, and Evaluation, 2(1), 2. https://edtechbooks.org/-YZuo

    Appendix A

    Efficacy Survey

    1. I agree to participate in this research study (survey moves forward)
    2. I do not agree to participate (survey does not appear)

    Directions: This survey is designed to provide the program faculty with a better understanding of the depth of knowledge students report in elements of completing the capstone.

    For all questions, the responses are:

    1=do not know/do not understand

    2=have seen/am somewhat familiar

    3=feel somewhat comfortable

    4=understand and/or am able to do consistently

    5=master level/could teach to others

    Concepts

    1. I am confident that my project addresses a problem relevant to my content
    2. I am confident that my project utilizes scholarly literature appropriately
    3. I am confident that my project utilizes practitioner pieces appropriately
    4. I am confident that my project connects the literature to my problem of practice
    5. I am confident that the problem of practice I identified is important to my students and my field

    Process

    1. I am confident that I am able to pull relevant literature from multiple databases
    2. I am confident that I am able to connect literature to the conceptual framework
    3. I am confident that the research questions address the topic I want to investigate
    4. I am confident that I chose an appropriate method to investigate the research questions
    5. I am confident that the signature assignment (capstone) appropriately reflects the progress I have made in my thinking across the program

    APA

    1. I am confident that I am able to locate sources and place them in appropriate headings in my paper
    2. I am confident that I am able to cite sources appropriately using APA in the text of the document
    3. I am confident that I am able to evaluate sources in order to determine if they are methodologically sound
    4. I am confident that I am able to cite sources appropriately in the references
    5. I am confident that my writing, including headings, appropriately utilizes APA

    Appendix B

    Candidates' Depth of Knowledge Survey

    Completing a Capstone (Signature Program Assignment)

    1. I agree to participate in this research study (survey moves forward)
    2. I do not agree to participate (survey does not appear)

    Directions: This survey is designed to provide the program faculty with a better understanding of the depth of knowledge students report in elements of completing the capstone.

    For all questions, the scale is:

    1 – Do not know/understand this concept/have never seen this before

    2 – Have seen and/or am somewhat familiar with concept

    3 – Feel somewhat comfortable with this concept

    4 – Understand this concept and/or am able to utilize this skill consistently

    5 – Mastery level – I could teach this to others

    Section 1: Content

    1. I can identify a problem of practice or research topic in my content area
    2. I can conduct a review of relevant literature of scholarly articles
    3. I can differentiate between scholarly and practitioner articles
    4. I can connect appropriate literature to my research questions
    5. I can utilize literature to refine/formulate research questions

    Section 2: Process

    1. I can utilize multiple databases to search for literature
    2. I can construct a conceptual framework/problem statement
    3. I can craft appropriate research questions
    4. I can choose an appropriate methodology based upon my research questions
    5. I can complete the research process (from conceptualization to write up of project)

    Section 3: APA

    1. I can utilize headings effectively and according to APA
    2. I can cite sources appropriately within the text of my paper
    3. I can evaluate sources, including identifying any problems with the article methodologically
    4. I can create a reference page using APA
    5. I can evaluate an article and find APA errors
