    The Journal of Applied Instructional Design

    Participants’ Perceptions of Burden During the Needs Assessment Process

    Instructional Design, Needs Analysis, Needs Assessment, Consultants
    Needs assessments are often avoided due to perceptions of the burden associated with them. While most research focuses on the facilitator perspective, this research leverages the Perceived Burden in Needs Assessment Participants Survey (α = 0.86) to explore the participant perspective. Most participants reported low levels of burden (n = 244, M = 2.97, SD = 0.88), debunking the myth of severe levels of needs assessment burden. The results also yielded implications for NA practice, including that practitioners should: 1) make use of extant data, 2) ensure tasks and recommendations are reasonable, 3) minimize what participants must give up, 4) remain flexible, and 5) seek understanding.

    Introduction

    Instructional design (ID) is the “science and art of creating detailed specifications for the development, evaluation, and maintenance of situations which facilitate learning and performance” (Richey et al., 2011, p. 3). While the design and implementation of interventions can be most readily envisioned as linked to ID, the field and its practice have a great deal of depth. One of the ultimate goals of ID is to achieve learning and other relatively permanent changes in behavior or performance for the better (Mayer, 1982). Needs assessment (NA) is a tool that can help ID practitioners achieve that end; it supports quality decision-making that can lead to improved learning outcomes or performance (Watkins et al., 2012). Unfortunately, NA is not leveraged as much as it could be. NAs are sometimes deemed inconvenient or unfeasible due to the constraints of time or the perceived drain on resources (Cervero & Wilson, 2006; Zemke, 1998). In fact, they are often avoided or relabeled with other names (Adams et al., 2021; Watkins et al., 2012) to diminish misconceptions of burden (Pinckney-Lewis, 2021b).

    This research seeks not only to explore these perceptions of the NA process but also to fill a gap in the literature (e.g., Altschuld & Witkin, 2000; Guerra-Lopez, 2018; Kaufman & Guerra-López, 2013; Stefaniak, 2020; Watkins, 2014; Wedman, 2014), which most often explores NA challenges from the practitioner point of view (Bates & Holton, 2002; Zemke, 1998). This research centers the actual lived experience of NA participants and addresses the dearth of literature focusing on their perceptions of burden. When we look at NA holistically, participants play an essential role in the process by serving as partners or data sources (Watkins et al., 2012). Therefore, it is crucial to examine their perceptions of burden in the process to see whether reported claims of burden are warranted. Examining the complexities of the participant experience within NA will help to inform ID practice and ultimately enhance the participant experience going forward.

    Within their (2021a) mixed-methods study, the researcher explored several aspects of the phenomenon of perceived burden in the needs assessment experience. The overall study included 1) the development and validation of a scale to measure NA participants’ perceived burden in the NA process, 2) the results of NA participant responses to the scale, 3) a comparative case study of NA participant experiences, and 4) a cumulative case study of NA facilitators’ experiences and perceptions. This article focuses solely on the second portion of the larger study: NA participants’ experiences and the recommendations resulting from that inquiry. Specifically, this article will address these questions:

    1. How do participants in needs assessments rate their perceived burden in the process?
    2. Do participant perceptions of burden ratings vary across organizational contexts, constituent types, and/or lengths of affiliation with the organization?
    3. Which aspects of the NA process are rated the most and least burdensome for participants?

    Literature Review

    To understand the participant experience within needs assessment (NA), it is important to understand what needs assessment is. NAs aim to identify gaps between what is and what is desired to be (Watkins & Kavale, 2014). The researcher operationally defines NA as the data-driven search for opportunities to maximize individual, team, or organizational performance by contributing to the effectiveness, efficiency, and/or ease of supporting organizational goals (Pinckney-Lewis, 2021b; Pinckney-Lewis & Baaki, 2020a, 2020b). NAs can be leveraged both when there is a suspected problem with performance and proactively, to determine the level of success of current performance (Kaufman & Watkins, 1999; Pinckney-Lewis & Baaki, 2020b; Watkins et al., 2012). While there are several models of NA ranging in formality and rigor, this research does not limit its focus to any one model, accepts that needs assessment in practice lives across the spectrum of these practices, and seeks to understand the participant experience regardless of the model used.

    Defining Perceived Burden

    It is important to also define burden as it may surface within NA. While well studied within the medical field (e.g., the Disease Burden Morbidity Assessment (Wijers et al., 2017) and the Perceived Family Burden Scale (Nielsen et al., 2016)), the study of burden is not well established within the NA literature. This research explores four realms within NA where burden may manifest: 1) the duties, obligations, and responsibilities participants are asked to fulfill; 2) the cost (i.e., what they must give up) of participating; 3) how they perceive the technical credibility of the NA facilitator; and 4) how well they perceive the NA facilitator to be able to navigate the organizational system (Pinckney-Lewis, 2021b). When it comes to duties, obligations, and responsibilities, NA participants can be tasked in various ways. They may 1) provide project scoping or oversight (Altschuld & Kumar, 2010; Kaufman & Guerra-López, 2013; Witkin & Altschuld, 1995); 2) supply various extant data sources or serve as gateways to other data forms (Kaufman, 1977; Kaufman & Guerra-López, 2013; Rossett, 1982); 3) provide data themselves via focus groups, interviews, or surveys (Altschuld & Kumar, 2010; Leigh et al., 2000; Stefaniak, 2020; Watkins et al., 2012); or 4) otherwise remain involved over time. Coming from expectancy-value models, cost is “what an individual has to give up to do a task, as well as the anticipated effort one will need to put into task completion” (Eccles, 2005, p. 113). The sacrifices participants make affect their experience. Finally, when NA participants interact with facilitators, they need to feel heard while also entrusting the practitioners to be credible and flexible while causing minimal disruption to the organizational social system. To do so, NA facilitators must be able to understand and navigate organizational systems, power, interests, negotiation schemas, and responsibility (Cervero & Wilson, 2006; Pinckney-Lewis, 2021a; Stefaniak, 2020; Wilson & Cervero, 1996).

    Defining Needs Assessment Participant Types

    To achieve the best practice of triangulation within the NA, collecting data from entities at various organizational levels is essential (Stefaniak, 2020; Witkin & Altschuld, 1995). To that end, it is important to define the NA participant types of interest within this research. Generally, participants are those NA constituents who are not responsible for the analysis, findings, or results of the effort. For additional clarification, the researcher described NA participants as fulfilling one of three major roles: Clients, Data Providers, and/or Stakeholders. Clients are those who either request the NA or are the primary recipients of NA results. Data Providers are those responding to surveys, participating in interviews or focus groups, and/or providing documentation to contribute to the NA. Finally, Stakeholders are any others with a vested interest in the organization and/or the outcomes of the NA. While it is common for participants to identify with more than one constituent type, participants in this research had to have served in at least one of these roles (Pinckney-Lewis, 2021b).

    Methodology

    To assess perceived burden among NA participants, the researcher leveraged the Perceived Burden in Needs Assessment Participants Survey (PBNAPS) (α = 0.86) (Pinckney-Lewis, 2021b). While the rigorous scale development and validation process from the first portion of the mixed-methods study is described in detail in Pinckney-Lewis & Lynch (in process), it is important to note the researcher modified items from Pinckney-Lewis’ (2019) scale and Flake et al.’s (2015) Expectancy-Value Scale, in addition to crafting new items, to cover four components: 1) Perceptions of Duties, Obligations, and Responsibilities (PDOR), 2) Perceptions of Cost (POC), 3) Perceptions of Practitioner Skills (PPS), and 4) Perceptions of Practitioner Organizational Sensitivities (PPOS).
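
    The α reported above is an internal-consistency figure, conventionally computed as Cronbach’s alpha. As a minimal sketch of that computation (the item responses below are invented for illustration and are not PBNAPS data):

        import numpy as np

        def cronbach_alpha(items: np.ndarray) -> float:
            """Cronbach's alpha for an (n_respondents, n_items) matrix of Likert scores."""
            items = np.asarray(items, dtype=float)
            k = items.shape[1]                          # number of items
            item_vars = items.var(axis=0, ddof=1)       # variance of each item
            total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
            return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

        # Hypothetical example: 5 respondents answering 4 items on a 7-point scale
        scores = np.array([
            [2, 3, 2, 1],
            [5, 4, 6, 5],
            [3, 3, 2, 4],
            [6, 5, 7, 6],
            [1, 2, 1, 2],
        ])
        print(f"alpha = {cronbach_alpha(scores):.2f}")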

    After a subject matter expert beta review and a pilot, the researcher applied a 7-point Likert scale and deployed the instrument via Qualtrics™ to reach as many diverse participants as possible (Vito & Higgins, 2015; Watkins & Altschuld, 2014). Through a combination of criterion and maximum variation sampling (Hays & Singh, 2012), participants were required to represent at least one completed NA project as well as to be aware they had served as at least one of the constituent types. However, the researcher did not restrict the organizational types or contexts within which these projects took place. While participation was voluntary, every research participant had the option to enter a lottery for one of five $25 gift cards.

    Participants

    While 381 self-selected individuals visited the PBNAPS online, the researcher eliminated 137 respondents for either not providing consent or not completing enough of the survey to receive an overall PBNAPS score. Therefore, 244 total participants were included in the overall analyses. As was the goal of the research, participants represented various organizational contexts (see Table 1), organizational affiliation types (see Table 2), and lengths of affiliation (see Table 3).

    Table 1

    Summary of PBNAPS Respondents’ Organizational Context Types

    Organizational Type # Respondents % Respondents
    Government entity (i.e., county, state, or federal level) 111 45
    For-profit entity 36 15
    Non-profit entity 73 30
    No response provided 8 3
    Other 29 12
    “Other” Organizational Context References
    Education Sector 24
    Charter Schools 1
    Higher Education 9
    Private Schools 2
    Public Schools 12
    Medical Sector 4
    Clinic 1
    Doctor’s Office 2
    Hospital 1
    Family 1

    Table 2

    Summary of PBNAPS Respondents’ Organizational Affiliation Types

    Affiliation Type # Respondents % Respondents
    Customer or Client 53 21.63
    Employee 105 42.86
    Executive-level Leader 16 6.53
    Manager/Supervisor 39 15.92
    Partner 10 4.08
    Volunteer 12 4.90
    More than One Affiliation Type 7 2.86
    Blank 6 2.45
    Otherᵃ 14 5.71

    ᵃNote: Other affiliations listed by participants include parents, retired employees, teachers, administrators, students, and having no known affiliation.

    Table 3

    Summary of PBNAPS Respondents’ Years of Organizational Affiliation

    Affiliation Length # Respondents % Respondents
    <1 year 27 11.02%
    1 – 3 years 55 22.45%
    4 – 6 years 55 22.45%
    7 – 10 years 48 19.59%
    11+ years 53 21.63%

    Analysis

    To explore how participants rated their perceived burden in the NA process, the researcher leveraged quantitative, descriptive statistics of the survey results, including the overall scores, mean scores, and standard deviations of PBNAPS scores for all respondents and for each of the individual items. To analyze PBNAPS results, the researcher divided the 7-point Likert scale into three segments: scores of 4.5 or above were considered high (n = 15, 6.1%); scores between 3.3 and 4.4 were considered medium (n = 76, 31.1%); and scores of 3.2 or below were considered low (n = 161, 66.0%) (Pinckney-Lewis, 2021a). To determine whether these reports varied across organizational context, affiliation type, or length of organizational affiliation, the researcher also compared the means of these groups via one-way analysis of variance.
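
    As a minimal sketch of this scoring and segmentation logic, assuming the overall scores are held in a pandas Series (the values below are invented for illustration, not study data):

        import pandas as pd

        # Illustrative overall PBNAPS scores on the 7-point scale (not the study data)
        scores = pd.Series([2.1, 3.4, 4.8, 2.9, 3.3, 5.0, 2.5, 3.1])

        # Descriptive statistics analogous to those reported (n, M, SD)
        print(f"n = {scores.count()}, M = {scores.mean():.2f}, SD = {scores.std(ddof=1):.2f}")

        # Three-segment split: low (3.2 or below), medium (3.3-4.4), high (4.5 or above);
        # because scores are reported to one decimal place, these bin edges are equivalent
        segments = pd.cut(scores, bins=[0, 3.2, 4.4, 7], labels=["low", "medium", "high"])
        print(segments.value_counts().sort_index())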

    Results

    Overall Ratings

    Overall, the PBNAPS respondents (n = 244) reported relatively low amounts of perceived burden (M = 2.97, SD = 0.88) on the 7-point scale (Pinckney-Lewis, 2021b). The distribution of scores was slightly positively skewed (0.39, SE = 0.16) with a kurtosis of 0.02, suggesting the distribution is slightly peaked in the center (Pallant, 2016). Figure 1 summarizes the frequency and distribution of these scores.

    Figure 1

    Summary of PBNAPS Overall Score Distributions

    [Figure: Histogram of the PBNAPS overall score distribution. x axis: PBNAPS Overall Score; y axis: Frequency.]
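
    As a check on these distributional statistics: the reported standard error of skewness is consistent with the common large-sample approximation, the square root of 6/n. A brief sketch follows; the scipy calls noted in the comments assume access to the raw scores, which are not reproduced here:

        import numpy as np

        n = 244  # number of PBNAPS respondents
        # Large-sample approximation for the standard error of skewness
        se_skew = np.sqrt(6 / n)  # sqrt(6/244) ≈ 0.157, matching the reported SE = 0.16
        print(f"SE(skewness) ≈ {se_skew:.2f}")

        # Given the raw scores, the sample statistics would follow from scipy, e.g.:
        #   from scipy.stats import skew, kurtosis
        #   skew(scores, bias=False)                   # reported: 0.39
        #   kurtosis(scores, fisher=True, bias=False)  # excess (Fisher) kurtosis; reported: 0.02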

    PBNAPS Ratings by Organizational Context

    When responding to the PBNAPS, participants could select as many organizational context options as applied to their experience. They were also able to fill in “Other” answers to provide additional context. For the purposes of these analyses, the researcher transformed their input by 1) honoring all written-in responses, 2) recoding all instances where respondents selected more than one organizational context as “More than one organizational context,” and 3) recoding the one written-in “family” organizational context within the “Other/Unspecified” context. As such, the counts for many of the groups described within the Participants section decreased in this analysis.

    The largest number of constituents belonged to the government sector (n = 99), which also had the highest average PBNAPS score (M = 3.15, SD = 0.94). Table 4 summarizes the remaining PBNAPS results by organizational context. When comparing the means of these groups via a one-way analysis of variance, there was no significant difference by organizational context, F(6, 231) = 1.58, p = .154 (Pinckney-Lewis, 2021b).

    Table 4

    Summary of PBNAPS Scores by Organizational Context

    Organizational Context N Average PBNAPS Score SD
    Government 99 3.15 0.94
    Non-Profit 64 2.89 0.91
    For-Profit 33 2.84 0.76
    Education 25 2.91 0.67
    Medical 3 2.59 0.69
    More than one context 9 2.61 0.60
    Other/Unspecified 5 2.49 0.54
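
    The comparison reported above (and the analogous comparisons in the following sections) is a standard one-way ANOVA. A minimal sketch using scipy, with invented placeholder scores rather than the study’s data:

        from scipy.stats import f_oneway

        # Invented placeholder lists of overall PBNAPS scores, one per organizational context
        government = [3.2, 2.8, 3.6, 3.1, 3.4]
        non_profit = [2.9, 2.7, 3.0, 2.8, 3.1]
        for_profit = [2.6, 3.1, 2.9, 2.7, 2.8]

        # H0: all group means are equal; a p-value at or above .05 (as reported, p = .154)
        # means the null hypothesis cannot be rejected, i.e., no significant difference
        f_stat, p_value = f_oneway(government, non_profit, for_profit)
        print(f"F = {f_stat:.2f}, p = {p_value:.3f}")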

    PBNAPS Ratings by Organizational Affiliation Type

    When identifying their organizational affiliation types, PBNAPS respondents were also able to select as many options as were applicable. They again had the option to select “Other” and provide additional details for context. For these analyses, the researcher transformed the responses by: 1) coding those reporting multiple affiliation types within the organization at the most senior level they selected; 2) recoding those who chose “Other” and specified being a paid member of an organization, a parent in relation to an educational setting, a student in relation to an educational setting, or a member of the public as a “Client or Customer”; and 3) including the one case selecting multiple affiliation types that could not be classified by the preceding protocol within the “Other, not specified” group.
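
    As an illustration of this kind of recoding protocol (which also parallels the organizational context recoding above), a small Python sketch follows. The seniority ordering and the write-in matching below are assumptions made for the demonstration, not the researcher’s exact codebook:

        # Assumed seniority ordering, most senior first (illustrative only)
        SENIORITY = ["Executive-level Leader", "Manager/Supervisor", "Employee",
                     "Partner", "Volunteer", "Customer or Client"]

        # Write-in "Other" answers recoded as clients/customers per the protocol
        OTHER_AS_CLIENT = {"paid member", "parent", "student", "member of the public"}

        def recode_affiliation(selected: list[str], other_text: str = "") -> str:
            """Collapse a respondent's multi-select affiliation answers to one type."""
            if other_text.strip().lower() in OTHER_AS_CLIENT:
                return "Customer or Client"
            # Otherwise keep the most senior affiliation the respondent selected
            for level in SENIORITY:
                if level in selected:
                    return level
            return "Other, not specified"

        print(recode_affiliation(["Employee", "Manager/Supervisor"]))  # Manager/Supervisor
        print(recode_affiliation([], other_text="Parent"))             # Customer or Client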

    The largest number of respondents indicated they were organizational Employees (n = 105) and reported an overall average perceived burden rating of 3.10 (SD = 0.79). Table 5 summarizes the remaining PBNAPS scores by affiliation type. When comparing means of PBNAPS scores by affiliation type, there was no significant difference, F(6,230) = 1.38, p = .222 (Pinckney-Lewis, 2021b).

    Table 5

    Summary of PBNAPS Scores by Affiliation Type

    Affiliation Type N Average PBNAPS Score SD
    Volunteer 9 3.19 1.28
    Employees 105 3.10 0.79
    Manager/Supervisor 38 2.87 0.97
    Executive-level Leader 16 2.58 0.95
    Partner 8 2.74 1.50
    Client/Customer 57 2.94 0.74
    Other, not specified 4 2.56 0.46

    PBNAPS Ratings by Length of Affiliation

    Finally, the PBNAPS respondents reported how long they had been affiliated with the organization within which the NA took place. For this demographic, respondents were only allowed to select one time length option. Those respondents who reported less than a year of affiliation with their organization (n = 27) also reported the lowest average perceived burden (M = 2.61, SD = 0.69). Table 6 summarizes the remaining data. Again, a one-way analysis of variance showed the effect of length of affiliation on PBNAPS outcomes was not significant, F(4, 233) = 1.57, p = .183 (Pinckney-Lewis, 2021b).

    Table 6

    Summary of PBNAPS Scores by Length of Affiliation

    Length of Affiliation N Average PBNAPS Score SD
    <1 year 27 2.61 0.69
    1 – 3 years 55 3.11 1.07
    4 – 6 years 55 2.95 0.86
    7 – 10 years 48 2.99 0.83
    11+ years 53 3.03 0.76

    Perceived Burden by PBNAPS Component

    Participant Ratings on Perceived Duties, Obligations, and Responsibilities (PDOR) Subscale Items

    Two hundred sixty-three respondents fully completed the PDOR subscale’s six items (α = 0.53). Overall, these respondents reported an average PDOR subscale score of 3.67 (SD = 1.07). PBNAPS respondents reported the highest average amount of burden in response to item PDOR6: I should not be tasked with addressing any recommendations from the needs assessment (M = 5.35, SD = 1.83). Respondents reported the least perceived burden against PDOR3: The tasks I was asked to complete were reasonable given the scope of my responsibilities within the organization (M = 1.94, SD = 1.34). Table 7 provides a summary of average PDOR subscale results.

    Table 7

    Summary of Average PBNAPS Respondent Scores by PDOR Subscale Item

    ID Item Description Average Score N = 263 SD
    PDOR1 I had few responsibilities within the needs assessment. 4.00 2.10
    PDOR2 I volunteered to participate in the needs assessment. 2.78 2.14
    PDOR3 The tasks I was asked to complete were reasonable given the scope of my responsibilities within the organization. 1.94 1.34
    PDOR4 I had too many responsibilities within the needs assessment. 3.18 1.91
    PDOR5 I was obligated by my organization to participate in the needs assessment. 4.73 2.34
    PDOR6 I should not be tasked with addressing any recommendations from the needs assessment. 5.35 1.83

    Participant Ratings on Perceptions of Cost (POC) Subscale Items

    Two hundred sixty-three respondents completed all six POC subscale items (α = 0.68). Overall, they reported an average POC subscale score of 2.69 (SD = 1.14). The item against which respondents reported the most perceived burden was POC1: I had to give up other commitments to work on this needs assessment (M = 3.11, SD = 2.13). The item with the lowest average score, and therefore the least reported perceived burden, was POC6: The efforts I made to participate in the needs assessment are worth the benefits the organization will gain (M = 2.28, SD = 1.65). Table 8 provides a summary of the POC subscale results.

    Table 8

    Summary of Average PBNAPS Respondent Scores by POC Subscale Item

    ID Item Description Average Score N = 263 SD
    POC1 I had to give up other commitments to work on this needs assessment. 3.11 2.13
    POC2 I have so many other commitments that I could not put forth the effort required for the needs assessment. 2.66 1.79
    POC3 I have put too much energy into this needs assessment. 2.88 1.86
    POC4 The needs assessment required a reasonable amount of effort. 2.89 1.85
    POC5 I was able to complete other tasks required of me while participating in the needs assessment. 2.29 1.64
    POC6 The efforts I made to participate in the needs assessment are worth the benefits the organization will gain. 2.28 1.65

    Participant Ratings on Perceptions of Practitioner Skills (PPS) Subscale Items

    PBNAPS respondents were able to answer the six PPS items (α = 0.84) up to two times if they had more than one known NA facilitator. Within the first round, respondents (n = 240) reported an average perceived burden score of 2.75 (SD = 1.27). Within the second round, respondents (n = 29) reported an average perceived burden score of 2.70 (SD = 1.27). Across both rounds, PPS5: The needs assessment facilitator worked around my schedule was the item with the highest reported average score (first round: M = 3.05, SD = 1.77; second round: M = 3.83, SD = 2.00). Participants across both rounds were also consistent in identifying the item against which they experienced the least amount of burden, PPS3: The needs assessment facilitator explained their process in terms that I did not understand (first round: M = 2.46, SD = 1.68; second round: M = 2.00, SD = 1.49). Table 9 provides a summary of the PPS subscale results.

    Table 9

    Summary of PPS Subscale Results

    ID Item Description 1st Round Average Score N = 263 1st Round SD 2nd Round Average Score N = 29 2nd Round SD
    PPS1 The needs assessment facilitator was a good listener. 2.85 1.68 2.34 1.42
    PPS2 I did not feel understood when interacting with the needs assessment facilitator. 2.80 1.71 2.69 2.02
    PPS3 The needs assessment facilitator explained their process in terms that I did not understand. 2.46 1.68 2.00 1.49
    PPS4 I trusted the needs assessment facilitator to carry out the needs assessment with the appropriate level of rigor. 2.68 1.60 2.48 1.79
    PPS5 The needs assessment facilitator worked around my schedule. 3.05 1.77 3.83 2.00
    PPS6 I was not confident in the needs assessment facilitator’s skills. 2.63 1.75 2.83 2.19

    Participant Ratings on the Perceived Systemic Sensitivity of the Practitioner (PSSP) Subscale Items

    PBNAPS respondents were also able to respond to the PSSP subscale’s seven items (α = 0.83) twice if they had more than one known facilitator. Within the first round, respondents (n = 237) reported an average PSSP score of 2.84 (SD = 1.18), while within the second round, respondents (n = 29) reported an average PSSP score of 2.92 (SD = 0.85). Unlike with the PPS subscale, the items against which participants reported the highest and lowest amounts of perceived burden differed across rounds. Within the first round, they reported the highest average perceived burden score against item PSSP7: The needs assessment facilitator had very little influence on the organization’s decision making (M = 3.45, SD = 1.62). Within the second round, they reported the highest average against PSSP2 (M = 5.31, SD = 2.02). Within the first round, the item against which PBNAPS respondents reported the lowest average perceived burden was PSSP5: The needs assessment facilitator understood the culture of the organization (M = 2.61, SD = 1.57). For the second round, the lowest scored item was PSSP6: The presence of the needs assessment facilitator disrupted organizational productivity (M = 2.07, SD = 1.22). Table 10 provides a summary of the PSSP subscale results.

    Table 10

    Summary of PSSP Subscale Results

    ID Item Description 1st Round Average Score N = 237 1st Round SD 2nd Round Average Score N = 29 2nd Round SD
    PSSP1 The needs assessment facilitator valued my contributions to the needs assessment. 2.66 1.63 2.34 1.42
    PSSP2 The needs assessment facilitator had a solid understanding of how the organization functions. 2.56 1.61 5.31 2.02
    PSSP3 The needs assessment facilitator had difficulty navigating the organizational dynamics. 2.89 1.74 2.69 1.67
    PSSP4 The interests of the needs assessment facilitator overshadowed my own interests. 3.09 1.89 2.52 1.83
    PSSP5 The needs assessment facilitator understood the culture of the organization. 2.61 1.57 2.45 1.70
    PSSP6 The presence of the needs assessment facilitator disrupted organizational productivity. 2.62 1.57 2.07 1.22
    PSSP7 The needs assessment facilitator had very little influence on the organization’s decision making. 3.45 1.62 3.03 1.90

    Discussion

    Debunking the Myth of Severe Levels of Burden in NAs

    Perceptions are powerful. These mental impressions can influence experiences and decision-making. However, even commonly accepted perceptions do not always reflect reality. Because NAs can be falsely perceived as too burdensome even though they contribute substantially to the ID process, it was important to examine whether these perceptions were factual. While the literature suggests NAs are not leveraged as much as possible (Aull et al., 2016), these results suggest that the perceived burden of participants and constituents should not serve as the excuse for such avoidance. This heterogeneous sample of NA participants reported relatively low levels of perceived burden across organizational contexts, affiliation types, and lengths of affiliation with those organizations. These results should help to dismantle incorrect perceptions of severe or elevated levels of burden when conducting NAs. Extreme, negative connotations falsely attributed to needs assessment are not always warranted.

    Not only has this now-debunked myth affected potential NA participants, but there are also implications for NA practitioners. In addition to assuaging fear-based perceptions of NA, these data suggest that ID practitioners should feel empowered to keep NA within their toolbox and to confidently incorporate needs assessment into their practice. In fact, this finding can arm practitioners with data and evidence to assuage any fears potential clients, customers, and participants may have when considering NA as a tool.

    Implications for NA Practice

    While this participant sample did not report overwhelmingly high burden levels, there are lessons to be learned from where PBNAPS respondents reported their highest and lowest amounts of burden. For the myth of the burdensome NA to remain debunked, every effort should be made to ensure participants are not taxed unnecessarily. What participants do, what they give up, and how they interpret the NA facilitator’s competence and systemic sensitivities all play a role in their perceptions of burden. Since NA practitioners have the most agency in shaping the NA process, they can take action to ensure minimal burden. Some practical recommendations include making use of extant data collection and analysis, ensuring NA tasks and recommendations are reasonable, minimizing what participants must give up, remaining flexible, and seeking understanding.

    Make Use of Extant Data Collection and Analysis

    Based on the PBNAPS respondents’ overall averages of perceived burden by component, the most burdensome component was Perceived Duties, Obligations, and Responsibilities (PDOR), where n = 242 (M = 3.67, SD = 1.07). What NA participants are asked to do can greatly influence their experience. Therefore, practitioners should limit participants’ active involvement as much as possible. This can be accomplished by prioritizing extant data analysis, or document analysis (Stefaniak, 2020). Extant data include existing documents or visual materials created outside of the researcher’s presence (Charmaz, 2006; Ralph et al., 2014; Salmons, 2016). The more that can be gleaned from extant data, the more the NA can be conducted without imposing on NA participants (Altschuld & Kumar, 2010; Zemke, 1998).

    Ensure NA Tasks and Recommendations are Reasonable

    Many NA models end with determining recommendations or decisions to address performance needs (Watkins et al., 2012). Within this sample, the most burdensome aspect of participants’ duties, obligations, and responsibilities had to do with their roles in carrying out the recommendations resulting from the NA. Just as PBNAPS participants indicated they feel less burdened when NA tasks are perceived as reasonable, so too must the recommendations that emerge from NAs be reasonable. NA recommendations have systemic implications regardless of magnitude, such that any intervention will impact all of the organization’s moving parts (Stefaniak, 2020). Recommendations have the best chances for adoption when they offer observable results with clear, relative advantages; are not overly complex; and are compatible with the existing organizational system (Kaufman & Guerra-López, 2013; Rogers, 2003; Surry, 1997).

    Minimize What Participants Must Give Up

    While the Perceptions of Cost (POC) subscale showed the lowest overall average of perceived burden for this sample, PBNAPS respondents indicated their highest perceived burden within that subscale came from giving up other commitments to participate in the NA. Therefore, the more NA tasks can be seamlessly incorporated into participants’ existing activities or duties, the less participants perceive they are giving up, and the lower their overall perceived burden. Though this sample indicated less burden because they knew their NA efforts would benefit the organization, participant organizational loyalty does not justify overly taxing participants within the process. Even with such loyalty, NA participants’ interest and willingness to engage will likely decrease over time (Kaufman & Guerra-López, 2013). NA participant tasks must be convenient for them.

    Remain Flexible for Participants

    Similarly, PBNAPS participants reported relatively high burden against the item indicating poor NA facilitator flexibility. In particular, respondents reported the highest average perceived burden within the Perceptions of Practitioner Skills (PPS) subscale when their NA facilitators did not work around the participants’ schedules. In this way, convenience and NA facilitator flexibility go hand in hand (Pinckney-Lewis, 2021b). Remaining flexible within the NA process is required for real-world application and enhances the participant experience (Altschuld & Kumar, 2010; Watkins et al., 2012).

    Seek Understanding

    The notion of understanding came up in several instances within the PBNAPS results. First, within the PPS subscale, respondents reported the lowest average perceived burden against the item gauging whether NA facilitators explained the process in terms participants could not understand. Making sure that participants can grasp what facilitators ask of them, by using plain language and eliminating jargon, is important (Pinckney-Lewis, 2021b; Zemke, 1998). Additionally, one of the highest average ratings of burden arose where participants did not feel understood by the facilitator. So not only must NA participants understand the process, but, in turn, they must also feel understood, valued, and seen (Cervero & Wilson, 2006; Forester, 1989). Participants should feel a part of the process, not like the process is foreign or being done to them.

    Limitations and Future Research

    Lack of Existing Literature

    One of the main challenges within this research is that there is not a long-standing body of literature to guide the inquiry. Examining participant perceptions of burden within NA is relatively novel. Though the PBNAPS was developed through a rigorous development and beta review process, this work should still be considered in its infancy. Replications of this research across additional samples and organizational contexts are necessary to help establish a more explicit space within the literature.

    Lack of “Not Applicable” and “Not Sure” Selection Options

    Because the PBNAPS leveraged an odd-numbered Likert scale, the middle demarcation stood in as the “Neither Agree nor Disagree” option. However, there were no options for respondents to indicate that they felt an item was not applicable to them or that they were not sure whether the item applied to them. For example, while the research did not exclude any NA experiences, two of the PBNAPS subscales did refer to the presence of an NA facilitator. In some NA practices where data are collected via survey, participants may be unaware there is a specific facilitator. For those items, respondents could indicate they “Neither Agree nor Disagree” with the statements referring to facilitators. However, “Not Applicable” or “Not Sure” options may actually be more accurate (Lee et al., 2007). Making these options available in future versions of the PBNAPS is worth pursuing, especially since it may influence discrepancy analysis results.

    Lack of Deeper Demographic Insight

    Similarly, the PBNAPS only solicited high-level demographic information regarding the respondents’ organizational context, their organizational affiliation, and their length of affiliation with the organization. While this is valuable, relevant information, it limits the interpretation of these results, which might be subject to self-selection bias (Bethlehem, 2010). Further research should account for this potential bias and investigate whether perceptions of burden within the NA process vary by race, gender, linguistic diversity, neurodiversity, and/or perceived agency relative to one’s role within the organization. Organizational environments are increasingly diverse, so taking a globalized and culturally sensitive participant perspective is essential (Altschuld & Watkins, 2014). While the results with the current sample were favorable, future NAs must continue to avoid adversely impacting any constituency within an organization.

    Conclusion

    With the hefty task of facilitating learning and improving performance within various organizational contexts, ID practitioners need to be able to access all the tools within their toolbox to achieve that end. NA is a great resource for understanding the difference between the current state and the desired state of learning and performance (Altschuld & Kumar, 2010). The results of this study show that the myth of the overly burdensome NA can be debunked, at least for this sample of participants. These data can be shared with potential customers and clients as evidence that NAs are not as taxing as they may be rumored to be (Pinckney-Lewis, 2021b). The PBNAPS results are promising: the participant experiences reported here suggest that NAs produce more good than burden.

    References

    Adams, C., Baaki, J., & Stefaniak, J. (2021). Challenges faced by certified performance technologists in conducting needs assessment. Performance Improvement Quarterly, 33(4), 419–442. https://doi.org/10.1002/piq.21329

    Altschuld, J., & Kumar, D. (2010). Needs assessment: An overview. Sage. https://edtechbooks.org/-BzM

    Altschuld, J., & Watkins, R. (2014). A primer on needs assessment: More than 40 years of research and practice. In J. W. Altschuld & R. Watkins (Eds.), Needs assessment: Trends and a view towards the future. New Directions for Evaluation, 144, 5–18.

    Altschuld, J., & Witkin, B. (2000). From needs assessment to action: Transforming needs into solution strategies. Sage Publications.

    Aull, J., Bartley, J., Olson, C., Weisberg, L., & Winiecki, D. (2016). Lessons learned while completing a needs assessment of ITSS, Inc. career development opportunities: A case study. Performance Improvement Quarterly, 28(4), 7–26. https://edtechbooks.org/-gvwh

    Bates, R., & Holton, E. (2002). Art and science in challenging needs assessments: A case study. Performance Improvement Quarterly, 15(1), 111–130. https://edtechbooks.org/-XCeu

    Bethlehem, J. (2010). Selection bias in web surveys. International Statistical Review, 78(2), 161–188. https://edtechbooks.org/-SnfL

    Cervero, R., & Wilson, A. (2006). Working the planning table: Negotiating democratically for adult, continuing, and workplace education. Jossey-Bass.

    Charmaz, K. (2006). Constructing grounded theory: A practical guide through qualitative analysis. Sage.

    Eccles, J. (2005). Subjective task value and the Eccles et al. Model of Achievement-Related Choices. In A. J. Elliot & C. S. Dweck (Eds.), Handbook of competence and motivation (pp. 105–121). Guilford Publications.

    Flake, J., Barron, K., Hulleman, C., McCoach, B., & Welsh, M. (2015). Measuring cost: The forgotten component of expectancy-value theory. Contemporary Educational Psychology, 41, 232–244. https://edtechbooks.org/-AYcf

    Forester, J. (1989). Planning in the face of power. University of California Press.

    Guerra-Lopez, I. (2018). Ensuring measurable strategic alignment to external clients and society. Performance Improvement, 57(6), 33–40. https://doi.org/10.1002/pfi

    Hays, D., & Singh, A. (2012). Qualitative inquiry in clinical and educational settings. Guilford.

    Kaufman, R. (1977). Needs assessment: Internal and external. Journal of Instructional Development, 1(1), 5–8. https://members.aect.org/Publications/JID_Collection/A1_V1_N1/5_Kaufman.PDF

    Kaufman, R., & Guerra-López, I. (2013). Needs assessment for organizational success. ASTD Press.

    Kaufman, R., & Watkins, R. (1999). Needs assessment. In D. Langdon, K. Whiteside, & M. McKenna (Eds.), Intervention resource guide: 50 performance improvement tools (pp. 237–242). Pfeiffer.

    Lee, Y.-F., Altschuld, J., & White, J. (2007). Problems in needs assessment data: Discrepancy analysis. Evaluation and Program Planning, 30, 258–266. https://doi.org/10.1016/j.evalprogplan.2007.05.005

    Leigh, D., Watkins, R., Platt, W. A., & Kaufman, R. (2000). Alternate models of needs assessment: Selecting the right one for your organization. Human Resource Development Quarterly. https://edtechbooks.org/-Pig3.0.CO;2-A

    Mayer, R. E. (1982). Learning. In H. Mitzel (Ed.), Encyclopedia of educational research (pp. 1040–1058). The Free Press.

    Nielsen, M., Ornbol, E., Vestergaard, M., Bech, P., Larsen, F., Lasgaard, M., & Christensen, K. (2016). The construct validity of the perceived stress scale. Journal of Psychosomatic Research, 84, 22–30. https://edtechbooks.org/-ihhn

    Pallant, J. (2016). SPSS survival manual: A step by step guide to data analysis using IBM SPSS (6th ed.). Open University Press/McGraw-Hill.

    Pinckney-Lewis, K. (2019). Perceptions of burden in needs assessment. [Unpublished manuscript]. Department of STEM and Professional Studies, Old Dominion University.

    Pinckney-Lewis, K. (2021a). Perceptions of burden in needs assessment: An exploration of measurement creation and validation [Doctoral dissertation, Old Dominion University]. https://edtechbooks.org/-bphi

    Pinckney-Lewis, K. (2021b, April 26-30). The art of the pivot: How to succeed in performance improvement research in the time of COVID-19. [Conference session]. The Performance Improvement Conference: Human Performance in the Age of Disruption, Virtual. https://edtechbooks.org/-Fqwz

    Pinckney-Lewis, K., & Baaki, J. (2020a, April 26-30). Empathy and systems thinking: Keys to the future of assessing needs. [Conference session]. The Performance Improvement Conference: The Future of Work, Virtual. https://edtechbooks.org/-NdjA

    Pinckney-Lewis, K., & Baaki, J. (2020b). Insider effects: Empathy in needs assessment practice. In J. Stefaniak (Ed.), Cases on learning design and human performance technology (pp. 142–162). IGI Global. https://edtechbooks.org/-LHCa

    Ralph, N., Birks, M., & Chapman, Y. (2014). Contextual positioning: Using documents as extant data in grounded theory research. SAGE Open, 4(3), 1–7. https://edtechbooks.org/-pkT

    Richey, R., Klein, J., & Tracey, M. (2011). The instructional design knowledge base: Theory, research and practice. Routledge.

    Rogers, E. (2003). Diffusion of innovations. Free Press.

    Rossett, A. (1982). A typology for generating needs assessment. Journal of Instructional Development, 6(1), 28–33. https://edtechbooks.org/-smxg

    Salmons, J. (2016). Collecting extant data online. In Doing qualitative research online (pp. 115–125). Sage Publications, Ltd. https://edtechbooks.org/-MsrX

    Stefaniak, J. (2020). Needs assessment for learning and performance: Theory, process, and practice. Routledge. https://edtechbooks.org/-PRih

    Surry, D. W. (1997). Diffusion theory and instructional technology. Paper presented at the annual conference of the Association for Educational Communications and Technology (AECT). https://edtechbooks.org/-WIgL

    Vito, G., & Higgins, G. (2015). Needs assessment evaluation. In Practical program evaluation for criminal justice (pp. 31–45). Routledge: Taylor & Francis Group. https://edtechbooks.org/-nqdk

    Watkins, R. (2014). Dimensions of a comprehensive needs assessment. Distance Learning, 11(4), 59–61.

    Watkins, R., & Altschuld, J. W. (2014). A final note about improving needs assessment research and practice. New Directions for Evaluation, 144, 105–114. https://edtechbooks.org/-Nkb

    Watkins, R., & Kavale, J. (2014). Needs: Defining what you are assessing. New Directions for Evaluation, 144, 19–31. https://edtechbooks.org/-eNTg

    Watkins, R., Meiers, M. W., & Visser, Y. L. (2012). A guide to assessing needs: Essential tools for collecting information, making decisions, and achieving development results. The World Bank. https://edtechbooks.org/-hNqz

    Wedman, J. (2014). Needs assessment in the private sector. New Directions for Evaluation, 144, 47–60. https://edtechbooks.org/-oJQ

    Wijers, I., Ayala, A., Rodriguez-Laso, A., Rodriguez-Blazquez, C., Forjaz, M., & Rodriguez-Rodriguez, V. (2017). Rasch analysis and construct validity of the disease burden morbidity assessment in older adults. The Gerontologist, 58(5), 302–310. https://edtechbooks.org/-PhC

    Wilson, A., & Cervero, R. (1996). Paying attention to the people work when planning educational programs for adults. New Directions for Adult and Continuing Education, 69, 5–13.

    Witkin, B., & Altschuld, J. (1995). Planning and conducting needs assessments a practical guide. Sage.

    Zemke, R. (1998). How to do a needs assessment when you think you don’t have time. Training, 35(3), 38–44.

    Kim Pinckney-Lewis

    National Security Agency

    Kim Pinckney-Lewis leverages her performance improvement and instructional design skills at the National Security Agency (NSA). With a PhD in Instructional Design & Technology from Old Dominion University, her research interests include exploring design heuristics, design for special populations, and needs assessment and evaluation best practices to maximize knowledge transfer.
