
Understanding prototype testing: how student designers structure sessions and ask questions with stakeholders

Published online by Cambridge University Press:  27 August 2025

Pa Chang Vang*
Affiliation:
Human Factors & Ergonomics Program, College of Design, University of Minnesota, Twin Cities, USA
Carlye Anne Lauff
Affiliation:
School of Product Design, College of Design, University of Minnesota, Twin Cities, USA

Abstract:

While prototype testing with stakeholders is key to gathering valuable feedback in iterative design, there is limited research on how novice designers, who lack the relevant experience, solicit meaningful feedback. This paper analyzes 30 prototype testing sessions from five student design teams to understand how novices structure their testing time by identifying and reporting instances of testing interactions and the types of questions asked within different contexts. Initial findings show that novices effectively set up testing, engage in active listening, and ask more closed-ended follow-up questions. However, they rarely conclude sessions or ask stakeholders for questions, and they use fewer leading questions in later testing sessions. This preliminary understanding highlights opportunities to strengthen novices’ skills in prototype testing and to examine how testing approaches affect the quality of stakeholder feedback.

Information

Type
Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives licence (http://creativecommons.org/licenses/by-nc-nd/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is unaltered and is properly cited. The written permission of Cambridge University Press must be obtained for commercial re-use or in order to create a derivative work.
Copyright
© The Author(s) 2025

1. Introduction

Prototyping is more complex than a one-step process of building and translating abstract ideas into tangible concepts; it involves an iterative process of prototyping, testing, and refinement (Ege et al., 2024). This process of testing and learning from stakeholders shapes what prototypes are built over time, how they are built, and the purposes they serve (Camburn et al., 2017; Codner & Lauff, 2024). Design teams commonly solicit feedback using prototypes by interfacing directly with key stakeholders in prototype testing sessions, where stakeholders interact with prototypes and teams ask questions (Banker & Lauff, 2022; Barnum, 2020; Deininger et al., 2019).

Novices, or student designers with 0-3 years of experience, are less well-versed in prototyping and testing. Prior research has shown that novices’ perceptions and use of prototypes are limited in quality, frequency, and intentionality, resulting in strategic best practices going unused (Deininger et al., 2017; Hansen et al., 2020; Hilton et al., 2015; Lauff et al., 2017). This under-realization of the benefits of intentional prototyping suggests that teaching novices how to do something does not mean they can excel at it and apply it directly. It is crucial to acknowledge that prototyping outcomes and feedback are shaped not only by what prototypes are constructed and how, but also by the team’s dynamics in effectively testing prototypes with stakeholders (i.e., communication skills) (Ege et al., 2024).

This research study characterizes early prototype testing sessions by uncovering the types of testing interactions and the structure that novices use when interacting with key stakeholders (children, parents, and industry experts) in a toy product development setting. We aim to answer the research question, “How do student design teams organize their time during prototype testing sessions, and what types of questions do they ask to solicit feedback from key stakeholders?” Currently, there is a gap in the literature on how student designers conduct prototype testing sessions; previous studies have focused on designers’ prototyping processes rather than on testing sessions. The contributions of this study lie in (a) documenting 30 prototype testing sessions between student design teams and stakeholders, (b) understanding how design teams structure their time during early prototype testing sessions based on an a priori framework, and (c) identifying and reporting patterns of testing interactions.

2. Background

2.1. What is prototype testing?

New product development requires an iterative design process to elicit needs, requirements, and learnings through prototype testing with stakeholders. Prototype testing sessions often include interacting with the prototypes and asking questions to encourage relevant feedback. However, the testing format depends on what designers and researchers want to learn and uncover, such as conducting a usability test or a playtesting session (Banker & Lauff, 2022; Barnum, 2020). For example, early-stage prototype testing is typically exploratory and focused on discovery, using interview-style methods to extract needs and design requirements for future iterations (Barnum, 2020; Grover et al., 2022; Lauff et al., 2022). These subsets of prototype testing evaluate aspects of the design such as the user experience, form factor, and functionality, spanning multiple levels of fidelity (Codner & Lauff, 2024). The structure and execution of these testing sessions depend on the goals, objectives, prototype types, and project domain (Barnum, 2020). It is important to use context-specific prototyping strategies that align with stakeholders’ perceptions of prototypes and to ask questions targeted to each stakeholder’s domain. The format and quality of the prototypes influence stakeholders’ responses, impacting the types of questions and the approach to soliciting feedback. This aligns with the literature, where higher-fidelity prototypes call for more restrictive responses while low-fidelity prototypes enable more productive conversation between designers and stakeholders (Codner & Lauff, 2024; Deininger et al., 2019). While prior research suggests prototyping strategies for creating models (Camburn et al., 2017; Hansen et al., 2020; Hilton et al., 2015), there is minimal research on how testing sessions are run and on best practices for engaging with stakeholders to maximize learning. This research explores how novice designers engage in prototype testing sessions, specifically how they structure their time and what types of questions they ask.

2.2. Best practices for prototype testing with stakeholders

Due to the nature of toy product development and the inclusion of stakeholders, cognitive and implicit biases are a significant concern and may necessitate following best practices as a safeguard for reliable and credible data (Banker & Lauff, 2022; Barnum, 2020). This is critical when conducting prototype testing with multiple groups of stakeholders who are distinctly different from each other (Vang & Lauff, 2024). For instance, children may be testers in a prototype testing session, but they are playing with the prototypes rather than giving direct feedback (Donker & Reitsma, 2004). Common best practices for testing with children include: (a) finding a comfortable, distraction-free environment, (b) limiting each testing session to less than an hour, (c) providing breaks for children when their energy starts to wane, (d) observing both verbal and nonverbal feedback, and (e) being flexible to how children interact with the prototypes (Banker & Lauff, 2022; Donker & Reitsma, 2004). There is also an emphasis on reducing distractors, such as leading or biased remarks and sharing unsolicited opinions. Importance is placed on asking open-ended questions to obtain more detailed and free-flowing thoughts that could otherwise be missed with a closed-ended question. Building rapport and empathy in prototype testing plays a significant role in eliciting both affirmative and critical feedback (Barnum, 2020; Codner & Lauff, 2024; Grover et al., 2022; Mohedas et al., 2022). However, novices need more experience applying best practices in a real-world setting, so identifying their prototype testing approach in a design studio course establishes a foundation to prepare students for more complex design challenges.

3. Methodology

3.1. Data collection

Data for this study were collected in the spring of 2024 in an undergraduate product design and innovation course at a large, public R1 university in the United States (Vang & Lauff, 2024). Voluntary, informed consent was obtained beforehand from students and stakeholders, and the study was approved by the Institutional Review Board (IRB) under STUDY00021116. Ethical guidelines were followed to ensure the safety of the children included in the study. No additional coursework was required from students who opted in, and participation in the study had no impact on their course performance. The course was intentionally structured to include students (n=62) from various backgrounds (i.e., mechanical engineering, product design, marketing, computer science) working on a client-sponsored project focused on toy innovation. Ten design teams were formed with six to seven members per team, assigned based on skills and experiences to develop diverse teams.

This study focuses on five design teams (six students per team), identified by fruit pseudonyms: raspberry, cherry, apricot, grape, and lemon. We chose these five teams to analyze because of their consistent testing with both stakeholder groups (child/parent and industry experts) and their higher-quality video recordings without major audio/video issues. The other five design teams from the class were excluded for data consistency for the following reasons: (a) testing with different stakeholder groups due to the direction of their concepts (i.e., designing for pets instead of children), (b) major audio issues that interfered with the quality of the data collected, and (c) failed initial video recordings during prototype testing sessions. Throughout the 16-week semester, each team was tasked with a series of prototype checkpoints, or design iterations. Students started with concept sketches before moving into prototyping activities for the remainder of the semester. Each design team first created six “sketch models,” or low-fidelity prototypes, in the first checkpoint (CP1). Next, the teams refined and iterated the six sketch models into two “works-like” medium-fidelity prototypes in the second checkpoint (CP2). In the final checkpoint (CP3), each design team created one final “looks-like” and “works-like” model (see Figure 1).

Figure 1. Prototype evolution of five design teams across three design checkpoints

3.1.1. Prototype testing sessions

Student design teams conducted prototype testing sessions with key stakeholders at each checkpoint to inform future design directions. A short lecture on best practices and testing guidelines was given in class as an introduction to prototype testing to baseline knowledge; this lecture was a standard part of the course and took about 30 minutes. For example, best practices such as reminding participants that the teams were not testing them, starting with broad questions before getting specific, and encouraging participants to think aloud during prototype interactions were shared. Additionally, every team received note-taking guides with testing tips to help structure the sessions and record observations. Before each testing session, time in class was provided to plan and develop testing approaches (i.e., time management, member roles, goals, and testing objectives). Two volunteer industry lab instructors were assigned to each design team and assisted by sharing their industry knowledge about prototype testing throughout the course. Despite these preparations, each team decided how to structure and conduct its own testing sessions, since each team had different prototypes and feedback goals for testing. Historically, this portion of the design process is less structured in terms of what to prototype and how to test, which presented a unique opportunity to investigate how student designers approach prototype testing sessions with stakeholders.

The research team organized and recorded the prototype testing sessions during Thursday night lab periods from 6 to 9 pm. Each team had one hour of prototype testing time for each checkpoint: testing with an industry expert (E) for 25 minutes and testing with a child/parent (C/P) for 25 minutes, with a short transition between sessions (see Figure 2). The research team informed the design teams about the video recordings, emphasizing that students were not being evaluated on their performance and that the goal was to learn and understand how they approach prototype testing. To alleviate pressure and reduce biased behavior, the course professor was removed from the testing environment, and results were not shared until after the course ended. For this paper, 30 video recordings were collected and analyzed from the five design teams. These 30 prototype testing sessions equate to about 10 hours of active prototype testing with 22 stakeholders and 45 prototypes (each team had 6 low-fidelity models at CP1, 2 medium-fidelity models at CP2, and 1 high-fidelity model at CP3).
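For clarity, the session and prototype counts above follow directly from the study design. The short sketch below is an illustrative arithmetic check using only the figures stated in this section, not the underlying study data.

```python
# Illustrative arithmetic check of the study design described above;
# values come from the text, not from the coded dataset.
TEAMS = ["raspberry", "cherry", "apricot", "grape", "lemon"]
PROTOTYPES_PER_CHECKPOINT = {"CP1": 6, "CP2": 2, "CP3": 1}   # per team
STAKEHOLDER_GROUPS = ["industry expert", "child/parent"]     # two sessions per checkpoint

sessions = len(TEAMS) * len(PROTOTYPES_PER_CHECKPOINT) * len(STAKEHOLDER_GROUPS)
prototypes = len(TEAMS) * sum(PROTOTYPES_PER_CHECKPOINT.values())
scheduled_hours = sessions * 25 / 60   # each session was allotted 25 minutes

print(sessions)         # 30 prototype testing sessions
print(prototypes)       # 45 prototypes across the five teams
print(scheduled_hours)  # 12.5 scheduled hours; ~10 hours of active testing were recorded
```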

Figure 2. Prototype testing setup

3.2. Data analysis

The data analysis used a deductive and exploratory approach based on an a priori coding framework. The video recordings were analyzed using NVivo, a qualitative data analysis software package. NVivo has tools to “code” timeframes of a recording and organize the video sections according to specific themes. Coding is the process of tagging parts of the data to highlight themes and interconnections (Braun & Clarke, 2012). This coding process was completed by the lead research assistant, who met regularly with the principal investigator to review the codes. The coding framework came from a study of novices’ early-stage design interviewing skills (Grover et al., 2022). Because early prototype testing sessions act as “probes” that help teams identify new information by asking questions and encouraging shared thoughts, we chose this framework to code the prototype testing sessions. The research team engaged in an initial trial coding of three prototype testing videos to ensure that the framework would apply appropriately to this new data. After this trial, the original framework was slightly adapted to focus on prototype testing interactions based on our learnings (see Table 1).

Table 1. Adapted codebook from interviews to prototype testing sessions (Grover et al., 2022).

The videos were coded for the designers’ interactions only, as the focus was on understanding their approaches rather than the stakeholders’ (e.g., parents’). The codes in NVivo mark a point in time (instance) when the team used a specific type of testing interaction and do not capture the duration of its use. We coded all verbal statements and questions from the designers, along with non-verbal interactions such as gestures and head nods when related to an established code (e.g., head nods as a form of active listening). Additionally, all questions were double-coded for question type and specific context (e.g., follow-up). For example, a direct question based on stakeholders’ actions or words was coded as “Follow-Up Questions,” with “Open-Ended Questioning” also applied if the question was open-ended. An expert in prototyping research reviewed the coding process and results, leading to two iterations and a final review. Figure 3 illustrates the coding process, with interaction codes divided into the opening, body, and closing sections for one prototype testing session.
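As a simple illustration of this coding scheme, the sketch below shows one hypothetical way a coded instance could be represented outside of NVivo; the field names and example values are ours for illustration, not the study’s data or NVivo’s internal format.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical record for a single coded instance (illustrative field names only).
@dataclass
class CodedInstance:
    team: str                   # e.g., "raspberry"
    checkpoint: str             # "CP1", "CP2", or "CP3"
    stakeholder: str            # "industry expert" or "child/parent"
    section: str                # "opening", "body", or "closing"
    code: str                   # e.g., "Active Listening", "Follow-Up Questions"
    question_type: Optional[str] = None   # "open-ended" or "closed-ended" for questions
    timestamp_s: Optional[float] = None   # point in time in the recording; duration is not coded

# Example of double coding: a follow-up question that is also open-ended.
example = CodedInstance(
    team="raspberry", checkpoint="CP1", stakeholder="child/parent",
    section="body", code="Follow-Up Questions",
    question_type="open-ended", timestamp_s=412.0,
)
```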

Figure 3. Example: testing interaction codes in NVivo for one prototype testing session

4. Results

The research question for this study is, “How do student design teams organize their time during prototype testing sessions, and what types of questions do they ask to solicit feedback from key stakeholders?” We evaluated the testing interaction codes for 30 prototype testing sessions (5 teams × 6 sessions) and report the occurrences, or instances, of testing interactions (see Table 2).

Table 2. Instances of testing interactions per design team for all checkpoints.

4.1. Opening and closing: testing interaction instances

The opening and closing testing interactions account for the smallest portion of time in all prototype testing sessions. “Building Rapport” was used 15 times across the five teams in 14/30 sessions, suggesting that more than half of the sessions rushed into the prototype interactions. This low instance of greeting and building rapport with stakeholders is negatively skewed by Teams Cherry and Grape, since each of these teams engaged in building rapport only once. Teams Raspberry and Lemon engaged with stakeholders in the opening the most, in more than half of their sessions. The teams used “Stage Setting” in 27/30 testing sessions with 73 observed instances. This is expected, since there was more than one prototype in the first and second checkpoints, meaning that set-up could happen up to six times in CP1 and two times in CP2. Teams Apricot and Lemon engaged in stage setting in all six of their testing sessions. Teams Raspberry, Cherry, and Grape each missed stage setting in one session; two of these missed sessions were with a child/parent.

The student design teams used “Closing Remarks” in 23/30 sessions across 25 instances to wrap up the testing sessions. Only Teams Raspberry and Apricot concluded all of their sessions, while the other teams did not use closing remarks in at least two sessions. It is important to note that the instances were not evenly distributed across all five teams, indicating that some teams may have attempted to conclude their sessions multiple times or ran out of time. There is no indication of whether these closing remarks were of high quality; rather, the reported instances signify that the design teams did attempt to end their testing sessions through some type of remark. Part of concluding a testing session is clarifying any last-minute questions from the team and stakeholders. However, there was a low number of instances of the testing interactions “Asking Team for Questions” (3/30 sessions) and “Asking Testers for Questions” (4/30 sessions). Teams Apricot and Grape were the only teams that invited their members to ask further questions. Teams Raspberry, Cherry, and Apricot asked stakeholders for questions at the end of at least one of their testing sessions.
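The per-code figures reported here and in Section 4.2 combine two views of the same coded data: the total number of instances of a code and the number of sessions in which that code appears at least once. Below is a minimal sketch of that tally, assuming each coded instance is a record with team, checkpoint, stakeholder, and code fields as sketched in Section 3.2.

```python
from collections import defaultdict

def tally(instances):
    """Return {code: (total_instances, sessions_containing_code)}, where a
    session is identified by its (team, checkpoint, stakeholder) triple."""
    totals = defaultdict(int)
    sessions_with_code = defaultdict(set)
    for inst in instances:  # inst: a CodedInstance-style record as sketched earlier
        totals[inst.code] += 1
        sessions_with_code[inst.code].add((inst.team, inst.checkpoint, inst.stakeholder))
    return {c: (totals[c], len(sessions_with_code[c])) for c in totals}

# A result such as tally(instances)["Closing Remarks"] == (25, 23) would be read
# as "25 instances across 23 of the 30 sessions", matching how Table 2 is reported.
```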

4.2. Body: testing interaction instances

The student design teams used “Active Listening” in 29/30 testing sessions, occurring 147 times, showing that teams were attentive and responsive. All teams except Team Grape engaged with stakeholders via active listening and/or affirmative non-verbal gestures in all of their testing sessions; Team Grape was engaged with stakeholders in the first checkpoint, but the occurrence of this testing interaction decreased over time. We expected the teams to use “Signposting” frequently, since they may need to transition often based on the number of prototypes. For example, each design team might need to signpost at least nine times, once per prototype across its nine total prototypes, for an estimated 45 instances of signposting across the five teams. The teams collectively signposted 61 times in 23/30 sessions, exceeding this estimate. There was a high occurrence of leading or biased interactions, used in 20/30 testing sessions across 77 instances. Teams Raspberry and Cherry accounted for most of the “Leading/Biased” statements or behaviors, such as biasing stakeholders during the prototype interactions. Interruptions occurred 16 times across 8/30 testing sessions, mainly in the first checkpoint, with at least one instance for each team. A gradual decline in the use of these distractors is common as teams gain more experience in prototype testing; more research is needed to explain the teams’ learnings and changes over time. Teams Raspberry, Cherry, and Apricot frequently used “Team Support” interactions, potentially reflecting team dynamics and individual approaches. A total of 92 instances of team support were observed, occurring in 26/30 testing sessions.

4.3. Types of questions asked

We analyzed how novice designers used open- or closed-ended questions in three contexts: “Follow-Up Questions,” “Encourage Think Aloud,” and “Reflective Listening” (see Table 3). Across all design teams, 47 follow-up questions were asked in 16/30 testing sessions: 13 open-ended and 34 closed-ended. There were 40 instances of “Encourage Think Aloud” in 14/30 testing sessions, but only 21 of these instances were questions, indicating that the teams did not rely solely on questions to elicit thoughts during testing. The teams engaged in “Reflective Listening” 25 times across 8/30 testing sessions, but only five of these instances were questions. Overall, the teams relied more heavily on closed-ended than open-ended questions in follow-up situations.
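As a quick arithmetic check on the counts reported above (and consistent with the roughly two-thirds closed-ended figure noted in the conclusion), the closed-ended share of follow-up questions can be computed directly; the sketch below uses only the counts stated in this section.

```python
# Question counts as reported in Section 4.3 (not raw study data).
follow_up = {"open-ended": 13, "closed-ended": 34}   # 47 follow-up questions in total
think_aloud_questions = 21                           # of 40 "Encourage Think Aloud" instances
reflective_questions = 5                             # of 25 "Reflective Listening" instances

closed_share = follow_up["closed-ended"] / sum(follow_up.values())
print(f"{closed_share:.0%} of follow-up questions were closed-ended")  # ~72%, roughly two-thirds
```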

Table 3. Open vs. closed-ended questions during specific contexts.

5. Discussion

5.1. Design team dynamics × testing interactions

The results reported in Section 4.1 can be interpreted in terms of whether a testing interaction occurred and how many times it occurred per testing session. Since each design team had six prototype testing sessions in total, there should be six instances of “Building Rapport” and “Closing Remarks” per team, ensuring that stakeholder expectations are aligned. However, we observed that 16 prototype testing sessions dove straight into testing. This could be due to (a) the pressure to get started and finish within 25 minutes, (b) the lack of experience interacting with stakeholders to build rapport, and/or (c) the need to adapt each testing session to stakeholders. The teams spent time setting up the testing sessions, but they did not always “properly” conclude them (Cardoso et al., 2020; Grover et al., 2022). Despite the high number of “Team Support” instances, there appear to be challenges with team dynamics, with some teams relying less on member support in order to focus on creating a comfortable environment for stakeholders. For example, Team Cherry had one moderator and two notetakers, while Team Raspberry’s members had less defined roles. “Signposting” includes time management, but none of the teams had a dedicated member to track time, leading to a rushed closing of the session or the complete omission of closing remarks. Since each team has six members, defining roles for all members is crucial to ensure full engagement; undefined roles may leave some team members idle, impacting the structure and approach to prototype testing.

5.2. Interesting observations among different stakeholders

The results in Section 4.2 highlight how the design teams structured the body section (typically 15-20 minutes) of the prototype testing sessions. There are noticeable differences in the testing approach for the two stakeholder groups, which should be explored in further research. Children aged 4-9, accompanied by parents, made up the first stakeholder group. Younger children (ages 4-6) struggled with attention and answering questions (Donker & Reitsma, 2004), which may explain the higher frequency of “Closed-Ended Questioning” with children compared to experts. For example, asking a 5-year-old child, “Do you like this color?” can elicit either a head nod or a yes/no response; there is value in short, direct responses when closed-ended questions are used appropriately. When the child was older (ages 7-9) and could form a dialogue with the design team, more open-ended questions were used. Open-ended questions such as “What are you feeling? What do you think?” enabled older children to share their thoughts with less parental involvement. This shows that closed-ended questions can help design teams gather more feedback from children without negatively affecting quality (Grover et al., 2022; Mohedas et al., 2022). In contrast, the design teams treated prototype testing with industry experts as consultations, where experts led the discussion. This informal approach fostered collaboration and may explain the “Leading/Biased” instances observed in all teams. Experts tended to ask more questions upfront to understand the prototypes, which may explain why “Signposting” and “Building Rapport” instances were lower in these sessions. “Encourage Think Aloud” also appeared differently for each stakeholder group. For example, children expressed their thoughts through nonverbal and verbal cues, such as repeated touches and comments like “This is cool.” This required the design teams to be more intentional in encouraging shared thoughts, relying on observation as the primary method, whereas soliciting thoughts from experts placed a greater emphasis on verbal feedback (Donker & Reitsma, 2004; Banker & Lauff, 2022). Despite these differences, the teams asked new questions more frequently than follow-up or reflective questions. Possible explanations include (a) having pre-set questions, (b) disorganized roles disrupting the flow, and/or (c) prioritizing the quantity of questions over their quality and depth.

6. Limitations

This study has several limitations. First, the industry experts involved in prototype testing were also lab instructors for the course. Most lab instructors tested with student teams they were not consulting for weekly; the one exception was Team Cherry, which tested with its own lab instructor in the second checkpoint and appeared more relaxed, which might have affected the testing structure and the observed testing interaction instances. Second, the design teams did not always test their prototypes with children of the intended age range. Due to recruitment constraints, we used a broader age group (2-10 years), potentially affecting the quality and relevance of feedback. Third, three video recordings had an estimated 2-3 minutes missing, preventing a full analysis of testing interactions during the opening and closing sections. Lastly, a single research assistant completed the coding process with the support and collaboration of a research expert, which may introduce bias and human error. We did not have a second coder to establish inter-rater reliability, as this was an initial, exploratory study, but we acknowledge that a second coder will be needed to establish this reliability in future, in-depth studies.

7. Future studies

This study reports findings on the types of questions asked, the frequency of testing interactions used, and a summarized testing structure that novice designers followed in early prototype testing sessions, laying a foundation to help answer additional research questions. There are opportunities to examine the relationship between the types of questions asked (open or closed) and the types of feedback gained from testers, or the influence of biasing/leading statements on the amount of feedback gained. Data were collected extensively from the course and present further opportunities to relate each design team’s testing approach to the types of stakeholder feedback received (i.e., form, play value, feasibility, desirability). After each checkpoint, teams completed a reflective survey on what they learned, allowing comparisons between perceptions and reality. A more in-depth analysis that includes a second reviewer to establish inter-rater reliability could also strengthen these findings. Future studies will expand the contribution of this research to include analysis of the evolution of the prototypes based on stakeholder feedback, feedback prioritization based on prototype testing learnings, and the intersection between feedback and prototype fidelity.

8. Conclusion

To begin to understand how student design teams conduct prototype testing sessions with key stakeholders (child/parent and industry experts), we analyzed 30 prototype testing sessions and identified patterns in their testing interactions. In particular, we asked: “How do student design teams organize their time during prototype testing sessions, and what types of questions do they ask to solicit feedback from key stakeholders?” Our preliminary findings show that novices engaged well with stakeholders through active listening and adapted their testing approaches to time constraints and different stakeholders. While they often set the stage for testing sessions, they rarely concluded them. By the final prototype checkpoint, novices were less likely to exhibit leading/biased behaviors, reflecting improvements in their testing interactions over time. Student teams asked follow-up questions in about half of the sessions (16/30), roughly two-thirds of which were closed-ended. By reporting the testing interaction instances, we highlight opportunities and gaps in how novices structure prototype testing sessions and solicit feedback through the types of questions they ask.

References

Banker, A., & Lauff, C. (2022, June 16). Usability testing with children: History of best practices, comparison of methods and gaps in literature. DRS2022: Bilbao. https://doi.org/10.21606/drs.2022.646
Barnum, C. M. (2020). Usability Testing Essentials: Ready, Set ... Test! (2nd ed.). Morgan Kaufmann.
Braun, V., & Clarke, V. (2012). Thematic analysis. In APA Handbook of Research Methods in Psychology, Vol. 2: Research Designs: Quantitative, Qualitative, Neuropsychological, and Biological (pp. 57-71). https://doi.org/10.1037/13620-004
Camburn, B., Dunlap, B., Gurjar, T., Hamon, C., Green, M., Jensen, D., Crawford, R., Otto, K., & Wood, K. (2015). A systematic method for design prototyping. Journal of Mechanical Design, 137(8), 081102. https://doi.org/10.1115/1.4030331
Cardoso, C., Hurst, A., & Nespoli, O. G. (2020). Reflective inquiry in design reviews: The role of question-asking during exchanges of peer feedback. International Journal of Engineering Education, 36(2), 614-622.
Codner, A., & Lauff, C. A. (2024). Designing toy prototypes: An exploration of how fidelity affects children's feedback on prototypes. Design Science, 10, e33. https://doi.org/10.1017/dsj.2024.42
Deininger, M., Daly, S. R., Sienko, K. H., & Lee, J. C. (2017). Novice designers' use of prototypes in engineering design. Design Studies, 51, 25-65. https://doi.org/10.1016/j.destud.2017.04.002
Deininger, M., Daly, S. R., Lee, J. C., Seifert, C. M., & Sienko, K. H. (2019). Prototyping for context: Exploring stakeholder feedback based on prototype type, stakeholder group and question type. Research in Engineering Design, 30(4), 453-471.
Donker, A., & Reitsma, P. (2004). Usability testing with young children. Proceedings of the 2004 Conference on Interaction Design and Children: Building a Community, 43-48. https://doi.org/10.1145/1017833.1017839
Ege, D. N., Goudswaard, M., Gopsill, J., Steinert, M., & Hicks, B. (2024). What, how, and when should I prototype? An empirical study of design team prototyping practices at the IDEA challenge hackathon. Design Science, 10, e22. https://doi.org/10.1017/dsj.2024.16
Grover, M., Wright, N., Hoody, J. M., & Lauff, C. (2022, August). Developing design ethnography interviewing competencies for novices. In ASEE Annual Conference and Exposition, Conference Proceedings.
Hansen, C. A., Jensen, L. S., Özkil, A. G., & Martins Pacheco, N. M. (2020). Fostering prototyping mindsets in novice designers with the prototyping planner. Proceedings of the Design Society: DESIGN Conference, 1, 1725-1734. https://doi.org/10.1017/dsd.2020.132
Hilton, E., Linsey, J., & Goodman, J. (2015). Understanding the prototyping strategies of experienced designers. 2015 IEEE Frontiers in Education Conference (FIE), 1-8. https://doi.org/10.1109/FIE.2015.7344060
Lauff, C., Kotys-Schwartz, D., & Rentschler, M. E. (2017). Perceptions of prototypes: Pilot study comparing students and professionals. Volume 3: 19th International Conference on Advanced Vehicle Technologies; 14th International Conference on Design Education; 10th Frontiers in Biomedical Devices, V003T04A011. https://doi.org/10.1115/DETC2017-68117
Mohedas, I., Daly, S. R., Loweth, R. P., Huynh, L., Cravens, G. L., & Sienko, K. H. (2022). The use of recommended interviewing practices by novice engineering designers to elicit information during requirements development. Design Science, 8, e16. https://doi.org/10.1017/dsj.2022.4
Vang, P. C., & Lauff, C. A. (2024, June). Reflections on data collection during toy prototype development in a design studio course. In Proceedings of the 23rd Annual ACM Interaction Design and Children Conference (pp. 940-944). https://doi.org/10.1145/3628516.3659422