Self-reflection during maker activities is known to enhance conceptual comprehension and lead to better skill learning. While educational makerspaces commonly leverage reflective exercises guided by instructors, this practice is often absent when makers interested in self-directed learning use online tutorials like Instructables. In this short paper, we explore the approach of using Large Language Models (LLMs), specifically OpenAI's GPT-4, to generate reflection prompts from existing Instructables tutorials and align them with a list of learning goals adapted from prior work on maker skill learning. We built a system to generate Reflectables: Instructables designed for self-reflection while making. To exemplify the approach, we generated 9 Reflectables consisting of 128 prompts and evaluated them on seven attributes: goal-oriented, timely, contextual, sequential, multilevel, clear, and personalized. Our analysis highlights the benefits and limitations of this approach and points to further research directions for designing AI-based systems for reflection-focused self-directed learning in makerspaces.
Reflective learning is a form of learning in which the learner holds an analytical dialogue with themselves, i.e., learners think about their actions to gain a deeper understanding of the problems they intend to solve. It goes beyond memorizing facts and enables rich learning within an activity through a process of meticulous inquiry. Reflective learning is especially important for acquiring skills in hands-on STE(A)M fields, such as engineering and product design, as highlighted by Dewey (1933), Schon (1983), and Rodgers (2002).
Effectively utilizing reflective learning for a large number of students, however, is challenging because reflection requires a guided, thought-provoking dialogue. To make reflective learning effective, the prompts for reflection (what the learner should think about and when reflection should occur) need to be personalized to the learner (Kitchener 1984). Different learners struggle at different times and with different parts of a task, so reflection needs to be timed differently and its wording adjusted to target each learner’s specific knowledge gap. While teachers are trained to personalize reflection for each student by talking with them about their mistakes, such manual intervention does not scale to large classrooms. Thus, many teachers only provide generic reflection prompts, such as standardized questions for students to think about as they do the task. This, however, fails to personalize reflection to each student’s individual learning gaps.
The problem is exacerbated in self-directed learning activities, in which the student practices by themselves without the presence of a teacher. While reflection prompts that teachers prepare ahead of time can provide some guidance for learners to reflect on their progress, the teacher cannot know in advance which parts a learner will particularly struggle with. Thus, there is a strong need to generate such reflection prompts in situ, based on the learner’s execution of each step of the learning activity. Lastly, with the advent of online tutorials that teach specific skills, there is now a vast amount of material available for learners to practice in a self-directed way. However, many online tutorials only contain step-by-step instructions intended to reach a certain goal (e.g., build a certain artifact in a making tutorial or program a piece of code to create a desired functionality) with no check-in points that would prompt the learner to reflect on what they are doing.
In this paper, we investigate the use of Generative AI and, in particular, large language models (LLMs) to automatically generate reflection prompts. We focus on creating reflection prompts for self-directed learning activities, specifically online tutorials for STE(A)M learning, such as those on the online repository ‘Instructables,’1 which contains thousands of tutorials for self-directed making activities that teach maker skills for creative projects. We built a system for automatically generating reflection prompts for these online tutorials using OpenAI’s GPT-4 model,2 which is quickly emerging as a tool for personalized learning (Tseng and Resnick 2014). For the remainder of the paper, we refer to a self-directed tutorial that has been augmented with AI-generated reflection prompts as a ‘Reflectable.’ We implemented our system as a web interface that allows DIY makers and educators to create personalized, goal-oriented Reflectables for themselves and their students, respectively.
To create a Reflectable, the user inputs the link to a specific Instructables tutorial of their interest and assigns learning goals adapted from Turakhia et al.’s (2023a) comprehensive sets of learning goals within maker activities. For designing the prompts within Reflectables, we identified the following seven attributes of reflection exercises, derived from existing literature and insights from a formative interview with an educator: (1) goal-oriented, (2) timely, (3) contextual, (4) sequential, (5) multilevel, (6) clear, and (7) personalized. Our system uses these characteristics, the specified learning goals, and the Instructables link to generate the corresponding Reflectable, which comprises a series of self-reflection prompts integrated within the instruction steps. To evaluate the outputs generated using our toolkit, we created 9 Reflectables with a total of 128 reflection prompts for a wide range of maker activities, ranging from 3D printing a snake (Figure 1) to building a solar-powered hydroponic system. We rated each prompt across the seven characteristics listed above to identify the limitations of current Generative AI technologies and potential areas for further research.
We begin by examining established theories on reflection and frameworks that leverage its role in supporting learning. We then review related work on learning maker skills and identify the untapped potential of online platforms to support self-directed learning via integrated reflection activities.
Reviews by Fleck and Fitzpatrick (2010) and Baumer et al. (2014) define reflection as “a conscious, purposeful thought directed at a problem” and “a process in which people recapture their experience, think about it and evaluate it.” Dewey (1933), linking reflection to learning, suggests that reflective thought builds successively upon prior thoughts and is not a random assortment of thoughts. Schon (1983) builds upon this, characterizing reflection as a conversation that renders tacit knowledge explicit, and categorizes reflection exercises as ‘in-action’ and ‘on-action.’ Reflection-in-action improves real-time performance, while reflection-on-action allows learning from mistakes, refines subsequent actions, and deepens self-awareness (Sharmin and Bailey 2013). Schon’s dichotomy of reflection helps us identify how different learning outcomes can be achieved by prompting reflection at different moments of an activity. Depending on the type of reflection prompts, ranging from low-level specific to high-level abstract (Fleck and Fitzpatrick 2010), reflection can be intricately intertwined with a task to improve knowledge and understanding (Anzalone 2000). This foundational work thus establishes that the reflective process is ‘intentional,’ ‘consecutive or sequential,’ ‘retrospective,’ ‘conversational,’ and ‘goal-oriented;’ involving ‘intrapersonal’ or ‘interpersonal’ dialogues; and aimed at specific objectives like knowledge enhancement (Fleck and Fitzpatrick 2010) and error reduction (Hetzner et al. 2011). In our work, we build on these attributes of reflection for the design of prompts in the context of self-directed learning within maker activities.
Research on reflective practices (Pirzadeh et al. 2013) illustrates that reflection plays a role in several outcomes, such as facilitating personal development (Moon 1999; Gao 2012), refining decision-making (Gresch et al. 2017), and nurturing self-awareness (Isaacs et al. 2013), with learning and education being primary objectives (Dewey 1933). With such a wide range of benefits for applications like dance (Leijen et al. 2009), music (Johnston et al. 2005), and even programming (Durak 2020), the design space of reflection exercises is large and multimodal across multiple dimensions (Sas and Dix 2009). For example, reflection is shown to be practiced in several ways, including note-taking (Bagnara and Pozzi 2012; Webb et al. 2013), journaling (Boud 2001), recording oneself (Anderson et al. 2007) and then evaluating the recording in retrospect, or visualizing information about one’s actions in real time and evaluating the data to gain self-awareness (Li et al. 2010) or improve performance (Malacria et al. 2013). Furthermore, assistance for reflection can come via prompts (Boud et al. 2013) and self-explanations (Conati and Carenini 2001) through speech, text, or illustrations. In addition, systems for reflection can be designed for individual or collaborative exercises. Noteworthy examples of individual reflection involve designing for slowness and solitude (Cheng et al. 2011) and technology-mediated reflection’s positive impact on well-being (Bjork 1998), while examples of collaborative reflection include designing for teachers’ social reflection (Fleck and Fitzpatrick 2009) and informal reflection at workplaces (Muller 2013), workshops (Hailpern et al. 2007), and even public forums (Kriplean et al. 2012). While evaluating the quality of reflection remains challenging (Sumsion and Fleet 1996), research shows that engaging in these diverse forms of reflective practice transforms tacit knowledge into explicit knowledge, amplifying awareness and understanding, thus affirming its role in enhancing learning (Kori et al. 2014). Despite this historical interest in the use of reflection for various applications, including creative processes for design (Dalsgaard et al. 2012; Hummels and Frens 2009; Ford and Bryan-Kinns 2023; Sharmin and Bailey 2011) and architecture (Schon 1984), its role in supporting the learning of STE(A)M skills via making remains underexplored.
To aid novices and self-directed learners in acquiring STE(A)M skills via making activities, such as design prototyping, breadboarding, and tool usage, researchers have developed several systems that offer guidance during task implementation. For example, ‘ToastBoard’ (Drew et al. 2016) visually illustrates circuit connections to reduce errors, and ‘HeyTeddy’ (Kim et al. 2019a) uses voice guidance to ensure precise circuit wiring. Other tools, such as ‘CircuitStack’ (Wang et al. 2016), ‘VirtualWire’ (Lee et al. 2021), and ‘VirtualComponent’ (Kim et al. 2019b), assist in circuit configuration and layout iteration. Similarly, smart makerspace concepts provide real-time guidance tailored to learner skill levels (Knibbe et al. 2015). While these systems significantly contribute to learners’ skill development through visualizations, guidance, and real-time assistance within makerspaces, they focus on explicit feedback-based learning rather than integrating self-reflection within maker activities, leaving this aspect of their design unexplored (Sas and Dix 2009). While some tools like ‘Replay’ (Tseng et al. 2012) and toolkits like ‘ReflectiveMaker’ (Turakhia et al. 2022) and ‘Reflective make-AR-in-action’ (Turakhia et al. 2023a) have begun leveraging reflection for maker skill learning through self-documentation and mediums like augmented reality, this approach remains largely unexplored, particularly for self-directed learning using DIY online tutorials (Sas and Dix 2009). Kuznetsov and Paulos (2010) discuss the impact of online DIY tutorial platforms like Instructables (Wilhelm 2012), which have catalyzed the rise of ‘expert amateurs’ who attain substantial expertise through open access to creative projects (Desjardins et al. 2017). While such online DIY tutorial platforms offer the opportunity for wide-scale skill learning, that learning is typically experienced only implicitly by users. Thus, there is a missed potential for using reflection to support widely accessible and scalable self-directed learning during maker activities.
To tap into this potential for designing reflection-based tutorials for self-directed learning, we explore the use of LLMs to integrate reflection prompts into existing making tutorials on the online repository Instructables. We term these reflection-based tutorials Reflectables. In this section, we describe the framework and the toolkit for designing Reflectables and detail the toolkit’s implementation.
Our framework for designing Reflectables considers three components (Figure 2): (1) the design characteristics of reflection prompts derived from the literature review and a formative interview, (2) the learning goals set by educators, adapted from Turakhia et al. (2023a), and (3) the source tutorials available on online platforms. We aim for users, i.e., educators and self-motivated learners, to use our approach to generate Reflectables, which contain reflection prompts aimed at one or more learning goals of their choosing. These prompts can facilitate reflection-in-action (i.e., while the learner is going through the steps of the tutorial) and reflection-on-action (i.e., once the learner has completed all the steps of the maker activity). We next discuss these three components of our framework in further detail.
From the literature review, we know that the reflective process is intentional, consecutive or sequential, retrospective, conversational, and goal-oriented; involves internal or interpersonal dialogues; and is aimed at specific objectives like knowledge enhancement and error reduction (Fleck and Fitzpatrick 2010). To further understand the specific characteristics of reflection in the context of maker skills and to gain a practical perspective on how educators view and incorporate reflection in their daily practices, we conducted a semi-structured formative interview with an educator at a makerspace at our institution who advises undergraduate and graduate students on their design projects and has advised over 1,500 students over the last ten years. Our interview findings validated these attributes and revealed the following specific characteristics to consider in the design of reflection prompts for maker activities: (1) goal-oriented, (2) timely, (3) contextual, (4) sequential, (5) multilevel, (6) clear, and (7) personalized. We used these characteristics not only for the design of the system that generates the reflection prompts but also to evaluate and rate the outputs from the system. Figure 3 provides the definitions of each of these characteristics in the context of our system design.
While several studies on maker activities list different learning goals and educational potentials of makerspaces (Litts 2015), we scope our work using prior work by Turakhia et al. (2023b), who interviewed educators from various makerspaces and detailed six sets of competencies that educators value in makerspaces for their students. We use this comprehensive list of competencies and their definitions as the learning goals to input into our system. Figure 4, adapted from this prior work (Turakhia et al. 2023b), lists the six sets of learning goals: developing technical skills, a creative mindset, a sense of agency, communication skills, contextual reflection, and self-expression.
While there are several maker-related repositories, such as ‘Thingiverse’ (Baumann and Roller 2018) and ‘Tinkercad’ (Abburi et al. 2021), we chose the online repository Instructables (Wilhelm 2013) for our study. With a vast repository of community-contributed step-by-step tutorials on diverse projects and a thriving user base, Instructables offers access to ‘how to make’ knowledge for both novices and seasoned makers interested in maker activities and learning maker skills. Instructables typically contain pictures and corresponding descriptions of the sequential steps involved in making the intended object. Where needed, the tutorials also link to the files required to complete the project, such as fabrication files, Arduino microcontroller code, or CAD models. For our study, we tested nine Instructables spanning applications ranging from solar-powered hydroponics to robotics to 3D-printed flexible structures.
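As a concrete illustration of how a source tutorial could be pulled into such a pipeline, the following minimal Python sketch fetches an Instructables page and splits it into step titles and descriptions. The CSS selectors used here are illustrative assumptions rather than the actual Instructables page markup, and the sketch omits the handling of images and attached files.

import requests
from bs4 import BeautifulSoup

def fetch_instructable_steps(url: str) -> list[dict]:
    # Fetch the tutorial page and return its steps as {"title", "text"} dicts.
    # The selectors "section.step" and ".step-title" are hypothetical examples.
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    steps = []
    for section in soup.select("section.step"):
        title = section.select_one(".step-title")
        steps.append({
            "title": title.get_text(strip=True) if title else f"Step {len(steps) + 1}",
            "text": section.get_text(" ", strip=True),
        })
    return steps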
Using this framework, we designed a toolkit that allows users, both educators and individual learners, to integrate reflection prompts within existing Instructables tutorials. At its core, the toolkit harnesses the capabilities of LLMs, exemplified by OpenAI’s GPT-4, to generate contextually relevant reflection prompts. Once the educator or the learner provides the source Instructable and the learning goals, the toolkit generates the Reflectable, in which each step is accompanied by a summary and two to four self-reflection questions. Users can enter their reflections as text into this interface, and their responses are saved in the system’s Reflection Diary for future access.
We next provide the input prompt for OpenAI’s GPT-4 that we use to generate the reflection prompts in our interface.
Input to OpenAI’s GPT-4
The user will provide a DIY maker tutorial, their learning goals for the task, and their prior relevant experiences. \
You need to analyze the tutorial content based on their goals and experiences and construct Reflection Questions for each step of the tutorial to facilitate their learning objectives. \
Detailed Instructions:
1. Analyze the Tutorial
- Upon receiving the DIY tutorial, thoroughly analyze each step of the process.
- Understand the sequence, importance, and intricacies of each step.
2. Learning Goals
You will be provided with specific learning goals in the following areas:
- Tool expertise: the knowledge of different tools (software and hardware), the skills to use them correctly, and a clear understanding of choosing the right tools for the right tasks.
- Solution design and development: the ability to plan out the steps and implement them to produce solutions.
- Creative problem solving: the ability to frame and break down problems in novel ways to develop creative solutions and the understanding of how to plan and implement the novel solutions.
- Diversity mindset: the ability to understand a problem through multiple viewpoints, accept the difference in perspectives in others, and be able to receive feedback from others.
- Curiosity and play: the ability and eagerness to engage in inquiry and explore design and fabrication in a playful way.
- Self-efficacy: having confidence in one’s skills and proficiencies.
- Self-driven learning: the independence to seek solutions on one’s own and learning through meta-cognition strategies.
- Agency: the ability to make decisions on one’s own with confidence in their choices and to take ownership of their projects.
- Communicating ideas: the ability to describe their design concepts, vision for solutions, and the project goals and articulate the comprehensive plans for executing the solutions.
- Process documentation: the ability to document their design implementation journey and present it orally and visually.
- Social contextualization: the ability to critically situate their design ideas in different social contexts and reflect on the impact of their solutions on diverse social groups and communities.
- Personal contextualization: the ability to examine their designs with respect to oneself and articulate how their ideas reflect their personal journeys, goals, and values.
- Self-expression: the ability to spontaneously and freely express their individuality and uniqueness in their design solutions.
3. Reflection Questions
Based on the tutorial, the learning goals, and the prior experiences (i.e., the user’s specific expertise and training), construct Reflection Questions. \
Try to come up with two to four questions for each step of the tutorial. The questions will be shown to the user after the corresponding step is finished. \
Ensure that your Reflection Questions encourage a reflective maker learning process that is
- Intentional: Directed toward a specific problem or aspect of the tutorial.
- Consecutive: Following a logical sequence in line with the tutorial steps.
- Retrospective: Encouraging reflection during or after the experience.
- Conversational: Promoting internal dialogue or discussion with others.
- Goal-oriented: Aimed at enhancing knowledge or skills related to the tutorial.
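To make the pipeline concrete, the following minimal Python sketch shows one way the toolkit could combine the above instruction text with a tutorial’s steps, the selected learning goals, and the learner’s prior experience in a single GPT-4 chat completion request. The helper names and the exact message formatting are illustrative assumptions, not the definitive implementation.

from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

SYSTEM_PROMPT = "..."  # the full instruction text listed above

def generate_reflectable(steps: list[dict], learning_goals: list[str], prior_experience: str) -> str:
    # steps: [{"title": ..., "text": ...}, ...], e.g., from the parsing sketch earlier.
    # Returns the raw model output, which the interface would then split into
    # per-step summaries and reflection questions.
    tutorial_text = "\n\n".join(f"{s['title']}\n{s['text']}" for s in steps)
    user_message = (
        f"Learning goals: {'; '.join(learning_goals)}\n"
        f"Prior experience: {prior_experience}\n\n"
        f"Tutorial:\n{tutorial_text}"
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content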
To evaluate the quality of the Reflectables generated using our framework and toolkit, we selected nine Instructables covering a range of maker activities, from building solar-powered hydroponics to a mechanical walking automaton to a robotic animal with thermal vision. We chose these projects to diversify the skills involved in completing them, based on the tags provided on the Instructables platform, namely woodworking, digital fabrication, mechanical engineering, electronics, and arts and crafts. We also specifically chose widely downloaded and highly rated featured projects, as these metrics typically indicate high-quality tutorials. We purposely limited the projects to desktop-scale objects, as these are typically easier to build on one’s own at home and hence more frequently used for creative making. While choosing the projects, we were also mindful of the applications, i.e., these projects could be widely used to self-learn concepts of engineering, arts and crafts, architecture, interactive design, and product design. Figure 5 shows the resulting objects from the nine selected projects.
After selecting these projects, we listed possible learning goals that could be assigned to each of them. We used Turakhia et al.’s set of competencies (Turakhia et al. 2023b) to choose one or more learning goals per project and further personalized the goals by providing additional specific goals. Figure 5 lists the corresponding learning goals for all projects. We then input this information into our toolkit to generate the corresponding Reflectables. In total, we generated 128 reflection prompts across the nine projects. We then evaluated the quality of the prompts along the seven characteristics discussed in section 3.1.1. We detail the generated reflection prompts next and discuss our evaluation process and results in the following section. For brevity, we focus our discussion on three of the nine projects that exemplify different categories of maker skills and maker activities.
This project tutorial shows the steps for making a walking animal (a tapir) automaton using only mechanical (and no electronic) components assembled to imitate an animal’s movement (Figure 6). To generate a Reflectable for this project, we assigned the learning goals of developing technical skills, communication skills, a creative mindset, a sense of agency, and self-expression. We further added specific goals (listed in Figure 5) such as “develop a foundational understanding of mechanical movement and assembly” and “showcase the finished automaton with personal modifications or design choices.” We also specified that the learner had previous experience with building basic mechanical structures, to check whether the system generates personalized prompts from the tutorial content. This tutorial has three steps: design, cutting the parts, and assembly. Our system generated seven reflection prompts for the Reflectable—two reflection-in-action prompts per step and three reflection-on-action prompts for the end of the activity. We list a few of these questions below along with the step summaries generated in the Reflectable.
Step 1: In this step, the tutorial guides you through the process of designing the walking tapir automaton using Fusion 360 software. The mechanism is based on Jansen’s linkage, and you will learn how to tweak design parameters and visualize the resulting movements.
RP b1.1: “How does the use of Fusion 360 software enhance your ability to optimize the design and visualize the movements of the automaton?”
RP b1.2: “What design modifications or improvements can you make to the tapir automaton to personalize it and make it unique?”
Step 2: This step involves cutting the parts for the tapir automaton, including the body, links, gears, and dowel axles. You will use a laser cutter or a scroll saw to cut the parts from plywood and hardwood.
RP b2.1: “How can you ensure accurate and precise cutting of the parts using either a laser cutter or a scroll saw?”
RP b2.2: “How might the choice of different materials for the body and links affect the overall performance and appearance of the automaton?”
Reflection-on-action step: In this final step, you are encouraged to personalize and complete your tapir automaton. You can make modifications or design choices that differentiate it from the original guide. The tutorial suggests making a stand for the automaton to make it easier to operate.
RP b4.1: “How can you personalize the tapir automaton to reflect your own creativity and artistic style?”
RP b4.2: “What challenges did you face during the construction process, and how did you overcome them to successfully complete the project?”
RP b4.3: “How does your finished automaton demonstrate your ownership and responsibility in its construction and completion?”
This project tutorial demonstrates the steps for constructing a simple solar-powered deep hydroponic system, made using tools like scissors and knives (Figure 7). The project is a good example for learning basic concepts of solar-powered devices and efficient indoor plant maintenance. For generating the Reflectable of this project, we assigned all six learning goals and added the specific goals of “ability to adapt the given design to different plants, consider environmental implications, think broadly about eco-friendly solutions,” and “personalize the design to suit their taste in creative ways.” The generated Reflectable contained ten reflection prompts—two for each step. We list several of the generated questions below.
Step 1: This step provides an overview of the design considerations, measurements, and material selection for the activity.
RP e1.1: “How can you determine the optimal water level within the cultivation box based on the depth of the orchid pods?”
RP e1.2: “How would adjusting the length of the tubing connecting to the solar fountain affect the water flow and the hydroponic system’s functionality?”
Step 2: This step gives the details involved in preparing the cultivation lid box based on the number of pods being used.
RP e2.1: “How would you determine the diameter of the holes for the orchid pods to ensure they fit securely without falling through?”
Step 5: In this final step, the tutorial provides the specifications of the different plants that can be grown using this hydroponic.
RP e5.1: “How would you choose the best type of plants to grow using this hydroponic system? What factors would you consider?”
RP e5.2: “How can you ensure the proper balance of nutrients in the water for the plants’ growth and health?”
This project tutorial demonstrates the steps for building an origami robotic bird (Cucco) equipped with sensors and actuators so that, when exposed to a sufficiently loud noise, the Cucco goes into revenge mode and starts chasing the closest object whose heat signature contrasts with the ambient temperature (Figure 8). For generating the Reflectable of this project, we assigned the four learning goals of technical skills, communication skills, a sense of agency, and self-expression. Additionally, we specified the goals of learning to “document the process, being able to explain the complexity and interdependence of different systems,” and “foster a sense of ownership and responsibility by taking on a complex project, troubleshooting issues, and seeing it through to completion.” We also added background information specifying that the user had a background in mechanical engineering. The system generated 26 reflection prompts—two for each of the twelve steps and two prompts for the final reflection-on-action step.
Step 1: This step introduces the layout and behavior of the robot as well as the principles of thermal navigation.
RP h1.1: “How does the layout and behavior of the robot contribute to its impersonation of the Cucco character?”
RP h1.2: “How does thermal navigation work and why is it important for the robot’s functionality?”
Step 2: This step guides you in ordering the necessary sensors and actuators. It also provides instructions on substituting the microphone with another sensor if desired.
RP h2.1: “How do the sensors and actuators contribute to the overall functionality of the robot?”
RP h2.2: “How would substituting the microphone with another sensor affect the robot’s behavior? What are some possible alternative sensors that could be used?”
Reflection-on-action step: This step encourages you to admire the finished robot and reflect on the process.
RP h13.1: “How do you feel about the final result of the robot?”
RP h13.2: “What did you learn from the process of building and customizing the robot?”
To evaluate the quality of the reflection prompts within our nine Reflectables, we conducted a comprehensive content analysis. Three of the authors rated the 128 reflection prompts generated by GPT-4 on the seven characteristics introduced in section 3.1.1: (1) goal-oriented, (2) timely, (3) contextual, (4) sequential, (5) multilevel, (6) clear, and (7) personalized. These ratings helped us gain insights into the effectiveness of the generated prompts.
We observed that the majority of the prompts centered on technical skills as the primary learning goal (e.g., RP b2.1, RP e2.1), followed by a creative mindset (e.g., RP b2.2, RP h2.1). In contrast, prompts targeting communication skills were comparatively scarce (e.g., RP h13.2). We also observed a mixed picture regarding prompt timeliness: some prompts were well-placed within the learning process, offering timely guidance (e.g., RP e1.1), while others were less well-timed (e.g., RP e1.2). Similarly, while some prompts integrated the learner’s background well (e.g., RP h2.2), overall the integration of the learner’s background showed room for improvement to better facilitate meaningful, context-aware reflection. Furthermore, repetition emerged as a notable issue, disrupting the intended sequential flow of prompts, which was designed to scaffold the reflective process effectively (Example 1). We also observed that the system created the reflection-on-action summary step only on some occasions (e.g., RP b4, RP h13) and skipped it in other instances (e.g., RP e5). Moreover, while the prompts showed potential for supporting reflection, much of that potential remains untapped. We observed that the questions were generally clear and concise, making them easily understandable and accessible to learners (e.g., RP e2.1). At the same time, we noticed inconsistencies in the format of the generated reflection prompts and summaries: some summaries were succinct (Case Study 1), while a couple of instances replicated the content of the Instructables steps (Case Study 2). Finally, while some personalization was evident (e.g., RP b4.1), the prompts could benefit significantly from a more tailored approach that accounts for users’ individual characteristics and progress (e.g., RP b4.3).
Another critical consideration is that learners may not always possess a clear understanding of their precise learning goals, so crafting open-ended questions that strike a balance between specificity and generality remains crucial for stimulating meaningful reflection. The prompts, while valuable for promoting reflection, currently follow a primarily unidirectional format, representing a missed opportunity to foster more complex dialogues with makers. Furthermore, despite some well-crafted prompts, we observed a few instances of unnecessary repetition, hindering the overall sequential progression of reflection. Thus, the sequencing of prompts and their alignment with specific learning objectives need improvement to facilitate the ‘deep’ self-reflection described in our background section.
In summary, while the generated reflection prompts within the Reflectables provided good opportunities for thinking about the technical aspects of making, there is still potential to improve them to support comprehensive learning. Given our reliance on the performance and functionality of LLMs for generating the outputs, there could be several ways to address this untapped potential, which we discuss in the next section.
We observed that effective learning-goal mapping, consistency in outputs, and dynamic, progressive dialogue with the system are three key areas for improving and strengthening the pipeline of our approach. Below, we discuss ways to achieve these improvements.
The current system occasionally lacked precision in mapping learners’ goals to specific making steps, resulting in less relevant reflection prompts. We propose developing design frameworks tailored to maker skills that enable better alignment between learning goals and the steps that provide opportunities to achieve those goals, and thus generate relevant and personalized reflection prompts. Furthermore, having the system evaluate its own prompts, or using an adversarial AI system to critique the generated prompts, could allow for generating more refined and contextually appropriate sets of prompts.
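One possible realization of this self-evaluation idea is a second ‘critic’ pass, sketched below in Python, in which the model rates a generated prompt against the seven characteristics and returns a revision whenever any rating falls below a threshold. The rubric wording, the JSON output format, and the threshold are illustrative assumptions and would need validation in practice.

import json
from openai import OpenAI

client = OpenAI()

CRITIC_PROMPT = (
    "Rate the following reflection prompt from 1 to 5 on each characteristic: "
    "goal-oriented, timely, contextual, sequential, multilevel, clear, personalized. "
    "Return JSON with keys 'scores' (a dict of characteristic to rating) and "
    "'revised_prompt' (an improved version of the prompt)."
)

def critique_and_refine(reflection_prompt: str, step_text: str, goals: list[str]) -> str:
    # Ask GPT-4 to critique a single generated prompt and, if any characteristic
    # scores below the (assumed) threshold of 4, return the revised prompt instead.
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": CRITIC_PROMPT},
            {"role": "user", "content": (
                f"Step: {step_text}\nLearning goals: {'; '.join(goals)}\n"
                f"Reflection prompt: {reflection_prompt}"
            )},
        ],
    )
    result = json.loads(response.choices[0].message.content)  # assumes well-formed JSON output
    if min(result["scores"].values()) < 4:
        return result["revised_prompt"]
    return reflection_prompt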
The current system relies heavily on the quality and structure of the input tutorials, limiting its consistency: sometimes the system generated succinct summaries and prompts, while on other occasions it generated repetitive prompts because the input tutorial contained limited text. To mitigate this dependency, we can explore methods to adaptively extract and integrate related content from a broader range of online resources beyond the input tutorial. For example, for our Case Study 2, the system could pool content from multiple gardening and solar-power–related projects to design the reflection prompts. Additionally, the system could provide constructive feedback to content creators, encouraging them to improve the missing aspects of their tutorials.
Implementing a more dynamic approach to reflection that adapts to users’ progress and responses throughout the making process is an intriguing avenue for future work to improve the quality of reflection prompts. Creating prompts that build upon earlier reflections, as suggested by Dewey (1933), offering deeper insights and connections between steps, could promote a more continuous dialogue and transform the learning experience. Furthermore, leveraging multimodal interaction, including voice and visual cues, could make the experience more natural and intuitive and possibly enhance the effectiveness of reflection prompts. For example, integrating speech-based interactions or visual feedback within the tutorial content could provide additional avenues for users to engage in reflection.
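A minimal Python sketch of such a progressive dialogue, under the assumption that the learner’s typed reflections from the Reflection Diary are fed back into the model: each completed step and the learner’s previous answer are appended to the running chat history so that the next prompt can explicitly build on what the learner has already articulated. The message wording here is illustrative, not part of the current system.

from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = "..."  # the instruction text from earlier, adapted to ask one question at a time

def next_reflection_prompt(history: list[dict], step_text: str, previous_reflection: str | None) -> tuple[str, list[dict]]:
    # history: running chat history as a list of {"role", "content"} messages.
    # Returns the next reflection prompt and the updated history.
    if previous_reflection:
        history.append({"role": "user",
                        "content": f"My reflection on the previous step: {previous_reflection}"})
    history.append({"role": "user",
                    "content": (f"I just completed this step:\n{step_text}\n"
                                "Ask me one reflection question that builds on my earlier reflections.")})
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "system", "content": SYSTEM_PROMPT}] + history,
    )
    prompt = response.choices[0].message.content
    history.append({"role": "assistant", "content": prompt})
    return prompt, history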
While our work presents a promising avenue for enhancing self-directed maker skill learning through Reflectables generated with Generative AI and LLMs, we acknowledge that it has not been user-tested beyond the evaluation presented in this paper. We are currently expanding our work by running additional user studies. One of the studies involves educators, who will actively engage with our system to craft Reflectables that align with their specific pedagogical objectives. This study aims to assess the quality of the Reflectables produced across the various characteristics and to gather insights into how the outputs support reflection and might benefit the educational community at large. Additionally, we are preparing a controlled learning study with end users to examine the effectiveness of AI-generated reflection prompts in the context of actual maker tasks. This study will provide empirical evidence regarding the impact of our system on learning outcomes during hands-on making activities. Furthermore, while our paper focuses primarily on the conceptual and design exploration of Reflectables, future work will delve into further technical development of the system.
In this paper, we explored the use of Generative AI and LLMs for automatically generating reflection prompts for self-directed learning, in particular of STE(A)M skills in the form of maker skills. We demonstrated that our approach, which builds on OpenAI’s GPT-4, is generally able to create reflection prompts aligned with specific learning goals and integrate them into existing tutorials. Our evaluation of the outputs helped us identify areas for further improvement in the system’s design, paving the way for future empirical and learning studies.