Eating Spaghetti with My Elbows
As a professional instructional designer, I have a vested interest in protecting the integrity of the learning process. After all, much of my work involves painstaking arrangement of materials to calibrate active learning experiences. I stand shoulder to shoulder with other educational professionals who decry the use of any tool that could replace a student’s balanced meal of deep cognition with the empty fast-food calories of drive-through AI answers.
Given the stalwart efforts of instructional designers, professors, and teachers, who already deploy safeguards (AI detectors, lists of approved or banned behaviors) to keep bots from short-circuiting well-designed assignments, I'm thoroughly convinced that the ed tech space does not need another blog post urging educators to do the same.
Instead, I’d like to step into this conversation as a student, which is another of my roles, since I am earning an Instructional Design certification through master's-level courses at a public university. In this context, I found my first use of AI onerous, done grudgingly to complete my professor’s prescribed checklist for a graphic design assignment. Having wisely foreseen that generative AI would soon become indispensable to the ed tech industry, she had insisted we incorporate (and credit) some small use of AI while completing a formative assessment for her class.
Despite my early awkwardness in that first class (the prompting process felt like using my elbows to eat spaghetti, and the result was at least that messy), I’m so grateful for my professor’s nudge into the world of human-machine collaboration for learning purposes. Subsequent assignments and even work projects have brought me more opportunities to enhance my learning through AI interactions.
In that time, I’ve benefited from a learning experience that differs markedly from the passive rat-seeking-cheese behaviors that assuredly could short-circuit the learning process. The difference, I’ve found, flows from a few habits, which I call “interface keys,” that I use to guide my own learning process:

- Explicitly state that you’re in it to learn. The first few times I consulted Anthropic’s Claude to help me untangle coding snags, the platform pelted me with stacks of code that I could simply copy and paste directly into my designs, essentially destroying the cognitive friction necessary for deep learning. Finally, I realized that, just like a good teacher moderating classroom learners, I could protect my own process if I advocated for myself. “Stop telling me the answers; let me work my way up to the correct block of code,” I insisted. Immediately, Claude’s pattern changed, conforming to my own learning preferences. After experimenting with a few options, I settled on a pattern in which I formulated my own answer, input it into the machine, and asked for corrections in the form of “right” or “wrong.” I explicitly stated that, in the latter case, the bot was not to provide the correct answer unless I asked for it.
- Request the “what,” and then hypothesize the “why.” In cases where finding a complex answer on my own was clearly beyond my current scope, I asked Claude to supply the answer, and then I hypothesized the explanation for each piece of code, one at a time, receiving “yes” and “no” responses until I had a reasonable grasp of the new skill.
- Save (and consider submitting) the discussion. To incorporate transparency and to incentivize my own ethical behaviors, I began submitting my AI conversation transcripts along with my assignments. While my professors have not requested these and may never feel the need to read them, I find that sharing my learning sessions helps me stay accountable and aware of my own processes. That sort of metacognitive processing had been seminal to my learning long before any bots came along, and it enables me to remain an active learner, whether or not I have a bot at my fingertips.
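For readers who study alongside Claude through the API rather than the chat window, the first habit above can even be baked in ahead of time as a standing system prompt, so the ground rule survives the whole session. The sketch below is only an illustration under my own assumptions; the prompt wording, the helper name `build_study_messages`, and the commented-out model name are mine, not a prescribed recipe:

```python
# A minimal sketch of the "grade my attempt, don't hand me answers" rule,
# encoded as a reusable system prompt for an API-based study session.
# Prompt wording and helper names are illustrative assumptions.

LEARNING_SYSTEM_PROMPT = (
    "I am here to learn, not to collect finished answers. "
    "When I submit an attempted solution, reply only 'right' or 'wrong'. "
    "Do not provide the correct answer unless I explicitly ask for it."
)

def build_study_messages(attempt: str) -> list[dict]:
    """Package a learner's attempted solution as a chat message for review."""
    return [
        {"role": "user", "content": f"My attempt:\n{attempt}\nRight or wrong?"}
    ]

# With Anthropic's Python SDK, a session might then look like:
#
#   import anthropic
#   client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
#   reply = client.messages.create(
#       model="claude-sonnet-4-20250514",  # model name is an assumption
#       max_tokens=50,
#       system=LEARNING_SYSTEM_PROMPT,
#       messages=build_study_messages("for i in range(3): print(i)"),
#   )
```

The point of the design is that the learner's preference is stated once, up front, instead of being re-negotiated every turn.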