
Artificial Intelligence has become an increasingly prominent force in education, reshaping how students learn, solve problems, and engage with complex material. In the field of software engineering, where the pace of change is rapid and the breadth of knowledge required is vast, AI tools have emerged as practical companions throughout the development process, from understanding new concepts to writing and debugging code.
Throughout ICS 314, I made frequent use of several AI tools, including ChatGPT, Claude, and Google Gemini. Rather than treating them as a shortcut, I approached them as a resource to accelerate understanding and work through challenges more efficiently. Over the course of the semester, AI became a consistent part of my workflow across a wide range of tasks, from clarifying concepts during tutorials to assisting with code during WODs and the final project.
For Experience WODs, I consistently used AI as a primary resource. For example, in E12 (Jamba Juice 1), I started by pasting the full WOD instructions into ChatGPT and asking something like “What are the key things I need to implement based on these instructions?” to get a high-level breakdown before writing any code. Once I had a clear picture of the overall structure, I worked through each part individually — for instance, prompting “Write a TypeScript class called MenuItem with a constructor that takes a name, ingredients array, price object, and calories object.” After receiving the output, I would review it, test it in the TypeScript Playground, and follow up with additional comments or corrections if something was off. The results were generally very close to what was needed, requiring only minor adjustments on my end. Overall, AI was extremely useful for Experience WODs — it significantly reduced the time spent on initial implementation and allowed me to focus more on understanding the logic rather than getting stuck on syntax.
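To make the kind of prompt above concrete, here is a minimal sketch of what a generated MenuItem class might look like. The field shapes (price and calories as small/large objects) and the sample data are my own assumptions for illustration, not the official E12 specification.

```typescript
// Sketch of the kind of class the prompt above might produce.
// SizeValues and the sample smoothie data are assumptions, not
// the official E12 (Jamba Juice 1) specification.
interface SizeValues {
  small: number;
  large: number;
}

class MenuItem {
  // Parameter properties: each constructor argument becomes a public field.
  constructor(
    public name: string,
    public ingredients: string[],
    public price: SizeValues,
    public calories: SizeValues,
  ) {}

  describe(): string {
    return `${this.name} (${this.ingredients.join(', ')}): ` +
      `$${this.price.small.toFixed(2)} small / $${this.price.large.toFixed(2)} large`;
  }
}

const smoothie = new MenuItem(
  'Mango Magic',
  ['mango', 'pineapple', 'orange juice'],
  { small: 5.99, large: 7.49 },
  { small: 250, large: 380 },
);
console.log(smoothie.describe());
```

Output like this is easy to paste into the TypeScript Playground and test, which is exactly the review-then-verify loop described above.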
For In-class Practice WODs, I used AI more actively as a learning tool to understand each component before writing code myself. For example, in the Murphy’s Bar and Grill WOD, I pasted the instructions into ChatGPT and asked something like “What Bootstrap 5 components do I need to recreate this layout — navbar, background image section, and content blocks?” This helped me break down the page structure into manageable parts before starting. I then asked follow-up questions for each step, such as “How do I set a full-width background image using Bootstrap 5?” to understand the underlying logic. This approach was useful for building familiarity with Bootstrap patterns so that by the time a graded WOD came around, I already had a working mental model of how the pieces fit together.
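For the full-width background image question specifically, the answer boils down to one common pattern: Bootstrap 5 has no dedicated background-image utility, so a fluid container is combined with a small amount of inline CSS. The markup below is an illustrative sketch (the image path and text are placeholders, not the actual WOD content).

```html
<!-- One common pattern for a full-width background section in Bootstrap 5.
     The inline style is needed because Bootstrap has no
     background-image utility class; the image path is a placeholder. -->
<div class="container-fluid text-center text-white p-5"
     style="background-image: url('images/storefront.jpg');
            background-size: cover; background-position: center;">
  <h1 class="display-4">Murphy's Bar and Grill</h1>
  <p class="lead">Established 1891</p>
</div>
```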
For In-class WODs, I used AI differently compared to practice — rather than relying on it to understand the structure from scratch, I primarily used it to verify my own thinking. For example, in the Corn Hole Score WOD, I first attempted to outline the Round and Game classes myself based on what I had studied, then used ChatGPT to confirm my logic with prompts like “Is this the correct way to implement the scoreTeam1() method given that only one team can score per round?” If something was off, I would adjust and re-check rather than asking for a complete solution. This approach helped me use AI as a confidence check rather than a crutch, which felt more aligned with the actual goal of the WOD.
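The outline I was verifying looked roughly like the sketch below. The scoring rules shown here (cancellation scoring, where only the higher-scoring team nets the point difference each round, playing to 21) are my assumptions about standard cornhole, not the WOD's official specification.

```typescript
// Hedged sketch of the Round/Game structure described above.
// Cancellation scoring and the 21-point target are assumptions
// about standard cornhole rules, not the official WOD spec.
class Round {
  team1Score = 0;
  team2Score = 0;

  // Record the raw points each team threw this round.
  scoreTeam1(points: number): void { this.team1Score = points; }
  scoreTeam2(points: number): void { this.team2Score = points; }

  // Cancellation scoring: at most one team nets points per round.
  netPoints(): { team: 0 | 1 | 2; points: number } {
    const diff = this.team1Score - this.team2Score;
    if (diff > 0) return { team: 1, points: diff };
    if (diff < 0) return { team: 2, points: -diff };
    return { team: 0, points: 0 }; // wash: no one scores
  }
}

class Game {
  team1Total = 0;
  team2Total = 0;

  playRound(round: Round): void {
    const { team, points } = round.netPoints();
    if (team === 1) this.team1Total += points;
    else if (team === 2) this.team2Total += points;
  }

  winner(): 1 | 2 | null {
    if (this.team1Total >= 21) return 1;
    if (this.team2Total >= 21) return 2;
    return null;
  }
}
```

Keeping `netPoints()` on `Round` is what makes the "only one team can score per round" constraint easy to confirm with AI: the invariant lives in one small method rather than being scattered across the game loop.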
For essays, I used AI primarily for drafting and refining my writing. My typical approach was to first jot down my own thoughts and key points, then ask ChatGPT something like “Help me write an introduction for an essay about my experience learning Bootstrap 5 in a software engineering course” based on those notes. Once I had a rough draft, I would then use AI to polish the language — prompting things like “Rewrite this paragraph to sound more natural and concise.” This combination of AI-assisted drafting and refinement was very useful throughout the semester. Without it, producing well-structured and clearly written essays within a reasonable amount of time would have been significantly more difficult.
For the final project, I was primarily responsible for the frontend, so I used AI in two main ways. First, during the planning stage, I used ChatGPT to brainstorm UI layouts and UX decisions — for example, asking things like “What are some best practices for organizing a navigation flow for a web app that has user authentication and a dashboard?” This helped me think through the user experience before committing to a specific design direction. Second, once I had a clear idea of what I wanted to build, I used AI to assist with the actual implementation — prompting things like “Write a React component for a responsive navbar using Bootstrap 5 that collapses on mobile.” Overall, AI was a valuable partner throughout the final project, both for expanding my thinking during the design phase and for speeding up development during implementation.
For learning new concepts and tutorials, I consistently turned to AI as a first resource whenever something was unclear. For example, when I was first introduced to React, I found the official documentation helpful but sometimes difficult to follow without additional context. I would ask ChatGPT things like “Explain how React components and props work with a simple example” to get a more digestible explanation tailored to my current level of understanding. When I encountered unfamiliar patterns in tutorials, such as how state management works with hooks, I would follow up with prompts like “What is the difference between useState and useEffect and when should I use each one?” This approach made picking up new concepts significantly faster compared to reading documentation alone, and helped me build a working understanding before applying the concepts in actual assignments.
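The kind of "simple example" I would ask for usually looked something like the sketch below (all names are made up for illustration): props flow down as read-only values, `useState` holds data the component owns, and `useEffect` runs side effects after render.

```tsx
// Illustrative sketch only; component and prop names are hypothetical.
import { useEffect, useState } from 'react';

// Props flow one way: the parent passes `name` down as a read-only value.
function Greeting({ name }: { name: string }) {
  return <p>Hello, {name}!</p>;
}

function Profile() {
  // useState: data this component owns and updates over time.
  const [name, setName] = useState('visitor');

  // useEffect: a side effect that runs after render,
  // re-running whenever `name` changes.
  useEffect(() => {
    document.title = `Profile: ${name}`;
  }, [name]);

  return (
    <div>
      <Greeting name={name} />
      <button onClick={() => setName('Kai')}>Sign in as Kai</button>
    </div>
  );
}
```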
For answering questions in class or on Discord, I did not actively answer other students’ questions using AI. However, during the final project, whenever I was unsure about something and considered posting a question on Discord, I would first run it by ChatGPT to see if I could resolve it on my own. For example, if I was confused about how a certain component was supposed to behave, I would prompt something like “Why would a React component re-render unexpectedly when state is updated in a parent component?” More often than not, this resolved my confusion before I even needed to ask anyone else. While I did not use AI to directly answer others’ questions, it effectively reduced the number of questions I needed to ask myself.
I did not use AI for asking or answering smart questions during ICS 314, primarily because I did not actively participate in the smart question forum. Whenever I encountered a problem or had a question, I found that asking AI directly resolved most issues quickly enough that I did not feel the need to post a formal smart question. As a result, I have no specific experience using AI in this context.
For coding examples, I frequently asked AI to provide concrete usage examples when learning new libraries or frameworks. For instance, when getting familiar with React hooks, I would prompt ChatGPT with something like “Give me an example of using the useState hook to manage a form input in React.” Similarly, when working with Bootstrap components, I would ask things like “Show me an example of a responsive Bootstrap 5 navbar with a dropdown menu.” Having working code examples to reference made it much easier to understand how individual pieces fit together, and I could then adapt them to fit my specific use case rather than starting from scratch every time.
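A typical response to the form-input prompt would be a controlled component along these lines (the names and Bootstrap classes here are illustrative, not from any specific assignment):

```tsx
// A controlled-input sketch along the lines of the prompt above;
// names are illustrative, not from any particular assignment.
import { useState } from 'react';

function NameForm() {
  const [name, setName] = useState('');

  return (
    <form onSubmit={(e) => { e.preventDefault(); alert(`Hello, ${name}`); }}>
      {/* The input's value mirrors React state, and every keystroke
          updates that state, so React stays the single source of
          truth for the field. */}
      <input
        className="form-control" // Bootstrap 5 styling
        value={name}
        onChange={(e) => setName(e.target.value)}
        placeholder="Your name"
      />
      <button type="submit" className="btn btn-primary mt-2">Submit</button>
    </form>
  );
}
```

Starting from a working reference like this and adapting it, rather than writing from a blank file, is exactly the workflow described above.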
For explaining code, I regularly used AI when I encountered code that was difficult to follow on my own. For example, when working through tutorial examples or in-class demonstrations, if a particular snippet was unclear, I would paste it into ChatGPT and ask something like “Can you explain what this code is doing line by line?” This was especially helpful in the earlier modules when TypeScript class syntax and type annotations were still unfamiliar to me. Having AI break down the logic in plain language helped me build understanding much faster than staring at the code alone, and made it easier to then apply similar patterns in my own work.
For writing code, AI was a consistent part of my workflow throughout the semester. As mentioned in previous sections, I used it across WODs, the final project, and various assignments — typically by describing what I needed in plain language and asking for a working implementation as a starting point. Rather than copying output directly, I would review and adjust the generated code to fit the specific requirements. This approach sped up development significantly while still requiring me to understand and take ownership of the final result.
For documenting code, I did not make use of AI. Throughout the semester, code documentation was not something I prioritized heavily, and when I did add comments, they were brief enough that writing them manually felt more straightforward than prompting AI to generate them. I did not find a strong enough need to incorporate AI into this particular aspect of my workflow.
For quality assurance, I frequently used AI to help debug code when something wasn’t working as expected. My typical approach was to paste the problematic code into ChatGPT along with a description of the issue, asking something like “This React component is supposed to update the UI when the state changes but nothing is happening — what’s wrong?” AI was generally very effective at identifying the root cause quickly, often catching issues like incorrect state updates, missing dependencies in useEffect, or improper prop passing that I had overlooked. This saved a significant amount of time that would have otherwise been spent manually tracing through the code trying to find the problem.
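The "missing dependency in useEffect" case is a good example of the bug class AI caught quickly. The sketch below is hypothetical (the component and endpoint are made up), but it shows the shape of the problem: an effect with an empty dependency array captures its props from the first render only.

```tsx
// Hypothetical sketch of the missing-dependency bug class mentioned
// above; the component and endpoint are made up for illustration.
import { useEffect, useState } from 'react';

function SearchResults({ query }: { query: string }) {
  const [results, setResults] = useState<string[]>([]);

  useEffect(() => {
    fetch(`/api/search?q=${encodeURIComponent(query)}`)
      .then((res) => res.json())
      .then(setResults);
  }, []); // Bug: `query` is missing, so the effect runs only once
          // and later searches never refresh the results.
  // Fix: list it as a dependency so the effect re-runs on change:
  // }, [query]);

  return <ul>{results.map((r) => <li key={r}>{r}</li>)}</ul>;
}
```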
Outside of the uses listed above, I did not find any other notable ways in which I used AI within ICS 314. The categories covered above fairly comprehensively reflect how AI was integrated into my workflow throughout the semester.
The incorporation of AI into ICS 314 had a meaningful impact on my learning experience, mostly positive, though in some areas it required more careful balance. On the positive side, AI allowed me to engage with course material at my own pace and level of understanding. Whenever official readings or lecture content felt too abstract or dense, being able to ask AI to re-explain a concept in simpler terms, and then follow up with more specific questions, made the learning process feel much more personalized and accessible. This was particularly valuable when applying new concepts in practice, as I could quickly clarify doubts and move forward with confidence rather than getting stuck for long periods of time.
However, there were also moments where I noticed a downside to relying on AI too heavily. When I used AI-generated code or explanations without fully working through the underlying logic myself, I sometimes found that the knowledge did not stick as well. I could get something working without truly understanding why it worked, which occasionally surfaced as a gap in understanding when I encountered a slightly different problem later on. This taught me that AI is most effective as a learning tool when it is used to deepen understanding rather than simply produce an output, and that the extra effort of engaging critically with AI responses made a noticeable difference in how well I retained and could apply what I had learned.
Beyond ICS 314, AI has become a practical tool in other areas of my academic life as well. For example, in my Finance course, I frequently turned to AI when encountering concepts that were difficult to grasp from lectures or textbook explanations alone. When a particular formula or financial principle was unclear, I would ask ChatGPT to explain it in simpler terms and walk me through a concrete example step by step. Similarly, when working through application problems, if I was unsure about the approach, I would use AI to verify my reasoning or clarify what the question was actually asking before attempting a solution. This pattern of using AI as a personalized tutor — checking understanding one concept at a time at my own pace — proved effective across different subject areas, suggesting that its value extends well beyond software engineering into any field where complex concepts require deeper explanation than what is provided in standard course materials.
One of the main challenges I encountered with using AI in ICS 314 was the risk of moving forward without truly understanding the material. Because AI can produce working code or clear-sounding explanations quickly, it was sometimes tempting to accept the output and move on without taking the time to fully internalize the underlying concept. This occasionally left me with a surface-level understanding that was enough to complete an assignment but not deep enough to confidently apply the same knowledge in a different context. Managing this tendency required conscious effort — reminding myself to question and engage with AI responses rather than simply accepting them.
In terms of opportunities, I think there is significant potential for AI to be more deliberately integrated into software engineering education. For instance, using AI as a structured study companion — where students are encouraged to use it for explanation and verification rather than solution generation — could help maximize its benefits while minimizing the risk of shallow learning. Courses could also incorporate reflection exercises, like this essay, that prompt students to critically evaluate how and why they used AI, which in itself builds more intentional habits around its use.
When I compare traditional learning methods with AI-enhanced approaches, the most noticeable difference was in how personalized the learning experience felt. With conventional methods, such as reading course materials, watching lecture videos, or following tutorials, the pace and depth of explanation are fixed, which means it is easy to get stuck on a concept without a clear path forward. AI, on the other hand, allowed me to ask as many follow-up questions as needed at my own pace, making it much easier to fill in gaps in understanding on the spot rather than moving on with unresolved confusion.
That said, traditional course materials had one clear advantage: reliability. There were moments where AI provided explanations that felt overly advanced or went beyond the scope of what was being covered in the course, which sometimes added unnecessary complexity rather than clarifying things. There were also occasional instances where I was not entirely confident that the information AI provided was accurate, which required me to cross-reference with official documentation or course content. This suggests that AI works best not as a replacement for structured course materials, but as a complement to them — most effective when used alongside reliable sources rather than in isolation.
Looking ahead, I believe AI will play an increasingly central role in software engineering education. As AI tools continue to improve, their ability to adapt explanations and examples to each individual student’s level of understanding makes them particularly well-suited as personalized learning companions. Rather than a one-size-fits-all approach, students could use AI to progress through material at their own pace, asking questions and receiving tailored feedback in a way that traditional instruction cannot always provide at scale.
At the same time, I think it will be important for educators to emphasize the development of critical thinking skills alongside AI use. The risk of becoming overly dependent on AI — accepting its output without questioning or deeply engaging with it — is real, and students who do not actively work to understand the material themselves may find their independent problem-solving abilities underdeveloped. The ideal future, in my view, is one where AI serves as a powerful support tool that enhances learning, but where students are still expected and encouraged to think for themselves, verify what AI tells them, and build genuine understanding through their own effort.
Throughout ICS 314, AI proved to be a valuable and versatile tool that shaped nearly every aspect of my experience in the course. From breaking down unfamiliar concepts in React and TypeScript to assisting with code during WODs and contributing to the final project, AI allowed me to learn more efficiently and engage with material at a level that felt accessible and personalized. At the same time, this experience also revealed the importance of using AI intentionally — the moments where I relied on it without fully understanding the output were the moments where my learning felt the shallowest.
The key takeaway from this semester is that AI is most effective when treated as a thinking partner rather than a shortcut. Used critically and deliberately, it can significantly enhance comprehension, speed up development, and make complex material more approachable. For future students and educators, I would recommend embracing AI as a core part of the learning process, while also building in habits and expectations that encourage independent thinking and critical evaluation of AI-generated content. The goal should not be to use AI less, but to use it more thoughtfully.