
Inquiry Post 2

For our inquiry project, our pod is exploring how AI tools can help students build stronger technical portfolios and navigate the modern job search. After outlining our inquiry direction in the first post, this week I moved into the experimentation phase with my pod. My goal was to test how different large language models handle my existing resumes and projects.

To explore this, I ran a small side-by-side comparison using ChatGPT, Gemini, and Microsoft Copilot. I gave each tool the same prompt and asked it to rewrite a few technical experiences from my resume while focusing on clarity, measurable impact, and relevant technical keywords. The three experiences I tested were:

  1. Vehicle Counting System with Object Detection (Oct. 2023)
     Focus: Computer vision (OpenCV/YOLOv8), Python, and cloud-based development.
  2. Event-Based Queueing Simulation for Network Optimization (Nov. 2025)
     Focus: Low-level C++, OMNeT++, and network protocols (TCP Reno/Dynamic Routing).
  3. Campus Accessibility Improvement Project (Oct. 2023)
     Focus: Strategic framework, user requirements, and stakeholder presentations.

One thing I noticed fairly quickly is that each tool approaches the task slightly differently. Some models stayed closer to my original wording and mainly improved clarity, while others tried to “upgrade” the language by adding more enterprise-style terminology. In some cases this helped highlight tools or systems that were already part of the work, but occasionally the descriptions started to feel a bit too corporate or exaggerated. That raised an interesting question about authenticity: while AI can definitely help polish wording, it’s still important that the final description reflects what someone actually did.

The “Best in Class” Verdict (Early Impressions):

  • Best for Technical Accuracy: Gemini – This model was the most successful at understanding the nuances of the OMNeT++ simulation and didn’t confuse the networking terms.
  • Best for Professional “Polish”: ChatGPT – This model took the Campus Accessibility Project and reframed the findings into high-impact “Gold Certification” highlights.
  • Best for LaTeX Formatting: Gemini – When asked to put these into an Overleaf-ready format, this model provided the most stable code.

At the same time, the tools were helpful for identifying technical keywords that might otherwise be overlooked. This kind of keyword highlighting is important because many modern hiring systems rely on automated screening tools before a human ever reads the resume.
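To get a feel for why this keyword matching matters, here is a minimal sketch of the kind of check an automated screening system might run: testing which job-posting keywords actually appear in a resume bullet. The function name, the sample bullet, and the keyword list are all illustrative assumptions, not how any real applicant-tracking system works.

```python
# Hypothetical sketch of ATS-style keyword screening: which
# job-posting keywords appear in a resume bullet? The keyword list
# and bullet text are made up for illustration.

def keyword_coverage(bullet: str, keywords: list[str]) -> dict[str, bool]:
    """Return, for each keyword, whether it appears in the bullet (case-insensitive)."""
    text = bullet.lower()
    return {kw: kw.lower() in text for kw in keywords}

bullet = ("Built a vehicle counting system in Python using OpenCV "
          "and YOLOv8, deployed in a cloud-based environment.")
posting_keywords = ["Python", "OpenCV", "YOLOv8", "Docker"]

coverage = keyword_coverage(bullet, posting_keywords)
# "Docker" is absent from the bullet, so a screen filtering on it
# would not match this line even though the project may be relevant.
```

Even a toy check like this shows how a strong bullet can still miss a screen if it omits the exact terms a posting uses, which is why the AI tools' keyword suggestions were useful.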

Moving forward, the next step for me will be translating some of these AI-refined bullet points into a LaTeX resume template using Overleaf. This will help us test how well AI-generated content integrates with more technical portfolio formats that many computer science students use. We also plan to look into tagged PDFs and accessibility considerations, since a major part of our project is making sure technical portfolios are readable not just by humans but also by automated hiring systems.
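As a concrete target for that next step, here is a minimal sketch of what an Overleaf-ready entry might look like. The document class, the enumitem package, and the bullet wording are assumptions for illustration, not the template our pod has settled on:

```latex
% Minimal sketch of a resume entry in plain LaTeX; the article class
% and enumitem package are illustrative choices, not a fixed template.
\documentclass{article}
\usepackage{enumitem}
\begin{document}
\textbf{Vehicle Counting System with Object Detection} \hfill Oct.\ 2023
\begin{itemize}[leftmargin=*, noitemsep]
  \item Built a vehicle counting pipeline in Python with OpenCV and YOLOv8.
  \item Deployed and tested the system in a cloud-based development environment.
\end{itemize}
\end{document}
```

Part of what we want to test is whether AI-generated bullet text drops cleanly into a structure like this, or whether special characters (like % and &) and formatting assumptions cause friction.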

During our pod’s next meeting, we’re planning to review each other’s AI-generated resumes. The goal is to make sure the AI suggestions actually improve clarity and communication without making anyone’s experience sound generic or exaggerated.

Overall, this week helped show that AI tools can be useful for translating technical work into more accessible language, but they still require a fair amount of human judgment to make sure the final result stays accurate and meaningful.
