Ben chats with Gias Uddin, an assistant professor at York University in Toronto, where he teaches software engineering, data science, and machine learning. His research focuses on designing intelligent tools for testing, debugging, and summarizing software and AI systems. He recently published a paper about detecting errors in code generated by LLMs. Gias and Ben discuss the concept of hallucinations in AI-generated code, the need for tools to detect and correct these hallucinations, and the potential for AI-powered tools to generate QA tests.