Monday, November 6, 2023

CSEdCon 2023: My Takeaways Regarding AI and the Future of Education

Attending CSEdCon was truly an enlightening experience for me. This three-day conference delved into the intricate connections between AI, education, and computer science. As I sat through thought-provoking sessions such as "AI and the Future of Education," "Reaching Rural Regions," and "Early Experiences in Teaching with AI," I found myself deeply engaged in discussions. These dialogues solidified some of my existing beliefs about AI in education while presenting fresh perspectives that broadened my understanding.

A few stand-out observations from the conference include:

  • AI promises to transform the landscape for teachers and software engineers alike. While it will not replace human expertise, the importance of adapting to and capitalizing on AI tools cannot be overstated.
  • The growing demand for AI skills is undeniable, emphasizing both technical and interpersonal attributes like communication and ethics. Yet, the gender disparity in AI is concerning and demands immediate attention.
  • AI's potential in refining computational thinking, tailoring learning experiences, and simplifying coding is evident. But challenges, such as inherent biases and inaccuracies, persist.
  • Embracing AI in classrooms has showcased notable benefits, including improved student outcomes. However, there's an underlying risk: if not accessible to all, it might intensify existing inequities.
  • The future of CS education is poised for change. Expect a decline in traditional tasks and languages, like HTML, and a surge in innovative modalities like block-based coding and audio input/output methods.
  • One crucial takeaway is the need to arm students with discerning knowledge about computer capabilities, ensuring they utilize AI with wisdom and responsibility.

As an edtech consultant and advocate, I frequently interact with educators apprehensive about AI's profound impact on education and the world at large. My experience at CSEdCon has equipped me with talking points that can assuage such concerns. For instance:

  • AI's ability to offer personalized, immediate feedback.
  • The automation of routine tasks, granting teachers more quality time for instruction.
  • Engaging AI-powered chatbots that make learning interactive.
  • AI's prowess in assessing student understanding and identifying gaps.
  • Novel brainstorming techniques introduced by AI text generation models.
  • AI-powered 24/7 tutoring systems.
  • The myriad ways students can employ AI for dynamic study materials.

The arrival of AI in education is just the beginning. Our collective task now is to ensure educators recognize its potential. We must guide our students to wield AI ethically and effectively.

For educators looking ahead, consider these key points:

  • AI is redefining numerous sectors. Acquainting students with AI tools will be instrumental, irrespective of their career paths.
  • Cultivating a robust understanding of AI systems, emphasizing their strengths and flaws, is crucial.
  • Encouraging students to discern potential AI biases ensures ethical engagement.
  • As AI handles routine chores, honing soft skills in students becomes paramount.
  • Recognizing the soaring demand in AI-centric roles can shape future academic curricula.
  • With AI knowledge becoming a staple in most professions, disregarding its importance may jeopardize students' future employability.
  • While predicting AI's trajectory is challenging, imparting foundational principles to students ensures they remain agile and adaptable in a dynamic future.

This blog post was developed with the support of AI tools, including OpenAI's ChatGPT and Anthropic's Claude. After I fed my notes from CSEdCon into Claude, it helped me identify key takeaways and insights. I then went through several iterations with ChatGPT to organize and refine those insights into a blog post tailored for educators. The collaborative process with these AI tools not only streamlined my thoughts but also underscored the importance of refining and iterating for clarity, especially when writing for a specific audience.

Friday, November 3, 2023

The Limitations of Generative AI LLMs: A Personal Example

In today's digital realm, generative AI large language models (LLMs) like OpenAI's ChatGPT and Google's Bard have become focal points of discussion. These models, equipped with the ability to craft human-like text, offer a myriad of applications. Yet understanding their constraints remains paramount.

It's crucial to grasp that these models don't genuinely "think." They generate text by mimicking human language patterns from vast datasets, not from comprehension or consciousness. Contrary to popular belief, LLMs don't actively "search" the internet for real-time answers. They've been trained on extensive data, but they don’t browse the web live. Faced with unfamiliar topics, they make educated guesses based on previous patterns.
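
To make that "pattern mimicry" idea concrete, here is a deliberately simplified sketch in Python. The tiny corpus and word-count table below are my own invention for illustration, not how any production model actually works; real LLMs use neural networks over tokens at enormous scale. But the underlying principle is similar: the output is the most statistically plausible continuation of the prompt, not a fact retrieved from a live source.

    # Toy illustration of "predicting the next word from observed patterns."
    # Deliberately oversimplified: real LLMs use neural networks over tokens,
    # not word-count tables, but the takeaway is the same -- the model returns
    # the most plausible continuation it has seen, not a verified fact.
    from collections import Counter, defaultdict

    corpus = (
        "fresno state won the game . "
        "fresno state lost the game . "
        "fresno state won the title . "
    ).split()

    # Count which word tends to follow each word in the training text.
    following = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        following[current_word][next_word] += 1

    def predict_next(word):
        """Return the continuation seen most often in training."""
        candidates = following.get(word)
        if not candidates:
            return "<no pattern to draw on>"  # all that's left is a guess
        return candidates.most_common(1)[0][0]

    print(predict_next("state"))  # -> 'won' (seen twice, vs. 'lost' once)
    print(predict_next("the"))    # -> 'game' (the most common continuation)

Notice that this toy model "says" Fresno State won simply because that phrasing appears most often in its training text. That is exactly the kind of plausible-but-unverified answer to watch for in the example that follows.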

Let's turn to a personal experience. I'm an avid supporter of Fresno State Football, and my YouTube channel boasts four decades of game footage and highlights. Leveraging AI, I've crafted game recaps and summaries for each video description. One observable trend: the AI's accuracy correlates with a game's media coverage. The more widespread the reporting, the more accurate the summary, though my familiarity with these games lets me catch the occasional inaccuracy.

A case in point is a recap I requested from Google Bard for the 1986 NCAA football game between Fresno State and the University of Nevada, Las Vegas (UNLV). While the game had national coverage, it didn't dominate the ratings and was missed by many viewers and journalists, especially outside the Pacific time zone. In this instance, Bard's recap contained marked inaccuracies.

For example, in paragraph 1, Bard inaccurately labeled the game's conference as the Big West Conference. In 1986, both schools were part of the Pacific Coast Athletic Association (PCAA), which was not rebranded as the Big West until 1988. Furthermore, in paragraph 3, Bard mistakenly identified Jeff Tedford as Fresno State's quarterback for that game, even though he had vacated the position in 1982. Another error concerned UNLV's Ickey Woods and Charles Williams: Woods was mischaracterized as the quarterback when he actually played running back, and Williams, who did not begin playing for UNLV until 2017, was incorrectly cited in the 1986 account. A notable tidbit is that both Woods and Williams hail from Fresno.

These oversights illuminate AI's tendency to piece together plausible, quasi-relevant information when faced with data gaps. The details about the conference, Jeff Tedford, Ickey Woods, and Charles Williams are all "semi-in-the-ballpark" information. It's as if you threw a ball for a dog and it earnestly brought back a stick: close, but not quite accurate.

The underlying message here is the imperative of scrutinizing AI outputs. LLMs, while powerful, can occasionally deliver out-of-context or misleading information. Critical assessment of AI responses is as essential as vetting any unfamiliar source.

Generative AI LLMs, revolutionary as they are, come with their set of challenges. Approaching their outputs with discernment is vital, especially in education. Teachers should rigorously vet AI-derived content, and students must be taught to assess the reliability of AI-generated information. In doing so, we foster a balanced approach, benefiting from AI while upholding the veracity of the information at hand.