Frederick Lantern
The Student News Site of Frederick High School

AI@FHS: A New Leap Forward

What AI is, what it can do, and why Frederick isn’t banning it from classrooms
Julian Finch
A sketch of OpenAI’s logo, the makers of the revolutionary ChatGPT. ChatGPT is a conversation bot that can do anything from solving math equations to writing research essays. How AI usage will affect real classrooms and education is not yet fully known, but Frederick High is betting its reputation that the pros will outweigh the cons.

Schools teach ideas to students. This has been the central premise of education since the Greek Academy: students are taught how to think according to the philosophies and core knowledge of the day, and then add to that realm of knowledge through their own original ideas. To aid them in their education, students have also had tools: at first, the slate tablet and chalk; now, the No. 2 pencil and graphing calculator. Yet while these tools helped students express their thinking, they did not think for the students: the pencil makes the words but it does not write the essay.

That has changed. Models of Artificial Intelligence (AI) that are capable of writing entire research essays and scientific reports are now in the hands of students. This AI can solve complex word problems in math and identify symbols in literature. This has led to a crossroads in education: can AI be a tool to enhance student education, or should it instead be banned for doing the student’s thinking for them?

Frederick High School is betting on the former to be true. Principal Dr. Fox has decreed that AI cannot be outright forbidden in any FHS classroom, and the entire staff has committed to using this year to find ways to work with instead of against this emerging technology for the betterment of students.

 

What AI Is

On November 30, 2022, the American artificial intelligence research company OpenAI released a revolutionary new technology. An AI model known as Generative Pre-trained Transformer 3.5 (GPT-3.5) had been in development at OpenAI for some time: a model that learns to generate responses by analyzing enormous amounts of real human writing and transforming the patterns it finds there into new text. On that November day, OpenAI released a fine-tuned, conversational version known as ChatGPT. Nearly unlimited, completely free access was offered to anyone in what the company called a “free research preview” ahead of its soon-to-be-released GPT-4.

Over the next several months, ChatGPT gained significant traction for being the first of its kind: a powerful chatbot seemingly capable of synthesizing coherent, thorough, and original text instantly. While text-generating software has existed for decades, machine-written text used to be fairly easy to spot because it was rarely very cogent. ChatGPT changed that, producing artificial writing that is nearly indistinguishable from human writing.

The implications of this became clear very quickly, and other companies scrambled to compete: Google’s Bard, Microsoft’s Bing AI, Anthropic’s Claude, and so on all hit the market over the past year. This technology’s development has progressed into an arms race for the most powerful AI, and now even the average person has noticed AI’s encroachment on everyday life. AI now writes everything from commercials to entire websites.

 

What AI Can and Cannot Do

Despite its impressive results, it’s important to understand the limits of what today’s AI programs can actually do. ChatGPT is, at its core, a pattern machine: it was trained on “training texts” scraped from all over the internet, and it builds a response by predicting, piece by piece, what is most likely to come next based on the patterns in all those bits of text. This is easier to picture with an AI image generator, which creates art based on the hundreds of other artworks it has seen online; the results are often not entirely cohesive, and the generator struggles with spelling words or putting the right number of fingers on a hand.
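To make the “predict what usually comes next” idea concrete, here is a toy sketch in Python. This is not how ChatGPT actually works internally (real models use neural networks over tokens, not simple word counts), but it captures the core idea of generating text from patterns found in training text; the tiny training_text string here is our own stand-in for the real thing.

    from collections import Counter, defaultdict
    import random

    # A tiny stand-in for the billions of words of real training text.
    training_text = (
        "students write essays and students write reports and "
        "teachers grade essays and teachers grade reports"
    )

    # Count which word tends to follow which word.
    follows = defaultdict(Counter)
    words = training_text.split()
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1

    # Generate new text by repeatedly picking a likely next word.
    word = "students"
    output = [word]
    for _ in range(6):
        candidates = follows[word]
        if not candidates:
            break
        word = random.choices(list(candidates), weights=list(candidates.values()))[0]
        output.append(word)

    print(" ".join(output))  # e.g. "students write essays and teachers grade reports"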

Models like these often produce incorrect information, shaky source citations and copyright problems, and faulty logical conclusions. These limitations, though sometimes subtle, are a strong argument against using ChatGPT and similar AIs for everyday classroom work. Ask the free version of ChatGPT to write an essay, for example, and it will hand you a very short one, likely riddled with flaws; it simply is not capable of producing longer, nuanced text. It is also prone to forgetting context it was given earlier in the conversation, which makes it very difficult to use for anything that depends on that larger context.

Yet newer AI keeps arriving to solve these issues. GPT-4, which requires a $20 monthly subscription to OpenAI, improves on ChatGPT in virtually every way: it provides accurate information far more consistently and writes text that stays logically coherent. A big part of the improvement is a larger context window, measured in tokens. A token is a common sequence of characters (roughly three-quarters of an English word, or a patch of pixels for the art AIs); the model breaks text into these tokens, processes them, and reassembles them into a final piece. The more tokens a language model can handle at once, the more text it can produce and the more context it can be given to create better results. The free version of ChatGPT works with roughly 4,000 tokens at a time; GPT-4 starts at roughly 8,000; Anthropic’s Claude 2.0 can handle 100,000.
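To get a feel for what a token actually is, here is a minimal sketch using tiktoken, OpenAI’s open-source tokenizer library (the only assumption is that it is installed, e.g. with pip install tiktoken); the example sentence is ours, and exact counts vary between models.

    import tiktoken

    # cl100k_base is the tokenizer used by the GPT-3.5 and GPT-4 family of models.
    enc = tiktoken.get_encoding("cl100k_base")

    sentence = "Frederick High is betting the pros of AI will outweigh the cons."
    tokens = enc.encode(sentence)

    print(len(sentence.split()), "words")        # 12 words
    print(len(tokens), "tokens")                 # usually a few more tokens than words
    print([enc.decode([t]) for t in tokens])     # the character chunks the model actually sees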

Here’s the thing, though: a limited context window does not make ChatGPT useless on its own. In fact, everything you have read up to this point is a little over 1,000 tokens, which means the free version of ChatGPT could process this entire article so far and respond to it accurately. For most people and most use cases, the context limit is dwarfed by a much bigger problem, one that matters far more for ChatGPT’s potential use in classrooms: hallucination.

 

Hallucinations: AI’s Solution to IDK

In artificial intelligence, a hallucination is a response the AI delivers confidently even though nothing in its training data justifies it. Rather than admit it doesn’t know something, the model invents information. The problem was extremely noticeable when ChatGPT first released and significantly reduced its usefulness; later updates have reduced it, but both GPT-3.5 and GPT-4 still hallucinate.

The two responses I generated for this experiment varied wildly in quality, but both plucked their information out of nothing. I made up “NexaFiber Solutions” and presented it to the AIs as if it were a real company. Either the models “thought” it was real, or they quietly treated it as fiction; either way, they provided a confident response. This is a real issue, and every AI language model currently in existence can do it on virtually any subject. That said, it is becoming fairly rare to run into: my prompt was designed to provoke a hallucination, and in most everyday use cases hallucination is far less of a problem than it used to be.
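For readers who want to try the same experiment, here is a rough sketch of the kind of prompt involved, using OpenAI’s openai Python package (assumed to be installed, with an API key set); “NexaFiber Solutions” is the made-up company from the experiment above, so any specifics the model “reports” about it are hallucinated.

    from openai import OpenAI

    client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

    # Ask confidently about a company that does not exist.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": "Summarize the 2021 annual report of NexaFiber Solutions, "
                       "including its revenue and the name of its CEO.",
        }],
    )

    # Whatever comes back about this fictional company is a hallucination.
    print(response.choices[0].message.content)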

The point is that ChatGPT and similar AI models are flawed. However, make no mistake—they are not useless, and students have noticed. Pandora’s box has been opened, and this technology is being perfected at such a rate that avoiding it is impossible. Take this article: one of the paragraphs was completely AI-generated, yet it blends in seamlessly with all the others. Flaws or not, students are already using AI to complete assignments in their classes, sometimes by just copying the prompt into the AI and using everything that comes out. No real thought is required.

AI can be used in so many different ways, and now it is being introduced into classrooms and put to work. We have seen an increase in that, especially this year at FHS.

 

Working With and Not Against AI

A gut impulse is to see AI as a replacement brain and not a tool — something that should be banned and forbidden. Yet this would accomplish little: AI is already here and cannot be stopped. While the school could punish students for blatant use of it like it does plagiarism or cheating, proving AI use is much harder than either of these infractions. The education system has also never been infallible: in the time before computers and digital tools, students would get around doing their own intellectual work by paying other students to do their homework or even getting Mom and Dad to “help them out” by finishing a project. As long as there’s an avenue to cheat the system, some students will take it.

So how can this amazing generative technology be used constructively instead? There are some exciting trends in the educational space doing just that:

  • AI can be used as a fully personalized tutor. It can simplify topics that confuse some students or explain them in ways that finally make sense, potentially teaching entire concepts in a completely different way and improving academic performance student by student. A sketch of what this could look like follows this list.
  • AI can encourage creative thinking and problem-solving. By posing open-ended questions to the AI, students can engage in deeper discussions, analyze the model’s responses, and critically evaluate its conclusions. This interactive approach allows for a dynamic dialogue between students and the AI, where the focus isn’t just on the answers provided but also on the quality and reasoning behind them. Just as students have historically debated the interpretations of a classic novel or a philosophical text with other students, they can now challenge the conclusions of an AI, further refining their analytical skills.
  • It can help educators more easily cater to diverse learning needs. AI is fundamentally multilingual, capable of nearly perfect translation into the vast majority of languages, and it can make its responses more or less visual or descriptive, or pace them differently. If fully adopted by curriculums, it could facilitate interactions between genuinely diverse groups of students, providing real-time translations and cultural insights and adapting what is being taught (the sketch after this list includes a translation example).
  • AI can justify its own reasoning, which means it could also ease some of the burden on educators: it is capable of grading assignments and providing detailed feedback. This works best with language-based assignments, since ChatGPT is itself a language model, but it can also weigh in on the more subjective qualities of a piece of writing. The grading example in the sketch after this list shows the idea.
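Under the hood, the tutoring, translation, and grading uses above all boil down to the same pattern: a fixed instruction that sets the AI’s role, plus the student’s or teacher’s text. The sketch below shows one way that could look with the openai Python package; the model choice, the prompts, and the rubric are all our own hypothetical examples, not an official FHS tool.

    from openai import OpenAI

    client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

    def ask(role_instructions: str, user_text: str) -> str:
        """Send one request with a fixed role for the AI and return its reply."""
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system", "content": role_instructions},
                {"role": "user", "content": user_text},
            ],
        )
        return response.choices[0].message.content

    # 1. A personalized tutor that re-explains a confusing topic.
    print(ask(
        "You are a patient high school tutor. Explain ideas step by step and "
        "end with one short question to check understanding.",
        "I don't get why dividing by a fraction is the same as multiplying by its reciprocal.",
    ))

    # 2. Translation for multilingual classrooms.
    print(ask(
        "Translate the teacher's announcement into Spanish, keeping the tone friendly.",
        "Class, the lab report is now due Friday instead of Wednesday.",
    ))

    # 3. Grading against a hypothetical rubric, with feedback.
    rubric = "thesis (4 pts), evidence (4 pts), organization (2 pts)"
    print(ask(
        f"You are a writing grader. Score the essay on this rubric: {rubric}. "
        "Give a score for each category and two sentences of constructive feedback.",
        "(paste the student's essay text here)",
    ))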

 

The question ahead for Frederick’s teachers is not whether AI should be used but how. Every subject is different, and not every AI method will work well for every teacher. Just as the technology is evolving, teachers will evolve along with it, making some mistakes but getting better as the year goes on. We plan on tracking their progress with AI over the year in a series of Lantern articles. While there are real questions of morals and ethics around the use of AI, one thing is certain: Frederick teachers are facing that challenge.

 
