Use this lesson plan to help students think deeply about the role of faked and synthetic media in their lives.

Deepfake AI Face Mapping

Every day, teens engage with countless images and videos that have been edited or digitally altered. From simple Snapchat filters to short, lighthearted, or ironic TikTok videos, it’s clear that young people are pretty savvy when it comes to consuming — and creating — manipulated media. And in a lot of ways, seeing altered media is just part of being online today.

But are your students aware of — or have they thought deeply about — disinformation online, including the phenomenon of deepfake videos? Known more formally as a type of “synthetic media,” these altered videos usually show real people doing and saying things they never actually did or said. The thing that makes deepfakes different is that they’re created almost entirely with artificial intelligence, or “AI.” This technology is advancing quickly, and some experts believe that some of these fakes will soon be nearly imperceptible to the human eye.

“Deepfakes are really the latest incarnation of manipulated media. Think Photoshop on steroids.” –Dr. Hany Farid, UC Berkeley School of Information

Use this lesson plan to have your students consider questions like: Why are faked images and videos so popular? How might AI or deepfake technology subvert or threaten a democratic election? And perhaps more universally, How does the existence of this new technology change the way we see the world?

Recommended for:

Grades: 8-12
Subjects: ELA, social studies, newspaper/media, digital citizenship

Prep for teachers

  • Preview the media examples in the “Manipulated Media” Google Slide deck. If you’d like to use your own examples, you can simply make your own copy. Please note that the video in slide 8 contains explicit language — please preview before deciding whether it’s appropriate for your class.
  • Make copies of the “Manipulated Media” worksheet that accompanies the slides.

In the classroom

Hook (10 minutes):

Before showing the video, give students an essential question for their viewing: “What is deepfake technology, and what are its potential impacts for us, for our democracy, and for our world?”

As students watch, have them practice using questions from the “See, Think, Wonder” Visible Thinking routine to guide their critical thinking: What do you see? What did it make you think about? And, What do you wonder? 

Show the video: “Deepfakes and Democracy: Know Before You Vote.”

Discussion (15 minutes):

After watching, lead a whole-class discussion about the video. Ask your students: What did you see that stuck out to you? What did it make you think about? And, What do you still wonder about after watching the video? You can also follow up with some more direct questions for discussion, depending on time and your class’s response to the video:

  • Are you worried about what could happen if deepfake videos became more prominent? Why, or why not?
  • Should technology companies like Facebook, Google, or YouTube play a role in alerting the public about faked images or videos? Should the government play a role?
  • Right now, it’s easiest to make deepfake videos of famous people or public figures since the computer algorithms require a lot of existing images and footage to create a fake video. But as the technology advances, experts say, fakes could be made with far less source material (potentially with even just a few photos from public social media profiles). If that happens, how might everyday citizens be affected by this technology?
  • What are the potential consequences of deepfake video technology on our legal, criminal justice, or judicial systems?

Whole-class activity (20 minutes):

Ask your class to think about the larger role of manipulated media in our daily lives. Where else do we encounter images, videos, or audio recordings that are manipulated, edited, or altered in some way?

Then, view a series of manipulated images (and one video) in the “Manipulated Media” Google Slide presentation. Each of these examples has been heavily edited, doctored, or completely faked. As you view each example, have students complete the “Manipulated Media” worksheet (a variation of the “See, Think, Wonder” thinking routine from the video activity above).

  • Give students time to reflect and record their thoughts on the worksheet. 
  • As you work your way through the examples, ask students to share their responses and reactions.

Final Discussion/Writing Reflection (5-10 minutes):

Before finishing the lesson, wrap up the whole-class discussion and/or have students complete a five- to 10-minute exit note to tie all their thoughts together. You can use any of the questions above, or return to the lesson’s essential question: What are the potential impacts of deepfake technology for us, for our democracy, and for our world?

Possible follow-ups


19 Dec 2023 · Posted by Jeff

Relax….ChatGPT isn’t replacing teachers as much as this Panda isn’t replacing a scientist

Here are three reasons why we don’t need to be worried about ChatGPT making the essay obsolete:

We were already moving to a world that looks more like the “Flipped Classroom.” That is to say, a world where classroom time is increasingly dedicated to synthesis, and at-home time is increasingly dedicated to instruction. In a world where learning material was scarce, classroom time needed to be used almost exclusively for direct instruction. That’s of course no longer the case: students often rely on Google, Wikipedia, YouTube, Khan Academy, and maybe one day ChatGPT, to understand subject matter. As they should; that’s the world they’ll live in. Now that learning material is abundant, we can use in-class time for synthesis and higher-order tasks.

ChatGPT won’t approach human ingenuity for a long time. While self-driving cars feel like a phenomenon that’s only a decade old, in reality they’re the product of an effort 80 years in the making. And, weirdly, only a few years ago full autonomy felt more imminent than it does today. That’s because AI, like many things, follows a version of the Pareto Principle, or the 80/20 rule: 80% of outcomes come from 20% of the effort. Yet as we all know, you can’t be 80% of a good driver, and similarly you can’t be 80% of a good essay writer. The last mile in AI, whether it’s in self-driving or essay writing, takes a lot longer to traverse than it might seem. And as many educators have found, your average ChatGPT essay is still pretty bad.

The world isn’t static. When the car was invented, we also invented traffic signs and road markings to keep cars in line. Mere weeks after the introduction of ChatGPT we had GPTZero, which helps detect essays that were plagiarized from or generated by ChatGPT.

But those are reasons not to worry. Here are three reasons to be excited about what ChatGPT and AI has to offer in education:

Students can engage in a more diverse set of activities than ever. The essay is sort of one-size-fits-all: wherever synthesis needed to happen, the five-paragraph essay could fill the gap. Computing has given us a vastly greater diversity of activities to suit the subject matter and classroom culture. While many questions are yet unanswered, and humans are notoriously bad at predicting consequences, the trajectory of technology in education suggests that more tools than ever will be at our disposal.

Inspecting and inquiring with ChatGPT might be an interesting activity unto itself. Evaluation is the second-highest rung in Bloom’s Taxonomy for a reason: it’s highly enriching to stand up in the face of (what may appear to be) information and quibble with it or reject it. Observers of ChatGPT have noted that its bugs as an essay writer could well be turned into features by having students address its inaccuracies.

Humans are still gonna human. ChatGPT isn’t studying history or, say, physics, so much as it’s learning from the stories that humans have told about history. Every novel story created by a human since ChatGPT was last trained is a story that ChatGPT has yet to learn, which means it’ll always be one step behind. There are a kajillion-and-one ways to have students tell novel stories, like video-based storytelling on Flip, tier lists on Canva, or meme creation on Antimatter.

23 Nov 2023 · Posted by Jeff

AI and Writing: Detection….the next level.

It’s impossible to scroll more than a few seconds on social media without encountering someone mentioning how artificial intelligence is taking over the world & that everyone will probably be out of a job soon.

As great of a sci-fi movie as that would become, I don’t really think we’re at the point where AI can fully replace employees yet. That’s for another article.

Now can employees leverage AI to optimize their workflow & climb the work ladder faster? Absolutely. So much of a desk job can be sped up using AI (without any special training, too!)

From the release of ChatGPT to more nuanced writing tools like Jasper and Copy, AI is quickly becoming one of the most powerful tools in a digital employee’s arsenal.

Though many including myself raise the question:

Can anything written with AI be detectable? How does AI writing detection work?

The honest answer is not really.

AI detection is based on predicting patterns, and as large language models continue to evolve, it’s getting harder to separate these robotic pieces from human-written ones.

Why Is AI Text Detection Even Important?

AI is increasingly being used to create content in fields like journalism, digital marketing, academia, and even law.

Yes… law! Some lawyers thought they could get away with using it in the courtroom.

It’s not that AI-produced content is necessarily bad, but shouldn’t you have the option to determine what is and isn’t AI writing?

One of the biggest concerns going on is the importance of academic integrity. While AI tools offer numerous benefits, using AI-generated content in academic or professional settings without proper attribution can lead to severe consequences (and that’s if one gets caught using them).

What’s crazy is that 65.8% of people believe AI content to be equal to or better than human-written content!

Sometimes academic researchers can’t even detect the difference between the two.

A world where ChatGPT-produced thesis papers get published raises real safety concerns. Even if you can use AI to cite sources, you reach an ethical quandary.

Identifying AI text with a ChatGPT detector can help a person evaluate whether content meets a certain human standard, but it’s getting harder given how good AI writing has gotten in just the last few months.

On the bright side, knowing which pieces of copy have been written with AI can even help certain businesses decide if it’s worthwhile to keep their writing teams.

Certain tasks simply don’t require much human creativity and can be outsourced. If you run into a good piece of marketing copy written with AI, you may benefit from using the same AI tools to recreate it yourself.

What Does AI Writing Detection Mean?

At its core, AI writing detection relies on reverse engineering language patterns to measure how predictable a text is. The machine breaks down a piece of text & then uses algorithms to detect patterns within those words.

If a pattern is easy to identify, the text is more in tune with what an AI would write, increasing the odds it was written by AI.

The first letter in AI is actually really important – artificial. That’s what separates humans from machines. AI writing detection is further based on noticing the differences in how words are arranged and used.

Machines write text based on the billions of data points & patterns they were trained on, while natural human writing draws on a creativity that can’t easily be reproduced by one of these bots. More on that later.

AI detection is also about context. As persuasive as it may seem, a machine doesn’t actually understand the meaning of words, but it is very good at identifying commonly used patterns. It can also pick up on repeated phrases & words, which is often a tell-tale sign of automation or copy-pasting.

Is AI Writing Detection Accurate?

No, not really. AI writing detection tools really just make predictions about where a piece of writing came from. If you suspect writing is AI-generated and run it through a detector that says 100% AI, it probably was written by a tool like ChatGPT.

If you see anything under 80%, I wouldn’t be too confident about accusing it of being AI. There are just too many unknown and unprovable variables to account for. Just look at what’s happening to students across the world.

Take all the results you see with a grain of salt, as they really are just predictions.

There have even been tools that claim to bypass AI detection. Tools like Undetectable.AI and HideMyAI try to get around detectors, making the problem even worse.

How Do AI Detectors Work?

So now that we’ve gone over why detection is important and a little bit about what it means – we’ll explain what goes on behind the scenes.

Tools that “predict” AI content are largely based on analyzing the context to the left of the following word.

Imagine the sentence “The best part of my day is when I wake up for ___.” In this example, work is the word that the 117-million-parameter GPT-2 language model most strongly predicts.

The AI model will think back to all of its training data, then identify and analyze patterns in the context of the word set. It might know, for example, that the word “day” is often used after the words “best” and “part”. The algorithm will then calculate the likelihood of each word being the next predicted word, based on these contexts.

Based on its training data, the model gave the word work a 41% chance of occurring (the highest probability of any candidate word), so it predicted it.

Example of AI prediction showing a 41% chance of the word work being predicted – tested with GLTR
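The predict-and-check loop described above can be sketched with a toy bigram model standing in for a real language model. This is purely illustrative: the corpus, function names, and scoring logic here are made up for the example, not any detector’s actual implementation.

```python
from collections import Counter, defaultdict

def train_bigram(corpus_tokens):
    """Count next-word frequencies for each word (a toy stand-in for GPT-2)."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(corpus_tokens, corpus_tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def top_token_fraction(counts, text_tokens):
    """Fraction of words that are the model's #1 prediction given the
    preceding word -- consistently high values suggest machine-written text."""
    hits, total = 0, 0
    for prev, nxt in zip(text_tokens, text_tokens[1:]):
        if prev in counts:
            total += 1
            if counts[prev].most_common(1)[0][0] == nxt:
                hits += 1
    return hits / total if total else 0.0

corpus = "the best part of my day is when i wake up for work".split()
model = train_bigram(corpus)
print(top_token_fraction(model, "my day is when i wake up for work".split()))
# -> 1.0: every word is the model's top pick, so it looks machine-generated
```

A real detector does the same kind of check with a neural language model over millions of contexts, but the principle (how often does the text match the model’s top guesses?) is the same.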

Perplexity & Burstiness

Some other words you might see used when working with an AI detector are perplexity and burstiness.

Perplexity measures the unpredictability of a text. In simpler terms, it measures how surprising each successive word is to a language model. AI models strive for low perplexity, aiming for content that reads smoothly and logically.

Human writing often has higher perplexity, with more creative language choices and occasional errors.

Burstiness, meanwhile, evaluates the variation in sentence structure and length.
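As a rough illustration of these two metrics (a simplified sketch, not how any particular detector computes its scores), perplexity can be computed from the per-token probabilities a model assigns, and burstiness from the spread of sentence lengths:

```python
import math
from statistics import pvariance

def perplexity(token_probs):
    """Perplexity = exp of the average negative log-probability the model
    assigned to each token; low = predictable, high = surprising."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

def burstiness(text):
    """Variance of sentence lengths (in words); human writing tends to mix
    short and long sentences, AI output is often more uniform."""
    sentences = text.replace("!", ".").replace("?", ".").split(".")
    lengths = [len(s.split()) for s in sentences if s.strip()]
    return pvariance(lengths)

print(perplexity([0.5, 0.5]))       # ~2.0: every token a coin flip
print(perplexity([0.9, 0.9, 0.9]))  # ~1.11: very predictable text
print(burstiness("Short one. This sentence is quite a bit longer than that."))
```

Text with low perplexity and low burstiness, by this logic, is the most suspicious.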

Temperature

An important thing to understand when working with AI-generated text is the concept of temperature, a setting that controls the randomness of a model’s predictions. If the temperature is low, a model will output its most likely text, but the result can be quite boring because it has little variation.

If the temperature is high, the generated text will be more diverse, but there’s a higher chance of the model producing grammar mistakes or outright nonsense.

Consumer-facing AI text generation tools like Jasper and ChatGPT seem to err on the side of caution. Although ChatGPT’s responses vary more than Jasper’s, both are still fairly predictable models.

If you’re using an online tool to help write content, you’re working with pre-trained models (which generally set conservative temperatures to reduce glaring errors).

Low vs High temperature variables for AI (low has little variability, high has high variability)
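A minimal sketch of how temperature reshapes a model’s next-word distribution (the scores and word list here are invented for illustration; real models apply the same scaling to their own logits):

```python
import math
import random

def softmax_with_temperature(logits, temperature=1.0):
    """Scale logits by 1/temperature before the softmax: low T sharpens the
    distribution toward the top token, high T flattens it toward uniform."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]  # hypothetical scores for "work", "school", "tea"
print(softmax_with_temperature(logits, temperature=0.1))  # nearly all mass on "work"
print(softmax_with_temperature(logits, temperature=2.0))  # much flatter distribution

def sample(words, logits, temperature):
    """Pick the next word according to the temperature-scaled distribution."""
    weights = softmax_with_temperature(logits, temperature)
    return random.choices(words, weights=weights)[0]
```

At temperature 0.1 the top word is chosen almost every time; at 2.0 the alternatives get a real chance, which is exactly the variability-versus-correctness trade-off described above.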

So after calculating this for a single sentence, keep going with the rest of your text. If a sampled piece of text consistently selects the most predictable word throughout paragraphs, you’re almost certainly working with artificially generated text.

Think about it from a personal perspective – the best writers often make use of complex language and explain things in unpredictable, creative ways. Artificial writing doesn’t.

As language models become more and more complex, detecting AI from the predictability of words will get a lot harder. The more data in the training set, the more variability in the generations. But for now, you can follow this pattern to analyze large chunks of text.

It’s extremely simple in concept: To what extent can an AI model predictively regenerate a given example of text?

So How Accurate Is It?

It’s crucial to differentiate between AI detectors and plagiarism checkers.

While both tools aim to ensure content originality, they operate waaay differently.

AI detectors analyze text characteristics to determine if it’s AI-generated. Plagiarism checkers compare content against vast databases to identify potential copying.

One of these has a direct source to point to; the other (the AI detector) doesn’t. You might be able to get a prediction by running a sample of writing through a detector, but just know you’re not looking at anything that can be proved. OpenAI discontinued its official detector a while back. More recently, they said AI detection in general doesn’t really work.
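To make that contrast concrete, here is a toy version of the plagiarism-checker side: matching word n-grams against a known source always yields a traceable hit, unlike an AI detector’s statistical guess. (The functions and texts are illustrative only, not how any commercial checker works.)

```python
def ngrams(text, n=3):
    """Set of all word n-grams in a text (lowercased)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(candidate, source, n=3):
    """Share of the candidate's word 3-grams that also appear in the source --
    a crude stand-in for matching against a plagiarism database."""
    cand = ngrams(candidate, n)
    return len(cand & ngrams(source, n)) / len(cand) if cand else 0.0

source = "to be or not to be that is the question"
print(overlap_score("not to be that is the question", source))            # -> 1.0: a direct copy
print(overlap_score("an entirely different sentence about detection", source))  # -> 0.0
```

When the score is high you can show the reader exactly which source was copied; an AI detector can never point to a source, only to a probability.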

Best Tools To Detect AI Writing

Besides using math, there are grammatical and syntactical cues that can help you identify whether something was written with AI, but you could spot those just from reading.

So how do you determine what percentage chance the context of a word has? Well for starters, you could use a few online tools.

We wrote a larger article on how to detect AI content, but depending on what kind of writing you’re checking you could use CopyLeaks (free) or Originality (paid).

If you want to check academic, industry, or professional content (especially in mass) – look into Copyleaks. It’s free and is better than most other tools out there. We wrote a full review testing it out in much depth.

ChatGPT generating a sample paragraph
CopyLeaks AI detector returning a sample of ChatGPT text as being AI-generated

If you want to check for marketing or copywriting material that got forged using ChatGPT, I’d say Originality is a more in-depth and comprehensive choice.

Originality lets you check for AI, plagiarism, and gives you a percentage that it believes a block of text was written with AI. Check out our full review to see everything it can do.

Originality AI returning a sample of ChatGPT text as being AI-generated

Detecting AI Writing Manually

Beyond automated tools, you can do a little more digging yourself. I won’t lie – it’s very hard, and it’s only getting harder as these generators learn to write more and more like humans.

If you’re looking at poorly generated AI writing, these texts are often written in a monotonous tone, have predictable word choices, and lack bold, original statements.

AI-generated content might lean heavily on hedging phrases or display inconsistencies in voice and style.

The Future of AI Writing & Content Detection

Whether you like it or not, there’s really no guaranteed way of determining whether something was written with AI at this point in time. After ChatGPT went viral, tons of questions and concerns were raised about how it will impact the world: education, industry, and even literature.

The realm of AI isn’t limited to text. AI-driven image and video generators are on the rise, making it super important to detect AI-created visuals, especially with the proliferation of ‘deepfakes’.

That said, a guest researcher at OpenAI revealed that they’re developing a tool for “statistically watermarking the outputs of an AI text system.” Whenever a system generates text, the tool would stamp an “unnoticeable secret signal” indicating where the text came from.

If other companies follow suit, we might mitigate some of the ethical dilemmas raised by this new technology. I’m just not sure if it’s even mathematically possible to do.

Regardless, an unpredictable storm has started, and it won’t be over for a damn long time! Generative writing tools are only going to get more nuanced, more creative, and eventually more complex.

For now, it’s best to use your intuition combined with detection tools if you’re skeptical. The next few years are going to be very interesting, and it will be fun to see what’s in store. How long until artificial intelligence can seamlessly integrate with our society? We honestly might already be there.

18 Nov 2023 · Posted by Jeff

Generative AI, NGSS and Lesson Planning….you be the judge…..

Developing NGSS Lesson Plans Using AI

Using the same prompt, I asked ChatGPT, Bard, and Bing for responses….you be the judge as to which one did it best.

Prompt: Build me a lesson plan for 8th grade students learning about the structure of matter using the Next Generation Science Standards (Using ChatGPT)
Lesson Title: Exploring the Structure of Matter

Grade Level: 8th grade

Next Generation Science Standards:

8-PS1-1. Develop models to describe the atomic composition of simple molecules and extended structures.

8-PS1-2. Compare and contrast the physical and chemical properties of matter.

8-PS1-3. Plan and conduct an investigation to describe and classify different kinds of materials by their observable properties.

Objectives:

Students will be able to describe the atomic composition of simple molecules and extended structures.

Students will be able to compare and contrast the physical and chemical properties of different materials.

Students will be able to classify different materials by their observable properties.

Materials:

Models of atoms (such as balls and sticks or foam atoms)

Various materials for students to observe and classify (such as metals, nonmetals, and compounds)

Magnifying glasses

Lab notebooks

Procedure:

Begin the lesson by reviewing the basic structure of atoms and the difference between atoms and molecules. Use the models of atoms to help explain this concept.

Next, have students work in small groups to observe and classify various materials provided. Students should use magnifying glasses to closely examine the materials and record their observations in their lab notebooks.

After students have finished observing and classifying the materials, have them share their observations and classifications with the class. Discuss as a group how the physical and chemical properties of the materials observed can be used to classify them.

Conclude the lesson by reviewing the objectives and discussing the importance of understanding the structure of matter in everyday life.

Assessment:

Students will turn in their lab notebooks, where the teacher will evaluate their observations and classifications of materials.

Informal assessment will be done through observation of students’ participation in the group activity and class discussion.

Note: This plan is a general guide and could be adjusted based on the resources you have available and the specific needs of your students.

Prompt: Build me a lesson plan for 8th grade students learning about the structure of matter using the Next Generation Science Standards (Using Bard)

Lesson Plan: Exploring the Structure of Matter

Grade Level: 8th

Subject: Physical Science

Learning Objectives:

  • Students will be able to explain the composition of matter at the atomic and molecular level.
  • Students will be able to differentiate between solids, liquids, and gases based on their particle arrangement.
  • Students will be able to describe how changes in thermal energy affect the motion of particles and the state of matter.
  • Students will be able to apply their understanding of the structure of matter to real-world phenomena.

NGSS Standards:

  • MS-PS1-1: Develop models to describe that matter is made of particles too small to be seen.
  • MS-PS1-2: Analyze and interpret data on the properties of substances to identify patterns that explain why different substances have different properties.
  • MS-PS1-3: Gather and make sense of information to describe that synthetic materials come from natural resources and impact society.
  • MS-PS1-4: Develop a model that predicts and describes changes in particle motion, temperature, and state of a pure substance when thermal energy is added or removed.

Materials:

  • Variety of materials, including solids, liquids, and gases (e.g., sugar, water, helium)
  • Beakers or other containers
  • Thermometers
  • Hot plates or other sources of heat
  • Ice cubes
  • Modeling clay
  • Balls
  • Computer and internet access (optional)

Lesson Procedure:

Introduction (10 minutes)

  1. Begin by asking students what they know about matter. What is it? What are its properties?
  2. Introduce the concept of particles as the building blocks of matter. Explain that particles are too small to be seen with the naked eye, but they are constantly moving and interacting with each other.
  3. Engage students in a discussion about the different states of matter: solids, liquids, and gases. How are they different? What is the arrangement of particles in each state?

Activity 1: Exploring the Properties of Matter (30 minutes)

  1. Divide students into groups of 3-4.
  2. Provide each group with a variety of materials, including solids, liquids, and gases.
  3. Have students observe the properties of each material and record their observations in a table.
  4. Prompt students to identify patterns between the properties of the materials and their state of matter.

Activity 2: Modeling the States of Matter (20 minutes)

  1. Provide each group with modeling clay, balls, and beakers.
  2. Have students use the modeling clay to create models of solids, liquids, and gases.
  3. Ask students to explain how their models represent the arrangement of particles in each state of matter.
  4. Encourage students to compare and contrast their models.

Activity 3: Investigating the Effects of Thermal Energy (20 minutes)

  1. Fill two beakers with water at room temperature. Place one beaker on a hot plate and heat it gently. Place the other beaker in the freezer.
  2. Have students use thermometers to measure the temperature of the water in each beaker every minute for 10 minutes.
  3. Record the temperature readings in a table.
  4. Have students observe any changes in the state of the water in each beaker.
  5. Prompt students to explain how the changes in temperature affect the motion of particles and the state of matter.

Activity 4: Real-World Applications (10 minutes)

  1. Engage students in a discussion about how our understanding of the structure of matter is applied in various fields, such as medicine, materials science, and technology.
  2. Provide students with examples of how the structure of matter is important in everyday life, such as why ice cubes float in water and why balloons expand when filled with air.
  3. Encourage students to think of other examples of how the structure of matter affects our world.

Conclusion (5 minutes)

  1. Summarize the key concepts of the lesson, emphasizing the composition of matter at the atomic and molecular level, the differences between solids, liquids, and gases, and the effects of thermal energy on the motion of particles and the state of matter.
  2. Encourage students to reflect on what they have learned and ask any remaining questions.

Prompt: Build me a lesson plan for 8th grade students learning about the structure of matter using the Next Generation Science Standards (Using Bing)

Lesson Plan: Structure of Matter

Objectives

  • Students will be able to describe the structure of atoms and molecules.
  • Students will be able to explain the relationship between the structure of matter and its properties.
  • Students will be able to identify the three states of matter and describe their properties.

Materials

  • Whiteboard and markers
  • Interactive periodic table
  • Computer with internet access
  • Worksheets

Procedure

  1. Introduction (10 minutes)
    • Begin by asking students what they know about the structure of matter. Write their responses on the board.
    • Introduce the concept of atoms and molecules. Use the interactive periodic table to show students the different elements and their atomic structures.
  2. States of Matter (20 minutes)
    • Discuss the three states of matter: solid, liquid, and gas. Describe their properties and give examples of each.
    • Have students complete a worksheet where they identify the state of matter of different objects.
  3. Atomic Structure (30 minutes)
    • Explain the structure of atoms and molecules. Discuss the different subatomic particles and their charges.
    • Have students complete a worksheet where they identify the different parts of an atom.
  4. Properties of Matter (20 minutes)
    • Discuss how the structure of matter affects its properties. Give examples of how the properties of matter can be changed.
    • Have students complete a worksheet where they identify the properties of different objects.
  5. Conclusion (10 minutes)
    • Review the key concepts covered in the lesson.
    • Ask students to write a paragraph summarizing what they learned.

Assessment

  • Worksheets completed by students will be graded for accuracy.
  • Students will be assessed on their participation in class discussions.

This lesson plan is aligned with the Next Generation Science Standards. I hope this helps!

Learn more:

  1. nextgenscience.org
  2. georgiastandards.org
  3. scienceinthecity.stanford.edu
  4. isbe.net
  5. sciencebuddies.org (+3 more)

14 Nov 2023 · Posted by Jeff

ChatGPT and the AI Revolution

ChatGPT, the new artificial intelligence tool that can write remarkably cogent essays on any topic based on simple prompts, is sparking intense debates among educators about the nature and purpose of modern teaching methods — and how to sniff out plagiarists.

Why it matters: Some teachers foresee “a flood of cheating,” while others envision a big opportunity to improve — and modernize — how writing is taught.

Driving the news: Several school districts have blocked the chatbot from their systems, but critics say such bans are shortsighted and ineffective.

  • “Kids will always find ways to take shortcuts around hard work,” says Philip Vinogradov, director of innovation at Merion Mercy Academy, a private high school near Philadelphia.
  • From calculators and CliffsNotes to Google Translate, Grammarly and Photomath, teachers have always had to stay one step ahead of students, he notes.

Catch up quick: ChatGPT was released for free public use in November by research firm OpenAI, delivering an early holiday gift to tech-savvy students in need of, say, a pre-vacation essay about “Macbeth.”

  • The nation’s largest school district — New York City — swiftly blocked access to ChatGPT on city-owned laptops and networks (though teachers and students can presumably get it on their own devices).
  • A Princeton student wrote an app, GPTZero, to detect if something was written by ChatGPT.
  • OpenAI may add a watermarking system to flag plagiarism.

What’s happening: Teachers have been holding meetings and comparing notes about the tool’s ramifications and possible responses.

  • Suggestions have included requiring students to handwrite their essays, like in the days of yore.
  • Some schools are bolstering their honor codes, or doubling down on “authenticity pledges” students must sign before submitting work.

The intrigue: Some teachers are experimenting with ways to use ChatGPT in the classroom, either to explore the nature of “good” writing or to create linguistic scaffolding that students can build on.

  • Brett Vogelsinger, who teaches 9th grade English in Doylestown, Pennsylvania, invited his students to use ChatGPT as an aid — not a substitute — for writing an essay about “To Kill a Mockingbird.”
  • Only four took him up on it, and two dropped out, saying ChatGPT’s responses weren’t “long enough or deep enough or interesting enough,” Vogelsinger said.
  • For the others, it did “a good job of pointing out what parts of the text they should be thinking of.”
  • With ChatGPT, students have “this little AI friend who is going to bat around ideas with them — that’s how I look at it,” Vogelsinger told Axios.

What they’re saying: Students and teachers “will come to use this technology to augment the writing process, not replace it,” Marc Watkins, a lecturer at the University of Mississippi, wrote for Inside Higher Ed.

  • “It is crucial for us to teach our students about these models, how to use them ethically and what it will mean for their writing process.”

The other side: Others worry it’s “the end of high-school English,” per the headline on an Atlantic essay from teacher Daniel Herman.

  • “The arrival of OpenAI’s ChatGPT … may signal the end of writing assignments altogether — and maybe even the end of writing as a gatekeeper, a metric for intelligence, a teachable skill,” Herman wrote.

Reality check: The writing produced by ChatGPT is formulaic, and the content is often inaccurate.

  • It doesn’t include citations or sources, and its training data cuts off in 2021, so it knows nothing about later events.

Yes, but: “It’s a lot better than the writing of some of our kids,” Vinogradov told Axios.

The bottom line: Students who use ChatGPT to slack off do so at their own peril — but there’s educational potential here too.

  • “It’s a mistake to be relying on it for anything important right now,” tweeted Sam Altman, CEO of OpenAI.
Posted 13 Nov 2023 by Jeff

Can an AI critique human writing?

My previous post on OpenAI’s latest offering – ChatGPT – demonstrated how far the technology has come in essay writing.

ChatGPT is a large language model developed by OpenAI. It is capable of generating human-like text based on a given prompt or input. ChatGPT uses a technique called “transformer” architecture to process the input text and generate responses. This allows the AI to generate responses that are coherent and consistent with the input, while also incorporating its own ideas and creativity.*

In a very short period of time, we have seen public releases of AI writing technology which can rival student-level writing. This includes the structure, accuracy, and formal essay conventions like inline quotes and analysis.

One of the criticisms of the current technology, however, is that it frequently invents content or takes itself down mysterious rabbit-holes which end in nonsense. I experimented with forcing accuracy, including specifying that the AI had to use “real quotes” and to provide “actual, real references” which I could manually check. This noticeably improved the accuracy and veracity of the output.

I’ve been playing around with ChatGPT since it launched, and I’ve tried many different uses. Outside of the essay-style writing required by students, one of the most interesting so far has been using the AI to critique my own fiction.

Everyone’s a critic

I write a bit of fiction – generally sf and speculative fiction. Every now and again I’ll go through a period which is more productive than usual, and submit a round of short stories to different publishers and magazines. Sometimes, these stories end up published. More often, they bounce back with polite rejection letters, many of which include feedback from the reviewers. The feedback is generally very helpful, and on occasion I’ve edited and resubmitted a story based on that feedback and later had it published.

This process – drafting, editing, submitting, receiving feedback – isn’t unique to short story writing. It’s a process that most writers will go through on the road to publication. I wanted to see if OpenAI’s latest technology could successfully augment some of this process.

Earlier this year I submitted a short story to a science fiction magazine. It passed the first and second rounds, but was rejected at the final round. I later submitted the story to a second magazine, where it was again rejected. Along the way, the story received some great feedback from the various reviewers, including the following comments:

The story was okay, but perhaps not excellent in its current form. It was slightly difficult to tell the points of views apart and I believe the author wished to make Cathy’s nature ambiguous till the end, but there seemed to be something mechanical of her existence from the start (e.g. tipping her whole frame forward) that maybe gives too much away (had that been the intention).

All too possible future, great characters well written, great tension and an emotional ending.

The writing in this story is efficient and endearing. Setting and history are compactly conveyed. The characters are neatly conveyed in few words. I think the story needs a deeper plot to give these strong elements greater gravitas; the protagonist or his ‘interviewee’ could take some action, make some decision or sacrifice to achieve an outcome or bring some change. While the ‘reporter finds mythical creatures’ outcome is effective, I think there needs to be a little more.

This is a great concept that’s well-written and emotionally charged, but overall I felt that the journalist needed to play a more active role instead of finding the town/Cathy by luck. As it is, this piece seems more like a vignette that focuses on worldbuilding/backstory.

I really like this, but it could be better. The opening isn’t as compelling as the rest of the story – reconsider the start point. Revealing Cathy’s hometown as McAffery is probably unnecessary given that the arrival scene happens so quickly. There doesn’t seem to be any recognition from Cathy or interaction with the narrator, despite the fact that she has clearly recognised someone new is in town.

– Feedback from five different (human) reviewers

As you can see, the reviews ranged from lukewarm to positive. In terms of positives, the story was rated as well written, conceptually sound, and “emotionally charged”. As for the negatives: the points of view were hard to tell apart, the plot is shallow, and the opening a little weak.

The story sat in a (virtual) drawer until yesterday when I decided to see what our machine colleague might think of it. I pasted the story – about 3500 words in total – into ChatGPT and provided a very simple prompt: Write a 700 word critique of this story. List suggestions for improvements at the end.
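For anyone who would rather script this kind of request than paste into the web interface, the same prompt can be assembled and sent programmatically. A minimal sketch, assuming the OpenAI Python client — the model name and client usage here are assumptions, so check the current API documentation, and note that the actual call needs an API key:

```python
def build_critique_prompt(story_text: str, word_count: int = 700) -> str:
    """Combine the critique instruction used above with the full story text."""
    instruction = (
        f"Write a {word_count} word critique of this story. "
        "List suggestions for improvements at the end."
    )
    return f"{instruction}\n\n{story_text}"


def critique_story(story_text: str) -> str:
    """Send the story to the chat API and return the critique text.
    Assumes the openai package is installed and OPENAI_API_KEY is set."""
    from openai import OpenAI

    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder; substitute whichever model you use
        messages=[{"role": "user", "content": build_critique_prompt(story_text)}],
    )
    return response.choices[0].message.content
```

Re-running `critique_story` a few times is an easy way to collect multiple takes on the same draft.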

Here’s what it came up with:

This story is set in a post-apocalyptic world where the regions outside of the city are desolate and inhospitable. The protagonist, a journalist, is on a mission to find a story in these dead regions, despite the warnings of his colleagues. As he drives, he reflects on the lifeless landscape and the challenges of his assignment. Meanwhile, a character named Cathy navigates the desolate region on foot, encountering another survivor named Amir.

One of the main weaknesses of this story is its lack of clarity and direction. The narrative jumps between the journalist’s point of view and Cathy’s without any clear connection or purpose. The reader is left confused and unsure of the significance of the scenes and characters. Additionally, the characters are underdeveloped and lack depth, making it difficult for the reader to connect with them or care about their experiences.

Another issue with the story is its lack of world-building. The post-apocalyptic setting is intriguing, but the story does not provide enough details or descriptions to help the reader visualize and understand the world. We are told that the regions outside the city are dead and inhospitable, but we are not given any information about what led to this state or what the city looks like. This lack of context makes it hard to fully engage with the story and its themes.

To improve the story, I would suggest providing more details and descriptions to help the reader visualize the world and understand the characters’ experiences. This could include descriptions of the landscape, the city, and the characters’ physical appearances and movements. I would also recommend providing more context for the post-apocalyptic setting, such as how it came to be and how the characters are coping with it.

Additionally, I would suggest clarifying the purpose and connection between the journalist’s and Cathy’s stories. Are they related? If so, how? This could be done through dialogue, narration, or other means of showing the connection between the characters.

Finally, I would recommend fleshing out the characters to make them more relatable and interesting. This could include adding details about their backgrounds, motivations, and struggles in this post-apocalyptic world. This would help the reader connect with the characters and care about their experiences.

Overall, this story has potential but needs more clarity, world-building, and character development to truly engage the reader. With these improvements, it could be a compelling tale of survival and resilience in a post-apocalyptic world.

– Critique via ChatGPT

The first paragraph is a fairly innocuous summary of the plot. As soon as we reach the second paragraph, however, things start to get more interesting. The first point it picks up on is the point-of-view: “One of the main weaknesses of this story is its lack of clarity and direction. The narrative jumps between the journalist’s point of view and Cathy’s without any clear connection or purpose.” This mirrors the first reviewer’s comment that it is “difficult to tell the points of views apart”.

Next, the AI critiques the “lack of world-building” and “lack of context”. Compare this to the human reviewers’ comments that the story “seems more like a vignette” that “needs a deeper plot”. The AI even uses the term “world-building”, as did one of the reviewers. Ultimately, the AI makes recommendations based on those flaws. One of them, “showing the connection between the characters,” echoes the final reviewer’s comment that there “doesn’t seem to be any recognition from Cathy or interaction with the narrator”.

What does this mean for human writing?

Much of the discussion around these AI language generators has been about “cheating”. My own earlier experiments – writing essays and short stories – centred on how students might use these technologies to perform the kinds of assessment tasks we set in schools.

The implications of this approach are much more interesting. Had I presented my short story to ChatGPT prior to submission, I would have received very similar feedback to that which I ultimately received after several months of the human review process. I could have worked on these aspects, strengthening the relationship between the characters and the overall depth of the plot. In fact, I could have gone through this process multiple times. I re-ran the prompt several times, and each time ChatGPT provided similar feedback. This included, “the characters themselves are also not well-developed,” and that the story “lacks a clear conflict or plot.” Subsequent runs also provided more specific and nuanced feedback, including:

The descriptions of the desolate regions outside of the city are well-done, but the constant mention of the “orange dust” that covers everything becomes repetitive and dull. The inclusion of the error code and file retrieval failure in Cathy’s perspective also feels out of place and adds nothing to the story.

The potential to iterate through writing with the support of a keen-eyed AI assistant is much more interesting to me than the simple threat of “cheating”.

The more we experiment with these technologies, the more we will find creative ways to work with artificial intelligence in education. If you’re a teacher, writer, parent, or in any way involved in education I’d encourage you to check out what ChatGPT has to offer.

Posted 10 Jun 2023 by Jeff

Socrates Against The Machine: Can Looking Back Help Us to Think About the Future of Education?

In the past two weeks, AI writing has gone mainstream. OpenAI’s ChatGPT, an app sitting on top of its latest large language model, was released for “public testing”, and it seems to have captured the imagination of a large chunk of the internet.

The website passed 1 million users in its first five days. For comparison, Instagram took about two and a half months to reach that many users, and Netflix over three years. Whatever your opinion on the quality of the writing – and opinions range from “it’s a wooden, clunky, bullshit machine” to “this is going to spell the end of human writers everywhere!” – ChatGPT and similar models are bound to disrupt the education system. As educators, we need to learn the capabilities and limitations of the technology.

ChatGPT: The Cheating Machine

Most articles on ChatGPT and education are focused squarely on the biggest perceived threat: students using it to cheat on essays. It’s a valid fear. With a little bit of tinkering and almost no content knowledge at all, it’s easy to craft an essay that seems plausible and well written.

To test the capabilities of the Great Cheating Machine, I asked ChatGPT to dream up 5 possible areas of study which might be considered esoteric or a little left-of-centre. Here’s what it came up with:

The history and rituals of ancient esoteric religions, such as Gnosticism or the Kabbalah.
The theories and mathematical models behind quantum mechanics and the fundamental nature of reality.
The psychology and neuroscience of dreams, hallucinations, and altered states of consciousness.
The biology and ecology of deep sea creatures, which live in some of the most extreme and alien environments on Earth.
The study of unusual and exotic physical phenomena, such as cold fusion, time crystals, and spontaneous human combustion.

I can confidently say that with the exception of quantum mechanics (I am a science fiction nerd after all), I have no subject matter knowledge in any of those areas. I’m going to run with number three. Imagine a course on the neuroscience of hallucinations. The content is delivered over a semester and assessed primarily through a well-researched essay which includes a literature review and a reflection on the student’s own experiences of hallucinations (stay with me…).

After Googling a couple of choice readings for my “literature review” and dumping them into ChatGPT, here’s the start of my assignment:

Hallucinations are a common symptom of schizophrenia and other mental health disorders (American Psychiatric Association, 2013). The neuroscience of hallucinations has been a subject of intense research, with a growing body of evidence suggesting that several brain areas are involved in their generation, including the sensory cortex, insula, putamen, and hippocampus (Arguinchona & Tadi, 2021). The process by which sensory representations become selected for conscious processing is not well understood, but theories like the Global Workspace Theory argue that the brain's interconnected processors select relevant pieces of information for broadcasting to other brain processors, rendering the information accessible to consciousness (Baars, 1988, 1997, 2019). Other theories, such as the Triple Network Model (Bolton et al., 2020) and the Higher-Order Approach to Consciousness (Brown et al., 2019) propose different mechanisms by which hallucinations may arise. Research continues to explore the underlying neural basis of hallucinations and how they may be treated (Craig, 2002; Diederen et al., 2010; Laroi et al., 2008; McCarthy-Jones, 2019; Mills et al., 2016; Schmack et al., 2013).

Are the references accurate? I didn’t bother checking – I’m no expert on neuroscience or hallucinations, but this proves my point: ChatGPT can write. What about the second part of the task – the personal reflection? Can the GPT shift voice easily from the academic to the reflective, while maintaining its grip on the content knowledge?

Let’s see…

Well… That works.

Obviously this would need some work to meet the requirements for a complete tertiary level assignment, but now we’re into the realms of prompt engineering. Give me half an hour, and I could have this expanded out to a few thousand words.

I still don’t know anything about the neuroscience of hallucinations.

The Future of Writing in Education

So, if I can churn out a half-decent written assignment in a matter of minutes, what does that mean for the future of essay writing in education? Long before ChatGPT was released I wrote a post with a few possible scenarios. Unfortunately, the most likely scenario is that the system rejects AI writing entirely, and teachers, lecturers and tutors are forced to find ways to police the technology. Theoretically there are many ways to do this: banning and blocking the websites, writing policies forbidding the use of AI, and even cryptographically watermarking AI output have all been proposed or attempted. In practice, people will always find a way around these safeguards.
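Detection tools are part of this policing effort. Apps like GPTZero reportedly lean on statistical signals such as perplexity and “burstiness” — how much sentence length and structure vary across a text, with human prose tending to vary more. A toy illustration of the burstiness idea only; this is my own sketch of the concept, nowhere near GPTZero’s actual algorithm, and far too crude to use as a real detector:

```python
import re
from statistics import pstdev


def burstiness(text: str) -> float:
    """Population standard deviation of sentence lengths, in words.

    Higher values mean more variation between sentences — one crude
    proxy for 'human-sounding' prose. Texts with fewer than two
    sentences score 0.0 by definition.
    """
    # Naive sentence split on terminal punctuation; a real tool would
    # use a proper sentence tokenizer.
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return pstdev(lengths)
```

Uniform, formulaic output scores low; prose that mixes short and long sentences scores higher — which is exactly why such heuristics are easy to game and easy to fool.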

The future of writing as an assessment tool looks bleak. If we can’t effectively police the technology, then the next obvious step will be to monitor and control student behaviour. This means an increase in timed and invigilated examinations, writing by hand, and lengthy authentication processes which typically fall to the teacher to oversee.

Deakin University’s Phill Dawson has written extensively on the impact of increased authentication pressures in tertiary education as more students work remotely. Whether online or face-to-face, trying to crack down on cheating via surveillance places a burden on instructors and can result in uproar from students.

Fortunately, there are alternatives to this heavy-handed approach, and assessment doesn’t have to be conducted via writing at all.

Can we do better than the silent, supervised examination? Image via Stable Diffusion


Enter Socrates

Writing serves purposes other than assessment. People write for expression, for reflection, to pin down and give shape to their knowledge and understanding. If we shift the paradigm away from writing as the primary tool for assessment, then we can start exploring alternatives.

There are many ways to assess knowledge beyond a written assignment. Project-based and scenario-based learning, industry and workplace observations, and oral examinations like a viva voce offer students alternative ways to demonstrate their understanding and application of a subject. I hope we’ll see an increase in methods like these in both secondary and tertiary education as the system catches up with the realities of AI writing technologies.

Amongst these practices is the age-old Socratic method: a dialogue between student and teacher designed to prompt the student through increasingly thought-provoking questions. Both student and teacher may ask questions, with the process designed to probe any underlying assumptions, knowledge gaps, or biases in understanding. In the Socratic method there is a tacit acknowledgement that neither the teacher nor the student has all the answers: the questioning should not be adversarial, but tentative and even playful.

Questioning of this kind quickly demonstrates the depth and extent of a student’s knowledge. Even if the student had used an AI to write an initial response, it would quickly become apparent that the knowledge was built on shaky foundations – not dissimilar to a student who Googles the answers but never goes beyond the first couple of links in the search results.

The Socratic Seminar

One criticism of the Socratic method is that it can be too intimidating for some students. Faced with direct questioning and lacking thinking time, students may find themselves unable to answer. There is also a time factor: conducting a rich and lengthy dialogue with every student may be possible in a class of 15–20, but in a larger secondary class, or a tertiary course with hundreds of students to a single lecturer, the method is impractical.

Socratic seminars take some of the ideas from the Socratic method and turn them into an effective group activity. This is the basic process:

  1. Choose a text or topic that invites authentic inquiry
  2. Prepare students by letting them know when a Socratic seminar will be held and providing them with tools for annotating the text or taking notes on the topic
  3. Prepare open-ended questions for the seminar
  4. Establish student expectations for behaviour during the seminar
  5. Establish the teacher’s role as facilitator rather than participant
  6. Assess the effectiveness of the seminar through reflection and goal-setting.

Read Write Think has a straightforward description of the full process here.

Beyond the Cheating Machine

This is where we can bring ChatGPT and other LLMs into the equation. Artificial Intelligence technologies should – ideally – reduce human labour in tasks which can easily be automated. At their best, they can also provide inspiration and creative ideas.

Imagine a Socratic seminar discussing this overarching topic: How do monstrous or inhuman characters in gothic literature reflect the fears and anxieties of their respective time periods, and what do they reveal about the nature of humanity? Here are a few ways in which we might use ChatGPT to support the seminar:

Students use ChatGPT to build their knowledge

Students might use ChatGPT as a research tool. GPT-3.5, the current model, has issues with reliability. In the near future GPT-4 and Google’s LaMDA will likely overcome some of these issues, but students will still need to review the materials with a critical eye. In a sense, students are using the LLM as a “super-Google”, reducing the time spent trawling multiple websites for information.

Note the development here from a broad question about the topic and a request for further reading, to some more specific follow-up questions. Also, note how ChatGPT’s limited “memory” keeps the answers focused on the initial topic of inhumanity and fear:

Using ChatGPT to support the teacher

Step 3 above requires the teacher to provide questions for the seminar. Providing the questions to students ahead of time allows them to order their thoughts and prepare responses, and makes for a richer discussion. Using a resource from FacingHistory.org, I provided ChatGPT with some generic sentence stems for suitable questions, and asked it to provide questions tailored to our discussion:

Some of these would need tweaking. Question 4, for example, is more like a probing question that might occur midway through a discussion. Nonetheless, this would save the teacher a lot of preparation time.
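The prompt itself is just templated text, so it can be assembled programmatically and reused across topics. A rough sketch of the stems-plus-topic request — the stems below are illustrative stand-ins of my own, not the actual Facing History resource:

```python
# Illustrative stand-ins only, not the real Facing History sentence stems.
SEMINAR_TOPIC = (
    "How do monstrous or inhuman characters in gothic literature reflect "
    "the fears and anxieties of their respective time periods?"
)

SENTENCE_STEMS = [
    "What do you think the author means by ...?",
    "How does ... connect to ...?",
    "What evidence supports the idea that ...?",
]


def build_question_prompt(topic: str, stems: list[str], n: int = 6) -> str:
    """Assemble a prompt asking the model for n open-ended seminar
    questions tailored to the topic, modelled on the given stems."""
    stem_block = "\n".join(f"- {s}" for s in stems)
    return (
        "Here are some generic sentence stems for discussion questions:\n"
        f"{stem_block}\n\n"
        f"Write {n} open-ended questions in a similar style, tailored to a "
        f"Socratic seminar on this topic: {topic}"
    )
```

Swapping in a new topic or stem list then generates a fresh question set with no extra prompt-writing.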

Moves and countermoves

Part of the Socratic method is debate and discourse. Whether or not you included this kind of back-and-forth in the Socratic seminar, it would be a useful exercise to teach students how to defend their positions. Because ChatGPT is trained on dialogue, it is well suited to “role playing” and can be used in a number of scenarios to help refine and test arguments. Here are a few examples of ways in which students (or teachers) could use the bot to work on their debating skills:

Image via Stable Diffusion

Classrooms filled with robots

One of the biggest risks to education is not keeping pace with these technologies. Whether we like it or not, students will use AI writing tools to cheat. On the flip side, there will be many companies using the technologies to produce tools and apps which “assist” teachers with marking and feedback. I’ve already tried using ChatGPT to critique some of my fiction writing, with pretty good results.

If we’re not careful, we might end up in a situation where we have essays written by AI, marked by apps underpinned by AI. In the process, no one learns anything at all.

AI isn’t going to replace human writing any time soon. But it does mean that we need to rethink how and why we teach and assess writing. We can adopt these tools and work with students on their ethical, appropriate, and creative use. We can use AI tools as another opportunity to build strong relationships through dialogue and discourse with our students. If we don’t, we might as well replace ourselves with robots.

Posted 9 Jun 2023 by Jeff

Posted 3 Apr 2023 by Jeff