We have enough orange cat lovers that you can always do orange cat posting here, and I bet the mods will let you get away with machine learning / AI stuff too.
How big of a problem is AI pulling from other AI? Also when are they going to release reliable tools to detect AI writing? I'm a professor and this is a huge problem for us.
Fair enough
"Trustworthy" AI is a really big concern. These large language models like ChatGPT ingest massive amounts of data and where do they get it from...scraping the internet. Now regardless of how you feel about things, one way or another, that data probably needs to be curated...and even in that you can say you can be placing bias on the model. Which leads into AI pulling from AI...its not magic, its statistics and math providing its best representation of what you say the world is, based on the dataset you give it. If its all feeding from the same source...I haven't focused on that extensively but its going to infer based on what you've already trained the model on. I also don't envy you, especially if you're dealing with a massive quantity of students. I think at the moment most of the plagiarism stuff just looks for matches to references on the internet.
That's the problem. You can't give a bad grade for using AI if the detection tool gives occasional false positives. I've heard there are reliable tools that haven't been released yet. For now, students sometimes make it so obvious that you don't need a special tool to figure it out - I just got half a dozen papers that all concluded with the phrase "my aunt is a shining example of economic resilience".
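Just to put rough numbers on why false positives are a dealbreaker (the rates here are made up for illustration, not from any real detector): even a tool that's wrong only a couple percent of the time ends up falsely flagging several honest students in a normal-sized course.

```python
# Back-of-envelope math with hypothetical numbers, not real detector stats.
honest_papers = 200          # papers that were NOT written with AI
false_positive_rate = 0.02   # detector wrongly flags 2% of honest papers

expected_false_accusations = honest_papers * false_positive_rate
print(expected_false_accusations)  # 4.0 honest students flagged per batch
```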
Have you tried, and I kid you not, copy+pasting your syllabus and assignment instructions verbatim into ChatGPT? The last part of what you said makes me think that's what happened, and if you can get it to regurgitate what the student turned in... it's not a great answer, but for the blatant cases I think it's useful. A smart student can explain what they wrote; a dumb dumb can't.
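If you ever wanted to semi-automate that regurgitation check, a minimal sketch might look like this (the file names, the cutoff, and the whole idea of diffing against one generated answer are my assumptions, not a vetted detection method - treat a high score as a reason to talk to the student, never as proof):

```python
from difflib import SequenceMatcher

def similarity(student_text: str, generated_text: str) -> float:
    """Rough 0-1 similarity between a submission and whatever ChatGPT
    produced when the same assignment prompt was pasted into it."""
    return SequenceMatcher(None, student_text.lower(), generated_text.lower()).ratio()

# Hypothetical usage: flag for a follow-up conversation, not for a grade penalty.
score = similarity(open("submission.txt").read(), open("chatgpt_output.txt").read())
if score > 0.6:  # arbitrary cutoff, just a conversation starter
    print(f"Very similar ({score:.0%}) - worth asking the student to explain their paper.")
```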