Barb Oakley, back again here.
I had mentioned that generative AI has more problems, of course, than just hallucination. One of the biggest problems from our perspective as professors is that it makes it really easy to cheat. This is indeed a problem. But I should say that long ago, when people were first learning to write, they were pressing cuneiform into clay tablets. And people complained. They said, "Hey, you're going to lose your memory. You're not going to remember things if you write them down." There were a lot of well-placed criticisms of learning to write, because people would no longer rely so much on their memory. It's rather similar now. People know that there are bad aspects of generative AI, but the trade-off is such that it can be worth it.

For us as professors and teachers, though, generative AI has really pulled the rug out from under us. It has changed many of the ways we have used in the past to help students learn, for example, writing an essay or working out homework problems. For all of these things, learners used to have to go off and do them on their own. You don't necessarily need to do that now with generative AI. That means not only is it easier to cheat, but you can also end up learning more lightly. Why learn to write an essay, for example, if you can just go to generative AI and ask it to write essays for you? Well, that is a problem. Over the next decade and the decades to come, people will be figuring out solutions to some of these challenges.

But there's another way of looking at this that can help you understand why it's still important to learn some of these ideas, whatever you're studying: to learn the rigor of mathematical thinking, for example, to become very familiar with some of the key aspects of programming, or to learn to speak another language. What I'm really driving at is related to something called the Flynn effect. Researchers found that from the 1930s to the 1970s, IQ scores rose dramatically. Why was that? The conclusion, finally, was that it was due to education. People had improved access to good ways of learning and, of course, improved education. As a consequence, IQ scores went up. People got smarter.

But here's the funny thing. From the 1970s onwards, there's plenty of research showing that IQ scores have been declining. You can see this even within a family: kids born after the latter part of the 1970s have lower IQs than their older siblings in the same family. Why would this be? In the 1970s, calculators came out. When calculators came out, the attitude suddenly became: you don't need to remember things anymore, you can always just look things up. As you'll see later in this course, that's a real problem. Students stopped practicing within their own minds and remembering things.

What this really tells us is that with the revolution of generative AI, we don't want to make the same mistake we made when calculators came out. We want to keep learning and getting practice with the essential ideas that are really important, for example, learning the multiplication tables or learning how to put together an essay. We don't want to be like the 1970s, when everybody got all excited about a new technology and started saying you don't actually need to learn some of these important ideas because you can always go look them up. This idea that generative AI is problematic is, I think, an important one for us to keep in mind.
But I also want to say something that's a little bit in defense of generative AI. I wasn't good at math and science when I was growing up. I enlisted in the army out of high school, studied a language, Russian, which I picked at random, and became a translator. But then I realized that I'd followed my passion, just as everyone had told me to do, but I hadn't broadened my passion by learning about other fields. That meant when it came time to get a job, it was pretty tough for me, because few people were interested in my Slavic languages and literature skills. At age 26, I went to university and decided to try to retrain my brain and learn math and science. I started at the lowest possible level, with remedial high school algebra, and slowly began climbing my way upward. Now, of course, I'm a professor of engineering, and I love math and science.

But one challenge I faced when I was trying to retrain my brain was that professors and teachers would often keep things hidden. In other words, you might get a lot of homework problems, but your teacher didn't necessarily want to give you the solutions, because then you'd know what the answers were, and some people would use that to cheat. You couldn't get a lot of problems to solve and work with that related exactly to what the professor was teaching in the course. It was just difficult to find lots of practice problems, because teachers certainly weren't willing to share their testing or practice materials.

Nowadays, it's very different. If you're really interested in learning, you can get lots and lots of practice materials from ChatGPT and other engines. You don't have to worry so much that the knowledge is reserved only for those who worked really hard to create the test materials and the like. The world is more open for learners who truly want to learn, and you want to be someone who truly wants to learn. Again, I'll refer to Fei-Fei Li's quote: AI won't replace humans, but humans who use AI will replace humans who don't.