I've been thinking about this for a while now... honestly (though I'm very embarrassed), I've unfortunately witnessed the decline in my academic performance that AI has caused. From some neurological research and some preliminary studies, I understand that a kind of vicious cycle develops because of the reward (for example, a good grade) that the brain loves. Even if we're not consciously aware of it, the brain gets used to not putting in the effort because it finds security in this tool that, with a click and a few loaded PDFs, can save you time (and diminish your knowledge). In my case, I fell so far behind in my subjects that I started using it, and I found myself addicted to it, something I wasn't proud of at all.
But I'm working on it and I've noticed a big change :) It's possible to break this dependency and to start, little by little, trying to solve problems and study on our own. It's not impossible; we just have to reawaken our brains and stop using AI as a replacement. See it as a tool that helps, perhaps to organize or to test yourself, but never let it think for you, because that's where everything is put at risk.
What you say is very interesting, and I see it the same way you do.
I don't consider AI a tragedy; it's simply the continuation of a historical trend.
When calculators arrived, they diminished our ability to calculate mentally, but in return, we gained the ability to dedicate our resources to more valuable tasks than simply counting. With AI, it will be similar; it will weaken some of our abilities, but we'll be able to enhance others.
I use AI for tedious tasks that don't require my "human" input, tasks that are purely repetitive. I'm referring to translating languages, creating tables from scattered data, or compiling information that, if done manually, would take me countless hours or days. Sometimes I also use it to discuss a topic, to identify gaps in my knowledge, or to have it offer criticism.
Thanks, good comment :)
The problem I had many years ago with academia was the workload… I can understand why students are tempted to use AI to get the work done… but to me the whole point of college was learning to think. So using tools that spit out texts but don't reflect how we individually think through a thing ultimately works against us, even if the grades get us the nice-looking résumé.
But now it seems since we need a degree to do basic work (unlike in previous generations… my grandfather worked on Wall Street from the 1950s to the 1980s with only a GED), it's more important to get good grades than to actually prove we're thinking through things? That's been a problem for a while in my opinion, but AI access has exacerbated it.
Wow this is scary. I don't use it but my grandkids do. It's bad enough that spell check has taken away the need to remember how to spell! I don't think I'll be able to convince the grandkids to lay off the AI. Wondering if there is some other way to keep those brains working... Thanks for the great report. I love your footnotes with authorities for your propositions.
Hi David! :)
I think this will follow humanity's inevitable trend of delegating some tasks and enhancing others. Calculators are a good example; they reduced our mental calculation abilities but freed up resources to focus on more important tasks than counting. Something similar will happen with AI, which will cause some of our abilities to decline, but will likely leave us time to cultivate other, even more important skills. That has been the historical trend. What's clear is that we're in a period of change.
Thanks Álvaro. Of course you are right. We do evolve even as a result of technology and will continue to do so. I wonder how long it will be before we can identify certain attributes that have evolved because of technology. They are probably already here and we just don’t realize it. Perhaps from farming?
Maybe, I don’t know much about that, but it’s quite interesting. I’ll look into it.
First of all, thank you so much for this post. I think it was/is needed. Over on this side, we use AI like a slave librarian, drawing facts from unlimited sources and bringing them back in seconds. Besides, don’t we TELL the AI exactly what to do? And when it tries to take over, go further, or veer into something we don’t want, we redirect it, redirect it, like a parent with a child, because ultimately we can do this with or without AI. So, long story shorter: I am blessed with endless ideas, so after processing which one to give birth to and how it should be presented, I ask AI to rephrase if needed, period. Nonetheless, it sounds like some people are just plugging in and saying "do all the thinking so I don’t have to." GZA from the Wu Tang Clan once wrote, “The significance was not the vulgar applause of interest, but the feeling that exit[s], at the completion of a sentence." That’s a mantra for me, and I’m not letting no one or no-thing rob me of that feeling. ✌🏾
Hi Abir!
Oh, I hadn’t thought of it that way! I think AI can be like that “tireless librarian” you mentioned, just a tool to free up time and mental energy, not to replace our thinking.
In the end, I believe it’s part of a historical trend. When we started using calculators, we lost some mental calculation ability, but that allowed us to develop more valuable skills. This will be something similar, and we’re just beginning that change.
And thanks for the Wu Tang Clan mention, I didn’t know that! :)
Oh, I agree with you wholeheartedly. Plus, without AI, we lose our grip on the quickly changing times around us.
Thank you for this concise article on what has been bothering me. I do not use ChatGPT or its equivalents yet, but am being pressured to. It’s exactly this desire to learn how to think better that keeps me from using it. It takes longer for me to work things out, but I do.
I work in a law office and we’re constantly talking about “AI is here and how can we use it ethically… we should figure it out because it’s being foisted on us so we’re not going to have a choice but to use it eventually, why not now…”
I don’t know. This reasoning seems so lame to me. So far all I see is example after example of how not to use it. Lawyers getting disbarred or giving inaccurate legal advice that they would not have otherwise.
Another author on Substack just published an article the other day about a judge in Argentina who used AI to draft his judgment, and the whole case got thrown out as a result… but only because he accidentally pasted the AI prompt, along with the output it gave, into his legal judgment, and the lawyers receiving the judgment flagged him for misconduct. If he hadn’t made that error, he wouldn’t have been caught, and his judgment (thinking-wise and legal-wise) may not have been analyzed for its appropriateness.
Lawyers in particular are taught in law school how to think critically, and on their feet…
And it reminds me of what one lawyer told me 30 years ago: he refused to type out legal arguments in a brief using MS Word, preferring to compose the brief with a dictaphone and have the secretary type it out, because he believed that writing by speaking aloud was critical to a trial lawyer’s ability to think and speak effectively at trial. In the same way, this new iteration of “ease” does, I agree, offload our critical thinking skills even more, which is troubling to me.
I have more thoughts on this but I’ll stop here for now! I extend the lawyer conundrum to each of us, because critical thinking is important for all of us… this is one reason I went to a liberal arts college!
Hi Rosemay!
Wow, what a story. I understand; the pressure to adopt new tools like AI can cause us to lose critical skills.
I think that over time there will be regulation of its use in certain sectors, and it will be like with calculators. By using them, we lost mental calculation ability, but they also freed us up for more valuable tasks than counting. In the long run, I think it will be the same with AI; some skills will decline, but it will generally empower us when we cultivate the new skills that will be necessary in the future.
https://open.substack.com/pub/ethicsv1/p/the-judge-the-ai-and-the-verdict?r=1iuw5r&utm_medium=ios