With the rapid emergence of generative AI, more people are jumping on the bandwagon with tools like Microsoft Copilot, integrating them into their lives and workflows to handle mundane, repetitive tasks and free up time for more complex work that requires them to flex their mental muscles.
However, a new study by Microsoft researchers in collaboration with Carnegie Mellon University reveals that overreliance on AI may negatively impact a person's critical thinking, leading to the deterioration of cognitive faculties (via 404 Media).
According to the researchers:
“[A] key irony of automation is that by mechanising routine tasks and leaving exception-handling to the human user, you deprive the user of the routine opportunities to practice their judgement and strengthen their cognitive musculature, leaving them atrophied and unprepared when the exceptions do arise.”
The extensive study evaluated AI use cases in the workplace and the confidence, or lack thereof, employees showed when leveraging AI to accomplish tasks. Interestingly, the findings revealed that employees who confidently relied on AI at work struggled when presented with scenarios that put their critical thinking to the test, compared to less reliant users. This gave the latter group an edge, allowing them to leverage their own knowledge to enhance the quality of the AI-generated output.
Last year, I reported on the potential long-term negative implications of overreliance on tools like ChatGPT and Microsoft Copilot. The same sentiments have been echoed across social media, with some users claiming they've "lost some brain cells." "I can really see that ChatGPT will make us more dumb as we will increasingly use AI without thinking and engaging our brain. Do other people share this opinion as well?" another Reddit user added.
Perhaps more concerning, some users claimed to have lost the motivation to exercise their critical and creative thinking skills, instead leaning heavily on AI tools like ChatGPT for a "quick fix."
This news comes after a BBC study revealed that AI tools like ChatGPT and Copilot struggle to differentiate opinion from fact when generating summaries of news stories. The study further found that the summaries were riddled with inaccuracies and distortions.
Last week, we reported on Bill Gates' sentiments on AI and his prediction that the technology will replace humans for most things. As it happens, the philanthropic billionaire's views are echoed by key players in the tech industry, including NVIDIA CEO Jensen Huang, who claims coding might already be dead in the water given the prevalence of AI, while Elon Musk foresees a utopian future where AI takes jobs from humans, turning work into an optional hobby.