Over the last twelve months, there has been an incessant buzz around Artificial Intelligence (AI) and what the technology can achieve. Many companies have risen to the occasion, riding the hype to launch products infused with Large Language Models (LLMs).
AGI Emulates Human Cognitive Ability
One branch of AI is Artificial General Intelligence (AGI), a technology capable of understanding, learning, and applying knowledge across a wide range of tasks, much like a human. As generally understood, an AGI machine or piece of software can reason, think abstractly, draw on background knowledge, and transfer learning from one domain to another. It can also distinguish between cause and effect, among other capabilities.
Ultimately, it is safe to say that AGI attempts to emulate humans in terms of cognitive ability. It identifies unfamiliar tasks, takes in new experiences, and comes up with new ways to apply the knowledge gained.
Ordinarily, humans gain experience and knowledge from their daily activities and the events around them. This could happen at home, at school, or in the workplace: through conversations with people, observing what goes on around them, reading a book or article, or even watching television.
Without much effort or conscious thought, the human brain combines these bits of information and forms decisions around them. Those decisions come in handy when difficulties arise that relate to the information gathered.
Differentiating Between Artificial Intelligence And AGI
The goal behind the development of AGI is to build a machine that can carry out advanced processes and achieve results on par with human performance.
Scope and capability are the key factors distinguishing AGI from regular AI, also known as narrow AI. AI tools such as ChatGPT, Gemini, Sora, Grok, MM1, and others are function-specific and remain limited to their built-in parameters. AGI, by contrast, is not so confined; it offers a broader, more generalized kind of intelligence.
The prospects of AGI are believed to have piqued the interest of OpenAI CEO Sam Altman. In a recent interview, he stated his intention to invest billions of dollars in the development of AGI.
Humans Fear the Unpredictability of AGI
Even though human cognitive ability is the blueprint for AGI, many people remain worried about the technology for a variety of reasons. Even Altman’s keen interest in and commitment to the nascent technology has done little to allay their apprehension.
There is the challenge of unpredictability: because AGI can develop its own goals and behaviors, there are fears about how it might act, especially if its goals conflict with human values or safety. AGI could also pose an existential risk to humanity if it were ever to become superintelligent, which remains a possibility.
In the long run, humans may lose control over AGI, and this is a source of concern. As the technology becomes more advanced, traditional controls and regulations could become ineffective. Among other challenges, AGI could be used for malicious purposes, such as cyberattacks, surveillance, or autonomous weapons.
These fears surrounding AGI are rooted in a combination of uncertainty, ethical considerations, potential risks, and cultural narratives. They therefore need to be addressed urgently.