Risk Versus Reward: Artificial Intelligence Advances Despite ‘Danger to Society’

Washington – As the artificial intelligence (AI) revolution explodes, big tech leaders are calling for a pause in its development, citing potential dangers to society, and perhaps also seeking a chance to catch up.

Despite AI’s potential benefits in solving global issues, some experts are sounding the alarm about the dangers of this technology. Tech leaders such as Elon Musk and Apple co-founder Steve Wozniak are calling for a pause in the development of AI, having recently signed an open letter calling on all AI labs to immediately suspend the training of AI systems more powerful than GPT-4 for at least six months. The open letter, posted online by the Future of Life Institute, has been gathering signatures for some time, but in just the last week or so the number of signatories has nearly doubled to almost 5,000 as these high-profile names have signed on.

AI has been revolutionizing the way people live and work for more than a decade. But what is it, exactly? Technically speaking, it is the simulation of human intelligence in machines that can learn, reason, and solve problems. In other words, it is when machines can think and learn somewhat like humans, coming up with solutions and carrying out tasks without being explicitly programmed to do so.
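To make that distinction concrete, here is a minimal sketch in Python, assuming the scikit-learn library and invented toy data: rather than a programmer hand-writing a rule, the model infers one from labeled examples.

```python
# "Learning rather than explicit programming": the rule that maps
# inputs to outputs is inferred from labeled examples, not hand-coded.
# Assumes scikit-learn is installed; the data is purely illustrative.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical examples: [hours studied, hours slept] -> passed exam (1) or not (0)
X = [[1, 4], [2, 8], [6, 7], [8, 5], [3, 6], [9, 8]]
y = [0, 0, 1, 1, 0, 1]

model = DecisionTreeClassifier(random_state=0)
model.fit(X, y)                 # the "learning" step: the rule comes from the data

print(model.predict([[7, 6]]))  # the fitted model generalizes to unseen input
```

No one told the program that study hours matter; it worked that out from the examples, and that is the essence of machine learning.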

Although current AI is like a more primitive version of the fictional C-3PO droid from Star Wars, the rapid evolution of AI is astounding. Ray Kurzweil, futurist and a director of engineering at Google, predicts that AI will reach human-level intelligence by around 2029. By about 2045, he says, “we will have multiplied the human biological machine intelligence of our civilization a billion-fold.”

Experts believe AI can potentially help solve some of the world’s biggest problems, such as climate change and disease. AI can analyze large amounts of data quickly and accurately, enabling better analysis and decision-making. In healthcare, AI is being used to develop new treatments and cures for diseases. It can also help reduce society’s carbon footprint by optimizing energy use and predicting weather patterns.

Fei-Fei Li, co-director of the Stanford Institute for Human-Centered Artificial Intelligence, said: “AI is not a silver bullet, but it has the potential to drastically improve the quality of life for all people.” It can help automate tedious tasks, freeing up time for more enjoyable activities.

However, there are also risks associated with AI. One of the biggest fears is that AI could become more intelligent than humans, a hypothetical scenario known as “superintelligence.” This is a concern because a superintelligent AI could develop goals and motives that are not aligned with human interests, potentially leading to catastrophic consequences. Bill Gates, the co-founder of Microsoft, expressed concerns about AI as far back as 2015, when he wrote, “I am in the camp that is concerned about superintelligence.” Gates called for more research into the risks of AI and for greater collaboration between governments and the tech industry.

Another danger of AI is the potential for bias and discrimination. AI algorithms are only as good as the data they are trained on. If that data already contains biases or discrimination, the algorithm will learn and perpetuate those biases. For example, facial recognition software has been shown to have a higher error rate for people with darker skin tones, potentially leading to discrimination in areas such as law enforcement.
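The mechanics of this are easy to demonstrate. Below is a toy illustration in Python, assuming scikit-learn; the group labels and the approval scenario are all invented for the example. A model trained on historically skewed decisions simply reproduces the skew.

```python
# A toy illustration of bias propagation: the training labels approve
# group A far more often than group B, and the fitted model faithfully
# reproduces that skew. All data here is invented for the example.
from sklearn.linear_model import LogisticRegression

# Single feature: group membership, 0 (group A) or 1 (group B).
X = [[0]] * 50 + [[1]] * 50
# Historical decisions: 90% approval for group A, only 20% for group B.
y = [1] * 45 + [0] * 5 + [1] * 10 + [0] * 40

model = LogisticRegression().fit(X, y)

print(model.predict_proba([[0]])[0][1])  # high approval probability for group A
print(model.predict_proba([[1]])[0][1])  # low approval probability for group B
```

Nothing in the algorithm is malicious; it has simply learned the pattern it was given, which is why curating training data matters so much.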

There is also a big concern about the impact on people’s jobs. Take ChatGPT, for example. It is a large language model trained by OpenAI that is being used to automate tasks and generate written content, work that is normally done by humans. With its ability to generate impressive content at almost any level of detail and in any style, drawing on training data scraped from vast swaths of the Internet, ChatGPT certainly has the potential to improve efficiency and productivity. It can also crunch numbers with reasonable accuracy. However, users have found that the bot can also generate misinformation, answer coding problems incorrectly, and make errors in basic math.
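Such slips are easy to check for yourself. Here is a minimal sketch, assuming the openai Python package (the v0.x ChatCompletion interface current at the time of writing) and an OPENAI_API_KEY environment variable; the model name and prompt are illustrative, and replies vary from run to run.

```python
# Probe the chatbot's arithmetic by comparing its free-text answer
# against Python's own computation. Requires an OpenAI API key.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

question = "What is 1234 * 5678? Reply with the number only."
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": question}],
)
answer = response.choices[0].message.content.strip()

print("model said: ", answer)
print("python says:", 1234 * 5678)  # ground truth: 7006652
```

A language model predicts plausible text rather than executing arithmetic, so confident-sounding wrong answers are exactly what this kind of spot check is for.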

A variety of jobs could be displaced in the future. These could include tech jobs such as software developers, web developers, computer programmers, coders, and data scientists. It could also affect customer service representatives, virtual assistants, copywriters, research assistants, financial advisors, and even journalists.

Stephen Hawking, the renowned physicist and cosmologist, warned about the dangers of AI before his death in 2018. “The development of full artificial intelligence could spell the end of the human race,” Hawking cautioned. “The rise of powerful AI will be either the best or the worst thing ever to happen to humanity. We do not yet know which.” Hawking called for a global effort to ensure that AI is developed in a way that is aligned with human values.

Elon Musk, the chief executive of SpaceX and Tesla, has been one of the most vocal advocates of developing this technology in a safe and responsible way. Surprisingly for a man who eschews regulation in other areas, Musk has repeatedly warned about the potential dangers of AI and has called for greater regulation of the technology. He is one of the high-profile tech leaders advocating a pause in the pace of development. On April 2, Musk tweeted: “Aerospace safety is overseen by the FAA, because people had had enough of dying due to shoddy manufacturing & maintenance, but there is no agency overseeing AI at all.”

But is a pause realistic or even possible right now? AI has been advancing so rapidly that the technology is already deeply ingrained in many industries. With the pace of innovation and competition among developers, and the many benefits and economic opportunities associated with AI, it seems highly unlikely that there will be a complete pause in AI development anytime soon.

While a pause in AI development may not be realistic, there are still ways to ensure that the technology is developed in a responsible and ethical manner. “The biggest challenge facing artificial intelligence is not the development of algorithms or hardware,” said Yoshua Bengio, a computer scientist and professor at the University of Montreal. Rather, it is the “creation of a comprehensive and inclusive ethical framework.”

To be sure, ensuring that AI is being developed in alignment with human values and priorities will require collaboration among governments, the tech industry, and other stakeholders. This will mean developing regulations and guidelines, ensuring transparency and accountability in its development, and investing in education and training programs to help individuals adapt to the changing job market.

Scientist, author, and entrepreneur Gary Marcus has perhaps identified the crux of the matter: human greed. “The main thing that worries me,” he said, “is that I see self-interest on all sides, and not just the corporate players.”

In a recent blog post entitled “I am not afraid of robots. I am afraid of people,” he wrote that “greed seemed to increase in late November, when ChatGPT started to take off. $ signs flashed. Microsoft started losing the cloud market, and saw OpenAI’s work as a way to take back search.” OpenAI, meanwhile, has completed a transformation from its original mission into a company that is now, in his words, “for-profit, heavily constrained by the need to generate a financial return, working around Microsoft’s goals, with far less emphasis on the admirable humanitarian goals in their initial charter.”

The reason he signed on to the letter, he said, was the need for “coordinated action,” spelled out clearly in its text: to refocus research on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.

“Who on earth,” he asked, “could actually object to that?”

But many have criticized the call for a pause. While expert opinion ranges widely on the relative promise and danger of AI, the one thing the letter’s signatories agree on is the need to develop this little-understood technology cautiously.

“AI can help us solve problems we can’t solve today, but we need to carefully consider its impact on society and ensure that it’s designed with human values in mind,” said Sundar Pichai, CEO of Google and Alphabet Inc.
