Selasa, 24 Oktober 2023

ChatGPT could be responsible for the next CYBER ATTACK: Scientists show how AI systems can be tricked into pro - Daily Mail

  • AI tools such as ChatGPT can be tricked into producing malicious code
  • Experts say this code could be used to launch cyber attacks 

ChatGPT can be tricked into carrying out cyberattacks by ordinary people, a report has warned.

A vulnerability allows users to ask the AI chatbot to write a malicious code that can hack into databases and steal sensitive information.

Researchers said their greatest fear was that people might accidentally do so without realising and cause major computer systems to crash.

A nurse for example could ask ChatGPT to help search through clinical records and without knowing be given a harmful code to do so that could disrupt the network without warning.

The team from the University of Sheffield said the chatbots were so complex that many – including the companies producing them – were 'simply not aware' of the threats they posed.

ChatGPT can be tricked into carrying out cyberattacks by ordinary people, a report has warned

The study has been published with just over a week to go before the government's AI Safety Summit on how to deploy the technology safely.

Global leaders, tech chiefs, and academics are set to meet face-to-face for the first time to agree a framework to protect the world against AI's potential 'catastrophic' harm.

OpenAI, the US start-up behind ChatGPT, said it had since fixed the specific loophole after the issue was flagged.

However the team at Sheffield's Department of Computer Science said there were likely to be more and called on the cybersecurity industry to look at the issue in more detail.

The paper is the first of its kind to show that so-called 'Text-to-SQL systems' – which is AI that can search databases by asking questions in plain language - can be exploited to attack computer systems in the real world.

Researchers analysed analysed five commercial AI tools in total – and found all were able to produce malicious codes that, once executed, could leak confidential information and interrupt or even completely destroy services.

The findings suggest it is not only expert hackers that could now carry out such attacks – but ordinary people too.

Researchers fear that it could lead to innocent users not realising they had done so and accidently infect computer systems.

Xutan Peng, a PhD student at the University of Sheffield who co-led the research, said: 'In reality, many companies are simply not aware of these types of threats and due to the complexity of chatbots, even within the community, there are things that are not fully understood. At the moment, ChatGPT is receiving a lot of attention.

'It's a standalone system, so the risks to the service itself are minimal, but what we found is that it can be tricked into producing malicious code that can do serious harm to other services.'

He added: 'The risk with AIs like ChatGPT is that more and more people are using them as productivity tools, rather than a conversational bot, and this is where our research shows the vulnerabilities are.

'For example, a nurse could ask ChatGPT to write an SQL command so that they can interact with a database, such as one that stores clinical records.

'As shown in our study, the SQL code produced by ChatGPT in many cases can be harmful to a database, so the nurse in this scenario may cause serious data management faults without even receiving a warning.'

Dr Mark Stevenson, a senior lecturer in the Natural Language Processing research group at the University of Sheffield, said AI systems were 'extremely powerful, but their behaviour is complex and can be difficult to predict.'

'At the University of Sheffield, we are currently working to better understand these models and allow their full potential to be safely realised.'

The findings were presented at the International Symposium on Software Reliability Engineering (ISSRE) in Florence, Italy, earlier this month.

The researchers also warned that people using AI to learn programming languages was a danger, as they could inadvertently create damaging code.

'The risk with AIs like ChatGPT is that more and more people are using them as productivity tools, rather than a conversational bot, and this is where our research shows the vulnerabilities are,' Peng said.

'For example, a nurse could ask ChatGPT to write an (programming language) SQL command so that they can interact with a database, such as one that stores clinical records.

'As shown in our study, the SQL code produced by ChatGPT in many cases can be harmful to a database, so the nurse in this scenario may cause serious data management faults without even receiving a warning.'

The UK will host an AI Safety Summit next week, with the Government inviting world leaders and industry giants to come together to discuss the opportunities and safety concerns around artificial intelligence.

Elon Musk's hatred of AI explained: Billionaire believes it will spell the end of humans - a fear Stephen Hawking shared

Elon Musk wants to push technology to its absolute limit, from space travel to self-driving cars — but he draws the line at artificial intelligence. 

The billionaire first shared his distaste for AI in 2014, calling it humanity's 'biggest existential threat' and comparing it to 'summoning the demon.'

At the time, Musk also revealed he was investing in AI companies not to make money but to keep an eye on the technology in case it gets out of hand. 

His main fear is that in the wrong hands, if AI becomes advanced, it could overtake humans and spell the end of mankind, which is known as The Singularity.

That concern is shared among many brilliant minds, including the late Stephen Hawking, who told the BBC in 2014: 'The development of full artificial intelligence could spell the end of the human race.

'It would take off on its own and redesign itself at an ever-increasing rate.' 

Despite his fear of AI, Musk has invested in the San Francisco-based AI group Vicarious, in DeepMind, which has since been acquired by Google, and OpenAI, creating the popular ChatGPT program that has taken the world by storm in recent months.

During a 2016 interview, Musk noted that he and OpenAI created the company to 'have democratisation of AI technology to make it widely available.'

Musk founded OpenAI with Sam Altman, the company's CEO, but in 2018 the billionaire attempted to take control of the start-up.

His request was rejected, forcing him to quit OpenAI and move on with his other projects.

In November, OpenAI launched ChatGPT, which became an instant success worldwide.

The chatbot uses 'large language model' software to train itself by scouring a massive amount of text data so it can learn to generate eerily human-like text in response to a given prompt. 

ChatGPT is used to write research papers, books, news articles, emails and more.

But while Altman is basking in its glory, Musk is attacking ChatGPT.

He says the AI is 'woke' and deviates from OpenAI's original non-profit mission.

'OpenAI was created as an open source (which is why I named it 'Open' AI), non-profit company to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft, Musk tweeted in February.

The Singularity is making waves worldwide as artificial intelligence advances in ways only seen in science fiction - but what does it actually mean?

In simple terms, it describes a hypothetical future where technology surpasses human intelligence and changes the path of our evolution.

Experts have said that once AI reaches this point, it will be able to innovate much faster than humans. 

There are two ways the advancement could play out, with the first leading to humans and machines working together to create a world better suited for humanity.

For example, humans could scan their consciousness and store it in a computer in which they will live forever.

The second scenario is that AI becomes more powerful than humans, taking control and making humans its slaves - but if this is true, it is far off in the distant future.

Researchers are now looking for signs of AI  reaching The Singularity, such as the technology's ability to translate speech with the accuracy of a human and perform tasks faster.

Former Google engineer Ray Kurzweil predicts it will be reached by 2045.

He has made 147 predictions about technology advancements since the early 1990s - and 86 per cent have been correct. 

Adblock test (Why?)


https://news.google.com/rss/articles/CBMilQFodHRwczovL3d3dy5kYWlseW1haWwuY28udWsvc2NpZW5jZXRlY2gvYXJ0aWNsZS0xMjY2NjY3Ny9DaGF0R1BULXJlc3BvbnNpYmxlLUNZQkVSLUFUVEFDSy1TY2llbnRpc3RzLUFJLXN5c3RlbXMtdHJpY2tlZC1wcm9kdWNpbmctbWFsaWNpb3VzLWNvZGUuaHRtbNIBmQFodHRwczovL3d3dy5kYWlseW1haWwuY28udWsvc2NpZW5jZXRlY2gvYXJ0aWNsZS0xMjY2NjY3Ny9hbXAvQ2hhdEdQVC1yZXNwb25zaWJsZS1DWUJFUi1BVFRBQ0stU2NpZW50aXN0cy1BSS1zeXN0ZW1zLXRyaWNrZWQtcHJvZHVjaW5nLW1hbGljaW91cy1jb2RlLmh0bWw?oc=5

2023-10-24 15:00:46Z
2527537677

Tidak ada komentar:

Posting Komentar