Orca Security deploys ChatGPT to secure the cloud with AI

Securing the cloud is not easy. However, by using AI and automation, with tools like ChatGPT, security teams can streamline day-to-day operations and respond to cyber incidents more efficiently.

One provider that exemplifies this approach is Israel-based cloud cybersecurity company Orca Security, which achieved a valuation of $1.8 billion in 2021. Today, Orca announced that it will be the first cloud security company to integrate ChatGPT. The integration will process security alerts and provide users with step-by-step remediation instructions.

More broadly, this integration demonstrates how ChatGPT can help organizations streamline their security operations workflows, so they can process alerts and events faster.

For years, security teams have struggled with managing alerts. In fact, research shows that 70% of security professionals report that their home lives have been emotionally affected by their work managing IT threat alerts.

At the same time, 55% admitted that they were not confident in their ability to prioritize and respond to alerts.

Part of the reason for this lack of confidence is that analysts have to check whether each alert is a false positive or a legitimate threat, and, if it is malicious, respond as quickly as possible.

This is particularly challenging in complex cloud and hybrid environments with many disparate solutions. It is a time-consuming process with little margin for error. That’s why Orca Security is looking to use ChatGPT (which is based on GPT-3) to help users automate the process of managing alerts.

“We leveraged GPT-3 to enhance our platform’s ability to generate context-specific, actionable remediation steps for Orca security alerts. This integration greatly simplifies and speeds up our customers’ mean time to resolution (MTTR), increasing their ability to deliver fast remediations and continuously keep their cloud environments secure,” said Itamar Golan, head of data science at Orca Security.

Essentially, Orca Security uses a custom pipeline to forward security alerts to ChatGPT, which processes the information, noting the assets, attack vectors and potential impact of the breach, and provides detailed remediation instructions in project-tracking tools like Jira.

Users also have the option to remediate through the command line, infrastructure as code (Terraform and Pulumi), or the cloud console.
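To make the flow concrete, here is a minimal sketch in Python of what such an alert-to-remediation pipeline could look like. The alert schema, the prompt wording, the Jira instance URL, the model choice and the helper names (build_prompt, get_remediation, post_to_jira) are all illustrative assumptions, not Orca Security’s actual implementation.

```python
"""Minimal sketch of an alert-to-remediation pipeline (illustrative only)."""
import os
import requests

OPENAI_URL = "https://api.openai.com/v1/chat/completions"
JIRA_BASE = "https://example.atlassian.net"  # hypothetical Jira instance

# Example alert payload (hypothetical schema).
ALERT = {
    "asset": "prod-web-sg (AWS security group)",
    "finding": "Port 22 open to 0.0.0.0/0",
    "severity": "high",
}


def build_prompt(alert: dict, output_format: str) -> str:
    # Ask the model for step-by-step remediation in the requested format
    # (command line, Terraform, Pulumi, or console instructions).
    return (
        f"Security alert: {alert['finding']} on {alert['asset']} "
        f"(severity: {alert['severity']}). "
        f"Provide step-by-step remediation instructions using {output_format}."
    )


def get_remediation(alert: dict, output_format: str) -> str:
    # Forward the alert to the OpenAI API and return the suggested fix.
    resp = requests.post(
        OPENAI_URL,
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-3.5-turbo",  # illustrative model choice
            "messages": [
                {"role": "user", "content": build_prompt(alert, output_format)}
            ],
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]


def post_to_jira(issue_key: str, text: str) -> None:
    # Attach the suggested remediation as a comment on an existing ticket.
    requests.post(
        f"{JIRA_BASE}/rest/api/2/issue/{issue_key}/comment",
        auth=(os.environ["JIRA_USER"], os.environ["JIRA_TOKEN"]),
        json={"body": text},
        timeout=30,
    ).raise_for_status()


if __name__ == "__main__":
    steps = get_remediation(ALERT, output_format="Terraform")
    post_to_jira("SEC-123", steps)  # hypothetical issue key
    print(steps)
```

In a sketch like this, the output_format parameter is what would let a user choose between command-line, infrastructure-as-code or console-based remediation steps for the same alert.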

It’s an approach designed to help security teams make better use of their existing resources. “Especially given that most security teams are constrained by limited resources, this can greatly ease the day-to-day workloads of security practitioners and development teams,” Golan said.

Is ChatGPT a net positive for cybersecurity?

While Orca Security’s use of ChatGPT highlights the positive role AI can play in enhancing enterprise security, other organizations are less optimistic about the impact of such solutions on the threat landscape.

For example, Deep Instinct released threat intelligence research this week examining ChatGPT risks and concluded that “artificial intelligence is better at creating malware than providing ways to detect it.” In other words, it is easier for threat actors to create malicious code than it is for security teams to detect it.

“Basically, it’s always easier to attack than to defend (the best defense is offense), especially in this case, since ChatGPT allows you to bring old, forgotten coding languages to life, alter or debug the attack flow in no time and generate the whole process,” said Alex Kozodoi, director of cyber research at Deep Instinct.

“On the other hand, it’s very difficult to defend when you don’t know what to expect, which leaves defenders able to prepare only for a limited range of attacks and to rely on certain tools that can help them investigate what happened, usually after they’ve already been hacked,” Kozodoi said.

The good news is that as more organizations begin to experiment with ChatGPT to secure their on-premise and cloud infrastructure, defensive AI operations will become more advanced, and will have a better chance of keeping up with the growing number of AI-driven threats.

VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings.