
Growing Adoption of Generative AI Tools in Cybersecurity and Nonprofit Initiatives

The adoption of generative AI tools in cybersecurity is increasing, with organizations experimenting with AI for tasks such as rule creation, attack simulation, and threat detection, despite concerns about potential risks and a disconnect between C-suite executives and staff that calls for a more unified approach.

At a glance

  • Over half of organizations are projected to implement generative AI security tools by the end of the year.
  • C-suite executives generally favor AI security tools, and most security professionals view AI as enhancing, rather than replacing, their work.
  • Primary use cases of AI in cybersecurity include reporting, proactive threat detection, and identifying vulnerabilities in code.
  • Google.org launched the Accelerator: Generative AI program to support nonprofits in developing AI applications.
  • Hercules AI introduced RosettaStoneLLM, a model for deploying virtual AI workers in regulated industries that reportedly outperforms general-purpose models such as GPT-4 by up to 30 percent on select tasks.

The details

In the realm of cybersecurity, a growing trend towards the adoption of generative AI tools is evident, despite the “department of no” stereotype suggesting hesitancy among security teams and Chief Information Security Officers (CISOs). Many security practitioners have already begun experimenting with AI and recognize its potential benefits.

Over half of organizations are projected to implement generative AI security tools by the end of the year.

These tools have been tested for various security tasks such as rule creation, attack simulation, compliance violation detection, network detection, reducing false positives, and classifying anomalies.
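
For a rough sense of what “rule creation” with a general-purpose model can look like, consider the minimal Python sketch below. It assumes the OpenAI Python SDK and an API key in the environment; the model name and prompt wording are illustrative, not details from the article.

```python
# Minimal sketch: asking an LLM to draft a Sigma detection rule.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment;
# the model name and prompt are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

def draft_detection_rule(behavior: str) -> str:
    """Ask the model for a Sigma rule covering the described behavior."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any capable chat model works
        messages=[
            {"role": "system",
             "content": "You are a detection engineer. Reply with a single "
                        "valid Sigma rule in YAML and nothing else."},
            {"role": "user",
             "content": f"Write a Sigma rule that detects: {behavior}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # Draft output still needs review by a human analyst before deployment.
    print(draft_detection_rule("PowerShell spawned by a Microsoft Office process"))
```

In practice, a generated rule is a starting point for a detection engineer to refine, not something deployed as-is.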

C-suite executives generally favor incorporating AI security tools, and only 12% of security professionals believe that AI will completely take over their roles.

The majority view AI as enhancing security measures.

However, there is a notable disconnect between C-suite executives and staff’s understanding and implementation of AI, highlighting the need for a unified approach.

Primary use cases of AI in cybersecurity revolve around reporting, proactive threat detection, endpoint detection and response, vulnerability identification in code, recommending remediation actions, triaging alerts, and determining the importance of security threats.

AI can also quickly identify phishing emails, saving time compared to human analysts.
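
A minimal sketch of that kind of triage, assuming the OpenAI Python SDK and a placeholder model name (neither is specified in the article):

```python
# Minimal sketch: pre-screening a suspect email with an LLM before human review.
# The SDK, model name, and label scheme are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

def classify_email(subject: str, body: str) -> str:
    """Return 'phishing', 'suspicious', or 'benign' for a raw email."""
    prompt = (
        "Classify the following email as exactly one of: "
        "phishing, suspicious, benign. Answer with one word.\n\n"
        f"Subject: {subject}\n\nBody:\n{body}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep labels as stable as possible for triage
    )
    return response.choices[0].message.content.strip().lower()

if __name__ == "__main__":
    verdict = classify_email(
        "Urgent: verify your account",
        "Your mailbox is full. Click http://example.com/verify to keep access.",
    )
    print(verdict)  # anything other than 'benign' goes to a human analyst
```

In a setup like this, the model acts as a first-pass filter; the time savings come from letting analysts concentrate on the messages the model flags.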

Enthusiasm for AI in cybersecurity is tempered by the risks it poses, as attackers can also leverage AI technologies to conduct more sophisticated and targeted malicious activities.

In a separate development, Google.org has launched the Accelerator: Generative AI program to support nonprofits in developing generative AI applications.

The program provides mentoring, technical training, and support from AI coaches, with more than $20 million in funding allocated for this initiative.

Participating organizations include Beyond 12, IDinsight Inc., the World Bank, and Full Fact.

These nonprofits are working on various AI projects, from assisting college students from underprivileged backgrounds to developing AI tools for healthcare inquiries and translation assistance for refugees.

Additionally, Hercules AI, a generative AI company focusing on the future of work, has introduced a new methodology for deploying virtual AI workers in regulated industries.

Their model, RosettaStoneLLM, automates complex workflows through a pre-configured “assembly line” process that the company says yields high-quality, cost-efficient, and easily scalable AI agents.

RosettaStoneLLM, which the company says is built on Mistral-7B and WizardCoder-13B and has 7 billion parameters, is designed to convert structured data into formats that align with internal workflows in industries such as finance, insurance, and legal services.

Early results indicate that RosettaStoneLLM can outperform general-purpose models such as GPT-4 by up to 30 percent on tasks like entity mapping and code generation.
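
Hercules AI has not published its pipeline, but the general shape of an entity-mapping task can be illustrated with a generic sketch: asking a model to map spreadsheet headers onto an internal schema. The SDK, model name, and schema below are assumptions for illustration only.

```python
# Generic illustration of entity mapping with an LLM: aligning spreadsheet
# headers to an internal schema. Not Hercules AI's proprietary pipeline;
# the SDK, model name, and schema are assumed for the example.
import json
from openai import OpenAI

client = OpenAI()

# Hypothetical internal schema for an insurance workflow.
INTERNAL_SCHEMA = ["client_name", "policy_number", "effective_date", "premium_usd"]

def map_columns(spreadsheet_headers: list[str]) -> dict:
    """Return a {spreadsheet_header: internal_field_or_null} mapping."""
    prompt = (
        f"Map each spreadsheet header to one field from {INTERNAL_SCHEMA}, "
        "or to null if none fits. Reply with a JSON object only.\n"
        f"Headers: {spreadsheet_headers}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; a fine-tuned domain model would sit here
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},  # request parseable JSON
    )
    return json.loads(response.choices[0].message.content)

if __name__ == "__main__":
    print(map_columns(["Insured Party", "Policy #", "Start Dt", "Annual Premium ($)"]))
```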

Overall, the landscape of generative AI applications in cybersecurity, nonprofit initiatives, and virtual AI workers in regulated industries is evolving rapidly, showcasing the potential benefits and challenges associated with integrating AI technologies across various sectors.

Article X-ray

Facts attribution

This section links each of the article’s facts back to its original source.

If you suspect false information in the article, you can use this section to investigate where it came from.

venturebeat.com
– The “department of no” stereotype in cybersecurity suggests that security teams and CISOs are hesitant to adopt generative AI tools.
– However, many security practitioners have experimented with AI and see its potential benefits.
– More than half of organizations are expected to implement gen AI security tools by the end of the year.
– Security practitioners have tested AI for security tasks, with the top use cases being rule creation, attack simulation, compliance violation detection, network detection, reducing false positives, and classifying anomalies.
– C-suites are largely in favor of incorporating AI security tools.
– Only 12% of security professionals believe AI will completely take over their roles.
– The majority see AI as enhancing security measures.
– C-level executives are more familiar with AI technologies compared to staff.
– The disconnect between C-suite and staff understanding and implementing AI highlights the need for a unified approach.
– The primary use of AI in cybersecurity is around reporting.
– AI can be used proactively to detect threats, perform endpoint detection and response, find and fix vulnerabilities in code, and recommend remediation actions
– AI can help triage alerts and determine the importance of security threats.
– AI can quickly determine if an email is phishing, saving time compared to human analysts.
– Leaders are looking to AI to supplement skills and knowledge gaps in cybersecurity.
– Realistic use cases for AI in cybersecurity are being discovered and applied.
– Enthusiasm for AI in cybersecurity is mixed with risks, as attackers can also benefit from AI technologies.
– AI allows attackers to be more sophisticated and focused in their malicious activities.
– AI can be used to personalize phishing attacks at scale.
aibusiness.com
– Google.org has launched Accelerator: Generative AI, a program to support nonprofits developing generative AI applications
– The program provides mentoring, technical training, and support from AI coaches
– Nonprofits in areas such as health care and education are eligible for the program
– Google.org is providing more than $20 million in funding to support nonprofits
– Twenty-one organizations, including Beyond 12 and IDinsight Inc., are participating in the initiative
– Beyond 12 is developing a generative AI coach for college students from underprivileged backgrounds
– IDinsight Inc. is developing AI tools to respond to health-related inquiries from expectant mothers in South Africa
– Google is supporting the World Bank and Full Fact in developing AI tools
– Google staff will work with Tarjimly, Benefits Data Trust, and mRelief to help build their AI solutions
– Tarjimly is developing AI translation tools to assist human translators aiding refugees
– Benefits Data Trust is using large language models to power AI assistants for workers helping low-income applicants access public benefits
– mRelief is developing an assistant to help with applying for the U.S. Supplemental Nutrition Assistance Program
– A Google.org survey found that 81% of nonprofits believe generative AI could help their efforts
– Two out of five organizations admitted to not using generative AI technology
– Nonprofits cited lacking technology skills and awareness of potential use cases as bottlenecks
– Google.org suggests the private sector should provide training on generative AI to nonprofits at low or no cost
– Google.org pledged to develop free training and educational resources for nonprofits to help them make the most of generative AI
– Google.org funding recipients report that AI helps them achieve their goals in one-third of the time at nearly half the cost
venturebeat.com
– Hercules AI is a generative AI company focused on the future of work
– The company has developed a new methodology for deploying virtual AI workers in the enterprise
– They have introduced RosettaStoneLLM, a model for automating complex workflows in regulated industries
– The process for deploying virtual AI workers is described as an “assembly line” process
– Organizations can choose prefabricated components for developing and deploying virtual AI workers
– Everything is premade, tested, and pre-configured in advance, with no custom ordering required
– Hercules AI claims that this process will produce high-quality, cost-efficient, and easily scalable AI agents
– Fine-tuning the model is necessary for the bot to follow the specified workflow
– Companies in regulated industries like finance, insurance, and legal services may find this offering appealing
– The RosettaStoneLLM is built off Mistral-7B and WizardCoder-13B and has 7 billion parameters
– It can convert structured data from spreadsheets for mapping and transformation by the AI
– The LLM is designed to turn large volumes of structured data into something that aligns with internal workflows
– Hercules AI claims that early results show RosettaStoneLLM can outperform general-purpose models such as GPT-4 by up to 30 percent on tasks like entity mapping and code generation
– The company is based in Campbell, California and has raised $12.1 million in venture funding
– Hercules AI is used by Fortune 1000 companies and 30 percent of the top law firms in the U.S.
