Organizations that integrate artificial intelligence into their workforce and their offerings accelerate innovation, but many are unprepared for the security challenges that come with it. As they rush to deploy ever more capable models, they often overlook the risks of model manipulation and adversarial attacks, threats that traditional defenses are not equipped to detect or stop. At the same time, many leaders are still grappling with how to operate AI safely and securely in their environments. As AI becomes deeply embedded in business operations and critical infrastructure, the risks are growing rapidly and on a global scale.
To help organizations navigate these risks and take control, the SANS Institute has launched a major initiative. Today, SANS announced the upcoming release of its Critical AI Security Guidelines v1.0, a practical, operations-focused framework built for the defenders and managers who must secure AI systems now. The guidelines will debut at the SANS AI Summit 2025 and focus on six critical areas: access controls, data protection, deployment strategies, inference security, monitoring, and governance, risk, and compliance. They are designed to give security teams and leadership clear, practical direction for defending AI systems in real environments. Each section provides actionable recommendations to help organizations identify, mitigate, and manage the risks associated with modern AI technologies. Once published, the guidelines will be open to community feedback, allowing practitioners, researchers, and industry leaders to contribute insights and updates as threats evolve and new best practices emerge.
“We see organizations deploying large language models, retrieval-augmented generation, and autonomous agents faster than they can secure them,” said Rob T. Lee, Head of Research and co-chair of the SANS AI Summit. “These guidelines are designed to meet the field where it is. They are not theoretical; they are written for the analysts and leaders in the trenches who must protect these systems today.”
As AI technologies become central to every aspect of business operations, the need for open-source tools to augment security teams and for new capabilities to help secure AI has never been greater. To address this, the SANS AI Cybersecurity Hackathon invited the cybersecurity community to build open-source tools aligned directly with the new security guidelines. This unique event challenged participants to develop innovative solutions to protect AI models, monitor inference processes, defend against adversarial attacks, and address other vulnerabilities specific to AI systems. The tools produced during the hackathon will be showcased at the AI Summit, offering organizations tangible, real-world solutions.
“We need more people who understand how AI works under the hood and how to defend it,” said Kate Marshall, SANS AI Hackathon Director and co-chair of the SANS AI Summit. “The hackathon is already making a difference. It is not just producing tools; it is surfacing talent, and that is exactly what we need to secure AI systems for the future.”
The hackathon is a powerful step toward closing the growing AI skills gap, offering participants hands-on experience and direct mentorship from top AI security experts. With demand for AI security professionals rising, initiatives like this are essential to ensure the talent pipeline is ready to meet industry needs. Winning tools will not only receive visibility and support but will also become integral resources that help organizations implement the security guidelines effectively.
These collective efforts will culminate at the SANS AI Summit 2025 on March 31, where leaders in cybersecurity, AI development, and policy will gather to launch the guidelines and explore how to secure AI systems in real-world applications. The summit will feature in-depth sessions on implementing the guidelines, live demonstrations of the winning hackathon tools, and discussions of AI security challenges in sectors such as government, healthcare, and critical infrastructure. It is where these efforts converge, with the guidelines, hackathon projects, and summit conversations forming a comprehensive, actionable roadmap for securing AI.
“We have reached a point where this kind of work is not optional,” said Rob T. Lee. “The industry needed something central, somewhere to turn, to trust, to rally around for AI security. We need real controls, real tools, and a way to develop the skills that will protect the world. That is what this is. It is not about SANS. It is about coming together as a community to get this right.”
By combining the release of the Critical AI Security Guidelines, the momentum of the AI Cybersecurity Hackathon, and collaborative education at the AI Summit, SANS is creating a pivotal moment for the industry and a place where AI professionals can unite, innovate, and build the future of secure AI together.