The U.S. government has recently issued new security rules aimed at safeguarding critical infrastructure from potential threats posed by A.I. technology.

“These guidelines are informed by the whole-of-government effort to assess A.I. risks across all sixteen critical infrastructure sectors and address threats to, from, and involving A.I. systems,” DHS stated Monday.

The agency is also advocating for the safe, responsible, and ethical use of technology, ensuring it upholds privacy, civil rights, and civil liberties.

The new guidance concerns the use of AI to augment and scale attacks on critical infrastructure, adversarial manipulation of AI systems, and shortcomings in such tools that could result in unintended consequences, necessitating the need for transparency and secure by design practices to evaluate and mitigate AI risks.

Specifically, this spans four different functions such as govern, map, measure, and manage all through the AI lifecycle –

  • Establish an organizational culture of AI risk management
  • Understand your individual AI use context and risk profile
  • Develop systems to assess, analyze, and track AI risks
  • Prioritize and act upon AI risks to safety and security

“Critical infrastructure owners and operators should account for their own sector-specific and context-specific use of A.I. when assessing A.I. risks and selecting appropriate mitigations,” the group stated.

Weeks prior to this, the Five Eyes (FVEY) intelligence alliance—comprising Australia, Canada, New Zealand, the U.K., and the U.S.—issued a cybersecurity information sheet emphasizing the meticulous setup and configuration required for the deployment of AI systems.

The states said, “The rapid adoption, deployment, and use of A.I. capabilities can make them highly valuable targets for malicious cyber actors.”

“Actors, who have historically used data theft of sensitive information and intellectual property to advance their interests, may seek to co-opt deployed A.I. systems and apply them to malicious ends.”

The recommended best practices are to secure the deployment environment, review A.I. model sources and supply chain security, ensure a robust architecture, harden deployment environment configurations, validate the A.I. system to ensure its integrity, protect model weights, enforce strict access controls, conduct external audits, and implement robust logging.

The CERT Coordination Center (CERT/CC) reported earlier this month a Keras 2 neural network library flaw that might be used to trojanize a popular A.I. model and disseminate it, poisoning the supply chain of dependent applications.

Recent research has shown that A.I. systems are subject to a wide range of rapid injection attacks that cause the A.I. model to bypass safety safeguards and create destructive outputs.

“Prompt injection attacks through poisoned content are a major security risk because an attacker who does this can potentially issue commands to the A.I. system as if they were the user,” Microsoft said in a new study.

Like Anthropic’s many-shot jailbreaking, Crescendo is a multiturn large language model (LLM) jailbreak that tricks the model into generating malicious content by “asking carefully crafted questions or prompts that gradually lead the LLM to a desired outcome, rather than asking for the goal all at once.”

Cybercriminals use LLM jailbreak prompts to create powerful phishing lures, even as nation-state players use generative A.I. for spying and influence.

Even more worrisome, University of Illinois Urbana-Champaign researchers found that LLM agents can be used to “hack websites, performing tasks as complex as blind database schema extraction and SQL injections without human feedback.”

SOURCE

Contact us if you have any questions or concerns.