NSA Artificial Intelligence Security Center, Quantum.gov, and the EU AI Act

The integration of Artificial Intelligence (AI) and Quantum Information Science (QIS) within national security frameworks is a significant development in modern defense strategies. Both the National Security Agency (NSA) and the National Quantum Initiative (NQI) in the United States play crucial roles in this advancement.

https://www.myprivacy.blog/bytedance-plus-tiktok-plus-openai-llm/

NSA’s Artificial Intelligence Security Center (AISC)

The NSA has established the Artificial Intelligence Security Center (AISC) to ensure the secure development, integration, and adoption of AI capabilities within U.S. National Security Systems and the Defense Industrial Base. The AISC is focused on addressing the vast attack surface that AI technology presents, especially as adversaries actively develop tools to exploit its vulnerabilities. Through the AISC, a key part of its cybersecurity mission, the NSA aims to stay ahead of these adversarial tactics and techniques. The AISC’s objectives include detecting and countering AI vulnerabilities, developing and promoting AI security best practices, and advancing partnerships with industry and experts.

AI security at the NSA involves protecting AI systems from threats such as learning errors, unauthorized actions, and data breaches. It encompasses practices for securing every aspect of an AI system, including its training data, models, capabilities, and lifecycle. This comprehensive approach extends established secure software development practices in cybersecurity and aims to ensure the confidentiality, integrity, and availability of information and services.
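
As a concrete, if narrow, illustration of securing the AI lifecycle described above, the Python sketch below checks training data and model weights against approved hashes before they are used. The file names and digest values are placeholders, and this is a generic supply-chain integrity pattern rather than anything drawn from NSA guidance.

```python
# A minimal supply-chain integrity check for an AI lifecycle: verify training
# data and model artifacts against known-good SHA-256 digests before loading
# them. File names and digest values below are placeholders for illustration.
import hashlib
from pathlib import Path

# Digests recorded when each artifact was approved (placeholder values).
EXPECTED_SHA256 = {
    "training_data.csv": "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
    "model_weights.bin": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def artifact_is_trusted(path: Path) -> bool:
    """Return True only if the file exists and its SHA-256 digest matches the approved value."""
    expected = EXPECTED_SHA256.get(path.name)
    if expected is None or not path.exists():
        return False
    return hashlib.sha256(path.read_bytes()).hexdigest() == expected

for name in EXPECTED_SHA256:
    status = "verified" if artifact_is_trusted(Path(name)) else "REJECTED (missing or tampered)"
    print(f"{name}: {status}")
```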

https://www.nsa.gov/AISC/

National Quantum Initiative (NQI)

Quantum.gov serves as the central hub for the National Quantum Initiative and its activities in exploring and promoting Quantum Information Science (QIS). The NQI, established by the National Quantum Initiative Act, aims to maintain the United States’ leadership in QIS and its technological applications. This initiative includes a coordinated federal program to accelerate quantum research and development, crucial for the country’s economic and national security. The strategy for QIS R&D in the United States is outlined in the National Strategic Overview for QIS and supplementary documents.

The integration of AI and QIS within national security frameworks like those of the NSA and NQI underscores the strategic importance of these technologies in current and future defense and intelligence operations. These initiatives represent a concerted effort to not only develop but also secure advanced technologies against emerging threats, ensuring the United States remains at the forefront of technological advancements in national security.

https://www.quantum.gov/

https://www.compliancehub.wiki/the-united-nations-has-established-an-ai-advisory-body/

EU AI Act

The EU’s new AI Act represents a significant step in regulating artificial intelligence on a global scale. Here are key points to understand about this groundbreaking legislation:

https://www.europarl.europa.eu/news/en/press-room/20231206IPR15699/artificial-intelligence-act-deal-on-comprehensive-rules-for-trustworthy-ai

  1. Sweeping AI Law: The AI Act is the world’s first comprehensive AI law, focused on mitigating harm in critical areas like healthcare, education, and public services. It specifically targets “high-risk” AI systems, imposing strict rules for risk mitigation, data quality, documentation, and human oversight.
  2. Transparency and Ethics: The Act introduces binding rules requiring transparency in AI interactions, such as notifying people when they are interacting with AI systems like chatbots or biometric categorization. It mandates labeling of AI-generated content and deepfakes, and all organizations offering essential services will have to assess their AI systems’ impact on fundamental rights (a minimal labeling sketch follows this list).
  3. Regulation of Foundation Models: The AI Act will regulate foundation models, applying the strictest obligations to the most powerful ones based on the computing power used in their training (see the compute-estimate sketch after this list). These models will need to adhere to documentation standards and EU copyright law and share details of their training data. However, the Act leaves room for companies to self-assess whether they fall under these stricter rules.
  4. AI Enforcement Body: The European AI Office will be established to enforce the AI Act, marking the EU as a leading global AI regulator. This body will include a panel of independent experts to advise on systemic AI risks. Noncompliance could result in substantial fines of up to 7% of a company’s global sales turnover.
  5. Ban on Certain AI Uses: The Act bans specific uses of AI, such as untargeted scraping of facial images and emotion recognition in public places; biometric identification in public spaces is permitted only for certain specific crimes and with court approval. AI systems developed exclusively for military and defense uses are exempt from the Act.
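
As a rough illustration of the transparency and labeling obligations in item 2, the sketch below wraps a chatbot reply with a user-facing AI disclosure and a machine-readable provenance label. The field names and disclosure wording are hypothetical; the Act does not prescribe this format.

```python
# Rough illustration of the transparency obligations in item 2: disclosing to
# users that they are talking to an AI system and tagging generated content
# with a machine-readable label. Field names and wording are hypothetical;
# the AI Act does not prescribe a specific format.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LabeledAIContent:
    text: str
    provenance: dict = field(default_factory=dict)  # metadata marking the content as AI-generated

DISCLOSURE = "You are interacting with an AI system. Responses are machine-generated."

def label_chatbot_reply(reply_text: str, model_name: str) -> LabeledAIContent:
    """Attach a user-facing disclosure and provenance metadata to a chatbot reply."""
    return LabeledAIContent(
        text=f"{DISCLOSURE}\n\n{reply_text}",
        provenance={
            "ai_generated": True,
            "generator": model_name,  # hypothetical field name
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    )

reply = label_chatbot_reply("Here is the summary you asked for.", model_name="example-chat-model")
print(reply.text)
print(reply.provenance)
```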
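Item 3 hinges on an estimate of training compute. The sketch below shows the kind of back-of-the-envelope self-assessment a provider might perform; the 6 × parameters × tokens heuristic and the 10^25 FLOP threshold are illustrative assumptions and are not quoted from the Act’s text.

```python
# Back-of-the-envelope self-assessment for item 3: estimate training compute
# and compare it with a regulatory threshold. The 6 * parameters * tokens
# heuristic and the 1e25 FLOP figure are assumptions used for illustration.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # assumed threshold for illustration

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Approximate total training compute with the common 6 * N * D heuristic."""
    return 6.0 * n_parameters * n_training_tokens

def may_face_stricter_rules(n_parameters: float, n_training_tokens: float) -> bool:
    """True if estimated training compute meets or exceeds the assumed threshold."""
    return estimated_training_flops(n_parameters, n_training_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# Example: a hypothetical 70B-parameter model trained on 15T tokens.
flops = estimated_training_flops(70e9, 15e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")
print("Stricter obligations likely apply:", may_face_stricter_rules(70e9, 15e12))
```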

https://www.breached.company/wef-cybersecurity-futures-2030-new-foundations/

The AI Act could become a global standard, influencing how AI is regulated and used worldwide, much as the GDPR did. It represents a substantial shift in AI governance, emphasizing the need for ethical and transparent AI practices.
