Request for Information (RFI) Related to NIST's Assignments Under Sections 4.1, 4.5 and 11 of the Executive Order Concerning Artificial Intelligence (Sections 4.1, 4.5, and 11)

Department of Commerce
National Institute of Standards and Technology
[Docket Number: 231218-0309]

RIN 0693-XC135

AGENCY:

National Institute of Standards and Technology (NIST), Commerce.

ACTION:

Notice; request for information.

SUMMARY:

The National Institute of Standards and Technology (NIST) is seeking information to assist in carrying out several of its responsibilities under the Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (E.O. 14110), issued on October 30, 2023. Among other things, the E.O. directs NIST to undertake an initiative for evaluating and auditing capabilities relating to Artificial Intelligence (AI) technologies and to develop a variety of guidelines, including for conducting AI red-teaming tests, to enable deployment of safe, secure, and trustworthy systems.

DATES:

Comments containing information in response to this notice must be received on or before February 2, 2024. Submissions received after that date may not be considered.

ADDRESSES:

Comments may be submitted by any of the following methods:

Electronic submission: Submit electronic public comments via the Federal e-Rulemaking Portal.

1. Go to www.regulations.gov and enter NIST-2023-0309 in the search field,

2. Click the “Comment Now!” icon, complete the required fields, and

3. Enter or attach your comments.

Electronic submissions may also be sent as email attachments and may be in any of the following unlocked formats: HTML, ASCII, Word, RTF, Unicode, or PDF.

Written comments may also be submitted by mail to Information Technology Laboratory, ATTN: AI E.O. RFI Comments, National Institute of Standards and Technology, 100 Bureau Drive, Mail Stop 8900, Gaithersburg, MD 20899-8900.

Response to this RFI is voluntary. Submissions must not exceed 25 pages (when printed) in 12-point or larger font, with a page number provided on each page. Please include your name, organization's name (if any), and cite “NIST AI Executive order” in all correspondence.

Comments containing references, studies, research, and other empirical data that are not widely published should include copies of the referenced materials. All comments and submissions, including attachments and other supporting materials, will become part of the public record and subject to public disclosure. Relevant comments will generally be available on the Federal eRulemaking Portal at www.regulations.gov. After the comment period closes, relevant comments will generally be available on https://www.nist.gov/​artificial-intelligence/​executive-order-safe-secure-and-trustworthy-artificial-intelligence. NIST will not accept comments accompanied by a request that part or all of the material be treated confidentially because of its business proprietary nature or for any other reason. Therefore, do not submit confidential business information or otherwise sensitive, protected, or personal information, such as account numbers, Social Security numbers, or names of other individuals.

FOR FURTHER INFORMATION CONTACT:

For questions about this RFI, contact Rachel Trello, National Institute of Standards and Technology, 100 Bureau Drive, Stop 8900, Gaithersburg, MD 20899, (202) 570-3978. Direct media inquiries to NIST's Office of Public Affairs at (301) 975-2762. Users of telecommunication devices for the deaf, or a text telephone, may call the Federal Relay Service toll free at 1-800-877-8339.

Accessible Format: NIST will make the RFI available in alternate formats, such as Braille or large print, upon request by persons with disabilities.

SUPPLEMENTARY INFORMATION:

NIST is responsible for contributing to several deliverables assigned to the Secretary of Commerce. Among those is a report identifying existing standards, tools, methods, and practices, as well as the potential development of further science-backed and non-proprietary standards and techniques, related to synthetic content, including potentially harmful content, such as child sexual abuse material and non-consensual intimate imagery of actual adults. NIST will also assist the Secretary of Commerce in establishing a plan for global engagement to promote and develop AI standards.

Respondents may provide information on one or more of the topics in this RFI and may elect not to address every topic.

NIST is seeking information to assist in carrying out several of its responsibilities under Sections 4.1, 4.5, and 11 of E.O. 14110. This RFI addresses the specific assignments cited below. Other assignments to NIST in E.O. 14110 related to cybersecurity and privacy, synthetic nucleic acid sequencing, and supporting agencies' implementation of minimum risk-management practices are being addressed separately. Information about NIST's assignments and plans under E.O. 14110, along with further opportunities for public input, may be found here: https://www.nist.gov/​artificial-intelligence/​executive-order-safe-secure-and-trustworthy-artificial-intelligence.

In considering information for submission to NIST, respondents are encouraged to review recent guidance documents that NIST has developed with significant public input and feedback, including the NIST AI Risk Management Framework ( https://www.nist.gov/​itl/​ai-risk-management-framework). Other NIST AI resources may be found on the NIST AI Resource Center ( https://airc.nist.gov/​home). In addition, respondents are encouraged to take into consideration the activities of the NIST Generative AI Public Working Group ( https://airc.nist.gov/​generative_​ai_​wg).

Information that is specific and actionable is of special interest; general statements about challenges and needs are less useful. Any copyright protections on submitted materials should be clearly noted. Responses that include information generated by means of AI techniques should be clearly identified.

NIST is interested in receiving information pertinent to any or all of the assignments described below.

1. Developing Guidelines, Standards, and Best Practices for AI Safety and Security

NIST is seeking information regarding topics related to generative AI risk management, AI evaluation, and red-teaming.

a. E.O. 14110 Sections 4.1(a)(i)(A) and (C) direct NIST to establish guidelines and best practices in order to promote consensus industry standards in the development and deployment of safe, secure, and trustworthy AI systems. Accordingly, NIST is seeking information regarding topics related to this assignment, including:

(1) Developing a companion resource to the AI Risk Management Framework (AI RMF), NIST AI 100-1 ( https://www.nist.gov/​itl/​ai-risk-management-framework), for generative AI. Following is a non-exhaustive list of possible topics that may be addressed in any comments relevant to the AI RMF companion resource for generative AI:

○ Model validation and verification, including AI red-teaming;

○ Human rights impact assessments, ethical assessments, and other tools for identifying impacts of generative AI systems and mitigations for negative impacts;

○ Content authentication, provenance tracking, and synthetic content labeling and detection, as described in Section 2a below; and

○ Measurable and repeatable mechanisms to assess or verify the effectiveness of such techniques and implementations.

(2) Creating guidance and benchmarks for evaluating and auditing AI capabilities, with a focus on capabilities and limitations through which AI could be used to cause harm. Following is a non-exhaustive list of possible topics that may be addressed in any comments relevant to AI evaluations:

○ Negative effects of system interaction and tool use, including the capacity to control physical systems and reliability issues with such capacity or other limitations;

○ Exacerbating chemical, biological, radiological, and nuclear (CBRN) risks;

○ Enhancing or otherwise affecting malign cyber actors' capabilities, such as by aiding vulnerability discovery, exploitation, or operational use;

○ Introduction of biases into data, models, and AI lifecycle practices;

○ Risks arising from AI value chains in which one developer further refines a model developed by another, especially in safety- and rights-affecting systems;

○ Impacts on human-AI teaming performance;

○ Impacts on equity, including such issues as accessibility and human rights;

○ Impacts on individuals and society, including both positive and negative impacts on safety and rights;

○ Model benchmarking and testing; and

○ Structured mechanisms for gathering human feedback, including randomized controlled human-subject trials, field testing, A/B testing, and AI red-teaming.

b. E.O. 14110 Section 4.1(a)(ii) directs NIST to establish guidelines (except for AI used as a component of a national security system), including appropriate procedures and processes, to enable developers of AI, especially of dual-use foundation models, to conduct AI red-teaming tests to enable deployment of safe, secure, and trustworthy systems. The following is a non-exhaustive list of possible topics that may be addressed in any comments relevant to red-teaming:

2. Reducing the Risk of Synthetic Content

NIST is seeking information regarding topics related to synthetic content creation, detection, labeling, and auditing.

a. E.O. 14110 Section 4.5(a) directs the Secretary of Commerce to submit a report to the Director of the Office of Management and Budget (OMB) and the Assistant to the President for National Security Affairs identifying existing standards, tools, methods, and practices, along with a description of the potential development of further science-backed standards and techniques for reducing the risk of synthetic content from AI technologies. NIST is seeking information on the following topics related to reducing the risk of synthetic content in both closed and open-source models that should be included in the Secretary's report, recognizing that the most promising approaches will require multistakeholder input, including from scientists and researchers, civil society, and the private sector. Of interest are existing tools, the potential development of future tools, measurement methods, best practices, active standards work, exploratory approaches, and challenges and gaps for the following non-exhaustive list of topics and use cases.

3. Advancing Responsible Global Technical Standards for AI Development

NIST is seeking information regarding topics related to the development and implementation of AI-related consensus standards, cooperation and coordination, and information sharing that should be considered in the design of standards.

a. E.O. 14110 Section 11(b) directs the Secretary of Commerce, within 270 days and in coordination with the Secretary of State and the heads of other relevant agencies, to establish a plan for global engagement on promoting and developing AI consensus standards, cooperation, and coordination, ensuring that such efforts are guided by principles set out in the NIST AI Risk Management Framework ( https://www.nist.gov/​itl/​ai-risk-management-framework) and the U.S. Government National Standards Strategy for Critical and Emerging Technology ( https://www.whitehouse.gov/​wp-content/​uploads/​2023/​05/​US-Gov-National-Standards-Strategy-2023.pdf). The following is a non-exhaustive list of possible topics that may be addressed:

Across all these topics, NIST is seeking information about the costs and ease of implementation of tools, systems, and practices, and the extent to which they will benefit the public if they can be efficiently adopted and utilized.

Authority: Executive Order 14110 of Oct. 30, 2023; 15 U.S.C. 272.

Alicia Chambers,

NIST Executive Secretariat.

[FR Doc. 2023-28232 Filed 12-19-23; 4:15 pm]

BILLING CODE 3510-13-P

Legal Citation

88 FR 88368